Neighbors Still In Opposition To Clifton Terrace Lot Plan
Many of you who live in Candler Park have walked past the empty lot near the corner of Clifton Terrace and Terrace Ave. where the intermittent runoff stream moves rainwater from Page Ave. and the surrounding area into the Peavine Watershed.
This is the lot that is currently greenspace with a ditch running diagonally through the property.
The DOT sold this lot to a developer as part of the disposition of the remaining right-of-way land left over from the creation of Freedom Park. This developer then applied for a subdivision of the single lot in order to build two homes on this land.
The city has seen fit to give its approval of this subdivision, despite many negative issues raised by the CPNO, the city arborist, the Urban Design Commission, surrounding residents (including a lengthy petition opposing the subdivision), and the unanimous recommendation of NPU-N to “Deny the Application with Prejudice.” Needless to say, the City’s approval was a shock to many of us.
There are several main issues that have been raised against the subdivision and future development of this land as well as the original platting and sale by the DOT. Firstly, the original post-settlement plan of disposition and use of this land was outlined in an agreement (consisting of the Freedom Park Concept Plan and the SPI) supported by all of those involved, including the DOT. The Freedom Park Concept Plan clearly shows that all lots to be disposed in private sale are to front Terrace Ave. and Page Ave. It shows no lots to be created that would front Clifton Terrace. This “intent” has been confirmed by members of The Freedom Park Conservancy/CAUTION Inc.
However, this is NOT what happened. The DOT platted the land and had the Bureau of Planning sign off on it without any review – creating an “extra” lot that is basically the flood area/runoff ditch on Clifton Terrace. It seems that the DOT’s “intent” was to squeeze as many lots out of this land as it could.
Secondly, there are many environmental and flooding concerns related to this land the city seems to be ignoring. If one looks at the topography of the lot, it is clear that building two houses will be a major ordeal requiring serious grading and/or fill. This kind of activity is forbidden by the SPI-7 restrictions that exist on the lot.
Construction will add a large amount of sediment to an already taxed sewer system and will rob the neighborhood of one of the only green areas where run-off water can be reabsorbed naturally. We don’t believe the City
should be allowed to subdivide this lot without considering the potential damage, not to mention the over-development of this area.
Thirdly, the Bureau of Planning has also seen fit to pass the subdivision without following many of their own rules. For example, there was no sign posted to notify residents of the application as required by the Land Subdivision Ordinance. In fact, there are far too many violations to mention here.
Some concerned residents have filed an appeal with the Bureau of Planning, and a hearing has been set for August 16, 2002 at 1:00 pm in the 2nd Floor Council Chambers in City Hall (55 Trinity Ave., SW). I urge anyone who wants to see this land remain “green” to join us in our fight against the subdivision and to come down to show the Bureau of Zoning Adjustments that Candler Parkers oppose this plan. Or, if you like, you can write, fax or e-mail the Bureau of Planning, Anne Fauver, and Cathy Woolard to express your opposition. For reference, the subdivision application that we oppose is Application Number SD-02-10... and the appeal application that we support is Application Number V-02-173. You can e-mail any ideas or thoughts that you might have to me at firstname.lastname@example.org.
—Sam Crawford, Candler Park Resident
Exploring “Residents Only” Parking For Candler Park
The residents of Josephine Street have been trying to get “Residents Only” parking for quite some time. The City of Atlanta recently suggested it would be more likely to grant us the parking status if the request included the entire Candler Park neighborhood. The way it works is this: each household gets four parking passes – basically two for you and two for your guests.
If you or anyone you know is interested in “Residents Only” parking in Candler Park, please email me at this address:
email@example.com. Thank you.
—Kelly Stocks
Laughter in the Park by Deb Milbrath
“Dishing” It Out: Guidelines For Satellite Receivers
It’s true: City of Atlanta Subdivision Ordinances require a formal permit known as a “special exception” for installing dish antennas in most residential districts of Candler Park. Special Exceptions by the Board of Zoning Adjustment are prescribed whenever a dish is to be placed in a required yard (i.e., setback areas in front, beside or behind a residence) as well as for attaching such dishes to the roof of primary or accessory structures on the property (Sec. 16-28.008.11).
Some Candler Parkers are concerned about the visual impact of satellite dishes attached to trees in a front yard, or to the roof of the primary residence in a location generally visible from the front of the property. Requirements for Special Exceptions provide that the location of satellite dishes not be objectionable to occupants of neighboring property or the neighborhood in general. Signal reception standards of such equipment are not considered sufficient grounds for approving the application, which may require screening or other buffering satisfactory to the neighborhood. For more information, contact the City of Atlanta Planning Bureau at 404-330-6145.
-Walt Weimar
Candler Park Pool
Our Pool Is Soooo Cool – Join Now!
Dive in to the best recreation opportunity in the neighborhood – membership for your family in the Candler Park Pool! Join now, and don’t miss a single session of swimming, sun and fun close to home! Make your check payable to CPPA (see below for membership options and fees), and mail to: CPPA, P.O. Box 5343, Atlanta, GA 31107.
Candler Park Pool Membership Options (please check ONLY ONE)
☐ Individual Youth (must have permission of legal guardian) $60
☐ Individual $100
☐ Senior (age 65 and over) $60
☐ Household (2 adults) $160
☐ Additional Household members: add $30 each for _______ additional
☐ Maximum per household $230
This is a ☐ New Membership ☐ Renewal
☐ I’ll help out this summer… please call me!
First Name ____________________________ Last Name ____________________________
Address _______________________________________________________________________
E-Mail __________________________________ Phone _________________________________
List each family member _________________________________________________________
Total number of members ______________ Amount enclosed: $ _______________________
The undersigned agrees to abide by the rules and regulations governing the pool, and understands that violators will be asked to leave.
Signature __________________________________ Signature of Legal Guardian __________
The Virtual Messenger: Official Notice Of Land Use Matters Proposed To Go Online
As Zoning VP for CPNO and an alternate NPU representative, I’ve frequently encountered a dilemma in my principal responsibility to provide notice to CPNO residents of land use matters for which a formal vote of recommendation is required.
*The Messenger*, published and hand-delivered through a small army of neighborhood volunteers to CPNO residents each month, is an important source of communications for our members, but due to publication deadlines must be prepared well in advance of the meeting dates.
This has often forced me to choose between delaying submission of articles to Messenger editors, which may jeopardize the timely delivery of this important publication prior to our meeting, or advising anxious applicants, whose schedules require NPU appearances as well as City of Atlanta agency hearings, that these dates are often more than 60 days from the deadline for *The Messenger* to go to the printer.
The solution I have discussed with some other CPNO officers is to propose a change to CPNO by-laws regarding the means by which “official” notice of land use matters is considered to have been “published.” Recent experience in meeting with residents who often discussed issues as a large group via e-mail regarding the DeKalb Avenue rezoning project has given me the confidence to propose that an online version of *The Messenger* would serve better as the official agenda for notice regarding CPNO land use matters. The main advantage would be that the online version could be edited for late-breaking land-use matters by being posted up to just a few days prior to the CPNO meeting date (usually well after the home delivery edition of *The Messenger* goes to the printer), while still affording reliable and accessible access to most CPNO residents via the Internet.
The deferrals of land use matters because of uncertain distribution of *The Messenger* could be avoided without diminishing the importance and timeliness of this publication as a source of information or a forum for neighborhood discussion.
Please offer any suggestions or concerns regarding this issue for everyone to consider for the next few months, and if online notice of CPNO land use matters should appear acceptable to a majority, I’ll propose to CPNO an amendment to the by-laws later this year.
—Walt Weimar
Coming soon: A long-delayed article about alleys (past and present) in Candler Park. Have an alley question? Email it to me now! —Walt Weimar
To celebrate our 30th anniversary, B.O.N.D. is reducing home equity rates by 30%
For 30 days in July all home equity rates are just 5.95%.
Limited funds available; promotion ends 6pm July 30, 2002.
Visit us online at www.bondcu.com
Little 5 Points
433 Moreland Ave.
(404) 525-0619
BOND
COMMUNITY
FEDERAL CREDIT UNION
We are Your Neighborhood Partner Since 1972
HOURS
Monday-Friday 11-6
Saturday 11-3
FAIR: Council Rolls Back Millage Rate
On June 17, the City Council listened to citizens and voted unanimously to roll back the millage to fully reflect increased assessments.
There was some maneuvering just before the final vote with Jim Maddox proposing only a partial millage rollback. Only he and Debi Starnes voted for this proposal. (Ms. Starnes later said she was distracted and meant to vote against the measure. Maybe she was distracted during the entire Campbell administration, too.)
Thank you everyone who wrote and called the Councilmembers. They would not have done this if they did not know people were watching. That the citizens were able to stop this tax increase is a good start to the coming attractions: the hugely expensive sewer repair project and the soon-to-be-announced plans to reform city management.
For those of you who have any energy left, writing each of the Councilmembers a quick “Thank you” for rolling back the millage rate would be very helpful.
Once again, we have proven that an active citizenship can get results.
—Greg Smith
Free To Be, You And Me At 7 Stages Back Stage In L5P
On August 10, Synchronicity Performance Group presents the classic show for children, *Free to Be, You and Me*.
Originally created by Marlo Thomas as a film, album and book in 1974, this spectacular collection of stories, songs and poems is packed with enough inspiration to span several generations. Kids and parents will be singing along with our dynamic four-actor ensemble and learning important lessons of identity, acceptance, tolerance, hope and friendship.
Performances will be Saturdays & Sundays, August 10-25. One midnight show is being offered on Saturday, August 17. All tickets are $8. Each adult bringing 4 or more children is admitted free.
Recommended for kids age 4 and up.
To purchase tickets, call the box office at 404-284-1151, or online at www.synchrotheatre.com.
ManyPaws
Pet Sitting By Daphne
404-378-6935
A Mature Approach To Loving Care For Your Animals
The Kirkwood School
Quality Childcare for Intown Families
Opening This September in Oakhurst!
Please visit our web page or call for more information
WWW.KIRKWOODSCHOOL.COM
404-373-1822
Is your family growing too big for your pond?
Maya can help.
Maya Hahn
Re/Max Metro AtlantaCityside
Home Office: 404-522-0011
mayah@mindspring.com
http://mayahhahn.realtor.com
Water Conservation: When Every Drop Counts
Georgia has been in a drought since 1998. And while rainfall patterns have been close to normal this spring, many water supplies are still low. The drought, combined with continued population growth, has put an added burden on our already limited water supply.
Now for the good news. A typical household can easily save 20,000 gallons per year by making a few behavioral changes, retrofitting some of our old plumbing fixtures, and implementing simple landscaping practices. With even a few changes, you’ll be helping preserve a precious natural resource while seeing a decline in your water and sewer bills!
Below is an explanation of the current watering restrictions and a few water conservation tips to help you get started.
Watering Restrictions in Atlanta
In late May this year, the state Environmental Protection Division announced that the watering restrictions would remain in place for the metro Atlanta area.
- Outdoor watering is not allowed from 10am to 10pm
- If your home or business has an *even* numbered street address, you may water before 10am or after 10pm on even numbered calendar days
- If your home or business has an *odd* numbered street address, you may water before 10am or after 10pm on odd numbered calendar days
Remember... watering restrictions apply to ALL outdoor watering. This includes washing your car, pressure washing your house, etc. Don’t despair. You can still take your car to a commercial car wash.
Water Efficient Irrigation
- Early morning hours are the best time to water — you minimize losses to evaporation & the spread of plant disease.
- Just because you can water every other day, doesn’t mean that you need to...don’t water when rain is in the forecast!
- Water slowly and deeply with soaker hoses or a drip irrigation system, which is up to 50% more efficient than conventional spray irrigation. Shallow, frequent watering encourages a shallow root system and reduces drought tolerance.
- Direct water to the roots of plants and avoid wetting the foliage to reduce evaporation and the potential for disease.
- Install automatic shutoff nozzles on hand-held hoses.
- Collect rainwater from your roof, by connecting a rainbarrel to your gutter system. Rainbarrels are available from the internet at www.gardeners.com.
Water Efficient Landscaping
- Call the City of Atlanta at 404-330-6801 to schedule a free water efficient landscaping consultation.
- Wait until fall for new plantings - the best time to plant is in the fall or early spring. Plants need time to establish a root system before they can successfully battle stress.
- Select drought tolerant plants. The University of Georgia Cooperative Extension Service has a wealth of information on drought-tolerant plants at interests.caes.uga.edu/drought/articles/restrictinfo.htm. In general, trees, shrubs, and perennials are more drought tolerant compared to annuals.
- Mulch and add organic matter to the soil to retain moisture.
- Minimize fertilization during dry periods—fertilizer increases the need for water.
- Minimize grass turf areas. Keep in mind that a healthy turf will survive drought periods. It will go dormant and turn brown during drought, but will regain its normal green color and growth when it receives adequate water.
For more info, visit the GA Department of Natural Resources website: www.dnr.state.ga.us/ and click on Environmental or P²AD.
Land Use/Zoning News And Information
- NPU-N held its annual By-Laws Ratification vote at the Little Five Points Community Center on Austin Avenue in Inman Park on Saturday June 29th. As a representative form of Neighborhood Planning Unit, NPU-N must annually certify to the City of Atlanta that a majority of eligible NPU voters have approved of the executive form of NPU organization as it relates to formal recommendations to City planning agencies regarding land use matters under the municipal subdivision ordinance. Results were not available in time for publication, but interested residents can contact NPU-N delegate Lexa King, or contact Assistant NPU Coordinator Nyna Gentry at 404-330-6722.
- James Lee, representing Candler Market located on McLendon Avenue at the intersection of Clifton Road, made a presentation to CPNO’s June meeting to explain proposed improvements to the store, including renovation of interior and exterior elements, addition of sandwich lines and conversion of storage space to a restaurant/wine bar with outside seating at the rear of the store. Permits related to the project, if/when required, will be presented at a CPNO meeting by the owners later this year, but interested residents can contact James Lee at 404-373-9787.
- Zone 6 Paint Day was held Saturday June 1st at Atlanta Police Department Zone 6 Headquarters on Hosea Williams Drive in Kirkwood, where scores of volunteers (including CPNO Safety VP Greg Reinhardt) were on hand most of the day to assist police officers and officials (including Major Banda and Officer Kelley) in making the offices look worthy of their own Tour of Homes! Lunch was served by APD, and CPNO officers appreciated the opportunity to share ideas with other neighborhood leaders.
-Walt Weimar
Kids Rock To LollipopRock
What happens when kids sing their own songs to oldies tunes?
It’s called LollipopRock!
The songwriters of this new CD are long time Druid Hills resident Deborah Hunter and daughter Kim Soltoro, who have taken music we all know and love from the 50s through the 90s, rewritten the lyrics for children, and had fresh, young kids perform them for the enjoyment of kids from baby to 7.
Parents and grandparents will enjoy recognizing Girls Just Want To Have Fun - which has become Squirrels Just Want To Have Fun - and Did You Ever Try To Make Up Your Mind - now Did You Ever Try To Draw A Straight Line.
The CD has 14 songs and the producers think you may actually enjoy listening to it over, and over, and over!
The CD is available online at www.lollipoprock.com, where you can listen to sound clips and learn more about it. You can also find it at select Wherehouse Music stores, select Hunter EyeCare offices, or by calling 1-866-621-4143 toll free.
Okay kids and parents... let’s get ready to LollipopRock on!
MARK SPENCER, LLC.
Artisan Quality Horticulture and Hardscape
- Landscape design
- Restoration
- Renovation
- Custom stone
- Paving
- Irrigation
- Decks
- Candler Park Resident
404-909-0422
I Specialize in Small Spaces
**CLASSIFIED ADS**
**CraZy MoOn aRt RoOm OFFERS** – Expressive art, group outings, private parties, workshops, kids classes, & team development. 678.878.6161 www.CraZyMoOnaRt.com.
**CSE CLEANING SERVICE** – Hi, we are a cleaning service that caters to your specific needs. We provide quality and flexibility. References available - Licensed, Bonded, Insured. Call Katrina, 404-373-9351.
**FOR PERSONALIZED RELIABLE CLEANING SERVICE** – with 10 years experience, call Pat Felty, 404-822-8043. From a neighborhood near and with neighborhood references.
**HOUSE REPAIR** – Interior/exterior painting, window glazing, light carpentry. Candler Park resident 10 yrs. References, free estimate. Lee Nicholson, 404-378-1343.
**JOHN THE LAWNGUY** – Mowing, aeration, gutters cleaned, free est., dependable. 404-638-6378.
**LANDSCAPING** – Design and installation of new trees, bushes, flowers, lawns, ponds, rock gardens, retaining walls, and much more! Make your yard a peaceful refuge or a delightful display of beauty & color! Affordable. David Masluk, 404-306-2177.
**LIGHTHALL’S CLEANING** – 404-893-9308. Customized cleaning including move-ins, move-outs, spring cleans. Owner/operator intown resident Debbie Lighthall.
**MASSAGE THERAPY** – Mary Kitchen, cmt. Neuromuscular therapy, cross fiber, deep tissue, cranial sacral, polarity. 16 years experience. 404-233-5768.
**MINOR HOME REPAIRS AND PROJECTS** – Carpentry, plumbing, and electrical work of various types. Odd jobs. $25/hr. + $20 house call. Honest, fair, prompt. Refs. Noah Glassman, 404-306-2177.
**MURPHY’S YARD SERVICE** – General yardwork, mowing, cleanup. FREE estimates. 404-622-1822.
**NOOK & CRANNY MAID SERVICE** – 404-688-3766. “Let us do your dirty work!” Est. 1990. Dependable, personalized service weekly, biweekly, monthly, one time, move-in/move outs, spring cleanings, homes/offices, neighborhood references. Licensed, Bonded, Insured.
**PIANO** – tuning, repairs, rebuilding, sales. Lessons all ages, levels. 404-378-8310.
**TOM WATSON, CERTIFIED ARBORIST** – KEEPING TREES ALIVE AND HEALTHY. General tree care, consulting, removals, educational talks. Insured, references. 404-378-9071, firstname.lastname@example.org
---
**CRIME REPORT**
Reported May 1–June 13, 2002 in Zone 6 police records.
- 400 block Euclid Terr, 5/1, bicycle stolen off porch.
- 500 block Terrace Ave, 5/1, vehicle stolen.
- 1200 block Druid PL, 5/1, vehicle damage, property theft.
- 300 block Moreland Ave, 5/6, 1 pm, person attempted to cash bad check.
- 1300 block Benning Pl, 5/6, theft from vehicle.
- 1300 block Euclid Ave, 5/6, 2am, vehicle broken into, items taken.
- 1200 block Mansfield Ave, 5/7, theft from vehicle.
- 400 block Moreland Ave, 5/7, theft from store.
- 200 block Elmira PL, 5/9, theft from vehicle.
- 400 block Candler St, 5/9, vehicle broken into.
- 400 block Oakdale Rd, 5/9, 12pm, residential burglary.
- Glendale Ave & McLendon Ave, 5/10, 6pm, male exposing himself.
- 1400 block North Ave, 5/10, 2pm, residential burglary.
- 1200 block Druid Pl, 5/12, 4pm, theft from auto.
- 300 block Oakdale Rd, 5/12, 10pm, subject arrested for possession of drugs.
- 400 block Moreland Ave, 5/12, 11am, stolen check cashed.
- 400 block Candler St, 5/13, residential burglary, back door kicked in.
- 400 block Moreland Ave, 5/14, 3pm, laptop computer stolen.
- 1300 block Euclid Ave, 5/17, 12am, identity fraud.
- 500 block Page Ave, 5/18, items stolen from job site.
- 1500 block Dekalb Ave, 5/19, vehicle broken into.
- 1300 block Euclid Ave, 5/20, items stolen from apartment.
- 500 block Sterling St, 5/23, residential burglary, entered through unlocked back door.
- 300 block Josephine St, 5/24, vehicle theft.
- 1300 block McLendon Ave, 5/29, bicycle and other items stolen from garage.
- 1600 block McLendon Ave, 6/1, alcoholic beverages stolen from restaurant cooler.
- 1500 block McLendon Ave, 6/3, subject arrested for possession of marijuana.
- McLendon Ave & Sterling St, 6/3, 10pm, victims robbed at gunpoint, victims offered $5 which was returned by the suspect.
- 300 block Candler Park Dr, 6/3, theft from vehicle.
- 300 block Mell Ave, 6/3, damage to vehicle.
- 1200 block McLendon Ave, 6/3, theft from vehicle.
- 1600 block McLendon Ave, 6/7, 12pm, suspect entered unlocked home, tried on clothing, property stolen.
- 1500 block McLendon Ave, 6/9, youth had loaded pistol, person on the scene stated that the pistol was pointed at him by the suspect.
- 1200 block Mansfield Ave, 6/9, residential burglary, rear window broken into.
- 300 block Oakdale Rd, 6/8, rear of house broken into, nothing stolen.
- 1200 block Dekalb Ave, 6/11, rental property vandalized.
- 1600 block McLendon Ave, 6/13, alcoholic beverages stolen from restaurant cooler (again).
Computation of magnetostatic field using second order edge elements in 3D
Z. Ren
Laboratoire de Génie Electrique de Paris, Universités Paris VI and XI,
Gif sur Yvette Cedex, France, and
N. Ida
Department of Electrical Engineering, The University of Akron,
Akron, Ohio, USA
Keywords Edge elements, Finite element modeling, Magnetostatics
Abstract Several second order edge elements have been applied to solving magnetostatic problems. The performances of these elements are compared through an example of a magnetic circuit. In order to ensure the compatibility of the system equations and hence the convergence, the current density is represented by the curl of a source field. This avoids an explicit gauge condition, which is cumbersome in the case of high order elements.
I. Introduction
Improvement of the accuracy in finite element modeling can be achieved through two methods: local mesh refinement and increase of the order of the shape functions of the elements. Local mesh refinement leads in some cases to deformed elements, which may worsen the stability of the system and the accuracy of the results. The use of high order elements turns out to be more effective in such situations.
The Whitney (nodal, edge, facet and volume) elements have proven their efficiency in electromagnetic field computation over the last decade (Bossavit, 1988). They are differential forms of different degrees. The main properties of these elements are conformity (matching the corresponding field continuity conditions) and inclusion (the element of low degree is included in the element of high degree). The Whitney edge element (one-form element) has been widely used for solving electromagnetic field problems in various frequency ranges. However, these elements are of first order only.
The theory of high order edge (curl-conformal) and facet (div-conformal) elements was advanced in the early 1980s in Nédélec (1980). Unfortunately, in this reference, no specific vector basis function was reported. Further investigation has been carried out in recent years by different researchers.
Different high order edge elements have been developed (Lee et al., 1991; Webb and Forghani, 1993; Wang and Ida, 1993; Ahagon and Kashimoto, 1995; Yioultsis and Tsiboukis, 1996; Kameari, 1998). These are mostly applied in the high frequency domain. Only a few works can be found in low frequency and static field applications. The main difficulty in low frequency applications seems to be the application of gauge conditions.
This paper investigates several second order edge elements in the computation of magnetostatic fields. We will show that, in the case of a compatible formulation, when using an iterative solver such as the conjugate gradient method, the system converges without an explicit gauge condition. This is the same conclusion as in the case of first order elements. The performance (accuracy and convergence behavior) of the different elements is then compared.
II. Different types of second order edge elements
In this paper we consider the case of tetrahedral elements. The second order nodal element built on the tetrahedron is of the Lagrange type and contains ten nodes (the vertices plus one node in the middle of each edge). The high order edge elements must model correctly the range space and the null space of the curl operator. In the case of second order edge elements, the curl field must be complete to first order in the range of the curl operator. The number of degrees of freedom needed to model a first order vector field is 12. The divergence free condition reduces this number to 11. To model the null space of the curl operator (the gradient field), the number of degrees of freedom is nine. Consequently, the number of degrees of freedom required in the second order tetrahedral edge element is 20. These degrees of freedom are commonly assigned to the edges and facets (two per edge and two per facet).
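The dimension count in the preceding paragraph can be laid out as an explicit tally (our bookkeeping of the text's argument, not an equation from the paper):

```latex
% Range of curl: a first order vector field in 3D has 3 components, each a
% first order polynomial in (x, y, z) with 4 coefficients; the divergence
% free condition removes one coefficient.
% Null space of curl: gradients of second order scalars, i.e. the 10 nodal
% functions modulo the constant function.
\[
  \underbrace{(3 \times 4) - 1}_{\text{range of curl}}
  \;+\;
  \underbrace{10 - 1}_{\text{null space (gradients)}}
  \;=\; 11 + 9 \;=\; 20.
\]
% Distributed over the tetrahedron as two per edge and two per facet:
\[
  2 \times 6 \;+\; 2 \times 4 \;=\; 20.
\]
```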
The basis functions related to the edges and the facets take the following general forms:
(1) On an edge defined by the vertices \{i, j\}
\[ w_{ij} = \lambda_i (a_1 + b_1 \lambda_i + c_1 \lambda_j) \nabla \lambda_j + \lambda_j (a_2 + b_2 \lambda_j + c_2 \lambda_i) \nabla \lambda_i, \tag{1.a} \]
where \( \lambda_i \) is the barycentric coordinate of a point with respect to the vertex i. Permuting the indices ij in this expression gives another base function defined on the same edge.
(2) On a facet defined by vertices \{i, j, k\}:
\[ w_{ijk} = d_1 \lambda_i \lambda_j \nabla \lambda_k + d_2 \lambda_j \lambda_k \nabla \lambda_i + d_3 \lambda_k \lambda_i \nabla \lambda_j. \tag{1.b} \]
Rotating indices ijk leads to three basis functions on the surface, but only two of them are used.
Let \( W^1_2 \) denote the space of second order edge element defined by (1.a) and (1.b). It can be shown that \( W^1_2 \) belongs to the following domain of the curl operator:
\[ W^1_2 \subset H(\text{curl}) = \{ u \mid u \in \mathbb{L}^2(\Omega), \ \text{curl}\, u \in \mathbb{P}_1(\Omega) \cap D(\Omega) \} \]
where \( \mathbb{L}^2(\Omega) \) is the Hilbert space of square integrable vector fields, \( \mathbb{P}_1(\Omega) \) the three dimensional space of first order polynomials and \( D(\Omega) = \ker(\text{div}) \) the space of divergence free functions, over \( \Omega \), respectively.
Each of the functions (1.a) and (1.b) is tangentially continuous through the interface of two adjacent elements. In general, $w_{ij}$ forms a second order vector field turning around the opposite edge. Its circulation vanishes on all edges except on the edge $ij$. Each term of the function $w_{ijk}$ describes a second order vector field normal to the facet opposite the vertex $i$, $j$, or $k$. Consequently, the function $w_{ijk}$ forms a field turning around edges that do not belong to the surface $ijk$. The circulation of $w_{ijk}$ is obviously zero along all edges.
The element defined by (1.a) and (1.b) describes a complete first order curl field:
\[
\text{curl } w_{ij} = [a_1 - a_2 + (2b_1 - c_2)\lambda_i + (-2b_2 + c_1)\lambda_j] \, \nabla \lambda_i \times \nabla \lambda_j, \tag{1.c}
\]
\[
\text{curl } w_{ijk} = (d_1 - d_3)\lambda_i \nabla \lambda_j \times \nabla \lambda_k + (d_2 - d_1)\lambda_j \nabla \lambda_k \times \nabla \lambda_i + (d_3 - d_2)\lambda_k \nabla \lambda_i \times \nabla \lambda_j, \tag{1.d}
\]
provided that the coefficients of the first order terms in these expressions are not simultaneously zero.
It must be emphasized that, in general, the degrees of freedom do not have a direct physical meaning such as the circulation of a field along edges, unless an orthogonality condition (the line integrals of the basis functions on the edges are independent of each other) is satisfied. Unfortunately, this condition cannot be realized here because the line integrals of $w_{ij}$ and $w_{ijk}$ along a line on the facet $ijk$ are usually not independent. In the general case, the circulation of a field along an edge is therefore a linear combination of several degrees of freedom.
The coefficients in expressions (1.a) and (1.b) can be determined in various ways and this leads to different kinds of elements.
A. Lee’s element
The basis functions associated with the edges and the facets of Lee’s element (Lee et al., 1991) are, respectively,
\[ w_{ij} = \lambda_i \nabla \lambda_j, \tag{2.a} \]
\[ w_{ijk} = \lambda_i \lambda_j \nabla \lambda_k. \tag{2.b} \]
The curls of Lee’s element are
\[ \text{curl } w_{ij} = \nabla \lambda_i \times \nabla \lambda_j, \tag{2.c} \]
\[ \text{curl } w_{ijk} = \lambda_i \nabla \lambda_j \times \nabla \lambda_k - \lambda_j \nabla \lambda_k \times \nabla \lambda_i. \tag{2.d} \]
It can be noted that Lee’s element belongs to Webb’s hierarchical elements (Webb and Forghani, 1993). Hierarchy means that the basis functions of a higher order element include all basis functions of the lower order element spaces. This allows mixing elements of different orders in the same mesh without any difficulty in matching field continuity, a helpful property for adaptive (mixed $h$- and $p$-refinement) mesh generation.
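Equation (2.c) can be checked numerically: $w_{ij} = \lambda_i \nabla \lambda_j$ is linear in $x$, so a central-difference curl reproduces $\nabla \lambda_i \times \nabla \lambda_j$ up to rounding. A sketch under an assumed reference tetrahedron (not a geometry from the paper):

```python
import numpy as np

# Assumed reference tetrahedron and its barycentric machinery.
V = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
C = np.linalg.inv(np.hstack([np.ones((4, 1)), V]))
lam  = lambda i, x: C[0, i] + C[1:, i] @ x   # barycentric coordinate lambda_i
grad = lambda i: C[1:, i]                    # constant gradient of lambda_i

i, j = 0, 1
w = lambda x: lam(i, x) * grad(j)            # Lee edge function (2.a)

def curl_fd(f, x, h=1e-6):
    """Central-difference curl of a vector field f at the point x."""
    J = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                  for e in np.eye(3)]).T     # J[a, b] = d f_a / d x_b
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

x0 = np.array([0.2, 0.3, 0.1])               # interior sample point
assert np.allclose(curl_fd(w, x0), np.cross(grad(i), grad(j)), atol=1e-8)
```

Because the field is linear, the central difference is exact up to floating-point noise, which is why a tight tolerance suffices.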
**B. Ahagon’s element**
Ahagon’s element (Ahagon and Kashimoto, 1995) is derived from the decomposition of the gradient of the second order nodal shape functions. The inclusion property requires that the gradient of the second order nodal element be included in the second order edge element. This means that the sum of the edge element basis functions for the edges meeting at a vertex must be the gradient of the nodal function at this vertex:
$$\sum_j w_{ij} = \text{grad } w_i,$$
where $w_i$ is the basis function of the second order nodal element related to node $i$. The basis function derived in such a manner has the following form:
$$w_{ij} = \lambda_i(-1 + 4\lambda_i)\nabla\lambda_j + \lambda_j(1 - 4\lambda_i)\nabla\lambda_i, \tag{3.a}$$
$$w_{ijk} = 4\lambda_i\lambda_j\nabla\lambda_k - 4\lambda_j\lambda_k\nabla\lambda_i. \tag{3.b}$$
Taking the curl, we have
$$\text{curl } w_{ij} = (-2 + 12\lambda_i)\nabla\lambda_i \times \nabla\lambda_j, \tag{3.c}$$
$$\text{curl } w_{ijk} = 4\lambda_i\nabla\lambda_j \times \nabla\lambda_k - 8\lambda_j\nabla\lambda_k \times \nabla\lambda_i + 4\lambda_k\nabla\lambda_i \times \nabla\lambda_j. \tag{3.d}$$
**C. Yioultsis’ element**
Yioultsis’ element (Yioultsis and Tsiboukis, 1996) takes weighted fields as degrees of freedom. By applying constraints such that linear combinations of the edge element basis functions give the gradients of the nodal elements, the following basis functions are obtained:
$$w_{ij} = \lambda_i(-4 + 8\lambda_i)\nabla\lambda_j + \lambda_j(2 - 8\lambda_i)\nabla\lambda_i, \tag{4.a}$$
$$w_{ijk} = 16\lambda_i\lambda_j\nabla\lambda_k - 8\lambda_j\lambda_k\nabla\lambda_i - 8\lambda_k\lambda_i\nabla\lambda_j. \tag{4.b}$$
Their curls are
$$\text{curl } w_{ij} = 6(-1 + 4\lambda_i)\nabla\lambda_i \times \nabla\lambda_j, \tag{4.c}$$
$$\text{curl } w_{ijk} = 24(\lambda_i\nabla\lambda_j \times \nabla\lambda_k - \lambda_j\nabla\lambda_k \times \nabla\lambda_i). \tag{4.d}$$
D. Kameari’s element
The placement of the degrees of freedom on the facets in the elements described above is asymmetric, which may cause some difficulty in the numerical implementation. To obtain a symmetric edge element, Kameari proposed adding one node at the middle of each facet (Kameari, 1999), resulting in a 14-node nodal element. The terms $\lambda_i \lambda_j \lambda_k$ are added to the second order polynomials to form the nodal basis functions. To build the second order edge element, three degrees of freedom are assigned to each facet; the total number of degrees of freedom is 24. By doing so, and after applying an orthogonality condition such that
$$\int_{l_j} w_i dl = \delta_{ij}$$
where $\delta_{ij}$ is the Kronecker delta, the following basis functions are obtained:
$$w_{ij} = [\lambda_i (-33 + 63\lambda_i + 30\lambda_j) \nabla \lambda_j + \lambda_j (-5 + 15\lambda_j - 18\lambda_i) \nabla \lambda_i] / 10, \tag{5.a}$$
$$w_{ijk} = 3(31\lambda_i \lambda_j \nabla \lambda_k + 7\lambda_j \lambda_k \nabla \lambda_i + 7\lambda_k \lambda_i \nabla \lambda_j) / 5. \tag{5.b}$$
The curls of these functions are
$$\text{curl } w_{ij} = 2(-7 + 36\lambda_i) \nabla \lambda_i \times \nabla \lambda_j / 5, \tag{5.c}$$
$$\text{curl } w_{ijk} = 72(\lambda_i \nabla \lambda_j \times \nabla \lambda_k - \lambda_j \nabla \lambda_k \times \nabla \lambda_i) / 5. \tag{5.d}$$
It is noted that in Kameari’s element, unlike in the other elements, the degrees of freedom are the circulations of the field along the edges.
Adding four nodes to the element increases the dimension of the null space of the curl operator to 13 but does not affect the dimension of its range space. The curl of this element is complete to first order in the range of the curl operator, just like the other elements.
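The circulation degrees of freedom can be made concrete with a small quadrature sketch (assumed reference tetrahedron): for Lee's edge function $w_{ij} = \lambda_i \nabla \lambda_j$, the circulation along its own edge is $\int_0^1 (1 - t)\,dt = 1/2$, so there the edge circulation is a fixed combination of coefficients rather than an independently assigned degree of freedom:

```python
import numpy as np

# Assumed reference tetrahedron and barycentric machinery.
V = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
C = np.linalg.inv(np.hstack([np.ones((4, 1)), V]))
lam  = lambda i, x: C[0, i] + C[1:, i] @ x
grad = lambda i: C[1:, i]

def circulation(f, a, b, n=16):
    """Line integral of f . dl along the straight segment a -> b (Gauss-Legendre)."""
    t, wq = np.polynomial.legendre.leggauss(n)
    t, wq = 0.5 * (t + 1.0), 0.5 * wq        # map nodes and weights to [0, 1]
    d = b - a
    return sum(wk * (f(a + tk * d) @ d) for tk, wk in zip(t, wq))

i, j = 0, 1
w_ij = lambda x: lam(i, x) * grad(j)         # Lee edge function (2.a)

# Along the edge i -> j: lambda_i = 1 - t and grad(lambda_j) . (V_j - V_i) = 1.
assert np.isclose(circulation(w_ij, V[i], V[j]), 0.5)
```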
III. Application in magnetostatics
Consider a magnetostatic problem in a bounded region $\Omega$. The boundary of $\Omega$ is split into two parts, $\partial \Omega = \Gamma_b \cup \Gamma_h$ with $\Gamma_b \cap \Gamma_h = \emptyset$, on which the boundary conditions $n \cdot b = 0$ on $\Gamma_b$ and $n \times h = 0$ on $\Gamma_h$ hold.
Working with the magnetic vector potential $a$, the variational formulation is derived by weakly enforcing Ampère’s theorem:
Find $a \in W^1_{2,b}$ such that
$$\int_{\Omega} \frac{1}{\mu} \text{curl}\, a' \cdot \text{curl}\, a \, d\Omega = \int_{\Omega_j} a' \cdot j \, d\Omega \quad \forall a' \in W^1_{2,b} \tag{6}$$
where $\Omega_j$ denotes the excitation coil contained in $\Omega$ and $W^1_{2,b}$ is the second order edge element space including the boundary condition on $\Gamma_b$:
$$W^1_{2,b} = \{a \in W^1_2 \mid n \times a = 0 \text{ on } \Gamma_b\}.$$
In equation (6), the system matrix is singular because $W^1_2$ includes the null space of the curl operator. The solution $a$ is therefore not unique and a gauge condition must be applied to ensure uniqueness. Assuming $\Omega$ is topologically trivial, the kernel of the curl operator consists of gradient fields. The number of zero eigenvalues of the curl-curl matrix equals the number of nodes (including those defined on the edges and, possibly, on the facets) minus one; this is the dimension of the null space of the curl operator. The unknowns (and the related equations) to be removed can thus be determined by a spanning tree technique, as in the case of first order elements (Albanese and Rubinacci, 1990). However, the construction of a tree for second order elements can be very complicated. Moreover, according to experience with first order elements, the use of a tree can cause instability of the system and affect the accuracy of the solution. The tree technique therefore does not seem to be the best solution.
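The counting behind the tree technique can be sketched on a toy example (the mesh and the traversal below are illustrative assumptions): fixing the unknowns on the edges of a spanning tree of the node graph removes exactly the number of nodes minus one, i.e. the dimension of the null space on a topologically trivial domain:

```python
# Toy sketch of the tree-gauge counting (assumed single-tetrahedron "mesh").
def spanning_tree(n_nodes, edges):
    """Return the indices of a subset of edges forming a spanning tree."""
    adj = {v: [] for v in range(n_nodes)}
    for k, (a, b) in enumerate(edges):
        adj[a].append((b, k))
        adj[b].append((a, k))
    seen, tree, stack = {0}, [], [0]
    while stack:
        v = stack.pop()
        for u, k in adj[v]:
            if u not in seen:
                seen.add(u)
                tree.append(k)
                stack.append(u)
    return tree

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]  # one tetrahedron
tree = spanning_tree(4, edges)
assert len(tree) == 4 - 1   # gauged (removed) unknowns: number of nodes minus one
```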
It has been shown that convergence can be achieved without an explicit gauge condition provided that the system of equations is compatible (Ren, 1996), i.e. the right hand side belongs to the range of the curl-curl matrix, or equivalently its discrete form is divergence free. In order to enforce compatibility, we express $j$ as curl $t$, where $t$ is a vector potential defined in a domain $\Omega_i$ containing the coil $\Omega_j$; it can be seen as a source field. Replacing $j$ in (6) by curl $t$ and integrating by parts, we get the following formulation:
Find $a \in W^1_{2,b}$ such that
$$\int_{\Omega} \frac{1}{\mu} \text{curl}\, a' \cdot \text{curl}\, a \, d\Omega = \int_{\Omega_j} \text{curl}\, a' \cdot t \, d\Omega \quad \forall a' \in W^1_{2,b} \tag{7}$$
This formulation is unconditionally compatible whatever the discretisation of $t$. Usually, according to its nature, the vector $t$ is interpolated by edge elements (first order in our application). To solve formulation (7), no explicit gauge condition is needed when an iterative solver is used (Ren, 1996).
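The role of compatibility can be illustrated by a self-contained sketch (the matrix and right hand side are synthetic stand-ins, not the paper's system): a plain conjugate gradient iteration applied to a singular positive semi-definite system still drives the residual to zero when the right hand side lies in the range of the matrix:

```python
import numpy as np

def cg(K, f, tol=1e-12, maxit=500):
    """Plain conjugate gradient iteration, no preconditioner, no gauge."""
    u = np.zeros_like(f)
    r = f - K @ u
    p = r.copy()
    for _ in range(maxit):
        Kp = K @ p
        alpha = (r @ r) / (p @ Kp)
        u = u + alpha * p
        r_new = r - alpha * Kp
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return u

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 10))    # rank 6, so K below is singular
K = B.T @ B                         # positive semi-definite "curl-curl-like" matrix
f = K @ rng.standard_normal(10)     # compatible right hand side: f in range(K)
u = cg(K, f)
assert np.linalg.norm(K @ u - f) < 1e-8   # converged despite the singular matrix
```

The iterates stay in the Krylov space generated by `f`, which lies inside the range of `K`, so the null space is never excited; this mirrors why no explicit gauge is needed for (7).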
The previously described elements have been applied to approximate $W^1_2$. In order to avoid the ambiguity that may occur when assigning the degrees of freedom on the facets (in the asymmetric case of two unknowns per facet), the two basis functions $w_{ijk}$ on a facet $ijk$ are chosen such that $i < j$ and $i < k$. This ensures a unique choice of degrees of freedom on the facets.
The following section compares the performance of these elements through an example.
IV. Comparison of results
The example to be considered is a linear magnetostatic problem. It concerns a magnetic circuit. One-quarter of the domain is shown in Figure 1. The domain
is meshed by 1,050 tetrahedral elements. The mesh contains 5,824 unknowns of which 1,960 are related to edges and 3,864 to facets. In the case of Kameari’s element, the number of unknowns is 7,750 of which 5,790 are related to facets. The system of equations corresponding to (7) is solved by the diagonal preconditioned conjugate gradient solver.
The field (magnetic flux density) distributions given by the four kinds of elements are almost identical. This is not surprising because all these elements model the range space of the curl operator correctly, with complete first order polynomials; the dimensions of the range space of the curl operator are identical for all of them. Nevertheless, the convergence behaves very differently, as can be seen in Figure 2, where the error represents the residue of the vector potential. The best convergence is obtained with Lee’s element (Lee et al., 1991), which belongs to Webb’s hierarchical elements (Webb and Forghani, 1993) (curve L). The convergence of Yioultsis’ element (Yioultsis and Tsiboukis, 1996) is relatively slow (curve Y). Ahagon’s element (Ahagon and Kashimoto, 1995) (curve A) and Kameari’s element (Kameari, 1999) (curve K)
Figure 1. Example of a magnetic circuit
Figure 2. Convergence behaviors of various elements
have the same order of convergence, faster than Y’s and slower than L’s, but Kameari’s element requires more cpu time because its number of unknowns is much higher than that of the other elements.
In order to understand the difference in convergence behaviors, the eigenvalues of the elementary curl-curl matrix constructed over a standard element \{(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)\} are computed. There are 9 zero eigenvalues for Lee’s, Ahagon’s and Yioultsis’ elements and 13 zero eigenvalues for Kameari’s element, corresponding, as expected, to the dimension of the null space of the curl operator. The number of non-zero eigenvalues of Kameari’s element is the same as for the others, confirming that the dimensions of the range space of all these elements are the same. The non-zero eigenvalues are given in Table I.
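The zero-eigenvalue count for Lee's element can be reproduced directly. The sketch below (an assumed reconstruction: standard element, the facet-function choice of two index rotations per facet, and a degree-two quadrature rule) assembles the $20 \times 20$ elementary curl-curl matrix from the analytic curls (2.c) and (2.d):

```python
import numpy as np
from itertools import combinations, permutations

# Standard element and barycentric gradients.
V = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
C = np.linalg.inv(np.hstack([np.ones((4, 1)), V]))
g = [C[1:, i] for i in range(4)]                    # g[i] = grad(lambda_i)
vol = abs(np.linalg.det(V[1:] - V[0])) / 6.0

# Curls of the 20 basis functions of Lee's element, as functions of lambda.
curls = []
for i, j in permutations(range(4), 2):              # 12 edge functions, curl (2.c)
    curls.append(lambda lam, gi=g[i], gj=g[j]: np.cross(gi, gj))
for i, j, k in combinations(range(4), 3):           # 4 facets, 2 rotations each, curl (2.d)
    for a, b, c in [(i, j, k), (j, k, i)]:
        curls.append(lambda lam, a=a, b=b, c=c:
                     lam[a] * np.cross(g[b], g[c]) + lam[b] * np.cross(g[a], g[c]))

# Four-point quadrature rule on the tetrahedron, exact for quadratics.
qa, qb = 0.5854101966249685, 0.1381966011250105
pts = [np.roll(np.array([qa, qb, qb, qb]), s) for s in range(4)]

K = np.zeros((20, 20))
for lam in pts:
    cv = np.array([c(lam) for c in curls])          # 20 curl values at this point
    K += (vol / 4.0) * cv @ cv.T

eig = np.linalg.eigvalsh(K)
assert sum(e < 1e-10 for e in eig) == 9             # null space dimension: 9
assert sum(e > 1e-10 for e in eig) == 11            # range space dimension: 11
```

Only the count of zero eigenvalues is compared with the statement above; the non-zero values depend on the scaling of the degrees of freedom, so they are not matched against Table I entry by entry.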
It can be observed that the convergence behavior of the different elements is related to the maximal eigenvalue of their curl-curl matrices. This observation differs from the classical conclusion, in which the convergence is related to the condition number (the maximal eigenvalue over the minimal eigenvalue). It must be noted that the classical conclusion holds for positive definite systems; in our case the system is positive semi-definite, the minimal eigenvalue is zero and the condition number is infinite. It seems that in such a case the conditioning of the matrix system, and hence the convergence behavior, is governed by the maximal eigenvalue: the smaller the maximal eigenvalue, the faster the system converges. The example shows that Lee’s element behaves better than the other elements.
V. Comparison with first order element
A question often asked is whether the $p$-refinement technique (increasing the order of basis functions) or $h$-refinement technique (diminishing the size of elements) must be used to improve accuracy of results. In this section we try to give a comparison of the performance of second order and first order edge elements for the computation of magnetostatic fields.
In order to have a rational comparison, the refinement of the first order element mesh is arranged so that the number of unknowns is of the same order as for the second order element. The example of Figure 1 is considered. The
| Lee | Ahagon | Yioultsis | Kameari |
|---------|----------|-----------|---------|
| 0.0083 | 4.8000 | 0.8186 | 10.3680 |
| 0.0351 | 0.6667 | 19.2000 | 10.3680 |
| 0.0023 | 1.2900 | 1.8008 | 10.3680 |
| 0.0049 | 0.8996 | 1.8008 | 0.8407 |
| 0.0172 | 1.1034 | 35.1814 | 1.6725 |
| 0.0204 | 2.6908 | 6.5088 | 1.6725 |
| 0.0319 | 2.8084 | 8.8557 | 4.0976 |
| 0.0380 | 3.9456 | 24.6835 | 4.0976 |
| 0.3566 | 4.0020 | 26.6221 | 12.3326 |
| 1.3652 | 5.9308 | 39.0369 | 31.3712 |
| 1.3868 | 7.1529 | 53.9214 | 31.3712 |
Table I. Non-zero eigenvalues of the elemental curl-curl matrix over a standard element
first order element mesh contains 5,520 elements and 5,700 unknowns (compared with 5,824 unknowns for second order element). The convergence behavior of the first order element is compared with that of Lee’s element in Figure 3.
The result shows that for the same order of number of unknowns, the two kinds of elements offer almost the same convergence behavior. But the first order element consumes much less cpu time because its stiffness matrix is sparser than that of the second order element. In this example, the number of non-zero entries (diagonal plus the symmetric part) in the stiffness matrix is 43,828 for the first order element and 106,888 for the second order element, more than twice that of the first. So, for the same order of number of unknowns, the second order element requires more cpu time and more memory.
As concerns accuracy, the first order element provides a piecewise constant flux density field whereas the second order element gives a piecewise linear approximation. The solution behaves much better for the second order element, especially where the variation of the field is significant. This statement is clearly shown by the distribution of the magnetic flux density on a line in the air gap of the magnetic circuit (Figure 4).
To get a good solution with less cpu time and memory, the best approach is undoubtedly to mix first and second order elements, using the higher order elements only where necessary. From this point of view, the hierarchical elements (Webb and Forghani, 1993), to which Lee’s element belongs, may be useful.
VI. Conclusions
Four kinds of second order edge elements have been applied to calculate magnetostatic fields. The compatibility of the formulation is ensured by introducing a source field to represent the current density and by projecting this field on the curl of the element space. The convergence of the system is achieved without an explicit gauge condition.
Figure 3. Convergence behaviors of first and second order elements
Figure 4. Comparison of $b_z$ along a line in the airgap (key: results of first and second order elements)
Through a comparison on a magnetic circuit problem using the same mesh, we conclude that all these elements provide the same accuracy as concerns the curl field (the flux density). However, the conditioning of the matrix systems of these elements is very different, as clearly illustrated by their convergence behaviors and by the differences between the eigenvalues over a standard element. In this sense, Lee’s element seems better than the others. Moreover, it is simple in form and belongs to the hierarchical elements, which may be helpful for mixed $h$- and $p$-adaptive mesh refinement.
A comparison with first order elements has also been carried out. Results show that for the same order of unknowns, the first order element is less time consuming because its stiffness matrix is more sparse. But second order elements provide smoother field results. This comparison confirms the necessity of mixing different orders of elements in adaptive mesh generation.
It must be indicated that the comparison given in this paper concerns the magnetostatic field case. The conclusion may change for magnetodynamic field computation. In fact, even though all these elements provide the same curl field, the approximation of the primal field by these elements may be different. Their performance for the computation of magnetodynamic fields will be the subject of further study.
References
Ahagon, A. and Kashimoto, T. (1995), “Three-dimensional electromagnetic wave analysis using high order edge elements”, *IEEE Trans. Magn.*, Vol. 31 No. 3, pp. 1753-6.
Albanese, R. and Rubinacci, G. (1990), “Magnetostatic field computations in terms of two component vector potentials”, *Int. J. for Num. Meth. in Engineering*, Vol. 29, pp. 515-32.
Bossavit, A. (1988), “Whitney forms: a class of finite elements for three dimensional computations in electromagnetism”, *IEE Proc. A*, Vol. 135 No. 8, pp. 493-500.
Kameari, A. (1999), “Symmetric second order edge elements for triangles and tetrahedrons”, *IEEE Trans. Mag.*, Vol. 35 No. 3, pp. 1394-7.
Lee, J.F., Sun, D.K. and Cendes, Z.J. (1991), “Tangential vector finite elements for electromagnetic field computation”, *IEEE Trans. Mag.*, Vol. 27 No. 5, pp. 4032-5.
Nédélec, J.C. (1980), “Mixed finite element in $\mathbb{R}^3$”, *Numer. Math.*, Vol. 35, pp. 315-41.
Ren, Z. (1996), “Influence of the R.H.S. on the convergence behaviour of the curl-curl equation”, *IEEE Trans. Magn.*, Vol. 32 No. 3, pp. 655-8.
Wang, J. and Ida, N. (1993), “Curvilinear and higher order ‘edges’ finite elements in electromagnetic field computation”, *IEEE Trans. Mag.*, Vol. 29 No. 2, pp. 1491-4.
Webb, J.P. and Forghani, B. (1993), “Hierarchical scalar and vector tetrahedra”, *IEEE Trans. Magn.*, Vol. 29 No. 2, pp. 1495-8.
Yioultsis, T.V. and Tsiboukis, T. D. (1996), “Multiparametric vector finite element: a systematic approach to the construction of three-dimensional, high order, tangential vector shape functions”, *IEEE Trans. Magn.*, Vol. 32 No. 3, pp. 1389-92. |
New France Meets New England
Over the years New France grew slowly, perhaps too slowly. By the 1750s New France had about 80,000 Europeans while New England had nearly 1,250,000! Why such a difference?
The focus of life in New France was the fur trade. The traders spent their time in the woods and on the rivers collecting furs. They were not really interested in having cities. Cities meant less land for animals.
While the French were roaming all over the Great Lakes region, the British were busy building towns and starting farms. The farms fed the growing population. The British allowed many people to come to America who wanted religious freedom. The French did not.
Even worse, the British were beating the French at fur trading. They offered better goods and at lower prices to the Indians! The British kept moving farther west. A struggle over North America was building.
Spats, Arguments, and Fights
The French put Charles Langlade in charge of protecting their interests. Langlade was born at Michilimackinac. His mother was Ottawa and his father was French. He knew much about the Indians’ way of fighting in the woods.
In 1754, leading 250 Ottawa and Ojibwa warriors, Langlade went east to assert French control. The French and British fought at Pittsburgh. A
young George Washington led the British soldiers. But Washington had to surrender and admit he was trespassing. The fireworks of the French and Indian War had begun! The war spread like a fire to all British and French territories around the world. Either side could lose its empire. At first, the French could not be beaten. They won battle after battle. The French woodsmen and Native Americans were a big help.
**Everyone Wants the Indians**
Both sides were tugging at the different tribes to join them. The French used warriors from western tribes; many of them came from Michigan. The Native Americans were often willing to attack the British because the British were starting farms and towns, and taking away Indian land in the process.
Later, the French began to lose the battles. To get more support, Charles Langlade met with a grand council of the Michigan tribes across the river from Detroit. It was March 1759. A famous chief named Pontiac was at the meeting, listening. It was reported that Langlade’s speech went something like this:
“Listen!
“My Brothers, I will not try to tell you that the French are still winning the war. But do not make the mistake of thinking that because there have been setbacks, the French are lost. Now I... ask again that you raise up your tomahawks in the French cause, which is and must be your cause as well.... you know in your hearts the French are your friends.... They love the land as you love it and know that it belongs not to individuals, but to all, to share equally. The English may ply you with great gifts.... but the gifts disappear when you have won and your land disappears as well. If you do not fight him with the French, then mark what I say, the time will come when you will have to fight him alone....”
*Wilderness Empire* by Allan W. Eckert.
The French and British worked hard to gain the support of the Indians. The Indians fought for both sides during the French and Indian War. (Courtesy Mackinac State Historic Parks)
Trouble On the St. Lawrence
The St. Lawrence River was the supply line into New France. In 1758 the British sailed up the river and took control. The St. Lawrence became the pathway for British victory. Soon the French could not get supplies to trade with the tribes.
In 1759, it came down to a battle outside the gates of Quebec, the capital of New France. The French fort was strong and it sat on a steep hill looking over the river. Charles Langlade, Chief Pontiac, and about 400 Indians from the Michigan area were there.
The two sides had been fighting for 80 days. The French were having problems but it looked as though the British would have to leave soon. Their supplies were very low and the river would freeze soon. The British commander took one last gamble.
Quebec City: Will They See Us Coming?
On a rainy and moonless night, British soldiers quietly crossed the St. Lawrence River. The soldiers pulled themselves slowly up the great cliff. During the night more and more men scrambled to the top. The boats went back and forth across the river all night. It was amazing, but the French did not see them!
At six in the morning the French general got on his horse and rode outside the fort where he saw about 4,000 British soldiers lined up. He gasped, “This is serious!” The British had done what seemed impossible.
The alarm was given and thousands of French troops ran out of the fort. An intense battle began. The generals from both sides were killed and the French lost the battle. Once the British controlled Quebec, they controlled the St. Lawrence River. No supplies could reach French soldiers in the west, including Michigan.
An Empire Dies!
That night Charles Langlade met with Pontiac and then the Indians headed back to the Great Lakes country. By 1760 all of New France had been surrendered to the British.
What Are the Facts?
1. Why was New France weaker than New England at the time of the French and Indian War?
2. What was the French and Indian War about?
3. What argument did Charles Langlade use to convince Indians from the Michigan area to fight against the British?
4. Why was the St. Lawrence River important in the French and Indian War?
Express yourself:
If the British had lost the French and Indian War, give your opinion about how Michigan would be different today.
Chapter 4 Section 2
Chief Pontiac Rebels
The British made mistakes in dealing with the tribes. The Indians became quite upset. Chief Pontiac finally attacked and tried to drive the British out.
The British Take Over Michigan
The British came to Detroit to take charge of Fort Pontchartrain. Its new name would be Fort Detroit. Though the French soldiers left, many of the fur traders and those with small farms stayed behind. The French continued to be the largest non-Indian group in Michigan for another 60 years! Even though the French seemed friendly, some of them hoped their soldiers would come back some day. They told this to the Native Americans too.
Bad Policies
The British government was not wise in dealing with the tribes. There were new orders not to give them presents or to trade any gunpowder to them. By now the tribes were used to using guns, and they needed gunpowder for hunting. Also the British demanded more furs for everything the tribes needed. Some traders cheated the Indians too. Meanwhile, the British wanted land for farms and settlers.
What one artist thought Pontiac looked like. (Art by Dirk Gringhuis)
The tribes were growing restless. They knew the French and British were still at war in Europe. There were rumors the French army would return. At Detroit, Major Gladwin was given command and he was not friendly to the Indians. The Potawatomi, Ottawa, and Huron all had villages near Detroit. Chief Pontiac lived in the Ottawa village where Windsor, Canada is today.
**Trouble Is Brewing**
On April 27, 1763 Pontiac invited many tribes to come to a meeting along the Ecorse River. He talked about a way to get rid of the British. He would ask to meet with Major Gladwin in the fort. Pontiac would bring many warriors in with him. Each man would be wearing a large blanket.
Over the next few days Indians began to show up at the French blacksmith asking for metal files. Did he ask why they wanted so many files? Did he provide the files with a curious look or just a little smile? Pontiac’s warriors were cutting the barrels of their muskets short with the files! The guns would be under their blankets when they walked into the fort!
Mid-morning on May 7, 1763, 11 chiefs and 60 warriors solemnly walked into the fort—their fingers close to gun triggers. But they soon realized something was wrong. The soldiers were not going about business as usual. They were armed and ready. Every move of the Indians was tensely watched by the Englishmen. How did they know? There are many stories explaining how the British learned about Pontiac’s plan. One says a young French woman heard about the plan and wanted to warn her boyfriend who was a British trader.
A grim Pontiac told Major Gladwin this was not the way to hold a council and he left. The next day Pontiac asked if all of his warriors could come and smoke a pipe of peace with the British. Gladwin told him only chiefs could come.
The next day all the Ottawa came to the fort anyway. They were not allowed to go in. Pontiac was furious that his plans had not worked. The 120 or so British soldiers knew they were in real danger because Pontiac had about 800 warriors.
Pontiac Attacks Detroit
Suddenly yells and war cries came from the woods. The British soldiers tensed. They were far from any help. Warriors rushed up to the fort and furiously tried to hack a hole in the wooden wall with their *tomahawks*. *Tomahawks* are small metal hatchets used in fighting. After many warriors were killed, the Native Americans became convinced they could not cut their way into the fort.
That night the Indians started fires against the wooden walls. British soldiers raced back and forth with buckets of water to stop the flames. Officers expected the Indians would try to do the same thing the next night so they took precautions. A hole was cut through the wall from the inside and a cannon placed to fire on anyone coming close to the wall! In the darkness that night many more Indians died.
For days the battle continued. The British were becoming desperate. They had only three bullets left for each soldier and very little food. Major Gladwin clung to the hope that supplies and reinforcements would come by the end of the month...just a few more days. On May 30, the soldiers could see the supply canoes in the distance on the river. As the boats came closer,
those in the fort were horrified as they realized the canoes had Indians in them. Pontiac and his men had already captured the food and ammunition. The fight went on. All of June and most of July passed with the British closed up in Fort Detroit. Finally the British reached Detroit by water with another 280 men and supplies.
**Time Runs Out**
The tribal warriors did not give up. They were sure the soldiers would run low on ammunition again. But time was also going against them. It was fall and they needed to hunt and gather food for the winter. Warriors, with their families, began to drift away. Some of the Indian groups made peace. Near the end of October, Pontiac received a letter from the French telling him France and England had finally made peace. The French would not send soldiers to help them, so Pontiac decided to stop the attack. The Indian siege of 153 days was the longest in American history and showed Pontiac’s skill as an organizer and warrior.
**Secret Plans For Michilimackinac**
Detroit was not the only British fort the Native Americans attacked. At Fort Michilimackinac, the Chief of the Ojibwa tribe had become friends
Fort Michilimackinac was attacked by the Ojibwa during a lacrosse game. Many British soldiers were killed. (Courtesy Michigan Bell an Ameritech Company)
with the British commander. The chief suggested the Ojibwa and Sauk tribes play a game of baggataway or lacrosse in honor of the British king’s birthday. The commander agreed it was a fine idea. The Indians could play just outside the fort. *Lacrosse is an Indian game with two teams having many players. The players have small rackets and try to move a ball across the other team’s goal.*
Alexander Henry was one of the British fur traders at the Fort. He was adopted as a brother by Wawatam, another Ojibwa chief. It was Chief Wawatam who invited Henry to come on a hunt with him and his wife. Wawatam said he was “worried by the noise of evil birds.” That was a tribal expression meaning there might be trouble. Since Henry was waiting for his canoes of supplies, he turned the chief down. He was touched when Wawatam and his wife left with tears in their eyes. That same day many Indians came into the fort to trade. Henry was puzzled when the only goods they bought were tomahawks!
**The Lacrosse Game**
Many of the soldiers came out to watch the game. It was a great sight! The British commander made a large bet on the Ojibwa side. Even though it was a warm day, Native American women wrapped in blankets sat near the gate. Suddenly the ball went over the wall and into the fort. The players rushed inside after it. As they ran they snatched weapons from under the women’s blankets. One officer held off several Indians with his sword until he was killed, but few of the soldiers had time to defend themselves.
The Frenchman Charles Langlade and his family were watching the fighting from their house. Alexander Henry ran up to the Langlades and begged them to help him. Langlade said,
**“What do you expect me to do?”**
Amazingly, a Native American woman who worked in their house took Henry up the back stairs and hid him in the attic.
**War Spreads Far and Wide**
During that long summer of 1763 the British soldiers learned of other disasters. Many tribes all along the western frontier had risen up to throw out the British. By July, nine of the 12 forts west of the Ohio River had been captured. Only Fort Detroit, Fort Niagara on the Niagara River, and Fort Pitt at Pittsburgh held out against the tribes. It was part of the greatest Native American uprising against the Europeans in American history and became known as Pontiac’s War or Pontiac’s Rebellion. A rebellion is an armed attack against those in control. *It could be an attack against a king, a government, or the military.* Pontiac was a courageous leader who was trying to keep his people’s land the best way he knew.
Pontiac’s Rebellion greatly upset the British. They certainly did not want this sort of trouble to happen again so they decided to stop any more settlers from moving west onto tribal land. Maybe this would keep the tribes from attacking the forts. The British made the Proclamation of 1763 which said it was illegal for any settler to go west of the Appalachian Mountains.
It is always hard to please two groups at the same time. Settlers in the American colonies had long counted on the right to be able to move west and start farms and villages in “new” areas. The Proclamation of 1763 was a real aggravation to them. It became one of many problems between American colonists and the British government which finally led to the War for Independence.
Pontiac’s Rebellion, which started to push the British out of the tribal lands, did succeed in a way. The British were pushed out after the War for Independence, but this did not help Pontiac and his people. The American colonists started the United States and more settlers swarmed west than ever before!
**The African American Trader**
In the mid-1760s Jean de Sable (JHAN duh SAW bul) came to Michigan. De Sable, born on the island of Haiti, was an African American with a French background.
It is said de Sable became good friends with Chief Pontiac and lived near his camp, trading with the tribes. This African American moved west when Pontiac left Michigan. In 1779 he became the first non-Native American to have a permanent settlement at the portage of Chicago.
De Sable was well-educated and spoke English, French and several tribal languages. He had a good reputation among both the English and Native Americans. Jean de Sable was a unique person and one of the first African Americans in the land of Michigan.
**What Are the Facts?**
1. Which things made the tribes angry once the British took over Michigan?
2. Who made the plan to take over the fort at Detroit in 1763? Did the French encourage or help with this plan?
3. How did the tribes plan to surprise the British soldiers at Fort Michilimackinac?
4. Chief Wawatam acted as an individual and had different feelings from those who attacked Fort Michilimackinac. What did Chief Wawatam do? Give an example to show his feelings were different.
5. How did Pontiac’s Rebellion help lead to the American War for Independence from the British?
Chapter 4 The British–A New Flag on the Frontier 1750-1784
Chapter Review
Consider the Big Picture
The French lost control of the St. Lawrence River–their way into the Great Lakes–and thus they lost their empire of New France.
The British did not control Michigan for a long time–less than 40 years, but many things changed during this time.
The rules the British finally used when trading with the tribes upset them very much and caused a war led by Chief Pontiac.
Many Michigan area tribes switched from fighting against the British to fighting with them.
During the Revolutionary War, Detroit was a British base for raids to other parts of the country.
Building Your Michigan Word Power
Write each of these six words in a column. (lacrosse, massacre, rebellion, scalp, siege, tomahawk) Next to each one write the best definition from those below. There are extra definitions.
something from the top of a person’s head; a large and heavy ax;
an uprising against those in control; a short fight; a complete victory with the loss of most of the enemy; a very long attack against a fort or city; the French name for a Native American game; a small metal hatchet
Talking About It
Get together in a small group and talk about:
What does the discovery of a path up a steep hill in Quebec City have to do with the French leaving Michigan, which is over 600 miles away?
In 1760 most tribes in the Michigan area were for the French, but by the time of the Revolutionary War they were fighting with the British. What happened? Why did they change sides? What was the goal of the tribes?
Pontiac’s plan to force the British out of the Great Lakes was quite amazing. Can you think of any other time in history where native people have attacked so many forts with such success? What was the reason behind this success?
Put It To Use
Imagine you are a newspaper reporter based in Detroit for the Michigan Frontier Gazette. By writing four short articles, report the events of the Revolutionary War from the viewpoint of a person in Detroit.
Most of Michigan history to this point is about Native Americans, the French and the British–but the book also mentions three people from other places. Who are they and where were they from? Write a biography of each person.
ABSTRACT
The notion of machine-aided cognition implies an intimate collaboration between a human user and a computer in a real-time dialogue on the solution of a problem, in which the two parties contribute their best capabilities. In order for this intimate collaboration to be possible, a computer system is needed that can serve simultaneously a large number of people, and that is easily accessible to them, both physically and intellectually. The present MAC System is a first step toward this goal. The purpose of this paper is to present a brief description of the current system, to report on the experience gained from its operation, and to indicate directions along which future developments are likely to proceed.
Paper presented at the Symposium on Computer Augmentation of Human Reasoning, Washington, D.C., June 16, 1964. This paper will be published in the Proceedings of the Symposium and in the January 1965 issue of the IEEE SPECTRUM.
"Work reported herein was supported by Project MAC, an M.I.T. research program sponsored by the Advanced Research Projects Agency, Department of Defense, under Office of Naval Research Contract Nonr-4102(01). Reproduction in whole or in part is permitted for any purpose of the United States Government."
THE MAC SYSTEM: A PROGRESS REPORT
INTRODUCTION
The notion of machine-aided cognition implies an intimate collaboration between a human user and a computer in a real-time dialogue on the solution of a problem, in which the two parties contribute their best capabilities. In order for this intimate collaboration to be possible, a computer system is needed that can serve simultaneously a large number of people and that is easily accessible to them, both physically and intellectually. The present MAC System is a first step towards this goal.
The case for machine-aided cognition with the aid of a multiple-access computer system was very eloquently argued by Professor John McCarthy in his 1961 lecture "Time-Sharing Computer Systems".\(^1\) The views he presented were largely the consensus of a committee that had just completed a comprehensive study of the future computational needs of M.I.T. These concepts were also embodied in the Compatible Time-Sharing System (CTSS)\(^2\) developed by the M.I.T. Computation Center under the leadership of Professor F. J. Corbato; an early version of this system was first demonstrated in November, 1961.\(^3\) The MAC computer system is a direct descendant of CTSS.
The last section of McCarthy's lecture introduced the notion of a computer utility capable of supplying computer power to each "customer" where, when, and in the amount needed. Such a utility would be in some way analogous to an electrical distribution system. That is, it could provide each individual with logical tools to aid him in his intellectual work, just as electric tools today aid him in his physical work. In this regard, it might be said that the present state of the computer as a source of logical power is similar to that of the early steam engine as a source of mechanical power. The steam engine could generate much more power than could any man or animal, and therefore could perform tasks that were previously impossible. However, the power generated could not be supplied on an individual basis to aid men in their daily work, until the advent of electrical power distribution.

\(^1\) Superscripts refer to numbered items in the References.
The analogy between electric power and computer power illustrates only one of the aspects of a computer utility--namely, its ability to provide the equivalent of a private computer whose capacity is adjustable to individual needs. Of much greater importance to the individual customer would be the benefits that such a utility could make available to him by placing at his fingertips a great variety of services in the form of public procedures, data, and programming aids, and by allowing him to store and retrieve his own private files of data and programs. Furthermore, a computer utility could provide customers having common interests with convenient means for collaboration. For instance, designers working together on a complex system could check continually the status of the overall design as each of them develops and modifies his own contribution.
The MAC System is an experimental computer utility which, since November, 1963, has been serving a small but varied segment of the M.I.T. community. It is the most extensive of several time-sharing systems in operation. In spite of its experimental character and its limited capabilities, it has gained quick acceptance as a daily working tool. The purpose of this article is to present a brief description of the current system, to report on the experience gained from its operation, and to indicate directions in which future developments are likely to proceed. The scope is limited to the general organization of the system and to the basic services it can provide. In particular, no mention will be made of the problem-oriented languages and other special programming facilities that are being added to the system by the system users themselves, thereby increasing its intellectual accessibility. Thus the work reported here is only a small part of the overall research effort encompassed by Project MAC.
EQUIPMENT CONFIGURATION
The primary terminals of the MAC System are, at present, 52 Model 35 Teletypes and 56 IBM 1050 Selectric teletypewriters (adaptations of the "golfball" office typewriter). These are located mostly, but not exclusively, within the M.I.T. campus. Each of these terminals can dial, through the M.I.T. private branch exchange, either the IBM 7094 installation of Project MAC or the similar installation of the M.I.T. Computation Center. The supervisory programs of the two computer installations may, independently, accept or reject a call, depending on the identity of the caller. Access to the MAC System can also be gained from any station of the Telex or TWX telegraph networks. Some tests and demonstrations have been conducted from European locations, and experiments are being planned in collaboration with a number of universities to provide further experience with long-distance operation of the system.
Although Teletypes and other typewriter-like terminals are adequate for many purposes, some applications demand a much more flexible form of graphical communication. The MAC System includes for this purpose the initial model of a multiple-display system developed by the M.I.T. Electronic Systems Laboratory for computer-aided design. The system includes two oscilloscope displays with character generator and light pen, as well as some special-purpose digital equipment that performs the light-pen tracking, simplifies the task of the computer in maintaining the display, and performs common operations such as translating and rotating the display. The two oscilloscopes can be operated independently of each other; communication with the computer can be achieved by means of the light pen, and also through a variety of other devices such as knobs, push buttons, toggle switches, and a typewriter. The meaning of a signal from one of these input devices is entirely under program control. The whole display system communicates with the IBM 7094 of the MAC installation through the direct-data channel, and the display data are stored in the central memory of the 7094. Thus, the display must be located in a room
adjacent to the computer installation. Remote operation would require the addition of a memory and some processing capacity for local maintenance of the display.
A separate, very flexible display terminal is provided by a DEC PDP-1 computer which can communicate from a remote location with the MAC computer installation through a 1200-bit-per-second telephone connection. The PDP-1 can also be used as a buffer between the MAC computer and the display system just described above, thereby permitting simulation and study of remote operation of the latter.
All of these terminals can, in principle, operate simultaneously and independently of one another by time-sharing the central processor of the MAC computer installation. However, in order to insure prompt response, the number of terminals active at a given time is limited by the supervisory program to 24. This number has already grown from 16 to 24, and is expected to grow further and possibly to double in the next few months, although maximization of this number is not a primary objective at this time.
The equipment configuration of the MAC computer installation is illustrated in Fig. 1. The IBM 7094 central processor has been

**Fig. 1 Equipment Configuration**
modified to operate with two 32,000-word banks of core memory and to provide facilities for memory protection and relocation. These features, together with an interrupt clock and a special operating mode (in which input-output operations and certain other instructions result in traps), were necessary to assure successful operation of independent programs coexisting in core memory. One of the memory banks is available to the users' programs; the other is reserved for the time-sharing system supervisory program. The second bank was added to avoid imposing severe memory restrictions on users because of the large supervisory program, and to permit use of existing utility programs (compilers, etc.), many of which require all or most of a memory bank.
The central processor is equipped with six data channels, two of which are used as interfaces to conventional peripheral equipment such as magnetic tapes, printers, card readers, and card punches. A third data channel provides direct data connection to terminals that require high-rate transfer of data, such as the special display system.
Each of the next two data channels provides communication with a disc file and a drum. The storage capacity of each disc file is 9 million computer words, and the capacity of each drum is 185 thousand words. The time required to transfer 32,000 words in or out of core memory is approximately two seconds for the disc file and one second for the drum. The two disc files, with a total capacity of 18 million words, are used to store the users' private files of data and programs, as well as public programs, compilers, etc. The two drums are used for temporary storage of active programs that have to be moved out of the core memory to make room for other programs. Thus, in this respect they act as an extension of the core memory. Drums with transfer rates that are four times faster will be substituted for the present ones in the near future.
The transmission control unit (IBM 7750) consists of a stored-program computer which serves as an interface between the sixth data channel and up to 112 communication terminals capable of telegraph-rate operation (approximately 100 bits per second). An appropriate number of these terminals are connected by trunk lines.
to the M.I.T. private branch exchange and to the TWX and TELEX networks. Higher rate terminals can be readily substituted for groups of these low-rate terminals; for instance, on the present MAC System, three 1200-bit-per-second terminals are installed, one of which provides the communication channel to the PDP-1 computer. All of these terminals are compatible with Bell System Data-Phone data sets.
Part of the core memory of the transmission control unit is used as an output buffer, because the supervisory program and its necessary buffer space have grown in size to the point of occupying the whole of the A bank of core memory. The design intent was and still is to provide sufficient input-output buffer space in the main memory to eliminate unnecessary core-to-core transfers; the present mode of operation is a makeshift made necessary by equipment limitations.
The Digital Equipment Corporation's PDP-1 computer is not an integral part of the MAC Time-Sharing System, except when connected to it as indicated above. It was acquired to permit early experimentation with light-pen interaction with a display, and for other very high speed interaction work. It includes a 16,000-word core memory, Microtapes, a high-speed channel, and a scope display with character generator and light pen. It will be replaced by a PDP-6 computer in the near future.
OPERATING PROGRAM
The operating program of the MAC Computer System is a direct descendant of the Compatible Time-Sharing System (CTSS) developed by the M.I.T. Computation Center, and described in the manual published in July 1963. Many parts of the operating program have since been rewritten to facilitate system maintenance, and various facilities have been added that had been described in the manual but had not then been implemented. Furthermore, it now includes various user-developed features such as the I-O adapter for the display system described in the preceding section, compilers for various new languages, and other programming aids.
The heart of the MAC System is the supervisory program which resides at all times in the A bank of core memory. This program handles the communication with all the terminals, schedules the
login t193 fano
W 1036.4
PASSWORD
PARTY LINE BUSY, STANDBY LINE HAS BEEN ASSIGNED
T0193 2859 LOGGED IN 06/09/64 1036.7
CTSS BEING USED IS MACU5K
SHIFT MINUTES
ALLOTTED USED SINCE 06/09/64 1036.7
1 100 0.0
2 100 0.0
3 100 0.0
4 100 0.0
LAST LOGOUT WAS 06/09/64 1036.7
TRACK QUOTA= P, 500 Q. 0041 TRACKS USED.
R 5.550+1.000
Fig. 2a Print Out of Demonstration Session - Part a
input
d 1038.3
00010 print format text,$ $
00020 print format text, $type range n1. to n2. on 2 lines $
00030 read format b, fn1
00040 read format b, fn2
00050 n1=fn1
00060 n2=fn2
00070 print format text,$primes are$
00080 through loopb, for n=n1,1,n.g.n2
00090 whenever n=1,3?when?
00100 "loopb whenever (n-1)*l.e.0,transfer to loopb
00110 through loopa,for i=2,1,i.g.(n/i)
00120 "loopa whenever (n-(n/i)*i).e.0,transfer to loopb
00130 print format a,n
00140 loopb continue
00150 print format text,$range finished$
00160 execute exit
00170 execute command"$$nt,
00180 vector values a=${(18)}
00190 vector values b=${(f20.4)}
00200 integer n,i,?
00210 integer n1.,n1,n2
00220 end of program
MAN. delete,100
MAN. delete,160
MAN. file prime mad
d 1057.6
R .800+3.616
mad prime
W 1058.4
THE FOLLOWING NAMES HAVE OCCURRED ONLY ONCE IN THIS PROGRAM AND ARE ALL
ASSIGNED TO THE SAME LOCATION. COMPILATION WILL CONTINUE.
PRINT
ERROR 02051 IN PART3. CARD NO. 00090
INVALID MODE FOR SOME OPERAND IN PRECEDING STATEMENT
NO TRANSLATION.
R .600+.835
edit prime mad
W 1059.2
00230
MAN. 130 print print format a A""",a,n
MAN. file prime mad
W 1101.1
R 1.066+1.200
Fig. 2b Print Out of Demonstration Session - Part b
time sharing of the central processor on the part of active programs, moves these programs in and out of core memory, and performs a variety of bookkeeping functions necessary to protect users' files and to maintain detailed accounting of the system usage. The services that the system can provide are organized in the form of commands, that is, instructions that system users can give to the system. The programs corresponding to the commands are permanently stored in the disc files; when a command is issued, a copy of the corresponding program is made, loaded, and executed just as if it were a user's program. The language facilities available in the system include FAP, MAD, MADTRAN (a translator of FORTRAN into MAD), COMIT, LISP, SNOBOL, a limited version of ALGOL, on-line, extended versions of SLIP and GPSS, and two problem-oriented languages for Civil Engineering named COGO and STRESS.
The system is rapidly evolving through the addition of new language facilities and other utility programs and programming aids. The operating program itself is now being modified by system programmers working on line, and modifications are occasionally introduced without even interrupting the operation of the system. The entire system, exclusive of users' files, includes approximately 1/2 million words of code, of which a little more than 50 thousand were specifically written for the system, and the rest are adaptations and modifications of compilers and other programming aids developed at M.I.T. and elsewhere.
Some of the basic services available from the system are illustrated in the six parts of Fig. 2 which represent a complete record of a demonstration session held on June 9, 1964 between 10:36 and 11:39 A.M. The total computer time used was 1.3 minutes. The author's typing appears in lower case letters, and the computer's replies are in capital letters. All digits represent computer replies except those intermixed with lower case text, and the two lines following "type range n1..n2. etc..." in Fig. 2(d). Each command is immediately acknowledged by the computer with a character W, followed by the time of the day in which the first two digits are hours, the next two are minutes, and the one following the period indicates tenths of a minute. The end of an interaction is indicated by the letter R,
followed by the sum of the two numbers indicating the total number of seconds of computer time used during the whole interaction. The first of the two numbers indicates the processing time, and the second represents the time wasted in swapping programs back and forth between core memory and drum.
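The time-stamp and accounting conventions just described can be decoded mechanically. The following Python sketch (a modern illustration, not part of the MAC System; the function names are mine) parses a "W" wall-clock stamp of the form hhmm.t and an "R" accounting line of the form "R proc+swap", under the stated assumptions about their format:

```python
def parse_wall_clock(stamp: str) -> tuple[int, int, int]:
    """Split an 'hhmm.t' stamp such as '1036.4' into
    (hours, minutes, tenths of a minute)."""
    clock, tenths = stamp.split(".")
    return int(clock[:-2]), int(clock[-2:]), int(tenths)

def parse_interaction(reply: str) -> float:
    """Total computer seconds reported by an 'R proc+swap' line:
    processing time plus program-swapping time."""
    proc, swap = reply.removeprefix("R ").split("+")
    return float(proc) + float(swap)
```

For example, "W 1036.4" decodes to 10:36 plus four tenths of a minute, and "R .800+3.616" accounts for 4.416 seconds of computer time in all.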
The session was started by issuing the login command, followed by the author's problem number and name. The computer then asked for his password, which is not recorded because the printing mechanism of the typewriter is automatically disconnected while the password is typed. All the lines assigned to the author's user group were already in use. Therefore, a stand-by line was assigned because the total number of lines in use was less than 24. This condition introduced the possibility of an automatic logout of the author's problem. The rest of Fig. 2(a) contains various information, including the allotment of computer time in minutes which had been made that very morning, and of which none had yet been used.
Figures 2(b) and 2(c) illustrate some of the facilities for writing, editing, filing, and compiling programs and for retrieving and printing
```
mad prime
W 1105.k
LENGTH 00205,T.V. SIZE 00005, ENTRY 00043
R 2.400+1.000
printf prime mad
W 1106.0
00010 PRINT FORMAT TEXT,$ $
00020 PRINT FORMAT TEXT, $TYPE RANGE N1. TO N2. ON 2 LINES $
00030 READ FORMAT B,FN1
00040 READ FORMAT B, FN2
00050 N1=FN1
00060 N2=FN2
00070 PRINT FORMAT TEXT,$PRIMES ARE$
00080 THROUGH LOOPB, FOR N=N1,1,N.G.N2
00090 WHENEVER N.L.3, TRANSFER TO PRINT
00110 THROUGH LOOPA,FOR I=2,1,I.G.(N/I)
00120 LOOPA WHENEVER (N-(N/I)*I).E.0,TRANSFER TO LOOPB
00130 PRINT PRINT FORMAT A,N
00140 LOOPB CONTINUE
00150 PRINT FORMAT TEXT,$RANGE FINISHED$
00170 EXECUTE DORMNT.
00180 VECTOR VALUES A=$(18)$
00190 VECTOR VALUES B=$(F20.4)$
00200 VECTOR VALUES TEXT=$(1246)$
00210 INTEGER N,I,N1,N2
00220 END OF PROGRAM
R .416+1.016
```
Fig. 2c Print Out of Demonstration Session - Part c
private files. The input command causes the computer to type out successive line numbers as each statement of the program is written. The program in Fig. 2(b), a corrected version of which is printed out in Fig. 2(c), can be used to compute prime numbers as illustrated in Fig. 2(d). The program is written in the MAD language. Some of the
```
load prime
W 1111.1
R 3.100+1.200
save prime
W 1111.4
R .866+.600
listf prime saved
W 1111.7
6/09/64 PRIME SAVED P 17
R .550+1.416
resume prime
W 1112.0
#EXECUTION.
TYPE RANGE N1. TO N2. ON 2 LINES
1000000.
1001000.
PRIMES ARE
1000003
1000033
1000037
1000039
1000081
1000099
1000117
1000121
1000133
1000151
1000159
1000171
1000183
1000187
1000193
1000199
1000211
1000-QUIT,
R 8.400+2.216
```
Fig. 2d Print Out of Demonstration Session - Part d
typing errors in Fig. 2(b) were intentional, others accidental. The quotation mark erases the preceding character and itself; thus, two successive quotation marks erase the preceding two characters. The question mark erases the entire line up to that point. Each line is terminated by giving a carriage return. A carriage return (i.e., entering a null line) is also used to enter the manual mode of input, as illustrated in line 230. In the manual mode, preceding lines can be deleted or corrected by issuing appropriate commands. After the deletion of two superfluous lines, the program was filed under the name Prime Mad as part of the author's private file.
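The erase conventions just described are simple enough to state as a procedure. A Python sketch (an illustration, not CTSS code; the function name is mine) applying the quotation-mark and question-mark rules to a typed line:

```python
def apply_erase_rules(raw: str) -> str:
    """Apply the input-erase conventions described above:
    '"' erases the preceding character (and itself), so two
    successive quotation marks erase two characters; '?'
    discards the entire line typed up to that point."""
    line: list[str] = []
    for ch in raw:
        if ch == '"':
            if line:
                line.pop()
        elif ch == "?":
            line.clear()
        else:
            line.append(ch)
    return "".join(line)
```

Thus typing `abcd""ef` yields `abef`, and typing `junk?print` yields `print`.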
Next, the mad command was issued to cause the program to be compiled by the MAD translator. The first attempt at compilation failed because of an error for which a diagnostic was printed. The error consisted of the omission of the word print in line 130. The edit command was then used to reopen the program file, the error was corrected in the manual mode, and the program was refiled under the same name. The second attempt to compile the program was successful, as indicated in Fig. 2(c), and the corrected program was then printed out using the printf command.
The binary version of the PRIME program was then loaded (together with the necessary library subroutines) by use of the load command, and the resulting core image and machine state were recorded by use of the save command. The latter command created a new file named PRIME SAVED, as indicated by the system reply to the command listf prime saved. The program was then started by issuing the command resume prime. It could also have been started by issuing the command start prime instead of save prime, or by loading and starting the program in one operation by means of the command loadgo prime.
The PRIME program asked for the numbers N1 and N2 defining the desired range of prime numbers; the numbers 1,000,000 and 1,001,000 were given. The typing out of the prime numbers was interrupted by pressing twice the break button on the Teletype, which resulted in the system's replying with the word QUIT.
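The algorithm embodied in the PRIME program is ordinary trial division: the inner loop tries divisors i = 2, 3, ... only while i does not exceed n/i (that is, while i² ≤ n), and rejects n as soon as a divisor is found. A Python rendering of the same logic (an illustrative translation, not the MAD original; small values of n are handled here by a plain n < 2 test rather than the program's special case) is:

```python
def primes_in_range(n1: int, n2: int) -> list[int]:
    """Trial division up to the square root, mirroring the two
    MAD loops in the PRIME program."""
    found = []
    for n in range(n1, n2 + 1):
        if n < 2:
            continue
        i = 2
        composite = False
        while i <= n // i:       # loop while i*i <= n
            if n % i == 0:       # a divisor: n is not prime
                composite = True
                break
            i += 1
        if not composite:
            found.append(n)
    return found
```

Run over the start of the range used in the demonstration, it reproduces the first primes printed in Fig. 2(d): 1000003, 1000033, 1000037, 1000039.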
The command listf without arguments causes the system to list the contents of the user's own private file directory as illustrated in
Fig. 2(e). There are four items with PRIME as first name: PRIME MAD, the original program shown in Fig. 2(c); PRIME BSS, its translation into machine language; PRIME MADTAB, a table (storage map) useful for debugging purposes; and PRIME SAVED, the complete machine state after loading the program. The item named PRIMES MAD is a slightly different version of PRIME MAD.
Items are deleted from the file by issuing the delete command followed by the names of the items, as shown also in Fig. 2(e). The
```
listf
W 1118.5
20 FILES 61 TRACKS USED
DATE NAME MODE NO. TRACKS
6/09/64 PRIME SAVED P 17
6/09/64 PRIME BSS P 1
6/09/64 PRIME MADTAB P 1
6/09/64 PRIME MAD P 1
6/08/64 MON04 SAVED P 12
6/08/64 PRIMES MAD P 1
6/05/64 FILTER SAVED P 19
5/12/64 GSUBA BSS P 3
5/12/64 SCANA BSS P 3
5/08/64 SCANT BSS P 2
R .616+.800
```
```
delete prime saved prime bss prime madtab primes mad mon04 saved
W 1122.9
R 3.200+.400
```
```
delete * bss
W 1123.7
R 1.666+.200
```
```
listf
W 1123.9
2 FILES 21 TRACKS USED
DATE NAME MODE NO. TRACKS
6/09/64 PRIME MAD P 1
6/05/64 FILTER SAVED P 19
R .616+.616
```
**Fig. 2e Print Out of Demonstration Session - Part e**
command delete, with an argument consisting of an asterisk followed by a space followed by a name, causes all items with the stated name as second name to be deleted. The result of issuing the two delete commands in Fig. 2(e) was the elimination from the file of all
items except PRIME MAD and FILTER SAVED, as indicated in the following printout of the file directory.
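The matching rule just described can be sketched as a small procedure (a Python illustration; the function name, and the generalization that lets '*' also stand for the first name, are my assumptions rather than documented system behavior):

```python
def select_for_deletion(directory, first, second):
    """Return the (first-name, second-name) entries matched by a
    delete argument pair; '*' matches any name in its position."""
    return [entry for entry in directory
            if first in ("*", entry[0]) and second in ("*", entry[1])]

files = [("PRIME", "MAD"), ("GSUBA", "BSS"),
         ("SCANA", "BSS"), ("FILTER", "SAVED")]
# 'delete * bss' selects every item whose second name is BSS:
matched = select_for_deletion(files, "*", "BSS")
```

With the hypothetical directory above, `matched` contains only the two BSS entries, as in the second delete command of Fig. 2(e).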
A program for monitoring the system operation, by the name of MONO4 SAVED, is available in the public file accessible to all system users. This program must first be copied into one's own private file by means of the copy p command, as shown in Fig. 2(f).
```
copy p mon04 saved
W 1125.9
#R 2.616+.400
resume mon04 1
W 1126.1
CTSS UP AT 902.8 06/09/64.
NUSERS= 22 TIME= 1126.2
29.7 29.7 BACKGROUND,
46.6 46.6 FOREGROUND,
17.6 17.6 SWAP TIME,
6.2 6.2 LOAD TIME,
15.5 15.5 USER WAIT,
6.6 6.6 SWAP WAIT,
3.9 3.9 LOAD WAIT,
NUSERS= 23 TIME= 1127.2
19.8 29.6 BACKGROUND,
43.6 46.5 FOREGROUND,
27.3 17.7 SWAP TIME,
9.3 6.2 LOAD TIME,
25.5 15.6 USER WAIT,
10.3 6.7 SWAP WAIT,
5.5 3.9 LOAD WAIT,
QUIT,
R .000+2.616
delete mon04 saved
W 1127.9
R 1.000+.400
logout
W 1138.9
T0193 2859 LOGGED OUT 06/09/64 1139.1
TOTAL TIME USED= 01.3 MIN.
Fig. 2f Print Out of Demonstration Session - Part f
```
Since this program is stored as a record of a previous machine state, it is started by issuing the resume command. A second argument in the command (the digit 1) causes the program to be run at 1-minute intervals. At time 1126.2 there were 22 active users; 1 minute later there were 23. The other figures are percentages of the computer time devoted to various operations. FOREGROUND refers to the computer time devoted to running programs requested by on-line users. BACKGROUND refers to computer time available for running normal batch processing, which takes place automatically when no service is requested by on-line users. SWAP TIME is the processor time wasted in moving a program from core memory to drums and vice versa. LOAD TIME is the processor time devoted to loading programs from the disc files into core memory. The first four figures in each table add to 100 percent (or approximately so). The last three figures refer to the times in which the processor is totally idle while input-output operations are taking place. The USER WAIT is part of the foreground, and it is already included in the FOREGROUND figure. Similarly, SWAP WAIT and LOAD WAIT are already included in SWAP TIME and LOAD TIME. The figures in the second column of each table are percentages evaluated from the time at which the system was last placed in operation; for example, 902.8 in Fig. 2(f). The figures in the first column are percentages over the last 1 minute interval; in the first table they are identical to those in the second column.
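The consistency rules stated above can be checked directly against the figures in Fig. 2(f). A short Python sketch (mine, for illustration only) encodes the 1-minute column of the second table and verifies that the first four percentages account for essentially all processor time, while each WAIT figure is a subset of its parent category:

```python
# One-minute column of the second MON04 table in Fig. 2(f).
sample = {
    "BACKGROUND": 19.8, "FOREGROUND": 43.6, "SWAP TIME": 27.3,
    "LOAD TIME": 9.3, "USER WAIT": 25.5, "SWAP WAIT": 10.3,
    "LOAD WAIT": 5.5,
}

# "The first four figures in each table add to 100 percent
# (or approximately so)."
total = sum(sample[k] for k in
            ("BACKGROUND", "FOREGROUND", "SWAP TIME", "LOAD TIME"))
assert abs(total - 100.0) < 1.0

# The WAIT figures are already included in their parent categories.
assert sample["USER WAIT"] <= sample["FOREGROUND"]
assert sample["SWAP WAIT"] <= sample["SWAP TIME"]
assert sample["LOAD WAIT"] <= sample["LOAD TIME"]
```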
After the monitor program was deleted from the file, the session was terminated by issuing the logout command. The session lasted approximately 1 hour, and was interrupted by two telephone calls lasting for a total of approximately 15 minutes. The total computer time used—swap and load time included—was 1.3 minutes.
The record of the demonstration session illustrates a few of the most basic commands of the MAC System. The total number of commands available to all users is, at present, 68. This number is continually increasing as new facilities, developed by users as well as by the system programmers, are added to the system. In addition, a variety of programs of lesser general interest are available in the public file, from which they can be copied into private
files. This same public file is also used to facilitate the transfer of programs and data between system users. Among the special facilities available for operation of the display system are provisions for displaying text, for drawing on the screen, and for rotating three-dimensional objects. The public part of the system consists at present of approximately 1/2 million words of code. The users' private files barely fit into the two disc storage units whose total capacity is 18 million words.
OPERATING EXPERIENCE
The MAC System has been operating in roughly its present form since the middle of November, 1963. It is now in operation 24 hours a day, 7 days a week. Maintenance, disc dumping and loading, and occasional non-time-sharing operation require approximately 4 hours per day. The on-line use of the system has increased steadily since November; the total number of computer hours charged to on-line users (the sum of the numbers printed out by the system on completion of each command) was 311 in April and 297 in May. In other words, the computer time devoted to serving on-line users amounted to approximately 42 percent of total clock hours. The background use is not included in these figures. The total number of user-hours between login's and logout's turns out to be approximately 17 times the number of computer hours used.
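The 42 percent figure can be checked with a little arithmetic. The sketch below combines the two months' clock time (30 + 31 days of round-the-clock operation); treating the two months jointly is our assumption, since the text does not say how the percentage was computed:

```python
# Clock hours in April (30 days) and May (31 days), 24-hour operation
april_clock, may_clock = 30 * 24, 31 * 24
online_hours = 311 + 297              # computer hours charged to on-line users
utilization = online_hours / (april_clock + may_clock)
user_hours = 17 * online_hours        # approximate user-hours between login and logout
print(f"{utilization:.0%}")           # prints "42%"
```

The same two-month total also implies roughly 10,000 user-hours of terminal time over April and May.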
The system is usually fully loaded (24 on-line users) during the day, and almost fully loaded in the evening until midnight and sometimes later. The system is very seldom idle even in the early morning hours.
Facilities for detailed monitoring of system operation have become available only very recently, and therefore no dependable data can be presented at this time. It must be stressed in this regard that it is far from obvious what measurements would provide a useful characterization of the system performance, in view of the many variables involved, and of the complexity of their interactions. Furthermore, the frequency and character of user requests vary with time, they are highly unpredictable, and of course cannot and should not be restricted or controlled in any realistic measurement of system performance.
The performance figure of greatest interest to the user is the response time (the time interval between the issue of a request and the completion on the part of the computer of the requested task) as a function of the bare running time of the corresponding program. The response time depends on the scheduling algorithm employed by the system, as well as on the number and character of the requests issued by other users.
The scheduling algorithm operates roughly as follows: Each user request is assigned an initial priority which depends only on the size of the program that must be run. The highest priority is assigned to the smallest programs. The highest priority programs are allowed to run for a maximum of 4 seconds before being interrupted, whereas lower priority programs are allowed to run for longer intervals which are multiples of 4 seconds. The lower the priority, the longer is the allowed interval. If a program run is not completed within the allowed interval, the program is transferred from core memory to drum (the state of the machine being automatically preserved), and its priority is reduced by 1 unit.
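The discipline just described can be sketched in code. The level count and the linear mapping from program size to initial priority below are illustrative assumptions; the text specifies only that smaller programs start at higher priority, that the highest priority receives a 4-second quantum with lower priorities receiving longer multiples of it, and that an unfinished program drops one priority unit when preempted:

```python
QUANTUM = 4.0  # seconds allowed to the highest-priority programs

def initial_level(program_words, max_words=32000, levels=6):
    # Smaller programs get level 0 (highest priority); the linear
    # size-to-level mapping is an assumption for illustration.
    return min(levels - 1, program_words * levels // (max_words + 1))

def allowed_interval(level):
    # Lower priority (larger level number) => a longer multiple of 4 s.
    return QUANTUM * (level + 1)

def run_request(program_words, run_time_needed):
    """Simulate one user request; returns the sequence of (level, burst)."""
    level, remaining, bursts = initial_level(program_words), run_time_needed, []
    while remaining > 0:
        burst = min(remaining, allowed_interval(level))
        bursts.append((level, burst))
        remaining -= burst
        if remaining > 0:
            level += 1  # unfinished: swapped out, priority reduced by 1 unit
    return bursts
```

A 50-word program needing 3 seconds completes within its first 4-second quantum, while a long-running job drifts down the priority levels, receiving progressively longer but less frequent intervals.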
The curves of Fig. 3 illustrate the behavior of the wait time and swap time as a function of program run time for programs 50 words long and 25,000 words long. The swap time is defined for this purpose as the time spent in transferring the program between disc file and drum on the one hand, and core memory on the other; wait time is the time during which the computer is performing tasks that are not pertinent to that program. Obviously, the total response time is the sum of the bare run time (the abscissa in Fig. 3), the swap time, and the wait time. The points in Fig. 3 were obtained from measurements performed by M. Jones, and the curves are rough interpolations between the points. Each point represents the average value of 30 consecutive measurements. The number of users during the time that the measurements were being made varied from a low of 13 to a high of 21; it was between 17 and 20 most of the time. The scattering of the points in Fig. 3, together with the variability of the number of users while the measurements were being performed, should make clear the kind of difficulties one faces in obtaining precise and accurate measurements of system performance.
Fig. 3 Behavior of Wait Time and Swap Time as Functions of Run Time
It is clear from Fig. 3 that the swap time constitutes a large overhead for short runs. The swap time is defined in this figure to include also the initial transfer of the program from disc file to core memory and the final transfer back into the disc file; the two-way transfer of 32,000 words between disc file and core memory takes approximately 4 seconds, and the same transfer between drum and core memory takes approximately 2 seconds. The wait time increases less than linearly with run time, and, as expected, is not greatly affected by the size of the program.
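The decomposition of response time used in Fig. 3 can be stated as a one-line identity. The numbers below merely illustrate it with the quoted two-way disc transfer figure; the wait-time value is an invented placeholder, not a measurement:

```python
def response_time(run_time, swap_time, wait_time):
    # Total response time = bare run time + swap time + wait time,
    # as defined in the discussion of Fig. 3 (all in seconds).
    return run_time + swap_time + wait_time

# A short 1-second run of a 32,000-word program: the ~4 s two-way
# disc<->core swap alone quadruples the bare run time. The 2.5 s
# wait time here is an illustrative assumption.
total = response_time(run_time=1.0, swap_time=4.0, wait_time=2.5)
```

The example makes the overhead point concrete: for short runs, most of what the user perceives as response time is swapping and waiting, not computation.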
The MAC computer installation has experienced a normal number of hardware failures. It should be realized, however, that hardware failures are much harder to diagnose in a multiple-access system, because of the impossibility of reproducing the machine conditions at the time of failure. Moreover, the probability distribution of machine states in a multiple-access operation is quite different from that in a batch-processing operation; therefore hardware troubles that become apparent in the former mode of operation often go unnoticed in the latter. Above all, it is often very difficult to determine whether a particular malfunctioning is the result of a hardware failure, a system-programming error, or even an error in the logical design of the machine. There is no doubt that multiple-access systems present substantially more difficult maintenance problems than do conventional systems.
The most delicate aspect of the operation of a multiple-access system of the MAC type is the responsibility assumed by the system managers with respect to the users' programs and data that are permanently stored in the disc files. Elaborate precautions must be taken to protect the contents of the disc files against malfunctioning of the system, as well as against actions of individual users. The supervisory program restricts the access of each user to his own private file, and to public files which he cannot modify. A full copy of the contents of the disc file is made twice a day to minimize the loss in case of malfunctioning. While losses of users' programs and data have occurred, their frequency and seriousness have not discouraged users from entrusting their work to the system.
The system users number a little more than 200, with 10 academic departments represented among them. Although most users have had previous experience in programming, there is a growing group of users who are working entirely with programs developed by others, or through high-level problem-oriented languages that enable them to communicate with the system in an English-like language appropriate to their professional field.\textsuperscript{8-11}
Enthusiasm mixed with a great deal of frustration is the most common reaction to the system on the part of its users. The system was very quickly accepted as a daily working tool, particularly by computer specialists. This quick acceptance, however, was accompanied by the kind of impatience with failures and shortcomings that is characteristic of customers of a public utility. The capacity of the system is limited, and therefore users are often unable to log in because the system is already fully loaded. Furthermore, the system may not be in operation because of equipment or programming failures, just when one is planning to use it. In other words, the system is far from being as reliable and dependable as a utility should be. Yet, the experience since last November has shown that it is perfectly feasible for a computer system to be the object of research and development for some people and, simultaneously, an effective working tool for others. System experimentation and use are not only compatible but mutually beneficial.
It is becoming increasingly evident that the system's ability to provide the equivalent accessibility of a private computer is only a secondary, although necessary, characteristic. What users find most helpful is the fact that the system places literally at their fingertips a great variety of services for writing, debugging and compiling programs, and facilities for working on problems in their own fields through appropriate problem-oriented languages.
The system users themselves are beginning to contribute to the system in a substantial way by "publishing" their work in the form of new commands. As a matter of fact, an editorial board is being established to review such work and formally approve its inclusion in the system. Thus, the system is beginning to become the repository of the procedural and data knowledge of the community that it serves. A corollary of this trend is, of course, the increasing difficulty that users find in ascertaining what facilities of interest to them are included in the system. In other words, the system is beginning to have the undesirable as well as the desirable characteristics of a library.
SYSTEM TRENDS
The organization of the MAC System appears to be at the threshold between two basically different points of view on computer systems and their utilization. The traditional view is that of a processor serving one user at a time and executing programs in succession, with a negligible amount of interaction during execution with the user himself or any other part of the outside world. A corollary of this view is that the processor, the memory, and the peripheral equipment must be designed to fit the requirements of the "typical user" rather than the average requirements of users as a group. Thus, the system as a whole can be used efficiently only by specifically tailoring programs to its peculiarities.
The present MAC System is still organized in a traditional manner in the sense that programs, whether public or private, are executed in succession as independent and indivisible entities. The fact that one program may be interrupted by a higher priority program is irrelevant for the purposes of the present discussion. For instance, the execution of a system command generates a copy of the corresponding program (stored in the disc file), which is then run just as if it were a user program. Thus, if several users are simultaneously compiling programs written in the same language, several copies of the same compiler are simultaneously and independently shuttling back and forth between core memory and drum. Another consequence is that any interactive program must be present in main memory in its entirety when data or instructions are needed from the user, even though the presence of one or two of its subroutines may be sufficient to accomplish the interaction.
A further and very serious aspect of the present mode of operation is that only one complete, executable user program can reside in core memory at any one time. The implication is that the central processor must remain totally idle while a new program is being transferred into core memory or while necessary input-output operations are taking place. The idle time is very substantial in the present MAC System, as indicated by the WAIT percentages in Fig. 2(f).
These system inadequacies are clearly the result of an organization keyed to the traditional point of view of a central processor executing independent programs one at a time. Furthermore, the characteristics of the present equipment would preclude in practice, if not in principle, any substantially different system organization. In order to overcome these limitations, one must approach the system design problem from a point of view considerably different from the traditional one.\textsuperscript{12,13} We observe in this regard that even the term "time-sharing" is inappropriate and somewhat misleading, because it emphasizes the necessary but secondary goal of providing the equivalent of a private computer. The term "multiple access" is also misleading when it is applied to the central processor. The emphasis should instead be on the system's ability to provide convenient and flexible multiple access to an ever-changing structure of procedures and data capable of interacting as parts of distinct processes. In other words, it is the software structure that is important, and the hardware assumes the secondary role of providing convenient means for access to it.
If the system goal is to provide convenience of access to such a software structure, one is naturally led to view the system itself as memory centered rather than processor centered. Furthermore, the memory that forms the heart of the system would be not the main core memory, but the bulk memory, consisting of disc files or similar devices in which the procedures and data are normally stored. The main memory would play, instead, the role of a giant buffer matching the relatively low transfer rate of the bulk memory to the fast processing rate of processors and input-output channels. When the system is looked at from this point of view, it assumes the appearance more of a message-store-and-forward communication system, than of a traditional computer system. Indeed, the smooth flow of messages is of paramount importance to efficient operation; thus, the major function of the supervisory program is to organize the transfer of messages (procedures, data, as well as input-output messages) in such a way as to avoid unnecessarily long queues, and to insure efficient utilization of the available equipment.
It is also clear that a computer system intended to serve a large number of people simultaneously must be organized so that it will avoid any unnecessary duplication of information in either main or bulk memory, and unnecessary transfers between the two. This statement implies that procedures and data must be executable as common parts of processes that have been simultaneously and independently initiated by different users. It also implies the possibility of executing processes involving several subroutines in such a manner that only the necessary subroutines are in core memory at any given time. The program segmentation scheme advocated by J.B. Dennis is keyed to these objectives.\textsuperscript{13}
A corollary to this view of a computer system is the functional subdivision of the hardware into pools of equipment serving the same function, with each piece of equipment being duplicated to meet the average system demand. The point here is that, if the objective of the system is to provide convenient access to the procedures and data stored in the bulk memory, enough equipment must be provided to perform the necessary functions, so that a bottleneck in one part of the system will not prevent the full utilization of other parts. The equipment itself can then be designed to match the average demand of users as a group, rather than the requirements of the "typical users". Furthermore, if each piece of equipment is duplicated within the system, one can envision the achievement through on-line maintenance of a level of reliability and continuity of operation that is unthinkable for the present MAC System.
In conclusion, the experience with the present MAC System suggests a trend toward memory-centered, as opposed to processor-centered, systems, including pools of bulk memories, core memories, central processors, and input-output channels, all communicating with one another, with the core memories acting as buffers. On the software side, the trend seems to be in the direction of executing processes that consist of many subroutines and data structures which are never assembled into a single program, and some of which may be common to other independent processes simultaneously in execution. This view of computer systems is indeed very different from the traditional one. Its implications are far from clear. Their exploration is a major objective in the development of the next MAC System.
ACKNOWLEDGEMENTS
This paper reports on the accomplishments and ideas of a great many people associated with Project MAC, far too many to list them individually. However, special credit is due to Professor F.J. Corbato, Deputy Director of the M.I.T. Computation Center, and to his staff who developed the Compatible Time-Sharing System, and who are mainly responsible for its evolution into the present MAC System. It is a pleasure to be their spokesman, a pleasure mixed with fear of having failed to do full justice to their work.
The successful operation of a computer utility during its first year of existence is an accomplishment of which all those concerned with the system management and maintenance can be rightly proud.
Their devotion to the goal of continuous operation has often extended far above and beyond the call of duty.
The financial support of the Advanced Research Projects Agency of the Department of Defense, and the managerial support of the Office of Naval Research are gratefully acknowledged. Their confidence and interest in Project MAC are a constant source of encouragement. In particular, I wish to express my personal gratitude to Dr. J.C.R. Licklider whose technical vision and contagious enthusiasm as Assistant Director of ARPA were responsible for the initiation of Project MAC.
Finally, I wish to express my gratitude to the I.B.M. Corporation, and in particular to Mr. Loren Bullock, its long-time representative at M.I.T., for their helpful cooperation in planning and implementing equipment modifications to meet the special needs of the MAC System. I am similarly grateful to the New England Telephone and Telegraph Company for its help and cooperation in engineering the Teletype network in the MAC System.
REFERENCES
1. J. McCarthy, "Time-Sharing Computer Systems," Management and the Computer of the Future, (Edited by M. Greenberger), 221-235, The M.I.T. Press, Cambridge, Mass. (1962).
2. The Compatible Time-Sharing System, A Programmer's Guide, by the M.I.T. Computation Center, the M.I.T. Press, Cambridge, Mass. (1963).
3. F.J. Corbato, M.M. Daggett and R.C. Daley, "An Experimental Time-Sharing System," AFIPS Conference Proceedings, Vol. 21, 335-344, (1962).
4. S. Boilen, E. Fredkin, J.C.R. Licklider, and J. McCarthy, "A Time-Sharing Debugging System for a Small Computer," AFIPS Conference Proceedings, Vol. 23, 51-57, (1963).
5. J. Schwartz, "A General-Purpose Time-Sharing System," AFIPS Conference Proceedings, Vol. 25, 397-411, (1964).
6. J.B. Dennis, "A Multiuser Computation Facility for Education and Research," Communications of the ACM, Vol. 7, 521-529, Sept. 1964.
7. R. Stotz, "Man-Machine Console Facilities for Computer-Aided Design," AFIPS Conference Proceedings, Vol. 23, 323-328, (1963).
8. D.T. Ross and C.G. Feldman, "Verbal and Graphical Language for the AED System: A Progress Report," Technical Report MAC-TR-4, M.I.T., 1964.
9. J.M. Biggs and R.D. Logcher, "Stress: A Problem-Oriented Language for Structural Engineering," Technical Report MAC-TR-6, M.I.T., 1964.
10. J. Weizenbaum, "OPL-1: An Open-Ended Programming System Within CTSS," Technical Report MAC-TR-7, M.I.T., 1964.
11. M. Greenberger, "The OPS-I Manual," Technical Report MAC-TR-8, M.I.T., 1964.
12. F.J. Corbato, "System Requirements for Time-Shared Computers," Technical Report MAC-TR-3, M.I.T., 1964.
13. J.B. Dennis, "Program Structure in a Multi-Access Computer," Technical Report MAC-TR-11, M.I.T., 1964. |
NANOSTRUCTURED ENERGETIC MATERIALS
R.V. Shende, S. Subramanian, S. Hasan, S. Apperson, K. Gangopadhyay, and S. Gangopadhyay*
Department of Electrical and Computer Engineering
University of Missouri-Columbia,
Columbia, MO 65211
P. Redner, D. Kapoor, and S. Nicolich
US Army ARDEC
Picatinny, NJ 07806
ABSTRACT
This paper reports the synthesis of metastable intermolecular composites (MICs) containing CuO nanorods and nanowires combined with aluminum nanoparticles. These composites were prepared using ultrasonic mixing and a self-assembly approach. A combustion wave speed as high as $2300 \pm 100$ m/s was achieved for the MIC composites. We also report that the combustion wave speed can be tuned from 1 m/s to 2300 m/s for nanoenergetic composites prepared using mesoporous Fe$_2$O$_3$ gel and nanoparticles of WO$_3$, MoO$_3$, Bi$_2$O$_3$, and CuO mixed with Al-nanoparticles, with the addition of other chemicals at the nanoscale. The tunable combustion speed is found to depend not only on the type of oxidizer but also on the nanostructural arrangement present in the energetic composites.
1. INTRODUCTION
Nanotechnology plays a significant role in the development of novel energetic materials. The goal of reducing the size of an energetic system while maintaining performance has become a reality with the introduction of nanosized fuels and oxidizers. Merely mixing these components creates a random hot-spot density distribution and thus limits the energy transfer rates. Homogeneous mixing or organization of fuel and oxidizer nanoparticles, however, should enhance the interfacial contact area and accelerate the combustion wave front. Organization of nanoparticles is achieved using self-assembly approaches (Subramanian et al., 2005; Kim et al., 2004). When a spherical nanoparticle morphology is selected, self-organization may accommodate only a few smaller nanoparticles on each larger one, whereas a cylindrical (rod-like) morphology allows a relatively larger number of nanoparticles to be assembled. This organization establishes a higher contact area between fuel and oxidizer components, which effectively improves the combustion wave characteristics. Such tunable nanoenergetic materials will be useful in applications such as high-temperature non-detonable gas generators, adaptable flares, green primers for propellants and explosives, and high-power/energy explosives. Overall, nanoenergetic materials together with MEMS (Microelectromechanical Systems) technology should provide an improved level of performance with a reduction in the size of current warheads and weapon systems.
The synthesis of oxidizers with rod-like geometry (nanorods) has been reported using solid templates like mesoporous silica (Martin, 1994), polymeric systems (Bhattacharya et al., 2000), arc discharge methods (Zhou et al., 1999), and laser ablation (Morales and Lieber, 1998). They have also been synthesized by an inorganic condensation method following a sequential route of olation and oxolation reactions in an aqueous solution (Jean-Pierre, 2000). Among these methods, the wet chemical approach of inorganic condensation is attractive for nanorod synthesis because it offers better control over the size and aspect ratio of the nanorods (Wang et al., 2003).
Low aspect ratio nanorods can be made into high aspect ratio nanowires using various processes. Methods such as surfactant templating (Wang et al., 2002, 2000), hydrothermal synthesis (Yang et al., 2006), and membrane templating (Martin, 1994) are available for the synthesis of nanowires and nanorods. When higher-surface-area oxidizer nanowires are used, the higher interfacial contact area between fuel and oxidizer should enhance the rate and extent of energy release.
Nanostructured energetics can also be prepared by combining a mesoporous oxidizer with fuel nanoparticles. Mesoporous materials, which have pores in the range of 20-500 Å in diameter, provide larger surface area. They can be easily prepared using the sol-gel approach. To achieve an ordered arrangement of pores and a uniform pore size distribution, the surfactant templating method is very effective (Mehendale et al., 2006). By ordering the mesopores in an energetic composite, the hot spot density of
the self-propagating combustion wave front can be controlled, which will improve the performance of the nanoenergetic composite.
The extent of energy release also depends on the oxidizer material used in the energetic composite. For example, in the energetic reactions of composites containing Fe$_2$O$_3$, WO$_3$, MoO$_3$, Bi$_2$O$_3$, and CuO combined with Al, the theoretical energy release varies greatly (Fisher and Grubelich, 1998). Ideally, when these oxidizers react with fuel, the energy release should correspond to the theoretical value. In reality, however, the size of the oxidizer and fuel components and their structural arrangement provide resistance to the mass and heat transfer processes, which primarily govern the combustion characteristics of the MIC materials.
Superior combustion wave speeds can be achieved if fuel and oxidizer are placed in the closest possible proximity. Nanostructural arrangement is possible using a self-assembly approach in which fuel and oxidizer are placed next to each other using a molecular linker. Self-assembly is achieved via electrostatic interaction mechanisms (Kim and Zachariah, 2004), charge transfer processes (Shimazaki et al., 1997), and polymer-binding methods (Malynych et al., 2002). If a polymer monolayer is used to bind fuel and oxidizer nanoparticles, the combustion characteristics of the energetic composite should improve. At higher concentrations, similar to those of typical binders, however, polymers act as a heat sink and reduce the hot-spot density of a self-propagating combustion wave front. Therefore, the self-assembled arrangement of fuel and oxidizer plays a significant role in achieving the desired combustion characteristics of MIC materials.
In this paper, oxidizer nanorods and nanowires were prepared using a surfactant templating approach and later combined with Al-nanoparticles using ultrasonic mixing and a self-assembly process to prepare MIC materials. A mesoporous ordered Fe$_2$O$_3$ gel was synthesized and combined with Al-nanoparticles. In addition, composites of several oxidizers mixed with Al-nanoparticles were also prepared and evaluated. We show that tunable combustion wave speed and pressure wave velocity can be achieved by varying the nanostructural arrangements and by the addition of chemicals at the nanoscale.
2. EXPERIMENTAL
2.1 Materials
Nanoparticles of CuO (8-10 nm) (Alfa Aesar, MA), WO$_3$ (Aldrich, WI), MoO$_3$ and Bi$_2$O$_3$ (Accumet Materials, NY), and Al (avg. size 80 nm with a 2 nm passivation layer, from Nanotechnologies, Inc., TX) were obtained and used to prepare energetic composites. The precursor CuCl$_2$ for nanorod and nanowire synthesis was obtained from Fisher Scientific and used without purification. Poly(4-vinyl pyridine) (P4VP) for self-assembly, polyethylene glycol octadecyl ether (Brij 76), and propylene oxide were obtained from Sigma Aldrich, WI.
2.2 Composite of oxidizer and fuel nanoparticles using ultrasonic mixing
Accurately weighed amounts of CuO/WO$_3$/MoO$_3$/Bi$_2$O$_3$ (0.2 g) and Al-nanoparticles were mixed at an equivalence ratio of 1.6 and placed in 2-propanol in a sealed bottle. The mixture was sonicated in an ultrasonic bath (Fisher 8835) for 6-8 hrs. The slurry was then dried at 95°C for 15 min to obtain a powder.
2.3 Synthesis of CuO nanorods/nanowires
In this method, CuCl$_2$.2H$_2$O, NaOH, and PEG-400 were mixed in a weight ratio of 1:0.6:1.2 and ground together with a pestle and mortar. Mixing and grinding were continued for 30 min, which resulted in a black slurry. This was sonicated in 500 ml of de-ionized water for 3 hrs and then centrifuged at 4000 rpm for 10 min to obtain a precipitate of CuO (Wang et al., 2003). The precipitate was dried at 100°C, pulverized, and calcined at 450°C for 4 hrs.
To synthesize CuO nanowires, about 1 g of CuCl$_2$.2H$_2$O was dissolved in 8 ml of PEG-400 (polyethylene glycol) in deionized water. A solution of 0.6 g of NaOH in 8 ml of PEG-400 was prepared and mixed with the CuCl$_2$ solution. Nanowire growth started with the addition of an excess amount of ethanol to the final solution. The precipitate was then washed thoroughly with ethanol and calcined at 400°C for 6 hrs.
2.4 Synthesis of mesoporous Fe$_2$O$_3$ gel
A 17% solution of Brij 76 was prepared in ethanol, heated to 60°C, and maintained for 15 min under constant stirring. One gram of Fe(NO$_3$)$_3$.9H$_2$O was dissolved in 5.5 ml of ethanol, and this solution was slowly added to the Brij 76 solution under gentle stirring. The resultant mixture was then placed in a sonication bath for another 10 minutes. After sonication, 5.2 ml of propylene oxide was added under gentle stirring. The gel time was around 1-2 minutes (Mehendale et al., 2006).
2.5 Self-assembled composite
In the first step, 0.5 g of CuO nanorods was sonicated for 4 hrs in 500 ml of 2-propanol containing 0.1% (w/v) P4VP polymer. After sonication, the solution was centrifuged at 4000 rpm for 10 min to separate the nanorods from the solution. These P4VP-coated nanorods were washed with 2-propanol to remove excess polymer, and the solution was centrifuged to recover the nanorods. Finally, the polymer-coated CuO nanorods were dried at 120°C for 1.5 hrs to remove the solvent and to establish bonding with the oxidizer surface. In the second step, 0.4 g of the P4VP-coated nanorods was mixed with 0.17 g of Al-nanoparticles in 1.5 ml of 2-propanol and the mixture was dispersed for several hrs in a sonic bath. Finally, the particles were separated by repetitive centrifugation and washing, and dried at 95°C for 10 min.
2.6 Combustion wave velocity measurement
The combustion wave speed of the composites was measured using an on-chip diagnostic technique (Bhattacharya et al., 2006; Apperson et al., 2006) and by the optical method (Plantier et al., 2005). The on-chip method is based on the time-varying resistance (TVR) of a sputter-coated thin platinum film, whose resistance changes as the energetic reaction propagates over it. Knowing the voltage differential over a time period and the length of the TVR film, the combustion speed was determined. For the optical method, a Lexan tube of 0.8 cm$^3$ volume was filled with 200 mg of nanoenergetic powder and inserted into an aluminum block. The block was fitted with holders for the optical fibers. A Tektronix TDS460A 4-channel digital oscilloscope was connected to a set of spatially spaced ThorLabs photodiodes and optical fibers. The energetic reaction was triggered with a spark igniter at one end of the tube, and the oscilloscope recorded the output voltage signal in time for the sequentially placed photodetectors. The combustion wave speed of the energetic material was then calculated from the differential between the signal rise times of the individual photodetectors.
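The optical-method calculation reduces to dividing detector spacing by the difference in signal rise times; fitting a least-squares slope over all detectors is one reasonable way to do it. The detector spacing and timing values in the sketch below are illustrative, not taken from the paper:

```python
def combustion_wave_speed(positions_m, rise_times_s):
    """Estimate wave speed (m/s) as the least-squares slope of detector
    position versus signal rise time."""
    n = len(positions_m)
    mean_t = sum(rise_times_s) / n
    mean_x = sum(positions_m) / n
    num = sum((t - mean_t) * (x - mean_x)
              for t, x in zip(rise_times_s, positions_m))
    den = sum((t - mean_t) ** 2 for t in rise_times_s)
    return num / den  # slope dx/dt

# Three detectors 1 cm apart; signals arriving 6.25 microseconds apart
# correspond to a 1600 m/s combustion wave.
speed = combustion_wave_speed([0.00, 0.01, 0.02], [0.0, 6.25e-6, 12.5e-6])
```

Using all detectors in one fit, rather than a single pair, averages out jitter in the individual rise-time readings.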
3. RESULTS AND DISCUSSION
Nanorods of CuO were synthesized by reacting CuCl$_2$ with NaOH in the presence of PEG micelles. This reaction is moderately exothermic and produces Cu(OH)$_2$, which on heating dehydrates to CuO. Adsorption of PEG on the surface of colloids in solution reduces the rate of growth at the adsorbed surfaces. When the entire surface adsorbs the surfactant, growth of the colloid into a macrostructure is restricted, and more controlled, directed growth in a specific crystallographic orientation occurs. A TEM image of the calcined CuO is shown in Fig. 1, which shows the rod-like geometry.
The combustion wave speed as a function of equivalence ratio ($\Phi = \frac{(F/O)_{actual}}{(F/O)_{stoichiometric}}$, where F is fuel and O is oxidizer) is shown in Fig. 2. The optimum speed corresponds to a slightly fuel-rich composite with $\Phi$ ranging from 1.2 to 1.8. At $\Phi=1.4$, the combustion wave speed is 1300 m/s; it increases to about 1650 m/s at $\Phi=1.6$ and drops to 900 m/s at $\Phi=1.8$. Thus, an equivalence ratio of 1.6 is considered optimum for a composite of CuO nanorods and Al nanoparticles. Overall, the combustion flame velocity is a strong function of the equivalence ratio.
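The equivalence ratio can be made concrete for the CuO/Al thermite (2Al + 3CuO → Al₂O₃ + 3Cu): the stoichiometric fuel-to-oxidizer mass ratio follows from the molar masses, and Φ normalizes the actual mass ratio by it. A sketch using the 0.17 g Al per 0.4 g CuO charge from the experimental section (neglecting the P4VP coating mass):

```python
M_AL, M_CUO = 26.98, 79.55  # molar masses, g/mol

def equivalence_ratio(m_fuel_g: float, m_oxidizer_g: float) -> float:
    """Phi = (F/O)_actual / (F/O)_stoichiometric for 2 Al + 3 CuO."""
    stoich = (2 * M_AL) / (3 * M_CUO)   # ~0.226 g Al per g CuO
    return (m_fuel_g / m_oxidizer_g) / stoich

# 0.17 g Al mixed with 0.4 g CuO nanorods (coating mass ignored):
print(round(equivalence_ratio(0.17, 0.4), 2))  # ~1.88, i.e. fuel-rich
```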
A lower-molecular-weight, non-ionic PEG surfactant was chosen to prepare copper oxide nanowires, as its flexible structure self-assembles into chain-like structures in water. Copper oxide nanowires with average lengths of 100–1000 nm (Fig. 3) were synthesized using the wet-chemistry approach elaborated in the experimental section. These nanowires were mixed with Al nanoparticles at an equivalence ratio of 1.6, which produced a combustion wave speed of 1900 m/s, about 20% higher than that of the MIC composite containing CuO nanorods and Al nanoparticles. The enhancement in combustion wave-front speed is due to the increased surface area of the nanowire morphology compared with the nanorods.
Fig. 3 TEM image of CuO nanowires prepared using PEG surfactant.
The combustion wave speeds of various nanoenergetic composites are shown in Fig. 4. We observed a burn rate of 550–780 m/s for a conventional mixture of CuO (Alfa Aesar) and Al nanoparticles (Composite 6), which increased to an average of 1650 m/s for the mixture of CuO nanorods and Al nanoparticles (Composite 7). The combustion wave speed can be increased further if fuel and oxidizer are placed in the closest possible proximity. A self-assembly process was therefore employed in which Al nanoparticles were assembled on CuO nanorods using poly(4-vinylpyridine) (P4VP) (Gangopadhyay et al., 2005). The mechanism of self-assembly is attributed to the nitrogen of the pyridyl group in the polymer, which has a lone pair of electrons available to form a covalent bond with metals and to interact with metal oxides. The combustion wave speed of the self-assembled composite showed the best value, about 2300 ± 100 m/s, significantly higher than the physical mixtures. Such supersonic self-propagating combustion waves generate shock waves with Mach numbers higher than 2. Thus, combustion characteristics can be readily improved by creating a nanostructured organization of fuel and oxidizer.
Fig. 4 Tunable combustion wave speeds of nanoenergetic materials (1) porous Fe$_2$O$_3$, (2) WO$_3$, (3) MoO$_3$, (4) Bi$_2$O$_3$, (5) ordered mesoporous Fe$_2$O$_3$, (6) CuO nanoparticles, (7) CuO nanorod, (8) CuO nanowires, and (9) self-assembled; all mixed with Al-nanoparticles (80 nm)
Fig. 5 TEM image of self-assembled Al- nanoparticles around CuO nanorods.
Other than CuO, we also synthesized mesoporous Fe$_2$O$_3$ by a sol-gel route in which iron nitrate was hydrolyzed and the sol was polymerized in the presence of Brij-76 surfactant (Mehendale et al., 2006). The mesoporous Fe$_2$O$_3$ prepared with surfactant templating produced an ordered porous structure, shown in Fig. 6; in the absence of surfactant templating, no ordering of the pores was observed. These mesoporous oxidizers were combined with Al nanoparticles to prepare energetic composites. The combustion velocities, shown earlier in Fig. 4, indicate that the combustion speed is higher for the composite prepared with ordered mesopores (Composite 5), believed to result from a uniform hot-spot density distribution in the self-propagating combustion wave front.
Fig. 6 Mesoporous Fe$_2$O$_3$ gel was prepared using Brij-76 surfactant templating.
Combining polymers with nanostructured oxidizers allows MIC materials to be further modified to tune their pressure characteristics as well. Polymers also reduce the electrostatic-discharge ignition sensitivity of MIC materials.
CONCLUSION
The higher combustion wave velocity of the CuO nanowire-based MIC composite can be attributed to its higher surface area, which creates a higher hot-spot density than the nanorods. Further improvement in performance is achieved by linking the fuel and oxidizer components via a self-assembly approach. Tunable combustion wave speeds are obtained by selecting various oxidizer materials mixed (or self-assembled) with different sizes of nano-aluminum and by changing the equivalence ratio. Among the oxidizers, the Fe$_2$O$_3$ composite yields the lowest combustion wave speed, whereas CuO shows the highest. In general, the CuO composites are superior to composites of WO$_3$, MoO$_3$, and Bi$_2$O$_3$ nanoparticles. Overall, composites prepared by combining nanostructured oxidizers with Al nanoparticles show improved combustion characteristics compared with random mixing of oxidizer and fuel nanoparticles.
ACKNOWLEDGEMENTS
The authors gratefully acknowledge financial support from the US Army, ARDEC, Picatinny, NJ, and the National Science Foundation.
REFERENCES
Apperson, S., Bhattacharya, S., Gao, Y., Subramanian, S., Hasan, S., Hossain, M., Shende, R.V., Redner, P., Kapoor, D., Niccolich, S., Gangopadhyay, K., Gangopadhyay, S., 2006: On-Chip Initiation and Burn Rate Measurement of Thermite Energetic Reactions, *Proc. Mater. Res. Soc. Symp.* 0896-H03-02.
Bhattacharya, S., Saha, S. K., Chakravorty, D. 2000: Nanowire Formation in a Polymer Film, *Appl. Phys. Lett.* 76, 3896-3898.
Bhattacharya, S., Gao, Y., Apperson, S., Subramanian, S., Taltantsev, E., Shende, R.V., and Gangopadhyay, S., 2006: A Novel On-Chip Diagnostic Method to Measure Burn Rates of Energetic Materials, *J. Ener. Mater.* 24, 1-15.
Fisher, S.H. and Grubelich, M.C., 1998: Theoretical Energy Release of Thermites, Intermetallics, and Combustible Metals, *Proc. 24th International Seminar*, Monterey, CA.
Gangopadhyay, S., Shende, R., Subramanian, S., Hasan, S., Gangopadhyay, K., 2005: Synthesis of Nanoenergetic Materials, US patent application (filed Oct. 2005).
Jolivet, J.-P., 2000: *Metal Oxide Chemistry and Synthesis: From Solution to Solid State*, Wiley.
Kim, S.H. and Zachariah, M.R., 2004: Enhancing the Rate of Energy Release from NanoEnergetic Materials by Electrostatically Enhanced Assembly, *Adv. Mater.*, 16, 1821-1825.
Malynych, S.; Luzinov, I.; Chumanov, G. 2002: Poly(Vinyl Pyridine) as a Universal Surface Modifier for Immobilization of Nanoparticles, *J. Phys. Chem. B* 106, 1280-1285.
Martin, C. R., 1994: Nanomaterials: A Membrane-Based Synthetic Approach, *Science* 266, 1961-1966.
Mehendale, B., Shende, R.V., Subramanian, S., Gangopadhyay, S., 2006: Nanoenergetic Composite of Mesoporous Iron Oxide and Al-nanoparticles, *J. Ener. Mater.* (in press).
Morales, A.M., Lieber, C.M., 1998: A Laser Ablation Method for the Synthesis of Crystalline Semiconductor Nanowires, *Science*. 279, 208-211.
Prakash, A.; McCormick, A.V.; Zachariah, M.R. 2004: Aero-Sol-Gel Synthesis of Nanoporous Iron-Oxide Particles: A Potential Oxidizer for Nanoenergetic Materials, *Chem. Mater.* 16, 1466-1471.
Plantier, K.B.; Pantoya, M.L.; Gash, A.E., 2005: Combustion Wave Speeds of Nanocomposite Al/Fe$_2$O$_3$: The Effects of Fe$_2$O$_3$ Particle Synthesis Technique, *Combust. Flame* 140, 299-309.
Shimazaki Y., Mitsuishi M., Ito S., Yamamoto M. 1997: Preparation of the Layer-by-Layer Deposited Ultrathin Film Based on the Charge-Transfer Interaction, *Langmuir* 13, 1385-1387.
Subramanian, S., Hasan, S., Bhattacharyya, S., Gao, Y., Apperson, S., Hossain, M., Shende, R.V., Gangopadhyay, S., Redner, P., Kapoor, D., and Niccolich, S., 2005: Self-Assembled Nanoenergetic Composite, *Proc. Mater. Res. Soc. Symp.* 0896-H01-05.1.
Wang, W.; Liu, Z.; Liu, Y.; Xu, C.; Zheng, C.; Wang, G., 2003: A Simple Wet-Chemical Synthesis and Characterization of CuO Nanorods, *Appl. Phys. A-Mater. Sci. Proc.* 76, 417-420.
Wang, W.Z., Wang, G.H., Wang, Y.J., Zhan, Y.J., Liu, Y.K., Zheng, C.L., 2002: Synthesis and Characterization of Cu$_2$O Nanowires by a Novel Reduction Route, *Adv. Mater.* 14, 67-69.
Wang L; Cui S; Wang Z; Zhang X. 2000: Multilayer Assemblies of Copolymer PSOH and PVP on the Basis of Hydrogen Bonding, *Langmuir* 16, 10490-10494.
Yang, L., Ying, C., Meiye, L., Lili, L., Lihong, D., 2006: *In situ* synthesis and assembly of copper oxide nanocrystals on copper foil via a mild hydrothermal process, *J. Mater. Chem.* 16, 192-198.
Zhou, Y.; Yu, S.H.; Cui, X.P.; Wang, C.Y.; Chen, Z.Y., 1999: Formation of Silver Nanowires by a Novel Solid-Liquid Phase Arc Discharge Method, *Chem. Mater.* 11, 545-546.
NANOSTRUCTURED ENERGETIC MATERIALS
Shubhra Gangopadhyay
LaPierre Chair Professor
Department of Electrical and Computer Engineering
University of Missouri – Columbia
Columbia, Missouri 65211-2300 USA
Tel. – (573) 882-4070, Fax – (573) 882-0397
Electronic Mail – email@example.com
Research Team: Keshab Gangopadhyay, Rajesh Shende, S. Subramanian, S. Hasan, S. Apperson University of Missouri – Columbia
Collaborators: P. Redner, D. Kapoor, and S. Nicolich, US Army ARDEC Picatinny, NJ 07806
Supported by Picatinny, ONR and NSF
US Patent filed 2005
Copyright: UMC – 11/15/2006
OUTLINE
- MOTIVATION
- SYNTHESIS OF OXIDIZER NANOSTRUCTURES
- MATERIAL SYSTEMS – CuO, Fe₂O₃, MoO₃
- STRUCTURAL CHARACTERIZATION
- SELF ASSEMBLY OF OXIDIZER NANORODS AND FUEL NANOPARTICLES
- CONCEPT
- EXPERIMENTAL
- PERFORMANCE OF NANOENERGETIC MATERIALS
- CONCLUSION
- SCOPE FOR FUTURE WORK
MOTIVATION
Schematic: fuel/oxidizer mixing at three scales
- Micron-sized energetic particles: random & inhomogeneous
- Nanoenergetic particles
- Ordered nanoenergetic composite: ordered & homogeneous → reduced mass transport, higher energy release
Modified sol-gel
Nanorods self assembled with fuel
Existing technologies
Particle mixing (image at 60 eV)
\[ 2 \text{Al} + 3 \text{MO} \rightarrow \text{Al}_2\text{O}_3 + 3\text{M} + \Delta H \]
Sol-gel with Al nanoparticles
(AMPTIAC, 6 (1), 43 (2002))
Problems: particle coagulation, non-homogeneous distribution
Therefore, lower interfacial area for the reaction and lower energy release
Self-assembly of micelles
Surfactant (e.g., cetyltrimethylammonium salt) with a hydrophilic head and hydrophobic tail forms micelles with cross-linked or functionalized cores.
Inorganic/organic species + solution synthesis → self-organization → organized chemical system → template removal → nanoporous material
SYNTHESIS OF CuO NANOSTRUCTURES
Cylindrical Micelle formation of surfactant in aqueous solution.
CuCl₂·2H₂O + NaOH
Reaction in the presence of PEG
Removal of surfactant by repeated washing in water and ethanol yields free standing nanorods.
CuCl₂ + 2NaOH → Cu(OH)₂ + 2NaCl
Decomposition
Cu(OH)₂ → CuO + H₂O + ΔH
Schematic of the experimental steps
Depending on the molecular weight of the PEG used, the aspect ratio of the nanostructures can be tuned, leading to the formation of nanorods or nanowires. Nanowires can also be obtained by slightly changing the mixing procedure.
TEM IMAGES OF CuO NANOSTRUCTURES
Aspect Ratio of Nanorods is about 4.
Mean Aspect Ratio of nanowires is about 50.
TEM IMAGES OF Fe₂O₃ and MoO₃ NANORODS
Fe₂O₃ was prepared by the reaction of ferrous chloride with sodium nitrate, adjusting the pH with HCl and controlling the temperature.
MoO₃ nanorods were prepared using the inorganic condensation method using PEG surfactant.
Nanorod dimensions: Fe₂O₃ — D = 30 nm, L = 400 nm; MoO₃ — D = 70 nm, L = 400 nm
Comparison of the X-ray diffraction data with the ICDD database confirms the formation of the monoclinic phase of CuO. The crystal structure is monoclinic with lattice constant of...
The absorption peaks observed in our samples closely match those reported for CuO by G. Kliche and Z.V. Popovic, *Phys. Rev. B* 42, 10060 (1990).
TEM IMAGES OF POROUS OXIDIZERS
Porous iron oxide without surfactant
ORDERED MESOPOROUS iron oxide with Brij 76 templating
Porous copper oxide nanoparticles
CuO Nanowells with P123
The nitrogen group in PVP has a lone pair of electrons readily available for forming covalent bonds with oxygen in CuO or with oxygen in the Al$_2$O$_3$ (the ~2 nm passivating layer on Al nanoparticles). An optimized concentration of PVP leads to a monolayer coating on either the nanorods or the Al nanoparticles. Subsequent mixing of the oxidizer nanorods and fuel nanoparticles facilitates a self-assembled energetic composite in which the fuel nanoparticles are in close proximity to the oxidizer nanostructures.
“py” group bonded to CuO and Al
Schematic of the self-assembled structure showing “py” groups bonded to Cu and Al through oxygen (legend: C, O, H, N, Cu, Al).
EXPERIMENTAL APPROACH
Scheme 1
Nanorods + Poly (4-vinylpyridine) → Sonication and cleaning → Nanorod with a polymer monolayer
Scheme 2
Al nanoparticles + Poly (4-vinylpyridine) → Sonication and cleaning → Al nanoparticles coated with PVP
Nanorods
TEM IMAGES OF SELF ASSEMBLED ENERGETIC COMPOSITE
(A) Al nanoparticles
(B) Al nanoparticles self-assembled with CuO nanorods
(C) Al nanoparticles self-assembled with Fe₂O₃ nanorods
EXPERIMENTAL SET-UP FOR BURN RATE AND PRESSURE MEASUREMENTS
Oscilloscope
Testing chamber
Pressure signal conditioners
Optical fibers
Burn rate measurement
On-chip method
Pressure measurement
Our results indicate that burn rate is strongly dependent on equivalence ratio and it is optimum at a ratio of 1.6 for CuO-Al system.
\[ \Phi = \frac{(F/O)_{\text{actual}}}{(F/O)_{\text{stoichiometric}}} \]
TUNABLE BURN RATES
(1) Porous Fe$_2$O$_3$, (2) WO$_3$, (3) MoO$_3$, (4) Bi$_2$O$_3$, (5) ordered mesoporous Fe$_2$O$_3$, (6) CuO nanoparticles, (7) MoO$_3$ nanorods, (8) mesoporous CuO nanoparticles, (9) CuO nanorods, (10) CuO nanowires, and (11) self-assembled CuO system; all mixed with Al-nanoparticles (80 nm)
Fig. 3A) Photograph showing shell like pattern on a chip produced after ignition of MIC 3 in a polycarbonate well and it is similar to curved shock wave segments, B) 2-D pattern of shell like curved segments of shock wave with Mach stem, C) shock wave and reaction zone generated during propagation of a supersonic combustion wave into unburned gas, D) supersonic combustion flame front creating a shock wave inside a tube, and E) a pattern created after a well-dispersed slurry of MIC 3 was coated inside a tube, dried, and ignited using a spark igniter.
Shock Tube Experiments using pressure sensors and optical sensors
Model 113A03 Pressure Sensors
Data for Optical Fibers 1 & 3
Top Left: Pressure sensors shown in orange.
Top Right: Light sensor data for the first and third sensors shown in blue
Bottom Right: Schematic for reference
Bottom Left: Mach number for various nanoenergetics
TEM images of Teflon-coated Al nanoparticles at 1% and 10% Teflon loadings
Uncoated Al nanoparticles (APS: 80 nm) have ESD energy of 0.98 mJ.
Burn rate and Pressure of CuO – uncoated Al nanoparticles MIC materials are 1500 m/s and 2.2 MPa respectively.
Burn rate and Pressure of CuO – 1% Teflon coated Al nanoparticles MIC materials are 1200 m/s and 7.2 MPa respectively. Further work is in progress.
Microencapsulation of energetic nanoparticles
Encapsulation
Nanoparticles
Microencapsulated granules
Nanoparticles
Microencapsulated granules
Microencapsulated granules in a glass tube
MIC MATERIAL PRESSURE MEASUREMENTS IN MILLIMETER SCALE CELL
Experimental Setup
Pressure measurements in a Lexan millicell; 20 mg of each material.
Volume of the cell: 30 mm³.
HMT - Hexamethylenetetramine
Pressure Records for Regular CuO(nanorods)+Al(80)
- Pellet Packing Pressure 0 MPa (powder)
- Pellet Packing Pressure 70 MPa
- Pellet Packing Pressure 162 MPa
- Pellet Packing Pressure 347 MPa
- Pellet Packing Pressure 578 MPa
Pressure Records for Self Assembled
CuO(nanorods)+Al(80), Fe₂O₃(porous)+Al(80),
CuO(nanorods)+2%HMT+Al(80), MoO₃+Al(80)
Pressurization Rate vs. Packing Pressure
Pressurization rate of 20 mg samples in a small plastic milliwell
- CuO+2%HMT
- CuO Self Assembled
- CuO Regular
- MoO3
- Fe2O3
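The pressurization rate plotted in these slides is the steepest slope of the measured pressure-time record. A minimal sketch of extracting it from a digitized trace by finite differences (the trace values below are hypothetical, not measured data):

```python
def peak_pressurization_rate(t_s, p_mpa):
    """Maximum dP/dt (MPa/s) over consecutive samples of a pressure trace."""
    return max((p2 - p1) / (t2 - t1)
               for t1, t2, p1, p2 in zip(t_s, t_s[1:], p_mpa, p_mpa[1:]))

# Hypothetical millicell trace: rise to 2.2 MPa over ~10 microseconds
t = [0.0, 2e-6, 4e-6, 6e-6, 8e-6, 10e-6]   # s
p = [0.0, 0.1, 0.6, 1.6, 2.1, 2.2]         # MPa
print(peak_pressurization_rate(t, p))      # ~5e5 MPa/s
```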
Peak Pressure vs. Packing Pressure
Peak pressure of 20 mg samples in a small plastic milliwell
- CuO+2%HMT
- CuO Self Assembled
- CuO Regular
- Fe2O3
- MoO3
TEM micrographs of ammonium nitrate, ammonium perchlorate and nitrocellulose nanoparticles
CuO and iron oxide based composite for propellant application
Ammonium nitrate nanoparticles
Porous iron oxide infiltrated with Ammonium Nitrate
Burn rate= 20-200 m/s
Patterned energetics on a micro chip
Circuit for Point-Selectable Initiation
MEMS/ NEMS Applications at MU for Defense Needs
Dr. Shubhra Gangopadhyay, Dr. R. Shende, Steve Apperson, Shantanu Bhattacharya, Dr. M. Hossain, Dr. M. Almasari
Objective
Miniaturization of existing explosive systems will allow smaller packaging, providing the soldier with lightweight equipment. As new technology is developed, the soldier is being asked to carry more ‘gadgets’.
Developing Lab-On-Chip Diagnostics System for improved safety and reduced cost. Existing systems require large and expensive equipment and facilities.
Combustion on a chip
On-Chip Burn Rate For Nano-Energetics
MEMS-Based Initiators
Applications for small arms and multipoint detonation casing
Initiation Results
MEMS Based Safety & Arming (S&A) Device
S&A devices are an integral component of every Munitions System
MEMS Based Thruster
Microthruster can be used for propulsion and precise positioning of microsatellites
Portable power generator using energetics
Preliminary Findings – Thermoelectric Generation
On-Chip Shock-Wave Generator
• Capable of creating a shock-wave without detonation
• Will have controllable shock fronts
• Applicable in non-contact imaging
Preliminary findings - Propagation through Microchannels
These are the narrowest channels through which energetic materials have propagated to date.
(A) PDMS channels bonded on a platinum coated glass substrate
(B) Preignition image of a channel packed with MIC slurry
(C) Post ignition PDMS channels loaded with MIC
CONCLUSION
- Higher combustion wave velocities of CuO nanorod- and nanowire-based MIC composites can be attributed to the higher surface area, which creates a higher hot-spot density than in the bulk material.
- Our results demonstrate that tunable combustion wave speeds can be obtained by selecting various oxidizer materials mixed (or self-assembled) with different sizes of nanoaluminum and by changing the equivalence ratio.
- Among the oxidizers, the $\text{Fe}_2\text{O}_3$ composite yields the lowest combustion wave speed, whereas CuO shows the highest.
- Within the CuO-Al MIC system, the self-assembled composite, wherein the fuel Al nanoparticles lie in close proximity to the oxidizer nanostructures, exhibits the highest speed of $2300 \pm 100 \text{ m/s}$.
- CuO-based MIC materials are found to produce shock waves with Mach numbers greater than 2.
- These materials can be very effective in a number of applications such as Green Primers, Propellants, and Reactive Blast Materials.
Seattle Mental Health—Eastside
Seattle Mental Health - Eastside serves severely mentally ill persons and persons with grief and stress-related issues. They provide services at offices on Northup Way and also at an Overlake Office on 140th Avenue NE between NE 20th Street and NE 24th Street. SMH also provides services in Redmond, Snoqualmie, Renton, Auburn, and Kent, in addition to services in Seattle.
Agency information: (206) 324-2400; access to services: (206) 324-0206. These numbers will connect you with other offices.
Rebuilding Together
Rebuilding Together, formerly known as Christmas in April * Eastside, has continued to provide rehabilitation for homes of low income homeowners, using volunteer labor and donated materials. Volunteers provide services on the last Saturday in April to enable Eastside recipients to maintain their own homes. Services have included building a wheelchair ramp, obtaining a furnace, doing yard work, and house painting. The Eastside affiliate, which has repaired 68 homes since it began in 1997, will accept applications from both volunteers and homeowners throughout the year.
Information: (425) 455-0179, firstname.lastname@example.org
Tentative EISCC Program Schedule
February 10, 2004 - EISCC overview & planning
March 9, 2004 - Interfaith program
April 13, 2004 - Advocacy
Welcome To Eastside Interfaith
Is Your Congregation or Agency Represented?
Welcome, visitor and members, to Eastside Interfaith Social Concerns Council (EISCC). The Council provides a forum for exchange of information among Eastside congregations and agencies, and encourages interfaith dialogue. Volunteers help to maintain Congregations for the Homeless, the legislative task force, and other programs.
Membership is voluntary, open to Eastside congregations of all faiths. There are currently 42 member congregations and 20 social agencies. Member congregations contribute $25.00 annually to EISCC and select two voting delegates (clergy and/or lay). Clergy, youth representatives, persons who work with youth or social ministry programs, and other lay persons are especially invited.
Associate members are social and health service programs whose purposes and activities are consistent with the purposes of EISCC. Each pays a fee of $10.00 annually, and has voice but does not vote.
Meetings are held the second Tuesday of each month (except August), 12:00 noon – 1:30 p.m. (bring your sack or fast food lunch; beverages are provided), in the Chapel Lounge (Room 315, former sanctuary, entered through the upper parking lot on NE 17th Street, off 100th Ave. NE), of the First Presbyterian Church, 1717 Bellevue Way NE, Bellevue. No reservations are necessary. Parking is free, either in the upper lot near the meeting room, off 100th Avenue NE; or in the lower parking lot, off Bellevue Way.
Information: Rick Russell (425) 746-2411.
EISCC UPDATE
JANUARY, 2004
This newsletter has been edited by Sally Wing since 1994-1995. Since then, three or four issues have been published each year, with the help of many volunteers. An EISCC newsletter was published previously between 1986 (possibly earlier) and 1989, with Carla Vendeland as editor.
Please give us information about coming events in your congregation, task force, or organization that are open to others in the community. Letters to the editor are welcome. Send your items to the EISCC address or to the Editor, Sally Wing, P.O. Box 5556, Bellevue, WA 98006-0056. (425) 747-5924. Fax: (425) 462-5601. Email: email@example.com
Emergency Services Directory
Eastside Interfaith Social Concerns Council publishes an Emergency Services Directory, which is distributed to congregations and agencies at two-year intervals. Additional copies of the 2004-05 directory are available on request.
Information: Nadine Shannon (425) 746-6019.
Head Start Program For Families
Head Start is a federally funded preschool program for low-income families. Three- and four-year-olds participate four days a week for three and one-half hours a day; three- to five-year-olds attend five days a week from 6:30 a.m. to 6:30 p.m. Services include breakfast, lunch, and snacks; bus transportation; health services; family support; employment and training; and mental health counseling. Services are free or low cost depending on family eligibility.
Volunteers are needed to assist teachers, read to children, play music, cook with the children, or simply be a friendly face. To volunteer or to refer someone, contact Heather Duncan, Family Service Worker, (425) 456-5766.
Crisis Clinic Resource Center
The Crisis Clinic community resource center yields referrals for such needs as food and toys, and the crisis line, staffed by volunteers, responds to calls on such subjects as relationship issues, mental illness, grief and loss, domestic violence, and chemical dependency.
Information: (206) 461-3200, Crisis line: (206) 461-3222 or www.crisisclinic.org
St. Andrew’s Housing Group
Provision of housing for the working poor is emphasized by St. Andrew’s Housing Group, which operates several multi-family rental properties and is developing more.
Information: (425) 746-1699.
Congregations For Kids
Congregations for Kids works with the Bellevue School District to help needy children within the school district. It held its annual Good Start Back to School drive in late summer to help low-income children get ready for school with new school supplies. Later in the fall, the group collected Warm Coats for Kids.
Information: Nancy Jacobs, chair, (425) 883-6406 or firstname.lastname@example.org
Thank You
Thank you to Paul Burnham, Betty Spohn, Marilyn Haymond & other volunteers for publishing this newsletter.
Is Your Zip Code Correct?
Postal Service rules require that we verify correctness of all zip codes on our mailing list. If yours is not correct, please notify Marilyn Haymond (425) 641-1497
EMPLOYMENT TASK FORCE
Information about the job market and assistance in planning a job search, interviewing and resume writing are available at the Employment Resource Center, Bellevue Stake Center, 14536 Main Street (near 148th Avenue and the Hopelink Center) Tuesdays, 12:00 noon to 8:00 p.m. and Wednesdays, 8:00 a.m. to 5:00 p.m. (206) 687-6942. Services are available to all persons regardless of church affiliation, and are also available in Renton and Everett, as part of the international network of employment offices established by the Church of Jesus Christ of Latter Day Saints.
The Salvation Army Eastside is located at 11555 Northup Way, Suite 100, Bellevue, WA 98004 (Mailing address: P.O. Box 749, Bellevue, WA 98009), according to Bob Giuliano and Captain Ken Perine. Telephone number (425) 827-1930.
Emergency and disaster services include food, shelter, clothing, and utilities. They provide services to sick and housebound elderly persons, to “latch-key” children, and to incarcerated persons and their families.
Saint Margaret’s Thrift Shop
The Saint Margaret’s Thrift Shop is located next to St. Margaret’s Episcopal Church on Factoria Boulevard SE. Their hours are Monday through Saturday, 10:00 a.m. to 2:00 p.m., and Wednesday, 10:00 a.m. to 9:00 p.m. They have continued to receive donations of clothing and other goods, and to make donations to a variety of Eastside causes. Information: Joan Abel, (425) 641-6830.
Assistance League Of The Eastside
Operation School Bell, a program of the Assistance League, provides children in Bellevue, Lake Washington, and Northshore school districts with clothing and school supplies. They also provide assault survivor kits, including clothing and hygiene items, to local hospitals; and companionship at two long-term care centers. Information: Leslie Young, (425) 881-0298 or www.alecares.org.
EISCC Volunteers Needed
EISCC needs a secretary and a treasurer to replace pro tem persons in these positions. Your time and energy can help to maintain EISCC services, including Congregations for the Homeless, legislative task force, and other programs. Please contact the editor, Sally Wing (425) 747-5924, or email@example.com.
Puget Sound Blood Center
This regional resource is supported by volunteer blood donors and volunteer workers, who assist in coordination, registration, and monitoring. Information: Leigh Chapman, Donor Resources Representative, (425) 462-4395. E-mail: firstname.lastname@example.org.
Friends Of Youth
Services for children, youth, and families encourage individual growth and promote constructive relationships. Services include counseling, housing, support for young parents, chemical dependency treatment, and foster care. Information: (425) 869-6490 or email@example.com.
Issaquah Food & Clothing Bank
Year round and holiday provision for food, clothing, and children’s toy needs are available through the Issaquah Food & Clothing Bank. Information: (425) 392-4123.
Congregations For The Homeless
Chair: Nadine Shannon (425) 746-6019
firstname.lastname@example.org
Regular meeting: 1st Tuesday, 12 noon to 1 p.m.
St. Luke’s Lutheran Church - 3030 Bellevue Way NE, Bellevue
This program celebrated its tenth anniversary in September 2003. It provides safe, secure shelter, three meals daily, and employment assistance for homeless men on the Eastside, helping them return to being members of the community. Host congregations, which provide facilities for a month, are assisted by support congregations and other agencies and individuals that provide financial and volunteer support. A total of 67 congregations currently participate. The cities of Bellevue, Issaquah, Kirkland, and Redmond, and United Way have provided grants for our services. Services by 3,108 volunteers enable 99% of the income to be used for direct services.
Currently there are between 15 and 22 men in the shelter each night. Clients are typically between 34 and 55 years old; they must be at least 18, and occasionally one is 75. Clients typically have good employment skills, but struggle in a difficult job market.
We contract with Catholic Community Services to provide counseling as needed and employment case management. There is always one site manager at the host congregation with the clients. Sleeping facilities at the host congregation and three meals daily are provided pro bono by volunteers.
You may send support directly to Congregations for the Homeless, P.O. Box 662, Bellevue, WA 98009-0662. When making contributions to United Way for this program, please write in “EISCC Congregations for the Homeless.”
Homeless men may be referred to these Eastside agencies for screening:
Catholic Community Services
(425) 284-2211 or 1-800-872-3204
Jewish Family Service
(425) 461-3240
Salvation Army
(425) 827-1930
Hopelink
Bellevue (425) 943-7555
Northshore (425) 485-6521
Carnation (425) 333-4163
Kirkland (425) 889-7880
Redmond (425) 882-0241
Children’s Response Center
This agency, a service of Harborview Hospital, serves children who have been abused or have witnessed other traumatic events, as well as their family members, and provides medical and legal advocacy, educational programs, and group and individual counseling.
Information: Gayle Zeller, education and prevention coordinator, (425) 688-5130, www.childrensresponsecenter.org
Hopelink
The Bellevue office of Hopelink provides services, which include programs for senior citizens and people with disabilities, and a day care center for children.
Children whose families cannot afford lunch are provided with free or reduced-price lunches during the school year, and with the “Brown Bag Lunch Brigade” in the summer.
Also part of Hopelink is RotaCare Clinic, a free walk-in medical clinic offered Saturdays (except in August) from 9:30—11:30 a.m. The clinic, sponsored by Eastside Rotarians, is staffed by two volunteer physicians, and works with social service agencies to provide follow up care and care for chronic illness. Volunteers also include social workers and interpreters.
The Family Development Program assists low-income families to set and accomplish goals. The program provides emergency services, life skills workshops, parenting classes and support groups, and ongoing support and advocacy. Information: (425) 943-7550.
EISCC members and others regularly contribute to the EISCC Eastside Emergency Services Fund, administered by Hopelink. Other Centers serve the rest of North and East King County. Expenditures are made for rent, utilities, water, gasoline (automobiles), and motel occupancy. Contributions may be sent to Eastside Hopelink, Bellevue Service Center, 14812 Main Street, Bellevue, 98007. Information: Jennifer Cole-Wilson (425) 943-7555.
The Eastside Literacy Council is part of Hopelink, teaching people to read, write, and speak English. Alice Ferrier, director of the Literacy Council, is head of literacy programs at Hopelink.
HEALING MASS
Sacred Heart Catholic Church holds a healing mass on the fourth Monday of each month, 5:30 p.m. at the church, 9460 NE 14th Street, Bellevue. Prayers are offered for and with all persons who are in need of physical, emotional, or spiritual healing. The presider will anoint anyone who wishes to receive the Sacrament of Healing. All are welcome to attend, especially those in need of healing and their families and friends. Information: (425) 454-9536.
Bellevue Crop Walk
The third annual Bellevue Crop Walk will be held Saturday, May 15, 2004, beginning and ending at First Presbyterian Church, to alleviate hunger world-wide. All are welcome to participate. Proceeds will be divided, 25% to Hopelink and the Emergency Feeding Program, 75% to Church World Service. Information: Carol Ready (425) 744-8783, or email@example.com.
Catholic Community Services
Catholic Community Services at 12828 Northup Way, Suite 100, Bellevue 98005-1932, provides emergency and transitional housing, legal advocacy, and counseling. Information: (425) 284-2211 or 1-800-872-3204.
Volunteer Chore Services, a program of CCS, helps elders and adults with disabilities to maintain independence in their own home, by assisting with such tasks as grocery shopping, light housekeeping, minor home repair, moving, and yard work.
MAMMA’S HANDS
Denny Hancock, founder and director of Mamma’s Hands, has provided food on Wednesday evenings in downtown Seattle for several years, and has also picked up and delivered furniture to House of Hope in North Bend, which provides transitional housing for homeless women. He has arranged for the donation of telephones for homeless persons to call home. Information: (206) 915-2073 or Website: www.MammasHands.org.
Invitation to Participate
To: Eastside Interfaith Social Concerns Council P.O. Box 662, Bellevue, WA 98009-0662 Marilyn Haymond (425) 641-1497
Name ____________________________________________
Address ____________________________________________
_____________________________________________________
Phone ______________________________________________
Cong./Agency _________________________________________
Congregation member: $25 __
Agency member: $10 __
Newsletter only: $5 __
I can volunteer for ______________________________________
More Info about ______________________________________
______________________________________________________
WWW
Websites You May Find Useful
Since many people use the Internet regularly, both for e-mail and for seeking information, we’d like to encourage you to share favorite Websites with other EISCC members.
◊ Churchcouncilseattle.org is the site of the Church Council of Greater Seattle, which provides links to other ecumenical organizations, peace and justice groups, children’s organizations, government, etc.
◊ The Interfaith Alliance is a faith-based voice countering the radical right and promoting the positive role of religion. You have an opportunity to subscribe to updates on legislative issues. Its website is tialliance.org.
◊ The U.S. House of Representatives (house.gov) and the U.S. Senate (senate.gov) each have a website that includes current schedules, bills, history, and a directory.
◊ Vatican news releases in four languages are provided by an independent international news agency at Zenit.org.
◊ Shamash: The Jewish Internet Consortium provides information, education, discussion, and community-building resources, at shamash.org.
◊ The American Jewish Committee identifies local chapters, coming events, and national legislation, at ajc.org.
◊ Worldwide Faith Network, whose website is WFN.org, provides news releases from a wide variety of faith groups.
◊ Information about status of legislation in Washington is available at leg.wa.gov.
◊ United Religions Initiative is a grassroots international interfaith organization. Its Emergency Response Network offers a model in which more than 100 religious leaders have formed a network to respond instantly to acts of hate violence perpetrated against religious, ethnic, and racial minorities. www.uri.org
Children’s Action Network
Children’s Action Network, a program of the Children’s Alliance, promotes action on child care, health care, juvenile justice, welfare reform, and other children and family issues. They publish Children’s Action Alerts, especially during the legislative session, providing background information for contact with legislators and other action. There is no charge; if you join the Network you are asked to take five minutes each week to speak up for children.
Information: Gabriela Quintana, Field Organizer, 172—20th Avenue, Seattle, WA 98122. Phone: (206) 324-0340; Fax: (206) 325-6291; e-mail: firstname.lastname@example.org. Website: www.childrensalliance.org.
Mental Health Task Force Seeks Your Response
Task Force Information: Sally Wing, (425) 747-5924 or Fax: (425) 462-5601 or Email: email@example.com
The Mental Health Task Force can provide suggested prayers, suggestions for offering assistance, a list of community resources, information about the National Alliance for the Mentally Ill and publications by Pathways to Promise.
Eastside Baby Corner
This thirteen-year-old, all-volunteer program collects, sorts, and distributes through agencies supplies for children from birth through age 14. They are the “last safety net,” providing a toy bank, layettes, and other supplies year round. Volunteers, including children and parents, are needed to sort materials. Cash donations are needed to supplement donated materials. Donations may be left at St. Louise Catholic Church, 141 – 156th Avenue SE, Bellevue. Information: Karen Ridlon, President (425) 865-0234 or firstname.lastname@example.org.
North Bellevue Community Center
Bellevue residents of all ages can participate in a variety of classes and services offered at the North Bellevue Community Center, including senior trips, and classes and programs for families and adults.
The Center is located at 4063 - 148th Avenue NE, Bellevue. Information: (425) 452-7681 or cityofbellevue.org (click on departments, parks, and recreation).
Hot Meal Program
St. Andrew’s Lutheran Church sponsors a hot meal program for all interested persons, each Thursday evening at 5:30 p.m. at 2650-148th Ave. SE, Bellevue. Volunteers are needed to expand the program.
Information: Jan Jeide (425) 881-5370 or email@example.com
Kindering Center
The Kindering Center is a non-profit neurodevelopmental center which provides services to children with disabilities and special needs, from birth to three years, and to their families. Services include language therapy, parenting classes in four languages, and counseling. Information: Jennifer Pineda, (425) 747-4004 or www.kindering.org
Jewish Family Service
Programs include services for seniors, disabled persons, children, and youth. A monthly Eastside food bank is available, as well as emergency assistance and refugee resettlement, at 1811 - 156th Ave. N.E., Bellevue (425) 643-2221. They serve an ethnic population including Hispanic, Asian, and European clients, with English as a Second Language classes, employment services, social services, and holiday gifts for children.
Information: Carol Mulin (206) 861-3176 or firstname.lastname@example.org
Bridge Ministries
Bridge Ministries serves persons with physical and developmental disabilities and their families. A guardianship program is available for developmentally disabled persons who lack family support. Volunteer opportunities include one-to-one friendship and circles of friends, obtaining and repairing durable medical equipment and providing it to persons in need, and assisting congregations to provide a welcoming environment. Information: The Reverend Bruce Knofel (425) 828-1431 or email@example.com
Emergency Feeding Program
This county-wide program, with Church Council of Greater Seattle as the parent organization, has 37 distribution sites, including St. Andrew’s Lutheran Church, Overlake Park Presbyterian Church, and First Congregational Church in Bellevue. They request specific basic foods, encouraging food-of-the-month donations. The agency serves the needs of persons in emergency hunger situations and provides counseling and referrals to enable independence. Information: Catherine Hillard (425) 562-0698.
City Of Bellevue Supports Diversity Program
The City of Bellevue’s cultural diversity program provides workshops, events, and programs designed to help residents of all races and cultures to be productive contributors to the community. The program, which began in 1992, has received awards from the Association of Washington Cities and the National Black Caucus of Local Elected Officials.
A weekly radio program, “Voices of Diversity,” is broadcast Mondays at 7:30 p.m. on KBCS-FM, 91.3. Hosted by Kevin Henry, the program is jointly produced by KBCS-FM and the City of Bellevue Parks and Community Services Cultural Diversity Program. It features people and organizations which foster diversity, and spotlights diversity and social service programs in Bellevue. Information: (425) 452-2835.
A television program, also called “Voices of Diversity” airs on Bellevue Channel 21 at 4:30 p.m. and 8:30 p.m. on Wednesdays, and 1:00 p.m. and 8:30 p.m. on Fridays.
The program is coordinated by Kevin Henry, and his cultural diversity assistant Callie Shanafelt, Office of Cultural Diversity, Parks & Community Services, P.O. Box 90012, Bellevue, WA 98009. (425) 452-7886 or (425) 452-7922. Fax: (425) 688-2814. Website: www.ci.bellevue.wa.us/parks/diversity/welcome.htm
CITY MINISTRIES
Located at the City Church in Kirkland, City Ministries provides perishable food through 35 distribution outlets on the Eastside and in Seattle. Clothing and home furnishings can also be obtained. They distribute to individuals and families, network and partner with other churches, and provide training for other ministries. Information: Joel Pike (425) 881-0366.
Legislative Task Force
Chair: Joy Pocasangre (425) 747-0877
e-mail: firstname.lastname@example.org
The purpose of the Task Force is education and advocacy for faith communities on federal, state, and local issues. Activities include annual participation in advocacy days in Olympia in the spring, and liaison with the Washington Association of Churches and other advocacy groups. During the legislative session, you can send a message to your state legislators through the legislative hot line, 1-800-562-6000.
Multifaith Works Aids People With AIDS
Multifaith Works oversees several projects which mobilize volunteers, primarily from faith communities, to provide critical services which assist people affected by life-threatening or chronic illnesses to live with personal dignity and integrity. The Multifaith AIDS Project (MAPS) provides supportive low-income housing to people living with AIDS. CareTeam volunteers offer practical support and friendship to individuals and households living with AIDS. Shanti volunteers provide one-to-one emotional support to people with life-threatening illnesses, including HIV/AIDS.
In addition to these HIV/AIDS related programs, Multifaith Works also includes the MS Housing Joint Venture with the Multiple Sclerosis Association of King County to open a home for people moderately disabled by multiple sclerosis.
Its programs also include the Multifaith Alliance of Reconciling Communities (MARC), which builds bridges between faith communities and the Gay/Lesbian/Bisexual/Transgendered/Questioning community.
Information: (206) 324-1520 or email@example.com
Eastside Interfaith Social Concerns Council Mission Statement
We, Eastside Interfaith Social Concerns Council, believe that we are guided by the moving of God’s spirit in our community to work together in a spirit of caring and celebration. We honor and respect each other’s religious heritages, welcome and pray for each other, and share information about pressing community needs.
We provide a forum to educate, advocate, initiate, coordinate, support, and through task forces and other means, work for the common good of the Eastside community to address human needs and improve the quality of life.
Youth Eastside Services
Youth Eastside Services (YES) provides parent support groups, drug and alcohol counseling, and individual counseling. Volunteers are active as receptionists, intake workers, and mentors. Information: (425) 747-4937
Arno Tausch, Almas Heshmati and Hichem Karoui:
The political algebra of global value change
Nova Publisher, New York, 2015, 532 p.
ISBN: 978-1-62948-899-8
Analysing the change in global values and its implications for our political and economic system has a relatively rich history, which dates back to the 1980s, when the first World Values Survey was conducted. With the help of these data sets it is possible to get some bearing on which way the preferences of the world’s population are heading, and to obtain information on phenomena such as attitudes towards competition and free markets, social expenditures or bribery. This is particularly useful when we wish to analyse changes in consumption, religion or attitudes towards minorities in a given country or region. The goal of this book is to analyse the World Values Survey and define the determinants of values which characterise certain cultures and/or civilizations. In doing so, the book aims to add to the scientific discourse in three areas: values as determinants of economic growth; how values characterize certain cultures; and, lastly, whether the relative decline of the power of the Global North can be correlated with changes in its own values. The authors adopt Promax factor analysis to investigate the relationships within the World Values Survey dataset and to underline their points.
The first chapter of the book investigates the development of global value research. It introduces the concept of a relationship between the loss of religious values and a rise in the shadow economy. The authors argue, based on Robert Barro’s research, that religion affects economic outcomes by fostering religious beliefs which may influence individual characteristics such as thrift, work ethic and honesty. This is an interesting concept which may answer why the Global North is in decline. However, we should remark at this point that, according to Paul Kennedy for instance, the decline of empires had nothing to do with religion. Catholic Spain in the 16th and 17th centuries, for instance, was deeply religious, and its decline had more to do with imperial overstretch, complacency and the sudden acquisition of wealth. Having acquired new riches from South America, the population of Spain was not pushed to perform and work; simply put, they were not hungry enough to progress. It is no surprise that World War II was a major incubator for innovation, as people were pushed to the limit as at no time in history before. The question therefore is: what is pushing the Global North to perform at its peak of capacity? The authors also introduce the works of the 14th-century Arab sociologist and historian Ibn Khaldun. According to Ibn Khaldun, changes within a society can be attributed to changes in values (similar to Paul Kennedy, although the main variable is not necessarily religion, as it is for Robert Barro), and can be assessed within the span of four generations: the first generation retains desert qualities, desert toughness and savagery. The second generation, during a life of comparative ease and authority, gets used to luxury and plenty, and the population gets used to obedience. The third generation has largely forgotten desert life and toughness as if they had never existed, and luxury reaches its peak; group feeling disappears completely. If someone attacks them, the third generation cannot defend itself, and thus by the fourth generation it will be destroyed. If we assess the countries of the Global North we can see, for instance, that recruitment for the armies is at an all-time low; fewer people are willing to put their lives at risk for a country which no longer honours them. In the remainder of the chapter the authors provide a detailed analysis of the standard economic growth theories and attempt to quantify how the loss of religious values has led to the rise of the shadow economy.
The authors quantify the propensity for having a shadow economy by assessing how people respond to the idea of not paying on the public transport system. Interestingly, in secular countries fare evasion is far higher than in religious countries, making a proportionally larger shadow economy more likely, which in turn may lead to lower rates of economic growth.
Chapters 3, 4 and 5 contain a wealth of interesting and thought-provoking ideas. It is in Chapter 3 where the authors start building on the analysis of the shadow economy (which they began in Chapter 1) and try to quantify its effects on the economy. Measuring the size of the shadow economy poses a number of challenges; however, the authors introduce several methods with which we can judge its size. For instance, the authors find a negative relationship between the shadow economy and the level of development, democracy and press freedom. Factors which may increase the size of the shadow economy include: increased tax burden, rising state regulatory activities, low tax morale, and lower institutional quality and trust in the surroundings.
Chapter 4 contains the basis for understanding global values research. It introduces the works of Geert Hofstede, Shalom Schwartz, Eldad Davidov and Ronald Inglehart. The above-mentioned authors provide to some extent different viewpoints for the measurement of national values. For Hofstede (p. 141) the variables are power distance; individualism vs. collectivism; masculinity vs. femininity; uncertainty avoidance index; long-term orientation; and indulgence vs. restraint. As an example, the uncertainty avoidance index deals with a society’s tolerance for uncertainty and ambiguity, and it assesses to what extent individuals within the society can deal with a novel situation. The results show that the index is highest in Roman Catholic and Orthodox cultures and lowest in Protestant cultures and Southeast Asia. For Schwartz and Davidov (p. 156), there are seven dimensions for the comparative analysis of global values: embeddedness, hierarchy, mastery, affective autonomy, intellectual autonomy, egalitarianism and harmony. According to the authors, Muslim societies rank very high on values of embeddedness, which can be characterised by traits such as social order, respect for tradition, obedience, politeness, national security, devoutness, the honouring of elders etc. Inglehart (p. 169) characterises values as traditional values and self-expression values. When survival is uncertain, cultural diversity may seem threatening and people will stick to traditional gender roles or emphasize familiar norms to maximize predictability in an uncertain world. However, when survival is taken for granted, diversity is not only tolerated but positively valued, since it is something new and interesting in an otherwise materialistic and similar world.
The analysis of the data in the chapter shows that, not surprisingly, Australia, New Zealand and the US are the most self-expression-oriented countries, whereas the most survival-oriented are the ones where the Orthodox Christian heritage is strongest, such as Belarus, Russia or Ukraine. In Chapter 5 the authors provide the results of their re-analysis of the World Values Survey based on their own dimension of factors, which complements the analysis of the authors mentioned in the previous chapter. On page 229 the authors make one of their most thought-provoking and controversial statements when they say “with great caution that there is a certain trend towards racism and traditional religion in too many places in the global Muslim community (Umma), and a lack of the values of tolerance and respect in too many places in the global Muslim community (Umma), all compared to the global community of humankind.” Another very interesting result of Chapter 5 is the low values of Russia in the avoidance of authoritarian character (p. 263), which may explain to some extent the lack of regime change and worsening democratic conditions within Russia.
The remainder of the book (Chapters 6–12) addresses important issues such as feminism in the Muslim world, finding that it is not really the religion which influences how women live, but the region they live in. Another issue which
the authors discuss is Arabic opinion on issues such as the separation of religious practices from political and social life, or the separation of religion from politics, and whether these factors are requirements of a democratic country. These chapters, however, not only provide the results of econometric analysis, but also endeavour to combine the results of the calculations with classic texts which shaped the lives and livelihoods of people in modern history. Such an example can be seen on page 339, where the authors conduct a Promax factor analysis of the Ten Commandments.
The book by Arno Tausch, Almas Heshmati and Hichem Karoui is recommended not only for those who wish to read an update on how global values have changed in the last couple of years, but also for those who want to take a broader perspective and understand some of the philosophical underpinnings of value change. It is an interesting and thought-provoking addition to the literature and helps us understand, with quantifiable variables, how values in certain regions have changed in the last couple of years and what implications these changes might carry.
András Tétényi
Assistant professor
Corvinus University of Budapest
Email: email@example.com
Martin H. Wolfson and Gerald A. Epstein (eds.):
The Handbook of the Political Economy of Financial Crises
Oxford University Press, New York and Oxford, 2013, 770 p.
ISBN: 978-0-19024-093-6
The Handbook of the Political Economy of Financial Crises – published by Oxford University Press – is one of the many recent efforts to better understand the dynamics, causes and implications of the Great Recession. The contributors come from a wide range of ideological backgrounds and offer their insight on topics ranging from productive incoherence (Ilene Grabel) to world money (Costas Lapavitsas). In addition to an overview of the theoretical and empirical contributions to the field, the handbook outlines four policy solutions to address the potentially disastrous effects of financial crises: a) capital controls, b) macroeconomic management to discourage excessive capital inflows, c) regional cooperation in the periphery to ameliorate the effects of shocks that emerge from the major financial centre and d) global initiatives, notably world money, to reduce the ability of the centre to wreak havoc on the rest of the system.
The Handbook of the Political Economy of Financial Crises from the perspective of the authors in this volume reveals the forces behind the financial innovations for America.
In section two, approaches for understanding central bank operations and the role of three, the global economy, & Erturk, Palm & Hubbard, Guttrub revolves around the provides insights into the emerging world as a whole.
The role of financial crises is discussed by Stockhammer and Rude, Palley, M. Li provide a solution to limit financial crises. Given the vast diversity of the contributors, methodological differences are inevitable. The Minskyan approach is most of the pieces of the puzzle. Minsky’s interpretation of stylized models misses the crucial critique of the real world to explain how financial economy in some cases.
James Crotty’s discussion of the Efficient Market Hypothesis highlights, firstly, its inability to capture the extent of deregulation that characterizes the innovations of the neoliberal era. Secondly, the assumptions of the EMH are not
Of Heterotopias and Ethnoscapes: The Production of Space in Postcolonial North Africa
The Faculty of Oregon State University has made this article openly available.
| Citation | Rice, L. (2003). Of Heterotopias and Ethnoscapes: The Production of Space in Postcolonial North Africa. Critical Matrix, 14, 36-75. |
|----------------|-------------------------------------------------------------------------------------------------------------------------------|
| DOI | |
| Publisher | Program in Women’s Studies, Princeton University |
| Version | Version of Record |
| Terms of Use | http://cdss.library.oregonstate.edu/sa-termsofuse |
Of Heterotopias and Ethnoscapes: The Production of Space in Postcolonial North Africa
Laura Rice
It is difficult to . . . show the involvements of culture with expanding empires, to make observations about art that preserve its unique endowments and at the same time map its affiliations . . . Territory and possessions are at stake, geography and power. Everything about human power is rooted in the earth, which has meant we must think about habitation, but it has also meant that people have planned to have more territory and therefore must do something about its indigenous residents . . . Just as none of us is outside or beyond geography, none of us is completely free from the struggle over geography. That struggle is complex and interesting because it is not only about soldiers and cannons but also about ideas, about forms, about images, and imaginings.
(Edward Said, *Culture and Imperialism*)
**Introduction: The Postcolonial Spatial Turn**
The focus of postcolonial studies has shifted in the last decade or so from a struggle over history, the narratives of winners and losers—as recorded by the winners and resisted by the losers—to a struggle over geography. Power inequities formerly embodied in Manichean conceptualizations (Colonizer/Colonized, Oppressor/Oppressed, Occidental/Oriental, Self/Other, First World/Third World, Center/Margin, Global/Local) are
now interrogated as part of the complex and shifting operations of “spatial economies of power.” Discursive approaches, targeting the relational and productive rather than the mutually exclusive and reductive, interrogate issues of meaning and representation, subjectivity and agency, culture and imperialism, identity and power. The world that many of us are today engaged in, whether as actual or armchair travelers, is a world of migrant subjectivities where we struggle with the affiliations and ideologies, the cultural particularities and international connections that map the situatedness of each of us.
In this essay, I explore the production of this postcolonial space first by approaching it from the standpoint of aesthetics. Through a close reading of Tayeb Salih’s *Season of Migration to the North*, I look at the ways identity is destabilized by what Foucault calls heterotopic spaces—spaces like that of the mirror which force us to think about issues of representation. Next I extend this inquiry by examining postcolonial spatiality from an autobiographical standpoint, exploring some of the ways I, as a middle-class American academic, have constructed, and am constructed by, my experience of having married into a recently sedentarized, Bedouin family in southern Tunisia. In addition to Foucault’s trope of the heterotopia, I have used Arjun Appadurai’s notion of the *ethnoscape* to describe this production of space, because it specifically challenges the center-periphery model of cross-cultural relations. That is, it encourages us to look at particular sites in the global landscape as environments populated by diverse actors whose situatedness—as workers, or tourists, or capitalists, or teachers, or women, or men, or insiders, or outsiders, etc.—inflects their experience of that space. The ethnoscape provides an uneven terrain in which heterotopic sites are multi-faceted mirrors. Perhaps the most disconcerting aspect of this second interrogation has been the discovery that struggles over geography also involve the charting of interior landmarks, and the landmarks that we choose suggest the ways we seek to anchor our own migrant subjectivities. As Said notes, we must think about habitation, and the postcolonial spatial turn is crucial to this process.
The contemporary theoretical emphasis on spatial economies of
power was the result of “a growing skepticism concerning older explanatory and predictive models” based on time as a privileged medium.⁶ These earlier historicized accounts not only reduced complex global relations to uniform, one-dimensional narratives but also simultaneously obscured spatial understandings that might help to reveal the manifold and heterogeneous practices of power as it operates internationally. As Dick Hebdige notes: “Shifts in the political imaginary occur no doubt for all kinds of reasons though an influential figure here must surely be Foucault whose antagonism toward teleological ideas of progress is encapsulated in his substitution of the image of the ‘network’ for models of linear ‘development.’”⁷ In an interview on “Questions on Geography” (1976), Foucault suggested that discourses foregrounding time have embedded in them individual, autobiographical modes of understanding. Those based on spatial metaphors, however, point to relations between things, and thus to the operations of power:
Metaphorizing the transformations of discourse in a vocabulary of time necessarily leads to the utilization of the model of individual consciousness with its intrinsic temporality. Endeavoring on the other hand to decipher discourse through the use of spatial, strategic metaphors enables one to grasp precisely the points at which discourses are transformed in, through, and on the basis of relations of power.⁸
Since the time of this suggestive interview, postcolonial theory has done much to illuminate the ways that individual consciousnesses are themselves produced as a matter of relational, spatial conceptualizations.
Of particular interest for this essay is Foucault’s challenge to the reality/utopia dyad that he destabilizes with his concept of heterotopia. In an early set of lecture notes written in 1967 and published after his death as “Of Other Spaces,” Foucault notes that some sites are semiotically significant because they cause us to reflect upon and turn our attention back toward the other sites to which they are juxtaposed. That is, they have
the “curious property of being in relation with all the other sites, but in such a way as to . . . neutralize, or invert the set of relations that they happen to designate, mirror, or reflect” (24). These special sites are of two sorts: utopias or heterotopias. Utopias are “sites with no real place . . . [They have] a general relation of direct or inverted analogy with the real space of [society]. They present society itself in a perfected form . . . [U]topias are fundamentally unreal spaces” (24). Utopias exist in some future time or in some imagined place. Heterotopias, on the other hand, are found in every culture; they are real (that is, material) places that serve as mirrors of other real (material) sites, destabilizing them. For example, the cemetery, or “city of the dead,” is a counter-site to the city of the living within which or next to which it exists. It is a site that forces us to consider the relation between the living and the dead. The relation does not always suggest the same meaning, but it is always meaningful—and often uncomfortable if scrutinized too closely. Likewise, the prison, the madhouse, and the brothel are real places whose relation to normalized social sites—schools, think tanks, or nuclear households—can reflect disconcerting similarities. Thus, in relation to the larger society, heterotopias serve as counter-sites that reflect, contest, and invert other normalized sites in the culture.
Foucault notes that utopias and heterotopias are mirrors of societies in different ways:
The mirror is, after all, a utopia, since it is a placeless place. In the mirror I see myself there where I am not, in an unreal, virtual space that opens up behind the surface . . . such is the utopia of the mirror. But it is also a heterotopia in so far as the mirror does exist in reality, where it exerts a sort of counteraction on the position I occupy. From the standpoint of the mirror I discover my absence from the place where I am since I see myself over there . . . The mirror functions as a heterotopia in this respect: it makes this place that I occupy at the moment when I look at myself in the glass at once absolutely real, connected with all the space that surrounds it, and absolutely unreal, since in order to be perceived it has to pass through this virtual point which is over there. (24)
The mirror serves as a heterotopia when it focuses our attention on the ambiguous relationship between what we think of as reality and representation: it is a site where this dynamic of recognition/misrecognition is especially pronounced. The destabilizing force of the heterotopia rests in its ability to foreground the representational foundation upon which we construct what we commonly think of as reality. It shifts our attention to the power of representation to manage, manipulate, and distort reality.
Foucault ends his lecture by noting that heterotopias unfold as the antipode of all other “real” spaces:
Either their role is to create a space of illusion that exposes every real space, all the sites inside of which human life is partitioned, as still more illusory . . . Or else, on the contrary, their role is to create a space that is other, another real space, as perfect, as meticulous, as well arranged as ours is messy, ill constructed, and jumbled. This latter type would be the heterotopia, not of illusion, but of compensation, and I wonder if certain colonies have not functioned somewhat in this manner. (27)
I argue that the colonies function as just such heterotopic mirrors of both illusion and compensation. To the extent that French texts or discourses displaced North African histories and identities, replacing them with Franco-centric representations for which North Africa was the stage, the colony was a heterotopia grounded in illusion. The theoretical benefits to humanity of the *mission civilisatrice* were constantly being called into question by the facts of colonization: appropriations of land and repression of populations. To the extent that the colony offered the possibility of making up for what was lost at home, the colony was a heterotopia of compensation. For example, the French expansion of colonial holdings in
Algeria is intimately tied to France’s humiliating loss of Alsace and Lorraine to the Germans in 1871. Humiliation at home was to be redeemed by prowess abroad, and between 1871 and 1898, the settler community in Algeria almost doubled (from 119,000 to 200,000). For the 1,183 families who immigrated to Algeria from Alsace-Lorraine in the 1870s, however, this compensation turned out to be an illusion. Because they were essentially factory workers unused to farming, only about a third (387) of the families stayed in the colony.\(^{10}\)
Obviously, the indigenous inhabitants of colonies and former colonies do not escape from these spatial relations unscathed. This essay addresses how they too are drawn into these productions of space, often in ways that are destructive of their own cultural sites and systems. In the case of Tayeb Salih’s *Season of Migration to the North*, the protagonist Mustapha Sa’eed sees himself as constituted by the mirror of the colonizer’s ideology. He is the “native.” He inverts relations in this process of identification, making the “native” a mirror held up to reveal the colonizers’ mentality. Yet these reciprocal acts of discursive violence do not result in a better grasp of authentic realities in their aftermath. In the case of the ethnoscapes of the struggle over geography in colonial and contemporary Tunisia, the scientific projects and military archives that are the legacy of Western intervention in North Africa have affected the ways in which I perceive my in-laws’ history and they perceive mine. My in-laws have experienced Western incursions on their home territory as acts of violence and maldevelopment.\(^{11}\) What I have learned from researching archives is countered by the family history I have gleaned through oral stories and histories from my in-laws—but not in a direct fashion. That is, I learned my own family’s history from genealogies and albums, through pictures of men dressed in military uniforms and women in drawing rooms. My family archives seemed to be about history and biography—a progressive story. The military reports about my in-laws are more about classification and surveillance. They stand in heterotopic relation not only to my in-laws’ oral stories, or the normalized story of my own family, but also to the connections between our histories. As will be demonstrated, the ethnoscapes in the latter part of the
essay are palimpsests that seek to capture the shared spaces inflected by subjects who occupy different political and ethnic locations.
Part I. The Houses Mustapha Sa’eed Built: Of Home and Heterotopia
We should therefore have to say how we inhabit our vital space, in accord with all the dialectics of life, how we take root, day after day, in a “corner of the world.”
For our house is our corner of the world. As has often been said, it is our first universe, a real cosmos in every sense of the word.
(Gaston Bachelard, *The Poetics of Space*)
In Tayeb Salih’s *Season of Migration to the North*, the young narrator who has returned to his traditional village on a bend of the Nile expects to engage in the poetics of dwelling that he associates with the comforts of childhood. His longing for rootedness takes on material form when he wakes on the morning after his arrival in the room where he slept as a child:
I looked through the window at the palm tree standing in the courtyard of our house and I knew that all was still well with life. I looked at its strong straight trunk, at its roots that strike down into the ground, at the green branches hanging down loosely over its top, and I experienced a feeling of assurance. I felt not like a storm-swept feather but like that palm tree, a being with a background, with roots, with a purpose.\(^{12}\)
This rootedness is disrupted when he finds that a stranger, Mustapha Sa’eed, has established himself there. Like the narrator, Sa’eed embodies an unsettling mixture of local knowledge and occidental ways picked up during years of living in England. He represents the sort of hybridity and even contagion that contact with Western culture brings. By inserting himself into the village, he forces the narrator to examine his own liminal identity.
This is an examination the narrator wants to avoid: “I forgot [Sa’eed] after that, for I began to renew my relationship with people and things in the village. I was happy in those days, like a child that sees its face in the mirror for the first time” (4). However, Sa’eed does not allow the narrator to maintain this comfortable misrecognition. When the narrator tells Sa’eed that he has just finished a doctorate, having spent “three years delving into the life of an obscure English poet,” Sa’eed laughs and tells him: “We have no need of poetry here. It would have been better if you’d studied agriculture, engineering or medicine” (9). The narrator, who is retelling the disturbing life story of Mustapha Sa’eed to other listeners, suddenly becomes self-reflexive: “Look at the way he says ‘we’ and does not include me, though he knows this is my village and that it is he—not I—who is the stranger” (9). This early interaction allows Salih to explore the ways tradition and modernity call each other into question, to illuminate the ways the “other” becomes crucial to a definition of the self, and to explore the continual slippage between the self as subject and as object.
The narrator senses duplicity in Sa’eed; like an image in a mirror, Sa’eed exists in a space of nonbeing. He is a phantom to himself and a mocking double for the narrator. Both are trapped between cultures. Saree Makdisi notes that, while the narrator responds to the conflicted self that is the product of colonial relations by trying to wish the problem away, Sa’eed “does so not by becoming entirely European or entirely Arab, but by becoming both, but never at the same time, in the same place, or with the same people.” These contradictions are threaded not only into the conflicting voices and the unstable chronological shifts that Makdisi points to but also, most importantly for this study, into the heterotopic environments of the novel, which house the virtual selves produced by these perpetual performances. The narrator, who has accepted Mustapha Sa’eed’s invitation to dine with other village notables at his home, discovers that even in the village, Sa’eed lives a divided life:
When the conversation fell away and I found myself not greatly interested in it, I would look around me as though trying to find in the rooms and walls of the house the
answer to the questions revolving in my head. It was, however, an ordinary house, neither better nor worse than those of the well-to-do in the village. Like the other houses it was divided into two parts: one for the women and the other containing the diwan or reception-room, for the men. To the right of the diwan I saw a rectangular room of red brick with green windows; its roof was not the normal flat one but triangular like the back of an ox. (11–12)
Mustapha Sa’eed’s mysterious English building houses his worst and most intimate nightmares. Sa’eed, who participates in village life by farming and by sharing his knowledge of Western science and law with the local agricultural committee, reveals a different self when, having gotten drunk, he suddenly begins to recite “English poetry in a clear voice and with an impeccable accent” (14). The revelation, akin to speaking in tongues, of this alien poetic dwelling—“his eyes wandering off into the horizon within himself” (14)—horrifies the narrator: “I tell you had the ground suddenly split and revealed an afreet\(^{11}\) standing before me, his eyes shooting out flames, I would not have been more terrified. All of a sudden there came to me the ghastly, nightmarish feeling that we—the men grouped together in that room—were not a reality but merely some illusion” (14–15). Sa’eed brings the devil of alterity into the village. Not only can the narrator not go home again, but alien forms have appeared in the village space, a house with a roof “like the back of an ox” and a farmer who spouts English romantic lyrics both representing only the beginning of this descent into uncertainty. The narrator confronts Sa’eed, saying “It’s clear you’re someone other than the person you claim to be . . . Wouldn’t it be better if you told me the truth?” (15). Sa’eed responds, “I am this person before you, as known to everyone in the village” (16). Convinced that there is something hidden here, the narrator asks himself: “Should I speak to my father? Should I tell [my childhood friend] Mahjoub? Perhaps the man had killed someone somewhere and had fled from prison? Perhaps he—but what secrets are there in this village?” (16). The narrator swears he will get to the bottom of Sa’eed’s
mysterious past, but in doing so discovers it mirrors his own. Sa’eed is indeed the murderer that the narrator intuits, but in his story rests the figuration of what is to come when the entire village becomes complicit in another murder.
In his quest to unearth Sa’eed’s past, the narrator discovers that while Sa’eed was living in London, he regularly seduced “innocent” young English girls, some of whom later committed suicide. Sa’eed seduces one of these girls, the Arabic-speaking Ann Hammond, after a lecture he’d given at Oxford on the poet Abu Nawas: “And so it was with us: she, moved by poetry and drink, feeding me with sweet lies, while I wove for her intricate and terrifying threads of fantasy” (145). She claims to see in his eyes “the shimmer of mirages in hot deserts” and to hear in his voice “the screams of ferocious beasts in the jungles” (145). He sees in the blueness of her eyes “the faraway shoreless seas of the North.” His house in London is “a lethal den of lies,” deliberately built up “lie upon lie”:
the sandalwood and incense; the ostrich feathers and ivory and ebony figurines; the paintings and drawings of forests of palm trees along the shores of the Nile, boats with sails like doves’ wings, suns setting over the mountains of the Red Sea, camel caravans wending their way along sand dunes on the borders of the Yemen, baobab trees in Kordofan, naked girls from the tribes of the Zandi, the Nuer and the Shuluk, fields of banana and coffee on the Equator, old temples in the district of Nubia; Arabic books with decorated covers written in ornate Kufic script; Persian carpets, pink curtains, large mirrors on the walls, and coloured lights in the corners. (146)
This heterotopia, a space of lies furnished with real artifacts, mirrors the way the English imagination has constructed Africa. Like so many Orientalist harem paintings hanging in museums in England or France, Sa’eed’s exotic decor mimics a space that exists only in the colonial mindset. The hodgepodge of African and Oriental artifacts, placed in impossible juxtaposition, is reminiscent of the heterogeneous exotic landscapes Flaubert’s Emma Bovary adored in her keepsake albums. They are impossible landscapes made up of images looted—as were so many museum artifacts—from real places. Not only have these looted symbols been disentangled from the worlds that gave birth to them, they have been recombined in such a way as to displace the reality of those environments.
This Orientalist masquerade is reversed in Sa’eed’s Sudanese dwelling, a simulacrum that brings real English spaces into question. Sa’eed’s English house in the Sudan is constructed around seemingly everyday English artifacts: a fireplace, Persian rugs, Victorian chairs, oil portraits, and a substantial library. It is a heterotopia of the “real” space of English rooms where everyday “Englishness” is performed and British civilization archived. Sa’eed’s library boasts many Western classics, including works by scholars of empire like Gibbon and soldiers of empire like Macaulay and Kipling, as well as scientific works by the psychologists of empire. He has a copy of *Totem and Taboo: Some Points of Agreement Between the Mental Lives of Savages and Neurotics* in which Freud constructs the racial “other” in culturally ethnocentric terms, theorizing non-Western societies as primitive on an evolutionary scale and immature on a psychological scale.\(^{13}\) Sa’eed also owns Octave Mannoni’s *Prospero and Caliban: The Psychology of Colonization*.\(^{14}\) Like the artifacts in Sa’eed’s Orientalist chamber of lies, the books in his English library hold a mirror up to the West’s colonial imagination. Our young narrator asks himself in the midst of this room: “What play-acting is this? What does he mean?” (137). Sa’eed’s own scholarly work includes four books: *The Economics of Colonialism*, *Colonialism and Monopoly*, *The Cross and Gunpowder*, and *The Rape of Africa*, titles that suggest a highly critical view of the colonial project, yet Sa’eed’s library contains not a single Arabic book (137). Perhaps the most telling sign of Sa’eed’s confusion of identity is that his *Koran* is an English translation.
The narrator tries to say what kind of space this room embodies by drawing analogies to similar spaces: “A graveyard. A mausoleum. An insane idea. A prison. A huge joke. A treasure chamber. ‘Open Sesame, and let’s divide up the jewels among the people’” (137-38). Some of these
analogical spaces are what Foucault calls heterotopias of deviation: “those in which individuals whose behavior is deviant in relation to the required mean or norm are placed” (25). The hospital, the insane asylum, and the prison are among those heterotopias of deviation Foucault studies. These spaces of deviation play a significant role in Sa’eed’s biography, Sa’eed himself commenting that his bedroom, a seductive den of lies, was “like an operating theatre in a hospital” (31). When he is on trial for the murder of his wife, Jean Morris, his defenders claim Sa’eed was acting in a fit of mad passion, that he was “a genius whom circumstances [had] driven to killing” (32). Sa’eed himself longs for death but is put in prison instead: “I was hoping the court would grant me what I had been incapable of accomplishing [suicide]” (68). Sa’eed’s room, in fact, leads to a jumbled series of heterotopic analogies. As cemetery or mausoleum, it is a heterotopic space of the dead that mirrors the space of the living in ambiguous ways. As an ironic space, a huge joke, this room signifies the rootlessness of our representations of reality, yet also their power to have real effects. As cross-cultural space full of secrets to be discovered, Sa’eed’s chamber is both the locus of truths and a den of lies: on the one hand, his library contains all the wealth of empire, the treasures of Western thought; on the other, it is merely the den of thieves whose bits of looted cultural knowledge will be brought to light when seen from the perspective of non-Western cultures.
Mustapha Sa’eed’s rooms, and the displacement of “how we take root” in the colonial context, reveal the less innocent side of the power that Bachelard accords to houses and poems as creative imaginative spaces. In a lecture calculated to seduce his British audience, Sa’eed represents the hedonist poet, Abu Nawas, as “mystical” and “Sufi.” He creates a room full of cheap Oriental splendor as the reflection of an English woman’s fantasies. In carrying out these heterotopic acts of mirroring, Sa’eed seduces not only Ann Hammond, but himself: “Though I realized I was lying, I felt that somehow I meant what I was saying and that she too, despite her lying, was telling the truth. It was one of those rare moments of ecstasy for which I would sell my whole life; a moment in which, before your very eyes, lies are turned
into truths, history becomes a pimp, and the jester is turned into a sultan” (144). British Orientalists, in eroticizing such a vision of the East, produced the myth of a dangerous exotic masculinity that allows Sa’eed to seduce their daughters and attain the illusion of dominance himself. As a creature of the margin, an orphan from Sudan, and a “Black Englishman” in Britain, Sa’eed is fascinated by the instability of categories. Sa’eed, a person without a home, discovers a home of sorts; he dwells as an exotic self in an exotic land conjured up for him by the British. However, recognizing the discursive violence of the British, and even taking delight in beating them at their own game, does not mean that Sa’eed escapes from this representational hall of mirrors. Instead, when he returns to the Sudan, he builds his own “occidentalist” site that mirrors his ideal of an enlightened British self. Sa’eed builds for us an imagined England juxtaposed to an imagined Orient, both trapped in the violent discursive space of colonial relations.
Salih’s novel, however, is not content to let us off so easily in coming to terms with postcolonial space. At novel’s end, when the young narrator enters the dark English room that has been closed since Sa’eed’s disappearance into the Nile, he has a literal illumination:
I struck a match. The light exploded on my eyes and out of the darkness there emerged a frowning face with pursed lips that I knew but could not place. I moved toward it with hate in my heart. It was my adversary Mustafa Sa’eed. The face grew a neck, the neck two shoulders and a chest, then a trunk and two legs, and I found myself standing face to face with myself. This is not Mustafa Sa’eed—it’s a picture of me frowning at my face from a mirror. (135)
During the course of the novel, readers become aware that the narrator and Mustapha Sa’eed are doppelgängers. The narrator, like Sa’eed, is educated in Britain, is occasionally mistaken for Sa’eed’s son, and falls in love
with Sa’eed’s widow, Hosna Bint Mahmoud, becoming the guardian of Sa’eed’s two sons. Sa’eed and the narrator are mirror images of one another that reflect the colonial condition as a series of hybrid, alienated subjectivities. This doubling culminates when the narrator enters the room with the roof like the back of an ox at the end of the novel. In the English room, he opens Sa’eed’s notebook and reads, “My Life Story—by Mustafa Sa’eed,” but the notebook is empty. In lieu of this autobiography, the narrator later comes across an unfinished lyric poem written by Sa’eed:
The sighs of the unhappy in the breast do groan
The vicissitudes of Time by silent tears are shown
And love and buried hate the winds away have blown.
Deep Silence has embraced the vestiges of prayer,
Of moans and supplications and cries of woeful care.
And dust and smoke the traveler’s path ensnare.
Some, souls content, others in dismay.
Brows submissive, others . . . (152-53)
The narrator scratches out the line and substitutes one written by himself: “Heads humbly bent and faces turned away” (153). This line echoes the thoughts the narrator had as he entered the room: “I must begin where Mustafa Sa’eed left off. Yet he at least made a choice, while I have chosen nothing . . . If only I had told [Hosna bint Mahmoud] the truth [that I loved her] perhaps she would not have acted as she did. I had lost the war because I did not know and did not choose . . . Now I am on my own: there is no escape, no place of refuge, no safeguard . . . Where, then, were the roots that struck down into times past?” (134). By the end of the novel, the narrator has come full circle to the idea of lost rootedness, the illusion of wholeness, the recognition that the village is not without secrets and lies. In Salih’s postcolonial world, then, we experience a discursive space that suggests none of us can go home again, neither to the certainties of positivism nor to the routines of tradition.
Part II. Global Vernaculars
The new global cultural economy has to be seen as a complex, overlapping, disjunctive order that cannot be understood in terms of existing center-periphery models . . . I propose that an elementary framework for exploring such disjunctions is to look at the relationship between five dimensions of global cultural flows that can be termed (a) ethnoscapes, (b) mediascapes, (c) technoscapes, (d) financescapes, and (e) ideoscapes. The suffix -scape allows us to point to the fluid, irregular shapes of these landscapes . . . [it] also indicate[s] that these are not objectively given relations that look the same from every angle of vision but, rather, that they are deeply perspectival constructs, inflected by the historical, linguistic, and political situatedness of different sorts of actors: nation-states, multinationals, diasporic communities, as well as subnational groupings and movements . . . and even intimate face-to-face groups, such as villages, neighborhoods, and families . . . By ethnoscape, I mean the landscape of persons who constitute the shifting world in which we live: tourists, immigrants, refugees, exiles, guest workers . . .
(Arjun Appadurai, Modernity at Large)
The dilemmas of perspective and representation embodied in the ethnoscape allow us to problematize the ways different groups inhabiting the “same” space may construct and experience it in vastly different ways. We are all shaped by homegrown practices of understanding space at the same time that our lives are increasingly interconnected with the lives of others. The articulation of our particular experiences within international contexts is what I think of as the global vernacular: shifting local ways of relating to a shared global context. In this section I will look at the production of space in the Chenini-Gabès oasis in Tunisia in the colonial and postcolonial eras. The landscape of Chenini provides a sort of irregular palimpsest with which we can map the spatial practices of the different groups who inhabited this space as well as those who merely passed
through: sedentary oasis dwellers, caravans, nomads, explorers, colonial soldiers, government officials, foreign entrepreneurs, migrants headed for Europe, and European tourists. By the same token, this landscape also suggests porous borders between these spaces and reciprocally constructed subjectivities, as the colonizer and the colonized become the tourist and the migrant. While many North Africans certainly experience a sense of exile, deterritorialization, and hybridity imposed upon them by globalization,\(^1\) they structure the space of migration through a cultural pattern of periodic return as well.
This sense of dwelling has its imaginative roots in a kin-based social structure that locates home not in one geographic place, but rather in that place where the family is, inside the larger cycle of a seasonal migration. It is significant in Salih’s *Season of Migration to the North*, for example, that the wanderer Sa’eed grew up with no father, no relatives, no brothers or sisters. His mother, “her face like a mask,” was “like a stranger on the road whom circumstances chanced to bring me.” Sa’eed tells the narrator: “I used to have—you may be surprised—a warm feeling of being free, that there was not a human being, [my] mother or father, to tie me down as a tent peg to a particular spot, a particular domain” (19). Many North Africans resemble the narrator whose migrations remain tied to a particular spot, a particular domain. Like the narrator, they survive by learning to adjust to the dynamics of a world where peoples are increasingly interrelated by educational needs, work orbits, and money flows on the one hand, and increasingly threatened by loss of cultural and political autonomy on the other. A sedentary villager, Salih’s narrator traces his cultural rootedness to Bedouin culture when he stops overnight in the desert: “Lying under this beautiful, compassionate sky, [I] feel that we are all brothers . . . On a night such as this you feel you are able to rise up to the sky on a rope ladder. This is the land of poetry and the possible—and my daughter is named Hope” (112-13). We are not talking about nomadism as a postcolonial trope for a wandering subjectivity, but rather nomadism as a historical experience that has shaped the cultural subjectivity of a specific group. As a recent development study by Saverio Krätli has shown, “[Nomadic] societies usually have long traditions of self-government, with sophisticated institutional structures
and exceptionally high levels of social capital . . . They can be very confident, articulate, and entrepreneurial, have good negotiating and management skills, and show a strong sense of dignity and self-respect.” Nomadic groups were seen by the central authorities as inherently anarchic and dissident; they have been represented as being, by nature, a “drifting, unskilled under-class.” Sedentarization and marginalization were political tools employed both by the French colonizer and, later, by the urban nationalist governments after independence, to bring this cultural independence under state control. The social and economic marginalization that globalization has brought about in places like Chenini, both through IMF-encouraged multinational buyouts of local government-owned industries and through high unemployment, has forced recent generations to become economic nomads. This new nomadism, however, is superimposed upon older cultural patterns of nomadic self-sufficiency, kin-centeredness, and cyclical return. The production of mental and material space in Chenini reflects these varied ethnoscapes, each expressed in its own global vernacular.
Ethnoscape I: The Heterotopia of Development
Monsieur Debureaux [sic] and Castillon de Saint Victor proceeded yesterday to launch their first balloon. Taken by a calm when night fell, it remained until morning near Ras El Oued. During the morning, natives from Chenini tried to pull it to the ground, tear it to pieces, and cut the guiderope. They were stopped by the arrival of the aeronauts who managed to relaunch the balloon in a westerly direction. The natives are now wanted fugitives and they will be punished by the administration.
(Telegram: 15 January 1903, from the French Commissariat in Gabès to the Resident General in Tunis)
The balloon launched by Capitaine Edouard Deburaux (Léo Dex) in 1903 was the outcome of arguments presented to the Academy of Sciences in Paris and the Smithsonian in Washington, D.C., concerning the feasibility of crossing the Sahara by balloon from Gabès to Niger in five days.
This larger project never did materialize, but it had an imaginative precedent in Jules Verne’s novel *Cinq semaines en ballon* (1863). In turn, novels like Verne’s popularized development experiments in the colonies and reinforced the ideologically charged, hidden assumptions upon which colonial exploitations were based. French colonial scientific projects, such as Dex’s, are heterotopic in that they raise questions both about the mix of science and fiction that went into these grand schemes and about the ethics of the use of colonized space. As Dex’s telegram indicates, part of the colonial production of space was the assumption that colonizers had a right to make colonized land into their scientific laboratories. Violation of the space of the local inhabitants, as Frantz Fanon argued in the case of Algeria, is a form of colonial violence that involves not only physical but psychological violation:
Because it is a systematic negation of the other person and a furious determination to deny the other person all attributes of humanity, colonialism forces the people it dominates to ask themselves the question constantly: “In reality, who am I?”
The defensive attitudes created by this violent bringing together of the colonized man and the colonial system form themselves into a structure that then reveals the colonial personality . . . In Algeria there is not simply the domination but the decision to the letter not to occupy anything less than the sum total of the land. The Algerians, the veiled women, the palm trees, and the camels make up the landscape, the natural background to the human presence of the French.
Hostile nature, obstinate and fundamentally rebellious, is in fact represented in the colonies by the bush, by mosquitoes, natives, and fever, and colonization is a success when all this indocile nature has finally been tamed. Railways across the bush, the draining of swamps, and a native population which is nonexistent politically and economically are in fact one and the same thing.
The landscape here is an ethnoscape embodying competing constructions of space. Colonial positivist mythologies provided the general ideology of the superiority of European civilization, and under this banner, warring European interests vied with one another. The native populations, on the other hand, obviously did not consider themselves nonentities either politically or economically, nor were they content to allow either the Bey, who ruled from his capital in Tunis, or the French to move freely in their territory. Even before the French invaded Tunisia in 1881, making Tunisia a French “protectorate,” the ethnoscape included squabbling French scientists and entrepreneurs, other European powers such as Britain and Italy who vied for hegemony over the local rulers, and those local rulers in the urban North who were interested in keeping the Bedouins under control. The attempts of the French to act on the North African landscape as if it were a wilderness, an open laboratory for French experimentation calling out for colonial interventions, were complicated by the recalcitrance of the locals who refused to simply fade into the landscape. Rather, they continually united to oppose the French presence in the south, or they attempted to use that presence for their own local ends. Large colonial projects tended to become heterotopic sites where these situated interests clashed.
Perhaps the most astonishing heterotopic project was Captain François Elié Roudaire’s effort, supported by Ferdinand de Lesseps (of Suez Canal fame) and Saharan explorer Henri Duveyrier, to create an “Inland Sea” by flooding lowlands in the Sahara. A necklace of oases, familiarly referred to as the “baraka belt,” follows the northern edge of the desert from Gabès all the way into Morocco. In addition, a chain of dry salt lakes, or *chotts*, also extends west across the middle of Tunisia from Gabès on the eastern Mediterranean coast into eastern Algeria. Roudaire and Lesseps calculated that by blasting a channel just north of Gabès, they could flood these *chotts*. Had Roudaire had his way, this region (and the oases around it) would have been submerged beneath a man-made lake the size of Lake Geneva. Along the northern and southern shores would have been French-run plantations, shipping industries, and military complexes. The actual project never took place, but it provided the narrative for Jules Verne’s *L’Invasion de la mer* (1905). The projects presented to the French scientific community had all the compensatory promise of being sites where nature would be perfectly engineered, yet they mirrored a reality where categories mix, fact is subverted by its dependence on fictions, and the marginalized refuse to fade into the background but rather contribute to a messy and ill-constructed ethnoscape.
Roudaire, with the backing of the French government and the French geographical societies, led two expeditions into the region to assess the feasibility of creating an inland sea where the *chotts* were.\(^{23}\) The record of the follow-up hearings on the project held by the Ministry of Foreign Affairs contains 546 pages of details about the project, including geological surveys, testimony about environmental impact, engineering calculations, impact on military capabilities, and maps.\(^{24}\) As the details of one map appended to the official report show, the area to be flooded by the inland sea would have surrounded the major date-growing oases of Tozeur and Nefta in Tunisia and submerged much of the territory around El Oued in Algeria. This project would have ruined the intensive agriculture around which the cultures of these oases had developed over centuries. The inland sea would, it was argued, put the French in control of these dissident areas where there were continual uprisings. By flooding the sands into which nomadic groups disappeared, the French could eliminate their escape route. In addition, by inundating the oasis towns, they would eliminate places considered hotbeds of religious maraboutic influence.\(^{25}\) The inland sea, General Favé testified, would undercut the way the nomads fought, because it would allow the French to attack from behind: “we know that [nomadic fighters] leave far to their rear all that they hold most precious, that is, their old people, their wives, their children, their herds, just about everything they possess; the loss of what is called the *Smala* is the worst thing that can happen to a nomadic people.”\(^{26}\)
Roudaire argued that the *chotts* are what remains of the Lake of Triton mentioned in antiquity, which dried up at the beginning of the Christian era. He used references he had found in texts by Herodotus, Homer, Pindar, Scylax, and Pomponius Mela to argue that the Lake of Triton at one time connected the *chotts* to the Mediterranean through a river, now gone, near Gabès. He maintained that there were no remnants of shells or marine life evident because they were covered by sand. He claimed that old anchors, rumored to have been there, had been removed by the local inhabitants. Roudaire, making some leaps of the imagination, attached the place-names used in older texts to contemporary towns and landmarks to support his contention that the desert was once a sea. Opposing this idea, Auguste Pomel, a noted French geologist who was Roudaire’s contemporary, argued that the bedrock along the coast of Gabès precluded the idea that there had ever been a passage there or an inland sea. He testified that the mixture of science and mythology associated with the ill-founded and grandiose idea that the “inland sea” would restore the ancient “Lake of Triton” rather resembled the work of a well-known novelist.\textsuperscript{27}
Backers such as Lesseps defended Roudaire’s project against these attacks by pointing out that, in the case of the Suez Canal, faulty surveying indicating that the land was below sea level had blocked the canal for years, and that arguments that the climate would be damaged and the bitter lakes inundated with salt had been proven wrong. Lesseps allied himself with Roudaire not only on a theoretical level, but on a material one as well. Lesseps, whose father had been French consul in Tunis decades earlier, used his Tunisian connections to obtain a land concession in the name of the Compagnie Concessionnaire (his own name, however, appears as owner of this property on a map housed in the Archives Diplomatiques in Nantes). Lesseps, or his company, was accorded the farmland on either side of the proposed canal to be excavated at Oued Mellah above Gabès that would feed water from the Mediterranean to the Inland Sea.\textsuperscript{28}
Official records speak of a Colonel Joseph Allegro, whose father was an Italian consular agent for the Tunisian Bey in Algeria and whose mother was Arab. Allegro was in Tunisia in 1881 to police the Bedouins to make sure they did not join the Algerians in uprisings against the French.\textsuperscript{29} He was eventually given command of the entire Gabès region, considered important for its caravan trade and port. The Inland Sea project was still being debated at the time, and having Allegro as governor
of the region meant that Lesseps and others could expect policies to favor their interests. Lesseps, who received concessions to buy land cheaply, planned to build a port and drill artesian wells on the land on either side of the Inland Sea canal. He counted on Allegro to recruit local labor and facilitate these commercial projects. Mining and agricultural projects were envisioned as well.\textsuperscript{30} Although Lesseps continued to vigorously defend the project, the Inland Sea scheme was finally squelched in the mid-1880s when Roudaire died suddenly. Even today, the local inhabitants refer to this nineteenth-century concession land as “Lesseebs.” Its red-tiled buildings, a farmhouse and factory, are heterotopic landmarks: once symbols of French colonial modernization schemes, the deserted and crumbling structures stand as emblems of a failed imperial project.\textsuperscript{31}
**Ethnoscape II: The Heterotopia of the Military Archive**
The French government sent 30,000 troops into Tunisia in April of 1881 and, after some minor resistance from the northern tribes and none from the Bey’s army, signed the Treaty of Bardo in May, turning Tunisia into a French protectorate.\textsuperscript{32} Insurrections continued further south, however. By December 1881, French troops had gained control over the Aradh, a central plain of the south around Gabès. The rebels then moved beyond the \textit{chotts} and into the chain of oases along the northern rim of the Sahara, joining up with the powerful confederations of tribes who inhabited the area. By June 1883, when the Treaty of La Marsa was signed, completing the Bardo process and making Tunisia an official French protectorate, these southern territories were still only marginally controlled by the French forces, which sent isolated columns into these areas they defined as given over to anarchy and pillage.\textsuperscript{33} Advisors and interpreters with experience working with tribal administrations accompanied the army. The job of this \textit{Service des Renseignements} (called \textit{Service des Affaires Indigènes} after 1900) was to acquaint themselves with tribal leaders to assess their attitudes “toward the French government,” “to assess the loyalty of the chiefs to the Bey,” and to “collect data on economic, geographic, and demographic conditions.”\textsuperscript{34} Having divided Tunisia into administrative districts called \textit{cercles}, they carried out these tasks from the early 1880s
until 1914. By the late 1880s, fully three-fifths of the officers in the service were assigned to the Gabès region where the nomads had never really accepted government control. Virtually no Europeans were settled in the region due to climatic conditions, so there was little chance of “civilian-military conflict over the treatment of settlers.” Considering this space on a practical level, French forces were interested in policing the Tunisian-Tripolitanian border and in diverting the caravan trade between sub-Saharan Africa and the Mediterranean from Tripoli to Tunisian ports.
The French *Service des Renseignements* created a meticulously compiled military archive, now housed at the *Archives Diplomatiques* in Nantes. The ethnographic reports are not as uniform as the scientific tables of naturalists, which, in Foucault’s words, “squared and spatialized” knowledge by removing the object studied from the ecological system of which it was part. Reading the reports on the Hazem to find out about the tribe of which I was now a member by marriage, I learned much more about the mental landscape of the French administration of the period than about family history. From the time they were established in Algeria in 1844, part of the mission of the *Bureaux Arabes* was to collect information about the indigenous “other.” As Perkins points out, the monthly reports expected of the sub-bureau officers followed a routine format. The officers were often instructed by their superiors in colonial administrations not to mix their own political assessments into the reports. Thus the requirement to report on a monthly basis according to a preset formula, and the concomitant instructions not to express opinions or meddle in official decision-making, often led to misleading reports: “one serious problem was lack of enthusiasm on the part of the officers writing the reports, [it being considered] a bothersome and unprofitable way to spend their time.” The result was that the same reports were resubmitted without change, other than having the old date scratched out and replaced with a current one. Decisions about colonial administration often were handed down from as far away as Tunis or Paris based on this information, so however erroneous or self-serving the reports, they could lead to real effects.
The heterotopia of the military archive bases its ethnographic reports
on two modes of research: the scholarly repository of knowledge about the culture and the local observation of nomadic groups. Feeling that those soldier-scholars assigned to the *Bureaux Arabes* ought to have some minimal training to carry out these tasks, the French administration decided in 1853 that the *Bureaux Arabes* should keep on hand, where possible, some basic texts, such as Herodotus (c. 490–c. 425 BCE), Tacitus (c. 56–c. 120 CE), and Sallust (c. 86–c. 35 BCE). French policy dealing with contemporary inhabitants was buttressed in the minds of some colonialists by so-called hereditary claims to the land made on the basis of its prior occupation by “nos ancêtres, les Romains” (“our ancestors, the Romans”). The *Bureaux Arabes* were also encouraged to collect reference works by indigenous scholars such as Ibn Khaldun and Leo Africanus.
While this is an epistemological improvement over the geographies of alien scholars writing a millennium earlier, a scholar-traveler such as Ibn Khaldun was, nonetheless, distinctly urban in his perspective. Having once been attacked and robbed by nomads, Ibn Khaldun does include passages in his *Muqaddimah* (Introduction) that have created an urban myth about nomads that is still very current. City-dweller Fatima Mernissi reflects this urban perspective in her memoir *Dreams of Trespass*:
[Ibn Khaldun] identified city peoples as the positive poles of Muslim culture, and peripheral peoples, such as peasants and nomads, as the negative, destructive ones. This perception of urban centers as birthplaces of ideas, culture, and wealth, and rural populations as unproductive, rebellious, and undisciplined has infiltrated all Arab visions of development up until our own day. Even today in Morocco, the epithet ‘aroubi, that is, a person of rural origin, is still a commonly heard insult.
The *Muqaddimah* itself is more critical of city-life and more generous toward Bedouin traits than Mernissi suggests. Ibn Khaldun writes:
Sedentary people are much concerned with all kinds of pleasures. They are accustomed to luxury and success in worldly occupations and to indulgence in worldly desires. Therefore, their souls are colored with all kinds of blameworthy and evil qualities... Bedouins may be as concerned with worldly affairs as [sedentary people are]. However... they are closer to the first natural state and more remote from... evil habits... Sedentary life constitutes the last stage of civilization and the point where it begins to decay. It also constitutes the last stage of evil and of remoteness from goodness. It has thus become clear that Bedouins are closer to being good than sedentary people.\textsuperscript{12}
This same problematic—that of observers assigning inverse cultural value to groups opposed to their own—must have also affected the nineteenth-century officers who were trying to use the \textit{Muqaddimah} to interpret the nomads whom they were observing. The divine scheme within which Ibn Khaldun places Bedouin culture is the inverse of the evolutionary scale upon which positivist histories were constructed.
In writing ethnographic reports on the Bedouin tribes, the officers of the \textit{Service des Renseignements} then had at least three mental maps shaping their observations: first, the telescoped time that made them the direct heirs of the Romans; next, the evolutionary time that made the nomads their “contemporary ancestors”; and finally, the military report format set out in a circular issued in 1883. The report begins with a chronology reflecting French military interests: that is, the history of tribe X before the French occupation, the behavior of tribe X after 1881, etc. These histories are based on local oral history, but “the facts,” we are assured, “have been stripped of their superstitious envelopes and of [O]riental amplification.”\textsuperscript{13} The historical genealogies include family trees, but only the males have names. Thus, I found that Si Mohamed el Midassi (caliph of the Hazem in 1887) had seven children: Ali, Mahmed, and five girls. In the genealogies, women are routinely erased as people with names and reduced to gendered numbers. The next
chapters are descriptive: overview of the tribal administrative, judicial, and religious organization; a list of notables such as tribal heads or influential religious leaders; observations about diverse personages; reports on topography, agriculture, commerce, and industry. The final double page of the report form is for statistics in which all the countable elements are plugged into a uniform chart: human population (males, females, and children); warriors; tents and huts; houses; ethnic origin (Kabyles, Arabs, and Berbers); numbers of horses, mules, camels, cattle, sheep, goats, and donkeys; amount of land cultivated. The final column is for general observations. Generally, this last column was empty.
Looking up the clan I now belong to, the Kouarna, I try to read these numbers: they are the largest group, with about twice as many females as males (100 males, 210 females, 175 children); they have 60 tents and no houses; 17 warriors are mentioned, as opposed to 2 or 3 for other similarly sized groups; they have 19 horses (as opposed to 1 to 3 for other groups) and a great many camels, sheep, goats, and donkeys, but no cultivated land. What does this mean? They look completely nomadic. Why did they have so many warriors? Did this have anything to do with the imbalance between men and women in the population? Or was the imbalance explained by polygamy? Why are they the only clan that does not allow the practice of bride price? Why did they need so many caravan animals? Were they traders? Raiders? Smugglers? The Hazem did not have mosques, zaouias (religious retreats), or Qur’anic schools but did belong to Sufi brotherhoods. Brief notes by the officer who wrote the report add some hints. Si Mohamed el Midassi was listed as conscientious and devoted, honest, straightforward, but a fanatic about political and religious independence. These reports were written for the purpose of filling in the blanks to assess whether a given tribe might or might not pose a threat to French power. In the process, the classification of traits leads to the classification and ordering of ethnic groups, and as colonizers, the French had the power to spread the other, the nomads, out on a grid and scrutinize them. As a Western reader of the reports, I found myself implicated in looking at them largely from the perspective of those who wrote them—not because I agreed with the content, but because I shared in
scrutinizing it. The only sign of life was Si Mohamed el Midassi. As Michel de Certeau notes in *The Practice of Everyday Life*: “Something essential is at work in . . . everyday historicity, which cannot be dissociated from the existence of the subjects who are the agents and authors . . . Indeed, like [a] God, who “communicates only with cadavers,” our “[scientific] knowledge seems to consider and tolerate in a social body only inert objects.”\(^{43}\) To see the particulars of the lives of these unknown in-laws, displayed and evaluated according to colonial military categories, was to participate in a form of violence as individual relatives were turned into ethnographic types, and visible everyday facts became clues to the invisible positivist value systems upon which they were inscribed as impoverished or primitive. Si Mohamed el Midassi remained an agent, concretely ambiguous, an unknown man with a name.
**Ethnoscape III: The Tourist and the Migrant**
On the bluff above *Ras el Oued* there is a large parking lot where the tour buses stop to unload masses of French, British, and German tourists for a panoramic view of Chenini, a rare marine oasis of a quarter million palm trees just north of the Sahara. The tourists would have already seen the Roman dam and aqueduct as their buses drove through Chenini on the way to the lookout point. This parking lot marks the border between where my husband’s family lived prior to the 1959 flood that drove them out of the gardens in the ravine, and the street along the top of the bluff where we live now. The grove begins in a deep ravine, *Ras el Oued* (head of the wadi/dry riverbed), and follows the wadi’s course, continually fanning out as it takes the shape of a delta where it meets the Mediterranean. While Gabès, known in antiquity as Tacape, had the reputation of having always been a sort of administrative center (for the Phoenicians it was the end of the caravan route in their trade with the Numidians, for the Carthaginians it became a port, for the Romans a colony, for the French a military outpost), it’s never been much of a tourist attraction—except for Chenini with its extensive gardens, described by al-Tidjani in 1306 as “a true earthly paradise.”\(^{44}\) My husband’s family arrived with the second wave of invading Arabs, the Banu Hilal, in the eleventh century. While urban historians such as Ibn Khaldun described them as a plague of locusts, others note “the Hilalis caused no damage to either the town [of Gabès]—which was then surrounded by a strong wall—or to the oasis, although it was undefended.”\(^{46}\) This restraint contrasted with the actions of those besiegers, working for central powers, such as Yahya ben Ghaniya who in 1195 “laid waste to the oasis where, it is claimed, he left only one palm-tree standing to mark the spot,” Abu Zakariyya under whose hand “its palm-grove was laid waste” in 1286, and Abu'l 'Abbas who “had its date palms cut down” in 1387.\(^{47}\)
Today, from the parking lot, you can see beyond the ravine out onto the desert plains to the west where the smokestacks of the cement plant (now owned by a Portuguese multinational) mark the horizon at Khanzeria, on the road toward the *chotts*. The waterless plain in between has now become a garbage dump where refuse is burned at night; when they shift, the winds that once carried Léo Dex’s balloon bring the smell of smoke. From the parking lot at *Ras el Oued*, the place where my husband was born is still visible on the opposite side of the ravine where the family settled once they became sedentarized.\(^{48}\) This home was a cave hollowed out in the side of a ravine and a sort of platform upon which was built a room of toub (mud), a house architects call a “horizontal subterranean dwelling” or “cave with terrace and ghorfa addition.”\(^{49}\) About 400 yards away, dug deep into the base of the ravine wall, is another sort of subterranean dwelling made by the forces of civilization: an entire warren of German bunkers from WWII. *Ras el Oued* is only about 30 miles north of the historic Mareth Line, and the family remembers its being occupied at different times by various armies—German, Italian, French, British, American—fighting a war in their gardens, a war in which Tunisians had no stake. On the night of my husband’s birth in 1952, so the story goes, French bullets were slamming into the wall of the ghorfa on the terrace as the French military attempted to root out dissidents from the rebellious South.
At the foot of the bluff just below the observation point, tourists can look down at the roofs of the Chela Club, a failed hotel with an empty
swimming pool and a bar that draws the few local drinkers and stranded tourists when it is open. Family pictures from the sixties show the children zooming down a rock formation that became a slide when the irrigation canals were opened and emptied into a large natural pool in the middle of the gardens my husband’s family owned. Pictures of this natural slide and pool are on the postcards sold in local tourist shops. They were exactly where the empty cement hotel pool is today, the real space now a heterotopia of the garden it once was, eternalized in tourist brochures. There are other heterotopias operating here, invisible to the tourist gaze. When Bourguiba’s central government was in its state-run socialist experimental phase, owners were forced to sell the gardens at a tenth of their worth. When the socialist experiment later failed, the hotel consortium acquired the gardens at convenient prices. Thus, the hotel might be seen as a heterotopia created by officials of the nation-state, who were as interested in controlling the peoples of the South as the military authorities of the Bey and the French had been. Later signs of this centralized control are the smokestacks of the Chemical Group, a complex of phosphate processing plants marking the horizon over the Mediterranean. They stand on the spot where Jules Verne’s novel *L’Invasion de la mer* opens. The chemical plants are working a less visible destruction as they drain the water table for use in processing phosphates. Chenini, which used to have 250 natural springs, now has none and pumps water for irrigation.
Loss of the gardens, because of a combination of tourism and industry, led to a return to nomadism for many in the oasis—a new kind of wandering. Younger members of local families migrated to France and started sending back remittances so that their families could survive. This new form of nomadism, despite all of its hardships, is permeated by the mental space of older forms of migration. With two sons sending back remittances from migrant work in France, my in-laws built their houses, which sweep down from the bluff along the ravine’s edge following the curve of a wide street. Our house in Chenini is a typical Arab one; its rooms surround a large black and white tiled courtyard and gardens. Three families live around the courtyard, but two are often away in Tunis
or the U.S. Other houses in the extended family reflect migrations as well. One brother built a French-style villa with surrounding gardens and walls—an architectural space that is the inverse of the Arab house. Another’s Arab house and orchard are currently rented out to caretakers. The man who inherited it won’t use it until he retires and returns from France, where he has been working for 30 years. All the relatives who migrated to France still send remittances home each month, still send their children to spend the summer, still own space and are building in the oasis. Chenini is a place where emigrating, moving away, does not mean staying away, and the material architecture of the house, with its empty spaces and rebar left on the roof for future use, reflects the idea of temporary migration rather than emigration.
The global vernacular of Chenini is also linked to and expressed in poetic form. In nomadic tradition, where oral poetry is a primary art form, the *douar* (circle of tents) represents this poetic space. The architecture of the traditional Arab house, with its rooms bordering a central communal courtyard, mirrors this poetic social space. As Algerian rebel Emir Abd al-Qadir explained in an elegy to desert life, beauty manifests itself in spaces that echo one another, in plays on words: the *beyt ech chi’r* (stanza/room of poetry) and the *beyt ech cha’ar* (a tent/room of hair). Abd al-Qadir understood his *smala* with its tents as a community that echoed a larger natural, cosmic space: “we pitch our tents in circular groups; / The earth is covered with them, as is the firmament with stars.” Abd al-Qadir wrote in other poems that the desert is a place of salubrity: “Ages have told of the salubrity of the Sahara. / All disease and sickness dwell only beneath the roof of cities. / In the Sahara, whoever is not reaped by the sword sees days without number; / Our old men are the most aged of all men.” And in the desert, God has provided “camels that can, in the space of one sun, transport us from the land of injustice to the land of freedom.” In comparison, as Ibn Khaldun also pointed out, the city is not the end point of evolution but the beginning of decay. Abd al-Qadir notes that city dwellers often live in unacceptable conditions, both in their mental and physical space; they dwell in a deleterious production of space: “You camp always in the same place, in the
midst of offal and eaten by lice. Your profession is that of a domestic; working all the time . . . your land is the land of crimes . . . plague, disease, and rulers, who make you into slaves and let you be consumed by the government.”\(^{54}\) These productions of space contrasting the rural with the urban, and the local oasis culture with the metropolitan space of colonialism and postcolonial migration, are reflected still today in the local oasis poetry.
The oases of southern Tunisia are the birthplace of poets, and oral poetry is often played over loudspeakers during festivals. A poetry reading I attended in a neighboring town drew enormous groups of families and had the atmosphere of anticipation and excitement a rock concert would have in the U.S. Poetry is memorized, improvised, and circulated by men, women, and children. One popular form is the *Bedouin-Beldi* debate, a poem in which a girl from the country verbally spars with a girl from the city, traditional knowledge confronts modernization, and the local confronts the global. Typically the *Bedouin-Beldi* debate begins with a framing stanza asking a poet to record the verbal battle: "By Allah, O Poet, pick up your quill and record who provoked the initial ill."\(^{55}\) Representative of such exchanges is this quarrel between Nadia, the Beldiyya from the city, and R'gaia, the Bedouin's daughter:
Now, R'gaia answered,
"You violate modesty, you stray from the narrow,
You parade around the city, like a painted she-camel.
I am a decent girl, my origins are known, my family are Arabs, they protect me at home."
The Beldiyya said,
"You sit hidden away in a well-guarded tent,
To buy clothing for you would be money ill-spent!
You're like a nag well-broken, time's passed you by, and your status is token."
R'gaia responds:
"I can pitch a goat-hair tent and I can survive years of scarcity
We show our guests hospitality, a Bedouin’s tent means salubrity.
But you contract asthma from damp walls, you waste your time on doctor’s calls . . .
You just ape Europeans in the end, wearing a skirt that shows all in a wind . . .”
Nadia says,
“I wear fashionable clothes and frequent the souk
My father puts me in charge, lets me help him at work
But others decide for you, you have no say in what you do.”
. . .
R’gaia advises . . .
“It would be easy for me to lacerate you, between my wisdom and my sharp tongue. . .
If I came to the city, I’d learn to survive
I’d pick things up quickly; unlike you, I’d stay alive
When you leave the city, you’re a fish out of water,
You wouldn’t survive like the Bedouin’s daughter.
You wouldn’t survive in Bedouin lands,
You can neither adapt nor withstand harsh winds.”
The debate typically ends not with a decision about who is the winner, but rather with a recognition that each must allow the other’s perspective and respect the other’s space: “After the quarrel, they rose, made up, and were friends / Each said to the other: ‘I apologize’ and left both hearts at ease.” As with their male compatriots who are migrant workers, the girls are shaped by forces larger than themselves: local customs, cultural imperialism, structural adjustment, global capitalism. But in their recognition of a shared space in which they dwell, they shape global forces as best they can in accordance with their vernacular identities.
*Bedouin-Beldi* debate is a very local and colloquial form in which women speakers address global issues from their corner of the world. It is speech from the margin. The debates address issues such as personal status, gender roles, sustainable development, and community identity.
Performances of the debates vary: sometimes women poets may recite and improvise at popular poetry festivals or wedding ceremonies; sometimes a single poet, either male or female, reports the debate, as in the poem cited above. The *Bedouin-Beldi* debate opens a performance space that, as an ethnoscape, recognizes the instabilities of perspective and representation that define postcolonial subjectivity and the situated and social nature of identity construction. The angle of speech in the poem, if we might call it that, confronts a world that has been defined by patriarchy, colonialism, and masculinist individualism. These are the forces that shape Salih’s *Season of Migration to the North*, the heterotopic French colonial projects, and their contemporary counterparts in the maldevelopment of Chenini. An ideology of winners and losers is produced in this postcolonial space. While the speakers’ critiques of these forces are clear in the barbs they fire off at one another, the elements the speakers share—identity as embedded in community, female agency, and mutual recognition—give this speech its transgressive and transformative power.
My research in Tunisia has been generously supported by two Fulbright grants, grants from the Bureau of Educational and Cultural Exchange, and grants from Oregon State University. I would also like to thank David Ball and the other editors at *Critical Matrix* who with great patience pushed this essay, the first personal essay I have ever written, through several sea changes. Finally, I would like to thank my husband Karim and our extended family in Tunisia for making my house there a home.
1 The term postcolonial in this essay refers to the critical position that involves the reconsideration of colonial history, particularly from the perspective of those who suffered its effects. Emphasizing the contemporary social and cultural impact of colonialism, it foregrounds the spatial overlapping of the colonial period and its aftermath. See Robert J.C. Young, *Postcolonialism: An Historical Introduction* (Oxford: Blackwell, 2001), 4.
2 Lawrence Grossberg, “The Space of Culture, the Power of Space” in *The Post-Colonial Question: Common Skies, Divided Horizons*, ed. Iain Chambers and Lidia Curti (London: Routledge, 1996), 170.
3 Ibid., 171.
4 Michel Foucault, “Of Other Spaces,” trans. Jay Miskowiec, *Diacritics* (Spring 1986): 22–27. As an editor’s footnote in *Diacritics* 16.1 indicates, these notes from a lecture given in March 1967 were published under the title “Des Espaces Autres” by the French journal *Architecture-Mouvement-Continuité* in October 1984. Although not reviewed for publication by the author, thus not part of the official corpus of his work, the manuscript was released into the public domain for an exhibition in Berlin shortly before Foucault’s death (22).
5 Arjun Appadurai, *Modernity at Large: Cultural Dimensions of Globalization* (Minneapolis: U of Minnesota P, 1996).
6 Dick Hebdige, “Subjects in Space,” *New Formations* 11 (1989–90): vi-vii.
7 Ibid., vii.
8 Michel Foucault, “Questions on Geography,” *Power/Knowledge: Selected Interviews & Other Writings 1972-1977*, edited by Colin Gordon, trans. Colin Gordon, Leo Marshall, John Mephan, and Kate Soper (New York: Pantheon, 1980), 69–70. Unless otherwise indicated, as in this case, all translations from the French are my own.
9 The most cited example of this phenomenon is perhaps Lacan’s description of the mirror-stage, when an infant, fragmented by realizing its separateness from the world around it, recognizes with jubilation the unity of its reflection in a mirror. This is also a moment of misrecognition, as the child is not that whole, autonomous being reflected in a real mirror, or in the mirror of the gaze of others. Heterotopias are sites that make us painfully aware that our experience of what Lacan calls the Real is always mediated by the Symbolic. Jacques Lacan, “The Mirror-Stage as Formative of the I as Revealed in Psychoanalytic Experience,” *Écrits: A Selection*, trans. Alan Sheridan (London: Tavistock, 1977), 1–7.
10 Jamil M. Abun-Nasr, *A History of the Maghrib*, 2nd ed. (Cambridge: Cambridge UP, 1975), 256.
11 Conventional approaches to development that speak of former colonies as “underdeveloped” or “developing” assume lack as a starting point. They do not embed development in the cultural, social, and ethical values of these societies. Rather, development schemes are tied to the economic and political agendas of the still influential colonial powers. Local interests and needs, as well as polycentric global opportunities, are blocked by this “maldevelopment.” Escaping maldevelopment requires both a revision of development schemas on the ground and a decolonization of the mind. See Samir Amin, *Maldevelopment: Anatomy of a Global Failure* (London: Zed Books, 1990); Raff Carmen, *Autonomous Development: Humanising the Landscape – An Excursion into Radical Thinking and Practice* (London: Zed Books, 1996); Ngugi wa Thiong’o, *Decolonising the Mind: The Politics of Language in African Literature* (London: James Currey, 1986); and Mohamed Cherif Sahli, *Décoloniser l’histoire: l’Algérie accuse le complot contre les peuples africains* (Alger: Entreprise Algérienne de presse, 1986).
12 Tayeb Salih, *Season of Migration to the North*, trans. Denys Johnson-Davies (Colorado Springs: Three Continents Press, 1984), first ed. Arabic, 1969. For an extended discussion of hybridity in the novel, see Patricia Geesey, “Cultural Hybridity and Contamination in Tayeb Salih’s *Mawsim al-hijra ila al-Shamal* (*Season of Migration to the North*),” *Research in African Literatures* 28.3 (Fall 1997): 128–140.
13 Saree S. Makdisi, “The Empire Renarrated: *Season of Migration to the North* and the Reinvention of the Present” in *Colonial Discourse and Post-Colonial Theory: A Reader*, edited by Patrick Williams and Laura Chrisman (New York: Columbia UP, 1994), 542.
14 The *afreet* is “a demon or spirit from the Djinn world, of great strength and cunning; often a snatcher of women.” See Inea Bushnaq, *Arab Folktales* (New York: Pantheon, 1986), xxvii.
15 For a discussion of this process, see: Diana Fuss, *Identification Papers* (London: Routledge, 1995), 35; and Johannes Fabian, *Time and the Other: How Anthropology Makes Its Object* (New York: Columbia UP, 1983).
Mannoni’s book includes, for example, psychoanalytic interpretations of seven dreams recounted by the Malagasy. Projecting his own racial bias into the world of the Malagasy, Mannoni interprets the recurrence of black bulls, black men, and Senegalese soldiers in the dreams of the invaded people he interviewed, as real and ancestral father figures; a gun, in this reading, becomes an obvious phallic symbol. As Frantz Fanon pointed out in a critique of Mannoni’s *Prospero and Caliban*, ethnopsychiatry conveniently pathologized mental illness as located in the genetic make-up of the colonized individual rather than in pathogenic colonial relations. Fanon points out that Malagasy dreams should be interpreted in relation to their real context—colonial exploitation. Eighty thousand Malagasy, 1 out of every 50 natives on the island, were killed by French forces: “The rifle of the Senegalese soldier is not a penis but a genuine rifle, model Lebel 1916” (83). See Frantz Fanon, *Black Skin, White Masks* (New York: Grove Press, 1967). For a more lengthy discussion of the import of Fanon’s critique, see David Macey, *Frantz Fanon: A Biography* (New York: Picador, 2000), 188–192.
Their leaving home is not quite defined by the displacement and nostalgia that Edward Said speaks of in “Reflections on Exile” where exile “is the unhealable rift between a human being and a native place, between the self and its true home: its essential sadness can never be surmounted,” *Reflections on Exile and Other Essays* (Cambridge, MA: Harvard UP, 2000), 173. Nor is it synonymous with the trauma of the deterritorialized. See May Joseph, *Nomadic Identities: The Performance of Citizenship* (Minneapolis: U of Minnesota P, 1999), who notes that for young “non-African” citizens of Asian descent expelled suddenly from Tanzania, Uganda, and Kenya in the early years of independence for reasons their parents found “unspeakable,” the sense of being inexplicably deterritorialized was traumatic: “Asians of my generation found themselves adrift in new countries of domicile, with no explanations for the hasty farewells and abrupt departures, no narrative of return to make the leaving more bearable, faced instead with a recalcitrant silence” (2).
Saverio Krätli, “Education Provision to Nomadic Pastoralists: A Literature Review,” IDS Working Paper 126.
Ibid.
“Télégramme 15 janvier 1903,” Protectorat Tunisie, 1ère versement, “Explorateurs,” No. 958.1, Archives diplomatique à Nantes.
Verne used explorers and scientists such as Ferdinand de Lesseps and Léo Dex as the protagonists of his science fiction books. Given the telegram Dex sent, indicating the non-cooperation of indigenous groups in this scientific endeavor, we are justified in pointing out that fictions were part and parcel of the scientific projects proposed by the colonial scientists. For example, Dex argued that he could carry out his project by sending an unmanned craft from Gabès to Niger: “If it crashes on the way, it would have been sighted, in any case by the nomads of the desert, who, because its passage would be an extraordinary phenomenon in their eyes, would carry the news, which would allow us at least to have an idea of the course followed by this balloon and perhaps to find its wreckage as well as the data recorders with which it would be equipped.” “Exposé Sommaire,” Protectorat Tunisie, 1ère versement, “Explorateurs,” No. 958.1, Archives Diplomatique à Nantes.
23 Frantz Fanon, *The Wretched of the Earth*, trans. Constance Farrington (New York: Grove, 1963) 250; first ed. French 1961.
24 In 1874 Roudaire worked his way from Chegga west of the Algerian *chotts*, along the northern edge of the lakes to the Tunisian border, then back along the southern edge to Chegga again. Amidst blinding light, burning days, and freezing nights during which his crew was allowed only four hours sleep, Roudaire took measurements to determine whether the *chotts* were below sea-level. He presented the findings of his survey at the 1875 International Congress of Geographic Sciences in Paris to great acclaim, and was backed by a commission of the French Academy of Sciences. Another expedition to be launched in 1876 ran into snags when it was noted that sending a military force into Tunisia, then an independent country, and planning to flood it might not sit well with the inhabitants of Tunisia. Roudaire managed to replace French military personnel with Tunisian military personnel, and carried out the second survey in 1876, returning to become the talk of the town in Paris.
25 *Commission Supérieure pour l’examen du Projet de Mer Intérieure dans le sud de l’Algérie et de la Tunisie* (1882), Protectorat Tunisie, 1ère versement, “Mer Intérieure (1881–1885),” No. 1269.
26 Marabout in the nineteenth-century lexicon of Tunisia and eastern Algeria referred to men and women devoted to God’s adoration and linked to God by saintliness. See Julia Clancy-Smith, *Rebel and Saint: Muslim Notables, Populist Protest, Colonial Encounters. Algeria and Tunisia, 1800–1904* (Berkeley: U of California P, 1994).
27 “Mer intérieure Africaine” (1882) np. in *Commission Supérieure*. The *smala* is the circle of tents belonging to a nomadic tribe. Perhaps the most famous example of what a *smala* could contain and represent was found in the roving capital of Emir Abd al-Qadir of Algeria, who fought a guerrilla war against the French from 1832 to 1847. His *smala* was a nomadic group of 30,000 people, protected by 5,000 soldiers, which continually eluded the French military, according to historians Bruno Etienne, *Abdelkader. Isthme des isthmes* (Paris: Hachette, 1994) and Charles André Julien, *Histoire de l’Algérie contemporaine*, Vol. 1 (Paris: Presses universitaires de France, 1964). Aouli, Redjala, and Zoummeroff, in their biography *Abd el-Kader* (Paris: Fayard, 1994), say the *smala* was made up of more than 339 *douars* (tent circles) numbering between 60,000 and 70,000 people. It contained Abd al-Qadir’s treasury and his flocks. In it were the families of his followers, as well as his own family.
27 *Commission Supérieure* 1882, 53. Belief in this story that the *chotts* were the Lake of Triton was not only embraced by literary travelers such as the Englishman Thomas Shaw but also by explorers such as Henri Duveyrier, and by noted geographers such as James Rennell whose career has been analyzed by Mathew Edney in *Mapping an Empire: The Geographical Construction of British India, 1765-1843* (Chicago: U of Chicago P, 1997).
28 “Projet d’acte de Concession de S. A. le Bey de Tunis,” article 7 in *Commission Supérieure*.
29 André Martel, *Luis-Arnold et Joseph Allegro. Consuls du Bey de Tunis à Bône: A l’arrière-plan des relations franco-maghrebines, 1830—1881* (Paris: Presses universitaires de France, 1967), 9, 193.
30 Ibid., 196.
31 Jacques Berque, in his study *French North Africa: The Maghrib Between Two World Wars* (New York: Praeger, 1967), discusses why the vineyard (which the French saw as the symbol of modernization) and the tilled field (which was the image of the wisdom and bounty of the metropolis) had precisely the opposite effect on the dispossessed native farmer, for whom the vineyard and its farmhouse were stigmas on the land: “The most provocative symbol of the colonial epoch in the Maghrib is that of the tiled farmhouse, a cheerful dwelling standing amid vineyards. It aroused the most violent, and violently opposed, reactions from Frenchmen and the people of the Maghrib. The fact that it was surrounded by more significant forces matters little; it implied all the rest. Banks, military camps, factories and schools may have played at least as important a part, but none made so deep an impression on everyone’s feelings as this French farmstead, this heraldic emblem on African soil” (35).
32 La Capitaine Chavanne, *Historique du Service des Affaires Indigènes de Tunisie (1881-1930)* (Tunis: Résidence Générale de France à Tunis, 1931), 7.
33 Ibid., 8.
34 Kenneth J. Perkins, *Qaids, Captains, and Colon: French Military Administration in the Colonial Maghrib, 1844-1934* (New York: Africana, 1981), 22.
35 Ibid., 24.
36 Michel Foucault, *The Order of Things: An Archeology of the Human Sciences* (New York: Vintage, 1973), 132.
37 Perkins, 68.
38 Ibid., 70-71.
39 This Roman legacy was often referred to by the colonialist writer Louis Bertrand. During the colonial period, schools taught students from the Maghrib about “nos ancêtres, les Gaulois.” After independence, Tunisian writer Salah Garmadi rejected neo-colonialism by making reference to “nos ancêtres, les Bédouins.”
40 Perkins, 61.
41 Fatima Mernissi, *Dreams of Trespass: Tales of a Harem Girlhood* (Reading, MA: Addison-Wesley, 1994), 69. Talking with Monia Hejaiej, whose *Behind Closed Doors: Women’s Oral Narratives in Tunis* (1996) contains several tales whose meanings pivot on the differences between Beldi (urban Tunisian) culture and Bedouin culture, and listening to stories told by my in-laws, I have found that the same general narrative may be told in both places, but the value assigned urban or rural culture shifts according to the context of the storyteller. See Monia Hejaiej, *Behind Closed Doors: Women’s Oral Narratives in Tunis* (New Brunswick, NJ: Rutgers UP, 1996), which includes the stories “The Peasant,” “Long Live the Beldi,” “The Bedouin and the Trousers,” and “The Peasant and the Beldi.”
42 Ibn Khaldun, *The Muqaddimah: An Introduction to History*, Vol. 1, trans. Franz Rosenthal (New York: Bollingen Series XLIII, Pantheon, 1958), 255.
43 “Tribu des Hazems,” p. 2 in “Gabès: Notices politiques et ethniques des Caïdats et Khalîfâts, 1919/1950,” Protectorat Tunisie, 1ère versement, “Tunisie -Postes du Sud,” No. 450, AD.
44 Michel de Certeau, *The Practice of Everyday Life*, trans. Steven F. Rendall (Berkeley: U of California P, 1984), 20–21; first ed. French 1980.
45 “Kabis,” *Encyclopedia of Islam*, Ed. Van Donzel, Lewis, and Pellat, vol. IV, 65-66 (1974), 339.
46 Ibid., 337.
47 Ibid., 338.
48 Were we to follow the lead of the officers of the *Bureaux Arabes* and *Service des Renseignements*, judging them according to the pattern set out by Pliny the Elder (A.D. 23–79), we would discover that “The Cave-dwellers hollow out caverns, which are their dwellings; they live on the flesh of snakes, and they have no voice, but only make squeaking noises, being entirely devoid of intercourse by speech.” Pliny the Elder, *Natural History*, Vol. 2, trans. H. Rackham (Cambridge, MA: Harvard UP, 1942), 251.
49 Gideon S. Golany, *Earth-Sheltered Dwelling in Tunisia: Ancient Lessons for Modern Design* (Newark: U of Delaware P, 1988) 20, 32.
50 C. Sonneck, *Chants Arabes du Maghreb* (Paris: Librarie Orientale et Américaine, 1904), 218; see also Muhammad ibn Abd al-Qadir Jaza’iri, *Kitab Tuhfat al-za’ir fi ma’athir al-Amir ‘Abd al-Qadir wa-akhbar al-Jaza’ir* (Alexandria, 1903), 531.
51 General Eugène Daumas, *The Horses of the Sahara and the Manners of the Desert with Commentaries by the Emir Abd-el-Kader*, trans. James Hutton (London: Wm H. Allen & Co., 1863), 193; first ed. French 1858.
52 Ibid., 195.
53 Eugène Daumas and Ausone de Chancel, *Le Grand Désert ou Itinéraire d’une caravane du Sahara au Pays des Nègres* (Paris: Quintette, 1985), 297. Reprinted from *Le Grand Désert ou Itinéraire d’une caravane du Sahara au Pays des Nègres* (Paris: Michel Lèvy Frères, 1856).
54 Ibid.
55 Translated from Ali Ardhaoui, *Best Poetic Qasidas # 2*, Side A (audio tape in Tunisian Arabic). I thank Karim Hamdy for collaborating with me on this translation.
NAVSTA Champs....2nd year
Captain’s Call
by Captain John M. Will, Jr.
Adios, amigos.
This is my last opportunity to communicate with you via this column. I want to use it to express my appreciation for the complete support given me by all levels of seniority within this command. I could not have asked for a finer group of shipmates and friends. In this, I am speaking not only of you, the officers and crew of CANOPUS, but also of your lovely and supporting families as well. You are true professionals, proud of the job you do, and possessive of the CAN-DO spirit that has become the CANOPUS hallmark. I would like to stay on as your Commanding Officer, but to do so would prevent someone else from having the pleasure of working with the best team in the Navy. My relief, Captain Dwaine Griffith, comes aboard with an excellent background and an outstanding reputation. I would not want to turn the reins over to a lesser qualified individual. I am sure he will continue those policies which are beneficial and improve on those for which change would be advantageous. I look forward to watching CANOPUS’ continued growth and her receipt of well deserved plaudits and awards. I’ll not say goodbye, but farewell, with the hope that we may serve together again. My best wishes to all of you for the future.
J. M. Will, Jr.
Dear CANOPUS Family,
In recent months, we have happily “hailed” the new and regretfully “farewelled” the departing members of our ever changing group. And now comes the most difficult time of all for John, me, and the Will brigade--saying goodbye ourselves.
During two enriching years with you, we have enjoyed numerous successes and a few hardships together. I wish time would permit me to thank each one of you individually and personally, again and again, for your loving support and sincere thoughtfulness to us.
We will miss you! We will always remember you! May we see you soon!
Smooth sailing and fond wishes from all the Wills.
Linda Will
Editorial
So you think you know who you are.
Prove it!
"Why should I carry my I. D. card around with me? I live on board." "You only need your I. D. card if you are going to leave the ship."
You have probably heard some of these statements before. I should say, you have probably heard some of these EXCUSES before.
The main and only purpose of an I. D. card (or, in the case of restricted men, a form stating that your I. D. card is being held) is to identify yourself correctly and beyond the shadow of a doubt if the need should arise.
For example, if you are asked to identify yourself and you tell the Marine security guard that you don't have an I. D. card, he can only assume that you are a security violator. This situation could get quite uncomfortable for you. You have better things to do than to be marched to the Security Office, and your division officer has better things to do than to come down and identify you.
Face it fellows, a military I. D. card weighs all of one half of an ounce. That is even with extra heavy lamination. You surely won't break your back by carrying it in your pocket. You wouldn't dare leave it in the "other pair of pants" on payday, so get in the practice of always having it with you. It will save you a lot of trouble.
Chaplain's Corner
by Chaplain (LCDR) R. R. Crowe
Their car was exceeding the speed limit and there was a light rain on the road, making driving even more hazardous. The wife nervously says to the husband, "You're driving too fast! You'll get a ticket or cause an accident!" He smiles and says confidently, "No I won't, honey, cause I get special protection from the Lord." At that moment, there is a bright flash of light in the road ahead as a lightning bolt strikes a tree in front of the car. Immediately the car slows down to the legal speed and the husband looks up and soberly says, "I was only kidding about that, Lord!"
We usually get away with our wrongdoing because nothing dramatic happens like in the story above. We think, "Hey, I can get away with it." Most punishment for wrongdoing is deferred, not immediately executed. Excessive eating, drinking, or general abuse of our bodies takes a while to have its effect. We ignore the immediate symptoms, but later the full price is exacted. Excessive anger, fear, worry, or depression also have long-term payments in the form of a host of degenerative diseases.
Along more subtle lines, we also break moral laws with seeming impunity. The payment is the destruction of once beautiful lives. Next to God's name, the holiest thing in the universe is his institution of marriage. If we fail to see the sanctity of marriage and act accordingly, we pay in the loss of the greatest joys known to men and women. Continual disrespect results in the loss of peace of mind, then the loss of happiness, and finally the loss of even the sexual function itself. We cannot run roughshod over the precious gifts the creator has endowed us with. Men and women are more than just animals. It is poetic irony that the very gift we center our lives around becomes lost itself. We think that we have gotten away with it, but not so---the payment comes later.
"Because sentence against an evil deed is not executed speedily, the heart of the sons of men is fully set to do evil." Ecclesiastes 8:11.
FEEDBACK: Where are we going?
No one can deny that the world is changing, and changing fast.
The liberal forces are in a continual battle with the reactionary forces to the right. The technological capabilities that should help improve man's lot and draw him closer together seem instead to have emphasized his differences. In a period that is bringing the Communist Party into the governments of our NATO allies, we still find a 300 year old religious war in Northern Ireland.
In a similar situation, we find ourselves trying to solve the problems of alcohol abuse and at the same time there is a growing effort to legalize the various drugs being used for recreation.
Our President is pushing the human rights issue around the world, and women's rights groups have brought the female touch to all but the combat arms of the Defense Department. Previously, we have looked at these problems in the Helmsman through the Feedback column. It seems everyone was in favor of equal rights for women: if they can do the job, they are welcome to it. There is a strong current in favor of easing restrictions on drugs, particularly among the younger crew members.
This month, instead of walking the ship and getting a random sample of opinion, we would like to aim for a broader sample of opinion. Well written opinions will be published in future issues of the Helmsman, and we hope to create a forum for the discussion of events and problems of interest to the crew. To kick it off, we would like to look at the human rights situation, in particular, homosexual rights.
With the impending assignment of women to service craft, we assume that there will be an increase in the sexual interaction of the crew, which we also assume is, at this time, nonexistent. Granted that women will be as hardworking and professional as, if not more so than, their male counterparts, there is a natural and normal attraction for the opposite sex which will sooner or later take its course.
Since this end result is sexual relations between two sailors, where do you draw the line? Obviously, this behavior is inappropriate on the ship, but what about on the beach? If there is nothing wrong with heterosexual relations between two sailors while on their own time, why should there be any limitations against homosexual activities? If it is all right to use drugs off the ship (as many people seem to believe), why start drawing lines at sexual preferences?
As long as a man does his job well and does not infringe the rights of others, should he not have the same rights as all other citizens? Logically, sex or sexual preferences should have no bearing on a person's qualifications for a job. If a person is qualified and competent, he should have every opportunity to perform that job without discrimination. If the Equal Rights Amendment is passed and it allows no discrimination because of sex, does that mean that homosexuals will be guaranteed the right to serve on ships, as well as women? If this is true, there will be a lot of adjustment required.
Are you ready for that much adjustment?
"No, you idiot, I said 'Bring me the harbor PILOT'!"
"No, it's not a tiger, but it could be a bear. The CANOPUS cruise book is in the works but we need your help. Anyone with experience or a desire to get involved with the best cruise book ever produced, come down to the Public Affairs Office or call ext. 230."
Human Goals Officer
In this issue I would like to address middle managers and give you some basic information about human goals. For the benefit of those who are not aware of what is included in the human goals program, I will simply name the major areas:
* Leadership and Management training
* Organizational development
* Overseas diplomacy
* Equal opportunity and race relations
* Drug and alcohol education
Many people in the Navy have a misconception about the purpose and objective of human goals and usually will use the label “human relations” for a program they know little about. In addition, many people think that the purpose and objective is to have everyone be friends, to be chummy, or to have one big happy family. Let me clearly state the overall objective and the basic purpose. The overall objective of the human goals program is mission accomplishment. The purpose is to do so in the most EFFECTIVE ways possible.
My primary job on this ship is to assist command through the human goals program and through any other way the command desires to utilize my skills for mission accomplishment. I’ve brought with me seventeen years of experience in all sizes and types of ships in the U.S. Navy. I have had extensive training plus three years of experience in the human goals field. I have assisted no less than thirty ships and stations in the field and I have a great deal of expertise in human goals which is available to you.
Before human goals programs came into being, problems and problem areas were dealt with from the standpoint of leadership and management. In today’s Navy, and more specifically aboard this ship, I see that managers generally commit one of two errors.
1. They never refer people to other resources. They feel that they can handle their own problems by themselves. There is no doubt that they meet with some success—but couldn’t the long hours and manpower expended have been less costly by referring that individual to a resource with special training in that problem area? Or perhaps training is available to the manager to enable him to deal with the problem more effectively.
2. They refer ALL their problems to someone else—usually to a resource with special training—and along with this they expect one hundred percent success. In addition, they place all responsibility on that specialist. For example, a supervisor takes (refers) a man to the alcohol counselor and expects magic to happen—the supervisor then assumes that he has fulfilled his obligation and that the counselor now ‘owns’ the problem.
Common sense runs deep through effective leadership and management practices. We can go to extremes such as the ones cited above, or we can face our responsibilities as managers and do what is right. It is my firm belief that most of our problems can be solved through the use of effective management principles—and for the problems we can’t solve, there are other human resources that can assist us. There are many ways to the top of the mountain; what works for one may not work for another.
Leadership and management training is available, as is organizational development training—take advantage of it; it may help you become more effective than you already are. And for those who live in the past or are still clinging to the dream of how it was on “the Lake,” “the SIMON LAKE,” or “the other ship,” perhaps that is just an excuse for what you can’t do. And if your reply is “They won’t let me!” then perhaps you’ve quit trying. But fear not—I can also give a course in “institutionalized common sense.”
I’ve observed many of our managers who have retired “on the job”—how about joining up again? It can be worthwhile. There are many people on this ship who give of their time and talents to human goals. Do you know who they are?
LCDR Crowe—Chaplain
LT Garcia—Medical Officer
CWO4 Crisp—Equal Opportunity Assistant
FTCM Trimble—Command Master Chief Petty Officer
SKCM Groom—Equal Opportunity Program Specialist
MRC Traff—Alcohol Counselor
QMC Gould—Drug and Alcohol Program Assistant
DPC Johnson—Racial Awareness Facilitator
MR1 Stocking—Collateral Duty Alcohol Counselor
I encourage all of you to make use of the resources available. As for me, I am here to serve.
RMC Pacheco (EXT 368)
ETN2 Jeffery Westrick and his wife Janis are smiling because he has just reenlisted for guaranteed assignment to a school of his choice and almost $12,000.
Paths to a commission
The principal purposes of the Navy's officer training programs are to increase the usefulness of the individual to the Navy and to provide the climate and career incentives that will attract and retain high-quality enlisted personnel.
- **Naval Academy.** The Naval Academy at Annapolis provides four years of college study leading to a Bachelor of Science degree and to a commission in the Regular Navy or Marine Corps.
- **Naval Reserve Officers Training Corps (NROTC).** This program supplements the Naval Academy by training young enlisted men and women for careers as commissioned naval officers while they are attending the college or university of their choice. There is also a two-year NROTC college program designed to augment the current four year program. It is available to those who meet the eligibility requirements.
NAVAL RESERVE OFFICER PROGRAMS
- **Officer Candidate School (Men).** The Officer Candidate School (OCS) program, for men who already possess a bachelor’s degree, provides officer indoctrination training for selectees at Newport, R.I.
- **Aviation Officer Candidate (AOC).** This program provides pilot training for selected applicants leading to a commission in the Naval Reserve and designation as Naval Aviator.
- **Naval Flight Officer Candidate (NFOC).** This program provides training leading to final designation as a Naval Flight Officer or Air Intelligence Officer.
- **Navy Judge Advocate General Corps (JAG).** To qualify for the JAG program, Navy persons must first graduate from an accredited law school and be a member of the bar. When selected for this program, Navy people are appointed lieutenants and receive training at both OCS and the Naval Justice School.
- **Navy Physician’s Assistant Program.** Under this program, enlisted men and women in pay grade E-5 or above may receive an additional three years of general medical training and an appointment as Physician’s Assistant Warrant Officer.
- **Limited Duty Officer (LDO) Program.** This program is open to male personnel, E-6, 7 or 8, who have completed between eight and 16 years of active naval service. The LDO serves as a technical manager in fields embracing his enlisted specialty.
- **Broadened Opportunity for Officer Selection and Training (BOOST).** This program opens up careers as naval officers to enlisted men and women who have demonstrated leadership potential, but whose academic backgrounds may be inadequate. BOOST provides up to two years of individually tailored instruction aimed at preparing students to compete for entry into the Naval Academy or NROTC officer procurement programs.
OTHER PROGRAMS
- **Medical Service Corps.** Senior hospital corpsmen (HM) and dental technicians (DT) who possess the necessary qualifications and motivation have an opportunity to compete for commissions in the Medical Service Corps.
- **Warrant Officer Program.** The warrant officer program provides a direct path of advancement to warrant officer status for outstanding enlisted men and women on active duty in the Regular Navy or Naval Reserve.
The schools and programs are there for you to use. Stop in and talk it over with your career counselor. Don’t be a “Steam’n Seaman” for four years. Your future in the Navy is loaded with opportunity, limited only by your desire to take advantage of it.
SAILOR OF THE MONTH
Frederick H. McCaslin Jr.
How exactly does one get to be named Sailor of the Month? Hull Maintenance Technician Fireman Frederick H. McCaslin, Jr., CANOPUS Sailor of the Month for July, says, “Do what you are told and give 100%. I also try to get along with everybody.”
HTFN McCaslin entered the Navy on April 28th, 1977. Upon completion of basic training at Great Lakes, he went on to Philadelphia for HT “A” school. The Mills, Pennsylvania native then reported to the CANOPUS on November 19, 1977. After a brief stint of mess cooking, HTFN McCaslin was assigned to R-1, where he works as the divisional supply petty officer. He says that he enjoys working with the supply system because it is furthering his business education, but wishes he could work more in his rating.
“I like the HT rating because there are a variety of job areas I can work in. I am not restricted to one set pattern. For example, I can go into shipfitting, carpentry, sheet metal work and many other areas,” explains HTFN McCaslin.
“HTFN McCaslin puts in a lot of extra time and accepts changes to his daily routine without any complaints,” says HTCM James Roney, Assistant Hull Repair Officer. “He is an outstanding young man—the calibre of person we like to see in the Navy.”
When not at work, HTFN McCaslin actively participates in divisional functions as well as the CANOPUS intramural athletic program. He also enjoys touring in Spain.
That is what one has to do to be named Sailor of the Month.
Conservation:
Turn the tap off while brushing your teeth. By wetting the brush before and rinsing briefly afterwards, you can save up to nine gallons of water.
Double sea duty credit
WASHINGTON, D.C. (NES) . . . How would you like to go to sea for one year and get sea duty credit for two years? You can do this and see the exotic Middle East at the same time if you volunteer for a one year tour on board USS LaSalle (AGF 3).
BUPERSNOTE 1300 of May 25, 1977, instituted the double sea duty credit. The homeport of Commander, Middle East Force Staff and the flagship LaSalle is Norfolk and a forthcoming revision to OPNAVINST 4600.16B will list them as “permanently deployed.” As such, personnel ordered to these units are entitled to have their dependents transported to the homeport of Norfolk or the location of their choice in the United States. Additionally, you can be guaranteed a return to your present duty location or assignment to one of your duty preferences (consistent with billet availability) following your one year tour on LaSalle. Volunteers are needed for all shipboard ratings, and you may request this duty via Enlisted Transfer and Special Duty Request (NAVPERS 1306/7).
CANOPUS ALL-STARS
NAVSTA Rota champs
2nd place NAVEUR champs
It was a very good year. CANOPUS destroyed all competition on Naval Station Rota, finishing the year with a 17-3 softball record, undisputed champions. On to the NAVEUR tournament in Naples for more of the same: we had the best statistics and the best play until the very last, when the All-Star team from Rota gained a little revenge. Even so, four of our players went on to the All-Navy competition: MT1 Gale Bridges, SK2 Walter Stahl, DP1 Mick Murray and LCpl Rick Ruggiero. While the entire team didn’t make it back to the States, it took the talents of the best players from the best league in Europe to stop them. And we were already champions of that league. We know All-Navy caliber players when we see them, and the whole team could have gone on. Congratulations on a real fine season, to some fine players.
Number 10, Mike Brandt gets a piece of one to give the ‘enemy’ something to worry about.
It’s the ones that count. Navy Rick Ruggiero runs home.
A good offense still needs a strong defense, and the CANOPUS had both this season. Ken Adams takes the throw in plenty of time.
The ones that count that make the game. All-star Rick Ruggiero coming home after a three-run triple to start the game from the right field line.
Coach Wally Waldhauser lends a hand with the gear. Don't worry Chief, even Billy Martin sometimes gets left holding the bag.
This makes it all worthwhile, receiving the congratulations of the losing team.
Absentee voting: an essential duty
The importance of voting, whether you're at home, overseas, at sea or living in other than your state of legal residence, was summed up by Edmund Burke: "All that is necessary for injustice to prevail is for enough good men to do nothing."
The vote is the key to our whole system of government. The hallmark of our democracy is the control that we, the people, exercise over our government by our free vote. But it takes a majority of informed citizens, conscientiously voting in each election, to assure that our democratic rights are preserved. If too many people are too busy, too indifferent, or too lacking in conviction to vote, we run the risks that go hand in hand with minority rule. History has shown us case after case in which individual liberties have been lost because the majority of the people neglected their responsibilities.
In the United States, we elect more than 500,000 public officials. They serve at all levels of government and their decisions directly influence our lives and well being. All of these officials derive their authority from us, the voters who elected them.
Our fundamental right to choose those who will make the decisions that affect our lives does not diminish as the distance from the voting booth increases. You can vote, using the absentee ballot. It is really very simple: first, register as a voter (if required); then request an absentee ballot; finally, vote and return the ballot to your election district.
You will probably get your absentee ballot by using the Federal Post Card Application (FPCA). The FPCA is a postage-free postcard printed and distributed by the federal government for use by absentee voters.
Mail the FPCA in ample time to comply with your state deadline. Be certain that enough time is allowed to accomplish whatever additional actions are required by state law to obtain, mark and return an absentee ballot in time for it to be counted.
The FPCA does not require any postage if mailed in a U. S. postal facility--including military post offices.
If the FPCA reaches its proper destination before the applicable deadlines and if it has been filled out correctly, you should receive something back in the mail--either a form to complete and return, an absentee ballot or both, depending on your state.
If you receive no communication from the state within a reasonable length of time, submit a second FPCA. Through your voting assistance officer, you should also bring such problems to the attention of department and voting action officers, who will see the matter is investigated. Keep in mind that in such a situation, time is of the essence.
Using an FPCA form to obtain an absentee ballot may not be the only method of voting available to you. The existence of alternative procedures depends upon three major factors: who the voter is, where the voter is and the law of the voter's state of legal residence.
To be sure you do it right and to get the assistance in filing the necessary form(s), see your voting assistance officer or voting representative right away.
The Overseas Citizens Voting Rights Act of 1975 establishes procedures for absentee registration and voting in all federal elections (i.e., for the President, Vice President, U. S. Senators and U. S. Representatives) for the benefit of any person who can satisfy all of the following requirements: be a U. S. citizen, reside outside the United States and have a valid passport or military I. D. card and registration issued under the authority of the Secretary of State. People who fulfill these requirements are authorized to register and vote absentee in the state or election district in which they were last domiciled before leaving the United States. Further, such persons must, at the time of departure, meet all qualifications to vote in federal elections under current state law; must meet all state or district qualifications consistent with the voting act; and must not maintain a home, be registered to vote, nor vote in any other state, election district or U. S. territory or possession.
MM2 Jose Basilio reenlists with the aid of his Dept. head, LT Ramsey. Petty Officer Basilio shipped for a school of his choice in the STAR program and received over $11,000.
Rota Divers do it....UNDERWATER!
by ET1 Richard Ames
For three weeks in late June and early July, ten people made the most of an opportunity to learn about SCUBA (Self-Contained Underwater Breathing Apparatus) diving. Classes were taught by CANOPUS' own TM2 Nick Alexiades in conjunction with the Aquanauts Diving Club.
Prior to commencing the classes, students were required to demonstrate a minimum level of swimming ability and watermanship by completing a practical test. The test consisted of swimming sixteen lengths of a pool, treading water for fifteen minutes and then swimming half the length of the pool underwater.
"Nick did most of the talking."
Considerable preparation was necessary before the students could make their first ocean dive. Safe diving, with or without SCUBA equipment, includes a thorough knowledge and practice of basic water skills. Classes in the practical skills and theories related to safe diving were provided; these covered the physics of diving; medical aspects of diving; fundamentals of compressed gases used during diving; skin and SCUBA diving equipment; first aid for divers; marine environment and marine life; how to plan a SCUBA dive; the "buddy system"; underwater communications and emergency plans.
The practical skills were taught in a pool. Initial instruction within a closed area allowed the students to familiarize themselves with the equipment and how to use it safely and effectively. After learning the basic skin diving skills, the students progressed to the use of SCUBA equipment. It was now time for the skills learned and practiced in the pool to be used in the ocean.
"The surf was minimal, but surf entries were practiced nevertheless."
The open water classes were held at Trafalgar, about an hour and a half drive from Rota. This is a popular place for diving here in the area. The first class at Trafalgar concentrated on the use of skin diving equipment in the open water. Dr. Jan Garcia gave a lesson on practical first aid. While in the ocean, students practiced towing, mask clearing, and ear equalization.
"First Aid: The Doctor's turn."
The second class held at Trafalgar was the first open water SCUBA dive. Emphasis was placed on the necessity to plan each dive and to keep each diver's physical capacity in mind.
For the students to receive their certificates of qualification as a SCUBA diver from PADI (Professional Association of Diving Instructors), they must attend each of the nine class sessions and complete three ocean dives subsequent to the classroom sessions. The instruction provided meets the requirements of American National Standard Z-86.3-1972, totalling approximately 50 hours.
Diving for naval personnel in the Rota area is being promoted by the Naval Station sponsored Aquanauts Diving Club. The club meets at 8 p.m. on the first and third Wednesday of the month. The first meeting of the month is held at the base Community Center; the second is held at Benny’s pool in Rota. Another SCUBA class is being planned for September—are you a prospective diver?
At the beach for the first ocean dive. From Left to right, front row: Bob Thorp, Jan Garcia, Ken Wallace, Nick Alexiades, Bruce Gustin, Rudy Hill, Bob Iberri. Back row: Fred Edwards, Lester Sykes, Bill Ferguson, Dick Ames, Judy Grant.
Here come the Movies!
In this issue, we are starting an article to announce new movies that are on their way to the CANOPUS and the Naval Station Theater. As we are not able to preview any of these films, we will simply write a synopsis of each film from the information provided to us. If anyone would like to become a Rex Reed or Gene Shalit and take a stab at movie criticism, let us know. We would be glad to print a movie review column.
Enough of plans for the future. On their way to the CANOPUS are:
SATURDAY NIGHT FEVER
In Brooklyn’s Bay Ridge section, John Travolta contends with a menial job in a paint store and constant harassment from his parents. Only on Saturday night, when he dances at 2001 Odyssey disco, does he become his own boss and king of the disco scene.
HIGH ANXIETY
Dr. Mel Brooks, who suffers from high anxiety, takes over as head of the Psycho Neurotic Institute for the Very Very Nervous. This film parodies segments of Hitchcock classics. Producer, director, writer and singer Mel Brooks triumphs in the end over a long-standing conspiracy between the head nurse and a resident psychiatrist.
SISTER STREETFIGHTER
Li Hunglung goes to Tokyo to look for her missing brother. He is a narcotics agent from the Hong Kong Police Department who disappeared three months ago. She sneaks into the house where her brother is being held and is confronted by killers hired to do away with her brother. After narrowly escaping death, she invades the house again, locates her brother and confronts the killers for a second and final time.
FIVE DAYS FROM HOME
It’s just a week before Christmas when T. M. Pryor (George Peppard) escapes from prison. He is a former police officer, convicted of killing his wife’s lover. With only six days before parole, he learns that his nine-year-old son has been critically injured in an automobile accident in which his ex-wife is killed. When his appeal for early release is denied, escape is the only route.
RETURN OF THE WORLD’S GREATEST DETECTIVE
Bumbling police officer Sherman Holmes has a slight run-in with a motorcycle and as a result of his concussion, believes that he is the legendary detective Sherlock Holmes.
We are nearing the end of summer and the end of the softball season. Our Varsity team was moving at full speed until stopped by the Rota All Star team. This goes to show that here in Rota, the competition is tougher than anywhere else in NAVEUR. But it wasn't a total loss. Four CANOPUS players were named to the All Star team and will be going to compete in the All Navy Finals in Port Hueneme, California. They are MT1 Gale Bridges, DP1 Mick Murray, SK2 Walt Stahl, and LCpl Rick Ruggiero. Congratulations to these fine players.
For the fall season, we will be supporting a wide range of sports activity. For the rough and tough, we have the CANOPUS Big Orange Crush. Here's your last chance to prove how mean you are, as the Navy doesn't play tackle football back in the States anymore. So, if you would like to take somebody's head off, practice started July 31st, and if you have not been out there, you're already late.
Touch football and volleyball are also warming up. We had some great teams playing last year and the touch football team even produced a base champion. This year, we'll pay them a little more attention and see where they go. Volleyball has had a little more experience and should run a bit smoother. Further down the road is the basketball season, so there should be something for everyone.
In our continuing SITE II beautification project, we have cleaned up the street side of the Rec Center and we're trying to get the back side looking better.
If we could only convince people that this is not a dumping ground, we could set up a few tables and maybe a couple of horseshoe courts. Many people are taking advantage of the new, improved Rec Center. We go through pool table felt at an amazing rate, and the washer and dryer are working from opening to closing time. The basketball area gets a good workout everyday and the game room is usually busy. If there are any problems or anything you would like to see added, we have a suggestion box just inside the front door. Drop us a note.
There has been a change in the Day Trip reservation procedure. SITE II military personnel still go first, along with any guests they wish to bring, such as dependents and relatives. Next come unescorted SITE II dependents, who are allowed to reserve seats for guests such as relatives. All other active duty military personnel and dependents must now sign up in the third category, as others; where previously they could travel as guests, they must now travel under their own reservations. So it is worth the trouble to sign up early if you have friends you would like to bring along on these trips. We take reservations two weeks in advance. Everyone is reminded that with this priority system, unless you are active duty military, we cannot guarantee you a seat until just before the bus leaves. If someone with a higher priority shows up that morning, you will be bumped---the same as any other Space A travel.
That's about it for this month. Look forward to your Que Pasa for up to date information on tours and other activities at Special Services.
VIII FIESTA
DE LA URTA
ROTA FERIA GROUNDS
For those who can't make it to the Fiesta de la Urta, the Feria de la Vendimia will be held in Jerez de la Frontera 6-11 September.
The ship's varsity softball team finished the Rota Naval Station softball league in First Place, with a 17-3 record. The team depended on a strong defense to hold opposing teams to an average of just under 3 runs per game while the potent offense averaged 11.5 runs per game. During the season, the team held the opposition scoreless twice and to only 1 run in eight games. MT1 Gale Bridges was the only pitcher of record, although SN Dave Holst pitched 33 innings during the season and gave up only five runs. Some of the leading hitters for the team were Dave Holst at .553, MM3 Mike Myers at .552, TM2 Tony Frock at .552, DP1 Mick Murray at .534, and OM2 Mike O'Shaughnessy at .525. The defense was anchored by dependable play by HT1 Greg Ferritto at 1st and SN Mike Demey at 3rd, and the league's best double play combo, Mick Murray and Tony Frock at shortstop and 2nd respectively. These two led the league with 33 double plays in 20 games. The outfield starred the rifle arm of Mike O'Shaughnessy in left center, with nine opposing players thrown out at 1st base, while the strong arms of LCpl Rick Ruggiero in left and Walt Stahl in right center kept the opposing base runners honest.
At the NAVEUR Tournament held in Naples, Italy 27-30 July, the CANOPUS started out by crushing the USS ALBANY by a score of 21-0. This was one of only three shutouts in the tournament, and featured the three hit pitching of Gale Bridges, and four hits each by Tony Frock, Mike Demey, Walter Stahl and Rick Ruggiero, and two double plays. In the second game CANOPUS drew the team from NAS Sigonella, who had just defeated Kenitra (in one of the other tournament shutouts 4-0), and although the game was tied at two apiece in the fourth inning, the good guys kept threatening and finally broke through in the 6th with four runs to win the game 6-2, Bridges again being the winning pitcher. Both Tony Frock and Mike Demey got three hits, and another double play.
To this point in the NAVEUR tournament the Rota team had defeated the USS GILMORE/La Maddalena by a score of 5-4 and the home team, Naples, by a score of 16-4. So game three for both teams promised to be a good one, and it was, although CANOPUS lost to Rota 6-4. The game went eight innings and either team could have won earlier, because the game featured a tight defense with two more double plays by CANOPUS and one by Rota. This game put CANOPUS in the losers bracket, with the team responding with a 15-1 victory over Nea Makri, Greece. In that game, Bridges gave up only six hits, and it featured a long three-run home run by outfielder Walter Stahl in the second inning. A home run by Mick Murray in the 4th, an inside-the-park three-run homer by Mike O'Shaughnessy in the sixth, and another two-run shot by Stahl in the same inning capped off the score.
In game five for CANOPUS, it was another match with NAS Sigonella, and this time CANOPUS sent them home with their second loss, this time by a 6-1 score. Sigonella scored their only run on four hits, and were throttled by two more double plays. This brought CANOPUS into the championship game against Rota, the second doubleheader for the CANOPUS in three days. Although once again Gale Bridges pitched well for the team, giving up only 7 hits, four of these hits came in the fourth inning and were complemented by the game's only walk, one of three CANOPUS errors, and a three-run homer by Rota's usually light-hitting first baseman, Ron Cain. CANOPUS was held to four hits by Rota's fine pitcher, Jay Harmon, but had two base runners on in the 1st, 2nd and 4th innings and couldn't come up with the clutch hit necessary to get a rally started.
**Intramural Softball Standings**
| No. | TEAM | WON | LOSS | PERCENT |
|-----|--------|-----|------|---------|
| 1 | SK's | 23 | 3 | .885 |
| 2 | MARNAV | 21 | 5 | .808 |
| 3 | R-1 | 20 | 5 | .800 |
| 4 | CSS-16 | 17 | 6 | .739 |
| 5 | S-3 | 18 | 7 | .720 |
| 6 | ENG | 6 | 5 | .636 |
| 7 | R-5 | 15 | 9 | .625 |
| 8 | W-5 | 15 | 10 | .600 |
| 9 | OPSNAV | 12 | 14 | .462 |
| 10 | R-2 | 10 | 14 | .417 |
| 11 | R-8 | 10 | 16 | .385 |
"If you let me go, Chief, I'll grant you three wishes... On the other hand, if you eat me you'll probably get Kepone poisoning!"
This will be a very busy month for sports activities in CANOPUS. As I am writing this article, the Varsity Softball team is in Naples, Italy, playing in the NAVEUR Championship Tournament after having won the NAVSTA Rota championship with a 17-3 record.
In the Intramural Softball league, the lead continues to change hands almost daily, as there are five teams with very close records. At this writing, the Storekeepers are in the lead, but by press time this could very well change again. With this type of competition, why not come out and watch some very good softball when you can think of nothing better to do?
The Varsity Touch Football team had their organizational meeting a few days ago and are now getting in shape for the season opener on September 9th in the NAVSTA Rota league. Anyone else who may have the desire to play or coach should contact the Athletic Director at Special Services extensions 411 or 2471. Also, I have had a few questions about a CANOPUS Intramural Touch Football league, so if enough interest is shown, we will start a league. At least five teams are needed, so keep in touch. I will hold a meeting during the third week of August concerning Intramural Touch Football.
Be sure to watch for next month’s article as I will explain the formula and rules under which the Captain’s Cup for Athletics will be awarded this December.
The CANOPUS Varsity and Dependent Women’s Volleyball teams are now practicing several times per week getting ready for a clean sweep this season. Both teams are looking good at this point. Coach Byrd of the Varsity says, “No second place this year; we are going to win it all!” Come out to the Base Gym on August 14th for the season opener and cheer them on to victory.
Tackle football is just around the corner!!! Head Coach Dave Roth held a staff meeting with his coaches Bob Sprinkle, Rick Martin, ‘Chili’ Wilson and Tom Best a few days ago preparing to start practice on Monday July 31st at 1830. Practice will be every day, Monday through Friday, on the football field near the Rec Center. In case anyone is not familiar with Coach Roth, he has coached youth athletics here in Rota for the past three years, coming out with the winning team in every sport, every year. This year, with a lot of coaxing, he has agreed to give up youth football and take on the Varsity team and make it a winner again. Remember last year, when we lost way too many games? Well, Coach Roth plans to show the Station we are still around this year. We wish him lots of luck! As the “OLDTIMERS” of a couple years ago remember, the SITE II GREEN MACHINE sliced through and mowed down the base competition, so this year, let’s get out there and watch the BIG ORANGE CRUSH do the same.
Remember, if you have any questions concerning athletics, please feel free to contact me at Special Services at extensions 411 or 2471 or better yet, just drop by the Rec Center for a visit.
MT2 Alan Miles and his wife Rhonda remain a Navy family as he ships over for a $12,000 bonus. Captain Will happily congratulates MT2 Miles.
The USS Canopus Association deeply appreciates the donation of this Helmsman issue from:
James C. Daniels
Grove City, OH
Served November 1977 to March 1980
Rate/Rank: JO2
Division/Shop: Command Public Affairs
CANOPUS CUTIE. Nikki-Dee Ilene Troehler reported on board March 7, 1978 at 2240. Her measurements were 7 lbs 9 oz, 21 inches.
Mechanism for controlling the monomer–dimer conversion of SARS coronavirus main protease
Cheng-Guo Wu, Shu-Chun Cheng, Shiang-Chuan Chen, Juo-Yan Li, Yi-Hsuan Fang, Yau-Hung Chen and Chi-Yuan Chou
Acta Cryst. (2013). D69, 747–755
Copyright © International Union of Crystallography
The *Severe acute respiratory syndrome coronavirus* (SARS-CoV) main protease (M\textsuperscript{pro}) cleaves two virion polyproteins (pp1a and pp1ab); this essential process represents an attractive target for the development of anti-SARS drugs. The functional unit of M\textsuperscript{pro} is a homodimer and each subunit contains a His41/Cys145 catalytic dyad. Large amounts of biochemical and structural information are available on M\textsuperscript{pro}; nevertheless, the mechanism by which monomeric M\textsuperscript{pro} is converted into a dimer during maturation still remains poorly understood. Previous studies have suggested that a C-terminal residue, Arg298, interacts with Ser123 of the other monomer in the dimer, and mutation of Arg298 results in a monomeric structure with a collapsed substrate-binding pocket. Interestingly, the R298A mutant of M\textsuperscript{pro} shows a reversible substrate-induced dimerization that is essential for catalysis. Here, the conformational change that occurs during substrate-induced dimerization is delineated by X-ray crystallography. A dimer with a mutual orientation of the monomers that differs from that of the wild-type protease is present in the asymmetric unit. The presence of a complete substrate-binding pocket and oxyanion hole in both protomers suggests that they are both catalytically active, while the two domain IIIs show minor reorganization. This structural information offers valuable insights into the molecular mechanism associated with substrate-induced dimerization and has important implications with respect to the maturation of the enzyme.
1. Introduction
Coronaviruses (CoVs), which belong to the order *Nidovirales*, are enveloped positive-stranded RNA viruses with a large genome of about 30 kb (Gorbalenya \textit{et al.}, 2006). They include important pathogens of humans and other animals (Weiss & Navas-Martin, 2005). In late 2002, a novel CoV causing severe acute respiratory syndrome (SARS) with a 15% fatality rate emerged and spread to three continents in six months (World Health Organization; http://www.who.int/csr/sars/country/2003_08_15/en/). In the following few years, the discovery of two further species of human CoVs, NL-63 and HCoV-HKU1, as well as SARS-CoV-like strains in bats, confirmed the great diversity of CoVs (van der Hoek \textit{et al.}, 2004; Lau \textit{et al.}, 2005; Li \textit{et al.}, 2005; Woo \textit{et al.}, 2005). In September 2012, the World Health Organization was informed of several cases of acute respiratory syndrome with renal failure owing to infection with a novel CoV in the Middle East. The possible global health implications are still under critical evaluation. Nevertheless, this re-emphasizes the possibility of the future reemergence of SARS or a related disease. Therefore, studies to aid our understanding of these viruses and the development of novel antiviral inhibitors are both urgent and necessary.
The coronavirus nonstructural polyproteins (pp1a and pp1ab) are cleaved by two kinds of viral cysteine proteases: a main protease (M\textsuperscript{pro} or 3CL\textsuperscript{pro}; EC 3.4.22.69) and a papain-like protease (Snijder \textit{et al.}, 2003). This process is considered to be a suitable antiviral target because cleavage is required for viral maturation (Wu \textit{et al.}, 2006; Chou \textit{et al.}, 2008; Zhu \textit{et al.}, 2011). M\textsuperscript{pro} cleaves the polyproteins at 11 sites that contain the canonical L–Q–↓(A/S/N) sequence (Hegyi & Ziebuhr, 2002). Catalysis by M\textsuperscript{pro} has been studied extensively over the years using both kinetic and mutagenesis approaches (Anand \textit{et al.}, 2003; Chou \textit{et al.}, 2004; Hsu, Kuo \textit{et al.}, 2005; Lin \textit{et al.}, 2008; Shi \textit{et al.}, 2008; Chen \textit{et al.}, 2008; Hu \textit{et al.}, 2009; Cheng \textit{et al.}, 2010). Structural information is also available on M\textsuperscript{pro} from SARS CoV and many other CoVs (Anand \textit{et al.}, 2002; Yang \textit{et al.}, 2003; Xue \textit{et al.}, 2008; Zhao \textit{et al.}, 2008). The structure of the coronaviral M\textsuperscript{pro} consists of three domains and the catalytic dyad His/Cys is located at the interface between domains I and II. The first two domains have an antiparallel β-barrel structure that forms a folding scaffold similar to other viral chymotrypsin-like proteases (Anand \textit{et al.}, 2002, 2003). Domain III is a five-helix fold that contributes to the dimerization of M\textsuperscript{pro} (Anand \textit{et al.}, 2002, 2003). There is a very long loop (residues 176–200) between domains II and III.
Recent studies have suggested that foldon unfolding of SARS-CoV M\textsuperscript{pro} domain III alone is able to mediate the interconversion between the monomer and a three-dimensional domain-swapped dimer under physiological conditions (Kang \textit{et al.}, 2012); nevertheless, how the two domain IIIs meet and remain together until α-helix swapping takes place is still unknown.
Mature M\textsuperscript{pro} is a stable homodimer in which the two subunits are arranged perpendicularly to each other (Yang \textit{et al.}, 2003). Mutation or deletion of the N-finger (the first seven residues) and the C-terminus (residues 298–306) can lead to a
**Figure 1**
AEC pattern of the R298A mutant of SARS-CoV M\textsuperscript{pro}. The amount of protein used was 15 μl (1 mg ml\textsuperscript{−1}) and the total volume of the cell was 330 μl. (a) A typical trace of the absorbance at 250 nm of the R298A mutant during an experiment at a substrate concentration of 200 μM. The symbols represent experimental data and the lines are the results fitted to the Lamm equation using the SEDFIT program (Chou \textit{et al.}, 2011; Schuck, 2000). The best-fit distribution result is shown by dashed lines in (c). (b) The absorbance at 405 nm tracing the released product (pNA) after the first hour of the same experiment. The time interval between two successive spectra, from black to cyan, is 10 min. The inset plot shows the product at different times. The line indicates the best-fit result for the initial velocity calculation. (c) The continuous $c(s)$ distributions of the M\textsuperscript{pro} R298A mutant from the best-fit analysis of the 250 nm data. The data points in 50 mM phosphate buffer, pH 7.6 are shown by solid lines; while those in the presence of peptide substrate at 40 and 200 μM are shown by dotted and dashed lines, respectively. The two straight dashed lines indicate the positions of the monomer (M) and dimer (D). The residual bitmaps of the raw data and the best-fit results are shown as insets. (d) Plot of the initial velocities (from 405 nm results) versus substrate concentration. The line represents the best-fit results according to the Michaelis–Menten equation.
monomeric M\textsuperscript{pro} with little enzyme activity (Yang \textit{et al.}, 2003; Chou \textit{et al.}, 2004; Hsu, Chang \textit{et al.}, 2005). M\textsuperscript{pro} containing additional N- and C-terminal segments of the polyprotein undergoes autoprocessing to yield the mature protease \textit{in vitro} (Hsu, Kuo \textit{et al.}, 2005). Although the monomer is inactive, binding of the peptide substrate or of an N-terminally and/or C-terminally elongated M\textsuperscript{pro} molecule is able to induce dimerization of M\textsuperscript{pro}, allowing catalysis (Cheng \textit{et al.}, 2010; Chen \textit{et al.}, 2010; Li \textit{et al.}, 2010). The effect of substrate-induced dimerization is reversible and can be blocked by mutation of a key residue, Glu166, which is responsible for binding to Ser1 of the other protomer and is one of the residues recognizing the P1 Gln of the substrate (Anand \textit{et al.}, 2002; Yang \textit{et al.}, 2003; Cheng \textit{et al.}, 2010). In crystal structures of monomeric mutants of SARS-CoV M\textsuperscript{pro}, such as R298A or G11A, the oxyanion loop (Ser139–Cys145) is converted into a short 3$_{10}$-helix; this leads to complete collapse of the oxyanion hole, resulting in enzyme inactivation (Shi \textit{et al.}, 2008; Hsu \textit{et al.}, 2009).
Despite the availability of a large amount of biochemical and structural information on M\textsuperscript{pro}, the mechanism by which monomeric M\textsuperscript{pro} is converted into a dimer during the maturation process remains poorly understood. Here, we report the crystal structure of the SARS-CoV M\textsuperscript{pro} R298A mutant in the presence of peptide substrate. The structure reveals a functional dimeric form, but with a minor change in the relative orientation of the two domain IIIs. Detailed exploration of this structure provides a more detailed understanding of the mechanisms that control the dimerization of coronaviral M\textsuperscript{pro}.
### 2. Materials and methods
#### 2.1. Protein expression and purification
The R298A mutant of SARS-CoV M\textsuperscript{pro} inserted into the pET-28a(+) vector (Cheng \textit{et al.}, 2010) was expressed in \textit{Escherichia coli} BL21 (DE3) cells. Cultures were grown in LB medium at 310 K for 4 h and were then induced with 0.4 mM isopropyl β-D-thiogalactopyranoside; this was followed by overnight incubation at 293 K. After centrifugation at 6000g at 277 K for 10 min, the cell pellets were resuspended in lysis buffer (20 mM Tris pH 8.5, 250 mM NaCl, 5% glycerol, 0.2% Triton X-100, 2 mM β-mercaptoethanol) and lysed by sonication. The crude extract was then centrifuged at 12 000g at 277 K for 25 min to remove the insoluble pellet. The supernatant was incubated with 1 ml Ni–NTA beads (Qiagen, Hilden, Germany) at 277 K for 1 h and loaded into an empty column. After flowthrough and washing with washing buffer (20 mM Tris pH 8.5, 250 mM NaCl, 8 mM imidazole, 2 mM β-mercaptoethanol), the protein was eluted with elution buffer (20 mM Tris pH 8.5, 30 mM NaCl, 150 mM imidazole, 2 mM β-mercaptoethanol). The resulting protein fraction was then loaded onto an S-300 gel-filtration column (GE Healthcare) equilibrated with running buffer (20 mM Tris pH 8.5, 100 mM NaCl, 2 mM dithiothreitol). The purity of the collected fractions was analyzed by SDS–PAGE. Fractions containing the M\textsuperscript{pro} protein were pooled and concentrated to 30 mg ml\textsuperscript{-1} using an Amicon Ultra-4 30 kDa centrifugal filter (Millipore). The typical yield of protein was 5–10 mg per litre of cell culture.
#### 2.2. Active enzyme centrifugation
Active enzyme centrifugation (AEC) can be used to observe quaternary-structural changes and catalytic activity simultaneously (Chou \textit{et al.}, 2011). The analytical ultracentrifugation experiments were performed with an XL-A analytical ultracentrifuge (Beckman, Fullerton, California, USA) using an An-50 Ti rotor (Cheng \textit{et al.}, 2010). A commercially available double-sector Vinograd-type band-forming centrepiece (Beckman, Fullerton, California, USA) was used for AEC (Chou \textit{et al.}, 2011; Chou & Tong, 2011). Briefly, 15 μl of the R298A mutant of M\textsuperscript{pro} (1 mg ml\textsuperscript{-1}) was added to the small well above the sample sector. After the cell had been assembled, 350 μl peptide substrate (TSAVLQ-pNA from GL Biochem, Shanghai, People’s Republic of China) dissolved in D$_2$O at 0, 40 or 200 μM was loaded into the bulk-sample sector space. Centrifugation was then carried out at 42 000 rev min\textsuperscript{-1}. Absorbance at 250 nm was chosen to allow detection of the protein band, while absorbance at 405 nm was used to monitor the catalytic release of the product, pNA. The spectrum was recorded continuously using a time interval of 600 s per scan and a step size of 0.003 cm. A typical trace of the 250 nm spectral results is shown in Fig. 1(a). The data set was then fitted to a continuous $c(s)$ distribution model using the \textit{SEDFIT} program (Schuck, 2000; Brown & Schuck, 2006). The signals at 405 nm were used to calculate the initial velocities and were then fitted to the Michaelis–Menten equation, from which the kinetic parameters $K_m$ and $k_{cat}$ were determined (Chou \textit{et al.}, 2011).
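The initial-velocity fitting step described above can be sketched as follows; this is a minimal illustration using `scipy.optimize.curve_fit` on synthetic data (the Vmax and Km values, the substrate series and the 2% noise level are assumptions for demonstration, not the experimental measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Michaelis-Menten rate law: v = Vmax*[S]/(Km + [S])."""
    return vmax * s / (km + s)

# Synthetic initial velocities; the parameter values are illustrative only.
true_vmax, true_km = 0.012, 380.0            # arbitrary rate units and uM
s = np.array([25.0, 50.0, 100.0, 200.0, 400.0, 800.0, 1600.0])
rng = np.random.default_rng(0)
v = michaelis_menten(s, true_vmax, true_km) * (1 + 0.02 * rng.standard_normal(s.size))

# Non-linear least-squares fit for Vmax and Km
popt, pcov = curve_fit(michaelis_menten, s, v, p0=[v.max(), np.median(s)])
vmax_fit, km_fit = popt
print(f"Vmax = {vmax_fit:.3e}, Km = {km_fit:.0f} uM")
```

With reasonably dense sampling around Km, the fit recovers the generating parameters to within a few per cent, mirroring how apparent Km and kcat are extracted from the 405 nm traces.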
#### 2.3. Protein crystallization
Crystals of the R298A mutant were obtained at 295 K by the sitting-drop vapour-diffusion method. The protein solution was set up at 15 mg ml\textsuperscript{-1} and included 1 mM TSAVLQ-pNA.
### Table 1
Summary of crystallographic information for M\textsuperscript{pro} R298A (pH 8.0).
Values in parentheses are for the highest resolution shell.
| Space group | P1 |
|-------------|----|
| Unit-cell parameters (Å, °) | $a = 55.0$, $b = 59.4$, $c = 59.8$, $\alpha = 71.3$, $\beta = 73.4$, $\gamma = 72.3$ |
| Resolution (Å) | 30.09–2.00 (2.09–2.00) |
| $R_{\text{merge}}{}^{a}$ (%) | 3.5 (22.0) |
| $\langle I/\sigma(I) \rangle$ | 21.5 (3.2) |
| Completeness (%) | 94.7 (82.6) |
| No. of reflections | 35837 (4672) |
| Multiplicity | 2.2 (2.0) |
| R factor$^{b}$ (%) | 19.0 |
| Free R factor$^{c}$ (%) | 24.1 |
| R.m.s.d., bond lengths (Å) | 0.009 |
| R.m.s.d., bond angles (°) | 1.3 |
$^a$ $R_{\text{merge}} = \sum_{hkl} \sum_{i} |I_i(hkl) - \langle I(hkl) \rangle| / \sum_{hkl} \sum_{i} I_i(hkl)$, where $I_i(hkl)$ is the integrated intensity of a given reflection and $\langle I(hkl) \rangle$ is the mean intensity of multiple corresponding symmetry-related reflections. $^b$ $R = \sum_{hkl} \big| |F_{\text{obs}}| - |F_{\text{calc}}| \big| / \sum_{hkl} |F_{\text{obs}}|$, where $F_{\text{obs}}$ and $F_{\text{calc}}$ are the observed and calculated structure factors, respectively. $^c$ The free R factor is the R factor calculated using a random 5% of the data that were excluded from the refinement.
The reservoir solution consisted of 0.1 M Tris pH 8.0, 30%(w/v) PEG 300, 5%(w/v) PEG 1000. Large but poorly diffracting crystals appeared in 3 d and were used for microseeding. Single crystals of cubic shape and with dimensions of 0.2–0.3 mm were obtained in less than a week. All crystals were cryoprotected in reservoir solution with 1 mM TSAVLQ-pNA and were then flash-cooled in liquid nitrogen. Crystal soaking in 2 mM TSAVLQ-pNA failed to improve the electron density of the substrate. Soaking at higher peptide concentrations was impossible owing to the low solubility of the peptide.
#### 2.4. Data collection, structure determination and refinement
X-ray diffraction data were collected at 100 K on SPXF beamline 13B1 at the National Synchrotron Radiation Research Center (Taiwan) using an ADSC Quantum-315r CCD (X-ray wavelength 0.976 Å). The diffraction images were processed and scaled using the HKL-2000 package (Otwinowski & Minor, 1997). The crystal belonged to space group $P1$, with unit-cell parameters $a = 55.0$, $b = 59.4$, $c = 59.8$ Å, $\alpha = 71.3$, $\beta = 73.4$, $\gamma = 72.3^\circ$. There are two M$^{\text{pro}}$ molecules in the asymmetric unit. The structure was solved by the molecular-replacement method with the program Phaser (McCoy et al., 2007) using the structure of wild-type M$^{\text{pro}}$ (PDB entry 1uk4) as the search model (Yang et al., 2003). Manual rebuilding of the structure model was performed with Coot (Emsley & Cowtan, 2004). Structure refinement was carried out using the program REFMAC (Murshudov et al., 2011). Data-processing and refinement statistics are summarized in Table 1. The crystal structure has been deposited in the Protein Data Bank (PDB entry 4h13).
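As a consistency check on the unit-cell parameters above, the triclinic cell volume and a Matthews-coefficient estimate can be computed directly. This is a sketch, not the deposited statistics: the ~33.8 kDa monomer mass and the 1.23 Å³ Da⁻¹ protein specific volume used in the solvent estimate are assumed round figures.

```python
import math

# Unit-cell parameters from the text (space group P1, two molecules per cell)
a, b, c = 55.0, 59.4, 59.8                               # Angstrom
alpha, beta, gamma = map(math.radians, (71.3, 73.4, 72.3))

ca, cb, cg = math.cos(alpha), math.cos(beta), math.cos(gamma)
# Triclinic cell volume: V = abc * sqrt(1 - ca^2 - cb^2 - cg^2 + 2*ca*cb*cg)
volume = a * b * c * math.sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)

# Matthews coefficient; the monomer mass is an assumed round figure (~33.8 kDa)
n_mol, mass_da = 2, 33800.0
matthews = volume / (n_mol * mass_da)      # A^3 per Dalton
solvent = 1 - 1.23 / matthews              # empirical solvent-content estimate
print(f"V = {volume:,.0f} A^3, Vm = {matthews:.2f} A^3/Da, solvent ~ {solvent:.0%}")
```

The resulting Vm of roughly 2.5 Å³ Da⁻¹ and solvent content near 50% sit comfortably in the typical range for protein crystals, consistent with two M^pro molecules in the asymmetric unit.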
### 3. Results and discussion
#### 3.1. Substrate-induced dimerization of SARS-CoV M$^{\text{pro}}$
To explore the influence of substrate binding on the dimerization of M$^{\text{pro}}$, we performed AEC experiments on the R298A mutant of M$^{\text{pro}}$ with or without the peptide substrate. Figs. 1(a) and 1(b) show typical absorbance traces at 250 and 405 nm of the R298A mutant during the experiment at a substrate concentration of 200 μM. After fitting the 250 nm signals to the continuous size-distribution model, it was obvious that the R298A mutant was monomeric (2.1 S) in the absence of substrate (Fig. 1c, solid lines), whereas it was dimeric (2.8 S) at a substrate concentration of 200 μM (Fig. 1c, dashed lines). Somewhat surprisingly, at a substrate concentration of 40 μM the major species (2.5 S) of the R298A mutant was located between the monomer and the dimer (Fig. 1c, dotted lines). In line with previous studies, these results suggest that the R298A mutant is a rapidly self-associating protein, similar to wild-type M$^{\text{pro}}$ (Cheng et al., 2010). These observations using AEC

**Figure 2**
Dimeric structure of the R298A mutant of SARS-CoV M$^{\text{pro}}$. (a) The overall structure of the dimeric M$^{\text{pro}}$ R298A mutant. The two protomers are coloured cyan and green, respectively. (b) Crystal packing of the R298A mutant. Two molecules form a biological dimer in the unit cell. (c) Final 2$F_o - F_c$ electron density contoured at 1.0σ for residues 297–299 of the R298A mutant. The Arg298 residue in wild-type M$^{\text{pro}}$ is also shown (green). (d) Schematic drawing in stereoview showing the detailed interactions at the active site of the R298A mutant. The final 2$F_o - F_c$ electron density around the active site is contoured at 2.5σ (blue mesh) and the C atoms of the modelled P3–P1 residues (Val-Leu-Gln) are coloured orange. Hydrogen-bonding interactions are indicated by red dashed lines. All structure figures were produced with PyMOL (http://www.pymol.org).
confirmed the substrate-induced dimerization of the R298A mutant. Moreover, after calculating the initial velocities from the signals at 405 nm and then fitting them to the Michaelis–Menten equation (inset in Fig. 1b and Fig. 1d), an apparent $K_m$ of 380 $\mu$M and an apparent $k_{\text{cat}}$ of 0.012 s$^{-1}$ were obtained. These values are very close to those determined for wild-type M$^{\text{pro}}$ by AEC analysis (Chou et al., 2011), confirming that the dimeric R298A protein is functionally equivalent to wild-type M$^{\text{pro}}$. However, based on the crystal structure of the monomeric R298A mutant (Shi et al., 2008), the transition from monomer to dimer should be impossible because of the dramatic rotation of domain III and the formation of a short 3$_{10}$-helix from an active-site loop; these changes leave the catalytic machinery frozen in a collapsed state. Furthermore, recent studies suggested that the R298E mutant retains N-terminal autocleavage activity comparable to that of wild-type M$^{\text{pro}}$, even though the mutation causes complete dimer dissociation and disruption of trans-cleavage activity (Chen et al., 2010). This indicated that N-terminal autocleavage of M$^{\text{pro}}$ does not depend on a ‘mature’ dimeric protease, but on an
**Figure 3**
Comparisons with the monomeric structure of the R298A mutant of M$^{\text{pro}}$. (a) Overlay of the current structure of the R298A dimer (in cyan and green) with that of the R298A monomer (grey; Shi et al., 2008). The orientation of domain III shows a 33° change compared with that in the monomer. The red lines represent the positions of residues 193–200 in the two structures. The region in the box is enlarged in (b). (b) Detailed interactions at the dimer interface of the R298A mutant. The red dashed lines indicate hydrogen bonds associated with the dimer, while black dashed lines indicate hydrogen bonds in the monomer. (c) Substrate-binding cavities mapped onto the current structure. Both protomers show a large and deep cavity (magenta surface) near the catalytic dyad, which is essentially the same as that in wild-type M$^{\text{pro}}$ at pH 7.6, whereas wild-type M$^{\text{pro}}$ at pH 6.0 shows one active protomer and one inactive protomer (Yang et al., 2003). The cavity was calculated using DS Modeling 1.7 (Accelrys) and was drawn using Discovery Studio Visualizer 2.5 (Accelrys).
‘intermediate’ or ‘loose’ dimer (Chen et al., 2010). On the other hand, we have expressed and purified an R298A mutant with an N-terminal extension similar to that of the R298E mutant (Chen et al., 2010). However, this mutant lost its trans-cleavage activity (less than 0.8%) and failed to form a dimer even in the presence of peptide substrates (Supplementary Fig. 1\(^1\)). Efforts to crystallize the N-terminally extended R298A mutant were also unsuccessful. To delineate the mechanism for the conversion of M\(^{pro}\) from an inactive monomer to a functional dimer, we next determined the crystal structure of the dimeric R298A mutant at a resolution of 2.09 Å (Table 1). It is important to note that we used an R298A mutant with an authentic N-terminus and a C-terminus carrying eight extra residues (LEH\(_6\)) for convenience in purification. According to previous studies (Chen et al., 2010), in the presence of peptide substrates this construct should behave as a mature dimer with trans-cleavage activity, not as a premature dimer with an N-terminal extension.
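From the apparent kinetic parameters obtained in the AEC analysis (Km ≈ 380 μM, kcat ≈ 0.012 s⁻¹), two useful quantities follow by simple arithmetic: the second-order catalytic efficiency and the fractional saturation at the 1 mM substrate concentration used later for co-crystallization.

```python
# Apparent kinetic parameters of the R298A dimer from the AEC analysis
k_cat = 0.012        # s^-1
k_m = 380e-6         # M (380 uM)

# Second-order efficiency: governs turnover in the regime [S] << Km
efficiency = k_cat / k_m                 # M^-1 s^-1

# Fractional saturation v/Vmax at the 1 mM substrate used for co-crystallization
saturation = 1e-3 / (k_m + 1e-3)

print(f"kcat/Km = {efficiency:.1f} M^-1 s^-1; v/Vmax at 1 mM = {saturation:.2f}")
```

At 1 mM substrate the enzyme operates at roughly 70% of Vmax, which is consistent with choosing a co-crystallization concentration about threefold above Km.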
#### 3.2. Overall structure of the R298A mutant of SARS-CoV M\(^{pro}\) in the presence of peptide substrate
The original goal of our experiment was to determine the binding modes of peptide substrates to the R298A mutant. We therefore included a high concentration of peptide substrate (1 mM; roughly threefold higher than the \(K_m\)) in the cocrystallization conditions. The refined atomic model agrees well with the crystallographic data and with the expected bond angles and bond lengths (Table 1). About 91% of the residues are in the most favoured region of the Ramachandran plot; none are in the disallowed region.
Consistent with our expectations and the AEC results, two essentially identical monomers of the R298A mutant exist in the asymmetric unit, with an r.m.s.d. of 1.1 Å between their C\(^α\) atoms (Figs. 2a and 2b). There is a minor change in orientation between the two copies of domain III and therefore the r.m.s.d. value decreases to 0.28 Å when domain III is excluded. The overall structure of the dimeric R298A mutant is similar to other structures of SARS-CoV M\(^{pro}\) reported previously. For example, the r.m.s.d. between equivalent C\(^α\) atoms of this dimeric structure and the structure of wild-type M\(^{pro}\) at pH 8.0 is 0.59 Å (Yang et al., 2003). The r.m.s.d. is 0.41 Å when the structure is compared with that of wild-type M\(^{pro}\) in complex with a peptide aldehyde inhibitor (Zhu et al., 2011). However, the position of domain III is different when these structures are compared (see below) and has been excluded from the comparisons.
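The r.m.s.d. values quoted above come from least-squares superposition of Cα coordinates. A minimal implementation of the standard Kabsch procedure, demonstrated here on synthetic coordinates rather than the deposited models, looks like this:

```python
import numpy as np

def kabsch_rmsd(p, q):
    """RMSD between two N x 3 coordinate sets after optimal superposition."""
    p = p - p.mean(axis=0)              # centre both sets at the origin
    q = q - q.mean(axis=0)
    u, s, vt = np.linalg.svd(p.T @ q)   # SVD of the covariance matrix
    d = np.sign(np.linalg.det(vt.T @ u.T))
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T   # enforce a proper rotation
    diff = p @ rot.T - q
    return float(np.sqrt((diff ** 2).sum() / len(p)))

# Demo: a rotated and translated copy superposes back to essentially zero RMSD
rng = np.random.default_rng(1)
p = rng.standard_normal((50, 3)) * 10.0
theta = np.radians(33.0)
rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
q = p @ rz.T + np.array([5.0, -3.0, 2.0])
print(f"RMSD after superposition: {kabsch_rmsd(p, q):.2e}")
```

Excluding a mobile region such as domain III before superposition, as done in the comparison above, simply means passing only the Cα subset of the rigid core to such a routine.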
Our crystallographic analysis of the R298A mutant shows no evidence for any electron density beyond the C\(^β\) atom of the side chain of residue 298 (Fig. 2c). This confirms that our protein carries the R298A mutation and that only the mutant protein was crystallized in our experiments. Unfortunately, we could not observe complete electron density for the peptide substrate, even though there is some residual electron density near the His41/Cys145 catalytic dyad of subunit A (green mesh in Fig. 2d). To evaluate the possible substrate-binding mode in this structure, we generated a model of the P3–P1 substrate residues (Val-Leu-Gln) based on the structure of the C145A mutant (Hsu, Kuo et al., 2005). In the model, the side chains of P1 Gln and P2 Leu are able to fill the residual electron density and an O atom of the carbonyl group of P1 Gln is located in the oxyanion hole (Fig. 2d). In contrast, in subunit B there is no residual electron density at the same position, suggesting that the states of the two active sites in the dimer may not be identical, although the amide N atoms of Gly143 and Cys145 in each subunit are oriented into the oxyanion hole. In addition, most of the residues in the active site, including Phe140, His163, Glu166, His172 and the catalytic dyad His41/Cys145, occupy the same positions as in wild-type M\(^{pro}\) (Yang et al., 2003). This suggests that the catalytic machinery of the R298A dimer is functional.
#### 3.3. Comparison with the monomeric structure of the R298A mutant
The mutual arrangement of the domains in the R298A mutant changes dramatically upon dimerization; we therefore compared our dimer structure with that of the monomer (PDB entry 2qcy; Shi et al., 2008) based on superposition of domains I and II. As shown in Fig. 3(a), the most obvious conformational change is a 33° rotation of domain III. This rotation avoids steric hindrance between the two copies of domain III during dimerization. The variability of the mutual domain arrangement suggests that conversion from the monomeric form to the dimeric form is feasible. Nevertheless, the dimerization of some other monomeric mutants, such as G11A and S139A, still needs to be confirmed, although they show an overall structure similar to that of the R298A monomer (Chen et al., 2008; Hu et al., 2009).
The structural comparison also shows that in the present structure the short 3\(_{10}\)-helix formed by residues Ser139-Phe140-Leu141 is disrupted and adopts a loop conformation similar to that of the wild type (Fig. 3b; see below). The unfolding of this helix enables the insertion of the N-finger of the other subunit, which is further stabilized by the interaction of Glu166 with Ser1' (primed residue numbers indicate subunit B; Fig. 3b). Most importantly, the key stacking interaction between the rings of Phe140 and His163, which prevents His163 from becoming protonated (Yang et al., 2003), is maintained (Fig. 2d). This ensures that His163 can efficiently interact with the side chain of P1 Gln. In contrast to the collapsed substrate-binding site in the R298A monomer (Shi et al., 2008), there is a large and complete substrate-binding pocket in each subunit (Fig. 3c). This further suggests that both subunits of the R298A dimer are catalytically active.
In addition, at the dimer interface of the present structure, ion pairs and hydrogen bonds, such as Arg4′–Glu290 and Ser139–Gln299′, can also be seen; the situation is similar to that in the wild-type structure (Fig. 3b; see below). All of these observations confirm that the dimerization of R298A is very similar to that of wild-type M\textsuperscript{pro}, with only minor differences. Owing to the lack of a monomeric wild-type M\textsuperscript{pro} structure, the monomeric and dimeric R298A structures provide valuable insights into the dimerization process of the enzyme, which involves a dramatic mutual rearrangement of the domains (Hsu, Kuo \textit{et al.}, 2005; Chen \textit{et al.}, 2008, 2010; Cheng \textit{et al.}, 2010; Li \textit{et al.}, 2010).

---
\(^1\) Supplementary material has been deposited in the IUCr electronic archive (Reference: DW5035). Services for accessing this material are described at the back of the journal.
**Figure 4**
Comparisons with the structure of wild-type M\textsuperscript{pro}. (a) Overlay of the current structure of the R298A dimer (cyan and green) with that of wild-type M\textsuperscript{pro} in complex with a peptidyl aldehyde inhibitor (magenta; PDB entry 3n3d; Zhu \textit{et al.}, 2011). The red arrows show the orientation change affecting the two domain IIIs. The region in the box is enlarged in (b). (b) The hydrogen-bonding interaction between the two domain IIIs. The red dashed line indicates the hydrogen bond between Ser284 and Thr285 in the R298A mutant, while the black dashed lines show the hydrogen bonds between the two Thr285 residues and the bound water (red sphere) in wild-type M\textsuperscript{pro}. (c), (d) Detailed interactions near the active sites of subunit A (cyan) and subunit B (green), respectively. The red dashed lines indicate hydrogen bonds; wild-type M\textsuperscript{pro} (magenta) shows the same interactions (PDB entry 3n3d; Zhu \textit{et al.}, 2011). An overlay of the P2–P1 residues in the two structures (orange in the R298A structure and grey in wild-type M\textsuperscript{pro}) confirmed that one O atom of the carbonyl group of P1 Gln is located in the oxyanion hole.
#### 3.4. Reorganization of the dimer in the R298A mutant
The R298A mutant shows an apparent change in the relative orientation of domain III. When we compared our structure with two wild-type structures (Yang et al., 2003; Zhu et al., 2011), namely the free enzyme at pH 8.0 (PDB entry 1uk2) and the peptide aldehyde inhibitor complex (PDB entry 3n3d), a relative shift of $5-24^\circ$ could be observed for the two copies of domain III in the dimer compared with the free enzyme, while there was a shift of $6-17^\circ$ compared with the inhibitor complex (Fig. 4a). Interestingly, we found that there is a hydrogen-bonding interaction between residues Ser284 and Thr285 in the $\alpha L-\alpha L$ loop, while there is a water molecule between the two Thr285 residues in the structure of wild-type M$^{pro}$ (Fig. 4b). The direct contact between the two $\alpha L-\alpha L$ loops leads to a shift of the $\alpha L$ and $\alpha L$ helices, further causing the change in the orientation of the whole domain III. This observation also suggests that the two copies of the folded domain III are able to bind to each other through this interaction at the initial stage and to remain associated until α-helix swapping takes place, forming a three-dimensional domain-swapped dimer or the more stable, super-active octameric M$^{pro}$ (Zhang et al., 2010; Kang et al., 2012).
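Domain rotations such as the 5–24° shifts quoted above are conventionally extracted from the rotation matrix relating the two superposed domain orientations, via the trace formula θ = arccos((tr R − 1)/2). A small sketch, using an illustrative 17° rotation rather than the actual superposition matrices:

```python
import numpy as np

def rotation_angle_deg(r):
    """Overall rotation angle (degrees) encoded by a 3x3 rotation matrix."""
    t = np.clip((np.trace(r) - 1.0) / 2.0, -1.0, 1.0)  # guard against rounding
    return float(np.degrees(np.arccos(t)))

# Illustrative orientation matrices for one domain in two structures:
# the identity for the reference and a 17 deg rotation about z for the target.
theta = np.radians(17.0)
r_ref = np.eye(3)
r_tgt = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])

# The relative rotation between the two orientations
relative = r_ref.T @ r_tgt
print(f"domain rotation: {rotation_angle_deg(relative):.1f} deg")
```

In practice the two orientation matrices would come from superposing the domain in each structure onto a common reference, after which the relative rotation angle is read off exactly as above.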
Furthermore, in the present structure each domain III is even closer to domains I and II of the other subunit, although subunit A shows a larger change in orientation than subunit B. At the dimer interface of the R298A mutant, in addition to two intermolecular Arg4–Glu290 ion pairs (Arg4–Glu290′ and Arg4′–Glu290), there are two Ser139…Gln299 hydrogen bonds (Ser139…Gln299′ and Ser139′…Gln299) and two Ser1…Glu166 ion pairs (Ser1…Glu166′ and Ser1′…Glu166); this contrasts with some of the structures of wild-type M$^{pro}$ (PDB entries 1uk2 and 1uk4), which only show one pair of each. A similar assembly can also be observed in the structures of authentic wild-type M$^{pro}$ (PDB entry 2h2z) and its complex with a peptide aldehyde inhibitor (PDB entry 3n3d), although in these two structures the Arg298 residues do not interact with Ser123 of the other subunit (Figs. 4c and 4d; Xue et al., 2007; Zhu et al., 2011). This suggests that M$^{pro}$ may not require all of the possible intermolecular interactions for dimerization, especially in the presence of substrates or peptidyl inhibitors. Moreover, previous studies have suggested that Glu166 plays a pivotal role in connecting the substrate-binding site to the dimer interface (Yang et al., 2003; Chen et al., 2008; Cheng et al., 2010). In our structure, the interaction between the main-chain amide of Ser1 and the carboxyl group of the Glu166 side chain of the other subunit provides direct evidence to explain why mutation of Glu166 blocks substrate-induced dimerization of M$^{pro}$ (Cheng et al., 2010). Remarkably, the mutation of Arg298, which should be detrimental to dimerization, may be compensated for by these interactions, while most of the other residues in the active site show only small changes.
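Interface contacts such as the ion pairs and hydrogen bonds listed above are typically assigned with a simple distance cutoff between polar atoms belonging to different subunits. The sketch below uses hypothetical coordinates (not taken from the deposited PDB entry 4h13) purely to illustrate the bookkeeping:

```python
import numpy as np

# Hypothetical polar-atom records (name, subunit, xyz in Angstrom); the
# coordinates are invented for illustration, not real model coordinates.
atoms = [
    ("Arg4:NH1",   "A", np.array([0.0, 0.0, 0.0])),
    ("Glu290:OE1", "B", np.array([2.9, 0.4, 0.1])),
    ("Ser139:OG",  "A", np.array([10.0, 5.0, 1.0])),
    ("Gln299:OE1", "B", np.array([10.5, 7.6, 1.2])),
    ("Ser1:N",     "A", np.array([20.0, 0.0, 0.0])),
    ("Glu166:OE2", "B", np.array([28.0, 0.0, 0.0])),   # too far: no contact
]

CUTOFF = 3.5  # Angstrom, a common upper bound for salt bridges and H-bonds

# Keep only intersubunit pairs within the cutoff
contacts = [
    (n1, n2, float(np.linalg.norm(x1 - x2)))
    for i, (n1, s1, x1) in enumerate(atoms)
    for (n2, s2, x2) in atoms[i + 1:]
    if s1 != s2 and np.linalg.norm(x1 - x2) <= CUTOFF
]
for n1, n2, d in contacts:
    print(f"{n1} ... {n2}  {d:.2f} A")
```

Running the same distance screen over both subunits of a real dimer is how one verifies that each symmetric copy of a contact (e.g. Ser139…Gln299′ and Ser139′…Gln299) is actually present.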
### 4. Conclusion
Our studies show that SARS-CoV M$^{pro}$, the dimerization of which is important for its catalytic activity, is able to tolerate large orientation changes, especially involving domain III. Mutation of Arg298 at the dimer interface disturbs dimerization; nevertheless, in the presence of peptide substrate dimerization can be induced or rescued by intermolecular hydrogen-bond (Ser139…Gln299) and ion-pair (Ser1…Glu166) interactions. Based on the existence of a complete substrate-binding pocket and a complete oxyanion hole, we suggest that this dimer is catalytically active, even though the two copies of domain III undergo conformational rearrangements in the dimer. AEC experiments confirmed that the kinetic parameters of the R298A mutant are similar to those of wild-type M$^{pro}$. The present studies provide valuable insights into the mechanisms that control the monomer–dimer switch during the maturation process of M$^{pro}$.
We thank G.-G. Chang and the reviewers for helpful discussions. This research was supported by grants from the National Health Research Institute, Taiwan (NHRI-EX101-9947S1) and the National Science Council, Taiwan (98-2320-B-010-026-MY3) to C-YC. Portions of this research were carried out at the National Synchrotron Radiation Research Center, a national user facility supported by the National Science Council of Taiwan, ROC. The Synchrotron Radiation Protein Crystallography Facility is supported by the National Core Facility Program for Biotechnology.
### References
Anand, K., Palm, G. J., Mesters, J. R., Siddell, S. G., Ziebuhr, J. & Hilgenfeld, R. (2002). EMBO J. 21, 3213–3224.
Anand, K., Ziebuhr, J., Wadhwani, P., Mesters, J. R. & Hilgenfeld, R. (2003). Science, 300, 1763–1767.
Brown, P. H. & Schuck, P. (2006). Biophys. J. 90, 4651–4661.
Chen, S., Hu, T., Zhang, J., Chen, J., Chen, K., Ding, J., Jiang, H. & Shen, X. (2008). J. Biol. Chem. 283, 554–564.
Chen, S., Jonas, F., Shen, C. & Hilgenfeld, R. (2010). Protein Cell, 1, 59–74.
Cheng, S.-C., Chang, G.-G. & Chou, C.-Y. (2010). Biophys. J. 98, 1327–1336.
Chou, C.-Y., Chang, H.-C., Hsu, W.-C., Lin, T.-Z., Lin, C.-H. & Chang, G.-G. (2004). Biochemistry, 43, 14958–14970.
Chou, C.-Y., Chien, C.-H., Han, Y.-S., Prebenda, M. T., Hsieh, H.-P., Turk, B., Chang, G.-G. & Chen, X. (2008). Biochem. Pharmacol. 75, 1601–1609.
Chou, C.-Y., Hsieh, Y.-H. & Chang, G.-G. (2011). Methods, 54, 76–82.
Chou, C.-Y. & Tong, L. (2011). J. Biol. Chem. 286, 24417–24425.
Emsley, P. & Cowtan, K. (2004). Acta Cryst. D60, 2126–2132.
Gorbalenya, A. E., Enjuanes, L., Ziebuhr, J. & Snijder, E. J. (2006). Virus Res. 117, 17–37.
Hegyi, A. & Ziebuhr, J. (2002). J. Gen. Virol. 83, 595–599.
Hoek, L. van der, Pyrc, K., Jebbink, M. F., Vermeulen-Oost, W., Berkhout, R. J., Wolthers, K. C., Wertheim-van Dillen, P. M., Kaandorp, J., Spaargaren, J. & Berkhout, B. (2004). Nature Med. 10, 368–373.
Hsu, M.-F., Kuo, C.-J., Chang, K.-T., Chang, H.-C., Chou, C.-C., Ko, T.-P., Shu, H.-L., Chang, G.-G., Wang, A. H.-J. & Liang, P.-H. (2005). J. Biol. Chem. 280, 31257–31266.
Hsu, W.-C., Chang, H.-C., Chou, C.-Y., Tsai, P.-J., Lin, P.-I. & Chang, G.-G. (2007). J. Biol. Chem. 282, 22741–22748.
Hu, T., Zhang, Y., Li, L., Wang, K., Chen, S., Chen, J., Ding, J., Jiang, H. & Shen, X. (2009). Virology, 388, 324–334.
Kang, X., Zhong, N., Zou, P., Zhang, S., Jin, C. & Xia, B. (2012). Proc. Natl Acad. Sci. USA, 109, 14900–14905.
Lau, S. K. P., Woo, P. C. Y., Li, K. S. M., Huang, Y., Tsoi, H.-W., Wong, B. H. L., Wong, S. S. Y., Leung, S.-Y., Chan, K.-H. & Yuen, K.-Y. (2005). Proc. Natl Acad. Sci. USA, 102, 14040–14045.
Li, C., Qi, Y., Teng, X., Yang, Z., Wei, P., Zhang, C., Tan, L., Zhou, L., Liu, Y. & Lai, L. (2010). J. Biol. Chem. 285, 28134–28140.
Li, W. et al. (2005). Science, 310, 676–679.
Lin, P.-Y., Chou, C.-Y., Chang, H.-C., Hsu, W.-C. & Chang, G.-G. (2008). Arch. Biochem. Biophys. 472, 338–342.
McCoy, A. J., Grosse-Kunstleve, R. W., Adams, P. D., Winn, M. D., Storoni, L. C. & Read, R. J. (2007). J. Appl. Cryst. 40, 658–674.
Murshudov, G. N., Skubák, P., Lebedev, A. A., Pannu, N. S., Steiner, R. A., Nicholls, R. A., Winn, M. D., Long, F. & Vagin, A. A. (2011). Acta Cryst. D67, 355–367.
Otwinowski, Z. & Minor, W. (1997). Methods Enzymol. 276, 307–326.
Schuck, P. (2000). Biophys. J. 78, 1606–1619.
Shi, J., Sivaraman, J. & Song, J. (2008). J. Virol. 82, 4620–4629.
Snijder, E. J., Bredenbeek, P. J., Dobbe, J. C., Thiel, V., Ziebuhr, J., Poon, L. L. M., Guan, Y., Rozanov, M., Spaan, W. J. M. & Gorbalenya, A. E. (2003). J. Mol. Biol. 331, 991–1004.
Weiss, S. R. & Navas-Martin, S. (2005). Microbiol. Mol. Biol. Rev. 69, 635–664.
Woo, P. C. Y., Lau, S. K. P., Chu, C.-M., Chan, K. H., Tsoi, H. W., Huang, Y., Wong, B. H. L., Poon, R. W. S., Cai, J. J., Luk, W.-K., Poon, L. L. M., Wong, S. S. Y., Guan, Y., Peiris, J. S. M. & Yuen, K.-Y. (2005). J. Virol. 79, 884–895.
Wu, C.-Y., King, K.-Y., Kuo, C.-J., Fang, J.-M., Wu, Y.-T., Ho, M.-Y., Liao, C.-L., Shie, J.-I., Liang, P.-H. & Wong, C.-H. (2006). Chem. Biol. 13, 261–269.
Xue, X., Yang, H., Chen, W., Zhao, Q., Li, J., Yang, K., Chen, C., Jin, Y., Bartlam, M. & Rao, Z. (2007). J. Mol. Biol. 366, 965–975.
Xue, X., Yu, H., Yang, H., Xue, F., Wu, Z., Shen, W., Li, J., Zhou, Z., Ding, X., Zhuo, Q., Zhang, X. C., Liao, M., Bartlam, M. & Rao, Z. (2008). J. Virol. 82, 2515–2527.
Yang, H., Yang, M., Ding, Y., Liu, Y., Lou, Z., Zhou, Z., Sun, L., Mo, L., Ye, S., Pang, H., Gao, G. F., Anand, K., Bartlam, M., Hilgenfeld, R. & Rao, Z. (2003). Proc. Natl Acad. Sci. USA, 100, 13190–13195.
Zhang, S., Zhong, N., Xue, F., Kang, X., Ren, X., Chen, J., Jin, C., Lou, Z. & Xia, B. (2010). Protein Cell, 1, 371–383.
Zhao, Q., Li, S., Xue, F., Zou, Y., Chen, C., Bartlam, M. & Rao, Z. (2008). J. Virol. 82, 8647–8655.
Zhu, L., George, S., Schmidt, M. F., Al-Gharabli, S. I., Rademann, J. & Hilgenfeld, R. (2011). Antiviral Res. 92, 204–212.
**Item (1)**

*Details of error:* During the input of data for the analysis of the EW seismic response of the Unit No. 5 reactor building, a key error resulted in one erroneous figure in the data on the relationship between bending moment and strain for the earthquake-resistant walls (the bending moment value of the secondary breakpoint).

(Error) $289.9 \times 10^6 \text{ t} \cdot \text{cm} \rightarrow$ (Correction) $280.9 \times 10^6 \text{ t} \cdot \text{cm}$

*Impact on evaluation of seismic safety:* Because the response value determined in the analysis was lower than the primary breakpoint, which is itself lower than the secondary breakpoint, the results of the analysis were unchanged irrespective of the input error.

*Analysis of cause by contractor (Company A):*
• At the time of the analysis (January 2007), a method of checking that data had been correctly input had not yet been established in in-house rules.
• Because of this, although the staff members responsible for the analysis checked the screen after inputting each figure against the materials documenting the bases for input, in the one case indicated the figure was not checked on the input screen. It is believed that the figure was overlooked because figures were difficult to distinguish on the screen.
• In addition, because no staff members other than those responsible for conducting the analysis checked that the data had been correctly input, the error was not discovered.
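Why the keyed-in secondary-breakpoint value could not change the outcome can be sketched numerically. This is a minimal illustration, not the contractor's analysis code: the primary-breakpoint and response values below are hypothetical, and only the two secondary-breakpoint figures come from the report.

```python
# Trilinear moment–strain skeleton for an earthquake-resistant wall (sketch).
secondary_bp_wrong = 289.9e6   # t·cm, the erroneously keyed figure
secondary_bp_right = 280.9e6   # t·cm, the corrected figure
primary_bp = 150.0e6           # t·cm, hypothetical primary breakpoint
response = 120.0e6             # t·cm, hypothetical analysed response value

# A response below the primary breakpoint stays on the first branch of the
# skeleton curve, so the secondary-breakpoint ordinate never enters the result.
secondary_bp_used = response > primary_bp
print(secondary_bp_used)  # False: the key error cannot affect this analysis
```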
**Item (2)**

*Details of error:* During the input of data for the analysis of the horizontal seismic response of the Unit No. 5 exhaust stacks (external stacks), a key error resulted in one erroneous figure for the second moment of area of the base cross-section.

[Diagram]

(Error, actual input value) $1343 \times 10^8 \text{ cm}^4 \rightarrow$ (Correction) $1344 \times 10^8 \text{ cm}^4$

The correct figure was included in the model diagram reproduced in the seismic safety evaluation report, but an erroneous figure was input to the computer program.

*Impact on evaluation of seismic safety:* While the results of the analysis using the correct data indicated that some corrections were necessary in the report, it has been determined that the error had no impact on the evaluation of the seismic safety of the reactor facilities.

[Diagram: Comparison of response before and after correction of input data (Direction: 90° horizontal)]

*Analysis of cause by contractor (Company A):*
• At the time of the analysis (March 2007), a method of checking that data had been correctly input had not yet been established in in-house rules.
• Because of this, although the staff members responsible for the analysis checked the screen after inputting each figure against the materials providing bases for input, in the one case indicated the figure was not checked on the input screen. It is believed that the figure was overlooked because figures were difficult to distinguish on the screen.
• In addition, because no staff members other than those responsible for conducting the analysis checked that the data had been correctly input, the error was not discovered.
**Item (3)**

*Details of error:* In three separate cases, figures for the axial springs, part of the data for the analysis of vertical seismic response in the Unit No. 5 seawater heat exchanger building, which should have been input based on the 1999 edition of the Standard for Structural Calculation of Reinforced Concrete Structures ("RC Standard" below), were instead input based on the 1991 edition.

*Impact on evaluation of seismic safety:* While the results of the analysis using the correct data indicated that some corrections were necessary in the report, the error had no impact on the evaluation of the seismic safety of the reactor facilities.

| Subject of follow-up evaluation | Evaluation status | Generated value (MPa) | Evaluation benchmark value (MPa) |
|---------------------------------|-------------------|-----------------------|----------------------------------|
| Reactor equipment cooling seawater system pipes | Before follow-up evaluation | 175 | 354 |
| | After follow-up evaluation | 176 | 354 |
| Reactor equipment cooling seawater system pipe supports | Before follow-up evaluation | 203 | 245 |
| | After follow-up evaluation | 200 | 245 |

[Diagram: Comparison of response before and after correction of input data (black lines: errors; red lines: corrections)]

*Analysis of cause by contractor (Company A):*
- Corrections were made to analysis models formulated during the design process to bring them into line with the latest standards, and these corrected models were employed in the analyses for the evaluation of seismic safety. The analysis models formulated at the time of design (around July 1998) used specifications (weight, axial springs, and vertical ground springs) based on the 1991 RC Standard; because the RC Standard had since been revised in 1999, correction of the specifications was necessary at the time of the analysis (January 2007).
- At the time of the analysis, in-house rules had not been established requiring the formulation of documents clearly setting out the bases for input or providing methods for verifying that input data had been correctly entered. Because of this, the staff members responsible for the analysis brought the design data up on screen and replaced the figures calculated under the 1991 RC Standard, one by one, with figures they calculated on a calculator under the 1999 RC Standard, without formulating materials documenting the bases for input that indicated where corrections had been made. During this process, the values for the axial springs were overlooked and not corrected.
- In addition, following correction of the data, neither the staff members responsible for conducting the analysis nor any other staff members verified whether the data had been appropriately corrected, and the errors were therefore not discovered.
**Item (4)**

*Details of error:* Due to a misunderstanding, an incorrect coefficient (0.4) was employed in calculating the vertical load for the evaluation of the reactor building ceiling crane runway girders for Units No. 3 and 4 (one instance for each unit, for a total of two instances).

Calculation of the vertical load due to the weight of the runway girder:

\[ \text{(Error) } W \times \alpha_v \times 0.4 \rightarrow \text{(Correction) } W \times \alpha_v \times 1.0 \]

($W$: weight of runway girder; $\alpha_v$: vertical seismic intensity)

*Impact on evaluation of seismic safety:* While the results of the analysis using the correct data indicated that some corrections were necessary in the report, the error had no impact on the evaluation of the seismic safety of the reactor facilities.

| Subject of follow-up evaluation | Evaluation status | Generated value (N/mm²) | Evaluation benchmark value (N/mm²) |
|---------------------------------|-------------------|-------------------------|-------------------------------------|
| Unit No. 3 runway girders | Before follow-up evaluation | 245 | 325 |
| | After follow-up evaluation | 247 | 325 |
| Unit No. 4 runway girders | Before follow-up evaluation | 171 | 235 |
| | After follow-up evaluation | 172 | 235 |

Note) Because loads other than the weight of the runway girders themselves are dominant in relation to the generated values for both Units No. 3 and 4, the errors had minimal impact on the generated values.

*Analysis of cause by contractor (Company A):*
- At the time of the analysis (around January 2007), the formulation of documents clearly setting out the bases for input was not stipulated in in-house rules. Because of this, while materials were formulated specifying the weight and other data employed in the analysis, the staff members responsible for the analysis did not formulate materials documenting the bases for input that specified the method used to calculate vertical loads from these data.
- As a result, the staff members responsible for conducting the analysis confused this calculation with the calculation combining horizontal and vertical seismic force, and multiplied the weight of the runway girders by the coefficient employed in that combination (0.4).
- In addition, the error was overlooked because neither the staff members responsible for conducting the analysis nor other staff members checked the method employed to calculate the vertical loads.
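The coefficient mix-up reduces to one line of arithmetic. A sketch with hypothetical figures (`W` and `alpha_v` below are invented for illustration; only the 0.4 and 1.0 coefficients come from the report):

```python
W = 50.0        # weight of runway girder (t), hypothetical value
alpha_v = 0.3   # vertical seismic intensity, hypothetical value

# 0.4 is the factor used when combining horizontal and vertical seismic
# forces; it was misapplied here. The vertical load from the girder's own
# weight takes the full coefficient of 1.0.
erroneous_load = W * alpha_v * 0.4
corrected_load = W * alpha_v * 1.0
print(erroneous_load, corrected_load)  # 6.0 15.0 — the error understates the load by 60 %
```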
**Item (5)**

*Details of error:* An error was made in one case in the figure for maximum horizontal response acceleration, part of the input data used in the analysis of the Unit No. 5 emergency diesel generator system pipes.

(Error) 1.13 (G) → (Correction) 1.16 (G)

The floor response spectrum and the maximum response acceleration calculated in the analysis of the seismic response of the reactor building were employed as input data in the analysis conducted for the pipes. Because the pipes under analysis run through multiple stories of the building, the highest acceleration value among those for the individual stories should have been selected. However, due to an error, a value other than the highest was selected.

*Impact on evaluation of seismic safety:* The evaluation of pipe stress employs the higher of the accelerations given by the maximum response acceleration and the floor response spectrum. For the pipe in question, the acceleration given by the floor response spectrum is greater than the maximum response acceleration, and there is therefore no change in the results of the evaluation of the pipe.
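The selection step that failed is simply a maximum over the stories a pipe run crosses. A sketch with hypothetical per-story values; only the 1.13 and 1.16 figures come from the report:

```python
# Hypothetical maximum horizontal response accelerations (G) by story for a
# pipe run spanning several floors of the reactor building.
story_accel = {"1F": 0.98, "2F": 1.05, "3F": 1.13, "4F": 1.16}

# The highest of the per-story values should govern the pipe analysis input.
governing = max(story_accel.values())
print(governing)  # 1.16 — not the 1.13 that was mistakenly selected
```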
**Item (6)**

*Details of error:* In the input data employed in simulations of changes in water levels due to the hydraulic characteristics of the water intake equipment for Units No. 3, 4 and 5, the errors shown below were made in the loss coefficients for the connections between the intake water towers and intake tunnels and between the intake tunnels and intake water ponds (two errors for Unit No. 3, two for Unit No. 4, and three for Unit No. 5).

- Coefficients for structures of the same type that closely resembled the structures under analysis were mistakenly employed.
- The configurations of Units No. 3 and 4 differ slightly, so their loss coefficients also differ slightly; however, the figure for Unit No. 3 was used for Unit No. 4 in the belief that they were the same.
- Key errors were made when inputting data into the formulas employed at the stage of the calculations providing the bases for input.

*Impact on evaluation of seismic safety:* Follow-up analyses using the correct data produced no major differences in results on either the water-level-increase side or the water-level-decrease side, demonstrating that the errors had no impact on the evaluation of the seismic safety of the reactor facilities.
Details of errors in loss coefficients:

| Section | Flow side | Unit | Error | Correction |
|---------|-----------|------|-------|------------|
| Connection between intake water tower and intake tunnel | Reverse flow | No. 5 | 0.613 | 0.664 |
| Connection between intake tunnel and intake water pond | Forward flow | No. 3 | 0.343 | 0.348 |
| | | No. 4 | 0.365 | 0.369 |
| | | No. 5 | 0.525 | 0.528 |
| | Reverse flow | No. 3 | 1.561 | 0.562 |
| | | No. 4 | 1.555 | 0.559 |
| | | No. 5 | 1.565 | 0.568 |
*Analysis of cause by contractors:*

Contractor (Company B):
- Company B verified that the members of staff of Company S responsible for conducting the analysis, and other members of staff, had checked the validity of the bases for input, but did not itself check the validity of the data recorded in the documentation of the bases for input.

Contractor (Company C):
- At the time of the analysis (October 2006), the formulation of materials showing the bases for input was not stipulated in our in-house rules. Because of this, although the staff members responsible for conducting the analysis formulated materials recording the data employed in the analysis, they did not record the sources, bases, or calculation procedures for the loss coefficients. This is believed to have led them to use incorrect figures in calculating the loss coefficients in the input data; mistakes in the calculations were also made through key errors.
- In addition, the staff members responsible for conducting the analysis did not become aware of the errors because they did not conduct sufficient checks after calculating the loss coefficients, and the data were not checked by any other members of staff.
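The cause analyses above share one theme: no independent check of entered values against a documented basis. A minimal sketch of such a cross-check follows; the function and key names are invented for illustration and are not taken from any contractor's procedure.

```python
def find_discrepancies(entered, documented, tol=1e-9):
    """Return the keys whose entered value is missing or differs from the
    documented basis value by more than the tolerance."""
    return sorted(k for k in documented
                  if k not in entered
                  or abs(entered[k] - documented[k]) > tol)

# The keyed-in figures from items (1) and (2) against their documented bases:
entered = {"secondary_bp_tcm": 289.9e6, "base_second_moment_cm4": 1343e8}
documented = {"secondary_bp_tcm": 280.9e6, "base_second_moment_cm4": 1344e8}
print(find_discrepancies(entered, documented))  # both key errors are flagged
```

Such a check only works when the documented bases for input exist in machine-comparable form, which is exactly what the later in-house rules required.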
Thomas Barbour
NIHIL · NISI · CRUCE
M. E. Bloch, Med. Dr.
Ichthyologie ou histoire naturelle des poissons.
Vols. V & VI, containing 216 plates.
The binder lost the 20th plate, which depicts Salmo salar.
29-11-37
Dr. M. Wunderlich
1884.
Louise Bloch
gefürstet
Weimar 1889
1289
1940
Cyprinus Erythrophthalmus.
Die Plötze.
1. Taf:
The Rud.
Rotengle.
Krüger, jun: del.
Bodenehr sc.
1.2.3
Cyprinus Rutilus.
Das Rothauge.
La Rosse.
The Roach.
CYPRINUS NASUS.
Die Nase.
1758
Cyprinus Vimba.
Die Zärthe.
Cyprinus Dobula
Der Döbel
Krüger jun. del.
Bodenehr sc.
17
Cyprinus Jeses.
Der Aland od. Göse.
Le Vilain ou le Meunier.
The Chub.
A.J. St.
Cyprinus Aspius.
Der Rapfen.
Krüger, inn: del:
Bodenehr sc.
Fig. 1: Cyprinus bipunctatus.
Die Alun = Bleake
Le Surlin
The Sperlin
2.
Cypr: Gobio.
Der Gründling
Le Goujon
The Gudgeon
3. Cypr. Amarus.
Der Bitterling
La Bouvière
The Silver-Carp
4. Cypr:Alburnus.
Der Uckeley
L'Ablette
The Bleak
5. Cypr: Phoxinus.
Die Elritze
Le Véron
The Minnow
Cyprinus Ballerus.
Die Zope.
Cyprinus Blicca.
Die Güster.
La Bordelière.
The Blicca
CYPRINUS CARASSIUS.
Die Karausche.
Le Carassin.
The Crucian.
Cyprinus Gibelio.
Die Giebel.
4/5
Cyprinus Brama
Der Bley od. Brassem.
La Brême
The Bream
Krieger jun. del
Bodenscher sc.
2019
Cyprinus Tinca
Die Schleie.
The Tench.
10
Cyprinus Tinca auratus.
Der Goldfisch.
1940
Cyprinus Carpio
Der Karpfen
La Carpe
The Carp
Rex Cyprinorum.
Der Spiegelkarpfen.
La Reine des Carpes.
The Royal-Carp.
Cyprinus Barbus
Die Barbe
Le Barbeau
The Barbel
Krüger, jun. del.
Bodenehr, Sc.
Fig. 1
Fig. 2
Fig. 3
Fig. 4
Fig. 5
Fig. 6
Fig. 7
Fig. 8
Fig. 9
Fig. 10
Fig. 11
Fig. 12
Krüger jun. del.
Salmo Trutta.
Die Lachsforelle.
La Truite.
The Trout.
Salmo fario
Die Teichforelle
The Trout
Salmo Fario
Die Wald- oder Stein-Forelle
La Truite brune
The brown-Trout
Salmo Thymallus
Die Äsche.
The Grayling.
L'Ombre d'Auvergne.
Salmo Lavaretus
Der Schnäpel
Le Lavaret
The Gwiniad
Krüger jun. del.
Bodenehr sc.
Salmo Thymallus latus
Die breite Aalebe.
La Thymale large.
The Broad-Gwiniad.
Salmo Maræna
Die große Maräne
La grande Maréne
The Great Marona
Krüger jun. del.
Bodenehr sc.
Salmo Eperlanus
Der Stint
L'Éperlan
The Smelt.
Salmo Eperlano-Marinus
Der See-Stint.
L'Éperlan de Mer.
The Smelt.
Salmo Maraenula
Die kleine Maräne.
La petite Maräne.
The Small Marane.
Fig. 2.
**CLUPEA SPRATTUS**
Der Breitling
The Sprat
La Sardine.
Fig. 1.
**CLUPEA HARENGUS**
Der Hering
Le Hareng
The Herring
Fig. 1 Clupea Alosa.
Spira Alba.
Fig. 2 Clupea Encrasicolus.
Der Anjoris.
L'Anchois.
The Anchovy.
1986
Fig. 1. Cobitis Fossilis
Der Schlammpeitzger
La Loche d'Etang
The Muddy Loach
Fig. 2. Cobitis Taenia
Der Steinpfeffer
Loche de rivière
The Ribbon Loach
Fig. 3. Cobitis Barbatula
Die Schmerle
La Loche
The Bearded Loach
Esox Lucius.
Der Hecht.
Le Brochet.
The Pike.
Krüger, jun. del:
A. F. Schmidt Minor. f.
Esox Belone
Der Hornhecht.
L'Orphie
The Ghar.
Silurus Glanis.
Der Wels.
Le Silure.
The Sheat-fish
1. Silurus Clarias.
Der Langbart.
Le Barbare.
The Longbearded.
3. Silurus Ascita.
Der Plat-bauch.
L'Asciote.
The Crack-Belley.
Cyprinus Jdus.
Der Trudding.
L'Éde.
The Jdus-Carp.
Krüger, jun. del.
C. F. Schmidt, sc.
Cyprinus Cultratus
Die Ziege
Le Rapon
The Knife-Carp.
Gobius Lanceolatus
Die Lanzettgrundel
La Lancette
The Lancet-Goby
Gobius Niger
Die Meergrundel
Le Bouterot
The Black-Goby
CATAPHRACTUS.
1. Cataphractus
2. Cottus Gobio
Fig. 2
COTTUS GOBIO.
Der Kaulkopf.
Le Chebot.
The River Bullhead.
Cottus Scorpius.
Der See=Scorpion
Le Scorpion
The Father Lascher
16
ZEUS FABER.
Der Sonnenfisch.
La Dorée.
The Dorée.
4/5
PLEURONECTES PLATESSA.
Die Scholle.
La Plie.
Pleuronectes Rhombus
Das Tierreich oder Fluthall
La Perle
The Pearl
Pleuronectes flesus.
Der Thorbut od Flunder.
Le Fletz.
The Flounder.
Pleuronectes Solea
Die Zunge.
La Sole.
The Sole
Fischer j. a del
C.F. Schmidt j.
Pleuronectes Limanda
Die Glahrke od. Kliesche
La Limande
The Dab
Krüger, jun del
A.F. Schmidt, Jr
PLEURONECTES HIPPOGLOSSUS.
Der Heilige Butt.
The Holybutt.
Le Flétan.
PLEURONECTES ARGUS.
Der Argus.
L'Argus.
PLEURONECTES MAXIMUS
Der Steinbut.
Le Turbot.
The Turbot.
Pleuronectes Passer
Der Linke Stachelflunder
Le Moineau de Mer
The Left Flounder
Kruger jun. del.
F. Schreber f.
PERCA LUCIOPERCA.
Der Zander.
Le Zander.
The Pike-Perch.
19
PERCA FLUVIATILIS.
Der Barsch.
La Perche.
The Perch.
2 Gasterosteus Spinachia.
Der Dornfisch.
The Fifteen Spined Stickleback.
3 Gasterosteus Aculeatus.
Der Stichling.
L' Espinoche.
The Stickleback.
4 Gasterosteus Pungitius.
Der Kleine See-Stichling.
La petite Espinoche.
The lesser Stickleback.
2 Perca Cernua.
Der Kaulbarsch.
12
Scomber Scomber.
Die Makrele.
Le Maquereau
The Mackerel.
Scomber Thunnus.
Der Thunfisch
Le-Thon
The Tunny
Scomber Trachurus.
Der Stader.
Le Maquereau botard.
The Sow.
The following is a list of the most common causes of death in the United States, according to the National Center for Health Statistics (NCHS). The data is based on the 2019 National Vital Statistics System.
| Rank | Cause of Death | Percentage |
|------|-------------------------|------------|
| 1 | Heart disease | 64.7% |
| 2 | Cancer | 23.4% |
| 3 | Chronic lower respiratory diseases | 5.3% |
| 4 | Accidents | 4.8% |
| 5 | Stroke | 2.8% |
| 6 | Diabetes | 2.3% |
| 7 | Alzheimer's disease | 1.9% |
| 8 | Nephritis, nephrotic syndrome and nephrosis | 1.7% |
| 9 | Influenza and pneumonia | 1.6% |
| 10 | Septicemia | 1.5% |
Note: The percentages may not add up to 100% due to rounding.
Mullus Surmuletus
Der große Rothbart
Le Surmulet
The striped Surmullet
Trigla Gurnardus
Der graue Seehahn
The grey gurnard
Le gurneau
The following is a list of the most important and influential books in the field of psychology, arranged in chronological order by publication date:
1. *Principles of Psychology* by William James (1890)
2. *The Interpretation of Dreams* by Sigmund Freud (1900)
3. *The Behavior of Organisms: An Experimental Analysis* by B.F. Skinner (1938)
4. *The Social Animal* by David Lykken and Richard Nisbett (1995)
5. *The Selfish Gene* by Richard Dawkins (1976)
6. *Theories of Personality* by Robert C. Zuckerman (1981)
Trigla Cuculus
Der Rothe Seichel
Le Rouget
The Red Gurnard
J.S. Probst
The following is a list of the most important and frequently used terms in the field of computer science:
1. Algorithm: A step-by-step procedure for solving a problem or performing a task.
2. Data Structure: A way of organizing data that allows efficient access, modification, and manipulation.
3. Database: An organized collection of data stored in a computer system.
4. Database Management System (DBMS): Software that manages databases and provides an interface for users to interact with them.
5. Encryption: The process of converting information into a coded form so that it can be securely transmitted or stored.
6. Hashing: A process of converting data into a fixed-size string of characters, typically used for data integrity checks.
7. Interface: A way for two systems to communicate with each other.
8. Network: A collection of computers and devices connected together to allow communication and sharing of resources.
9. Operating System (OS): A software program that manages computer hardware and software resources and provides common services for computer programs.
10. Programming Language: A formal language designed to express computations that can be performed by a computer.
11. Query: A request for information from a database.
12. Security: The protection of data and systems from unauthorized access, use, disclosure, disruption, modification, or destruction.
13. Software: A set of instructions that tell a computer what to do.
14. System: A collection of interrelated components that work together to achieve a common goal.
15. User Interface (UI): The part of a computer system that interacts with the user, allowing them to input commands and receive feedback.
16. Virtual Machine (VM): A software implementation of a computer system that runs on top of another computer system.
17. Web Application: A software application that runs on a web server and is accessed through a web browser.
18. Wireless Network: A network that uses radio waves to transmit data between devices.
19. XML: eXtensible Markup Language, a markup language that defines a set of rules for encoding documents in a format that is both human-readable and machine-readable.
20. YAML: Yet Another Markup Language, a data serialization language that is easy to read and write, and is commonly used for configuration files.
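Term 6 above (hashing as an integrity check) can be sketched with the standard library. This is a minimal illustration, not tied to any particular system; the byte strings are made up:

```python
# Illustrating "hashing" from the list above: a fixed-size digest
# used as a data-integrity check. Standard library only.
import hashlib

def digest(data: bytes) -> str:
    """Return a fixed-size hex string regardless of input length."""
    return hashlib.sha256(data).hexdigest()

original = b"an organized collection of data"
received = b"an organized collection of data"
tampered = b"an Organized collection of data"

# Equal digests -> the bytes are (almost certainly) unchanged in transit.
print(digest(original) == digest(received))  # True
print(digest(original) == digest(tampered))  # False
```

Note the digest length is constant (64 hex characters for SHA-256) no matter how large the input, which is what makes it practical for integrity checks.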
TRIGLA HIRUNDO
Die Seeschwalbe
Le Pétrel
The Tub-Fish
Trachinus draco
Das Petermännchen
La Vive
The Common Weever
L.F. Leucki
The sky is filled with birds, their wings spread wide as they soar through the air. The clouds below them are thick and gray, creating a dramatic backdrop for the scene. The birds are scattered across the sky, some flying alone while others fly in small groups. The overall atmosphere is one of freedom and movement, as the birds take flight and explore the vast expanse of the sky.
Gadus Aeglefinus
Der Schellfisch
L'Aigrefin
The Haddock
C.F. Gürich Sc.
GADUS CALLARIAS
Der Dorsch.
Le Dorse.
The Dorse.
C.F. Günther, sc.
GADUS MORHUA
Der Kabeljau
La Morue
The Cod Fish
C.F. Gürsch Sc.
GADUS MERLANGUS
Der Witling
Le Merlan
The Whiting
C.F. Gürten
Gadus Carbonarius
Der Stöhrer
Le Colin
The Coal Fish
Fig 1. GADUS MINUTUS
Der Zwergdorsch
l'Officier
The Poor
Fig 2. GADUS TAU
Der Krötenfisch
Le Crapaud de Mer
The Toad Codfish
The first step in creating a new project is to select the project type and name your project.
GADUS POLLACHIUS.
Der Pollac.
Le Lieu.
The Pollack.
The first step is to identify the problem.
GADUS MOLVA
Der Leng
La Linou
The Ling
The following is a list of the most important and influential books in the field of psychology, arranged in chronological order by publication date:
1. *Principles of Psychology* (1890) by William James
2. *The Interpretation of Dreams* (1900) by Sigmund Freud
3. *The Principles of Behavior: A Behaviorist's World View* (1914) by John B. Watson
4. *The Behavior of Organisms: An Experimental Analysis* (1937) by B.F. Skinner
5. *The Social Animal* (1953) by David Lykken
6. *The Selfish Gene* (1976) by Richard Dawkins
7. *The Selfish Brain* (2012) by Michael Muthukrishnan
These books have had a profound impact on the development of psychological theory and practice, shaping our understanding of human behavior and mental processes.
70.
GADUS LOTA.
Die Quappe.
La Lotte.
The Burbot.
Fig. 1. Blennius Gunellus
Der Butterfisch
The Butterfish
Le Papillon de Mer
Fig. 2. Blennius Pholis
Die Meerlerche
La Perce-pierre
The Bulcard
Le Papillon d'emer
F.F. Habermann Sc.
Blennius Viviparus
Die Aalmutter
The Viviparous Blenny
La Perce-pierre Vivipare
Muraena Anguilla
Der Aal.
L'Anguille.
The Eel.
ANARHICHLAS LUPUS.
Der Seewolf.
Le Loup Marin.
The Wolf fish.
The first step in the process is to identify the problem and define the scope of the project. This involves gathering information about the current state of affairs, identifying the root causes of any issues, and determining what needs to be done to address them. Once the problem has been clearly defined, the next step is to develop a plan for addressing it.
This plan should include specific goals, objectives, and timelines for achieving them. It should also outline the resources that will be needed to implement the plan, such as personnel, equipment, and funding. In addition, the plan should identify potential risks and how they can be mitigated.
Once the plan has been developed, it is important to communicate it to all stakeholders involved in the project. This includes not only those who will be directly affected by the changes but also anyone else who may have an interest in the outcome. Effective communication is key to ensuring that everyone is on the same page and working towards a common goal.
Throughout the implementation phase, regular progress reports should be provided to keep everyone informed about the status of the project. These reports should include updates on key milestones achieved, any challenges encountered, and proposed solutions. By keeping everyone up-to-date, it is possible to maintain momentum and avoid delays or setbacks.
Finally, once the project has been completed, it is important to evaluate its success. This involves comparing the actual results against the original goals and objectives set out in the plan. Any deviations from the expected outcomes should be investigated to determine their cause and take corrective action if necessary. Additionally, lessons learned during the project should be documented so that they can be applied to future initiatives.
In summary, implementing change requires careful planning, effective communication, and ongoing evaluation. By following these steps, organizations can increase their chances of achieving positive outcomes and creating lasting value.
2. AMMODYTES TOBIANUS.
Der Sandaal.
Le Lançon.
The Launce.
1. EMBRIO SQUALI.
Ein ungeborener Hayfisch.
L'Embriion du Requin.
An Embryo of a Shark.
76.
Xiphias gladius.
Der Schwertfisch.
L'Empereur.
The Sword-Fish.
PETROMIZON MARINUS
Die Lamprelle
La Lamproie
The Lamprey
The first step in creating a new project is to select the project type.
Fig. 3.
**Petromyzon Planeri**
Das kleine Neunauge
La Lamproie de Planer
The Planer's Lamprey
Fig. 2.
**Petromyzon Branchialis**
Der Querder
Le Lamprillon
The Pride
Fig. 1.
**Petromyzon Fluviatilis**
Neunauge
La Lamproie
The Lesser Lamprey
C. F. Giesch. fec.
Raja Batis
Der Glattrochen
La Raie ombrée
The Skate
80.
RAIA OXYRINCHUS
Die Spitznase.
La Raie lisse.
The Sharp Nosed R.
Raja Aquila.
Der Meeradler.
L'Aigle-Poisson
The Sea-Eagle
RAJA PASTINACA,
Der Stechroche.
La Pastenague.
The Fire Flair.
The following is a list of the most important and influential books in the field of psychology, arranged in chronological order by publication date.
1. *Psychopathia Sexualis* by Richard von Krafft-Ebing (1886)
2. *The Interpretation of Dreams* by Sigmund Freud (1900)
3. *The Principles of Psychology* by William James (1890)
4. *The Behavior of Organisms: An Experimental Analysis* by B.F. Skinner (1938)
5. *The Social Animal* by David Lykken and Richard Bouchard (1967)
6. *The Selfish Gene* by Richard Dawkins (1976)
7. *Theories of Personality* by Robert C. Zuckerman (1980)
8. *The Social Animal* by David Lykken and Richard Bouchard (1990)
9. *The Selfish Gene* by Richard Dawkins (1990)
10. *Theories of Personality* by Robert C. Zuckerman (1990)
83.
RAJA CLAVATA
Der Nagelroche
La Raie bouclée
The Thornback
Raja rubus.
Der Dornroche.
La Ronse.
The Rough Ray.
SQUALUS ACANTHIAS.
Der Dornhay.
L'Aiguillat.
The Picked Dog.
SQUALUS GLAUCUS
Der blaue Hai.
Le Cusnet bleu.
The Blue Shark.
Lophius piscatorius
Der See Teufel,
Le Diable de Mer,
The Sea Devil.
C.F. Gmelin f.
ACIPENSER STURIO
Der Stor
L'Esturgeon
The Sturgeon
39.
ACIPENSER RUTHENUS.
Der Sterlet.
Le Sterlet.
The Russian Sturgeon.
Cyclopterus lumpus.
Der Seehase.
The Lump.
Le Lievre de Mer.
Fig. 3.
SYNGNATHUS OPHIDION
Die Meerschlange.
La vipere de mer.
The Sea Snake.
Fig. 2.
SYNGNATHUS ACUS
Die Trompete.
La Trompè.
The Pipe Fish.
Fig. 1.
SYNGNATHUS TYPHLE
Die Meernadel.
L'Aiguille de Mer.
The Needle Fish.
Delphinus Phocoena
The Porpoise.
Cyprinus Auratus
Der Gold Karpfen
La Dorade Chinoise
The Gold Fish
Cyprinus Auratus
Der Goldkarpfen
Le Kin-gu
The Gold Fish
The moon is a natural satellite that orbits around the Earth, and it is the only celestial body that has been visited by humans. The moon's surface is covered with craters, mountains, and valleys, and it has a thin atmosphere composed mainly of helium and neon. The moon's gravity affects the Earth's tides, and it also influences the Earth's rotation and orbit. The moon is an important source of information about the early history of our solar system, and it continues to be a subject of scientific research and exploration.
Cyprinus Buggenhagii.
Der Loiter.
La Carpe de Buggenhague.
The Buggenhagere Carp.
Cyprinus Orfus.
Die Orfe.
L'Orfe.
The Orf.
Cyprinus Leuciscus
Der Lauben
La Vandoise
The Dace
Cyprinus Aphyta
Der Spirling
L'Aphie
The Spirling
Gasteropelecus
Das Gärtnermesser
La Serpe
The Hatchet-Belly
98.
Salmo Salar Mas.
Der Hakenlachs.
Le Saumon Bécard.
The Male-Salmon.
99.
Salmo Salvelinus.
Der Sälbling.
L'Omble.
The Sulteling.
Salmo Hucho
Der Heuch
Le Heuch
The River Salmon
101.
Salmo Umbla.
Der Ritter.
L'Ombre Chevalier.
The Umble.
Salmo Goedynii
Die See Forelle
La Truite de Mer
103.
Salmo Schifermüllerii
Der Silberlachs
Le Saumon argenté
The Schifermüller's Salmon
The first step is to identify the problem. What is the issue that needs to be addressed? Once the problem is identified, the next step is to develop a plan to solve it. This plan should include specific actions and timelines for completion. It's important to involve all relevant stakeholders in the planning process to ensure that everyone is on the same page and can contribute to finding a solution.
Once the plan is in place, it's time to implement it. This may involve making changes to processes, procedures, or systems. It's important to communicate these changes clearly to all affected parties to ensure that everyone understands what is expected of them.
Finally, it's important to monitor the progress of the plan and make adjustments as needed. This may involve revisiting the problem statement and adjusting the plan accordingly. It's also important to celebrate successes along the way and learn from any setbacks.
In summary, addressing a problem requires identifying the issue, developing a plan, implementing it, monitoring progress, and making adjustments as needed. By following these steps, organizations can effectively address problems and achieve their goals.
104.
Salmo Alpinus
Die Alpforelle.
La Truite des Alpes
The Charr.
Salmo Wartmanni
Das Blaufelchen.
L'Ombre bleu
The Blue Salmon
PERCA ZINGEL
Der Zingel
Le Cingle
The Zingel
Fig. 1. PERCA ASPER
Der Stöber
L'Apron
Fig. 2.
Fig. 3. GOBIUS JOZO.
Die Blaugrundel
Le Goujon bleu
The blue Goby
Cottus Quadricornis
Der Seebull
La Quatre-corne
The Horned Bull-Head
Fig. 1
Pegasus Draconis
Der See-Draake
Le Dragon de mer
The Sea-Dragon
Fig. 2
Syngnathus Hippocampus
Das Seepferdchen
Le Cheval marin
The Sea-Horse
Fig. 3
Syngnathus Pelagicus
Der Korallenrauger
La Trompette du Cap
The Sea-Pipe
Lophius Vespertilio
Der Einhornteufel
La Chauve-souris de Mer
The Sea-Bat
Lophius Histrio
Die Seekröte
Le Crapaud de mer
The American Toad Fish.
SQUALUS CANICULA
Der getigerte Hay
La Roussette tigrée
The Bounce
Squalis Fasciatus
Der bandirte Hay
Le Requin rayé
The banded Shark
The following is a list of the most important and commonly used terms in the field of computer science:
1. Algorithm: A step-by-step procedure for solving a problem or performing a task.
2. Data Structure: A way of organizing data that allows efficient access, modification, and manipulation.
3. Database: An organized collection of data stored in a computer system.
4. Database Management System (DBMS): Software that manages databases and provides an interface for users to interact with them.
5. Encryption: The process of converting information into a code so that only authorized parties can understand it.
6. Hashing: A process of converting data into a fixed-size string of characters, typically used for data integrity checks.
7. Interface: A boundary between two systems or components, allowing them to communicate with each other.
8. Object-Oriented Programming (OOP): A programming paradigm that organizes code into objects, which contain both data and methods for manipulating that data.
9. Protocol: A set of rules governing how data is transmitted between different systems or devices.
10. Query: A request for information from a database, typically using a query language such as SQL.
11. Security: The measures taken to protect data and systems from unauthorized access, modification, or destruction.
12. Software: A collection of instructions that tell a computer what to do.
13. System: A collection of hardware and software components that work together to perform specific tasks.
14. User Interface (UI): The part of a computer program that interacts with the user, typically through a graphical display.
15. Virtual Machine (VM): A software implementation of a computer system that runs on top of another operating system, allowing multiple virtual machines to run simultaneously on a single physical machine.
Squalus Catulus,
Der kleingefleckte Hay.
La Roussette.
The Lesser Rough Hound.
Squalus Centrina.
Low Scadwin.
La Centrine.
The Centrina.
Squalus Squatina.
Der Meerengel.
L'Angelet de mer.
The Angel-Fish.
Squalus Zygaena.
Der Hammerfisch.
Le Marteau.
The Balance Fish.
Squalus Galeus
Die Meersau
Le Milandre
The Tope
Squalus Carcharias
Der Menschenfresser
Le Requin
The White Shark
Embrio Squali Pristis.
Ein ungeborner Sägefisch.
L'Embrion de la Scie.
The Saw-Fish.
The first step in the process is to identify the target audience and their needs. This involves conducting market research to understand the demographics, psychographics, and behaviors of potential customers. Once the target audience is identified, the next step is to develop a marketing message that resonates with them. This can be done through various channels such as social media, email, and traditional advertising.
Another important aspect of marketing is to create a strong brand identity. This includes developing a logo, tagline, and other visual elements that represent the company's values and mission. A consistent brand identity helps to build trust and loyalty among customers.
In addition to these steps, it is also important to track and analyze the effectiveness of marketing efforts. This can be done through various metrics such as website traffic, conversion rates, and customer feedback. By monitoring these metrics, companies can make data-driven decisions to optimize their marketing strategies.
Overall, effective marketing requires a comprehensive approach that takes into account the unique needs and preferences of the target audience. By following these steps, companies can create successful marketing campaigns that drive business growth and success.
122.
RAYA TORPEDO.
Der Zitterroche.
La Torpille.
The Cramp-Fish.
Fig. 3.
Cyclopterus Liparis
Der Bartfisch
Le Cycloptère
The Unctuous Sucker
Fig. 2.
Centriscus Scutatus
Der Messerfisch
La Bécasse bouclée
The Knife Fish
Fig. 1.
Centriscus Scolopax
Der Schneppenfisch
La Bécasse
The Snipe Fish
The first step in the process is to identify the problem and define the scope of the project. This involves gathering information about the current state of affairs, identifying the root causes of any issues, and determining what needs to be done to address them. Once these steps have been completed, the next step is to develop a plan for addressing the problem. This plan should include specific goals, objectives, and timelines, as well as a list of resources that will be needed to implement the solution.
Once the plan has been developed, the next step is to implement it. This may involve working with other departments or teams within the organization to ensure that everyone is on board and committed to achieving the desired results. It may also require making changes to existing processes or procedures in order to achieve the desired outcomes.
Finally, once the project has been completed, it is important to evaluate its success and make any necessary adjustments. This may involve conducting surveys or interviews with stakeholders to gather feedback, reviewing financial reports to assess profitability, or analyzing data to determine whether the project met its original goals. By taking these steps, organizations can ensure that their projects are successful and that they are able to achieve their desired outcomes.
Chimæra Monstrosa
Die Seeratze
La Chimère
The Chimera
DIONON ATINGA.
Der lange Stachel-fisch.
L. linge.
The long-Diodon.
The following is a list of the names of the members of the Board of Directors of the Company, and the number of shares of Common Stock beneficially owned by each such director:
Diodon Hystrix
Der runde Stachelfisch
Le Guara
Diodon orbicularis
Der Stachel-Bugel
L'Orbe-herisson
The prickly Bottlefish
The following is a list of the most common types of birds that can be found in the United States:
- American Robin
- Blue Jay
- Cardinal
- Chickadee
- Mourning Dove
- Pigeon
- Sparrow
- Starling
- Turkey Vulture
- Woodpecker
These birds are often seen in urban and suburban areas, as well as in rural settings. They can be found in parks, gardens, and backyards, and they are known for their distinctive calls and songs.
If you are interested in learning more about these birds, there are many resources available online and in books. You can also consider joining a bird-watching club or organization to meet other bird enthusiasts and learn more about the world of birds.
DIODON MOLA,
Der schwimmende Kopf,
La lune,
The sun-fish.
129.
ACIPENSER HUSO.
Der Hausen.
Le grand Esturgeon.
The great Sturgeon.
Ostracion Triqueter,
Das Stachellose Dreieck,
Le Coffre lisse,
The Trunk-fish.
131.
Ostracion Concatenatus.
Der Kettenfisch.
Le Coffre maillé.
The knitted Trunk-Fish.
The following is a list of the most common causes of hearing loss:
- **Age**: As we age, our hearing can naturally decline due to the normal wear and tear on our ears and the aging process.
- **Noise Exposure**: Prolonged exposure to loud noises can damage the sensitive hair cells in the inner ear, leading to hearing loss. This is often referred to as noise-induced hearing loss.
- **Medications**: Certain medications, including some antibiotics, chemotherapy drugs, and aspirin in high doses, can cause hearing loss as a side effect.
- **Infections**: Infections such as mumps, measles, and meningitis can cause hearing loss, especially in children.
- **Trauma**: Traumatic events like head injuries or blows to the ear can cause hearing loss by damaging the structures of the ear.
- **Genetics**: Some people inherit genes that make them more susceptible to hearing loss.
- **Congenital Conditions**: Certain birth defects or genetic conditions can affect hearing development in infants.
- **Autoimmune Disorders**: Conditions like lupus and rheumatoid arthritis can sometimes involve the inner ear, leading to hearing loss.
- **Circulatory System Issues**: Problems with blood flow to the ears, such as from high blood pressure or heart disease, can affect hearing.
- **Tumors**: Tumors in or around the ear can compress nerves or block the ear canal, causing hearing loss.
- **Earwax Buildup**: Excessive earwax can block the ear canal and prevent sound from reaching the eardrum, leading to temporary hearing loss.
- **Foreign Objects**: Inserting objects into the ear canal, such as cotton swabs or hairpins, can cause injury and lead to hearing loss.
- **Viral Infections**: Viral infections like chickenpox and shingles can cause hearing loss, particularly in older adults.
Each of these factors can contribute to hearing loss, and understanding the specific cause can help in managing and treating the condition effectively.
132.
Ostracion Bicaudalis,
Das zweistachelige Dreieck,
Le Coffre à deux piquants,
The double Spiny Trunk Fish.
Ostracion Cornutus.
Der Seestier.
Le Taureau de Mer.
The Horn-fish.
Ostracion quadricornis.
Das vierstachelichte Dreieck.
Le Coffre à quatre piquants.
The Cuckold-fish.
The first step in creating a successful marketing plan is to identify your target audience. This involves understanding who your customers are, what their needs and wants are, and how they interact with your brand. Once you have a clear understanding of your target audience, you can begin to develop a marketing strategy that will effectively reach them.
There are several key elements to consider when developing a marketing plan for your business:
1. **Market Research**: Conduct thorough research on your industry, competitors, and potential customers. This will help you understand market trends, customer preferences, and the overall landscape of your industry.
2. **Brand Positioning**: Define your brand’s unique value proposition and how it differentiates itself from competitors. This should be clearly communicated through your marketing messages.
3. **Marketing Channels**: Choose the most effective channels to reach your target audience. This could include social media, email marketing, content marketing, advertising, or other digital platforms.
4. **Budget Allocation**: Determine how much money you can allocate to your marketing efforts. This budget should be allocated based on the effectiveness of each channel and the expected return on investment (ROI).
5. **Measurement and Evaluation**: Establish metrics to measure the success of your marketing efforts. This includes tracking website traffic, conversion rates, engagement levels, and other relevant KPIs.
By following these steps, you can create a comprehensive marketing plan that will help you achieve your business goals and grow your customer base. Remember, marketing is an ongoing process that requires continuous evaluation and adjustment based on feedback and changing market conditions.
Ostracion Trigonus.
Das geperlte Dreieck.
Le Coffre à perles.
The Triangular-fish.
Ostracion Turritus.
Der Thurmträger.
Le Chameau marin.
The Turret-Porter
137.
Ostracion Cubicus
Das stachellose Viereck,
Le Coffre tigre,
The Square fish.
Ostracion Nasus,
Der Nasenbeinfisch,
Le Coffre à bec,
The Nose-Trunk.
139.
TETRODON TESTUDINEUS.
Der Schildkrötenfisch.
La Tête de Tortue.
The Toadfish.
The following is a list of the most important and influential books in the field of psychology, arranged in chronological order by publication date:
1. *Principles of Psychology* (1890) by William James
2. *The Interpretation of Dreams* (1900) by Sigmund Freud
3. *The Principles of Behavior: A Behaviorist's World View* (1914) by John B. Watson
4. *The Behavior of Organisms: An Experimental Analysis* (1937) by B.F. Skinner
5. *The Social Animal* (1953) by David Lykken and Robert N. Frank
6. *The Selfish Gene* (1976) by Richard Dawkins
7. *Theories of Personality* (1979) by Robert C. Zuckerman
8. *The Social Animal* (2012) by David Lykken and Robert N. Frank
These books have had a significant impact on the development of psychology as a scientific discipline and continue to be widely read and studied today.
TETRODON LAGOCEPHALUS.
Der Sternbauch.
L'Orbe étoilé.
The Starry Globe-fish.
The following is a list of the most common types of errors that can occur during the process of creating a digital image:
- **Pixelation**: This occurs when an image is enlarged beyond its original size, causing the pixels to become visible and the image to lose detail.
- **Aliasing**: This happens when the edges of objects in an image are not smooth but instead appear jagged or pixelated.
- **Color Banding**: This is a phenomenon where colors in an image appear in bands or stripes rather than blending smoothly.
- **Ghosting**: This is a type of distortion where parts of an object appear to be duplicated or repeated in an image.
- **Halftoning**: This is a printing technique used to create the illusion of continuous tone images using only dots of varying sizes.
- **Screening**: This is another term for halftoning, referring to the process of creating a pattern of dots to represent different shades of color.
- **Dot Gain**: This refers to the increase in dot size and overlap that occurs during the printing process, which can affect the appearance of an image.
- **Dot Loss**: This is the opposite of dot gain, where the dots shrink or disappear, leading to a loss of detail in the printed image.
- **Dot Area**: This is the percentage of the page covered by ink dots, which affects the overall density and contrast of the printed image.
- **Dot Spread**: This is the variation in dot size across an image, which can lead to inconsistencies in color and tone.
- **Dot Shape**: The shape of the ink dots can vary, affecting the appearance of the printed image and how it blends with other colors.
- **Dot Position**: The alignment of the dots can be off, leading to misregistration and ghosting effects.
- **Dot Gain Variation**: This occurs when the amount of dot gain varies across the image, causing unevenness in the printed output.
- **Dot Gain Consistency**: Ensuring that the amount of dot gain is consistent throughout the image is crucial for achieving uniformity in the final print.
- **Dot Gain Control**: Techniques such as using special inks or adjusting the printing press settings can help control dot gain and improve image quality.
- **Dot Gain Compensation**: This involves making adjustments to the image before printing to account for the expected dot gain, ensuring that the final print meets the desired quality standards.
By understanding these common errors and their causes, designers and printers can take steps to minimize them and produce high-quality digital images.
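Pixelation, the first error in the list, is easy to demonstrate: enlarging an image by repeating source pixels (nearest-neighbor sampling) turns each pixel into a visible block. A minimal sketch using a made-up 2x2 grayscale "image"; real images would of course come from an imaging library:

```python
# Sketch of "pixelation" from the list above: nearest-neighbor upscaling
# repeats each source pixel as a solid block, losing detail.
def nearest_neighbor_upscale(img, factor):
    """Enlarge a 2-D grid of pixel values by an integer factor."""
    out = []
    for row in img:
        scaled_row = []
        for px in row:
            scaled_row.extend([px] * factor)            # repeat horizontally
        out.extend([scaled_row[:] for _ in range(factor)])  # and vertically
    return out

tiny = [[0, 255],
        [255, 0]]
big = nearest_neighbor_upscale(tiny, 2)
# Each original pixel now covers a 2x2 block -- the visible "pixelation".
for row in big:
    print(row)
```

The same mechanism is why aliasing appears on diagonal edges: the block boundaries form the jagged "staircase" the list describes.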
TETRODON LINEATUS
Der gestreifte Stachelbauch
Le globe rayé
The Striped Globe
The sky is filled with birds, their wings spread wide as they soar through the air. The clouds below them are thick and gray, creating a dramatic backdrop for the scene. The birds are scattered across the sky, some flying alone while others are in small groups. The overall atmosphere is one of freedom and movement, as the birds take flight and explore the vast expanse above.
TETRODON HISPIDUS
Des Animaux
Les Plus Curieux
Qui Vivent Dans L'Air, Dans L'Eau, Ou Sur Terre.
TETRODON HONKENII
Der getigerte Stachelfisch
Le Herisson Tigre
The Honken's Tetrodon
TETRODON SPENGLERI,
Der Tottenfisch.
Le Tétrodon de mer.
The Spengler's Tetrodon.
145.
TETRODON OCELLATUS.
Der gefleckte Stachelbauch.
Le Croissant.
The Crescent-Tetrodon.
Fig. 2.
TETRODON ROSTRATUS.
Der Langeschnabel.
Le Herisson à bec.
The Beak-Tetrodon.
Fig. 1.
TETRODON OBLONGUS.
Der gestreckte Stachelbauch.
Le Herisson oblongué.
The oblong-Tetrodon.
127.
BALISTES MONOCEROS
Der Einhornfisch
La Licorne de Mer
The Mingo
Fig. 2.
BALISTES BIACULEATUS
Der zweijochelichte Hornfisch.
Le Baliste à deux piquants.
The Double-Spiny File-Fish.
Fig. 1.
BALISTES TOMENTOSUS
Der kleine Einhornfisch.
La petite Licorne.
The Little Old-Wife.
129.
BALISTES ACULEATUS,
Der Stachelschwanz,
Le Baliste à pointes,
The Spiny File-Fish.
150. BALISTES VETULA.
Das alte Weib.
La Vieille
The Old-Wife.
BALISTES MACULATUS.
Der gefleckte Hornfisch.
Le Baliste tacheté.
The long File-Fish.
Fig. 1.
BALISTES CHINENSIS:
Der chinesische Hornfisch.
Le Baliste chinois.
The Chinese File Fish.
Fig. 2.
BALISTES RINGENS:
Der Schwarze Hornfisch.
Le Baliste noir.
The Black Old-Wife.
MURAENA HELENA.
Die Murene.
La Murène.
The Murene.
Muraena Ophis.
Der bunte Aal.
La Murène tachetée.
The cheoky-Eel.
Muraena Conger
Der Meer-Aal.
Le Congre.
The Conger.
156.
GYMNOTUS ELECTRICUS.
Der Zitteraal.
L'Anguille tremblante
The Electric Eel
Fig. 1.
Gymnotus brachyurus
Der Kurzschwanz
Le Carapo à queue courte
The short-tailed Bald-back
Fig. 2.
Gymnotus carapo
Der Langschwanz
Le Carapo à queue longue
The long-tailed Bald-back
TRICHIURUS LEPTURUS,
Der Spitzschwanz
Le poille en cut.
The Sword-Fish.
Fig. 1
OPHIDIUM BARBATUM.
Der Graubart.
La Donzelle.
The Bearded Snake-Fish.
Fig. 2
OPHIDIUM ACULEATUM.
Der Elefantenfisch.
La Trompe.
The Spiny Snake-Fish.
STROMATEUS FIATOLA.
Die Golddecke.
La Fiatole d'orée.
The golden Pampel.
CALLIONYMUS LYRA.
Der Spinnenfisch.
Le Lézard.
The Common Dragonet.
Fig. 1.
BLENNIUS FASCIATUS.
Der Bandirte Schleimfisch.
La Percepiere rayée.
The banded Blenny.
Fig. 2.
CALLIONYMUS DRACUNCULUS.
Der Seedrache.
Le Doucet.
The Sordid Dragonet.
163.
Uranoscopus Scaber.
Der Sternseher,
Le Bœuf,
The Star-Gazer.
GADUS MERLUCCIUS
Der Stockfisch
Le Merlu
The Hake
GADUS TRICIRRATUS
Die Meerquappe
La Mustele
The Sea-Loche
Burgessey & Co. Edin.
GADUS BARBATUS.
Der breite Schelfisch
Le Molet
The Whiting Pout
Fig. 2.
**Blennius Gattorugine**
Der Meerhirsch
Le Gattorugine
The thick-Neck Blenny
---
Fig. 1.
**Blennius Ocellaris**
Der Meerpappillon
Le Perce pierre à mouche
The Butterfly Fish
Blennius superciliosus
Der Augenwimper
Le Perce pierre de l'Inde
The Eyestring Blenny
KYRTUS INDICUS.
Der Hochrücken
Le Bosse
The Crooked
Cepola Taenia
Der Bandfisch
La Flamme
The Band-Fish
Echeneis Naucrates.
Der Schiffshalter,
Le Sucet.
The Sucking-Fish.
Echeneis Remora.
Der Ansauger
La Remore
The Sucking-Fish.
Coryphaena pentadactyla
Das Sechsauge
Le Rasoir à cinq-taches
The River-Delphin
CORYPHAENA HIPPURUS. 174.
Der gefleckte Stutzkopf
La Dorade d’Amerique
The Dolfin
Coryphaena Plümieri,
Die Meerpfau,
Le Paon de mer,
The Sea-Pea-Cock.
Coryphaena coerulea
Der blaue Stut-kopf
Le Rasoir bleu
The blue Fish
MACROURUS RUPESTRIS.
Der Berglaks.
Le Poisson à longue queue.
The Mountain Salmon.
Fig. 2.
Gobius Plumieri.
Die Nasengrundel
Le Goujon de Plumier
The Plumier's Goby.
Fig. 1
Cottus Monopterigius.
Der Ostindische Groppe
Le Chabot de l'Inde
The East-Indian Bul-Head.
COTTUS GRUNNIENS
Der Braunner
Le Grondeur
The Knorre-Fluen
COTTUS SCABER.
Die Stachellinie
Le Chabot rude
The Rough Bull-Head.
SCORPAENA PORCUS
Der kleinschuppige Drachenkopf
La Scorpène
250
Scorpaena Scrofa.
Der grosschuppige Drachenkopf.
La Crabe de Biarritz.
The poisonnet Grooper.
SCORPAENA HORRIDA
Der Zauberfisch
La Pythonise
The Witch
Scorpaena volitans
Der fliegende Drachenkopf
La Scorpene volante.
L. Schmidt
Scorpaena antennata
Der Fischhornträger.
La Scorpène à antennes.
The East-Indian Scorpène.
PLEURONECTES LIMANDOIDES.
Die raue Scholle
La Plie rude.
Krüger del
PLEURONECTES ZEBRA.
Die bandirte Zunge.
Le Zebre de mer.
The Sea-Zebra.
PLEURONECTES BILINEATUS
Die Doppellinie.
La Sole à deux lignes.
The small Flounder.
PLEURONECTES PUNCTATUS
Der Rothbult
Le Tarseur
The Whiff
Krüger del
PLEURONECTES MACROLEPIDOTUS
Die grosschuppige Scholle.
La Sole à grandes écailles
The Brazilian Flounder.
ZEUS CILIARIS.
Der langhaarige Spiegelfisch.
Le Gilt à longs cheveux.
The long Bristly.
Fig. 2.
ZEUS INSIDIATOR.
Der listige Spiegelfisch.
Le Rusé.
The Cunning.
Fig. 1.
ZEUS GALLUS.
Der Meerhahn.
Le Coq de Mer.
The Sea-Cock.
Fig. 2.
ZEUS VOMER
Die Pflugschaar
Le Vomer
The Silver-Fish
Fig. 1.
CHAETODON AUREUS
Der Plumerisch Goldfisch
La Levade de Plumier
The golden Chaetodon
Chaetodon Imperator.
Der Kaiserfisch.
L'Empereur du Japon
The Emperor.
CHAETODON FASCIATUS
Der gestreifte Klippfisch.
La Bandoulière rayée.
CHAETODON GUTTATUS.
Der gestreckte Klippfisch.
Le Boudoulier: fisheto.
The Dropt Chaetodon.
CHAETODON PARU, 197.
Der Schwarze Klippfisch.
La Bandoulière noire.
The variegated Angel-fish.
Printz Moritz 161.
Fig. 2.
Chaetodon aruanus.
Der Schwartzkopf.
La Bandoulière à trois bandes
The Aruan Chaetodon.
Fig. 1.
Chaetodon pavo.
Der indische Pfau.
Le Paon de l'Inde.
The Sea-Pea-Cock.
Fig. 2
CHAETODON VESPERTILIO.
Der Breitflosser.
La Bandoulière à nageoires larges.
The Broad-fish.
Fig. 1
CHAETODON TEIRA
Der Schwarzflosser.
La Bandoulière à nageoires noires.
The Black-fish.
Fig. 2.
Chaetodon cornutus.
Der Seereiher.
Le Héron de mer.
The Sea-Heron.
Fig. 1.
Chaetodon macrolepidotus.
Der grosschuppige Klippfisch.
La Bandoulière à grandes écailles.
The great scaled Chaetodon.
Fig. 2.
Chaetodon arcuatus
Der Bogenfisch
La Bandoulière à arc
The Arc-Fish
Fig. 1.
Chaetodon unimaculatus
Der einfleckige Klippfisch
Le Bandoulier à tache
The One-Spot
Fig. 2.
CHAETODON ORBIS.
Die Scheibe.
L'Orbe.
The Orb-Chatodon.
Fig. 1.
CHAETODON ROSTRATUS.
Der Schnabelfisch.
La Bandoulière à bec
The Beaked Chatodon.
CHAETODON NIGRICANS
Le Pershaner
Le Perhien
Krüger del
L. Schmidt
Fig. 2.
Chaetodon vagabundus.
Der Schwärmer.
Le Vagabon.
The Vagabond.
Fig. 1.
Chaetodon argus.
Der indische Araus.
L'Araus de l'Inde.
The Chaetodon Argus.
Fig. 2.
Chaetodon capistratus.
Der Soldat.
La Coquette.
The Sea Butterfly.
Fig. 1.
Chaetodon striatus.
Der bandirte Klippfisch.
L'Onagre.
The streaked Chaetodon.
Fig. 1.
Cetodon Bicolor.
Der zweifarbige Klippfisch.
La Griselle.
The two coloured Cetodon
Fig. 2.
Cetodon Saxatilis.
Der Gabelschwanz.
Le Mucharris.
The Rock-Cetodon
Chaetodon marginatus
Der eingesägte Klippfisch.
La Bandoulière bordée.
The bordered Chaetodon.
Chaetodon Chirurgus
Der Wundarzt
Le Chirurgien
The Surgeon
CHAETODON RHOMBOIDEUS.
Der rautenförmige Klippfisch.
La Bandoulière rhomboidée.
The Rhomboidal Chaetodon.
CHAETODON GLAUCUS.
Der blaue Klippfisch.
La Bandoulière bleue.
The blue Chaetodon.
Fig. 2.
Chaetodon ocellatus.
Le Pas-Pas orange.
L'Oeil de Paon.
The Pea-Cock's eye.
Fig. 1.
Chaetodon Plümeri.
Der Plümersche Klippfisch.
La Bandeulière de Plümer.
The Plümer's Chaetodon.
Fig. 2.
CHAETODON FABER.
Der Schmid.
Le Forgeron.
The Smith.
Fig. 1.
CHAETODON CURACAO.
Der Curacaosche Klippfisch.
La Bandoulière de Curaçau.
The Angel-fish of Curacao.
Fig. 2.
Chaetodon Bengalensis
Der Bengalische Klippfisch
La Bandoulière de Singale
Fig. 1.
Chaetodon Mauriti
Der Moritzsche Klippfisch
La Bandoulière du Prince Moritz
Printz Moritz del.
L. Schmidt sc.
CHAETODON CILIARIS.
Die Haarschuppe.
Le Peigne.
The hairy Angel Fish.
Fig. 2.
Chaetodon annularis
or Ring
Fig. 1.
Chaetodon octofasciatus
or eight-streaked Chaetodon
Fig. 1
Chaetodon mesoleucus
Der Mulatte
Le Mulat
The Mulatto
Fig. 2
Chaetodon collare
Das Halsband
Le Collier
The Collar
1978
La Lune (Orthagoriscus Mola) t. 128.
Le Remora (Echeneis Remora) t. 172.
Le Requin (Squalus Carcharias) t. 119.
L'Anguille Tremblante (Gymnotus) t. 136.
Le Crapaud de mer (Lophius Histrio) t. 111.
La Broche (Cyprinus Brasse) t. 13.
Le Sandre (Perca Lucioperca) t. 51.
1940-1945
UNIT 17 LIVESTOCK AND RELIEF MEASURES
Structure
17.0 Objectives
17.1 Introduction
17.2 Importance of Livestock in India
17.3 Need for Protecting Livestock During Disasters
17.4 Livestock Problems in Disaster Situations
17.5 Preparedness, Relief, Rehabilitation and Reconstruction Measures
17.6 Let Us Sum Up
17.7 Key Words
17.8 References and Further Readings
17.9 Answers to Check Your Progress Exercises
17.0 OBJECTIVES
After reading this unit, you should be able to:
- Discuss the effect of disasters on livestock population and health;
- Comment upon the problems of livestock in disaster situations;
- Indicate relief measures for livestock; and
- Explain the overall livestock relief management process.
17.1 INTRODUCTION
In this unit, we will discuss the importance of livestock in India in terms of its economic importance and also the effect of disasters on the livestock population and health. In addition, livestock problems in disaster situations and relief measures will be briefly described.
17.2 IMPORTANCE OF LIVESTOCK IN INDIA
Livestock has been an integral part of human civilisation and culture ever since humans began domesticating animals. In early times, livestock possession was a symbol of progress and prosperity. Even today, the most significant point in favour of animal husbandry is its employment potential for the rural poor. Because it does not demand much skill, it suits small farmers and landless rural agricultural labourers well. It is not only an alternative source of livelihood but also an occupation favoured by the weaker sections of society, most notably women. Dairy farming by landless and poor farmers provides employment to their family members and contributes substantially to family income. A study by the National Dairy Research Institute, Karnal, shows that landless poor farmers keep fewer dairy animals per household, but these animals are more productive than those of big landlords/cultivators.
The National Commission on Agriculture in India observed that, next to crops, animal husbandry has the largest employment potential in rural areas. This sector can generate significant direct and indirect employment in several ancillary activities (such as livestock feed, dairy and poultry equipment, and the leather and wool industries) for the weaker sections of society.
The importance of livestock is depicted pictorially in Figure 1.

**Figure 1**
17.3 NEED FOR PROTECTING LIVESTOCK DURING DISASTERS
There is a mutual give and take relationship between livestock and rural community. The major livestock products or outputs can be divided into 10 categories as depicted in Figure 2, which also shows the seven categories of inputs. Income from livestock includes not only cash from sale of animals, but also provision of services such as ploughing and transport. Land and agricultural improvement requires animal traction for ploughing, animal power for pumping water and post-harvest processing. The use of dung for manure and fuel and the making of fertiliser from dung, bone, feather or horn are obvious livestock outputs. Livestock products which are used as clothing include wool, skins, hides, leather and feathers. In urban areas, livestock are not only companions for blind, elderly or lonely people but also provide security. The positive hygiene and health aspects of livestock output include soap making from animal products, transportation of water and the garbage-scavenging activities of pigs.
Seeing the multiple uses of the livestock population in India, particularly in rural society, it is important to protect livestock in disaster situations like floods, droughts and cyclones. During these natural calamities, animals may be lost to drowning, running away in fright, snakebite, etc. More common and severe forms of damage to livestock are incurable injuries, starvation of stranded animals, and death from various diseases after the disaster.
Whenever there is a natural or human-made disaster, the attention of the Government, NGOs and others is focused on the human population. Most relief and rehabilitation work is for the affected human community. The next focus is normally on livestock and other damage. According to Government of India policy, the first priority in a disaster situation is to save human lives and provide relief, followed by livestock relief, and only then other aspects (viz. repair of roads, bridges, other infrastructure, houses, etc.). Hence, the disaster manager has to perform the important function of organising disaster relief for livestock, next only to taking care of humans.
Check Your Progress 1
Note: i) Use the space given below for your answers.
ii) Check your answers with those given at the end of this unit.
1) Briefly discuss the priority systems in a disaster situation.
2) Highlight the importance of livestock in Indian situation.
3) Why should we protect livestock in a disaster situation?
17.4 LIVESTOCK PROBLEMS IN A DISASTER SITUATION
It has been stated above that during any natural calamity, the prime concern of authorities, NGOs and related organisations is to save human lives and provide relief to the affected community. Livestock and infrastructure are always a second or third priority. The animal population is affected just as severely in any disaster, but its relief is normally neglected. It is also clear from the introduction of this unit that livestock is one of the major sources of our national wealth. As a significant part of sectoral growth and employment generation depends on the livestock economy, its importance cannot be minimised in the development of the Indian economy. Loss of any form of livestock will affect the economic recovery of the people and will have a delayed and long-lasting ill-effect on agriculture and people’s lives, especially the rural poor. Some of the effects of various types of disasters on livestock are given below:
Whenever a disaster occurs, livestock is affected just as humans are. Even though a disaster usually lasts only a short time, the loss of lives can be heavy.
**UTTARKASHI EARTHQUAKE 1991**
In villages near the epicentre of the earthquake, more animals died than human beings. In the village of ‘Jamak’, where the maximum loss of life and damage took place, 72 people died and 200 animals perished.
In drought situations, livestock is affected as much as the human population. According to information available for the 1987 drought in India, which is still the most recent widespread drought in the country, more than 50% of the total bovine population was affected: out of a total population of 21.4 million in the affected states and UTs, 12.0 million were affected by drought. In some states, the percentage affected was much higher than the national average.
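The “more than 50%” claim can be checked directly from the figures quoted above (a quick arithmetic sketch; the variable names are ours):

```python
# Bovine population figures for the 1987 drought, as quoted above
# (in millions, affected states and UTs only).
affected = 12.0
total = 21.4

share = affected / total
print(f"{share:.0%} of the bovine population was affected")
```

This works out to about 56%, consistent with the “more than 50%” figure in the text.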
Drought situation also causes malnutrition and leads to starvation deaths of animals. There is short-term as well as long-term impact of the disaster i.e. mortality and morbidity respectively.
In disasters caused by floods and cyclones, the impact on livestock is generally of short-term duration but severe in nature. Non-availability of feed for the duration of floods and epidemic diseases after the floods subside are very common. Incapacitation, disease or even death of livestock may have long-lasting effects on tillage and availability of animal products in the affected parts of the country.
Direct Effects of Natural Disasters on Livestock
i) People want to save their own lives and those of their family members during disasters but tend to neglect the safety of their animals. Sometimes animals run away in panic.
ii) Death of animals due to collapse of cattle sheds during earthquakes and landslides. Even if there are no casualties, injuries are often caused.
iii) It is reported that during earthquakes or cyclones, animals try to free themselves of their neck ropes or metal chains. Sometimes death by 'asphyxiation' takes place in this struggle.
iv) Drowning and washing away of animals in floods is most commonly reported.
v) Animals and birds are reported as being blown away during cyclones and high winds.
vi) Animals get stranded on isolated elevated places in case of floods or storm surges.
vii) Many a time, deaths of animals are caused by attacks by poisonous insects, snakes, rodents and leeches. Long-term starvation deaths are also common.
Indirect Effects of Natural Disasters on Livestock
There are many indirect effects of natural disasters on animal population. These can be summed up as follows:
i) Wet conditions, after floods or cyclones, enhance the chances of infection by internal parasites like round worms, tape worms, liver flukes as well as of many epidemic diseases, like Haemorrhagic Septicaemia (HS), Black Quarter (BQ) or Anthrax.
ii) There can be non-specific water borne infections causing diarrhoea and other enteric diseases.
iii) Water and moisture may lead to wet hair coats, attachment of blood-sucking leeches, skin disorders and ectoparasites. Standing on wet surfaces or in water can cause 'hoof-rot' and result in lameness.
iv) Moisture leads to many respiratory disorders in the animals and birds.
v) Loss of weight in the animals is possible.
vi) Loss of production of milk is most often reported.
vii) Similarly, loss of production of eggs in the poultry is reported.
viii) Losses to the agriculture sector due to a shortage of ploughing animals are likely.
The extent of damage to livestock can be understood from the following two cases:
FLOODS IN ASSAM (1988)
Assam, one of the most flood-prone states, suffers two to eight waves of floods every year. In 1988 (one of the worst years), almost all the districts and about 21,742 villages were affected (four times the average number of villages affected). A total of 99 lakh animals (70% of the total population) were affected, and about 3,500 large animals (valued at Rs. 3.8 crores) were drowned or washed away. About 644 cattle camps had to be run and 4,018 technical staff were deputed to carry out relief and rehabilitation work such as vaccination, treatment of injured animals, supply of feed and fodder, etc. It cost the state government about Rs. 7.5 crores.
ANDHRA CYCLONE, 1977/1979/1984/1989
Andhra Pradesh is one of the most cyclone-prone states. It has a 1,050 km long coastline, which is exposed to this type of disaster. The cyclone of 1977 was one of the most severe disasters ever to strike the Andhra coast: 5.74 lakh cattle perished in the two worst-affected districts, causing a loss of Rs. 1.5 crores to the state government.
In the cyclone of May 1979, three lakh livestock perished, and in the cyclone of November 1984, one lakh.
In the cyclone of 1989, in Kavali Tehsil alone (where the cyclone crossed the coast), nearly 1,600 cattle perished or were lost, and 680 poultry farms, with more than one lakh birds, were blown away. Several thousand sheep and goats also perished. The loss to the state exceeded one crore rupees.
17.5 PREPAREDNESS, RELIEF, REHABILITATION AND RECONSTRUCTION MEASURES
Preparedness
The important measures for disaster preparedness for animals are as follows:
There should be a separate plan for the livestock population in the preparedness plan at state, district and even block levels. Similarly, there should be initiatives by the central as well as state governments to take preventive measures to protect livestock, such as:
- Construction of livestock shelters in disaster-prone areas. In normal times, these structures can be used as animal feed stores or as animal production and extension centres cum veterinary dispensaries (on the same lines as cyclone shelters are proposed to be used as community centres).
- Requisite stocks should be maintained for fodder, vaccines and medicines for animals in disaster prone areas.
- Animal shelters should be near the human shelters so that people can take their animals with them at the time of warning.
- Community should be trained to protect their animal population in the disaster situation.
- A separate action plan should be chalked out for veterinary staff, who should receive training in dealing with specific disaster situations.
- Contingency plans should be prepared to remove animals from affected areas. For poultry, special cages and transport arrangements can be made.
- In cyclone/flood-prone areas, regular mock exercises for livestock protection should be conducted.
Relief
The various relief measures for animals in the aftermath of disasters are briefly indicated as under:
- Stranded and affected livestock should be rescued, taken to safer places such as cattle shelters, and provided with the basic needs of life, i.e. feed, fodder and drinking water.
- The community and trained staff should protect the animals against beasts of prey and poisonous insects, snakes and reptiles.
- The community should maintain hygiene and assist the veterinary staff in giving vaccines and medicines to injured and affected animals.
- The veterinary and para-veterinary staff should be assisted in assessing damage and the specific needs of the cattle.
- Removal of dead animals and disposal of carcasses should be given high priority.
Non-Governmental Organisations (NGOs) can play a major role in providing relief to the livestock during the disaster in the following ways:
a) establishment and running of cattle camps.
b) collection/transport and distribution of feed and fodder.
c) collection of forest grass, straws, etc. for feed.
d) accurate reporting on the extent of loss of livestock belonging to individual farmers.
e) disposal of animal carcasses.
f) providing training to the community for animal care during natural disasters.
**Rehabilitation and Reconstruction Measures**
- Arrangements could be made for the purchase of livestock that farmers want to sell out of distress. The cattle can be rehabilitated in ‘Goshalas’/‘Gosadans’.
- Farmers of the disaster-affected area should be encouraged to go for insurance of their livestock so that they may be adequately compensated for the livestock lost, incapacitated or dead due to disasters.
- There is a system of cash relief distributed by the State Government for the loss of animals.
- Reconstruction of damaged veterinary hospitals and artificial insemination centres should be given priority.
- After the disaster, cattle breeds of high quality and disease resistance should be introduced in the area so that better genetic stock can be built up for the future.
- Setting up permanent fodder banks in drought- and flood-affected areas will help people in a disaster situation. This will provide a permanent feed security system in the vulnerable areas.
**Check Your Progress 2**
**Note:** i) Use the space given below for your answers.
ii) Check your answers with those given at the end of this unit.
1) Throw light on the livestock problems in a disaster situation.
2) Discuss briefly the three major steps in relief measures for livestock.
3) Mention three important steps in livestock rehabilitation and reconstruction.
17.6 LET US SUM UP
This unit has highlighted the important role of livestock in Indian rural communities, as livestock helps on the farm and provides extra income to poor people. In addition, the need for protecting livestock in disaster situations has been discussed. The unit has briefly described livestock problems in disaster situations. Preparedness, relief, rehabilitation and reconstruction measures have also been discussed.
17.7 KEY WORDS
**Livestock**: Animals kept on a farm for use or profit
**Preparedness**: Actions designed to minimise loss of life and damage, and to organise and facilitate timely and effective rescue, relief and rehabilitation in times of disaster. To be more specific, preparedness is concerned with understanding the threat; forecasting and warning; educating and training officials and the population; and establishing organisations for disaster management, including preparation of operational plans, training of relief groups, stockpiling of supplies and earmarking of necessary funds.
**Relief**: Relief means meeting the immediate needs for food, clothing, shelter and medical care of disaster victims; assistance given to save lives and alleviate suffering in the days and weeks following a disaster. The relief period, for creeping disasters may be months or even years.
**Resettlement**: Resettlement is an important component of a rehabilitation programme following a disaster. The displaced population needs to be resettled as part of the process of rehabilitation.
17.8 REFERENCES AND FURTHER READINGS
Bhanja, S. K. *Livestock Development for Rural Poor*. Kurukshetra: Jan., 13-14, 1989.
Jayachandra, K. *Milch Animal Scheme in Drought-Prone Areas: A Case Study*. 1990.
Misra, G. P. *Managing Livestock to Boost Rural Economy*. Yojana. July 1-15, pp. 17-18, 1986.
Sastry, N. S. R. *Managing Livestock Sector During Floods and Cyclones*. Journal of Rural Development Vol. 13 (4): 583-592, 1994.
*The Drought of 1987- Response and Management*. Vol. I & II, Ministry of Agriculture and Cooperation, Government of India.
17.9 ANSWERS TO CHECK YOUR PROGRESS EXERCISES
Check Your Progress 1
1) Your answer should include the following points:
According to the Government of India policy:
- The first priority in a disaster situation is to save human lives and provide relief, followed by livestock relief, and only then other aspects such as repair of infrastructure.
2) Your answer should include the following points:
- Next to crops, animal husbandry has the largest employment potential in rural areas.
- It is not only an alternative source of livelihood but also an occupation favoured by weaker sections of society, such as women.
- Dairy farming by landless and poor farmers provides employment to their family members.
3) Your answer should include the following points:
- Considering the multiple uses of the livestock population in India, it is important to protect livestock in a disaster situation.
- Next to crops, animal husbandry provides the largest employment in rural areas.
- It is the main livelihood for landless and poor farmers as well as for weaker sections.
- Loss of livestock will not only affect the economy adversely but will also have a long-lasting ill-effect on people’s lives, especially the rural poor.
Check Your Progress 2
1) Your answer should include the following points:
- Some of the major points to be kept in mind are the direct effects:
i) Animals run away in panic;
ii) Death of animals due to collapse of cattle sheds;
iii) Drowning of animals in floods;
iv) Starvation deaths; and
v) Respiratory diseases in wet conditions.
Some of the indirect effects are:
i) Wet conditions after floods or cyclones enhance the chances of infection by internal parasites; and
ii) Loss to agriculture sector due to shortage of ploughing animals.
2) Your answer should include the following points:
- The animals should be provided basic requirements i.e. food, shelter, drinking water, and medicines.
- The community and trained staff should protect the animals against beasts of prey, poisonous insects, snakes and reptiles.
- Removal of dead animals and disposal of dead bodies should be given high priority.
3) Your answer should include the following points:
- Arrangements should be made for the purchase of livestock (sheep, goats, etc.) that farmers want to sell out of distress. Cattle should be rehabilitated in ‘Goshalas’/‘Gosadans’.
- Farmers of the disaster affected area should be encouraged to go in for insurance of their livestock so that they may be compensated for the livestock lost, incapacitated or dead in the disasters.
- Reconstruction of damaged veterinary hospitals and artificial insemination centres should be given priority. |
Ex Vivo Optical Coherence Tomography Imaging of Collector Channels with a Scanning Endoscopic Probe
Jian Ren,1 Henrick K. Gille,2 Jigang Wu,1 and Changhuei Yang1,3
Purpose. To achieve high-fidelity optical coherence tomography (OCT) imaging of ex vivo collector channels (CCs) exiting Schlemm’s canal (SC) using a paired-angle rotating scanning endoscopic probe.
Methods. An endoscopic probe was developed to guide an OCT laser beam onto human cadaver eye tissue samples to detect CCs. The prototype probe consisted of two gradient-index (GRIN) lenses that were housed in two stainless steel needles, respectively. The probe scanned the laser beam across a fan-shaped area by rotating the two GRIN lenses. The authors built a swept source OCT system to provide the depth scans. Human cadaver eye tissue was prepared for imaging. OCT images were acquired while the wall of SC was scanned. After successfully locating the opening of a CC on the SC wall from the OCT images, the authors applied scanning electron microscopy (SEM) to image the sample for comparison.
Results. The prototype probe focused the laser beam to a working distance of approximately 1.4 mm (in air), with spot sizes ranging from 12 to 14 μm. The fan-shaped scan area had a radius of 3 mm and an arc angle of approximately 40°. Acquired OCT images clearly showed a CC opening on the wall of SC with the channel going into the sclera, from which quantitative measurements were made. Results from OCT and SEM show good agreement with each other.
Conclusions. The resolving power of the scanning endoscopic probe is sufficient to locate CCs and to observe their shape. [Invest Ophthalmol Vis Sci. 2011;52:3921–3925] DOI:10.1167/iovs.10-6744
Most open angle glaucomas result from the retention of aqueous humor. This is caused primarily by abnormalities in the trabecular meshwork (TM) that increase the resistance of aqueous humor outflow into Schlemm’s canal (SC), thereby reducing physiological outflow through the collector channels (CCs) and the episcleral veins.1 Recently, stents have been developed to bypass the fluid resisting TM. For example, as shown in Figure 1a, the iStent (Glaukos Corporation, Laguna Hills, CA) is a micro bypass stent that is surgically implanted into the TM, effectively bypassing the obstructed meshwork, thereby reestablishing physiological outflow of aqueous humor into SC.2–5
The implantation process involves introducing the stent into the anterior chamber through a corneal incision. It is advanced across the anterior chamber until it is properly situated in the TM. Recent research has shown that the SC cross-sectional area is wider in the nasal inferior quadrant of the anterior chamber, suggesting a greater prevalence of CCs draining that segment of SC.4 It is hypothesized that implanting a stent in closer proximity to these active CCs may increase fluid outflow. To test this hypothesis, it was first necessary to determine the locations of the CCs.
High resolution and depth-resolving capability have made optical coherence tomography (OCT) an important ophthalmic diagnostic tool.5,6 Given that scleral tissue and the TM are not transparent to visible light, OCT may be useful for visualizing and locating the CCs. Commercial spectral domain (SD)–OCT systems with a light source centered at 870 nm have been used to image through the sclera from the external surface of the eye (Kagemann L et al. IOVS 2009;50:ARVO E-Abstract 813). However, there are two major problems with this method. First, the CCs were not clearly imaged: both SC and its junctions with CCs (SC/CC) are difficult to identify and locate in those images. This can be attributed to the fact that the OCT beam coming from outside is largely scattered by the scleral tissue, especially the blood in superficial vessels, before reaching SC. As a result, the image contrast of structures inside the scleral tissue, such as the SC/CC junctions, is severely impaired; one example is the shadowing effect of superficial blood vessels obscuring many regions of interest (Kagemann L et al. IOVS 2009;50:ARVO E-Abstract 813).7 Images of low contrast preclude the ability of surgeons to select the optimal implantation location of the stents during the operation. In addition, a number of axial scans (A-scans) must be averaged to improve the contrast (Kagemann L et al. IOVS 2009;50:ARVO E-Abstract 813).8 This resulted in a scan time of 4.5 seconds, which makes real-time imaging impossible.
Second, for current commercial OCT systems, immobilization of the patient’s head is required to provide stable images. For instance, in the previous studies (Kagemann L et al. IOVS 2009;50:ARVO E-Abstract 813),9 a bite bar was used to reduce eye movement during the 4.5-second scan time, and visual inspection for eye movements between images was performed to subjectively select valid images. This limited the OCT examination to being carried out either before or after an operation. Because of the complex episcleral vein structures in the sclera, it is not easy to track them back to SC during an operation even if they could have been located in a previously imaged OCT image slice. Therefore, it is much more desirable to have a real-time imaging method that can view the surgical field from the inside of the anterior chamber during these surgeries.
OCT endoscopes offer a potential solution to these problems. They can be placed deep into tissues and collect reflected optical signals from the desired depth, providing images of much higher quality by overcoming signal attenuation from intervening tissue such as the sclera or TM. Furthermore, because of their small size, hand-held endoscopic probes can be used intraoperatively and are capable of providing real-time visualization and guidance for the specific structures of interest.7 Therefore, we propose an OCT endoscopic probe to determine the locations of CCs for bypass stent implantation, as illustrated in Figure 1a. The proposed probe passes through the corneal incision used for implantation and is advanced across the iris until it is in apposition to the TM, approximately 0.5 to 1 mm away. The probe then scans the forward cone in front of its tip in a fan-shaped pattern. The OCT beam penetrates the TM, imaging the cross-section of the SC and disclosing the structures inside, such as CCs. The relative geometry between the tissue structures and the probe is shown in the Figure 1b inset. The goal of this research was to develop such an endoscopic OCT probe and to determine whether it is capable of locating the CCs exiting the SC and suitable for intraoperative ophthalmic applications.
**Methods**
**Scanning Endoscopic Probe**
The prototype endoscopic probe developed in this study is based on a similar design, as described in our previous publications.8,9 It contains two stainless steel needles, the inner needle and the outer needle. Each of them houses one segment of gradient-index (GRIN) lens. The OCT laser beam is guided through a single-mode fiber into the inner needle. A glass ferrule is used to fix and center the fiber end inside the inner needle, and it is followed by the first GRIN lens. Both the front surface of the glass ferrule and the back surface of the lens have been angle-cut to 8° to reduce reflection. The lens is 2.5-mm long and collimates the incoming laser beam from the delivery fiber end. Its front end has been polished at a 22.5° angle to initiate the first beam deflection.
The second GRIN lens, mounted in the outer needle, has a length of 5.6 mm. It further deflects the beam and focuses it to a working distance ahead of the probe tip. The back end was also polished at a 22.5° angle, and the front end was blunt and was sealed with epoxy to avoid light leakage.
The outer diameters of both lenses and the ferrule are 1.0 mm. The inner/outer diameter (ID/OD) of the inner needle is 1.0/1.2 mm, and that of the outer needle is 1.5/1.6 mm. A metal sleeve, with an ID/OD of 1.0/1.3 mm, was used to adapt the second lens into the outer needle. The overall length of the probe is 63.5 mm. Optical grade epoxy was used to glue and fit the optical components inside the needles. The lenses were fabricated by GRINTECH GmbH (Jena, Germany), and the needle tubes were machined by Trinity Biomedical Inc. (Menomonee Falls, WI). The schematic of the probe is illustrated in Figure 2.
By rotating the inner and outer needles (thus the lenses) at the same angular speed but in opposite directions, the probe can steer the laser beam in a fan-shape pattern. Combined with the OCT axial scan, the probe can provide two-dimensional images representing physiological structures in the forward cone of the probe tip.
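To make the geometry concrete, here is a minimal sketch (ours, not the authors' software) of how a fan of A-scans could be mapped into a Cartesian image; the 40° arc and 3 mm depth come from the Results section, while the function name, grid size, and averaging scheme are assumptions:

```python
import numpy as np

def fan_to_cartesian(ascans, arc_deg=40.0, depth_mm=3.0, grid_px=200):
    """Map a fan of A-scans (n_angles x n_depths) onto a Cartesian image.
    A sample at angle theta and depth r lands at x = r*sin(theta) (lateral)
    and z = r*cos(theta) (axial), measured from the probe tip."""
    n_ang, n_depth = ascans.shape
    thetas = np.deg2rad(np.linspace(-arc_deg / 2, arc_deg / 2, n_ang))
    rs = np.linspace(0.0, depth_mm, n_depth)
    img = np.zeros((grid_px, grid_px))
    counts = np.zeros((grid_px, grid_px))
    half_w = depth_mm * np.sin(np.deg2rad(arc_deg / 2))  # max lateral extent
    for i, th in enumerate(thetas):
        x = rs * np.sin(th)
        z = rs * np.cos(th)
        col = ((x + half_w) / (2 * half_w) * (grid_px - 1)).astype(int)
        row = (z / depth_mm * (grid_px - 1)).astype(int)
        np.add.at(img, (row, col), ascans[i])   # unbuffered accumulation
        np.add.at(counts, (row, col), 1)
    nonzero = counts > 0
    img[nonzero] /= counts[nonzero]             # average overlapping samples
    return img
```

Pixels outside the fan stay zero, and samples that overlap near the fan's apex are averaged rather than overwritten.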
An actuation system was built to drive the needles, as partially pictured in Figure 2. The system uses a DC motor and a set of bevel gears to mechanically ensure the rotation synchronization of the lenses. A feedback electronic system was also implemented to maintain a constant rotation speed. The speed deviation was kept within 2.5% of the desired value. In the following imaging experiments, the system was configured to operate at 0.5 scans per second.
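The paper does not describe the feedback controller itself, but a speed loop of this kind is commonly implemented as a proportional-integral (PI) update; the sketch below is purely illustrative, with gains, plant model, and function names all assumed:

```python
def pi_speed_loop(target, plant, kp=0.5, ki=2.0, dt=0.05, steps=2000):
    """Drive a motor's measured speed toward `target` with a PI law.
    `plant(u)` returns the speed produced by control effort u."""
    integral, u, speed = 0.0, 0.0, 0.0
    for _ in range(steps):
        speed = plant(u)              # measure current rotation speed
        err = target - speed
        integral += err * dt          # accumulate error over time
        u = kp * err + ki * integral  # PI control law
    return speed

# Toy plant: a motor with a 10% gain error that the integral term
# must compensate for; target is 0.5 scans per second, as in the text.
final_speed = pi_speed_loop(0.5, lambda u: 0.9 * u)
```

With these (assumed) gains the loop settles well within the 2.5% tolerance quoted above; the integral term is what removes the steady-state error a purely proportional controller would leave.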
**Swept Source OCT System**
As shown in Figure 4, we built an OCT system based on a swept source laser (SS; Micron Optics, Atlanta, GA) for this study. The laser was centered at 1310 nm, with a wavelength tuning range of approximately 100 nm. Two optical circulators were used to assemble the interferometer for the OCT setup. The output power of the laser was 9.8 mW, and the power delivered onto the sample was less than 1 mW because of the passive losses in the system. A 660-nm laser aiming beam with an average power of approximately 10 mW was combined with the OCT beam by a 1×2 wavelength-division multiplexing coupler. It provided a visible indication of the scanning beam’s position on the tissue. The A-scan rate was configured at 333 Hz.
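For readers unfamiliar with swept-source processing, each A-scan is obtained by Fourier-transforming the interference fringe recorded as the laser sweeps; this generic sketch (ours, not the authors' code) shows the idea on a synthetic fringe with a single reflector:

```python
import numpy as np

def ascan_from_fringe(fringe):
    """Turn one swept-source fringe (sampled uniformly in wavenumber k)
    into a depth profile: reflectors appear as peaks in the FFT magnitude."""
    fringe = fringe - np.mean(fringe)   # remove the DC background
    window = np.hanning(len(fringe))    # taper to suppress sidelobes
    return np.abs(np.fft.rfft(fringe * window))

# Synthetic fringe: a single reflector gives a cosine in k-space,
# which maps to a single peak (here at depth bin 50) in the A-scan.
k = np.arange(1024)
fringe = 1.0 + 0.5 * np.cos(2 * np.pi * 50 * k / 1024)
profile = ascan_from_fringe(fringe)
```

Real systems must also resample the detector signal so it is linear in wavenumber before the transform; that step is omitted here.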
**Imaging the Collector Channel**
Human cadaver eye tissue was prepared for OCT imaging. The cornea and surrounding tissue were dissected from a whole globe, and the iris, lens, and supporting tissue were removed. The resultant shell was quartered, and the segments were dyed with methylene blue for a better visual contrast between the SC and the sclera. We removed TM from some tissue segments while maintaining TM for other segments.
This formed two groups of samples. Each segment under test was placed in a microscope slide well with a few drops of balanced salt solution to prevent desiccation. The microscope slide was secured on a mechanical stage. The OCT probe was positioned over the tissue and was adjusted to have a distance of approximately 1.5 mm from the tip to the sample, as shown in Figure 3. The stage could be translated to move the sample across the surface perpendicular to the probe’s axis. OCT images were displayed on the system monitor.
First, we examined the tissue penetration of the 1310 nm OCT light on the samples with TM intact. To test the penetration over the entire sample, instead of rotating the two needles to generate a fan-shaped scan, the stage was linearly translated so that the probe traveled perpendicular to the SC over a longer range to cover the tissue sample while the laser beam was kept undeflected. Thus, the resultant OCT images in this step had a regular rectangular shape.
Having verified the tissue penetration of 1310 nm light, we next proceeded to search for and image CCs using the probe, with the needles actuated to rotate. The stage was translated so that the probe moved along the SC to search for CCs while the beam scanned the cross-section of the SC in a fan-shaped pattern, as shown in Figure 1b. To provide corroboration of any CCs found by the probe, the experiments in this step were conducted on tissues with the TM removed so that image confirmation by scanning electron microscopy (SEM) could be performed later. If the TM had not been removed, it would have been very difficult to accurately map the CCs found across the entire sample and to image the appropriate parts by SEM, leaving only a remote chance of correct correlation between OCT and SEM images. In addition, during SEM sample preparation, desiccation of the samples would have caused the TM tissue to obscure the openings of any CCs found during OCT examination.
Once we located the CCs from the OCT images, we recorded their locations and mapped them across the sample using the scanning visualization provided by the aiming beam. We then sent the sample for study by using SEM to image those spots where CC openings were located.
**RESULTS**
A typical OCT image acquired by translating the stage in the penetration verification experiments is shown in Figure 5a. It

**Figure 3.** Side view of the endoscopic OCT probe and actuation system used in the CC imaging experiments. Close view of the probe tip and the human cadaver eye tissue underneath is enclosed. The sample has been dyed with methylene blue. The bevel gears of the actuation system synchronize the lens rotation mechanically. The outer needle is 63.5 mm long. S, the red aiming beam spot onto the sample.

**Figure 4.** The swept source OCT setup. SMF, single-mode fiber; PC, polarization controller; ADC, analog-to-digital converter; C, optical circulator.

**Figure 5.** OCT and SEM images of human cadaver eye tissue segments. (a) OCT image of a tissue segment with TM intact, acquired by translating the stage. (b) OCT image of a tissue with TM removed, acquired by rotating the needles of the endoscopic probe. (c) SEM image of the same CC in (b). O, CC opening; T, CC path through sclera.
clearly depicts the shape of the cross-section of the SC. Tissues under the TM and on the other side of the SC wall have a clarity almost as good as that of the TM itself. There is no observable shadow effect from the TM. These images verify that the probe operating at 1310 nm can indeed clearly visualize tissues behind the TM for a considerable distance. Some of these images even reveal structures that could be CCs branching far away from their original insertion site.
A typical OCT image acquired by rotating the needles of the endoscopic probe in the CC search experiments is shown in Figure 5b. From the OCT image, one can clearly see not only the CC opening exiting from the SC wall but also the shape of the channel winding into the sclera. Based on these images, we were able to quantitatively measure the dimensions of these physiological structures. The SEM image of the same CC is shown in Figure 5c for comparison. The two methods agree with each other, indicating that the CC opening was approximately 120 μm wide.
The probe focuses the probing beam to a working distance of approximately 1.4 mm (in air) ahead of the tip. The focal spot size was measured to range from 12 to 14 μm in diameter. Both the working distance and the spot size have a weak dependence on the deflection angle: the maximum variation of the spot size was 16%, and the variation in the working distance was 13%.
A maximum scan range of approximately 40° (20° for half angle) was achieved. The axial scan range was 3 mm, which was determined by the sampling period in the wavenumber space. The signal-to-noise ratio of the entire system was measured to be greater than 95 dB.
**DISCUSSION**
As opposed to approximately 800 nm, the typical wavelength range for ophthalmic OCT imaging used in previous studies (Kagemann L et al. *IOVS* 2009;50:ARVO E-Abstract 813), a central wavelength of 1310 nm was selected for this study. Indeed, axial resolution is lower for longer central wavelength assuming the same wavelength scan range. The theoretical axial resolution of our system is 7.5 μm, whereas that of the previous studies (Kagemann L et al. *IOVS* 2009;50:ARVO E-Abstract 813) was 1.5 μm. However, as shown in Figure 5, one can easily identify and locate CCs from the images acquired by the probe. The 7.5-μm axial resolution was sufficient because the dimensions of the structures were usually on the order of 100 microns. The fact that the previous systems (Kagemann L et al. *IOVS* 2009;50:ARVO E-Abstract 815) with higher axial resolution could not provide images of the same quality could have been due primarily to light scattering and absorption by intervening tissues. Although no blood was involved in this ex vivo study, one can expect an improved performance over shorter wavelengths for in vivo studies because shorter wavelengths are more greatly affected by optical scattering. Higher water absorption at 1310 nm should not impair image contrast as much as expected in external OCT systems because the distance from the probe tip to the tissue of interest is usually in the 0.5- to 1-mm range during the intended endoscopic imaging procedures.
This selection of the 1310 nm wavelength was verified in the initial penetration verification experiments. Although we did not happen to capture a CC during those experiments (which, after all, were not designed to search for CCs), the images we obtained with the TM intact suggest that the clarity would have been comparable had a CC been captured. Beyond the intrinsic advantages of 1310 nm light, this clarity arises primarily because the beam bypasses most of the intervening tissues; the only remaining tissue is a thin, flimsy layer of TM, usually around 10–20 microns thick, that is easily penetrated, as shown in Figure 5a.
Although the rotation speed of the motor was held constant, the angular speed of the deflection was not constant, because the deflection angle is not linearly related to the rotation angle. An accurate relationship between the two angles is critical for accurate image reconstruction. In this study, building on our earlier simplified model, we developed a more accurate theoretical model to estimate the deflection angle. A numerical simulation (ZEMAX Development Corporation, Bellevue, WA) was also used to verify this calculation. Finally, we measured the relationship experimentally. These results are shown in Figure 6. We applied this result to the final image reconstruction to obtain the correct geometry of the structures.
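The reconstruction step described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the calibration curve below is a stand-in sine profile, whereas in practice the table of (rotation angle, deflection angle) pairs would come from the theoretical model, the ZEMAX simulation, or the experimental measurement shown in Figure 6.

```python
import numpy as np

# Hypothetical calibration table: deflection angle theta (degrees) tabulated
# at uniformly spaced rotation angles xi (degrees). The sine shape is only a
# stand-in for the real (nonlinear, non-sinusoidal) curve of Figure 6.
xi_cal = np.linspace(0.0, 180.0, 19)
theta_cal = 20.0 * np.sin(np.deg2rad(xi_cal))

def deflection_angle(xi):
    """Map motor rotation angle(s) xi to beam deflection angle(s) theta
    by linear interpolation of the calibration table."""
    return np.interp(xi, xi_cal, theta_cal)

# A-scans are acquired at uniform *rotation* angles, but the fan-shape image
# must be drawn at the corresponding *deflection* angles:
xi_scan = np.linspace(0.0, 180.0, 666)   # one rotation half-cycle of A-scans
theta_scan = deflection_angle(xi_scan)   # non-uniform angular spacing
```

Each A-scan is then placed in the fan-shape image at its interpolated deflection angle rather than at its raw rotation angle, which restores the correct geometry of the imaged structures.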
In this endoscopic probe design, the actuation system is located far away from the probe tip, which enables easy miniaturization. The diameter of the probe was limited primarily by that of the GRIN lens. We had already achieved a narrower probe encased in two needles of 25 and 21 gauge, respectively. Needles with wall thicknesses smaller than 0.04 mm (0.0015 inch) and diameters smaller than 400 μm are commercially available. These can further reduce the probe size, making it small enough to be introduced through a clear corneal incision for OCT visualization of the CC location before or while implanting a bypass shunt.
As mentioned in our previous publication, inherent in this technology is the capability to perform volumetric scanning by using different rotation modes. By driving the lenses at different angular speeds and switching their rotation directions, we can engineer many volumetric scanning patterns in addition to the planar fan-shape scan pattern. Those patterns might permit visualization of larger areas, leading to more rapid identification of tissue structures.
The frame rate of the OCT system (0.5 fps) was limited by the scan rate of the swept laser. To accumulate enough A-scans (666 depth scans in our case) for each OCT image frame, the rotation speed of our probe had to be kept at a constant 15 rpm, much lower than what it can support. OCT engines with A-scan rates greater than 100 kHz have been demonstrated, and OCT systems with A-scan rates greater than 20 kHz are now commercially available. These could significantly increase the frame rate of our endoscopic system.

**Figure 6.** The relationship between deflection angle $\theta$ and rotation angle $\xi$. **Black line:** result of the new theoretical model; **red line:** result of the ZEMAX simulation; **blue line:** result of the experimental measurement.
**CONCLUSION AND FUTURE WORK**
This study demonstrated that our endoscopic probe has sufficient resolution to locate and image CCs exiting SC in ex vivo human cadaver eyes. It may potentially be adapted to visualize the anterior chamber intraoperatively to provide guidance for surgeries such as bypass stent implantation.
Motion artifact, a common problem in OCT imaging, results from relative movement between the tissue target and the OCT instrument. During stent implantation, the eyelids are usually immobilized by clamps. Because the probe is inserted through a corneal incision, the probe’s position relative to the TM and SC is relatively constant. To further decrease this artifact, higher frame rates (>24 frames/s) must be achieved for real-time video operation by using OCT systems with faster A-scan rates. This upgrade, combined with improved eye immobilization procedures during surgery, will minimize motion artifacts.
As mentioned, volumetric scan patterns can be designed to enable three-dimensional OCT imaging. Integration of this imaging device with other surgical tools may ultimately provide intraoperative assistance to surgical procedures that could benefit from real-time imaging guidance.
Acknowledgments
The authors thank Ying Wellsand of Thorlabs, Inc., whose connection made this research possible; Mike Roy of the Division of Chemistry and Chemical Engineering at the California Institute of Technology for help with the mechanical fabrication of the actuation system; and Kevin Hsu of Micron Optics, Inc., for the loan of the swept source laser.
References
1. Rosenquist R, Epstein D, Melamed S, Johnson M, Grant WM. Outflow resistance of enucleated human eyes at two different perfusion pressures and different extents of trabeculectomy. *Curr Eye Res.* 1989;8:1235.
2. Dvorak-Theobald G. Schlemm's canal: its anastomoses and anatomic relations. *Trans Am Ophthalmol Soc.* 1934;32:574.
3. Bahler CK, Smedley GT, Zhou J, Johnson DH. Trabecular bypass stents decrease intraocular pressure in cultured human anterior segments. *Am J Ophthalmol.* 2004;138:60-66.
4. Kagemann L, Wollstein G, Ishikawa H, et al. Identification and assessment of Schlemm's canal by spectral-domain optical coherence tomography. *Invest Ophthalmol Vis Sci.* 2010;51:4054.
5. Huang D, Swanson EA, Lin CP, et al. Optical coherence tomography. *Science.* 1991;254:1178.
6. Povazay B, Bizheva K, Hee MR, et al. Enhanced visualization of choroidal vessels using ultrahigh resolution ophthalmic OCT at 1050 nm. *Opt Express.* 2005;11:1980.
7. Ren J, Wu JG, McDowell EJ, Yang CH. Manual-scanning optical coherence tomography probe based on position tracking. *Opt Lett.* 2009;34:3400.
8. Wu JG, Conry M, Gu CH, Wang F, Yaoqob Z, Yang CH. Paired-angle-rotation-based optical coherence tomography forward-imaging probe. *Opt Lett.* 2006;31:1265.
9. Han S, Sarunic MV, Wu J, McDowell EJ, Yang CH. Handheld forward-looking needle endoscope for ophthalmic optical coherence tomography inspection. *J Biomed Opt.* 2008;13:020905.
10. Huber R, Adler DC, Srinivasan VJ, Fujimoto JG. Fourier domain mode locking at 1050 nm for ultra-high-speed optical coherence tomography of the human retina at 256,000 axial scans per second. *Opt Lett.* 2007;32:2049.
Hybrid acceleration techniques for the physics-informed neural networks: a comparative analysis
Fedor Buzaev, Jiexing Gao*, Ivan Chuprov, Evgeniy Kazakov
Moscow Research Center, 2012 Labs, Huawei Technologies Co., Ltd.,
Smolenskaya square 7-9, Moscow, 119121, Russia.
*Corresponding author(s). E-mail(s): firstname.lastname@example.org;
Abstract
Physics-informed neural networks (PINNs) have emerged as a promising approach for solving partial differential equations (PDEs). However, the training process for PINNs can be computationally expensive, limiting their practical applications. To address this issue, we investigate several acceleration techniques for PINNs that combine Fourier neural operators, separable PINN, and first-order PINN. We also propose novel acceleration techniques based on second-order PINN and Koopman neural operators. We evaluate the efficiency of these techniques on various PDEs, and our results show that the hybrid models can provide much more accurate results than the classical PINN under training time constraints, making PINNs a more viable option for practical applications. The proposed methodology is generic and can be extended to a larger set of problems, including inverse problems.
Keywords: Physics-informed neural networks, sinusoidal learning space, Fourier neural operators, Koopman neural operators
1 Introduction
Physics-informed neural networks (PINNs) belong to universal function approximators that are trained by taking into account the underlying physical laws during the learning process, and in this way, provide us a robust framework to make predictions bounded by the physical laws [1]. The main application of PINN is to solve partial
differential equations (PDEs) [2]. In this regard, PINNs are often considered a mesh-free alternative to traditional numerical PDE solvers [3]. Essentially, PINN solves a PDE in the weak formulation, i.e. by minimizing a loss function that indicates how well the neural network satisfies the PDE. Usually the loss function is taken as the residual of the PDE and its boundary conditions, and the minimization is performed by tuning the weights of the neural network. In order to compute the PDE residual, the partial derivatives of the neural network have to be computed, which can be done using automatic differentiation. The PINN methodology was rediscovered in [4] and has since been applied in a wide range of mathematical physics problems, including computational fluid dynamics [5], electrical engineering [6], radiative transfer [7], nano-optics [8], heat transfer [9], etc. An extensive list of references describing PINN applications for solving PDEs can be found in [10]. Due to the growing interest in this topic, several frameworks dedicated to PINN have been introduced (there are over 400 projects related to physics-informed machine learning on GitHub), including those from NVIDIA [11] and Microsoft [12]. However, there are some concerns about the mathematical justification of this approach. In particular, minimization of the loss function in PINN is a highly non-convex optimization problem, wherein convergence to the global minimum cannot be guaranteed [13]. The structure of the neural network has to be adjusted to a given PDE (and possibly to its parameters). Sometimes the efficiency of PINN does not depend smoothly on the number of layers, neurons and other characteristics of the network structure, which complicates the design of the optimal PINN structure for a given problem.
The main practical question is whether PINNs can be more effective than traditional solvers, such as the finite element method (FEM). Potentially, PINN can be faster and more efficient than FEM in some cases, because PINN can learn to approximate the solution to a PDE directly, without the discretization and mesh generation that can be time-consuming and computationally intensive steps in the FEM process. However, in the recent work of Grossmann et al. [14] it was shown that, in terms of solution time and accuracy, PINNs were not able to outperform FEM. A similar conclusion was drawn in [3]: low-frequency components of the solution converged quickly, whereas an accurate solution of the high frequencies required an exceedingly long time. This feature of PINN can be regarded as an implication of the F-principle in deep learning [15], which says that neural networks tend to fit data with low-frequency functions. The main argument in defense of PINNs is that, unlike traditional solvers, they can take free parameters of the PDEs as extra inputs and in this way be trained for a class of PDEs rather than for a single PDE. For instance, such an approach was applied to the nonlinear Schrödinger equation [16]: the launch power of the signal is embedded in PINN as a parametric feature, which makes PINN learn the signal transmissions under different powers simultaneously. There are dedicated frameworks based on PINN that are specifically designed to incorporate a large number of parametric features, e.g. PI-DeepONet [17]. Since the training of PINN is performed offline and the resulting network is computationally very fast, the long training time is not an issue. The second argument in favor of PINN is based on the so-called transfer learning approach [18–20], wherein
a formerly trained PINN is reused as the starting point of the training procedure for a new problem, thereby accelerating convergence.
In practice, PINN training is a time-consuming procedure, and new methods for accelerating it are required. Recently, several novel architectures and approaches have been proposed to improve the performance and efficiency of PINNs. Moreover, some papers report cases where the classical PINN did not converge at all, while the proposed modifications made it possible to obtain an accurate solution [21].
Acceleration methods for PINNs are usually studied in isolation from each other and compared only against a PINN based on a fully connected neural network. The goal of this paper is to analyze the efficiency of combining several acceleration techniques.
2 Overview of PINNs
PINNs approximate PDE solutions using a deep neural network. The basic features of PINN can be summarized as follows. Consider a PDE in the following form:
\[ \mathfrak{F}(U(\boldsymbol{x})) = 0 \quad \text{for} \quad \boldsymbol{x} \in D, \]
(1)
where \( \mathfrak{F} \) is the differential operator, \( U \) is the solution to the PDE, and \( \boldsymbol{x} = \{x_1, ..., x_n\} \in D \) is an \( n \)-dimensional vector of coordinates belonging to the domain \( D \). The operator \( \mathfrak{F} \) may involve high-order partial derivatives of \( U \) over \( x_i \). The domain \( D \) is bounded by \( \Gamma \). The function \( U \) is subject to boundary conditions (here we do not distinguish between boundary and initial conditions),
\[ \mathfrak{B}(U(\boldsymbol{x})) = 0 \quad \text{for} \quad \boldsymbol{x} \in \Gamma, \]
(2)
where \( \mathfrak{B} \) is the boundary operator. Our goal is to find the vector \( \boldsymbol{b} \) incorporating the parameters of the neural network, so that PINN satisfies Eqs. (1) and (2). The problem is formulated in a weak form:
\[ \boldsymbol{b} = \arg \min_{\boldsymbol{b}} [L_{\text{pde}} + L_{\text{bc}}], \]
(3)
where
\[ L_{\text{pde}} = \sum_{i=1}^{N} [\mathfrak{F}(\text{PINN}(\boldsymbol{x}_i, \boldsymbol{b}))]^2 \quad \text{for} \quad \boldsymbol{x}_i \in D, \]
(4)
\[ L_{\text{bc}} = \sum_{i=1}^{M} [\mathfrak{B}(\text{PINN}(\boldsymbol{x}_i, \boldsymbol{b}))]^2 \quad \text{for} \quad \boldsymbol{x}_i \in \Gamma, \]
(5)
while \( N \) and \( M \) are the number of sampling points within \( D \) and at \( \Gamma \), respectively. The partial derivatives of PINN with respect to the elements of \( \boldsymbol{x} \) in Eq. (4) can be computed on the basis of automatic differentiation tools implemented in machine learning libraries.
The schematic of PINN is shown in Figure 1.
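To make Eqs. (3)–(5) concrete, here is a minimal numpy sketch for a 1-D toy problem \(u''(x) + \sin x = 0\) on \([0, \pi]\) with \(u(0) = u(\pi) = 0\). This is an illustrative stand-in, not the paper's implementation: the neural network is replaced by a one-parameter trial function and automatic differentiation by a central finite difference, so that only the assembly of the two loss terms is shown.

```python
import numpy as np

def trial(x, b):
    # Stand-in for PINN(x, b): a one-parameter trial function.
    return b * np.sin(x)

def second_derivative(f, x, b, h=1e-4):
    # Central finite difference standing in for automatic differentiation.
    return (f(x + h, b) - 2.0 * f(x, b) + f(x - h, b)) / h**2

def total_loss(b, n_interior=50):
    # L_pde, Eq. (4): squared PDE residual F(u) = u'' + sin(x) at interior points.
    x_in = np.linspace(0.0, np.pi, n_interior + 2)[1:-1]
    residual = second_derivative(trial, x_in, b) + np.sin(x_in)
    l_pde = np.sum(residual**2)
    # L_bc, Eq. (5): squared boundary residual B(u) = u at the two endpoints.
    x_bc = np.array([0.0, np.pi])
    l_bc = np.sum(trial(x_bc, b)**2)
    # Eq. (3): the quantity minimized over the network parameters b.
    return l_pde + l_bc

# The exact solution u(x) = sin(x) corresponds to b = 1, where the loss
# (numerically) vanishes; any other b is penalized through the residuals.
```

In an actual PINN the scalar parameter `b` becomes the full weight vector of the network, and the minimization of `total_loss` is carried out by a gradient-based optimizer.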
The performance of PINN is strongly affected by the topology/architecture of the neural network, including the number of layers, the number of neurons in each layer, and the connections between neurons. In general, a more complex topology with more layers and more neurons can potentially improve the performance of PINN, but it also requires more training time and may therefore provide less accurate results under training time constraints.
3 Acceleration methods
Due to the large number of parameters that can be varied in the PINN model, such as the number of layers, number of neurons, optimizer, etc., an accurate analysis of acceleration methods for PINN is a challenging task. We have therefore chosen methods that we believe to be fundamentally different, but which nevertheless show high efficiency when applied individually. For the purpose of our analysis, we identify two groups of acceleration methods. The first group comprises techniques that modify the way data is taken from the deep learning block. From this group we consider separable PINN (SPINN), FO-PINN and SO-PINN; in addition, we implement first-order SPINN (FO-SPINN) and second-order SPINN (SO-SPINN) as hybrid models alongside them. The second group of methods can be attributed to the functional spaces; essentially, it deals with what is inside the deep learning block. From this group we consider PINN with a sinusoidal activation function, PINN with Fourier neural operators (FNOs), PINN with a U-Net architecture and, as a hybrid model, PINN with the sinusoidal activation function and FNO blocks. Subsequently, several hybrid PINN models are considered by combining methods from the two groups pairwise. Below is a brief discussion of each of these methods. Note that the distinction between the two groups is not strict, and some methods can be attributed to both categories. In our classification, we have mostly relied on the context of the papers where these methods were originally described.
3.1 Separable PINN
SPINNs [22] are a type of PINN that uses a separable representation of the solution, where the solution is decomposed into a product of functions that each depend on a single variable. This approach can significantly reduce the computational cost of solving PDEs and can also improve the accuracy of the solution. SPINNs have been shown to be particularly effective for PDEs with separable solutions, such as certain types of elliptic and parabolic PDEs. In this regard, SPINN can partially be thought of as a representative of the second group of methods. The schematic of the SPINN architecture is shown in Figure 2.
In particular, for an output function depending on $n$ variables, SPINN consists of $n$ sub-networks, each depending on a single coordinate. The final output is taken as a dot product of the corresponding outputs of the sub-networks. This approach reduces the number of propagations through the network during the learning process, while the complexity of the problem scales linearly with the number of dimensions, unlike classical architectures with exponential scaling.
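The separable representation can be sketched as follows for two variables, $u(x, y) \approx \sum_r f_r(x)\, g_r(y)$. This is a hedged illustration under simplifying assumptions: the per-coordinate "sub-networks" are replaced by fixed trigonometric feature maps, and the rank `r` is arbitrary; a real SPINN would learn these maps.

```python
import numpy as np

def subnet_x(x, r=4):
    # Stand-in for the x sub-network: r output features per input point.
    return np.stack([np.sin((k + 1) * x) for k in range(r)], axis=-1)

def subnet_y(y, r=4):
    # Stand-in for the y sub-network.
    return np.stack([np.cos((k + 1) * y) for k in range(r)], axis=-1)

def spinn_eval(x, y):
    """Evaluate the separable representation on the full tensor grid.
    Only len(x) + len(y) sub-network evaluations are needed; the grid
    values follow from a contraction over the feature (rank) index."""
    fx = subnet_x(x)    # shape (Nx, r)
    gy = subnet_y(y)    # shape (Ny, r)
    return fx @ gy.T    # shape (Nx, Ny): sum_r f_r(x_i) g_r(y_j)

x = np.linspace(0.0, 1.0, 64)
y = np.linspace(0.0, 1.0, 32)
u = spinn_eval(x, y)    # 64 x 32 grid from only 96 sub-network calls
```

This is the source of the linear scaling mentioned above: a classical network would need 64 × 32 = 2048 forward passes for the same grid, whereas the separable form needs 64 + 32.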
3.2 First-order PINN
It has been observed that computing the partial derivative terms in the PINN loss function via automatic differentiation during training is computationally expensive. In this regard, several methods have been proposed to reduce the use of automatic differentiation. In particular, in the so-called FO-PINN [23] the neural network has additional outputs that approximate the partial derivatives, while in [24] the derivatives are estimated by using meshless radial basis function-finite differences.
While in the classical PINN, the output of the neural network approximates the PDE solution and the loss function is estimated w.r.t. the function to be found, in FO-PINN or in FO-SPINN there are additional outputs ($\text{PINN}_d(x_i)$) to approximate the first-order partial derivatives. Consequently, the loss function incorporates an extra
term $L_d$ showing how accurately the derivatives are approximated compared to those computed using automatic differentiation:
$$L_d = \sum_{i=1}^{N} [\text{PINN}_d(x_i) - \text{autograd}(\text{PINN}(x_i), x_i)]^2.$$ \hspace{1cm} (6)
The ideas of FO-PINN and FO-SPINN are illustrated in Figure 3. Note that the first-order approach can be implemented for SPINN architectures; such hybrid schemes are also analysed in this paper.
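The derivative-consistency term of Eq. (6) can be illustrated with a toy example. This is a hedged sketch, not the authors' code: both "heads" of the network are replaced by analytic functions, and automatic differentiation by a central finite difference, so that only the structure of $L_d$ is shown.

```python
import numpy as np

def pinn_u(x):
    # Main output head: stand-in for PINN(x, b).
    return np.sin(x)

def pinn_du(x, eps=0.0):
    # Extra derivative head PINN_d(x): here the true derivative plus a
    # controllable error eps, standing in for an imperfectly trained head.
    return np.cos(x) + eps

def autograd_u(x, h=1e-6):
    # Central finite difference standing in for automatic differentiation
    # of the main head.
    return (pinn_u(x + h) - pinn_u(x - h)) / (2.0 * h)

def derivative_loss(x, eps=0.0):
    """L_d of Eq. (6): squared mismatch between the derivative head and
    the (automatically) differentiated main head."""
    return np.sum((pinn_du(x, eps) - autograd_u(x))**2)

x = np.linspace(0.0, 2.0 * np.pi, 100)
# A consistent derivative head makes L_d (numerically) vanish, while any
# systematic error eps in the head is penalized quadratically.
```

During training, $L_d$ is added to the PDE and boundary losses; once the derivative head is consistent, the PDE residual can be evaluated from the head outputs instead of repeated automatic differentiation.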
### 3.3 Second-order PINN
SO-PINN is a logical extension of FO-PINN. In our approach, we integrate an additional neural network that generates both first and second order derivatives. This means we have two neural networks operating concurrently. They are both trained
with the same data batches, while the loss function to be minimized incorporates the loss terms of both networks. Importantly, we merge the two neural networks into a single graph, allowing for backpropagation through the entire network.
The architecture is illustrated in Figure 4. The first-order derivative loss term is the residual between the first-derivative output of the second neural network and the automatically differentiated output of the first neural network. In SO-PINN, the second-order derivative loss term is calculated as the residual between the corresponding outputs of the second neural network and the automatically differentiated first-derivative output of the second neural network. In this paper, we adopt the following configuration of SO-PINN: 6 linear layers with layer sizes \([n_{\text{inputs}}, 200, 500, 500, 200, n_{\text{outputs}}]\), where \(n_{\text{inputs}}\) is the number of input coordinates and \(n_{\text{outputs}}\) is the number of output derivatives. We used the GELU non-linearity as the activation function.
### 3.4 Fourier Neural Operator and Koopman Neural Operator
FNOs were introduced in [25] and are designed to perform a mapping between infinite-dimensional function spaces [26]. Initially, FNO was designed to parametrize a set of PDE solutions using a training dataset. The mathematical properties of such neural operators are considered in [27]. Essentially, a neural operator can be represented as a neural network consisting of a stack of \(L\) layers:
\[
G = Q \circ (W_L + K_L) \circ ... \circ \sigma(W_1 + K_1) \circ P,
\]
where \(G\) is the neural operator, \(\sigma\) is the activation function, \(W_i\) is a point-wise linear operator, \(K_i\) is an integral kernel operator, while \(P\) and \(Q\) are pointwise neural networks that encode the lower-dimensional function into a higher-dimensional space and decode the higher-dimensional space back into the lower-dimensional one, respectively. In FNO, \(K_i\) is taken as a convolution operator, and the fast Fourier transform \(F\) is used to compute \(K_i\). Thus,
\[(Kv)(x) = F^{-1}(r(Fv))(x),\]
where \(r\) represents the parameters to be learned. A schematic representation is depicted in Figure 5. The use of FNO in PINN is described in [28] and [27]; the corresponding model is referred to as the physics-informed neural operator (PINO). In [27], PINO was up to 2 orders of magnitude faster than classical solvers. However, the PINO error could not be reduced below 1% (or perhaps only at the cost of a drastic increase in training time), whereas classical solvers have no difficulty achieving a higher level of accuracy.
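The spectral convolution \((Kv)(x) = F^{-1}(r\,(Fv))(x)\) can be sketched in a few lines of numpy. This is an illustrative 1-D, single-channel version: the scalar per-mode weights `r` and the retained mode count are stand-ins for the learned complex weight tensors of a real FNO layer.

```python
import numpy as np

rng = np.random.default_rng(1)
n, modes = 64, 12

# Learnable spectral weights r: one complex factor per retained Fourier mode.
r = rng.normal(size=modes) + 1j * rng.normal(size=modes)

def spectral_conv(v):
    """(K v)(x) = F^{-1}(r * (F v))(x): multiply the lowest `modes` Fourier
    coefficients of v by r and zero out the higher-frequency remainder."""
    v_hat = np.fft.rfft(v)                  # forward FFT
    out_hat = np.zeros_like(v_hat)
    out_hat[:modes] = r * v_hat[:modes]     # apply the learned weights
    return np.fft.irfft(out_hat, n=len(v))  # inverse FFT back to x-space

x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
v = np.sin(3.0 * x)
w = spectral_conv(v)    # band-limited, spectrally reweighted signal
```

Truncating to the lowest `modes` coefficients is what makes the operator cheap and resolution-independent; in a full FNO layer this operation is combined with the pointwise linear term \(W_i\) and an activation.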
Like FNO, the Koopman Neural Operator (KNO), introduced in [29], was initially designed to parametrize a set of PDE solutions using a training dataset. Specifically, KNO takes as input a set of measurements or observations of the system at different times, and produces as output a set of predictions of the system’s behaviour at future times. As described in [29], KNO is expressed as a composition of simpler operators that can be more easily approximated by a neural network, e.g. as a composition of nonlinear observables of the system, followed by a linear operator that maps the observables to their future values. KNO includes the following elements: an encoder of the input data, a Fourier transform (as in FNO), a linear layer (corresponding to a Hankel matrix), and an inverse Fourier transform. On top of that, a convolutional layer is applied to the output of the encoder in order to extract high-frequency components of the solution. The outputs of the inverse Fourier transform and the high-frequency branch are then summed, and the decoder is applied to the sum. A scheme of KNO is given in Figure 6.
Similar to FNO, KNO can also be integrated into the PINN framework. However, incorporating FNO and KNO into PINN is not a straightforward process. Instead of treating a group of points as a single training sample, in PINN each point can be treated as a separate sample. This raises the crucial question of which quantity the Fourier transform should be applied to. To overcome this obstacle and to work with mesh-free data, the input data must be shaped to fit the convolution layers. For that purpose, we artificially increase the data dimension and perform embeddings through two linear layers, transforming the data from $[n_{\text{points}}, n_{\text{dim}}]$ to $[n_{\text{points}}, \text{width}, \text{width}]$, where $n_{\text{dim}}$ is the dimension of the input data. For the outputs we use two linear layers for the inverse transformation from $[n_{\text{points}}, \text{width}, \text{width}]$ to $[n_{\text{points}}, n_{\text{dim}}]$. This approach helps to find additional patterns in the mesh-free data.
### 3.5 U-Net
U-Net is a type of neural network architecture that was originally developed for the task of image segmentation [30], which involves identifying and separating different objects within an image. The name U-Net comes from the shape of the network, which resembles the letter U. The U-Net architecture is characterized by a series of convolutional layers that gradually reduce the spatial resolution of the input image, followed by a series of upsampling layers that gradually restore the resolution back to the original size. The network also includes skip connections, which allow information from earlier layers to be passed directly to later layers, bypassing the intermediate layers. This helps the network to better capture both low-level and high-level features of the image, which is particularly important for image segmentation tasks.
To solve partial differential equations, the authors of [31] propose to use FNO layers instead of the usual convolution layers. We have also complemented this architecture with KNO layers. A schematic representation of U-Net with FNO (or KNO) layers is shown in Figure 7.
### 3.6 Sinusoidal activation function
In [32] it was shown that PINNs exhibit a so-called spectral bias that prevents them from learning high-frequency functions. It was also proposed there to add to the neural network special blocks that mimic the solution features in Fourier space. A similar idea was discussed in [33]. In [34] it was proposed to use a sinusoidal activation function for the output of the first layer, which looks as follows:
\[ \sigma(x) = \sin(2\pi(x + b)), \]
(9)
where \( b \) is the bias. The argument in favor of this approach is based on the convergence process of PINN. Namely, it was shown that whenever the activation function is relatively flat (e.g. ReLU or tanh), PINN tends to provide a solution that comes very close to satisfying many PDEs, falling into a local minimum of the PINN loss that only minimizes the PDE residuals. As a consequence, it takes optimizers more time to also minimize the other terms of the loss function, increasing the total training time. It was also shown empirically that the sinusoidal mapping of the inputs prevents the loss function from being ‘trapped’ in a local minimum instead of converging to a global minimum.
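Eq. (9) is straightforward to implement; the sketch below applies it to a toy first-layer output (the bias value and the sample inputs are illustrative only, since in the actual network \(b\) is learnable).

```python
import numpy as np

def sinusoidal_activation(x, b=0.0):
    """Eq. (9): sigma(x) = sin(2*pi*(x + b)), applied element-wise to the
    output of the first layer; b is the (learnable) bias."""
    return np.sin(2.0 * np.pi * (x + b))

# Toy first-layer pre-activations: the sinusoidal mapping makes the features
# periodic in the pre-activation, injecting the high-frequency content that
# flat activations such as tanh or ReLU tend to suppress.
z = np.linspace(-1.0, 1.0, 5)
features = sinusoidal_activation(z)
```

Only the first layer uses this mapping; subsequent layers keep a conventional activation, so the network combines the injected frequencies rather than oscillating everywhere.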
4 Simulations
All simulations are performed on an Nvidia V100 GPU with 16 GB memory. To make the comparison consistent, we limit the training time to 10 minutes. The default configuration of PINN consists of 8 residual linear layers with 300 neurons in each layer. FNO consists of 3 blocks incorporating 4 linear layers and taking 32 modes, followed by GELU activation functions. We consider two models based on SPINN. The first model of SPINN has 4 linear layers with 300 neurons in each layer followed by tanhshrink activation functions. The second model of SPINN consists of 4 linear layers with 32 neurons in each layer followed by FNO. The models of FO-PINN and FO-SPINN have the same configuration as the models of PINN and SPINN; GELU activation functions are employed in this case. We use the AdamW optimizer to train for 10 minutes with an initial learning rate of 0.001. The ReduceLROnPlateau scheduler multiplies the learning rate by 0.9 if the loss has not decreased after 50 consecutive epochs. The weights of the network are initialized using the Xavier method [35]. To ensure consistency, a total of 2500 sampling points is used in all cases. These points are randomly and uniformly distributed across the domain and remain fixed throughout the training process.
4.1 Poisson equation
The Poisson equation is a type of elliptic partial differential equation that is widely used in theoretical physics. Its solution is the potential field caused by a given charge density distribution; from a known potential field, the electrostatic field can be calculated. We consider
\[ \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) u(x, y) = f(x, y), \]
(10)
where \( u \) is the unknown function, and \( f \) is the source function that reads as follows:
\[ f(x, y) = (1 - x^2)(2y^3 - 3y^2 + 1). \]
(11)
The results of the experiments are shown in Table 1, with the best result marked in bold. In almost all cases, training with the sinusoidal activation function yields a smaller final mean squared error than training without it.
Table 1 MSE for the Poisson equation
| | Classic | +sin | +FNO | +FNO+sin | +KNO | +KNO+sin |
|----------------|---------|--------|--------|----------|--------|----------|
| FO | 3.1e-01 | 1.2e-01| 4.6e-05| 3.7e-05 | 5.0e-05| 1.5e-05 |
| SO | 1.9e-02 | 9.2e-05| 2.3e-05| 4.5e-06 | 2.7e-05| 5.6e-06 |
| Separable | 1.2e-03 | 5.9e-05| 1.0e-04| 6.1e-05 | 7.3e-05| 7.2e-05 |
| Separable+FO | 7.3e-01 | 1.5e-01| 3.6e-06| 6.8e-07 | 1.6e-06| **5.5e-07** |
| Separable+SO | 9.0e-02 | 1.0e-01| 9.3e-02| 2.1e-01 | 9.0e-02| 9.0e-02 |
| Separable+FO+SO | 1.6e-04 | 1.5e-04| 7.1e-05| 1.8e-05 | 8.5e-05| 6.4e-05 |
| U-Net | - | - | 5.5e-04| 9.1e-05 | 4.2e-04| 1.4e-04 |
| U-Net+FO | - | - | 2.3e-04| 1.6e-04 | 1.0e-03| 7.0e-05 |
| U-Net+SO | - | - | 8.8e-05| 2.7e-04 | 2.0e-04| 1.3e-04 |
| Separable+U-Net| - | - | 1.3e-03| 1.0e-04 | 2.3e-04| 7.4e-05 |
| Separable+U-Net+FO | - | - | 7.5e-02| 1.0e-01 | 8.2e-02| 1.3e-01 |
| Separable+U-Net+SO | - | - | 1.0e-03| 4.9e-04 | 1.4e-03| 3.2e-04 |
Table 2 MSE for PINN solutions of the reaction-diffusion equation
| | Classic | +sin | +FNO | +FNO+sin | +KNO | +KNO+sin |
|----------------|---------|--------|--------|----------|--------|----------|
| FO | 6.1e-01 | 4.1e-01| 3.9e-01| 3.8e-01 | 1.3e-04| 1.3e-04 |
| SO | 3.9e-01 | 1.1e-03| 3.6e-04| 2.8e-04 | 4.1e-04| 2.6e-04 |
| Separable | 4.1e-01 | 4.3e-01| 3.7e-02| 2.7e-02 | 3.3e-03| 2.9e-02 |
| Separable+FO | 2.3e+00 | 1.0e-01| 1.5e-04| 1.4e-04 | 1.4e-04| 1.3e-04 |
| Separable+SO | 2.6e+00 | 4.0e-01| 4.1e-01| 4.1e-01 | 4.1e-01| 4.1e-01 |
| Separable+FO+SO | 1.7e+01 | 1.5e+00| 1.3e-03| 6.8e-02 | 5.5e-04| 3.9e-03 |
| U-Net | - | - | 4.2e-04| 3.3e-01 | **6.7e-05**| 2.0e-04 |
| U-Net+FO | - | - | 3.8e-01| 4.6e-04 | 3.7e-01| 3.0e-01 |
| U-Net+SO | - | - | 2.9e-02| 9.3e-02 | 1.0e-01| 3.8e-02 |
| Separable+U-Net| - | - | 5.1e-04| 3.1e-04 | 2.8e-01| 2.5e-05 |
| Separable+U-Net+FO | - | - | 4.5e+00| 3.9e-01 | 2.1e+00| 3.9e-01 |
| Separable+U-Net+SO | - | - | 6.4e-02| 1.9e-01 | 6.4e-02| 8.5e-02 |
4.2 Reaction-diffusion equation
The reaction-diffusion equation is a mathematical equation describing how the concentration of a substance evolves under the combined influence of local chemical reactions and diffusion through a medium.
\[
\frac{\partial u}{\partial t} + \nu \frac{\partial^2 u}{\partial x^2} = \rho u (1 - u),
\]
(12)
where \( \rho = 5 \) and \( \nu = 3 \) are the PDE parameters and \( u \) is the unknown function.
The results are shown in Table 2, with the best result marked in bold. The combined use of SPINN with the first- or second-order modifications reduced the accuracy compared to the classic PINN. The best configuration is the U-Net with KNO blocks. The classic PINN with KNO blocks also shows good results, outperforming the classic PINN and the other methods applied separately. FNO without the use of additional techniques does not converge to the best minimum.
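The "Separable" rows in the tables refer to SPINN [22], which factorizes the solution into per-axis networks evaluated on a tensor-product grid. A minimal sketch of this ansatz (the rank and widths here are illustrative choices, not the configurations used in the experiments):

```python
import torch

class SeparableNet(torch.nn.Module):
    # SPINN-style ansatz: u(x, y) ~ sum_r f_r(x) * g_r(y),
    # with one small MLP per coordinate axis.
    def __init__(self, rank=16, width=32):
        super().__init__()
        def branch():
            return torch.nn.Sequential(
                torch.nn.Linear(1, width), torch.nn.Tanh(),
                torch.nn.Linear(width, rank))
        self.fx, self.fy = branch(), branch()

    def forward(self, x, y):
        # x: (n, 1), y: (m, 1) -> u on the n x m tensor-product grid
        return self.fx(x) @ self.fy(y).T
```

Because each branch sees only one coordinate, n + m forward passes replace n × m, which is what makes the separable architecture cheap on grids.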
4.3 Helmholtz equation
The Helmholtz equation is an elliptic PDE that represents the time-independent form of the wave equation.
\[
\nabla^2 u + k^2 u = -f.
\] (13)
For our tests, we consider
\[
-u_{xx} - u_{yy} - k_0^2 u = f(x, y),
\] (14)
where \(k_0\) and \(f\) are the wave number and the source function, respectively.
The corresponding source functions are given by
\[
f(x, y) = \sin(k_0 x) \sin(k_0 y)
\] (15)
in the domain \(\Omega = [0, 1]^2\), with the Dirichlet boundary conditions \(u(x, y) = 0, (x, y) \in \partial \Omega\). The value of \(k_0\) can be \(4\pi\), \(16\pi\) or \(24\pi\). The distribution with different \(k_0\) is shown in Figure 8. As \(k_0\) increases, the solution becomes more complex in shape. That causes difficulties for neural networks according to the F-principle [15], which states that neural networks tend to fit data with low-frequency functions first.
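The "+sin" columns in the tables correspond to replacing the standard activation with a sinusoidal one [34], which mitigates this low-frequency bias. In a PyTorch model the swap is a one-line change (a sketch; the layer sizes are illustrative):

```python
import torch

class Sin(torch.nn.Module):
    # Sinusoidal activation; helps fit highly oscillatory solutions,
    # such as Helmholtz with large k_0 (cf. the F-principle [15]).
    def forward(self, x):
        return torch.sin(x)

def mlp(act):
    return torch.nn.Sequential(
        torch.nn.Linear(2, 64), act(),
        torch.nn.Linear(64, 64), act(),
        torch.nn.Linear(64, 1))

net_sin = mlp(Sin)             # "+sin" variant
net_gelu = mlp(torch.nn.GELU)  # baseline activation
```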
The MSE values for all cases are shown in Tables 3, 4 and 5. The best result is marked in bold. The best configuration for \(k_0 = 4\pi\) appeared to be SPINN with the first-order modification, including FNO blocks and the sinusoidal activation function. The separate use of these options, as well as of KNO blocks, leads to less accurate results than those from the classic PINN. For \(k_0 = 16\pi\) the accuracy of the classic PINN decreases, while the configurations involving KNO blocks become accurate. In particular, SPINN with the first-order modification, KNO blocks and the sinusoidal activation function shows the best performance. Finally, for \(k_0 = 24\pi\) SPINN with the second-order modification, KNO blocks and the sinusoidal activation function is the winner, outperforming the rest of the configurations by far. As the modifications of PINN introduce additional overhead and the training time is limited to 10 minutes, different configurations manage to go through a different number of learning epochs. Note that the overhead changes with \(k_0\). For instance, for \(k_0 = 4\pi\) the classic PINN is the fastest, while for \(k_0 = 16\pi\) and \(k_0 = 24\pi\) the fastest are PINN with the second-order modification and PINN with KNO blocks and the sinusoidal activation function, respectively. The separate use of SPINN does not improve the results as compared to the classic PINN.
4.4 Burgers’ equation
Burgers’ equation arises in various areas of applied mathematics, including fluid mechanics, nonlinear acoustics and gas dynamics. It is a fundamental PDE which
Fig. 8 Examples of Helmholtz equation with the source terms given by Eq. (15) with $k_0 = 4\pi$ (left), $k_0 = 16\pi$ (middle) and $k_0 = 24\pi$ (right)
Table 3 MSE for the Helmholtz equation with $k_0 = 4\pi$
| | Classic | +sin | +FNO | +FNO+sin | +KNO | +KNO+sin |
|----------------|---------|--------|--------|----------|--------|----------|
| FO | 1.0e-04 | 1.2e-02| 5.9e-02| 8.9e-02 | 1.0e-01| 1.0e-01 |
| SO | 6.2e-02 | 7.1e-02| 6.4e-03| 1.3e-04 | 5.7e-02| 1.0e-01 |
| Separable | 2.3e-01 | 3.7e-02| 2.2e-02| 7.1e-02 | 1.1e-01| 1.0e-01 |
| Separable+FO | 2.1e-01 | 3.7e-02| 2.2e-02| 7.1e-02 | 1.1e-01| 1.0e-01 |
| Separable+SO | 1.8e-01 | 8.6e-04| 7.3e-04| 3.5e-05 | 1.8e-04| 7.8e-04 |
| Separable+FO+SO| 2.2e-01 | 1.3e-02| 2.3e-01| 8.5e-03 | 1.5e-02| 4.4e-02 |
| U-Net | - | - | 1.5e-01| 1.1e-01 | 1.0e-01| 1.0e-01 |
| U-Net+FO | - | - | 6.2e-02| 7.7e-02 | 6.4e-02| 7.7e-02 |
| U-Net+SO | - | - | 1.7e-03| 3.3e-04 | 3.2e-03| 9.0e-03 |
| Separable+U-Net| - | - | 1.0e-01| 9.7e-02 | 1.0e-01| 9.8e-02 |
| Separable+U-Net+FO| - | - | 1.7e-03| 8.0e-04 | 4.1e-02| 1.1e-03 |
| Separable+U-Net+SO| - | - | 1.0e-03| 2.1e-04 | 1.5e-03| 7.5e-04 |
Table 4 The same as in Table 3, but for $k_0 = 16\pi$
| | Classic | +sin | +FNO | +FNO+sin | +KNO | +KNO+sin |
|----------------|---------|--------|--------|----------|--------|----------|
| FO | 1.4e-01 | 1.9e-02| 2.1e-01| 1.1e-01 | 1.8e-01| 1.4e-01 |
| SO | 7.4e-02 | 2.0e-02| 2.0e-01| 5.1e-02 | 1.3e-01| 3.4e-02 |
| Separable | 1.5e-01 | 2.2e-02| 1.5e-01| 4.4e-02 | 1.5e-01| 4.0e-02 |
| Separable+FO | 2.5e-01 | 2.5e-01| 2.4e-01| 1.8e-01 | 1.1e-01| 1.0e-01 |
| Separable+SO | 2.5e-01 | 2.5e-01| 2.4e-01| 3.4e-04 | 6.5e-02| 3.5e-05 |
| Separable+FO+SO| 3.5e-01 | 2.6e-01| 2.4e-01| 4.1e-04 | 2.2e-03| 4.2e-05 |
| U-Net | - | - | 2.4e-01| 1.4e-01 | 2.1e-01| 1.5e-01 |
| U-Net+FO | - | - | 3.3e-02| 3.9e-02 | 6.6e-02| 2.4e-02 |
| U-Net+SO | - | - | 1.1e-03| 2.0e-02 | 2.2e-02| 1.4e-02 |
| Separable+U-Net| - | - | 2.4e-01| 1.0e-01 | 2.4e-01| 1.0e-01 |
| Separable+U-Net+FO| - | - | 2.8e-01| 2.4e-01 | 2.4e-01| 2.4e-01 |
| Separable+U-Net+SO| - | - | 2.4e-01| 3.9e-03 | 2.4e-01| 2.4e-01 |
describes the dynamics of viscous fluids or gases. It is a simplified version of the Navier-Stokes equation and can be derived from it by dropping the pressure gradient term. For our tests, we consider the following equation:
$$u_t + uu_x = (0.01/\pi) u_{xx} \quad (16)$$
Table 5 The same as in Table 3, but for $k_0 = 24\pi$
| | Classic | +sin | +FNO | +FNO+sin | +KNO | +KNO+sin |
|----------------|---------|--------|--------|----------|--------|----------|
| FO | 2.6e-01 | 1.6e-01| 2.4e-01| 2.1e-01 | 2.2e-01| 2.6e-01 |
| SO | 2.4e-01 | 1.7e-01| 2.6e-01| 1.8e-01 | 2.4e-01| 1.9e-01 |
| Separable | 2.6e-01 | 1.8e-01| 2.5e-01| 1.4e-01 | 2.6e-01| 2.1e-01 |
| Separable+FO | 2.6e-01 | 2.7e-01| 2.4e-01| 2.4e-01 | 1.9e-01| 1.0e-01 |
| Separable+SO | 2.8e-01 | 2.0e-01| 2.4e-01| 2.8e-04 | 1.5e-01| 1.2e-04 |
| U-Net | - | - | 2.3e-01| 2.0e-01 | 2.1e-01| 1.9e-01 |
| U-Net+FO | - | - | 2.7e-01| 1.4e-01 | 2.6e-01| 1.8e-01 |
| U-Net+SO | - | - | 2.4e-01| 1.2e-01 | 2.0e-01| 1.3e-01 |
| Separable+U-Net| - | - | 2.4e-01| 1.1e-01 | 2.4e-01| 1.0e-01 |
| Separable+U-Net+FO| - | - | 2.4e-01| 2.8e-01 | 2.5e-01| 2.5e-01 |
| Separable+U-Net+SO| - | - | 1.2e-01| 4.6e-03 | 2.4e-01| 3.0e-03 |
Table 6 MSE of Burgers’ equation solution
| | Classic | +sin | +FNO | +FNO+sin | +KNO | +KNO+sin |
|----------------|---------|--------|--------|----------|--------|----------|
| FO | 1.1e-01 | 9.3e-02| 2.7e-02| 2.1e-02 | 8.1e-03| 2.9e-02 |
| SO | 9.3e-02 | 8.0e-02| 1.5e-01| 9.7e-02 | 4.3e-02| 3.0e-02 |
| Separable | 1.5e-01 | 1.7e-01| 1.8e-01| 1.0e-02 | 8.7e-02| 8.4e-02 |
| Separable+FO | 1.7e-01 | 2.7e-01| 1.9e-01| 1.0e-01 | 6.0e-02| 2.9e-02 |
| Separable+SO | 3.7e-01 | 3.7e-01| 2.8e-01| 2.3e-01 | 3.7e-01| 3.7e-01 |
| Separable+FO+SO| 1.2e-01 | 1.1e-01| 1.3e-01| 1.1e-01 | 9.0e-02| 8.4e-02 |
| U-Net | - | - | 4.4e-02| 2.3e-01 | 1.1e-01| 3.3e-02 |
| U-Net+FO | - | - | 9.5e-02| 8.1e-02 | 9.5e-02| 8.0e-02 |
| U-Net+SO | - | - | 1.0e-01| 1.0e-01 | 1.0e-01| 1.0e-01 |
| Separable+U-Net| - | - | 4.0e-02| 3.5e-02 | 3.6e-02| 3.5e-02 |
| Separable+U-Net+FO| - | - | 3.8e-01| 3.7e-01 | 3.7e-01| 3.7e-01 |
| Separable+U-Net+SO| - | - | 1.3e-01| 1.3e-01 | 1.2e-01| 1.3e-01 |
in the domain $x \in [-1, 1]$, $t \in [0, 1]$, with the initial condition $u(0, x) = -\sin(\pi x)$ and Dirichlet boundary conditions $u(t, -1) = u(t, 1) = 0$.
The MSE values are summarized in Table 6, with the best result marked in bold. The configuration with KNO blocks appears to be the best one; in combination with SPINN, the second-order modification is more accurate than the first-order one. Applying SPINN on its own leads to worse results than those provided by the classic PINN.
5 Discussion
We tested several configurations of PINN and found that there was no clear winner in the comparison. For example, in the case of the Poisson equation, SPINN equipped with KNO and sinusoidal activation function performed the best, while using FO-PINN led to significantly worse results. However, this configuration was outperformed by the U-Net PINN with KNO blocks in the case of the reaction-diffusion equation. In fact, none of the options that were considered consistently resulted in an enhancement of PINN. Furthermore, we could not formulate explicitly the causes behind the success or failure of each individual option. The amount of overhead associated with utilizing
Table 7 FEM solution times and MSE for the considered equations
| Equation | Mesh | Solution time, s | MSE |
|---------------------------|----------|------------------|-------|
| Reaction-Diffusion | 50x50 | 0.2 | $10^{-4}$ |
| Poisson | 50x50 | 0.03 | $10^{-7}$ |
| Helmholtz, $k = 4\pi$ | 50x50 | 1.2 | $3 \times 10^{-11}$ |
| Helmholtz, $k = 16\pi$ | 50x50 | 1.2 | $10^{-6}$ |
| Helmholtz, $k = 24\pi$ | 50x50 | 1.2 | 1 |
| Helmholtz, $k = 24\pi$ | 2500x2500| 1080 (18 min) | $3 \times 10^{-6}$ |
various PINN options varies among different problems. This fact adds complexity to the examination of PINN configurations and to the search for the optimal one.
PINN is considered a possible competitor to traditional solvers such as FEM. For this reason, we apply FEM to the aforementioned problems, using the DOLFINx framework [36] in Python. DOLFINx uses a combination of a Krylov subspace method [37] and a preconditioner from the PETSc package [38]. In all simulations, the LU preconditioner with absolute and relative tolerances of $10^{-6}$ is used. Computations are performed on 1 core of an Intel Xeon Gold 6151 3.0 GHz CPU. The results are summarized in Table 7; the corresponding MSE values are computed with respect to the analytical solutions. For the considered cases, FEM provides stable results with MSE below $10^{-4}$ using the same number of grid points as PINN, namely 2500. The accuracy of PINN solutions becomes significantly worse when the number of sampling points is decreased. Recall that PINN takes 10 minutes on a GPU. From this perspective, and taking into account that our PINNs have around $10^5$ trainable parameters against $10^3$ unknowns in FEM, using piecewise linear basis functions in FEM appears more effective than representing the PDE solution through neural networks.
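The flavor of this comparison can be reproduced without a FEM stack: a sparse direct solve of the Poisson problem on a 50×50 grid is nearly instantaneous. The sketch below uses a plain finite-difference discretization with SciPy; note this is an illustrative stand-in, not the DOLFINx/PETSc setup used for Table 7:

```python
import numpy as np
from scipy.sparse import diags, kron, identity
from scipy.sparse.linalg import spsolve

def solve_poisson_fd(n, f):
    # 2D Poisson on [0,1]^2 with homogeneous Dirichlet BCs,
    # standard 5-point stencil on an n x n interior grid.
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    T = diags([1, -2, 1], [-1, 0, 1], shape=(n, n))
    A = (kron(identity(n), T) + kron(T, identity(n))) / h**2
    u = spsolve(A.tocsr(), f(X, Y).ravel())
    return u.reshape(n, n), X, Y
```

On a 50×50 interior grid the solve takes milliseconds, with a discretization error of order $h^2$, consistent with the trend reported in Table 7.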
As the wavenumber in the Helmholtz equation grows, the number of grid points required by FEM also increases significantly. For example, when $k = 24\pi$, the number of points needs to be increased by a factor of $10^3$. In this scenario, the performance of FEM and PINN (in their most optimal configurations) can be deemed comparable, with a slight advantage leaning towards FEM. In this context, it is reasonable to assume that using PINN as a PDE solver is effective for cases that require fine discretization. However, for other cases, FEM appears to be a better option, as the solution can be obtained using fewer grid points (finite elements). This conclusion is only valid when considering a solution to a single PDE. In the case of complex problems like weather forecasting, it is reasonable to anticipate that PINN would offer significantly faster predictions than numerical methods, with an accuracy that surpasses that of the numerical approach [39]. Moreover, in [40], it is suggested that the efficiency of PINN approaches increases with the dimension of the problem, whereas the complexity of FEM grows exponentially with the dimension.
We finalize this section by mentioning that although this paper does not cover extensive numerical experiments necessary for comparing PINN and FEM, we present illustrative examples to provide readers with a sense of how these two methods compare to each other.
6 Summary
In this paper, we explored several acceleration techniques for PINNs, including novel PINN architectures based on Koopman neural operators and second-order PINNs, in order to improve their performance and efficiency. We considered several types of methods: those based on advanced network architectures, those computing derivatives with neural networks instead of relying solely on automatic differentiation, and those learning in functional spaces. These approaches have been implemented in a common framework that aims to streamline the development and fine-tuning of PINN models and to provide an all-in-one solution for efficient PINN research. The best configuration for a specific problem can be found by performing tests across all available architectures.
In the majority of the considered cases, the SPINN architecture with the sinusoidal activation function performs well and robustly. To reduce the overhead due to automatic differentiation, FO-PINNs have been considered, and we extended this method to include the second-order derivatives. Our results showed that SO-PINN was almost as efficient as FO-PINN in most cases. Nevertheless, SO-PINN performed better in problems with highly oscillatory solutions, such as the Helmholtz equation. We explored the use of functional spaces, including sinusoidal activation functions and FNO. In addition, we implemented PINNs based on KNO. Our simulations showed that the combined use of neural operators with the sinusoidal activation function may improve the accuracy of PINN. In most experiments, KNOs were more efficient than FNOs. The optimized PINN configurations provided errors 3-4 orders of magnitude smaller than those of the original PINN based on fully-connected layers.
Overall, our results suggest that the optimal configuration of PINN depends on the specific physics problem being addressed, and there is no single approach that works best for all cases. Nevertheless, it was shown that the combined use of several performance enhancement techniques can significantly improve PINN results. In particular, the SPINN configuration with sinusoidal activation function and KNO blocks seems to be a configuration to be tested. Our study highlights the potential benefits of combining multiple techniques to achieve the best results.
The most efficient PINN configurations found in this study could not outperform FEM. For instance, while PINN required minutes of training on a GPU to obtain an accurate solution, FEM was capable of achieving accurate results in several seconds. Our experiments with the Helmholtz equation revealed that the performance gap between PINN and FEM diminishes as problem complexity increases; here, we specifically refer to the highly oscillatory nature of the solution as the complexity factor. Previous research suggests that a similar trend may be observed as the problem dimension increases. In future work we aim to explore the potential of PINNs for complex problems that traditional solvers may struggle with, focusing on high-dimensional problems and parametric PDEs with free parameters.
7 Declarations
Jiexing Gao supervised the project. Fedor Buzaev and Jiexing Gao conceived the original idea. Fedor Buzaev and Ivan Chuprov designed the model and the
computational framework and analysed the data. Evgeniy proposed the FEM experiment in discussions.
All authors declare that they have no conflicts of interest.
Funding - Not applicable
Ethics approval - Not applicable
Code availability - Not applicable
Consent to participate - Not applicable
Consent for publication - Not applicable
Availability of data and material - Not applicable
References
[1] Kollmannsberger, S., D’Angella, D., Jokeit, M., Herrmann, L.: Physics-informed neural networks. In: Deep Learning in Computational Mechanics vol. 977, pp. 55–84. Springer, (2021). https://doi.org/10.1007/978-3-030-76587-3_5
[2] Berg, J., Nyström, K.: A unified deep artificial neural network approach to partial differential equations in complex geometries (2017) https://doi.org/10.1016/j.neucom.2018.06.056 arXiv:1711.06464
[3] Markidis, S.: The old and the new: Can physics-informed deep-learning replace traditional linear solvers? Frontiers in Big Data 4 (2021) https://doi.org/10.3389/fdata.2021.669097
[4] Raissi, M., Perdikaris, P., Karniadakis, G.E.: Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics 378, 686–707 (2019)
[5] Wu, P., Pan, K., Ji, L., Gong, S., Feng, W., Yuan, W., Pain, C.: Navier–stokes generative adversarial network: a physics-informed deep learning model for fluid flow generation. Neural Computing and Applications 34(14), 11539–11552 (2022) https://doi.org/10.1007/s00521-022-07042-6
[6] Xu, Z., Guo, Y., Saleh, J.H.: A physics-informed dynamic deep autoencoder for accurate state-of-health prediction of lithium-ion battery. Neural Computing and Applications 34(18), 15997–16017 (2022) https://doi.org/10.1007/s00521-022-07291-5
[7] Mishra, S., Molinaro, R.: Physics informed neural networks for simulating radiative transfer. Journal of Quantitative Spectroscopy and Radiative Transfer 270, 107705 (2021) https://doi.org/10.1016/j.jqsrt.2021.107705
[8] Chen, Y., Lu, L., Karniadakis, G.E., Negro, L.D.: Physics-informed neural networks for inverse problems in nano-optics and metamaterials (2019) arXiv:1912.01085 [physics.comp-ph]
[9] Cai, S., Wang, Z., Wang, S., Perdikaris, P., Karniadakis, G.E.: Physics-informed neural networks for heat transfer problems. Journal of Heat Transfer 143(6) (2021) https://doi.org/10.1115/1.4050542
[10] Ryck, T.D., Jagtap, A.D., Mishra, S.: Error estimates for physics informed neural networks approximating the Navier–Stokes equations (2022)
[11] Hennigh, O., Narasimhan, S., Nabian, M.A., Subramaniam, A., Tangsali, K., Fang, Z., Rietmann, M., Byeon, W., Choudhry, S.: NVIDIA SimNet™: An AI-accelerated multi-physics simulation framework. In: Computational Science – ICCS 2021. pp. 447–461. Springer (2021). https://doi.org/10.1007/978-3-030-77977-1_36
[12] Gupta, J.K., Brandstetter, J.: Towards multi-spatiotemporal-scale generalized pde modeling. arXiv preprint arXiv:2209.15616 (2022)
[13] Basir, S., Senocak, I.: Critical investigation of failure modes in physics-informed neural networks (2022) https://doi.org/10.2514/6.2022-2353 arXiv:2206.09961
[14] Grossmann, T.G., Komorowska, U.J., Latz, J., Schönlieb, C.-B.: Can Physics-Informed Neural Networks beat the Finite Element Method? arXiv (2023). https://doi.org/10.48550/ARXIV.2302.04107 , https://arxiv.org/abs/2302.04107
[15] Xu, Z.-Q.J.: Frequency Principle in Deep Learning with General Loss Functions and Its Potential Application. arXiv (2018). https://doi.org/10.48550/ARXIV.1811.10146 , https://arxiv.org/abs/1811.10146
[16] Jiang, X., Wang, D., Chen, X., Zhang, M.: Physics-informed neural network for optical fiber parameter estimation from the nonlinear schrödinger equation. Journal of Lightwave Technology, 1–11 (2022) https://doi.org/10.1109/jlt.2022.3199782
[17] Goswami, S., Bora, A., Yu, Y., Karniadakis, G.E.: Physics-Informed Deep Neural Operator Networks (2022)
[18] Goswami, S., Anitescu, C., Chakraborty, S., Rabczuk, T.: Transfer learning enhanced physics informed neural network for phase-field modeling of fracture. Theoretical and Applied Fracture Mechanics 106, 102447 (2020) https://doi.org/10.1016/j.tafmec.2019.102447
[19] Chen, X., Gong, C., Wan, Q., Deng, L., Wan, Y., Liu, Y., Chen, B., Liu, J.: Transfer learning for deep neural network-based partial differential equations solving. Advances in Aerodynamics 3(1) (2021) https://doi.org/10.1186/s42774-021-00094-7
[20] Tang, H., Yang, H., Liao, Y., Xie, L.: A transfer learning enhanced the physics-informed neural network model for vortex-induced vibration. arXiv (2021). https:
[21] Wight, C.L., Zhao, J.: Solving Allen-Cahn and Cahn-Hilliard Equations using the Adaptive Physics Informed Neural Networks (2020)
[22] Cho, J., Nam, S., Yang, H., Yun, S.-B., Hong, Y., Park, E.: Separable PINN: Mitigating the curse of dimensionality in physics-informed neural networks (2022) arXiv:2211.08761 [cs.LG]
[23] Gladstone, R.J., Nabian, M.A., Meidani, H.: FO-PINNs: A First-Order formulation for Physics Informed Neural Networks (2022)
[24] Sharma, R., Shankar, V.: Accelerated Training of Physics-Informed Neural Networks (PINNs) using Meshless Discretizations. arXiv (2022). https://doi.org/10.48550/ARXIV.2205.09332 . https://arxiv.org/abs/2205.09332
[25] Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A., Anandkumar, A.: Fourier Neural Operator for Parametric Partial Differential Equations. arXiv (2020). https://doi.org/10.48550/ARXIV.2010.08895 . https://arxiv.org/abs/2010.08895
[26] Lu, L., Jin, P., Karniadakis, G.E.: Deeponet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators (2019) https://doi.org/10.1038/s42256-021-00302-5 arXiv:1910.03193
[27] Li, Z., Zheng, H., Kovachki, N., Jin, D., Chen, H., Liu, B., Azizzadenesheli, K., Anandkumar, A.: Physics-Informed Neural Operator for Learning Partial Differential Equations (2021)
[28] Konuk, T., Shragge, J.: Physics-guided deep learning using fourier neural operators for solving the acoustic VTI wave equation. In: 82nd EAGE Annual Conference Exhibition. European Association of Geoscientists & Engineers (2021). https://doi.org/10.3997/2214-4609.202113304 . https://doi.org/10.3997/2214-4609.202113304
[29] Xiong, W., Huang, X., Zhang, Z., Deng, R., Sun, P., Tian, Y.: Koopman neural operator as a mesh-free solver of non-linear partial differential equations (2023)
[30] Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional Networks for Biomedical Image Segmentation (2015)
[31] Wen, G., Li, Z., Azizzadenesheli, K., Anandkumar, A., Benson, S.M.: U-FNO – An enhanced Fourier neural operator-based deep-learning model for multiphase flow (2021)
[32] Wang, S., Wang, H., Perdikaris, P.: On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed
neural networks. Computer Methods in Applied Mechanics and Engineering 384, 113938 (2021) https://doi.org/10.1016/j.cma.2021.113938
[33] Huang, X., Alkhalfiah, T., Song, C.: A modified physics-informed neural network with positional encoding. In: First International Meeting for Applied Geoscience: Energy Expanded Abstracts. Society of Exploration Geophysicists, (2021). https://doi.org/10.1190/segam2021-3584127.1 . https://doi.org/10.1190/segam2021-3584127.1
[34] Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence, 1–15 (2022) https://doi.org/10.1109/tai.2022.3192362
[35] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: Teh, Y.W., Titterington, M. (eds.) Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, vol. 9, pp. 249–256. PMLR, Chia Laguna Resort, Sardinia, Italy (2010). https://proceedings.mlr.press/v9/glorot10a.html
[36] DOLFINx. https://github.com/FEniCS/dolfinx. accessed: 12.11.2022 (2017)
[37] Vorst, H.A.: Iterative Krylov Methods for Large Linear Systems. Cambridge University Press, Cambridge; New York (2003). https://www.worldcat.org/title/iterative-krylov-methods-for-large-linear-systems/oclc/50717063&referer=brief_results
[38] Balay, S.: PETSc users manual: Revision 3.10. Technical report (September 2018). https://doi.org/10.2172/1483828 . https://doi.org/10.2172/1483828
[39] Pathak, J., Subramanian, S., Harrington, P., Raja, S., Chattopadhyay, A., Mardani, M., Kurth, T., Hall, D., Li, Z., Azizzadenesheli, K., Hassanzadeh, P., Kashinath, K., Anandkumar, A.: FourCastNet: A Global Data-driven High-resolution Weather Model using Adaptive Fourier Neural Operators (2022)
[40] E, W., Yu, B.: The deep Ritz method: A deep learning-based numerical algorithm for solving variational problems. Commun. Math. Stat. 6(1), 1–12 (2018) https://doi.org/10.1007/s40304-018-0127-z
Blind Nasotracheal Intubation Using Succinylcholine
J. ROGER MALTBY, M.B., F.F.A.R.C.S., F.R.C.P.C.*, MICHAEL CASSIDY, M.D.,† G. MOHAMMED NANJI, M.B., F.R.C.P.C.‡
The technique of blind nasotracheal intubation with spontaneous respiration was developed by Magill¹ and Rowbotham in the 1920s to provide safe anesthesia for maxillofacial surgery. Magill observed that, with the patient supine and the head in the sniffing position, the course of the air passage from nostril to glottis was a curve. He also discovered that a rubber tube with a similar curve would follow that course and more readily enter the glottis than the esophagus.² With the anesthetic agents available at that time, tracheal intubation required much patience and practice. Few anesthesiologists mastered the technique.
With the introduction of succinylcholine in the 1950s, oral intubation became quick and easy.³ Nasal tracheal intubation, especially the blind technique performed under light general anesthesia, became obsolete except for surgery in which the oral route was either difficult or undesirable.
In the 1980s, the flexible fiberoptic bronchoscope largely replaced awake blind nasal tracheal intubation for anticipated difficult intubations. Although some anesthesiologists believe that blind nasal tracheal intubation requires spontaneous respiration,⁴⁻⁶ members of our department prefer the patient to be apneic and relaxed. In this article we report detailed results of 80 cases and our general experience with more than 20,000 cases over the past 20 years.
MATERIALS AND METHODS
Eighty consecutive unselected ambulatory patients, ASA physical status I or II, 13–40 years of age, scheduled for elective oral surgical procedures were studied. No change from our normal anesthetic technique was made. All patients gave informed consent for a general anesthetic and for completion of preoperative and postoperative questionnaires. They fasted from midnight and arrived at the suite 30 min before surgery. On arrival their age, weight, smoking habit, history of epistaxis, and presence of nasal obstruction, sore throat, cough, and hoarseness were recorded. The postoperative morbidity questionnaire was completed by telephone the following day.
Uncuffed clear plastic endotracheal tubes, 6.5 mm for men or 6.0 mm for women, were used throughout the study. Each tube had a radius of curvature 12–16 cm and was lubricated with a water-soluble lubricant without local anesthetic (Muko). No premedication was given, and no topical nasal vasoconstrictor was used.
Patient monitoring included electrocardiogram, pulse, and blood pressure recording. A number 23 butterfly needle was inserted into a peripheral vein. Gallamine 10–15 mg was administered iv, followed by methohexital 1.5–2.0 mg/kg and succinylcholine 0.75–1.5 mg/kg. The patient’s lungs were ventilated by mask with 100% oxygen. Then the patient’s head, supported in a U-shaped headrest, was extended at the atlanto-occipital joint by upward traction on the patient’s chin. The lubricated endotracheal tube was passed along the floor of the right nostril, or the left nostril if the right one was known to be obstructed, into the oropharynx.
The anesthesiologist then observed the neck continuously as the tube was advanced toward the glottis. If anterior or lateral bulging was seen, the tube was withdrawn sufficiently for its direction to be adjusted. If the bulge was on the right, the proximal end of the tube was rotated in a counterclockwise direction and again advanced. If the bulge was on the left, it was rotated in a clockwise direction. If the tip had entered the vallecula, the head was flexed slightly; if it entered the esophagus, extension of the head was increased by additional elevation of the chin.
Railroading of the tip of the endotracheal tube over the tracheal rings was frequently sensed by the anesthesiologist. It was also observed in thin patients in particular. A single compression of the lower end of the patient’s sternum immediately confirmed placement in the trachea. The pressure causes a rush of air through (not around) the tube, which can be felt by the anesthesiologist’s ear against the proximal end of the tube. The anesthetic circuit was connected, and correct positioning of the tube was confirmed by auscultation of both lung fields.
* Professor of Anesthesia.
† Resident in Anesthesia.
‡ Clinical Assistant Professor of Anesthesia.
Received from the Department of Anesthesia, Foothills Hospital at the University of Calgary, Calgary, Alberta, Canada. Accepted for publication July 4, 1988.
Address reprint requests to Dr. Maltby: Department of Anesthesia, Foothills Hospital at the University of Calgary, 1403 29th Street N.W., Calgary, Alberta, T2N 2T9 Canada.
Key words: Anesthesia: general. Anesthetic techniques: blind nasal tracheal intubation. Intubation: nasotracheal. Neuromuscular relaxants: succinylcholine.
TABLE 1. Patient Characteristics
| | N | Age (yr) | Weight (kg) | Nasal Obstruction | Epistaxis Past Month |
|-------|-----|----------|-------------|-------------------|----------------------|
| Female| 43 | 22 ± 6 | 56 ± 9 | 2 | 3 |
| Male | 37 | 23 ± 7 | 71 ± 11 | 6 | 4 |
Values for age and weight are mean ± SD, those for nasal obstruction and epistaxis are numbers of patients.
Blind tracheal intubation was not mandatory in these cases. Thus, we did not persist with tubes of different radii of curvature that might have increased the success rate. If blind intubation was unsuccessful with the maneuvers described, tracheal intubation was performed visually using a laryngoscope.
One observer made all recordings during intubation and completed the postoperative morbidity questionnaire. Time for intubation was recorded from start of the methohexital injection to auscultation of the chest to confirm tracheal intubation. The intubation was graded as "very easy" (through the glottis with no pause), "easy" (minimal pause for rotation of tube), "slight delay" (adjustment of head position, rotation of tube, with or without external manipulation of larynx), and "visual" (after failure to intubate blindly within three minutes). At follow-up the severity and duration of epistaxis, sore throat, and hoarseness were recorded.
RESULTS
Patient characteristics are shown in table 1. Eight patients gave a history of nasal obstruction. Only two of these had right-sided obstruction, although in 14 of the remaining 72 patients passage of the tube was easier through the left nostril than the right. Seven patients gave a history of epistaxis in the previous month.
Table 2 classifies the ease of intubation. Blind intubation was very easy or easy in 70% of cases and successful in 91%. Time for blind intubation ranged from 70 to 180 s. In the very easy and easy categories intubation time from picking up the tube to its entry into the trachea was usually less than 30 s. During intubation only one patient had epistaxis sufficient to warrant pharyngeal suction by the surgeon before insertion of the throat pack. In 77% of patients there was either no blood on the nasotracheal tube or only minimal staining. There was no significant epistaxis following tracheal extubation. Three patients had a bony posterior pharyngeal ridge. In all three the anesthesiologist was able to manipulate the tube past the obstruction into the oropharynx. No pharyngeal mucosal tears occurred.
Postoperative morbidity is recorded in table 3. Seventy-four of the 80 patients were available for follow-up. Postoperative sore throat occurred in 80% and was not related to difficulty in intubation. All cases of hoarseness were mild. Epistaxis was noted at home by 9.5% of patients. None of these patients gave a history of epistaxis, nor was there any correlation between postoperative epistaxis and epistaxis during intubation.
DISCUSSION
This study demonstrates that blind nasotracheal intubation using a muscle relaxant drug has a high success rate with a low complication rate. Although minor morbidity was common, the incidence of postoperative sore throat may be partly attributable to the use of a gauze throat pack in every case, and to surgical manipulations in that area. Epistaxis was not a problem during intubation or extubation, but we used 6.5 or 6.0 mm uncuffed tubes. The advantages of using small uncuffed tubes are that they can be easily manipulated within the nostril and that they cause minimal mucosal trauma. If a throat pack is used to prevent aspiration of blood, secretions, or tooth fragments, uncuffed tubes are satisfactory for short surgical procedures for which intermittent positive pressure ventilation is not required. For longer procedures, including all maxillofacial surgery, a cuffed nasotracheal tube is desirable, although trauma to the nasal mucosa is more likely.
Most cuffed endotracheal tubes are designed for orotracheal intubation. When such a tube is passed through the nose, the point at which the pilot tube joins the main tube lies within the nostril. We have observed that, during nasal tracheal intubation with cuffed tubes for maxillofacial surgery, this rough area often causes epistaxis during
TABLE 2. Time to Intubation and Incidence of Epistaxis
| Intubation | N | Time (s) | Epistaxis* |
|---------------------|-----|----------|------------|
| Very easy | 21 | 80 (70–100) | 3 (1 + 2) |
| Easy | 35 | 104 (80–180) | 12 (6 + 7) |
| Slight delay | 17 | 124 (90–180) | 11 (9 + 2) |
| Visual ± forceps | 7 | 170 (140–240) | 7 (2 + 5) |
Time values are mean (range).
* Epistaxis during intubation; numbers in parentheses represent patients with significant bleeding + those with minor staining.
TABLE 3. Postoperative Morbidity Related to Ease of Intubation
| Intubation | N | Epistaxis | Sore Throat | Hoarseness |
|---------------------|-----|-----------|-------------|------------|
| Very easy | 21 | 1 (0 + 1) | 17 (3 + 14) | 3 (0 + 3) |
| Easy | 31 | 4 (2 + 2) | 23 (7 + 16) | 6 (0 + 6) |
| Slight delay | 16 | 2 (0 + 2) | 14 (1 + 13) | 4 (0 + 4) |
| Visual ± forceps | 6 | 0 (0 + 0) | 5 (1 + 4) | 2 (0 + 2) |
Numbers in parentheses represent patients with significant symptoms + those with minor symptoms.
manipulations to redirect the tip toward the glottis, and that the upper and lower edges of the cuff may also traumatize the nasal mucosa. Therefore, when a cuffed tracheal tube is used, we recommend application of a vasoconstrictor to the nasal mucous membrane before induction of anesthesia.
A posterior nasopharyngeal ridge, caused by a prominent body of the atlas vertebra, may obstruct passage of the tube from the nose into the oropharynx. Rotation of the tube through 180° may negotiate the tip over the ridge. If this fails, a finger can be inserted through the mouth and up behind the soft palate to lift the tip over the ridge as an assistant advances the tube.
Even with gentle manipulation the tip of the endotracheal tube occasionally tears the oropharyngeal mucosa. Our incidence in more than 20,000 cases has been less than 1:9,000. In each case the tear was left open, a systemic antibiotic was administered, and recovery was uneventful.
Although several authors claim that spontaneous respiration is essential for blind nasal tracheal intubation, others find that muscle relaxant drugs make it easier because laryngospasm does not occur. With spontaneous respiration success rates vary from 53% to 92%. With muscle relaxants success rates vary from 76% to 96%. Our experience using succinylcholine for 20 years confirms these latter results. The classical method of blind nasal tracheal intubation described in textbooks is to listen through the tube as it is advanced through the oropharynx. The anesthesiologist directs the tube toward the area of loudest breath sounds. To do this the anesthesiologist's head is turned away from the patient. Careful observation of the neck is only recommended when the tube does not enter the larynx.
The alternative method is to observe the neck continuously during advancement of the tube. If anterior or lateral bulging is seen, the tube should be withdrawn sufficiently for its course to be readjusted in the direction of the glottis. Thereafter, maneuvers for redirecting the tip of the tube are the same whether the patient is breathing or apneic.
Although blind nasal tracheal intubation may no longer be the method of choice for anticipated difficult intubations, there are circumstances in which it may be a valuable technique. In patients who have upper central dental crowns or bridge work, blind nasal intubation avoids the use of a laryngoscope and thus the risk of dental trauma with potential subsequent litigation. When a muscle relaxant is used, blind nasal intubation is often as quick and as easy as visual oral intubation. If the laryngoscope light fails, the technique provides an alternative to illumination using a flashlight. Furthermore, when the patient has already received a muscle relaxant and orotracheal intubation proves unexpectedly difficult, the blind nasal route is often surprisingly easy. However, the technique is not indicated in anticipated difficult tracheal intubation caused by airway obstruction from tumors or infections.
To achieve consistent success with blind nasal tracheal intubation, practice and experience are essential. The anesthesiologist should position the patient's head correctly, and visualize the anatomic course the tube must take to the glottis. The tube itself must be well lubricated, firm enough to maintain its curvature, and small enough in diameter to be easily rotated within the nostril.
In summary, our data showed that blind nasal tracheal intubation in paralyzed patients can usually be accomplished without difficulty. Because of its usefulness in situations in which problems of airway management are unanticipated, we believe that blind nasal tracheal intubation using muscle relaxants is a technique that should be familiar to anesthesiologists.
REFERENCES
1. Magill IW: Blind nasal intubation. Anaesthesia 30:476–479, 1975
2. Magill IW: Technique in endotracheal anesthesia. Br Med J II: 817–819, 1930
3. Bamforth BJ: Guest Discussion in Jacoby J: Nasal endotracheal intubation by an external visual technic. Anesth Analg 49:781–789, 1970
4. Collins VJ: Principles of Anesthesiology, 2nd edition. Philadelphia, Lea & Febiger, 1976, pg 373–377
5. Stoelting RK: Endotracheal intubation, Anesthesia, 2nd edition. Edited by Miller RD. New York, Churchill Livingstone, 1986, pg 535–537
6. Fagan DJ, Castro T Jr, Rastrelli AJ: Comparison of intubation techniques in the awake patient: The Flexi-lum surgical light (light wand) vs blind nasal approach. ANESTHESIOLOGY 66:69–71, 1987
7. Conway CM, Miller JS, Sugden FLH: Sore throat after anaesthesia. Br J Anaesth 32:219–223, 1960
8. Nolan RT: Nasal intubation: An anatomical difficulty with Portex tubes. Anaesthesia 24:447–448, 1969
9. Atkinson RS, Rushman GB, Lee JA: A Synopsis of Anaesthesia, 10th edition. Bristol, John Wright & Sons, 1987, pg 208–210
10. Davies JAF: Blind nasal intubation with propofanol. Br J Anaesth 44:539–540, 1972
11. Iserson KV: Blind nasotracheal intubation. Ann Emerg Med 10: 468–470, 1981
12. Darzl DF, Thomas DM: Nasotracheal intubations in the emergency department. Crit Care Med 8:677–682, 1980
13. Gross JB, Hartigan ML, Schaffer DW: A suitable substitute for 4% cocaine before blind nasotracheal intubation. Anesth Analg 65:915–918, 1984
14. Fassolt A: Blind nasal tracheal intubation in the muscle-relaxed patient. Anaesthesia 35:505–508, 1986
15. Jacoby J: Nasal endotracheal intubation by an external visual technic. Anesth Analg 49:731–739, 1970
16. Kubota Y, Toyoda Y, Kubota H: Endotracheal intubation assisted with a pencil torch. ANESTHESIOLOGY 68:167, 1988 |
Spectropolarimetric investigations of the magnetization of the quiet-Sun chromosphere
J. Trujillo Bueno$^{1,2,3}$
$^1$ Instituto de Astrofísica de Canarias, 38205, La Laguna, Tenerife, Spain
$^2$ Departamento de Astrofísica, Universidad de La Laguna, Tenerife, Spain
$^3$ Consejo Superior de Investigaciones Científicas, Spain
e-mail: email@example.com
Abstract. This paper reviews some recent advances in the development and application of polarized radiation diagnostics to infer the mean magnetization of the quiet solar atmosphere, from the near equilibrium photosphere to the highly non-equilibrium upper chromosphere. In particular, I show that interpretations of the scattering polarization observed in some spectral lines suggest that while the magnetization of the photosphere and upper chromosphere is very significant, the lower chromosphere seems to be weakly magnetized.
Key words. Sun: chromosphere — Sun: magnetic fields – Stars: atmospheres
1. Introduction
The chromosphere is a crucial boundary region in the solar outer atmosphere, not only because it is probably the region where the dominant physics changes from hydrodynamic to magnetic forces and most of the non-radiative heating that sustains the corona and solar wind is released, but also because the dissipation of magnetic energy in the $10^6$ K corona may be significantly modulated by the strength and structure of the magnetic field in the chromosphere (e.g., Parker 2007). Unfortunately, our empirical knowledge of the magnetism of the solar outer atmosphere is practically nonexistent, notwithstanding the obvious qualitative information provided by high resolution images of the solar atmosphere taken around the wavelengths of strong spectral lines like H$\alpha$ and Ca\textsc{ii} $\lambda 8542$ Å (e.g., the review by Rutten 2007). Such high cadence, high angular resolution \textit{intensity} images demonstrate that the solar chromospheric plasma is extremely inhomogeneous and dynamic and suggest that the upper solar chromosphere is a “fibrilar-dominated magnetic medium”. They are also useful in helping to constrain the magnetic field orientation, but they do not provide quantitative information on the magnetic field vector because the Stokes $I(\lambda)$ profiles of such strong lines are practically insensitive to its strength, inclination and azimuth. Most probably the magnetic field is the underlying structuring agent, but the fine structure that we see in such intensity images (i.e., the fibrils) directly implies only the presence of thermal and/or density inhomogeneities.
The only way to obtain quantitative empirical information on the magnetic fields of the extended solar atmosphere is via the measurement and interpretation of the emergent spectral line polarization (e.g., Stenflo 1994; Del Toro Iniesta 2003; Landi Degl’Innocenti & Landolfi 2004). Solar magnetic fields leave their fingerprints on the polarization signatures of the emergent spectral line radiation. This occurs through a variety of unfamiliar physical mechanisms, not only via the Zeeman effect. In particular, magnetic fields modify the atomic level polarization (population imbalances and quantum coherences) that pumping processes by anisotropic radiation induce in the atoms of the solar atmosphere (e.g., Trujillo Bueno 2001). Interestingly, this so-called Hanle effect allows us to “see” magnetic fields to which the Zeeman effect is blind within the limitations of the available and foreseeable instrumentation. We may thus define “the Sun’s hidden magnetism” as all the magnetic fields of the extended solar atmosphere that are impossible to diagnose via the consideration of the Zeeman effect alone.
A recent review of observational properties of the solar chromosphere was presented by Judge (2006). There are also reviews where the reader can find information on how spectropolarimetric observations allow us to explore chromospheric magnetic fields in quiet and active regions (e.g., Harvey 2006, 2009; Stenflo 2006; Lagg 2007; Casini & Landi Degl’Innocenti 2007; López Ariste & Asensio 2007; Trujillo Bueno 2010). In Trujillo Bueno (2010) I discuss recent advances in magnetic field and coronal polarization diagnostics, with emphasis on the magnetic field of plasma structures embedded in the solar outer atmosphere (e.g., spicules, prominences, active region filaments and coronal loops). Of particular interest in this respect is the very recent paper by Centeno et al. (2010) showing the detection of magnetic fields as strong as 50 G in off-limb spicules of the quiet Sun chromosphere, which could represent a lower limit to the field strength of organized network spicules at a height of about 2 000 km above the visible solar surface.
In the present paper I focus instead on the diagnostic problem of the magnetization of the atmosphere of the “quiet” Sun, with emphasis on the variation with height of the mean field strength in the quiet chromospheric plasma itself. It is important to note that determining the mean magnetization of the quiet Sun requires finding how much flux resides at small scales. To this end, it is crucial to measure and interpret the linear polarization produced by atomic level polarization and its modification by the Hanle effect (see Sect. 2 and Sect. 3). Although in the quiet Sun the amplitudes of such linear polarization signals are often larger than those of the $V/I$ profiles produced by the longitudinal Zeeman effect, their measurement with the available telescopes still requires sacrificing the spatio-temporal resolution to be able to reach the required polarimetric sensitivity. For this reason, with present telescope apertures the first step is to try to obtain information on the mean strength, $\langle B \rangle$, of the actual distribution of magnetic field strengths. The shape of the ensuing probability distribution function, PDF($B$), describing the fraction of quiet-Sun plasma occupied by magnetic fields of strength $B$, is difficult to determine empirically, although numerical experiments of magnetoconvection suggest that assuming an exponential shape for the PDF is a suitable approximation that does not overestimate $\langle B \rangle$. Nevertheless, here I consider mainly the simplest model of a single-value field that fills the entire atmospheric volume (i.e., PDF($B$) = δ($B - \langle B \rangle$)), with the aim of drawing at least some preliminary conclusions on the lower limit for $\langle B \rangle$ in the photosphere (Sect. 4), upper chromosphere (Sect. 5) and lower chromosphere (Sect. 6).
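As a toy numerical illustration of the PDF models mentioned above (the scale value $B_0 = 100$ G is an assumed, illustrative number, not a result from this paper), one can check that an exponential PDF($B$) with scale $B_0$ has mean $\langle B \rangle = B_0$, so the delta-function and exponential models can be compared at equal mean field strength:

```python
import numpy as np

# Toy comparison of two PDF shapes for the quiet-Sun field strength:
# a delta function PDF(B) = delta(B - B0) trivially has <B> = B0, while an
# exponential PDF(B) = exp(-B/B0)/B0 also has mean B0 (checked here by sampling).
rng = np.random.default_rng(0)
B0 = 100.0  # Gauss; assumed illustrative scale, not a value from the paper

samples = rng.exponential(scale=B0, size=1_000_000)  # draws from the exponential PDF
print(f"sample mean <B> = {samples.mean():.1f} G (expected ~{B0} G)")
```

The point of the sketch is only that fixing $\langle B \rangle$ does not fix the PDF shape: very different distributions (all mass at $\langle B \rangle$ versus a broad exponential tail) share the same mean, which is why the delta-function model is used here just to bound $\langle B \rangle$ from below.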
In terms of the heights $h$ (in km) above the visible solar surface where the spectral lines used here are sensitive to the local atmospheric conditions, we have 200 ≤ $h$ ≤ 400 for the photosphere, 1 800 ≤ $h$ ≤ 2 200 for the “upper chromosphere”, and 900 ≤ $h$ ≤ 1 300 for the “lower chromosphere”.
2. Pros and cons of the Hanle and Zeeman effects
The polarization of the Zeeman effect is due to the wavelength shifts between the $\pi$ and $\sigma$ transitions composing a spectral line. The typical observational signature of the circular polarization produced by the longitudinal Zeeman effect is an antisymmetric Stokes $V(\lambda)$ profile, the amplitude of which scales with the ratio, $\mathcal{R}$, between the Zeeman splitting and the Doppler-broadened line width. The linear polarization amplitudes of the transverse Zeeman effect scale instead as $\mathcal{R}^2$, and its characteristic observational signatures are symmetric Stokes $Q(\lambda)$ and $U(\lambda)$ profiles with their wing lobes of opposite sign to the line-center one. Because of cancellation effects, the polarization of the Zeeman effect as a diagnostic tool tends to be blind to magnetic fields that are randomly oriented on scales too small to be resolved. Note also that
$$\mathcal{R} = \frac{1.4 \times 10^{-7}\, \lambda B}{\sqrt{1.663 \times 10^{-2}\, T/\alpha + \xi^2}}, \tag{1}$$

where $\alpha$ is the atomic weight of the atom under consideration, $\lambda$ is in Å, $B$ in Gauss, $T$ in K, and the microturbulent velocity $\xi$ in km s$^{-1}$ (see Landi Degl’Innocenti & Landolfi 2004).
In the quiet solar atmosphere $\mathcal{R} \ll 1$ (e.g., $\mathcal{R} \approx 10^{-2}$ for H$\alpha$ and $\mathcal{R} \approx 5 \times 10^{-2}$ for the Ca\textsc{ii} 8542 Å line), which explains why it is far more difficult to detect the signature of the transverse than the longitudinal Zeeman effect in strong chromospheric lines. In practice, only the impact of the Zeeman effect on Stokes $V$ is detected, and mainly in near-IR lines like the 8542 Å line of Ca\textsc{ii}. However, the response function of the emergent Stokes $V$ to magnetic field perturbations at various heights in models of the quiet solar atmosphere indicates that the circular polarization produced by the Zeeman effect in spectral lines like H$\alpha$ and Ca\textsc{ii} 8542 Å is insensitive to the physical conditions of the upper chromosphere (e.g., Socas-Navarro et al. 2000b; Socas-Navarro & Uitenbroek 2004; Uitenbroek 2006). For example, Fig. 1 illustrates that on the quiet Sun the circular polarization of the H$\alpha$ line is sensitive mainly to the photospheric magnetic field (see also Socas-Navarro & Uitenbroek 2004). The emergent Stokes $V$ profiles in the Mg\textsc{ii} $k$ line and in Ly$\alpha$ show a more favourable sensitivity to magnetic fields in the upper solar chromosphere and transition region, but the expected Stokes $V$ signals are very weak (see Eq. 1). In summary, the Zeeman effect is of limited practical interest for the exploration of magnetic fields in the solar outer atmosphere (chromosphere, transition region and corona).
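A rough numerical sketch of Eq. (1) makes these orders of magnitude concrete. The temperatures, microturbulent velocities and the 100 G field strength below are assumed illustrative values, not numbers taken from this paper:

```python
import math

def zeeman_to_doppler_ratio(lambda_angstrom, B_gauss, T_kelvin, atomic_weight, xi_kms):
    """Ratio R of the Zeeman splitting to the Doppler-broadened line width,
    following Eq. (1): lambda in Angstrom, B in Gauss, T in K, xi in km/s."""
    return (1.4e-7 * lambda_angstrom * B_gauss) / math.sqrt(
        1.663e-2 * T_kelvin / atomic_weight + xi_kms**2
    )

# Assumed illustrative quiet-Sun values (not from the paper):
r_halpha = zeeman_to_doppler_ratio(6563.0, 100.0, 1.0e4, 1.0, 5.0)   # H-alpha (hydrogen)
r_ca8542 = zeeman_to_doppler_ratio(8542.0, 100.0, 6.0e3, 40.1, 3.0)  # Ca II 8542 (calcium)

print(f"R(H-alpha)   ~ {r_halpha:.1e}")
print(f"R(Ca II 8542) ~ {r_ca8542:.1e}")
```

With these assumed inputs both ratios come out at the $10^{-2}$ level quoted in the text; note the light hydrogen atom's large thermal width (the $T/\alpha$ term) is what keeps $\mathcal{R}$ small for H$\alpha$ despite its long wavelength lever arm.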
Fortunately, there is yet another physical mechanism by means of which the magnetic fields of the solar atmosphere leave fingerprints on the polarization of the emergent spectral line radiation: the Hanle effect. Anisotropic radiation pumping processes produce atomic level polarization (i.e., population imbalances and quantum coherences among the magnetic sublevels pertaining to any given degenerate energy level). The Hanle effect can be defined as any modification of the atomic level polarization caused by the presence of a magnetic field, including the remarkable effects produced by the level crossings and repulsions that take place when going from the Zeeman effect regime to the complete Paschen–Back effect regime (e.g., Belluzzi et al. 2007). The Hanle effect is especially sensitive to magnetic strengths between 0.1 $B_H$ and 10 $B_H$, where
$$B_H = \frac{1.137 \times 10^{-7}}{t_{\text{life}}\, g_J} \tag{2}$$

is the critical Hanle field intensity (in Gauss) for which the Zeeman splitting of the $J$-level under consideration is similar to its natural width. Note that $g_J$ is the level’s Landé factor and $t_{\text{life}}$ its radiative lifetime in seconds. Since the lifetimes of the upper levels of the transitions of interest are usually much smaller than those of the lower levels, clearly diagnostic techniques based on the lower-level Hanle effect are sensitive to much weaker fields than those based on the upper-level Hanle effect.
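The lifetime scaling in Eq. (2) can be sketched with two round numbers; the lifetimes used below ($10^{-8}$ s for a typical short-lived upper level, $10^{-3}$ s for a metastable lower level) and $g_J = 1$ are assumed illustrative values, not figures from this paper:

```python
def hanle_critical_field(t_life_seconds, lande_gJ):
    """Critical Hanle field B_H in Gauss (Eq. 2): the field for which the
    Zeeman splitting of the level is comparable to its natural width."""
    return 1.137e-7 / (t_life_seconds * lande_gJ)

# Assumed illustrative lifetimes (not from the paper):
b_upper = hanle_critical_field(1.0e-8, 1.0)  # short-lived upper level
b_lower = hanle_critical_field(1.0e-3, 1.0)  # metastable lower level

print(f"B_H (upper level) ~ {b_upper:.2f} G")
print(f"B_H (lower level) ~ {b_lower * 1e3:.2f} mG")
```

Because $B_H \propto 1/t_{\text{life}}$, the metastable lower level's critical field lands in the sub-milligauss range while the upper level's is of order 10 G, which is precisely why lower-level Hanle diagnostics probe much weaker fields.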
The main properties of the Hanle effect are:
(a) The Hanle effect is sensitive to weaker magnetic fields than the Zeeman effect (from at least 1 mG to a few hundred Gauss), regardless of how large the line width caused by Doppler broadening is. For stronger fields, the Hanle effect remains sensitive to the magnetic field orientation. Moreover, the Hanle effect is sensitive to magnetic fields that are randomly oriented on scales too small to be resolved (Stenflo 1982; Trujillo Bueno et al. 2004).
(b) The Hanle effect as a diagnostic tool is not limited to a narrow solar limb zone. In particular, in the forward scattering geometry of a solar disk-center observation, the Hanle effect creates linear polarization in the presence of an inclined magnetic field (e.g., Trujillo Bueno et al. 2002).
(c) The Hanle effect operates in the line core and the ensuing response function of the emergent linear polarization to magnetic field perturbations shows that in some spectral lines (e.g., Ca\textsc{ii} 8542 Å, H$_\alpha$, Mg\textsc{ii} $k$ and Ly$\alpha$) it is sensitive to the magnetic fields of the upper chromosphere and transition region (see a H$_\alpha$ response function in Fig. 4 of Stépán & Trujillo Bueno 2010a).
In summary, the Hanle effect in strong spectral lines is the key physical mechanism that should be increasingly exploited for quantitative explorations of the magnetism of the solar chromosphere.
3. Forward modeling of the spectral line polarization
In general, the modeling of the Stokes profiles in strong lines like H$_\alpha$ and the IR triplet of Ca\textsc{ii} requires solving a rather complicated radiative transfer problem, known as the non-LTE problem of the 2\textsuperscript{nd} kind (Landi Degl’Innocenti & Landolfi 2004). This consists of calculating, at each spatial point of any given atmospheric model and for each $J$-level of the chosen atomic model, the diagonal and non-diagonal elements of the atomic density matrix that are consistent with the intensity, polarization and symmetry properties of the radiation field generated within the (generally magnetized) medium under consideration. Once such density matrix elements are known it is straightforward to solve the Stokes vector transfer equation to obtain the emergent Stokes profiles for any desired line of sight. Highly efficient iterative methods and accurate formal solvers of the Stokes vector transfer equation were developed for solving this type of (complete redistribution) multi-level radiative transfer problem (see the review by Trujillo Bueno 2003). Such methods have been implemented in multi-level computer programs for the generation and transfer of polarized radiation (Manso Sainz & Trujillo Bueno 2003a; Stépán 2008; Trujillo Bueno & Shchukina 2007, 2009). Moreover, the same radiative transfer methods have been recently generalized by Sampooraa & Trujillo Bueno (2010) for solving the two-level atom problem of resonance line polarization taking into account partial redistribution effects, which may be a suitable approximation for modeling the fractional linear polarization profiles of Ly$\alpha$ and Mg\textsc{ii} $k$.
The following sections discuss how the modeling of the spectropolarimetric observations through the application of the above-mentioned radiative transfer codes allows us to obtain information on the mean magnetization of the quiet solar atmosphere.
4. The magnetization of the photosphere of the quiet Sun
The linear polarization profiles produced by scattering processes in the quiet solar atmosphere have been observed with poor spatial and/or temporal resolution (e.g., Stenflo & Keller 1997; Gandorfer 2000). For this reason, Trujillo Bueno et al. (2004) confronted observations of the center-to-limb variation of the scattering polarization in the photospheric Sr\textsc{i} 4607 Å line with calculations of the $Q/I$ profiles that result from spatially averaging the emergent $Q$ and $I$ profiles calculated in a three-dimensional (3D) model of the quiet solar photosphere resulting from realistic hydrodynamical simulations of solar surface convection. The very significant discrepancy between the calculated and the observed polarization amplitudes indicated the ubiquitous existence of tangled magnetic fields in the quiet solar photosphere, with a mean strength significantly larger than that derived from simplistic one-dimensional radiative transfer investigations (see the review by Trujillo Bueno et al. 2006). The inferred mean strength of this hidden field turned out to be $\langle B \rangle \sim 100$ G (see Fig. 2), which implies an amount of magnetic energy density that is more than sufficient to compensate the energy losses of the solar outer atmosphere. This estimation was obtained by using the approximation of a micro-turbulent field (i.e., that the hidden field has an isotropic distribution of orientations within a photospheric volume given by $L^3$, with $L$ the mean-free-path of the line-center photons). Calculations based on the assumption that the unresolved magnetic field is instead horizontal also lead to the conclusion of a sizable $\langle B \rangle$ (see Sect. 4 in Trujillo Bueno et al. 2006).
What is the physical origin of this “hidden” magnetic field, the reality of which is now supported by Lites et al. (2008) and Orozco Suárez et al. (2007) through high-spatial resolution observations of the Zeeman effect taken with Hinode? Is it mostly the result of dynamo action by near-surface convection, as suggested by Cattaneo (1999)? Or is it dominated by small-scale flux emergence from deeper layers and recycling by the granular flows? The fact that the magnetic energy density is a significant fraction (i.e., $\sim 20\%$) of the kinetic energy density, and that the scattering polarization observed in the Sr\textsc{i} 4607 Å line does not seem to be modulated by the solar cycle, strongly supported the suggestion that a surface dynamo plays a significant role for the quiet Sun magnetism (see Trujillo Bueno et al. 2004). Recent radiative MHD simulations of dynamo action by near-surface convection also support this possibility (Vögler & Schüssler 2007).
In summary, the small-scale magnetic activity of the quiet Sun photosphere is indeed very significant and might be important for understanding the propagation of energy into the outer atmosphere and the flux emergence process. The possibility that with the “Zeeman eyes of Hinode” we might still be seeing only the “tip of the iceberg” of the quiet Sun magnetism is not surprising because 80% or more of the vertical unsigned flux seems to be invisible to observations of the Zeeman effect at Hinode’s resolution of 200 km owing to the cancellation of the Stokes $V$ signal from opposite magnetic polarities (Pietarila Graham et al. 2009; see also Stenflo 1994; Emonet &
5. The magnetization of the upper chromosphere of the quiet Sun
There are various spectral lines whose $Q/I$ and $U/I$ profiles are sensitive to the magnetization of the upper chromosphere of the quiet Sun, such as those considered below. It is, however, necessary to emphasize that determining $\langle B \rangle$ from the observed fractional linear polarization signals usually requires confronting them with those that the highly inhomogeneous and dynamic solar chromospheric plasma would produce if it were unmagnetized. This strategy could be applied successfully for determining the mean magnetization of the quiet solar photosphere by solving the radiative transfer problem for the Sr\textsc{i} 4607 Å line in a realistic three-dimensional (3D) hydrodynamical model (see Sect. 4), but a similar approach for inferring the magnetization of the quiet chromospheric plasma is not yet possible, mainly because producing a realistic 3D model of the thermal, density and dynamic structure of the quiet chromosphere is still computationally prohibitive (e.g., Carlsson 2007). As a matter of fact, the current 3D models do not show fibrils in Ca\textsc{ii} 8542 Å and the synthetic H$\alpha$ line-center intensity images show a granulation pattern (e.g., Leenaarts et al. 2010). Should we then abandon any attempt to infer the magnetization of the quiet chromosphere via multi-level radiative transfer modeling using the available chromospheric semi-empirical models? In my opinion, the solar chromosphere is such an important region that we should at least try to do something potentially useful in spite of the obvious fact that any such 1D model is a poor representation of the complex chromospheric conditions.
5.1. The Ca\textsc{ii} 8542 Å line
The circular polarization of the Ca\textsc{ii} IR triplet is caused by the longitudinal Zeeman effect. With the available telescopes the ensuing $V/I$ signals are measurable even in quiet regions, where their amplitudes are $\sim 10^{-3}$ and smaller. Unfortunately, the Zeeman effect in the IR triplet of Ca\textsc{ii} is of little practical interest for investigating the magnetism of the upper solar chromosphere. Calculations of the Stokes $V$ response function of the strongest line of the Ca\textsc{ii} IR triplet to perturbations in the magnetic field strength show that in semi-empirical models of the quiet solar atmosphere the emergent circular polarization is sensitive only to magnetic field changes at heights between approximately 700 and 1200 km (e.g., Uitenbroek 2006).
In quiet regions the linear polarization of the Ca\textsc{ii} IR triplet is dominated by atomic level polarization and its modification by the Hanle effect. Typically, the ensuing $Q/I$ and $U/I$ profiles have their maximum values at the line center. While the linear polarization in the 8498 Å line shows sensitivity to inclined magnetic fields with strengths between 1 mG and 50 G, the emergent linear polarization in the 8542 Å and 8662 Å lines is sensitive to magnetic fields with strengths in the milligauss range (see Fig. 3). The reason for this very interesting behavior is that the scattering polarization in the 8498 Å line gets a significant contribution from the selective emission processes that result from the atomic polarization of the short-lived upper level, while that in the 8542 Å and 8662 Å lines is dominated by the selective absorption processes that result from the atomic polarization of the metastable (long-lived) lower levels (Manso Sainz & Trujillo Bueno 2004b, 2007). Therefore, in quiet regions of the solar atmosphere the magnetic sensitivity of the linear polarization of the 8542 Å and 8662 Å lines is controlled by the lower-level Hanle effect, which implies that in regions with $B > 1$ G their Stokes $Q$ and $U$ profiles are only sensitive to the orientation of the magnetic field vector.
The most diagnostically interesting lines of the Ca\textsc{ii} IR triplet are the strongest and the weakest (i.e., the 8542 Å and the 8498 Å lines, respectively). Their linear polarization signals resulting from atomic level polarization and the Hanle effect can be exploited to explore the thermal and magnetic structure of the solar chromosphere. They should also be used to evaluate the degree of realism of 3D magnetohydrodynamic simulations
Fig. 3. The emergent fractional linear polarization line-center amplitudes of the Ca\textsc{ii} IR triplet calculated for a line of sight with $\mu = 0.1$ in the FAL-C model of the solar atmosphere. Each curve corresponds to the indicated inclination, in degrees with respect to the solar local vertical direction, of the assumed random-azimuth magnetic field. From Manso Sainz & Trujillo Bueno (2010).
of the chromosphere via careful comparisons of the Stokes profiles obtained through forward modeling calculations with those observed in quiet regions (e.g., like the ones in Fig. 4). As mentioned above, the current 3D models do not show fibrils in the synthetic intensity images calculated at the core of Ca\textsc{ii} 8542 Å, in spite of the fact that the snapshot chosen by Leenaarts et al. (2010) has $\langle B \rangle \approx 150$ G (i.e., a value in agreement with that inferred by Trujillo Bueno et al. 2004).
In particular, the linear polarization of the 8542 Å line should be increasingly exploited to explore the magnetic field structure of the upper chromosphere, ideally via high angular-resolution two-dimensional (2D) spectropolarimetry with large-aperture telescopes and modern instruments like IBIS, CRISP or the Göttingen Fabry-Pérot. One drawback is that for $B > 1$ G the scattering polarization of the Ca\textsc{ii} 8542 Å line is sensitive only to the orientation of the magnetic field vector. Therefore, in principle, from the spatial variations of the $Q/I$ and $U/I$ signals we can see in Fig. 4 we can only say that they are probably caused by changes in the orientation of the magnetic field in the upper chromosphere of the quiet Sun. Although the spatio-temporal resolution of this spectropolarimetric observation is rather low (i.e., no better than 3'' and 20 minutes), the fractional polarization amplitudes fluctuate between $10^{-2}$ and $10^{-3}$ along the spatial direction of the spectrograph slit, with a typical spatial scale of 3''. Interestingly enough, while the Stokes $Q/I$ signal changes in amplitude but always remains positive along that spatial direction, the sign of the Stokes $U/I$ signal fluctuates. This behavior is compatible with the prediction of the Hanle effect in a magnetized plasma with $B > 1$ G and a spatially varying magnetic field azimuth, which in turn is consistent with the possibility that the fibrils seen in the high-resolution intensity images taken by Cauzzi et al. (2008) at the line center of $\lambda$8542 trace out magnetic lines of force.
5.2. The H$\alpha$ line
As we have seen, the linear polarization of the Ca\textsc{ii} 8542 Å line is sensitive to the orientation of the magnetic field in the upper
chromosphere of the quiet Sun, but not to its strength unless $B < 1$ G there. In order to obtain empirical information on the magnetic field strength in the upper chromosphere we need to use similarly strong lines whose scattering polarization is sensitive to field strengths in the gauss range. Among the various possible choices, H$\alpha$ is particularly interesting because it reaches the Hanle saturation regime for $B \gtrsim 50$ G and the shape of its fractional scattering polarization profile is very sensitive to the presence of magnetic field gradients in the upper chromosphere of the quiet Sun (Štěpán & Trujillo Bueno 2010b). Moreover, the fibrilar nature of the upper chromosphere is seen even more clearly in H$\alpha$, especially when observing quiet regions far away from the solar disk center (e.g., see Fig. 7 of Rutten 2007). In the remaining part of this section I summarize the main results of this recent paper by Štěpán & Trujillo Bueno (2010b).
The temperature minimum region of solar atmospheric models is transparent to H$\alpha$ radiation (Schoolman 1972). As a result, we see the upper chromosphere at the very line center of the H$\alpha$ line but the photosphere in the line wings. It is thus not surprising what Fig. 1 shows for H$\alpha$, namely that the response of the emergent circular polarization induced by the Zeeman effect to magnetic field perturbations exhibits large photospheric contributions. Moreover, in the quiet Sun the $V/I$ signals are very weak ($\sim 10^{-4}$ and smaller).
On the contrary, in quiet regions the linear polarization observed in H$\alpha$ is fully dominated by the presence of radiatively induced population imbalances and quantum coherences among the magnetic sublevels of the line’s levels, which produce linear polarization profiles whose maximum values are located at the very line center. The fractional polarization amplitudes vary between $10^{-3}$ and $10^{-4}$. Moreover, this scattering line polarization is modified by the Hanle effect, which operates mainly in the line core and gives rise to $Q/I$ and $U/I$ profiles different from those corresponding to the zero-field case. The response function of the emergent linear polarization to magnetic field perturbations shows that the Hanle effect in H$\alpha$ is sensitive to the magnetic fields of the upper chromosphere (see Fig. 4 of Štěpán & Trujillo Bueno 2010a).
In spite of its modeling difficulties, the Hanle effect in H$\alpha$ should be exploited for facilitating quantitative explorations of the magnetism of the upper solar chromosphere. A first step has recently been taken by Štěpán & Trujillo Bueno (2010b). The dashed line in the right panel of Fig. 5 shows the spatially and temporally averaged $Q/I$ profile observed by Gandorfer (2000) in a quiet region at about 1 Mm from the solar limb. Its most peculiar feature is the asymmetry it shows around the line center, which cannot be explained by the mere fact that the H$\alpha$ line results from the superposition of seven blended components, four of which make a significant contribution to Stokes $Q$. In their paper Štěpán & Trujillo Bueno (2010b) argue that the observed $Q/I$ profile can be explained by the Hanle effect of an inclined magnetic field whose mean strength varies with height approximately as indicated in the left panel of Fig. 5. This suggests the presence of an
Fig. 5. Magnetic field strength model (left panel) and calculated vs. observed $Q/I$ profiles (right panel). In the right panel the dashed line shows the $Q/I$ profile observed by Gandorfer (2000), while the solid line shows the $Q/I$ profile calculated by solving the multi-level scattering polarization problem in the presence of the Hanle effect produced by a significantly inclined magnetic field having at each height a random azimuth and the strength given in the left panel. The total $Q/I$ profiles include the contribution of the continuum polarization. Note that for a line of sight with $\mu = 0.1$ the scattering polarization of the H$\alpha$ line is sensitive to the structure of the magnetic field only in the chromospheric region indicated by the solid line part of the model. For more information see Štěpán & Trujillo Bueno (2010b).
abrupt and significant magnetization in the upper chromosphere of the quiet Sun and that the average ratio $\beta$ of gas to magnetic pressure decreases suddenly there.
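The ratio $\beta$ of gas to magnetic pressure invoked here is straightforward to evaluate in cgs units, $\beta = n k_B T / (B^2/8\pi)$. The sketch below is illustrative only: the density, temperature, and field values are round numbers I am assuming, not values taken from the paper.

```python
import math

K_B = 1.380649e-16  # Boltzmann constant, erg/K (cgs)

def plasma_beta(n_cm3, temp_K, field_G):
    """beta = gas pressure / magnetic pressure = n*k_B*T / (B^2 / 8*pi), cgs."""
    return (n_cm3 * K_B * temp_K) / (field_G ** 2 / (8.0 * math.pi))

# Illustrative (assumed) upper-chromosphere numbers: n ~ 1e11 cm^-3,
# T ~ 8000 K, B ~ 30 G -> beta falls well below unity, i.e. a
# magnetically dominated plasma.
print(plasma_beta(1e11, 8000.0, 30.0))
```

With these assumed numbers $\beta \ll 1$, illustrating how a $\langle B \rangle > 30$ G upper chromosphere would indeed make $\beta$ drop suddenly there.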
6. The magnetization of the lower chromosphere of the quiet Sun
The magnetic field model in the left panel of Fig. 5 is characterized by a magnetic complexity zone with $\langle B \rangle > 30$ G in the upper solar chromosphere (i.e., just below the sudden transition region to the $10^6$ K coronal temperatures) and by a strongly magnetized photosphere and a weakly magnetized lower chromosphere. The suggested abrupt magnetization in the upper chromosphere of the quiet Sun is introduced to produce a $Q/I$ profile with a line-center asymmetry similar to that found in the observed profile. The strong magnetization of the model’s photospheric region is strongly supported by the 3D radiative transfer modeling of the scattering polarization observed in the Sr\textsc{i} 4607 Å line (see Trujillo Bueno et al. 2004), which indicates that the bulk of the quiet solar photosphere is teeming with a distribution of tangled magnetic fields having a mean strength $\langle B \rangle \approx 60$ G (when estimating $\langle B \rangle$ assuming the simplest case, adopted in Sect. 5 and Sect. 6 of this paper, of a single-value field strength). Is the mean magnetization of the lower chromosphere really as weak as indicated in Fig. 5 (i.e., with $\langle B \rangle \lesssim 10$ G around a height of 1000 km)?
One spectral line whose observed scattering polarization supports the possibility of a weakly magnetized lower chromosphere is the D$_2$ line of Ba\textsc{ii} at 4554 Å (see Belluzzi et al. 2007). Figure 6 shows the sensitivity of the emergent $Q/I$ profile to the magnetic field strength, both for the case of a microturbulent and isotropically distributed magnetic field (left panel) and for the case of a random-azimuth horizontal field (right panel). In fact, the $Q/I$ profile observed by Stenflo & Keller (1997) has a three-peak structure, very similar to that shown by the solid lines in Fig. 6. Note that for $B \gtrsim 10$ G the amplitude of the central $Q/I$ peak is smaller than the amplitudes of the wing peaks, contrary to what the observed $Q/I$ profile shows. Another spectral line whose linear polarization suggests that the lower chromosphere of the quiet Sun has a small $\langle B \rangle$ value is the Na\textsc{i} D$_2$ line (see the review by Trujillo Bueno 2009).
7. Concluding comments
The three main conclusions to be drawn here are:
(1) The bulk of the quiet solar photosphere is strongly magnetized, with $\langle B \rangle \sim 100$ G when no distinction is made between granules and intergranules.
(2) The lower chromosphere seems to be weakly magnetized, with $\langle B \rangle < 10$ G.
(3) The magnetization of the upper chromosphere of the quiet Sun is abrupt and significant, with $\langle B \rangle > 30$ G just below the sudden transition region to the $10^6$ K coronal plasma.
Are these conclusions reliable? They are based on radiative transfer modeling of the scattering polarization $Q/I$ profiles of some spectral lines observed without spatial and/or temporal resolution in quiet regions close to the edge of the solar disk. The radiative transfer calculations have been carried out in given atmospheric models (see below). Such $Q/I$ signals depend on the anisotropy of the radiation field within the solar atmosphere, which is sensitive to its thermal and density structure. The $Q/I$ amplitudes are also sensitive to collisions with neutral hydrogen atoms (e.g., the case of the Sr\textsc{i} 4607 Å line) and protons (e.g., the case of H$\alpha$). Through the Hanle effect the emergent $Q/I$ profiles are also sensitive to the presence of magnetic fields inclined with respect to the symmetry axis of the incident radiation field.
Conclusion 1 is the most reliable one because it is based on detailed 3D radiative transfer calculations of the emergent $Q/I$ for the Sr\textsc{i} 4607 Å line in a realistic 3D hydrodynamical model of the thermal and density structure of the quiet photosphere (Trujillo Bueno et al. 2004; Trujillo Bueno & Shchukina 2007). Assuming that the “hidden” field of the quiet solar photosphere is randomly oriented below the mean free path of the line-center photons is indeed a suitable approximation for estimating $\langle B \rangle$ (e.g., Frisch et al. 2009). Moreover, calculations based on the assumption that the magnetic field is instead horizontal also lead to the conclusion of a substantial amount of magnetic energy in the bulk of the quiet solar photosphere (see Sect. 4 in Trujillo Bueno et al. 2006). As reviewed in the just-quoted paper, several other investigations strongly support this conclusion (see also the very recent contribution by Danilovic et al. 2010).
Conclusions 2 and 3 are based on radiative transfer modeling of the $Q/I$ profile of the H$\alpha$ line observed by Gandorfer (2000) in a quiet region at about 5″ from the solar limb, using various (hot and cool) 1D semi-empirical models. Any such 1D model is certainly a poor representation of the chromospheric thermal and density conditions. Fortunately, the observed $Q/I$ profile shows
a peculiar line-core asymmetry which is absent in the observed $I(\lambda)$ profile. Moreover, the $Q/I$ profile of the (photo-ionization dominated) H$\alpha$ line is not very sensitive to the chromospheric thermal structure. As shown by Štěpán & Trujillo Bueno (2010b), the line-center asymmetry in the observed $Q/I$ profile can be explained by the Hanle effect of a magnetic field in the solar atmosphere whose height variation suggests the presence of an abrupt and significant magnetization in the upper chromosphere of the quiet Sun and a weakly magnetized plasma directly underneath. Given that the solar chromosphere is extremely inhomogeneous and dynamic we cannot exclude the possibility of an alternative explanation. Nevertheless, there are other spectropolarimetric investigations that favor a sizable quiet-Sun magnetization at a height of about 2,000 km above the visible solar surface (e.g., Trujillo Bueno et al. 2005; Holzreuter et al. 2006; Asensio Ramos & Trujillo Bueno 2009; Centeno et al. 2010) and a weakly magnetized lower chromosphere (Landi Degl’Innocenti 1998; Belluzzi et al. 2007).
Clearly, understanding the variation with height of the mean magnetization of the quiet solar chromosphere requires taking into account the multi-scale contributions of the network and internetwork magnetic loop-like structures.
Acknowledgements. The results on chromospheric magnetism reviewed here owe much to ongoing collaborations with R. Manso Sainz (IAC), R. Ramelli (IRSOL) and J. Štěpán (IAC), and I thank them for many fruitful discussions. Financial support by the Spanish Ministry of Science and Innovation through project AYA2009-063981 and by the SOLAIRE network (MTRN-CTP-2006-035464) is gratefully acknowledged.
References
Asensio Ramos, A., Trujillo Bueno, J. 2009, in Solar Polarization 5, ed. S. Berdyugina, K.N. Nagendra, & R. Ramelli, ASPCS, 405, 281
Belluzzi, L., Trujillo Bueno, J., Landi Degl’Innocenti, E. 2007, ApJ, 666, 588
Carlsson, M. 2007, in The Physics of Chromospheric Plasmas, ed. P. Heinzel, I. Dorotovič & R. J. Rutten, ASPCS, 368, 49
Casini, R., & Landi Degl’Innocenti, E. 2007, in Plasma Polarization Spectroscopy, ed. T. Fujimoto & A. Iwamae (Springer Verlag, Berlin-Heidelberg), 247
Cattaneo, F. 1999, ApJ, 515, L39
Cauzzi, G., et al., 2008, A&A, 480, 515
Centeno, R., Trujillo Bueno, J., & Asensio Ramos, A. 2010, ApJ, 708, 1579
Danilovic, S., Schüssler, M., & Solanki, S. K. 2010, A&A, in press
Del Toro Iniesta, J. C. 2003, in Introduction to Spectropolarimetry (Cambridge University Press, Cambridge)
Emonet, T., Cattaneo, F. 2001, in Advanced Solar Polarimetry, ed. M. Sigwarth, ASPCS, 236, 355
Frisch, H., Anusha, L. S., Sampoorna, M., & Nagendra, K. N. 2009, A&A, 501, 335
Fontenla, J. M., Avrett, E. H., & Loeser, R. 1993, ApJ, 406, 319
Gandorfer, A. 2000, The Second Solar Spectrum, Vol. I: 4625 Å to 6995 Å (vdf Hochschulverlag, Zürich)
Harvey, J. W. 2006, in Solar Polarization 4, ed. R. Casini & B. W. Lites, ASPCS, 358, 419
Harvey, J. W. 2009, in Solar Polarization 5, ed. S. Berdyugina, K. N. Nagendra, & R. Ramelli, ASPCS, 405, 157
Holzreuter, R., Fluri, D. M., & Stenflo, J. O. 2006, A&A, 449, L41
Judge, P. 2006, in Solar MHD Theory and Observations: A High Spatial Resolution Perspective, ed. H. Uitenbroek, J. Leibacher & R. F. Stein, ASPCS, 354, 259
Lagg, A. 2007, Advances in Space Research, 39, 1734
Landi Degl’Innocenti, E. 1998, Nature, 392, 256
Landi Degl’Innocenti, E., Landolfi, M. 2004, in Polarization in Spectral Lines, Astrophysics and Space Science Library 307 (Kluwer Academic Publishers, Dordrecht)
Leenaarts, J. 2010, in Chromospheric Structure and Dynamics, MmSAI, this volume
Leenaarts, J., Carlsson, M., Hansteen, V., & Rouppe van der Voort, L., 2009, ApJ, 694, L128
Lites, B. W., et al. 2008, ApJ, 672, 1237
López Ariste, A., Aulanier, G. 2007, in The Physics of Chromospheric Plasmas, ed. P. Heinzel, I. Dorotovič, & R. J. Rutten, ASPCS, 368, 291
Manso Sainz, R., Trujillo Bueno, J. 2003a, in Solar Polarization 3, ed. J. Trujillo Bueno & J. Sánchez Almeida, ASPCS, 307, 251
Manso Sainz, R., Trujillo Bueno, J. 2003b, Phys. Rev. Letters, 91, 111102
Manso Sainz, R., Trujillo Bueno, J. 2007, in The Physics of Chromospheric Plasmas, eds. P. Heinzel, I. Dorotovic, & R. J. Rutten, ASPCS, 368, 155
Manso Sainz, R., & Trujillo Bueno, J. 2010, ApJ, submitted
Martínez González, M., et al. 2010, ApJ, in press (arXiv: 1001.4551)
Orozco Suárez, D., et al. 2007, ApJ, 670, L61
Pietarila Graham, J., Danilovic, S., & Schüssler, M. 2009, ApJ, 693, 1798
Parker, E. N. 2007, in Conversations on Electric and Magnetic Fields in the Cosmos (Princeton University Press)
Rutten, R. J. 2007, in The Physics of Chromospheric Plasmas, ed. P. Heinzel, I. Dorotovič & R. J. Rutten, ASPCS, 368, 27
Sampoorna, M., & Trujillo Bueno, J. 2010, ApJ, in press
Sánchez Almeida, J., Emonet, T., Cattaneo, F. 2003, ApJ, 585, 536
Schoolman, S. A. 1972, Sol. Phys., 22, 344
Stenflo, J. O. 1982, Sol. Phys., 80, 209
Stenflo, J. O. 1994, in Solar Magnetic Fields: Polarized Radiation Diagnostics (Kluwer)
Stenflo, J. O. 2006, in Solar Polarization 4, ed. R. Casini & B. W. Lites, ASPCS, 358, 215
Stenflo, J. O., & Keller, C. U. 1997, A&A, 321, 927
Socas-Navarro, H., & Uitenbroek, H. 2004, ApJ, 603, L129
Socas-Navarro, H., Trujillo Bueno, J., & Ruiz Cobo, B. 2000, ApJ, 530, 977
Štěpán, J. 2008, PhD thesis, Observatoire de Paris
Štěpán, J., & Trujillo Bueno, J. 2010a, in Chromospheric Structure and Dynamics, MmSAI, this volume (arXiv: 1001.2720)
Štěpán, J., & Trujillo Bueno, J. 2010b, ApJ, in press
Trujillo Bueno, J. 2001, in Advanced Solar Polarimetry, ed. M. Sigwarth, ASPCS, 236, 161
Trujillo Bueno, J. 2003, in Stellar Atmosphere Modeling, ed. I. Hubeny, D. Mihalas, & K. Werner, ASPCS, 288, 551
Trujillo Bueno, J. 2009, in Solar Polarization 5, ed. S. Berdyugina, K. N. Nagendra, & R. Ramelli, ASPCS, 405, 65
Trujillo Bueno, J. 2010, in Magnetic Coupling between the Interior and the Atmosphere of the Sun, Astrophysics and Space Science Proceedings, ed. S. S. Hasan & R. J. Rutten (Springer Verlag)
Trujillo Bueno, J., & Shchukina, N. 2007, ApJ, 664, L135
Trujillo Bueno, J., & Shchukina, N. 2009, ApJ, 694, 1364
Trujillo Bueno, J., Asensio Ramos, A., & Shchukina, N. 2006, in Solar Polarization 4, ed. R. Casini & B. W. Lites, ASPCS, 358, 269
Trujillo Bueno, J., Landi Degl’Innocenti, E., Collados, M., Merenda, L., & Manso Sainz, R. 2002, Nature, 415, 403
Trujillo Bueno, J., Shchukina, N., Asensio Ramos, A. 2004, Nature, 430, 326
Trujillo Bueno, J., Merenda, L., Centeno, R., Collados, M., & Landi Degl’Innocenti, E., 2005, ApJ, 619, L191
Trujillo Bueno, J., Ramelli, R., Manso Sainz, R., Bianda, M. 2010, in preparation
Uitenbroek, H. 2006, in Solar MHD Theory and Observations, ed. J. Leibacher, R. F. Stein, and H. Uitenbroek, ASPCS, 354, 313
Vögler, A., & Schüssler, M. 2007, A&A, 465, L43 |
LADISLAV PROKES
THE PLAYER'S COMPOSER
The following talk was given by A. J. Roycroft to The Chess Endgame Study Circle on Friday, 1.1.66 at St. Bride's Institute, London, E.C.4.
If there is a single composer whose work is likely to make studies really popular, that composer is Prokes. His positions have few pieces, and the pieces are naturally placed. The solution is short. Profound and lengthy analysis is not needed. The position leads the solver to think in terms of direct over-the-board play, which is usually sufficient; when it proves not to be, the solver will have learned something, and he will have been pleasantly surprised. This means that what he learns he is likely to retain, and from a typical Prokes study he can learn not only
A. L. Prokes
Svobodne Slovo, 1.XI.46
Draw
1. Kf1/f1 a5 2. Kf1/1 a5/f1 3. f5 c3 f7/f6 a2 5. Kg8=+/iv.
i) 1. f7? Kf1 2. Kf1 Kf5 wins.
ii) 1. Kf1? Kf5 2. Kf1 Kf5, gaining a tempo by threatening both f4 and Kd5.
iii) 1. Kf1? Kf5 2. Kf1 Kf5 gains a tempo on (i) as bK has not moved. 2. a1Q 6. f7 Qa3 wins.
iv) 2. Ked4 Ke5 is the same as the second line in (i). iv) 5. Ke7? a1Q 6. f7 Qa3 wins. 5. Kf5 wins.
The most puzzling study of all is 2b. Kf1 a5 3. f7, blocking the path of the a-pawn, and then finally returns to g8. That is the main line. But is it the only way to draw?
Why? By analogy with the famous Réti study: We should go to e5, yet in the main line it does not.
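The Réti analogy invoked here rests on the elementary "rule of the square": a lone defending king catches a passed pawn only if it can reach the promotion square in no more moves than the pawn needs. A minimal sketch of that rule, assuming standard algebraic coordinates — the function and its conventions are mine, not from the talk, and it deliberately ignores obstacles, captures en route, and the dual-purpose king moves that make Réti's famous draw possible:

```python
def _sq(name):
    """'e5' -> (4, 4): zero-indexed (file, rank)."""
    return ord(name[0]) - ord('a'), int(name[1]) - 1

def king_catches_pawn(king, pawn, defender_to_move=True, pawn_is_white=True):
    """Naive rule of the square: the defending king catches the pawn iff
    its Chebyshev distance to the promotion square does not exceed the
    number of moves the pawn needs to promote (minus one if the pawn's
    side moves first). Ignores everything Reti's study exploits."""
    kf, kr = _sq(king)
    pf, pr = _sq(pawn)
    if pawn_is_white:
        moves = 7 - pr - (1 if pr == 1 else 0)   # double step from rank 2
        promo_rank = 7
    else:
        moves = pr - (1 if pr == 6 else 0)       # double step from rank 7
        promo_rank = 0
    king_dist = max(abs(kf - pf), abs(kr - promo_rank))
    allowance = moves if defender_to_move else moves - 1
    return king_dist <= allowance

# Reti 1921 (wKh8 vs. bPh5, White to move): by the naive rule the white
# king is hopelessly outside the square of the h-pawn...
print(king_catches_pawn('h8', 'h5', defender_to_move=True,
                        pawn_is_white=False))   # -> False
# ...yet the study is drawn, because the king's diagonal march gains
# tempi against Black's king and White's own c-pawn simultaneously.
```

The point of the analogy in the text is exactly this gap between the naive rule and the study's main line: the geometry says one thing, the dual-purpose king moves say another.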
the tactical trick or tricks that are the composer's idea but also the simple ground rules of theory that dictate the choice of moves. Lastly, Prokes composed over 1,000 studies (no one seems to know the exact total) and many hundreds of them satisfy these requirements. As his main work, *Kniha Sachovych Studii*, is very difficult to obtain, perhaps the small selection of studies I have chosen will be useful both to the beginner in the field of studies (for the reasons already mentioned), and also to the specialist who may like to acquire a feeling for Prokes' composing style and composing skill.
In each of the studies we shall suggest something that a player can learn. There are, of course, many things that can be learned from a study, but I hope to suggest the less obvious lessons. Many others will occur to you, and indeed it might be a useful exercise to list all that could possibly be learned from a given study. If one did this conscientiously, I think one would find what a great amount of acquired chess knowledge is necessary for the appreciation of studies. But that is for another time, and perhaps another speaker. Let us take each study in turn, go through the solution, and then suggest what can be learned.
A: Timing. If there are 2 moves to be made (here, for instance 1. Kf7 and 1. f4), see if there is not a reason for choosing one rather than the other to be the first. The specialist may notice that this study goes deeper than the 1822 Rule, for that idea occurs only in note (iii). Prokes has included also the draw with P on the 4th rank against Q.
B: A move that leaves a choice for the following move is superior to one that does not. 1. Kb7 rather than 1. Ka7, and 2. Kc6 rather than 2. Kb6, etc.
C: Play on; something might turn up!
D: When a P is being pursued by a piece, look not only at the simple advance of the P but also at quieter moves that restrict the scope of the piece.
E: If your opponent is ahead in material but restricted in movement, what you should be thinking about is how he will try to increase his mobility. You must find good moves for him, so that you can find better ones for yourself.
F: With a defending K near a corner you should smell stalemate from a long way off.
G: A better move (1. d5†) may often be suggested by analysing an obvious one (1. a7).
H: Do not be afraid to sacrifice P's, especially when you have to!
I: If your opponent surprises you with a good move (1...Kh8), perhaps it opens a new possibility for you.
J: Sit on your hands, as Tarrasch said. 1. g7? d6† 2. Rg8 wins, for B1.
K: Analyse, then look for landmarks (hP, hP), then analyse again.
L: Zugzwang (2. Kd4, then 3. Kd5, not 2. Kd5) is a frequent weapon in Studies. It can be called a 'good' move because it is a good move.
M: Just because one move is unlikely (1. d8), this does not mean that there are not more moves that are equally unlikely (3. Kb5 and 4. Ka6).
N: When nothing else works, try a bit of imagination!
O: Ask yourself what the effects are of your opponent's checks, now and in his following move.
P: A P on the 7th is worth a sacrifice or 2, not excepting the sacrifice of a P on the 5th or 6th.
Q: Are you quite, quite sure there is not something more?
R: Who said that fantastic things could not happen in natural positions? Nobody.
C. L. Prokes
Obranu Lidu, 3.xii.50
Draw
1. Kf8 Bh2 2. Ke8! Kxg5 3. Kg7 Kh4 4. Kd6 Kc5 Kxd5 Kd3 6. Kb4! Kc2 7. Kxa3 Kxc3 stalemate. 1) 6. Kxb5? Kac3 wins.
E. L. Prokes
Prace, 24.iv.47
Draw
1. Bxe6+/ Bxe6 2. gxf4! Bc7 3. d4+ Kf8 4. Kf6 Bxf4(?) 5. Ke6 Bb8=!!/i. i) Why not 1...d4?, as ...Kg4 is met by 2. Kf7 and 3. Kxe6. Answer: 1...Bxe6 Kxf4 3.Kf6 Bxf4 4. Kxe6 Bb8=!!/i. ii) 2. Kc7 Bxe6 wins. ii) 2. Kd7 Bxe6 Kxf7? 5. Ke6 Bc5 wins as in (i), as does 5...Kf6 Bxf4 1...Bb4. iii) Only a draw, because wpd4 prevents Bxe6 so that Kf7 is not more than forced. There is a lesson here in the importance of time (and in space, for on b8 Bb cannot move to any diagonal except a5-a7). On c7 it draws, while on c7, if not on b8, it can play to z5 and win.
S: Who's afraid of the Big Black Queen? Certainly not little white pawns on the 7th rank.
T: A little bit of iProkes (4. Bh1) proves the study is sound, in the most
delightful way. Blockading a P is better than controlling its next square, if the blockade is going to be lifted anyway, because then the enemy will be left blockading his own P and perhaps you can use the tension at K14 and 6. Rg5.
U: It is so easy not to be careful (the right choice of square for wK on move 2).
G. Rude Pravo, 11.vii.48
Draw
1. d5+/i Kd5s 2. e7/i Baet
Kd5 Be5 4. Ke7 Bb5 5. Kb5
Bc5 6. Ke7=.. i) 1. a7? Bf3
2. dc Be6t and 3. Kd5
wins. ii) 2. Kg7? Bf3+
I. L. Prokes
Tijdschrift KNBS, 1948
2 Hon. Men.
Win
1. Kd5/1 Nh5 2. g6/i Rb6t
Ke7 Rf8 3. Kd8/i1 Rxg6 3.
Ke7/i1v Rg7 4. Kd6/v Rg5t
7. Kd7 Kd7 8. Kd8t Kxg5
wins.
i) 1. Kd7 2. g6! Kh6 g5t.
ii) 2. Ke7? Kg7=.. iii) 4.
Kxf6 stalemate. iv) g5? Rf5
5. KxB Rxg5=.. iv) Now
there is no stalemate, so bR
cannot retreat. v) 4. Kd7?
Rxe1 5. BQqt Rg5t=.. v) 6.
gxf5 Kxh6 7. Kd6/v Rg5t 8.
Kf6 Kxg5 9. Kd7 Kf6
vi) 7. Kf6 allows the dual
continuation of (iv) or 7.
- Rxg5 8. BQq Rff4 9. Kxf5
stalemate.
H. L. Prokes
Sachove Studie (1941)
Draw
1. Kf4/i Kf7 2. a6/o Rxa2
Kd7/i1 Kd8 Ke8 3. Ke6/i1
Re8 6. Kd6 Ra8 7. Kc6=..
i) So that wR can be pro-
tected with 2 wK moves. 1.
a8Q? simply loses both wP's
to bK. ii) This threatens
Kd7.
J. L. Prokes
1st Pr. Loumz Ty for Twin
Studies 1942
Win
1. e7! Kxe7 2. d6t Kd7 3. e7
Rg8/i 4. Kf6 Rf8/i 5. Kg5
Rg8 6. Kf7 Rf3 7. g7(Q)t!
Kxg8 8. Rf1# win.
i) 8. Rf2t Kxe2 Kf7 5.
g8(Q) Kxe2 6. Kxe2 Rf6
becomes the main line at
move 8, and the only hap-
pening is 3. Kf7 4. Qe6t
Kxf8 5. Kf6. ii) 4. Kd8 5.
Kd7 6. g6t 7. Kxg6
Ke8 8. Kf1(f8) Kf7 9. Ke8
Ke8 10. Ke7 wins.
K. L. Prokes
1st Pr. Lourne Tp for Twin Studies 1942
Draw
1. f7t Kxf7 2. e7t Kd8/1 3. h7 Rb8/ii 4. Kg6 Kd8/ii 5. Kg7 Ke8/ii 6. Kf6 Kf8 (note i)
i) 2... Kf8? 3. h7 Rb8 4. Kg6 Rg8? as in the twin study, or ... Kf8? 4. Kg6 Kf8
ii) 3... Rf8? 4. Kg6 wins.
iii) 4... Rf8? 5. Kh6 wins.
M. L. Prokes
Hon Men, Enroque 1940
Win
1. d8 ed 2. Wc Kxd7/i. 2. Kxd8 Bxb4 4. Kd8/ii Bat 5 ba or 5. c4 win. ii) 2... Bb8 3. Kd8 Kd7 (note i)
ii) 3. b4t Kxxt=, or 3. Kd8 d5 4. ba d4t. The W move
3. Kd8 threatens ba? or? Kc6. iii) The B1 defence to the K move does not allow this threatening mate. Familiarity with K and P v K underlines this point and indeed makes it comprehensible.
V: Consecutive moves by the same piece are easily overlooked when there seem to be good alternatives.
W: When the man is tied to defend another, look out for the judo trick of counter-attacking (note ii) to evade the bind.
X: A tempo (2. Se7 with check) can create a position otherwise impossible.
Y: Don't think you know it all (that 2R's never win against 1R).
Z: "2 united passed P's on the 6th win against a R", says theory (see position R). The perception of the deception of this exception needs reception from the inception.
O. L. Prokes
7, 7.xi.30
Win
1. f6/i gf 2. Sf8 Rxfs/ii 3. e7 wins. i) 3... Rxe7 2. Rxf7 Rxf5?, which escaping W's move 1. Sg5? Kd8 2. Sf7 Ke6 3. Kf6 Kd8 4. Sxd8 Kxd8 5. K-e7 6. R-g6-
Q. L. Prokes
7, 8.xi.43
Win
1. h7 Sh6/i 2. Kxh6 Rxxa6t 3. Sf6 Rb6t 4. Bxb6 Rxa2t 5. Kg7 Rg3t 6. Kh8 Rg1 7. Sh6 Rh1t 8. Kg7 Kg1t 9. Kf6/i iii Rf1t 10. Kf7 wins. i) 2... Sf6t 3. Kxh6t 4. Kg7 wins. ii) 2... Rxf6t 4. Kg7 wins. iii) The unfort. solution stops at 8. Sf6t and there is a dual way of winning, by sacrificing both rank and rank, as Rf8, for example, is simply met by Sg8.
R. L. Prokes
1st Pr. Louma Ty. 1941
Win
1. d7 Rd1/i 2. Bd5 Rxd7 3. e6 wins. i) 2... Bxd5 3. Bxd5 Rxa2t 3. Kh3 wins. Or i) Rh8 2. Bg4t 3. Rxa2t 4. e6 wins. ii) 2... Kh3t 3. Bxe5 Rb4t- iii) 2. Sf7? Kxe5 3. Be1 Rh8t 4. Kf6t Rf8t wins. Only 3 moves in all main variations until best win is reached, nevertheless beyond question a worthy prize for such a 3-fold sacrifice of wB, each time on a different square.
W. L. Prokes
?, 11 xi 43
Win
1. Kd4 Bg2/i 2. Se4 Bxe5 3. Sc8? Kb6 4. Bg2/i wins. 1) Relying on the draw by "winning" Bf7 in thirteen moves. 2) Bxc3. ii) 4. Sb3?? Kas 5. Sxa5? stalemate. 4. Be3? Bd2=, or 4. Bf2? Be1=.
X. Schweizerischer Arbeiter-schach III.50
Win
1. Sd6 Sbd/1 2. Se7? K- 3. Sxg7 Kxg7/i 4. Ba1 wins
i) 1...Kd8 2. Sd6 Bxd6 (to stop mate) 3. Sxd6 Bc5.
ii) 1...Se4 4. Bf4 wins with 3 minor pieces against 1.
Y. L. Prokes
Sachove Umeni, x49
Win
1. Sd/f1 d4Q! 2. Sxd1 Bc7?
3. Ka5 Bxd1 4. Be7/i Kc2/i
iii 5. Bh5+ Kd1 6. Bxd1 wins.
i) 5...Kc7? 2. Bxe2 Bc6=.
ii) A lovely move.
4...Kd7? Be7? 5. Kc4? Bc2
5. Bxe7 Kd1 5...Kaz 1.
..Bh5 5. Bxd1. In all cases, mate follows.
Z. Schackvärdens 1939
Special mention
Draw
1. Kgt e2/i 2. Rcl Kd4/i 3.
Krb 4. Kd4 Kxb4 5. Rdt
Kxe 6. Kxe2 Kc3 7. Kd1
Kd2 stalemate. 2 sacrifices
of queen and rook, 1 of KB
1 Rai e2 4. Ra3? Kc2 5. Ra2?
Kc1 6. Ra1 Kd2 7. Kxe2.
and 1 Roffer. 2...
...Rkb 3. Kf3 d2 4. Rb1 Kc2
5. Kf2 R-d3 6. Nb1 Kc2
...Kd3 3. Kf3 d2 4. Re3?
Kxc5 5. Kxe2. The 5th R-
offer.
Back Issues of E G
E G 1, 2, and 3 are now out-of-print and unobtainable. Copies have been lodged with the British Museum. Photocopies of out-of-print issues may, at a naturally high price, be obtained by writing to:
Skakhuset, Studiestraede 24, Kobenhavn K, Denmark.
Diagrams and Solutions
No. 217 V. A. Korolkov
1st Place, U.S.S.R.
Championship 1962-63
(1st Prize, Spartak 1962)
Draw
No. 218 T. B. Gorgiev
4th Place, U.S.S.R.
Championship 1962-64
(1st Prize, Ceskoslovensky Sach 1963)
Draw
No. 219 V. A. Korolkov
5th Place, U.S.S.R.
Championship 1962-64
(Italia Scacchistica 1/62)
Win
No. 220 V. A. Korolkov
L. A. Mitrofanov
6th Place, U.S.S.R.
Championship 1962-64
(2nd Pr. Shakhmatnaya Moskva 1962)
Win
No. 221 A. N. Studenitsky
7th Place, U.S.S.R.
Championship 1962-64
(1st Pr. Shakhmatnaya Moskva 1962)
Win
No. 222 E. L. Pogosjants
9th Place, U.S.S.R.
Championship 1962-64
(1st Pr. Shakhmatnaya Moskva 1962)
Draw
No. 217: V. A. Korolkov. 1. Ra3 c1Q 2. Rxa6† Sa4/i 3. Rxa1† Kb2 4. Rb4† Ka3 5. Rb3?/ii Kxb3 6. Sd4† Ka4 7. Se2 Qxc2 8. c8Q Qh2† 9. Kg4 Qg3† 10. Kh5 =. i) If B1 allows Rxb6† and advances bK up a- and b-lines, then 1...Kb6 is answered by Sd4† and a 4-perpetual is assured; else W queens cP with check. ii) B1 has been threatening mate on g3, forcing W to proceed with checks. But on b3 wR guards g3, so B1 must clearly capture.
No. 218: T. B. Gorgiev and G. M. Kasparyan. 1. c6?/i Ke7?/i 2. Ke4 Sc2/iii 3. Kd3 Se1† 4. Kc2(g3) Sg2 5. Kf3 Sh4† 6. Kg4 Sg2 7. Kf3 Kxe6/iv 8. Bd8/v Sh4† 9. Kg4 Sg2 10. Kf3 Se1† 11. Ke2 Sc2(g2) 12. Kf3=. i) The real purpose of this is so that wBe7 can control a3 and b4, forcing B1 to flee via K-side, so wK reaches f3, containing bSh1, with tempo. ii) ...Ke2? 3. Kf3= iii) ...Sg2 3. Kf3 (checkmate line) 3. ...Sd1† 4. Kd2 Sb2 5. Kc3 Sa4† 6. Kb4 Sb6 7. Ke3=. iv) 7. ...Sc5† 8. Kg2 Sxc6 9. Bf8(a3). 7...Kxc6 threatens 8...Sxe7. v) On a3 or b4 wB would eventually be attacked by bSc2 on its return journey, losing W a vital tempo, which would let bSh1 escape.
No. 219: V. A. Korolkov. 1. Kg2/i d2 2. Qd7 Sf5 3. Kh1/ii Kb8 4. Qd8† Ka7 5. Qd3 Ka8 6. Qd5 Kb8 7. Qd7 Ka8/i 8. Qd8 h6 9. Qd3 Ka8 10. Qd5 Ka11. Qd7 h5 12. Qd8 h4 13. Qd5 Ka8 14. Qd3 h3 15. Qd7 h2 16. Qd8 h4 17. Qd3 Ka8 18. Qd5 Ka8 19. Qd7 Ka8 20. Qd8 h3 21. Qd3 Ka8 22. Qd5 Kb8 23. Qd7 Ka7 24. Qd8 Ka8 25. Qa8 mate. i) 1. Qxe7? d2. 1. Qe1? d2 2. Qa1 Sf5 3. Qh1 or 3. Kf3 Ka8=. B1 meets other tries by ..d2 and ..Sf5. All highly remarkable. ii) 3. Kxh2? Se3. 3. Qxd2? Sg3. The position is now one of great beauty, great dynamic balance. iii) 7...Ka8 8. Qxe8† Ka7 9. Qe8 is the same. A tactical point easily overlooked is Qd3, Kb8? Kxh2, Sg3, Qxg3 check.
No. 220: V. A. Korolkov and L. A. Mitrofanov. 1. Rg7?/i Kf1/ii 2. Rxe1† Kf2 3. Rb1† Kxf1 4. Rb8† Kxe2 5. Rg2 6. b5b KB3 Ka8 Ke6 8. Ka7 Kd8 9. b6 queen/iv. i) 1. Rxe1† Exe1? 2. Kxe6 bxa6. ii) So that if 2. ab?, Rxb8 3. Kxc6 Rb8t, or 3. Rxe1† Kxg1 4. Kxc6 Kf2 and bK is one move nearer than in main line iii) A Korolkov trade mark. W with R and 2P's against 2R's and a B, sacrifices his R, leaving him with only 2P's, and W wins. It is impossible, but true, even though neither WP is on the 7th rank. iv) With bK on c5 this would obviously be only drawn.
No. 221: A. S. Studenitsky. 1. Rf3/c1 c2 2. Bxe2 Bg6† 3. Kd4 Exe2 4. Re3† Kb7 5. Re7† Ka6 6. Be3† Ka5 7. Re7† Kb4 8. Be5† Kb3 9. Ra3 mate/ii. i) B1 threatens to promote with check, to advance ...c2, and also ..Bg6† immediately or later. 1. Re1? c2 1. Rxb2? cb 2 ??. ii) Anyone who has attempted to build a study with this final mating picture (mid-board mate by R and B with 2 B1 self-blocks) will have a great respect for this composition.
No. 222: E. L. Pogosjants. 1. Kf3/i Bg2?/ii 2. Kxg2 Ke3 3. Kg3 Bd2 4. Sf4† Ke2 4. Bf3† Ke1 5. Bc1 Qf1 6. Kg2 Be3/iii 8. Bb4† Bd2 9. Bd6=. i) Kg3 Bd2 2. Ba5† Ke2. 1...Bc1 2. Kg3 Bd4 3. Ba5† Kd1 4. Ba4=. iii) 7...Bc3 8. Bg3† Kd2 9. Bf4=.
No. 223: V. A. Korolkov and L. A. Mitrofanov. 1. Ra1/i Qg6 2. Rh2† Kxh2 3. Sf3? Kh3 4. Rh1† Kg2 5. Rh2?/ii Kf1 6. Ke3 Qe2 7. Ph1† Kg2 8. Rg1† Kh3 9. Kf4 Qg6 10. Rh1=. i) To meet 1...cb by 2. Rd1, while 1...c2 leads after 2. Se4 Qf8 3. Rh1† Qb8† 4. Ke3 Qb6† to a draw by perpetual check. As 1...f4t is met by 2. Rxe5†, B1's best is to
No. 223 V. A. Berdikov
1st Prize, Shakhmatnaya
10th Place, U.S.S.R.
Championships 1962-64
(1st Pr., F.I.D.E. Tourney
1962)
Draw
No. 224 F. S. Bondarenko
A. P. Kuznetsov
12th Place, U.S.S.R.
Championships 1962-64
2nd Prize, Shakhmatnaya
Moskva 1963
Win
No. 225 G. M. Kasparyan
R. L. Mandinyan
13th Place, U.S.S.R.
Championships 1962-64
(4th Pr., Galitzky Memorial
Ty, 1964)
Win
No. 226 Al. P. Kuznetsov
2nd Prize, Shakhmatnaya
Moskva 1965
Award 1966
Draw
No. 227 N. Kralin
3rd Prize, Shakhmatnaya
Moskva 1965
Award 1966
Win
No. 228 F. S. Bondarenko
Al. P. Kuznetsov
Comm., Shakhmatnaya
Moskva 1965
Award 1966
Win
attack wRh5. ii) 5. Rg1†? Kf2 6. Rxg6 fg 7. bc b2 8. Sd2 Ke2 9. Sb1 Kd3 10. Kf3 Kc2 11. Sa3† Kxc3 12. Ke3 Kb2 13. Sb1 Kc2 14. Sa3† Ke1 wins.
5. Sh4†? Kxh1.
No. 224: F. S. Bondarenko and A. P. Kuznetsov. 1. f7/i Bg2†/ii 2. Kxe2/iii f3†/iv 3. Kh3 Qf8 4. Bgl Qh6 5. Kh2 Qf8 6. Kh1 Qh6 7. Bh2 Qf8 8. Kg1 Qh6 9. Kh1 Qh8 10. Bgl Qh6 11. Sf6 Kg7 12. Sd7 Qh3 13. Ke1/v 14. Qe2 Qg8 15. Qe1 Qf8 16. Qe2-2† 17. Qa4 Kxg6-cc 18. Ka5-† 19. Bf1 must meet Qf7 now, so has no time for his own move. i) Bf3. ii) b5 is now stalemated, so he tries to throw all his pieces away, with checks. iii) 2. Kgl? Qb1† 3. Kxg2 f3†=. 2. K-? loses to 2...Qe5†. iv) 2...Qb7† 3. f3 and there is no longer a stalemate defence. v) This is what it has all been about. W has won a tempo to obtain a winning P-ending (wB does not count). Note 10. Sf6† Qe7.
No. 225: G. A. Kasparian and L. L. Mandinyan. 1. Rf2† Kg4/i 2. Sb4† Kxh1 3. Sf6† Kg2 4. Rxb4 b5 5. Rg2†/i 6. Sd7† Kf3 7. Kg5 Kf5 8. Rxe5 Kh5 9. Sg4 Kh4 10. Sf6 Kh3 11. Sb5 Kd2 12. Sg3 Kh3 13. Sf5 Kh2 14. Se3 Kh3 15. Sg2 Kf2 16. Sf4 Kh1 17. Rg7 Kh2 18. Rg2† Kh1 19. Re2 Kg1 20. Rd2 Kf1 21. Sg2 Kg1 22. Se3 Kh1 23. Sg4 Kg1 24. Sh2 Kh1 25. Kh1/Kg1 26. Rb2 wins, as wK is now free (27-28). Ke1 30. Sf3† Sf6† 4. Rh6 mate. i) 1. ...Kf1 2. Sd5 mates quickly after Sd3+. ii) 2. Sg6? Bbd seems to draw easily. iii) W can now confine Bd8 on a5, as 3...Bg5 loses to 4. Rh8† Kg2 5. Rg8, winning on material. iv) W has a free R and S, but without wK this is not enough to force mate. v) Threatening 28. Sf3.
No. 226: Al. P. Kuznetsov. 1. Se3 de 2. Kh5 Kxf5 3. h4 Rf8 4. Kh6 Kf6 5. Kh7/i Ra8 6. h5 Ra7? 7. Kh6 Rb7 8. g7 Kf7 9. Kh7 Ra8 10. h6 Rb8 11. g8Q† Rxe8 and the stalemate that B1 has avoided on moves 3, 7, 8 and 10 is now a fact. i) 5. g7? Rg8 8. h5 Kf7.
No. 227: N. Kralin. 1. Se3 f5† 2. Ke5 f4 3. Kxf4 Sxg2† 4. Sxg2 Bd5 5. Sd6/i Bxg2 6. Sf5† Kh5 7. Bz2 Kg6 8. Sh4†. i) 5. Se3? Bxh7 6. Sf5† Kh5 7. Bd3 Ba8=, a delightful use of the remote corner square.
No. 228: F. S. Bondarenko and Al. P. Kuznetsov. 1. Rf7† Kd6 2. Bf4 fe 3. Bg5 Re8 4. Be7† Rxe7 5. Rf8 Rf7 6. Kxf7 g5 7. Rb8 g4 8. Ke8 g3 9. Kd8 g2 10. Rb1 wins.
No. 229: P. Perkonaja. 1. h7 Sc4†/i 2. Kb4/ii Kxh7 3. Sf6† Kg7 4. Sxe8† Kf8 5. Sc7/iii Se5/iv 6. Bh6†/v Kc7/vi 7. Eg3† Kd6 8. e7 Sc6† 9. Kb5 Sxe1 10. Bh6 mate. ii) ...Kxh7 2. Sf6† Kg6 3. Sxe8 Sc4† 4. Kb4 Se5 5. Bg5 Sc6† 6. Kb5 Sxg4† 7. cd8† 8. Kd5 Sc6 9. Kd6 Kf7 10. Kc1 Ke7 11. Sg7 Sf7† 12. Sf5 and 13. Sd6 mate. 2. Kxh7 Kf7 3. Kf6 Kf5 4. Sxe8 Se5 5. Bf4 Sc6 6. ed Kf5 7. Bc7 Ke6 8. d8Q Sxd8 9. Bxd8 Kd7= iii) 5. cd? Se5 6. Sf6 Ke7=. iv) 5...Sb6 6. Kb5. 5...Sd6 6. Bh6†. v) 6. Bf4† Sd3= = vi) 6. ...Kg8 7. Bf4. A beautifully constructed study but the finale is almost identical with a well-known composition by Harold Lommer, Berliner National Zeitung 1938. White: Kg8, Ec2, Sd6, Pd6. Black: Ke8, Sg8, Pc7. White wins. 1. Sf7+ Ke8 (1...Kd4 2. Bh7) 2. Bb3 Kd7 3. Ba4† Ke8 4. d7.
No. 230: E. Puhakka. 1. Kb3/i Br6 2. Sgf† Sd6 3. Kb4/ii Bd7 4. Ka5/iii Bg4/iv 5. Kb6 Kf7/v 6. Ke7 Se4 7. Kd8 Sxf8/vi 8. Se8 Sxe8 stalemate. i) 1. Kd3 Bd5 2. Sgf† Bf7 3. Ke4 Kxf6 4. Sf5 Bg6 wins. After 1. Kb3 Bd5† 2. Kb4=. ii) 3. Se6? Bd5†. iii) 4. Ke5? Se4† and 5...Sxf6(t) wins. 4. Ka5 is part of a remarkable K-march. iv) 4...Se4 5. Sh5.
| No. 229 | P. Perkonoja |
|---------|--------------|
| 1st Pr., Visa Kivi Jubilee Ty., 1965 |
| Award vii/66 |
| No. 230 | E. Puhakka |
|---------|------------|
| 2nd Pr., Visa Kivi Jubilee Ty., 1965 |
| Award vii/66 |
| No. 231 | A. Koranyi |
|---------|------------|
| 3rd Pr., Visa Kivi Jubilee Ty., 1965 |
| Award vii/66 |
| No. 232 | R. Heiskanen |
|---------|--------------|
| 4th Pr., Visa Kivi Jubilee Ty., 1965 |
| Award vii/66 |
| No. 233 | B. Breider |
|---------|------------|
| 1 Hon. Men, Visa Kivi Jubilee Ty., 1965 |
| Award vii/66 |
| No. 234 | E. Dobrescu |
|---------|-------------|
| 2 Hon. Men, Visa Kivi Jubilee Ty., 1965 |
| Award vii/66 |
Not obvious. v) 5...Kf6! 6.Kc4 Ke7 7.Kd5 Rf7 8.Kc4 Sh1 9.Kf4 Kf7 10.Sh5 Bxh5 11.Kg5+. Only just. vii) 7...Sd6 is no better and can be met by 8.Kc7 or 8.Sc6.
No. 231: A. Koranyi. 1.Sf3/i Sxf3/ii 2.Ra8? Kh7 3.f7 Rh4+/iii 4.Kg3 Rg5? 5.Kf4 Kh6 6.Kf5 Kh5/iv 7.Rh4+ 8.Kg3= iv) 7...Rb8? Kh7? 8.Kf4 Re7 1.Kg3 Re7 wins, here 7...Kf6 8.Kg5/v Rg4+ 4.Kh5 Sg6 5.Sf5 Rh4? 6.Sxh4 Sf4 mate. ii) 1...Re4+ 2.Kg3 Sf5+ 3.Kc2 Rh5 4.Ra8? Kh7 5.7 Re2? 6.Kg1 Sg3 7.Rh8?. Or 1...Re8 2.Sxh4 Re4+ 3.Kg3 Rhxh4 4.Ra8? Kh7 5.7 Reg4? 6.Kf3 Hf4+ 7.Kg3 Rh4+ Kg3 8.Kf4= iii) 3...Rg6 4.Kf4 Kh6 5.Ke3 Re6 6.Kf2/iv Rh2+ 7.Kg3 Rg5? 8.Kf4=. Anti-clockwise echo to the main line. There is always taboo to prevent Br's doubling on f-file. iv) 7...Kd3? Rd2+ 8.Kc3 Rd3 9.Kb4 Rb2? 10.Kc5 Rc3? 11.Kd6 Rd2? 12.Ke7 Rc7? wins f2. 13.Kf8 Rd6?, 14.Ke4 Se5? 11.Kd4 Rd3? 12.Kxe5 Re2? 13.Kf4 Rf2? and 14.Rxf2. v) 3.Kg3 Sg6 4.Sb5 Rg1? 5.Kh3 Rg6 6.Qg3 Sxf7? 7.Kxh3 Rf4 8.Kf7+ Keg8 vi) 6.Kd8? Rd2? 7.Kc2 Re2? 8.Kc1 Re1+ 9.Kb2 Rd2? 10.Kb3 Rb1? 11.Kc3 Rcf1? 12.Kb3 Sd4? wins either by ...Se6 or more R-checks.
No. 232: R. Heiskanen. 1.Kf3 Ke3 2.Be4/b2/ii 3.Ba2 Ka2 4.Kh4/iii Bf7/iv 5.Bbl Ke1 6.Bd3 Bxd7 7.Sb3 Bbl 8.Sf4 Be2 9.Sc2 Kd4 10.Bxc2/v Kxc2 11.Sc3 Sf5? 12.Kc4 ad 13.Sb5= i) 2.Sc6? Bxf5 3.Kc4 b2 4.Bd3 Bxd3? 5.Kc3 b1Sf wins, but not 2...Ka4? 3.Bd3 Bxf5 4.Kb4 b2 5.Kc3=, nor 2...Ka2? 3.Sd6 Bf7 4.Sc4+= ii) 2...Bd3 3.Kb4 Bb4 2.Ba2/Kba2 4.Kb3 Kc1 6.Sc4 Bh7 7.Sc3 Bg4 8.Kc3 Bxa2 9.Sc3+ or 9...Kd4 7.Bxc4 Kc1 8.Sc4= 1.Ka1 Qf4 10.Sb3= iii) 4.Ka4? loses because in the main line B1 can play ...Be2 check. iv) 4...Kc1 5.Se8 Bf7 6.Sc6= v) 10.Sc3 Kxd3 wins.
No. 233: B. Breider. 1.Sd2/i Kb4/ii 2.Sbl Ke4/iii 3.Kg8 Kd3 4.Kf5 Kxd3 5.Kg4 b5 6.Sa3 b4 7.Sc2 Kf2 8.Kxh3 b3 9.Sa2=, for instance 9...Ke3 10.Kg4 Kd3 11.h4 Kc2 12.h5 Kb4 13.Sbl a3 14.Sxa3 Kxa3 15.h5 b2 16.hf blQ 17.h8Q= iv) 1.Sc5? b5 2.e4 b4 3.e5 b3 4.Sd3 a3 5.e6 b2 6.ef blQ , e8Q Qxd3? and bQ being well centred with checks makes safe an attack on the king. 7.Kb4 ef 8.Kb3 ef b4 wins. ii) 1...a3 2.Sc4? Kb4 3.Sxa3 Kxa3 4.e4 b5 5.e5 b4 6.e6 b3 7.e7 b2 8.e8Q blQ? 9.Kh6= 1...b5 2.e4=. iii) 2...Kb3 3.e4 Kb2 4.e5 Kxb1 5.e6=.
No. 234: E. Dobrescu. 1.Qc3/i Rds/ii 2.Qf6/iii Ke8 3.Qe6 Kf8 4.Ka1/iv Ra5?/v 5.Kbl Rb5? 6.Kc2 Rds/vi 7.Kc3 Rd8 8.Kc4 Rd7/vii 9...Rd8 10.Rd7+ Kf8 11.Kb3 Rd1 11.Qd8+ Kf7 wins. i) 9...Qe8? Kf7= 10.Qd1? Se5 2.Sc4 Rb5= ii) Qd4? Se5 3.Qa5 Rg5 4.Qf7+ Se6. ii) Apart from Qxd3 there is the threat Qe8t. 1...Sc5 2.Qb3 Rf2? 3.K- wins. i)...Rf2? 2.Ka3 Rf3 3.Qe8t and a second check on b7 or g4 wins. iii) 2.Qc4? Ke7= 2.Qc5t Ke8 3.Qb7 Rds= iv) 4.Kb1? Rds 5.Ka1 Rds 6.Sbl 7.Qe8t Rds 8.Qe5 Rds 9.Qe8t Rds 10...Sd5= 11...Rd8 5.Kbl Rb8? 6.Kc2 Sb4? 7.Kc3 Rb8 8.Qe8t K- 10...Sd5= 11...Rd8 5.Kbl Rb8? 6.Kc2 Sb4? 7.Kc3 Rb8 8.Qe8t K- 9.Qc7? K- 10.Kc4 wins, a fine pendant to the main line. Note here 6...Rd8 7.Kc3 wins. It is worth comparing the lines in notes (iv) and (v). vi) 6...Sb4? 7.Kd2 Qe8 8.Qe5t Ke8 9.Qe8t wins or 7...Rd5? 8.Kc3 Qe8 9.Qe5t wins. vii) 8...Rd2? 9.Kb3 Rb8 10.Kc2 on a2 or 9...Sd3 10.Kc3, or 9...Sd1 10.Kb4 Rd4? 11.Kc5 wins. viii) 9...Rd8 10.Kb6 Rb8? 11.Kc7 Rb7? 12.Kc6 wins. This study is a highly original (4.Ka1) discovery with this material. The number of wk moves made not just to get out of check is remarkable.
| No. 235 | R. Heiskanen |
|---------|--------------|
| 3 Hon. Men., Visa Kivi Jubilee Ty., 1965 |
| Award vii/66 |
| No. 236 | A. Rautanen |
|---------|-------------|
| 1 Comm., Visa Kivi Jubilee Ty., 1965 |
| Award vii/66 |
| No. 237 | O. Kaila |
|---------|----------|
| 2 Comm., Visa Kivi Jubilee Ty., 1965 |
| Award vii/66 |
| No. 238 | A. Fred |
|---------|---------|
| 3 Comm., Visa Kivi Jubilee Ty., 1965 |
| Award vii/66 |
| No. 239 | P. Perkonoja |
|---------|-------------|
| 1st Prize, Houston Chronicle 1965 |
| No. 240 | B. Breider |
|---------|------------|
| 2nd Prize, Houston Chronicle 1965 |
No. 235: R. Heiskanen. 1. Sd6 Rxxd6=/. 2. Bh7/ii Rh8† 3. Ka3/iii Sa7/iv 4. c4Q/iv Sxc8 5. Bxc8 Kxh4 6. n7 Rbl/vi 7. Bc6 Rb5 8. Ka4/vii Rh9 9. Ba2 Ra1 10. Ka3 or Kb3 wins. i) 1...Sxd6 2. a7 Rb8 3. aBQ Rxal 4. Bxa5 Kf5 5. h5 Ke6/viii 6. h5 Kd7 7. Bd5 Sc8 (7...Sf5 8. Be6†, vii 7...Kxe7 8. Bc6 wins. ii) 2...f3?? Kg4 and cannot be checked from c3. iii) 3. Ka4 in main line, but 3...Kc6 4. Bxa5 Kc5 5. Kc4 mate in but 6. Re6†=. iv) 3...Rxh6 4. Bxa5 Sc7 5. Ka4 Kf5 6. h4 Ke7 7. h5 Kd7 8. h6 Kxc7 9. Bd3. v) 4. c8?? Rxh6? 5. Bxa6 Sxc8=. vi) 6...Rh5 7. Ka4 Kh8 8. Ba6 Ra1+ 9. Kbs Rh1† 10. Ke4 Re1† 11. Kb3 wins. vii) 8. aBQ or Re1 Ra5†=, a drawing threat that lies behind B1's move 5 in the main line. viii) 6...Kg4 6. Kb3 Kxh4 7. Kh4 Kg5 8. Kc5 Sc8 9. Bb7 Sc7 10. Kd4 Kf6 11. Kd5 Kf7 12. Ea6 Sd5 13. Be4 wins.
No. 236: A. Rautanen. 1. Sf4 d2 2. Sd5 Be5 3. Se3 Bb2 4. h7/i Be5 5. Sd1 Bb5 6. Sf3 Be5 7. Kxb5 Bxc3 8. Kxb6 Bxa1 9. Sd4 Bc4 10. Sd3 Be5 11. a4 lae 12. Sd4 b1 13. Sxd4 c1= 14. Sxf4 followed by 15. hQq wins. i) 4. Sd1? Sax3 5. h7 Bh2=. ii) 6. Sxb2? d1Q 7. Sxd1=.
No. 237: O. Kalia. 1. Bh4/ii Sb5 2. Bd3 Sd2/ii 3. Ke3 Sf7/iii 4. Be4 Sd1 Ke4 wins by Zugzwang 5. Sd3 6. b4†. ii) 1. Ke3? Kxb3 2. b3 Kc5 3. Bf3 Sxb3 4. Kxb3 Kd4 5. Bb3 Kc3 6. Bg4 Kf2 7. Bxb3 Kg1 8. Ka4 Kxb2 9. Bf1 Kg1 10. Bh3=. 1. Bd3? Sb3 2. Ke3 Scl 3. Be4 Se2?, ii) 2...Sa3 3. Ke3 Sb3 4. Be2 wins. iii) 3...Se4 or 3...Sf1 4. b3†.
No. 238: A. Fred. 1. Sf5† Kxb5/i 2. Sd3 dQ 3. e4† Ka6 4. Sf5† Ka5 5. Be1 Qxe1 6. Sxf0 wins Q or mates. i) 1...Kd5 2. Sxe7† Ke4 3. Sc8 and 4. Sd6† wins. i)...Kb4 2. Be1 wins. i)...Ke3 2. Bd4 Kxd4 3. Sf3† Ke3 4. Sxd2 and wins the ending by playing carefully; if here 2...Kxc2 3. Sd4 wins.
No. 239: P. Perkonoja. 1. e6/i Bg3†/ii 2. Kd1/iii Bxe6 3. Rxe5 Bb3 4. Bc4/iv Rxc4† 5. Ke2/v Rc3 6. Kd2 Rf3 7. Ke2 Rf2† 8. Ke1/vi Bh5 9. Rb5† Ka8 10. Rbb7 Kxa7 11. Rb7† =. i) 1. Rd7† Be6 (better than 1...Bb5? 2. Rxe7 Rcb3 3. Rxb3 wins, as W can never be supported). ii) 1. Rb5† 2. Kd1 Sf3 3. Ba6† and 4. Bxc6†. iii) 2. Kf1 Bxe3 3. c5Qs Bb3†. iv) Stops...Rc5†; and threatens Bxb3. 4...Ba4 5. aBQ† Kxa8 6. Ra5† and 7. Rxa4=. v) 5. Kd2? Bf4†. vi) 8. Ke3? Bf4†. 8. Kd3? Bc2† wins with material plus.
No. 240: B. Breider. 1. b7 Bg7/i 2. Sa6/ii Be5 3. bB8/iii Bg3 4. Bc7 Kg4 5. fg hg 6. Sxc5 Kh3 7. Se4 wins. i) 1...Bh6 2. Sd7/iv Bcl 3. bB8 Bxa3 4. Sb6 and 5. Sxa3†. ii) 2. Sd6 Be5 3. bB8 Bg3 4. Be5 Bxe5 5. Sxa3† 6. Sb6† Bb7 7. bBQ† 8. Qe2† Qa3† 9. Qxe3 as W has nothing better than 5. Qxb8 Bf4. iv) 2. Sa6? Ecl 3. bB8 Bxa3 4. Bg1 Bcl 5. Bb8 Ba8=, but not 5...a3? 6. Sxc5 and wPc4 assures the win.
No. 241: C. M. Bent. 1. Re7/ii Kha9/ii 2. Re8† Kg7 3. Re7† Kxb6 4. Rf7† Qxf7 5. Bx7 Enf/iii 6. g8/iv Bg3† 7. Kg1/v Bxf1 8. Sce5 Eb5/vi 9. Sd6† Ke7 10. Sce7 Be2 11. Sd5† Kf8 12. Sf4 Ba6 13. Se6† Ke7 14. Sc7 Bd3 15. Sd5† Kf8 16. Sf4=, wS gyrating anti-clockwise e6-c7-d5-f4-e6. i) 1. Bd3? Be3? 2. Ke2 Ba6; or 2. Kg2 Bb7† =. g6?? Kxg5 3. Bd3 Be3?. 1. Bg8† Kg7 2. Bf7† Kg6 3. Bg8† Kf7 4. Bf7 cont. (iii) 3...Be2 4. Bxe2 Sg2(d3) or 6...Bc= =. iv) 6. Ba2† Bg3 7. Kg1 Bxe1 8. Sc5 Be2 9. Se6† Ke8. v) 7. Kg2? Bxe1 8. Sc5 Be2 9. Se6† Ke7 10. Sf4 Se3?. vi) wS gyrates clockwise in the fine echo 8...Be2 9. Se6† Ke7 10. Sf4 Ba6 11. Sd5† Kf8 12. Sc7 Be2 13. Se6† Ke7 14. Sf4, and if 14...Sg3 15. Sxe2 Sxe2= 16. Kf1=. Here 11. Ba7? Kf6.
No. 241 C. M. Bent
3rd Prize, Houston Chronicle 1965
Draw
No. 242 Dr. A. Wotawa
4th Prize, Houston Chronicle 1965
Win
No. 243 J. E. Peckover
Special Prize
Best U.S. Entry
Houston Chronicle 1965
Win
No. 244 B. Soukup-Bardon
Hon. Men, Houston Chronicle 1965
Draw
No. 245 G. Afanasiev
E. Dvizov
Hon. Men, Houston Chronicle 1965
Win
No. 246 G. M. Kasparyan
Hon. Men, Houston Chronicle 1965
Win
| No. 247 | A. J. Rehoy |
|---------|-------------|
| Houston Chronicle Ty. 1965 |
| Entry No 36 |
| No. 248 | P. Joita |
|---------|----------|
| 1st Prize, Revista de Sah |
| 1964 Award Iv/66 |
| No. 249 | F. S. Bondarenko |
|---------|-----------------|
| A. P. Kuznetsov |
| 2nd Prize, Revista de Sah |
| 1964 Award Iv/66 |
| No. 250 | E. Janosi |
|---------|-----------|
| 1-2 Hon Men, Revista de Sah |
| 1964 Award Iv/66 |
| No. 251 | G. Telbis |
|---------|-----------|
| 1-2 Hon Men, Revista de Sah |
| 1964 Award Iv/66 |
| No. 252 | P. Joita |
|---------|----------|
| 3 Hon Men, Revista de Sah |
| 1964 Award Iv/66 |
... Kh1 6. Rg1t Ka1 7. Sc3† Kb1 8. Sd4† Ka1 9. Kxd3 wins. ii) 4...Kd2 5. Nh1 Ke2 6. Ne7† Kf3 7. Kxb Kc4 8. Kxb3 Kc5 9. He7 wins. iv) 5...Kx2 6. Sf5 Kh2 7. Sd4 (better than 7. Se7† Kxd3. On d4 wS covers b5 and e8, to enable Bc1) 7...Ka3 (7...Sa6 8. Bc6 Kxa4 9. Ke5 Ka5 10. Ke6 Sa6 11. Sc4† and 12. Bxa6. v) 8. Kc3† Sb5† =. vi) 8...Sa6 9. Be6 Sc7 10. £f4 wins.
No. 248: P. Jolin. 1. Kb2/i1 Rg5/ii 2. Sc4 Rxg8 3. Sf6 Rh8/iii 4. d7 Bg6 5. Se8 Khx2 6. Ke1 Rh1t ? 7. Kxd2 Lh5 8. Sg7 Bg4 9. Se6 Rd1t 10. Ke2/iv =. i) B1's major threat is given by 1...Qa5, ii) The other threat, but an interesting question for theory is whether 1...Rxe6 would win. This will be discussed on another occasion. iii) 3...Rg2† 4. Ke1 Rg1† 5. Kd2 Bf7 6. d7 Bb3 7. Ke2 Rd1 8. Se4 Rd4 9. Ke3, a fine companion variation to the main line.
No. 249: F. S. Bondarenko and A. P. Kuznetsov. 1. Qb7/i Ra7 2. Sg5† Ke5 3. Sxf3 Ke6 4. Sg5†/i Ke5 5. St?† Bxf7/iii 6. Qxb5 Bd5† 7. Kg6 Rb7 8. Qb8† Be7 9. Qxb5 Kf6 10. Qa6† Ke7 11. Qa7† Kf6 12. Qa8† Ke7 13. Qa7† Ke6 14. Qa6† Ke7 15. Qa7† Ke6 16. Qa6† Ke7 17. Qa7† Ke6 18. Qa6† Ke7 19. Qa7† Ke6 20. Qa6† Ke7 21. Qa7† Ke6 22. Qa6† Ke7 23. Qa7† f1Q is given as drawn, but 5. Sd4† Ke6 6. Qa2† gives W an ending 2P's ahead, while 5...Ke4 6. Kxg8 Qf6 7. Qxb5 and there is no perpetual. 1. Qd8? Ra7? 2. Kxg2 bLQ 3. Sd4 Ke6 4. Qe8† Kf6 5. Qe4 Kge 11. Qe4 Sc6† Sxe3 5. Qxa7 bLQ should draw, but 11...Qa7? bLQ for 4...Sc6†. iii) 3...Ke6 6. Qe4† Kd7? 7. Qb4 wins. The study is far from clear analytically.
No. 250: E. Janosi. 1. Bb7 Ke3 2. Ba6 Sg3 3. Kf3 Se1† 4. Kxf2 Sd4† 5. Ke3 Sb4 6. Bcd 6 7. Sd4 cd/8 8. Sa4 mate. i) Surprisingly, 7...Se4† does not bust this most attractive study, 8. Sxc2 dc 9. Sd4 K- 10. bS† and 11. Sc3, when cP or fP not beyond 5th rank loses.
No. 251: G. Telbis. 1. Sc7/i d2 2. Sd5 Bd3† 3. Sg6 Bxg6†/i 4. Kg7 Bf5/iii 5. Ea6 d1Q 6. Be2† =. i) 1. Bf5? d2 2. Bc2 Bd3† will win, as also 1. Bb7? Be2 2. Be4 d2 3. Bc2 Bd3†. Instead, W tries to handle dP by threatening Bg4† after ...d1Q. ii) W avoids 3. Kg7? d1Q 4. Bg4† Qxg4† wins. iii) But this clever move renews the idea to recapture on g4 with check.
No. 252: A. Kolesnik. 1. Sf6 Kf2 2. Sxc3 Kxg3 3. Kb2 Ba4 4. Ka3 Ba5 5. Kb4 Ba6 6. Ka5 Bb7 7. Kb6 Ba8 8. Ka7 Bc6 9. Sd4 Kxa3 10. Sxc6 wins. 3 minor pieces win against one. W's move Sg4 would have been the reply to any other bB move to an unattacked square.
No. 253: K. Hannemann. 1. Rxe4 Re2 2. Rxe2† de 3. Qe3 d2 4. Qg1 mate. Echo after 2...fe 3. Qe3 d2 4. Qg1 mate.
No. 254: J. Fritz. 1. Sd2/i Rxa2/ii 2. Sb1† Kh3 3. Ed1† Ka2 4. Rxa3† Kxh1/iii 5. Kd2 ba 6. Bb3 a2 7. Bc2 mate. i) 1. Bxb4? Kb3 2. Rxa2 bLQ =. ii) 1...Kc2 2. Sb1 Kxb1 3. Kd2 Rxa3 4. Rxa3 ba 5. Bf7 a2 6. Bg6 mate. iii) 4...ba 5. Sc3 mate. The three mates all contain three selfblocks. A very successful mating study.
No. 255: J. J. van den Ende. 1. fgt/i Kxg6/ii 2. Bf6 a2/iii 3. Bxc3 Sxc3 4. 0-0 Sb1/iv 5. h3 a1Q 6. Kh2 and Black cannot win. White will take the King to h2, where he can be stopped only if Black tries, say, 6...Qg7 then 7. Rxh1 Kf6 8. Rf1† Ke5 9. Rf4 and the Black King cannot cross the f-file. i) 0-0 a2 2. fgt Kxg6 3. Rf6† Kxg5 4. Ra6 c2 and wins. ii) 1...Kg8 2. 0-0 a2 3. Bh6 and mates. iii) 2...Kxf6 3. 0-0† and 4. Rbl. iv) 4...gh 5. Ra1 =.
| No. 253 | K. Hannemann |
|---------|--------------|
| Stella Polaris, III/66 | 5 |
| No. 254 | J. Fritz |
|---------|----------|
| Stella Polaris, III/66 | 5 |
| No. 255 | J. J. van den Ende |
|---------|-------------------|
| Schakend Nederland | 7 |
| VII-VIII/66 | 7 |
| No. 256 | J. J. van den Ende |
|---------|-------------------|
| Schakend Nederland | 7 |
| VII-VIII/66 | 7 |
| No. 257 | G. J. van Breukelen |
|---------|---------------------|
| Schakend Nederland | 3 |
| VII-VIII/66 | 3 |
| No. 258 | J. Selman |
|---------|-----------|
| Schakend Nederland | 2 |
| VII-VIII/66 | 3 |
No. 256: J. J. van den Ende. 1. Bh4 b2 2. Bh5† Kd8/1 3. Bxb2 Rxb2 4. 0-0-0! wins. i) 2. ...Kf8 3. 0-0! Kg8 3. Bxe2 Rxe2 5. Rxe7 and mates. If 2. ...Kg8 3. Rxe2 Kc6 4. Ke4 Kc5 5. Kd4 6. Rh4† wins. The composer comments "I am ready with alternative 0-0 and 0-0-0, which, so far as I know, has not been achieved before".
No. 257: G. J. van Breukelen. 1. Rh5 Sf3 2. Kc4 Sd2† 3. Kd3 Sf3 4. Ke2/1 Sb4 5. Kf2 Sd2† 6. Sf1 Sf3 7. Kg3 Kb8 8. Kc3 Sc1† 9. Rg6 wins. i) 4. Sf6† Kxf5 and 5. ...Bxd4 ii) 2. ...Sh3 9. Rg3. A difficult struggle of R & S v B & S. The S is captured just when it seems to have got away. We hope to see more from this composer, whose name is new to us.
No. 258: J. Selman. 1. Sa2†/i Sxa2 2. h4 (Now bK obstructs bS) 2. ...Sc1/i 3. h5 Se2 (d3) 4. h6 Sf4 5. Kg8 wins(iii). i) After 1. h4? Sd5 2. Kg6 Kd4=, or 2. h5 Sf6†=. If 1. Se2†? Kd3=. ii) 2. ...Sb4 3. h5 Sd5 4. h6 Sf6† 5. Kg6 wins, if 4. ...Se7? Khe it rather looks now as though W would be able to prevent B from staking the P, 5. ...Kd4 6. h7 Ke5 7. Kg7 Kd4 8. Kf6 Se7† 8. Sh4 9. Kf7(f5) with 9. Kga wins. Selman has made a special study of this type of ending.
**THE FUTURE OF EG**
The 2-year period of guarantee made by the founder is nearly at an end. At the date of writing this note the total of subscriptions is: 115.
This total is not satisfactory, falling short of the 160 needed to ensure the continued production of worthwhile issues like EG Nos 4, 5, 6 and 7. As it is obviously not practical to increase the amount of the subscription (subscribers may even now feel that £2.00 is a high price for a mere 4 issues a year), we must have more subscribers. The founder cannot do much more than he has done in this respect in the past. He has written 100's of letters to prospective subscribers. What have you done? Unless the situation improves in the next 3 months, EG will collapse like so many other enterprises before it. You will receive no further notification if EG disappears: you will receive EG No 8, but not No 9, because in this event there will be no No 9 for you to receive. You have been warned.
A. J. R.
**Exchanges**
The following are additions to the lists on pp. 29, 59, 88. The list on p. 88 is of magazines not at that date exchanged.
| Magazine | Country |
|---------------------------|-------------|
| British Chess Magazine | England |
| Problemist | England |
| Shakhmaty-in-English | U.S.A. (see p. 94) |
| Skakbladet | Denmark |
| Szachy | Poland |
| Thèmes-64 | France |
Tourney Announcement: "Problemista" is a small circulation typed monthly edited by E. Iwanow. It announces an informal tourney for studies published 1966-67. Diagrams and full solutions to be sent to E. Iwanow, Kilinskiego 57 m.53, Czestochowa, Poland.
No. 159: V. Yakovenko. It is a great pity but the intended line collapses after 4...Ke6 (in place of 4...Kd5), 5.Sc5† Kd5 6.Sxb3 being met by 6...Kc4 with an easy win.
No. 165: Z. Kadrev. After 1.Sg6 Rdl (rather than 1...c2) seems to draw in comfort for Black. If 2.Rg4 c2 etc. and if 2.Se5 Ra1 3.Ra4 c2 etc.
Page 80 - B. H. A. Adamson. Mr. Aloni writes to rebut the criticism in Note (i) to this study, i.e. 1...Rae7 does not win, 4...Kxb5 5.Bb5 Sb6 = as Note (ii) shows. Judicious.
Pages 96-7: The Joseph Jubilee Tourney Award.
C. M. Bent. The solution as presented allows a dual by 12.Sd3† (instead of Kh2) Sxd3 13.cxd3 f3 14.Kg3 and mate in three. Black can however defend more accurately by 10...d3 11.c3 and now only 11...Qxc3 etc., the dual being eliminated.
A. C. Miller. Note (i) says that Black threatens 1...Kd3 2.Ba6† Kd4 3.Bb5 Ke5 4.Re5† Kb4 = but 3.Bad Kb4 4.Ra2 would win for White. More of a threat initially was 1...Kc4 (for ..Kb3). In Note (ii) 3.Kb7 should of course read 3.Kc7.
No. 169: B. Green. Black can draw here, it seems, by the sacrificial 8...Sc3† 9.Se2(4)xc3 a2† 10.Sxa2 (or Ka1) a5 etc. The 2S v. P ending is not a speciality of ours, but according to Fine (BCE Nos. 109 & 110) White, to win, must block the pawn with a S on a3 and this does not appear to be possible. The addition of a further bP may provide a solution to this difficulty.
No. 175: A. Hildebrandt. 1.h7 Rd2 (instead of ...Rb1!) and surely Black draws? Easy is 2.Bb5 Rd8 3.Bc4 Kf6 4.Bg8 Rdl† etc. = Best therefore 2.Be8 Kxe8† 3.b5Q† Kf7 4.Qh5† Kg6 5.Qe8† 6.Qe4† Kg8 7.Qe5† Rdl† (aiming for ...Rd6-f6) 8.Qb8† Kh7 9.Qh8† Kg8 10.Qe8† Kh7 11.Qe7† etc. (threat ...Kf6 12.Qa8 Rdl† etc.) 11...Qf6† would force Black to check the wK to g5 before playing 5...Rd5 but he seems to draw even now.
No. 176: A. S. Gurvich. The low placing of this study surprises us too. Note 1...Kf6 2.Kh3 6.See Qes(!), to stop Sg5 mate, 7.Sf4† and the fatal battery is reasserted.
No. 188: J. Buchwald. This type of study is less likely to show an analytical fault, but there is one here in II where 1.Qe2 wins on the spot.
No. 200: A. Byelysenky. The end position shows White with the two bishops plus a knight against a lone rook, and a win is claimed for White although the rook does not seem lost after 10...Rg7. If this is a book win it is new to us. Can anyone elucidate? (Yes, Chéron, Vol. I Second Edition, p. 292. Without P's the win is general for the two B's and S, whereas with B and 2S's it is only a draw, because R may sacrifice itself for a B. No examples are given. AJR).
No. 211: L. Kopac. There is a dual win here by 5.Sf6† Kf7 6.See Kxf6 7.Sd7†. This can be eliminated by moving the bS to b6 or to d3. The latter place seems preferable, but if we put it on d3 the dual arises after 5.Kg7 Sd7 when 6.Rxf8† and 6.Sf6† would both win.
No. 215: A. Maximovsky. This is good fun but not a win for White. 1.Rh7 Kxh7 2.Be4† Kg7 (rather than ...f5?) 3.b7 Be2† 4.Kf5 (if 4.Kg3 Be2 mate) 5.Kg4 taking the draw as 5.b6Q Bxh3 mate would leave the lone lawyer's rook with Black.
No. 216: V. Kizlev. After 1.Sf6† Kh8 (Note i) the correct winning line is 2.Rh2 Qb8 3.Rh3 Qc8 4.Rh5. If 2.Kg6 Qb8 3.a7, as given, then 3...Qg3† 4.Kf2 Qc7† is obscure and may well draw.
THE CLASSIFICATION OF ENDGAME STUDIES
by J. R. HARMAN
The following talk was given by Mr Harman at the CESC meeting on 1.vii.66.
The purpose of indexing endgame studies is to facilitate retrieving those of like material or those of like idea.
The simplest and most obvious way of classifying endgames is by the material present in the initial position. While useful, particularly for practical players, it is clearly of little value for correlating endgames of similar ideas, since the same idea can be realised with very different material.
I am indebted to Hugh Blandford for an exact "initial material" classification which he inherited from R. K. Guy and refined, and which I have adopted. In this system, the initial material is represented by a 6-digit number. The first digit indicates the number and colour of Q's, the second digit the number and colour of R's, the third that of B's, the fourth that of S's. A decimal point conveniently separates the first 4 digits from the last 2; these last 2 digits are the number of W and B1 P's respectively.
The first 4 digits are each selected according to the code:
| Digit | Meaning |
|-------|---------|
| 0 | no men of this type |
| 1 | one White man |
| 2 | two White men |
| 3 | one Black man |
| 4 | one White + one Black |
| 5 | two White + one Black |
| 6 | two Black men |
| 7 | one White + two Black |
| 8 | two White + two Black |
| 9 | more |
More precisely, 9 means combinations not otherwise provided for.
Thus No. 1 in EG is 0133.02, and No. 2 is 2016.01.
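The digit rule can be stated compactly: each of the first four digits is w + 3b, where w and b are the numbers of White and Black men of that type, with 9 covering anything beyond two a side. A minimal sketch, assuming that reading of the code, which reproduces both examples above:

```python
def digit(w, b):
    """Encode w White and b Black men of one type (Q, R, B or S).
    Values 0-8 cover up to two men a side; 9 is 'not otherwise provided for'."""
    if w > 2 or b > 2:
        return 9
    return w + 3 * b

def material_code(q, r, b, s, pawns):
    """Build the 6-digit 'initial material' code from (white, black) count pairs."""
    wp, bp = pawns
    return "{}{}{}{}.{}{}".format(digit(*q), digit(*r), digit(*b), digit(*s), wp, bp)

# EG No. 1: a White R against Black B + S, with two Black P's
print(material_code((0, 0), (1, 0), (0, 1), (0, 1), (0, 2)))  # 0133.02
# EG No. 2: two White Q's + a White B against two Black S's, one Black P
print(material_code((2, 0), (0, 0), (1, 0), (0, 2), (0, 1)))  # 2016.01
```

The function names here are my own; only the digit rule and the two example codes come from the talk.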
Hugh Blandford's endgame library has over 10,000 studies indexed in this way. This is a tremendous number, and it is growing month by month. This mass of material is clearly beyond the capacity of any one mind to retain and selectively regurgitate; and yet if a judge, journalist, commentator, or anyone else wants to find and evaluate the "art" in respect of an idea or combination of ideas, he has nothing more to rely on than his own (or someone else's) mind. No doubt many lists exist, compiled by individuals, but so far as I am aware there is no reasonably comprehensive collection from which all "like" positions can be conveniently retrieved.
An offer from me to do the index for EG led to a basic consideration of the whole problem of indexing and retrieving studies. In my retirement this has developed into something that will probably occupy me for months to come. My other writing pursuits have been almost entirely overshadowed, for it has become clear that the publishing of an index of the kind that I have developed is quite beyond the capacity of EG to carry.
What I have sought is an additional "thematic" classification to supplement this "initial material" classification. I later found that Tattersall ("A Thousand End-Games") at the end of his collection forecast (1911) this possibility and indicated a mode of realisation which I have
developed into a more or less complete system.
I break down the ideas into their component chess manoeuvres or patterns, and these I call features.
These features comprise:
Mate, threatened or effected
Fork
Opposition
Discovered check
Overloading
Pinning
Offer
Zugzwang to win a piece
to mate
for other purposes
Gaining a tempo (not very useful, and will probably be discarded)
Repetition of moves (includes perpetual check)
Both B1 and W promote
Check by P or piece disclosing attack by piece
Blocking, or otherwise disputing, command of promotion square
Penning a piece
Blocking check by a promoted P
Spear ("skewer") check, threatened or effected
Under-promotion to avoid draw
to effect mate
to gain tempo
for other purposes
Losing the move
Stalemate involving K, P, B, S, but not R or Q
R but not Q
Q
smothered
Unclassified
The practical realisation of this indexing has to be cheap (there is no money in chess!). Ideally, all these and other features could be coded onto the memory devices of a computer and selection made as desired. But this is beyond my means, so I use 5" x 3" index-cards. A rudimentary chessboard is drawn on each card by ruling, and the position inscribed thereon in black and red ink to distinguish between Black and White. The card bears the "initial material" code number, the name of the author, the date, the source, the result, and the notation. The index cards are notionally divided along their top edge into 21 equal portions, each portion being about ¼ inch long and each representing one or more of the features listed above. The presence of a feature is indicated by an upstanding gummed tab the width of the portion; a guide card ruled into the same 21 portions, each portion labelled by the corresponding feature, serves as a key. Gummed tabs of different colour distinguish the features which occupy the same portion, and can be suitably inscribed. These tabs are conveniently ¼ lengths of index strips which are available in white, yellow, red and green.
The 21 feature-positions, from left to right, are:
1. Mate: A white tab for a threat; a yellow tab for an effected mate; these tabs are inscribed in black and/or red ink with the men
threatening or effecting mate.
2. White tab if a fork is present; the tab bears on its upstanding portion the symbol in red or black ink of the forking piece and on its lower portion the symbols of the forked men.
3. Opposition, represented by a white tab. It can bear an inscribed triangle to indicate triangulation.
4. A white tab represents discovered check, the checking and disclosing men being inscribed in appropriate colour.
5. A white tab for overloading, with appropriate indication of the overloaded piece.
6. A white tab for pinning, on which is inscribed the pinning and the pinned piece.
7. A white tab inscribed with the symbol(s) for the men that are offered.
8 and 9. Zugzwang. A yellow tab in position 8 denotes win of a piece. A yellow or white tab in position 9 indicates mate and "other purposes" respectively.
10. Red tab for gain of tempo. This I have found of little value and I shall abandon it. The gain of tempo is so elusive and difficult to define, and anyhow, there was confusion with losing the move which is represented by a white tab in position 15. Position 10 is thus vacant and will be used in due course for something else.
11. A white tab where both promote, and a yellow tab for repetition of moves.
12. A white tab for blocking or otherwise disputing control of the promoting square, and a yellow tab for a check by one piece which discloses an attack by another piece, this last being suitably inscribed.
13. A white tab for blocking a check by a promoted pawn, and a yellow tab (inscribed) for penning.
14. Inscribed tabs, white for effected and yellow for threatened spear (skewer) checks. The checking and masked pieces are both inscribed.
15. A white tab for losing the move, a red tab for "unclassified".
16, 17 and 18. Underpromotion. Yellow tabs in 16, 17 and 18 are for avoiding a draw, effecting mate, and gaining a tempo respectively, while a white tab in 18 is for other purposes.
19, 20 and 21. Stalemate. A white tab in 19 is for effective restraint by K, P, and minor pieces (B's and S's). 20 has a yellow tab for restraint by R. 21 has a yellow tab for restraint by Q, and a white tab for a smothered stalemate.
Various inscriptions may be used to define this classification. For example, the white tab in position 12 (blocking command of the promoting square) is inscribed X where a man is interposed at the intersection of command lines, so that if one of two pieces captures it, an appropriate P promotes.
The unclassified tab in the 15 position can be marked with a symbolic stairway to indicate stepwise movements of Q or R or K.
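In software terms, each card amounts to a material code plus a set of feature tabs, and riffling the deck for upstanding tabs is a subset test. A toy sketch of that mechanism (the cards and feature names here are invented for illustration, not studies from the article):

```python
# Each "card": initial-material code, author, and the set of feature tabs it carries.
# All three entries are invented examples.
cards = [
    {"code": "0133.02", "author": "Composer A", "tabs": {"mate", "fork"}},
    {"code": "0133.02", "author": "Composer B", "tabs": {"mate", "zugzwang"}},
    {"code": "0300.11", "author": "Composer C", "tabs": {"stalemate"}},
]

def select(cards, wanted_tabs):
    """Return every card carrying ALL the wanted feature tabs (subset test)."""
    return [c for c in cards if wanted_tabs <= c["tabs"]]

print([c["author"] for c in select(cards, {"mate"})])           # A and B
print([c["author"] for c in select(cards, {"mate", "fork"})])   # A only
```

Asking for several tabs at once narrows the selection exactly as scanning the physical cards for a combination of upstanding tabs would.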
These cards are filed in numerical order of the "initial material" index, and divided into groups according to the first 4 digits of that index. Thus, particular groupings can be specially treated.
For pawn endings, for example, the first 6 positions are as detailed above. The next 4 are for a P-offer,
to free wK (white tab in 4)
to avoid stalemate (a yellow tab in 4)
unclassified (a red tab in 4)
to impede bK (white tab in 5)
to gain tempo (yellow tab in 5)
to free wP (white tab in 6)
to reduce scope of promotee (yellow tab in 6)
while the remainder are standard as defined above.
For B + P endings (i.e. groups 0010 and 0020) a white tab in 10 means prevention of K reaching R1 promotion square.
It is essential to index derivatives. For example, A appears twice, under 0030.22 and again under 3030.11. In general, whenever there is an exchange or loss or gain of a piece or a promotion, another index card is required. In some cases one study is represented by 4 or 5 cards. The purpose of indexing derivatives is to facilitate retrieval, particularly of anticipations. Thus, a search for studies like a R v R ending may reveal quite different initial material, even though the search is restricted to the group 0300. Fortunately, the composing principle of economy operates to the advantage of this possibility.
While it is clear that the main line of play must be fully indexed, it is a matter of subjective judgment how many subsidiary lines should be indexed. I have adopted the rule that unless the subsidiary line is obvious (to me!) it shall be indexed.
Now, how does all this operate in practice? Remember that its value depends on its completeness, and so far all of Tattersall, one-third of Sutherland and Lowson, all of Troitzky's Chess Studies, and Golombek's "Modern End-Game Studies", together with nearly one hundred odd pieces, have been indexed - say 1700. So not all likenesses will be found (cf. Blandford's 10,000!).
But, first tests are promising. B, shown at the last meeting, involves essentially a KQ mate, offer of B, and fork of K and Q by B. Miller's is 1.22. The only feature that comes up in the general search is 2010. Immediately this combination of features was picked up in No 118 in E G 3, having its original in 32.22. I leave you to judge the significance of this.
C has as main features a Q-mate, Zugzwang to effect mate, and a P-offer to impede bK and free wP (these latter are special classification features for P endings). Its group is 0.22 but there is a derivative in 1000.02 where only the first two features are significant. So, looking in 1000 is the first step, and there one quickly discovers No 663 in Tattersall.
D (0.23) has KPBP stalemate and an unclassified P-offer. Seeking both these features together, (0.23) is quickly found.
F (0.21) has as its only feature triangulation. Again, G has just this. Now take H; the only significant feature is the stalemate by R or Q. This occurs in I, J, K and L, the last of which is in 2.42.
No 121 in E G 4 has as its main characteristic a perpetual check by a B. It took me half an hour to determine that there are (so far, only) 10 compositions that I had to look at - all the cards that had the yellow tab in the 11th position in those groups that involved a B (= a digit other than 0 in the third digit position).
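The search just described is, in modern terms, a filter over two card attributes. A hypothetical sketch (the tuple representation and sample data are my own; only the rule itself - yellow tab in position 11, restricted to groups whose third digit is non-zero - comes from the text):

```python
# Sketch of the tab search: find every card with a yellow tab in
# position 11 whose material group involves a B (third digit of the
# 4-digit group code is non-zero).  Cards are modelled here as
# (index, set_of_tabs) pairs; the data are illustrative only.

def involves_bishop(index):
    group = index.split(".")[0].rjust(4, "0")[:4]
    return group[2] != "0"   # non-zero third digit => a B on the board

def search(cards, position, colour):
    return [idx for idx, tabs in cards
            if (position, colour) in tabs and involves_bishop(idx)]

cards = [("0030.22", {(11, "yellow")}),   # B present, matching tab
         ("0300.12", {(11, "yellow")}),   # matching tab, but no B
         ("0010.05", {(15, "white")})]    # B present, wrong tab
```

Here `search(cards, 11, "yellow")` keeps only the first card, which is exactly the narrowing-down that reduced the half-hour hunt to 10 candidates.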
Incidentally, when such a search is made the results are recorded on a separate card as a list. These lists are obviously of interest; M, N and O.
In Chandler's "The Bright Side of Chess" there is a chess problem comprising 4 successive S-promotions to avoid stalemate, 3 successive sacrifices of S, and an SK mate. Q, a very different setting, emerged from an all-card search. (In P, wBd1 must have promoted on g8! AJR).
R (0.33) is a complex ending having the terminal feature luring a K to a square where it is checked by promotion. This is (at present) an unclassified feature, and the search is of 22 examples in group 0.nn (which totals about 150 endings) to find S.
A: A. A. Troitzky
Eskilstuna Kuriren 1916
1 e7 e1 (Q)
2 Qb8+ Kf6
3 Qb6 Kd5
5 Qb5 wins
B: A. C. Miller
Guardian
1 Sd4 d1(Q)
2 d7 Qd5
3 Ba5+ Ke2
4 d8(Q)+ Kh7
5 Qg8+ Kh6
6 Qg6 mate
C: N. D. Grigoriev
Molodaya Gvardia 1925
1 R3 h5
2 R3 h4#
3 Kc5 KxP
4 c6 Kxb3
5 c7 Kxa2
6 c7 a3
7 Qh7+ Kc2
9 Qe6 Kxb2
10 Kc4 Kc2
11 Qe4 Kxb2
12 Qe2 Kxb1
13 Qe1 Kxa2
14 Qb1+ Ka1
15 Kg4 wins
D: A. A. Troitzky
Casopis Ceskoslov. Sachistu
1923
1 Kb4 Kg8
2 Kc4 Kf6
3 Kd6 Kc6
4 c5 Kd8
5 fe=
K: H. Fahrni
Deutsches Wochenschach 1923
1 Ke7 b4 8 Kxb Sd2
2 Qe5+ Kf8 9 Kc6 Kf7
3 Kxf7 Kf3 10 Kd6 SxP
4 Kc6 Kb8 11 Ke6 Sg4
5 Sd4+ Kf6 12 Kd6 h4
6 g7! Ka8 13 Kg2 =
7 Kxd b1(S)
L: T. B. Gorgiev
Izvestia 1928
1 Kd5 ExP 5 Kf5 g3
2 Ke6 Sxg4 6 Kg6 g2
3 PxP Kg4 7 Kh6 =
4 PxP g4
M: V. Kivi
2nd Pr. Revista Romana de Sah 1934
1 Kf6+ Kf1(Q) 5 Bb6+ Ke1
2 Kf6(K) Kf2 6 Kf2
3 Sd5+ Kc3 7 Bxd5 =
4 Bc7+ Kf2
N: H. Rinck
Deutsche Schachzeitung viii. 1904
1 Se7+ QxS 4 Bb5+ Kc8
2 Bd7+ Kxe 5 Ba6+
3 Be4+ Kd7
O: J. Schumer
1 Se4 d1(Q) 4 Rb1+ Kc5
2 Be7+ Kxb 3 Bg1+ Nb4
3 Sg3+ KxP 6 Bde7 =
P: T. C. L. Kok
Avondpost 1935
1 h1 PxB
2 a8(Q) g3 9 Sc6 PxS
3 Sg7 PxS 10 PxP b5
4 f7+ Kf8 11 Kf2 b4
5 f8(S) g4 12 c8(S) b3
6 Sd6+ PxS 13 Sd6 b2
7 d7+ e5 14 Sb5
8 d8(S) e4 15 SxP mate
Review:
Stella Polaris, the new Scandinavian Chess Problem magazine.
Quarterly issues of 24 pages. Annual subscription Sw.Kr. 15 or US$ 3.
As mentioned in EG4 (p. 86) this magazine is under the general editorship of A. Hildebrand and J. Mortensen. Giving particular emphasis to what is Scandinavian in chess, it covers the whole range of problems from orthodox positions and studies to retro-analysis and fairy chess. The first issue (March 1966) gives 156 positions, the second 177; impressive numbers and a feast for the problemist. As regards the studies therefore: No. 1 includes 29 studies (4 originals), of which 18 illustrate a feature by A. Hildebrand, with an abstract on "Positional Draws in Minimal Studies". No. 2 has 12 studies (5 originals). Stella Polaris is running an annual informal composing tourney with book prizes for original problems and endgames, also a study tourney. Comments are in Swedish and Danish.
Address: A. Hildebrand, Postfack, Uppsala 1, Sweden.
W. V.
The Chess Endgame Study Circle
Annual subscription due each July (month vii): £ 1 (or $3.00), includes
E G 5-8, 9-12 etc.
How to subscribe:
1. Send money (cheques, dollar bills, International Money Orders**) direct to the Founder.
** If you remit by International Money Order you must also write to the Founder, because these Orders do not tell him the name of the remitter **
Or
2. Arrange for your Bank to transfer your subscription to the credit of:
A. J. Roycroft Chess Account, Westminster Bank Ltd., 21 Lombard St.,
London EC3.
Or
3. If you heard about E G through an agent in your country you may,
if you prefer, pay direct to him.
New subscribers, donations, changes of address, ideas, special subscription arrangements (if your country's Exchange Control regulations prevent you subscribing directly):
A. J. Roycroft, 121 Colin Crescent, London N W 9, England (Founder).
Study Editor:
H. F. Blandford. 12 Clovelly Drive, Hillside, Southport, Lancashire,
England.
General Editor:
F. S. Valois, 14 High Oaks Road, Welwyn Garden City, Hertfordshire,
England.
"Walter Veitch Investigates"
W. Veitch, 7 Parkfield Avenue, East Sheen, London S W 14, England.
To magazine and study editors: Please arrange to send the complimentary copy of your magazine, marked "E G Exchange" to
C. M. Bent, Black Latches, Inkpen Common, Newbury, Berkshire,
England.
Next Meeting of The Chess Endgame Study Circle Friday 7th April at
St. Bride's Institute, Fleet St. London EC 4, at 6.15 p.m.
"DIY" Meeting (Do-It-Yourself): Bring your own material.
Printed by: Drukkerij van Spijk - Postbox 210 - Venlo - Holland
FROM OUR PRESIDENT
My Dear Sister Presvyteres~
Moving from one season and holiday to another is always a time of joy and confusion in our household. With the leaves changing colors and falling to the ground, we find ourselves preparing our children for the season of Halloween. I love to see the neighbor children come to the door with their parents, and I fondly recall the times when "what to be" for Halloween provided major discussions in our house.
It seems that before the candy that had been collected was all eaten up (or somehow had magically disappeared), the wonderful weekend of Thanksgiving would be upon us. What a truly wonderful day. Thanksgiving is an opportunity for us all, as a family, as a community and together as a nation, to give thanks for our blessings, small and large. This holiday reminds me that the world is really very small, and gives us a special time to reflect on the blessings we receive every day.
As we close the month of November, we find ourselves in the full swing of the holiday season of Christmas. Such a joyous time of year...people opening their hearts to each other, and hopefully taking time to embrace the miracle of the birth of our Lord and Saviour. Through the bustle of the shopping malls, and crowds all around us, I am always thrilled to see the young children participating in programs at church and school and even sometimes in the neighborhoods, proclaiming the Birth of Christ.
With all this in mind I can only share with you my pleasure and excitement over the National Sisterhood. Since our last newsletter we have had our third National Retreat. WOW! What a blast. The leadership, surroundings, and fellowship, as well as the overwhelming welcome, were more than one could ever expect. The ladies of the Pittsburgh Metropolis really provided us with a time we will never forget. Each retreat has been wonderful, and as we grow, we are becoming better at allowing ourselves a weekend of our own. I was thrilled to see so many Sisters taking advantage of this time for themselves and their families. I know we all benefited from our time together, and I am excited and looking forward to the next National Retreat in 2005!
I also would like to tell you about the National Board Meeting that took place in New York City in October at the Archdiocese. The board was invited to lunch and hosted by His Eminence Archbishop Demetrios. It was a glorious lunch that allowed us a chance to share with His Eminence all that is developing within the Sisterhood. His Eminence is a great listener, and after lunch we shared thoughts as well as feelings, which hopefully allowed us all to appreciate the different challenges of being a Presvytera.
It would be remiss of me at this time not to thank all the ladies who work for the betterment of the National Sisterhood. They are your Executive Board Members, your Metropolitan Representatives, as well as those who have taken on the lead as a committee head. Without ALL these individuals beside me I would be lost...so I thank them for all their time, talent, and stewardship to the National Sisterhood, and for all of us as Presvyteres.
In closing and as we anticipate and prepare for the birth of Our Lord and Savior Jesus Christ, I would like to extend to you and your families a very Blessed and Glorious Christmas and a New Year filled with His Grace and Love.
With Much love~ Angie
Martha and Mary
Patron Saints of the NSP
2004 Clergy/Laity Congress
I ♥ NY
Start the planning now! ~
* Ask for the week off of work
* Reserve hotel at the Marriott Times Square
* Think about tacking time on at the beginning or end to see fabulous NYC!
* Get Broadway show or David Letterman tickets on-line
* Check out flight prices vs. driving
We CAN'T WAIT to see you there this summer! The planning has begun to make it a wonderful week for you and your family. If you have any suggestions or would like to help, please contact Angie Constantinides at email@example.com
APC/NSP BENEVOLENT FUND COMING YOUR WAY SOON...
Metropolis News...
FROM THE NORTHEAST
Boston~ submitted by Vicki Toppses, Diocese Rep.
Greetings to all in this new Liturgical year! Many exciting events are happening in our Metropolis.
On March 22, 2003 we had the great blessing of having our Metropolitan Methodios and Chancellor Fr. Athanasios Demos at our Annual Lenten MBSP Retreat held on the grounds of the Boston Metropolis Center. We were delighted to have close to 20 Presbyteras and Diakonissas from many jurisdictions in attendance. His Eminence and Fr. Al gave a wonderful seminar on "Family Matters" focusing on strengthening and nurturing the Clergy Family. The first half of the retreat was an intimate and open discussion in which His Eminence spoke to us concerning the stresses on the Clergy Family. We were all able to speak openly about keeping our families together under great pressures as Priests, Presbyteras, and Priests' children. Together, we were able to provide some options and tools to help our own families and to support other Clergy Families during some challenging times. During the second half of the retreat, Fr. Al spoke on strengthening the relationship of husband and wife as Priest and Presbytera. Fr. Al gave some insightful and charming illustrations on increasing communication between the couple and their families. Included in the retreat was a short MBSP meeting in which we elected our new President. We are honored and privileged to be under the guidance of our new and gifted President, Presbytera Athanasia Papademetriou. We are excited to have such a beloved and experienced Presbytera guiding our Sisterhood.
MBSP is also privileged to have Holy Cross Greek Orthodox School of Theology in our Metropolis. One of our ministries as MBSP is to establish and maintain contact with the wives of the seminarians and to provide a welcoming bridge into their new ministry as future Presbyteras and Diakonissas. On September 20, 2003, the MBSP, in conjunction with the Office of Student Life, hosted the 2nd Annual Welcoming Brunch for these wives, which included a panel discussion on "Life as a Presbytera: What is it like?" We were excited to have our NSP President Angie Constantinides and a selection of "seasoned" and "new" Presbyteras from throughout our Metropolis on the panel. Fifteen ladies were in attendance on November 1, 2003 as we held our Fall MBSP Meeting and Potluck at the Boston Cathedral Center. We had a wonderful time of fellowship and an informative speaker on "The Spiritual and Physical Health of a Christian."
May God be with you all during these great Holidays and His Blessings for the New Year!
In Christ, Presvytera Vicki Toppses
FROM THE SOUTH
Atlanta~ submitted by Correna Panagiotou, Diocese Rep.
Your new executive board, headed by President Elaine Gigicos, has been working hard on some exciting new ideas for our Metropolis sisterhood. We have set up a new sisterhood account and will be asking for your generous stewardship contributions. Our new treasurer is Marinda Tsahakis, and Secretary Flora Moraitis is updating the sisterhood directory. We also hope to have a newsletter coming out very soon. The newsletter editor is Rita Kissal.
Our Clergy Family Ministries team has prepared a survey for our Metropolis Clergy Families to complete. It is a needs assessment inquiry for an upcoming Clergy Couple Enrichment Retreat. The retreat will be held after Pascha 2004. A generous donation has been allocated to help fund this event.
A Clergy Family Retreat hosted by His Eminence, Metropolitan Alexios is being planned at our newly acquired Diakonia Center in South Carolina. Elaine Gigicos, Susan Jacobse, and Christine Saltzman are working with Fr. Hans Jacobse and Fr. Paul Costopoulos in planning this event.
Twenty-two presbyteras attended the Sixth Annual Archangel Michael Feast and Honors Banquet held in Atlanta on November 8. Garnette Vasilakis was one of the honorees; she was chosen by His Eminence, Metropolitan Alexios, for the 2003 Lay Metropolitan's Choice award. Dusty Travelis was also honored with her husband, Fr. Nick Travelis, for the 2003 Metropolitan's Clergy Choice Award.
Congratulations to both Christine Saltzman who has been appointed to serve on the Metropolis Philoptochos Board and to Elaine Gigicos who has been appointed to serve on the Metropolis council.
May God continue to bless you and your families throughout the coming New Year!
FROM THE WEST~
San Francisco~ submitted by Elaine Stephanides, Diocese Rep.
Plans are underway for our annual Presvyteres Retreat which is scheduled to take place on January 23-25, 2004 at St. Nicholas Ranch and Retreat Center in Dunlap, California. The theme for this retreat will be "The Function and Influence of the Presvytera in her husband’s ministry... and thereafter". Throughout the year we have opportunities to get together at various parish events in the different regions, but the annual retreat is the one event which brings Presvyteres from all areas of our Metropolis together in fellowship. We are looking forward to another enjoyable and enlightening program and invite our sister presvyteres from other Metropolises to join us. The retreat center is nestled in the foothills of the Sequoia Mountains outside of Fresno in central California.
The Metropolis of San Francisco Sisterhood now has a link on our Metropolis web site, www.sf.goarch.org. Visit us by clicking on the Ministries/Organizations link.
The Sisterhood continues to promote its wonderful cookbook, "In Presvytera’s Kitchen" which was specially printed and introduced at our National Conference in Los Angeles last year. Books may be ordered through Presvytera Andrea Barakos or any of the Presvyteres of our Metropolis. The books have become popular items in our parish bookstores and Festivals.
Condolences are extended to Fr. Tim and Presvytera Vicki Pavlatos on the death of Vicki’s mother, Presvytera Alva Mahalares. Condolences also to Fr. Michael and Presvytera Kristin Pallad on the death of Kristin’s father, Ed Gooch. Our prayers are with them and their families. May the memory of their beloved parents be eternal.
This year brought two new arrivals to our clergy families. Congratulations to Fr. John and Fotini Roll on the birth of their son, Yianni, and to Fr. Paul and Stephanie Paris on the birth of their son, Lucas.
As we approach the holy season of Christmas, we wish a blessed Nativity to all our Church family. May the new year be blessed with peace, good health, and spiritual joy.
FROM THE EAST
New Jersey~ submitted by Colette Bourantas, Diocese Rep.
Greetings in the Lord!
At this time, we are in the process of discussing with His Eminence Metropolitan Evangelos our upcoming events and programs for our Sisterhood. A schedule for our Metropolis Clergy-Laity Conference and our annual Lenten Retreat will be forthcoming. We look forward to seeing you at the 2004 Clergy-Laity Congress in New York.
I pray that God grants us His blessings as we await the Nativity of our Lord and Savior Jesus Christ.
Your Sister in Christ, Presvytera Colette Bourantas
It has been kind of quiet in the Metropolises since the last newsletter, so the reports are few. If there is something going on in your particular area, let your representative know and she'll report it to all!
On behalf of the NSP board, we wish you a holiday season full of blessings, family time, happiness and peace. May the birth of Christ’s Love fill your home.
Nominating Committee
Although the 2004 Clergy-Laity Congress is months away, it’s not too early to think about NSP Elections. When the National Sisterhood of Presvyteres meets next July, the new NSP Executive Board will be voted into office. These four executive officers will be responsible for heading the Sisterhood for 2004-2006.
If you would like to submit your name or know of another Presvytera whom you would like to recommend for an office, please submit her name to either of us by February 29th. If you do submit someone else’s name, please ask that person if she is willing to accept an office if her name were presented on the slate.
Our NSP Constitution lists the responsibilities of each officer. We have listed them here to show what they are. We hope that many of you will consider running for an office and will submit your name. As your nominating committee, we hope to present a slate of executive officers that will represent a varied cross section of Presvyteres. Our E-mail addresses are: Margaret Orfanakos: firstname.lastname@example.org Helene Hall: email@example.com; and Kay Efstatihu: firstname.lastname@example.org
Responsibilities and Qualifications of the Executive Officers
The President
The President shall be elected to a two-year term.
The President shall be responsible for convening at least one annual meeting of the NSP Board.
The President, after each Archdiocesan Clergy-Laity Congress, shall submit the names of the members of the new NSP Board to the Archbishop and each Diocesan Hierarch.
The President shall be one of the four administrators of the APC (Archdiocesan Presbyters Council) and NSP Benevolent Fund and, along with the APC President, is responsible for generating the annual appeal letter.
The President shall be a member of each standing committee and is responsible for conferring with the committee chairmen of the various NSP projects.
The President shall appoint a Historian to a two-year term. The Historian shall be responsible for preserving the NSP Archives, which include all minutes and other important letters from previous years, and for updating the albums, which record the activities and events of the NSP.
The President shall represent the NSP at Archdiocesan functions whenever possible. If she cannot attend, she shall ask the Vice-President or another representative to attend.
The President shall be an ex-officio member of the Archdiocese Benefits Committee and if appointed by the Archbishop, shall be a member of the Archdiocesan Council.
The President shall oversee that the Constitution is adhered to in all matters concerning the NSP.
The nominee for President shall have served at least one two-year term on the NSP National Board.
The Vice-President
The Vice-President shall be elected to a two-year term.
The Vice-President shall be one of the four administrators of the APC and NSP Benevolent Fund. If so deemed by the four administrators, she shall be responsible for handling the Benevolent Fund Emergency and Reserve Accounts. The Vice-President implements the annual Benevolent Fund appeal and acknowledges each contributor.
The Vice-President may represent the President at any meeting or other function that the President cannot attend.
The Vice-President, if appointed, shall be a member of the National Philoptochos Board.
The nominee for Vice-President shall have served at least one two-year term on the NSP Board.
In the event the President is unable to complete her term, the Vice-President shall serve out the remainder of the President’s term.
The Secretary
The Secretary shall be elected to a two-year term.
The Secretary shall be responsible for keeping the minutes of all NSP Board and NSP Conference meetings.
The Secretary shall be responsible for the NSP Newsletter and mailing list.
The Secretary shall be responsible for sending sympathy cards on behalf of the NSP to the family of a deceased clergy family member.
The Treasurer
The Treasurer shall be elected to a two-year term.
The Treasurer shall be responsible for maintaining the NSP financial account(s).
The Treasurer shall be chairman of the NSP Stewardship and shall maintain a record of donations.
The Treasurer shall submit a report of income and expenditures at each NSP Board Meeting and NSP Conference.
The Treasurer shall be responsible for sending a donation in memory of a recently deceased Priest or Presvytera to the APC/NSP Benevolent Fund.
Widowed
Before long, we will be hearing about plans for the 2004 Clergy-Laity Congress and the National Conference of the Sisterhood of Presvyteres, which will take place in New York City, July 25-29, 2004. Once again the registration for the widowed and retired presvyteres will be complimentary. It is my hope that many of us will plan to attend and participate in the activities of our Sisterhood. It would be nice if a special workshop could be planned for the widowed presvyteres, giving us an opportunity to share thoughts and ideas, and become better acquainted. Your comments and suggestions are most welcome.
I ask that you contact me either by phone or e-mail with your thoughts, concerns, or questions which you would like to see addressed by the national board. Please remember to notify me of any address changes. You can reach me at (949) 770-5078 or by e-mail at email@example.com
I look forward to hearing from you.
Your sister in Christ,
Elaine K. Stephanides
Caregivers
Attention Caregivers
Many caregivers think that caregiving is a responsibility to be shouldered without complaint. Some believe that seeking help or support from others is a sign of weakness. Others think that sharing their burden is inappropriate. Many work endless hours to meet the needs of their loved one while neglecting their own health. However, professional counselors would advise caregivers that none of the above beliefs and actions are beneficial for caregivers or for their loved ones.
Effective caregivers take care of their own physical, emotional, mental and social needs. By so doing, they are better able to meet the needs of their loved one. Effective caregivers seek out support groups which are made up of caregivers like them. Sharing caregiving experiences with other caregivers helps one realize that others are traveling similar paths. Effective caregivers ask for help! The burden is too heavy for one person to carry. After all, if the caregiver becomes ill or exhausted, who will do the caregiving? Everyone suffers.
If you are a caregiver, please contact Alexandra Poulos (2628 Pine Springs Drive, Plano, TX 75093-3567). As a caregiver for her husband, Father John C. Poulos, who passed away in January of '01 from early-onset Alzheimer's Disease, she knows what you may be going through. She wants to hear from you.
NSP OUTREACH COMMITTEES
The Listening Connection
Tina Patitsas
29001 Millard Drive
Bay Village, OH 44140
440.250.0075
firstname.lastname@example.org
Web Site
Pauline Pavlakos
6315 Wilmington Drive
Burke, VA 22015
703.239.2627
email@example.com
Sister-to-Sister
Jeannie Monos
5124 Parnell Way
Martinez, GA 30907
706.854.0749
firstname.lastname@example.org
Caregivers
Alexandra Poulos
2628 Pine Spring Drive
Plano, TX 75093
972.596.1883
email@example.com
Katherine Prassas
Anne Pappalas
Vicky Toppses
DaLin Morrow
Marsha Robinson
Welcome Box Reflections
"After a very long and hard week, I was feeling pretty down and in my mind was questioning many things in my life. As I came home from work that day and walked up the steps to my porch, I noticed a package sitting there. It was wonderfully decorated and had my name on it. This alone started to bring a smile to my face, because who does not enjoy receiving packages? I rushed inside to open it, to find that it was a 'Welcome Box' from NSP! It had all sorts of goodies in it and a handwritten welcome note. As I looked at everything, I realized what thoughtfulness went into all of these things. It made me feel so special to think that this group of ladies, who did not even know me, took such effort to make me feel welcome. I already felt like a 'part' of this group. As I sat there looking through everything, and read about all the great things NSP does, tears came to my eyes. I realized that although there are still many questions in my life, one thing was for sure. This was where I belonged. And God made sure my special box came that very day to remind me..."
Diakonissa Kristina Gzikowski
SISTER TO SISTER
Big Sisters Needed
After the welcome box is mailed, we need YOU to follow up as a Big Sister. Just a phone call or email to check if the box arrived... A few follow up calls...invite your little sister to the next Metropolis gathering or retreat!
PLEASE sign up to be a Big sister!
To be a Big Sister, please contact Vicki Toppses firstname.lastname@example.org or (207) 784-5632.
To learn more about the Sister to Sister program, please contact Jeannie Monos email@example.com or (706)854-0749. Thank you!
THREE WISE WOMEN WOULD HAVE...
Asked directions
Arrived on time
Helped deliver the baby
Cleaned the stable
Made a casserole
Brought practical gifts and...
THERE WOULD BE PEACE ON EARTH
NSP Retreat Reflection
Presvyteres Find Fellowship and Balance at National Retreat
Thirty-six presvyteres from across the Archdiocese gathered over the weekend of October 10-13 for a relaxing and spiritually enriching retreat. This 3rd biennial retreat was held at the Antiochian Village, located in the majestic Laurel Mountains of Western Pennsylvania. His Eminence Metropolitan Maximos and the Pittsburgh Metropolis Sisterhood of Presvyteres hosted the retreat.
Fr. Nicholas Krommydas, of St. Demetrios Church in Weston, Massachusetts, was the retreat master. Fr. Nick led the participants in a journey of self discovery with the theme "Walking a Tightrope: Balancing God's Call With our Desire to be." Fr. Nick discussed personality types and stressed that everyone has her own "style." To guide the participants toward balance, Fr. Nick employed a life style balance assessment worksheet and discussed the destructive power of expectations. Along with renewing seasoned friendships and making new friends, the participants were able to explore and share their concerns, issues, and strengths all with the goal of a more balanced life style.
After the retreat, the attendees' email mailboxes were flooded with heartwarming reflections on the weekend:
"Most of the time, I don't think I need a retreat or the fellowship, but I always walk away feeling renewed and reassured, and knowing that, on my tightrope, I am not alone."
"I wanted to thank all of you for all of your support this past weekend. It was truly exactly what I needed and when I needed it. Your love and support has shown, reaffirmed and helped me to remember who I am, who my husband is, and what our calling is."
"If I were asked to pick my favorite part of the retreat, I would have to admit that there wasn't any "favorite" part, because each moment has become a treasured memory."
"Getting to know you all helped me to feel more a part of the larger work of God in our midst."
"I think more than anything I appreciated the balance of the activities: the glorious worship, the in-depth teaching, the laughter and the free time. The back massagers and wine tasting party weren't bad either!"
"As a new diakonissa, I can't tell you how much it meant to find such love and support among sisters. It is encouraging to know that there are so many friends to go to as this part of my life is beginning."
The biennial retreat is quickly growing to be one of the highlights of the Sisterhood. The next retreat will be held in October 2005. We hope to see you there!
Confidential Assistance
Have a concern?
Call the CAP Hotline
24 hours a day...
7 days a week....
NSP WEBSITE
www.nsp.goarch.org
Webmaster, Pauline H. Pavlakos
The NSP website (www.nsp.goarch.org) has a new page entitled Clergy Family Corner. This page is a forum for all of our clergy family members (married, celibate, retired, widowed, divorced) to share their news with the sisterhood. Please submit your information to Pauline Pavlakos at firstname.lastname@example.org and it will be posted immediately.
Each Metropolis is encouraged to submit information relating to their Metropolis. This will be posted on the "What's New?" page. Approximately 250 e-mail addresses are listed on our e-mail page. Don't forget that you can access communications from our NSP President, Angie Constantinides on the "From Our President" page. Check the Prayer and Memorial pages so that you can offer prayers for our sister presvyteres. Our Constitution, Food for Thought pamphlet, Clergy Assistance Program information, The Listening Connection information, and much more can be accessed on the website. Please visit often.
3rd Biennial National Sisterhood of Presvyteres Retreat
Tina Patitsas shares her talent.
His Eminence, Metropolitan Maximos
NSP 2003 RETREAT
“Walking a Tightrope”
Fr. Nicholas Krommydas
Retreat Master
...and our Boston attendees.
Antiochian Village in Bolivar, Pennsylvania
October 10-13, 2003
Miss Pennsylvania and the top five runners up...
Sunday Night Bonfire
Sunday evening worship
It looks like DaLin is sure enjoying the wine tasting Sunday evening!
hey.... can we get a ride from ya?
Dear Sister Presvyteres,
Glory be to Jesus Christ.
I enjoyed meeting with the NSP board in September and seeing all the happy presvyteres at the retreat in Ligonier in October. I want to express my gratitude and appreciation to those who did so much to put those two events together, and to those who participated with love.
TLC is enjoying steady, well-paced growth. Little by little we are all learning about the benefits of having a non-judgmental listener (who is a presvytera) available to you, and our listeners are starting to get phone calls. It's good news when we learn that our members are using this program, and I hope that one day I'll be able to get some feedback from those who have called either Katherine or Anne.
Our Policy and Guidelines booklet is printed and available, if you are interested in knowing more about TLC's program. I can send you copies and I hope to get copies to every diocesan president so that they can encourage those who wish to become listeners. We also have a brochure that we can hand out during Clergy Laity.
Now that the structure of TLC is in place, it frees us up here at the "TLC headquarters" to pursue other ways of promoting this program and helping our members. Katherine and I are talking with our board about doing a TLC workshop for Clergy Laity. The board has some new ideas for what our workshops can look like in 2004, and there are many topics of interest that dovetail nicely with the objectives of TLC which we would like to explore. For instance, marriage, family life, raising children, parish life, the focus of the presvytera, and prayer are just the few that come to mind. We hope to find someone from Equipping Ministries International to come to NYC this summer and lead the workshop. If that is not possible, we will research the next best options for a high quality workshop that can serve as an enriching time for our participants. Kathryn has lots of experience in this area and so do I, and we want to provide well for you.
We may also be asking for your input on what you would like to see in a Clergy Laity workshop. Pauline Pavlakos has added a TLC page in our NSP website with a list of our listeners and the times they are available. She will also post the TLC messages that we send over the internet with excerpts from writings of Church fathers, listening skills texts, or anything you may submit to share. With her expertise, perhaps we can convince her to include a survey that you can complete over the internet to help us with ideas for the workshops or for our ministry in general.
I wish for all of you a blessed feast of the Nativity and a healthy and happy New Year for you and yours.
With all the love that the Lord will allow me to share,
Kathryn Tina Patitsas,
From Listening for Heaven's Sake
As believers, we see the heart of another human being as the "Holy of Holies," where the Spirit of God resides. Those of us who dare to draw near must, like Moses, take off our shoes, for we are treading on holy ground. The priest in the temple did not rush into the Holy of Holies, lest he be struck dead. Instead he prepared carefully and slowly to approach. So must we go slowly into the deep heart of a child of God. Don't rush in and disturb.... Go gently, bring peace
From Perceptive Listening
10 misconceptions about listening:
1) listening is a matter of intelligence
2) good hearing and good listening are closely related
3) listening is an automatic reflex
4) daily participation eliminates the need for training
5) learning to read will automatically improve listening
6) learning to read is more important than learning to listen
7) the speaker is totally responsible for success in oral communication
8) listening is essentially a passive activity
9) listening means agreement
10) consequences of careless listening are minimal
Dear Sisters,
I pray you are all well and preparing for a glorious Nativity.
Thank you to every sister who has made a stewardship donation. As you are probably aware, stewardship is the only means we have of supporting our various programs, such as Sister to Sister, The Listening Connection, the newsletter, memorial donations to the APC-NSP Benevolent Fund, the archives, etc. As the end of the year approaches, please consider making a stewardship donation for 2003 if you have not already done so. Many of you have contributed for 2004 and some have even contributed for 2005 and 2006. Thank you.
Sisters who have made a stewardship donation through November 30th are listed below. If you made a donation and your name is not listed, please contact me. Our family moved across the country this year and it is possible that some mail did not reach me. It is also possible that I made a clerical error and did not list you, even though every attempt was made to be accurate.
Please mail your stewardship donation to Pauline Pavlakos, 6315 Wilmington Drive, Burke, VA 22015. Your cancelled check will be your receipt unless you specifically ask for a printed receipt. Sisters for whom I have an e-mail address will receive an e-mail notifying them that their stewardship has been received, so please remember to fill in that portion of the stewardship form. You can check the Stewardship Page (www.nsp.goarch.org/stewards.html) at any time to ensure that your donation has been received.
Christ is born! Glorify Him!
### 2003 Stewards
1. Eleni Alexopoulos
2. Joanna D. Alexson
3. Georgia T. Alikakos
4. Jane Andrews
5. Helen Anthony
6. Celia (Vasilia) Apostolakos
7. Nikie Artemas
8. Anastasia Bandy
9. Zafera Bartz
10. Filisa H. Bender
11. Colette C. Bourantas
12. Georgia Champion
13. Angelyn Christon
14. Penelope S. Dassouras
15. Eleftheria T. Degaitas
16. Eleni C. Demetri
17. Maria Diacovasilis
18. Kay Efsthathiu
19. Angela Eugenis
20. Donna M. Falcinella-Pappas
21. Alyce Costas Gaines
22. Helen T. Hall
23. Mary Harbatis
24. Photini Henderson
25. Maria Hondros
26. Olga H. Hountalas
27. Catherine Kalariotes
28. Christine Kehayes
29. Maria Kourrouvasiils
30. Lula G. Latto
31. Mary T. Leondis
32. Nicole H. Leong
33. Despina Leventis
34. Elizabeth Limberakis
35. Georgia Magoulias
36. Marilyn C. Magoulias
37. Stella Mamangakis
38. Anna Mathews
39. Penelope Metaxas
40. Valerie D. Metrakos
41. Irene D. Missiras
42. Evangeline Moulketis
43. Carolyn Nastos
44. Vangie E. Orfanakos
45. Theodora Paleologos
46. Correna A. Panagiotou
47. Katina Pantelis
48. Cynthia Papanikolaou
49. Vaso Paris
50. Tina Patitsas
51. Alexandra P. Poulos
52. Valerie Pyle
53. Claire Rafael
54. Olga Rafael
55. Yvonne Raptis
56. Elli Retselas
57. Vasiliki L. Rousakis
58. Laina Savas
59. Stella C. Sifikas
60. Bertha (Panayiota) Shiepis
61. Katherine Sietsema
62. Val (Georgia E.) Sitaras
63. Mary Soteropoulos
64. Anna Spiritos
65. Poppy Stamatos
66. Ranae Strzelecki
67. Faye Stylianopoulous
68. Argie Talagan
69. Stephanie Thomas
70. Sophronia N. Tomaras, Ph.D.
71. Becky Touloumnes
72. Rose Treantafeles
73. Joanna A. Tsandikos
74. Vasiliki Tzougros
75. C. Betty Vlahos
76. MaryAnn Wingenbach
77. Anne E. Zervos
### 2004 Stewards
1. Joan H. Bogdan
2. Flora Chioros
3. Joanna Christofidis
4. Sapfo Clapsis
5. Angie Constantinides
6. Harriet Delfos
7. Mary Demotses
8. Ellie Dogias
9. Goldie Doukas
10. Elaine Gigicos
11. Michele Kontos
12. Louella E. Kostopolos
13. Elaine I. Krommydas
14. Dianthe Livanos
15. Anne Macris
16. Jeannie Monos
17. Diana Moskovites
18. Helen Moskovites-Kehagias
19. Margaret Orfanakos
20. Dimitra Pappademetriou
21. Mary P. Pappas
22. Patricia Petropoulakos
23. Kari Rousakis
24. Mary K. Scoulas
25. Joan T. Simones
26. Hrisafie M. Sophocles
27. Gigi K. Souritzidis
28. Elaine Stephanides
29. Paula Strouzas
30. Helen Suhayda
31. Vicki Toppses
32. Pearl Veronis
33. Harriet P. Wilson
34. Ann Yankopoulos
### 2005 Stewards
1. Vivian C. Bacalis
2. Mary C. Christy
3. Penelope Metaxas
4. Evelyn C. Mitsos
5. Evangeline Pappas
6. Christina D. Papulis
7. Nancy Serviou
### 2006 Stewards
1. Maria T. Antokas
2. Joan Orfanakos
3. Pauline H. Pavlakos
4. Eleftheria Recachinas
5. Sophia M. Schutte
Please mail your stewardship donation, noting the year(s) to which it should apply, to:
Pauline Pavlakos, Treasurer
National Sisterhood of Presvyteres
6315 Wilmington Drive
Burke, VA 22015
National Sisterhood of Presvyteres National Board
www.nsp.goarch.org
Angie Constantinides, President
12246 St. James Rd
Potomac, MD 20854
301.738.2002
301.469.5945 fax
email@example.com
Flora Chioros, Vice President
120 James Landing Road
Newport News, VA 23606
757.597.7786
firstname.lastname@example.org
Pauline Pavlakos, Treasurer
6315 Wilmington Drive
Burke, VA 22015
703.239.2627
703.239.2628 fax
email@example.com
Jane Andrews, Secretary
412 Macalester Street
Saint Paul, MN 55105
651.695.1436
firstname.lastname@example.org
Margaret Orfanakos, Past President & Advisor
93 Tall Oaks Drive
Wayne, NJ 07470
973.696.7097
973.696.7378 fax
email@example.com
DIOCESAN REPRESENTATIVES AND LIAISONS
Archdiocesan District
open
Atlanta Diocese
Jeannie Monos
5124 Parnell Way
Martinez, GA 30907
firstname.lastname@example.org
706-854-0749
Boston Diocese
Vicki Toppses
33 Rejune Avenue
Lewiston, ME 04240
207-784-5632
email@example.com
Chicago Diocese
Tulla Poteres
1234 Knighthood Drive
Dyer, IN 46311
219.865.8372
firstname.lastname@example.org
Denver Diocese
open
Detroit Diocese
Goldie Doukas
59 Rankin Road
Snyder, NY 14226
716.839.3725
email@example.com
New Jersey Diocese
Colette Bourantas
924 Morningdale Road
Wilmington, DE 19810
302.475.7242
firstname.lastname@example.org
Pittsburgh Diocese
Dianthe Livanos
162 Brehm Road
Washington, PA 15301-9653
724.745.2702
email@example.com
San Francisco Diocese
Elaine Stephanides
25081 Grissom Road
Laguna Hills, CA 92653
949.770.5078
firstname.lastname@example.org
Retired Presvyteres
Ione Filandrinos
12532 Parkwood Drive
Burnsville, MN 55337
952.882.8016
email@example.com
Widowed Presvyteres
Elaine Stephanides
25081 Grissom Road
Laguna Hills, CA 92653
949.770.5078
firstname.lastname@example.org
Historian
Carolyn Nastos
6015 Chagall Drive
Roanoke, VA 24018
540.772.2730
email@example.com
Jane Andrews, Mailing List
National Sisterhood of Presvyteres
412 Macalester Street
Saint Paul, MN 55105
Address correction requested
Myofascial trigger points in shoulder pain prevalence, diagnosis and treatment.
Bron, C.
2011, Dissertation
MYOFASCIAL TRIGGER POINTS IN SHOULDER PAIN
prevalence, diagnosis and treatment
Carel Bron
MYOFASCIAL TRIGGER POINTS IN SHOULDER PAIN
PREVALENCE, DIAGNOSIS AND TREATMENT
A scientific essay in the field of Medical Sciences
Doctoral thesis
to obtain the degree of doctor
from Radboud University Nijmegen
on the authority of the Rector Magnificus, Prof. mr. S.C.J.J. Kortmann,
according to the decision of the Council of Deans,
to be defended in public on
Tuesday, 19 April 2011 at 10:30 hours precisely
by
Carel Bron
born on 13 December 1956
in Winschoten
Supervisors
Prof. dr. R.A.B. Oostendorp
Prof. dr. M. Wensing
Manuscript Committee
Prof. dr. C. van Weel
Prof. dr. P.L.C.M. van Riel
Prof. dr. L.A.L.M. Kiemeney
Prof. dr. P.U. Dijkstra, Rijksuniversiteit Groningen
Prof. dr. R.L. Diercks, Rijksuniversiteit Groningen
# CONTENTS
| Chapter | Title | Page |
|---------|----------------------------------------------------------------------|------|
| 1 | Introduction | 7 |
| 2 | Myofascial trigger Points. An evidence-informed review | 17 |
| 3 | Interrater reliability of palpation of myofascial trigger points in three shoulder muscles | 49 |
| 4 | Treatment of myofascial trigger points in common shoulder disorders by physical therapy. A randomized controlled trial. Study protocol | 71 |
| 5 | High prevalence of myofascial trigger points in patients with shoulder pain | 87 |
| 6 | Treatment of myofascial trigger points in patients with chronic shoulder pain: A randomized controlled trial | 109 |
| 7 | General discussion | 135 |
| 8 | Summary | 147 |
| 9 | Samenvatting (summary in Dutch) | 153 |
| | Dankwoord/ Acknowledgements | 161 |
| | Curriculum vitae and publications | 165 |
Many patients have suffered grievously and needlessly because a series of clinicians unacquainted with myofascial trigger points erroneously applied the psychogenic label to them covertly if not overtly.
Dr. Janet Travell (1901–1997) and Dr. David Simons (1922–2010)
1 INTRODUCTION
Incidence and prevalence of shoulder pain
Shoulder pain is a very common musculoskeletal disorder. In primary care, the yearly incidence is estimated to be 14.2 per 1000 people. The one-year prevalence in the general population is estimated to be 20 to 50%. These estimates are strongly influenced by, for example, the definition of shoulder disorders (including or excluding limited motion), age, gender, and anatomic area. Thus, shoulder pain is widespread and imposes a considerable burden on the affected person and on society. Women are slightly more affected than men, and the frequency of shoulder pain peaks between 46 and 64 years of age.\(^1\) People at high risk of shoulder pain include those working as cashiers, garment workers, welders and bricklayers, as well as those who work with pneumatic tools or in the meat industry. Hairdressers, plasterers, assembly workers, packers and people who work for long hours at computers, such as secretaries and programmers, are also at high risk.\(^2\) Shoulder pain tends to be persistent or recurrent.\(^3\) Between 22 and 46% of patients who visit a medical practitioner because of shoulder pain report a history of a previous pain episode.\(^{1, 4}\) Six months after the initial medical consultation, and despite medical treatment, persistent shoulder symptoms have been reported in up to 79% of patients. Of those with persistent symptoms, more than half typically do not seek any additional treatment.\(^{4, 5}\)
Definition of shoulder pain, shoulder complaints and shoulder disorder(s)
Shoulder pain, shoulder complaints and shoulder disorders are frequently used terms and appear synonymous. According to the online version of the Oxford dictionary, a disorder is defined as a disruption of normal physical or mental function, a complaint as an illness or medical condition (especially a relatively minor one), and pain as physical suffering or discomfort caused by illness or injury (http://oxforddictionaries.com, accessed October 2010). It is clear from these definitions that there is a certain overlap between the terms. In this thesis, we will use the term shoulder pain.
Most shoulder pain is caused by a small number of relatively common conditions. One of the most common causes of shoulder pain is thought to be the subacromial impingement syndrome (SIS). This syndrome includes tendonitis or tendinopathy of the rotator cuff and the long head of the biceps brachii muscle, or subacromial or subdeltoid bursitis. Other, less common causes of shoulder pain are tumors, infections and nerve-related injuries.\(^{6, 11}\)
The clinical picture
The main clinical feature of SIS is pain, which is mostly localized at the front and lateral side of the shoulder and halfway up the upper arm, sometimes radiating past the elbow to the radial side of the hand. Pain may already be present at rest but will definitely occur or increase in severity during or shortly after movement. It is especially painful when reaching forward, sideways or above the head, or when putting the hand behind the back. The patient may wake up frequently during the night because of the pain caused by lying on the affected shoulder, but also while sleeping on the unaffected side. The patient may display a so-called painful arc during abduction:\(^{12}\) the first part (0 to 60°) often progresses without pain, the middle part (60 to 120°) is painful, and the last part (120 to 180°) is again without pain or at least much less painful. Due to these impairments, patients are often limited in their daily activities, including work, leisure and sports.
**Inflammation**
Steinfeld et al. proposed that up to 90% of all shoulder pain is related to local inflammation of the subacromial soft tissue.\(^{13}\) However, Khan et al. found that there is a lack of evidence that pain is related to inflammation of tendons.\(^{14}\) Light microscopy in patients operated on for tendon pain revealed collagen separation, with thin, frayed and fragile tendon fibrils separated from each other lengthwise and disrupted in cross section. Although there was an apparent increase in tenocytes with myofibroblastic differentiation (tendon repair cells), the classic inflammatory cells were usually absent. Therefore, they proposed to abandon the term tendinitis and replace it with tendinopathy.\(^{14, 15}\)
**Rotator cuff degeneration and other structural abnormalities**
Partial or full thickness ruptures of the rotator cuff tendons are very common and their prevalence increases with age.\(^{16, 17}\) Rotator cuff tears are seen as often in symptomatic as in asymptomatic subjects.\(^{18}\) The size of the tear does not correlate with pain intensity or level of disability.\(^{19}\) Therefore, it is uncertain to what extent rotator cuff ruptures cause shoulder pain. Other abnormalities seen on magnetic resonance imaging (MRI) and ultrasonography (US), including osteophytes and subacromial and joint fluid, are often seen in asymptomatic high-level athletes and do not predict shoulder pain or disability.\(^{20-23}\)
**Etiology**
In 1972, Neer\(^{24}\) described SIS as a distinct clinical entity, although Jarjavay had first recognized subacromial disorders in 1867, when he described a few cases of subacromial bursitis.\(^{25}\) Neer hypothesized that the anterior third of the acromion, the coracoacromial ligament and the acromioclavicular joint impinge upon the insertion of the supraspinatus tendon into the greater tubercle. He also postulated that osteophytes within the coracoacromial ligament lead to tearing of the rotator cuff tendons. This is referred to as outlet stenosis or external impingement (see below for internal impingement).
Table 1: Neer’s classification of SIS
| Stage | age (years) | Findings |
|-------|-------------|------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | < 25 | Shoulder pain is experienced that corresponds to the explanation originally provided by Neer but no abnormalities can be found by modern imaging techniques. These complaints are often explained as acute inflammation of the subacromial structures. |
| 2 | 25 to 40 | It is assumed that the pain is caused by a chronic inflammation of the subacromial structures. This is associated with edema formation and minor hemorrhage. |
| 3 and 4 | > 40 | It is possible to detect abnormalities through medical imaging techniques, namely partial (stage 3) or full thickness (stage 4) ruptures and the formation of osteophytes, especially on the undersurface of the anterior portion of the acromion. |
It is apparent from Table 1 that there is a chronological order between the four stages. Several studies have shown that there is a strong association between age and rotator cuff rupture, indicating that ruptures of the rotator cuff tendons become more prevalent with increasing age, while the association between rotator cuff ruptures and pain intensity and dysfunction seems to be absent \(^{16, 17, 19, 26-28}\).
A further distinction is made between primary and secondary SIS. Only in primary SIS does imaging reveal abnormalities comparable to Neer's stage 3; secondary SIS is defined by the same symptoms but without demonstrable abnormalities, which is comparable to Neer's stages 1 and 2.
Secondary SIS can be defined as a relative decrease in subacromial space as a result of instability of the shoulder \(^{29}\). This instability is described as being subtle, mild, minor, occult or functional \(^{30-33}\). It is believed that this level of instability cannot be identified by clinical tests or medical imaging techniques \(^{29}\). This kind of SIS is mostly seen in younger patients (< 40 years), who are often active in sports. The theory behind this concept comes from Jobe et al. \(^{30, 34}\) who hypothesized that a combination of shortening of the posterior capsule and instability of the anterior capsuloligamentous complex could lead to compression of subacromial tendons and bursae. This hypothesis has never been confirmed. Recently, Poitras et al. found that experimental shortening of the posterior capsule in cadavers did not lead to an increase of subacromial pressure \(^{35}\).
A third distinction is made with external and internal impingement. Walch et al. first identified internal impingement during shoulder arthroscopy \(^{36}\). Individuals presenting with posterior shoulder pain brought on by positioning of the arm at 90° of abduction and
90° or more of external rotation, typically from overhead positions in sport or industrial situations, may be considered potential candidates. McFarland et al. have argued against this and consider the contact between the undersurface of the rotator cuff tendons and the glenoid rim to be purely physiological and not pathological.\textsuperscript{37} It is worth mentioning that with the arm at 90° of abduction and 90° or more of external rotation, the subscapularis muscle is under stretch and may contribute to shoulder pain during this maneuver; according to Simons et al., referred pain from the subscapularis muscle is located at the back of the shoulder.\textsuperscript{38}
**Physical examination and clinical tests**
A few orthopedic tests have been described with regard to SIS. The Neer test, Hawkins-Kennedy test, empty can (Jobe) test, and the painful arc test are specifically designed to assess subacromial impingement, while the external rotation lag sign, drop arm test, supine impingement test, and belly press test are designed to detect rotator cuff tears.\textsuperscript{39} In general, the results of these tests should be interpreted with caution: they cannot reliably establish whether subacromial impingement is present in patient groups that have not been pre-selected. The most reliable tests are the painful arc test, the empty can (Jobe) test, and the external rotation-against-resistance test for detecting rotator cuff tears, while tests for impingement without rotator cuff tears are worthless for diagnostic purposes. Notably, the sensitivity of a test increases with the severity of SIS, and the highly sensitive tests tend to have low specificity values while the highly specific ones tend to have low sensitivity values.\textsuperscript{40-53}
**Imaging**
The options for imaging various tissues in the body have increased significantly in recent decades. Thanks to plain radiography, diagnostic US and MRI, it is possible to detect the presence of structural abnormalities in the shoulder. However, detecting abnormalities in patients with shoulder pain does not guarantee that the abnormalities are actually responsible for the pain. Research in which groups of volunteers without shoulder pain are examined in a similar fashion can provide insight into the importance of the abnormalities demonstrated in patients with shoulder pain. Using MRI, partial (stage 3) or full thickness (stage 4) ruptures were found in 34% of a group of 96 volunteers with no shoulder pain.\textsuperscript{18} In another MRI study, 42 patients with shoulder pain and 31 subjects without were compared. Rotator cuff ruptures were found in over 50% of cases, in the shoulders of subjects with pain as well as in those without.\textsuperscript{26} The authors concluded that there was a significant relationship between age and the occurrence of ruptures, but no relationship was found between pain and the presence of rotator cuff ruptures. In an MRI study of the shoulders of professional baseball pitchers (n=14) without symptoms of shoulder pain, little or no difference was found between the pitching arm and the non-pitching arm.\textsuperscript{20} In approximately 80% of the cases, rotator cuff ruptures and labral injuries were found in both shoulders, and acromial osteophytes were observed in half of the players. One throwing athlete had a so-called SLAP (superior labrum from anterior to posterior) tear in both shoulders. A comparable study with asymptomatic high-level athletes (baseball and tennis) (n=20) also revealed a high incidence of ruptures.\textsuperscript{21} In this study, fluid in the subacromial space (19 of the 40 shoulders) and in the glenohumeral joint (36 of the 40 shoulders) was also reported. Based on these data, it seems reasonable to be cautious and not necessarily conclude that abnormalities found during imaging can fully explain the pain in individual patients.
**Interventions**
While many interventions have been employed for shoulder disorders, including steroid injections, non-steroidal anti-inflammatory drugs (NSAIDs) and other painkillers, surgery, physical therapy, manual mobilization and manipulation, acupuncture, and low level laser therapy, scientific evidence of their efficacy is limited.\(^{56-70}\) Physical therapy is often the first choice in the management of shoulder pain and may consist of various treatment modalities, such as exercise therapy, massage therapy, muscle stretching exercises, or ultrasound.\(^{71-74}\) Although these interventions are frequently administered, their efficacy has not been established.
**Myofascial trigger points and shoulder pain**
Simons et al.\(^{78}\) claim that "neither impingement syndrome nor rotator cuff disease, as each term is commonly used, is a specific or satisfactory diagnosis" (page 545). As mentioned before, inflammation of subacromial structures is not very common in shoulder pain, which may explain the limited effect of steroid injections and NSAIDs. Narrowing of the subacromial space may result in degenerative changes of the rotator cuff, but not in inflammation. Since these degenerative changes occur as often in asymptomatic as in symptomatic subjects, they again may not explain the pain and disability in patients. Physical examination, including specific tests for subacromial impingement, does not take into account that the muscles surrounding the shoulder are stressed along with other structures, and that these muscles, rather than the tendons or bursae, may produce the shoulder pain. Although the pain is felt deep in the shoulder and clinicians locate it in the subdeltoid or subacromial region, it might come from painful muscle tissue remote from the place where it is felt.\(^{75-79}\) Finally, until recently, MRI and US did not reveal abnormalities within muscle tissue other than intramuscular ruptures. However, MRI combined with elastography and high-resolution US have shown tissue changes that are characteristic features of myofascial trigger points (MTrPs). This makes the concept of myofascial pain caused by MTrPs more acceptable to physicians and therapists.
**Problems studied in this thesis**
This thesis aims to contribute to the knowledge of the role of MTrPs in shoulder pain. In our physical therapy practice, we treat patients with shoulder pain using a comprehensive therapy approach specifically aimed at treating MTrPs. Although patients and therapists
have been satisfied with our treatment for many years, we felt a need to study the effectiveness in a methodologically well-designed study, which was the main motivation for the research presented in this thesis. If effectiveness can be proven, continuation and possibly wider implementation of the comprehensive therapy targeted at MTrPs would be recommendable.
**Objectives of the thesis**
The aim of this thesis was to determine the importance of MTrPs in patients with chronic unilateral shoulder pain. We wanted to explore three major questions:
- Can we reliably identify MTrPs in shoulder muscles under controlled conditions?
- How common are MTrPs in patients with chronic shoulder pain?
- What is the effectiveness of treatment of MTrPs in patients with chronic unilateral shoulder pain?
**Outline of the thesis**
This thesis consists of three studies: an interrater reliability study, an observational study, and a randomized controlled trial conducted in a primary care physical therapy practice specializing in musculoskeletal disorders of the arm, shoulder and neck.
*Chapter 2* provides an evidence-informed review of the current scientific understanding of MTrPs with regard to their etiology, pathophysiology and clinical implications.
*Chapter 3* presents the results of an interrater reliability study of a sample of three shoulder muscles which, according to our daily clinical experience, are of importance in patients with shoulder pain.
*Chapter 4* presents the design of the randomized controlled trial, evaluating the effectiveness of a physical therapy treatment in patients with unilateral non-traumatic chronic shoulder pain. All subjects had unilateral shoulder pain for at least six months and were referred to a physical therapy practice specializing in musculoskeletal disorders of the neck, shoulder and arm. After the initial assessment, patients were randomly assigned to either an intervention group or a control group (wait and see).
*Chapter 5* presents the results of an observational study that aimed to assess the prevalence of muscles with MTrPs and their potential impact on patients with chronic non-traumatic unilateral shoulder pain. Subjects were recruited from patients included in a clinical trial studying the effectiveness of physical therapy treatment in patients with unilateral non-traumatic shoulder pain.
*Chapter 6* presents the results of a single blinded randomized controlled trial. We assessed the outcome in a group of patients with shoulder pain who received comprehensive treatment given by a physical therapist for 12 weeks and compared this with the outcome in a comparable group with patients who remained on a waiting list for 12 weeks.
Finally, *Chapter 7* provides the general discussion and summary of the results.
References
1. van der Windt DA, Koes BW, de Jong BA, Bouter LM. Shoulder disorders in general practice: incidence, patient characteristics, and management. Ann Rheum Dis 1995, 54: 959-964.
2. Bongers P. The cost of shoulder pain at work. BMJ 2001, 322(7278): 64-65.
3. Ginn KA, Cohen ML. Conservative treatment for shoulder pain: prognostic indicators of outcome. Arch Phys Med Rehabil 2004, 85: 1231-1235.
4. Croft P, Pope D, Silman A. The clinical course of shoulder pain: prospective cohort study in primary care. Primary Care Rheumatology Society Shoulder Study Group. BMJ 1996, 313(7057): 601-602.
5. Karels CH, Bierma-Zeinstra SM, Burdorf A, Verhagen AP, Nauta AP, Koes BW. Social and psychological factors influenced the course of arm, neck and shoulder complaints. J Clin Epidemiol 2007, 60(8): 839-848.
6. Bigliani LU, Levine WN. Current concepts review: Subacromial impingement syndrome. J Bone Joint Surg Am 1997, 79(12): 1854-1868.
7. Hawkins RJ, Hobeika PE. Impingement syndrome in the athletic shoulder. Clin Sports Med 1983, 2(2): 391-405.
8. Koester MC, George MS, Kuhn JE. Shoulder impingement syndrome. Am J Med 2005, 118(5): 452-455.
9. Lotz D, Lotz S, Reilmann H. [The subacromial-syndrome: Diagnosis, conservative and operative treatment]. Unfallchirurg 1999, 102(11): 870-887.
10. Mayerholer ME, Breitenseher MJ. [Impingement syndrome of the shoulder]. Radiologe 2004, 44(6): 569-577.
11. Uthhoff HK, Hammond DL, Sarkar K, Hooper GJ, Papoff WJ. The role of the coracoacromial ligament in the impingement syndrome: A clinical, radiological and histological study. Int Orthop 1988, 12: 97-104.
12. Cook CE, Hegedus EJ. Orthopedic Physical Examination Tests: An Evidence-based Approach, first edn. Upper Saddle River, NJ: Pearson Prentice Hall Health, 2008.
13. Steinfeld R, Valente RM, Stuart MJ. A commonsense approach to shoulder problems. Mayo Clin Proc 1999, 74(8): 785-794.
14. Khan KM, Cook JL, Kannus P, Maffulli N, Bonar SF. Time to abandon the "tendinitis" myth. BMJ 2002, 324(7338): 626-627.
15. Scott A, Khan K, Roberts CR, Cook JL, Duronio V. What do we mean by the term "inflammation"? A contemporary basic science update for sports medicine. Br J Sports Med 2004, 38: 372-380.
16. Milgrom C, Schaffler M, Gilbert S, van Holsbeeck M. Rotator-cuff changes in asymptomatic adults: The effect of age, hand dominance and gender. J Bone Joint Surg Br 1995, 77-B: 296-298.
17. Reilly P, Macleod I, Macfarlane R, Windley J, Emery RJ. Dead men and radiologists don't lie: a review of cadaveric and radiological studies of rotator cuff tear prevalence. Ann R Coll Surg Engl 2006, 88(2): 116-121.
18. Sher JS, Uribe JW, Posada A, Murphy BJ, Zlatkin MB. Abnormal findings on magnetic resonance images of asymptomatic shoulders. J Bone Joint Surg Am 1995, 77(1): 10-15.
19. Krief OP, Huguet D. Shoulder pain and disability: comparison with MR findings. AJR Am J Roentgenol 2006, 186: 1234-1239.
20. Miniaci, Mascia, Salonen, Becker. Magnetic resonance imaging of the shoulder in asymptomatic professional baseball pitchers. Am J Sports Med 2002.
21. Connor PM, Banks DM, Tyson AB, Coumas JS, D'Alessandro DF. Magnetic Resonance Imaging of the Asymptomatic Shoulder of Overhead Athletes: A 5-Year Follow-up Study. Am J Sports Med 2003, 31(5): 724-727.
22. Fredericson M, Ho C, Waite B, Jennings F, Peterson J, Williams C, Matheson GO. Magnetic resonance imaging abnormalities in the shoulder and wrist joints of asymptomatic elite athletes. PM R 2009, 1(2): 107-116.
23. Hagemann G, Rijke AM, Mars M. Shoulder pathoanatomy in marathon kayakers. Br J Sports Med 2004, 38: 413-417.
24. Neer CS 2nd. Anterior acromioplasty for the chronic impingement syndrome in the shoulder: a preliminary report. J Bone Joint Surg Am 1972, 54(1): 41-50.
25. Jarjavay J. Sur la luxation du tendon de la longue portion du muscle biceps humeral, sur la luxation des tendons des muscles peroniens lateraux. Gaz hebd medchr 1867, 21: 325.
26. Frost P, Andersen JH, Lundorf E. Is supraspinatus pathology as defined by magnetic resonance imaging associated with clinical sign of shoulder impingement? J Shoulder Elbow Surg 1999, 8(6): 565-568.
27. Bonsell S, Pearsall AW 4th, Heitman RJ, Helms CA, Major NM, Speer KP. The relationship of age, gender, and degenerative changes observed on radiographs of the shoulder in asymptomatic individuals. J Bone Joint Surg Br 2000, 82(8): 1135-1139.
28. Tempelhof S, Rupp S, Seil R. Age-related prevalence of rotator cuff tears in asymptomatic shoulders. J Shoulder Elbow Surg 1999, 8: 296-299.
29. Belling Sorensen AK, Jorgensen U. Secondary impingement in the shoulder: An improved terminology in impingement. Scand J Med Sci Sports 2000, 10(5): 266-278.
30. Jobe FW, Kvitne RS, Giangarra CE. Shoulder pain in the overhand or throwing athlete: The relationship of anterior instability and rotator cuff impingement. Orthop Rev 1989, 18(9): 963-975.
31. Magarey M. Dynamic evaluation and early management of altered motor control around the shoulder complex. Man Ther 2003, 8(4): 195-206.
32. Kvitne RS, Jobe FW, Jobe CM. Shoulder instability in the overhand or throwing athlete. Clin Sports Med 1995, 14(4): 917-935.
33. Kvitne RS, Jobe FW. The diagnosis and treatment of anterior instability in the throwing athlete. Clin Orthop 1993(291): 107-123.
34. Jobe FW, Pink M. Classification and treatment of shoulder dysfunction in the overhead athlete. J Orthop Sports Phys Ther 1993, 18(2): 427-432.
35. Poitras P, Kingwell SP, Ramadan O, Russell DL, Uhthoff HK, Lapner P. The effect of posterior capsular tightening on peak subacromial contact pressure during simulated active abduction in the scapular plane. J Shoulder Elbow Surg 2010, 19(3): 406-413.
36. Walch G, Boileau P, Noel E, Donell ST. Impingement of the deep surface of the supraspinatus tendon on the posterosuperior glenoid rim: An arthroscopic study. J Shoulder Elbow Surg, 15(5): 238-245.
37. McFarland EG, Hsu CY, Neira C, O'Neil O. Internal impingement of the shoulder: a clinical and arthroscopic analysis. J Shoulder Elbow Surg 1999, 8(5): 458-460.
38. Simons DG, Travell JG, Simons LS. Myofascial Pain and Dysfunction: The Trigger Point Manual. Upper half of body, vol 1, second edn. Baltimore, MD: Lippincott, Williams and Wilkins, 1999.
39. Moen MH, de Vos RJ, Ellenbecker TS, Weir A. Clinical tests in shoulder examination: how to perform them. Br J Sports Med 2010, 44(5): 370-375.
40. Hegedus EJ, Goode A, Campbell S, Morin A, Tamaddoni M, Moorman CT 3rd, Cook C. Physical examination tests of the shoulder: a systematic review with meta-analysis of individual tests. Br J Sports Med 2008, 42(2): 80-92, discussion 92.
41. Calis M, Akgun K, Birtane M, Karacan I, Calis H, Tuzun F. Diagnostic values of clinical diagnostic tests in subacromial impingement syndrome. Ann Rheum Dis 2000, 59(1): 44-47.
42. Hughes PC, Taylor NF, Green RA. Most clinical tests cannot accurately diagnose rotator cuff pathology: a systematic review. Aust J Physiother 2008, 54(3): 159-170.
43. May S. Reliability of physical examination tests used in the assessment of patients with shoulder problems: a systematic review. Physiotherapy 2010.
44. Michener LA, Walsworth MK, Doukas WC, Murphy KP. Reliability and diagnostic accuracy of 5 physical examination tests and combination of tests for subacromial impingement. Arch Phys Med Rehabil 2009, 90: 1898-1903.
45. McFarland EG, Garzon-Muvdi J, Jia X, Desai P, Petersen SA. Clinical and diagnostic tests for shoulder disorders: a critical review. Br J Sports Med 2010, 44(5): 328-332.
46. Park HB, Yokota A, Gill HS, El Rassi G, McFarland EG. Diagnostic accuracy of clinical tests for the different degrees of subacromial impingement syndrome. J Bone Joint Surg Am 2005, 87(7): 1446-1455.
47. Beaudreuil J, Nizard R, Thomas T, Peyre M, Liotard JR, Boileau P, Marc T, Dromard C, Steyer E, Bardin T, et al. Contribution of clinical tests to the diagnosis of rotator cuff disease: a systematic literature review. Joint Bone Spine 2009, 76(1): 15-19.
48. Johansson K, Ivarsson S. Intra- and interexaminer reliability of four manual shoulder maneuvers used to identify subacromial pain. Man Ther 2009, 14(2): 231-239.
49. Kelly SM, Wrightson PA, Meads CA. Clinical outcomes of exercise in the management of subacromial impingement syndrome: a systematic review. Clin Rehabil 2010, 24(2): 99-109.
50. Lewis JS. Rotator cuff tendinopathy/subacromial impingement syndrome: Is it time for a new method of assessment? Br J Sports Med 2009, 43(4): 259-264.
51. Macdonald PB, Clark P, Sutherland K. An analysis of the diagnostic accuracy of the Hawkins and Neer subacromial impingement signs. J Shoulder Elbow Surg 2000, 9(4): 299-301.
52. Nanda R, Gupta S, Kanapathipillai P, Low RYL, Rangan A. An assessment of the inter-examiner reliability of clinical tests for subacromial impingement and rotator cuff integrity. Eur J Orthop Surg Traumatol 2008, 18(7): 495-500.
53. Yamamoto N, Muraki T, Sperling JW, Steinmann SP, Itoi E, Cofield RH, An KN. Impingement mechanisms of the Neer and Hawkins signs. J Shoulder Elbow Surg 2009, 18(6): 942-947.
54. Pappas GP, Blemker SS, Beaulieu CF, McAdams TR, Whalen ST, Gold GE. In vivo anatomy of the Neer and Hawkins sign positions for shoulder impingement. J Shoulder Elbow Surg 2006, 15(1): 40-49.
55. Valadie AL 3rd, Jobe CM, Pink MM, Ekman EF, Jobe FW. Anatomy of provocative tests for impingement syndrome of the shoulder. J Shoulder Elbow Surg 2000, 9(1): 36-46.
56. Buchbinder R, Green S, Youd JM. Corticosteroid injections for shoulder pain. Cochrane Database Syst Rev 2003(1): CD004016.
57. Green S, Buchbinder R, Hetrick S. Physiotherapy interventions for shoulder pain. Cochrane Database Syst Rev 2003(2): CD004258.
58. Schellingerhout JM, Thomas S, Verhagen AP. [Aspecific shoulder complaints: literature review to assess the efficacy of current interventions]. Ned Tijdschr Geneeskd 2007, 151(52): 2892-2897.
59. Camarinos J, Marinko L. Effectiveness of manual physical therapy for painful shoulder conditions: a systematic review. J Man Manip Ther 2009, 17(4): 206-215.
60. Dorrestijn O, Stevens M, Winters JC, van der Meer K, Diercks RL. Conservative or surgical treatment for subacromial impingement syndrome? A systematic review. J Shoulder Elbow Surg 2009, 18(4): 652-660.
61. Coghlan JA, Buchbinder R, Green S, Johnston RV, Bell SN. Surgery for rotator cuff disease. Cochrane Database Syst Rev 2008(1): CD005619.
62. Desmeules F, Côté GH, Frémont P. Therapeutic exercise and orthopedic manual therapy for impingement syndrome: a systematic review. Clin J Sport Med 2003, 13: 176-182.
63. Ejnisman B, Andreoli CV, Soares BG, Faloppa F, Peccin MS, Abdalla RJ, Cohen M. Interventions for tears of the rotator cuff in adults. Cochrane Database Syst Rev 2004(1): CD002758.
64. Faber E, Kuiper JJ, Burdorf A, Miedema HS, Verhaar JA. Treatment of impingement syndrome: a systematic review of the effects on functional limitations and return to work. J Occup Rehabil 2006, 16(1): 7-25.
65. Green S, Buchbinder R, Hetrick S. Acupuncture for shoulder pain. Cochrane Database Syst Rev 2005(2): CD005319.
66. Ho CY, Sole G, Munn J. The effectiveness of manual therapy in the management of musculoskeletal disorders of the shoulder: a systematic review. Man Ther 2009.
67. Johansson K, Oberg B, Adolfsson L, Foldevi M. A combination of systematic review and clinicians' beliefs in interventions for subacromial pain. Br J Gen Pract 2002, 52(475): 145-152.
68. Kromer TO, Tautenhahn UG, de Bie RA, Staal JB, Bastiaenen CH. Effects of physiotherapy in patients with shoulder impingement syndrome: a systematic review of the literature. J Rehabil Med 2009, 41: 870-880.
69. Kuhn JE. Exercise in the treatment of rotator cuff impingement: a systematic review and a synthesized evidence-based rehabilitation protocol. J Shoulder Elbow Surg 2009, 18(1): 138-160.
70. Walser RF, Meserve BB, Boucher TR. The effectiveness of thoracic spine manipulation for the management of musculoskeletal conditions: a systematic review and meta-analysis of randomized clinical trials. J Man Manip Ther 2009, 17(4): 237-246.
71. Osterås H, Torstensen TA. The dose-response effect of medical exercise therapy on impairment in patients with unilateral longstanding subacromial pain. Open Orthop J 2010, 4: 1-6.
72. Bennell K, Wee E, Coburn S, Green S, Harris A, Staples M, Forbes A, Buchbinder R. Efficacy of standardised manual therapy and home exercise programme for chronic rotator cuff disease: randomised placebo controlled trial. BMJ 2010, 340: c2756.
73. Crawshaw DP, Helliwell PS, Hensor EM, Hay EM, Aldous SJ, Conaghan PG. Exercise therapy after corticosteroid injection for moderate to severe shoulder pain: large pragmatic randomised trial. BMJ 2010, 340: c3037.
74. Kelly SM, Wrightson PA, Meads CA. Clinical outcomes of exercise in the management of subacromial impingement syndrome: a systematic review. Clin Rehabil 2010, 24(2): 99-109.
75. Arendt-Nielsen L, Svensson P. Referred muscle pain: basic and clinical findings. Clin J Pain 2001, 17(1): 11-19.
76. Ge HY, Fernandez-de-Las-Penas C, Madeleine P, Arendt-Nielsen L. Topographical mapping and mechanical pain sensitivity of myofascial trigger points in the infraspinatus muscle. Eur J Pain 2008, 12(7): 859-865.
77. Couppé C, Midtun A, Hilden J, Jørgensen U, Oxholm P, Fuglsang-Frederiksen A. Spontaneous needle electromyographic activity in myofascial trigger points in the infraspinatus muscle: a blinded assessment. Journal of Musculoskeletal Pain 2001, 9(3): 7-16.
78. Hong CZ, Kuan TS, Chen JT, Chen SM. Referred pain elicited by palpation and by needling of myofascial trigger points: a comparison. Arch Phys Med Rehabil 1997, 78(9): 957-960.
79. Escobar PL, Ballesteros J. Teres minor: source of symptoms resembling ulnar neuropathy or C8 radiculopathy. Am J Phys Med Rehabil 1988, 67(3): 120-122.
MYOFASCIAL TRIGGER POINTS: AN EVIDENCE-INFORMED REVIEW
Jan Dommerholt
Carel Bron
Jo Franssen
The Journal of Manual & Manipulative Therapy, Vol. 14 No. 4 (2006), 203-221
Abstract: This article provides a best evidence-informed review of the current scientific understanding of myofascial trigger points with regard to their etiology, pathophysiology, and clinical implications. Evidence-informed manual therapy integrates the best available scientific evidence with individual clinicians' judgments, expertise, and clinical decision-making. After a brief historical review, the clinical aspects of myofascial trigger points, the interrater reliability for identifying myofascial trigger points, and several characteristic features are discussed, including the taut band, local twitch response, and referred pain patterns. The etiology of myofascial trigger points is discussed with a detailed and comprehensive review of the most common mechanisms, including low-level muscle contractions, uneven intramuscular pressure distribution, direct trauma, unaccustomed eccentric contractions, eccentric contractions in unconditioned muscle, and maximal or sub-maximal concentric contractions. Many current scientific studies are included and provide support for considering myofascial trigger points in the clinical decision-making process. The article concludes with a summary of frequently encountered precipitating and perpetuating mechanical, nutritional, metabolic, and psychological factors relevant for physical therapy practice. Current scientific evidence strongly supports that awareness and working knowledge of muscle dysfunction and in particular myofascial trigger points should be incorporated into manual physical therapy practice consistent with the guidelines for clinical practice developed by the International Federation of Orthopaedic Manipulative Therapists. While there are still many unanswered questions in explaining the etiology of myofascial trigger points, this article provides manual therapists with an up-to-date evidence-informed review of the current scientific knowledge.
During the past few decades, myofascial trigger points (MTrPs) and myofascial pain syndrome (MPS) have received much attention in the scientific and clinical literature. Researchers worldwide are investigating various aspects of MTrPs, including their specific etiology, pathophysiology, histology, referred pain patterns, and clinical applications. Guidelines developed by the International Federation of Orthopaedic Manipulative Therapists (IFOMT) confirm the importance of muscle dysfunction for orthopedic manual therapy clinical practice. The IFOMT has defined orthopedic manual therapy as "a specialized area of physiotherapy/physical therapy for the management of neuromusculoskeletal conditions, based on clinical reasoning, using highly specific treatment approaches including manual techniques and therapeutic exercises." The educational standards of IFOMT require that skills will be demonstrated in—among others—"analysis and specific tests for functional status of the muscular system," "a high level of skill in other manual and physical therapy techniques required to mobilize the articular, muscular or neural systems," and "knowledge of various manipulative therapy approaches as practiced within physical therapy, medicine, osteopathy and chiropractic."
However, articles about muscle dysfunction in the manual therapy literature are sparse and they generally focus on muscle injury and muscle repair mechanisms or on muscle recruitment. Until very recently, the current scientific knowledge and clinical implications of MTrPs were rarely included. It appears that orthopedic manual therapists have not paid much attention to the pathophysiology and clinical manifestations of MTrPs. Manual therapy educational programs in the US seem to reflect this orientation and tend to place a strong emphasis on joint dysfunction, mobilizations, and manipulations with only about 10-15% of classroom education devoted to muscle pain and muscle dysfunction.
This review of the MTrP literature is based on current best scientific evidence. The field of manual therapy has joined other medical disciplines by embracing evidence-based medicine, which proposes that the results of scientific research need to be integrated into clinical practice. Evidence-based medicine has been defined as "the conscientious, explicit, and judicious use of current best-evidence in making decisions about the care of individual patients." Within the evidence-based medicine paradigm, evidence is not restricted to randomized controlled trials, systematic reviews, and meta-analyses, although this restricted view seems to be prevalent in the medical and physical therapy literature. Sackett et al emphasized that external clinical evidence can inform but not replace individual clinical expertise. Clinical expertise determines whether external clinical evidence applies to an individual patient, and if so, how it should be integrated into clinical decision making. Pencheon shared this perspective and suggested that high-quality healthcare is about combining "wisdom produced by years of experience" with "evidence produced by generalizable research" in "ways with which patients are happy." He suggested shifting from evidence-based to evidence-informed medicine, where clinical decision making is informed by research evidence but not driven by it, and always includes knowledge from experience. Evidence-informed manual therapy involves integrating the best available external scientific
evidence with individual clinicians' judgments, expertise, and clinical decision-making.\textsuperscript{12} The purpose of this article is to provide a best evidence-informed review of the current scientific understanding of MTrPs, including the etiology, pathophysiology, and clinical implications, against the background of extensive clinical experience.
**Brief Historical Review**
While Dr Janet Travell (1901-1997) is generally credited with bringing MTrPs to the attention of healthcare providers, MTrPs have been described and rediscovered for several centuries by various clinicians and researchers.\textsuperscript{13,14} As far back as the 16\textsuperscript{th} century, de Baillou (1538-1616), as cited by Ruhmann, described what is now known as myofascial pain syndrome (MPS).\textsuperscript{15} MPS is defined as the "sensory, motor, and autonomic symptoms caused by MTrPs" and has become a recognized medical diagnosis among pain specialists.\textsuperscript{16,17} In 1816, British physician Balfour, as cited by Stockman, described "nodular tumors and thickenings which were painful to the touch, and from which pains shot to neighboring parts."\textsuperscript{18} In 1898, the German physician Strauss discussed "small, tender and apple-sized nodules and painful, pencil-sized to little-finger-sized palpable bands."\textsuperscript{19} The first trigger point manual was published in 1931 in Germany, nearly a decade before Travell became interested in MTrPs.\textsuperscript{20} While these early descriptions may appear a bit archaic and unusual—for example, in clinical practice one does not encounter "apple-sized nodules"—these and other historic papers did illustrate the basic features of MTrPs quite accurately.\textsuperscript{14}
In the late 1930s, Travell, who at that time was a cardiologist and medical researcher, became particularly interested in muscle pain following the publication of several articles on referred pain.\textsuperscript{21} Kellgren's descriptions of referred pain patterns of many muscles and spinal ligaments after injecting these tissues with hypertonic saline\textsuperscript{22,23} eventually moved Travell to shift her medical career from cardiology to musculoskeletal pain. During the 1940s, she published several articles on injection techniques of MTrPs.\textsuperscript{26-28} In 1952, she described the myofascial genesis of pain with detailed referred pain patterns for 32 muscles.\textsuperscript{29} Other clinicians also became interested in MTrPs. European physicians Lief and Chaitow developed a treatment method, which they referred to as "neuromuscular technique."\textsuperscript{30} German physician Gutstein described the characteristics of MTrPs and effective manual therapy treatments in several papers under the names of Gutstein, Gutstein-Good, and Good.\textsuperscript{31-34} In Australia, Kelly produced a series of articles about fibrositis, which paralleled Travell's writings.\textsuperscript{35-38}
In the US, chiropractors Nimmo and Vannerson\textsuperscript{39} described muscular "noxious generative points," which were thought to produce nerve impulses and eventually result in "vasoconstriction, ischaemia, hypoxia, pain, and cellular degeneration." Later in his career, Nimmo adopted the term "trigger point" after having been introduced to Travell's writings. Nimmo maintained that hypertonic muscles are always painful to pressure, a statement that later became known as "Nimmo's law." Like Travell, Nimmo described distinctive referred pain patterns and recommended releasing these dysfunctional points by applying the proper degree of manual pressure. Nimmo's "receptor-tonus control method" continues to be popular among chiropractic physicians\textsuperscript{39,40}. According to a 1993 report by the National Board of Chiropractic Economics, over 40% of chiropractors in the US frequently apply Nimmo's techniques\textsuperscript{41}. Two spin-offs of Nimmo's work are the St. John Neuromuscular Therapy (NMT) method and the NMT American version, which have become particularly popular among massage therapists\textsuperscript{30}.
In 1966, Travell founded the North American Academy of Manipulative Medicine, together with Dr. John Mennell, who also published several articles about MTrPs\textsuperscript{42,43}. Throughout her career Travell promoted integrating myofascial treatments with articular treatments\textsuperscript{16}. One of her earlier papers described a technique for reducing sacroiliac displacement\textsuperscript{44}. However, Travell, as cited by Paris\textsuperscript{45}, maintained the opinion that manipulations were the exclusive domain of physicians and she rejected membership in the North American Academy of Manipulative Medicine by physical therapists.
In the early 1960s, Dr. David Simons was introduced to Travell and her work, which became the start of a fruitful collaboration eventually resulting in several publications, including the \textit{Trigger Point Manuals}, consisting of a 1983 first volume (upper half of the body) and a 1992 second volume (lower half of the body)\textsuperscript{46,47}. The first volume has since been revised and updated and a second edition was released in 1999\textsuperscript{16}. The \textit{Trigger Point Manuals} are the most comprehensive review of nearly 150 muscle referred-pain patterns based on Travell's clinical observations, and they include an extensive review of the scientific basis of MTrPs. Both volumes have been translated into several foreign languages, including Russian, German, French, Italian, Japanese, and Spanish. Several other clinicians worldwide have also published their own trigger point manuals\textsuperscript{48-54}.
**Clinical aspects of Myofascial Trigger Points**
An MTrP is described as "a hyperirritable spot in skeletal muscle that is associated with a hypersensitive palpable nodule in a taut band"\textsuperscript{16}. Myofascial trigger points are classified into active and latent trigger points\textsuperscript{16}. An active MTrP is a symptom-producing MTrP and can trigger local or referred pain or other paraesthesiae. A latent MTrP does not trigger pain without being stimulated. Myofascial trigger points are the hallmark characteristics of MPS and feature motor, sensory, and autonomic components. Motor aspects of active and latent MTrPs may include disturbed motor function, muscle weakness as a result of motor inhibition, muscle stiffness, and restricted range of motion\textsuperscript{55,56}. Sensory aspects may include local tenderness, referral of pain to a distant site, and peripheral and central sensitization. Peripheral sensitization can be described as a reduction in threshold and an increase in responsiveness of the peripheral ends of nociceptors, while central sensitization is an increase in the excitability of neurons within the central nervous system. Signs of peripheral and central sensitization are allodynia (pain due to a stimulus that does not normally provoke pain) and hyperalgesia (an increased response to a stimulus that is normally painful). Both active and latent MTrPs are painful on compression. Vecchiet et al\textsuperscript{57,59}
described specific sensory changes over MTrPs. They observed significant lowering of the pain threshold over active MTrPs when measured by electrical stimulation, not only in the muscular tissue but also in the overlying cutaneous and subcutaneous tissues. In contrast, with latent MTrPs, the sensory changes did not involve the cutaneous and subcutaneous tissues\textsuperscript{57-59}. Autonomic aspects of MTrPs may include, among others, vasoconstriction, vasodilatation, lacrimation, and piloerection\textsuperscript{16,60-63}.
A detailed clinical history, examination of movement patterns, and consideration of muscle referred pain patterns assist clinicians in determining which muscles may harbor clinically relevant MTrPs\textsuperscript{64}. Muscle pain is perceived as aching and poorly localized. There are no laboratory or imaging tests available that can confirm the presence of MTrPs. Myofascial trigger points are identified through either a flat palpation technique (Figure 1) in which a clinician applies finger or thumb pressure to muscle against underlying bone tissue, or a pincer palpation technique (Figure 2) in which a particular muscle is palpated between the clinician’s fingers.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{flat_palpation.png}
\caption{Flat palpation}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{pincer_palpation.png}
\caption{Pincer palpation}
\end{figure}
By definition, MTrPs are located within a taut band of contractured muscle fibers (Figure 3), and palpating for MTrPs starts with identifying this taut band by palpating perpendicular to the fiber direction. Once the taut band is located, the clinician moves along the taut band to find a discrete area of intense pain and hardness.
**Fig. 3: Palpation of a trigger point within a taut band**
(reproduced with permission from Weisskircher H-W. Head Pains Due to Myofascial Trigger Points. CD-ROM, www.trigger-point.com, 1997)
Two studies have reported good overall interrater reliability for identifying taut bands, MTrPs, referred pain, and local twitch responses\textsuperscript{65,66}. The minimum criteria that must be satisfied in order to distinguish an MTrP from any other tender area in muscle are a taut band and a tender point in that taut band\textsuperscript{65}. Although Janda maintained that systematic palpation can differentiate between myofascial taut bands and general muscle spasms, electromyography is the gold standard to differentiate taut bands from contracted muscle fibers\textsuperscript{67,68}. Spasms can be defined as electromyographic (EMG) activity resulting from increased neuromuscular tone of the entire muscle; they are the result of nerve-initiated contractions. A taut band, by contrast, is an endogenous localized contracture within the muscle without activation of the motor endplate\textsuperscript{69}. From a physiological perspective, the term “contracture” is more appropriate than “contraction” when describing chronic involuntary shortening of a muscle without EMG activity. In clinical practice, surface EMG is used in the diagnosis and management of MTrPs in addition to manual examinations\textsuperscript{67,70,71}. Diagnostically, surface EMG can assist in assessing muscle behavior during rest and during functional tasks. Clinicians use the MTrP referred pain patterns in determining which muscles to examine with surface EMG. Muscles that harbor MTrPs responsible for the patient’s pain complaint are examined first. EMG assessments guide the clinician with postural training, ergonomic interventions, and muscle awareness training\textsuperscript{67}.
The patient’s recognition of the elicited pain further guides the clinician. The presence of a so-called local twitch response (LTR), referred pain, or reproduction of the person’s symptomatic pain increases the certainty and specificity of the diagnosis of MPS. Local twitch responses are spinal reflexes that appear to be unique to MTrPs. They are characterized by a sudden contraction of muscle fibers within a taut band when the taut band is strummed manually or needled. The sudden contractions can be observed visually, can be recorded electromyographically, or can be visualized with diagnostic ultrasound\textsuperscript{72}. When an MTrP is needled with a monopolar teflon-coated EMG needle, LTRs appear as high-amplitude polyphasic EMG discharges\textsuperscript{73-78}.
Fig. 4: Local twitch response in a rabbit trigger spot. Local twitch responses are elicited only when the needle is placed accurately within the trigger spot. Moving as little as 0.5 cm away from the trigger spot virtually eliminates the local twitch response.
(reproduced with permission from Hong C-Z, 1994)
In clinical practice, there is no benefit in using needle EMG or sonography; their utility is limited to research studies. For example, Audette et al\textsuperscript{79} established that in 61.5% of active MTrPs in the trapezius and levator scapulae muscles, dry needling an active MTrP elicited an LTR in the same muscle on the opposite side of the body. Needling of latent MTrPs resulted in unilateral LTRs only. In this study, LTRs were used to research the nature of active versus latent MTrPs. Studies have shown that clinical outcomes are significantly improved when LTRs are elicited in the treatment of patients with dry needling or injection therapy\textsuperscript{74,80,81}. The taut band, MTrP, and LTR (Figure 4) are objective criteria, identified solely by palpation, that do not require a verbal response from the patient\textsuperscript{82}.
Active MTrPs refer pain usually to a distant site. The referred pain patterns (Figure 5) are not necessarily restricted to single segmental pathways or to peripheral nerve distributions. Although typical referred pain patterns have been established, there is considerable variation between patients\textsuperscript{16,48}.
Usually, the pain in reference zones is described as “deep tissue pain” of a dull and aching nature. Occasionally, patients may report burning or tingling sensations, especially in superficial muscles such as the platysma muscle\textsuperscript{83,84}. By mechanically stimulating active MTrPs, patients may report the reproduction of their pain, either immediately or after a 10-15 second delay. Normally, skeletal muscle nociceptors require high intensities of stimulation and they do not respond to moderate local pressure, contractions, or muscle stretches\textsuperscript{85}. However, MTrPs cause persistent noxious stimulation, which results in increasing the number and size of the receptive fields to which a single dorsal horn nociceptive neuron responds, and the experience of spontaneous and referred pain\textsuperscript{86}. Several recent studies have determined previously unrecorded referred pain patterns of different muscles and MTrPs\textsuperscript{87-90}. Referred pain is not specific to MPS but it is relatively easy to elicit over MTrPs\textsuperscript{91}. Normal muscle tissue and other body tissues, including the skin, zygapophyseal joints, or internal organs, may also refer pain to distant regions with mechanical pressure, making referred pain elicited by stimulation of a tender location a nonspecific finding\textsuperscript{84,92-95}. Gibson et al\textsuperscript{96} found that referred pain is actually easier to elicit in tendon-bone junctions and tendon than in the muscle belly. However, after exposing the
muscle to eccentric exercise, significantly higher referred pain frequency and enlarged pain areas were found at the muscle belly and the tendon-bone junction sites following injection with hypotonic saline. The authors suggested that central sensitization may explain the referred pain frequency and enlarged pain areas.
While a survey of members of the American Pain Society showed general agreement that MTrPs and MPS exist as distinct clinical entities, MPS continues to be one of the most commonly missed diagnoses. In a recent study of 110 adults with low back pain, myofascial pain was the most common finding, affecting 95.5% of patients, even though myofascial pain was poorly defined as muscle pain in the paraspinal muscles, piriformis, or tensor fasciae latae. A study of adults with frequent migraine headaches diagnosed according to the International Headache Society criteria showed that 94% of the patients reported migrainous pain with manual stimulation of cervical and temporal MTrPs, compared with only 29% of controls. In 30% of the migraine group, palpation of MTrPs elicited a “full-blown migraine attack that required abortive treatment.” The researchers found a positive relationship between the number of MTrPs and the frequency of migraine attacks and duration of the illness. Several studies have confirmed that MTrPs are common not only in persons attending pain management clinics but also in those seeking help through internal medicine and dentistry. In fact, MTrPs have been identified with nearly every musculoskeletal pain problem, including radiculopathies, joint dysfunction, disk pathology, tendonitis, craniomandibular dysfunction, migraines, tension-type headaches, carpal tunnel syndrome, computer-related disorders, whiplash-associated disorders, spinal dysfunction, and pelvic pain and other urologic syndromes. Myofascial trigger points are associated with many other pain syndromes, including, for example, post-herpetic neuralgia, complex regional pain syndrome, nocturnal cramps, phantom pain, and other relatively uncommon diagnoses such as Barré-Liéou syndrome and neurogenic pruritus.
A recent study suggested that there might be a relationship between MTrPs in the upper trapezius muscle and cervical spine dysfunction at the C3 and C4 vertebrae, although a cause-and-effect relationship was not established in this correlational study. Another study described that persons with mechanical neck pain had significantly more clinically relevant MTrPs in the upper trapezius, sternocleidomastoid, levator scapulae, and suboccipital muscles as compared to healthy controls.
Etiology of MTrPs
Several possible mechanisms can lead to the development of MTrPs, including low-level muscle contractions, uneven intramuscular pressure distribution, direct trauma, unaccustomed eccentric contractions, eccentric contractions in unconditioned muscle, and maximal or submaximal concentric contractions.
**Low-level muscle contractions**
Of particular interest in the etiology of MTrPs are low-level muscle exertions and the so-called Cinderella Hypothesis developed by Hägg in 1988\textsuperscript{134}. The Cinderella Hypothesis postulates that occupational myalgia is caused by selective overloading of the earliest recruited and last de-recruited motor units according to the ordered recruitment principle or Henneman’s “size principle”\textsuperscript{134,135}. Smaller motor units are recruited before and de-recruited after larger ones; as a result, the smaller type 1 fibers are continuously activated during prolonged motor tasks\textsuperscript{135}. According to the Cinderella Hypothesis, muscular force generated at sub-maximal levels during sustained muscle contractions engages only a fraction of the motor units available without the normally occurring substitution of motor units during higher force contractions, which in turn can result in metabolically overloaded motor units, prone to loss of cellular Ca\textsuperscript{2+}-homeostasis, subsequent activation of autogenic destructive processes, and muscle pain\textsuperscript{136,137}. The other pillar of the Cinderella Hypothesis is the finding of an excess of ragged red fibers in myalgic patients\textsuperscript{136}. Indeed, several researchers have demonstrated the presence of ragged red fibers and moth-eaten fibers in subjects with myalgia, which are indications of structural damage to the cell membrane and mitochondria and a change in the distribution of mitochondria or the sarcotubular system, respectively\textsuperscript{138-142}.
There is growing evidence that low-level static muscle contractions or exertions can result in degeneration of muscle fibers\textsuperscript{143}. Gissell\textsuperscript{144,145} has shown that low-level exertions can result in an increase of Ca\textsuperscript{2+}-release in skeletal muscle cells, muscle membrane damage due to leakage of the intracellular enzyme lactate dehydrogenase, structural damage, energy depletion, and myalgia. Low-level muscle stimulation can also lead to the release of interleukin 6 (IL-6) and other cytokines\textsuperscript{146,147}.
Several studies have confirmed the Cinderella Hypothesis and support the idea that in low-level static exertions, muscle fiber recruitment patterns tend to be stereotypical with continuous activation of smaller type 1 fibers during prolonged motor tasks\textsuperscript{148-152}. As Hägg indicated, the continuous activity and metabolic overload of certain motor units does not occur in all subjects\textsuperscript{136}. The Cinderella Hypothesis was recently applied to the development of MTrPs\textsuperscript{116}. In a well-designed study, Treaster et al\textsuperscript{116} established that sustained low-level muscle contractions during continuous typing for as little as 30 minutes commonly resulted in the formation of MTrPs. They suggested that MTrPs might provide a useful explanation for muscle pain and injury that can occur from low-level static exertions\textsuperscript{116}. Myofascial trigger points are common in office workers, musicians, dentists, and other occupational groups exposed to low-level muscle exertions\textsuperscript{153}. Chen et al\textsuperscript{154} also suggested that low-level muscle
exertions can lead to sensitization and development of MTrPs. Forty piano students showed significantly reduced pressure thresholds over latent MTrPs after only 20 minutes of continuous piano playing\textsuperscript{154}.
**Intramuscular pressure distribution**
Otten\textsuperscript{155} has suggested that circulatory disturbances secondary to increased intramuscular pressure may also lead to the development of myalgia. Based on mathematical modeling applied to a frog gastrocnemius muscle, Otten confirmed that during static low-level muscle contractions, capillary pressures increase dramatically especially near the muscle insertions (Figure 6). In other words, during low-level exertions, the intramuscular pressure near the muscle insertions might increase rapidly, leading to excessive capillary pressure, decreased circulation, and localized hypoxia and ischaemia\textsuperscript{155}.

With higher-level contractions between 10% and 20% of maximum voluntary effort, the intramuscular pressure also increases in the muscle belly\textsuperscript{156,157}. According to Otten, the increased pressure gradients during low-level exertions may contribute to the development of pain at the musculotendinous junctions and eventually to the formation of MTrPs (personal communication, 2005).
In 1999, Simons introduced the concept of “attachment trigger points” to explain pain at the musculotendinous junctions in persons with MTrPs, based on the assumption that taut bands would generate sufficient sustained force to induce localized enthesopathies\textsuperscript{16,158}. More recently, Simons concluded that there is no convincing evidence that the tension
generated in shortened sarcomeres in a muscle belly would indeed be able to generate passive or resting force throughout an entire taut band resulting in enthesopathies, even though there may be certain muscles or conditions where this could occur (personal communication, 2005). To the contrary, force generated by individual motor units is always transmitted laterally to the muscle's connective tissue matrix, involving at least two protein complexes containing vinculin and dystrophin, respectively\textsuperscript{159}. There is also considerable evidence that the assumption that muscle fibers pass from tendon to tendon is without basis\textsuperscript{160}. Trotter\textsuperscript{160} has demonstrated that skeletal muscle is composed of in-series fibers. In other words, there is evidence that a single muscle fiber does not run from tendon to tendon. The majority of fibers are in series with inactive fibers, which makes it even more unlikely that whole-muscle length-tension properties would be dictated by the shortest contractured fibers in the muscle\textsuperscript{161}.
In addition, it is important to consider the mechanical and functional differences between fast and slow motor units\textsuperscript{162, 163}. Slow motor units are always stiffer than fast units, although fast units can produce more force. If there were any transmission of force along the muscle fiber, as Simons initially suggested, fast fibers would be better suited to accomplish this. Yet, fast motor units have larger series elastic elements, which would absorb most of the force displacement\textsuperscript{164, 165}. Fast fibers show a progressive decrease in cross-sectional area and end in a point within the muscle fascicle, making force transmission even more unlikely\textsuperscript{163}. Fast fibers rely on transmitting a substantial proportion of their force to the endomysium, transverse cytoskeleton, and adjacent muscle fibers\textsuperscript{162, 163}. In summary, the development of so-called "attachment trigger points" as a result of increased tension by contractured sarcomeres in MTrPs is not well understood, and more research is needed to explain the clinical observation that MTrPs appear to be linked to pain at the musculotendinous junction. The increased tension in the muscle belly is likely to dissipate across brief sections of the taut band on both sides of the MTrP and laterally through the transverse cytoskeleton\textsuperscript{166, 168}. Instead, Otten's model of increased intramuscular pressure, decreased circulation, localized hypoxia, and ischaemia at the muscle insertions provides an alternative model for the clinically observed pain near the musculotendinous junction and osseous insertions in persons with MTrPs, even though the model does not explain why taut bands are commonly present\textsuperscript{155}.
**Direct trauma**
There is general agreement that acute muscle overload can activate MTrPs, although systematic studies are lacking\textsuperscript{169}. For example, people involved in whiplash injuries commonly experience prolonged muscle pain and dysfunction\textsuperscript{170-173}. In a retrospective review, Schuller et al\textsuperscript{174} found that 80% of 1096 subjects involved in low-velocity collisions demonstrated evidence of muscle pain with myogeloses among the most common findings. Although Schuller et al\textsuperscript{174} did not define these myogeloses, Simons has suggested that a myogelosis describes the same clinical entity as an MTrP\textsuperscript{175}. Baker\textsuperscript{117} reported that the splenius capitis, semispinalis capitis, and sternocleidomastoid muscles developed symptomatic MTrPs in 77%, 62%, and 52% of 52 whiplash patients, respectively. In a
retrospective review of 54 consecutive chronic whiplash patients, Gerwin and Dommerholt\textsuperscript{176} reported that clinically relevant MTrPs were found in every patient, with the trapezius muscle involved most often. Following treatment emphasizing the inactivation of MTrPs and restoration of normal muscle length, approximately 80% of patients experienced little or no pain, even though the average time following the initiating injury was 2-5 years at the beginning of the treatment regimen. All patients had been seen previously by other physicians and physical therapists who apparently had not considered MTrPs in their thought process and clinical management\textsuperscript{176}. Fernández-de-las-Peñas et al\textsuperscript{177, 178} confirmed that inactivation of MTrPs should be included in the management of persons suffering from whiplash-associated disorders. In their research-based treatment protocol, the combination of cervical and thoracic spine manipulations with MTrP treatments proved superior to more conventional physical therapy consisting of massage, ultrasound, home exercises, and low-energy high-frequency pulsed electromagnetic therapy\textsuperscript{177}.
Direct trauma may create a vicious cycle of events wherein damage to the sarcoplasmic reticulum or the muscle cell membrane may lead to an increase of the calcium concentration, a subsequent activation of actin and myosin, a relative shortage of adenosine triphosphate (ATP), and an impaired calcium pump, which in turn will increase the intracellular calcium concentration even more, completing the cycle. The calcium pump is responsible for returning intracellular Ca\textsuperscript{2+} to the sarcoplasmic reticulum against a concentration gradient, which requires a functional energy supply. Simons and Travell\textsuperscript{179} considered this sequence in the development of the so-called "energy crisis hypothesis" introduced in 1981. Sensory and motor system dysfunction have been shown to develop rapidly after injury and actually may persist in those who develop chronic muscle pain and in individuals who have recovered or continue to have persistent mild symptoms\textsuperscript{172, 180}. Scott et al\textsuperscript{181} determined that individuals with chronic whiplash pain develop more widespread hypersensitivity to mechanical pressure and thermal stimuli than those with chronic idiopathic neck pain. Myofascial trigger points are a likely source of ongoing peripheral nociceptive input, and they contribute to both peripheral and central sensitization, which may explain the observation of widespread allodynia and hypersensitivity\textsuperscript{60, 62, 63}. In addition to being caused by whiplash injury, acute muscle overload can occur with direct impact, lifting injuries, sports performance, etc.\textsuperscript{182}
**Eccentric and (sub)maximal concentric contractions**
Many patients report the onset of pain and activation of MTrPs following either acute, repetitive, or chronic muscle overload\textsuperscript{183}. Gerwin et al\textsuperscript{184} suggested that likely mechanisms relevant for the development of MTrPs included either unaccustomed eccentric exercise, eccentric exercise in unconditioned muscle, or maximal or sub-maximal concentric exercise. A brief review of pertinent aspects of exercise follows, before linking this body of research to current MTrP research.
Eccentric exercise is associated with myalgia, muscle weakness, and destruction of muscle fibers, partially because eccentric contractions cause an irregular and uneven
lengthening of muscle fibers\textsuperscript{185, 187}. Muscle soreness and pain occur because of local ultrastructural damage, the release of sensitizing algogenic substances, and the subsequent onset of peripheral and central sensitization\textsuperscript{85, 188, 190}. Muscle damage occurs at the cytoskeletal level and frequently involves disorganization of the A-band, streaming of the Z-band, and disruption of cytoskeletal proteins, such as titin, nebulin, and desmin, even after very short bouts of eccentric exercise\textsuperscript{186, 189, 194}. Loss of desmin can occur within 5 minutes of eccentric loading, even in muscles that routinely contract eccentrically during functional activities, but does not occur after isometric or concentric contractions\textsuperscript{193, 195}. Lieber and Friden\textsuperscript{193} suggested that the rapid loss of desmin might indicate a type of enzymatic hydrolysis or protein phosphorylation as a likely mechanism.
One of the consequences of muscle damage is muscle weakness\textsuperscript{196, 198}. Furthermore, concentric and eccentric contractions are linked to contraction-induced capillary constrictions, impaired blood flow, hypoperfusion, ischaemia, and hypoxia, which in turn contribute to the development of more muscle damage, a local acidic milieu, and an excessive release of protons (H\textsuperscript{+}), potassium (K\textsuperscript{+}), calcitonin-gene-related-peptide (CGRP), bradykinin (BK), and substance P (SP), and sensitization of muscle nociceptors\textsuperscript{184, 188}. There are striking similarities with the chemical environment of active MTrPs established with microdialysis, suggesting an overlap between the research on eccentric exercise and MTrP research\textsuperscript{184, 199}. However, at this time, it is premature to conclude that there is solid evidence that eccentric and sub-maximal concentric exercise are absolute precursors to the development of MTrPs. In support of this hypothesized causal relation, Itoh et al\textsuperscript{200} demonstrated in a recent study that eccentric exercise can lead to the formation of taut and tender ropy bands in exercised muscle, and they hypothesized that eccentric exercise may indeed be a useful model for the development of MTrPs.
Eccentric and concentric exercise and MTrPs have been associated with localized hypoxia, which appears to be one of the most important precursors for the development of MTrPs\textsuperscript{201}. As mentioned, hypoxia leads to the release of multiple algogenic substances. In this context, recent research by Shah et al\textsuperscript{199} at the US National Institutes of Health is particularly relevant. Shah et al analyzed the chemical milieu of latent and active MTrPs and normal muscles. They found significantly increased concentrations of BK, CGRP, SP, tumor necrosis factor-$\alpha$ (TNF-$\alpha$), interleukin-1$\beta$ (IL-1$\beta$), serotonin, and norepinephrine in the immediate milieu of active MTrPs only\textsuperscript{199}. These substances are well-known stimulants for various muscle nociceptors and bind to specific receptor molecules of the nerve endings, including the so-called purinergic and vanilloid receptors\textsuperscript{85, 202}.
Muscle nociceptors are dynamic structures whose receptors can change depending on the local tissue environment. When a muscle is damaged, it releases ATP, which stimulates purinergic receptors, which are sensitive to ATP, adenosine diphosphate, and adenosine. They bind ATP, stimulate muscle nociceptors, and cause pain. Vanilloid receptors are sensitive to heat and respond to an increase in H\textsuperscript{+}-concentration, which is especially relevant under conditions with a lowered pH, such as ischaemia, inflammation, or prolonged and exhaustive muscle contractions\textsuperscript{85}. Shah et al\textsuperscript{199} determined that the pH at
active MTrP sites is significantly lower than at latent MTrP sites. A lowered pH can initiate and maintain muscle pain and mechanical hyperalgesia through activation of acid-sensing ion channels\textsuperscript{203,204}. Neuroplastic changes in the central nervous system facilitate mechanical hyperalgesia even after the nociceptive input has been terminated (central sensitization)\textsuperscript{203,204}. Any noxious stimulus sufficient to cause nociceptor activation causes bursts of SP and CGRP to be released into the muscle, which have a significant effect on the local biochemical milieu and microcirculation by stimulating “feed-forward” neurogenic inflammation. Neurogenic inflammation can be described as a continuous cycle of increasing production of inflammatory mediators and neuropeptides and an increasing barrage of nociceptive input into wide dynamic-range neurons in the spinal cord dorsal horn\textsuperscript{184}.
**The Integrated Trigger Point Hypothesis**
The integrated trigger point hypothesis (Figure 7) has evolved since its first introduction as the “energy crisis hypothesis” in 1981. It is based on a combination of electrodiagnostic and histopathological evidence\textsuperscript{179,183}.
\textbf{Fig. 7: The integrated trigger point hypothesis. Ach-acetylcholine; AchE-acetylcholinesterase; AchR- acetylcholine receptor}
\begin{center}
\includegraphics[width=\textwidth]{image.png}
\end{center}
As early as 1957, Weeks and Travell\textsuperscript{202} had published a report that outlined a characteristic electrical activity of an MTrP. It was not until 1993 that Hubbard et al\textsuperscript{206} confirmed that this EMG discharge consists of low-amplitude discharges in the order of 10-50 µV and intermittent high-amplitude discharges (up to 500 µV) in painful MTrPs. Initially, the electrical activity was termed “spontaneous electrical activity” (SEA) and thought to be related to dysfunctional muscle spindles\textsuperscript{486}. Best available evidence now suggests that the SEA is in fact endplate noise (EPN), which is found much more commonly in the endplate zone near MTrPs than in an endplate zone outside MTrPs\textsuperscript{207, 209}. The electrical discharges occur with frequencies that are 10-1,000 times that of normal endplate potentials, and they have been found in humans, rabbits, and recently even in horses\textsuperscript{209, 210}. The discharges are most likely the result of an abnormally excessive release of acetylcholine (ACh) and indicative of dysfunctional motor endplates, contrary to the commonly accepted notion among electromyographers that endplate noise arises from normal motor endplates\textsuperscript{184}. The effectiveness of botulinum toxin in the treatment of MTrPs provides indirect evidence of the presence of excessive ACh\textsuperscript{211}. Botulinum toxin (BoTox) is a neurotoxin that blocks the release of ACh from presynaptic cholinergic nerve endings. A recent study in mice demonstrated that the administration of botulinum toxin resulted in a complete functional repair of dysfunctional endplates\textsuperscript{212}. There is some early evidence that muscle stretching and hypertonicity may also enhance the excessive release of ACh\textsuperscript{213, 214}. 
Tension on the integrins in the presynaptic membrane at the motor nerve terminal is hypothesized to mechanically trigger an ACh release that does not require Ca\textsuperscript{2+}\textsuperscript{213, 215}. Integrins are receptor proteins in the cell membrane involved in attaching individual cells to the extracellular matrix.
Excessive ACh affects voltage-gated sodium channels of the sarcoplasmic reticulum and increases the intracellular calcium levels, which triggers sustained muscle contractures. It is conceivable that in MTrPs, myosin filaments literally get stuck in the Z-band of the sarcomere. During sarcomere contractions, titin filaments are folded into a gel-like structure at the Z-band. In MTrPs, the gel-like titin may prevent the myosin filaments from detaching. The myosin filaments may actually damage the regular motor assembly and prevent the sarcomere from restoring its resting length\textsuperscript{216}. Muscle contractures are also maintained because of the relative shortage of ATP in an MTrP, as ATP is required to break the cross-bridges between actin and myosin filaments. The question remains whether sustained contractures require an increase of oxygen availability.
At the same time, the shortened sarcomeres compromise the local circulation causing ischaemia. Studies of oxygen saturation levels have demonstrated severe hypoxia in MTrPs\textsuperscript{201}. Hypoxia leads to the release of sensitizing substances and activates muscle nociceptors as reviewed above. The combined decreased energy supply and possible increased metabolic demand would also explain the common finding of abnormal mitochondria in the nerve terminal and the previously mentioned ragged red fibers. In mice, the onset of hypoxia led to an immediate increased ACh release at the motor endplate\textsuperscript{217}.
The combined high-intensity mechanical and chemical stimuli may cause activation and sensitization of the peripheral nerve endings and autonomic nerves, activate second order neurons including so-called “sleeping” receptors, cause central sensitization, and lead to the formation of new receptive fields, referred pain, a long-lasting increase in the excitability of nociceptors, and a more generalized hyperalgesia beyond the initial nociceptive area. An expansion of a receptive field means that a dorsal horn neuron receives information from areas it has not received information from previously\textsuperscript{218}. Sensitization of peripheral nerve endings can also cause pain through SP activating the neurokinin-1 receptors and glutamate activating N-methyl-D-aspartate receptors, which opens post-synaptic channels through which Ca$^{2+}$ ions can enter the dorsal horn and activate many enzymes involved in the sensitization\textsuperscript{83}.
Several histological studies offer further support for the integrated trigger point hypothesis. In 1976, Simons and Stolov published the first biopsy study of MTrPs in a canine muscle and reported multiple contraction knots in various individual muscle fibers (Figure 8)\textsuperscript{219}. The knots featured a combination of severely shortened sarcomeres in the center and lengthened sarcomeres outside the immediate MTrP region\textsuperscript{219}.
Reitinger et al\textsuperscript{220} reported pathologic alterations of the mitochondria as well as increased width of A-bands and decreased width of I-bands in muscle sarcomeres of MTrPs in the gluteus medius muscle. Windisch et al\textsuperscript{221} determined similar alterations in a post-mortem histological study of MTrPs completed within 24 hours of time of death. Mense et al\textsuperscript{222} studied the effects of electrically induced muscle contractions and a cholinesterase blocker on muscles with experimentally induced contraction knots and found evidence of localized contractions, torn fibers, and longitudinal stripes. Pongratz and Spath\textsuperscript{223, 224} demonstrated evidence of a contraction disk in a region of an MTrP using light microscopy. New MTrP histopathological studies are currently being conducted at the Friedrich Baur Institute in Munich, Germany. Gariphanova\textsuperscript{225} described pathological changes with biopsy studies of MTrPs, including a decrease in quantity of mitochondria, possibly indicating metabolic distress. Several older
histological studies are often quoted, but it is not clear to what extent those findings are specific for MTrPs. In 1951, Glogowsky and Wallraff\textsuperscript{226} reported damaged fibril structures. Fassbender\textsuperscript{227} observed degenerative changes of the I-bands, in addition to capillary damage, a focal accumulation of glycogen, and a disintegration of the myofibrillar network.
There is growing evidence for the integrated trigger point hypothesis with regard to the motor and sensory aspects of MTrPs, but many questions remain about the autonomic aspects. Several studies have shown that MTrPs are influenced by the autonomic nervous system. Exposing subjects with active MTrPs in the upper trapezius muscles to stressful tasks consistently increased the electrical activity in MTrPs in the upper trapezius muscle but not in control points in the same muscle, while autogenic relaxation was able to reverse the effects\textsuperscript{228, 231}. The administration of the sympathetic blocking agent phentolamine significantly reduced the electrical activity of an MTrP\textsuperscript{228, 232, 233}. The interactions between the autonomic nervous system and MTrPs need further investigation. Hubbard\textsuperscript{228} maintained that the autonomic features of MTrPs are evidence that MTrPs may be dysfunctional muscle spindles. Gerwin et al\textsuperscript{184} have suggested that the presence of alpha and beta adrenergic receptors at the endplate provide a possible mechanism for autonomic interaction. In a rodent, stimulation of the alpha and beta adrenergic receptors stimulated the release of ACh in the phrenic nerve\textsuperscript{234}. In a recent study, Ge et al\textsuperscript{61} provided for the first time experimental evidence of sympathetic facilitation of mechanical sensitization of MTrPs, which they attributed to a change in the local chemical milieu at the MTrPs due to increased vasoconstriction, an increased sympathetic release of noradrenaline, or an increased sensitivity to noradrenaline. Another intriguing possibility is that the cytokine interleukin-8 (IL-8) found in the immediate milieu of active MTrPs may contribute to the autonomic features of MTrP. IL-8 can induce mechanical hypernociception, which is inhibited by beta adrenergic receptor antagonists\textsuperscript{235}. 
Shah et al found significantly increased levels of IL-8 in the immediate milieu of active MTrPs (Shah, 2006, personal communication).
The findings of Shah et al\textsuperscript{199} mark a major milestone in the understanding and acceptance of MTrPs and support parts of the integrated trigger point hypothesis\textsuperscript{141}. The possible consequences of several of the chemicals present in the immediate milieu of active MTrPs have been explored by Gerwin et al\textsuperscript{184}. As stated, Shah et al found significantly increased concentrations of H\textsuperscript{+}, BK, CGRP, SP, TNF-$\alpha$, IL-1$\beta$, serotonin, and norepinephrine in active MTrPs only. There are many interactions between these chemicals that all can contribute to the persistent nature of MTrPs through various vicious feedback cycles\textsuperscript{236}. For example, BK is known to activate and sensitize muscle nociceptors, which leads to inflammatory hyperalgesia, an activation of high-threshold nociceptors associated with C-fibers, and even an increased production of BK itself. Furthermore, BK stimulates the release of TNF-$\alpha$, which activates the production of the interleukins IL-1$\beta$, IL-6, and IL-8. IL-8 in particular can cause hyperalgesia that is independent from prostaglandin mechanisms via a positive feedback loop. IL-1$\beta$ can also induce the release of BK\textsuperscript{337}. Release of BK, K\textsuperscript{+}, H\textsuperscript{+}, and cytokines from injured muscle activates the muscle nociceptors, thereby causing tenderness and pain\textsuperscript{184}.
Fig. 9: The expanded MTrP hypothesis (reproduced with permission from: Gerwin RD, Dommerholt J, Shah J. An expansion of Simons’ integrated hypothesis of trigger point formation. Curr Pain Headache Rep 2004;8:468-475). Ach-acetylcholine; AchE-acetylcholinesterase; AchR- acetylcholine receptor; ATP-adenosine triphosphate; SP-substance P; CGRP-calcitonin gene-related peptide; MEPP-miniature endplate potential
Calcitonin gene-related peptide can enhance the release of ACh from the motor endplate and simultaneously decrease the effectiveness of acetylcholinesterase (AChE) in the synaptic cleft, which slows the removal of ACh\textsuperscript{238,239}. Calcitonin gene-related peptide also upregulates the ACh receptors (AChR) at the muscle and thereby creates more docking stations for ACh. Miniature endplate activity depends on the state of the AChR and on the local concentration of ACh, which is the result of ACh release, reuptake, and breakdown by AChE. In summary, increased concentrations of CGRP lead to the release of more ACh and increase the impact of ACh by reducing AChE effectiveness and increasing AChR efficiency. Miniature endplate potential frequency is increased as a result of the greater ACh effect. The observed lowered pH has several implications as well: not only does a lower pH enhance the release of CGRP, it also contributes to a further downregulation of AChE. The multiple chemicals and lowered pH found in active MTrPs can contribute to the chronic nature of MTrPs, enhance the segmental spread of nociceptive input into the dorsal horn of the spinal cord, activate multiple receptive fields, and trigger the referred pain, allodynia, hypersensitivity, and peripheral and central sensitization that are characteristic of active MTrPs\textsuperscript{184}. No other evidence-based hypothesis explains the phenomena of MTrPs in as much detail and clarity as the expanded integrated trigger point hypothesis (Figure 9).
**Perpetuating Factors**
There are several precipitating or perpetuating factors that need to be identified and, if present, adequately managed in order to treat persons with chronic myalgia successfully. Even though several common perpetuating factors lie more or less outside the direct scope of manual physical therapy, familiarity with these factors is critical, especially considering the development of increasingly autonomous physical therapy practice. Simons, Travell, and Simons\textsuperscript{16} identified mechanical, nutritional, metabolic, and psychological categories of perpetuating factors. Mechanical factors are familiar to manual therapists and include the commonly observed forward head posture, structural leg length inequalities, scoliosis, pelvic torsion, joint hypermobility, ergonomic stressors, and poor body mechanics\textsuperscript{16,102,116,240}.
In recent review articles, Gerwin\textsuperscript{241,242} provided a comprehensive update with an emphasis on non-structural perpetuating factors. Management of these factors usually requires an interdisciplinary approach, including medical and psychological intervention\textsuperscript{64,82}. Common nutritional deficiencies or insufficiencies involve vitamin B1, B6, B12, folic acid, vitamin C, vitamin D, iron, magnesium, and zinc, among others. The term “insufficiency” is used to indicate levels in the lower range of normal, such as those associated with biochemical or metabolic abnormalities or with subtle clinical signs and symptoms. Nutritional or metabolic insufficiencies are frequently overlooked and not necessarily considered clinically relevant by physicians unfamiliar with MTrPs and chronic pain conditions. Yet any inadequacy that interferes with the energy supply of muscle is likely to aggravate MTrPs\textsuperscript{242}. The most common deficiencies and insufficiencies will be reviewed briefly.
Vitamin B12 deficiencies are rather common and may affect as many as 15-20% of the elderly and approximately 16% of persons with chronic MTrPs\textsuperscript{103,243}. B12 deficiencies can result in cognitive dysfunction, degeneration of the spinal cord, and peripheral neuropathy, the last of which is most likely linked to the complaints of diffuse myalgia seen in some patients. Serum levels of vitamin B12 as high as 350 pg/ml may be associated with a metabolic deficiency, manifested by elevated serum or urine methylmalonic acid or homocysteine, and may be clinically symptomatic\textsuperscript{244}. However, there are patients with normal levels of methylmalonic acid and homocysteine who nonetheless present with metabolic abnormalities of B12 function\textsuperscript{242}. Folic acid is closely linked to vitamin B12 and should be measured as well. While folic acid can correct the pernicious anaemia associated with vitamin B12 deficiency, it does not influence the neuromuscular aspects.
Iron deficiency in muscle occurs when ferritin is depleted. Ferritin represents the tissue-bound non-essential iron stores in muscle, liver, and bone marrow that supply the essential iron for oxygen transport and iron-dependent enzymes. Iron is critical for the generation of energy through the cytochrome oxidase enzyme system and a lack of iron may be a factor in the development and maintenance of MTrPs\textsuperscript{242}. Interestingly, lowered levels of cytochrome oxidase are common in patients with myalgia\textsuperscript{140}. Serum levels of 15-20 ng/ml indicate a depletion of ferritin. Common symptoms are chronic tiredness, coldness, extreme fatigue with exercise, and muscle pain. Anaemia is common at levels of 10 ng/ml or less. Although optimal levels of ferritin are unknown, Gerwin\textsuperscript{242} suggested that levels below 50 ng/ml may be clinically significant.
Close to 90% of patients with chronic musculoskeletal pain may have vitamin D deficiency\textsuperscript{245}. Vitamin D deficiencies are identified by measuring 25-OH vitamin D levels. Levels above 20 ng/ml are considered normal, but Gerwin\textsuperscript{242} suggested that levels below 34 ng/ml may represent insufficiencies. Correction of insufficient levels of vitamin B12, vitamin D, and iron may take many months, during which patients may not see much improvement.
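For illustration only, the serum thresholds discussed in this and the preceding paragraphs can be collected into a simple screening sketch. The function name and structure are hypothetical; the cut-off values (ferritin, 25-OH vitamin D, serum B12) are those cited in the text, including Gerwin's suggested clinical thresholds, and are not universal laboratory reference ranges or clinical decision software.

```python
# Hypothetical sketch restating the cut-off values cited in the text.
# Thresholds: ferritin <=10 ng/ml (anaemia common), <20 (depleted stores),
# <50 (possibly clinically significant per Gerwin); 25-OH vitamin D <20 ng/ml
# (deficiency), <34 (possible insufficiency per Gerwin); serum B12 up to
# ~350 pg/ml may still mask a metabolic deficiency.

def flag_insufficiencies(ferritin_ng_ml, vit_d_ng_ml, b12_pg_ml):
    """Return a list of possible nutritional/metabolic flags."""
    flags = []
    if ferritin_ng_ml <= 10:
        flags.append("ferritin: anaemia common at this level")
    elif ferritin_ng_ml < 20:
        flags.append("ferritin: depleted iron stores")
    elif ferritin_ng_ml < 50:
        flags.append("ferritin: possibly clinically significant (Gerwin)")
    if vit_d_ng_ml < 20:
        flags.append("vitamin D: deficiency")
    elif vit_d_ng_ml < 34:
        flags.append("vitamin D: possible insufficiency (Gerwin)")
    if b12_pg_ml <= 350:
        # The text suggests checking methylmalonic acid and homocysteine
        # even when B12 is in the low-normal range.
        flags.append("vitamin B12: consider MMA/homocysteine testing")
    return flags

print(flag_insufficiencies(ferritin_ng_ml=30, vit_d_ng_ml=25, b12_pg_ml=300))
```

As the text notes, even values that pass such a screen do not exclude a clinically relevant insufficiency; interpretation belongs with a physician familiar with this literature.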
Even when active MTrPs have been identified in a particular patient, clinicians must always consider that MTrPs may be secondary to metabolic insufficiencies or other medical diagnoses. It is questionable whether physical therapy and—as an integral part of physical therapy management—manual therapy intervention can be successful when patients have nutritional or metabolic insufficiencies or deficiencies. A close working relationship with physicians familiar with this body of literature is essential. Therapists should consider the possible interactions between arthrogenic or neurogenic dysfunction and MTrPs\textsuperscript{5,118,133,246,247}.
Clinically, physical therapists should address all aspects of the dysfunction. There are many other conditions that feature muscle pain and MTrPs, including hypothyroidism, systemic lupus erythematosus, Lyme disease, babesiosis, ehrlichiosis, Candida albicans infections, myoadenylate deaminase deficiency, hypoglycaemia, and parasitic diseases such as fascioliasis, amoebiasis, and giardiasis\textsuperscript{64,242}. Therapists should be familiar with the symptoms associated with these medical diagnoses\textsuperscript{64}.
Psychological stress may activate MTrPs. Electromyographic activity in MTrPs has been shown to increase dramatically in response to mental and emotional stress, whereas EMG activity in adjacent non-trigger-point muscle remained normal\textsuperscript{229,230}. Relaxation techniques, such as autogenic relaxation, can diminish this electrical activity\textsuperscript{231}. In addition, many patients with persistent MTrPs are dealing with depression, anxiety, anger, and feelings of hopelessness\textsuperscript{248}. Pain-related fear and avoidance can lead to the development and maintenance of chronic pain\textsuperscript{249}. Sleep disturbance can also be a major factor in the perpetuation of musculoskeletal pain and must be addressed. Sleep problems may be related to pain, apnea, or mood disorders such as depression or anxiety. Management can be both pharmacologic and non-pharmacologic. Pharmacologic treatment utilizes drugs that promote normal sleep patterns and that induce and maintain sleep through the night without causing daytime sedation. Non-pharmacologic treatment emphasizes sleep hygiene, such as using the bed only for sleep and sex, and not for reading, television viewing, or eating\textsuperscript{250}. Therapists must be sensitive to the impact of psychological and emotional distress and refer patients to clinical social workers or psychologists when appropriate.
**The Role of Manual Therapy**
Although the various management approaches are beyond the scope of this article, manual therapy is one of the basic treatment options, and the role of orthopedic manual physical therapists cannot be overemphasized\textsuperscript{82,158}. Myofascial trigger points are treated with manual techniques, spray and stretch, dry needling, or injection therapy. Dry needling is within the scope of physical therapy practice in many countries, including Canada, Spain, Ireland, South Africa, Australia, the Netherlands, and Switzerland. In the United States, the physical therapy boards of eight states have ruled that physical therapists can engage in the practice of dry needling: New Hampshire, Maryland, Virginia, South Carolina, Georgia, Kentucky, New Mexico, and Colorado\textsuperscript{80}. A promising new development in the diagnosis and treatment of MTrPs involves shockwave therapy, but as yet there are no controlled studies substantiating its use\textsuperscript{251,252}.
**Conclusion**
Although MTrPs are a common cause of pain and dysfunction in persons with musculoskeletal injuries and diagnoses, the importance of MTrPs is not obvious from reviewing the orthopedic manual therapy literature. Current scientific evidence strongly supports that awareness and a working knowledge of muscle dysfunction, and of MTrPs in particular, should be incorporated into manual physical therapy practice consistent with the IFOMT guidelines for clinical practice. While many questions with regard to the etiology of MTrPs remain unanswered, this article provides manual therapists with an up-to-date, evidence-informed review of the current scientific knowledge.
**References**
1. IFOMT. Available at: http://www.ifomt.org/ifomt/about/standards. Accessed November 15, 2006.
2. Huijbregts PA. Muscle injury, regeneration, and repair. *J Manual Manipulative Ther* 2001;9:9-16.
3. Urquhart DM, Hodges PW, Allen TJ, Story IH. Abdominal muscle recruitment during a range of voluntary exercises. *Man Ther* 2005;10:144-153.
4. Fernández-de-las-Peñas C, Alonso-Blanco C, Alguacil-Diego IM, Miangolarra-Page JC. Myofascial trigger points and postero-anterior joint hypomobility in the mid-cervical spine in subjects presenting with mechanical neck pain: A pilot study. *J Manual Manipulative Ther* 2006;14:88-94.
5. Fernández-de-las-Peñas C, Alonso-Blanco C, Miangolarra JC. Myofascial trigger points in subjects presenting with mechanical neck pain: A blinded, controlled study. *Man Ther* (In press).
6. Lew PC, Lewis J, Story I. Inter-therapist reliability in locating latent myofascial trigger points using palpation. *Man Ther* 1997;2:87-90.
7. Fernández-de-las-Peñas C, Alonso-Blanco C, Cuadrado ML, Pareja JA. Myofascial trigger points in the suboccipital muscles in episodic tension-type headache. *Man Ther* 2006;11:225-230.
8. Moore A, Petty N. Evidence-based practice: Getting a grip and finding a balance. *Man Ther* 2001;6:195-196.
9. Sackett DL, Rosenberg WM. The need for evidence-based medicine. *J R Soc Med* 1995;88:620-624.
10. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence-based medicine: What it is and what it isn't. *BMJ* 1996;312:71-72.
11. Pencheon D. What's next for evidence-based medicine? *Evidence-Based Healthcare Public Health* 2005;9:319-321.
12. Cicerone KD. Evidence-based practice and the limits of rational rehabilitation. *Arch Phys Med Rehabil* 2005;86:1073-1074.
13. Baldry PE. *Acupuncture, Trigger Points and Musculoskeletal Pain*. Edinburgh, UK: Churchill Livingstone, 2005.
14. Simons DG. Muscle pain syndromes: Part 1. *Am J Phys Med* 1975;54:289-311.
15. Ruhmann W. The earliest book on rheumatism. *Br J Rheumatism* 1940;11:140-162.
16. Simons DG, Travell JG, Simons LS. *Travell and Simons' Myofascial Pain and Dysfunction: The Trigger Point Manual*. Vol 1. 2nd ed. Baltimore, MD: Williams & Wilkins, 1999.
17. Harden RN, Bruehl SP, Gass S, Niemiec C, Barbick B. Signs and symptoms of the myofascial pain syndrome: A national survey of pain management providers. *Clin J Pain* 2000;16:64-72.
18. Stockman R. The causes, pathology, and treatment of chronic rheumatism. *Edinburgh Med J* 1904;15:107-116.
19. Strauss H. Über die sogenannte rheumatische Muskelschwiele [German, With regard to the so-called myogelosis]. *Klin Wochenschr* 1938;35:89-91,121-123.
20. Lange M. *Die Muskelhärten (Myogelosen)* [German, The Muscle Hardenings (Myogeloses)]. Munich, Germany: J F Lehmann's Verlag, 1931.
21. Travell J. *Office Hours: Day and Night. The Autobiography of Janet Travell, MD*. New York, NY: World Publishing, 1968.
22. Kellgren JH. Deep pain sensibility. *Lancet* 1949;1:943-949.
23. Kellgren JH. Observations on referred pain arising from muscle. *Clin Sci* 1938;3:175-190.
24. Kellgren JH. A preliminary account of referred pains arising from muscle. *British Med J* 1938;1:325-327.
25. Simons DG. Cardiology and myofascial trigger points: Janet G Travell's contribution. *Tex Heart Inst J* 2003;30(1):3-7.
26. Travell J. Basis for the multiple uses of local block of somatic trigger areas (procaine infiltration and ethyl chloride spray). *Miss Valley Med* 1949;71:13-22.
27. Travell J, Bobb AL. Mechanism of relief of pain in sprains by local injection techniques. *Fed Proc* 1947;6:378.
28. Travell JG, Rinzler S, Herman M. Pain and disability of the shoulder and arm: Treatment by intramuscular infiltration with procaine hydrochloride. *JAMA* 1942;120:417-422.
29. Travell JG, Rinzler SH. The myofascial genesis of pain. *Postgrad Med* 1952;11:425-434.
30. Chaitow L, DeLany J. Neuromuscular techniques in orthopedics. *Techniques in Orthopedics* 2003;18(1):74-86.
31. Good MG. Five hundred cases of myalgia in the British army. *Ann Rheum Dis* 1942;3:118-138.
32. Good MG. The role of skeletal muscle in the pathogenesis of diseases. *Acta Medica Scand* 1950;138:285-292.
33. Gutstein M. Common rheumatism and physiotherapy. *Br J Phys Med* 1940;3:46-50.
34. Gutstein M. Diagnosis and treatment of muscular rheumatism. *Br J Phys Med* 1938;1:302-321.
35. Kelly M. The nature of fibrositis: I. The myalgic lesion and its secondary effects: A reflex theory. *Ann Rheum Dis* 1945;5:1-7.
36. Kelly M. The nature of fibrositis: II. A study of the causation of the myalgic lesion (rheumatic, traumatic, infective). *Ann Rheum Dis* 1946;5:69-77.
37. Kelly M. The relief of facial pain by procaine (Novocaine) injections. *J Am Geriatr Soc* 1963;11:586-596.
38. Kelly M. The treatment of fibrositis and allied disorders by local anesthesia. *Med J Aust* 1941;1:294-298.
39. Schneider M, Cohen J, Laws S. *The Collected Writings of Nimmo & Vannerson: Pioneers of Chiropractic Trigger Point Therapy*. Pittsburgh, PA: Schneider, 2001.
40. Cohen JH, Gibbons RW. Raymond L. Nimmo and the evolution of trigger point therapy, 1929-1986. *J Manipulative Physiol Ther* 1998;21:167-172.
41. National Board of Chiropractic Examiners. *Chiropractic Treatment Procedures*. Greeley, CO: NBCE, 1993.
42. Mennell J. Spray-stretch for the relief of pain from muscle spasm and myofascial trigger points. *J Am Podiatry Assoc* 1976;66:873-876.
43. Mennell J. Myofascial trigger points as a cause of headaches. *J Manipulative Physiol Ther* 1989;12:308-313.
44. Travell W, Travell JG. Technic for reduction and ambulatory treatment of sacroiliac displacement. *Arch Phys Ther* 1942;23:222-232.
45. Paris SV. In the best interests of the patient. *Phys Ther* 2006;86:1541-1553.
46. Travell JG, Simons DG. *Myofascial Pain and Dysfunction: The Trigger Point Manual*. Vol 2. Baltimore, MD: Williams & Wilkins, 1992.
47. Travell JG, Simons DG. *Myofascial Pain and Dysfunction: The Trigger Point Manual*. Vol 1. Baltimore, MD: Williams & Wilkins, 1983.
48. Dejung B, Gröbli C, Colla F, Weissmann R. *Triggerpunkttherapie* [German, Trigger Point Therapy]. Bern, Switzerland: Hans Huber, 2003.
49. Ferguson LW, Gerwin R. *Clinical Mastery in the Treatment of Myofascial Pain*. Philadelphia, PA: Lippincott Williams & Wilkins, 2005.
50. Kostopoulos D, Rizopoulos K. *The Manual of Trigger Point and Myofascial Therapy*. Thorofare, NJ: Slack, 2001.
51. Prateepavanich P. *Myofascial Pain Syndrome: A Common Problem in Clinical Practice*. Bangkok, Thailand: Ammarind, 1999.
52. Rachlin ES, Rachlin IS. *Myofascial Pain and Fibromyalgia: Trigger Point Management*. St Louis, MO: Mosby, 2002.
53. Cardinal S. *Points Détente et Acupuncture: Approche Neurophysiologique* [French, Trigger Points and Acupuncture: A Neurophysiological Approach]. Montreal, Canada: Centre Collégial de Développement de Matériel Didactique, 2004.
54. Jonckheere PDM. *Spieren en Dysfuncties: Triggerpunten, Basisprincipes van de Myofasciale Therapie* [Dutch, Muscles and Dysfunctions: Trigger Points, Basic Principles of Myofascial Therapy]. Brussels, Belgium: Satas, 1993.
55. Lucas KR, Polus BI, Rich PS. Latent myofascial trigger points: Their effect on muscle activation and movement efficiency. *J Bodywork Mov Ther* 2004;8:160-166.
56. Weissmann RD. Überlegungen zur Biomechanik in der Myofaszialen Triggerpunkttherapie [German, Considerations with regard to the biomechanics related to myofascial trigger point therapy]. *Physiotherapie* 2000;35(10):13-21.
57. Vecchiet L, Giamberardino MA, De Bigontina P. Comparative sensory evaluation of parietal tissues in painful and nonpainful areas in fibromyalgia and myofascial pain syndrome. In: Gebhart GF, Hammond DL, Jensen TS, eds. *Proceedings of the 7th World Congress on Pain (Progress in Pain Research and Management)*. Seattle, WA: IASP Press, 1994:177-185.
58. Vecchiet L, Giamberardino MA, Dragani L. Latent myofascial trigger points: Changes in muscular and subcutaneous pain thresholds at trigger point and target level. *J Manual Medicine* 1990;5:151-154.
59. Vecchiet L, Pizzigallo E, Iezzi S, Affaitati G, Vecchiet J, Giamberardino MA. Differentiation of sensitivity in different tissues and its clinical significance. *J Musculoskeletal Pain* 1998;6:33-45.
60. Dommerholt J. Persistent myalgia following whiplash. *Curr Pain Headache Rep* 2005;9:326-330.
61. Ge HY, Fernández-de-las-Peñas C, Arendt-Nielsen L. Sympathetic facilitation of hyperalgesia evoked from myofascial tender and trigger points in patients with unilateral shoulder pain. *Clin Neurophysiol* 2006;117:1545-1550.
62. Lidbeck J. Central hyperexcitability in chronic musculoskeletal pain: A conceptual breakthrough with multiple clinical implications. *Pain Res Manag* 2002;7(2):81-92.
63. Munglani R. Neurobiological mechanisms underlying chronic whiplash associated pain: The peripheral maintenance of central sensitization. *J Musculoskeletal Pain* 2000;8:169-178.
64. Dommerholt J, Issa T. Differential diagnosis: Myofascial pain. In: Chaitow L, ed. *Fibromyalgia Syndrome: A Practitioner's Guide to Treatment*. Edinburgh, UK: Churchill Livingstone, 2003:149-177.
65. Gerwin RD, Shannon S, Hong CZ, Hubbard D, Gevirtz R. Interrater reliability in myofascial trigger point examination. *Pain* 1997;69(1-2):65-73.
66. Sciotti VM, Mittak VL, DiMarco L, Ford LM, Plezbert J, Santipadri E, Wigglesworth J, Ball K. Clinical precision of myofascial trigger point location in the trapezius muscle. *Pain* 2001;93(3):259-266.
67. Franssen JLM. *Handboek Oppervlakte Elektromyografie* [Dutch, Manual of Surface Electromyography]. Utrecht, The Netherlands: De Tijdstroom, 1995.
68. Janda V. Muscle spasm: A proposed procedure for differential diagnosis. *J Manual Med* 1991;6:136-139.
69. Mense S. Pathophysiologic basis of muscle pain syndromes. In: Fischer AA, ed. *Myofascial Pain: Update in Diagnosis and Treatment*. Philadelphia, PA: WB Saunders Company, 1997:23-53.
70. Headley BJ. Evaluation and treatment of myofascial pain syndrome utilizing biofeedback. In: Cram JR, ed. *Clinical Electromyography for Surface Recordings*. Nevada City, NV: Clinical Resources, 1990:235-254.
71. Headley BJ. Chronic pain management. In: O'Sullivan SB, Schmitz TS, eds. *Physical Rehabilitation: Assessment and Treatment*. Philadelphia, PA: FA Davis Company, 1994:577-600.
72. Gerwin RD, Duranleau D. Ultrasound identification of the myofascial trigger point. *Muscle Nerve* 1997;20:767-768.
73. Hong CZ. Persistence of local twitch response with loss of conduction to and from the spinal cord. *Arch Phys Med Rehabil* 1994;75:12-16.
74. Hong C-Z, Torigoe Y. Electrophysiological characteristics of localized twitch responses in responsive taut bands of rabbit skeletal muscle. *J Musculoskeletal Pain* 1994;2:17-43.
75. Hong C-Z, Yu J. Spontaneous electrical activity of rabbit trigger spot after transection of spinal cord and peripheral nerve. *J Musculoskeletal Pain* 1998;6(4):45-58.
76. Fricton JR, Auvinen MD, Dykstra D, Schiffman E. Myofascial pain syndrome: Electromyographic changes associated with local twitch response. *Arch Phys Med Rehabil* 1985;66:314-317.
77. Simons DG, Dexter JR. Comparison of local twitch responses elicited by palpation and needling of myofascial trigger points. *J Musculoskeletal Pain* 1995;3:49-61.
78. Wang F, Audette J. Electrophysiological characteristics of the local twitch response with active myofascial pain of neck compared with a control group with latent trigger points. *Am J Phys Med Rehabil* 2000;79:203.
79. Audette JF, Wang F, Smith H. Bilateral activation of motor unit potentials with unilateral needle stimulation of active myofascial trigger points. *Am J Phys Med Rehabil* 2004;83:368-374; quiz 375-377,389.
80. Dommerholt J. Dry needling in orthopedic physical therapy practice. *Orthop Phys Ther Pract* 2004;16(3):15-20.
81. Hong C-Z. Lidocaine injection versus dry needling to myofascial trigger point: The importance of the local twitch response. *Am J Phys Med Rehabil* 1994;73:256-263.
82. Gerwin RD, Dommerholt J. Treatment of myofascial pain syndromes. In: Boswell MV, Cole BE, eds. *Weiner's Pain Management: A Practical Guide for Clinicians*. Boca Raton, FL: CRC Press, 2006:477-492.
83. Vecchiet L, Dragani L, De Bigontina P, Obletter G, Giamberardino MA. Experimental referred pain and hyperalgesia from muscles in humans. In: Vecchiet L, et al, eds. *New Trends in Referred Pain and Hyperalgesia*. Amsterdam, The Netherlands: Elsevier Science, 1993:239-249.
84. Vecchiet L, Giamberardino MA. Referred pain: Clinical significance, pathophysiology and treatment. In: Fischer AA, ed. *Myofascial Pain: Update in Diagnosis and Treatment*. Philadelphia, PA: WB Saunders Company, 1997:119-136.
85. Mense S. The pathogenesis of muscle pain. *Curr Pain Headache Rep* 2003;7:419-425.
86. Mense S. Referral of muscle pain: New aspects. *Amer Pain Soc J* 1994;3:1-9.
87. Fernández-de-las-Peñas C, Cuadrado ML, Gerwin RD, Pareja JA. Referred pain from the trochlear region in tension-type headache: A myofascial trigger point from the superior oblique muscle. *Headache* 2005;45:731-737.
88. Fernández-de-las-Peñas C, Cuadrado ML, Gerwin RD, Pareja JA. Myofascial disorders in the trochlear region in unilateral migraine: A possible initiating or perpetuating factor. *Clin J Pain* 2006;22:548-553.
89. Hwang M, Kang YK, Kim DH. Referred pain pattern of the pronator quadratus muscle. *Pain* 2005;116:238-242.
90. Hwang M, Kang YK, Shin JY, Kim DH. Referred pain pattern of the abductor pollicis longus muscle. *Am J Phys Med Rehabil* 2005;84:593-597.
91. Hong CZ, Kuan TS, Chen JT, Chen SM. Referred pain elicited by palpation and by needling of myofascial trigger points: A comparison. *Arch Phys Med Rehabil* 1997;78:957-960.
92. Dwyer A, Aprill C, Bogduk N. Cervical zygapophyseal joint pain patterns I: A study in normal volunteers. *Spine* 1990;15:453-457.
93. Giamberardino MA, Vecchiet L. Visceral pain, referred hyperalgesia and outcome: New concepts. *Eur J Anaesthesiol (Suppl)* 1995;10:61-66.
94. Scudds RA, Landry M, Birmingham T, Buchan J, Griffin K. The frequency of referred signs from muscle pressure in normal healthy subjects (abstract). *J Musculoskeletal Pain* 1995;3(Suppl 1):99.
95. Torebjörk HE, Ochoa JL, Schady W. Referred pain from intraneural stimulation of muscle fascicles in the median nerve. *Pain* 1984;18:145-156.
96. Gibson W, Arendt-Nielsen L, Graven-Nielsen T. Referred pain and hyperalgesia in human tendon and muscle belly tissue. *Pain* 2006;120(1-2):113-123.
97. Gibson W, Arendt-Nielsen L, Graven-Nielsen T. Delayed onset muscle soreness at tendon-bone junction and muscle tissue is associated with facilitated referred pain. *Exp Brain Res* 2006 (In press).
98. Hendler NH, Kozikowski JG. Overlooked physical diagnoses in chronic pain patients involved in litigation. *Psychosomatics* 1993;34:494-501.
99. Weiner DK, Sakamoto S, Perera S, Breuer P. Chronic low back pain in older adults: Prevalence, reliability, and validity of physical examination findings. *J Am Geriatr Soc* 2006;54:11-20.
100. Calandre EP, Hidalgo J, Garcia-Leiva JM, Rico-Villademoros F. Trigger point evaluation in migraine patients: An indication of peripheral sensitization linked to migraine predisposition? *Eur J Neurol* 2006;13:244-249.
101. Headache Classification Subcommittee of the International Headache Society. The international classification of headache disorders. *Cephalalgia* 2004;24(Suppl 1):9-160.
102. Fricton JR, Kroening R, Haley D, Siegert R. Myofascial pain syndrome of the head and neck: A review of clinical characteristics of 164 patients. *Oral Surg Oral Med Oral Pathol* 1985;60:615-623.
103. Gerwin R. A study of 96 subjects examined both for fibromyalgia and myofascial pain (abstract). *J Musculoskeletal Pain* 1995;3(Suppl 1):121.
104. Rosomoff HL, Fishbain DA, Goldberg N, Rosomoff RS. Myofascial findings in patients with chronic intractable benign pain of the back and neck. *Pain Management* 1989;3:114-118.
105. Skootsky SA, Jaeger B, Oye RK. Prevalence of myofascial pain in general internal medicine practice. *West J Med* 1989;151:157-160.
106. Chaiamnuay P, Darmawan J, Muirden KD, Assawatanabodee P. Epidemiology of rheumatic disease in rural Thailand: A WHO-ILAR COPCORD study. Community Oriented Programme for the Control of Rheumatic Disease. *J Rheumatol* 1998;25:1382-1387.
107. Graff-Radford B. Myofascial trigger points: Their importance and diagnosis in the dental office. *J Dent Assoc S Afr* 1984;39:249-253.
108. Bajaj P, Bajaj P, Graven-Nielsen T, Arendt-Nielsen L. Trigger points in patients with lower limb osteoarthritis. *J Musculoskeletal Pain* 2001;9(3):17-33.
109. Hsueh TC, Yu S, Kuan TS, Hong C-Z. Association of active myofascial trigger points and cervical disc lesions. *J Formosa Med Assoc* 1998;97(3):174-180.
110. Wang C-F, Chen M, Lin M-T, Kuan T-S, Hong C-Z. Teres minor tendinitis manifested with chronic myofascial pain syndrome in the scapular muscles: A case report. *J Musculoskeletal Pain* 2006;14(1):39-43.
111. Fricton JR. Etiology and management of masticatory myofascial pain. *J Musculoskeletal Pain* 1999;7(1/2):143-160.
112. Teachey WS. Otolaryngic myofascial pain syndromes. *Curr Pain Headache Rep* 2004;8:457-462.
113. Dommerholt J. El síndrome de dolor miofascial en la región craneomandibular [Spanish, Myofascial pain syndrome in the craniomandibular region]. In: Padrós Serrat E, ed. *Bases diagnósticas, terapéuticas y posturales del funcionalismo craniomaxil*. Madrid, Spain: Ripano, 2006:564-581.
114. Hesse J, Mogelvang B, Simonsen H. Acupuncture versus metoprolol in migraine prophylaxis: A randomized trial of trigger point inactivation. *J Intern Med* 1994;235:451-456.
115. Skubick DL, Clasby R, Donaldson CC, Marshall WM. Carpal tunnel syndrome as an expression of muscular dysfunction in the neck. *J Occupational Rehab* 1993;3:31-43.
116. Treaster D, Marras WS, Burr D, Sheedy JE, Hart D. Myofascial trigger point development from visual and postural stressors during computer work. *J Electromyogr Kinesiol* 2006;16:115-124.
117. Baker BA. The muscle trigger: Evidence of overload injury. *J Neurol Orthop Med Surg* 1986;7:35-44.
118. Fruth SJ. Differential diagnosis and treatment in a patient with posterior upper thoracic pain. *Phys Ther* 2006;86:254-268.
119. Doggweiler-Wiygul R. Urologic myofascial pain syndromes. *Curr Pain Headache Rep* 2004;8:445-451.
120. Jarrell J. Myofascial dysfunction in the pelvis. *Curr Pain Headache Rep* 2004;8:452-456.
121. Jarrell JF, Vilos GA, Allaire C, Burgess S, Fortin C, Gerwin R, Lapensee L, Lea RH, Leyland NA, Martyn P, Shenassa H, Taenzer P, Abu-Rafea B. Consensus guidelines for the management of chronic pelvic pain. *J Obstet Gynaecol Can* 2005;27:869-887.
122. Weiss JM. Pelvic floor myofascial trigger points: Manual therapy for interstitial cystitis and the urgency-frequency syndrome. *J Urol* 2001;166:2226-2231.
123. Dommerholt J. Muscle pain syndromes. In: Cantu RI, Grodin AJ, eds. *Myofascial Manipulation*. Gaithersburg, MD: Aspen, 2001:93-140.
124. Weiner DK, Schmader KE. Postherpetic pain: More than sensory neuralgia? *Pain Med* 2006;7:243-249; discussion 250.
125. Chen SM, Chen JT, Kuan TS, Hong C-Z. Myofascial trigger points in intercostal muscles secondary to herpes zoster infection of the intercostal nerve. *Arch Phys Med Rehabil* 1998;79:336-338.
126. Dommerholt J. Complex regional pain syndrome. Part 1: History, diagnostic criteria and etiology. *J Bodywork Mov Ther* 2004;8:167-177.
127. Rashiq S, Galer BS. Proximal myofascial dysfunction in complex regional pain syndrome: A retrospective prevalence study. *Clin J Pain* 1999;15:151-153.
128. Prateepavanich P, Kupniratsaikul VC. The relationship between myofascial trigger points of gastrocnemius muscle and nocturnal calf cramps. *J Med Assoc Thailand* 1999;82:451-459.
129. Kern KU, Martin C, Scheicher S, Müller H. Auslösung von Phantomschmerzen und -sensationen durch muskuläre Stumpftriggerpunkte nach Beinamputationen [German, Referred phantom pain and sensations from muscular amputation stump trigger points after leg amputation]. *Schmerz* 2006;20:300-306.
130. Kern U, Martin C, Scheicher S, Müller H. Does botulinum toxin A make prosthesis use easier for amputees? *J Rehabil Med* 2004;36:238-239.
131. Longbottom J. A case report of postulated "Barré Liéou syndrome". *Acupunct Med* 2005;23:34-38.
132. Stellon A. Neurogenic pruritus: An unrecognized problem? A retrospective case series of treatment by acupuncture. *Acupunct Med* 2002;20:186-190.
133. Fernández-de-las-Peñas C, Fernández-Carnero J, Miangolarra-Page JC. Musculoskeletal disorders in mechanical neck pain: Myofascial trigger points versus cervical joint dysfunction. *J Musculoskeletal Pain* 2005;13(1):27-35.
134. Hagg GM. Ny förklaringsmodell för muskelskador vid statisk belastning i skuldra och nacke [Swedish, A new explanatory model for muscle damage as a result of static loads in the neck and shoulder]. *Arbete Människa Miljö* 1988;4:260-262.
135. Henneman E, Somjen G, Carpenter DO. Excitability and inhibitibility of motoneurons of different sizes. *J Neurophysiol* 1965;28:599-620.
136. Hagg GM. The Cinderella Hypothesis. In: Johansson H, et al, eds. *Chronic Work-Related Myalgia*. Gävle, Sweden: Gävle University Press, 2003:127-132.
137. Armstrong RB. Initial events in exercise-induced muscular injury. *Med Sci Sports Exerc* 1990;22:429-435.
138. Hagg GM. Human muscle fibre abnormalities related to occupational load. *Eur J Appl Physiol* 2000;83(2-3):159-165.
139. Kadi F, Hagg G, Hakansson R, Holmner S, Butler-Browne GS, Thornell LE. Structural changes in male trapezius muscle with work-related myalgia. *Acta Neuropathol (Berl)* 1998;95:352-360.
140. Kadi F, Waling K, Ahlgren C, Sundelin G, Holmner S, Butler-Browne GS, Thornell LE. Pathological mechanisms implicated in localized female trapezius myalgia. *Pain* 1998;78:191-196.
141. Larsson B, Bjork J, Kadi F, Lindman R, Gerdle B. Blood supply and oxidative metabolism in muscle biopsies of female cleaners with and without myalgia. *Clin J Pain* 2004;20:440-446.
142. Henriksson KG, Bengtsson A, Lindman R, Thornell LE. Morphological changes in muscle in fibromyalgia and chronic shoulder myalgia. In: Værøy H, Merskey H, eds. *Progress in Fibromyalgia and Myofascial Pain*. Amsterdam, The Netherlands: Elsevier, 1993:61-73.
143. Lexell J, Jarvis J, Downham D, Salmons S. Stimulation-induced damage in rabbit fast-twitch skeletal muscles: A quantitative morphological study of the influence of pattern and frequency. *Cell Tissue Res* 1993;273:357-362.
144. Gissel H. Ca2+ accumulation and cell damage in skeletal muscle during low frequency stimulation. *Eur J Appl Physiol* 2000;83(2-3):175-180.
145. Gissel H, Clausen T. Excitation-induced Ca(2+) influx in rat soleus and EDL muscle: Mechanisms and effects on cellular integrity. *Am J Physiol Regul Integr Comp Physiol* 2000;279:R917-924.
146. Febbraio MA, Pedersen BK. Contraction-induced myokine production and release: Is skeletal muscle an endocrine organ? *Exerc Sport Sci Rev* 2005;33(3):114-119.
147. Pedersen BK, Febbraio M. Muscle-derived interleukin-6: A possible link between skeletal muscle, adipose tissue, liver, and brain. *Brain Behav Immun* 2005;19:371-376.
148. Forsman M, Birch L, Zhang Q, Kadefors R. Motor-unit recruitment in the trapezius muscle with special reference to coarse arm movements. *J Electromyogr Kinesiol* 2001;11:207-216.
149. Forsman M, Kadefors R, Zhang Q, Birch L, Palmerud G. Motor-unit recruitment in the trapezius muscle during arm movements and in VDU precision work. *Int J Ind Ergon* 1999;24:619-630.
150. Forsman M, Taoda K, Thorn S, Zhang Q. Motor-unit recruitment during long-term isometric and wrist motion contractions: A study concerning muscular pain development in computer operators. *Int J Ind Ergon* 2002;30:237-250.
151. Zennaro D, Laubli T, Krebs D, Klipstein A, Krueger H. Continuous, intermittent and sporadic motor unit activity in the trapezius muscle during prolonged computer work. *J Electromyogr Kinesiol* 2003;13:113-124.
152. Zennaro D, Laubli T, Krebs D, Krueger H, Klipstein A. Trapezius muscle motor unit activity in symptomatic participants during finger tapping using properly and improperly adjusted desks. *Hum Factors* 2004;46:252-266.
153. Andersen JH, Kjørgaard A, Rasmussen K. Myofascial pain in different occupational groups with monotonous repetitive work (abstract). *J Musculoskeletal Pain* 1995;3(Suppl 1):57.
154. Chen S-M, Chen J-T, Kuan T-S, Hong J, Hong C-Z. Decrease in pressure pain thresholds of latent myofascial trigger points in the middle finger extensors immediately after continuous piano practice. *J Musculoskeletal Pain* 2000;8(3):83-92.
155. Otten E. Concepts and models of functional architecture in skeletal muscle. *Exerc Sport Sci Rev* 1988;16:89-137.
156. Sjogaard G, Lundberg U, Kadefors R. The role of muscle activity and mental load in the development of pain and degenerative processes at the muscle cell level during computer work. *Eur J Appl Physiol* 2000;83(2-3):99-105.
157. Sjogaard G, Sogaard K. Muscle injury in repetitive motion disorders. *Clin Orthop* 1998;351:21-31.
158. Simons DG. Understanding effective treatments of myofascial trigger points. *J Bodywork Mov Ther* 2002;6:81-88.
159. Proske U, Morgan DL. Stiffness of cat soleus muscle and tendon during activation of part of muscle. *J Neurophysiol* 1984;52:459-468.
160. Trotter JA. Functional morphology of force transmission in skeletal muscle: A brief review. *Acta Anat (Basel)* 1993;146(4):205-222.
161. Monti RJ, Roy RR, Hodgson JA, Edgerton VR. Transmission of forces within mammalian skeletal muscles. *J Biomech* 1999;32:371-380.
162. Bodine SC, Roy RR, Eldred E, Edgerton VR. Maximal force as a function of anatomical features of motor units in the cat tibialis anterior. *J Neurophysiol* 1987;57:1730-1745.
163. Ounjian M, Roy RR, Eldred E, Garfinkel A, Payne JR, Armstrong A, Toga AW, Edgerton VR. Physiological and developmental implications of motor unit anatomy. *J Neurobiol* 1991;22:547-559.
164 Petit J, Filippi GM, Emonet-Denand F, Hunt CC, Laporte Y. Changes in muscle stiffness produced by motor units of different types in peroneus longus muscle of cat *J Neurophysiol* 1990;63 190-197
165 Petit J, Filippi GM, Goux M, Hunt CC, Laporte Y. Effects of tetanic contraction of motor units of similar type on the initial stiffness to ramp stretch of the cat peroneus longus muscle *J Neurophysiol* 1990;64 1724-1732
166 Altringham JD, Bottinelli R. The descending limb of the sarcomere length-force relation in single muscle fibres of the frog *J Muscle Res Cell Motil* 1985;6 585-600
167 Street SF. Lateral transmission of tension in frog myofibers: A myofibrillar network and transverse cytoskeletal connections are possible transmitters *J Cell Physiol* 1983;114 346-364
168 Denoth J, Stussi E, Csucs G, Danuser G. Single muscle fiber contraction is dictated by inter-sarcomere dynamics *J Theor Biol* 2002;216(1) 101-122
169 Dommerholt J, Roysen MW, Whyte-Ferguson L. Neck pain and dysfunction following whiplash In Whyte-Ferguson L, Gerwin RD, eds *Clinical Mastery of Myofascial Pain Syndrome* Baltimore, MD: Lippincott, Williams & Wilkins, 2005 57-89
170 Jull GA. Deep cervical flexor muscle dysfunction in whiplash *J Musculoskeletal Pain* 2000;8(1/2) 143-154
171 Kumar S, Narayan Y, Amell T. Analysis of low velocity frontal impacts *Clin Biomech* 2003;18 694-703
172 Sterling M, Jull G, Vicenzino B, Kenardy J, Darnell R. Development of motor system dysfunction following whiplash injury *Pain* 2003;103(1-2) 65-73
173 Sterling M, Jull G, Vicenzino B, Kenardy J, Darnell R. Physical and psychological factors predict outcome following whiplash injury *Pain* 2005;114(1-2) 141-148
174 Schuller E, Eisenmenger W, Beier G. Whiplash injury in low speed car accidents *J Musculoskeletal Pain* 2000;8(1/2) 55-67
175 Simons DG. Triggerpunkte und Myogelose [German, Trigger points and myogeloses] *Manuelle Medizin* 1997;35 290-294
176 Gerwin RD, Dommerholt J. Myofascial trigger points in chronic cervical whiplash syndrome *J Musculoskeletal Pain* 1998;6(Suppl 2) 28
177 Fernández-de-las-Penas C, Fernández-Carnero J, Palomequel del-Cerro L, Miangolarra-Page JC. Manipulative treatment vs conventional physiotherapy treatment in whiplash injury: A randomized controlled trial *J Whiplash Rel Disord* 2004;3(2) 73-90
178 Fernández-de-las-Penas C, Palomeque-del-Cerro L, Fernández-Carnero J. Manual treatment of post-whiplash injury *J Bodywork Mov Ther* 2005;9 109-119
179 Simons DG, Travell J. Myofascial trigger points: A possible explanation *Pain* 1981;10 106-109
180 Sterling M, Jull G, Vicenzino B, Kenardy J. Sensory hypersensitivity occurs soon after whiplash injury and is associated with poor recovery *Pain* 2003;104 509-517
181 Scott D, Jull G, Sterling M. Widespread sensory hypersensitivity is a feature of chronic whiplash-associated disorder but not chronic idiopathic neck pain *Clin J Pain* 2005;21 175-181
182 Vecchiet L, Vecchiet J, Bellomo R, Giaberardino MA. Muscle pain from physical exercise *J Musculoskeletal Pain* 1999;7(1/2) 43-53
183 Simons DG. Review of enigmatic MTrPs as a common cause of enigmatic musculoskeletal pain and dysfunction *J Electromyogr Kinesiol* 2004;14 95-107
184 Gerwin RD, Dommerholt J, Shah J. An expansion of Simons’ integrated hypothesis of trigger point formation *Curr Pain Headache Rep* 2004;8 468-475
185 Newham DJ, Jones DA, Clarkson PM. Repeated high-force eccentric exercise: Effects on muscle pain and damage *J Appl Physiol* 1987;63 1381-1386
186 Friden J, Lieber RL Segmental muscle fiber lesions after repetitive eccentric contractions *Cell Tissue Res* 1998,293 165-171
187 Stauber WT, Clarkson PM, Fritz VK, Evans WJ Extracellular matrix disruption and pain after eccentric muscle action *J Appl Physiol* 1990,69 868-874
188 Graven-Nielsen T, Arendt-Nielsen L Induction and assessment of muscle pain, referred pain, and muscular hyperalgesia *Curr Pain Headache Rep* 2003,7 443-451
189 Lieber RL, Shah S, Friden J Cytoskeletal disruption after eccentric contraction-induced muscle injury *Clin Orthop* 2002,403 590-599
190 Lieber RL, Thornell LE, Fridén J Muscle cytoskeletal disruption occurs within the first 15 min of cyclic eccentric contraction *J Appl Physiol* 1996,80 278-284
191 Barash IA, Peters D, Fridén J, Luiz GJ, Lieber RL Desmin cytoskeletal modifications after a bout of eccentric exercise in the rat *Am J Physiol Regul Integr Comp Physiol* 2002,283 R958-R963
192 Thompson JL, Balog EM, Fitts RH, Riley DA Five myofibrillar lesion types in eccentrically challenged, unloaded rat adductor longus muscle: A test model *Anat Rec* 1999,254 39-52
193 Lieber RL, Fridén J Mechanisms of muscle injury gleaned from animal models *Am J Phys Med Rehabil* 2002,81(11 Suppl) S70-S79
194 Peters D, Barash IA, Burdi M, Yuan PS, Mathew L, Friden J, Lieber RL Asynchronous functional, cellular and transcriptional changes after a bout of eccentric exercise in the rat *J Physiol* 2003,553(Pt 3) 947-957
195 Bowers EJ, Morgan DL, Proske U Damage to the human quadriceps muscle from eccentric exercise and the training effect *J Sports Sci* 2004,22(11-12) 1005-1014
196 Byrne C, Twist C, Eston R Neuromuscular function after exercise-induced muscle damage: Theoretical and applied implications *Sports Med* 2004,34(1) 49-69
197 Hamlin MJ, Quigley BM Quadriceps concentric and eccentric exercise: 2. Differences in muscle strength, fatigue and EMG activity in eccentrically-exercised sore and non-sore muscles *J Sci Med Sport* 2001,4(1) 104-115
198 Pearce AJ, Sacco P, Byrnes ML, Thickbroom GW, Mastaglia FL The effects of eccentric exercise on neuromuscular function of the biceps brachii *J Sci Med Sport* 1998,1(4) 236-244
199 Shah JP, Phillips TM, Danoff JV, Gerber LH An in-vivo microanalytical technique for measuring the local biochemical milieu of human skeletal muscle *J Appl Physiol* 2005,99 1980-1987
200 Itoh K, Okada K, Kawakita K A proposed experimental model of myofascial trigger points in human muscle after slow eccentric exercise *Acupunct Med* 2004,22(1) 2-12, discussion 12-13
201 Brückle W, Suckfull M, Fleckenstein W, Weiss C, Muller W Gewebe-pO2-Messung in der verspannten Rückenmuskulatur (m. erector spinae) [German. Tissue pO2 in hypertonic back muscles] *Z Rheumatol* 1990,49 208-216
202 McCleskey EW, Gold MS Ion channels of nociception *Annu Rev Physiol* 1999,61 835-856
203 Sluka KA, Kalra A, Moore SA Unilateral intramuscular injections of acidic saline produce a bilateral, long-lasting hyperalgesia *Muscle Nerve* 2001,24 37-46
204 Sluka KA, Price MP, Breese NM, Strucky CL, Wemmie JA, Welsh MJ Chronic hyperalgesia induced by repeated acid injections in muscle is abolished by the loss of ASIC3, but not ASIC1 *Pain* 2003,106 229-239
205 Weeks VD, Travell J How to give painless injections In *AMA Scientific Exhibits* New York, NY: Grune & Stratton, 1957 318-322
206 Hubbard DR, Berkoff GM Myofascial trigger points show spontaneous needle EMG activity *Spine* 1993,18 1803-1807
207 Couppé C, Midttun A, Hilden J, Jørgensen U, Oxholm P, Fuglsang-Frederiksen A Spontaneous needle electromyographic activity in myofascial trigger points in the infraspinatus muscle: A blinded assessment *J Musculoskeletal Pain* 2001,9(3) 7-17
208 Simons DG Do endplate noise and spikes arise from normal motor endplates? *Am J Phys Med Rehabil* 2001,80 134-140
209 Simons DG, Hong C-Z, Simons LS Endplate potentials are common to midfiber myofascial trigger points *Am J Phys Med Rehabil* 2002,81 212-222
210 Macgregor J, Graf von Schweinitz D Needle electromyographic activity of myofascial trigger points and control sites in equine cleidobrachialis muscle: An observational study *Acupunct Med* 2006,24(2) 61-70
211 Mense S Neurobiological basis for the use of botulinum toxin in pain therapy *J Neurol* 2004,251(Suppl 1) 11-7
212 De Paiva A, Meunier FA, Molgo J, Aoki KR, Dolly JO Functional repair of motor endplates after botulinum neurotoxin type A poisoning: Biphasic switch of synaptic activity between nerve sprouts and their parent terminals *Proc Natl Acad Sci USA* 1999,96 3200-3205
213 Chen BM, Grinnell AD Kinetics, Ca+ dependence, and biophysical properties of integrin-mediated mechanical modulation of transmitter release from frog motor nerve terminals *J Neurosci* 1997,17 904-916
214 Grinnell AD, Chen BM, Kashani A, Lin J, Suzuki K, Kidokoro Y The role of integrins in the modulation of neurotransmitter release from motor nerve terminals by stretch and hypertonicity *J Neurocytol* 2003,32(5-8) 489-503
215 Kashani AH, Chen BM, Grinnell AD Hypertonic enhancement of transmitter release from frog motor nerve terminals Ca2+ independence and role of integrins *J Physiol* 2001,530(Pt 2) 243-252
216 Wang K, Yu L Emerging concepts of muscle contraction and clinical implications for myofascial pain syndrome (abstract) In *Focus on Pain* Mesa, AZ Janet G Travell, MD Seminar Series, 2000
217 Bukharueva EA, Salakhutdinov RI, Vyskocil F, Nikolsky EE Spontaneous quantal and non-quantal release of acetylcholine at mouse endplate during onset of hypoxia *Physiol Res* 2005,54 251-255
218 Hoheisel U, Mense S, Simons D, Yu X-M Appearance of new receptive fields in rat dorsal horn neurons following noxious stimulation of skeletal muscle A model for referral of muscle pain? *Neurosci Lett* 1993,153 9-12
219 Simons DG, Stolov WC Microscopic features and transient contraction of palpable bands in canine muscle *Am J Phys Med* 1976,55 65-88
220 Reitinger A, Radner H, Tilscher H, Hanna M, Windisch A, Feigl W Morphologische Untersuchung an Triggerpunkten [German, Morphological investigation of trigger points] *Manuelle Medizin* 1996,34 256-262
221 Windisch A, Reitinger A, Traxler H, Radner H, Neumayer C, Feigl W, Firbas W Morphology and histochemistry of myogelosis *Clin Anat* 1999,12 266-271
222 Mense S, Simons DG, Hoheisel U, Quenzer B Lesions of rat skeletal muscle after local block of acetylcholinesterase and neuromuscular stimulation *J Appl Physiol* 2003,94 2494-2501
223 Pongratz D Neuere Ergebnisse zur Pathogenese Myolaszialer Schmerzsyndrom [German, New findings with regard to the etiology of myofascial pain syndrome] *Nervenheilkunde* 2002,21(1) 35-37
224 Pongratz DE, Spath M Morphologic aspects of muscle pain syndromes In Fischer AA, ed *Myofascial Pain Update in Diagnosis and Treatment* Philadelphia, PA WB Saunders Company, 1997 55-68
225 Garphianova MB The ultrastructure of myogenic trigger points in patients with contracture of mimetic muscles (abstract) *J Musculoskeletal Pain* 1995,3(Suppl 1) 23
226 Glogowsky C, Wallraf J Ein Beitrag zur Klinik und Histologie der Muskelharten (Myogelosen) [German, A contribution on clinical aspects and histology of myogeloses] *Z Orthop* 1951,80 237-268
227 Fassbender HG Morphologie und Pathogenese des Weichteilverrheumatismus [German, Morphology and etiology of soft tissue rheumatism] *Z Rheumaforsch* 1973,32 355-374
228 Hubbard DR Chronic and recurrent muscle pain Pathophysiology and treatment, and review of pharmacologic studies *J Musculoskeletal Pain* 1996,4 123-143
229 Lewis C, Gevirtz R, Hubbard D, Berkoff G Needle trigger point and surface frontal EMG measurements of psychophysiological responses in tension-type headache patients *Biofeedback & Self-Regulation* 1994,3 274-275
230 McNulty WH, Gevirtz RN, Hubbard DR, Berkoff GM Needle electromyographic evaluation of trigger point response to a psychological stressor *Psychophysiology* 1994,31 313-316
231 Banks SL, Jacobs DW, Gevirtz R, Hubbard DR Effects of autogenic relaxation training on electromyographic activity in active myofascial trigger points *J Musculoskeletal Pain* 1998,6(4) 23-32
232 Chen JT, Chen SM, Kuan TS, Chung KC, Hong C-Z Phenotolamine effect on the spontaneous electrical activity of active loci in a myofascial trigger spot of rabbit skeletal muscle *Arch Phys Med Rehabil* 1998,79 790-794
233 Chen SM, Chen JT, Kuan TS, Hong C-Z Effect of neuromuscular blocking agent on the spontaneous activity of active loci in a myofascial trigger spot of rabbit skeletal muscle *J Musculoskeletal Pain* 1998,6(Suppl 2) 25
234 Bowman WC, Marshall IG, Gibb AJ, Harborne AJ Feedback control of transmitter release at the neuromuscular junction *Trends Pharmacol Sci* 1988,9(1) 16-20
235 Cunha FQ, Lorenzetti BB, Poole S, Ferreira SH Interleukin-8 as a mediator of sympathetic pain *Br J Pharmacol* 1991,104 765-767
236 Verri WA, Jr., Cunha TM, Parada CA, Poole S, Cunha FQ, Ferreira SH Hypernociceptive role of cytokines and chemokines Targets for analgesic drug development? *Pharmacol Ther* 2006 (In press)
237 Poole S, de Queiroz Cunha F, Ferreira SH Hyperalgesia from subcutaneous cytokines In Watkins LR, Maier SF, eds *Cytokines and Pain* Basel, Switzerland: Birkhaueser, 1999 59-87
238 Fernandez HL, Hodges-Savola CA Physiological regulation of G4 AChE in fast-twitch muscle Effects of exercise and CGRP *J Appl Physiol* 1996,80 357-362
239 Hodges-Savola CA, Fernandez HL A role for calcitonin generated peptide in the regulation of rat skeletal muscle G4 acetylcholinesterase *Neurosci Lett* 1995,190(2) 117-20
240 Fernandez-de-Ias-Penas C, Alonso-Blanco C, Cuadrado ML, Gerwin RD, Pareja JA Trigger points in the suboccipital muscles and forward head posture in tension-type headache *Headache* 2006,46 454-460
241 Gerwin RD Factores que promueven la persistencia de mialgia en el síndrome de dolor miofascial y en la fibromyalgia [Spanish, Factors that promote the continued existence of myalgia in myofascial pain syndrome and fibromyalgia] Fisioterapia 2005,27(2) 76-86
242 Gerwin RD A review of myofascial pain and fibromyalgia Factors that promote their persistence Acupunct Med 2005,23(3) 121-134
243 Andres E, Loukili NH, Noel E, Kaltenbach G, Abdelgheni MB, Perrin AE, Noblet-Dick M, Maloisel F, Schlienger JL, Blckle JF Vitamin B12 (cobalamin) deficiency in elderly patients CMAJ 2004,171 251-259
244 Pruthi RK, Teffen A Pernicious anemia revisited Mayo Clin Proc 1994,69 144-150
245 Plotnikoff GA, Quigley JM Prevalence of severe hypovitaminosis D in patients with persistent, nonspecific musculoskeletal pain Mayo Clin Proc 2003,78 1463-1470
246 Bogduk N, Simons DG Neck pain Joint pain or trigger points In Vlaeyen H, Merskey H, eds Progress in Fibromyalgia and Myofascial Pain Amsterdam, The Netherlands: Elsevier, 1993 267-273
247 Padamsee M, Mehta N, White GE Trigger point injection A neglected modality in the treatment of TMJ dysfunction J Pedod 1987,12(1) 72-92
248 Linton SJ A review of psychological risk factors in back and neck pain Spine 2000,25 1148-1156
249 Vlaeyen JW, Linton SJ Fear-avoidance and its consequences in chronic musculoskeletal pain A state of the art Pain 2000,85 317332
250 Menefee LA, Cohen MJ, Anderson WR, Doghramji K, Frank ED, Lee H Sleep disturbance and nonmalignant chronic pain A comprehensive review of the literature Pain Med 2000,1 156-172
251 Bauermeister W Diagnose und Therapie des Myofaszialen Triggerpunkt Syndroms durch Lokalisierung und Stimulation sensibilisierter Nozizeptoren mit lokussierten elektrohydraulische Stoßwellen [German, Diagnosis and therapy of myofascial trigger point symptoms by localization and stimulation of sensitized nociceptors with focused ultrasound shockwaves] Medizinisch-Orthopadische Technik 2005,5 65-74
252 Muller-Ehrenberg H, Licht G Diagnosis and therapy of myofascial pain syndrome with focused shock waves (ESWT) Medizinisch-Orthopadische Technik 2005,5 1-6
INTERRATER RELIABILITY OF PALPATION OF MYOFASCIAL TRIGGER POINTS IN THREE SHOULDER MUSCLES
Carel Bron
Jo Franssen
Michel Wensing
Rob A.B. Oostendorp
The Journal of Manual & Manipulative Therapy, Vol. 15 No. 4 (2007), 203-215
Abstract: This observational study included both asymptomatic subjects (n=8) and patients with unilateral or bilateral shoulder pain (n=32). Patient diagnoses provided by the referring medical physicians included subacromial impingement, rotator cuff disease, tendonitis, tendinopathy, and chronic subdeltoid-subacromial bursitis. Three raters bilaterally palpated the infraspinatus, the anterior deltoid, and the biceps brachii muscles for clinical characteristics of a total of 12 myofascial trigger points (MTrPs) as described by Simons et al. The raters were blinded to whether the shoulder of the subject was painful. In this study, the most reliable features of trigger points were the referred pain sensation and the jump sign. Percentage of pair-wise agreement (PA) was $\geq 70\%$ (range 63–93%) in all but 3 instances for the referred pain sensation. For the jump sign, PA was $\geq 70\%$ (range 67–77%) in 21 instances. Finding a nodule in a taut band (PA = 45–90%) and eliciting a local twitch response (PA = 33–100%) were shown to be least reliable. The best agreement about the presence or absence of MTrPs was found for the infraspinatus muscle (PA = 69–80%). This study provides preliminary evidence that MTrP palpation is a reliable and, therefore, potentially useful diagnostic tool in the diagnosis of myofascial pain in patients with non-traumatic shoulder pain.
Shoulder complaints are very common in modern industrial countries. Recent reviews have indicated a one-year prevalence ranging from 4.7 to 46.7%. These reviews have also reported a lifetime prevalence between 6.7 and 66.7%. This wide variation in reported prevalence can be explained by the different definitions used for shoulder complaints and by differences in the age and other characteristics of the various study populations. Because making a specific structure-based diagnosis for patients with shoulder complaints is considered difficult due to the lack of reliable tests for shoulder examination, recent guidelines developed by the Dutch Society of General Practitioners have recommended instead using the term "shoulder complaints" as a working diagnosis. Shoulder complaints have been defined in a similarly non-specific manner as signs and symptoms of pain in the deltoid and upper arm region, and stiffness and restricted movements of the shoulder, often accompanied by limitations in daily activities.
Despite the absence of reliable diagnostic tests to implicate these structures, the currently prevailing assumption is that in non-traumatic shoulder complaints, mostly the anatomical structures in the subacromial space are involved, i.e., the subacromial bursa, the rotator cuff tendons, and the tendon of the long head of the biceps muscle. However, this assumption does not take into account that muscle tissue itself can also give rise to pain in the shoulder region. In our clinical experience, myofascial trigger points (MTrPs) may lead to myofascial pain in the shoulder and upper arm region and contribute to the burden of shoulder complaints.
The term myofascial pain was first introduced by Travell, who described it as "the complex of sensory, motor, and autonomic symptoms caused by myofascial trigger points." An MTrP is a hyperirritable spot in skeletal muscle that is associated with a hypersensitive palpable nodule in a taut band. In addition, the spot is painful on compression and may produce characteristic referred pain, referred tenderness, motor dysfunction, and autonomic phenomena. Two different types of MTrPs have been described: active and latent. Active trigger points are associated with spontaneous complaints of pain. In contrast, latent trigger points do not cause spontaneous pain, but pain may be elicited with manual pressure or with needling of the trigger point. Despite not being spontaneously painful, latent MTrPs have been hypothesized to restrict range of motion and to alter motor recruitment patterns.
As noted above, referred pain is a key characteristic of myofascial pain. Referred pain is felt remote from the site of origin. The area of referred pain may be discontinuous from the site of local pain or it can be segmentally related to the lesion, both of which may pose a serious problem for the correct diagnosis and subsequent appropriate treatment of muscle-related pain. The theoretical model for this phenomenon of referred pain was first proposed by Ruch and later modified by Mense and Hoheisel. Referred pain patterns originating in muscles have been documented using injection of hypertonic saline, electrical stimulation, or pressure on the most sensitive spot in the muscle. In the clinical setting, palpation is the only method capable of diagnosing myofascial pain. Therefore, reliable MTrP palpation is the necessary prerequisite for considering myofascial pain as a valid
diagnosis\textsuperscript{22}. Published interrater studies have reported poor to good reliability for MTrP palpation\textsuperscript{23-29}. However, only one study has included a muscle that could produce shoulder pain. Gerwin et al\textsuperscript{27} reported a percent agreement (PA) of 83% for tenderness in the infraspinatus muscle ($\kappa=0.48$), 83% ($\kappa=0.40$) for the taut band, 59% ($\kappa=0.17$) for the local twitch response, and 89% ($\kappa=0.84$) for the referred pain.
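The two agreement statistics used throughout this literature can be illustrated with a short sketch. The following Python snippet (the function names and ratings are ours, purely hypothetical, not the authors' data) computes the percentage of pair-wise agreement (PA) and Cohen's kappa for two raters scoring a dichotomous feature:

```python
from typing import List

def pairwise_agreement(a: List[bool], b: List[bool]) -> float:
    """Percentage of subjects on which two raters gave the same rating (PA)."""
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a: List[bool], b: List[bool]) -> float:
    """Cohen's kappa for dichotomous ratings: agreement corrected for chance."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    pa_yes, pb_yes = sum(a) / n, sum(b) / n               # each rater's 'yes' rate
    pe = pa_yes * pb_yes + (1 - pa_yes) * (1 - pb_yes)    # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical ratings (True = feature judged present) for ten shoulders:
rater_a = [True, True, False, True, False, False, True, True, False, True]
rater_b = [True, False, False, True, False, True, True, True, False, True]

print(pairwise_agreement(rater_a, rater_b))        # 80.0
print(round(cohens_kappa(rater_a, rater_b), 2))    # 0.58
```

Because kappa corrects observed agreement for chance agreement, the same PA of 83% can correspond to kappa values as different as 0.40 and 0.48, depending on how often each rater scored the feature as present.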
In light of this near absence of data, of the societal impact of shoulder complaints as noted above, and of the potential role of myofascial pain syndrome with regard to shoulder pain, the aim of this study was to determine the interrater reliability of MTrP palpation in three human shoulder muscles deemed by us to be clinically relevant, i.e., the infraspinatus, the anterior deltoid, and the biceps brachii muscles.
**Methods and Materials**
**Subjects**
Subjects were recruited from a consecutive sample of patients with unilateral or bilateral shoulder pain referred by their physician to a physical therapy private practice specializing in the management of persons with neck, shoulder, and upper extremity musculoskeletal disorders. To avoid limited variation within the data set and to control for rater bias, we also included asymptomatic subjects.
All subjects were unacquainted with the raters. Additional inclusion criteria for participation in the study were age between 18 and 75 years and the ability to read and understand the Dutch language. Exclusion criteria were known serious rheumatological, neurological, orthopaedic, or internal diseases, such as adhesive capsulitis, rotator cuff tears, cervical radiculopathy, diabetes mellitus, recent shoulder or neck trauma, or shoulder/upper extremity complaints of uncertain origin as diagnosed by the referring physicians. After reading a brief synopsis of the aim of the study and the test procedure, all subjects signed an informed consent form. The Committee on Research involving Human Subjects of the district Arnhem-Nijmegen approved the study design, the protocols, and the informed consent procedure.
**Raters and Observers**
The raters were three physical therapists: rater A with 29, rater B with 28, and rater C with 16 years of clinical experience. All were employed at the private practice where this study was conducted. The raters had all specialized in the diagnosis and management of patients with musculoskeletal disorders of the neck, shoulder, and upper extremity, and they had 21, 16, and 2 years of experience, respectively, with regard to the diagnosis and management of MTrPs.
The observers were three physical therapists who also had experience in treating patients with myofascial pain. Prior to the study, they were informed by the lead investigator (CB) about the study protocol, and they participated in the training sessions with the raters.
Fig. 1. The localization of trigger points in the infraspinatus, biceps brachii, and the anterior deltoid muscles. The numbers correspond with the sequence of palpation during the test.
Illustrations courtesy of Lifeart/Mediclip, Manual Medicine 1, Version 1.0a, Williams & Wilkins, 1997.
Both raters and observers participated in a total of eight hours of training. During these sessions, they were able to practice their skills, to compare with each other, and to discuss palpation technique, subject positioning, the amount of pressure used by the examiners\textsuperscript{30}, and the location of the MTrPs (Figure 1). Before proceeding with the study, they reached consensus about all aspects of the examination.
**Trigger Point Examination**
Simons et al\textsuperscript{31} documented 11 muscles in total that could refer pain to the frontal or lateral region of the shoulder and arm (Table 1). Based on our clinical observation that these muscles are frequently involved in patients with shoulder pain, we chose to study the infraspinatus, the anterior deltoid, and the biceps brachii. Without providing specific data on prevalence, Simons et al\textsuperscript{31} reported that the infraspinatus is very often involved in shoulder pain. Hong\textsuperscript{32} noted that the deltoid and the biceps brachii could give rise to satellite MTrPs of the infraspinatus muscle. Hsieh\textsuperscript{33} provided evidence for the existence of a key-satellite relation between the infraspinatus muscle and the anterior deltoid muscle. A satellite trigger point may develop in the referral zone of a key MTrP located in the key muscle. It may also develop in an overloaded synergist that is substituting for the muscle that is harboring the key MTrP, in an antagonist countering the increased tension of the key muscle, or in a muscle that is linked apparently only neurogenically to the key MTrP. Sometimes this hierarchy is obvious but it is not always evident. Key and satellite trigger
Table 1. Muscles with a known referred pain pattern to the frontal or lateral region of the shoulder and/or arm\textsuperscript{31}
| Muscle |
|-------------------------------|
| Infraspinatus |
| Deltoid [anterior and middle part] |
| Biceps brachii |
| Supraspinatus |
| Coracobrachialis |
| Latissimus dorsi |
| Scalene |
| Pectoralis major |
| Pectoralis minor |
| Subclavius |
| Sternalis |
points are related to each other; our clinical observations indicate that signs and symptoms related to satellite trigger points diminish when key MTrPs are treated appropriately.
Another reason for our choice of these specific muscles is that all three muscles studied here are part of the same functional unit with all three muscles acting as synergists active during shoulder flexion. Although the infraspinatus muscle is traditionally known as an external rotator, this is only true for the anatomical position. This muscle is one of the rotator cuff muscles that is active during flexion of the upper arm to provide stability of the glenohumeral joint during arm movements\textsuperscript{34,35}.
Although MTrPs may be found anywhere in the muscle belly, we agreed to palpate for their presence only in close proximity to the motor endplate zones. The reason for this choice of location is that Simons et al\textsuperscript{31} have suggested that the primary abnormality responsible for MTrP formation is associated with individual dysfunctional endplates in the endplate zone or motor point.
We bilaterally palpated these three muscles for MTrPs using four of the criteria proposed for the palpatory diagnosis of MTrPs\textsuperscript{31}:
1. Presence of a taut band with a nodule. The rater examined the subject by palpating the muscle perpendicular to the muscle fiber orientation with either a flat palpation (infraspinatus muscle and the anterior deltoid muscle) or a pincer palpation (biceps brachii muscle). When a taut band was identified, the rater palpated along the taut band to locate the nodule. The raters were asked to search for multiple MTrPs in each muscle. The palpatory findings were more important than the exact location of the MTrPs as indicated by Simons et al\textsuperscript{31}.
Fig. 2. The localization of trigger points in the infraspinatus, biceps brachii and the anterior deltoid muscles and the referred pain patterns according to Simons et al\textsuperscript{31}.
$X =$ trigger point
Solid gray shows the essential referred pain zone, present in nearly all patients, while the stippling represents the spillover zone, present in some but not all patients\textsuperscript{31}.
Illustrations courtesy of Lifeart/Mediclip, Manual Medicine 1, Version 1.0a, Williams & Wilkins, 1997.
2. Reported painful sensation during compression in an area consistent with the established referred pain pattern of the involved muscle. While compressing the palpable nodule in the taut band, the subject was asked if he or she felt any pain or any sensation (e.g., tingling or numbness) in an area remote from the compressed point. When the subjects reported referred sensation, they were asked to describe this area. The rater then decided whether this area was comparable to the established referred pain zone (Figure 2).
3. Presence of a visible or palpable local twitch response (LTR) during snapping palpation. The rater quickly rolled the taut band under the fingertip, while examining the skin above the muscle fibers for this characteristic short and rapid movement.
4. Presence of a general pain response during palpation, also known as a jump sign. While compressing the MTrP, the rater carefully examined the subject’s reaction. A positive jump sign was defined as the subject withdrawing from palpation, wincing, or producing any pain-related vocalization.
All four criteria were scored dichotomously:
- Yes, if the rater was certain of the presence of a parameter
- No, if the rater was certain of the absence of a parameter or was unsure of its presence or absence
Examination of the infraspinatus muscle was performed with the subjects seated with the arms hanging down by the side of the body. Examination of the anterior deltoid and biceps brachii muscles was performed with the forearms supported with slight elbow flexion (Figure 3).
The raters were blinded to subject status; i.e., the subjects were not allowed to indicate whether they were symptomatic. They were instructed to inform the raters when they felt pain somewhere else than the palpation site or when they experienced a referred sensation. However, they were not allowed to tell the rater whether they felt a recognizable pain because that would negate attempts at rater blinding.
In addition to scoring the separate criteria, the raters were asked to judge whether a trigger point was present or absent. Simons et al\textsuperscript{31} suggested that the minimal diagnostic criteria for an MTrP consist of a palpable nodule in a palpable taut band. Simons et al also required that compression of this nodule reproduce the patient’s recognizable pain, but in this study the subjects were not allowed to inform the examiners of their symptom status. Therefore, the examiners decided that an MTrP was present when the palpable nodule in the taut band was found together with at least one of the other clinical characteristics; in all other combinations, the MTrP was scored as absent. As a result of this study design, no distinction was made between active and latent MTrPs, as the examiners were not allowed to inquire whether subjects recognized the pain from palpation. Therefore, examiners may have reported on both active and latent MTrPs in symptomatic and asymptomatic subjects.
**Methods**
During two morning sessions separated by a one-week interval, two different groups of 20 subjects each were examined. The raters completed the assessment of each of the four characteristics for the three bilateral muscles within a 10-minute period. Subjects were examined in groups of three with each subject in a separate, private treatment room. Following the first assessment, the raters were randomly assigned to one of the two other rooms to assess another subject until all three raters had assessed all subjects. Upon completion of the assessment of the initial group of three subjects, three new subjects were assigned to the examination rooms and the procedures were repeated. An observer was present in each room during all examinations to verify correct implementation of the testing procedures, but the observer did not interfere with the examination. According to the observers, all examinations were performed in an appropriate manner.
**Table 2. The contingency matrix**

|                        | Rater 2 Positive | Rater 2 Negative | Total |
|------------------------|------------------|------------------|-------|
| **Rater 1 Positive**   | a                | b                | g₁    |
| **Rater 1 Negative**   | c                | d                | g₂    |
| **Total**              | f₁               | f₂               | n     |
**Statistical Analysis**
For the statistical analysis, we used the Statistical Package for the Social Sciences for Windows version 12.0.1 (SPSS Inc., Chicago, IL). Frequencies were calculated for the subject demographic information.
To express interrater reliability, we calculated both pair-wise percentages of agreement (PA) and pair-wise Cohen kappa values (κ). The PA-value is defined as the ratio of the number of agreements to the total number of ratings made\(^{36}\).
Using the terminology from the contingency matrix provided in *Table 2*, \( \text{PA} = \frac{(a+d)}{n} \). Cohen’s \( \kappa \) is a coefficient of agreement beyond chance: \( \kappa = \frac{(\text{PA} - P_e)}{(1 - P_e)} \). The agreement based on chance alone (\( P_e \)) is calculated by the sum of the multiplied marginal totals corresponding to each cell divided by the square of the total number of cases (\( n \)): \( P_e = \frac{(f_1g_1 + f_2g_2)}{n^2} \).
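These formulas translate directly into code. The following Python helper (our own illustration, with cell labels taken from Table 2, not part of the original analysis) computes PA and Cohen's κ for a 2×2 agreement table:

```python
def agreement_stats(a, b, c, d):
    """PA and Cohen's kappa for the 2x2 contingency table of Table 2:
    a = both raters positive, d = both raters negative,
    b and c = the two kinds of disagreement."""
    n = a + b + c + d
    pa = (a + d) / n                      # percentage of agreement
    g1, g2 = a + b, c + d                 # row totals (rater 1)
    f1, f2 = a + c, b + d                 # column totals (rater 2)
    pe = (f1 * g1 + f2 * g2) / n ** 2     # agreement expected by chance
    kappa = (pa - pe) / (1 - pe)          # agreement beyond chance
    return pa, kappa
```

For example, `agreement_stats(35, 2, 2, 1)` returns PA = 0.90 and κ ≈ 0.28.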
The \( \kappa \)-value is widely used for dichotomous variables in interrater reliability studies, although there is no universally accepted value for good agreement\(^{37}\). Landis and Koch\(^{38}\) proposed that a \( \kappa \)-value < 0.00 be considered indicative of poor reliability, 0.00–0.20 slight, 0.21–0.40 fair, 0.41–0.60 moderate, 0.61–0.80 substantial or good, and 0.81–1.00 almost perfect or very good reliability. In this study, we considered a PA-value $\geq 70\%$ indicative of interrater reliability acceptable for clinical use, because under ideal circumstances, i.e., equal prevalence of negative and positive findings for a dichotomous test, a PA-value $\geq 70\%$ corresponds to $\kappa \geq 0.40$.
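The 70% threshold follows directly from the definitions above. With equal prevalence of positive and negative findings, all marginal totals equal $n/2$, so

$$
P_e = \frac{(n/2)(n/2) + (n/2)(n/2)}{n^2} = \frac{1}{2},
\qquad
\kappa = \frac{\text{PA} - \tfrac{1}{2}}{1 - \tfrac{1}{2}} = 2\,\text{PA} - 1,
$$

and $\text{PA} = 0.70$ then gives exactly $\kappa = 0.40$.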
**Table 3a. Example of the influence of a high value of the prevalence index on the $\kappa$ value**
*(Example used: Trigger point 3, right shoulder, couple A/C, palpation of a nodule)*
|                          | Observer 2 Positive | Observer 2 Negative | Total |
|--------------------------|---------------------|---------------------|-------|
| **Observer 1 Positive**  | 35                  | 2                   | 37    |
| **Observer 1 Negative**  | 2                   | 1                   | 3     |
| **Total**                | 37                  | 3                   | 40    |
In this case, the percentage of agreement is high (0.90), but because the prevalence index is also high (0.85), the $\kappa$-value indicates only fair agreement (0.28).
**Table 3b. Example of the influence of a low value of the prevalence index on the $\kappa$ value**
*(Example used: Trigger point 2, right shoulder, couple B/C, palpation of a nodule)*
|                          | Observer 2 Positive | Observer 2 Negative | Total |
|--------------------------|---------------------|---------------------|-------|
| **Observer 1 Positive**  | 19                  | 0                   | 19    |
| **Observer 1 Negative**  | 5                   | 16                  | 21    |
| **Total**                | 24                  | 16                  | 40    |
In this case the percentage of agreement is high (0.88), but the prevalence index is low (0.08), so despite slightly lower percentage agreement than in Table 3a, the $\kappa$-value (0.75) indicates good agreement.
A major drawback to using $\kappa$ as an index of agreement is that this statistic is very sensitive to the prevalence of positive and negative findings. To quantify this effect on the calculated $\kappa$ values, in this study we also determined the prevalence index ($P_i$), which is the absolute value of the difference between the number of agreements on positive findings (a) and agreements on negative findings (d), divided by the total number of observations (n): $P_i = |a - d| / n$\textsuperscript{39}. If $P_i$ is high (closer to 1), chance agreement ($P_e$) is also high and $\kappa$ is reduced accordingly. If $P_i$ is closer to 0, chance agreement ($P_e$) is low and $\kappa$ will increase. This means that the $\kappa$-statistic is more useful as an index of agreement with a low $P_i$ than with higher $P_i$-values. *Table 3* provides examples of the influence of variations in $P_i$ on $\kappa$-values. Because $\kappa$-values in this study were strongly influenced by variations in prevalence, as indicated by the wide range of $P_i$, we focused on the PA-values for the interpretation of our findings.
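The effect can be checked numerically. The Python sketch below (our own helper, not part of the original analysis) reproduces the two worked examples of Table 3:

```python
def kappa_and_prevalence(a, b, c, d):
    """Return (PA, kappa, prevalence index) for a 2x2 agreement table
    with a = both positive, d = both negative, b/c = disagreements."""
    n = a + b + c + d
    pa = (a + d) / n                                        # percentage of agreement
    pe = ((a + c) * (a + b) + (b + d) * (c + d)) / n ** 2   # chance agreement
    kappa = (pa - pe) / (1 - pe)
    p_i = abs(a - d) / n                                    # prevalence index
    return pa, kappa, p_i

# Table 3a: a high prevalence index depresses kappa despite high agreement
print(kappa_and_prevalence(35, 2, 2, 1))   # PA 0.90, kappa ≈ 0.28, Pi 0.85
# Table 3b: similar agreement but a low prevalence index, so kappa is much higher
print(kappa_and_prevalence(19, 0, 5, 16))  # PA 0.875, kappa ≈ 0.75, Pi 0.075
```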
To compare the three pairs of raters, we used the Kruskal-Wallis test, a non-parametric one-way analysis of variance whose test statistic H increases with increasing variation between the groups' mean ranks. For graphical presentation, we used the box-and-whisker plot. To compare several data sets, this semi-graphical way of summarizing data, which provides the median, the lower and upper quartiles, and the extreme values, is considered simple and useful\textsuperscript{37}.
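A minimal sketch of the Kruskal-Wallis H statistic in Python (our own illustrative implementation; it omits the tie correction that statistical packages such as SPSS apply, so it is not a substitute for those):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction).
    Ranks all observations jointly, then compares rank sums per group."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    # assign average ranks to tied values
    rank = {}
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2   # mean of ranks i+1 .. j
        i = j
    total = 0.0
    for g in groups:
        r = sum(rank[x] for x in g)         # rank sum for this group
        total += r * r / len(g)
    return 12.0 / (n * (n + 1)) * total - 3 * (n + 1)
```

Identical groups yield H = 0, while well-separated groups drive H (and hence the evidence against equal distributions) upward.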
**Results**
**Patient Characteristics**
Thirty-two subjects with unilateral or bilateral shoulder pain and eight subjects without shoulder pain were included in this study. The mean age of the subjects was 40 years (SD = 11.5; range 18 to 70). Of these 40 subjects, 24 (60%) were female and 16 (40%) were male. The study population had a gender and age profile similar to the patient population of the physical therapy practice where the study was conducted. Most of the subjects (53%) were referred without a specific medical diagnosis for their shoulder complaints, consistent with the guidelines developed by the Dutch Society of General Practitioners\textsuperscript{5}. *Table 4* provides physician referral diagnoses for the 32 patients involved in this study.
**Table 4. Patient diagnosis and referral information**
| Referral diagnosis | Number of subjects | Percentage |
|------------------------------------------------------------------------------------|--------------------|------------|
| No medical diagnosis. | 17 | 53% |
| *The physician referred the patient to the practice without mentioning any medical diagnosis, in accordance with the Dutch guidelines for general practitioners.* | | |
| Calcifying tendonitis | 2 | 6% |
| Tendonitis / bursitis / tendinosis | 3 | 9% |
| Soft tissue disorder | 7 | 22% |
| Degenerative changes in the acromioclavicular or glenohumeral joint | 2 | 6% |
| Subacromial impingement syndrome | 1 | 3% |
| **Total** | **32** | **100%** |
**Pair-Wise Interrater Agreement**
Tables 5 to 8 present the data of the various clinical characteristics of the MTrP in the 80 shoulders of our 40 subjects, i.e., palpable nodule in a taut band, referred pain sensation, LTR, and the jump sign, respectively. The column PA provides the percentage agreement values for the three pairs of observers for both the left and right shoulder. The column $\kappa$ shows the corresponding $\kappa$-value; the third column shows the corresponding prevalence index ($P_i$).
Although we had insufficient information to calculate mean agreement values for all rater pairs, the rater pairs seemed to demonstrate similar reliability. When comparing the pair-wise PA-values for the presence or absence of MTrPs, we found no significant difference between the rater pairs (Kruskal-Wallis one-way ANOVA on ranks, $H=0.841$, $P > 0.05$; Figure 4).
**Palpable Nodule in a Taut Band**
The PA-value for the palpable nodule in a taut band varied from 45% in the medial head of the biceps brachii muscle to 90% in the infraspinatus muscle. Within the infraspinatus, the PA tended to be higher for trigger point 3 (83–90%) than for trigger point 1 (63–73%). In the anterior deltoid muscle the PA varied from 63% to 75%, and for the biceps brachii from 45% to 75%; only the rater pair A/C agreed in both biceps points more than 70%. The $\kappa$-value varied from 0.11 to 0.75 (Table 5).
**Referred Pain Sensation**
The agreement on the referred pain sensation elicited by pressure on the nodule reached a PA-value $\geq 70\%$ in all but 3 cases (range 63–93%). The scores for referred pain sensation were the lowest in the infraspinatus (trigger point 1). The $\kappa$-value varied from –0.13 to 0.64 (Table 6).
Table 5. Percentage of agreement (PA), kappa coefficient ($\kappa$), and the prevalence index (Pind) calculated for palpation of a nodule in a taut band in 6 localizations in 3 muscles (left and right).
| TrP | Side | A/B PA% | $\kappa$ | Pind | A/C PA% | $\kappa$ | Pind | B/C PA% | $\kappa$ | Pind |
|-----|--------|---------|----------|-------|---------|----------|-------|---------|----------|-------|
| 1 | Left | 65 | 0.22 | 0.40 | 68 | 0.30 | 0.38 | 68 | 0.34 | 0.13 |
| | Right | 73 | 0.40 | 0.32 | 63 | 0.24 | 0.13 | 70 | 0.47 | 0.30 |
| 2 | Left | 70 | 0.35 | 0.30 | 80 | 0.60 | 0.10 | 65 | 0.30 | 0.20 |
| | Right | 73 | 0.44 | 0.18 | 70 | 0.43 | 0.05 | 88 | 0.75 | 0.08 |
| 3 | Left | 83 | 0.26 | 0.73 | 90 | 0.30 | 0.85 | 88 | 0.25 | 0.83 |
| | Right | 85 | 0.33 | 0.75 | 90 | 0.28 | 0.85 | 85 | 0.33 | 0.75 |
| 4 | Left | 63 | 0.34 | 0.03 | 70 | 0.40 | 0.20 | 63 | 0.25 | 0.18 |
| | Right | 75 | 0.50 | 0.15 | 63 | 0.26 | 0.13 | 68 | 0.35 | 0.03 |
| 5 | Left | 45 | 0.16 | 0.00 | 68 | 0.27 | 0.38 | 53 | 0.14 | 0.18 |
| | Right | 53 | 0.16 | 0.13 | 80 | 0.58 | 0.20 | 53 | 0.11 | 0.18 |
| 6 | Left | 53 | 0.22 | 0.03 | 73 | 0.25 | 0.53 | 45 | 0.15 | 0.05 |
| | Right | 53 | 0.22 | 0.03 | 75 | 0.44 | 0.35 | 58 | 0.24 | 0.13 |
The numbers 1, 2, and 3 in the first column correspond with the localization in the infraspinatus muscle, 4 is localized in the anterior deltoid muscle, and 5 and 6 are localized in the biceps brachii muscle. In the second row, the three raters are mentioned as A, B, and C. The number of subjects is 40.
Table 6. Percentage of agreement (PA), kappa coefficient ($\kappa$), and the prevalence index (Pind) calculated for palpation of referred pain in 6 localizations in 3 muscles (left and right).
| TrP | Side | A/B PA% | $\kappa$ | Pind | A/C PA% | $\kappa$ | Pind | B/C PA% | $\kappa$ | Pind |
|-----|--------|---------|----------|-------|---------|----------|-------|---------|----------|-------|
| 1 | Left | 78 | 0.48 | 0.38 | 63 | 0.19 | 0.28 | 65 | 0.21 | 0.35 |
| | Right | 78 | 0.51 | 0.33 | 75 | 0.41 | 0.40 | 73 | 0.41 | 0.28 |
| 2 | Left | 88 | 0.38 | 0.78 | 88 | 0.55 | 0.68 | 80 | 0.23 | 0.70 |
| | Right | 80 | 0.25 | 0.70 | 85 | 0.33 | 0.75 | 85 | 0.53 | 0.60 |
| 3 | Left | 73 | 0.46 | 0.08 | 63 | 0.26 | 0.13 | 70 | 0.36 | 0.25 |
| | Right | 83 | 0.64 | 0.18 | 78 | 0.54 | 0.13 | 80 | 0.58 | 0.20 |
| 4 | Left | 78 | –0.13 | 0.78 | 85 | 0.31 | 0.75 | 78 | –0.13 | 0.78 |
| | Right | 88 | 0.55 | 0.68 | 80 | 0.25 | 0.70 | 88 | 0.22 | 0.83 |
| 5 | Left | 93 | 0.36 | 0.88 | 83 | 0.29 | 0.73 | 80 | 0.13 | 0.75 |
| | Right | 85 | 0.19 | 0.80 | 93 | 0.63 | 0.78 | 88 | –0.06 | 0.88 |
| 6 | Left | 90 | 0.45 | 0.80 | 75 | 0.25 | 0.60 | 70 | 0.03 | 0.65 |
| | Right | 88 | 0.38 | 0.78 | 75 | 0.15 | 0.65 | 78 | 0.20 | 0.68 |
Table 7. Percentage of agreement (PA), kappa coefficient ($\kappa$), and the prevalence index (Pind) calculated for palpation of a local twitch response in 6 localizations in 3 muscles (left and right).
| TrP | Side | A/B PA% | $\kappa$ | Pind | A/C PA% | $\kappa$ | Pind | B/C PA% | $\kappa$ | Pind |
|-----|------|---------|----------|------|---------|----------|------|---------|----------|------|
| 1 | Left | 80 | 0.09 | 0.75 | 73 | 0.21 | 0.58 | 78 | 0.36 | 0.58 |
| | Right| 85 | –0.04 | 0.85 | 75 | –0.05 | 0.75 | 75 | 0.15 | 0.65 |
| 2 | Left | 100 | n.c. | 1.00 | 73 | n.c. | 0.73 | 73 | n.c. | 0.73 |
| | Right| 95 | n.c. | 0.95 | 78 | n.c. | 0.78 | 78 | 0.11 | 0.73 |
| 3 | Left | 53 | 0.05 | 0.13 | 58 | 0.15 | 0.38 | 50 | 0.16 | 0.25 |
| | Right| 70 | 0.15 | 0.55 | 43 | 0.13 | 0.13 | 33 | 0.07 | 0.03 |
| 4 | Left | 73 | 0.04 | 0.68 | 63 | 0.14 | 0.38 | 65 | 0.11 | 0.55 |
| | Right| 65 | 0.21 | 0.35 | 60 | 0.20 | 0.20 | 60 | 0.20 | 0.15 |
| 5 | Left | 43 | 0.00 | 0.28 | 50 | 0.04 | 0.00 | 58 | 0.00 | 0.48 |
| | Right| 53 | 0.01 | 0.43 | 73 | 0.45 | 0.08 | 60 | 0.13 | 0.45 |
| 6 | Left | 53 | 0.17 | 0.28 | 68 | 0.32 | 0.28 | 50 | 0.16 | 0.25 |
| | Right| 60 | 0.23 | 0.35 | 63 | 0.25 | 0.08 | 58 | 0.21 | 0.33 |
n.c. = not calculated
The numbers 1, 2, and 3 in the first column correspond with the localization in the infraspinatus muscle, 4 is localized in the anterior deltoid muscle, and 5 and 6 are localized in the biceps brachii muscle. In the second row, the three raters are mentioned as A, B, and C. The number of subjects is 40.
Local Twitch Response
The LTR reached acceptable agreement for only two locations, both in the infraspinatus muscle (trigger points 1 and 2). The lowest PA was 33% for trigger point 3, the most central point in the infraspinatus muscle. All three raters were unable to elicit an LTR at trigger point 2 (also in the infraspinatus muscle) in almost any of the subjects. This led to an agreement of 100% in one case; in most cases it was not possible to calculate a $\kappa$-value because one rater scored the LTR as absent in all subjects (Table 7).
Table 8. Percentage of agreement (PA), kappa coefficient ($\kappa$), and the prevalence index (Pind) calculated for palpation of the jump sign in 6 localizations in 3 muscles (left and right).
| TrP | Side | A/B PA% | $\kappa$ | Pind | A/C PA% | $\kappa$ | Pind | B/C PA% | $\kappa$ | Pind |
|-----|------|---------|----------|------|---------|----------|------|---------|----------|------|
| 1 | Left | 75 | 0.47 | 0.25 | 83 | 0.60 | 0.38 | 78 | 0.51 | 0.33 |
| | Right | 63 | 0.27 | 0.18 | 73 | 0.36 | 0.38 | 65 | 0.31 | 0.15 |
| 2 | Left | 70 | 0.07 | 0.60 | 68 | 0.12 | 0.53 | 88 | 0.68 | 0.53 |
| | Right | 68 | 0.02 | 0.63 | 75 | 0.19 | 0.65 | 93 | 0.58 | 0.43 |
| 3 | Left | 70 | 0.29 | 0.40 | 68 | 0.22 | 0.43 | 78 | 0.38 | 0.53 |
| | Right | 75 | 0.47 | 0.25 | 75 | 0.49 | 0.15 | 80 | 0.58 | 0.25 |
| 4 | Left | 78 | 0.56 | 0.18 | 65 | 0.31 | 0.15 | 73 | 0.36 | 0.38 |
| | Right | 78 | 0.54 | 0.18 | 78 | 0.48 | 0.43 | 70 | 0.34 | 0.40 |
| 5 | Left | 68 | 0.30 | 0.33 | 68 | 0.33 | 0.18 | 65 | 0.22 | 0.35 |
| | Right | 68 | 0.31 | 0.28 | 68 | 0.31 | 0.28 | 65 | 0.16 | 0.40 |
| 6 | Left | 68 | 0.35 | 0.28 | 70 | 0.40 | 0.05 | 63 | 0.28 | 0.18 |
| | Right | 70 | 0.37 | 0.25 | 83 | 0.64 | 0.18 | 73 | 0.41 | 0.28 |
The numbers 1, 2, and 3 in the first column correspond with the localization in the infraspinatus muscle, 4 is localized in the anterior deltoid muscle, and 5 and 6 are localized in the biceps brachii muscle. In the second row, the three raters are mentioned as A, B, and C. The number of subjects is 40.
Jump Sign
The raters achieved the highest PA (93%) on the jump sign in the infraspinatus muscle and the lowest PA (63%) in the infraspinatus and biceps brachii muscles. The $\kappa$-value varied from 0.02 to 0.68 (Table 8).
Table 9. Percentage of agreement, kappa [κ] coefficient, and the prevalence index for agreement on presence or absence of myofascial trigger points
| Raters | PA% | κ | Pind |
|----------|-----|------|------|
| 1 Left | | | |
| A-B | 75 | 0.50 | 0.05 |
| A-C | 70 | 0.40 | 0.05 |
| B-C | 70 | 0.40 | 0.05 |
| 1 Right | | | |
| A-B | 65 | 0.33 | 0.00 |
| A-C | 65 | 0.29 | 0.15 |
| B-C | 70 | 0.41 | 0.05 |
| 2 Left | | | |
| A-B | 78 | 0.38 | 0.53 |
| A-C | 75 | 0.44 | 0.35 |
| B-C | 73 | 0.38 | 0.38 |
| 2 Right | | | |
| A-B | 70 | 0.19 | 0.55 |
| A-C | 73 | 0.29 | 0.53 |
| B-C | 88 | 0.72 | 0.33 |
| 3 Left | | | |
| A-B | 73 | 0.18 | 0.58 |
| A-C | 80 | 0.25 | 0.70 |
| B-C | 83 | 0.29 | 0.73 |
| 3 Right | | | |
| A-B | 73 | 0.30 | 0.48 |
| A-C | 78 | 0.40 | 0.53 |
| B-C | 85 | 0.48 | 0.65 |
| 4 Left | | | |
| A-B | 63 | 0.31 | 0.13 |
| A-C | 58 | 0.18 | 0.03 |
| B-C | 65 | 0.25 | 0.30 |
| 4 Right | | | |
| A-B | 80 | 0.60 | 0.00 |
| A-C | 68 | 0.35 | 0.03 |
| B-C | 63 | 0.25 | 0.08 |
| 5 Left | | | |
| A-B | 53 | 0.22 | 0.13 |
| A-C | 60 | 0.19 | 0.20 |
| B-C | 58 | 0.18 | 0.28 |
| 5 Right | | | |
| A-B | 58 | 0.15 | 0.28 |
| A-C | 73 | 0.45 | 0.03 |
| B-C | 55 | 0.12 | 0.25 |
| 6 Left | | | |
| A-B | 58 | 0.28 | 0.08 |
| A-C | 73 | 0.33 | 0.43 |
| B-C | 50 | 0.20 | 0.00 |
| 6 Right | | | |
| A-B | 60 | 0.27 | 0.15 |
| A-C | 80 | 0.58 | 0.20 |
| B-C | 60 | 0.27 | 0.15 |
The numbers 1, 2, and 3 correspond with the localization in the infraspinatus muscle, 4 is localized in the anterior deltoid muscle, and 5 and 6 are localized in the biceps brachii muscle. PA= Percentage of Agreement, κ = kappa coefficient, and Pind = prevalence index.
Overall agreement
The percentage of agreement on MTrP presence or absence was acceptable for the infraspinatus muscle. In two out of three trigger point locations, PA-values exceeded 70%. In the anterior deltoid muscle and in the biceps brachii muscle, the PA-value was < 70% (Table 9).
Discussion
Palpation is the only method available for the clinical diagnosis of myofascial pain. Therefore, reliable MTrP palpation is the necessary prerequisite to considering myofascial pain as a valid diagnosis. This study indicated that referred pain was the most reliable criterion for palpatory diagnosis in all six MTrPs in all three muscles on both sides. Only in three of the 36 MTrP locations did the PA-value not reach the predetermined value of 70%. This finding is consistent with the results of other interrater reliability studies of MTrP examination\textsuperscript{26-27}. The nodule in the taut band, the LTR, and the jump sign were more reliable in the infraspinatus muscle than in the anterior deltoid and biceps brachii muscle. In general, the jump sign also proved a reliable palpatory characteristic in this study. This is in contrast to other studies, which may indicate that the raters in this study were more successful in standardizing the amount of pressure during the palpation. In general, the LTR was not a reliable characteristic although it did prove reliable for MTrP 1 and 2 in the infraspinatus on either side. Palpation of the nodule in the taut band had sufficient reliability for the diagnosis of MTrPs in the infraspinatus muscle, but less for diagnosis of MTrPs in the anterior deltoid and biceps brachii muscles. There was also a high level of agreement for the presence or absence of MTrPs in the infraspinatus muscle. This agreement was lower for the anterior deltoid and biceps brachii muscles.
Compared to various other commonly used physical examination tests such as the assessment of intervertebral motion or muscle strength, whose established interrater reliability ranges from 41% to 97%\textsuperscript{40-43}, the interrater agreement with regard to MTrP palpation in these three shoulder muscles seemed acceptable. However, the degree of agreement seemed to be strongly dependent on the muscle that was examined. Clinical experience suggests that some muscles are more accessible to palpation than others. There may even be differences within particular muscles. For trigger point 3 of the infraspinatus muscle, the raters achieved the highest agreement. Because MTrPs are often in close proximity to each other, raters did not always agree on which MTrP they were evaluating. For example, the raters may have had difficulty in distinguishing trigger points in the infraspinatus muscle, the teres minor muscle, and the posterior deltoid muscle. The area of referred pain may help in determining which muscle was palpated. However, recognition of pain elicited by palpation, as normally would occur in the clinical situation, was not determined in this study, as this could have endangered the blinding of the raters. Recognition of this characteristic pain by the patient may be an important aspect of reliable MTrP identification.
For the biceps brachii muscle, the raters may have had difficulty distinguishing between the lateral and the medial head of the muscle. It is conceivable that such difficulties could contribute to the lower level of agreement noted for this muscle.
We realize that by collapsing rating categories in this study to absent or present and by not including a third category of indeterminate findings, we may have artificially inflated reliability findings. We decided to score dichotomously for the presence or absence of MTrPs and not include this indeterminate category because the treatment choice would have been similar independent of a negative or indeterminate finding. When MTrPs are absent or when the physical therapist is unsure about the presence or absence of an MTrP, in the clinical situation no treatment will be directed to the MTrP.
We should again note that in this study no distinction was made between active and latent MTrPs, as the examiners were not allowed to inquire whether subjects recognized the pain from palpation. Therefore, examiners may have reported on both active and latent MTrPs in symptomatic and asymptomatic subjects. This may affect external validity in this study in that its findings cannot be directly extrapolated to the clinical situation where patient report of recognition of pain is available and the distinction between active and latent trigger points, therefore, can be made.
In the interpretation of the study findings, we chose to emphasize PA over $\kappa$-values. PA-values do not take into account the agreement that would be expected purely by chance. True agreement is the agreement beyond this expected chance agreement, and $\kappa$ is a measure of true, chance-corrected agreement. However, as mentioned earlier, the $\kappa$-statistic is probably inappropriate for studies in which the positive and negative findings are not equally distributed\textsuperscript{39,44,46}. In this study, even asymptomatic subjects had some (obviously latent) trigger points in the shoulder muscles, and subjects with unilateral shoulder pain may also have latent or active trigger points in the contralateral shoulder\textsuperscript{47,48}. Both factors may have contributed to the high prevalence of positive findings in this study. The resulting high prevalence index ($P_i$) produced generally low $\kappa$-values despite high PA-values, making the $\kappa$-statistic less appropriate for the statistical representation and subsequent interpretation of the study findings.
Training would seem important to achieve sufficient agreement, even when raters have considerable clinical experience. Prior to conducting this interrater reliability study, consensus about the standardization of manual palpation of MTrPs was achieved between raters. In this study, there was no statistically significant difference between the rater pairs, even though one rater had only two years of clinical experience with MTrP diagnosis and management. We recognize that this consensus training may impact external validity in that the results of this study may not apply to situations and clinicians where such training has not occurred. Future studies are needed to determine how many years of experience and what extent of pre-study consensus training is needed to achieve sufficient interrater reliability.
Conclusion
In this study, three blinded raters were able to reach acceptable pair-wise interrater agreement on the presence or absence of MTrPs as described by Simons et al\textsuperscript{31}. Referred pain was the most reliable feature in all six MTrPs in all three shoulder muscles on both sides. The nodule in the taut band, the LTR, and the jump sign were more reliable in the infraspinatus muscle than in the anterior deltoid and biceps brachii muscles.
The results of this study support the idea that experienced raters can obtain acceptable agreement when diagnosing MTrPs by palpation in the three shoulder muscles studied. Allowing for patient report of pain recognition may provide for even better interrater reliability results. Interrater agreement seems dependent on the muscle and even on the location of the trigger point within a muscle, and findings indicating acceptable interrater reliability cannot be generalized to all shoulder muscles. The distinction between active and latent trigger points should be considered in future studies as should the effect of pre-study consensus training and clinical experience. However, in summary we conclude that this study provides preliminary evidence that MTrP palpation is a reliable and, therefore, potentially useful diagnostic tool in the diagnosis of myofascial pain in patients with non-traumatic shoulder pain.
Acknowledgments
We would like to thank all subjects for participating in this study and our colleagues (B Beersma, C Ploos van Amstel, M Onstenk, and B de Valk) for their assistance as observers. The authors are grateful to J Dommerholt for his very helpful comments. We would also like to thank the editor of JMMT, Dr Peter Huijbregts, for his extremely helpful contributions to this paper.
REFERENCES
1 Bot SD, van der Waal JM, Terwee CB, van der Windt DA, Schellevis FG, Bouter LM, Dekker J Incidence and prevalence of complaints of the neck and upper extremity in general practice Ann Rheum Dis 2005;64 118–123
2 Bongers PM The cost of shoulder pain at work BMJ 2001;322 64–65
3 Luime JJ, Koes BW, Hendriksen IJ, Burdorf A, Verhagen AP, Miedema HS, Verhaar JA Prevalence and incidence of shoulder pain in the general population A systematic review Scand J Rheumatol 2004;33 73–81
4 Picavet HSI, Gils HWV van, Schooten JSAG Klachten van het Bewegingsapparaat in de Nederlandse Bevolking Prevalenties, Consequenties en Risicogroepen Bilthoven, the Netherlands CBS, 2000
5 De Winter AF Diagnosis and Classification of Shoulder Complaints Amsterdam, The Netherlands VU University of Amsterdam, 1999
6 Pope DP, Croft PR, Pritchard CM, Silman AJ Prevalence of shoulder pain in the community The influence of case definition Ann Rheum Dis 1997;56 308–312
7 Michener LA, McClure PW, Karduna AR Anatomical and biomechanical mechanisms of subacromial impingement syndrome Clin Biomech 2003;18 369–379
8 Sternfeld R, Valente RM, Stuart MJ A commonsense approach to shoulder problems Mayo Clin Proc 1999;74 785–794
9 Bang MD, Deyle GD Comparison of supervised exercise with and without manual physical therapy for patients with shoulder impingement syndrome J Orthop Sports Phys Ther 2000;30 126–137
10 Travell JG, Simons DG Myofascial Pain and Dysfunction The Trigger Point Manual Baltimore, MD Williams & Wilkins, 1983
11 Simons DG Trigger points and limited motion J Orthop Sports Phys Ther 2000;30 706–708
12 Lucas KR, Polus BI, Rich PA Latent myofascial trigger points Their effects on muscle activation and movement efficiency J Bodywork Movement Ther 2004;160–166
13 Mense S, Russell IJ, Simons DG Muscle Pain Understanding Its Nature, Diagnosis, and Treatment Philadelphia, PA Lippincott Williams & Wilkins, 2001
14 Ruch T Visceral sensation and referred pain In JF Fulton, ed Howell's Textbook of Physiology Philadelphia, PA Saunders, 1949, pp 385–401
15 Mense S Neurologische Grundlagen von Muskelschmerz [Neurobiological basis of muscle pain] Schmerz 1999;13 3–17
16 Hoheisel U, Koch K, Mense S Functional reorganization in the rat dorsal horn during an experimental myositis Pain 1994;59 111–118
17 Kellgren JH Observations on referred pain arising from muscle Clin Sci 1938;3 175–190
18 Graven-Nielsen T, Arendt-Nielsen L, Svensson P, Jensen TS Quantification of local and referred muscle pain in humans after sequential i.m. injections of hypertonic saline Pain 1997;69 111–117
19 Graven-Nielsen T, Mense S The peripheral apparatus of muscle pain Evidence from animal and human studies Clin J Pain 2001;17 2–10
20 Hwang M, Kang YK, Shin JY, Kim DH Referred pain pattern of the abductor pollicis longus muscle Am J Phys Med Rehabil 2005;84 593–597
21 Hwang M, Kang YK, Kim DH Referred pain pattern of the pronator quadratus muscle Pain 2005;116 238–242
22 Gerwin R, Shannon S Interrater reliability and myofascial trigger points Arch Phys Med Rehabil 2000;81 1257–1258
23 Njoo KH, Van der Does E The occurrence and inter-rater reliability of myofascial trigger points in the quadratus lumborum and gluteus medius A prospective study in non-specific low back pain patients and controls in general practice Pain 1994;58 317–323
24 Nice DA, Ruddie DL, Lamb RL, Mayhew TP, Rucker K Intertester reliability of judgments of the presence of trigger points in patients with low back pain Arch Phys Med Rehabil 1992;73 893–898
25 Lew PC, Lewis J, Story I Inter-therapist reliability in locating latent myofascial trigger points using palpation Man Ther 1997;2 87–90
26 Hsieh CY, Hong CZ, Adams AH, Platt KJ, Danielson CD, Hoehler FK, Tobis JS Interexaminer reliability of the palpation of trigger points in the trunk and lower limb muscles Arch Phys Med Rehabil 2000;81 258–264
27 Gerwin RD, Shannon S, Hong CZ, Hubbard D, Gevirtz R Interrater reliability in myofascial trigger point examination Pain 1997;69 65–73
28 Sciotti VM, Mittak VL, DiMarco L, Ford LM, Plezbert J, Santupadn E, Wigglesworth J, Ball K Clinical precision of myofascial trigger point location in the trapezius muscle Pain 2001;93 259–266
29 Wolfe F, Simons DG, Fricton J, Bennett RM, Goldenberg DL, Gerwin R, Hathaway D, McCain GA, Russell IJ, Sanders HO The fibromyalgia and myofascial pain syndromes: A preliminary study of tender points and trigger points in persons with fibromyalgia, myofascial pain syndrome and no disease *J Rheumatol* 1992;19 944–951
30 Fischer AA Pressure tolerance over muscles and bones in normal subjects *Arch Phys Med Rehabil* 1986;67 406–409
31 Simons DG, Travell JG, Simons LS *Travell & Simons’ Myofascial Pain and Dysfunction: The Trigger Point Manual* Baltimore, MD: Williams & Wilkins, 1999
32 Hong CZ Considerations and recommendations regarding myofascial trigger point injection *J Musculoskeletal Pain* 1994;2(1) 29–59
33 Hsieh Y-L, Kao MJ, Kuan TS, et al Dry needling to a key myofascial trigger point may reduce the irritability of satellite MTrPs *Am J Phys Med Rehabil* 2007;86 397–403
34 Kronberg M Muscle activity and coordination in the normal shoulder: An electromyographic study *Clin Orthop* 1990;257 76–85
35 Sugahara R Electromyographic study on shoulder movements *Rehab Med Jap* 1974 41–52
36 Haas M Statistical methodology for reliability studies *J Manipulative Physiol Ther* 1991;14 119–132
37 Altman DG *Practical Statistics for Medical Research* Boca Raton, FL: Chapman & Hall, 1991
38 Landis JR, Koch GG The measurement of observer agreement for categorical data *Biometrics* 1977;33 159–174
39 Sim J, Wright CC The kappa statistic in reliability studies: Use, interpretation, and sample size requirements *Phys Ther* 2005;85 257–268
40 Smedmark V, Wallin M, Arvvidsson I Inter-examiner reliability in assessing passive intervertebral motion of the cervical spine *Man Ther* 2000;5 97–101
41 Fjellner A, Bexander C, Faleij R, Strender LE Interexaminer reliability in physical examination of the cervical spine *J Manipulative Physiol Ther* 1999;22 511–516
42 Pool JJ, Hoving JL, de Vet HC, van MH, Bouter LM The interexaminer reproducibility of physical examination of the cervical spine *J Manipulative Physiol Ther* 2004;27 84–90
43 Pollard H, Lakay B, Tucker F, Watson B, Bablis P Interexaminer reliability of the deltoid and psoas muscle test *J Manipulative Physiol Ther* 2005;28 52–56
44 Lantz CA, Nebenzahl E Behavior and interpretation of the kappa statistic: Resolution of the two paradoxes *J Clin Epidemiol* 1996;49 431–434
45 Feinstein AR, Cicchetti DV High agreement but low kappa I: The problems of two paradoxes *J Clin Epidemiol* 1990;43 543–549
46 Cicchetti DV, Feinstein AR High agreement but low kappa II: Resolving the paradoxes *J Clin Epidemiol* 1990;43 551–558
47 Marcus DA, Scharff L, Mercer S, Turk DC Musculoskeletal abnormalities in chronic headache: A controlled comparison of headache diagnostic groups *Headache* 1999;39 21–27
48 Audette JF, Wang F, Smith H Bilateral activation of motor unit potentials with unilateral needle stimulation of active myofascial trigger points *Am J Phys Med Rehabil* 2004;83 368–374
TREATMENT OF MYOFASCIAL TRIGGER POINTS IN COMMON SHOULDER DISORDERS BY PHYSICAL THERAPY: A RANDOMIZED CONTROLLED TRIAL
Carel Bron
Jo Franssen
Michel Wensing
Rob A.B. Oostendorp
BMC Musculoskeletal Disorders 2007 Nov 5;8:10
**Abstract**

Background: Shoulder disorders are a common health problem in western societies. Several treatment protocols have been developed for the clinical management of persons with shoulder pain. However, the available evidence does not support any protocol as being superior to the others. Systematic reviews provide some evidence that certain physical therapy interventions (i.e., supervised exercises and mobilisation) are effective for particular shoulder disorders (i.e., rotator cuff disorders, mixed shoulder disorders, and adhesive capsulitis), but there is an ongoing need for high-quality trials of physical therapy interventions.
Usually, physical therapy consists of active exercises intended to strengthen the shoulder muscles as stabilizers of the glenohumeral joint, or of mobilisations to improve restricted mobility of the glenohumeral or adjacent joints (shoulder girdle). It is generally accepted that atraumatic shoulder problems are the result of impingement of the subacromial structures, such as the bursa or rotator cuff tendons. Myofascial trigger points (MTrPs) in the shoulder muscles may also lead to the complex of symptoms often seen in patients diagnosed with subacromial impingement or rotator cuff tendinopathy. Little is known about the treatment of MTrPs in patients with shoulder disorders.
The primary aim of this study is to investigate whether physical therapy modalities to inactivate MTrPs can reduce symptoms and improve shoulder function in daily activities in a population of chronic atraumatic shoulder patients, compared to a wait-and-see strategy. In addition, we investigate the recurrence rate during a one-year follow-up period.
Methods/Design: This paper presents the design for a randomized controlled trial to be conducted between September 2007 and September 2008, evaluating the effectiveness of a physical therapy treatment for non-traumatic shoulder complaints. One hundred subjects are included in this study. All subjects have had unilateral shoulder pain for at least six months and are referred to a physical therapy practice specialized in musculoskeletal disorders of the neck, shoulder, and arm.
After the initial assessment, patients are randomly assigned to either an intervention group or a control group (wait and see). The primary outcome measure is the overall score of the Dutch language version of the DASH (Disabilities of the Arm, Shoulder and Hand) questionnaire.
Discussion: Since there is only limited evidence for the efficacy of physical therapy interventions in certain shoulder disorders, there is a need for further research. We found only a few studies examining the efficacy of MTrP therapy for shoulder disorders. Therefore we will perform a randomised clinical trial of the effect of physical therapy interventions aimed at inactivating MTrPs on pain and impairment of shoulder function in a population of chronic atraumatic shoulder patients. We opted for an intervention strategy that best reflects daily practice. Manual high-velocity thrust techniques and dry needling are excluded. Because blinding of the patient and the therapist is not possible in most physical therapy interventions, we will perform a randomised, controlled, observer-blinded study.
Trial Registration: This randomized clinical trial is registered at current controlled trials ISRCTN75722066.
Background
Shoulder pain is a common health problem in western societies. There are substantial differences in reported prevalence in the general population: the one-year prevalence of shoulder disorders has been reported to range from 20% to 50%. This wide range is strongly influenced by, for example, the definition of shoulder disorders (including or excluding limited motion), age, gender, and anatomic area.\(^{1-3}\) Of all shoulder patients who attend primary care physicians, 50% recover within 6 months, meaning they do not seek any medical help after the first episode.\(^{4-6}\) Chronicity and recurrence of symptoms are common.\(^{7-8}\) According to the guidelines of the Dutch College of General Practitioners,\(^9\) the recommended management of shoulder symptoms starts with educational information about the natural course of shoulder pain, combined with the advice to avoid irritating and loading activities. The use of analgesics or NSAIDs is recommended for the first two weeks. When no recovery occurs within two weeks, subacromial or intra-articular injection therapy with corticosteroids is administered and eventually repeated. Finally, physical therapy (specifically an activating and time-contingent approach) is recommended only after a 6-week period when there are functional limitations. International guidelines for shoulder pain, including the Clinical Guideline on Shoulder Pain of the American Academy of Orthopaedic Surgeons\(^{10}\) and the Shoulder Guideline of the New Zealand Guidelines Group,\(^{11}\) differ more or less from the Dutch guidelines in classification, recommended interventions, timeline, and order of interventions. Scientific evidence from randomized clinical trials, meta-analyses, or systematic reviews for the efficacy of multimodal rehabilitation, injection therapy, medication, surgery, or physical therapy, or for the order of application of commonly used therapies, is lacking.\(^{12-16}\)
An alternative approach to the management of persons with shoulder problems consists of a treatment aimed at inactivating MTrPs and eliminating factors that perpetuate them. MTrPs may be inactivated by manual techniques (such as compression of the trigger point or other massage techniques), cooling the skin with ethyl chloride spray or ice cubes followed by stretching of the involved muscle, trigger point needling using an acupuncture needle, or injection with local anaesthetics or botulinum toxin, followed by ergonomic advice, active exercises, postural correction, and relaxation (with or without biofeedback).\(^{17-18}\) Over the years, MTrPs have become increasingly accepted in the medical literature. Clinical, histological, biochemical, and electrophysiological research has provided biological plausibility for the existence of MTrPs.\(^{19-24}\)
MTrPs are defined as exquisitely tender spots in discrete taut bands of hardened muscle that produce symptoms.\(^{25-26}\) A previous study showed that MTrPs can be detected reliably by trained physiotherapists.\(^{27}\) Palpation is still the only reliable method to diagnose myofascial pain clinically. In reviews addressing the efficacy of interventions in shoulder patients, MTrP therapy and myofascial pain are rarely mentioned.\(^{15}\) However, some published case studies suggest that treatment of MTrPs in shoulder patients may be beneficial.\(^{28-31}\)
The primary aim of this study is to investigate the effectiveness of inactivation of MTrPs in shoulder muscles by physical therapy on symptoms and functioning of the shoulder in daily
activities in a population of chronic a-traumatic shoulder patients when compared to a wait-and-see strategy. In addition, we investigate the recurrence rate during a one-year-follow-up period.
**Methods/Design**
An examiner-blinded randomized controlled trial will be conducted, which has been approved by the ethics committee of the Radboud University Nijmegen Medical Centre, the Netherlands [CMO 2007/022].
**Participants/Study sample**
Between September 2007 and September 2008, all consecutive patients referred to a physical therapy practice specialized in the treatment of individuals with musculoskeletal disorders of the neck, shoulder and arm are potential study participants. The referring physicians include general practitioners, orthopaedic surgeons, neurologists and physiatrists. Patients are eligible if they have had unilateral shoulder complaints (described as pain felt in the shoulder or upper arm) for at least six months. The patients present with persistent shoulder pain that has not spontaneously recovered. The patients are between 18 and 65 years old. Because the questionnaires are in the Dutch language, subjects must understand written and verbal Dutch. Patients who have been diagnosed (prior to the referral) with shoulder instability, shoulder fractures, or systemic diseases (such as rheumatoid arthritis, Reiter's syndrome, diabetes), or whose medical history or examination suggests neurological disease or other severe medical or psychiatric disorders, will be excluded from the study. The project leader will check all the available information from referral letters, additional information from the general practitioner, and information from the patients themselves. All eligible patients will be informed of the study and will be invited to participate. Patients who are willing to participate will be asked to review and sign the written informed consent.
**Measurements**
Before randomization, all participants will be assessed during an individual baseline test session. They will complete a battery of questionnaires and tests, providing data on social, demographic, and physical factors, and baseline values for the outcome measures. In addition, subjects will complete the DASH, the RAND-36 Dutch language version, and passive range of motion tests of the shoulder (PROM). During the initial assessment, MTrPs will be identified, based on compression-produced pain that is recognized by patients as their own shoulder pain. If no MTrPs are detected, the subjects will be excluded from the study. All measurements will be performed by the same independent observer, who is not employed by the physical therapy practice (this creates optimal blinding, as the observer is not able to recognize the subjects). The observer is trained in identifying MTrPs and has several years of clinical experience in MTrP therapy. The observer participated in a former reliability study of MTrP palpation. The baseline measurements will be at T0, the second measurement (T1) will
be 6 weeks after the first assessment session, and the third (T2) will be 12 weeks after the first assessment session. All measurements (see Table 1) will be performed outside the physical therapy practice to assure that the observer will not recognize any of the study participants when they come to the physical therapy practice for their treatment. After this first assessment, the patients will be randomly assigned to one of two groups: the intervention group or the control group. The patients in the control group will stay on the waiting list and will not receive any treatment. They are allowed to use over-the-counter painkillers during this 12-week period. After 6 weeks and 12 weeks, respectively, they will be examined by the same blinded observer. After 12 weeks they will receive the same physical therapy program as the experimental group (see Figure 1). The initial trial ends after 12 weeks, but shoulder function will be re-evaluated with the DASH Dutch language version 6 months and 12 months after the start of the experimental intervention.
**Table 1: Overview of variables**
| Variable | T0 Baseline | T1 After 6 wk | T2 After 12 wk | Measured by |
|-----------------------------------------------|-------------|---------------|----------------|-------------------|
| Age* | X | | | Interview |
| Gender* | X | | | Interview |
| Work | X | | | Interview |
| Dominant side affected | X | | | Interview |
| Duration of the complaints* | X | | | Interview |
| DASH DLV | X | X | X | Questionnaire |
| Use of medication | X | X | X | Interview |
| Use of other therapy | X | X | X | Interview |
| Work % | X | X | X | Interview |
| Improvement | X | X | X | Interview |
| (percentage of perceived improvement) | | | | |
| Number of involved muscles | X | X | X | Assessment |
| No. of treatment sessions | | | X | Assessment |
| Health status | X | | | RAND-36 DLV |
| for baseline comparison | | | | |
| Existence and severity of symptoms of depression | X | | | Beck Depression Inventory |
| Shoulder Passive ROM | X | X | X | Goniometry |
| • flexion | X | X | X | |
| • abduction | X | X | X | |
| • external rotation | X | X | X | |
| • internal rotation | X | X | X | |
| • cross body adduction | X | X | X | |
*Age, gender and duration of the complaints seem to be important prognostic variables [53].*
**Intervention**
The patients in the intervention group will be treated by a physical therapist once a week for a maximum period of 12 weeks. All participating physiotherapists are experienced in treating patients with long-lasting shoulder symptoms and patients with MTrPs and myofascial pain, especially in the upper part of the body. They are trained and skilled in the identification of MTrPs and received a certification in manual trigger point therapy. The treatment starts with inactivation of the active (pain producing) MTrPs by using manual techniques (compression on the trigger point, manual stretching of the trigger point area and the taut band) combined with “intermittent cold application by using ice-cubes followed by stretching the muscle” according to Travell\(^{32}\) to further inactivate the MTrPs.
Manual pressure will decrease the sensitivity of the painful nodule in the muscle while other massage techniques will mobilize and stretch the contracted muscle fibres. The application of the ice-cubes has a desensitizing effect, and makes it easier to stretch shoulder muscles. Each treatment session will end with a heat application to increase the circulation of the involved muscles.
Patients will be advised to do stretching exercises and will be taught to perform these correctly by means of surface electromyography-assisted stretching.\textsuperscript{33,34} Furthermore, they will be advised to perform relaxation exercises and to apply heat (such as a hot shower or hot packs) several times (at least twice) a day. If abnormally elevated electromyographic activity is measured in the upper trapezius muscle (by surface electromyography (sEMG) using a Myomed 932 (Enraf Nonius, Delft, the Netherlands)) during standing and/or sitting,\textsuperscript{35} relaxation exercises will be performed using a portable myofeedback device (Myotrac I, Thought Technology, Quebec, Canada). Abnormal sEMG activity is defined as a value constantly above 1–5% of the maximal voluntary contraction\textsuperscript{36,39} (in general above 10 microvolts) for several minutes, while the patient is not able to relax the muscle spontaneously or on request. Finally, all patients will receive ergonomic recommendations and instructions to assume and maintain a “good” posture.\textsuperscript{40,41} Manual high-velocity thrust techniques of the cervical spine and the shoulder, and dry needling, are excluded from the treatment protocol, because not all participating physical therapists are skilled in performing these techniques. The content of each session may vary, as it depends on the findings during the first treatment session and the results of the previous treatment sessions. Thus, there are differences in the content of the individual treatments, but within the limits of the treatment protocol.
**Stoprule**
The treatment ceases when the patient is completely symptom-free or when the patient and the therapist agree that treatment will not further benefit the patient, although participation in the study will continue. If patients decide that they no longer wish to participate in the study, they are free to withdraw at any moment.
**Control of intervention integrity**
To enhance the integrity of this complex intervention, all participating physical therapists will discuss the content of each therapy session with the researcher (CB) every week, without mentioning names or other identifying information, which assures the blinding of the independent researcher. After 6 and 12 weeks, the patients of the intervention group will be interviewed about the content of the received treatment sessions to assure that all patients are treated according to the protocol. Patients not treated according to the protocol will be identified, and their participation may be discontinued.
**Expectations regarding treatment outcome**
At the start of the trial (T0), both the patients and physical therapists will complete a questionnaire regarding the anticipated treatment outcome.
Setting
The study will be conducted in a physical therapy practice specialized in management of persons with musculoskeletal disorders of the neck, shoulder and arm. After randomisation every patient assigned to the experimental group will be treated by the same physical therapist.
Objectives
In the current study we will test the following null hypothesis (H0):
A physical therapy treatment aimed at inactivating MTrPs is as effective over a three-month period as a “wait and see” approach in patients with chronic shoulder complaints.
Population characteristics
- To identify potential confounding factors, demographic information for all subjects will be collected including age, gender, education, occupation, sports and leisure activities, duration of the complaints, and type of onset, among others.
- The Dutch language version of the RAND-36 item Health Survey will be used for baseline characteristics of the study population. The RAND-36, which is almost identical to the MOS SF-36\(^{42}\), scores functional status and quality of life and is widely used for screening health status in medical, social and epidemiological research. The RAND-36 consists of 36 items divided into 8 subscales concerning physical functioning, role limitations due to physical health, role limitations due to emotional problems, energy and fatigue, emotional wellbeing, social functioning, pain, and general health perception, plus a health change item. This questionnaire is considered to be a reliable instrument for comparing groups (internal consistency Cronbach’s alpha > 0.70). The test-retest stability is sufficient (0.58 – 0.82) and the questionnaire is responsive when scoring after at least 4 weeks. The construct validity was estimated by comparing the RAND-36 with other health questionnaires (like the Nottingham Health Profile (NHP) and the Groninger Activities Restriction Scale (GARS)). There are significant correlations between the subscales of the RAND-36 and the subscales of the NHP (correlation coefficients 0.42 – 0.69). The correlation coefficient between the subscale physical functioning and the GARS is 0.65. A higher score (maximum is 100 points) indicates a more favourable health status.
- The Beck Depression Inventory (BDI) is used to discriminate between patients with major depression and those without, or with minor, depressive feelings. The BDI is included because depression may be a confounding factor. The BDI is widely accepted and used in clinical and experimental research, and its predictive value is rated as good. A BDI score equal to or higher than 21 indicates a major depression (specificity 78.4%).\(^{43}\)
Outcomes
The following outcome parameters will be used.
Primary
The overall score of the DASH (Disabilities of the Arm, Shoulder and Hand) questionnaire, Dutch language version, will be used as the primary outcome measure. The DASH is a multidimensional (physical, emotional and social) 30-item self-report measure focusing on physical function, pain, and other symptoms. At least 27 of the 30 items must be completed for a score to be calculated. The assigned values for all completed responses are summed and averaged. This value is then transformed to a score out of 100 by subtracting one and multiplying by 25. The transformation is done to make the score easier to compare to other measures using a 0–100 scale. A higher score indicates greater disability.
\[
\text{DASH disability/symptom score} = \frac{[(\text{sum of } n \text{ responses}) - 1]}{n} \times 25
\]
where \( n \) is equal to the number of completed responses.
Scoring is on a 5-point Likert scale, from no difficulty (1 point) to unable (5 points), consistent with the transformation above. The range of the total score is from 0 to 100, where 0 means no symptoms (pain, tingling, weakness or stiffness) and no difficulty in performing daily activities, while 100 means extreme, severe symptoms and inability to perform any daily activity. Content and face validity of the DASH were confirmed by a variety of experts of the American Academy of Orthopaedic Surgeons (AAOS), the Council of Musculoskeletal Specialty Societies (COMSS) and the Institute for Work and Health (Toronto, Ontario, Canada) throughout the development process.\(^{44}\)
Its internal consistency was excellent (Cronbach’s alpha = 0.96) during field-testing. The test-retest reliability was excellent (ICC2.1 = 0.92 and 0.96) in two studies\(^{45,46}\) and satisfactory in one study (Pearson 0.98 and kappa 0.67). The minimal detectable change (MDC) was calculated in a population of 172 patients with several upper limb disorders (osteoarthritis, carpal tunnel syndrome, rotator cuff syndrome, rheumatoid arthritis and tennis elbow)\(^{47}\); it varied between 10.70 (at the 90% confidence level) and 12.75 (at the 95% confidence level). The DASH has been demonstrated to be a responsive questionnaire.
The inter- and intra-observer reliability is good to excellent (intra-observer reliability Pearson r = 0.96 to 0.98; ICC = 0.91 to 0.96; Inter-observer agreement Cohen’s kappa = 0.79).
The construct validity was estimated by comparing the DASH to several other questionnaires. The correlation with other instruments like the SPADI (Shoulder Pain and Disability Index) is good (Pearson’s r = 0.82 to 0.88). The DASH questionnaire is one of the best among 16 other questionnaires for shoulder symptoms\(^{48}\).
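The DASH scoring rule described above can be sketched in a few lines of Python (an illustrative helper, not part of the published protocol; the function name is our own):

```python
def dash_score(responses):
    """DASH disability/symptom score on a 0-100 scale.

    `responses` holds the completed items only, each scored on the
    1 (no difficulty) to 5 (unable) Likert scale. At least 27 of the
    30 items must be answered for a score to be calculated.
    """
    n = len(responses)
    if n < 27:
        return None  # too many missing items; no score can be calculated
    mean = sum(responses) / n
    return (mean - 1) * 25  # shift the 1-5 range to 0-4, scale to 0-100
```

A respondent answering "no difficulty" (1) on every item scores 0, and one answering "unable" (5) throughout scores 100, matching the interpretation of the 0–100 range given above.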
Secondary
An independent examiner will perform the following tests.
- The total number of shoulder muscles with MTrPs will be counted and compared to the baseline measurement findings.
- Passive range of motion of the shoulder will be measured by a handheld digital inclinometer (The Saunders Group Inc, Chaska, MN). The range of motion of the non-painful shoulder will be used as reference.\(^{49,50}\) Because the normal range of motion differs from one individual to another, we focus on improvement of limited range of motion during the experiment (in both the experimental group and the control group).
- For the measurement of passive external rotation, the patient is in a supine position, with the shoulder in \(0^\circ\) of abduction and rotation, the elbow flexed at \(90^\circ\) and the forearm in a neutral position. This position is defined as the position of \(0^\circ\). The observer then performs external rotation until pain limits the range of motion or the extreme of the range is reached. The inclinometer is placed against the volar side of the forearm. This range of motion is recorded in degrees. The normal range of motion for external rotation is between \(70^\circ\) and \(90^\circ\).
- For the measurement of passive glenohumeral abduction, the patient is seated upright, and the position of \(0^\circ\) is defined with the upper arm in a neutral position. While palpating the lower angle of the scapula with the thumb, the examiner elevates the upper arm of the patient until the scapula begins to rotate or pain limits further motion. The inclinometer is placed against the lateral side of the upper arm near the elbow. The range of motion is recorded in degrees. The normal range of motion is \(90^\circ\).
- For the measurement of passive elevation (through flexion), the patient is in the supine position with the arm along the side. This position is defined as the position of \(0^\circ\). The observer then performs elevation until pain limits the range of motion or the extreme of the range is reached. The inclinometer is then placed against the medial side of the upper arm near the elbow. The range of motion is recorded in degrees. The normal range of motion is between \(165^\circ\) and \(180^\circ\).
- For the measurement of internal rotation, the patient is in a prone position. The shoulder is in \(90^\circ\) of abduction, and the forearm is in a neutral position. This position is defined as the position of \(0^\circ\). The observer then performs internal rotation until pain limits the range of motion or the extreme of the range is reached. The sensor is placed against the volar side of the forearm. The normal range of motion is \(70^\circ\).
- For the measurement of horizontal adduction, the patient is in a supine position. The arm is in \(90^\circ\) of abduction. This position is defined as the position of \(0^\circ\). The observer performs adduction, while the arm stays in the vertical plane, until pain limits the range of motion or the extreme of the range is reached. The normal range of motion is \(135^\circ\).
• Finally, the total number of treatment sessions will be counted. This will be done by an assistant who is not involved in the study, using the administration software of the practice (see Table 1).
**Sample size**
The initial sample size is based on the assumption that the overall score of the primary outcome measure, the DASH, shows a mean improvement of 15 points (SD = 22).\(^{51}\)
To test the null hypothesis of equality of treatment at \(\alpha = 0.05\) with 90% power and assuming a uniform dropout rate of 5%, it was calculated that 52 patients in each group would be sufficient.
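As a rough check, this calculation can be sketched with the standard normal-approximation formula for a two-sample comparison (a sketch only; the authors' exact figure of 52 per group will depend on the software and corrections they used, such as a t-distribution adjustment):

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.90, dropout=0.05):
    """Approximate per-group sample size for an unpaired two-sample test,
    inflated for an anticipated dropout rate.

    delta: smallest difference in means to detect (here 15 DASH points)
    sd:    assumed common standard deviation (here 22 points)
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = z.inv_cdf(power)           # desired power
    n = 2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2
    return math.ceil(n / (1 - dropout))  # inflate for dropout, round up
```

With `n_per_group(15, 22)` the plain normal approximation gives 48 patients per group; exact t-based calculations and rounding conventions yield somewhat larger figures, of the same order as the 52 per group used in the protocol.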
**Randomization**
After inclusion the patients will be randomly assigned to either the intervention group or the “wait and see” group. The randomisation will be performed by an assistant not otherwise involved in the study by generating random numbers using computer software. Stratification or blocking strategies will not be used.
**Informed consent**
The patients will be informed about the study prior to the first assessment and will be asked to give written informed consent.
**Blinding**
Blinding of the patients or the physical therapists, who are involved in the treatment, is impossible due to the treatment characteristics.
An independent, blinded observer will collect baseline data and outcome data. The success of the blinding procedure will be evaluated by asking the observer to which group she believes each subject belongs.
**Statistical analysis**
For comparisons of prognostic variables on baseline we will use the Student’s t-test for continuous variables with normal distribution and the chi-square test for categorical variables or continuous variables with non-normal distribution \(^52\). For the overall score of the DASH (primary outcome measure) we will use the unpaired t-test for normally distributed data or Mann-Whitney Rank Sum-test for non-normally distributed data to assess the difference between the two groups after the treatments. Regression analyses will be used to include prognostic factors, such as the baseline scores like age, gender and duration of the complaints, in the analyses. All significance levels will be set at \(p < 0.05\). All data will be analysed primarily according to intention-to-treat principle. We will use Sigmastat 3.11 and Systat 12 for Windows (Systat Inc. Richmond, California, USA) for the statistical analyses.
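The pooled-variance unpaired t-statistic planned for the primary DASH comparison has a simple closed form; a minimal standard-library sketch (illustrative only; the actual analysis will use the statistical packages named above):

```python
from math import sqrt
from statistics import mean, variance

def unpaired_t(a, b):
    """Student's two-sample t-statistic with pooled variance, as used for
    the overall DASH score when both groups are normally distributed."""
    na, nb = len(a), len(b)
    # pooled estimate of the common variance
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    # standardized difference in group means
    return (mean(a) - mean(b)) / sqrt(pooled * (1 / na + 1 / nb))
```

The statistic is then referred to a t-distribution with \(n_a + n_b - 2\) degrees of freedom; non-normally distributed data would instead be compared with the Mann-Whitney rank-sum test, as stated above.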
Discussion
Since there is little evidence for the efficacy of physical therapy interventions in some shoulder disorders, there is a need for further research. Therefore we will perform a randomised clinical trial of the effect of physical therapy interventions aimed at inactivating MTrPs on pain and impairment of shoulder function in a population of chronic atraumatic shoulder patients. To the best of our knowledge, few studies of the efficacy of MTrP therapy have been published. We chose an intervention strategy that best reflects daily practice. We excluded manual high-velocity thrust techniques and intramuscular MTrP release by dry needling, because these interventions are not commonly used by Dutch physical therapists and not all participating therapists were skilled in performing these techniques at the beginning of the study. In most physical therapy interventions, blinding of the patient and the therapist is not possible; the observers will therefore be blinded to the allocation. The results of this trial will be presented as soon as they are available.
Competing interests
The author(s) declare that they have no competing interests.
Authors' contributions
All authors read, edited and approved the final manuscript. CB is the lead investigator; he developed the design of the study, will carry out data acquisition, analysis and interpretation, and prepared the manuscript as primary author. MW and RO were responsible for the design, project supervision and writing of the manuscript. JF will assist in carrying out data acquisition and was involved in preparing the study design and in writing the manuscript.
Acknowledgements
The authors would like to thank Jan Dommerholt, physical therapist, for his assistance and critical analysis of this paper.
References
1 Luime JJ, Koes BW, Hendriksen IJ, Burdorf A, Verhagen AP, Miedema HS, Verhaar JA Prevalence and incidence of shoulder pain in the general population, a systematic review Scand J Rheumatol 2004, 33 73-81
2 Bongers PM The cost of shoulder pain at work BMJ 2001, 322 64-65
3 Pope DP, Croft PR, Pritchard CM, Silman AJ Prevalence of shoulder pain in the community: the influence of case definition Ann Rheum Dis 1997, 56 308-312
4 Bergman GJD Manipulative therapy for shoulder complaints in general practice University of Groningen, The Netherlands, 2005
5 van der Windt DA, Koes BW, Boeke AJ, Deville W, de Jong BA, Bouter LM Shoulder disorders in general practice: prognostic indicators of outcome Br J Gen Pract 1996, 46 519-523
6 HSI P, van GHWV, JSAG S Klachten van het bewegingsapparaat in de Nederlandse bevolking: Prevalenties, consequenties en risicogroepen [Musculoskeletal complaints in the Dutch population: prevalences, consequences and risk groups] Bilthoven, CBS, 2000
7 Mitchell C, Adebajo A, Hay E, Carr A Shoulder pain: diagnosis and management in primary care BMJ 2005, 331 1124-1128
8 Winters JC, Jorntsma W, Groenier KH, Sobel JS, Jong BM, Arendzen HJ Treatment of shoulder complaints in general practice: long term results of a randomised, single blind study comparing physiotherapy, manipulation, and corticosteroid injection BMJ 1999, 318 1395-1396
9 Winters JC, Jongh AC, van der Windt DAWM, et al NHG standaard schouderklachten [Guidelines for Shoulder Complaints of the Dutch College of General Practitioners (version 1999)] Huisarts Wet 1999, 42 222-231
10 Clinical Guideline of Shoulder pain of the American Academy of Orthopaedic Surgeons 2001 [http://www.aaos.org/Research/guidelines_guide.asp]
11 Shoulder Guideline of the New Zealand Guidelines Group 2004 [http://www.nzgg.org.nz/guidelines]
12 Karjalainen K, Malmivaara A, van TM, Roine R, Jaahanen M, Hurn H, Koes B Multidisciplinary biopsychosocial rehabilitation for neck and shoulder pain among working age adults: a systematic review within the framework of the Cochrane Collaboration Back Review Group Spine 2001, 26 174-181
13 Green S, Buchbinder R, Hetrick S Physiotherapy interventions for shoulder pain Cochrane Database Syst Rev 2003 CD004258
14 Green S, Buchbinder R, Glazier R, Forbes A Interventions for shoulder pain Cochrane Database Syst Rev 2000 CD001156
15 Green S, Buchbinder R, Hetrick S Acupuncture for shoulder pain Cochrane Database Syst Rev 2005 CD005319
16 Ejnisman B, Andreoli CV, Soares BG, Fallopa F, Peccini MS, Abdalla RJ, Cohen M Interventions for tears of the rotator cuff in adults Cochrane Database Syst Rev 2004 CD002758
17 Simons DG, Travell JG, Simons LS, Travell JG Travell & Simons' myofascial pain and dysfunction: The trigger point manual 2nd edition Baltimore, Williams & Wilkins, 1999
18 Baldry P Acupuncture, Trigger Points and Musculoskeletal Pain third edition Churchill Livingstone, 2004
19 Gerwin RD, Dommerholt J, Shah JP An expansion of Simons' integrated hypothesis of trigger point formation Curr Pain Headache Rep 2004, 8 468-475
20 Hong CZ, Simons DG Pathophysiologic and electrophysiologic mechanisms of myofascial trigger points Arch Phys Med Rehabil 1998, 79 863-872
21 Mense S, Simons DG, Hoheisel U, Quenzler B Lesions of rat skeletal muscle after local block of acetylcholinesterase and neuromuscular stimulation J Appl Physiol 2003, 94 2494-2501
22 Shah JP, Phillips TM, Danoff JV, Gerber LH An in vivo microanalytical technique for measuring the local biochemical milieu of human skeletal muscle J Appl Physiol 2005, 99 1977-1984
23 Simons DG, Hong CZ, Simons LS Endplate potentials are common to midfiber myofacial trigger points Am J Phys Med Rehabil 2002, 81 212-222
24 Dommerholt J, Bron C, Franssen JLM Myofascial trigger points, an evidence-based review J Manual Manipulative Therapy 2006, 14 203-221
25 Wolfe F, Simons DG, Frincon J, Bennett RM, Goldenberg DL, Gerwin R, Hathaway D, McCain GA, Russell JJ, Sanders HO, The fibromyalgia and myofascial pain syndromes: a preliminary study of tender points and trigger points in persons with fibromyalgia, myofascial pain syndrome and no disease J Rheumatol 1992, 19 944-951
26 Gerwin RD, Shannon S, Hong CZ, Hubbard D, Gevirtz R Interrater reliability in myofascial trigger point examination Pain 1997, 69 65-73
27 Bron C, Wensing M, Franssen JLM, RAB O Interobserver Reliability of Palpation of Myofascial Trigger Points in Shoulder Muscles Journal Manual Manipulative Therapy 2007, 15 in press
28 Ingber RS Shoulder impingement in tennis/racquetball players treated with subscapularis myofascial treatments Arch Phys Med Rehabil 2000, 81 679-682
29 Weed ND When shoulder pain isn't bursitis The myofascial pain syndrome Postgrad Med 1983, 74 97-2, 104
30 Grosshandler SL, Stratas NE, Toomey TC, Gray WF Chronic neck and shoulder pain Focusing on myofascial origins Postgrad Med 1985, 77 149-8
31 Bron C, Franssen JLM, de Valk BGM Een posttraumatische schouderklacht zonder aanwijsbaar letsel Ned Tijdschrift v Fysiotherapie 2001 97-102
32 JG T, DG S Myofascial Pain and Dysfunction The Trigger Point Manual The lower extremities Volume II first edition Baltimore, Lippincott, Williams and Wilkins, 1999
33 Neblett R, Gatchel RJ, Mayer TG A clinical guide to surfaceEMG-assisted stretching as an adjunct to chronic musculoskeletal pain rehabilitation Appl Psychophysiol Biofeedback 2003, 28 147-160
34 Neblett R, Mayer TG, Gatchel RJ Theory and rationale for surface EMG-assisted stretching as an adjunct to chronic musculoskeletal pain rehabilitation Appl Psychophysiol Biofeedback 2003, 28 139-146
35 Franssen JLM Handboek oppervlakte-elektromyografie First edition Edited by Franssen JLM Utrecht, De Tijdstroom, 1995
36 Veiersted KB, Westgaard RH, Andersen P Pattern of muscle activity during stereotyped work and its relation to muscle pain Int Arch Occup Environ Health 1990, 62 31-41
37 Hagg GM Static Work Loads and Occupational Myalgia - A New Explanational Model In Electromyographical kinesiology Edited by Anderson PA, Hobart DJ and Danoff JV Amsterdam - New York - Oxford, Exerpta Medica, 1991 141-144
38 Hagg GM, Luttmann A, Jager M Methodologies for evaluating electromyographic field data in ergonomics J Electromyogr Kinesiol 2000, 10 301-312
39 Roman-Liu D, Tokarski T, Wojcik K Quantitative assessment of upper limb muscle fatigue depending on the conditions of repetitive task load J Electromyogr Kinesiol 2004, 14 671-682
40 Szeto GP, Straker LM, O'Sullivan PB EMG median frequency changes in the neck-shoulder stabilizers of symptomatic office workers when challenged by different physical stressors J Electromyogr Kinesiol 2005, 15 544-555
41 Peper E, al The Integration of electromyography (SEMG) at the workstation assessment, treatment, and prevention of repetitive strain injury (RSI) Appl Psychophysiol Biofeedback 2003, 28 167-182
42 Ware JE Jr, Sherbourne CD The MOS 36-item short-form health survey (SF-36) I Conceptual framework and item selection Med Care 1992, 30 473-483
43 Geisser ME, Roth RS, Robinson ME Assessing depression among persons with chronic pain using the Center for Epidemiological Studies-Depression Scale and the Beck Depression Inventory a comparative analysis Clin J Pain 1997, 13 163-170
44 Sowlay S, Beaton DE, McConnell S, Bombardies C The DASH Outcome Measure Users Manual Second edition Toronto, Onuaro, Institute for Work & Health, 2002
45 Turchin DC, Beaton DE, Richards RR Validity of observer-based aggregate scoring systems as descriptors of elbow pain, function, and disability J Bone Joint Surg Am 1998, 80 154-162
46 Beaton DE, Katz JN, Fossel AH, Wright JG, Tarasuk V, Bombardier C Measuring the whole or the parts? Validity, reliability, and responsiveness of the Disabilities of the Arm, Shoulder and Hand outcome measure in different regions of the upper extremity J Hand Ther 2001, 14 128-146
47 Beaton DE, Davies AM, Hudak P, McConnell S The DASH (Disabilities of the Arm, Shoulder and Hand) outcome measure What do we know about it now? British Journal of Hand Therapy 2001, 6 109-118
48 Bot SD, Terwee CB, van der Windt DA, Bouter LM, Dekker J, de Vet HC Clinimetric evaluation of shoulder disability questionnaires a systematic review of the literature Ann Rheum Dis 2004, 63 335-341
49 Clarkson HM Joint Motion and Function Assessment A research-based practical Guide 1st edition Philadelphia, Baltimore, Lippincott, Williams & Wilkins, 2005
50 A Fde W, Heemskerk MA, Terwee CB, Jans MP, Deville W, van Schaardenburg DJ, Scholten RJ, Bouter LM Inter-observer reproducibility of measurements of range of motion in patients with shoulder pain using a digital inclinometer BMC Musculoskelet Disord 2004, 5 18 [http://]
51 Gummesson C, Atroshi I, Ekdahl C The disabilities of the arm, shoulder and hand (DASH) outcome questionnaire longitudinal construct validity and measuring self-rated health change after surgery BMC Musculoskelet Disord 2003, 4 11
52 Altman DG Practical statistics for medical research first edition Chapman & Hall, 1991
53 Thomas E, van der Windt DA, Hay EM, Smidt N, Dziedzic K, Bouter LM, Croft PR Two pragmatic trials of treatment for shoulder disorders in primary care generalisability, course, and prognostic indicators Ann Rheum Dis 2005 64 1056-1061
5
HIGH PREVALENCE OF MYOFASCIAL TRIGGER POINTS IN PATIENTS WITH SHOULDER PAIN.
Carel Bron
Jan Dommerholt
Boudewijn Stegenga
Michel Wensing
Rob A.B. Oostendorp
BMC Musculoskeletal Disorders 2011 (under review)
Abstract
Background: Shoulder pain is reported to be highly prevalent and tends to be recurrent or persistent despite medical treatment. The pathophysiological mechanisms of shoulder pain are poorly understood. Furthermore, there is little evidence supporting the effectiveness of current treatment protocols. Although myofascial trigger points (MTrPs) are rarely mentioned in relation to shoulder pain, they may present an alternative underlying mechanism, which would provide new treatment targets through MTrP inactivation. While previous research has demonstrated that trained physiotherapists can reliably identify MTrPs in patients with shoulder pain, the percentage of patients who actually have MTrPs remains unclear. The aim of this observational study was to assess the prevalence of muscles with MTrPs and the association between MTrPs and the severity of pain and functioning in patients with chronic non-traumatic unilateral shoulder pain.
Methods: An observational study was conducted. Subjects were recruited from patients participating in a controlled trial studying the effectiveness of physical therapy on patients with unilateral non-traumatic shoulder pain. Sociodemographic and patient-reported symptom scores, including the Disabilities of the Arm, Shoulder, and Hand (DASH) Questionnaire, and Visual Analogue Scales for Pain were compared with other studies. To test for differences in age, gender distribution, and education level between the current study population and the populations from Dutch shoulder studies, the one sample T-test was used. One observer examined all subjects (n=72) for the presence of MTrPs. Frequency distributions, means, medians, standard deviations, and 95% confidence intervals were calculated for descriptive purposes. The Spearman’s rank-order correlation (ρ) was used to test for association between variables.
Results: MTrPs were identified in all subjects. The median number of muscles with MTrPs per subject was 6 (active MTrPs) and 4 (latent MTrPs). Active MTrPs were most prevalent in the infraspinatus (77%) and the upper trapezius muscles (58%), whereas latent MTrPs were most prevalent in the teres major (49%) and anterior deltoid muscles (38%). The number of muscles with active MTrPs was only moderately correlated with the DASH score.
Conclusion: The prevalence of muscles containing active and latent MTrPs in a sample of patients with chronic non-traumatic shoulder pain was high.
INTRODUCTION
Shoulder pain, which is often persistent or recurrent, is one of the major reasons patients consult primary healthcare providers\textsuperscript{1-6}. However, the pathophysiological mechanisms underlying shoulder pain are poorly understood. Although subacromial impingement is often suggested to be a potential source of shoulder pain\textsuperscript{7,8}, solid evidence is lacking. In fact, calcifications, acromion spurs, subacromial fluid, and signs of tendon degeneration are equally prevalent in healthy subjects and in patients with shoulder pain\textsuperscript{9-12}. Furthermore, physical examination tests for subacromial impingement are not reliable\textsuperscript{13-15}, and the results of imaging diagnostics do not correlate well with pain\textsuperscript{9,10,16,17}. In addition, interventions targeting subacromial problems are, at most, only moderately effective at treating shoulder complaints\textsuperscript{18-24}.
Myofascial trigger points (MTrPs) may offer an alternative explanation for the pathophysiological mechanisms underlying shoulder pain. In recent years, our understanding of the etiology, pathophysiology, and management of MTrPs has increased\textsuperscript{25-30}. MTrPs are discrete points in skeletal muscle that are highly sensitive to pressure; compression of an MTrP elicits characteristic referred sensations, including pain, muscle dysfunction\textsuperscript{26}, and sympathetic hyperactivity\textsuperscript{31-33}.
MTrPs are classified as active or latent. Active MTrPs are characterized by the presence of clinical pain and constant tenderness; they prevent full lengthening of the muscle and weaken it. Diagnostically, active MTrPs reproduce the patient's recognized pain upon compression and mediate a local twitch response in muscle fibers when stimulated. When compressed within the patient's level of pain tolerance, active MTrPs produce referred motor phenomena and often sympathetic hyperactivity (generally in the pain reference zone), and cause tenderness in the pain reference zone. In contrast, latent MTrPs are clinically quiescent and are painful only when palpated. With the exception of spontaneous pain, a latent MTrP can present with all the clinical characteristics of an active MTrP. In addition, latent MTrPs lie within a taut band that increases muscle tension and restricts the patient's range of motion\textsuperscript{26}. Although the exact pathophysiology of MTrPs is not yet fully understood, abnormal electrical activity, called endplate noise, has been associated with both latent and active MTrPs, and several pain-inducing and pro-inflammatory substances have been found at active MTrPs in humans\textsuperscript{27-34}.
In clinical practice, MTrPs are usually identified by palpation. In a recent study\textsuperscript{35}, we confirmed that palpation is a reliable method for detecting MTrPs in shoulder muscles. Although prevalence studies are sparse\textsuperscript{36-42}, clinical experience suggests that MTrPs are associated with shoulder pain, disability, and dysfunction\textsuperscript{43-45}. Still, little is known about the impact of MTrPs on pain and functioning in patients with shoulder disorders\textsuperscript{46}. Because MTrPs refer pain to the shoulder, they may contribute substantially to the clinical picture of shoulder pain (Figure 1). Experimental muscle pain, clinical muscle pain, and MTrPs have all been shown to alter motor activation patterns in a manner similar to the kinematic disturbances seen in patients whose shoulder pain is often attributed to subacromial impingement syndrome (SIS)\textsuperscript{47-49}.
The aim of this study was to determine the prevalence of MTrPs, and the correlation between MTrPs and pain and functioning, in a sample of patients presenting with chronic, non-traumatic, unilateral shoulder complaints.
**Figure 1:** Referred pain patterns (gray) from the lower trapezius (a), upper trapezius (b), anterior deltoid (c), and infraspinatus (d) muscle MTrPs (Xs), according to Simons *et al.* Illustrations courtesy of LifeART/MEDICLIP, Manual Medicine 1, Version 1.0a, Lippincott, Williams & Wilkins, 1997
Material and methods
Study design
This observational study was embedded in a clinical trial (registered at Current Controlled Trials, ISRCTN75722066) addressing a specific treatment of patients with shoulder pain. The Committee of Human Research of the region Nijmegen-Arnhem, the Netherlands, approved the study protocol [CMO 2007/22].
Study Participants
Study participants were recruited from patients participating in a controlled trial investigating the effectiveness of physical therapy on patients with unilateral, non-traumatic shoulder pain. This study was conducted at a primary care practice for physical therapy, which specializes in the treatment of patients with disorders of the shoulder, the neck, and upper extremities. A power analysis was performed prior to beginning this study, and it was calculated that 104 subjects were needed for the clinical trial.
All patients who contacted the practice for non-specific shoulder complaints from September 2007 until September 2009 were asked to participate in the study. The inclusion criteria were: 1) age between 18 and 66 years, 2) unilateral non-traumatic shoulder pain, and 3) duration of symptoms of more than six months. Patients were excluded from the study if they presented with a prior diagnosis of shoulder instability, shoulder fractures, any systemic disease, or a medical history or examination suggestive of neurological, internal, or psychiatric disorders. All patients provided written informed consent before participating in the study.
General Applicability
To determine the potential general applicability of this study to primary care shoulder pain patients, we searched for Dutch studies conducted on primary care patients from 1995 until 2009. Eight studies were found and sociodemographic data (age, gender, education level, and duration of shoulder pain) were analyzed and compared to the current study population.
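As a sketch of that comparison, the one-sample t-test pits the current study's summary statistics against a reference population mean. The numbers below are taken from Tables 2 and 3 (mean age 43.9 years, SD 12.3, n = 72 in the current study; mean 49.6 years in Van der Windt 1996); the two-sided critical t value of about 1.994 for df = 71 is our assumption:

```python
import math

def one_sample_t(sample_mean, sample_sd, n, ref_mean):
    """t statistic for a one-sample t-test against a reference mean."""
    se = sample_sd / math.sqrt(n)  # standard error of the mean
    return (sample_mean - ref_mean) / se

# Current study: mean age 43.9 y, SD 12.3, n = 72,
# compared with Van der Windt 1996 (mean age 49.6 y).
t = one_sample_t(43.9, 12.3, 72, 49.6)
# df = 71; |t| well beyond the ~1.994 two-sided critical value at alpha = 0.05
print(round(t, 2))  # -> -3.93
```

The same calculation, repeated per study and per characteristic, yields the significance statements reported in the Results.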
Measures
At baseline, age, gender, hand dominance, and education level were recorded. For purposes of comparison, we classified education level as high (university and higher vocational school), medium (middle vocational school and higher or middle general secondary school), and low (lower vocational school, lower general secondary school, primary school, or no education). Shoulder-pain related data (duration of shoulder pain, recurrence rate, and location of the complaints) were collected, and the study subjects were asked to complete a set of standardized self-report measures, including the Disabilities of the Arm, Shoulder, and Hand outcome measure - Dutch Language Version (DASH-DLV), Visual Analogue Scales for Pain (VAS-P), and the Beck Depression Inventory - Second Version - Dutch Language Version (BDI-II-DLV)\textsuperscript{50}. The BDI-II-DLV is used to discriminate between patients with major depression and those with only minor depressive feelings or no depression, which may be a confounding factor. The BDI-II has good predictive value, is widely accepted, and is commonly used in both clinical and experimental research. A BDI-II-DLV score of 21 or higher indicates major depression (specificity 78.4%)\textsuperscript{56}.
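The screening cutoff can be sketched as a minimal check; the function name and the range guard are illustrative, not part of the study protocol:

```python
def bdi_indicates_major_depression(score: int) -> bool:
    """Screening rule used in this study: a BDI-II-DLV score of 21 or
    higher indicates major depression (reported specificity 78.4%)."""
    if not 0 <= score <= 63:  # BDI-II total scores range from 0 to 63
        raise ValueError("BDI-II scores range from 0 to 63")
    return score >= 21

print(bdi_indicates_major_depression(6))   # -> False (typical score in this sample)
print(bdi_indicates_major_depression(21))  # -> True
```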
For every study participant, one of the two available observers measured the passive range of motion (PROM) of the shoulder in flexion, internal and external rotation, abduction, and (horizontal or cross-body) adduction with a handheld digital inclinometer (The Saunders Group Inc., Chaska, MN). Range of motion was expressed in degrees and presented as the sum, over these directions, of the value measured for the non-affected shoulder minus the value measured for the affected shoulder. A positive value means that the affected shoulder had an impaired range of motion compared to the non-affected shoulder. Next, the observer examined each subject for the presence of MTrPs in the shoulder muscles of the affected shoulder according to the guidelines outlined in Simons et al \textsuperscript{26}; the non-affected shoulder was examined as a control. Following these guidelines, an MTrP is defined as a nodule in a taut band that is extremely painful upon compression and may produce referred pain or sensations. MTrPs were classified as 'active' when the pain was recognized by the patient as a familiar pain, or as 'latent' when the observer found a firm nodule in a taut band that was painful on compression but did not produce a recognizable pain. The inter-examiner reliability of trigger point palpation has been established in several studies \textsuperscript{35,57,58}. All 17 muscles that are known to produce pain in the shoulder or to cause dysfunction of shoulder muscles were systematically examined, and the number of muscles with MTrPs in the affected shoulder was counted, regardless of the number of MTrPs per muscle (\textit{Table 1}). The two observers were physical therapists, each with 30 years of clinical experience in primary care practice. Both observers had attended an extensive postgraduate course on MTrP diagnosis and therapy and had more than 5 years' experience in identifying and treating MTrPs prior to the start of the study.
\begin{table}[h]
\centering
\caption{\textbf{List of muscles examined for presence of MTrPs}}
\begin{tabular}{|p{4cm}|p{4cm}|p{4cm}|}
\hline
upper trapezius muscle & middle trapezius muscle & lower trapezius muscle \\
infraspinatus muscle & supraspinatus muscle & subscapularis muscle \\
teres minor muscle & teres major muscle & anterior deltoid muscle \\
middle deltoid muscle & posterior deltoid muscle & pectoralis major muscle \\
pectoralis minor muscle & biceps brachii muscle & triceps brachii muscle \\
scalene muscles & subclavius muscle & \\
\hline
\end{tabular}
\end{table}
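The PROM summary described above (non-affected minus affected, summed over the measured directions) can be sketched with hypothetical readings; all degree values below are invented for illustration only:

```python
# Hypothetical inclinometer readings (degrees) per movement direction.
non_affected = {"flexion": 170, "abduction": 175, "external_rotation": 90,
                "internal_rotation": 70, "adduction": 45}
affected     = {"flexion": 160, "abduction": 160, "external_rotation": 80,
                "internal_rotation": 65, "adduction": 42}

def prom_deficit(non_aff, aff):
    """Sum of per-direction (non-affected - affected) differences;
    a positive total indicates impaired motion of the affected shoulder."""
    return sum(non_aff[d] - aff[d] for d in non_aff)

print(prom_deficit(non_affected, affected))  # -> 43 (degrees)
```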
The DASH-DLV is a widely used, multidimensional (physical, emotional, and social) 30-item self-report questionnaire that focuses on physical function, pain, and other symptoms. DASH-DLV scores range from 0 to 100, with higher scores indicating greater disability. The DASH is a reliable and valid questionnaire, with good to excellent intra- and inter-rater reliability and good correlation with the Shoulder Pain and Disability Index. Because of these advantages, the DASH is considered one of the best questionnaires available for shoulder symptoms (http://www.dash.iwh.on.ca/).
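For readers unfamiliar with how the 0-100 range arises: in the standard DASH scoring documented by the Institute for Work & Health, each of the 30 items is answered on a 1-5 scale and the total is ((mean response) - 1) x 25, computable when no more than 3 items are missing. A minimal sketch, assuming that scoring rule:

```python
def dash_score(responses):
    """Standard DASH scoring: 30 items on a 1-5 scale;
    score = ((mean response) - 1) * 25, range 0-100 (higher = more
    disability). Conventionally valid with at most 3 missing items (None)."""
    answered = [r for r in responses if r is not None]
    if len(responses) != 30 or len(responses) - len(answered) > 3:
        raise ValueError("need 30 items with at most 3 missing")
    mean = sum(answered) / len(answered)
    return (mean - 1) * 25

print(dash_score([3] * 30))  # all items "moderate difficulty" -> 50.0
```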
The VAS-P is a self-report scale consisting of a 100 mm horizontal line anchored by word descriptors at each end. The VAS-P can be used to measure current pain levels (VAS-P1), the average pain over the last 7 days (VAS-P2), and the most severe pain over the last 7 days (VAS-P3). VAS-P scores range from 0 (no pain) to 100 (the worst pain imaginable). The Visual Analogue Scale has properties consistent with a linear scale for patients with mild to moderate pain.
Data were collected and transferred to a worksheet by a research assistant who was not involved in the physical examination or palpation of MTrPs.
**Data analysis**
Frequency distributions, means, medians, standard deviations, and 95% confidence intervals were calculated for descriptive purposes. The Shapiro-Wilk W test was used to test the data for normality. Because the number of muscles with MTrPs (active, latent, and total) was not normally distributed, we used the Spearman's rank-order correlation ($\rho$) test for all variables. For interpretation of the $\rho$-values, we used the classification proposed by Feinstein: a correlation coefficient $< 0.30$ was considered indicative of a poor correlation, a correlation coefficient $\geq 0.30$ and $< 0.70$ of a moderate correlation, and a correlation coefficient $\geq 0.70$ of a substantial or good correlation. To test for differences in age, gender distribution, and education level between the current study population and study populations from Dutch shoulder studies (from 1995 until 2009), we used a one-sample T-test. The $\alpha$ level for statistical significance was set at 0.05. All analyses were performed using Systat 12 or SigmaStat 3.1 for Windows (Systat Software, Inc., Chicago, IL, USA).
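A self-contained sketch of two analysis ingredients named above: Spearman's rank-order correlation (computed here as the Pearson correlation of average ranks, with ties sharing the mean rank) and the Feinstein classification of $\rho$-values. Helper names are our own, not from the study:

```python
def _ranks(xs):
    """Average ranks (1-based), with tied values sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank-order correlation: Pearson r of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

def feinstein_class(rho):
    """Classification used in this study: < 0.30 poor,
    0.30 to < 0.70 moderate, >= 0.70 substantial/good."""
    r = abs(rho)
    return "poor" if r < 0.30 else "moderate" if r < 0.70 else "good"

print(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]))  # -> 1.0
print(feinstein_class(0.30))                          # -> moderate
```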
**Results**
A flowchart describing patient participation is depicted in Figure 2. Out of 211 patients who were treated for shoulder disorders between September 2007 and September 2009, 72 patients (50 females and 22 males; mean age 43.9 years, SD 12.3, 95% CI 41.0 to 46.8) presented with unilateral, non-traumatic shoulder complaints, met the study inclusion criteria, and agreed to participate in this study. Twenty-six subjects were suffering from their first episode of shoulder pain, while for 19 subjects, this was their second episode.
The remaining 27 subjects had suffered from $\geq 3$ episodes of shoulder pain. Study participants’ characteristics are summarized in Table 2. A comparison of data obtained from the present study with data from previous Dutch studies is presented in Table 3. The mean age of the present study population was lower ($p < 0.05$) and the proportion of female subjects was higher ($p < 0.05$) compared to these other studies. In addition, the current study population was more highly educated ($p < 0.05$) than the previous study populations for which educational data was reported $^{3, 5, 52}$. Comparison of the duration of shoulder pain was not possible because different classifications were used.
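The reported confidence interval for age can be reproduced from the summary statistics in Table 2 (mean 43.9, SD 12.3, n = 72); the two-sided critical t value of about 1.994 for df = 71 is our assumption:

```python
import math

def ci95(mean, sd, n, t_crit=1.994):
    """95% CI for a mean; t_crit ~ 1.994 assumes df = 71."""
    half = t_crit * sd / math.sqrt(n)  # half-width of the interval
    return round(mean - half, 1), round(mean + half, 1)

print(ci95(43.9, 12.3, 72))  # -> (41.0, 46.8), matching Table 2
```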
**Figure 2: Flow chart showing a schematic summary of patient participation in this study**
- Consecutive subjects with shoulder pain screened for eligibility (n=211)
- Excluded (n=136): primary frozen shoulder (n=20), bilateral shoulder pain (n=26), post-traumatic (n=8), post-surgical (n=5), consultation (n=30)
- Eligible (n=88)
- Declined to participate (n=13)
- Subjects with unilateral shoulder pain (n=75)
- Excluded after physical examination (n=3): signs and symptoms of primary frozen shoulder (n=2), severe language problems (n=1)
- Subjects with unilateral non-traumatic shoulder pain applicable for analysis (n=72)
### Table 2 Characteristics of patients participating in this study (n=72).
| Characteristics | n (%) | mean (SD; 95% CI); median |
|------------------------------------------------------|----------------|----------------------------|
| Age (years) | | 43.9 (12.3; 41.0 – 46.8); 45.0 |
| Gender, female | 50(69.4) | |
| Duration of shoulder pain | | |
| 6-9 months | 17(23.6) | |
| 9-12 months | 14(19.4) | |
| 1-2 years | 13(18.0) | |
| 2-5 years | 14(19.4) | |
| >5 years | 14(19.4) | |
| Recurrence rate | | |
| 1st episode | 26(36.1) | |
| 2nd episode | 19(26.4) | |
| ≥ 3rd episode | 27(37.5) | |
| Hand dominance, left-handed | 4(5.6) | |
| Side of complaints right | 48 (66.7) | |
| DASH-DLV (0 – 100)a | | 30.8 (14.1; 27.5 – 34.1); 28.3 |
| VAS-P1 (0-100)b | | 30.0 (23.9; 27.0 – 39.9); 30.0 |
| VAS-P2 (0-100)b | | 42.1 (17.7; 37.4 – 50.0); 40.0 |
| VAS-P3 (0-100)b | | 56.6 (19.8; 51.2 – 61.9); 57.0 |
| BDI-II-DLV (0 – 63)c | | 6.1 (6.0; 4.7 – 7.6); 5.00 |
**a** Higher Dash-DLV (Disabilities of the Arm, Shoulder and Hand outcome measure- Dutch Language Version) scores mean more disability with a maximum of 100 (range from 0 to 100)\(^{59}\).
**b** Higher VAS-P scores (Visual Analogue Scales for Pain) mean more pain, with a maximum of 100 (range from 0 to 100). VAS-P1 represents the current pain score, VAS-P2 represents the average pain score over the past seven days, and VAS-P3 represents the most severe pain score over the past seven days.
**c** Higher scores on the BDI-II-DLV (Beck Depression Inventory-second edition- Dutch Language Version) mean more symptoms of depression. Clinical interpretation of scores is accomplished through criterion-referenced procedures utilizing the following interpretive ranges: 0-13 minimal depression; 14-19 mild depression; 20-28 moderate depression; and 29-63 severe depression\(^{77}\).
**d** One patient scored 45 points, which is indicative of major depression. This high score was due to a major event that happened on the day of inclusion in the study.
Table 3 Socio-demographic characteristics of the current study population and eight other Dutch shoulder research study populations.
| Characteristic | Current study N=72 | Van der Windt 1996 N=335 | De Winter 1999 N=201 | Winters 1999 N=101 | Bot 2005 N=281 | Bergman 2005 N=71 | Kuijpers 2006 N=492 | Feleus 2008 N=682 | Reilingh 2008 N=587 |
|-------------------|---------------------|--------------------------|----------------------|-------------------|----------------|------------------|-------------------|-----------------|-----------------|
| Age (years, ± SD) | 43 (12.3) | 49.6 (14.4) | 48 (12) | 47.3 (15.4) | 49.2 (13.8) | 47.8 (11.8) | 52 (14) | 45* | 49.5 (14.7)†; 51.9 (13.9)‡; 52.9 (13.3)¶ |
| Gender (% female) | 69 | 56 | 66 | 58 | 63 | 52 | 50 | 52 | 50 |
| Education level | Low | 6 | NA | NA | NA | 44 | NA | NA | 36 |
| | Medium | 47 | NA | NA | NA | 42 | NA | NA | 36 |
| | high | 47 | NA | NA | NA | 14 | NA | NA | 28 |
| | | | | | | | | | 23 |
| Duration of shoulder pain (month) | < 3 m | 0 | 85 | 26 | 75 | 66 | 70 | 60 | 74 | 59 |
| | 3-6 m | 100 | 15 | 55 | 25 | 34 | 26 | | 41 |
| | > 6 m | | | | | | | | |
* Feleus reported the median instead of the mean age
† Mean age (±SD) of the acute pain group (< 6 weeks)
‡ Mean age (±SD) of the subacute pain group (6-12 weeks)
¶ Mean age (±SD) of the chronic pain group (> 3 months)
NA (not available). It was not possible to derive these data from the papers.
Prevalence of myofascial trigger points per subject
Muscles containing active MTrPs were found in all 72 subjects. The median number of muscles with active MTrPs per subject was 6 (range 2 to 16). Muscles containing latent MTrPs were found in 67 subjects. The median number of muscles with latent MTrPs per subject was 4 (range 0 to 11). Figure 3 shows the frequency distribution of active and latent MTrPs per subject. Neither active MTrPs nor latent MTrPs were normally distributed (Shapiro W= 0.95; p < 0.05; W=0.96; p < 0.05 respectively).
Figure 3: The number of active (black bar) and latent (grey bar) of MTrPs per subject. The X-axis shows the number of MTrPs per subject, and the Y-axis shows the number of subjects. The exact number of MTrPs is shown above every bar (n=72).
Prevalence of myofascial trigger points per muscle
Active MTrPs were found in the infraspinatus muscle in 56 subjects and in the upper trapezius muscle in 42 subjects. In addition, active MTrPs were highly prevalent in the middle trapezius (n=31), anterior deltoid (n=34), middle deltoid (n=36), posterior deltoid (n=32), and teres minor (n=34) muscles.
Latent MTrPs were found in the infraspinatus muscle in 11 subjects and in the upper trapezius in 27 subjects. Latent MTrPs were found in the teres major muscle in 35 subjects and in the anterior deltoid muscle in 27 subjects. Figure 4 presents the distribution of active and latent MTrPs per muscle.
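The percentages quoted in the abstract follow from these counts over the 72 examined subjects (e.g. 56/72 ≈ 77.8%); a quick check:

```python
n = 72  # subjects examined
active = {"infraspinatus": 56, "upper trapezius": 42}
latent = {"teres major": 35, "anterior deltoid": 27}

def prevalence(count, total=n):
    """Per-muscle prevalence as a percentage of subjects (one decimal)."""
    return round(100.0 * count / total, 1)

for muscle, count in {**active, **latent}.items():
    print(muscle, prevalence(count))
# infraspinatus 77.8, upper trapezius 58.3, teres major 48.6, anterior deltoid 37.5
```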
Figure 4: The number of subjects with active (black bar) or latent MTrPs (gray bar) per muscle. The X-axis shows the muscles that were examined for identification of MTrPs, and the Y-axis shows the number of subjects with MTrPs. The exact number of MTrPs is shown above every bar (n=72).
DASH-DLV, VAS-P, BDI-II-DLV, and PROM
The mean score on the DASH was 30.8 (SD 14.1; 95% CI 27.5 to 34.1). Mean VAS-P scores were as follows: the score for 'current pain' (VAS-P1) was 30.0 (SD 23.9; 95% CI 27.0 to 39.9), for 'average pain in the last seven days' (VAS-P2) was 42.1 (SD 17.7; 95% CI 37.4 to 50.0), and for 'the most severe pain in the last seven days' (VAS-P3) was 56.6 (SD 19.8; 95% CI 51.2 to 61.9). The mean PROM score, calculated as the sum of the PROM values measured for the non-affected shoulder minus those measured for the affected shoulder, was 32.4 degrees (SD 34.8; 95% CI 24.2 to 40.6), where a positive value indicates that the affected shoulder has an impaired range of motion. Both DASH and PROM scores were normally distributed (W = 0.97; p < 0.05 and W = 0.91; p < 0.05, respectively). VAS-P1, VAS-P2, and VAS-P3 scores were also considered to be normally distributed, although the Shapiro-Wilk test presented borderline results for VAS-P2 and VAS-P3.
Table 4: Correlation matrix of the current study population (n=72).
| | MTrPs | Active MTrPs | Latent MTrPs | DASH-DLV | BDI-II-DLV | VAS-P1 | VAS-P2 | VAS-P3 | Duration |
|----------------|-------|--------------|--------------|----------|------------|--------|--------|--------|----------|
| MTrPs | - | 0.65* | 0.11 | 0.29* | 0.22 | 0.44* | 0.31* | 0.06 | 0.26* |
| AMTrPs | | - | -0.64* | 0.30* | 0.16 | 0.33* | 0.28* | 0.01 | 0.12 |
| LMTrPs | | | - | -0.12 | 0.02 | -0.02 | -0.06 | 0.04 | 0.04 |
| DASH-DLV | | | | - | 0.35* | 0.66* | 0.58* | 0.27* | 0.05 |
| BDI-II-DLV | | | | | - | 0.33* | 0.18 | 0.07 | 0.13 |
| VAS-P1 | | | | | | - | 0.68* | 0.35* | 0.18 |
| VAS-P2 | | | | | | | - | 0.57* | 0.18 |
| VAS-P3 | | | | | | | | - | -0.10 |
| Duration | | | | | | | | | - |
The data represent Spearman’s rank correlation coefficient. Correlation coefficients between the number of muscles with myofascial trigger points (MTrPs), the number of muscles with active MTrPs (AMTrPs) and the number of muscles with latent MTrPs (LMTrPs), the DASH (Disability of the Arm, Shoulder and Hand) outcome measure- Dutch Language Version (DASH-DLV), the Beck Depression Inventory-second version- Dutch language Version (BDI-II-DLV), the Visual Analogue Scales for current pain (VAS-P1), the average pain over the last seven days (VAS-P2), the most severe pain over the last seven days (VAS-P3) and the duration of shoulder pain (Duration), are given (* p < 0.05).
Correlation between the number of muscles with MTrPs and pain and disability scores (DASH-DLV, VAS-P)
The number of muscles with active MTrPs only moderately correlated with the DASH-DLV ($\rho = 0.30; p < 0.05$) and VAS-P1 scores ($\rho = 0.33; p < 0.05$), and poorly correlated with VAS-P2 ($\rho = 0.28; p < 0.05$) and the duration of the shoulder pain ($\rho = 0.26, p < 0.05$). We were unable to detect statistically significant correlations between the number of muscles with MTrPs (either active or latent) and VAS-P3 ($\rho = 0.09; p > 0.05$) or the PROM ($\rho = 0.13; p > 0.05$) scores. Table 4 provides an overview of the correlations and Figure 5 shows a scatterplot of DASH scores versus the number of active MTrPs.
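Spearman’s ρ used in Table 4 is a Pearson correlation computed on ranks (with tied values receiving their average rank). The sketch below uses small hypothetical counts, not the study data, to show the mechanics; the squared coefficient ρ² is the usual rough estimate of shared variance, which is how a ρ of about 0.3 translates into only about 10% of explained variation:

```python
def ranks(xs):
    """1-based ranks, averaging ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tie block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the ranks
    return pearson(ranks(x), ranks(y))

# Hypothetical data: active-MTrP counts vs DASH scores for eight subjects
mtrps = [2, 5, 3, 7, 4, 6, 1, 5]
dash = [20, 35, 22, 40, 30, 28, 15, 33]
rho = spearman(mtrps, dash)
print(round(rho, 2), round(rho ** 2, 2))  # -> 0.85 0.72
```

For this toy data ρ² ≈ 0.72; with the study’s ρ ≈ 0.3 the same calculation gives ρ² ≈ 0.09, i.e. roughly 10% of the variation.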
Figure 5: Scatterplot of DASH scores versus the number of muscles with active MTrPs. The regression line shows a moderate positive correlation ($r = 0.3$), indicating that increasing numbers of active MTrPs have only a limited effect on DASH scores.
Discussion
Prevalence of MTrPs
All subjects with unilateral, chronic, non-traumatic shoulder pain presented with multiple shoulder muscle MTrPs. In addition, MTrPs were found in all 17 muscles examined. However, the number of shoulder muscles with MTrPs appeared to vary greatly among subjects. In particular, MTrPs were most frequently located in the infraspinatus and upper trapezius muscles, in agreement with results from Skootsky\textsuperscript{37} and Simons\textsuperscript{26}, who found that infraspinatus muscles were frequently associated with myofascial shoulder pain. There are very few other prevalence studies in the literature, and to the best of our knowledge, this is the first extensive report on the prevalence of MTrPs in patients with chronic, non-traumatic unilateral shoulder pain.
Mean and median scores on DASH-DLV and VAS-P scores
The mean DASH-DLV score measured for the current study population is comparable with mean baseline scores measured for other study populations of subjects with shoulder and arm pain.\textsuperscript{63,65} According to Beaton\textsuperscript{81}, subjects (n=200) with DASH scores < 23.6 are still able to perform all desired daily activities, although they may experience some discomfort. For comparison, in a study population from the US (n=1706) the mean DASH score was 10.10 (SD 14.68), and in young, active and healthy adults the mean DASH score was 1.85 (SD 5.99).\textsuperscript{66} Importantly, the DASH-DLV score primarily reflects the level of dysfunction, with less emphasis on pain and other symptoms: while 23 items refer to the ability of the subject to perform activities, only 7 items assess the severity of symptoms. Subjects with long-standing shoulder complaints may alter the way in which they perform activities by using compensatory movements. In addition, the DASH-DLV does not discriminate between activities performed with the affected or the non-affected arm, which may influence the magnitude of the measured disability and therefore the final DASH-DLV score. In support of this, several subjects in our study commented that their DASH score would have been different if the activities in question had been performed with the affected arm.
Correlation between number of muscles with MTrPs, DASH-DLV scores, and VAS-P scores
The number of muscles containing active MTrPs correlated moderately and positively with DASH-DLV, VAS-P1, and VAS-P2 scores and with the duration of the shoulder pain. A correlation of this magnitude (ρ ≈ 0.3) implies that the number of muscles with active MTrPs explained only about 10% of the variation in these outcome measures. Other clinically relevant factors may therefore have contributed to the primary and secondary outcome scores. First, although we did not measure pain intensity at the MTrP itself, this may have a significant impact on pain and functioning. Hidalgo-Lozano et al found that patients with shoulder pain had a larger number of both active and latent MTrPs than healthy subjects, that active MTrPs were associated with greater pain intensity, that lower pressure pain thresholds (PPTs) were recorded for active MTrPs than for latent MTrPs, and that patients with shoulder pain displayed lower PPTs than healthy subjects\textsuperscript{49}. Second, in this study we did not take into consideration the number of MTrPs per muscle, which may have contributed to the moderate correlation observed between the number of muscles with MTrPs and the DASH-DLV and VAS-P scores. The total number of muscles with MTrPs was poorly but positively correlated with the duration of the complaints, indicating that the number of shoulder muscles with MTrPs may increase over time regardless of whether the MTrPs are active or latent. Finally, because the DASH-DLV does not discriminate between the affected and the non-affected shoulder, one could speculate that patients with chronic shoulder pain develop strategies to overcome the pain and disability caused by their shoulder disorder, for instance by using the non-affected arm, resulting in lower DASH-DLV and VAS-P scores. All these factors may have substantially influenced the correlation coefficients.
Although the number of shoulder muscles with active MTrPs correlates moderately with the various outcome measures, this does not imply that MTrPs are clinically unimportant.
Clinical implications
To date, unilateral shoulder pain has mainly been proposed to be due to either the presence of inflammation in the subacromial tendons and bursae, or degenerative rotator cuff ruptures (diagnosed using modern imaging techniques, such as MRI or sonography). Although these pathological structures may cause pain, it is also known that similar abnormalities have been found in asymptomatic shoulders.
Active MTrPs are painful spots that reproduce the patient's familiar shoulder pain during contraction, stretching, or compression; they may provide an alternative explanation for shoulder pain that is independent of the presence of subacromial abnormalities. According to Simons, Travell and Simons\textsuperscript{36}, MTrPs within the infraspinatus muscle (the most prevalent in this study) cause pain in the anterior and middle deltoid regions that expands into the frontal upper arm, as well as referred pain and referred sensations felt in the wrist and the hand. In addition, internal rotation and cross-body adduction may be impaired, which is often the case in patients with shoulder pain. Both experimentally induced and spontaneous muscle pain lead to an aberrant motor activation pattern that is also present in patients with shoulder pain.\textsuperscript{57,68} Although latent MTrPs are not usually an immediate source of pain, they can elicit referred pain when mechanically stimulated, or during sustained or repeated muscle contraction. In addition, latent MTrPs may disturb normal motor recruitment patterns and movement efficiency. Lucas et al showed that subjects who received myofascial dry needling, followed by passive muscle stretching to remove latent MTrPs, showed normalized motor activation patterns within 20 to 30 minutes of treatment.\textsuperscript{48} Therefore, it is reasonable to expect that treatment of MTrPs may normalize motor activation patterns and may facilitate spontaneous recovery of shoulder pain, either without exercise or by making exercise more effective.
Based on the results of this study, we propose that an alternative approach may be indicated for the assessment and management of patients with chronic, non-traumatic shoulder pain. Current treatment regimens consist primarily of pharmacological interventions, including anti-inflammatory medications, or muscle strengthening exercises. If MTrPs are one of the main causes of shoulder pain (active MTrPs) and of altered motor activation patterns (active and latent MTrPs), as several authors have proposed,\textsuperscript{26} \textsuperscript{48} \textsuperscript{69} then anti-inflammatory treatment and muscle strengthening exercises should not be the treatment of first choice. Instead, treatment should begin with MTrP inactivation. Manual techniques, including manual compression of the MTrP (known as ischemic compression or trigger point release), trigger point dry needling, and injection therapy are used to inactivate MTrPs. After MTrP inactivation, muscle stretching and relaxation exercises, heat applications, dynamic exercises to improve range of motion, and muscle reconditioning are introduced as appropriate. This therapy is accompanied by a gradual increase in daily activities.
If the above hypothesis is true, treatment of MTrPs could provide an innovative and promising therapy for shoulder pain. The present study describes the characteristics of a sample of patients with chronic, unilateral, non-traumatic shoulder pain who were recruited for a randomized clinical trial of MTrP-directed interventions delivered by physical therapists. The results of that trial have been accepted for publication (Bron et al, BMC Medicine).
**General Applicability**
We compared sociodemographic data from the current study population with similar data from several other Dutch shoulder pain studies. Because none of these studies investigated MTrPs, we made this comparison to see whether there was reason to expect that the high prevalence of MTrPs we observed was unique to our population. Our study population included more females, and the subjects were significantly younger and more highly educated than subjects from the other Dutch populations, although a specific explanation for these differences is lacking. There is no reason to suspect that educational level correlates with the number of MTrPs; educational level matters mainly for effectiveness studies, because it may affect patients’ motivation and compliance.\(^{70}\) Increased age, however, may be associated with an increased number of MTrPs.\(^{72}\) Because the subjects of the present study were younger, and musculoskeletal complaints tend to increase with age,\(^{72}\) there is no reason to suspect that we overestimated the prevalence of MTrPs in our population. On the other hand, there were more females in our study population, and females may be more prone to musculoskeletal disorders in general,\(^{73}\) so MTrPs may have been slightly more prevalent in our study population.\(^{74-76}\) Despite the above-mentioned differences, we conclude that our subjects are comparable with other patients with chronic shoulder pain and that the findings of this study can be generalized to other patients.
**Strength and limitations of the present study**
One limitation of our study is that we only examined patients with unilateral chronic shoulder pain and dysfunction, whereas MTrPs are thought to be responsible for both acute and chronic pain. It is conceivable that patients who develop chronic shoulder pain have more, and more persistent, MTrPs in the acute phase than patients who recover easily. Future research projects should therefore also include the assessment of MTrPs in patients with acute shoulder problems. The small sample size is another limitation of this study. Before starting this study, a power analysis was performed, which indicated that 104 subjects would be needed for the clinical trial. After two years (one year longer than originally planned), 72 subjects had been enrolled. For practical reasons, the study was completed with this smaller sample size, which may have influenced some of the results. We used two observers in this study, with identical clinical experience and post-graduate training in myofascial trigger point therapy. Both observers found a comparable mean number of active MTrPs, and because there was no statistically significant difference in the mean DASH scores obtained by the two observers, we consider both observer groups, and the findings obtained by them, to be comparable.
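For context on the power analysis mentioned above: the common two-group formula is n = 2(z<sub>α/2</sub> + z<sub>β</sub>)² / d² per group. The effect size below (d = 0.55, with α = 0.05 and 80% power) is an illustrative assumption of ours that happens to reproduce the reported target of 104 subjects; the authors' actual planning parameters are not stated here.

```python
import math

def n_per_group(d, z_alpha=1.96, z_beta=0.8416):
    """Per-group sample size for a two-sample comparison of means
    (normal approximation): n = 2 * (z_alpha + z_beta)**2 / d**2."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

n = n_per_group(0.55)  # d = 0.55 is a hypothetical, illustrative effect size
print(n, 2 * n)  # -> 52 104
```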
Conclusion
This study demonstrates that MTrPs are highly prevalent in patients with chronic, unilateral, non-traumatic shoulder pain. In addition, the number of muscles with MTrPs is moderately correlated with the DASH-DLV outcome measure and the VAS-P pain measures, indicating that MTrPs contribute to the clinical picture of common shoulder pain problems. We recommend that MTrP examination and treatment be considered for patients with shoulder pain, both in future clinical studies and in clinical practice.
Authors’ contributions
All authors have read, edited and approved the final manuscript. CB is the lead investigator, and developed the design of the study, carried out data-acquisition, analysis, interpretations, and prepared the manuscript as primary author. MW and RO provided advice on the study and the manuscript, and supervised the study. JD and BS provided intellectual contributions to the manuscript.
Competing interests
The authors declare that they have no competing interests.
Acknowledgements
The authors would like to thank Maria Onstenk and Monique Bodewes for their contributions as observers, Ineke Staal and Larissa Bijlsma for their logistical assistance, and Peter Mulder for his assistance and critical analysis of this paper.
REFERENCES
1 Picavet HSJ, Schouten JSAG Musculoskeletal pain in the Netherlands prevalences, consequences and risk groups, the DMC3-study Pain 2003, 102(1) 167-178
2 Kuipers T, van Tulder MW, van der Heijden GJ, Bouter LM, van der Windt DA Costs of shoulder pain in primary care consulters a prospective cohort study in The Netherlands BMC Musculoskelet Disord 2006, 7 83
3 Feleus A, Bierma-Zeinstra SM, Miedema HS, Verhaar JA, Koes BW Management in non-traumatic arm, neck and shoulder complaints differences between diagnostic groups Eur Spine J 2008, 17(9) 1218-1229
4 Macfarlane GJ, Hunt IM, Silman AJ Predictors of chronic shoulder pain a population based prospective study J Rheumatol 1998, 25(8) 1612-1615
5 Reilngh ML, Kuipers T, Tanja-Harterkamp AM, van der Windt DA Course and prognosis of shoulder symptoms in general practice Rheumatology (Oxford) 2008, 47(5) 724-730
6 Schellingerhout J, Verhagen A, Thomas S, Koes B Lack of uniformity in diagnostic labeling of shoulder pain Time for a different approach Man Ther 2008 6
7 Neer CS, 2nd Anterior acromioplasty for the chronic impingement syndrome in the shoulder a preliminary report J Bone Joint Surg Am 1972, 54(1) 41-50
8 Hawkins RJ, Hobeika PE Impingement syndrome in the athletic shoulder Clin Sports Med 1983, 2(2) 391-405
9 Needell SD, Zlatkin MB, Sher JS, Murphy BJ, Urbe JW MR imaging of the rotator cuff peritendinous and bone abnormalities in an asymptomatic population AJR Am J Roengenol 1996, 166(4) 863-867
10 Schibany, Zeheigruber H, Kainberger F, Wurning C, Ba-Ssalamah A, Hereth AM, Lang T, Gruber D, Breitenseher MJ Rotator cuff tears in asymptomatic individuals a clinical and ultrasonographic screening study Eur J Radiol 2004, 51(15294335) 263-268
11 Naranjo A, Marrozo-Pulido T, Ojeda S, Francisco F, Erazquin C, Rua-Figueroa I, Rodriguez-Lozano C, Hernandez-Socorro CR Abnormal sonographic findings in the asymptomatic arthritic shoulder Scand J Rheumatol 2002, 31(1) 17-21
12 Neumann CH, Holt RG, Steinbach LS, Jahnke AH, Jr , Petersen SA MR imaging of the shoulder appearance of the supraspinatus tendon in asymptomatic volunteers AJR Am J Roentgenol 1992, 158(6) 1281-1287
13 Park HB, Yokota A, Gill HS, El Rassi G Diagnostic Accuracy of Clinical Tests for the Different Degrees of Subacromial Impingement Syndrome The Journal of Bone and Joint Surgery 2005(3057332520020642814)
14 MacDonald PB, Clark P, Sutherland K An analysis of the diagnostic accuracy of the Hawkins and Neer subacromial impingement signs J Shoulder Elbow Surg 2000, 9(4) 299-301
15 McFarland EG, Selhi HS, Keyurapan E Clinical evaluation of impingement what to do and what works Instr Course Lect 2006, 55 3-16
16 Bonsell S, Pearsall AW, Heitman RJ, Helms CA, Major NM, Speer KP The relationship of age, gender, and degenerative changes observed on radiographs of the shoulder in asymptomatic individuals J Bone Joint Surg Br 2000, 82(8) 1135-1139
17 Bradley MP, Tung G, Green A Overutilization of shoulder magnetic resonance imaging as a diagnostic screening tool in patients with chronic shoulder pain J Shoulder Elbow Surg 2005, 14(3) 233-237
18 Ekelberg OM, Bautz-Holter E, Tveita EK, Juel NG, Kvalheim S, Brox JI Subacromial ultrasound guided or systemic steroid injection for rotator cuff disease randomised double blind study BMJ 2009, 338 a3112
19 Desmeules F, Cote CH, Fremont P Therapeutic exercise and orthopedic manual therapy for impingement syndrome a systematic review Clin J Sport Med 2003, 13(3) 176-182
20 Dorrestijn O, Stevens M, Winters JC, van der Meer K, Diercks RL Conservative or surgical treatment for subacromial impingement syndrome? A systematic review J Shoulder Elbow Surg 2009, 18(4) 652-660
21 Buchbinder R, Green S, Youd JM Corticosteroid injections for shoulder pain Cochrane Database Syst Rev 2003(1) CD004016
22 Green S, Buchbinder R, Hetrick S Physiotherapy interventions for shoulder pain Cochrane Database Syst Rev 2003(2) CD004258
23 Green S, Buchbinder R, Glazier R, Forbes A Interventions for shoulder pain Cochrane Database Syst Rev 2000(2) CD001156
24 Cummins CA, Sasso LM, Nicholson D Impingement syndrome temporal outcomes of nonoperative treatment J Shoulder Elbow Surg 2009, 18(2) 172-177
25 Gerwin RD, Dommerholt J, Shah JP An expansion of Simons' integrated hypothesis of trigger point formation Curr Pain Headache Rep 2004, 8(6) 468-475
26 Simons DG, Travell, JG , Simons L S Myofascial Pain and Dysfunction The trigger point manual Upper half of body; vol 1, second edn Baltimore, MD Lippincott, Williams and Wilkins, 1999
27 Shah JP, Danoff JV, Desai MJ, Parikh S, Nakamura LY, Phillips TM, Gerber LH Biochemicals associated with pain and inflammation are elevated in sites near to and remote from active myofascial trigger points *Arch Phys Med Rehabil* 2008, 89(1) 16-23
28 Shah JP, Phillips TM, Danoff JV, Gerber LH An in vivo microanalytical technique for measuring the local biochemical milieu of human skeletal muscle *J Appl Physiol* 2005, 99(5) 1977-1984
29 Chen Q, Bensamoun S, Basford JR, Thompson JM, An KN Identification and quantification of myofascial taut bands with magnetic resonance elastography *Arch Phys Med Rehabil* 2007, 88(12) 1658-1661
30 Sikdar S, Shah J, Gebreab, Yen R, Gilliams E, Danoff JV, Gerber L Novel Applications of Ultrasound Technology to Visualize and Characterize Myofascial Trigger Points and Surrounding Soft Tissue *Arch Phys Med Rehabil* 2009, 90(november) 1829-1838
31 Zhang Y, Ge HY, Yue SW, Kimura Y, Arendt-Nielsen L Attenuated skin blood flow response to nociceptive stimulation of latent myofascial trigger points *Arch Phys Med Rehabil* 2009, 90(1) 1829-1838
32 Zhang Y, Ge HY, Fernandez-De-Las-Penas C, Arendt-Nielsen L Sympathetic facilitation of hyperalgesia evoked from myofascial tender and trigger points in patients with unilateral shoulder pain *Clinical Neurophysiology* 2006, 117(7) 1545-1550
33 Kimura Y, Ge HY, Zhang Y, Kimura M, Sumikura H, Arendt-Nielsen L Evaluation of sympathetic vasoconstrictor response following nociceptive stimulation of latent myofascial trigger points in humans *Acta Physiologica* 2009, 196(4) 411-417
34 Hubbard DR, Berkoff GM Myofascial trigger points show spontaneous needle EMG activity *Spine (Phila Pa 1976)* 1993, 18(13) 1803-1807
35 Bron C, Franssen J, Wensing M, Oostendorp RA Interrater reliability of palpation of myofascial trigger points in three shoulder muscles *J Man Manip Ther* 2007, 15(4) 203-215
36 Ettlin T, Schuster C, Stoffel R, Bruderlin A, Kischka U A distinct pattern of myofascial findings in patients after whiplash injury *Arch Phys Med Rehabil* 2008, 89(7) 1290-1293
37 Skootsky SA, Jaeger B, Oye RK Prevalence of myofascial pain in general internal medicine practice *West J Med* 1989, 151(2) 157-160
38 Fishbain DA, Goldberg M, Meagher BR, Steele R, Rosomoff H Male and female chronic pain patients categorized by DSM-III psychiatric diagnostic criteria *Pain* 1986, 26(2) 181-197
39 Gerwin R A study of 96 subjects examined both for fibromyalgia and myofascial pain *J Musculoskeletal Pain* 1995, 3(supple 1)
40 Fricton JR, Kroening R, Haley D, Siegert R Myofascial pain syndrome of the head and neck: a review of clinical characteristics of 164 patients *Oral Surg Oral Med Oral Pathol* 1985, 60(6) 615-623
41 Sola AE, Kuttner JH Myofascial trigger point pain in the neck and shoulder girdle, report of 100 cases treated by injection of normal saline *Northwest Med* 1955, 54(9) 980-984
42 Fleckenstein J, Zaps D, Ruger LJ, Lehmeyer L, Freiberg F, Lang PM, Irnich D Discrepancy between prevalence and perceived effectiveness of treatment methods in myofascial pain syndrome: Results of a cross-sectional, nationwide survey *BMC Musculoskelet Disord* 2010, 11(1) 32
43 Bron C, Franssen JLM, de Valk BGM A post-traumatic shoulder complaint without apparent injury (Een post-traumatische schouderklacht zonder aanwijsbaar letsel) *Ned Tijdschrift v Fysiotherapie* 2001, 111(4) 97-102
44 Ge HY, Fernandez-de-Las-Penas C, Madeleine P, Arendt-Nielsen L Topographical mapping and mechanical pain sensitivity of myofascial trigger points in the infraspinatus muscle *Eur J Pain* 2008, 12(7) 859-865
45 Reynolds MD Myofascial trigger points in persistent posttraumatic shoulder pain *South Med J* 1984, 77(10) 1277-1280
46 Abate M, Gravare-Silbernagel K, Siljeholm C, Di Iorio A, De Amicis D, Sahni V, Werner S, Paganelli R Pathogenesis of tendinopathies: inflammation or degeneration? *Arthritis Res Ther* 2009, 11(3) 235
47 Falla D, Farina D, Graven-Nielsen T Experimental muscle pain results in reorganization of coordination among trapezius muscle subdivisions during repetitive shoulder flexion *Exp Brain Res* 2007, 178(3) 385-393
48 Lucas KR, Rich PA, Polus BI Muscle activation patterns in the scapular positioning muscles during loaded scapular plane elevation: the effects of Latent Myofascial Trigger Points *Clin Biomech* 2010, 25(8) 765-770
49 Hidalgo-Lozano A, Fernandez-de-las-Penas C, Alonso-Blanco C, Ge HY, Arendt-Nielsen L, Arroyo-Morales M Muscle trigger points and pressure pain hyperalgesia in the shoulder muscles in patients with unilateral shoulder impingement: a blinded, controlled study *Exp Brain Res* 2010, 202(4) 915-925
50 Bron C, Wensing M, Franssen JL, Oostendorp RA Treatment of myofascial trigger points in common shoulder disorders by physical therapy: a randomized controlled trial [ISRCTN75722066] *BMC Musculoskelet Disord* 2007, 8 107
51 Bergman GJ, Winters JC, Groener KH, Pool JJ, Meyboom-de Jong B, Postema K, van der Heijden GJ Manipulative therapy in addition to usual medical care for patients with shoulder dysfunction and pain: a randomized, controlled trial *Ann Intern Med* 2004, 141(6) 432-439
52 Bot SD, van der Waal JM, Terwee CB, van der Windt DA, Schellevis FG, Bouter LM, Dekker J Incidence and prevalence of complaints of the neck and upper extremity in general practice *Ann Rheum Dis* 2005, 64(1) 118-123
53 de Winter AF, Jans MP, Scholten RJ, Deville W, van Schaardenburg D, Bouter LM Diagnostic classification of shoulder disorders interobserver agreement and determinants of disagreement Ann Rheum Dis 1999, 58(5) 272-277
54 Feleus A, Bierna-Zeinstra S, Miedema H, Verhaar J, Koes B Management in non-traumatic arm, neck and shoulder complaints differences between diagnostic groups Eur Spine J 2008, 17(9) 1218-1229
55 van der Windt DA, Koes BW, Boeke AJ, Deville W, De Jong BA, Bouter LM Shoulder disorders in general practice prognostic indicators of outcome Br J Gen Pract 1996, 46(410) 519-523
56 Geisser ME, Roth RS, Robinson ME Assessing depression among persons with chronic pain using the Center for Epidemiological Studies-Depression Scale and the Beck Depression Inventory a comparative analysis Clin J Pain 1997, 13(2) 163-170
57 Gerwin R, Shannon S Interexaminer reliability and myofascial trigger points Arch Phys Med Rehabil 2000, 81(9) 1257-1258
58 Al-Shenqiti AM, Oldham JA Test-retest reliability of myofascial trigger point detection in patients with rotator cuff tendinitis Clin Rehabil 2005, 19(5) 482-487
59 Solway S, Beaton D, McConnell S, Bombardier C The DASH Outcome Measure User's Manual, second edn Toronto Institute for Work & Health, 2002
60 Bot SD, Terwee CB, van der Windt DA, Bouter LM, Dekker J, de Vet HC Clinimetric evaluation of shoulder disability questionnaires a systematic review of the literature Ann Rheum Dis 2004, 63(4) 335-341
61 McCormack HM, Horne DJ, Sheather S Clinical applications of visual analogue scales a critical review Psychol Med 1988, 18(4) 1007-1019
62 Feinstein A Clinimetrics New York Yale University Press, 1987
63 Roy JS, Macdermid JC, Woodhouse LJ Measuring shoulder function A systematic review of four questionnaires Arthritis Care Res 2009, 61(5) 623-632
64 Gummesson C, Atroshi I, Ekdahl C The disabilities of the arm, shoulder and hand (DASH) outcome questionnaire longitudinal construct validity and measuring self-rated health change after surgery BMC Musculoskelet Disord 2003, 4 11
65 Schmitt JS, Di Fabio RP Reliable change and minimum important difference (MID) proportions facilitated group responsiveness comparisons using individual threshold criteria J Clin Epidemiol 2004, 57(10) 1008-1018
66 Hunsker FG, Ciolfi DA, Amadio PC, Wright JG, Caughlin B The American academy of orthopaedic surgeons outcomes instruments normative values from the general population J Bone Joint Surg Am 2002, 84-A(2) 208-215
67 Diedrichsen LP, Norregaard J, Dyhre-Poulsen P, Winther A, Tufekovic G, Bandholm T, Rasmussen LR, Krogsgaard M The activity pattern of shoulder muscles in subjects with and without subacromial impingement J Electromyogr Kinesiol 2009, 19(5) 789-799
68 Hess SA, Richardson C, Darnell R, Fris P, Lisle D, Myers P Timing of rotator cuff activation during shoulder external rotation in throwers with and without symptoms of pain J Orthop Sports Phys Ther 2005, 35(12) 812-820
69 Gerwin R Myofascial Pain Syndrome Here we are, where must we go? J Musculoskeletal pain 2010, 18(4) 18
70 DiMatteo MR Evidence-based strategies to foster adherence and improve patient outcomes JAAPA 2004, 17(11) 18-21
71 DiMatteo MR Variations in patients' adherence to medical recommendations a quantitative review of 50 years of research Med Care 2004, 42(3) 200-209
72 Vecchiet L Muscle Pain and Aging Journal of Musculoskeletal Pain 2002, 10(1) 5 - 22
73 Rollman GB, Lautenbacher S Sex differences in musculoskeletal pain Clin J Pain 2001, 17(1) 20-24
74 Treaster DE, Burr D Gender differences in prevalence of upper extremity musculoskeletal disorders Ergonomics 2004, 47(5) 495-526
75 Ring D, Kadzielski J, Fabian L, Zurakowski D, Malhotra LR, Jupiter JB Self-reported upper extremity health status correlates with depression J Bone Joint Surg Am 2006, 88(9) 1983-1988
76 Fillingim RB, King CD, Ribeiro-Dasilva MC, Rahim-Williams B, Riley JL 3rd Sex, gender, and pain a review of recent clinical and experimental findings J Pain 2009, 10(5) 447-485
77 Beck AT, Brown G, Steer RA Beck Depression Inventory-II manual San Antonio, TX The Psychological Corporation, 1996
TREATMENT OF MYOFASCIAL TRIGGER POINTS IN PATIENTS WITH CHRONIC SHOULDER PAIN; A RANDOMIZED CONTROLLED TRIAL
Carel Bron
Arthur de Gast
Jan Dommerholt
Boudewijn Stegenga
Michel Wensing
Rob AB Oostendorp
Accepted for publication BMC Medicine
TREATMENT OF MYOFASCIAL TRIGGER POINTS IN PATIENTS WITH CHRONIC SHOULDER PAIN; A RANDOMIZED CONTROLLED TRIAL
Abstract
Background Shoulder pain is a common musculoskeletal problem that is often chronic or recurrent. Myofascial trigger points (MTrPs) cause shoulder pain and are prevalent in patients with shoulder pain. However, few studies have focused on MTrP therapy. The aim of this study was to assess the effectiveness of multimodal treatment of MTrPs in patients with chronic shoulder pain.
Methods A single-assessor-blinded randomized controlled trial was conducted. The intervention group received a comprehensive treatment once a week, consisting of manual compression of the MTrPs, manual stretching of the muscles, and intermittent cold application with stretching. Patients were instructed to perform muscle stretching and relaxation exercises at home, and received ergonomic recommendations and advice on assuming and maintaining a "good" posture. The control group remained on a waiting list for three months. The Disabilities of the Arm, Shoulder and Hand outcome measure score (DASH [primary outcome]), the Visual Analogue Scale for pain (VAS-P), the Global Perceived Effect (GPE), and the number of muscles with MTrPs were assessed at 6 and 12 weeks in the intervention group and compared with the control group.
Results Compared to the control group, the intervention group showed significant improvement (p < 0.05) after 12 weeks on the DASH (mean difference 7.7; 95% confidence interval [CI] 1.2 to 14.2), VAS-P for current pain (13.8; 95% CI 2.6 to 25.0), VAS-P for average pain in the last week (10.2; 95% CI 0.7 to 19.7), and VAS-P for the most severe pain in the last week (13.8; 95% CI 0.8 to 28.4). After 12 weeks, 55% of the subjects in the intervention group reported improvement (ranging from slightly improved to completely recovered) versus 14% in the control group. The mean number of muscles with active MTrPs decreased in the intervention group compared to the control group (mean difference 2.7; 95% CI 1.2 to 4.2).
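The mean differences above are reported with normal-approximation 95% CIs, i.e. diff ± 1.96·√(SD₁²/n₁ + SD₂²/n₂). The group summaries in the sketch below are hypothetical (the trial's raw group data are not given here), chosen only to show how an interval like the DASH contrast of 7.7 (1.2 to 14.2) arises:

```python
import math

def mean_diff_ci(m1, sd1, n1, m2, sd2, n2):
    """Normal-approximation 95% CI for the difference of two group means."""
    diff = m1 - m2
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return diff, diff - 1.96 * se, diff + 1.96 * se

# Hypothetical group summaries (36 per group, SD 14.0) chosen to echo
# the reported DASH contrast; these are NOT the trial's actual data.
diff, lo, hi = mean_diff_ci(32.7, 14.0, 36, 25.0, 14.0, 36)
print(round(diff, 1), round(lo, 1), round(hi, 1))  # -> 7.7 1.2 14.2
```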
Conclusions The results of this study show that a 12-week comprehensive treatment of MTrPs in shoulder muscles reduces the number of muscles with active MTrPs and is effective in reducing symptoms and improving shoulder function in patients with chronic shoulder pain.
Trial Registration Current Controlled Trials [ISRCTN75722066]
Background
Shoulder pain is a common musculoskeletal problem. In several countries, the one-year prevalence is estimated to be 20% to 50%. The annual incidence of shoulder pain and symptoms in Dutch primary care practice ranges from 19 to 29.5 per 1000. Shoulder pain is the main contributor to non-traumatic upper limb pain, in which chronicity and recurrence of symptoms are common. The most common cause of shoulder pain is considered to be subacromial impingement syndrome (SIS), causing inflammation and degeneration of subacromial bursae and tendons. SIS was first described in 1867 by French anatomist and surgeon Jarjavay and in 1972 re-introduced by Neer. Although the interpretation of the physical signs during shoulder examination is far from reliable, the diagnosis of SIS is based mainly on the clinical picture of pain in the shoulder as described by Neer.
The clinical picture consists of an arc of pain, crepitus and muscle weakness, and a positive impingement test, which means complete relief of pain with forced forward elevation of the upper arm after injection of a local anesthetic into the subacromial space. Scientific evidence from randomized controlled trials (RCTs), meta-analyses, or systematic reviews of RCTs on the effectiveness of multimodal rehabilitation, injection therapy, medication, surgery, physical therapy, or the application of other therapies in patients with shoulder pain is conflicting or lacking, which justifies a search for an alternative explanation of shoulder pain, whether or not diagnosed as SIS.
A common cause of muscle pain is myofascial pain caused by myofascial trigger points (MTrPs). MTrPs in the shoulder muscles produce symptoms similar to other shoulder pain syndromes, including pain at rest and with movement, sleep disturbances, and pain-provocation during impingement tests. Clinical, histological, biochemical, and electrophysiological research has provided biological plausibility for the existence of MTrPs. As a result, the role of MTrPs in musculoskeletal pain is increasingly accepted in the medical literature. MTrPs are defined as exquisitely tender spots in discrete taut bands of hardened muscle that produce symptoms, known as myofascial pain.
MTrPs are classified into active and latent trigger points. According to Simons et al., "an active MTrP causes a clinical pain complaint. It is always tender, prevents full lengthening of the muscle, weakens the muscle, refers a patient-recognized pain on compression, mediates a local twitch response of muscle fibers when adequately stimulated, and, when compressed within the patient's pain tolerance, produces referred motor phenomena and often autonomic phenomena, generally in its pain reference zone, and causes tenderness in the pain reference zone." A latent MTrP is defined as "clinically quiescent with respect to spontaneous pain, it is painful only when palpated. A latent MTrP may have all the other clinical characteristics of an active MTrP and always has a taut band that increases muscle tension and restricts range of motion." Palpation is still considered the only reliable clinical method to diagnose MTrPs. Previous studies have shown that trained physical therapists can reliably detect MTrPs by palpation.
Figure 1: Referred pain patterns (red) from supraspinatus (a), infraspinatus (b), teres minor (c), and subscapularis (d) muscle MTrPs (Xs), according to Simons et al. Illustrations courtesy of LifeART/MEDICLIP, Manual Medicine 1, Version 1.0a, Lippincott, Williams & Wilkins, 1997
Although magnetic resonance elastography and ultrasound imaging studies have shown potential to visualize MTrPs, their clinical usefulness has yet to be established.
Manual techniques, spray-and-stretch, and trigger point needling can inactivate MTrPs. MTrP inactivation may be combined with ergonomic advice, active exercises, postural correction, and relaxation if and when appropriate. Treatment of MTrPs is rarely included in systematic reviews of the effectiveness of conservative interventions in patients with shoulder pain. However, several case studies suggest that the treatment of MTrPs in patients with shoulder pain may be beneficial, although well-designed controlled studies
are still lacking\(^{47,52}\). Recently, Hains et al. compared ischemic compression of relevant MTrPs (intervention) with ischemic compression of irrelevant MTrPs (sham treatment). The results of this study suggest that ischemic compression of MTrPs in shoulder muscles may reduce the symptoms of patients experiencing chronic shoulder pain \(^{53}\).
The aim of the current study was to assess the effectiveness of a comprehensive treatment program of MTrPs in shoulder muscles on symptoms and functioning of the shoulder in patients with chronic non-traumatic shoulder pain compared to a wait-and-see approach.
**Methods and Subjects**
A single-blinded randomized controlled trial (RCT) was conducted, which was approved by the Medical Ethics Committee of the Radboud University Nijmegen Medical Centre, the Netherlands [CMO 2007/022]. This RCT is registered at Current Controlled Trials [ISRCTN75722066] and the study protocol was published \(^{54}\).
**Participants/Study sample**
Between September 2007 and December 2009, all consecutive patients with shoulder pain referred to a primary care practice for physical therapy were potential study participants. The patients were self-referred or referred by general practitioners, orthopedic surgeons, neurologists, or physiatrists. Patients were eligible if they had unilateral non-traumatic shoulder pain for at least six months, were aged between 18 and 65 years, and their clinical presentation did not warrant referral for further diagnostic screening. Patients who had previously been diagnosed with shoulder instability, shoulder fractures, or systemic diseases, such as rheumatoid arthritis, Reiter's syndrome, or diabetes, or whose medical history or physical examination suggested neurological diseases or other severe medical or psychiatric disorders, were excluded from the study. Patients with signs and symptoms of a primary frozen shoulder were also excluded. Because the questionnaires were in Dutch, subjects had to understand written and spoken Dutch. The lead investigator (CB) checked all available information from referral letters and additional information from the patients. All eligible patients were invited to participate in the study. The patients were informed about the study before the first assessment and provided written informed consent.
**Data assessment**
Two research assistants (MO and MB, see acknowledgements), each with 30 years of clinical experience in primary care practice and more than 5 years' experience in identifying and treating MTrPs, performed the physical examination, including the assessment of passive range of motion (PROM) of the shoulder and the palpation of the shoulder muscles for MTrPs. The total number of shoulder muscles with active and latent MTrPs was counted. The research assistants were blinded to the treatment allocation during the entire study period. The assessments took place at intake, prior to randomization, and at 6 and 12 weeks. For each patient, only one observer performed all assessments. A detailed medical history was completed, which included demographic variables and potential prognostic factors\textsuperscript{35, 56}, together with a set of self-administered questionnaires for outcome measurements, including the Disabilities of the Arm, Shoulder and Hand questionnaire (DASH), Visual Analogue Scales for Pain (VAS-P), the RAND-36, and the Beck Depression Inventory-II (BDI-II). A third research assistant (IS, see acknowledgements) transferred the collected data to a worksheet. After the data were transferred from the worksheet into a statistical software package, the lead investigator (CB), who remained blinded to the treatment allocation until all statistical tests had been performed, analyzed the data. Blinding of the patients or the treating physical therapists was impossible due to the nature of the treatment.
**Sample size**
The planned sample size was based on an assumed mean improvement on the primary outcome, the DASH questionnaire, of 15 points (SD 22), which implies an effect size of 0.68.\textsuperscript{57} To test the null hypothesis at $\alpha = 0.05$ with 90% power, and assuming a uniform dropout rate of 5%, it was calculated that 52 patients per group would be required.
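The paper does not state which sample-size formula was used; a minimal sketch using the standard normal approximation for a two-sample comparison of means (function name and structure are ours, not the authors') gives a figure close to the reported 52 per group:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(delta, sd, alpha=0.05, power=0.90, dropout=0.05):
    # Normal-approximation sample size for a two-sample comparison of means,
    # inflated for an assumed uniform dropout rate.
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~1.28 for 90% power
    n = 2 * ((z_a + z_b) * sd / delta) ** 2     # completers per group
    return ceil(n / (1 - dropout))              # inflate for dropout

effect_size = 15 / 22   # assumed improvement / SD, approximately 0.68
n = sample_size_per_group(delta=15, sd=22)
```

The normal approximation yields roughly 48 per group; the reported 52 presumably reflects a t-distribution correction or more conservative rounding.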
**Randomization**
After collection of the baseline data, the included patients were randomly assigned to either the intervention group or the "wait and see" group. A research assistant (IS) performed the randomization by generating random numbers with computer software (Research Randomizer on www.socialpsychology.org). These numbers were stored on a computer and were accessible only to the assistant. No stratification or blocking strategies were used.
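As a sketch, simple (unrestricted) randomization of the kind described, with each patient assigned independently and no blocking or stratification, can be reproduced as follows (the function name, labels, and seed are illustrative, not from the study):

```python
import random

def simple_randomization(patient_ids, seed=None):
    # Unrestricted (simple) randomization: each patient is independently
    # assigned to one of two groups with probability 0.5. Without blocking
    # or stratification, group sizes may end up unequal (here, 37 vs 35).
    rng = random.Random(seed)
    return {pid: rng.choice(["intervention", "wait-and-see"])
            for pid in patient_ids}

allocation = simple_randomization(range(1, 73), seed=2007)
```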
**Interventions**
The patients in the intervention group were treated by a physical therapist once a week for a maximum period of 12 weeks. Five physical therapists were involved in the treatment of the patients. All participating physical therapists were experienced in treating patients with persistent shoulder pain and MTrPs. They were trained and skilled in the identification and treatment of MTrPs and had successfully completed a certification-training program in trigger point therapy.
The treatment started with inactivation of active, pain-producing MTrPs by manual compression. The physical therapist applied gentle, gradually increasing pressure on the MTrP until the finger encountered a definite increase in tissue resistance. At that point the patient would commonly feel a certain degree of discomfort or pain. The pressure was maintained until the therapist sensed a relief of tension under the palpating finger or the patient experienced a considerable decline in pain. The therapist could repeat this procedure several times until pressure on the MTrP provoked only little discomfort without pain. This technique was combined with other manual techniques, such as deep stroking (pressure directed along the length of the taut band) or strumming (pressure applied perpendicularly across the muscle fibers). Both techniques manually stretch the trigger point area and the taut band. These manual techniques could be preceded or followed by "intermittent cold application by using ice-cubes followed by stretching the muscle" according to Simons et al.\textsuperscript{27}. The effectiveness of the muscle stretching exercises was enhanced by including short isometric contractions and relaxation (hold-relax). Patients were instructed to perform simple, gentle static stretching and relaxation exercises at home several times a day. When appropriate, the relaxation exercises were augmented with a portable myofeedback device (Myotrac I, Thought Technology, Quebec, Canada). Furthermore, patients were instructed to apply heat, such as a hot shower or hot packs, for muscle relaxation and pain relief at least twice a day. All patients received ergonomic advice and instructions to assume and maintain a "good" posture\textsuperscript{58, 59}. The content and aim of each session varied based on the specific findings of the initial evaluations and the patient's response to previous treatment sessions. All individual treatments, however, remained within the limits of the treatment protocol\textsuperscript{54}.
\textit{Figure 2: Manual compression on the MTrP in the infraspinatus muscle of the left shoulder (a), stroking with ice (in a polystyrene cup) in unidirectional parallel strokes combined with gentle muscle stretching applied for the infraspinatus muscle of the left shoulder in side lying (b), and cross-body muscle stretching exercise for posterior shoulder muscles, including the infraspinatus muscle (c).}
Stop rule
Treatments were discontinued when patients were completely free of symptoms or when patient and physical therapist agreed that further treatment would not benefit the patient. Participation in the study nevertheless continued; subjects were free to withdraw from the study at any time without consequences for their treatment.
Treatment integrity
To enhance treatment integrity, all participating physical therapists were allowed to discuss the content of each therapy session with the lead investigator (CB) without disclosing names or any other information that could jeopardize the blinding of the lead investigator. After 6 and 12 weeks, the lead investigator interviewed the patients of the intervention group to verify that the treatments received had been consistent with the study protocol.
Wait and See
Patients in the control group remained on a waiting list and were informed that they would receive the same physical therapy treatment after 3 months. They were instructed not to change the self-management of their shoulder pain. If they were using prescribed or over-the-counter medication, they were encouraged to continue it at their own discretion and not to change it because of their participation in the study. In addition, they were requested to report any other intervention or relevant change during the study period. Every six weeks, they visited the physical therapy practice and provided the same research data as the patients in the intervention group. After 12 weeks, they started the physical therapy treatment.
Outcome measures
Primary Outcome Measure
The DASH is an internationally widely used multidimensional 30-item self-report measure focusing on physical function, pain, emotional, and social parameters.\textsuperscript{60} The score ranges from 0 to 100 whereby a higher score indicates greater disability. The Minimal Clinically Important Difference (MCID) is approximately a 10-point difference between pre- and post-treatment.\textsuperscript{57, 61, 62} The DASH is a reliable and valid questionnaire and considered to be one of the best questionnaires for patients with shoulder symptoms.\textsuperscript{61, 63}
Secondary Outcome Measures
The Visual Analogue Scale for Pain (VAS-P) is a self-report scale consisting of a horizontal line, 100 mm in length, anchored by the words "no pain" at the left side (score 0) and "worst pain imaginable" at the right side (score 100).\textsuperscript{64–66} The VAS-P was used to measure pain at the current moment (VAS-P1), the average pain during the last seven days (VAS-P2), and the most severe pain during the last seven days (VAS-P3). A 14 mm change is considered to be an MCID in patients with rotator cuff disease.\textsuperscript{57-70}
To assess Global Perceived Effect (GPE), the subjects rated the effect of treatment on an ordinal 8-point scale with categories ranging from "1 = much worse" to "8 = completely recovered". GPE was then dichotomized into the number of improved (from slightly improved to completely recovered) versus not improved (from unchanged to much worse) patients. GPE has good test-retest reliability and correlates well with changes in pain and disability.\textsuperscript{71}
Passive range of motion (PROM) of the shoulder was measured by a handheld digital inclinometer (The Saunders group Inc, Chaska, MN) and recorded in degrees. Forward elevation of the shoulder, external rotation, and cross-body adduction were measured in the supine position, internal rotation in prone position, and glenohumeral abduction in the upright position. The range of motion of the non-painful shoulder was used as a reference. A detailed description of the goniometric measurement of the PROM is published in the design of the study\textsuperscript{54}.
**Table 1. List of muscles examined for MTrPs**
| upper trapezius muscle | middle trapezius muscle | lower trapezius muscle |
|------------------------|-------------------------|------------------------|
| infraspinatus muscle | supraspinatus muscle | subscapularis muscle |
| teres minor muscle | teres major muscle | anterior deltoid muscle|
| middle deltoid muscle | posterior deltoid muscle| pectoralis major muscle|
| pectoralis minor muscle| biceps brachii muscle | triceps brachii muscle |
| scalene muscles | subclavius muscle | |
The total number of shoulder muscles with MTrPs was counted using the same methods as at baseline and then compared to the baseline measurements. With the patient in the supine or prone position, depending on the muscle being examined, seventeen muscles were palpated bilaterally for the presence of a taut band, spot tenderness, a nodule, a local twitch response, and local and referred pain (\textit{table 1}). When the patient recognized the pain elicited by compression of the tender spot, the MTrP was considered active. When the pain was only local and not familiar, the MTrP was considered latent\textsuperscript{27, 38, 54}.
At 6 and 12 weeks, participants were asked to complete a self-assessment form, which included questions regarding whether they had changed their self-management, or had received any medical treatment that could have influenced their shoulder pain.
### Table 2. Characteristics of participants at baseline.
| | Intervention (n=34) | Control (n=31) |
|--------------------------------|---------------------|----------------|
| **Age (years; mean; SD; 95% CI)** | 42.8 (11.7; 38.7-46.9) | 45.0 (13.2; 40.2-49.9) |
| **Female (n; %)** | 21 (62) | 23 (74) |
| **Level of education* (n; %)** | | |
| Low | 2 (6) | 2 (7) |
| Intermediate | 13 (38) | 17 (55) |
| High | 19 (56) | 12 (38) |
| **Right-handed (n; %)** | 33 (97) | 29 (94) |
| **Pain dominant side (n; %)** | 24 (70) | 19 (61) |
| **Duration of complaints (n; %)** | | |
| 6-9 months | 10 (29) | 5 (16) |
| 9-12 months | 4 (12) | 8 (26) |
| 1-2 years | 8 (23) | 6 (19) |
| 2-5 years | 6 (18) | 5 (16) |
| >5 years | 6 (18) | 7 (23) |
| **Episode (n; %)** | | |
| first | 13 (38) | 11 (35) |
| second | 8 (24) | 8 (26) |
| third or more | 13 (38) | 12 (39) |
| **DASH-DLV (mean; SD; 95% CI)†** | 30.3 (16.6; 24.5–36.1) | 30.8 (11.9; 26.5–35.2) |
| **VAS-P1 (mean; SD; 95% CI)§** | 31.9 (24.3; 21.9–41.9) | 35.2 (25.7; 25.7–43.0) |
| **VAS-P2 (mean; SD; 95% CI)§** | 41.3 (19.7; 33.2–49.4) | 43.4 (17.0; 37.2–50.0) |
| **VAS-P3 (mean; SD; 95% CI)§** | 54.9 (21.9; 45.8–63.9) | 59.5 (18.2; 52.8–66.2) |
| **BDI-II-DLV (mean; SD; 95% CI)¶** | 6.3 (4.0; 4.9–7.8) | 5.8 (8.2; 2.8–8.8) |
| **RAND-36-DLV (mean; SD; 95% CI)** | | |
| social functioning | 78.7 (20.3; 71.6 – 85.8) | 81.1 (18.5; 74.3 – 87.8) |
| limitations due to physical problems | 47.7 (43.0; 32.5 – 63.0) | 49.5 (37.2; 35.8 – 63.1) |
| vitality | 59.3 (17.0; 53.3 – 65.1) | 62.6 (17.9; 56.0 – 69.1) |
| bodily pain | 51.6 (16.0; 45.7 – 57.6) | 52.7 (12.3; 48.2 – 57.2) |
| general health perception | 52.9 (8.5; 50.0 – 55.9) | 56.6 (7.0; 54.1 – 59.2) |
| **PROM (mean; SD; 95% CI)‡** | 28.4 (34.8; 16.1 – 40.7) | 39.0 (34.9; 26.2 - 51.8) |
| **Muscles with MTrPs (mean; 95% CI)‡‡** | | |
| active MTrPs | 7.4 (3.6; 6.1 – 8.7) | 6.1 (3.5; 4.8 – 7.4 ) |
| latent MTrPs | 4.2 (2.7; 3.2 – 5.1) | 5.9 (3.0; 4.8 – 7.0) |
* High education (university and higher vocational school), intermediate education (middle vocational school and higher or middle general secondary school), and low education (lower vocational school, lower general secondary school, primary school, or no education).
† Higher Dash-DLV (Disabilities of the Arm, Shoulder and Hand outcome measure- Dutch Language Version) scores indicate more disability with a maximum of 100 (range 0 to 100).
§ Higher scores on the VAS-P (Visual Analogue Scales for Pain) indicate more pain with a maximum of 100 (range 0 to 100). VAS-P1: current pain score, VAS-P2: average pain score of the past seven days, and VAS-P3: most severe pain score of the past seven days.
¶ Higher scores on the BDI-II-DLV (Beck Depression Inventory-second edition- Dutch Language Version) indicate more symptoms of depression (range 0-63).
** Only those of the nine subscales of the RAND-36 that differed significantly from a normal Dutch population are presented here [89]. Higher scores indicate a better quality of life (range 0-100).
‡ A positive number (degrees) of the mean score of PROM (Passive Range Of Motion) indicates impairment of the PROM of the affected shoulder.
‡‡ Number of muscles with active and latent MTrPs, respectively (range 0-17 muscles)
Statistical Analysis
Analyses were performed according to the intention-to-treat principle. Both groups were compared for baseline characteristics using t-tests, and chi-square tests for binomial variables. For the DASH, VAS-P, and the number of muscles with MTrPs, the t-test for normally distributed data was used to assess the difference between the two groups at week 6 and week 12. We considered a mean difference of more than 10 points on the DASH as clinically important (MCID). Effect sizes (Cohen's d) were calculated to examine the average impact of the intervention.\(^{72}\) According to Cohen, \(d = 0.2\) indicates a small effect of negligible clinical importance, \(d = 0.5\) a medium effect of moderate clinical importance, and \(d = 0.8\) a large effect of crucial clinical importance.\(^{73}\) To compare patients who improved more than 10 points on the DASH with patients who improved less, we calculated relative risks (RR) and their 95% confidence intervals (95% CI). To examine the impact on individual patients in more detail, we dichotomized participants' measures of GPE into improved versus not improved. The proportions of patients who had clinically improved were compared between groups by calculating the RR and its 95% CI at 6 and 12 weeks. Pearson correlation coefficients were used to relate the number of muscles with active MTrPs to the DASH score.
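As an illustration of these two calculations: the paper does not specify which variant of Cohen's d was used, but a pooled-SD version together with the standard log-method confidence interval for the RR (both conventional choices, sketched below with our own function names) approximately reproduces the reported values.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    # Cohen's d using the pooled standard deviation of the two groups.
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def relative_risk(events1, n1, events2, n2, z=1.96):
    # Relative risk with a 95% CI computed on the log scale.
    rr = (events1 / n1) / (events2 / n2)
    se = math.sqrt(1/events1 - 1/n1 + 1/events2 - 1/n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# DASH at 12 weeks: control 26.1 (SD 13.8, n=31) vs intervention 18.4 (SD 12.3, n=34)
d = cohens_d(26.1, 13.8, 31, 18.4, 12.3, 34)   # close to the reported 0.60
```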
In addition, the effect of the intervention was evaluated in a regression analysis. The DASH score at 12 weeks was the dependent variable; the group variable, the DASH score at baseline, the number of muscles with active MTrPs at intake, the number of muscles with latent MTrPs at intake, and PROM were included as covariates in this multiple linear regression model.
To evaluate the success of the blinding procedure, both observers were asked to identify the treatment allocation. A goodness-of-fit \(\chi^2\) test was used to determine whether the numbers of correctly and incorrectly identified cases were consistent with a guessing probability of 50%. For all comparisons, \(p < 0.05\) was considered statistically significant (two-tailed). If the 95% confidence interval (95% CI) of a difference does not include the value 0, the difference is statistically significant (at \(\alpha = 0.05\)). Systat 12, Sigmaplot 11, and Sigmastat 3.11 (Systat Inc, Richmond, California, USA) for Windows were used for the statistical analyses.
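This goodness-of-fit test against a 50% guessing rate reduces to a one-degree-of-freedom chi-square statistic. A minimal sketch (the exact number of assessments at each time point is not stated in the text, so the example call is purely illustrative):

```python
def chi_square_guessing(correct, total):
    # One-df goodness-of-fit chi-square against an expected 50/50 split
    # (no continuity correction).
    expected = total / 2
    incorrect = total - correct
    return ((correct - expected) ** 2 + (incorrect - expected) ** 2) / expected

# Pure guessing (exactly half correct) gives a statistic of 0:
chi_square_guessing(50, 100)
```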
Results
Between September 2007 and September 2009, 72 patients were randomly assigned to either the intervention group or the control group. See figure 3 for a schematic summary of patient participation and table 2 for the patients' characteristics at baseline. At baseline, both groups were comparable on all variables, with no statistically or clinically relevant differences except for the number of muscles with latent MTrPs and the level of education.
Figure 3: Flow of participants through the trial.
- Consecutive patients with shoulder pain screened for eligibility by a research assistant (n=211)
- Excluded (n=123): primary frozen shoulder (n=20), bilateral shoulder pain (n=26), post-traumatic (n=8), post-surgical (n=5), consultation (n=30), <18 years (n=3), >65 years (n=8), duration of complaints <6 months (n=8), reason unclear (n=15)
- Declined to participate (n=13)
- Eligible subjects (n=75)
- Excluded after physical examination because of signs and symptoms of primary frozen shoulder (n=2) or severe language problems (n=1)
- Randomized (n=72)
- Allocated to physical therapy treatment (n=37): withdrawals (n=0); developed primary frozen shoulder (n=2) or cervical radiculopathy (n=1); intention-to-treat analyses for primary and secondary outcomes (n=34)
- Allocated to waiting list (n=35): withdrawals (n=3); developed primary frozen shoulder (n=1); intention-to-treat analyses for primary and secondary outcomes (n=31)
**Primary outcome**
**DASH**
The difference between the intervention group and the control group was not significant after 6 weeks (4.1; 95% CI -2.8 to 11.1), but was significant after 12 weeks (7.7; 95% CI 1.2 to 14.2). The mean DASH scores at intake and after 6 and 12 weeks are shown in figure 4.
Seventeen subjects (50%) in the intervention group and seven (22%) in the control group improved more than 10 points (the MCID) on the DASH outcome measure (relative risk 2.3; 95% CI 1.1 to 4.7) (figure 5). The effect size (Cohen's d) was 0.60 (table 3).
The multiple linear regression analysis with the baseline score as a covariate demonstrated a significant between-group difference in the DASH score at 12 weeks of 7.447 points (95% CI: 2.14 to 12.75) in favor of the intervention group. Adjustment for the other covariates had no influence on this result.
**Secondary outcomes**
**VAS-P1, VAS-P2, and VAS-P3**
The intervention group showed on average significantly lower scores on all VAS-P scales compared to the control group after 12 weeks (VAS-P1: 13.8, 95% CI 2.6 to 25.0; VAS-P2: 10.2, 95% CI 0.7 to 19.7; VAS-P3: 13.8, 95% CI 0.8 to 28.4). The differences after 6 weeks were not significant, except for VAS-P3 (15.6; 95% CI: 2.3 to 28.8). The difference between baseline and 12 weeks in the intervention group reached the MCID on all three VAS-P scales, while the changes in the control group did not. The effect sizes on the three VAS-P scales varied from 0.5 to 0.7 (table 3).
**GPE**
After 6 weeks, improvement was reported by 49% (16/33) of the patients in the intervention group versus 17% (5/30) in the control group (relative risk 2.9; 95% CI: 1.2 to 7.0). After 12 weeks, 55% (18/33) of the patients in the intervention group reported improvement versus 14% (4/28) in the control group (relative risk 3.8; 95% CI: 1.46 to 10.0) (table 3).
**Number of muscles with trigger points**
The number of muscles with active MTrPs was significantly lower in the intervention group than in the control group after 12 weeks (mean difference 2.7; 95% CI: 1.2 to 4.2). The change in the number of muscles with latent MTrPs did not differ significantly from the control group (mean difference 0.4; 95% CI: -0.7 to 1.5) (table 3). The effect size (Cohen's d) after 12 weeks was 0.89 for active MTrPs, a large effect, and 0.13 for latent MTrPs.
Figure 4: The mean DASH scores (error bars present 95% confidence interval) at intake, after 6, and after 12 weeks for the intervention group (n=34) and control group (n=31).
Correlation between the number of muscles with active MTrPs and the DASH outcome at 12 weeks
The number of shoulder muscles with active MTrPs was positively correlated with the DASH outcome at 12 weeks (r = 0.49; regression coefficient = 2.13, p < 0.001; ANOVA F = 9.6, p < 0.001), when corrected for the number of muscles with active MTrPs at intake. This implies that the number of muscles with active MTrPs accounted for 24% of the variation in the DASH outcome. Two cases (both in the intervention group) were identified as significant outliers during the multiple linear regression analysis and were removed before further analysis.
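The step from r = 0.49 to 24% of the variation is simply r squared. For completeness, a from-scratch Pearson correlation sketch (illustrative only, not the authors' code):

```python
import math

def pearson_r(x, y):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Variance explained follows directly from r:
variance_explained = 0.49 ** 2   # 0.2401, i.e. about 24%
```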
PROM
The PROM did not differ significantly between the groups at 6 weeks (mean difference 8.8; t = 1.14; p > .05) or at 12 weeks (mean difference 8.2; t = 1.19; p > .05).
Table 3. Primary and secondary outcomes in intervention group and control group after 6 and 12 weeks
| outcome | intervention (n=34) | control (n=31) | mean difference (95% CI) | p-value | Effect Size (Cohen’s d) |
|--------------------------------|--------------------|----------------|--------------------------|---------|-------------------------|
| **DASH (mean; SD)**† | | | | | |
| baseline | 30.3 (16.6) | 30.8 (11.9) | 0.5 (-6.7 to 7.7) | NS | |
| after 6 weeks | 23.4 (12.6) | 27.5 (15.5) | 4.1 (-2.8 to 11.1) | NS | |
| after 12 weeks | 18.4 (12.3) | 26.1 (13.8) | 7.7 (1.2 to 14.2) | <.05 | 0.60 |
| **VAS-P1 (mean; SD)**§ | | | | | |
| baseline | 31.9 (24.3) | 35.2 (25.7) | 3.3 (-9.1 to 15.7) | NS | |
| after 6 weeks | 29.0 (18.4) | 37.8 (17.9) | 8.8 (-0.2 to 17.8) | NS | |
| after 12 weeks | 17.2 (19.5) | 31.0 (21.0) | 13.8 (2.6 to 25.0) | <.05 | 0.69 |
| **VAS-P2 (mean; SD)**§ | | | | | |
| baseline | 41.3 (19.7) | 43.4 (17.0) | 2.0 (-7.1 to 11.1) | NS | |
| after 6 weeks | 32.9 (19.3) | 40.0 (20.7) | 6.7 (-3.6 to 17.0) | NS | |
| after 12 weeks | 22.5 (16.4) | 33.2 (23.3) | 10.2 (0.7 to 19.7) | <.05 | 0.54 |
| **VAS-P3 (mean; SD)**§ | | | | | |
| baseline | 54.9 (21.9) | 59.5 (18.2) | 4.6 (-5.4 to 14.6) | NS | |
| after 6 weeks | 41.0 (25.1) | 56.6 (28.3) | 15.6 (2.3 to 28.8) | <.05 | |
| after 12 weeks | 34.0 (21.9) | 47.8 (27.3) | 13.8 (0.8 to 28.4) | <.05 | 0.57 |
| **GPE (proportion; %)*** | | | | | RR (95% CI) |
| improved after 6 weeks | 16/33 (49%) | 5/30 (17%) | | < .05 | 2.9 (1.2-7.0) |
| improved after 12 weeks | 18/33 (55%) | 4/28 (14%) | | < .05 | 3.8 (1.5-10.0) |
| **Number of muscles with active trigger points (mean; SD)** | | | | | |
| baseline | 7.4 (3.7) | 6.1 (3.5) | 1.3 (-0.5 – 3.1) | NS | |
| after 6 weeks | 6.2 (3.5) | 6.8 (3.6) | 0.6 (-1.2 – 2.4) | NS | |
| after 12 weeks | 4.8 (3.0) | 7.5 (3.2) | 2.7 (1.2 – 4.2) | < .05 | 0.89 |
| **Number of muscles with latent trigger points (mean; SD)** | | | | | |
| baseline | 4.2 (2.7) | 5.9 (3.0) | 1.7 (0.3 – 3.1) | < .05 | |
| after 6 weeks | 3.8 (2.1) | 4.8 (2.8) | 1.0 (-0.2 – 2.3) | NS | |
| after 12 weeks | 4.7 (2.3) | 4.4 (2.3) | 0.4 (-0.7 – 1.5) | NS | 0.13 |
ES = effect size. NS = not significant.
† Higher Dash-DLV (Disabilities of the Arm, Shoulder and Hand outcome measure - Dutch Language Version) scores indicate more disability, with a maximum of 100 (range 0 to 100).
§ Higher scores on the VAS-P (Visual Analogue Scales for Pain) indicate more pain with a maximum of 100 (range from 0 to 100). VAS-P1 represents the current pain score, VAS-P2 represents the average pain score of the past seven days, and VAS-P3 the most severe pain score of the past seven days.
* GPE; Global Perceived Effect
Figure 5: Number of subjects that improved more than 10 points (minimal clinically important difference) on the DASH outcome measure after 12 weeks for the intervention ($n=34$) and control group ($n=31$).
Evaluation of Blinding
After 6 weeks, the observers identified the treatment allocation correctly in 62% ($\chi^2 = 4.70$, $p = 0.03$) of the patients, and after 12 weeks in 71% ($\chi^2 = 13.86$, $p < 0.001$), after completing the physical examination and MTrP count.
Co-interventions
We checked whether the participants in either group had received interventions other than those described in the treatment protocol. During the first 6 weeks of the study, one subject in each group received an injection administered by a general practitioner. After 6 weeks, no co-interventions were reported in either group.
Discussion
Summary of main findings
This single-blinded randomized controlled trial evaluated the effectiveness of a 12-week comprehensive MTrP physical therapy treatment program in patients with chronic non-traumatic unilateral shoulder pain compared to a wait-and-see strategy. After 12 weeks, the intervention group showed statistically significant and clinically relevant differences compared to the control group on the primary and secondary outcome measures. The effect sizes were considered medium and were consistent with the hypothesized effect size. The number of shoulder muscles with active MTrPs was significantly lower in the intervention group than in the control group, supporting the assumed biomedical mechanism underlying MTrP therapy.
Explaining the results/ comparing with other studies
To our knowledge, this is the first study of the effectiveness of a comprehensive MTrP therapy program in patients with shoulder pain. The difference in DASH scores between the groups was smaller than the MCID. However, the mean baseline DASH score was lower than expected based on results from other studies.\textsuperscript{57, 74, 75} With a lower baseline value, large differences between baseline and the 12-week follow-up are less likely. Nevertheless, the effect size was 0.6, a medium effect that is considered clinically relevant. The number of subjects improving more than 10 points also indicates a clinically relevant result. Furthermore, many more subjects in the intervention group than in the control group reported improvement (GPE).
Previous trials have investigated various types of physical, manual, or exercise therapy. The treatments in these studies included interventions with similarities to components of the treatment program of this study, but they were not specifically aimed at treating MTrPs. For example, exercise therapy or manual therapy interventions included soft tissue massage and muscle stretching exercises, which are generally performed for anterior and posterior muscle tightness.\textsuperscript{76, 79} These interventions may have had an unintentional effect on MTrPs in shoulder muscles, as MTrPs appear to be prevalent in patients with shoulder pain, which may have contributed to the results of those studies.\textsuperscript{25, 26} However, as these studies did not focus on MTrPs, there is no direct evidence that the interventions had any effect on MTrPs.
Recently, Hains published the first report on the effectiveness of ischemic compression therapy of MTrPs in shoulder muscles in chronic shoulder conditions compared to sham compression. The intervention group received 15 sessions (comprising 15-second compressions of MTrPs in up to four muscles, including the supraspinatus, infraspinatus, deltoid, and biceps muscles) three times a week, without any other therapeutic measures. The control group received sham therapy (15-second compression of MTrPs in shoulder muscles considered irrelevant for the shoulder pain). The intervention group
showed a significant improvement on the Shoulder Pain and Dysfunction Index (SPADI) when compared to the sham group \(^{53}\). The authors did not report any change in the number of MTrPs or in the number of shoulder muscles with MTrPs. The current study showed that a decrease in the number of shoulder muscles with active MTrPs correlated with better outcome. While the number of muscles with active MTrPs decreased in the intervention group, the number of muscles with latent MTrPs tended to increase slightly. One explanation may be that the state of MTrPs is more or less dynamic, with changes from active to latent and vice versa occurring depending on the degree of irritability \(^{80}\).
One of the clinical features of active MTrPs is spontaneous pain, at rest or during activity, which is felt at a distance from the MTrP site and, by definition, has to be recognized by the patient as the familiar pain. According to Mense, "the current concept of the referral of muscle pain is based on the observation that the efficacy of synaptic connections of central dorsal horn neurons can change, particularly under the influence of a nociceptive input. The important point is that ineffective synaptic connections can become effective under pathological circumstances. This means that a neuron can acquire new receptive fields in the presence of nociceptive input" \(^{81}\). This process is called central sensitization. Through the expansion of receptive fields, non-nociceptive input originating from a location other than the originally painful location may be perceived as painful. In patients with shoulder pain, MTrPs in, for example, the infraspinatus, supraspinatus, teres minor, or subscapularis muscle may cause local and referred pain, which can be felt deep in the shoulder. In other words, MTrPs may mimic pain interpreted as arising from subacromial bursitis, tendinitis or tendinopathy, which may explain why treatment of inflammation is often so ineffective.
Furthermore, MTrPs can cause particular motor effects as well. MTrPs can lead to muscle weakness of the involved painful muscle and to reorganization of motor activation patterns. Restricted range of motion may be observed secondary to a contracted taut band \(^{80, 82, 83}\). Changed motor activation patterns are often reported in the shoulder pain literature \(^{84}\). Since MTrPs can alter such patterns, MTrP inactivation should be considered prior to any form of muscle strengthening exercise. When muscle weakness persists, it may alter a patient's shoulder kinematics and eventually cause humeral head migration, rotator cuff degeneration and the formation of bony spurs in the subacromial space. Early recognition and treatment of MTrPs may prevent patients from developing chronic shoulder pain and early degeneration.
As we did not examine the effects of single components of the intervention, we cannot conclude whether a single component or a combination of components contributed more to the treatment effect than others. Others have examined the effect of ischemic compression alone or of a combination of ischemic compression and stretching, and concluded that both interventions had positive effects on recovery \(^{45}\). The management of MTrPs is not restricted to MTrP inactivation, but also requires correction of perpetuating factors, which are clinically apparent but not yet necessarily scientifically established \(^{27, 42, 85}\). Further research is needed to clarify the importance of perpetuating factors, such as mechanical factors, in shoulder patients \(^{86}\).
Limitations of the study
The power analysis indicated that 104 subjects were needed for this clinical trial. Partly because of an overestimation of the number of eligible subjects, and partly because of patients' unwillingness to enter the trial, the study was completed with a smaller sample size. The study took two years to complete, one year more than originally planned. Nevertheless, the results were significant and clinically relevant, even though the study population was smaller than the initially calculated sample size. A greater sample size is unlikely to have altered the direction of the results.
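For context, an a-priori power analysis of this kind is usually based on the normal approximation for a two-group comparison, n per group ≈ 2((z₁₋α/₂ + z₁₋β)/d)². The sketch below uses the conventional α = 0.05 and 80% power; the effect size plugged in is illustrative, since the study's actual planning assumptions are not restated here:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample comparison,
    via the normal approximation n = 2 * ((z_(1-a/2) + z_power) / d)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_beta = z.inv_cdf(power)            # desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A medium effect (d = 0.5) requires about 63 subjects per group
n = n_per_group(0.5)
```

Smaller anticipated effects drive the requirement up quickly, which is why a recruitment shortfall, as occurred in this trial, mainly threatens the detection of small effects.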
In the intervention group, participants had a higher level of education than in the control group. Awareness of educational level is important, as it may affect patients' motivation and compliance\(^{87,88}\), but adding level of education as a covariate in a multiple linear regression analysis did not alter the results.
Evaluation of the blinding of the independent observers, who performed the physical examination and the counting of MTrPs, revealed that after 12 weeks the observers were able to identify to which group a patient belonged. It is likely that the changes in physical findings and the decrease in the number of MTrPs improved the observers' accuracy of group identification. Since the loss of blinding only affected the observers performing the MTrP identification, this finding had no effect on the reliability of the other outcome scores.
The subjects in the control group were instructed to maintain their self-management of their shoulder pain and to report any deviation from it. While this may pose a potential threat to the comparability of the groups, no significant changes were reported. As all patients suffered from chronic shoulder pain and had likely explored various self-management strategies before entering the study, we did not anticipate that they would change their self-management strategy during the study period.
Although the observers did not intend to give advice during the physical examination, they may have unintentionally instructed subjects to avoid provocative activities. If subjects in the control group followed such instructions and acted more carefully in daily life, their symptoms may have decreased while they were still suffering from MTrPs. This may explain the improvement in the control group.
Implications for research and clinical practice
This study showed that patients with chronic unilateral non-traumatic shoulder pain had a better outcome after treatment for MTrPs than those without treatment, and that this outcome was correlated with a decrease in the number of muscles with active MTrPs.
Treatment of MTrPs can be considered a promising approach for patients with shoulder pain. Future clinical trials should be directed at establishing the effectiveness of MTrP treatment in patients with varying underlying shoulder pathology and in a wider context than a specialized physical therapy practice. It would be worthwhile to identify predictors of successful MTrP treatment, and to investigate whether MTrP treatment is more successful when combined with supportive interventions, such as
exercises and manual therapy. Observational follow-up studies are needed to investigate the long-term effects of treatment of MTrPs in patients with chronic shoulder pain.
Given the high number of patients with shoulder pain, this will require a substantial effort and financial investment. Studies on the cost-effectiveness of treatment of patients with MTrPs in the shoulder muscles are therefore needed.
**Conclusions**
Participants in the intervention group had a better outcome on all outcome measures after the 12-week comprehensive MTrP treatment program than those on the waiting list. Clinically relevant improvements were achieved in 55% of the patients with shoulder pain, and the number of muscles with active MTrPs decreased significantly.
**Competing interests**
The author(s) declare that they have no competing interests.
**Authors' contributions**
All authors have read, edited and approved the final manuscript. CB is the lead investigator; he developed the design of the study, carried out data acquisition, analysis and interpretation, and prepared the manuscript as primary author. MW and RO supervised the study and helped to prepare the manuscript. JD, BS, and AdG provided intellectual contributions to the manuscript.
**Acknowledgements**
The authors would like to thank Maria Onstenk and Monique Bodewes for their contributions as observers, Ineke Staal and Larissa Bylsma for their logistical assistance, Jo Franssen, Ben de Valk, Betty Beersma, Margriet Eleveld and Elise Ploos van Amstel who participated as physical therapists, and Reinier Akkermans for statistical assistance.
References
1. Bongers P. The cost of shoulder pain at work. BMJ 2001, 322(7278) 64-65
2. van der Heijden GJ. Shoulder disorders: a state-of-the-art review. Baillieres Best Pract Res Clin Rheumatol 1999, 13(2) 287-309
3. Bot S, van der Waal JM, Terwee CB, van der Windt DA, Schellevis FG, Bouter LM, Dekker J. Incidence and prevalence of complaints of the neck and upper extremity in general practice. Ann Rheum Dis 2005, 64(1) 118-123
4. Feleus A, Bierma-Zeinstra SM, Miedema HS, Bernsen RM, Verhaar JA, Koes BW. Incidence of non-traumatic complaints of arm, neck and shoulder in general practice. Man Ther 2008, 13(5) 426-433
5. Mitchell C. Shoulder pain: diagnosis and management in primary care. BMJ 2005, 331(7525) 1124-1128
6. van der Windt DA, Koes BW, Boeke AJ, Deville W, De Jong BA, Bouter LM. Shoulder disorders in general practice: prognostic indicators of outcome. Br J Gen Pract 1996, 46(410) 519-523
7. Bigliani LU, Levine WN. Subacromial impingement syndrome. J Bone Joint Surg Am 1997, 79(12) 1854-1868
8. Koester MC, George MS, Kuhn JE. Shoulder impingement syndrome. Am J Med 2005, 118(5) 452-455
9. Morrison DS, Frogameni AD, Woodworth P. Non-operative treatment of subacromial impingement syndrome. J Bone Joint Surg Am 1997, 79(5) 732-737
10. Neer CS 2nd. Anterior acromioplasty for the chronic impingement syndrome in the shoulder: a preliminary report. J Bone Joint Surg Am 1972, 54(1) 41-50
11. Park HB, Yokota A, Gill HS, El Rassi G, McFarland EG. Diagnostic accuracy of clinical tests for the different degrees of subacromial impingement syndrome. J Bone Joint Surg Am 2005, 87(7) 1446-1455
12. Michener LA, Walsworth MK, Doukas WC, Murphy KP. Reliability and diagnostic accuracy of 5 physical examination tests and combination of tests for subacromial impingement. Arch Phys Med Rehabil 2009, 90(19887215) 1898-1903
13. Neer CS 2nd. Impingement lesions. Clin Orthop Relat Res 1983(173) 70-77
14. Andersen NH, Sojbjerg JO, Johannsen HV, Sneppen O. Self-training versus physiotherapist-supervised rehabilitation of the shoulder in patients treated with arthroscopic subacromial decompression: a clinical randomized study. J Shoulder Elbow Surg 1999, 8(2) 99-101
15. Bergman GJ, Winters JC, Groener KH, Pool JJ, Meyboom-de Jong B, Postema K, van der Heijden GJ. Manipulative therapy in addition to usual medical care for patients with shoulder dysfunction and pain: a randomized, controlled trial. Ann Intern Med 2004, 141(6) 432-439
16. Blair B, Rokito AS, Cuomo F, Jarolem K, Zuckerman JD. Efficacy of injections of corticosteroids for subacromial impingement syndrome. J Bone Joint Surg Am 1996, 78(11) 1685-1689
17. Buchbinder R, Green S, Youd JM. Corticosteroid injections for shoulder pain. Cochrane Database Syst Rev 2003(1) CD004016
18. Desmeules F, Cote CH, Fremont P. Therapeutic exercise and orthopedic manual therapy for impingement syndrome: a systematic review. Clin J Sport Med 2003, 13(3) 176-182
19. Diercks RL, Ham SJ, Ros JM. [Results of anterior shoulder decompression surgery according to Neer for shoulder impingement syndrome, little effect on fitness for work]. Ned Tijdschr Geneeskd 1998, 142(22) 1266-1269
20. Ekeberg O, Bautz-Holter E, Tveita E, Juel N, Kvalheim S, Brox J. Subacromial ultrasound guided or systemic steroid injection for rotator cuff disease: randomised double blind study. BMJ 2009, 338 a3112
21. Camarinos J, Marinko L. Effectiveness of Manual Physical Therapy for Painful Shoulder Conditions: A Systematic Review. J Man Manip Ther 2009, 17(4) 206-215
22. Dorrestijn O, Stevens M, Winters JC, van der Meer K, Diercks RL. Conservative or surgical treatment for subacromial impingement syndrome? A systematic review. J Shoulder Elbow Surg 2009, 18(4) 652-660
23. Coghlan JA, Buchbinder R, Green S, Johnston RV, Bell SN. Surgery for rotator cuff disease. Cochrane Database Syst Rev 2008(1) CD005619
24. Michener LA, Walsworth MK, Burnet EN. Effectiveness of rehabilitation for patients with subacromial impingement syndrome: a systematic review. J Hand Ther 2004, 17(2) 152-164
25. Hidalgo-Lozano A, Fernandez-de-las-Penas C, Alonso-Blanco C, Ge HY, Arendt-Nielsen L, Arroyo-Morales M. Muscle trigger points and pressure pain hyperalgesia in the shoulder muscles in patients with unilateral shoulder impingement: a blinded, controlled study. Exp Brain Res 2010, 202(4) 915-925
26. Bron C, Dommerholt J, Stegenga B, Wensing M, Oostendorp R. High prevalence of myofascial trigger points in patients with shoulder pain. Submitted for publication
27. Simons DG, Travell, JG, Simons LS. Myofascial Pain and Dysfunction: The trigger point manual. Upper half of body, vol 1, second edn. Baltimore, MD: Lippincott, Williams and Wilkins, 1999
28. Gerwin RD, Dommerholt J, Shah JP. An expansion of Simons' integrated hypothesis of trigger point formation. Curr Pain Headache Rep 2004, 8(6) 468-475
29. Hong CZ, Simons DG. Pathophysiologic and electrophysiologic mechanisms of myofascial trigger points. Arch Phys Med Rehabil 1998, 79(7) 863-872
30. Mense S, Simons DG, Hoheisel U, Quenzer B. Lesions of rat skeletal muscle after local block of acetylcholinesterase and neuromuscular stimulation. J Appl Physiol 2003, 94(6) 2494-2501
31. Shah J, Danoff, Desai, Parikh, Nakamura, Phillips, Gerber. Biochemicals Associated With Pain and Inflammation are Elevated in Sites Near to and Remote From Active Myofascial Trigger Points. Arch Phys Med Rehabil 2008, 89(1) 16-23
32. Sikdar S, Shah J, Gebreab, Yen R, Gilliams E, Danoff JV, Gerber L. Novel Applications of Ultrasound Technology to Visualize and Characterize Myofascial Trigger Points and Surrounding Soft Tissue. Arch Phys Med Rehabil 2009, 90(11) 1829-1838
33. Chen, Basford, An. Ability of magnetic resonance elastography to assess taut bands. Clin Biomech 2008, 23(5) 623-629
34. Couppé C, Midtun A, Hilden J, Jorgensen U, Oxholm P, Fuglsang-Frederiksen A. Spontaneous needle electromyographic activity in myofascial trigger points in the infraspinatus muscle: A blinded assessment. Journal of Musculoskeletal Pain 2001, 9(3) 7-16
35. Fernandez-de-las-Penas C, Cuadrado ML, Arendt-Nielsen L, Simons DG, Pareja JA. Myofascial trigger points and sensitization: an updated pain model for tension-type headache. Cephalalgia 2007, 27(5) 383-393
36. Niddam, Chan, Lee, Yeh, Hsieh. Central representation of hyperalgesia from myofascial trigger point. Neuroimage 2008, 39(3) 1299-1306
37. Chang CW, Chen YR, Chang KF. Evidence of neuroaxonal degeneration in myofascial pain syndrome: a study of neuromuscular jitter by axonal microstimulation. Eur J Pain 2008, 12(8) 1026-1030
38. Bron C, Franssen J, Wensing M, Oostendorp RA. Interrater reliability of palpation of myofascial trigger points in three shoulder muscles. J Man Manip Ther 2007, 15(4) 203-215
39. Al-Shenqiti A, Oldham J. Test-retest reliability of myofascial trigger point detection in patients with rotator cuff tendinosis. Clin Rehabil 2005, 19(5) 482-487
40. Aguilera FJ, Martin DP, Masanet RA, Botella AC, Soler LB, Morell FB. Immediate effect of ultrasound and ischemic compression techniques for the treatment of trapezius latent myofascial trigger points in healthy subjects: a randomized controlled study. J Manipulative Physiol Ther 2009, 32(7) 515-520
41. Cummings M, Baldry P. Regional myofascial pain: diagnosis and management. Best Pract Res Clin Rheumatol 2007, 21(2) 367-387
42. Dommerholt J, Bron C, Franssen J. Myofascial Trigger Points: An Evidence-Informed Review. J Man Manip Ther 2006, 14(4) 203-221
43. Edwards J, Knowles N. Superficial dry needling and active stretching in the treatment of myofascial pain - a randomised controlled trial. Acupunct Med 2003, 21(3) 80-86
44. Gerwin RD. A review of myofascial pain and fibromyalgia - factors that promote their persistence. Acupunct Med 2005, 23(3) 121-134
43 Edwards J, Knowles N Superficial dry needling and active stretching in the treatment of myofascial pain--a randomised controlled trial *Acupunct Med* 2003, 21(3) 80-86
44 Gerwin RD A review of myofascial pain and fibromyalgia--factors that promote their persistence *Acupunct Med* 2005, 23(3) 121-134
45 Hanten WP, Olson SL, Butts NL, Nowicki AL Effectiveness of a home program of ischemic pressure followed by sustained stretch for treatment of myofascial trigger points *Phys Ther* 2000, 80(10) 997-1003
46 Hong CZ Treatment of myofascial pain syndrome *Curr Pain Headache Rep* 2006, 10(5) 345-349
47 Inger RS Shoulder impingement in tennis/racquetball players treated with subscapularis myofascial treatments *Arch Phys Med Rehabil* 2000, 81(5) 679-682
48 Bron C, Franssen JLM, de Valk BGM A post-traumatic Shoulder Complaint without apparent injury (Een post-traumatische schouderklacht zonder aanwijsbaar letsel) *Ned Tijdschrift v Fysiotherapie* 2001, 111(4) 97-102
49 Weed ND When shoulder pain isn't bursitis: The myofascial pain syndrome *Postgrad Med* 1983, 74(3) 97-98, 101-102, 104
50 Grosshandler SL, Stratas NE, Toomey TC, Gray WF Chronic neck and shoulder pain: Focusing on myofascial origins *Postgrad Med* 1985, 77(3) 149-151, 154-148
51 Daub CW A case report of a patient with upper extremity symptoms: differentiating radicular and referred pain *Chiropractic & osteopathy* 2007, 15(17640388) 10
52 Reynolds MD Myofascial trigger points in persistent posttraumatic shoulder pain *South Med J* 1984, 77(10) 1277-1280
53 Hains G, Descarreaux M, Hains F Chronic shoulder pain of myofascial origin: a randomized clinical trial using ischemic compression therapy *J Manipulative Physiol Ther* 2010, 33(5) 362-369
54 Bron C, Wensing M, Franssen JL, Oostendorp RA Treatment of myofascial trigger points in common shoulder disorders by physical therapy: a randomized controlled trial [ISRCTN75722066] *BMC Musculoskelet Disord* 2007, 8 107
55 Mallen C, Peat G, Thomas E, Dunn K, Croft P Prognostic factors for musculoskeletal pain in primary care: a systematic review The British Journal of General Practice 2007, 57(4354665228890130420related 9Me_91_TgbjwJ) 655
56 Kuijpers T, van der Windt DA, van der Heijden GJ, Bouter LM Systematic review of prognostic cohort studies on shoulder disorders Pain 2004, 109(3) 420-431
57 Gummesson C, Atroshi I, Ekdahl C The disabilities of the arm, shoulder and hand (DASH) outcome questionnaire longitudinal construct validity and measuring self-rated health change after surgery BMC Musculoskelet Disord 2003, 4 11
58 Szeto GP, Straker LM, O'Sullivan PB EMG median frequency changes in the neck-shoulder stabilizers of symptomatic office workers when challenged by different physical stressors J Electromyogr Kinesiol 2005, 15(6) 544-555
59 Peper E, Wilson VS, Gibney KH, Huber K, Harvey R, Shumay DM The integration of electromyography (SEMG) at the workstation assessment, treatment, and prevention of repetitive strain injury (RSI) Appl Psychophysiol Biofeedback 2003, 28(2) 167-182
60 Solway S, Beaton D, McConnell S, Bombardier C The DASH Outcome Measure User's Manual, second edn Toronto Institute for Work & Health, 2002
61 Roy J, Macdermid J, Woodhouse L Measuring shoulder function: a systematic review of four questionnaires Arthritis Rheum 2009, 61(19405008) 623-632
62 Schmitt JS, Di Fabio RP Reliable change and minimum important difference (MID) proportions facilitated group responsiveness comparisons using individual threshold criteria J Clin Epidemiol 2004, 57(10) 1008-1018
63 Beaton DE, Katz JN, Fossel AH, Wright JG, Tarasuk V, Bombardier C Measuring the whole or the parts? Validity, reliability, and responsiveness of the Disabilities of the Arm, Shoulder and Hand outcome measure in different regions of the upper extremity J Hand Ther 2001, 14(2) 128-146
64 McCormack HM, Horne DJ, Sheather S Clinical applications of visual analogue scales: a critical review Psychol Med 1988, 18(4) 1007-1019
65 Williamson A, Hoggart B Pain: a review of three commonly used pain rating scales J Clin Nurs 2005, 14(7) 798-804
66 Myles PS, Troedel S, Boquesti M, Reeves M The pain visual analog scale: is it linear or nonlinear? Anesth Analg 1999, 89(6) 1517-1520
67 Kelly AM The minimum clinically significant difference in visual analogue scale pain score does not differ with severity of pain Emergency medicine journal EMJ 2001, 18(11354213) 205-207
68 Loos MJ, Houterman S, Scheltinga MR, Roumen RM Evaluating postherniorrhaphy groin pain Visual Analogue or Verbal Rating Scale? Hernia 2008, 12(2) 147-151
69 O'Connor DA, Chipchase LS, Tomlinson J, Krishnan J Arthroscopic subacromial decompression: responsiveness of disease-specific and health-related quality of life outcome measures Arthroscopy 1999, 15(8) 836-840
70 Huskisson EC Measurement of pain Lancet 1974, 2(7889) 1127-1131
71 Kamper SJ, Ostelo RW, Knol DL, Maher CG, de Vet HC, Hancock MJ Global Perceived Effect scales provided reliable assessments of health transition in people with musculoskeletal disorders, but ratings are strongly influenced by current status J Clin Epidemiol 2010, 63(7) 760-766 e761
72 Zakzanis KK Statistics to tell the truth, the whole truth, and nothing but the truth formulae, illustrative numerical examples, and heuristic interpretation of effect size analyses for neuropsychological researchers Arch Clin Neuropsychol 2001, 16(7) 653-667
73 Cohen J Statistical power analysis for the behavioral sciences, second edn Hillsdale Lawrence Erlbaum Associates, 1988
74. Lombardi I Jr., Magni AG, Fleury AM, Da Silva AC, Natour J. Progressive resistance training in patients with shoulder impingement syndrome: a randomized controlled trial. Arthritis Rheum 2008, 59(5) 615-622
75. Kennedy CA, Manns M, Hogg-Johnson S, Haines T, Hurley L, McKenzie D, Beaton DE. Prognosis in soft tissue disorders of the shoulder: Predicting both change in disability and level of disability after treatment. Phys Ther 2006, 86(7) 1013-1032
76. Bennell K, Wee E, Coburn S, Green S, Harris A, Staples M, Forbes A, Buchbinder R. Efficacy of standardised manual therapy and home exercise programme for chronic rotator cuff disease: randomised placebo-controlled trial. BMJ 2010, 340 c2756
77. Crawshaw DP, Helliwell PS, Henson EM, Hay EM, Aldous SJ, Conaghan PG. Exercise therapy after corticosteroid injection for moderate to severe shoulder pain: large pragmatic randomised trial. BMJ 2010, 340 c3037
78. Tate AR, McClure PW, Young IA, Salvatori R, Michener LA. Comprehensive Impairment-based Exercise and Manual Therapy Intervention for Patients with Subacromial Impingement Syndrome: A Case Series. J Orthop Sports Phys Ther 2010, 40(8) 474-493
80. Gerwin R. Myofascial Pain Syndrome: Here we are, where must we go? J Musculoskeletal Pain 2010, 18(4) 18
81. Mense S. How Do Muscle Lesions such as Latent and Active Trigger Points Influence Central Nociceptive Neurons? J Musculoskeletal Pain 2010, 18(4) 5
82. Falla D, Farina D, Graven-Nielsen T. Experimental muscle pain results in reorganization of coordination among trapezius muscle subdivisions during repetitive shoulder flexion. Exp Brain Res 2007, 178(3) 385-393
83. Lucas KR, Rich PA, Polus BI. Muscle activation patterns in the scapular positioning muscles during loaded scapular plane elevation: The effects of Latent Myofascial Trigger Points. Clin Biomech (Bristol, Avon) 2010
84. Cools, Witvrouw EE, Declercq GA, Danneels LA. Scapular Muscle Recruitment Patterns: Trapezius Muscle Latency with and without Impingement Symptoms. The American Journal of Sports Medicine 2003
85. Gerwin RD. Myofascial pain syndromes in the upper extremity. J Hand Ther 1997, 10(2) 130-136
86. Treaster D, Marras W, Burr D, Sheedy J, Hart D. Myofascial trigger point development from visual and postural stressors during computer work. J Electromyogr Kinesiol 2006, 16(2) 115-124
87. DiMatteo MR. Evidence-based strategies to foster adherence and improve patient outcomes. JAAPA 2004, 17(11) 18-21
88. DiMatteo MR. Variations in patients' adherence to medical recommendations: a quantitative review of 50 years of research. Med Care 2004, 42(3) 200-209
89. van der Zee KI, Sanderman R. RAND-36 Manual. Groningen: Northern Centre for Health Care Research, 1993
7 GENERAL DISCUSSION
The central aim of this thesis was to describe the impact of myofascial trigger points (MTrPs) on pain and functioning in patients with chronic unilateral non-traumatic shoulder pain.
**Background**
Non-traumatic shoulder pain is often called non-specific shoulder pain, which means that a specific medical diagnosis is not provided. Specific shoulder pain diagnoses include, among others, glenohumeral labral lesions, fractures, tendon tears, dislocations, glenohumeral instability, osteoarthritis, infections and rheumatoid arthritis.
Shoulder pain is highly prevalent in the population. Together with low back pain and neck pain, it is among the most common musculoskeletal disorders. Furthermore, shoulder pain is persistent and recurrent, even with medical treatment. Most non-traumatic shoulder pain is explained as being the result of inflammation, degeneration or impingement of the soft tissues in the subacromial space, usually referred to as subacromial impingement syndrome (SIS). Medical treatment usually consists of anti-inflammatory drugs, muscle strengthening exercises for the rotator cuff muscles, or surgery for decompression of the impinged tendons and bursa.
However, there is no evidence that any of these therapeutic interventions are effective in patients with non-traumatic shoulder pain, with a few exceptions. These exceptions are subacromial corticosteroid injections for rotator cuff disease and intra-articular injection for adhesive capsulitis. They may be beneficial although the effect may be small and not well-maintained. Exercises combined with mobilization, including soft tissue massage, shoulder muscle strengthening and stretching exercises, are beneficial for patients with rotator cuff disorders (SIS).
Shoulder diagnostics are hampered by the fact that physical examination tests have only limited diagnostic validity due to low specificity, and abnormalities revealed by medical imaging are not necessarily pathognomonic.
Research studies on shoulder pain have rarely mentioned MTrPs, and the effect of MTrP therapy in patients with shoulder pain remains unclear. In this thesis, a comprehensive MTrP therapy program administered by specifically-trained physical therapists over three months was compared to a wait-and-see strategy in patients with chronic (duration more than six months) unilateral non-traumatic shoulder pain.
In Chapter 2 we presented an evidence-informed review on MTrPs with the current knowledge of aetiology, pathophysiology and clinical implications. This chapter offers a conceptual framework which helps to explain why MTrPs develop, why they are sustained,
and what mechanisms explain the therapeutic effects. MTrPs are identified by manual palpation; together with information from the patient's medical history and findings from the physical examination, the assessment of myofascial shoulder pain caused by MTrPs can be made. Gerwin and Sciotti have confirmed the reliability of manual MTrP palpation.\textsuperscript{11, 12} Simons et al. have recommended that 'clearly, a clinical or experimental research study of human MTrPs, to obtain the most meaningful results, should employ both experienced and trained examiners who have been tested for interrater reliability before the study is conducted'.\textsuperscript{13}
In Chapter 3, we presented the results of an interrater reliability study. This study compared the palpation findings of three different observers, who bilaterally examined six MTrP locations in three shoulder muscles (the infraspinatus, anterior deltoid and biceps brachii muscles) in 40 subjects. Thirty subjects had unilateral shoulder pain, and 10 subjects had no current shoulder pain. The raters were blinded to the pain status of the subjects. All subjects were examined for the characteristic features of MTrPs, including a hard nodule in a taut band, referred pain, the local twitch response and the jump sign. In this study, we found a high prevalence of latent MTrPs in the unaffected shoulder of the patients with unilateral shoulder pain as well as in the shoulders of pain-free subjects. A latent MTrP is described as "clinically quiescent with respect to spontaneous pain; it is painful only when palpated. A latent MTrP may have all the other clinical characteristics of an active MTrP and always has a taut band that increases muscle tension and restricts range of motion." Since all raters were blinded to the status of the subjects (shoulder pain or no shoulder pain), they were not able to distinguish active from latent MTrPs. In reliability studies, Cohen's kappa is the most commonly used statistical measure of agreement. The unexpectedly high number of MTrPs had an unintended influence on Cohen's kappa. Therefore, we also provided the prevalence index ($P_i$), which expresses the ratio between the presence and absence of certain features. In other words, when the $P_i$ is high, Cohen's kappa will be low even though the percentage of agreement is high. It has been reported that latent MTrPs are highly prevalent in the shoulder muscles of asymptomatic healthy subjects\textsuperscript{14}; in future studies, it is therefore recommended to check control subjects for latent MTrPs before they enter the study.
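The interplay between percentage agreement, Cohen's kappa and the prevalence index can be made concrete with a small numeric sketch. The 2x2 counts below are invented for illustration (they are not the study's data), and the prevalence index is computed with one common definition, |a - d|/n:

```python
def kappa_and_prevalence_index(a, b, c, d):
    """Cohen's kappa and prevalence index for a 2x2 agreement table:
    a = both raters positive, d = both raters negative,
    b and c = the two kinds of disagreement."""
    n = a + b + c + d
    p_observed = (a + d) / n
    # Chance agreement computed from the raters' marginal totals
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    prevalence_index = abs(a - d) / n
    return kappa, prevalence_index

# Invented example: 36 of 40 sites judged MTrP-positive by both raters,
# no site judged negative by both
k, pi = kappa_and_prevalence_index(36, 2, 2, 0)
```

With these counts, raw agreement is 90% and the prevalence index is 0.9, yet kappa comes out slightly negative. This is exactly the situation described above: when latent MTrPs are present almost everywhere, the percentage of agreement stays high while kappa collapses, which is why reporting the prevalence index alongside kappa is informative.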
In clinical practice, the patient provides the clinician with information about the sensitivity of the MTrP and the recognizable sensations, including pain and paresthesia, caused by compression on the MTrP. This extra information is likely to increase the reliability of MTrP palpation. Objective criteria, confirming the assessment of MTrPs by palpation, are provided by needle EMG, microdialysis, magnetic resonance elastography and ultrasonography, but these facilities are not available in primary care.\textsuperscript{15, 23}
In Chapter 4, the study protocol of the randomized clinical trial was presented. This study was conducted in one physical therapy practice. All participating physical therapists were specialized in the treatment of patients with musculoskeletal disorders of the neck, shoulder and arm, and were experienced in diagnosing and treating patients with MTrPs. Thus, the results of this trial may apply only to experienced physiotherapists. In this trial, we compared a comprehensive MTrP therapy protocol with a waiting list. The various elements of this MTrP
approach followed the guidelines of Simons et al.\textsuperscript{13} Owing to the design of the study, we can only draw conclusions about this MTrP therapy as a whole and not about its single elements. In general, treatment followed the principles of inactivation of MTrPs, restoration of muscle function, and correction of the factors that precipitated and perpetuated the MTrPs.\textsuperscript{24} A combination of treatment elements may be more effective than a single element.\textsuperscript{13, 25, 27}
For example, myofeedback or MTrP compression alone is probably not effective, but an effective combination employs MTrP compression, which decreases the sensitivity of the muscle and therefore makes it easier to relax, and myofeedback, which helps to relax and thereby decreases sustained muscle overloading. Applying heat as a single modality seems to be ineffective, but applying it after MTrP compression followed by muscle stretching exercises may improve blood circulation within the muscle. Normalizing blood circulation seems to be a key factor in the physiological response after MTrP therapy.\textsuperscript{19, 23, 28}
Since the prevalence of MTrPs in patients with shoulder pain was unclear, we collected data on the presence of MTrPs in our study sample at baseline. The results of this observational study are presented in Chapter 5. We found active MTrPs in all patients, indicating that MTrPs are responsible for at least part of the shoulder pain in all patients. The median number of active MTrPs was six, but this varied greatly per subject. If multiple MTrPs contribute to shoulder pain, then elimination of only one MTrP will probably not improve the patient's pain and functioning. Conversely, some or probably all active MTrPs have to be treated adequately before the patient improves. Most active MTrPs were found in the infraspinatus and the upper trapezius muscle. According to Simons et al., the infraspinatus muscle refers pain to the frontal and lateral side of the shoulder, and the pain is felt as 'deep shoulder pain'. The upper trapezius muscle refers pain to the top of the shoulder and eventually to the neck, and even to the temporal region of the head.\textsuperscript{13} When (ipsilateral) headache accompanies shoulder pain, it may therefore also have a myofascial origin.\textsuperscript{29, 33} We examined the association between the number of active and latent MTrPs and the DASH and VAS-P scores. There was a significant positive, but only moderate, correlation between active MTrPs and the DASH or VAS-P1 scores. There may be several reasons for this:
1. We counted the number of muscles with active MTrPs rather than the total number of active MTrPs. The total number of active MTrPs probably correlates more strongly with the outcome scores than the number of muscles containing them.
2. We did not determine the sensitivity of the MTrPs, which can be measured by the Pain Pressure Threshold (PPT). According to Hidalgo, 'active MTrPs in some muscles are associated to greater pain intensity and lower PPTs when compared to those with latent TrPs in the same muscles', and 'significant negative correlations between pain intensity and PPT levels are found'. Therefore, we assume that a higher sensitivity of MTrPs could result in higher outcome scores.
3. The mean DASH score was 30.8 (95% CI 27.5 to 34.1; minimum 0 and maximum 100). Such relatively low baseline scores are common in other research on chronic shoulder pain. In studies examining the effects of conventional therapy in patients with chronic shoulder pain, the DASH at baseline ranged from 31.3 to 35.0.\textsuperscript{34-36} In studies examining the effects of surgical interventions, the DASH score was slightly higher, ranging from 42.0 to 43.0.\textsuperscript{34,37,38} There may be several reasons for the generally low baseline scores in patients with chronic musculoskeletal pain, including shoulder pain. First, patients with chronic shoulder pain may have learned to cope with their pain and their limitations in functioning and therefore score lower on the DASH. Second, because patients with higher DASH scores have more pain and more disability, they may be less willing to participate in a study given the chance of being allocated to the control group, which would mean waiting another three months before therapy starts. It is therefore conceivable that selection bias has led to a relatively low DASH score. For the same reasons, this also holds true for VAS-P1 (mean 30.0; 95% CI 27.0 to 39.9), VAS-P2 (mean 42.1; 95% CI 37.4 to 50.0) and VAS-P3 (mean 56.6; 95% CI 51.2 to 61.9).
In Chapter 6, we presented the results of the RCT. We had calculated that we needed 104 patients (52 in each arm) to detect a clinically relevant improvement. Partly because of an overestimation of the number of eligible subjects and partly because of the unwillingness of patients to enter the trial, the study was completed with a smaller sample size. The study took two years to complete, one year longer than originally planned; the reason for the reduced patient flow remained unclear. Despite the smaller sample size, we found significant and clinically relevant differences between the intervention group and the control group. Because the study stopped after three months, it is unclear whether the improvement continues beyond this point. A follow-up study is needed to assess this, but this was not part of this thesis. As previously mentioned, the functional status of the included patients at baseline was relatively good. This makes it difficult to achieve a large absolute improvement: even a 50% improvement is still only 15 points on the DASH. Nevertheless, small improvements may have important clinical implications, even when they are close to or smaller than the Minimal Clinically Important Difference (MCID). In this study, all patients were treated once a week for 12 weeks, with each session lasting 30 minutes. It is unclear whether this is the optimal frequency and session duration. We used manual techniques to inactivate MTrPs, for two major reasons. First, not all participating physical therapists had attended courses in techniques beyond manual MTrP therapy. Second, manual techniques can easily be administered by general physical therapists without extensive additional education. In recent years, new techniques have become available to physical therapists in several countries, including the Netherlands.
In particular, trigger point dry needling (TPDN) is one of the innovations in MTrP therapy.\textsuperscript{39} TPDN might be more effective than manual techniques, but it requires additional training, since physical therapists are not allowed to perform invasive treatments without it.\textsuperscript{40,42}
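The sample size of 104 patients (52 per arm) calculated for the RCT is consistent with a standard two-sample comparison of means. A sketch using the normal approximation; the standardized effect size of ~0.55, α = 0.05 and power = 0.80 are assumptions for illustration, since the exact inputs are not reported here:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per arm for comparing two means."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# An assumed standardized effect size of ~0.55 reproduces 52 per arm:
print(n_per_arm(0.55))  # 52
```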
Recommendations
Clinical practice
We recommend using the clinical diagnostic term ‘shoulder pain caused by MTrPs’ when the presence of MTrPs is confirmed by physical examination, including palpation and provocation by firm digital pressure, instead of the generally accepted term ‘non-specific shoulder pain’. We further recommend examining patients who have already been diagnosed with subdeltoid or subacromial bursitis, rotator cuff disorder, tendonitis, tendinopathy or subacromial impingement for the presence of MTrPs, since MTrPs may accompany other diseases or disorders of the shoulder.
While a substantial number of patients benefitted from MTrP therapy, it is important to note that some patients did not. It remains unclear whether continued treatment or innovative types of MTrP treatment would be beneficial in this group; future observational studies can help to resolve these uncertainties.
For the moment, we recommend evaluating patient progress carefully and stopping MTrP treatment when no improvement occurs. Based on the results of the RCT, we are not able to define a clear stopping rule, but based on clinical experience it is conceivable that patients with chronic shoulder pain need more treatment sessions over a longer period than the three-month period of the RCT.
Other authors have suggested that it may take more than 11 weeks for patients with chronic shoulder pain to fully enjoy the benefits of an active exercise program.\(^{43}\)
Implementation
Nationwide implementation of MTrP treatment is a new challenge, which was not considered in our clinical studies. Nevertheless, these studies give some clues about possible barriers to implementation. Most implementation experts recommend identifying the most relevant barriers to change and tailoring implementation interventions to those barriers. However, even carefully developed and well-applied implementation programs have mixed and moderate effects. The uptake of biomedical knowledge in clinical practice is a slow and haphazard process.\(^{44,45}\) Regarding MTrP therapy, we suggest that the following factors may be associated with its implementation:
1. Knowledge and skills of physiotherapists regarding MTrP assessment and management
2. Motivation to apply this assessment and management among physiotherapists
3. Knowledge and motivation among physicians to refer patients for MTrP treatment
4. Preferences, concerns and expectations of patients with shoulder pain
5. Strength of evidence related to MTrP treatment
6. Availability of alternative, effective treatments for shoulder pain
7. Reimbursement for the treatment (insurance and/or co-payment)
8. Annual number of patients needed for optimally effective and efficient delivery
Organizational, legal or economic factors may be underlying barriers to change in clinical practice. The knowledge and skills of physical therapists regarding MTrP treatment are obviously important, as in almost all situations of implementation. Physicians and therapists can reliably identify MTrPs in shoulder muscles, provided that they are well trained and have enough clinical experience. Dutch physical therapists should be able to provide manual therapy interventions, including various massage techniques and muscle stretching exercises, as these are part of their undergraduate education. In addition, continuing education has to be provided and attended to keep up with new developments. In many countries, post-graduate courses in manual trigger point therapy and trigger point dry needling are offered. Trigger point dry needling (TPDN) is an invasive procedure in which a solid filament needle (acupuncture needle) is inserted into the skin (superficial dry needling) and muscle (deep dry needling). As the name implies, TPDN is directed at MTrPs and its aim is to inactivate them. TPDN falls within the scope of physical therapy practice in many countries, including Canada, Spain, Ireland, South Africa, Australia, the Netherlands, Switzerland, and a growing number of states in the United States.
Generally, physical therapists need to be motivated to incorporate the assessment and management of MTrPs into their clinical practice, by means of research evidence on the effectiveness of the intervention, the satisfaction of their patients and the financial viability of providing the treatment. Because MTrPs are highly prevalent in patients with shoulder pain, health insurance companies may also be interested if MTrP therapy is shown to decrease the overall costs of shoulder pain interventions, for example by preventing unnecessary shoulder surgery. Cost-effectiveness studies can help to convince decision makers to reimburse MTrP treatment.
Physicians should consider MTrP therapy for patients with shoulder pain and refer them to physical therapists, either alone or in combination with other interventions, such as steroid injections or NSAIDs. In persistent shoulder pain, MTrP therapy can be considered before resorting to surgical interventions.
Since direct access to physical therapy was introduced in the Netherlands in 2006, the number of patients who contacted physical therapists without consulting a general practitioner gradually increased from 21% (2006) to 38% (2009) (from www.nivel.nl/lipz). Since 7% of the patients receiving physical therapy in 2009 had shoulder pain, undergraduate and post-graduate physical therapy and medical education should include training in MTrP assessment and management to ensure that these patients can be treated accordingly.
**Clinical guidelines**
The authors of clinical guidelines should include the assessment and management options for MTrPs when revising their guidelines, since MTrPs are highly prevalent, easily diagnosed and effectively treated with relatively simple interventions, including physical therapy. We also recommend introducing the concept of the MTrP and teaching the diagnostic and therapeutic tools at physical therapy and medical schools, and further implementing the assessment and treatment of MTrPs in the daily practice of physicians and physical therapists. MTrP therapy may be helpful in recovering from shoulder pain, especially in cases that seem resistant to other interventions such as steroid injections or exercise therapy.
**Future research**
We recommend that future clinical studies in patients with shoulder pain consider the assessment and management of MTrPs. The effectiveness of MTrP therapy should be compared with that of other interventions, and combinations of MTrP therapy with other interventions should also be explored. For example, a single steroid injection for short-term relief (up to six weeks) combined with MTrP therapy for long-term relief may benefit the patient and may help to decrease the recurrence rate. Such combination therapies may be more effective than single-intervention approaches and may eventually help to lower the costs for society. Other MTrP interventions, such as TPDN, may be even more effective than manual soft tissue massage or mobilization techniques and should therefore be investigated as well. The optimal dose, duration and intensity of interventions in physical therapy practice are still unknown and also have to be established. Future studies should include the PPT and count the number of active MTrPs, instead of the number of muscles with active MTrPs, and, in addition to subjective patient self-reported outcomes, should use more objective tests of function, for instance lifting a heavy weight or carrying a shopping bag.
In recent decades, several articles have contributed to explaining the aetiology and pathophysiology of myofascial pain and myofascial trigger points. However, there is still a need for more fundamental research in this field. For example, there is evidence for a central role for acetylcholine in the integrated trigger point hypothesis, although evidence collected by analyzing its concentrations in the vicinity of the neuromuscular junction by microdialysis is still lacking.
Furthermore, some studies have shown that experimental and clinical shoulder muscle pain can induce abnormal motor activation patterns, but only one study has shown an association between latent MTrPs and abnormal motor activation patterns. Studies using objective parameters, including electromyography, can help to explain the association between MTrPs and muscle function. In clinical studies using physical therapy modalities, it is not possible to blind the therapist and the patient, which means that a placebo effect cannot be excluded; the magnitude of the placebo effect in physical therapy therefore remains unclear. However, pragmatic controlled studies comparing several interventions can help us to find the most effective intervention.
Conclusions
From the studies in this thesis, we may conclude that MTrPs are highly prevalent in patients with shoulder pain and are at least partly responsible for the pain. Physical examination, including palpation according to the guidelines of Simons et al., is a reliable method to diagnose MTrPs, and its reliability may increase when patients guide the physician or therapist during the examination by indicating recognizable pain.\textsuperscript{13} Our study shows that comprehensive treatment of MTrPs was effective within 12 weeks in patients with chronic shoulder pain.
References
1. Coghlan JA, Buchbinder R, Green S, Johnston RV, Bell SN. Surgery for rotator cuff disease. Cochrane Database Syst Rev 2008;(1):CD005619.
2. Buchbinder R, Green S, Youd JM. Corticosteroid injections for shoulder pain. Cochrane Database Syst Rev 2009;CD004016.
3. Green S, Buchbinder R, Hetrick SE. Physiotherapy interventions for shoulder pain. Cochrane Database Syst Rev 2010;CD004258.
4. Karjalainen K, Malmivaara A, van Tulder M, Roine R, Jauhiainen M, Hurri H, Koes B. Multidisciplinary biopsychosocial rehabilitation for neck and shoulder pain among working age adults. Cochrane Database Syst Rev 2010;(3).
5. Conroy DE, Hayes KW. The effect of joint mobilization as a component of comprehensive treatment for primary shoulder impingement syndrome. J Orthop Sports Phys Ther 1998;28(1):3-14.
6. Lombardi I Jr, Magni AG, Fleury AM, Da Silva AC, Natour J. Progressive resistance training in patients with shoulder impingement syndrome: a randomized controlled trial. Arthritis Rheum 2008;59(5):615-622.
7. Bang MD, Deyle GD. Comparison of supervised exercise with and without manual physical therapy for patients with shoulder impingement syndrome. J Orthop Sports Phys Ther 2000;30(3):126-137.
8. Calis M, Akgun K, Birtane M, Karacan I, Calis H, Tuzun F. Diagnostic values of clinical diagnostic tests in subacromial impingement syndrome. Ann Rheum Dis 2000;59(1):44-47.
9. Park HB, Yokota A, Gill HS, El Rassi G, McFarland EG. Diagnostic accuracy of clinical tests for the different degrees of subacromial impingement syndrome. J Bone Joint Surg Am 2005;87(7):1446-1455.
10. McFarland EG, Garzon-Muvdi J, Jia X, Desai P, Petersen SA. Clinical and diagnostic tests for shoulder disorders: a critical review. Br J Sports Med 2010;44(5):328-332.
11. Gerwin RD, Shannon S, Hong CZ, Hubbard D, Gevirtz R. Interrater reliability in myofascial trigger point examination. Pain 1997;69(1-2):65-73.
12. Sciotti VM, Mittak VL, DiMarco L, Ford LM, Plezbert J, Santapadri E, Wigglesworth J, Ball K. Clinical precision of myofascial trigger point location in the trapezius muscle. Pain 2001;93(3):259-266.
13. Simons DG, Travell JG, Simons LS. Myofascial Pain and Dysfunction: The Trigger Point Manual. Upper half of body, vol. 1, 2nd edn. Baltimore, MD: Lippincott Williams and Wilkins, 1999.
14. Sola AE, Rodenberger ML, Gettys BB. Incidence of hypersensitive areas in posterior shoulder muscles; a survey of two hundred young adults. Am J Phys Med 1955;34(6):585-590.
15. Hubbard DR, Berkoff GM. Myofascial trigger points show spontaneous needle EMG activity. Spine 1993;18(13):1803-1807.
16. Hong CZ, Simons DG. Pathophysiologic and electrophysiologic mechanisms of myofascial trigger points. Arch Phys Med Rehabil 1998;79(7):863-872.
17. Hong CZ, Yu J. Spontaneous electrical activity of rabbit trigger spot after transection of spinal cord and peripheral nerve. J Musculoskeletal Pain 1998;6(4):45-58.
18. Shah JP, Phillips TM, Danoff JV, Gerber LH. An in vivo microanalytical technique for measuring the local biochemical milieu of human skeletal muscle. J Appl Physiol 2005;99(5):1977-1984.
19. Shah JP, Danoff JV, Desai MJ, Parikh S, Nakamura LY, Phillips TM, Gerber LH. Biochemicals associated with pain and inflammation are elevated in sites near to and remote from active myofascial trigger points. Arch Phys Med Rehabil 2008;89(1):16-23.
20. Shah JP, Gilliams EA. Uncovering the biochemical milieu of myofascial trigger points using in vivo microdialysis: an application of muscle pain concepts to myofascial pain syndrome. J Bodyw Mov Ther 2008;12(4):371-384.
21. Chen Q, Basford JR, An KN. Ability of magnetic resonance elastography to assess taut bands. Clin Biomech 2008;23(5):623-629.
22. Sikdar S, Shah JP, Gebreab T, Yen R, Gilliams E, Danoff JV, Gerber LH. Novel applications of ultrasound technology to visualize and characterize myofascial trigger points and surrounding soft tissue. Arch Phys Med Rehabil 2009;90(11):1829-1838.
23. Sikdar S, Ortiz R, Gebreab T, Shah JP. Understanding the vascular environment of myofascial trigger points using ultrasonic imaging and computational modeling. 32nd Annual International Conference of the IEEE EMBS, 2010.
24. Gerwin R. Myofascial pain syndrome: here we are, where must we go? J Musculoskeletal Pain 2010;18(4):18.
25. Gam AN, Warming S, Larsen LH, Jensen B, Hoydalsmo O, Allon I, Andersen B, Gotzsche NE, Petersen M, Mathiesen B. Treatment of myofascial trigger-points with ultrasound combined with massage and exercise: a randomised controlled trial. Pain 1998;77(1):73-79.
26. Hanten WP, Olson SL, Butts NL, Nowicki AL. Effectiveness of a home program of ischemic pressure followed by sustained stretch for treatment of myofascial trigger points. Phys Ther 2000;80(10):997-1003.
27. Gerwin RD. A review of myofascial pain and fibromyalgia: factors that promote their persistence. Acupunct Med 2005;23(3):121-134.
28. Bruckle W, Suckfull M, Fleckenstein W, Weiss C, Muller W. [Tissue pO2 measurement in taut back musculature (m. erector spinae)]. Z Rheumatol 1990;49(4):208-216.
29. Davidoff RA. Trigger points and myofascial pain: toward understanding how they affect headaches. Cephalalgia 1998;18(7):436-448.
30. Couppé C, Torelli P, Fuglsang-Frederiksen A, Andersen KV, Jensen R. Myofascial trigger points are very prevalent in patients with chronic tension-type headache: a double-blinded controlled study. Clin J Pain 2007;23(1):23-27.
31. Fernandez-de-Las-Penas C, Alonso-Blanco C, Cuadrado ML, Gerwin RD, Pareja JA. Myofascial trigger points and their relationship to headache clinical parameters in chronic tension-type headache. Headache 2006;46(8):1264-1272.
32. Fernandez-de-Las-Penas C, Cuadrado ML, Arendt-Nielsen L, Simons DG, Pareja JA. Myofascial trigger points and sensitization: an updated pain model for tension-type headache. Cephalalgia 2007;27(5):383-393.
33. Fernandez-de-Las-Penas C, Ge HY, Alonso-Blanco C, Gonzalez-Iglesias J, Arendt-Nielsen L. Referred pain areas of active myofascial trigger points in head, neck, and shoulder muscles in chronic tension type headache. J Bodyw Mov Ther 2010;14(4):391-396.
34. Atroshi I, Gummesson C, Andersson B, Dahlgren E, Johansson A. The disabilities of the arm, shoulder and hand (DASH) outcome questionnaire: reliability and validity of the Swedish version evaluated in 176 patients. Acta Orthop Scand 2000;71(6):613-618.
35. Camargo PR, Haik MN, Filho RB, Mattiello-Rosa SM, Salvini TF. Pain in workers with shoulder impingement syndrome: an assessment using the DASH and McGill Pain questionnaires. Rev Bras Fisioter 2007;11(2):161-167.
36. Tate AR, McClure PW, Young IA, Salvatori R, Michener LA. Comprehensive impairment-based exercise and manual therapy intervention for patients with subacromial impingement syndrome: a case series. J Orthop Sports Phys Ther 2010;40(8):474-493.
37. Gummesson C, Atroshi I, Ekdahl C. The disabilities of the arm, shoulder and hand (DASH) outcome questionnaire: longitudinal construct validity and measuring self-rated health change after surgery. BMC Musculoskelet Disord 2003;4:11.
38. Bengtsson M, Lunsgaard K, Hermodsson Y, Nordqvist A, Abu-Zidan FM. High patient satisfaction after arthroscopic subacromial decompression for shoulder impingement: a prospective study of 50 patients. Acta Orthop 2006;77(1):138-142.
39. Dommerholt J, Mayoral del Moral O, Gröbli C. Trigger point dry needling. J Manual Manip Ther 2006;14(4):E70-E87.
40. Ingber RS. Shoulder impingement in tennis/racquetball players treated with subscapularis myofascial treatments. Arch Phys Med Rehabil 2000;81(5):679-682.
41. Rickards LD. Therapeutic needling in osteopathic practice: an evidence-informed perspective. Int J Osteopath Med 2009;12(1):2-13.
42. Tough E, White A, Cummings T, Richards S, Campbell J. Acupuncture and dry needling in the management of myofascial trigger point pain: a systematic review and meta-analysis of randomised controlled trials. Eur J Pain 2009;13(1):3-10.
43. Bennell K, Wee E, Coburn S, Green S, Harris A, Staples M, Forbes A, Buchbinder R. Efficacy of standardised manual therapy and home exercise programme for chronic rotator cuff disease: randomised placebo controlled trial. BMJ 2010;340:c2756.
44. Wensing M, Wollersheim H, Grol R. Organizational interventions to implement improvements in patient care: a structured review of reviews. Implement Sci 2006;1:2.
45. Wensing M, Bosch M, Grol R. Developing and selecting interventions for translating knowledge to action. CMAJ 2010;182(2):E85-E88.
46. Gerwin RD, Dommerholt J, Shah JP. An expansion of Simons' integrated hypothesis of trigger point formation. Curr Pain Headache Rep 2004;8(6):468-475.
47. Kimura Y, Ge HY, Zhang Y, Kimura M, Sumikura H, Arendt-Nielsen L. Evaluation of sympathetic vasoconstrictor response following nociceptive stimulation of latent myofascial trigger points in humans. Acta Physiol (Oxf) 2009;196(4):411-417.
48. Mense S. How do muscle lesions such as latent and active trigger points influence central nociceptive neurons? J Musculoskeletal Pain 2010;18(4):5.
49. Xu YM, Ge HY, Arendt-Nielsen L. Sustained nociceptive mechanical stimulation of latent myofascial trigger point induces central sensitization in healthy subjects. J Pain 2010;11(12):1348-1355.
50. Lucas KR, Rich PA, Polus BI. Muscle activation patterns in the scapular positioning muscles during loaded scapular plane elevation: the effects of latent myofascial trigger points. Clin Biomech 2010;25(8):765-770.
8 SUMMARY
Shoulder pain is, after low back pain and neck pain, the most common musculoskeletal complaint in the Netherlands and other countries; the percentage of individuals with shoulder pain in the population is estimated at around 20% to 50% per year. Shoulder pain has an important influence on the daily functioning of the individual patient, and about half of all patients with shoulder pain seek medical help. Patients have difficulty recovering from shoulder pain, and it is often recurrent despite medical treatment. Shoulder pain affects not only the patient but also society as a whole, through direct and indirect costs and sick leave.
The terms "shoulder pain", "shoulder complaint", and "shoulder disorder" are often used interchangeably. In this thesis, we use the term "shoulder pain", as pain was the main complaint of the patients when consulting a physician or therapist. Shoulder pain caused by trauma is beyond the scope of this thesis.
The clinical picture with which the patient presents consists of pain at the frontal or lateral side of the shoulder, often radiating to the upper arm and sometimes even into the forearm and hand. The pain is often present at rest and is almost always provoked or aggravated by posture or (repeated) movements of the arm. The patient sleeps poorly because of the inability to lie on either shoulder. The pain often leads to limitations in daily life and problems with participation in work, sporting and leisure activities.
Non-traumatic shoulder pain is mostly explained by local pathological changes in the subacromial space\(^1\), including inflammation of the rotator cuff tendons or the subacromial bursa, or degenerative changes in the subacromial space, such as tendon degeneration. However, there is growing scientific evidence that local inflammation is not the (only) causal explanation for shoulder pain, and degenerative changes in rotator cuff tendons are seen as often in people without shoulder pain as in patients with shoulder complaints. It therefore remains unclear whether the abnormal findings of additional imaging techniques, including ultrasound and magnetic resonance imaging (MRI), can explain the existence or occurrence of shoulder pain.
The main etiological explanation for shoulder pain was described by Dr Charles Neer, who in 1972 argued that the so-called subacromial space in these patients is too small. Because of this insufficient space, impingement of the subacromial bursa and the rotator cuff tendons may occur. The constant and repetitive encroachment of these structures could lead to acute bursitis or, when persistent, to chronic bursitis and, finally, to degenerative rotator cuff tendon tears. This is called subacromial impingement syndrome (SIS), and to this day it has remained the main explanation for non-traumatic shoulder pain. Because physical examination and additional imaging of patients with shoulder pain are of limited diagnostic value, and because there is only limited scientific evidence for the effectiveness of the various interventions aimed at treating SIS, it is justified to ask whether there might be another explanation for non-traumatic shoulder pain.
To date, there has been little or no attention paid to the role of myofascial trigger points (MTrPs) in shoulder muscles in the scientific literature on the emergence or persistence of shoulder pain. MTrPs are local changes in skeletal muscles that may result in sensory, motor and autonomic symptoms. An MTrP is defined as a hyperirritable spot in skeletal muscle that is associated with a hypersensitive palpable nodule (muscle hardening) in a taut band. MTrPs are divided into active and latent trigger points. Active MTrPs cause spontaneous pain and latent MTrPs only cause pain and other sensations when directly stimulated by mechanical compression, muscle contraction, or muscle stretching. Both active and latent MTrPs can decrease the mobility of the shoulder and may cause muscle weakness.
The aim of the research described in this thesis was to gain more insight into the role of MTrPs in patients with shoulder pain.
Before examining the influence of MTrPs on the pain in patients with shoulder pain, Chapter 2 describes what is known about the etiology, pathophysiology and clinical implications of MTrPs for physiotherapy treatment.
In particular, Dr Janet Travell (1901-1997) and Dr David Simons (1922-2010) are generally credited with bringing MTrPs to the attention of medical and other healthcare providers.
The comprehensive “expanded integrated hypothesis” describes the complex interactions that can help explain the emergence and persistence of MTrPs. It is assumed that various mechanisms can cause MTrPs, such as sustained or frequently repeated muscle contractions, unusual eccentric or concentric muscle contractions leading to higher intramuscular pressure, and direct (muscle bruising) or indirect (sprain or strain) muscle trauma. In all of these cases, the repeated or sustained loading of the muscle exceeds the capacity of the tissue, which may ultimately lead to muscle overload. During muscle overload, biochemical changes in and around the muscle fibers occur, leading to an increased and sustained contracture (contraction due to high concentrations of Ca$^{2+}$ in the muscle fiber, without depolarization) and to the release of various sensitizing and other pain-related substances. In this condition, the sustained contraction occurs without motor neuron activity. The muscle overload also creates a lack of adenosine triphosphate (ATP), the energy-rich substance needed to release the myosin heads from actin and to pump the large amounts of Ca$^{2+}$ back into the sarcoplasmic reticulum, where they are stored.
The lack of ATP results in prolonged linking of the proteins actin and myosin, causing a shortened and thickened muscle fiber. It is assumed that the thickening of multiple muscle fibers obstructs blood flow through the smallest capillaries, resulting in ischemia and hypoxia of the muscle tissue. For the synthesis of ATP, large amounts of oxygen are needed, which can only be obtained through capillary blood flow. As the thickened muscle fibers prevent sufficient blood flow, a self-perpetuating situation develops.
Chapter 3 describes an inter-rater reliability study of the palpation of MTrPs in three shoulder muscles, i.e. the infraspinatus, anterior deltoid, and biceps brachii muscles. A total of 6 MTrP locations on both shoulders in 40 subjects were studied.
Thirty subjects had unilateral shoulder pain and ten subjects were symptom-free at the time of investigation. The observers did not know whether one of the shoulders was painful or not, and if so, which one it was. The muscles were palpated to determine the presence or absence of a noticeable hardening in a taut band, to determine whether or not firm compression during palpation could generate referred pain \(^2\) (RP), and to determine whether snapping palpation could elicit a local twitch response \(^3\) (LTR) or cause a "jump sign" \(^4\) (JS). The palpation findings were subjected to a pairwise comparison. Finally, based on the combination of these findings, the presence or absence of an MTrP was scored.
The most reliable characteristics of the MTrPs were RP and JS, followed by the localization of a local hardening (nodule) in a taut band and the LTR. The highest degree of reliability for the presence or absence of an MTrP was found in the infraspinatus muscle.
One of the three observers had 2 years of experience in MTrP therapy and the other two investigators had 16 and 21 years of experience in MTrP therapy, respectively.
No difference was found in the degree of agreement between the different pairs of observers.
Based on this study, it was concluded that palpation of MTrPs in shoulder muscles is reliable and, therefore, a potentially useful diagnostic tool in the diagnosis of myofascial pain in patients with non-traumatic shoulder pain. It also appears that 2 years of experience is sufficient.
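The thesis does not name the agreement statistic here, but pairwise inter-rater agreement on dichotomous palpation findings (present/absent) is commonly quantified with Cohen's kappa, which corrects the observed agreement for agreement expected by chance. A sketch with invented scores for illustration:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same dichotomous items
    (e.g. MTrP present = 1 / absent = 0)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal frequency of "present":
    p_a1 = sum(rater_a) / n
    p_b1 = sum(rater_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Hypothetical presence/absence judgements for one muscle in 10 shoulders:
a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
b = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]
print(round(cohens_kappa(a, b), 2))  # 0.6
```

A kappa of 1.0 means perfect agreement and 0.0 means agreement no better than chance.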
Chapter 4 describes the research protocol for a randomized controlled trial (RCT) of the effectiveness of MTrP therapy in patients with chronic, unilateral, and non-traumatic shoulder pain. This study took place between September 2007 and December 2009.
The treatment consisted of physical therapy (aimed at eliminating MTrPs in shoulder muscles) for three months, compared with expectant management.
The primary outcome measure was the Disabilities of the Arm, Shoulder and Hand questionnaire (DASH). The secondary outcome measures were the Visual Analogue Scale for current pain (VAS-P1), the average pain in the last week (VAS-P2), the worst pain in the last week (VAS-P3), the global perceived effect (GPE), and the number of muscles with active or latent MTrPs. Prior to the study, it was calculated that 104 patients (52 per group) would be needed to show a clinically relevant difference on the DASH.
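For context on the DASH values discussed throughout this thesis: the DASH disability score is derived from 30 five-point items and scaled to run from 0 (no disability) to 100 (most severe disability). A sketch of the questionnaire's published scoring rule, including the requirement that at least 27 of the 30 items be answered:

```python
def dash_score(responses):
    """DASH disability score from the 30 five-point items
    (1 = no difficulty ... 5 = unable); `None` marks a missing item.
    Returns a value on the 0-100 scale, or None if more than 3 items
    are missing (the questionnaire's validity threshold)."""
    answered = [r for r in responses if r is not None]
    if len(answered) < 27:
        return None  # too many missing items to score
    return (sum(answered) / len(answered) - 1) * 25

# All 30 items rated "moderate difficulty" (3) gives a mid-scale score:
print(dash_score([3] * 30))  # 50.0
```

On this scale, the 15-point change mentioned in the discussion corresponds to the average item response shifting by 0.6 of a response category.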
Chapter 5 describes the prevalence of MTrPs in 17 shoulder muscles in patients with unilateral shoulder pain. Mainly for logistical reasons, this study was completed with a smaller sample size than was originally calculated. All patients \((n = 72)\) who were included in the RCT were examined prior to randomization for the presence of active or latent MTrPs \(^5\). The number of muscles with MTrPs was counted.
Muscles containing active or latent MTrPs were found in all 72 subjects. The median number of muscles with active MTrPs was 6 (ranging from 2 to 16). The median number
of muscles with latent MTrPs was 4 (ranging from 0 to 11). Active MTrPs were most prevalent in the infraspinatus, upper trapezius, and middle deltoid muscles. Latent MTrPs were most prevalent in the teres major, anterior deltoid, and upper trapezius muscles. The number of muscles with active MTrPs correlated only moderately with the DASH score (Spearman's $\rho = 0.3$) and explained only about 10% of the variation in the DASH outcome measure. Other potentially relevant factors include the degree of sensitivity of the individual MTrPs, alone or in combination with the number of MTrPs per muscle; in addition, there may have been other relevant factors that were not included in this study.
Based on these prevalence data, examination for the presence of MTrPs in patients with shoulder pain is recommended.
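The link between Spearman's \( \rho \) and "variance explained" quoted above follows from squaring the coefficient: with \( \rho = 0.3 \), \( \rho^2 = 0.09 \), i.e. roughly the 10% reported. As a sketch (not the study's analysis code), Spearman's \( \rho \) is simply the Pearson correlation computed on ranks; the data below are synthetic and purely illustrative.

```python
def ranks(xs):
    """1-based ranks, with tied values receiving their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Synthetic example: muscles with active MTrPs vs. DASH score (illustrative).
active = [2, 4, 5, 6, 9, 12, 16]
dash = [20, 35, 25, 40, 30, 55, 45]
rho = spearman_rho(active, dash)
print(round(rho, 2), round(rho ** 2, 2))  # -> 0.79 0.62
```

Because ranks discard the spacing between values, \( \rho \) captures only monotone association; a low \( \rho^2 \), as in the study, means most of the outcome's variation must come from other factors.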
Chapter 6 describes the results of physical therapy in patients with chronic, unilateral, non-traumatic shoulder pain. In the period from September 2007 to December 2009, 65 patients were included in this study. Patients in the intervention group were treated once a week for a 3-month period by one of five experienced physical therapists from the same physical therapy practice, which specializes in the management of patients with musculoskeletal disorders of the neck, shoulder, and arm. Patients in the control group remained on the waiting list; they were instructed not to change their self-management of their shoulder pain and to report any changes that did occur. At intake, relevant patient characteristics were recorded and several questionnaires were completed; the passive range of shoulder motion was measured and the number of muscles with MTrPs was counted. The treatment consisted of inactivating the MTrPs by sustained compression of the MTrP, followed by stretching of the muscle, including the "taut band", and a combination of muscle stretching exercises and a cold application (a variant of the spray-and-stretch method originally described by Dr Janet Travell).
Subsequently, the patients were instructed to perform muscle stretching and relaxation exercises several times a day. When appropriate, these relaxation exercises were augmented by the use of a (portable) myofeedback device. Finally, all patients received ergonomic advice and instructions to assume and maintain "good posture".
In this study, we chose an intervention strategy aimed at treating MTrPs that best reflects daily physical therapy practice. The idea behind this was that the individual components might have little or no (lasting) effect separately, whereas their combination may be effective because the various components reinforce each other. The disadvantage of this approach is that the influence of the individual components on the final outcome remains unknown.
Compared to the control group, after 12 weeks the intervention group scored significantly better on the primary and secondary outcome measures (DASH, VAS-P1, VAS-P2, and VAS-P3). The differences were all clinically relevant.
After 12 weeks, 55% of the patients in the intervention group reported improvement versus 14% in the control group. The extent to which the patients improved was also significantly greater in the intervention group than in the control group. The average number of muscles with active MTrPs decreased significantly in the intervention group, whereas this number increased in the control group. The number of muscles with latent MTrPs did not change significantly in the intervention group and decreased significantly in the control group. The decrease in the number of muscles with active MTrPs correlated moderately with the improvement in the DASH score (Pearson's $r = 0.49$). All of the abovementioned effects were achieved after 12 weeks.
Although after 6 weeks a slight trend was seen in the improvement of the outcomes in favor of the intervention group, the difference between the two groups after 6 weeks was not significant. Because the study ended after 3 months, no conclusions can be drawn concerning the effects of the treatment beyond this period, but it is conceivable that a longer-term treatment may lead to a better result.
Chapter 7 presents the general discussion and the main conclusions of this thesis.
The main conclusions of this thesis were:
- MTrPs provide a promising new explanation and treatment target for shoulder pain, which is well grounded in pathophysiological knowledge.
- MTrPs in the shoulder muscles can be reliably determined by palpation and are an important addition to the physical examination.
- MTrPs are highly prevalent in shoulder muscles in patients with chronic, unilateral, non-traumatic shoulder pain.
- Most active MTrPs were found in the infraspinatus, the upper trapezius, and the middle deltoid muscles. Most latent MTrPs were found in the teres major, anterior deltoid, and the upper trapezius muscles.
- The multimodal treatment of MTrPs in shoulder muscles in patients with chronic, unilateral, non-traumatic shoulder pain is effective.
- The multimodal treatment of MTrPs in shoulder muscles in patients with chronic, unilateral, non-traumatic shoulder pain takes 12 weeks to complete at a frequency of one treatment per week.
Footnotes
1. The subacromial space lies underneath the acromion, the coraco-acromial ligament, and the coracoid process and above the humeral head, the upper margin of the glenoid fossa and the superior labrum.
2. Referred pain (RP) is not felt at the site of a tissue lesion but is felt at some distance from it, often entirely remote from its source. It is often described as radiating pain.
3. A local twitch response (LTR) is a transient contraction of a group of tense muscle fibers (taut band) that traverse an MTrP. The LTR can be elicited by snapping palpation of the taut band or by needling the MTrP.
4. The jump sign is a general pain response of the patient, who winces, may cry out, and may withdraw in response to pressure applied to an MTrP.
5. Active MTrPs cause spontaneous pain at rest or during (repeated) movement. The pain that arises from firm digital palpation is recognized as the patient's familiar pain. Latent MTrPs have the same characteristics as active MTrPs, but they are clinically silent.
SAMENVATTING
After low back pain and neck pain, shoulder pain is the most common complaint of the musculoskeletal system. In the Netherlands, as in various other countries, the percentage of persons in the population with shoulder pain is estimated at some 20 to 50% per year. Shoulder pain has an important influence on the daily functioning of the individual patient. About half of all patients with shoulder pain seek medical help. Furthermore, patients appear to recover from shoulder pain with difficulty, and the complaint easily recurs despite medical treatment. Shoulder pain affects not only the individual patient but also society as a whole, through absenteeism and direct and indirect costs.
The terms 'shoulder pain', 'shoulder complaints', and 'shoulder disorder' are often used interchangeably. In this thesis we use the term shoulder pain as much as possible, because pain is one of the most important complaints with which the patient presents to the doctor or physical therapist. Shoulder pain due to trauma falls outside the scope of this thesis. The clinical picture with which the patient presents consists of pain at the front or side of the shoulder, often radiating to halfway down the upper arm and sometimes even into the forearm and hand. The pain is often present at rest and is almost always provoked or aggravated by (repeated) movements or postures of the arm. The patient often sleeps poorly, being able to lie on neither the affected nor the unaffected shoulder. The pain regularly leads to limitations in daily life and to participation problems in work, sports, and hobbies.
The explanation for non-traumatic shoulder pain is sought in presumed local pathological changes in the subacromial space\(^1\), consisting of an inflammation of the tendons of the rotator cuff or the subacromial bursa, or degenerative changes in the subacromial space, such as degenerative tears of the rotator cuff. There is increasing scientific evidence that these local inflammations do not provide a causal explanation for shoulder pain. Degenerative changes of the rotator cuff are seen just as often in persons without shoulder complaints as in patients with shoulder complaints. It is therefore unclear whether abnormal findings on additional imaging, including ultrasound and magnetic resonance imaging (MRI), explain the existence or development of shoulder pain.
The most important etiological explanation for this shoulder pain was described by Dr Charles Neer, who in 1972 posited that the so-called subacromial space in these patients was too small. This narrowed space causes impingement of the bursa and the tendons of the rotator cuff. The continuous, repetitive impingement of these structures could lead to acute or, if it persists, chronic bursitis and to degenerative tears in the tendons of the rotator cuff. This is called the subacromial impingement syndrome (SIS) and is used to this day as the most important explanation for non-traumatic shoulder pain. However, given that both the physical examination of patients with shoulder pain and the additional imaging have little diagnostic validity, and that there is little convincing evidence for the effectiveness of the various interventions aimed at treating the consequences of SIS, it is justified to ask whether there is another possible explanation for the shoulder pain.
To date, the scientific literature on the development or persistence of shoulder pain has paid little or no attention to the role of myofascial trigger points (MTrPs) in shoulder muscles. MTrPs are local changes in skeletal muscles that can produce sensory, motor, and autonomic phenomena. An MTrP is defined as a hyperirritable spot in skeletal muscle that coincides with a hypersensitive palpable muscle hardening ('nodule') in a taut band. MTrPs are classified into active and latent trigger points. Active MTrPs cause spontaneous pain, whereas latent MTrPs cause pain and other sensations only upon direct stimulation by mechanical pressure or by contraction or stretching of the muscle. Both active and latent MTrPs can result in reduced mobility of the shoulder and loss of strength of the shoulder muscles. The aim of the research described in this thesis is to gain more insight into the role of MTrPs in patients with shoulder pain.
Before addressing the influence of MTrPs on the complaints of patients with shoulder pain, Chapter 2 describes what is known about the etiology and pathophysiology of MTrPs and the clinical implications for physical therapy treatment. It was notably Dr Janet Travell (1901-1997) and Dr David Simons (1922-2010) who brought MTrPs to the attention of medical and allied health professionals.
The expanded integrated hypothesis describes the complex interaction that may help explain the development and persistence of MTrPs. It assumes that various mechanisms can cause MTrPs, such as prolonged or frequently repeated contractions, unusual eccentric or concentric contractions generating high intramuscular pressure, and direct (muscle contusion) or indirect (overstretching injury) muscle trauma. In all cases, these are situations in which the load on the muscle as an anatomical structure briefly, repeatedly, or persistently exceeds its load-bearing capacity. During these situations, biochemical changes arise in and around the muscle fibers, which lead on the one hand to an increased and sustained contracture (a contraction caused by high concentrations of Ca$^{2+}$ in the muscle fiber, without depolarizations) and on the other hand to the release of numerous substances related to nocisensors and pain. In this condition, the sustained contraction is not driven by activity of the motor nerve fiber. As a result of this muscle overload, a shortage of adenosine triphosphate (ATP) arises. This energy-rich substance is needed to release the large amount of Ca$^{2+}$ bound to myosin and actin and to take it back up into the sarcoplasmic reticulum. Because of this shortage, the myosin and actin proteins remain coupled, so that the muscle cell shortens and thickens. This thickening, it is assumed, obstructs the blood supply through the smallest tissue capillaries, resulting in ischemia and hypoxia of the muscle. Large amounts of oxygen, supplied via the capillaries, are needed to form ATP. The thickened muscle cells impede this supply, and a self-perpetuating situation has arisen.
Chapter 3 describes the study of the interrater reliability of palpation for the presence of MTrPs in three shoulder muscles, namely the M. infraspinatus, the M. deltoideus pars anterior, and the M. biceps brachii. In 40 subjects, a total of 6 MTrP locations were examined on both shoulders. Of these subjects, 30 had unilateral shoulder complaints and 10 had no shoulder complaints at the time of the examination. Each subject was examined by three different observers. The observers did not know whether a shoulder was painful and, if so, which one. Scores were recorded for the presence or absence of a palpable hardening in a taut band, the elicitation of referred pain (RP), and the ability to elicit a local twitch response (LTR) or to cause a 'jump sign'. The data thus obtained were compared pairwise between the three observers. Based on the combination of findings, the presence or absence of an MTrP was scored. The most reliable characteristics of MTrPs were the elicitation of RP and a 'jump sign', followed by the localization of a local hardening (nodule) in a taut band and the LTR. The highest degree of reliability for the presence or absence of an MTrP was found in the M. infraspinatus. One of the three observers had two years of experience in MTrP therapy and the other two had 21 and 16 years of experience in MTrP therapy, respectively. No difference was found in the degree of agreement between the different pairs of observers. Based on this study, it was concluded that palpation of MTrPs in these shoulder muscles was reliable and a good addition to the physical examination of patients with non-traumatic shoulder pain. Furthermore, two years of experience appeared sufficient to perform this palpation as reliably as someone with longer experience.
Chapter 4 describes the research protocol for a randomized controlled trial (RCT) of the effectiveness of MTrP therapy in patients with chronic, unilateral, non-traumatic shoulder pain. This study took place between September 2007 and December 2009. The treatment consisted of physical therapy (aimed at eliminating MTrPs in shoulder muscles) for three months, compared with expectant management. The primary outcome measure was the score on the Disabilities of the Arm, Shoulder and Hand questionnaire (DASH). The secondary outcome measures were the scores on the Visual Analogue Scale for pain (VAS-P) for current pain (VAS-P1), the average pain in the last week (VAS-P2), and the worst pain in the last week (VAS-P3), the global perceived effect (GPE), and the number of muscles with active or latent MTrPs. Prior to the study, it was calculated that 104 patients (52 per group) would be needed to demonstrate a clinically relevant difference on the DASH.
Chapter 5 describes the prevalence of MTrPs in 17 shoulder muscles in patients with unilateral shoulder pain. Mainly because of logistical problems, it was not possible to include the desired number of patients.
All patients (n = 72) included in the RCT were examined, prior to randomization, for the presence of active and/or latent MTrPs\(^5\). The number of muscles with MTrPs was counted.
All examined patients had multiple active and latent MTrPs. The median number of muscles with active MTrPs was 6 (ranging from 2 to 16) and with latent MTrPs 4 (ranging from 0 to 11). The muscles with the most active MTrPs were the M. infraspinatus, the M. trapezius (pars descendens), and the M. deltoideus (pars medius). The muscles with the most latent MTrPs were the M. teres major, the M. deltoideus (pars anterior), and the M. trapezius (pars descendens). There was a weak correlation between the number of muscles with active MTrPs and the DASH score (Spearman's \( \rho = 0.3 \)); only about 10% of the DASH score is explained by the number of muscles with active MTrPs. Other factors are the degree of sensitivity of the individual MTrPs, alone or in combination with the number of MTrPs per muscle. In addition, there may be other relevant factors that were not included in this study. Based on these prevalence data, examining the patient with shoulder pain for the presence of MTrPs is recommended.
Chapter 6 describes the results of the physical therapy treatment of patients with long-standing, unilateral, non-traumatic shoulder pain. In the period from September 2007 to December 2009, 65 patients were included. The patients in the intervention group were treated once a week for a maximum of 3 months by 5 experienced physical therapists from the same specialized physical therapy practice. The patients in the control group remained on the waiting list and were asked not to change their self-management of or attitude towards their shoulder pain, and to report any changes during the measurement sessions. At intake, the patient characteristics considered relevant were recorded, various questionnaires were completed, passive ranges of shoulder motion were measured, and the number of muscles with MTrPs was counted. The treatment consisted of inactivating MTrPs by sustained pressure on the MTrP, followed by stretching of the muscle including the 'taut band', and a combination of muscle stretching exercises with ice application (a variant of the 'spray and stretch' method described by Janet Travell). Subsequently, the patients were instructed to perform muscle stretching and relaxation exercises several times a day. If desired, learning to relax could be supported with a (portable) myofeedback device. Finally, all patients received advice on their posture as well as ergonomic advice. In this study, we chose to give a treatment aimed at MTrPs that corresponds to physical therapy treatment in daily practice. The idea behind this is that the individual components separately have little or no (lasting) effect, but that the various components reinforce each other. The disadvantage of such an approach is that the influence of the individual components on the final result remains unknown.
Compared to the control group, after 12 weeks the intervention group scored significantly better on the primary and secondary outcome measures (DASH, VAS-P1, VAS-P2, and VAS-P3). The differences were all considered clinically relevant. After 12 weeks, 55% of the patients in the intervention group reported improvement versus 14% in the control group. The extent to which the patients improved was also greater in the intervention group than in the control group. The average number of muscles with active MTrPs decreased in the intervention group, whereas this number increased in the control group. The number of muscles with latent MTrPs did not change significantly in the intervention group and decreased significantly in the control group. A positive correlation was found between the decrease in the number of muscles with active MTrPs and the improvement in the DASH score \((r = 0.49)\). All of the abovementioned effects were achieved after 12 weeks. Although after 6 weeks a slight trend toward improvement in favor of the intervention group was seen, the difference between the two groups after 6 weeks was not significant. Because the study ended after 3 months, no statements can be made about the effect of a treatment lasting longer than three months, but it is conceivable that a longer treatment might lead to a better result.
Chapter 7 presents the discussion and the main conclusions of this thesis.
The main conclusions of this thesis are:
- MTrPs provide a promising explanation and new treatment options for shoulder pain, based on pathophysiological insights.
- MTrPs in shoulder muscles can be determined sufficiently reliably by palpation and are an important addition to the physical examination.
- MTrPs are highly prevalent in shoulder muscles in patients with chronic, unilateral, non-traumatic shoulder pain.
- Most active MTrPs were found in the M. infraspinatus, the M. trapezius (pars descendens), and the M. deltoideus (pars medius). Most latent MTrPs were found in the M. teres major, the M. deltoideus (pars anterior), and the M. trapezius (pars descendens).
- The treatment of MTrPs in shoulder muscles in patients with chronic, unilateral, non-traumatic shoulder pain is effective.
- The effective treatment of MTrPs in shoulder muscles in patients with chronic, unilateral, non-traumatic shoulder pain takes 12 weeks at a frequency of one treatment per week.
Footnotes
1 The subacromial space is bounded above by the acromion, the coraco-acromial ligament, and the coracoid process, and below by the upper rim of the glenoid cavity, the glenoid labrum, and the humeral head.
2 Referred pain is pain that is not felt at the site of the tissue damage but at some distance from it. This is often described as radiating pain or irradiation.
3 A local twitch response (LTR) is a brief contraction of a group of muscle fibers in the taut band. This LTR results from a brief transverse (snapping) manipulation of the taut band or from needling of the MTrP with an injection, EMG, or acupuncture needle.
4 The jump sign is a general response of the patient, in the form of grimacing, groaning, or a withdrawal reaction, to palpation of an MTrP.
5 Active MTrPs cause spontaneous pain at rest or during (repeated) movement. The pain that arises from palpation is recognized by the patient as the 'familiar' pain. Latent MTrPs have all the characteristics of an MTrP and are also tender to pressure, but palpation does not produce recognizable familiar pain.
DANKWOORD / ACKNOWLEDGEMENTS
Writing a thesis feels like running a marathon: the further you get, the harder it becomes. All the encouragement shouted from the sidelines does not make the running easier, but it ensures that giving up is no longer an option.
This thesis would not have come about without the help and encouragement of a great many people: family, friends, acquaintances, and colleagues.
I especially want to mention all the patients who have placed their trust in me over the past years and from whom I have learned an enormous amount. A large number of patients and volunteers participated in the various studies. To them, a special word of thanks.
Furthermore, I would like to mention in particular:
Prof. dr. R.A.B. Oostendorp, my supervisor. Dear Rob, thanks to you I was given the opportunity to begin this PhD trajectory in November 2002. From the start you were confident that I would be able to bring it to a good end, and I appreciate your faith in me. You taught me to exchange my role as an enthusiastic treating physical therapist for that of a critically reflective scientist.
Several times my courage failed me, but at the moments when I could no longer see a way forward, you managed to put me back on the rails, literally and figuratively, and I could begin the train journey from Nijmegen to Groningen in good spirits again. I have been deeply impressed by your scientific knowledge and your critical review of the manuscripts. As a result, many versions of each manuscript were produced before the approved manuscript was submitted to a journal; the great advantage was that final acceptance by a journal then went very smoothly. I appreciate the endless patience you have had with me, and I am glad that, after more than eight years, I have been able to prove you right with this thesis.
Prof. dr. M. Wensing, my second supervisor. Dear Michel, I had to get used to someone who kept making clear to me that he did not know much about muscles and trigger points. But you forced me time and again to explain what I actually wanted, what I was looking for, and which questions I actually wanted to ask myself. I learned a great deal from your knowledge of methodology and statistics, and I hope that you now know a little more about myofascial trigger points. And what an honor to be (one of) your first PhD candidate(s).
I sincerely thank the manuscript committee, consisting of prof. dr. C. van Weel, prof. dr. P.L.C.M. van Riel, prof. dr. L.A.L.M. Kiemeneij, prof. dr. P.U. Dijkstra, and prof. dr. R.L. Diercks, for their swift assessment of the manuscript.
Dr. J. Dommerholt. Dear Jan, how differently things would have gone had I not met you at the 6th International Myopain Society Congress in Munich. Over the past years you have been of inestimable value to me as a source of advice, a sparring partner, a co-author, and a friend. You are extremely critical, and that served me well. And now and then you also let me know that things are never as bad as they first seem. I am honored that you are willing to be my paranymph.
Prof. dr. B. Stegenga. Dear Boudewijn, I greatly appreciate that you were willing to be a co-author of two of my articles. Your knowledge as a clinical epidemiologist came in very handy. I am convinced that, thanks to you, knowledge of myofascial trigger points among dentists and maxillofacial surgeons will continue to grow in the coming years. I hope we will long continue our dinner dates in the countless convivial Groningen restaurants.
Dr. A. de Gast. Dear Arthur, I owe part of my knowledge of shoulder disorders to you. The many times we taught the shoulder course together for the Nederlands Paramedisch Instituut, sometimes even in faraway places, were always particularly instructive, motivating, and above all enjoyable experiences for me. I hope to keep learning from you.
Monique Bodewes, how conscientiously and without complaint you, together with Maria Onstenk, sacrificed many free Wednesday afternoons to examine the patients. Even when it became clear that the study would take much longer than originally planned, you did not let me down. Without you it would not have succeeded. Maria, I think it is fantastic that you are willing to stand by me as my paranymph.
My colleagues:
Dear Jo, with you I could have the most heated discussions in the practice, even when we agreed.
Dear Ben, Margriet, and Betty, thank you for the many times you helped me.
Dear Ineke and Larissa, without your help the whole study would certainly have failed. Your presence in the practice is priceless. I hope I may rely on your help for a long time to come.
I thank all staff of the IQ healthcare department for all the help I have received. To you I am probably an almost invisible PhD candidate from the 'far north'.
Prof. dr. H.J. Zaagsma. Dear Hans, thank you for the many pieces of advice I received from you from time to time.
Prof. dr. P. Onck. Dear Patrick, thank you for the many philosophical reflections.
Jasper, thanks to you the number of typing and spelling errors in this thesis has been greatly reduced. Moreover, you took care of the error-free entry of data into the spreadsheets. Annick, thank you for your logistical support during the study.
My parents. Dear Dad and Mom, unfortunately, because of your health, you can no longer witness this event up close. Your unflagging interest and pride have meant a great deal time and again and have served as a spur to keep going.
Dennis, Jasper, Annick, and Mara: you are the most beautiful thing there is. Many times more important than a thesis.
CURRICULUM VITAE / LIST OF PUBLICATIONS
Carel Bron is a manual physical therapist. He is co-owner of the Physical Therapy Practice for Neck, Shoulder, and Upper Extremity Disorders in Groningen, the Netherlands, and co-founder of the Myofascial Pain Seminars Groningen.
Carel Bron was born on December 13th, 1956, in Winschoten, the Netherlands. After graduating from the Wessel Gansfort College in Groningen, he studied physical therapy from 1975 to 1979 at the Academie voor Fysiotherapie in Groningen. After graduation he worked as a physical therapist at the University Medical Center Groningen (formerly the Academic Hospital Groningen) until 1991. In 1983, he began his manual therapy studies at the Stichting Opleidingen Manuele Therapie (Foundation of Manual Therapy Education) in Eindhoven and Amersfoort, the Netherlands, and graduated in 1988.
In November 2002, he started his PhD studies at the Scientific Institute for Quality of Healthcare (IQ healthcare; department chair: Prof. dr. R. Grol), at that time known as the Centre for Quality of Care Research, Radboud University Nijmegen Medical Centre.
**List of Publications**
Bron C, Klasen HJ, Binnendijk B. Conservatieve behandeling van claviculafracturen. Ned Tijdschrift voor Manuele Therapie 1993, 12.
Bron C, Franssen JLM, de Valk BGM. A post-traumatic shoulder complaint without apparent injury (Een posttraumatische schouderklacht zonder aanwijsbaar letsel). Ned Tijdschrift voor Fysiotherapie 2001, 111(4): 97-102.
Eleveld M, Bron C, Eygendaal D, Maas M. Multidisciplinaire samenwerking bij de diagnostiek van een schoudertrauma. Sportfysiotherapie in beeld 2002.
Franssen JLM, de Valk BGM, Bron C. Handboek Fysiotherapeutische casuïstiek: Een patiënt met spanningshoofdpijn en hoesthoofdpijn. Bohn, Stafleu & Van Loghum, 2003: 1-16.
Bron C. The subacromial impingement syndrome. Tijdschrift Manuele Therapie 2008, IFOMT special, June: 12-18.
Bron C. Het subacromiaal-impingement syndroom. Stimulus 2005, 24(4): 409-422.
Bron C, Franssen JLM. Impingement van de schouder. Jaarboek fysiotherapie 2004: 83-104.
Bron C. Het subacromiaal-impingement syndroom. Tijdschrift Manuele Therapie 2006, 3(3): 20-26.
Dommerholt J, Bron C, Franssen JLM. Myofascial Trigger Points: An Evidence-informed Review. Journal of Manual and Manipulative Therapeutics 2006, 14(4): 203-221.
Dommerholt J, Bron C, Franssen JLM. Mięśniowo-powięziowe punkty spustowe - Przegląd uwzględniający dowody naukowe [Myofascial trigger points: An evidence-informed review]. Rehabilitacja Medyczna 2006, 10(4): 39-56.
Dommerholt J, Bron C, Franssen JLM. Myofasciale Triggerpoints: een evidence-informed review. Tijdschrift Manuele Therapie 2007, 5(1): 16-28.
Bron C, Wensing M, Franssen JLM, Oostendorp RAB. Myofascial trigger points in common shoulder disorders: Design of a randomized clinical trial. BMC Musculoskeletal Disorders 2007, 8: 107.
Bron C, Franssen JLM, Wensing M, Oostendorp RAB. Interrater Reliability of Palpation of Myofascial Trigger Points in Three Shoulder Muscles. JMMT 2007, 15(4): 203-215.
Bron C, Dommerholt J, Franssen JLM. Myofasciale Trigger Points. Physios 2009, 1(1): 14-21.
Bron C, de Gast A, Dommerholt J, Stegenga B, Wensing M, Oostendorp RAB. Treatment of myofascial trigger points in patients with chronic shoulder pain: a randomized, controlled trial. BMC Medicine 2011, 9: 8.
Dommerholt J, Bron C, Franssen J. Myofasziale Triggerpunkte: Evidenzbasierter Review. Manuelle Therapie 2011, 15: 1-13.
Bron C, Dommerholt J, Stegenga B, Wensing M, Oostendorp RAB. High prevalence of myofascial trigger points in patients with shoulder pain (submitted).
**Book Chapters**
In ‘Neck and Arm Pain Syndromes: Evidence-informed Screening, Diagnosis and Conservative Management’. Edinburgh: Elsevier Churchill Livingstone, 2011:
Chapter 6, Myofascial trigger points in the workplace. Franssen JLM, Bron C, Dommerholt J.
Chapter 16, Rotator cuff lesions: shoulder impingement. Huijbregts PA, Bron C.
Chapter 19, Frozen shoulder. Bron C, de Gast A, Franssen JLM.
Chapter 22, Therapeutic exercises for the shoulder region. McEvoy J, O'Sullivan K, Bron C.
**Posters**
Inter-rater reliability of palpation of three shoulder muscles. Myopain Congress 2007, Washington DC, USA.
Treatment of myofascial trigger points in shoulder disorders. Myopain Congress 2010, Toledo, Spain.
Myofascial trigger points in shoulder pain: prevalence, diagnosis and treatment.
PhD thesis, Radboud University Nijmegen Medical Centre (Scientific Institute for Quality of Healthcare, IQ healthcare).
Financial support by the Scientific Institute for Quality of Healthcare of the Radboud University Nijmegen Medical Centre and by the Scientific Committee Physical Therapy of the Royal Dutch Association of Physical Therapy for the publication of this thesis is gratefully acknowledged.
Cover design and lay-out: Douwe Oppewal, www.oppewal.nl
Printed by Netzodruk, Groningen
ISBN 978-90-9026017-4
Copyright 2011 Carel Bron, Groningen, The Netherlands
All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means without written permission of the copyright owner.
**Propositions (Stellingen)**
1. Myofascial trigger points in shoulder muscles can be reliably identified by palpation (this thesis).
2. Myofascial trigger points in shoulder muscles occur very frequently in patients with chronic non-traumatic shoulder complaints (this thesis).
3. The number of muscles with myofascial trigger points does not predict the severity of the complaint (this thesis).
4. Physical therapy treatment of myofascial trigger points in shoulder complaints is effective (this thesis).
5. There is a positive correlation between the reduction of pain and the improvement of functioning on the one hand, and the decrease in the number of muscles with myofascial trigger points on the other (this thesis).
6. "Deathly pale" (lijkbleek) is closer to the truth than "muscle-white" (spierwit).
7. Muscle pain is more than just muscle soreness.
8. The term "subacromial pain" wrongly suggests that deeply felt shoulder pain originates in the anatomical structures located beneath the acromion.
9. Well-reasoned deviation from guidelines, based on the results of "new" scientific research, should not merely be tolerated but actively encouraged, so as not to impede progress in medicine in general and in physical therapy in particular.
10. A PhD candidate more often needs a lifeline than a deadline.
11. There is nothing so practical as a good theory (Kurt Lewin, 1890-1947).
12. Although skeletal muscle occupies nearly half of our body, no medical specialty claims it as its focus organ (David G. Simons, 1922-2010).
13. It is beyond dispute (buiten kijf) that an external PhD candidate (buitenpromovendus) spends remarkably little time outdoors (in de buitenlucht).
AGONIST-ANTAGONIST COMBINATION TO REDUCE THE USE OF NICOTINE AND OTHER DRUGS
Inventors: Jed E. Rose, Venice; Edward D. Levin, Los Angeles, both of Calif.
Assignee: Robert J. Schnap; a part interest
Notice: The term of this patent shall not extend beyond the expiration date of Pat. No. 5,316,759.
Appl. No.: 570,530
Filed: Dec. 11, 1995
Related U.S. Application Data
Continuation of Ser. No. 235,454, Apr. 29, 1994, Pat. No. 5,574,052, which is a continuation of Ser. No. 54,144, Apr. 30, 1993, which is a continuation of Ser. No. 855,868, Mar. 23, 1992, Pat. No. 5,316,759, which is a continuation of Ser. No. 235,454, Apr. 11, 1994, now abandoned, which is a continuation-in-part of Ser. No. 840,072, Mar. 17, 1986, Pat. No. 4,846,199.
Int. Cl.6 A61K 31/44; A61K 31/465
U.S. Cl. 514/343; 131/270; 131/271; 131/329; 424/10; 514/660; 514/810; 514/812; 514/813; 514/922; 514/947
Field of Search 514/343, 660, 514/810, 812, 813, 922, 947; 424/10; 131/270, 271, 329
References Cited
U.S. PATENT DOCUMENTS
5,316,759 5/1994 Rose et al. ......................... 514/343
5,480,651 1/1996 Callaway .......................... 424/464
Primary Examiner—Herbert J. Lilling
ABSTRACT
A method of treating and reducing a drug dependency, such as a nicotine dependency, is provided. The method comprises initially administering to a subject a drug, such as nicotine, or another agonist of the drug in an amount which would normally provide the desired pharmacologic effects and at least partially satiate the user's need for the drug. The method also comprises administering to the subject an antagonist to the drug or its other agonist in an amount sufficient to at least partially block the pharmacologic effects of the drug or its other agonist. In one embodiment of the invention, the drug and the antagonist are administered substantially simultaneously so as to occupy a substantial portion of the user's receptors for that drug, thereby blocking or attenuating the effects of any further intake of the drug or other agonist. In another embodiment, the drug or its other agonist is first administered and the antagonist is self-administered by the user, after which the user reduces use of the drug, thereby counter-conditioning the drug user to the stimuli associated with the normal administration of the drug. The invention further provides a method of therapeutically treating psychophysiologic diseases and disorders involving neuronal dysregulation. The invention additionally provides a pharmacologic composition, relying upon a combination of an agonist and an antagonist, for the treatment and reduction of drug dependence.
18 Claims, 2 Drawing Sheets
[Drawing sheets: FIGS. 1-7. FIGS. 6 and 7 plot nicotine receptor activation or satisfaction against time (minutes); FIG. 6 shows the normal conditioning of smokers, and FIG. 7 shows inverse conditioning with an elevated baseline from a nicotine skin patch.]
AGONIST-ANTAGONIST COMBINATION TO REDUCE THE USE OF NICOTINE AND OTHER DRUGS
RELATED APPLICATIONS
This application is a continuation of our U.S. patent application Ser. No. 235,454, filed April 20, 1992 for "Agonist-Antagonist Combination To Reduce The Use Of Nicotine And Other Drugs", now U.S. Pat. No. 5,744,082 which is a continuation of our U.S. patent application Ser. No. 054,144, filed Apr. 30, 1993, for "Agonist-Antagonist Combination To Reduce The Use Of Nicotine And Other Drugs", which was a continuation of our U.S. patent application Ser. No. 855,868, filed Mar. 23, 1992 for "Agonist-Antagonist Combination To Reduce The Use Of Nicotine And Other Drugs", now U.S. Pat. No. 5,316,759 which was a continuation of our U.S. patent application Ser. No. 231,092, filed Aug. 11, 1992; for "Agonist-Antagonist Combination To Reduce The Use Of Nicotine And Other Drugs" (now abandoned) which is a continuation-in-part of patent application Ser. No. 840,072, filed Mar. 17, 1986, entitled "Smoking of Regenerated Tobacco Smoke" (now U.S. Pat. No. 4,846,199, dated Jul. 11, 1989).
GOVERNMENT RIGHTS
This invention was made with the support of the Veterans Administration of the United States government. The government has certain rights in this invention.
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates in general to certain new and useful improvements in methods and compositions for treating and reducing drug dependency and for therapeutically treating psychophysiologic diseases and disorders involving neuronal dysregulation and more particularly, to methods and compositions of the type stated which rely upon the administration of a combination of a drug or another agonist and an antagonist to the drug.
2. Brief Description of the Prior Art
The substantial use of drugs, and particularly the widespread abuse of drugs, has led to an increased incidence of health problems and has contributed significantly to increases in crime. It is well established that the intake of the drug nicotine through tobacco smoking results in various adverse health conditions. While the use of drugs such as nicotine does not necessarily lead to an increased incidence of crime, use of this drug and similar related drugs does present significant health problems.
While the use of other addictive drugs including controlled substances, such as various narcotics, e.g., heroin and cocaine, also can result in adverse health conditions, these more serious drug uses have a significant social impact in that they give rise to a substantial increase in numerous types of criminal activity. Various governmental agencies have expended substantial sums of money in attempting to eradicate or at least reduce the incidence of crime, but without much success. Accordingly, in recent years, there has been an increased emphasis on attempting to treat and reduce drug dependency.
The use of drugs is also involved in the treatment of various psychophysiological disorders, and particularly psychiatric disorders involving dysregulation of a neurotransmitter. In addition, certain diseases involving imbalances of the autonomic nervous system are treated by administration of certain drugs. Here again, these drugs may have serious side effects in that while they may attenuate a certain disorder, they exacerbate others. Further, many of the drugs used to treat these disorders can produce dependence, for example a dependence on diazepam (Valium). Therefore, the subject, while finding some release from the disorder or disease, may become severely addicted to the drug which is used.
In general, two approaches have been used in the pharmacologic treatment of drug dependence. The first approach is often described as the "substitution approach" and provides an alternative drug which is designed to theoretically allow the user to withdraw from the habitually abused drug without suffering the aversive symptoms normally associated with a withdrawal from a drug. As a simple example, methadone is often administered to heroin addicts in the treatment of heroin addiction. It was anticipated and initially believed that a substitution of methadone for heroin, for example, would lead to the eventual cessation of all drug use after a weaning period in which the dose of the substituted drug was gradually reduced.
This first approach to drug dependency has met with a very low rate of success. It has been found that the substitution of one drug for another does not typically wean subjects from all drugs. In fact, it has been found in many cases that drug users will store the substituted drug, such as methadone, and continue using the originally abused drug, heroin or morphine, resorting to the stored substitute only when the heroin or morphine is not readily available. Thus, this first approach to reducing drug dependency has met with very little success.
There have also been various proposed treatments involving the administration of nicotine (the putative addictive substance in tobacco smoking) as a replacement for tobacco smoking. One of the most successful approaches used to date in reducing the incidence of tobacco smoking relies upon nicotine-containing chewing gum. The use of this type of gum suffers from several problems, including not only the bad taste and smell of the chewing appliance, but also the gastrointestinal upset which results from it and which further reduces compliance. Moreover, nicotine-containing chewing gums do not satisfy the craving which most smokers experience for the distinct sensations in the throat and chest elicited by the nicotine in smoke. Over the course of many years of tobacco smoking, these particular sensations have become an important part of, and conditioned with, the habit of smoking, and they help maintain tobacco smoke dependency.
There have also been several proposals for administering nicotine through various aerosol sprays. However, the aerosol sprays are designed to supply the amount of nicotine which would have been acquired by a user through the normal channel of tobacco smoking, and they result in severe respiratory tract irritation. There has been no available means to provide the nicotine by way of an oral or nasal spray while attenuating the severe irritating effects of the nicotine.
The second known general approach which has been used in the pharmacologic treatment of drug dependence involves the blocking of the reinforcing effects of the abused drug. It is theorized that by reducing the reinforcement of the user, there will be a decreased incidence of the abuse and use of a drug by the user. As a simple example, naltrexone is presently used to block the reinforcing effects of heroin and mecamylamine has been used to block the reinforcing effects of nicotine. This latter approach has not been found
to be effective in that the intense withdrawal symptoms suffered by the user encourage continuing use of the addictive drug and thereby prevent compliance with the treatment unless a sufficient period of abstinence has elapsed so that the individual's nervous system is accustomed to the absence of the abused drug. The administration of an antagonist alone also creates a dysphoric state which encourages relapse and return to the abused drug.
Each of the aforementioned approaches has only been used experimentally. Moreover, the agonist approach alone and the antagonist approach alone have each been found relatively ineffective, the latter because of the significant withdrawal or other adverse symptoms it produces. These symptoms cause the drug abuser to return to his original drug habit in order to avoid the pain and discomfort associated with withdrawal. Thus, this latter approach to reducing drug dependency has also met with little success.
Heretofore, no one has attempted to combine the sustained administration of a drug agonist and an antagonist to that drug in a therapeutic treatment. It would appear that the administration of an agonist and its antagonist would accomplish little, since the antagonist would effectively cancel out the effects of the agonist, with the result that the combination would be equivalent to giving nothing at all.
OBJECTS OF THE INVENTION
It is, therefore, one of the primary objects of the present invention to provide a method of reducing the dependency on drugs by utilizing a combination of an agonist and an antagonist.
It is another object of the present invention to provide a method of the type stated for reducing drug dependency by simultaneously administering a drug or another agonist of that drug along with an antagonist to that drug and thereby occupy a substantial number of the receptors of a subject available to that drug or its agonist.
It is a further object of the present invention to provide a method of the type stated which enables administering an agonist and antagonist without causing an over-activity or under-activity of the receptor for the agonist thereby avoiding dangerous side effects which may occur if the agonist or antagonist were given alone in the same dosage.
It is another salient object of the present invention to provide a method of the type stated in which a drug or another agonist of that drug is administered to an individual to provide a certain systemic level, and an antagonist is self-administered by the individual, which causes a reduction in the satisfaction associated with the intake of the drug or its other agonist.
It is also an object of the present invention to provide a method for treating psychophysiologic disorders and diseases involving neuronal dysregulation by the simultaneous application of an agonist and an antagonist in relative amounts so that substantial portions are present in the bloodstream in the patient having the disorder or disease.
It is an additional object of the present invention to provide a novel composition of a drug or another agonist of that drug and an antagonist to that drug.
With the above and other objects in view, our invention resides in the novel features of form and arrangement and combination of steps in the method and in the components forming part of the composition as hereinafter described.
BRIEF SUMMARY OF THE INVENTION
The present invention relates in general terms to a method of treating and reducing drug dependency. Any of a number of known drug dependencies can be treated in accordance with the method of the invention, including, for example, dependency on nicotine, heroin (or morphine), cocaine, benzodiazepines, and the like. The invention in a broad aspect relies upon a combined administration of a drug, or another agonist of the drug, and an antagonist to the drug. The present invention also provides a unique method of therapeutically treating psychophysiologic diseases and disorders involving neuronal dysregulation by a simultaneous administration of a drug or another agonist of the drug and an antagonist to the drug.
The term "agonist" is used in a broad sense and includes the drug of interest. Thus, for example, in this case, nicotine is an agonist and heroin is an agonist. Methadone is merely another agonist for heroin since it provides effects similar to that of heroin. Thus, the term "agonist" as used herein, unless otherwise specified, will include the drug itself.
The method, in a broad sense, comprises initially administering to a subject a drug, or another agonist of the drug, in an amount which would normally provide the desired pharmacologic effects. Moreover, the amount of the drug applied would at least partially satiate the user's need for the drug. The method also involves administering to the subject an antagonist to the drug or its other agonist in an amount sufficient to at least partially block the pharmacologic effects of the drug or its other agonist while there is a substantial systemic presence of the drug or its agonist.
The method of the present invention involves two general approaches to the treatment of drug dependency and to the therapeutic treatment of the above-described psychophysiologic diseases and disorders which involve use of drugs. In the first approach, the dependency on the drug is treated by saturating a substantial portion of the known receptors for that drug with a combination of the drug, or its other agonist, and an antagonist to that drug or such other agonist. In this case, the agonist or drug is administered in the amount upon which the subject is generally dependent, thereby satisfying the demand for the drug. The antagonist is generally simultaneously administered to the same subject in an amount sufficient to attenuate the pharmacologic effects of the drug or its other agonist. The drug or its other agonist and the antagonist are preferably present in such amounts that more receptors for the drug are occupied by the drug and the antagonist together than could safely be occupied by the drug alone or the antagonist alone. Moreover, fewer receptors are left available to respond to the drug, thereby insulating the user from the reinforcing effects of the drug and at the same time minimizing the pathologically wide fluctuations associated with the agonist.
With the use of the concurrent agonist-antagonist therapy, one can attenuate the fluctuations of a neural system while keeping the absolute level of activation constant. In other words, one can attenuate the impact of an abused drug without causing a withdrawal syndrome, and one can decrease the pathologically wide fluctuations in neural activity without the adverse side effects associated with giving only an agonist.
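The receptor-saturation argument above can be made concrete with the standard law-of-mass-action model of competitive binding (the Gaddum equation). The sketch below is illustrative only: the concentrations and dissociation constants are hypothetical round numbers, not measured values for nicotine or mecamylamine.

```python
# Fractional receptor occupancy under competitive binding (Gaddum equation).
# All concentrations and Kd values are in arbitrary, hypothetical units.

def occupancy(agonist, kd_a, antagonist=0.0, kd_b=1.0):
    """Return (agonist-bound, antagonist-bound, free) receptor fractions."""
    a = agonist / kd_a
    b = antagonist / kd_b
    denom = 1.0 + a + b
    return a / denom, b / denom, 1.0 / denom

# Agonist alone at a satiating dose:
bound_a, _, free_alone = occupancy(agonist=3.0, kd_a=1.0)

# Same agonist dose plus a competitive antagonist:
bound_a2, bound_b, free_combo = occupancy(agonist=3.0, kd_a=1.0,
                                          antagonist=4.0, kd_b=1.0)

# More receptors are occupied in total, fewer are left free to respond
# to any further self-administered drug, and the agonist's own receptor
# activation is attenuated rather than abolished.
assert bound_a2 + bound_b > bound_a
assert free_combo < free_alone
assert 0 < bound_a2 < bound_a
```

With these illustrative numbers, 87.5% of receptors are occupied by the combination versus 75% by the agonist alone, while agonist-driven activation falls from 75% to 37.5%: more saturation, dampened fluctuation.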
In this case, the purpose of the invention is to saturate the receptors of the drug to thereby insulate the individual from the reinforcing effects of the drug. In the case of nicotine, the individual would be administered both nicotine and an antagonist to nicotine, such as mecamylamine. In the case of other drugs such as heroin, or its agonist, methadone, the antagonist naltrexone would be administered.
In accordance with this aspect of the invention, the drug may be present in an amount which would otherwise be
toxic in the absence of the antagonist but the toxicity is offset by the presence of the antagonist. The drug should preferably be administered in a sufficiently high dose to occupy a sufficient number of the receptors and thereby substantially reduce a subject's demand for the drug.
In one preferred embodiment, both the drug, or its other agonist, and the antagonist may be administered by means of a transdermal patch, as hereinafter described in more detail. The drug or its other agonist and for that matter the antagonist, may be administered by other means such as oral administration, intravenous administration etc. In order to wean the person from the use of the drug, both the drug, or its other agonist, and the antagonist may be reduced in selected amounts over a period of time.
The use of this approach is effective in that the user will receive little or no satisfaction from taking additional amounts of the drug inasmuch as a very substantial portion of the receptors for that drug are already occupied by the initial dose of the drug and the initial dose of the antagonist to the drug.
The second general approach used in the administration of the agonist and an antagonist involves an inverse conditioning to the stimuli associated with the taking of the abused drug. In this case, the method involves the administering to a subject a drug or another agonist of the drug in an amount which would achieve a systemic level of the drug to which the subject was previously accustomed. This approach to the method also involves the self-administration of an antagonist to the drug or its other agonist, but only at selected intervals. Moreover, the antagonist is preferably administered in a form similar to the administration of the abused drug, as hereinafter described.
While this approach does increase the saturation of the receptors for the drug by the presence of the drug and the antagonist, it more importantly causes a reduction of the enjoyable effects associated with the taking of the drug. The subject is administered a certain amount of the drug or other agonist to provide a desired systemic level. The administration of the antagonist is preferably in a form with sensory cues which mimic or closely simulates the form in which the user was accustomed to taking the drug, as aforesaid. Thus, in taking the drug in this form, there is an inverse conditioning or counter-conditioning of the stimuli associated with the taking of the drug.
As a simple example of this latter approach in treating and reducing drug dependency, the dependency on nicotine could be reduced by providing a desired systemic level of the nicotine through a transdermal patch or other means. The antagonist, such as mecamylamine, could be incorporated into a smoking device, such as a simulated cigarette which provides many if not most of the sensory cues of normal tobacco smoking. In this way when the user took a puff from the simulated cigarette instead of receiving nicotine, he would receive an antagonist, namely the mecamylamine, thereby further depriving the user of the pharmacologic effects of nicotine to which he or she was previously accustomed. The usual conditioning is that smoking is associated with increased nicotine stimulation and pleasurable effects. However, in this case, smoking and its attendant sensory cues would be associated with decreased nicotine stimulation and the unpleasant effects of withdrawal whenever the user smoked.
It can be observed that one important factor in each of the above identified approaches to the method of the present invention is that there is generally a sustained level of the agonist in a user's bloodstream. When using the first approach, there would generally be a sustained level of both the agonist and the antagonist since they are generally simultaneously administered. In the second approach, there would at least be the sustained level of the agonist and the user would self-administer the antagonist at the will of the user. Thus, there would be peaks in the amount of the antagonist in the bloodstream of the user of the second approach.
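The contrast between the two dosing patterns can be sketched with a simple one-compartment pharmacokinetic model: a constant-rate, patch-like agonist input approaches a steady plasma level, while self-administered antagonist boluses produce transient peaks. All rate constants, doses, and timings below are hypothetical, chosen only to show the shape of the curves.

```python
# One-compartment model, dC/dt = input_rate - k_elim * C, Euler-integrated.
# Units are arbitrary; parameters are illustrative, not clinical values.

def simulate(hours, dt, infusion_rate, k_elim, boluses):
    """boluses: dict mapping time (h) -> instantaneous dose added to C."""
    steps = int(hours / dt)
    conc, series = 0.0, []
    for i in range(steps):
        t = round(i * dt, 6)
        conc += boluses.get(t, 0.0)
        conc += (infusion_rate - k_elim * conc) * dt
        series.append(conc)
    return series

# Agonist: sustained transdermal-style input, no boluses.
agonist = simulate(hours=24, dt=0.1, infusion_rate=1.0, k_elim=0.5, boluses={})

# Antagonist: no infusion, self-administered boluses at chosen intervals.
antagonist = simulate(hours=24, dt=0.1, infusion_rate=0.0, k_elim=0.5,
                      boluses={4.0: 2.0, 12.0: 2.0, 20.0: 2.0})

# The agonist level approaches a plateau (infusion_rate / k_elim = 2.0),
# while the antagonist level shows peaks that decay between doses.
```

The flat agonist curve corresponds to the sustained systemic level described above; the spiky antagonist curve corresponds to the user-timed peaks of the second approach.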
Preferably, in both approaches to the method of the invention, the agonist is administered by a route which is different from that employed in the actual use of the drug. Thus, in the case of nicotine, administration of an agonist would occur through use of a transdermal patch or a route other than by way of smoking. In the case of heroin, methadone would likely be used because it has a longer-acting effect than heroin, but it would be administered by a route different from that which the user employed for the administration of heroin. Thus, if the user self-administered heroin through a hypodermic needle, the methadone would be administered orally or by means other than a hypodermic needle. In this way, there will not be any reinforcement of the original response obtained by the actual method of taking the drug.
In both approaches, it can also be observed that there is essentially no self-administration of the agonist alone. In other words, the agonist may be self-administered in combination with the antagonist as for example, a composition in the form of a pill or tablet. Otherwise, the agonist would generally always be administered in a therapy, as for example, in a treatment center, or the like. The antagonist could be self-administered, as described above.
The present invention is also highly effective in the treatment of various psychophysiologic disorders and diseases involving neuronal dysfunction, as described above. The invention utilizing both the agonist and the antagonist is effective in treating disorders involving dysregulation of a neurotransmitter as for example, in manic depression and schizophrenia. Imbalances of the autonomic nervous system can also be treated by the concurrent agonist-antagonist administration, as well. In particular, sympathetic nervous system disorders e.g., hypertension could also be treated by this approach with the adrenergic agonist-antagonist combinations.
The present invention also provides a unique composition of both an agonist and an antagonist. The composition is novel and nonobvious in view of the fact that one would not normally attempt to combine an agonist and an antagonist, for the reasons described above. Moreover, it is important to have a single composition, which may be in tablet or pill form, for example, or which may be administered through a transdermal patch. In this way, the user, who is typically an abuser of a drug or another agonist of that drug, will not be able to separate the desired portion of the composition, namely the drug or agonist, from the antagonist. When the user takes the composition, he will necessarily take both the drug or its other agonist and the antagonist to the drug.
This invention possesses many other advantages and has other purposes which will be made more clearly apparent from a consideration of the forms in which it may be embodied. They will now be described in more detail for purposes of illustrating the general principles of the invention, but it is to be understood that such detailed description is not to be taken in a limiting sense.
**BRIEF DESCRIPTION OF THE DRAWINGS**
Having thus described the invention in general terms, reference will now be made to the accompanying drawings (two sheets) in which:
FIG. 1 is a somewhat schematic vertical sectional view of a transdermal patch for the transdermal administration of an agonist or an antagonist;
FIG. 2 is a vertical sectional view of an apparatus capable of being used for inhaling an aerosol of an agonist or an antagonist;
FIG. 3 is a schematic side elevational view, partially broken away and in section, and showing a modified form of apparatus for inhaling an aerosol of an agonist or an antagonist;
FIG. 4 is a schematic side elevational view, partially broken away and in section, and showing another modified form of apparatus for inhaling an aerosol of an agonist or an antagonist;
FIG. 5 is a fragmentary sectional view taken along line 5-5 of FIG. 4;
FIG. 6 is a graph showing a normal conditioning of a smoker with nicotine receptor activation as a function of time; and
FIG. 7 is a graph showing an inverse conditioning of smokers with nicotine receptor activation as a function of time.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
Referring now in more detail, and by reference characters, to the drawings which illustrate several preferred embodiments of the present invention, the invention relies, in its principal aspect, upon the administration of a drug, or another agonist of the drug, and an antagonist to the drug, as previously described. As also indicated above, there are two major approaches to the reduction of drug dependency using a combination of the agonist and an antagonist.
In the first of these approaches, an effective treatment strategy is based upon the combined administration of the agonist and an antagonist. The net result of administering the drug, or another agonist of that drug, together with its antagonist is that more of the receptors in the user's brain for that drug are occupied than if either the drug or the antagonist were given alone. As a result, the drug user is further insulated from any reinforcing effects of the abused substance. As an example, by administering a dose of nicotine and a dose of mecamylamine, the user is insulated from the formerly desirable effects associated with the smoking of tobacco, which were primarily the effects of obtaining nicotine. Few receptors are left available to respond to the abused substance in this case because the receptor system is at least partially saturated.
It should be understood that the receptor system should only be partially saturated, in that there could be serious adverse consequences to the patient or other subject if all of the receptors were occupied. Nevertheless, in the context of the present invention, a much larger number of receptors are occupied than would otherwise be occupied if the subject were receiving only the drug or other agonist, or only the antagonist.
A further advantage of this approach over the administration of an agonist alone, or an antagonist alone, is that the toxic effects of either drug are offset by the other. As a simple example, in the case of nicotine, it may not be safe to administer nicotine alone at a dose high enough to occupy the number of receptors needed for maximal suppression of an individual's craving for cigarettes. However, by concurrently administering an antagonist for the nicotine, such as mecamylamine, a higher dose of the nicotine can be administered. In like manner, it is possible to deliver a higher dose of a highly addictive drug, such as heroin or morphine, when naloxone is administered.
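For a competitive antagonist, the classical Schild relation quantifies this offset: in the simple law-of-mass-action model, reaching the same agonist-driven receptor activation in the presence of an antagonist at concentration B requires raising the agonist dose by the factor 1 + B/Kb. The values below are hypothetical illustrations, not clinical doses.

```python
# Schild dose-ratio arithmetic for a competitive antagonist.
# Activation is modeled as the agonist-occupied receptor fraction:
#   f = (A/Ka) / (1 + A/Ka + B/Kb)
# Arbitrary units; Ka, Kb, and doses are illustrative only.

def activation(a_dose, b_dose, ka=1.0, kb=1.0):
    a, b = a_dose / ka, b_dose / kb
    return a / (1.0 + a + b)

kb = 1.0
a_alone = 1.0          # agonist dose giving 50% activation on its own
b = 3.0                # co-administered antagonist concentration

dose_ratio = 1.0 + b / kb          # Schild: factor by which A must rise
a_combined = a_alone * dose_ratio  # 4x the original agonist dose

# The higher agonist dose given with the antagonist reproduces the
# original activation level, so the larger systemic dose does not
# overstimulate the receptor system.
assert abs(activation(a_alone, 0.0) - 0.5) < 1e-12
assert abs(activation(a_combined, b) - 0.5) < 1e-12
```

In this sketch a fourfold higher agonist dose, paired with the antagonist, yields the same net activation as the original dose alone, mirroring the patent's claim that the antagonist offsets what would otherwise be a toxic dose.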
The agonist and the antagonist are preferably administered to the subject simultaneously. However, it should be understood that the antagonist could be administered shortly after the administration of the agonist or, conversely, the agonist could be administered shortly after the administration of the antagonist. It is important, however, in the context of this mode of treatment, that a generally similar therapeutic amount of the antagonist be present along with the agonist. In this way, the user will not suffer severe withdrawal symptoms, which would otherwise occur in the presence of a large amount of the antagonist and a very small amount of the agonist. In like manner, the user will not be able to obtain the reinforcing effects to which he or she was normally accustomed, since there is never an excess of the agonist without a corresponding presence of a substantial amount of the antagonist.
Any means for delivering the agonist and the antagonist may be employed. For example, the agonist and the antagonist may be administered by means of a transdermal patch, as hereinafter described, or they may be administered by means of a pill, tablet, or the like. Moreover, the agonist and the antagonist could be administered intravenously or by other means known for the administration of medications, e.g. sublingually.
The preferred mode of administering the agonist or the antagonist, and preferably both, relies upon the use of a transdermal patch P of the type illustrated in FIG. 1 of the drawings. This patch P is adapted for application to a suitable portion of a smoker's body, as for example, on a forearm or the chest of the individual, or the like.
The patch P comprises a lower liquid-permeable membrane or layer 10 along with a suitable non-permeable covering or outer enclosing layer 12, which together form a reservoir 14 therebetween. This reservoir 14 is sized to receive an agonist or an antagonist, or both, which usually may be provided in liquid form. The layer 10 may be formed with a porous surface and an adhesive layer 16 covered by a releasable backing 18. Thus, once the releasable backing 18 is removed, the patch P can be adhered to the skin of a user through the adhesive layer 16. The adhesive layer 16 is also sufficiently porous so that any agonist or antagonist contained within the reservoir 14 may be transdermally applied to the user. In like manner, and for this purpose, small apertures could be formed within the adhesive layer 16, if desired.
The membrane 10 and the outer enclosing layer 12 may be formed of a cloth or similar cloth-like material which is capable of retaining, yet permitting the dispensing of, the agonist and the antagonist in a liquid carrier. For this purpose, both the agonist and the antagonist may be liquids, or may otherwise be dissolved in a liquid carrier. The patch P may also be provided in the reservoir 14 with a silicone polymer matrix comprised of a cross-linked rubber and having micro-sealed compartments which are effectively formed by the cross-linking of the silicone rubber.
The exact details of construction of the patch P are not critical with respect to the present invention and other forms of delivery apparatus or patches can be used. One such patch is illustrated, for example, in U.S. Pat. No. 3,797,494 to Zaffaroni. Other patches which can be used are illustrated in U.S. Pat. No. 3,731,683 to Zaffaroni and U.S. Pat. No. 4,336,243 to Sanvordeker et al.
The patch P preferably has a size of about two centimeters by two centimeters at a minimum. Preferably, the patch has a surface area of about five centimeters by five centimeters with a thickness of about two to three millimeters. There is no effective outer limit on the size of the patch, except for convenience. When administering nicotine as the drug, the nicotine may be present in an amount sufficient to provide that amount of nicotine which would otherwise be acquired by a smoker. As a simple example, a patch could deliver a few milligrams of nicotine per hour. For a 24-hour delivery period, the patch would have a size and thickness sufficient to retain a minimum of about 50 milligrams of nicotine.
In one of the preferred embodiments, when utilizing a transdermal patch, nicotine should be available so as to provide one to four milligrams per hour. The smoker normally obtains about two to about four milligrams of nicotine per hour as a result of a normal smoking pattern. Thus, at least some amount of the nicotine to which the smoker is accustomed is generally present. The mecamylamine is present in an amount to provide delivery of 0.5 milligrams to about one milligram per hour. These relative amounts of the nicotine and the mecamylamine are present at the start of any nicotine reduction program and may be reduced after a period of time. After a period of time, the nicotine is reduced to no more than one milligram delivered per hour, along with about 0.5 milligrams per hour of mecamylamine.
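As a rough numeric check, the hourly delivery rates above can be converted to 24-hour totals. This is a hedged sketch: the hourly rates come from the passage, but the specific values chosen within each stated range are illustrative.

```python
# Hedged sketch of the patch delivery rates described above.
# The hourly rates come from the text; the particular values picked
# within each range are illustrative assumptions, not a prescription.

def daily_totals(nicotine_mg_per_h, mecamylamine_mg_per_h):
    """Convert hourly patch delivery rates to 24-hour totals (mg/day)."""
    return nicotine_mg_per_h * 24, mecamylamine_mg_per_h * 24

# Initial phase: 1-4 mg/h nicotine with 0.5-1 mg/h mecamylamine.
start_nic, start_mec = daily_totals(3.0, 0.75)

# Later phase: no more than 1 mg/h nicotine with about 0.5 mg/h mecamylamine.
later_nic, later_mec = daily_totals(1.0, 0.5)

print(start_nic, start_mec)   # 72.0 18.0 mg/day
print(later_nic, later_mec)   # 24.0 12.0 mg/day
```

The 72 mg/day figure at the upper taper also shows why the reservoir minimum of about 50 milligrams discussed earlier is only a lower bound for a 24-hour patch.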
The patch has been found to be highly effective in that it provides a steady rate of delivery of both the agonist and the antagonist. In this way, the user can experience steady levels of either agent, as previously described. Oral time-release capsules can also provide the same effect; however, better control is provided when using the transdermal patch. An oral spray, such as an aerosol spray, can be used to administer the nicotine and the mecamylamine or, for that matter, the other agonists and antagonists. However, the delivery of nicotine through an oral spray could present a problem due to the harshness and severe irritation caused in the respiratory tract when it is inhaled, although this may be mitigated by the mecamylamine.
The patch described herein can be placed on any convenient part of the user's skin, such as the underside of the forearm or the chest of the individual. In this way, when the patch is applied to the user's body it will release a continuous supply of nicotine to the smoker.
The nicotine and/or the mecamylamine may be dissolved in an inert vehicle, such as, for example, K-Y jelly or any other liquid carrier which does not react with the body, with the nicotine or the other agonists, or with the mecamylamine and the other antagonists. The vehicle must also readily permit transdermal migration of the nicotine and mecamylamine. One of the primary considerations in selecting a carrier is cost. Various low molecular weight alcohols, such as ethanol, could be used. In addition, glycerol, propylene glycol, petrolatum, etc. are effective carriers for the nicotine and mecamylamine.
The nicotine and the mecamylamine are each added to the liquid carrier in an amount of about 3 percent by weight to about 10 percent by weight. When water is the liquid carrier, the amounts of the two components added to the carrier are limited by the solubility of the mecamylamine. The amount of nicotine and mecamylamine to be added to the carrier is further determined by the desired rate of delivery, as hereinafter described, which is, in turn, a function of patch size, pH of the carrier, etc.
The carrier preferably should have a pH of no higher than about eight or nine, although it can be made less basic or more acidic, as hereinafter described, in order to control nicotine penetration rates. Nicotine is well known to penetrate the intact skin, particularly at a pH of about seven or greater.
The amount of any agonist and any antagonist introduced into a liquid carrier is a function of the solubility of the components in the carrier as well as the desired rate of delivery. Moreover, patch construction to some extent will affect the rate of delivery.
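As a hedged illustration of how carrier loading bounds delivery, the sketch below converts a weight-percent concentration into the mass of drug a reservoir holds. The 3 to 10 percent range comes from the passage; the 2-gram carrier mass is an illustrative assumption.

```python
# Hedged sketch: how much drug a carrier holds at a given weight
# percentage. The 3-10 wt% range is from the text; the carrier mass
# used below is an illustrative assumption.

def drug_mass_mg(carrier_mass_g, weight_percent):
    """Mass of dissolved drug (mg) in a carrier at a given wt%."""
    return carrier_mass_g * 1000.0 * weight_percent / 100.0

# A small reservoir holding about 2 g of aqueous carrier at 5 wt% nicotine:
mg_nicotine = drug_mass_mg(2.0, 5.0)
print(mg_nicotine)  # 100.0 mg of nicotine in the reservoir
```

At 100 milligrams, such a reservoir would comfortably exceed the 50-milligram minimum discussed earlier for a 24-hour delivery period, which is consistent with solubility, rather than capacity, being the limiting factor.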
The relative amounts of agonist and antagonist which are administered to a subject are functions of their receptor occupancy. Thus, it is desirable to provide relative amounts of agonist and antagonist which present equivalent amounts of binding at the brain receptors for the agonist.
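The occupancy balancing described above can be illustrated with a simple competitive-binding model (law of mass action, single site). This is a sketch only; the dissociation constants (Kd) and concentrations are illustrative assumptions, not values taken from the specification.

```python
# Hedged sketch of occupancy balancing via simple competitive binding.
# Kd values and concentrations here are illustrative assumptions.

def occupancies(agonist_conc, agonist_kd, antagonist_conc, antagonist_kd):
    """Fractional receptor occupancy of each ligand under single-site
    competitive binding (law of mass action)."""
    a = agonist_conc / agonist_kd
    b = antagonist_conc / antagonist_kd
    denom = 1.0 + a + b
    return a / denom, b / denom

# Concentrations chosen so the two ligands bind about equally:
ago, anta = occupancies(agonist_conc=2.0, agonist_kd=1.0,
                        antagonist_conc=4.0, antagonist_kd=2.0)
total = ago + anta
print(round(ago, 3), round(anta, 3), round(total, 3))  # 0.4 0.4 0.8
```

Note that the combined occupancy (0.8) exceeds what the agonist alone would achieve at the same concentration (2/3), mirroring the specification's point that the pair together occupies more receptors than either given alone.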
It is also possible to add to the liquid carrier an agent to increase the permeability of the skin, such as dimethyl sulfoxide (DMSO) or equivalent agent. The dimethyl sulfoxide is a topical agent which facilitates the penetration of the agonist through the skin. Other similar acting agents which can be used include, e.g. sodium lauryl sulfate, 1-dodecylhexahydro-2H-azepin-2-one (Azone) and a mixture of propylene glycol and oleic acid.
The patch P has been found to be effective in administering either the antagonist or the agonist, and preferably both, in this embodiment of the invention. The patch is preferable inasmuch as it may provide a controlled rate of delivery of the agonist and antagonist to the user and also maintain a sustained level of the agonist and antagonist in the user's bloodstream. By controlling the membrane size and like factors, it is possible to regulate the rate of delivery of these agents to the user. Moreover, the patch provides the desired pharmacologic effects to which the user is accustomed, but still blocks the need for the additional external administration of the drug. It also enables a dissociation of the drug from the accustomed means of its delivery. A person addicted to nicotine is accustomed to smoking as the means of deriving that nicotine; when the patch is employed, there is no longer a reinforcement of the need for smoking to obtain the desired level of nicotine. In the case of a heroin addict who utilizes an intravenous needle, there is a dissociation from the intravenous mode of administration, which is no longer required.
The second approach for treatment of drug dependencies utilizing the agonist and antagonist combinations relies upon an inverse conditioning for smoking cessation. This approach differs from the previously described approach in that the aim of the first approach is to saturate the nicotine receptors and thereby insulate the user from the reinforcing effects of taking a drug. In this latter approach, the invention counter-conditions the stimuli associated with the administration of the drug by reversing the usual consequences of taking the drug.
In one of the preferred embodiments of the invention utilizing this latter approach of counter-conditioning, the drug or the agonist is administered to the individual at a level which approximates that previously obtained by the individual. For example, the addict of heroin or morphine would receive methadone as the agonist to the heroin or the morphine, since methadone has a more constant systemic level. The methadone would be administered in amounts which would, if the antagonist were not administered, approximate the pharmacologic effects to which the individual was accustomed. In like manner, the party addicted to nicotine would receive that general level of nicotine to which he or she was previously accustomed. The administration of the nicotine or the other drug would preferably occur by any convenient means different from the previous usual form of administration. Thus, the morphine or heroin could be taken through the oral cavity if the subject previously used intravenous delivery. The nicotine would preferably be applied by means of a transdermal patch. When a desired amount of the agonist has been assimilated into the system of the user, the user is then provided with a means for administration of the antagonist which simulates the method previously used for taking the abused drug. In this case, for example, the user would be provided with an artificial cigarette or, otherwise, with naloxone in hypodermic needles if, in the past, the user had been accustomed to self-administering the morphine or heroin intravenously. Thus, the naloxone would also be administered intravenously. It can be appreciated that when the user formerly administered the substance, he or she received a certain desired sensation. In this case, when the naloxone is administered, there is an inverse conditioning in that there is instead an unpleasant effect of withdrawal. The more the user attempted to reinforce his or her habit by using hypodermic needles, the greater the withdrawal. As a result, there is an inverse conditioning.
The same holds true in the case of the party addicted to nicotine. The act of smoking would then be negatively correlated with nicotine effects. The typical smoker is accustomed to receiving a nicotine reinforcement or so-called "high" by drawing puffs of smoke from a cigarette, a pipe, or other means for burning tobacco. In this case, the smoker is administered nicotine chronically through a transdermal patch or an equivalent method of delivery other than smoking. The smoker would also be provided with an artificial cigarette or other smoking device which contains mecamylamine as opposed to nicotine. In this case, the usual conditioning associated with smoking, namely increased nicotine stimulation and the pleasurable effects derived therefrom, would not be obtained. In fact, there would be a withdrawal; in this paradigm, smoking would be associated with decreased nicotinic stimulation and the unpleasant effects of withdrawal. Thus, whenever the user smoked the artificial cigarette or similar smoking device, the user would be presented with the full range of sensory and motor aspects of smoking associated with nicotine, but without any increase in, and indeed with a decrease in, nicotine-induced satisfaction. The satisfaction provided by the systemic level of nicotine would be lessened by each bolus of mecamylamine that is inhaled.
FIGS. 6 and 7 clearly show this effect of inverse conditioning. By reference to FIG. 6, the normal nicotine receptor activation or satisfaction is shown on a scale as a function of time. Each puff of a cigarette or similar smoking device provides a certain level of satisfaction, since each puff of the tobacco smoke provides the satisfying drug, nicotine. In the inverse conditioning shown in FIG. 7, it can be observed that there is an elevated baseline level of nicotine present in the subject's bloodstream. Thus, the subject is "pre-loaded" with nicotine from the transdermal patch. However, on each occasion when a puff of the artificial cigarette is taken, there is a decrease in the amount of the satisfaction obtained. Thus, in comparison, whereas an increase in satisfaction occurs with each puff in FIG. 6, a decrease in satisfaction results with each puff, as shown in FIG. 7.
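The qualitative shapes of FIGS. 6 and 7 can be sketched numerically. The baseline level, puff increments, and decay factor below are illustrative assumptions chosen only to reproduce the rising curve of FIG. 6 and the dipping curve of FIG. 7, not values from the specification.

```python
# Hedged sketch of the satisfaction curves of FIGS. 6 and 7.
# Baseline, puff delta, and decay are illustrative assumptions.

def satisfaction_trace(baseline, puff_delta, n_puffs, decay=0.5):
    """Satisfaction level after each puff: each puff adds puff_delta
    (positive for normal smoking, negative for the mecamylamine
    cigarette), then the level partially decays back toward baseline."""
    level, trace = baseline, []
    for _ in range(n_puffs):
        level += puff_delta
        trace.append(level)
        level = baseline + (level - baseline) * decay
    return trace

normal = satisfaction_trace(baseline=0.0, puff_delta=1.0, n_puffs=3)    # FIG. 6
inverse = satisfaction_trace(baseline=2.0, puff_delta=-1.0, n_puffs=3)  # FIG. 7

print(normal)   # [1.0, 1.5, 1.75] -- rises with each puff
print(inverse)  # [1.0, 0.5, 0.25] -- elevated baseline, dips with each puff
```

The contrast between the two traces is the inverse conditioning itself: in the pre-loaded case every puff of the artificial cigarette moves satisfaction below the patch-maintained baseline.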
Those artificial smoking devices which are illustrated and described in the aforementioned patent (New, U.S. Pat. No. 4,801,992) may be used for the purposes of administering the antagonist in accordance with this embodiment of the invention. One such smoking device is illustrated in FIG. 2 of the drawings and comprises an outer housing 20 which may adopt the size and shape of a conventional cigarette. Thus the housing 20 is elongate and cylindrically shaped. Located within the housing 20 is a concentrically disposed, diametrically reduced, cylindrically shaped divider tube 22 which defines an inner chamber 24 containing a suitable ignitable fluid, such as a conventional lighter fluid, e.g. liquid propane, butane, or the like. Any conventional petroleum distillate, such as conventional lighter fluids, may be employed for this purpose.
A pair of intermediate discs or walls 26 and 28 are located within the housing 20. Concentrically disposed within the divider tube 22 is an elongate smoke delivery tube 30. Mounted on the right-hand end of the smoke delivery tube 30, reference being made to FIG. 2, is a filter 32 to create a draw or inhalation resistance, as in a conventional cigarette filter. Located at the opposite end of the smoke delivery tube 30 is an air-tight reservoir or ampule 34 containing the antagonist, mecamylamine. The ampule 34 may be formed of glass or graphite or another inert material.
Extending from the chamber 24 containing the ignitable fluid is a wick 36 having an end 38 capable of being ignited to create a flame. However, by reference to FIG. 2, it can be observed that the ampule 34 is located in a position where it is not immediately above the flame of the burnable end 38 and therefore is not heated. When the smoker desires to create an aerosol of the constituents in the ampule 34, the smoke delivery tube 30 and hence the ampule 34 are shifted to the left, reference being made to FIG. 2. The constituents in the ampule 34 will become heated and form an aerosol. The ampule 34 may be provided with an air inlet opening 40 so that when a suction is imparted to the ampule 34, through the smoke delivery tube 30, the aerosol will travel through the smoke delivery tube 30 and through the filter 32. A similar opening 42 may be formed in the end wall of the housing 20 for this purpose. The air inlet opening 42 also operates as a vent to prevent excessive pressure build-up within the ampule 34.
Thus, when the smoker desires to inhale, an aerosolized charge of the mecamylamine will be generated and inhaled, providing the negative effects described above. The smoker thereby pushes on the smoke delivery tube 30 so that the ampule 34 is located above the flame of the burnable end 38, and within a matter of a few seconds a sufficient amount of aerosol has formed, equivalent to that which would be generated in a puff by the smoker on a normal cigarette. This aerosol could even be visually similar to normal cigarette smoke. It could also incorporate agents to provide a similar aroma and other sensory qualities resembling cigarette smoke.
A spring 44 is disposed between the right-hand end wall 26 in the outer housing 20 and the filter 32. Moreover, an outer sleeve 46 is disposed over the spring 44 between the filter 32 and the housing 20. The housing 20, as well as the sleeve 46, may both be formed of a suitable paperboard material which is relatively inexpensive so that the entire apparatus functions as a disposable cigarette which may be disposed of when the charge of antagonist in the ampule 34 has been depleted. Otherwise, the entire apparatus of FIG. 2 could be constructed to be reusable with the ampule capable of being recharged.
It is possible to generate the aerosol at a relatively low temperature by boiling the liquid antagonist solution, namely the mecamylamine solution. For example, temperatures as low as 200 degrees C. could be used for generating a vapor and hence an aerosol of the substance. Consequently, there is no significant combustion and hence no poisonous products of combustion. Alternatively, a nebulizer, such as an ultrasonic nebulizer, could be used to create the aerosol, thereby eliminating any heating of the contents of the ampule.
FIG. 3 illustrates a modified form of an apparatus for smoking an aerosol, such as nicotine, which comprises a conventional filter-tipped cigarette 50 comprised of a paper tube 52 filled with ground tobacco 54. A conventional filter 56 is provided at the right-hand end of the cigarette in a conventional manner. The present invention provides an elongate tube 58 extending concentrically within the cigarette from one end thereof to the other, open at each of the opposite ends. The tube 58 may also be formed of a paper material or a similar burnable substance.
The tube 58 is provided on its interior with an antagonist such as mecamylamine. In this case, the antagonist could be in the form of a heavy gel which is coated on the interior surface of the tube 58. The burning cigarette 50 will provide the source of heat which will volatilize the mecamylamine. Thus, as the left-hand end of the cigarette is ignited, the heat from the burning end of the cigarette will volatilize the mecamylamine enabling the same to continue to be inhaled by the smoker upon puffing at the filter 56.
In accordance with the construction illustrated in FIG. 3, it can be observed that there is very little draw resistance through the tube 58. Approximately 90 percent of any intake of the aerosol or smoke will be through the tube 58. While any drawing of air through the burning end of the cigarette will indeed produce cigarette smoke, this will be in a relatively small quantity. Nevertheless, this embodiment of the invention is effective in that it will provide some nicotine to the user so that there may not be a complete withdrawal syndrome. Thus, the device could be constructed so that the amount of nicotine delivered is generally proportioned to the amount of the antagonist, mecamylamine. In this way, it is possible to tailor the device to the particular needs of an individual in whom smoking cessation is desired. This type of smoking apparatus is also effective in that it will have the feel and appearance of a conventional cigarette and will also enable the smoker to receive unaltered cigarette smoke along with the antagonist. If the amount of the antagonist is substantially greater than the amount of nicotine in the cigarette smoke, there will still be a net effect of withdrawal. This further reinforces the negative association of smoking with nicotine-induced satisfaction.
FIGS. 4 and 5 illustrate another modified form of smoking device 60 which is very similar in construction to the smoking device 50 of FIG. 3 in that it employs a conventional filter-tip cigarette. In this embodiment, the elongate smoke delivery tube 58 is provided on its interior surface with a plurality of radially inwardly extending, somewhat flexible fingers 62. These fingers may actually adopt the form of filaments and are sufficiently flexible so as to yield to the draw of aerosol through the delivery tube 58. The fingers 62 are spaced at regular intervals around the interior surface of the delivery tube 58 and are effective to preclude any of the liquid antagonist from rolling out of the delivery tube 58 when the cigarette is held in a vertical position with the outer end located downwardly. Thus, the liquid antagonist will collect in the spaces 64 between the points of connection of the fingers 62 and the interior surface of the wall of the tube 58.
It can be observed that the smoking device 60 is also effective in generating an aerosol or smoke of the antagonist similar to the device of FIG. 3. Thus, this device would also be effective in increasing the negative effect associated with the smoking of a cigarette.
It can be observed, in connection with this approach for reducing the dependence on drugs, and in particular nicotine, that there is an inverse conditioning to tobacco smoke whereby the act of smoking is negatively correlated with nicotine effects. The smoker is given nicotine through the patch or other means as aforesaid, and the mecamylamine is obtained through an artificial cigarette which may simulate the appearance, size and shape of a conventional cigarette. As indicated above, this smoking would be associated with decreased nicotine stimulation and the unpleasant effects of withdrawal. As long as the subject did not resume tobacco smoking through conventional cigarettes, this inverse conditioning would lessen the temptation provided by smoke from external sources, as for example from third parties' cigarettes.
In the case of cocaine, amphetamines and other stimulants, an agonist such as bromocriptine could be used. The bromocriptine would typically be administered in an amount of about 40 to 100 milligrams per day. Again, this agonist could be taken orally or otherwise by means of a transdermal patch. With a transdermal patch, however, the bromocriptine agonist would typically be administered at a rate of about 10 to 25 milligrams per day. The antagonist for cocaine, amphetamines and these other stimulants would be fluphenazine, such as fluphenazine hydrochloride. This fluphenazine hydrochloride would be taken at a rate of about 20 to 60 milligrams per day orally. In like manner, fluphenazine decanoate could also be used as an antagonist at a rate of about 25 to 75 milligrams in each three-week period. This latter antagonist would be injected intramuscularly.
With respect to heroin and other opiates, the agonist is methadone, which would normally be administered orally in an amount of about 30 to about 120 milligrams per day. The antagonist for the opiates is naltrexone, as aforesaid. The naltrexone would be administered at a rate of about 40 to about 70 milligrams per day. Again, the naltrexone could be administered orally or intravenously, although it is preferably administered intravenously so as to negatively correlate that route with the effects of the abused drug. It should be understood that other drug dependencies, such as those involving barbiturates, benzodiazepines and alcohol, could also be treated with this combination of agonist and antagonist treatment. Dangerous withdrawal symptoms would still need to be carefully monitored and controlled.
The invention is also effective in therapeutically treating psychophysiological diseases and disorders involving neuronal dysregulation and which may also involve a drug use. As indicated previously, psychiatric disorders which involve the dysregulation of a neurotransmitter including manic depression and schizophrenia could be treated by the method of the present invention. In this case, for treatment of psychiatric disorders of this type, bromocriptine would be used as an agonist, generally in the amounts used for treatment of cocaine. The fluphenazine, or the other derivatives thereof, would be used as the antagonist, also in the ranges as indicated above.
Diseases involving the imbalances of the autonomic nervous system can also be treated by this agonist-antagonist drug administration. Sympathetic nervous system disorders, such as hypertension, could also be treated with administration of adrenergic agonists and antagonists. Parasympathetic nervous system disorders could also be treated with concurrent treatment of muscarinic cholinergic agonists and antagonists.
The present invention also provides a unique composition of the agonist and the antagonist as indicated above. This agonist and antagonist combination is effective and unique in that it would not be expected to provide the beneficial
results described herein. One would assume that the agonist and the antagonist would counter the effects of each other, thereby providing essentially no effect whatsoever. However, it has been determined in accordance with the present invention that this combination of agonist and antagonist is highly beneficial, as previously described. It is important to provide a single composition to the user so that the user cannot discard the antagonist and resort back to the old habit of using the agonist alone. This is particularly true with the highly addictive drugs, such as the opiates and the like. Thus, the composition of the invention is an important contribution to the practice of the method and is highly effective therefor.
EXAMPLES
The invention is further illustrated by but not limited to the following examples.
Example 1
Approximately three milligrams per hour of nicotine is administered to a subject having a nicotine dependency. The nicotine is administered by means of a transdermal patch applied to the forearm of the subject. Mecamylamine is also administered simultaneously through the same patch at a rate of 15 to 30 milligrams per day. It is found that this type of administration substantially occupies the receptors in the brain of the subject thereby reducing the desire for nicotine intake.
Example 2
In Example 2, a subject having a nicotine dependency is treated by administration of the agonist-antagonist combination in accordance with the present invention. In this case, the nicotine is administered to the subject through a transdermal patch such that there are about 20 nanograms of nicotine in each milliliter of the subject's bloodstream at any point in time.
The subject is provided with a simulated cigarette or so-called artificial cigarette, similar to that illustrated in FIG. 3 of the drawings. The ampule of that artificial cigarette is provided with mecamylamine. When the user draws upon the filter of the cigarette, he receives a charge of mecamylamine which causes a withdrawal effect. This, in turn, inversely conditions the smoker and ultimately causes the smoker to associate negative feelings with smoking of cigarettes thereby reducing the tendency of the smoker to resort to a smoking habit.
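As a hedged aside on the 20 ng/mL figure in this example, the total nicotine circulating in the blood can be estimated. The roughly 5-liter adult blood volume is an illustrative assumption, and distribution into tissue is ignored.

```python
# Hedged sketch: total circulating nicotine implied by the 20 ng/mL
# blood level above. The ~5 L blood volume is an illustrative
# assumption; tissue distribution is ignored.

def total_in_blood_ug(conc_ng_per_ml, blood_volume_l=5.0):
    """Total drug in circulating blood, in micrograms.
    A concentration in ng/mL is numerically equal to ug/L, so
    multiplying by the blood volume in liters yields micrograms."""
    return conc_ng_per_ml * blood_volume_l

print(total_in_blood_ug(20.0))  # 100.0 micrograms
```

At about 100 micrograms circulating at any instant, the blood pool is small compared with the milligrams-per-hour patch delivery rates discussed earlier, which is consistent with continuous replacement by the patch.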
Example 3
A subject having a heroin addiction is treated by cessation of all heroin administration. The subject is provided with about 70 milligrams per day of methadone, administered orally. Simultaneously, the subject is also provided with naltrexone at a rate of about 50 milligrams per day, also administered orally.
The brain receptors for these opiates are substantially filled thereby reducing the desire of the subject for further intake of heroin or an agonist thereof.
Thus, there has been illustrated and described a unique and novel combination of an agonist and an antagonist for the treatment of drug dependency and other psychophysiological diseases and disorders. The present invention thereby fulfills all of the objects and advantages which have been sought therefor. It should be understood that many changes, modifications, variations and other uses and applications will become apparent to those skilled in the art after considering this specification and the accompanying drawings. Therefore, any and all such changes, modifications, variations and other uses and applications which do not depart from the spirit and scope of the invention are deemed to be covered by the invention.
Having thus described our invention, what we desire to claim and secure by letters patent is:
1. A pharmacologic composition for the treatment and reduction of dependency on an abused stimulating drug selected from the class consisting of cocaine and amphetamines where the addictive effects of this drug causes activation of receptors which are actuated by the drug, said composition comprising:
a) a bromocriptine agonist which causes receptors for the drug to become activated, said bromocriptine agonist being present in the composition in an amount to provide a daily dose of this agonist of 40 to 100 milligrams per day and thereby partially satiate the needs for the drug by a subject using the composition;
b) an antagonist selected from the class consisting of fluphenazine, fluphenazine hydrochloride and fluphenazine decanoate sufficient to at least partially block the effects of the drug, the receptors which are responsive to the drug also being sensitive to the antagonist, said amounts of the drug and the antagonist being sufficient so that there is a substantial systemic amount of the antagonist present when there is a substantial systemic amount of the drug present in the blood of a user of the composition, the drug or the agonist and the antagonist, when administered in said daily dose amounts, will cause the activation of the receptors for the drug and also reduce a satisfaction of a subject when the drug is administered and also reduce a state of withdrawal from the drug in the subject, such that the drug or agonist is complemented by the antagonist to occupy a greater number of receptors of the subject using the drug than would be occupied by the drug alone.
2. The composition of claim 1 further characterized in that the drug and the antagonist are administered contemporaneously.
3. A pharmacologic composition for the treatment and reduction of dependency on an abused stimulating drug selected from the class consisting of cocaine and amphetamines where the addictive effects of the drug causes activation of receptors for the drug, said composition comprising:
a) a bromocriptine agonist which causes receptors for the drug to become activated, said bromocriptine agonist being present in the composition in an amount to provide a daily dose of this agonist in the amount of 10-25 milligrams per day through a transdermal patch and thereby partially satiate the needs for the drug by a subject using the composition;
b) an antagonist selected from the class consisting of fluphenazine, fluphenazine hydrochloride and fluphenazine decanoate with the fluphenazine and fluphenazine hydrochloride being present in an amount to provide a daily dose of 20-60 milligrams per day and the fluphenazine decanoate being present in an amount to provide 20-75 milligrams per three week period and being sufficient to at least partially block the effects of the drug, the receptors which are responsive to the drug and also being sensitive to the antagonist, such that there is a substantial systemic amount of the drug present in the blood of a subject using the composition, the drug or the agonist and the antagonist
when administered in said daily dose amounts will preclude intoxication or overdosing and also reduce a satisfaction of a subject when the stimulating drug is administered and also reduce a number of withdrawals from the drug in the subject, such that the drug or agonist is complemented by the antagonist to occupy a greater number of receptors of the subject of the drug than would be occupied by the drug alone.
4. The composition of claim 3 further characterized in that the drug and the antagonist are administered contemporaneously.
5. A pharmacologic composition for the treatment and reduction of dependency on an abused stimulating opiate drug where the addictive effects of this drug causes activation of receptors for the drug, said composition comprising:
a) a methadone agonist which causes receptors for the drug to become activated, said methadone agonist being present in the composition in an amount to provide a daily dose of this agonist at 30–70 milligrams per day and thereby partially satiate the need for the drug by a subject using the composition;
b) a naltrexone antagonist present in an amount to provide a daily dose of about 40–70 milligrams per day and sufficient to at least partially block the effects of the drug, the receptors which are responsive to the drug also being sensitive to the antagonist, said amounts of the drug and the antagonist being sufficient so that there is a substantial systemic amount of the antagonist present when there is a substantial systemic amount of the drug present in the blood of a subject using the composition, the number of receptors of the subject which are activated by the drug or agonist and blocked by the antagonist being greater than the number of receptors which would be occupied by the agonist alone or antagonist alone.
6. The composition of claim 5 further characterized in that the agonist and the antagonist are administered contemporaneously.
7. A pharmacologic composition for the treatment of physiological dysfunction of a subject in which brain receptors of the subject are subject to under-activity and over-activity and where the composition reduces this under-activity and over-activity, said composition comprising a bromocriptine agonist in the composition in an amount to provide a dosage rate of 10–25 milligrams per day by a transdermal patch which prevents under-activity of the brain receptors causing this physiological dysfunction and an antagonist to the agonist present in the composition and being selected from the class consisting of fluphenazine, fluphenazine hydrochloride and fluphenazine decanoate, said fluphenazine antagonist or fluphenazine hydrochloride antagonist being present if used in an amount to provide a daily dosage rate of 20–60 milligrams per day and the fluphenazine decanoate if used in an amount to provide a daily dosage rate of 20–75 milligrams per day, and where the amounts of the antagonist are sufficient to prevent 
under-activity of the receptors and at least partially block the pharmacologic effects of the agonist, said amounts of agonist and antagonist being such that there is always a substantial systemic amount of the antagonist present when there is a substantial systemic amount of the agonist present in the subject using the composition and the receptor amounts also being sufficient to preclude intoxication and to also prevent under-activity of the receptors such that the agonist is complemented by the antagonist to occupy a number of receptors of the subject which is greater than the number of receptors which would be occupied by the agonist alone.
8. The composition of claim 7 further characterized in that the agonist and the antagonist are administered contemporaneously.
9. A pharmacologic composition for the treatment of physiological dysfunction of a subject in which brain receptors of the subject are subject to under-activity or over-activity and where the composition reduces the over-activity and under-activity, said composition comprising a bromocriptine agonist in the composition in an amount to provide a dosage rate of 10–25 milligrams per day by a transdermal patch which prevents under-activity of the brain receptors causing this physiological dysfunction and an antagonist to the agonist present in the composition and being selected from the class consisting of a fluphenazine, fluphenazine hydrochloride and fluphenazine decanoate, said fluphenazine antagonist or fluphenazine hydrochloride antagonist being present in the composition to provide a daily dosage rate of 20 to 60 milligrams per day and the fluphenazine decanoate if used present in an amount to provide a daily dosage rate of 20–75 milligrams per day and where the amounts of the antagonist are sufficient to prevent over-activity of the receptors and at least partially block the pharmacologic effects of the agonist, said amounts of agonist and antagonist being such that there is always a substantial systemic level of the antagonist present in the subject using the composition and the respective amounts also being sufficient so that the drug or agonist precludes intoxication and the combination of the agonist and antagonist also prevents under-activity of the receptors such that the agonist is complemented by the antagonist to occupy a number of receptors of the subject which is greater than the number of receptors which would be occupied by the agonist alone.
10. The composition of claim 9 further characterized in that the agonist and the antagonist are administered contemporaneously.
11. A pharmacologic composition for the treatment and reduction of dependency on an abused stimulating drug selected from the class consisting of cocaine and amphetamines where the addictive effects of this drug causes activation of receptors which are actuated by the drug, said composition comprising:
a) an agonist selected from the class consisting of amphetamines and cocaine and bromocriptine which causes receptors for the drug to become activated, said agonist being present in the composition in an amount to provide a daily dose of this agonist of 40 to 100 milligrams per day and thereby partially satiate the needs for the drug by a subject using the composition;
b) an antagonist selected from the class consisting of fluphenazine, fluphenazine hydrochloride and fluphenazine decanoate sufficient to at least partially block the effects of the drug, the receptors which are responsive to the drug also being sensitive to the antagonist, said amounts of the drug and the antagonist sufficient so that there is a substantial systemic amount of the antagonist present when there is a substantial systemic amount of the drug present in the blood of a user of the composition, the drug or the agonist and the antagonist when administered in said daily dose amounts will preclude intoxication or overdosing and also reduce a satisfaction of a subject when the drug is administered and also reduce a state of withdrawal from the drug in the subject, such that the drug or agonist is complemented by the antagonist to occupy a greater number of receptors of the subject using the drug than would be occupied by the drug alone.
12. The composition of claim 11 further characterized in that the drug and the antagonist are administered contemporaneously.
13. A pharmacologic composition for the treatment and reduction of dependency on an abused stimulating drug selected from the class consisting of cocaine and amphetamine where the addictive effects of the drug causes activation of receptors for the drug, said composition comprising:
a) an agonist selected from the class consisting of amphetamines and cocaine and bromocriptine which causes receptors for the drug to become activated, said agonist being present in the composition in an amount to provide a daily dose of this agonist in the amount of 10–25 milligrams per day through a transdermal patch and thereby partially satiate the needs for the drug by a subject using the composition; and
b) an antagonist selected from the class consisting of fluphenazine, fluphenazine hydrochloride and fluphenazine decanoate with the fluphenazine and fluphenazine hydrochloride being present in an amount to provide a daily dose of 20–60 milligrams per day and the fluphenazine decanoate being present in an amount to provide 20–75 milligrams per three week period and being sufficient to at least partially block the effects of the drug and the antagonist being sufficient so that there is always a substantial systemic level of the antagonist present when there is a substantial systemic amount of the drug present in the blood of a subject using the composition, the drug or the agonist and the antagonist when administered in said daily dose amounts will preclude intoxication or overdosing and also reduce a satisfaction of a subject when the stimulating drug is administered and also reduce a state of withdrawal from the drug in the subject such that the drug or agonist is complemented by the antagonist to occupy a greater number of receptors of the subject of the drug than would be occupied by the drug alone.
14. The composition of claim 13 further characterized in that the drug and the antagonist are administered contemporaneously.
15. A pharmacologic composition for the treatment of physiological dysfunction of a subject in which brain receptors of the subject are subject to under-activity and over-activity and where the composition reduces this under-activity and over-activity, said composition comprising an agonist selected from the class consisting of amphetamines and cocaine and bromocriptine in the composition in an amount to provide a daily dosage rate of 40 to 100 milligrams per day which prevents under-activity of the brain receptors causing this physiological dysfunction and an antagonist to the agonist present in the composition and being selected from the class consisting of fluphenazine, fluphenazine hydrochloride and fluphenazine decanoate, said fluphenazine antagonist or fluphenazine hydrochloride antagonist being present in the composition to provide a daily dosage rate of 20 to 60 milligrams per day and the fluphenazine decanoate if used present in an amount to provide a daily dosage rate of 20–75 milligrams per day and where the amount of the antagonist is sufficient to prevent over-activity of the receptors and at least partially block the pharmacologic effects of the agonist, said amounts of agonist and antagonist being such that there is always a substantial systemic amount of the antagonist present when there is a substantial systemic amount of the drug present in the subject using the composition and the receptor amounts also being sufficient to preclude intoxication and to also prevent under-activity of the receptors such that the agonist is complemented by the antagonist to occupy a number of receptors of the subject which is greater than the number of receptors which would be occupied by the agonist alone.
16. The composition of claim 15 further characterized in that the agonist and the antagonist are administered contemporaneously.
17. A pharmacologic composition for the treatment of physiological dysfunction of a subject in which brain receptors of the subject are subject to under-activity and over-activity and where the composition reduces the over-activity and under-activity, said composition comprising an agonist selected from the class consisting of amphetamines and cocaine and bromocriptine in the composition in an amount to provide a daily rate of 40 to 100 milligrams per day by a transdermal patch which prevents under-activity of the brain receptors causing this physiological dysfunction and an antagonist to the agonist present in the composition and being selected from the class consisting of a fluphenazine, fluphenazine hydrochloride and fluphenazine decanoate, said fluphenazine antagonist or fluphenazine hydrochloride antagonist being present in the composition to provide a daily dosage rate of 20 to 60 milligrams per day and the fluphenazine decanoate if used present in an amount to provide a daily dosage rate of 20–75 milligrams per day and where the amount of the antagonist is sufficient to prevent over-activity of the receptors and at least partially block the pharmacologic effects of the agonist, said amounts of agonist and antagonist being such that there is always a substantial systemic level of the antagonist present in the subject using the composition and the respective amounts also being sufficient so that the drug or agonist precludes intoxication and the combination of the agonist and antagonist also prevents under-activity of the receptors such that the agonist is complemented by the antagonist to occupy a number of receptors of the subject which is greater than the number of receptors which would be occupied by the agonist alone.
18. The composition of claim 17 further characterized in that the agonist and the antagonist are administered contemporaneously.
CHAPTER 11: COMMUNICATION WITH THE PUBLIC
Once you have prepared and submitted your RMP, EPA will make it available to the public. CAA sections 112(r) and 114(c) require that RMPs be made available to the public, except for any classified or confidential business information contained in RMPs or the off-site consequence analysis (OCA) sections of RMPs (sections 2 through 5). Members of the public may obtain copies of RMPs (without the OCA sections) by requesting them from EPA in writing (including by email). Members of the public may also read, but not copy, the OCA sections of RMPs in federal reading rooms. There is a monthly limit on the number of facilities for which any member of the public can view OCA information in reading rooms.
In view of the public’s access to RMPs, you should expect that your community will discuss the hazards and risks associated with your facility as indicated in your RMP. You will necessarily be part of such discussions. The public and the press are likely to ask you questions because only you can provide specific answers about your facility and your accident prevention program. This dialogue is a most important step in preventing chemical accidents and should be encouraged. You should respond to these questions honestly and candidly. Refusing to answer, reacting defensively, or attacking the regulation as unnecessary are likely to make people suspicious and willing to assume the worst. A basic fact of risk communication is that trust, once lost, is very hard to regain. As a result, you should prepare as early as possible to begin talking about these issues with the community, Local Emergency Planning Committees (LEPCs), State Emergency Response Commissions (SERCs), other local and state officials, and other interested parties.
Another reason that the public and press may ask questions about your facility is the increased concern about domestic terrorism issues since September 11, 2001. The fact that your facility is regulated under the EPA Risk Management Program means that you probably store relatively large quantities of extremely toxic or flammable substances, and people may be concerned about security at your facility.
Communication with the public can be an opportunity to develop your relationship with the community and build a level of trust among you, your neighbors, and the community at large. By complying with the RMP rule, you are taking a number of steps to prevent accidents and protect the community. These steps are the individual elements of your risk management program. A well-designed and properly implemented risk management program will set the stage for informative and productive dialogue between you and your community. The purpose of this chapter is to suggest how this dialogue may occur. In addition, note that some industries have developed guidance and other materials to assist in this process; contact your trade association for more information.
11.1 BASIC RULES OF RISK COMMUNICATION
Risk communication means establishing and maintaining a dialogue with the public about the hazards at your operation and discussing the steps that have been or can be taken to reduce the risk posed by these hazards. Of particular concern under this rule are the hazards related to the chemicals you use and what would happen if you had an accidental release.
Many companies, government agencies, and other entities have confronted the same issue you may face: how to discuss with the public the risks the community is subject to. Exhibit 11-1 outlines seven “rules” of risk communication that have been developed based on many experiences of dealing with the public about risks.
A key message of these “rules” is the importance and legitimacy of public concerns. People generally are less tolerant of risks they cannot control than those they can. For example, most people are willing to accept the risks of driving because they have some control over what happens to them. However, they are generally more uncomfortable accepting the risks of living near a facility that handles hazardous chemicals if they feel that they have no control over whether the facility has an accident. The Clean Air Act’s provision for public availability of RMPs gives the public an opportunity to take part in reducing the risk of chemical accidents that might occur in their community.
**Exhibit 11-1: Seven Cardinal Rules of Risk Communication**
1. Accept and involve the public as a legitimate partner
2. Plan carefully and evaluate your efforts
3. Listen to the public’s specific concerns
4. Be honest, frank, and open
5. Coordinate and collaborate with other credible sources
6. Meet the needs of the media
7. Speak clearly and with compassion
**Hazards versus Risks**
Dialogue in the community will be concerned with both hazards and risks; it is useful to be clear about the difference between them.
Hazards are inherent properties that cannot be changed. Chlorine is toxic when inhaled or ingested; propane is flammable. There is little that you can do with these chemicals to change their toxicity or flammability. If you are in an earthquake zone or an area affected by hurricanes, earthquakes and hurricanes are hazards. When you conduct your hazard review or process hazards analysis, you will be identifying your hazards and determining whether the potential exposure to the hazard can be reduced in any way (e.g., by limiting the quantity of chlorine stored on-site).
Risk is usually evaluated based on several variables, including the likelihood of a release occurring, the inherent hazards of the chemicals combined with the quantity released, and the potential impact of the release on the public and the environment. For example, if a release during loading occurs frequently, but the quantity of chemical released is typically small and does not generally migrate offsite, the overall risk to the public is low. If the likelihood of a catastrophic release occurring is extremely low, but the number of people who could be affected if it occurred is large, the overall risk may still be low because of the low probability that a release will occur. On the other hand, if a release occurs relatively frequently and a large number of people could be affected, the overall risk to the public is high.
The rule does not require you to assess risk in a quantitative way because, in most cases, the data you would need to estimate risk levels (e.g., one in 100 years) are not available. Even in cases where data such as equipment failure rates are available, there are large uncertainties in using that data to determine a numerical risk level for your facility, because your facility is probably not the same as other facilities, and your situation may be dynamic. Therefore, you may want to assign qualitative values (high, medium, low) to the risks that you have identified at your facility, but you should be prepared to explain the terms if you do. For example, if you believe that the worst-case release is very unlikely to occur, you must give good reasons; you must be able to provide specific examples of measures that you have taken to prevent such a release, such as installation of new equipment, careful training of your workers, rigorous preventive maintenance, etc. You should also be able to show documentation to support your claim.
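The qualitative high/medium/low approach suggested above can be sketched as a simple lookup. This is an illustrative sketch only: the rating names and the combinations in the table are assumptions chosen to mirror the examples in the preceding paragraphs, not anything prescribed by the RMP rule.

```python
# Illustrative sketch: combining qualitative likelihood and consequence
# ratings into an overall risk rating. The table is an assumption chosen
# to mirror the text's examples (frequent-but-small release -> low;
# unlikely-but-large release -> low; frequent-and-large -> high).

LEVELS = ("low", "medium", "high")

RISK_MATRIX = {
    ("low", "low"): "low",
    ("low", "medium"): "low",
    ("low", "high"): "low",       # very unlikely, even if consequences are large
    ("medium", "low"): "low",
    ("medium", "medium"): "medium",
    ("medium", "high"): "medium",
    ("high", "low"): "low",       # frequent but small and stays on-site
    ("high", "medium"): "medium",
    ("high", "high"): "high",     # frequent and many people affected
}

def qualitative_risk(likelihood: str, consequence: str) -> str:
    """Look up an overall qualitative risk rating."""
    if likelihood not in LEVELS or consequence not in LEVELS:
        raise ValueError("ratings must be one of: low, medium, high")
    return RISK_MATRIX[(likelihood, consequence)]

print(qualitative_risk("high", "low"))   # frequent small loading release -> low
print(qualitative_risk("high", "high"))  # frequent large release -> high
```

However you draw the table, be prepared to explain each rating in the same way the text suggests: with specific prevention measures and documentation behind any “low” you assign.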
**WHO WILL ASK QUESTIONS?**
Your Local Emergency Planning Committee (LEPC) and other facilities can help you identify individuals in the following groups who may be reviewing RMP data and asking questions. Interested parties may include:
(1) Persons living near the facility and elsewhere in the community or working at a neighboring facility
(2) Local officials from zoning and planning boards, fire and police departments, health and building code officials, elected officials, and various county and state officials
(3) Your employees
(4) Special interest groups including environmental organizations, chambers of commerce, unions, and various civic organizations
(5) Journalists, reporters, and other media representatives
(6) Medical professionals, educators, consultants, neighboring companies and others with special expertise or interests
In general, people will be concerned about accident risks at your facility, how you manage the risks, and potential impacts of an accident on health, safety, property, natural resources, community infrastructure, community image, property values, and other matters. Those individuals in the public and private sector who are responsible for dealing with these impacts and the associated risks also will have an interest in working with you to address these risks.
**WHAT INFORMATION ABOUT YOUR FACILITY IS AVAILABLE TO THE PUBLIC?**
As noted above, EPA is legally required to make RMPs available to the public, except for any CBI they may contain. Public access to the OCA sections of RMPs is restricted, but the public may still read, if not copy, those sections. Under the regulations governing public access to the OCA sections of RMPs (see 40 CFR Part 1400), any member of the public may read those sections for up to 10 facilities per month, with no restriction on the geographical location of the facilities for which that information is sought. Any member of the public may also read the OCA sections of RMPs for any facility in or potentially affecting the jurisdiction of the local emergency planning committee in which that person lives or works.
Even though most of the contents of RMPs are available to the public, it is likely that people will want additional information. People who request copies of RMPs will be aware that the RMPs they receive do not include the OCA sections. Interested persons may also know that you retain additional information at your facility (e.g., documentation of the results reported in the OCA sections of your RMP) and are required to make it available to EPA or its implementing agency during inspections or compliance audits. Therefore, they may request additional information. EPA encourages you to provide public access to additional information upon reasonable request. If EPA or its implementing agency were to request additional information, it would be available to the public under section 114(c) of the CAA, except for confidential business information and OCA information.
The public may also be interested in other information relevant to risk management at your facility, such as:
- Submissions under sections 302, 304, 311-312, and 313 of the Emergency Planning and Community Right to Know Act (EPCRA) reporting on chemical storage and releases, as well as the community emergency response plan prepared under EPCRA section 303
- Other reports on hazardous materials made, used, generated, stored, spilled, released and transported, that you submitted to federal, state, and local agencies
- Reports on workplace safety and accidents developed under the Occupational Safety and Health Act that you provide to employees, who may choose to make the information publicly available, such as medical and exposure records, chemical data sheets, and training materials
- Any other information you have provided to public agencies that can be accessed by members of the public under the federal Freedom of Information Act and similar state laws (and that may have been made widely available over the Internet)
- Any published materials on facility safety (either industry- or site-specific), such as agency reports on facility accidents, safety engineering manuals and textbooks, and professional journal articles on facility risk management, for example
11.2 SAMPLE QUESTIONS FOR COMMUNICATING WITH THE PUBLIC
Smaller businesses may not have the resources or time to develop the types of outreach programs, described later in this chapter, that many larger chemical companies have used to handle public questions and community relations. For many small businesses, communication with the public will usually occur when you are asked questions about information in your RMP. It is important that you respond to these questions constructively. Go beyond just answering questions; discuss what you have done to prevent accidents and work with the community to reduce risks. The people in your community will be looking to you to provide answers.
To help you establish a productive dialogue with the community, the rest of this section presents questions you are likely to be asked and a framework for answering them. These are elements of the public dialogue that you may anticipate. The person from your facility designated as responsible for communicating with the public should review the following and talk to other community organizations to determine which questions are most likely to be raised and identify other foreseeable issues. Remember that others in the community, notably LEPCs and other emergency management organizations, are also likely to be asked these and other similar questions. You should consider the unique features of your facility, your RMP, and your historical relationship with the community (e.g., prior accidents, breakdowns in the coordination of emergency response efforts, and management-labor disputes), and work together with these other organizations to answer these questions for your situation and to resolve the issues associated with them.
What does your worst-case “distance to endpoint” or release distance mean?
The distance is intended to provide an estimate of the maximum possible area that might be affected by a catastrophic release from your facility. It is intended to ensure that no potential risks to public health are overlooked, but the distance to an endpoint estimated under worst-case conditions should not be considered a “public danger zone.”
In most cases, the mathematical models used to analyze the worst-case release scenario as defined in the rule may overestimate the area that would be impacted by a release. In other cases, the models may underestimate the area. For distances greater than approximately six miles, the results of toxic gas dispersion models are especially uncertain, and you should be prepared to discuss such possibilities in an open, honest manner.
Reasons that modeling may underestimate the distance generally relate to the inability of some models to account for site-specific factors that might tend to increase the actual endpoint distance. For example, assume a facility is located in a river valley and handles dense toxic gases such as chlorine. If a release were to occur, the river valley could channel the toxic cloud much farther than it might travel if it were to disperse in a location with generally flat terrain. In such cases, the actual endpoint distance might be longer than that predicted using generic lookup tables.
Reasons that the area may be overestimated include:
- For toxics, the weather conditions (very low wind speed, calm conditions) assumed for a worst-case release scenario are uncommon and probably would not last as long as the time the release would take to travel the distance estimated. If weather conditions are different, the distance to endpoint would be much shorter, because the release would be dispersed, and in the process diluted, more quickly.
- For flammables, although explosions can occur, a release of a flammable is more likely to disperse harmlessly or burn. If an explosion does occur, however, debris from the blast could affect an even broader area than would be indicated by the distance to endpoint for the flammable substance.
- In general, some models cannot take into account other site-specific factors that might tend to disperse the chemicals more quickly and limit the distance.
Note: When estimating worst case release distances, the rule does not allow facilities to take into account active mitigation systems and practices that could limit the scope of a release. Specific systems (e.g., monitoring, detection, control, pressure relief, alarms, mitigation) may limit a release or prevent the failure from occurring. Also, if you are required to analyze alternative release scenarios (i.e., if your facility is in Program 2 or Program 3), these scenarios are generally more realistic than the worst case, and you can offer to provide additional information on those scenarios.
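To see why the assumed weather conditions dominate the result, consider a minimal Gaussian-plume sketch. This is not the modeling method required by the rule; the dispersion coefficients approximate the Briggs open-country fits, and the release rate is chosen only for illustration (the endpoint is the RMP toxic endpoint for chlorine, 0.0087 mg/L).

```python
import math

# Approximate Briggs open-country sigma fits (meters) for two cases:
# stable F (worst-case weather) and neutral D (a typical day).
def sigma_y(x, stable):
    return (0.04 if stable else 0.08) * x / math.sqrt(1 + 0.0001 * x)

def sigma_z(x, stable):
    if stable:
        return 0.016 * x / (1 + 0.0003 * x)
    return 0.06 * x / math.sqrt(1 + 0.0015 * x)

def centerline_conc(q, u, x, stable):
    """Ground-level centerline concentration (g/m^3) of a continuous,
    neutrally buoyant ground release; q in g/s, u in m/s, x in m."""
    return q / (math.pi * u * sigma_y(x, stable) * sigma_z(x, stable))

def distance_to_endpoint(q, u, endpoint, stable, x_max=50_000.0):
    """Distance (m) at which the decaying centerline concentration
    falls to the endpoint, found by bisection."""
    lo, hi = 1.0, x_max
    if centerline_conc(q, u, hi, stable) > endpoint:
        return hi  # endpoint lies beyond the search range
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if centerline_conc(q, u, mid, stable) > endpoint:
            lo = mid
        else:
            hi = mid
    return hi

Q = 1_000.0        # g/s, illustrative release rate
ENDPOINT = 0.0087  # g/m^3, RMP toxic endpoint for chlorine

worst = distance_to_endpoint(Q, u=1.5, endpoint=ENDPOINT, stable=True)
typical = distance_to_endpoint(Q, u=3.0, endpoint=ENDPOINT, stable=False)
print(f"worst-case (F stability, 1.5 m/s): {worst / 1000:.1f} km")
print(f"typical day (D stability, 3 m/s):  {typical / 1000:.1f} km")
```

With these illustrative inputs, the worst-case weather stretches the endpoint distance roughly tenfold compared with a typical day, which is one concrete way to explain why the worst-case circle should not be read as a “public danger zone.”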
What does it mean that we could be exposed if we live/work/shop/go to school X miles away?
(For an accident involving a flammable substance):
The distance means that people who are in that area around the facility could be hurt if the contents of a tank or other vessel exploded. The blast of the explosion could shatter windows and damage buildings. Injuries would be the result of the force of the explosion and of flying glass or falling debris.
(For an accident involving a toxic substance):
The distance is based on a concentration of the chemical that you could be exposed to for an hour without suffering irreversible health effects or other symptoms that would make it difficult for you to escape. If you are within that distance, you could be exposed to a greater concentration of the chemical. If you were exposed to higher levels for an extended period of time (10 minutes, 30 minutes, or longer), you could be seriously hurt. However, that does not mean that you would be. Remember, for worst case scenarios, the rule requires you to make certain conservative assumptions with respect to, for example, wind speed and atmospheric stability. If the wind speed is higher than that used in the modeling, or if the atmosphere is more unstable, a chemical release would be dispersed more quickly, and the distances would be much smaller and the exposure times would be shorter. If the question pertains to an alternative release scenario, you probably assumed typical weather conditions in the modeling. Therefore, the actual impact distance could be shorter or longer, and you should be prepared to acknowledge this and clearly explain how you chose the conditions for your release scenario.
In general, the possibility of harm depends on the concentration of the chemical you are exposed to and the length of time you are exposed.
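The concentration-and-time point above is often summarized with a “toxic load” relation, L = Cⁿ · t. The sketch below is illustrative only: the exponent n = 2 and the numbers are assumptions, not values from the RMP rule.

```python
# Hypothetical sketch: "toxic load" L = C**n * t, a common way to express
# that harm depends on both concentration and exposure duration. The
# exponent n = 2 and the example numbers are illustrative assumptions.

def toxic_load(conc_ppm: float, minutes: float, n: float = 2.0) -> float:
    """Toxic load for an exposure at conc_ppm lasting `minutes`."""
    return (conc_ppm ** n) * minutes

brief = toxic_load(20.0, 10.0)      # short exposure at higher concentration
prolonged = toxic_load(10.0, 40.0)  # four times longer at half the concentration
print(brief, prolonged)  # both exposures carry the same load: 4000.0 4000.0
```

With n = 2, halving the concentration means a person must be exposed four times as long before reaching the same load, which is why stopping a release quickly or dispersing it in moderate wind reduces the chance of injury so sharply.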
If there is a worst-case accident, will everyone within that distance be hurt? What about property damage?
It is important to remember that worst-case scenarios are very unlikely. In general, even if a very large accidental release did occur, not everyone within the circle defined by the worst-case distance to endpoint would be hurt.
In analyzing the potential consequences of a worst-case release, we look at two types of chemicals - toxics and flammables. Releases of flammables do not usually lead to explosions; released flammables are more likely to disperse without igniting. If the released flammable does ignite, a fire is more likely than an explosion, and fires are usually concentrated at the facility.
For an explosion, everyone within the distance-to-endpoint circle would certainly feel the blast wave since it would move in all directions at once. However, while some people within the circle could be hurt, it is unlikely that everyone would be since some people would probably be in less vulnerable locations. Most injuries would probably be due to the effects of flying glass, falling debris, or impact with nearby objects.
For toxic chemicals, whether someone within the circle is hurt by a release depends on many factors. First, the released chemicals would usually move in the direction of the wind (except for some dense gases, which may be constrained by terrain features to flow in a different direction). Generally, only people downwind from the facility would be at risk of exposure if a release occurred, and this is normally only a part of the population inside the circle. If the wind speed is moderate, the chemicals would disperse quickly, and people would be exposed to lower levels of the chemical. If the release is stopped quickly, they might be exposed for a very short period of time, which is less likely to cause injury. However, if the wind speed is low or the release continues for a long time, exposure levels will be higher and more dangerous. The population at risk would be a larger proportion of the total population inside the circle. You should be prepared to discuss both possibilities.
Generally, it is the people who are closest to the facility — within a half mile or less — who would face the greatest danger if a large accident occurred.
Damage to property and the environment will depend on the type of chemical released. In an explosion, environmental impacts and property damage may extend beyond the distance at which injuries could occur. For a vapor release, environmental effects and property damage may occur as a result of the reactivity or corrosivity of the chemical or toxic contamination.
How sure are you of your endpoint distances?
Perhaps the largest single difficulty associated with hazard assessment is that different models and modeling assumptions will yield somewhat different results. There is no one model or set of assumptions that will yield “certain” results. Models represent scientists’ best efforts to account for all the variables involved in an accidental release. While all models are generally based on the same physical principles, dispersion modeling is not an exact science due to the limited opportunity for real-world validation of results. No model is perfect, and every model represents a somewhat different analytical approach. As a result, for a given scenario, people can use different consequence models and obtain estimates of the distance to the toxic endpoint that in some situations might vary by a factor of ten. Even using the same model, different input assumptions can cause wide variations in the predictions. It follows that, when you present a single value as your best estimate of the endpoint distance, others will be able to claim that the answer ought to be different, perhaps greater, perhaps smaller, depending on the assumptions used in modeling and the choice of model itself.
You therefore need to recognize that your estimated distance lies within a considerable band of uncertainty, and to communicate this fact to those who have an interest in your results. A neighboring facility handling the same covered substances as you do may have come up with a different result for the same scenario for these reasons.
If you use EPA’s RMP Offsite Consequence Analysis Guidance or one of the industry-specific guidance documents that EPA has developed, you will be able to address the issue of uncertainty by stating that the results you have generated are conservative (that is, they are likely to overestimate distances). However, if you use other models, you will have to provide your own assessment of where your specific estimate lies within the plausible range of uncertainties.
Why do you need to store so much on-site?
If you have not previously considered the feasibility of reducing the quantity, you should do so when you develop your risk management program. Many companies have cited public safety concerns as a reason for reducing the quantities of hazardous chemicals stored on-site or for switching to non-hazardous substitutes. If you have evaluated your process and determined that you need a certain volume to maintain your operations, you should explain this fact to the public in a forthright manner. As appropriate, you should also discuss any alternatives, such as reducing storage quantities and scheduling more frequent deliveries. Perhaps these options are feasible - if so, you should consider implementing them; if not, explain why you consider these alternatives to be unacceptable. For example, in some situations, more frequent deliveries would mean more trucks carrying the substance through the community on a regular basis and a greater opportunity for smaller-scale releases because of more frequent loading and unloading.
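The delivery-frequency trade-off described in the last sentence can be made concrete with a toy calculation. All of the numbers below (annual usage, tank sizes, per-transfer release probability) are hypothetical assumptions for illustration only, not data from any facility:

```python
# Hypothetical trade-off: smaller on-site storage means more deliveries,
# each with some assumed chance of a minor loading/unloading release.
annual_use_lb = 400_000           # chemical consumed per year, lb (assumed)
release_p_per_transfer = 0.001    # assumed chance of a minor release per transfer

for tank_lb in (40_000, 10_000):  # large tank vs. reduced-inventory tank
    deliveries_per_yr = annual_use_lb / tank_lb
    expected_releases = deliveries_per_yr * release_p_per_transfer
    print(f"{tank_lb:>6,} lb tank: {deliveries_per_yr:.0f} deliveries/yr, "
          f"~{expected_releases:.3f} expected minor releases/yr")
```

Under these assumptions, cutting the tank size by a factor of four quadruples the number of transfers, and with it the expected number of minor loading and unloading releases. Whether that trade is worthwhile is exactly the judgment the paragraph asks you to explain to the public.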
What are you doing to prevent releases?
If you have rigorously implemented your risk management program, this question will be your chance, if you have not already done so, to tell the community about your prevention activities, the safe design features of your operations, the specific activities that you are performing such as training, operating procedures, maintenance, etc., and any industry codes or standards you use to operate safely. If you have installed new equipment or safety systems, upgraded training, or had outside experts review your site for safety (e.g., insurance inspectors), you could offer to share the results. You may also want to mention state or federal rules you comply with.
What are you doing to prepare for releases?
For such questions, you will need to talk about the coordination that you have done with the local fire department, LEPC, or mutual aid groups. Such coordination may include activities such as defining an incident command structure, developing notification protocols, conducting response training and exercises, developing mutual aid agreements, and evaluating public alert systems. This description is particularly important if your employees are not designated or trained to respond to releases of regulated substances.
If your employees will be involved in a response, you should describe your emergency response plan and the emergency response resources available at the facility (e.g., equipment, personnel), as well as through response contractors, if appropriate. You also may want to indicate the types of events for which such resources are applicable. Finally, indicate your schedule for internal emergency response training and drills and exercises and discuss the results of the latest relevant drill or exercise, including problems found and actions taken to address them.
Do you need to use this chemical?
Again, if you have not yet considered the feasibility of switching to a non-hazardous substitute, you should do so when you develop your risk management program. Assuming that there is no substitute, you should describe why the chemical is critical to what you produce and explain what you do to handle it safely. If there are substitutes available, you should describe how you have evaluated such options.
Why are your distances different from the distances in the EPA lookup tables?
If you did your own modeling, this question may come up. You should be ready to explain in a general way how your model works and why it produces different results. EPA allows using other models (as long as certain parameters and conditions specified by the rule are met) because it realizes that EPA lookup table results will not necessarily reflect all site-specific conditions.
In addition, although all models are generally based on the same physical principles, dispersion modeling is not an exact science due to the limited opportunity for real-world validation of the results. Thus, the method by which different models combine the basic factors such as wind speed and atmospheric stability can result in distances that readily vary by a factor of two (e.g., five miles versus ten miles). The introduction of site-specific factors can produce additional differences.
EPA recognizes that different models will produce differing estimates of the distance to an endpoint, especially for releases of toxic substances. The Agency has provided a discussion of the uncertainties associated with the model it has adopted for the OCA Guidance. You need to understand that the distances produced by another model lie within a band of uncertainty and be able to demonstrate and communicate this fact to those who are reviewing your results.
How likely are the worst-case and alternative release scenarios?
It is generally not possible to provide accurate numerical estimates of how likely these scenarios are. EPA has stated that providing such numbers for accident scenarios is rarely feasible because the data needed (e.g., on rates for equipment failure and human error) are not usually available. Even when data are available, there are large uncertainties in applying the data because each facility’s situation is unique.
In general, the risk of the worst-case scenario is very low. Although catastrophic vessel failures have occurred, they are rare events. Combining them with worst-case weather conditions makes the overall scenario even less likely. This does not mean that such events cannot or will not happen, however.
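The compounding effect described above can be sketched with a back-of-the-envelope calculation. Both inputs below are hypothetical assumptions chosen for illustration, not published failure-rate data:

```python
# Two individually unlikely conditions multiply into a much rarer event.
vessel_failure_per_yr = 1e-5   # assumed catastrophic vessel failure frequency
p_worst_weather = 0.02         # assumed fraction of hours with F stability, 1.5 m/s wind

combined_per_yr = vessel_failure_per_yr * p_worst_weather
print(f"combined frequency: {combined_per_yr:.0e} per year "
      f"(roughly 1 in {1 / combined_per_yr:,.0f} years)")
```

The product drops quickly: two conditions that are each unlikely on their own combine into a frequency far lower than either alone, which is why the worst-case scenario is best described as rare but not impossible.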
For the alternative scenario, the likelihood of the release is greater and will depend, in part, on the scenario you chose. If you selected a scenario based on your accident history or industry accident history, you should explain this to the public. You should also discuss any steps you are taking to prevent such an accident from recurring.
Is the worst-case release you reported really the worst accident you can have?
The answer to this question will depend on the type of facility you have and how you handle chemicals. EPA defined a specific scenario (failure of the single largest vessel) to provide a common basis of comparison among facilities nationwide. So, if you have only one vessel, EPA’s worst case is likely to be the worst event you could have.
On the other hand, if you have a process which involves multiple co-located or interconnected vessels, it is possible that you could have an accident more severe than EPA’s worst case scenario. If credible scenarios exist that could be more serious (in terms of quantities released or consequences) than the EPA worst case scenario, you should be ready to discuss them. For example, if you store chemicals in small containers such as 55-gallon drums, the EPA-defined worst-case release scenario involves a release from only one container, but a fire or explosion at the facility could release larger quantities if multiple containers are involved. In this case, you should be ready to frankly discuss such a scenario with the public. If you take precautions to prevent such scenarios from occurring, you should explain these precautions also. If an accidental release is more likely to involve multiple drums than a single drum as a result, for example, of the drums being stored closely together, then you must select such a scenario as your alternative release scenario so that information on this scenario is available in your RMP.
Chemical manufacturers may want to talk about releases that could result from runaway reactions that could continue for several hours. This type of event could result in longer exposure times.
What about the accident at the [name of similar facility] that happened last month?
This question highlights an important point: you need to be aware of events in your industry (e.g., accidents, new safety measures) for two reasons. First, your performance likely will be compared to that of your competitors. Second, learning about the circumstances and causes of accidents at other facilities like yours can help you prevent such accidents from occurring at your facility.
If information is available on accidents that happen at facilities similar to yours (e.g., from reports, case studies, journal articles, or other sources), you should be familiar with it and should have evaluated whether your facility is at risk for similar accidents. You should take the appropriate measures to prevent the accident from occurring and be prepared to describe these actions. If your facility has experienced a similar release in the past, this information may be documented in your accident history or other publicly available records, depending on the date and nature of the incident, the quantity released, and other factors. If you have already taken steps specifically designed to address this type of accident, whether as a result of this accident, a prior accident at your facility, or other internal decision-making, you should describe these efforts. If, based on your evaluation, you determine that the accident could not occur at your facility, you should discuss the pertinent differences between the two facilities and explain why you believe those differences should prevent the accident from occurring at your facility.
What actions have you taken to involve the community in your accident prevention and emergency planning efforts?
If you have not actively involved the community in accident prevention and emergency planning in the past, you should acknowledge this as an area where you could improve and start doing so as you develop your risk management program. The emergency response provisions of part 68 require you to coordinate with your LEPC and/or local fire department, depending on whether your facility holds toxics and/or flammable substances. More generally, you may want to become an active participant in the LEPC, SERC, and regional mutual aid organizations serving your area. Other opportunities for community involvement are fire safety coordination activities with the local fire department, joint training and exercises with local public and private sector response personnel, the establishment of green fields between the facility and the community, and similar efforts.
When discussing accident prevention and emergency planning with the community, you should indicate any national programs in which you participate, such as the Chemical Manufacturers Association’s Responsible Care program or Community Awareness and Emergency Response program or OSHA’s Voluntary Protection Program. If fully implemented, these programs can help improve the safety of the facility and the community. You may have future plans to participate in areas described previously or have new initiatives associated with the risk management program. Be sure you ask what else the community would like you to do and explain how you will do it.
Can we see the documentation you keep on site?
If the requested information is not confidential business information, EPA encourages you to make it available to the public in a reasonable manner. (Since the OCA sections of your RMP are available to the public on a restricted basis, it makes sense that the documentation underlying those sections be made available in a manner that reflects its sensitivity.) Although you are not required to provide this information to the public, refusing to provide it simply because you are not compelled to is not the best approach. If you decide not to provide any or most of this material, you should have good reasons for not doing so and be prepared to explain these reasons to the public. Simply taking a defensive position or referring to the extent of your legal obligations is likely to threaten the effectiveness of your interaction with the community. Offer as much information as possible to the public; if particular documents would reveal proprietary information, try to provide a redacted copy, summary, or some other form that answers the community’s concerns. You may want to work with your LEPC on this issue. You should also be aware that information that EPA or the implementing agency obtains as part of an inspection or investigation conducted under section 114 of the Clean Air Act would be available to the public under section 114(c) of the Act to the extent it does not reveal confidential business information or OCA information.
11.3 COMMUNICATION ACTIVITIES AND TECHNIQUES
Although this section is most applicable to larger companies, small businesses may want to review it and use some of the ideas to expand their communications with the public. To prepare for effective communication with the community, you should:
(1) Adopt an organizational policy that includes basic risk communication principles (see exhibit 11-1)
(2) Assign responsibilities and resources to implement the policy
(3) Plan to use "best communication practices"
**ADOPT AN ORGANIZATIONAL COMMUNICATIONS POLICY**
An organizational policy will support communication with the public on your RMP and make it an integral part of management practices. Otherwise, breakdowns are likely to occur, which could cause mistrust, hostility and conflicts.
A policy helps to establish communication as a normal organizational function and to present it as an opportunity rather than a burden or threat. The policy can be incorporated in an organization's policies, an approach taken by many companies that belong to the Responsible Care program of the Chemical Manufacturers Association (CMA). These companies have adopted CMA's Codes of Management Practices, which contain risk communication principles and practices.
Remember that what you communicate is more important than the type of communication policy or program you use, and what you actually *do* to maintain a safe facility is more important than anything you say. Your company's safety and prevention steps in your risk management program should serve as the core elements of any risk communication program.
**ASSIGN RESPONSIBILITIES AND RESOURCES**
A policy is only a paper promise until it is regularly and effectively implemented. Thus, you should follow up your communication policy by (1) having top management participate at the outset and at key points throughout the communication process, and (2) assigning communication responsibilities within your organization and providing the necessary resources.
Experience has demonstrated that assigning responsibility to knowledgeable managers, plant engineers, and staff, and encouraging participation by employees (most of whom are likely to be community residents), is a good communications practice. Delegating communication functions to outside technical consultants, attorneys, and public relations specialists has repeatedly failed to impress the community and tends instead to incur mistrust. (However, if you hired a firm with acknowledged expertise in dispersion modeling, you may want them on hand to help respond to technical questions.)
Communications staff will need work time and resources to prepare presentation materials, hold meetings with interested persons in the community, and do other work necessary to respond to questions and concerns and maintain ongoing dialogue. A training program in communication skills and incentives for good performance also may be advisable.
Organizations have a legitimate interest in preventing disclosure of confidential business information or statements that inadvertently and unfairly harm the organization or its employees. Thus, you should ensure that your risk communication staff is instructed on how to deal with situations that pose these problems. This may mean having an internal procedure that enables your staff to bring such situations to top management and legal counsel for quick resolution. Keep in mind, however, that unduly defensive or legalistic responses that restrict the information provided can damage or destroy the risk communication process.
Your communication staff may find the following steps helpful in addressing the priority issues in the communication process:
Prior to RMP Submittal
- Enlist employee support for, and involvement in, the communication process
- Build on work you have done with your LEPC, fire department, and local officials, and gain their insights
- Incorporate technical expertise, management commitment, and employee involvement in the risk communication process
- Use your RMP’s executive summary to begin the dialogue with the community; be sure you have taken all of the steps you present
- Taking a community perspective, identify which data elements need to be clarified, interpreted, or amplified, and which are most likely to raise community concerns; then compile the information needed to respond and determine the most understandable methods (e.g., use of graphics) for presenting the information
At Submittal
- Review the RMP to assure that you are familiar with its data elements and how they were developed. In particular, review the hazard assessment, prevention, and response program features, as well as documentation of the methods, data, and assumptions used, especially if an outside consultant performed the analyses and developed these materials. You have certified their accuracy and your spokesperson should know them intimately, as they reflect your plan
- Review your performance in implementing the prevention and response programs and prepare to discuss problems identified and actions taken
- Review your performance in investigating accidents and prepare to discuss any corrective actions that followed
Other Steps
- Identify the most likely concerns about risks identified in the RMP but not fully addressed, consult with management and safety engineering, and determine additional measures the organization will take to resolve these concerns.
- Avoid misrepresentations and minimize the roles of public relations specialists.
- Identify "best communication practices" (as described in the next section) and plan how to use them.
**Use "Best Communication Practices"**
Many facilities already have gained considerable experience in communicating with the public. Lessons from their experiences are described below. However, the value of these best practices and your credibility will depend on your facility possessing and continually demonstrating certain essential qualities:
- Top management commitment (e.g., owner and facility manager) to improving safety.
- Honesty, openness, and concern for the community.
- Respect for public concerns and perceptions.
- Commitment to maintaining a dialogue with all sectors of the community, to learning from this dialogue, and to being prepared to change your practices to make your facility safer.
- Commitment to continuous improvement through internal procedures for evaluating incidents and promoting organizational learning.
- Knowledge of safety issues and safety management methods.
- Good working relationships with the LEPC, fire department, and other local officials.
- Active support for the LEPC and related activities.
- Employee support and commitment.
- Continuation of commitment despite potential public hostility or mistrust.
Another note: Because each facility and community involves a unique combination of factors, the practices used to achieve good risk communication in one case do not necessarily ensure the same quality result when used in another case. Therefore, while it is advisable for you to review such experience to identify "best communication practices," you should carefully evaluate such practices to determine if they can be adapted to fit your unique circumstances. For example, if your facility is in the middle of an urban area, you probably will use different approaches than you would use if it were located in an industrial area far from any residential populations. These practices are complementary approaches to delivering your risk management message and responding to the concerns of the community.
With these cautions in mind, a number of "best" practices are outlined below for consideration. First, you will want to establish formal channels for information-sharing and communication with stakeholders. The most basic approaches include:
- Convene public meetings for discussion and dialogue regarding your risk management program and RMP and take steps to have the facility owner or manager and all sectors of the community participate, including minorities and low-income residents
- Arrange meetings with local media representatives to facilitate their understanding of your risk management program and the program summary presented in your RMP
- Establish a repository of information on safety matters for the LEPC and the public and, if electronic, provide software for public use. Some organizations also have provided computer terminals for public use in the community library or fire department
Other, more resource-intensive activities of this type to consider include:
- Create and convene focus groups (small working groups) to facilitate dialogue and action on specific concerns, including technical matters, and take steps to assure that membership in each group reflects a cross section of the community and includes technically trained persons (e.g., engineers, medical professionals)
- Hold seminars on hypothetical release scenarios, prevention and response programs, applicable standards and industry practices, analytic methods and models (e.g., on dispersion of airborne releases, health effects of airborne concentrations), and other matters of special concern or complexity
- Convene special meetings to foster dialogue and collaborations with the LEPC and the fire department and to establish a mutual assistance network with other facility managers in the community or region
- Establish hot lines for telephone and e-mail communications between interested parties and your designated risk communication staff and, if feasible, a web site for posting useful information
In all of these efforts, remember to use plain language and commonly understood terms; avoid the use of acronyms and technical and legal jargon. In addition, depending on your audience, keep in mind that the preparation of multilingual materials may be useful or even necessary.
Secondly, you may want to initiate or expand programs that more directly involve the community in your operations and safety programs. Traditional approaches include:
- Arrange facility tours so that members of the public can view operations and discuss safety procedures with supervisors and employees
- Schedule drills and simulations of incidents to demonstrate how prevention and response programs work, with participation by community responders and other organizations (e.g., neighboring companies)
- Conduct a “Safety Street” - a community forum generally sponsored by several industries in a locality, where your representatives present facility safety information, explain risks, and respond to public questions (see Section 11.4 for a reference to more information on this program)
- Periodically reaffirm and demonstrate your commitment to safety in accordance with and beyond regulatory requirements and present data on your safety performance, using appropriate benchmarks or measures, in newsletters and by posting the information at your web site
- Publicly honor and reward managers and employees who have performed safety responsibilities in superior fashion and citizens who have made important contributions to the dialogue on safety
If community interest is significant, you may also want to consider the following activities:
- Invite public participation in monitoring implementation of your risk management program elements
- Invite public participation in auditing your performance in safety responsibilities, such as chemical handling and tracking procedures and analysis and follow-up on accidents and near misses
- Organize a committee comprised of representatives from the facility, other industry, emergency planning and response organizations, and community groups and chaired by a community leader to independently evaluate your safety and communication efforts (e.g., a Community Advisory Panel). You may also want to fund the committee so it can hire an independent engineering consultant to assist with technical issues, learn what can be done to improve safety, and thereby share control with the community
Your communication staff should review these examples, consider designing their own activities as well as joint efforts with other local organizations, and ultimately decide with the community which set of practices is feasible and can best create a healthy risk communication process in your community. Once these decisions are made, you may want to integrate the chosen set of practices in an overall communication program for your facility, transform some into standard procedures, and monitor and evaluate them for continuous improvement.
OTHER COMMUNICATION OPPORTUNITIES
By complying with the RMP rule and participating in the communications process with the community, you should have developed a comprehensive system for preventing, mitigating, and responding to chemical accidents at your facility. Why not share this knowledge with your staff, others you do business with (e.g., customers, distributors, contractors), and, perhaps through industry groups, others in your industry? If you transfer this knowledge to others, you can help improve their chemical safety management capabilities, enhance public safety beyond your community, and possibly gain economic benefits for your organization.
11.4 FOR MORE INFORMATION
Among the numerous publications on risk communication, the following may be particularly helpful:
- *Improving Risk Communication*, National Academy Press, Washington, D.C., 1989
- "Safety Street" and other materials on the Kanawha Valley Demonstration Program, Chemical Manufacturers Association, Arlington, VA
- Community Awareness and Emergency Response Code of Management Practices and various Guidance, Chemical Manufacturers Association, Arlington, VA
- *Communicating Risks to the Public*, R. Kasperson and P. Stallen, eds., Kluwer Publishing Co., 1991
- "Challenges in Risk and Safety Communication with the Public," S. Maher, Risk Management Professionals, Mission Viejo, CA, April 1996
- Primer on Health Risk Communication Principles and Practices, Agency for Toxic Substances and Disease Registry, on the World Wide Web at atsdr1.atsdr.cdc.gov:8080
- *Risk Communication about Chemicals in Your Community: A Manual for Local Officials*, US Environmental Protection Agency, EPA EPCRA/Superfund/RCRA/CAA Hotline
- *Risk Communication about Chemicals in Your Community: Facilitator's Manual and Guide*, US Environmental Protection Agency, EPA EPCRA/Superfund/RCRA/CAA Hotline
- *Chemicals, the Press, and the Public: A Journalist's Guide to Reporting on Chemicals in the Community*, US Environmental Protection Agency, EPA EPCRA/Superfund/RCRA/CAA Hotline
Micro-Feature Dimensional and Form Measurements with the NIST Fiber Probe on a CMM
A Programmable Calibration System for Accurate AC Current Measurements at NIS, Egypt
Calibration Management in the ISO/IEC 17025 Accredited Facility
2010
JULY
AUGUST
SEPTEMBER
250 combined years of experience
240 calibration products
6 measurement disciplines
1 united organization
Precision, performance, confidence.
Fluke Calibration.
We’ve brought together a select group of the world’s top calibration companies* to provide you a full range of calibration solutions across six measurement disciplines. Our equipment—the most trusted in the industry—gives you the accuracy and reliability you demand. Our calibration software has built-in, bench-level tools to address your varied workload from a single, integrated database. Our customer service provides you timely product support as well as advanced, discipline-specific know-how. This unique combination is the reason the most demanding metrology and calibration organizations, including National Measurement Institutes around the world, rely on products from Fluke Calibration.
*Fluke, Wavetek/Datron, Hart Scientific, DH Instruments, Ruska, and Pressurements.
To learn more, visit www.fluke.com/flukecal.
FEATURES
25 Micro-Feature Dimensional and Form Measurements with the NIST Fiber Probe on a CMM
B. Muralikrishnan, J. Stone, J. Stoup
31 Establishment of a Programmable Calibration System for Accurate AC Current
Measurements at NIS, Egypt
Mamdouh Halawa and Amal Hasan
38 Calibration Management in the ISO/IEC 17025 Accredited Facility
Bernard Williams
DEPARTMENTS
2 Calendar
3 Editor’s Desk
14 Industry and Research News
18 New Products
ON THE COVER: A new system developed at NPL, United Kingdom, enables Morehouse to provide an A2LA accredited calibration of force devices up to 2,250,000 lbf in compression or 1,200,000 lbf in tension. Morehouse also provides calibration service for load cells, proving rings, crane scales, force gauges and other force devices. Visit www.mhforce.com. Photo courtesy of Morehouse Instruments.
CONFERENCES & MEETINGS 2010
Sep 13-17 AUTOTESTCON. Orlando, FL. www.autotestcon.com.
Oct 12-14 VII International Specialized Exhibition: Controlling, Analyzing and Measuring Equipment 2010. Russia, Moscow. http://www.kipexpo.ru/english/.
Oct 12-14 Microtechnology Expo. Russia, Moscow. www.microtechexpo.ru/eng/.
Oct 25-28 IEST Fall Conference. Arlington Heights, IL. Working Group sessions and tutorials covering issues related to cleanrooms and controlled environments. IEST, www.iest.org
Oct 31 - Nov 5 25th ASPE Annual Meeting. Atlanta, GA. American Society for Precision Engineering (ASPE), www.aspe.net.
Nov 15-18 Eastern Analytical Symposium and Exposition (EAS). Somerset, NJ. www.eas.org.
Dec 10-12 2nd India Lab Expo. New Delhi, India. An exhibition of scientific, biotechnology, analytical & laboratory technology. 280 exhibitors are expected to participate. www.indialabexpo.com.
CONFERENCES & MEETINGS 2011
Feb 8-10, 2011 HuLST (Human Life Science Test) Expo. Koelnmesse, Cologne, Germany. Four fairs under the HuLST umbrella are: Medical Device and Technology Test Expo; Food and Beverage Test Expo; Pharma Test Expo, and Biotech Test Expo. Exhibits and technical presentations will cover all types of testing, inspection and quality assurance. www.hulst-expo.com.
Mar 31-Apr 1, 2011 METROMEET: 7th International Conference on Industrial Dimensional Metrology. Bilbao, Spain. Topics: Advances of micro- and nanometrology; measurement issues of large work pieces and their solutions; metrology and economics; new developments in optical metrology; new developments in the metrological sciences; state of the art and challenges of multi-sensor coordinate metrology; accreditation and certification; future metrology tendencies; latest developments and solutions in the area of optical non-contact measurement and 3D digitalisation systems; methods, organisation and best practices in industrial metrology; academic education in metrology; new developments in measurement instruments; industrial process quality requirements and metrology-based process improvements; in-line inspection; uncertainty traceability and reliability of measurements with CMMs; sports metrology. METROMEET, tel +34 94 480 51 83, email@example.com, www.metromeet.org.
The Little Country that Could
American readers may recognize the play on words in my title, taken from *The Little Engine that Could*, a children’s book about a little train engine chugging along, pulling its cars, which reaches a large hill and worries that it won’t be able to make it over. There’s a dramatic moment as it climbs the hill, loses speed, and starts slipping back, but with great determination the engine puts everything it has into the climb, finally reaches the top, and joyfully rushes down the other side and on to its destination. The point of the story is that even the smallest engine can achieve great things if it believes in itself and tries with all its might.
In November 2008 I visited Israel for the first time to attend their international metrology conference. Like most Americans I had sentimental feelings about Israel and a curiosity to see this tiny land that has so dominated the news since its birth in May 1948. Israel is unlike any country I have ever visited. It is smaller in size than you think — smaller than New Jersey, maybe about the same as Canada’s Vancouver Island, less than 1/11th the size of the United Kingdom. It is 20,777 square kilometers (8019 square miles). It has 7.5 million citizens, 74.5% Jewish, 20.3% Arab and 4.3% “others.” It is surrounded by hostile neighbors and has endured 4 major wars, hundreds of terrorist attacks, thousands of rocket attacks from terrorists in Lebanon and more recently from Gaza.
Israel is like America in that it is a melting pot of people from all over the world and an open democracy where all citizens are eligible to vote and serve in political office. Israel and America have enjoyed a unique friendship since the beginning, maybe in part because many of its citizens are Americans including past Prime Minister Golda Meir and current Ambassador to the U.S. Michael Oren.
Where Israel becomes relevant to metrology is in the amazing contributions it has made to industry in telecommunications, medicine, biotechnology, agriculture, alternative energy sources, security and the list goes on. Israel recently hosted Biomed 2010, the second largest biotech conference in the world with 7,000 attendees, 1,000 from outside Israel. Intel has a major R&D center and two fabrication plants in Israel. Motorola’s plant in Israel is the company’s largest facility in the world. The first computer anti-virus software package was developed in Israel back in the 1970s. The mobile phone technologies that allow you to leave voicemail, send text messages and transmit movies or pictures were all developed in Israel. The high quality color images in newspapers are transmitted by graphic technologies developed by Scitex, one of Israel’s earliest technology companies. The Pentium processor chip was largely developed in Israel. Many hospital CAT scanners and magnetic resonance imaging (MRI) machines were developed in Israel. The life-saving device used to keep arteries open was originally developed by Medtronic in Israel. The list goes on.
I was so fascinated with the accomplishments of this dynamic little country that I purchased two books describing its contributions, and they are even more amazing than I imagined. Israel is “the little country that could”: it provides technology to the world at a level far beyond its size, despite the immense pressures of surviving in a hostile environment and absorbing thousands of new immigrants each year. Israel deserves our complete support and unwavering commitment to her survival and the well-being of her citizens.
Best regards,
Carol Singer
May 24-27, 2011 AUSPLAS, AUSTECH and National Manufacturing Week. Melbourne, Australia. Ausplas is Australia’s national trade show for the plastics processing industry, organized by the Australian Manufacturing Technology Institute, Ltd. National Manufacturing Week is Australia’s only fully integrated manufacturing industry exhibition displaying all major aspects across ten specialist product zones. AUSPLAS, firstname.lastname@example.org, www.ausplas.com.
Aug 21-25, 2011 NCSLI Conference. National Harbor, MD. Conference theme: 50 Years: Reflecting On The Past - Looking To The Future. www.ncsli.org.
Sep 27-29 LabAsia 2011. Kuala Lumpur, Malaysia. LabAsia 2011 is the third in a series of biennial international exhibitions that showcase the latest in laboratory and analytical equipment, instrumentation and services. Institut Kimia Malaysia (IKM) is a professional scientific organisation that regulates the practice of chemistry, represents the chemistry profession and promotes the advancement of chemistry in Malaysia. In conjunction with LabAsia 2011, IKM is organising two major international scientific meetings. The IUPAC International Conference on Chemical Research Applied to World Needs, CHEMRAWN 2011, will focus on Renewable and Sustainable Energy from Biological Sources and will showcase the latest research, technology and innovation in renewable and sustainable energy. The 13th International Symposium on Advances in Extraction Technologies, ExTech 2011, will feature the latest in extraction technologies, separation science and the ensuing analytical techniques, and is especially useful for scientists working in extraction, separation and the analytical sciences. The ExTech series of international symposia has been held all over the world, but this is the first time it is being held in Asia. Institut Kimia Malaysia (IKM), www.lab-asia.com.
SEMINARS Australia
Aug 20 Humidity Workshop. Lower Hutt, New Zealand. Measurement Standards Laboratory of New Zealand, http://msl.irl.cri.nz/training-and-resources/training-courses.
Aug 24 Balances and Weighing Workshop. Auckland, New Zealand. Measurement Standards Laboratory of New Zealand, http://msl.irl.cri.nz/training-and-resources/training-courses.
Aug 24-25 Traceable Electrical Energy Metering. Lower Hutt, New Zealand. Measurement Standards Laboratory of New Zealand, http://msl.irl.cri.nz/training-and-resources/training-courses.
Aug 25 Measurement, Uncertainty and Calibration. Auckland, New Zealand. Measurement Standards Laboratory of New Zealand, http://msl.irl.cri.nz/training-and-resources/training-courses.
Customers agree:
Manual MET/CAL® software is the easy, efficient way to collect, store and report calibration data.
Customers who have tried Manual MET/CAL® Calibration Management Software say it’s easy to use and helps them:
- Perform calibrations quickly and efficiently.
- Boost cal lab productivity.
- Calibrate mechanical and dimensional instruments and more.
- Manage all calibration assets in a single MET/BASE database.
- Perform batch calibrations.
- Keep data secure.
Manual MET/CAL® is fully compatible with the Fluke MET/TRACK® asset management application, so you can now manage all your calibration assets from a single database.
Get product details and download a free trial version of Manual MET/CAL® software at www.fluke.com/trymefree
©2010 Fluke Corporation. Specifications are subject to change without notice. Ad no. 3766430A
Aug 26 Temperature Measurement and Calibration. Auckland, New Zealand. Measurement Standards Laboratory of New Zealand, http://msl.irl.cri.nz/training-and-resources/training-courses.
Sep 24-26 Flow Measurement and Calibration: Liquid and Gas. Munich, Germany. In English. The seminar takes place during Munich’s Oktoberfest. TrigasFl GmbH, tel: +49-8165-64720, email@example.com, http://www.trigasfl.de/html/en_seminars.htm.
Sep 27-29 Durchflussmessung und Kalibrierung (Flow Measurement and Calibration). Munich, Germany. In German. TrigasFl GmbH, tel: +49-8165-64720, firstname.lastname@example.org, http://www.trigasfl.de/html/en_seminars.htm.
Oct 26-28 28th International North Sea Flow Measurement Workshop. St Andrews, UK. Technical presentations, real-world experiences, poster presentations and networking sessions. TUV NEL Ltd., tel +44(0)1355272858, email@example.com, www.tuvnel.com.
SEMINARS USA
SEMINARS: Accreditation & ISO/IEC 17025
Oct 28-29 Auditing to ISO 17025. Bloomington/Burnsville, MN. J&G Technology, tel 952-935-1108, firstname.lastname@example.org, www.jg-technology.com/seminars.html.
SEMINARS Analytical Chemistry
Nov 15 Metrology in the Analytical Laboratory. Somerset, NJ. Stranaska LLC, www.stranaska.com. Available through the Eastern Analytical Symposium Short Course Program. www.eas.org.
SEMINARS Certified Calibration Technician Exam
Sep 13-17 CCT-501 Metrology for Cal Lab Personnel. Seattle, WA. Fluke, tel 888-79-FLUKE, www.fluke.com/2010caltraining.
Sep 21 Calculator Refresher for Certification Exams. Bloomington/Burnsville, MN. J&G Technology, tel 952-935-1108, email@example.com, www.jg-technology.com/seminars.html.
Nov 2 Calculator Refresher for Certification Exams. Bloomington/Burnsville, MN. J&G Technology, tel 952-935-1108, firstname.lastname@example.org, www.jg-technology.com/seminars.html.
Nov 3-5 Certified Calibration Technician Preparation. Bloomington/Burnsville, MN. J&G Technology, tel 952-935-1108, email@example.com, www.jg-technology.com/seminars.html.
Nov 3-5 Certified Calibration Technician (CCT) Review. Minneapolis, MN. The QC Group, tel 800-959-0632, firstname.lastname@example.org, www.theqcgroup.com/courselist/.
GUILDLINE
6623A SERIES
World’s ONLY Series of Compact & Modular Range Extenders Providing Current from 3 Amps to an Amazing 3000 Amps!
Unique design provides fully self-contained capability, meaning NO additional power supplies, NO mechanical switches, NO compressed air required and NO programming requirements … Just Real Solutions!
Model 6623A-150 (above) is only slightly larger than 5” in height. The 6623A-300 adds only 2” to total height (7”). Both units are capable of operating off 120VAC line voltage.
GUILDLINE INSTRUMENTS
PRECISION MEASUREMENT SOLUTIONS WWW.GUILDLINE.COM 1-800-310-8104
Nov 10-12 Certified Calibration Technician (CCT) Review. Schaumburg, IL. The QC Group, tel 800-959-0632, email@example.com, www.theqcgroup.com/courselist/.
SEMINARS: Dimensional and Gage Calibration
Aug 26-27 Basic Dimensional Measurement Tools and Methods. Jackson, MS. The QC Group, tel 800-959-0632, firstname.lastname@example.org, www.theqcgroup.com/courselist/.
Aug 30-31 Calibration Training and Hands-On Gage Repair. Richmond, VA. IICT Training & Productions, email@example.com, http://consultinginstitute.net/.
Aug 31 GD&T Management Overview. Orlando, FL. The QC Group, tel 800-959-0632, firstname.lastname@example.org, www.theqcgroup.com/courselist/.
Aug 31 - Sep 1 GD&T Training - Fundamentals. Orlando, FL. Dan Medford, The QC Group, tel 800-959-0632, email@example.com, www.theqcgroup.com/courselist/.
Aug 31 - Sep 2 Coordinate Measuring Machine, CMM Training. Minneapolis, MN. The QC Group, tel 800-959-0632, firstname.lastname@example.org, www.theqcgroup.com/courselist/.
Sep 2-3 Calibration Training and Hands-On Gage Repair. Myrtle Beach, SC. IICT Training & Productions, email@example.com, http://consultinginstitute.net/.
Sep 13-14 GD&T Training - Fundamentals. Rolling Meadows, IL. The QC Group, tel 800-959-0632, firstname.lastname@example.org, www.theqcgroup.com/courselist/.
Sep 14-15 Basic Dimensional Measurement Tools and Methods. Schaumburg, IL. The QC Group, tel 800-959-0632, email@example.com, www.theqcgroup.com/courselist/.
Sep 14-15 Calibration Training and Hands-On Gage Repair. Schaumburg, IL. IICT Training & Productions, firstname.lastname@example.org, http://consultinginstitute.net/.
Sep 15-17 GD&T Training - Advanced Concepts. Rolling Meadows, IL. The QC Group, tel 800-959-0632, email@example.com, www.theqcgroup.com/courselist/.
Sep 16-17 Calibration Training and Hands-On Gage Repair. Kenosha, WI. IICT Training & Productions, firstname.lastname@example.org, http://consultinginstitute.net/.
Sep 23 GD&T Management Overview. Jackson, MS. The QC Group, tel 800-959-0632, email@example.com, www.theqcgroup.com/courselist/.
Sep 23-24 GD&T Training - Fundamentals. Jackson, MS. The QC Group, tel 800-959-0632, firstname.lastname@example.org, www.theqcgroup.com/courselist/.
**CALENDAR**
**SEMINARS FDA / ISO 13485 / QSR**
**Sep 27-28 Calibration in the FDA Regulated Industries.** Las Vegas, NV. University of Wisconsin Department of Engineering Professional Development. [http://epdweb.engr.wisc.edu/Courses/Course.lasso?myCourseChoice=L625](http://epdweb.engr.wisc.edu/Courses/Course.lasso?myCourseChoice=L625) or Program Director Michael P. Waxman, Ph.D., email@example.com, toll free tel 800-462-0876, tel 608-262-2061.
**Sep 27-30 Gas Flow Calibration Using molblox/molbox.** Phoenix, AZ. Fluke/DH Instruments, tel 888-79-FLUKE, [www.fluke.com/2010caltraining](http://www.fluke.com/2010caltraining).
**Nov 15-18 Gas Flow Calibration Using molblox/molbox.** Phoenix, AZ. Fluke/DH Instruments, tel 888-79-FLUKE, [www.fluke.com/2010caltraining](http://www.fluke.com/2010caltraining).
**SEMINARS: General Metrology, Best Practices, Calibration Training & Laboratory Management**
**Sep 20-24 CLM-301 Cal Lab Management for the 21st Century.** Seattle, WA. Fluke, tel 888-79-FLUKE, [www.fluke.com/2010caltraining](http://www.fluke.com/2010caltraining).
**Sep 27-30 MET-101 Basic Hands-On Metrology.** Seattle, WA. Fluke, tel 888-79-FLUKE, [www.fluke.com/2010caltraining](http://www.fluke.com/2010caltraining).
**Nov 1-4 MET-301 Advanced Hands-On Metrology.** Seattle, WA. Fluke, tel 888-79-FLUKE, [www.fluke.com/2010caltraining](http://www.fluke.com/2010caltraining).
**SEMINARS: Materials Characterization**
**Aug 25-26 Test Methods for Composite Materials.** Tampa, FL. Seminars For Engineers, tel 800-755-2272, firstname.lastname@example.org, [www.seminarsforengineers.com/comptest](http://www.seminarsforengineers.com/comptest).
**SEMINARS: Measurement Uncertainty**
**Sep 14-16 Measurement Uncertainty Workshop.** Fenton, MI. Presented by QUAMETEC Institute, tel 810-225-8588, [www.QIMTonline.com](http://www.QIMTonline.com).
**Nov 9-11 Measurement Uncertainty Workshop.** Fenton, MI. Presented by QUAMETEC Institute, tel 810-225-8588, [www.QIMTonline.com](http://www.QIMTonline.com).
**Dec 6 Measurement Uncertainty Overview.** Minneapolis, MN. The QC Group, tel 800-959-0632, email@example.com, [www.theqcgroup.com/courselist/](http://www.theqcgroup.com/courselist/).
**Dec 6-7 Measurement Uncertainty Budgets.** Minneapolis, MN. The QC Group, tel 800-959-0632, firstname.lastname@example.org, [www.theqcgroup.com/courselist/](http://www.theqcgroup.com/courselist/).
---
**Now! The Most Accurate Force Calibrations Ever From A Commercial Lab!**
**0.002% of load through 120,000 lbf**
- A true primary standard
- Every weight calibrated directly by NIST
- Accredited to ISO 17025
- Calibrating load cells, proving rings, force gauges
-- in compression or tension
-- kilograms or Newtons, too
- **Calibrations available from 0.1 to 2,250,000 lbf in compression and 1,200,000 lbf in tension**
Want your own dead weight force machine? We’ll build one for you--from 50 to 120,000 lbf
For complete details, call 1-717-843-0081
**Morehouse INSTRUMENT CO.**
1742 Sixth Avenue • York, PA 17403-2675
[www.mhforce.com](http://www.mhforce.com) • Fax 1-717-846-4193
**CALENDAR**
**Dec 6-7 Understanding Measurement Uncertainty.** Bloomington/Burnsville, MN. J&G Technology, tel 952-935-1108, email@example.com, www.jg-technology.com/seminars.html.
**SEMINARS: Pipette Proficiency & Quality Management**
**Sep 23-24 Pipette Quality Management Certification.** Westbrook, ME. ARTEL, tel 888-406-3463, tel 207-854-0860, firstname.lastname@example.org, www.artel-usa.com/news/training.aspx.
**Nov 15-16 Pipette Quality Management Certification.** Westbrook, ME. ARTEL, tel 888-406-3463, tel 207-854-0860, email@example.com, www.artel-usa.com/news/training.aspx.
**SEMINARS: Pressure**
**Sep 13-17 Precision Pressure Calibration.** Phoenix, AZ. Fluke - DH Instruments, tel 888-79-FLUKE, www.fluke.com/2010caltraining.
**Oct 12-15 Setting Up and Using COMPASS® for Pressure Software.** Phoenix, AZ. Fluke - DH Instruments, tel 888-79-FLUKE, www.fluke.com/2010caltraining.
**Dec 6-10 Precision Pressure Calibration.** Phoenix, AZ. Fluke/DH Instruments, tel 888-79-FLUKE, www.fluke.com/2010caltraining.
**SEMINARS: Software**
**Oct 4-7 MET/CAL Database and Reports.** Seattle, WA. Fluke, tel 888-79-FLUKE, www.fluke.com/2010caltraining.
**Oct 11-14 MET/CAL Procedure Writing.** Seattle, WA. Fluke, tel 888-79-FLUKE, www.fluke.com/2010caltraining.
**Oct 18-21 MET/CAL Advanced Programming Techniques.** Seattle, WA. Fluke, tel 888-79-FLUKE, www.fluke.com/2010caltraining.
**SEMINARS: Vibration/Shock**
**Sep 20-23 Fundamentals of Random Vibration and Shock Testing.** HALT, ESS, HASS. San Jose, CA. Equipment Reliability Institute, tel 805-564-1260, firstname.lastname@example.org, http://equipment-reliability.com/open_courses.html.
**Oct 12-14 Pyrotechnic Shock Testing, Measurement, Analysis and Calibration.** Santa Clarita, CA. Equipment Reliability Institute, tel 805-564-1260, email@example.com, http://equipment-reliability.com/open_courses.html.
**Nov 1-4 Fundamentals of Random Vibration and Shock Testing.** HALT, ESS, HASS. Orlando, FL. Equipment Reliability Institute, tel 805-564-1260, firstname.lastname@example.org, http://equipment-reliability.com/open_courses.html.
Visit www.calabmag.com for monthly calendar updates
## Magnetic Field
| Application | Product | Range | Resolution | Bandwidth |
|------------------------------------------------------------------------------|--------------------------------|-----------|-------------|-------------|
| Linear field sensing. Non-contact measurement of position, angle, vibration, current. Small size, low power. | CSA-1V 1-axis Hall IC, SOIC-8 | ± 5mT | ~ 10µT | dc to 100kHz|
| | 2SA-10 2-axis Hall IC, SOIC-8 | ± 40mT | ~ 50µT | dc to 10kHz |
| High sensitivity and accuracy for low fields. Survey and monitor sites for magnetically sensitive equipment. | MAG-01 1-axis Fluxgate Magnetometer | ± 2mT | ± 0.1nT | dc to 10Hz |
| | MAG-03 3-axis Fluxgate Transducer | ± 1mT | ± 0.1nT | dc to 3kHz |
| Linear field measurement, Feedback control, Quality control. Magnet mapping. Unique 3-axis at one point. | YM12 1-axis Hall Transducer | ± 2T | ± 12µT | dc to 5kHz |
| | 3M12 2-axis Hall Transducer | ± 2T | ± 20µT | dc to 1kHz |
| | 3RTP 3-axis Hall Transducer | ± 2T | ± 100µT | dc to 10kHz |
| Hand-held 3-axis for fringe field mapping, quality control, safety monitoring. | T025 3-axis Hall Teslameter | ± 2T | ± 10µT | dc |
| Precision field measurement and control. Laboratory and process magnets. Analytical instruments. | DTM-133 1-axis Hall Teslameter | ± 3T | ± 5µT | dc to 10Hz |
| | DTM-151 1-axis Hall Teslameter | ± 3T | ± 0.1µT | dc to 3Hz |
| Calibration of magnetic standards and sensors. Very high precision and long-term stability. | 2026 total field NMR Teslameter | 0.04 to 20T | ± 0.1µT | dc |
| | FW100 total field NMR Teslameter | 1.4pT to 2.1T | ± 0.5nT | dc |
| Precision flux change measurement. | PDI 5025 Digital Voltage Integrator | 40 V/s | ± 2E-8V/s | 1ms to 2^20ms |
Conversion of magnetic flux density (B) Tesla to gauss: 0.1nT = 1µG, 100µT = 1G, 1T = 10kG
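The SI-to-CGS relation behind these equivalences is simply 1 T = 10⁴ G. A minimal conversion sketch (the helper function names are illustrative, not from any vendor library) checks each quoted pair:

```python
# Flux density conversion between SI (tesla) and CGS (gauss): 1 T = 1e4 G.
def tesla_to_gauss(b_tesla: float) -> float:
    """Convert magnetic flux density from tesla to gauss."""
    return b_tesla * 1e4

def gauss_to_tesla(b_gauss: float) -> float:
    """Convert magnetic flux density from gauss to tesla."""
    return b_gauss * 1e-4

# Verify the three equivalences quoted above (tolerances absorb float rounding):
assert abs(tesla_to_gauss(0.1e-9) - 1e-6) < 1e-18   # 0.1 nT = 1 µG
assert abs(tesla_to_gauss(100e-6) - 1.0) < 1e-9     # 100 µT = 1 G
assert abs(tesla_to_gauss(1.0) - 1e4) < 1e-6        # 1 T = 10 kG
```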
## Electric Current (Isolated measurement)
| Application | Product | Range | Resolution | Bandwidth |
|------------------------------------------------------------------------------|--------------------------------|-----------|-------------|-------------|
| High sensitivity for low currents, currents at high voltage, differential currents. | IPCT Current Transducer | ± 5A | ± 10µA | dc to 4kHz |
| Linear sensor for low-noise, high accuracy, high stability power supplies or amplifiers. | 867-400 Current Transducer | ± 400A | < 4ppm | dc to 100kHz|
| | 866-600 Current Transducer | ± 600A | < 4ppm | dc to 100kHz|
| Instruments for development, quality control, calibration, precision power measurement. | 860R Current Transducer | to ± 2000A | < 5ppm | dc to 300kHz|
| | 862R Current Transducer | to ± 25kA | < 5ppm | dc to 10kHz |
| | 8696 6-channel Current Transducer | to ± 600A | < 10ppm | dc to 100kHz|
| Passive current transformer for rf and pulse current. Low loss, high frequency. | CT Current Transformer | to ± 10kA | to 5V/A into 50 ohm | to 2GHz |
| | ICT Charge Transformer | to ± 400nC | ± 0.5pC | 1µs to < 1ps|
## Fiber-Optic I/O
| Application | Product | Range | Resolution | Bandwidth |
|------------------------------------------------------------------------------|--------------------------------|-----------|-------------|-------------|
| Input/output modules that can be placed locally at the transmitter or controlled unit. For high voltage, high noise environments. RF signal transmission. | FTR RS-232-C link | ± 100mV to ± 10V | 16-bit | dc to 30Hz |
| | CNA Digital & Analog link | | | |
| | p2p Digital & Analog link | -20dBm to +20dBm | < 25dB for 0dBm | dc to 3GHz |
MODEL 2000SP RH CALIBRATOR
- Humidity Accuracy ±1.25%RH
- Humidity Stability ±0.10%RH
- Humidity Uniformity ±0.20%RH
- Temp Accuracy ±0.20°C
- Temp Stability ±0.10°C
- Stabilization Time <5 minutes
M2000SP is a compact system for bench-top or portable calibrations. Now used by pharmaceutical and calibration labs, the M2000SP provides the highest quality and reliability in RH calibrators.
MODEL 2000
Digital Electronics for Precise Control • Fast Equilibrium Over Entire Range of Operation
Excellent Stability and Uniformity Assures Highest Accuracy
All systems are provided with Certificates of Calibration from an NVLAP Accredited Lab
(T) 631-951-9100 www.humidity-generator.com email@example.com
HIGH RESISTANCE STANDARDS
- Fully Guarded for Any Ratio
- BPO or N Type Connectors
- Very Low TCR & VCR
- Stable Over Time
- Internal Temperature Sensor
- 17025 Accredited Calibration
| MODEL | RESISTANCE | TCR / VCR |
|-------|------------|-----------|
| 106 | 1 MΩ | < 1 / < 0.1 |
| 107 | 10 MΩ | < 3 / < 0.1 |
| 108 | 100 MΩ | < 10 / < 0.1 |
| 109 | 1 GΩ | < 15 / < 0.1 |
| 110 | 10 GΩ | < 20 / < 0.1 |
| 111 | 100 GΩ | < 30 / < 0.1 |
| 112 | 1 TΩ | < 50 / < 0.1 |
| 113 | 10 TΩ | < 300 / < 0.1 |
SEE WWW.OHM-LABS.COM FOR SPECIFICATIONS
611 E. CARSON ST. PITTSBURGH, PA 15203
TEL 412-431-0640 FAX 412-431-0649
WWW.OHM-LABS.COM
Excellence in Resistance
You now have an Agilent Channel Partner when it comes to MET/CAL® …
Cal Lab Solutions
Fusing Software with Metrology
When it comes to MET/CAL® and writing procedures for everything from physical dimensional to high-end RF & microwave, Cal Lab Solutions is the best in the business, with the customer references to back it up. In fact, we became an Agilent Channel Partner largely based on our customers’ comments on the quality of our work.
For more information about Cal Lab Solutions and how to boost your lab’s productivity, give us a call at (303) 317-6670 or visit us at http://www.callabsolutions.com.
Our Promise to You
✓ Our Procedures are Clean and Easy to Read
✓ Guaranteed to Work or You Pay Nothing
✓ We Support Interchangeable Standards
✓ Run Tests Individually or End to End
✓ Our Procedures Cover All the Options
✓ We Offer On-site Installations When Needed
www.callabsolutions.com
US DOE Plans Upgrade to Argonne Photon Source
The US Department of Energy (DOE) has announced a planned upgrade for the Advanced Photon Source (APS) laboratory at Argonne National Laboratory. The upgrade will be more cost-effective than building a new facility and will make revolutionary improvements in performance needed to address the sustainable energy and health research needs of the future. The upgrade will also add new X-ray facilities, make existing X-ray facilities 10 to 100 times more powerful and almost double the number of experiments that can be carried out in a year. In addition, the upgrade is expected to create new high-tech jobs.
Currently, the APS laboratory serves the experimental needs of more than 3,500 researchers each year. The Advanced Photon Source uses high-energy X-ray beams to peer deep into the atomic and molecular structures of materials and living organisms as small as a few nanometers. The APS has been providing the U.S. scientific community with the expertise and research tools that enable breakthroughs such as improved battery technologies, an unprecedented understanding of how engine fuel injectors function, treatment for the human immunodeficiency virus and other diseases, the creation of new nanomaterials, and advances in nanobiology, among other developments.
Upgrades to the accelerator-based X-ray source will include record brightness for penetrating X-rays at 25 keV and above, achieved using long straight sections, higher beam current and pioneering superconducting undulators, as well as transverse radio-frequency deflection cavities to generate unique high-repetition-rate, 1-picosecond-duration X-ray pulses.
Upgrades planned for unique x-ray capabilities and new beamlines include: long imaging beamlines, nanometer focusing optics for penetrating X-rays, short-pulse X-rays, high magnetic fields, inelastic scattering, phase contrast and nanobeams in realistic environments.
NPL United Kingdom to Establish Branch in Space
NPL, United Kingdom, has submitted a proposal to the European Space Agency (ESA) to establish NPL’s first “branch” in space. Leading an international team of Earth observation and climate scientists, together with an industrial consortium, NPL has proposed a satellite mission to ESA called “TRUTHS” (Traceable Radiometry Underpinning Terrestrial- and Helio-Studies).
The TRUTHS mission will establish SI-traceable benchmark measurements of solar radiation incident upon, and reflected from, the Earth at unprecedented accuracies (a factor of ten better than current missions). The key science objective is to establish an unequivocal reference point for a number of key climate change indicators, with the aim of improving climate modelling. Of particular importance are those which provide radiative energy feedback to the climate system, e.g. clouds and surface albedo.
The resultant measurements will significantly reduce uncertainty in climate forecasts, giving policy makers more robust information. This will allow them to make key infrastructural decisions on mitigation and adaptation strategies in decadal, rather than multi-decadal, timescales.
Laboratories—Try IAS!
Experience Accreditation Service Plus +
Getting accredited has a whole new meaning with the IAS Accreditation Service Plus + program.
Laboratories receive:
• Quick scheduling and rapid assessments
• On demand responsiveness
• True affordability
• Global recognition by ILAC
• Proof of compliance with ISO/IEC 17025
Learn about the Benefits of IAS Accreditation Service Plus +
www.iasonline.org/CL | 866-427-4422
In addition to making its own benchmark measurements, TRUTHS is unique in its ability to transfer its high accuracy to other Earth observation missions, upgrading their performance so that they too can make high quality measurements of other key parameters and processes impacting on climate (for example, ocean colour, aerosols, land cover change). In this way TRUTHS becomes “NPL in space.”
Other members of the TRUTHS consortium are: Astrium, the Rutherford Appleton Laboratory, Serco, Physikalisch-Meteorologisches Observatorium Davos, OIP Sensor Systems and Surrey Satellite Technology, Ltd.
The European Space Agency will announce whether the TRUTHS proposal is successful in early December 2010.
Tegam Announces Exclusive License for PPM Instruments Products
Tegam, Inc. has announced a formal agreement with PPM Instruments which grants Tegam an exclusive license to manufacture, market, and service PPM’s instrumentation products starting 1 August 2010. After this date, Tegam will become the sole manufacturer of PPM’s resistance measuring equipment, laboratory nano-voltmeters, signal sources, and signal conditioners.
PPM will continue to accept orders and ship products until 31 July 2010. Any orders that cannot be filled before 31 July 2010 by PPM will be filled and invoiced by Tegam. PPM and Tegam will cooperate to ensure a seamless transition of supply for those customers who have open contracts with PPM that extend beyond 31 July 2010.
Customers with warranties issued by PPM will receive warranty service from Tegam for the remainder of their warranty. Where feasible, Tegam will provide repair and calibration service on products that PPM has previously discontinued.
For more information visit www.tegam.com or contact Kim Niznik Goff at firstname.lastname@example.org, tel 440-466-6100, fax 440-466-6110, email@example.com.
CAFMET Celebrates 5 Year Anniversary and Cairo Conference Success
The 3rd International Conference of Metrology organized by the African Committee of Metrology (CAFMET) was held recently in Cairo, Egypt. The conference was scheduled to include 117 scientific papers, 90 selected oral presentations, and 16 training workshops, along with a variety of exhibitors and nearly 200 registered attendees from 50 different countries (compared to 25 countries represented in 2008).
Unfortunately, the eruption of the Icelandic volcano Eyjafjöll just three days before the conference caused air transportation problems for several speakers coming from Europe. Despite these problems, CAFMET organizers were satisfied with the attendance and the hospitality shown by the National Institute of Standards (NIS) Egypt. Germany’s PTB and the United Nations Industrial Development Organization provided attendance support for a significant number of African attendees.
CAFMET is celebrating its fifth anniversary in 2010. CAFMET was founded for the purpose of spreading metrology education and culture in Africa. Since its founding, it has organized three international conferences, held two regional forums, and provided several technical workshops in different countries.
CAFMET’s 4th International Conference on Metrology will be held in Marrakech, Morocco in 2012.
Fluke Announces Manual MET/CAL Software
NCSLI Conference Booth #411, 413, 510, 512
Fluke Corporation has introduced Fluke Manual MET/CAL® software — an application for calibration professionals who do not need the full calibration automation capabilities of MET/CAL® Plus Calibration Management Software, but who need to collect, store, and report calibration data consistently and efficiently. Manual MET/CAL offers a database-driven solution addressing mechanical and dimensional workloads such as torque gages and tools, dimensional and mechanical instruments, and machine tools.
Manual MET/CAL software is also useful for MET/CAL Plus software users who need an easy way to collect and manage calibration data for instruments that cannot be automated via an IEEE connection, such as dimensional instruments.
Manual MET/CAL software provides all the tools needed to calibrate quantities such as mass, force, density, dimension, and hardness, and to calibrate multiple instruments at once, increasing cal lab productivity while eliminating the need to record results with paper and pencil.
Manual MET/CAL software allows users to:
· Quickly and easily create, edit, test and run calibration procedures.
· Input calibration test data into a computer.
· Capture pre-computed measurement uncertainty and TUR (Test Uncertainty Ratio) values.
· Perform batch calibrations.
· Calibrate instruments which have separate input and output values, such as transducers.
· Put a calibration into hibernation, then come back at any time to finish it later.
· Save calibration results into a database.
· Run all calibration reports and certificates using Crystal Reports.
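As context for the pre-computed TUR values mentioned above: a common convention (definitions vary between quality standards, so this is a sketch rather than Fluke’s internal formula) takes the ratio of the UUT tolerance span to the span of the expanded measurement uncertainty:

```python
def tur(tol_low, tol_high, expanded_uncertainty):
    """Test Uncertainty Ratio: UUT tolerance span over the expanded
    measurement uncertainty span (2U, k=2). Conventions vary between
    quality standards; this follows the common span/2U form."""
    return (tol_high - tol_low) / (2.0 * expanded_uncertainty)

# A hypothetical 10 V test point with a ±1 mV tolerance, measured
# with an expanded uncertainty U = 50 µV:
ratio = tur(9.999, 10.001, 50e-6)   # 20:1, comfortably above 4:1
```

A ratio of 4:1 or better is a widely used laboratory target; capturing the value with each result lets an auditor verify it later.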
Manual MET/CAL software is compatible with the Microsoft Open Database Connectivity (ODBC) standard, so users can exchange data with any ODBC-compatible application (such as Microsoft Excel or Word) for further processing and analysis.
Fluke Manual MET/CAL software uses Crystal Reports, which makes it easy to create custom reports, lists, and labels. Several standard reports are included. Use them right out of the box, or use them as templates to create customized reports.
Fluke Corporation, tel 800-760-4523, www.fluke.com.
Ohm-Labs Introduces Temperature Stabilized High Resistance Standards
NCSLI Conference Booth #303
Ohm-Labs, Inc. has released the new Multiple High Resistance Standard (MHS), designed to maintain state-of-the-art resistance at levels above 1 megohm. The MHS incorporates seven fully guarded high resistance standards, identical to those used in Ohm-Labs’ 100-H series resistance standards. The 1 and 10 megohm elements are wound from resistance wire, drawn from stock made to Ohm-Labs’ specifications, for high stability; the higher values use precious metal oxide as the resistance element.
A custom designed constant temperature chamber provides isolation from variations in ambient temperature. A precision thermometer indicates chamber temperature. A thermistor and two thermometer wells are provided for external monitoring. The integrated air bath may be set from 18 to 30 °C to characterize temperature coefficients. The internal constant temperature air bath stabilizes in less than one hour, allowing high resistance standards to be used in the field or on site with the highest accuracy. Connections are made via silver plated slide on BPO plugs. Adaptors to BNC or N type are available. ISO 17025 accredited, NIST traceable calibration is included.
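Setting the air bath across its 18 to 30 °C range lets a user map resistance against temperature and extract coefficients. A minimal sketch, with hypothetical 10 MΩ readings fitted to the quadratic model R(T) = R₂₃(1 + α·ΔT + β·ΔT²); the set points and readings below are invented for illustration, not Ohm-Labs data:

```python
import numpy as np

# Hypothetical 10 Mohm readings at several air-bath set points (ohms);
# these numbers are invented for illustration.
temps = np.array([18.0, 21.0, 23.0, 26.0, 30.0])           # deg C
readings = np.array([10.000041e6, 10.000012e6, 10.000000e6,
                     9.999982e6, 9.999955e6])

# Quadratic model R(T) = R23*(1 + alpha*dT + beta*dT**2), dT = T - 23
dT = temps - 23.0
beta, alpha, _ = np.polyfit(dT, readings / readings[2] - 1.0, 2)
alpha_ppm_per_C = alpha * 1e6   # roughly -0.7 ppm/degC for this data
```

With α and β in hand, readings taken at other ambient temperatures can be corrected back to the 23 °C reference value.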
Ohm-Labs tel 412-431-0640, or visit www.ohm-labs.com.
Rotronic Introduces HygroFlex8 Humidity+Temperature Transmitter
Together with the digital HygroClip2 probes, the HygroFlex8 is one of the most versatile and precise humidity and temperature measuring instruments available on the market. With a measurement accuracy of 0.8% RH and 0.1°C, the instruments can measure humidity ranges from 0 to 100% RH and overall temperature ranges of -100 to 200°C depending on the type of probe being used. The latest AirChip3000 technology guarantees an automatic sensor test, drift compensation and every probe is temperature compensated with over 30,000 data points to maintain the highest possible accuracy over the entire measuring range.
The data logging function allows recording of up to 20,000 data points with date and time stamp. Optional digital outputs also enable connection to a network via Ethernet and Power over Ethernet (PoE). The HF8 can be integrated into an RS485 network and/or used in multidrop connections.
Rotronic, tel 631-427-3898, email@example.com, www.rotronic-usa.com.
The new HygroGen2 – humidity and temperature generator for fast calibration
Based on AirChip3000 technology the HygroGen2 is extremely precise and with its user-friendly touch screen interface allows rapid set-point changes. HygroGen2 takes the calibration laboratory to the instrument so that full system validation may be performed without the need to remove the instrument from operation.
Thanks to the significant time savings, the HygroGen2 delivers a rapid return on investment.
Visit www.rotronic-usa.com for more information.
ROTRONIC Instrument Corp, 135 Engineers Road, Hauppauge, NY 11788, USA
Tel. 631-427-3898, Fax 631-427-3902, firstname.lastname@example.org
On Time Support Releases Barcode Magician 1.7 Software
On Time Support has released Barcode Magician® 1.7 software for use with Fluke MET/TRACK® 7. Barcode Magician allows lab managers to organize the metrology database to meet the needs of different departments.
Metrology Xplorer® 1, from On Time Support, allows customers to view and access data or reports from a web browser. However, there may be times when users need to update a record. Multiply this access by many users and you have the potential to corrupt the database or introduce mistakes that may later be discovered by an auditor. Another consideration is systems that have users in different cities and different departments accessing a centralized metrology database. Barcode Magician uses an Action Code-based system which functions like macros, allowing users to perform functions quickly and easily while maintaining the integrity of the database.
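The Action Code idea can be illustrated with a toy dispatcher. This is not the actual Barcode Magician schema or API; the codes, asset IDs, and operations below are invented. The point is that each scanned code triggers exactly one vetted operation, so users never edit records freehand:

```python
# Toy illustration of an action-code dispatcher; codes, asset IDs,
# and operations are invented, not the Barcode Magician schema.
inventory = {"MT-1042": {"status": "IN LAB", "location": "A1"}}

def receive(asset):
    inventory[asset]["status"] = "IN LAB"

def ship(asset):
    inventory[asset]["status"] = "RETURNED TO CUSTOMER"

# Each action code maps to one predefined, validated operation.
ACTION_CODES = {"RCV": receive, "SHP": ship}

def scan(action_code, asset):
    # Reject anything that is not a known code acting on a known asset.
    if action_code not in ACTION_CODES or asset not in inventory:
        raise ValueError("rejected: unknown action code or asset")
    ACTION_CODES[action_code](asset)

scan("SHP", "MT-1042")  # user scans "SHP", then the asset barcode
```

Because the database is only ever touched through these predefined operations, many simultaneous users cannot introduce free-form edits, which is the integrity property the announcement describes.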
Barcode Magician also allows users to process several instruments at the same time using a scanner or keyboard input. For more information on Barcode Magician contact On Time Support or the Fluke Corporation.
On Time Support, Inc., tel 281-296-6066, fax 281-434-9478, email@example.com, www.ontimesupport.com. Fluke Corporation, tel 800-763-4523, www.fluke.com.
Symmetricom Announces Commercial Time-Scale System
Symmetricom®, Inc. has announced the Time-Scale System, a fully integrated, redundant, nanosecond-level timing solution. Designed for international metrology, aerospace, and defense customers, the system combines multiple high-performance atomic clocks in a time scale that drives a local real-time clock (RTC). Comparable to the world’s best national laboratories that compute a local time scale steered to agree with Coordinated Universal Time (UTC), the Time-Scale System is ideal for national timing laboratories in countries that need to establish traceable time. It is also designed for government or civilian agencies that require precision timing capabilities independent of a Global Navigation Satellite System (GNSS).
The Time-Scale System’s unique set of design features enables industry-leading frequency stability, phase-noise performance and reliability in a system fully integrated into one or more instrument racks.
A timing quality GNSS receiver provides information used to steer the system output to UTC and generates common
Micro-Feature Dimensional and Form Measurements with the NIST Fiber Probe on a CMM
B. Muralikrishnan, J. Stone, J. Stoup
Precision Engineering Division, National Institute of Standards and Technology
C. Sahay
Department of Mechanical Engineering, University of Hartford
The NIST fiber probe is a Coordinate Measuring Machine (CMM) probing system intended for diameter and form measurement of micro features and small holes. The Moore M48 CMM at NIST can measure holes down to 500 µm diameter with the Movamatic probe; the NIST fiber probe extends the range to less than 100 µm diameter. Over the last several years, we have performed numerous precision dimensional and form measurements using this probe mounted on the M48 CMM. We have measured size (diameter and thickness) and form (circularity, sphericity, straightness, flatness, conicity) on artifacts such as fiber optic ferrules, fuel injector nozzle holes, knife-edge and cylindrical apertures, ruby spheres, gage blocks, micro gears, and other micro-features on meso-scale components. We briefly describe the probing system and then present a few of the different applications we have studied, highlighting the challenges they represent and the measurement advances our probing system has offered. More importantly, these applications serve to highlight a new calibration service NIST can now offer industry — the dimensional measurement of micro-features at ultra-low forces.
1. Introduction
Micro-parts and micro-features are becoming increasingly important to our economy, but systems for their measurement are still in their infancy. There is tremendous potential for new applications of tiny devices such as MEMS devices (micro-electro-mechanical systems) and their progeny (including micro-opto-electro-mechanical systems [MOEMS] and microfluidic systems such as lab-on-a-chip [LOC]). Beyond micro-parts per se, macroscopic devices may include microfeatures that are critical to performance. A good example is fuel injectors, where spray holes with less than 100 µm diameter show promise of increasing fuel efficiency and reducing pollution.
The increasing prevalence of micro-parts has stimulated the development of micro measurement systems. These include vision systems for two-dimensional measurements and three-dimensional systems such as X-ray tomography or Coordinate Measuring Machines (CMMs) with microprobes. Numerous instrument manufacturers, academic researchers, and national laboratories have invested in this problem by developing probing systems and micro-CMMs [1, 2]. Of the microprobes, there are a number of varieties, ranging from scaled-down versions of classical macroscopic CMM probes to scaled-up versions of probes originally developed for scanning probe microscopy. A good review of many probing technologies is given in [1, 2].
Here at NIST we have developed a “fiber probe” capable of measuring holes under 100 µm in diameter. There are several varieties of probes that might be described as fiber probes. All of these probes are notable for their ability to measure very high aspect ratio features. The NIST probe, for example, has been used to measure inside small holes at an aspect ratio of 80:1 without noticeable compromise in performance. Among fiber probes, there are at least two varieties that employ a vibrating stylus [3, 4], and at least two varieties where the stylus is static [5-7]. The basic operating principle of a vibrating probe is to detect changes in the amplitude, phase, or frequency of vibration as the probe comes into contact with a surface. A static probe operates in more the manner of a traditional CMM probe, detecting the deflection of the probe tip when it contacts the surface, as described in the next section.
2. The NIST Fiber Probe
Dimensional metrology of micro-scale features in micro- and meso-scale components is a challenging problem not only because of the small sizes involved, but also due to the requirement of low probing forces. At NIST, we have developed a low-force, fiber-based contact probing system [8] that can be mounted on our high accuracy CMM, the Moore M48 CMM. This probe enables the measurement of 100 µm scale features such as a micro-hole to a depth of at least 5 mm (sometimes up to about 10 mm) with extremely small contact forces of the order of 5 µN or smaller. The uncertainty in a diameter measurement for a high quality artifact is generally less than 100 nm ($k=2$).
The probe stylus is made from a glass fiber with a ball formed on the end. The probe functions by optically imaging the fiber stem from two orthogonal directions a few millimeters away from the ball end of the fiber (Figure 1). To be more precise, the optical system does not literally image the fiber, but images a narrow line of light brought to a focus by the stem of the probe, creating a very sharp, high-contrast image for which nanometer-level motions are detectable. Upon contacting a part, the fiber bends by a small amount. The magnitude of the deflection at the tip (which is the part over-travel) is related to the observed displacement of the stem at the point of observation by a previously determined calibration factor. The technique and early results are discussed in [8].
It is of interest to compare our probe to the static fiber probe developed at Physikalisch-Technische Bundesanstalt (PTB) (now available commercially [7]). The PTB probe stylus consists of a small ball on the end of an optical fiber. The ball is illuminated through the fiber and a vision system viewing the ball detects when it is deflected by contact with the wall. The NIST system also employs a camera but does not view the ball directly—rather, it senses a deflection of the stylus stem at a point well away from the ball at the end of a long fiber. This approach has both potential advantages and disadvantages, and it is not clear which approach is preferable.
A clear disadvantage of the NIST system is that it must infer position of the stylus tip based on indirect evidence obtained from the stem, and it must be determined if this indirect information is indeed reliable. An advantage of the indirect approach is that detection of the stem above the hole isolates the measurement from disturbing influences that might be present when imaging inside the hole, such as reflections and diffraction. Also, there may be significant advantages in using the unusual optical detection technique described above (fiber is not directly imaged) with its high-contrast image of a fine line of light. In the final analysis, which system is preferable might well depend on the measurement task and on the environment in which the probe is used.
The deflection mode of operation as described above is the general mode in which the NIST fiber probe is employed. A variation of this deflection mode of operation is the vibration assisted pseudo-deflection mode where the imaging system operates as a 1D roundness instrument. The part is mounted on a precision spindle and the probe is always in contact with the part. To overcome surface adhesion, the fiber is excited into resonance as the part is rotated from one sampling position to another. We have discussed this enhanced capability in [9].
Another mode of operation of the fiber probe is “buckling mode” (also discussed in [9]). This mode of operation is needed when measurements directly along Z are required. When the probe is brought in contact with the part along the Z direction, the fiber buckles on contact and the amount of machine over-travel is determined as in the deflection mode by a prior calibration.
3. Applications
Over the last several years, we have performed several high precision dimensional and form measurements using our fiber probe mounted on the M48 CMM. We have measured size and form of numerous artifacts. We describe a few interesting applications where either the small part size and/or contact force limitations in combination with high accuracy requirements have necessitated measurements to be made with our fiber probe.
3.1 Internal Geometry of a 127 µm diameter, 10.5 mm Long Hole in a Fiber Optic Ferrule
Measuring large aspect ratio micro-features at low uncertainties is a major challenge in dimensional metrology. The internal geometry of a fiber optic ferrule is a typical example; in fact the ferrule was a primary driver for development of the NIST fiber probe. With a nominal diameter of 125 µm, a measurement to a depth of 5 mm inside the fiber optic ferrule represents an aspect ratio (depth to diameter) of 40:1. We have performed several measurements inside a ferrule at these depths at expanded uncertainties under 100 nm (k=2). Some early results are shown in [8].
As an interesting test, we recently attempted to increase our working depth to the entire length of the ferrule, which was 10.5 mm long. This represents an aspect ratio of 82:1, which is a significant advance in micro-feature dimensional metrology capability. In order to achieve the increase in working depth with the same fiber probe (its length remains unchanged), we had to move the point of observation on the stem closer to the fixed end of the fiber, thus reducing the sensitivity of the fiber. In addition to lower sensitivity, there is also the potential problem of non-linear bending due to external forces such as static charge, which could lead to measurement errors.
A simple test of the performance of the probe at extended working depth of 10.5 mm is to measure the hole near its mouth, then invert the hole and measure the same region of the hole by inserting the fiber all the way into the hole. Any disagreement in diameter and form is an indication of potential problems associated with stem-wall interactions or other sources of error that vary with depth inside the hole.
We measured the hole at depths of 3 mm and 4 mm from the mouth, then reversed the hole and re-measured it from the other end at depths of 7.5 mm and 6.5 mm. The diameters obtained at the 3 mm and 4 mm depths were 127.39 μm and 127.37 μm, while the diameter (entering the hole from the opposite end) at the 7.5 mm and 6.5 mm depths were 127.37 μm and 127.35 μm. This excellent agreement indicates that there is little degradation in our measurement capability at large depths.
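The reversal check can be written down directly: a depth measured from end B of the 10.5 mm hole corresponds to a depth from end A via depth_B = 10.5 mm − depth_A, and the diameter differences quantify any depth-dependent error. A minimal sketch using the numbers quoted above:

```python
# Reversal test on the 10.5 mm ferrule: the same physical plane is
# reached from both ends, so depth_B = hole_length - depth_A.
hole_length_mm = 10.5
from_end_a = {3.0: 127.39, 4.0: 127.37}   # depth (mm): diameter (um)
from_end_b = {7.5: 127.37, 6.5: 127.35}

diffs_um = [abs(from_end_a[d] - from_end_b[hole_length_mm - d])
            for d in from_end_a]           # 0.02 um at both planes
```

The 0.02 µm differences sit well inside the stated sub-100 nm (k=2) uncertainty, consistent with the conclusion that performance does not degrade with depth.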
Figure 2(a) shows a photo of the fiber probe entering a ceramic ferrule. Figure 2(b) shows the radial deviations at the 3 mm position measured from one end and the corresponding 7.5 mm position measured from the other end. It is clear that the radial deviation plot shows excellent agreement. Similar agreement in radial deviations was observed for the 4 mm and 6.5 mm positions also.
3.2 Internal Geometry of a Reverse Tapered Hole in a Fuel Injector Nozzle
Mapping the geometry of fuel injector nozzle holes is critical in improving fuel efficiency of automobiles and therefore in reducing harmful emissions. Nondestructive technologies such as X-ray computed tomography are sometimes used, but the uncertainty in these methods is large and difficult to assess without an independent measurement technique. Fiber based contact probes offer a unique solution for this challenging measurement problem.
We have measured several micro-holes in a fuel injector nozzle where the diameter of the hole increased with depth (i.e., reverse tapered hole). The nominal diameter at the mouth (fuel exit location) was 130 μm while the diameter at a depth of 0.8 mm (fuel entry location) was 150 μm. In order to measure this taper, we had a special probe fabricated with a ball of larger diameter (100 μm diameter ball) mounted on a thin stem (50 μm diameter stem). This allowed a larger working range so that when the probe is deep inside the hole, there is no contact between the stem and the edge of the hole near the mouth. Figure 3(a) shows a picture of the fiber probe entering the hole, and Figure 3(b) shows a 3D form plot of the internal geometry of the hole.
The uncertainty in these diameter measurements was under 200 nm (k=2). The largest uncertainty contributor was the poor form/surface finish of the hole: the resulting finite sampling and mechanical filtering introduced errors significantly larger than other sources such as the probing system, machine positioning, and probe ball calibration.
3.3 Area of Knife-Edge Apertures Used in Radiometry/Photometry
A significant advantage of fiber based probes is the extraordinarily low contact forces they exert on the part during a measurement. For instance, our fiber probe exerts a force of less than 5 μN during a typical measurement. This is based on cantilever beam calculations where the nominal geometry and deflections are known.
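The force figure can be sanity-checked with the end-loaded cantilever formula F = 3EIδ/L³ that such calculations use. The material and geometry below are illustrative assumptions (fused-silica modulus, a guessed stem diameter, free length, and over-travel), not the actual NIST probe parameters:

```python
import math

# Illustrative cantilever estimate of probe contact force; the fiber
# dimensions and over-travel below are assumptions, not the actual
# NIST probe parameters.
E = 72e9             # Young's modulus of fused silica, Pa (approx.)
d = 125e-6           # fiber stem diameter, m (assumed)
L = 20e-3            # free length from clamp to tip, m (assumed)
delta = 2e-6         # tip over-travel, m (assumed)

I = math.pi * d**4 / 64.0              # second moment of area of the stem
force_N = 3.0 * E * I * delta / L**3   # end-loaded cantilever: F = 3*E*I*delta/L**3
force_uN = force_N * 1e6               # well under 5 uN for these values
```

Even generous assumptions keep the force in the sub-µN to few-µN range, which is why fiber probes can touch such delicate parts.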
Figure 3 (a) Photo of the fiber probe entering a reverse tapered micro-hole in a fuel injector nozzle, 0.8 mm deep, diameter ranging from 130 μm to 150 μm (b) 3D plot of the data from inside the micro-hole (the data has not yet been compensated for probe radius, hence the x and y axes scales are smaller than expected)
An interesting macro-scale application where such low forces are useful is the determination of the area of knife-edge apertures. Apertures are used as standards in radiometry and photometry where high accuracy area measurement is required. These apertures may have a cylindrical wall, or may have a sharp edge. The diameters range from several millimeters to about a hundred micrometers or smaller. The cylindrical apertures may be measured with a traditional CMM probing system (if the diameter is not too small), but the knife-edge apertures have such delicate edges that they can only be measured optically or using an ultra-low force probing technique.
The uncertainty in optical methods may be small, but the agreement between different optical methods is sometimes larger than the uncertainty, as an inter-comparison study showed [10]. We should point out that while the contact forces are very low, the contact pressure on a sharp knife-edge aperture may potentially be at or near the yield strength of the material. We have attempted to estimate the contact pressure and any deformation due to Hertzian stresses in [11].
We therefore attempted to measure these knife-edge apertures [11] with our fiber probe. Our measurements and analysis suggested that the uncertainties in diameter using the fiber probe are extremely small — of the order of 0.06 µm ($k=1$) to about 0.17 µm ($k=1$) depending on the aperture. The largest uncertainty contributors are the part surface roughness and form; the contribution from the probing system is extremely small in comparison. Further, our measurements and uncertainties are validated on cylindrical apertures which could be measured using well established traditional probing systems on CMMs.
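Because the quantity of interest is the aperture area A = πd²/4, the diameter uncertainty propagates with a sensitivity factor of two: u(A)/A = 2·u(d)/d. A short sketch, with an assumed aperture diameter and a u(d) in the reported range:

```python
import math

# Propagating diameter uncertainty into aperture area: A = pi*d^2/4,
# so u(A)/A = 2*u(d)/d. The aperture size below is an assumed example.
d = 3.5e-3        # aperture diameter, m (assumed)
u_d = 0.1e-6      # standard uncertainty in diameter, m (k=1, within
                  # the 0.06-0.17 um range reported above)
area = math.pi * d**2 / 4.0
u_area_rel = 2.0 * u_d / d    # relative standard uncertainty in area
```

For these assumed values the relative area uncertainty is a few parts in 10⁵, which is the level radiometric aperture standards demand.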
An interesting aspect to these knife-edge aperture measurements was that we used the cylindrical portion of the stem as the probing element instead of the sphere at the end of the stem. This is because a sphere is sensitive to any warp or tilt in the aperture leading to potentially large errors if contact occurs above or below the equatorial plane of the sphere. A cylindrical stem, on the other hand, is fairly uniform in diameter over a short portion; its size and shape can be calibrated using a master sphere. Figure 4(a) shows a picture of a knife-edge aperture and Figure 4(b) shows an example profile of a knife-edge aperture measured with our fiber probe.
3.4 Three-Dimensional Measurements of Micro-Scale Features Such as a Hemisphere and Cone on Meso-Scale Components
Measuring the 3D geometry of microscale features is even more challenging than measuring 2D sections on 3D artifacts for several reasons. The probe diameter and form have to be calibrated over the entire region of the probe that contacts the part, unlike a 2D case where only the equatorial plane of the probe ball is of interest. While probe balls on the styli of traditional probing systems have excellent sphericity, probe balls on micro-scale probes are generally of unknown quality. The probe balls that we utilized on our fiber probe were nearly spheroidal in shape, longer along the Z axis than along X or Y by about 2 µm. The residuals from the best-fit spheroid were within a ±0.25 µm band.
A somewhat related issue that makes the measurement of 3D geometry challenging is that of probe radius compensation. If the probe ball can be assumed to be a perfect sphere and surface normals are along probing direction, probe radius compensation is easy to perform. If the probe ball has arbitrary shape and/or the part feature size is comparable to probe size, probe radius compensation involves detecting surface normals at the point of contact. The surface normals themselves are not necessarily along the direction of machine motion because the flexible probe is free to bend and make contact at any point lateral to the direction of motion. Therefore, the analysis of the data becomes a fairly involved mathematical problem by itself.
A recent application we studied involved a component that comprised a 35 µm nominal radius hemisphere located on a cone whose half-angle was nominally 20°. The region of interest was the top 130 µm portion of the part. After calibrating the probe, we measured more than 500 points on the part. Subsequently, we employed a least-squares best-fit method for probe radius compensation where we recognized the fact that the probe was spheroidal in shape and that in addition, it had some small tilt about the X and Y axes. A photo of the part being measured is shown in Figure 5(a) and the points on the surface after probe radius compensation are shown in Figure 5(b). We have not yet developed uncertainty budgets for this application.
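As a rough sketch of what such probe radius compensation involves, simplified to a perfectly spherical probe (unlike the spheroidal, tilted ball handled in the actual analysis) and synthetic geometry: fit a sphere to the probe-center points by linear least squares, then shift each point inward along the fitted normal by the probe radius.

```python
import numpy as np

def fit_sphere(pts):
    # Linear least-squares sphere fit: |p|^2 = 2 p.c + (R^2 - |c|^2)
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c = sol[:3]
    return c, np.sqrt(sol[3] + c @ c)

def compensate(centers, r_probe):
    # Move each probe-center point onto the part surface along the
    # best-fit sphere normal (external contact: inward by r_probe).
    c, _ = fit_sphere(centers)
    n = centers - c
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    return centers - r_probe * n

# Synthetic check (units: um, all values assumed): probe centers lie
# on a sphere of radius R_part + r_probe around an assumed center.
rng = np.random.default_rng(0)
R_part, r_probe = 35.0, 50.0
center = np.array([12.0, -7.0, 3.0])
u = rng.normal(size=(500, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
centers = center + (R_part + r_probe) * u
surface = compensate(centers, r_probe)
```

Fitting a sphere to `surface` recovers the 35 µm part radius. The real problem is harder: the probe deviates from a sphere by about 2 µm, it may be tilted, and the contact normals must be estimated per point, which is what makes the least-squares analysis a substantial mathematical problem in itself.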
3.5 Micro-Holes In Micro Gears
We recently measured micro-holes in several micro-gears produced by a LIGA process. (LIGA is a German acronym for a lithography-based production process that yields high aspect ratio structures.) The gears varied in thickness from 0.15 mm to about 1 mm, while the central holes were about 0.25 mm in diameter. Again, this application required a small probe size and low contact force; our fiber probe was therefore an ideal probing system in this case.
Our measurements indicated that the hole was generally tapered and that not surprisingly, dirt may be a large contributor to the uncertainty in the measurement. Figure 6 (a) shows a picture of the probe entering the micro-gear and Figure 6 (b) shows a form plot of the data from inside the hole. There are several points deep inside the gear that may possibly be dirt. While we have not quantified an uncertainty in diameter at any depth, our experience in making measurements on similar artifacts suggests that the part surface texture and form, and possibly dirt, will be the largest contributors to the overall uncertainty.
4. Error sources
There are numerous error sources associated with a fiber probe measurement. While some error sources are typical of any CMM measurement, such as machine positioning accuracy, environmental effects, probe diameter and form calibration errors, etc., there are other sources of error that are unique to the fiber probe. The fiber position is detected by imaging the stem; imaging uncertainty, although extremely small, is therefore a contributor. The non-orthogonality of the two imaging axes and their misalignment with the machine’s axes is also critical when measuring small features. We compensate for this error by measuring the magnitude of the misalignment and software-correcting the data.
The flexible fiber stem is susceptible to external forces such as air currents and electrostatics, and the resulting non-linear bending in combination with other error sources such as axis-misalignment may produce errors. We reduce the influence of these effects to some extent by shielding the probing system from air currents and placing Polonium strips near the fiber to dissipate static charge. The fiber itself cannot be coated with metal because the imaging technique we use relies on the glass fiber behaving as a cylindrical lens.
Geometric effects such as probe radius compensation are another unique source of error, particularly when the surface normals are unknown, or when probe imaging axes are misaligned. Mechanical filtering is also a potential error source especially since the probe size is comparable to part feature size. Dirt is a major problem when performing measurements at low forces where contact forces are insufficient to dislocate particles. We have explored these unique error sources in several articles [12-14].
5. Future Directions
Requirements for accurate three-dimensional measurements of microfeatures are increasing in step with improvements in manufacturing techniques. Production techniques such as the LIGA process and other micro-manufacturing methods (micro-milling, grinding, etc.) already produce parts with low surface roughness that could benefit from the intrinsic accuracy of our method. Unique measurement needs arising from cutting-edge applications are additional drivers of our technique, for example tiny thin-walled targets for high-energy physics experiments (such as targets for the National Ignition Facility) and soft foams (aerogels) for space applications.
To meet this emerging need, we must continually improve NIST capabilities; our fiber probe is a step in that direction. We face several problems. As discussed previously, one significant challenge is simply dirt: cleaning dust or debris from a micro-hole with a greater than 40:1 aspect ratio is a problem that would benefit from more study. Furthermore, although our M48 CMM provides an excellent platform for the fiber probe, it will not be able to keep up with rapidly developing micro CMMs unless additional small-scale metrology is retrofitted to the machine. Finally, we expect to interface several new probing systems to our CMM and explore the performance of these probes for various types of measurements. Through such steps we hope to keep pace with evolving industry needs.
6. Conclusion
The NIST fiber probe was developed in response to a growing need to provide high accuracy dimensional and form measurements on a variety of micro-scale features in micro- and meso-scale components. Even large components such as turbines have tiny holes that are sometimes required to be characterized with low uncertainties. The fiber probe provides a bridge between two extremes of measurement capabilities we currently have at NIST: sub-micrometer level accuracy measurements on meso-scale parts on the Moore M48 CMM and sub-nanometer accuracy of micro-scale parts with atomic force microscope (AFM) and other scanning methods. In providing sub-100 nm level uncertainties on micro-scale components, we have attempted to meet a growing industry need in the area of micro-manufacturing.
The fiber probe is currently in a transition phase from research to application. We have over the last few years measured numerous artifacts that were brought to our notice by customers and colleagues. We have highlighted some of those applications in this paper to illustrate the capability of our probe and more importantly, to highlight a new calibration service that NIST can now offer industry.
Acknowledgements
We are grateful to numerous collaborators who have provided assistance through the years: Dr. Wolfgang Holler, Jeffrey Anderson, Dr. Ted Doiron, Dr. Steve Phillips, Eric Staniford and Dr. Toni Litoria from NIST, Fedor Timofeev from WTT Inc., S. Vemuri and A. Potluri, former graduates from the University of Hartford, and Dr. Marcin Bazga and Dr. Shane Woody from InsituTec Inc. We also thank Dr. N. George Orji at NIST and Dr. Richard Seugling at Lawrence Livermore National Laboratory for reviewing this paper.
References
1. A. Weckenmann, G. Peggs and J. Hoffmann, “Probing systems for dimensional micro- and nano-metrology,” Measurement Science and Technology, Vol. 17, No. 2, pp. 504-509, 2006.
2. A. Weckenmann, T. Estler, G. Peggs and D. McMurtry, “Probing Systems in Dimensional Metrology,” CIRP Annals Manufacturing Technology, Vol. 53, No. 2, pp. 657-684, 2004.
3. InsituTec Inc. MicroTouch sensor, http://www.insitutec.com.
4. Mitutoyo UMAP probe, http://www.mitutoyo.com.
5. Zeiss P25 CMM, http://www.zeiss.com.
6. Gannon XM probe by Xpresse Precision Engineering, http://www.xpresse.com.
7. Werth fiber probe WFP 3D, http://www.werth.de.
8. B. Muralikrishnan, Jack Stone and John Stoup, “Fiber deflection probe for small hole metrology,” Precision Engineering, Vol. 30, No. 2, pp. 154-164, 2006.
9. B. Muralikrishnan, Jack Stone and John Stoup, “Enhanced capabilities for the NIST fiber probe,” Proceedings of the Annual Meeting of the ASPE, 2006, Monterey CA.
10. Martoni Litoria, Joel Fowler, Jürgen Hartmann, Nigel Fox, Michael Stock, Annick Razet, Boris Khlevnov, Erkki Ikonen, M Machacs and Kostadin Doychinov, “Final report on the CCPR-52 supplementary comparison of area measurements of apertures for radiometry,” Metrologia, Vol. 44, No. 1a, Technical Supplement, 2007.
11. B. Muralikrishnan, Jack Stone and John Stoup, “Area measurement of knife-edge and cylindrical apertures using ultra low force contact fiber probe,” Metrologia, Vol. 45, pp. 281-289, 2008.
12. Jack Stone, B. Muralikrishnan and John Stoup, “A fiber probe for CMM measurements of small features,” Proceedings of SPIE, Vol. 5879 – Recent Developments in Traceable Dimensional Measurement III, San Diego, CA, 2005.
13. B. Muralikrishnan and Jack Stone, “Fiber deflection probe uncertainty analysis for micro holes,” NCSLI Measure, Vol. 1, No. 3, pp. 38-44, 2006 from Proceedings of the NCSL International Conference 2005, Washington DC.
14. Jack Stone, Bala Muralikrishnan, C. Sahay, “Geometric effects when measuring small holes with micro contact probes,” (to be published.)
B. Muralikrishnan (firstname.lastname@example.org), J. Stone, J. Stoup, Precision Engineering Division, National Institute of Standards and Technology, Gaithersburg, MD.
C. Sahay, Department of Mechanical Engineering, University of Hartford.
Establishment of a Programmable Calibration System for Accurate AC Current Measurements at NIS, Egypt
Mamdouh Halawa and Amal Hasan
Electrical Metrology Department, National Institute for Standards (NIS) Egypt
This paper describes the establishment of a compact programmable system for accurate ac current measurements at the National Institute of Standards (NIS), Egypt. The system consists mainly of a fabricated thermal current converter (TCC) together with dedicated hardware and flexible software. The new system covers a wide range, from 5 mA to 20 A, at frequencies from 10 Hz to 100 kHz. In addition, the system has been used to measure specific characteristics of the TCC, such as the time response, the dc reversal errors, and the transfer errors at different frequencies. The design of the fabricated TCC was re-modified and the TCC was then re-calibrated at PTB, Germany, to improve the system’s reliability.
1. Introduction
Alternating current is commonly measured by comparing an unknown ac current with a known dc current based on the fact that alternating and direct currents have the same effective value when they produce identical amounts of power in a pure resistance. For instance, the root mean square (rms) values of ac voltage are most accurately measured by comparing the heating effect of an unknown ac signal to that of a known, stable and traceable dc signal by using thermal voltage converters (TVC’s). These converters normally consist of one or more thermoelements, possibly in series with a range resistor. The thermoelement (Fig. 1) is composed of one or more thermocouples arranged along a heater structure. These thermocouples are used to detect the temperature along the heater structure while applying a timed sequence of an ac signal and both polarities of a dc signal. By comparing the output of the thermoelement (using a highly sensitive nano-voltmeter) due to the applied ac against the average of both polarities of the dc, the unknown ac quantity may be determined in terms of the known dc quantity. The very small difference between the responses of the ac and dc signals on the thermoelement is normally known as ac–dc difference, $\delta$. This ac–dc difference is determined using [1].
$$\delta = \frac{V_a - V_d}{V_d}$$ \hspace{1cm} (1)
where:
$\delta$ is the measured ac–dc difference.
$V_a$ and $V_d$ are the magnitudes of the ac and dc quantities required to produce the same thermocouple output.
For most multi-range thermal transfer devices as in Fig. 2, it is more practical and economical to use a single element in conjunction with an appropriate shunt resistor. The resistance of the shunt is chosen so that the parallel combination of the shunt and heater resistor represents a current divider for the current to be measured. The correction factors for the frequency response of the shunt, of course, must be known and applied if accurate measurements in some technical applications are to be made [2].
On the other hand, the resistance elements of these shunts do not need to be very accurate because the shunt’s ac–dc difference at various frequencies is used as a correction factor for the ac current measurement, relative to the accurate dc current source [3]. The ac–dc difference, $\delta$, is given in the test report of the transfer standard in parts per million (ppm) as:
$$\delta = \frac{I_{AC} - I_{DC}}{I_{DC}} \times 10^6$$ \hspace{1cm} (2)
where:
$\delta$ = ac–dc difference for the TCC.
$I_{AC}$ = rms value of ac current
$I_{DC}$ = average of the absolute values of dc current applied in positive and negative direction across the transfer standard.
In practice, the national laboratories evaluate the ac–dc differences at various currents and frequencies to cover most scientific and industrial applications.
The converted value of the ac-dc difference, $\delta$, is applied to the ac-dc transfer standard as a correction factor as:
$$I_{AC} = I_{DC}(1 + \delta)$$ \hspace{1cm} (3)
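Equations (2) and (3) amount to a one-line correction. As a sketch (the δ value and current level below are illustrative, not taken from a real test report):

```python
def corrected_ac(i_dc, delta_ppm):
    """Eq. (3): apply the reported ac-dc difference (in ppm) to the
    accurately measured dc current to obtain the rms ac current."""
    return i_dc * (1.0 + delta_ppm * 1e-6)

# e.g. a 5 A point with a reported ac-dc difference of 12.2 ppm
i_ac = corrected_ac(5.0, 12.2)
```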
At NIS Egypt, new techniques for performing precision ac-dc difference measurements are regularly evaluated and investigated. The previous manual system was very time consuming and a source of significant error due to operator mistakes and other effects. It therefore became essential to improve the calibration services of NIS through the establishment of a new programmable system. The system consists of a programmable instrument and software for controlling the measurement procedures. The system can also be used to calibrate the functions of ac current calibrators and highly sensitive digital ac ammeters. Furthermore, different characteristics of the thermal transfer standards can be evaluated using the new system. For instance, the standard deviation of repeated measurements, the dc reversal difference, the time constants and the value of the exponent “n” of the thermocouple’s heater are all easily evaluated by the system.
The system is easy to operate. The selection of the applied ranges at definite frequencies is software-controlled to cover a frequency range from 10 Hz to 100 kHz for currents from 5 mA to 20 A. The system has been established as a coherent structure to disseminate traceability for the calibration services. A new precise thermal current converter (TCC) was fabricated at NIS, then re-modified and calibrated at PTB, Germany, to improve the capability of this work. It is also planned to use this TCC in the accurate measurement of the secondary-coil parameters of current transformers. This will improve the calculation of power losses on transmission lines and, in turn, of power-station efficiencies. The new TCC was also tested and characterized using the new automated system (as explained in section 4).
2. New Thermal Current Converter
Currently, the highest accuracy of ac current measurement at NIS is based on using single junction thermal converters (SJTCs) at the 5 mA current range. The higher current ranges are provided by thermal converters (TCs) connected in parallel with current shunts (Fluke A40 & A40A or Holt model 11). Such combinations are called thermal current converters (TCCs) and are used for highly accurate ac current measurements up to 20 A. A new thermal current converter (Fig. 3) was fabricated at NIS to serve as a working standard for the 5 A range. It consists mainly of a single junction thermal voltage converter connected in parallel with a Fluke A40 current shunt, and was re-calibrated at frequencies from 10 Hz to 1 MHz at PTB, Germany.
Table 1. Specifications of the new TCC
| Parameter | Specification |
|--------------|-----------------------------|
| Type of TE | STD |
| Heater resistance | 95 Ohms |
| Thermocouple resistance | 8 Ohms |
| Current shunt | Fluke A40 |
| Output voltage | 6 mV nominally at the rated input |
| Insulation | Not more than 100 V DC |
| Range | 5 A |
| Dimensions (length*width*height) | TVC: (104*72*85) mm; current shunt: (50*35*85) mm |
| Weight | TVC: 597 g; current shunt: 378 g |
3. System Setup
The hardware of this automatic system is configured by the operator according to the specific calibration needs. Several types of instruments (Fig. 4) can be used in this system, acting either as reference instruments or as units under test. The system consists of a programmable source (a Wavetek 9100 calibrator) for both alternating and direct currents; two thermal current converters (TCCs), one acting as the reference standard while the other is the unit under test; a highly sensitive digital multimeter (Fluke 8508A) to measure the dc current signal; two similar digital multimeters (DMMs), HP 3458A, connected across the output of each TCC to record the output emfs of the thermoelements; and a GPIB controller.
The controller drives the TCCs and records the readings on the DMMs through the IEEE-488 bus cables. The readings from the DMMs are used for data analysis to compute the ac-dc difference of the TCCs. Mainly, the system performs three tasks according to the state of the three switches S1, S2 and S3.
Task #1: determination of the accurate value of dc current signal.
Task #2: determination of the ac-dc difference of the TCC ($\delta$)
Task #3: determination of the accurate value of the ac current signal and of some important TCC characteristics such as the dc reversal error, time constant, n test, etc.
Figure 4. Setup of the compact automatic calibration system of the thermal current converters.
Obviously, we can consider the system as a multifunction automatic program. Each function needs definite positions of the working switches to perform a certain task as follows:
A. Determination of the Accurate Value of the DC Current Signal
According to Eq. 3, it is necessary to determine the actual value of the dc current signal. The system could perform this task when S1 is closed while S2 and S3 are open. In this state, the actual value of the output dc current from the calibrator can be measured accurately by using the high sensitive digital multimeter, Fluke 8508A. The setup of this task is illustrated in Fig. 5.
Figure 5. Determination of the accurate value of dc current.
The main advantage of this procedure is the possibility of canceling any effect due to the drift in the value of the dc signal. This means that the value of the dc signal will be instantaneously evaluated before any measurement of the ac signal. In addition, the uncertainty contribution due to the short term stability of the dc signal can be also cancelled from the whole uncertainty budget.
B. Determination of the AC-DC Difference of the TCC ($\delta$)
When S2 is closed while S1 and S3 are open, as illustrated in Fig. 6, the system will be responsible for determination of the ac-dc difference of the tested TCC (as explained in section 4).
C. Determination of the Accurate AC Current and Some Important TCC Characteristics
Once the accurate value of the dc current signal is determined successfully (Task #1) and the ac-dc difference of the test TCC is also determined (Task #2), the system could be finally configured as shown in Fig. 7 (S1 is opened while S2 and S3 are closed). This setup aims to calibrate the ac current function of the calibrator at different ranges of currents and frequencies. In addition, some characteristics of the tested TCC could be characterized and evaluated accurately through the same setup (more details in section 5).
4. AC-DC Difference of TCC
The relationship of the output emf, $E$, as a function in the heater current, $I$, may be expressed as:
$$E = KI^n \quad (4)$$
where $E$ is the output emf of the TE, $I$ is the heater current, $K$ varies somewhat with large changes in heater current but is constant over a narrow range where nearly equal ac and dc currents are compared and $n$ is usually 1.6 to 1.9 at rated heater current.
The relationship between a small change in the thermocouple heater current ($\Delta I$) and the corresponding change in output ($\Delta E$) is expressed as:
$$\frac{\Delta I}{I} = \frac{\Delta E}{n \cdot E}$$ \hspace{1cm} (5)
As stated above, the ac-dc difference, basically, is defined as:
$$\delta = \frac{I_a - I_d}{I_d},$$ \hspace{1cm} (6)
where $I_a$ is the alternating current, and $I_d$ is the average of both forward and reverse dc current; i.e,
$$I_d = \frac{I_+ + I_-}{2}$$ \hspace{1cm} (7)
where, $I_+$, $I_-$ are current values which give the same output emf of the TE.
After completing the sequence of four steps by applying ac, dc+, dc- and ac signals successively, to minimize the effects of drift in the TCC outputs, the ac-dc difference of the test TCC ($\delta$) is then evaluated from the following relation [4]:
$$\delta = \left( \frac{E_{as} - E_{ds}}{n_s \cdot E_{ds}} \right) - \left( \frac{E_{at} - E_{dt}}{n_t \cdot E_{dt}} \right) + \delta_s$$ \hspace{1cm} (8)
where $\delta_s$ is the ac-dc transfer difference of the standard TCC, $E_{as}$ and $E_{at}$ are output emfs of the standard and the unknown TCCs for ac current, respectively. The mean emf values for forward and reverse dc currents are taken as $E_{ds}$ and $E_{dt}$, respectively and the factor n is determined at the beginning of the test for both TCCs. The test runs for 20 determinations of ac-dc difference at the same current and frequency, then the average is calculated and printed.
The rated current, the device warm-up time, the settling time, the test frequencies and the corresponding ac-dc difference of the reference standard TCC are entered in the main screen of the program. At the beginning of the test, for each device, the program determines the exponent n. The value of the exponent n is level dependent and is determined by measuring the change in the output emf when the nominal dc input current is varied by ± 0.05%. The results are then printed and/or saved on the main screen of the system.
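The reduction of the four-step sequence via Eq. (8) is a short calculation; the sketch below assumes all emfs are in consistent units, and the invented values in the usage note are purely illustrative:

```python
def acdc_difference(E_as, E_ds, E_at, E_dt, n_s, n_t, delta_s):
    """Eq. (8): ac-dc difference of the test TCC. E_as/E_at are the ac
    emfs and E_ds/E_dt the mean forward/reverse dc emfs of the standard
    (s) and test (t) converters; delta_s is the standard's known
    ac-dc transfer difference."""
    return ((E_as - E_ds) / (n_s * E_ds)
            - (E_at - E_dt) / (n_t * E_dt)
            + delta_s)
```

A quick sanity check: when both converters respond identically to ac and dc (E_a = E_d for each), the result collapses to delta_s, as it should.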
5. Additional System Capabilities
A. Stability Tests
In this type of measurement, it is important to be sure that the current sources are quite stable. Therefore, two programs, entitled “Stability of the DC Source” and “Stability of the AC Source,” were built into our software driver to investigate the short-term stability of the current sources. At a nominal setting, 120 readings are recorded over a time interval of about 1 hour. The deviation of the measured signal (in ppm) from the nominal value is defined as:
$$\frac{\Delta I}{I} = \frac{I_1 - I_2}{I_2} \times 10^6$$ \hspace{1cm} (9)
where, $I_1$ is the instantaneous measured value, and $I_2$ is the nominal value. The operator could decide that the AC and DC current sources have acceptable stability if and only if the plot shows that all readings are scattered within the upper and the lower limits of ± 2σ, where σ is the standard deviation of the observations.
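The acceptance rule can be sketched as follows. The readings are synthetic, and the ± 2σ criterion is implemented here as scatter about the mean deviation, which is one reasonable reading of the rule stated above:

```python
import statistics

def stable(readings, nominal):
    """Deviations in ppm per Eq. (9); accept only if every reading
    lies within +/- 2 sigma of the mean deviation."""
    dev_ppm = [(r - nominal) / nominal * 1e6 for r in readings]
    mu = statistics.mean(dev_ppm)
    sigma = statistics.stdev(dev_ppm)
    return all(abs(d - mu) <= 2.0 * sigma for d in dev_ppm)
```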
B. “n” Test
One of the most important tests that has to be performed is that of determining the factor n of the TCCs. From Eq. (5):
$$n = \frac{\Delta E / E}{\Delta I / I}$$ \hspace{1cm} (10)
where ΔE is the measured change in output emf for a small change ΔI in applied current, $I$ is the nominal current of the TCC, and $E$ is the measured emf at the nominal current. The value of ΔI was programmed to be ± 5 percent of the nominal current, $I$. Sufficient time was allowed after each current change for the TCC to reach its rated emf value. The TCC was tested from 45 to 110 percent of the rated current. Each value of n is computed as the average of two determinations (+5% and -5%) at any given current. The fitted equation for the variation of n as a function of the heater current can also be plotted in the final results sheet.
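The two-sided determination of n can be sketched as follows; the closing check feeds the routine an ideal heater law E = K·I^n to confirm that averaging the +5% and -5% determinations largely cancels the curvature error of the finite-difference estimate:

```python
def n_exponent(E_nom, E_plus, E_minus, dI_frac=0.05):
    """Eq. (10), averaged over the +dI and -dI determinations:
    n = (dE/E) / (dI/I)."""
    n_plus = ((E_plus - E_nom) / E_nom) / dI_frac
    n_minus = ((E_minus - E_nom) / E_nom) / (-dI_frac)
    return 0.5 * (n_plus + n_minus)

# Ideal heater with n = 1.76 (the value measured for the new TCC)
E = lambda i: i ** 1.76
n_est = n_exponent(E(1.0), E(1.05), E(0.95))
```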
C. DC Reversal Difference (DCRD)
DC reversal difference is generally defined as the percentage difference between the values of the forward and reverse dc current when they both produce the same output emf of the thermoelement. This value is not necessarily constant but increases in some cases as the heater current of the thermoelement is lowered [2]. The value of DCRD was actually measured as:
\[
DCRD = \frac{2}{n} \left( \frac{(E_+ - E_-)}{(E_+ + E_-)} \right)
\]
where \(E_+\) and \(E_-\) are the TE output emfs with forward and reverse heater currents of equal magnitude, and \(n\) is given by (10). The program was designed to determine the DCRD as a function of the heater current. In this test, the heater current is changed from 50% to 105% of its rated value in 5% increments and the output emf of the TE is recorded for each heater current value. The calculated dc reversal difference values are then immediately plotted against the input heater current via the Excel worksheet assigned to this function.
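The DCRD relation above is a one-liner; the emf values in the example are invented for illustration:

```python
def dc_reversal_difference(E_fwd, E_rev, n):
    """Fractional dc reversal difference from the TE output emfs for
    forward and reverse heater currents of equal magnitude."""
    return (2.0 / n) * (E_fwd - E_rev) / (E_fwd + E_rev)

# e.g. 6.002 mV forward vs 5.998 mV reverse with n = 1.76
dcrd_ppm = dc_reversal_difference(6.002e-3, 5.998e-3, 1.76) * 1e6
```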
D. Frequency and Current Dependence
The TCC, as a transfer standard, is used for accurate ac current measurements in primary metrology laboratories. Due to thermoelectric and electromagnetic effects, however, the TCC exhibits a residual difference in its response to ac and dc, which is a function of both the test current and the applied frequency. The automated program is also used to determine the ac-dc difference of TCCs at a set of frequencies from 10 Hz to 100 kHz for several fixed currents. From the relation between the ac-dc differences and frequency at constant current, the TCC frequency dependence factor, \(K_f\), can be evaluated; \(K_f\) is the change in the transfer error, in ppm, per 1 Hz change in frequency.
Similarly, the TCC is tested with the same program to determine its ac-dc difference at current levels from 50% to 105% of its rated current at a constant frequency. This allows the current dependence factor, \(K_c\), of the TCC at a given frequency to be evaluated; \(K_c\) is the change in the transfer error, in ppm, per 1 A change in current. A summary of these values is shown in Tables 2 and 3.
| Parameter | Value |
|-----------|-------|
| “n” | 1.76 |
| DCRD, max. value (at 50%) | -357 ppm |
| DCRD, min. value (at 95%) | -8 ppm |
| DCRD, actual value (at 100%) | 39.6 ppm |
| \(K_f\) | -0.0085 ppm/Hz |
| \(K_c\) at 55 Hz | -0.24 ppm/A |
| \(K_c\) at 1 kHz | 2.7 ppm/A |
Table 2. Results of some measured parameters of the TCC.
| Frequency | Time constant \(\tau\) | Steady state time S.S. |
|-----------|-------------------------|------------------------|
| DC | 2.86 | 140 |
| 55 Hz | 3.56 | 110 |
| 1 kHz | 2.61 | 80 |
Table 3. Results of the time responses.
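The frequency-dependence factor \(K_f\) is simply the slope of the ac-dc difference against frequency; a least-squares sketch with invented (f, δ) points constructed to have the order of magnitude reported in Table 2:

```python
import numpy as np

def freq_dependence(freqs_hz, deltas_ppm):
    """K_f in ppm/Hz: least-squares slope of ac-dc difference
    versus frequency at constant current."""
    slope, _intercept = np.polyfit(freqs_hz, deltas_ppm, 1)
    return slope

# Invented points lying exactly on delta = 10 - 0.0085*f (ppm)
f = np.array([1000.0, 5000.0, 10000.0, 20000.0])
kf = freq_dependence(f, 10.0 - 0.0085 * f)
```

\(K_c\) is obtained the same way, with current on the abscissa instead of frequency.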
6. Results
The new TCC was first fabricated and tested at NIS using the new programmable automated system at the rated current of 5 A; the results are listed in Table 4.
| Frequency (Hz) | AC-DC Difference (ppm) | Standard Deviation (ppm) |
|----------------|-------------------------|--------------------------|
| 10 Hz | 24.7 | 3.6 |
| 20 Hz | 16.8 | 2.7 |
| 30 Hz | 5.0 | 4.7 |
| 40 Hz | 12.7 | 2.4 |
| 55 Hz | 12.2 | 3.4 |
| 100 Hz | 12.4 | 3.9 |
| 400 Hz | -2.3 | 4.4 |
| 1 kHz | 10.2 | 2.8 |
| 10 kHz | -72.1 | 4.3 |
Table 4. AC-DC Difference of the new TCC measured in NIS
Fortunately, there was an excellent opportunity to send the new TCC to PTB, Germany, as part of a scientific mission between NIS and PTB. The design of the TCC was re-modified to use a cylindrical frame instead of the box frame (Fig. 8). New short wires and good soldering material were also used to improve the fabrication of the new TCC. The TCC was then re-calibrated using the very accurate calibration system of PTB (Fig. 9); the results are listed in Table 5 and plotted in Fig. 10.
| Frequency (Hz) | AC-DC Difference (ppm) | Standard Deviation (ppm) |
|----------------|-------------------------|--------------------------|
| 10 Hz | 28 | 0.7 |
| 20 Hz | -6.6 | 0.4 |
| 55 Hz | -0.75 | 0.7 |
| 120 Hz | -2.2 | 0.2 |
| 200 Hz | -3.7 | 1 |
| 500 Hz | -6 | 0.8 |
| 1000 Hz | -14.3 | 0.8 |
| 2000 Hz | -21.2 | 0.6 |
| 5000 Hz | -40 | 0.6 |
| 20 kHz | -72 | 0.9 |
| 50 kHz | -165 | 0.4 |
| 100 kHz | -369 | 1 |
Table 5. AC-DC Difference of the new TCC measured in PTB
7. Uncertainty Statement
The expanded uncertainty of this type of measurement was reduced by a factor of about three after the improved automated calibration system was introduced [6]. The uncertainty values of this work are calculated in accordance with National Institute of Standards and Technology (NIST) guidelines [7].
The uncertainty assigned to the measurements is divided into Type A uncertainties (evaluated by statistical means over 20 repeated measurements) and Type B uncertainties (evaluated by other means); these uncertainties are then combined as a root sum of squares (RSS). For ac-dc measurements, the Type B uncertainties generally dominate [8]. The reported values are usually the average of 20 determinations of the transfer standard’s ac-dc difference. For instance, the expanded uncertainty of the SJTVC at 55 Hz ($k = 2$, for a 95% confidence level) is given in Table 6.
| Source of Uncertainty | Probability Distribution | Uncertainty Value (± ppm) |
|-----------------------|--------------------------|---------------------------|
| Calibration certificate | Normal (Type B) | 2.5 |
| Tee connector | Rectangular (Type B) | 1 |
| Room temperature change | Rectangular (Type B) | 1 |
| Repeatability (for 20 minutes) | Normal (Type A) | 3.4 |
| Expanded uncertainty | Normal (k = 2) | 8.8 |
Table 6. Uncertainty budget of the SJTVC at 55 Hz.
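The RSS combination can be sketched as follows, treating each tabulated entry as a standard uncertainty in ppm. The result, about 8.9 ppm, is close to the reported 8.8 ppm; the small difference presumably reflects rounding or the divisors applied to the rectangular components in the original budget.

```python
import math

def expanded_uncertainty(components_ppm, k=2.0):
    """Root-sum-of-squares combination of standard uncertainty
    components, expanded by the coverage factor k."""
    u_c = math.sqrt(sum(u * u for u in components_ppm))
    return k * u_c

U = expanded_uncertainty([2.5, 1.0, 1.0, 3.4])   # the entries of Table 6
```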
8. Conclusion
A new automated system for highly accurate AC current measurements has been established at NIS, Egypt. The new system can be used for the automatic calibration of the ac-dc current transfer and the ac current calibrators. A new working standard thermal current converter has been fabricated and tested using the new system. The design of the new TCC has been re-modified then re-calibrated at PTB, Germany. The results show that the new TCC exhibits good time constants, reasonable steady state times and small dc reversal difference. The current dependence behavior at 55 Hz and 1 kHz is also evaluated.
References
1. Thomas E. Lipe, “A Reevaluation of the NIST Low-Frequency Standards for AC-DC Difference in the Voltage Range 0.6-100 V” IEEE Trans. Instrum. Meas., Vol. 45, No. 6, Dec. 1996.
2. Earl S. Williams “The Practical Uses of AC-DC Transfer Instruments,” NBS Technical Note 1166, October 1982.
3. Fluke Calibration: Philosophy in Practice, Principles of AC-DC Metrology, 2nd ed., May 1994.
4. Mamdouh Halawa, Ahmed Hussein, “Improvement and Confirmation of a New Automatic System for AC-DC Calibration at NIS, Egypt,” Symposium of Metrology, Mexico, October 2006.
5. K. J. Lentner, Donald R. Flach, “An Automatic System for AC/DC Calibration,” IEEE Trans. Instrum. Meas., Vol. IM-32, pp. 51-56, March 1983.
6. Mamdouh Halawa, “Establishment of AC Voltage Traceability at NIS, Egypt,” Proceedings of the NCSL Workshop and Symposium, USA, July 2007.
7. B. N. Taylor, C. E. Kuyatt, “Guidelines for evaluating and expressing the uncertainty of NIST measurement results,” NIST Technical Note 1297, U.S. Government Printing Office, 1994.
8. J. L. Ying, S. W. Chua, V. K. S. Tan, “Automatic Calibration System for AC/DC Transfer Standards,” Proceedings of the NCSL Workshop & Symposium, 1996.
Mamdouh Halawa, Electrical Metrology Department, National Institute for Standards (NIS), Giza, Egypt, tel 0020105402742, fax 0020233867451, email@example.com
Calibration Management in the ISO/IEC 17025 Accredited Facility
Bernard Williams, M.E.
Prime Technologies, Inc.
Whether your quality practices are guided by the need for compliance with the FDA, ISO 9001 or ISO/IEC 17025, a calibration management software solution can be an important central element of your integrated quality system. Properly implemented, the right software will provide benefits that include stronger compliance controls, daily productivity gains, improved communication and control, and the ability to move to a paperless management system. Even if you are fortunate enough to have one of the better computerized maintenance management systems, such as SAP PM, DataStream, JD Edwards or Maximo, it will fall short when it comes to understanding or addressing the unique demands of the metrologist.
Calibration is not just another planned maintenance activity. Normal planned maintenance helps to ensure resource availability and reliability but calibration management does more, and is often more technically rigorous. Communication and the management of acceptable process or device specifications are critical. A professional calibration management solution will readily cope with these demands. The instruments, gages, devices and systems that require calibration are as varied as their functionality. Their respective signal types or engineering units of measure are virtually unlimited. Input and output accuracies may vary over the device range. They may be based on percent of range, percent of reading, percent of reading range, plus or minus, or a combination. Remember that while you will want to document the manufacturer’s specifications you will also want to define the performance or control specification as the required performance objectives for the item you are planning to manage.
Ideal test point values with allowable variances will aid the technician in the faster accomplishment of the assigned calibration. But remember, the system should automatically recalculate the acceptable performance levels based on the actual test value and immediately inform the technician of an unacceptable result. Subtle performance variations can directly impact product quality.
Regardless of whether your company produces food, beverages, chemicals or pharmaceuticals, the recipes and processes that produce them have been developed to achieve a desired result. Ingredients, their quantities, process temperature, pressure, etc., must all be reliably controlled. If the instruments controlling the process are not operating correctly, it is easy to understand that the product may fail to deliver the expected result, be it taste or efficacy. The risk of noncompliance, quality excursions and even civil liability is higher than with other maintenance activities.
When you are evaluating your calibration management needs, asset management is a good place to start. The system record should be capable of documenting the complete life cycle of the asset, be it an instrument, device, gage or system. The scope of the information managed depends upon the nature of your relationship with the asset. Simply put, if your business is only responsible for providing metrology or calibration services, your records may not include the establishment and documentation of the instrument specifications for acquisition and validation that the owner of the asset is obligated to maintain.
As a calibration service provider you will often be presented with your client’s asset detail and calibration specifications. Although the extent of your responsibility is governed by your service agreement, with a good calibration management system the level of detail and the ease of information retrieval and subsequent evaluation will help enrich the quality of your professional services. The identification of issues less obvious than outright calibration failure can include trend analysis, potential problem recognition or opportunities for improving calibration practices. All represent opportunities for you to provide better service and to reinforce your relationship with your clients.
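Trend analysis of the kind mentioned above can be as simple as fitting the drift in as-found error across successive calibrations. The sketch below uses a plain least-squares slope; the function names and the single-number error history are illustrative assumptions, not a description of any particular product.

```python
def drift_per_interval(errors):
    """Least-squares slope of calibration error vs. calibration number.

    `errors` holds the as-found error recorded at each successive
    calibration, oldest first. A nonzero slope suggests drift.
    """
    n = len(errors)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(errors) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, errors))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

def calibrations_until_limit(errors, limit):
    """Rough count of intervals before drift reaches the upper tolerance limit."""
    slope = drift_per_interval(errors)
    if slope <= 0:
        return None  # not drifting toward the upper limit
    return (limit - errors[-1]) / slope

# Example: as-found errors of +0.01, +0.02, +0.03 units against a +0.05 limit
# suggest roughly two more intervals before the device drifts out of tolerance.
remaining = calibrations_until_limit([0.01, 0.02, 0.03], limit=0.05)
```

Spotting this kind of steady drift before the device actually fails is exactly the sort of "issue less obvious than calibration failure" that lets a service provider recommend a shorter interval or a repair proactively.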
Calibration software should support complete details on the asset including the manufacturer’s specifications as well as the process or performance specifications. It is the logical center of activity records related to the asset; approvals, validation, maintenance and calibration. Look for software that will allow you to document all the characteristics regardless of complexity. Keep in mind it is not an uncommon necessity to establish a variety of planned activities for a device, and the solution you select should accommodate this.
Calibration, validation and maintenance procedures must be documented, approved and managed. While many of us already have some form of electronic quality procedure management, the ability of the calibration management software to access these documents or, alternatively, to allow the user to publish the procedures within the calibration software application is a valuable feature. In order to demonstrate compliance, the system’s ability to reflect these documents in the records for all assets and their related activities is imperative. The more complete and accessible your documentation, the easier it will be for those who will use it. Your ability to demonstrate clear asset/procedure relationships and ease of access will go a long way towards instilling a higher degree of confidence in an auditor that your technicians follow your procedures. It doesn’t matter whether the auditor is from your internal QA staff as required per 21 CFR § 820.22, from your customer, or from a regulatory authority; if you can’t readily identify and locate the procedure, they are likely to find it difficult to accept that your busy technicians do so. By using an electronic system to manage your asset/procedure relationships, you will easily be able to locate procedures on demand instead of searching through years of records. If you want to avoid an FDA Form 483 observation or a subsequent warning letter, be prepared to demonstrate well-documented procedures and practices in compliance with 21 CFR § 211.68 and 21 CFR § 820.72.
Technicians, managers and administrators must be trained, their competency to execute the procedures documented and their periodic retraining proactively managed. Incorporation of this in the system you select can greatly simplify management of this often overlooked quality and compliance element. These requirements are increasingly being carefully audited and are clearly specified for ISO/IEC 17025 accreditation or FDA 21 CFR § 211.25 compliance.
The test standards utilized to perform calibrations, validations and testing must be managed and controlled as rigorously as all other assets. Additional software functionality to automatically communicate suitability for use, the uncertainty contribution of the standard and reverse traceability are all features that will provide additional benefit and efficiency to your integrated quality system. Standard suitability is a consideration in any calibration activity; easing the burden on the technician and the manager through prequalification and proactive system control will eliminate invalid calibrations and wasted effort while simultaneously enforcing best practices and quality policies.
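One common form of the suitability-for-use prequalification mentioned above is a test uncertainty ratio (TUR) check: the tolerance of the unit under test divided by the uncertainty of the standard, with a 4:1 minimum being a widely used rule of thumb. The sketch below is a simplified illustration under that assumption, not a statement of any mandated policy.

```python
def tur(uut_tolerance, standard_uncertainty):
    """Test uncertainty ratio: device tolerance over standard uncertainty."""
    return uut_tolerance / standard_uncertainty

def standard_is_suitable(uut_tolerance, standard_uncertainty, minimum_ratio=4.0):
    """True if the standard meets the required accuracy ratio.

    A 4:1 minimum is a common rule of thumb; the software would
    enforce whatever ratio your quality policy specifies.
    """
    return tur(uut_tolerance, standard_uncertainty) >= minimum_ratio

# A +/-0.1 psi gauge checked against a standard good to +/-0.02 psi
# gives a 5:1 ratio, which passes the default 4:1 policy.
ok = standard_is_suitable(0.1, 0.02)
```

Enforcing such a check before a calibration begins, rather than discovering the mismatch at review, is what eliminates the invalid calibrations and wasted effort described above.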
The greatest gains in productivity and quality control are realized with the implementation of paperless calibration techniques. A calibration management software implementation at a major pharmaceutical manufacturer targeted a paperless calibration environment as a prime objective and reported calibration productivity improvements in excess of 200 percent and nearly 100 percent on-schedule calibration activities without any reduction in performance confidence. Users are immediately and reliably presented with the correct performance specifications and the approved procedure for the item under test. Positive management and quality practices control the schedule as well as the process. The system, not the technician, does the calculations necessary to determine acceptable or unacceptable performance.
When selecting calibration management software, look for a solution that will not only allow you to manage simple input-to-output (direct) correlation routines but also provide features to execute more complex performance algorithms. Simple devices like gages or thermometers may represent the majority of what we may be called on to calibrate. However, a complex device or system will typically take a disproportionate amount of time to calibrate and presents more opportunities for technician error. Users of professional calibration management systems are offered standard or custom test procedures that prompt the technician for the reading to be taken and then automatically evaluate the recorded result, determining performance acceptability regardless of complexity.
No matter what your current practices are, you should look ahead to likely future requirements. One example is the determination of calibration uncertainty. Already required for ISO/IEC 17025 accreditation by bodies such as A2LA, in a world of constant quality improvement and competition your company may adopt the practice in order to present a qualification or differentiator to your clients. An automated tool to assist in managing this more complex practice will greatly simplify things for you and your associates.
The calibration program should be capable of automatically evaluating all the uncertainty contributors, including the coverage factor, and determine the combined budget, consistent with ISO Guide 98: “Guide to the expression of uncertainty in measurement (GUM).” In selecting the right solution you should be able to configure the software to automatically initiate a wide range of functions based upon the results of the calibration.
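For uncorrelated contributors, the GUM-style combination described above reduces to the root-sum-of-squares of the standard uncertainties, multiplied by a coverage factor (k = 2 gives roughly 95 percent confidence). A minimal sketch of that arithmetic, assuming all contributors are already expressed as standard uncertainties in the same units:

```python
import math

def combined_standard_uncertainty(contributors):
    """Root-sum-of-squares of uncorrelated standard uncertainties (per the GUM)."""
    return math.sqrt(sum(u ** 2 for u in contributors))

def expanded_uncertainty(contributors, k=2.0):
    """Expanded uncertainty U = k * u_c; k = 2 gives ~95% coverage."""
    return k * combined_standard_uncertainty(contributors)

# Contributors: reference standard, device resolution, repeatability.
u_c = combined_standard_uncertainty([0.003, 0.004, 0.0])  # approximately 0.005
U = expanded_uncertainty([0.003, 0.004, 0.0])             # approximately 0.010
```

A real budget would also convert each contributor from its stated distribution (rectangular, normal, etc.) to a standard uncertainty before combining; the software automates that bookkeeping.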
With a successful calibration result, automatic program functions will satisfy the scheduled calibration and advance to the next date based on the planned interval. If a work order management utility is being utilized as well, the task will be satisfied. Should, on the other hand, the device be found out of calibration or fail, the options get more interesting. They can include electronic notification of the failure to appropriate staff as defined by the classification, criticality or other established business logic, the automatic launching of system generated repair/replace requests, the initiation of quality/compliance incident reports that will become the basis of a corrective and preventive action (CAPA) investigation and more.
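The branching described above, advancing the schedule on a pass and triggering notifications and a CAPA-style incident on a failure, can be sketched as simple dispatch logic. The record fields and handler names here are illustrative assumptions, not a real system's API.

```python
import datetime

def process_result(record, passed, notify, open_incident):
    """Apply post-calibration business logic to one asset record.

    `record` is a dict with 'asset_id' and 'interval_days'; `notify`
    and `open_incident` are callables supplied by the host system
    (e-mail, work-order, or CAPA modules).
    """
    if passed:
        # Satisfy the scheduled calibration and advance to the next due date.
        record["next_due"] = (datetime.date.today()
                              + datetime.timedelta(days=record["interval_days"]))
        record["status"] = "in service"
    else:
        # Failure: flag the asset, alert responsible staff, open an incident
        # that can seed a CAPA investigation.
        record["status"] = "out of tolerance"
        notify(f"Calibration failure on {record['asset_id']}")
        open_incident(record["asset_id"])
    return record

# Example: a failed calibration routes a notice and opens an incident.
messages, incidents = [], []
rec = process_result({"asset_id": "PT-101", "interval_days": 180},
                     passed=False,
                     notify=messages.append,
                     open_incident=incidents.append)
```

In practice the failure branch would consult the asset's classification and criticality to decide who is notified and whether a repair/replace request is generated automatically.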
In either circumstance, the recording of actual calibration data will simplify evaluation and analysis. The ability to readily retrieve and evaluate historical calibration results can save time for the engineer and quality professional. With manual and paper-based systems, the inquiry and review of historical data can easily represent more than 35 percent of the reviewer’s time.
Another element important to quality reporting and improvement is communication. A solution that supports flexible paperless routings for change control, approvals, quality and compliance incidents can alone justify the investment in a new solution. The better solutions address communication and notification comprehensively. Alerts, reminders, notifications and incident reporting should all include the ability to communicate with defined responsible participants external to the application itself.
Finally, no matter how attractive the promised features and benefits may appear, you should be certain to consider the qualifications of the solution provider. I started this discussion with the suggestion that the right system can bring significant benefits to your organization. Whatever solution you select, keep in mind you will be dealing with critical business practices. Take the time to look into the vendor’s experience, support services, track record of success and quality practices. Are they prepared to withstand your scrutiny of their design and testing practices? Inquire about the maturity of the product. If you are working in an FDA-regulated environment, ask if the vendor can demonstrate validation qualifications. Ask for references and contact them. It’s important to work with a flexible vendor willing to address your special needs, but look carefully at what the vendor can actually demonstrate and weigh it against your critical requirements. Remember, anyone can promise vaporware and unfortunately many do.
Bernard Williams, firstname.lastname@example.org, tel 800-440-7501 or visit www.procalv5.com for more information.
About the author
Bernard Williams, M.E., is director of sales engineering and consulting for Prime Technologies Inc. Mr. Williams is an executive with more than thirty years of engineering and management experience with leading-edge technology companies. He has worked in such diverse fields as power generation, holography, analytical chemical systems and process automation. For the last ten years, he has worked with Prime Technologies, Inc. as senior technology consultant and as a contributor to the development of their ProCalV5 Computerized Calibration & Maintenance Management Solution.
The Marketing and R & D Interface
Abbie Griffin\(^1\)
John R. Hauser\(^2\)
October 1991
Revised February 1992
WP # 48-91
Sloan WP# 3350-91-MSA
\(^1\) Assistant Professor of Marketing and Production Management, The University of Chicago
\(^2\) Kirin Professor of Management, Sloan School, MIT
Forthcoming, *Handbook: MS/OR in Marketing*, Gary L. Lilien and Jehoshua Eliashberg, Eds., (Elsevier Science Publishers: Amsterdam, The Netherlands), 1992.
© 1992 Massachusetts Institute of Technology
Sloan School of Management
Massachusetts Institute of Technology
38 Memorial Drive, E56-390
Cambridge, MA 02139-4307
To succeed in today's marketplace, many corporations must engender cooperation between the marketing and the R&D (Research and Development) functions.
It wasn't always this way. In earlier times, most families were self-contained. Various family members cooperated to produce yarn, weave cloth, sew garments, build furniture, forge utensils, and even build their own living quarters. When people gained experience they became craftsmen, experts with the skills to produce goods that could be sold to others in order to pay for other consumable goods and services. But still the expertise could be centralized in a single person who knew (or developed) the technology of production, the process of production, and the means to market the goods to others. For example, the blacksmith knew where to get the raw materials, how to light and maintain the forge fire, and how to shape the metal. Customers sought out the blacksmith and explained their needs. He asked questions, understood their needs, and produced the product. If he did these tasks well, he lived well. If he failed at any of these tasks, he starved. The marketing and the R&D functions were integrated in the activities of the blacksmith. Market feedback was quick, obvious, and persuasive.
Even today, in entrepreneurial firms, the producer-inventor combines the knowledge of what is needed with how to produce it. But as the firm grows the functions of marketing and of R&D become specialized. Scientists are hired to maintain and develop the technology, marketing specialists are hired to sell the product or, in some cases, to talk to customers and communicate product benefits. Over time these groups grow apart, each expert at their own function, but less aware and sometimes quite critical of the other's contribution. As integration and communication among these critical functions decreases, their abilities to produce successful products decreases and the firm suffers.
This chapter addresses the issues of marketing and R&D cooperation, integration, and communication. It illustrates how empirical research and OR/MS methods have contributed to a better understanding and management of the interface. In section 1 we examine cross-functional responsibilities to understand better what information and tasks are shared. In section 2 we cite scientific studies which suggest that communication and cooperation between marketing and R&D are critical success factors of product policy. In section 3 we seek to understand why communication and cooperation are difficult. In section 4 we review academic models of the marketing/R&D interface. In section 5 we present approaches to encourage communication and cooperation. In section 6 we turn to a relatively new technique, Quality Function Deployment (QFD), which has proven to be successful in improving the marketing/R&D interface. We present scientific evidence of QFD's effect on communication (section 7), compare the documented short-term and long-term benefits and costs of QFD (section 8), and discuss the marketing implementation of QFD (section 9). Section 10 summarizes this chapter.
1. CROSS-FUNCTIONAL RESPONSIBILITIES
In 1967 Lawrence and Lorsch (p. 11) defined the level of integration between two corporate functions as:
...the quality of the state of collaboration that exists among departments that are required to achieve a unity of effort by the demands of the environment.
To understand the need for cooperation, we must understand the tasks which require cooperation "to achieve a unity of effort by the demands of their environment." Table 1 is a partial list of tasks in which the outcome is superior if marketing and R&D each provide input. Some of these tasks are core tasks upon which the success of the enterprise rests, for example, setting new-product goals and understanding customer needs. These are listed as shared responsibilities because they usually require cooperation throughout the period of the task and the combined expertise of both functional groups\(^1\). Other tasks are listed as marketing-dominant or R&D-dominant. In these tasks most of the expertise to complete the task resides within one group; the other functional group is called upon for consultation, usually in a discontinuous manner during critical periods in the task.
Of the three categories in table 1, shared-responsibility tasks are the most difficult to manage because maintaining the balance of inputs over the duration of the task is extraordinarily difficult, yet key to new-product success (Cooper 1983). Function-dominant responsibilities are easier to manage. Because the primary expertise resides within one functional group, that group can marshal the resources (time, money, and people) to accomplish the task. Work relationships are often well-established and the task fits within the normal scheme of functional effort. On the other hand, the dominant function may have difficulty obtaining input from the contributing function if the contributing function does not share in the rewards and recognition for successful completion of the task.
For example, we have listed "trouble-shooting problems customers have with current products" as a marketing-dominant responsibility. These tasks require effort at random intervals over the lifetime of a product (unless the product has a serious flaw), occurring most frequently when someone uses the product in ways for which it was not intended. Customer support (marketing) is usually called upon to solve the problem and often can do so. However, if the problem pushes the frontier of product use, marketing may not have the expertise to solve the problem so R&D is called in. However, this request may be disruptive to research on the next generation of the product. (Problems rarely occur at convenient times and places.) Furthermore, marketing may blame R&D for the problem occurring in the first place and R&D may resent marketing for being unable to handle the problem. Thus, seeds of discontent between the functions are sown. On the other hand, such frontier-pushing, leading-edge users can be the source of new-product ideas (von Hippel 1986) and should not be overlooked. Like so many other tasks, the right rewards, incentives, and recognition are necessary to ensure that the task is completed in a timely manner with the appropriate inputs from both functions.
Consider the R&D-dominant task, "identifying and fixing design flaws for future releases of products."

\(^1\)In the past, "understanding customer needs" was usually delegated to the marketing function. Recently corporations have come to recognize the centrality of this task: it is a task that requires expertise from both groups. We expand upon new techniques to address this task later in this chapter.

SHARED RESPONSIBILITIES
- understanding customer needs
- setting new product goals
- matching technological solutions to customer needs
- establishing the core benefit proposition for new products
- next generation product improvement
- resolving engineering-design and customer-need tradeoffs
MARKETING DOMINANT RESPONSIBILITIES
- finding and assessing new applications for products and technologies
- trouble-shooting problems customers have with current products
- training new users
- producing accurate product literature and manuals
- selecting advertising claims
R&D DOMINANT RESPONSIBILITIES
- establishing long-term R&D directions
- keeping abreast of competitive developments
- identifying and fixing design flaws for future releases of current products
Table 1. Examples of Tasks Requiring Marketing and R&D Cooperation

The bulk of the effort in this task resides in the "fixing" portion of the task. It is normally R&D's responsibility to fix these flaws. But the earlier these flaws are identified and cataloged, the earlier R&D can seek solutions. Very often, the customer input which identifies flaws comes in through the marketing function. Not only can marketing identify these flaws through their customer contacts, they can provide priorities (which bugs cause the worst grief for the customer) and, potentially, they can identify user solutions (von Hippel 1978). While it is R&D's responsibility to fix the flaws, it is everyone's responsibility to satisfy the customer. This task is important and requires coordination because each function controls only a part of the solution.
Finally, consider the shared responsibility of "establishing the core benefit proposition for a new product." The core benefit proposition (CBP) is the short list of strategic benefits that the product provides to customers and an indication of how the product provides these benefits (Urban and Hauser 1980). It includes the basic benefits that define the category of products as well as the unique benefits that the new product provides better than competition. In essence, the CBP defines the new product. Clearly, a good CBP requires marketing input to determine
the critical benefits that customers demand as well as R&D input to determine how to provide the benefits. Selecting the right benefits and solutions from the set of potential benefits and solutions requires close cooperation throughout the new-product development process.
The complexity of developing the CBP is illustrated in figure 1, which is a synthesis by Dougherty (1989) from Bonnet (1986), Clark (1985), Cooper (1983), and Roberts (1988). We have modified it slightly by listing, at the bottom of the figure, the formal labels for a sequential product-development process as described in Urban and Hauser (1980). Notice the similarity between the informal stages of Dougherty's description of the processes as observed and the formal stages of Urban and Hauser's pedagogical summary. The key difference is the complex interactions inherent in the observed process.
Figure 1 depicts an ideal model of the process of "needs-linking" that leads to a CBP. As drawn, each stage of new-product development from opportunity identification to design to testing to launch requires the coordination of inputs from both marketing and R&D. Naturally, as the product is developed inputs are combined and many phases are iterated. The final process is often much more integrative than figure 1. (For example, see Dougherty 1987.) However, at minimum, figure 1 illustrates that both marketing and R&D have roles in all phases of new-product development.
Technology and market choices are neither independent nor static; they cannot be analyzed separately and they evolve as new solutions become available, as customer needs evolve, as competitors offer their new products, and as governmental and environmental constraints change. But if the business opportunity has been identified and if the product meets user needs then it is more likely to succeed (Rothwell et al. 1974, Cooper and Kleinschmidt 1986).
Table 1 provides examples of the many responsibilities that require marketing and R&D interactions. In each of these interactions money, materials, information, and technical expertise must flow across the boundaries between the functional areas (Ruekert and Walker 1987). We now explore scientific evidence suggesting that such flows are critical to the success of new-product development.
2. COOPERATION, WHEN IT OCCURS, LEADS TO SUCCESS
Cross-functional cooperation takes time, resources are stretched thinly, and in encouraging cross-functional communication we run the danger of allowing amateurs (marketing in technology, R&D in customer relations) to limit the effectiveness of experts in getting a job done. On the other hand, intuition suggests that when the tasks of marketing and R&D are performed separately in a corporation, cooperation between the two groups enhances new-product success. Few, if any, of the tasks in table 1 could be accomplished successfully without inputs from and cooperation between both functional groups. Furthermore, figure 1's needs-linking process requires harmonious, effective communication. Each group understands and respects the other and provides the other group with the information they need to complete their responsibilities in a timely manner.
In table 2 we have summarized some of the extensive scientific evidence that relates to cooperation between marketing and R&D. In each case the researchers either support or are consistent with the hypothesis that cooperation enhances success\(^2\). More importantly, the evidence is strong, consistent, common to a variety of methodologies, and seemingly applicable in both services and products and in both consumer and industrial markets. Few, if any, management principles are based on such persuasive evidence.
To help the reader appreciate this body of research we give three examples. One example is based on in-depth, ethnographic studies of a relatively few projects; another is based on a large-sample survey, and a third is based on a variety of longitudinal studies over a period of ten years.
Dougherty (1987) studied pairs of successful and unsuccessful new-product projects at industrial firms, consumer firms, and service firms by a combination of retrospective interviewing and examining the paper trails of the projects. She then combined her qualitative data to develop a three-point scale to measure the amount of communication during the new-product development. She measured this communication for nine topics. Four topics related to the user (product use, politics of the buying center, customer needs, and delivery); four topics related to marketing strategy (customer segments, competitive activity, marketing actions, and prices), and the final topic measured communication about the physical characteristics of the product or service.
---
\(^2\)Of course, cooperation can be formal or informal, but as Feldman and Page (1984, p. 53) state "More than many other activities in a company, product planning is a 'people process.' Its multi-layered, cross-functional, interdisciplinary character places great demands on the human skills of creativity, negotiation, and perseverance." Moore (1987, p. 12) also finds smaller divisions allow informal contact which facilitates new-product development.
Figure 2 presents one of the head-to-head comparisons in Dougherty's data. In this case, as in other pairs in her data, there was sporadic communication among failed project team members and uniformly strong communication among team members involved in the successful project. Of course, there might be a reverse causality (people get excited about successes and talk more) and her data is based on a small sample of qualitative judgments, but it is suggestive of how communication enhances success.
Figure 3 illustrates the type of evidence generated by the large-scale surveys in table 2. In this study, Cooper (1984a, 1984b) surveyed 122 organizations on 66 strategic variables. He factor analyzed the 66 variables to obtain 19 strategic dimensions and then cluster analyzed the organizations on those dimensions to obtain five basic organizational strategies. Figure 3 plots the centroids of the clusters on two axes: the success rate and the percent of company sales from new products. Some firms achieved a high success rate by taking few chances, but they did not succeed in the sense of generating sales. On the other hand, some firms, particularly high-technology firms, were able to achieve sales at the cost of many failures. The one group of firms that had consistent success, in terms of both the percent of new products that made it in the marketplace and the sales generated by those new products, was the group that balanced the marketing and technology functions through better communication and cooperation. Their products had a "high degree of fit and focus."

| RESEARCHER(S) | SAMPLE | TYPE OF FIRM | EVIDENCE (Partial list) |
|---------------|--------|--------------|-------------------------|
| Cooper (1983) | 58 projects | Industrial | Projects which balance marketing and R&D inputs have a higher rate of success. |
| Cooper (1984) | 122 firms | Manufacturing | Management strategies which balance marketing and R&D have a greater percentage of new-product successes and a greater percentage of their sales coming from new products. |
| Cooper (1987) | 106 projects | Financial services | Synergy (e.g., fit with the firm's expertise, management skills, and market research resources) was the number one correlate of success. (Correlation = 0.5.) |
| de Brentani (1989) | 125 firms | Manufacturing | Market synergy and technological synergy are both significantly related to success. |
| Hise, O'Neal, Parasuraman, and McNeal (1990) | 276 projects | Financial services, manufacturing services, transportation, communication | Sales, market share, and reduced costs are correlated with communication between functions. (Correlation with sales and market share = 0.38; correlation with reduced cost = 0.29.) |
| Gupta, Raj, and Wilemon (1985) | 167 firms | High-technology | Lack of communication was listed as the number one barrier to achieving integration among marketing and R&D. |
| Moenaert and Souder (1990) | 107 R&D managers | Products and services | Integration of functions is positively related to innovative success. |
| Pelz and Andrews (1966) | 1311 scientists and engineers | Scientists and engineers | Positive relationship between the amount of interaction and performance. |
| Pinto and Pinto (1990) | 72 hospital teams | Health services | Strong relationship between cross-functional cooperation and the success (perceived task outcomes and psychosocial outcomes) of the project. (Correlation = 0.71.) |
| Souder (1988) | 56 firms | Consumer and industrial | The greater the harmony between marketing and R&D, the greater the likelihood of success. |
| Takeuchi and Nonaka (1986) | 6 projects | Consumer and industrial | Cross-fertilization and self-organizing teams led to success. |

Table 2: Examples of the Scientific Evidence that Suggests that Communication among Marketing and R&D Enhances New-Product Success
Finally, table 3 illustrates that communication is more than just talk. In a ten-year study of 289 projects, Souder (1988) provides evidence that interfunctional harmony (communication and cooperation, not just communication) is a strong correlate of new-product success. For example, he found that the groups really needed to cooperate on the project -- too much social interaction at the expense of professional interaction was harmful because it prevented objective criticism.
Research investigating the relationship between R&D/marketing communication and product-development success is extensive and virtually unanimous in its support. Additional research has demonstrated that communication and integration, while desirable, are not always easy to achieve due to a number of inherent barriers between the groups. We now explore the barriers that make communication difficult and that discourage cooperation between the functions of marketing and R&D.
3. BARRIERS TO COMMUNICATION AND COOPERATION
There are many barriers to successful communication between marketing and R&D. As Moenaert and Souder (1990, p. 96) summarize: "empirical research indicates that disharmony between marketing and R&D is the rule, rather than the exception." We now explore some of the reasons for this disharmony, starting with the people themselves and moving on to firm-related barriers to cooperation and communication.
**Personality**
When Saxberg and Slocum (1968) studied marketing and R&D personnel in American corporations, they found inherent personality differences between the groups. These differences are summarized in table 4. Some of these differences are stereotypes, many may have changed since 1968, and many may be unique to the American culture, but these differences do caution us that there may be some social distance between marketing and R&D. These differences may be due to the influence of education, the corporate reward systems, the differing goals, and/or self-selection by people in one field or the other. But the stereotypes, when they exist, can be formidable barriers between the groups. Even if the stereotypes are not based in fact, if one or the other group believes in them, this belief alone can become a barrier to mutual understanding.
For example, Gupta et al. (1986b), in a sample of marketing and R&D managers at 167
high-technology firms, found that the two groups were more socio-culturally similar than table 5 would suggest -- they found differences mainly in time orientation. Furthermore, this difference held whether or not marketing and R&D were well-integrated in the firm. On the other hand, Gupta et al. found that cultural barriers were two of the four most frequently stated barriers to cooperation. This suggests that the true barrier may be a perceptual barrier of stereotypes rather than a barrier of actual cultural differences. In either case, the barrier exists.
Personality or stereotype barriers may be the most difficult of all communication barriers to reduce or eliminate (Block 1977). Thus, the firm should seek means by which the groups can work together effectively. Among other things, this implies mechanisms for gaining understanding of persons in the other function and their capabilities and building trust between persons in each function.
**Cultural Thought-worlds**
Marketing and R&D personnel often differ in their training and background. Marketing professionals are drawn primarily from business schools, often with a prior liberal arts background, while R&D professionals are drawn primarily from engineering and science schools. The training in business schools focuses on general problem solving combining data
and intuition to make decisions that lead to profitable performance of the firm. The training in engineering schools is technological problem solving with a goal of developing better solutions to technical problems. The training in schools of science focuses on the scientific method of hypothesis generation and testing. These "world views" are reinforced in the corporate cultures of the functional departments (Dougherty 1987, Douglas 1987).
| Dimension | Marketing | R&D |
|----------------------------|-----------|-------|
| Time Orientation | Short | Long |
| Projects Preferred | Incremental| Advanced |
| Ambiguity Tolerance | High | Low |
| Departmental Structure | Medium | Low |
| Bureaucratic Orientation | More | Less |
| Orientation to Others | Permissive | Permissive |
| Professional Orientation | Market | Science |
| Loyalty to Profession | Less | More |
Table 5. Marketing and R&D Differences (adapted from Lorsch and Lawrence 1965, Gupta, et. al. 1986b, and Dougherty 1987.)
Lawrence and Lorsch (1967) first publicized the cultural differences between marketing and R&D. Table 5 summarizes the differences that they and subsequent researchers have documented. (See also Gupta et al. 1986, Ruekert and Walker 1987, and Souder 1987.) Basically, marketing prefers a short time horizon of incremental projects. They focus on the market, can accept a high degree of ambiguity and bureaucracy, and feel a professional loyalty to the firm. By contrast, R&D prefers a long time horizon of advanced projects. They focus on scientific development with a loyalty to their scientific profession. They have a low tolerance for ambiguity and bureaucracy. Naturally, these generalizations do not apply to every individual professional or even to every marketing or R&D department, but they do indicate trends that researchers have been able to identify.
The result of these differences is that marketing and R&D run the danger of developing self-contained societies, or thought-worlds, in which they reside. Even though each function works for the same corporation with the same overall corporate goals, the lens through which each function interprets those goals differs. More importantly these thought-worlds mean that marketing and R&D may have difficulty understanding one another's goals, solutions, and tradeoffs. To work together they must understand and appreciate the other's thought-world.
**Language**
As separate thought-worlds develop, language barriers arise as well. Marketing uses
its own set of technical terms and R&D uses different technical terms. For example, when an automobile customer says he wants quick acceleration, the engineer may interpret this in terms of the time it takes the car to go from 0 to 60 mph, or the horsepower of the engine, or the gear-ratio. The consumer might be really concerned with merging into traffic from a yield sign or making a light at rush hour. However, if the appropriate context for quick acceleration is not made clear by marketing, the engineer will add some (possibly inappropriate) context to the customer comment, for example, acceleration during highway driving. When this misunderstanding occurs, customer needs and engineering solutions can disconnect even though each group thinks they are talking about exactly the same thing. As another example, consider word processing on a personal computer. The computer manufacturer might think in terms of millions of instructions per second (MIPS) or disk access time, while the software developer might think in terms of keystrokes to accomplish a task. On the other hand the user might be more concerned with being able to remember how to do the task, how input is made (keyboard vs. function keys vs. cursor vs. mouse), and being able to see the screen respond as fast as the user can type. The subtle differences in language often imply vastly different solutions and make the difference between a successful and an unsuccessful project.
Even the level of detail used by each group varies. For example, the marketing department of a liquid dishwashing detergent manufacturer may find that consumers want the product to "clean my dishes better." Such a statement may be adequate for an advertising strategy, but R&D wants to know what kind of dishes, under what conditions, what dirt has to be removed, and in what type of water. Different solutions can be developed if the consumer judges "clean" by spots on the dishes, spots on the glasses, a shine on the dishes, the fragrance after the wash, the amount of bubbles during the washing process, or, and this is a true story, the size and shape of the bubbles. In automobiles, marketing may discover that the consumer wants easy-to-use controls, but R&D needs much more detailed information if they are to make the necessary tradeoffs in placing controls for lights, turn signals, wipers, radio, heater, air conditioner, cruise control, windows, and locks. If each group does not understand customer needs at the level of detail that they need to do their job, they become frustrated with the communication process.
**Organizational Responsibilities**
Organizational barriers arise due to different task priorities and responsibilities, from measures of success which do not support integration (short-term profit, current market share, etc.), and from a lack of top management support to reward integration. While these factors are clearly under top management control, the possibility of organizational change in and of itself can create barriers. Middle managers who have risen to where they are under the previous criteria now have to learn to play by different rules to continue to rise in the organization. Given that they have become proficient and successful under the old system, many are reluctant to change to new operating rules, philosophies, or informal goals. The confusion and angst can cause resistance to any "outsiders" and thus reduce cooperation among marketing and R&D.
**Physical Barriers**
Physical barriers frequently isolate marketing from R&D in U.S. firms. Long distances between the groups make face-to-face communication inconvenient and create delays in decisions. Meetings must be planned; serendipitous information transfer or clarification in the halls or around the water fountain and coffee machine does not occur. A seminal study (Allen 1986, chapters 8 and 9) suggests that the probability that two people communicate one or more times per week drops off rapidly with the distance between their offices\(^3\). At a distance of 10 meters, for example, the probability drops to less than 10%.
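Allen's estimated relationship (given in footnote 3) is easy to evaluate directly. The sketch below simply plugs a few office separations into the published equation to reproduce the drop-off described above; the function name is our own.

```python
# Allen's (1986) estimated relationship between office separation and the
# probability of at-least-weekly communication:  P(c) = 0.522 / S + 0.026,
# where S is the separation in meters. This is a direct evaluation of the
# published equation; only the function name is ours.

def communication_probability(separation_m: float) -> float:
    """Probability that two people communicate one or more times per week."""
    return 0.522 / separation_m + 0.026

for s in (2, 5, 10, 30):
    print(f"{s:>3} m -> P(c) = {communication_probability(s):.3f}")
```

At 10 meters the equation gives roughly 0.078, consistent with the "less than 10%" figure cited in the text.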
This physical separation can become acute. It is not uncommon for R&D facilities to be located on "campuses," which are actually in different cities than the marketing offices. For example, at General Motors the marketing offices are in downtown Detroit or at the divisional headquarters (e.g., Flint, MI) while the R&D facility is located in Warren, MI. At a major computer company, the marketing offices are located in a northern state while the R&D effort is headquartered in a southern state. In other firms, time zones separate marketing from R&D. Such separation decreases chance meetings and social encounters which can be extremely productive in creative problem solving and mutual understanding.
Physical isolation of groups also exacerbates other barriers to communication. Isolation solidifies the separate thought worlds, encourages short-cut jargon-filled language development, and heightens perceptions of personality differences.
The general outcome of the personality, cultural, language, organizational, and physical barriers between marketing and R&D is that communication and cooperation are difficult to achieve. As Kotler (1991, p. 699) summarizes: "marketers see the R&D people as impractical, long-haired, mad-scientist types who don't understand business, while R&D people see marketers as gimmick-oriented hucksters who are more interested in sales than in the technical features of the product. These stereotypes get in the way of productive teamwork." Such misunderstanding can lead to strong "Not-Invented-Here" (NIH) attitudes, where each function favors the data and work generated within its own group. If we contrast these barriers with the scientific evidence that communication between marketing and R&D is one key success factor in developing new products and generating sales, we see that such barriers must be addressed, and either eliminated or circumvented, if the firm is to be profitable in the long term.
The next section reviews academic models which have been proposed for studying marketing/R&D interactions by both management scientists and behavioral researchers. Section 5 reviews approaches to enhance communication and cooperation, and section 6 reviews one approach, developed in industry, which has been successful in enhancing the success of new-product development by improved communication and cooperation.
---
\(^3\)The study is based on 512 researchers in 7 U.S. laboratories. Allen estimated the following equation, \(P(c) = 0.522/S + 0.026\), where \(P(c)\) is the probability that two persons communicate one or more times per week and \(S\) is the separation between their offices in meters.
4. ACADEMIC MODELS OF THE MARKETING/R&D INTERFACE
Managing the marketing/R&D interface changed dramatically in the mid-1980s. Firms started feeling intense competitive pressures to reduce new-product-development cycle times and manufacturing lead times and to increase their success rate for new product introductions. During this time frame many firms experimented with flatter management structures, cross-functional teams, and cross-discipline management processes. By changing the way corporations manage the marketing and R&D functional groups these innovations have led to new perspectives in the academic literature. By the same token, research prior to this period, which assumes hierarchical corporate structures with separate (and sometimes isolated) functional groups, is being reassessed in light of these interfunctional innovations. This section focuses on the academic research which recognizes the interfunctional perspective. We review research on management within the functions only briefly.
Academic research since 1986 on the marketing/R&D interface has been performed by several different disciplines and has produced three different types of work. By far the largest body of work, reviewed in section 2 of this chapter, studies the balance between marketing and R&D and the level of communication or integration between marketing and R&D. It is primarily published in the management-of-technology and product-development literatures, with some articles appearing in the organizational-studies literature.
In addition, several researchers have attempted to model all or part of the interaction process. Both behavioral as well as management science approaches have been taken to model the interactions and relate them to either the success of their outputs or the effectiveness of the interactions. This section of the chapter reviews both efforts.
**Management Science Approaches**
The major published contributions by management science methods address optimization within a function. The models and simulations assume that any information or inputs required from another function will be available and provided when they are needed. These models are important, in part, because they suggest which information will be needed to optimize a function. Naturally, we expect that the models will evolve and improve based on the consideration of cooperation among functions.
R&D Management Science\(^4\). Lucas (1971) started a major stream of R&D-oriented management science research by investigating whether a firm should invest in an R&D project and, if so, what the optimal spending policy should be. He
\(^4\)We thank an anonymous reviewer for bringing this stream of literature to our attention.
developed four abstract models for evaluating and controlling investment levels in an R&D project over a time horizon, where the objective is to find the conditions that maximize the project's expected present value. His models vary whether the completion time is known or a random variable and whether costs are constant or variable but controllable. This work has been extended by Aldrich and Morton (1975) to allow the possibility of time-dependent returns using continuous-time dynamic-programming methods. Mehrez (1983) also extended the original ideas to allow for changes in returns from the project, to measure the expected value of perfect information (which allows analysis and solidification of the market uncertainties affecting returns), and to define a set of optimal spending policies for a number of different objective functions.
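As a rough illustration of the flavor of these models, the sketch below solves a heavily simplified discrete-time version of the invest-or-abandon decision. The functional form and all parameter values are our own toy assumptions, not Lucas's actual models.

```python
# Toy discrete-time analogue of a Lucas-style R&D investment problem (our
# own simplification, for illustration only): each period the firm pays
# cost c to continue; the project completes that period with probability p,
# paying off V; future values are discounted by factor d. The continuation
# value W solves  W = -c + d * (p * V + (1 - p) * W),  giving
#     W = (d * p * V - c) / (1 - d * (1 - p)).
# The firm should invest only if W > 0. All numbers below are hypothetical.

def project_value(c: float, p: float, V: float, d: float) -> float:
    """Expected present value of continuing the R&D project to completion."""
    return (d * p * V - c) / (1.0 - d * (1.0 - p))

# A project worth funding: research cost is low relative to expected payoff.
print(project_value(c=1.0, p=0.2, V=10.0, d=0.95))  # positive -> invest
# The same project with a costlier research program.
print(project_value(c=3.0, p=0.2, V=10.0, d=0.95))  # negative -> abandon
```

The richer models in the literature allow spending (and hence p) to be controlled each period and returns to vary over time, but the same expected-present-value logic drives the optimal policy.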
A later stream of research models R&D investment strategies in terms of how much a firm should invest in R&D, given that competitors are also investing in a particular innovation. Reinganum (1982) models how investment levels depend upon the availability of perfect patent protection and the number of firms simultaneously targeting a particular innovation. Under perfect patent protection her model suggests (1) that the pace of technology investment increases as the payoffs to the innovator grow and (2) that each firm increases its investment as more firms join the R&D race. See further analyses in Lee and Wilde (1980), Loury (1979), Kamien and Schwartz (1972), and Park (1987). Finally, a new stream of research, which has begun to use agency theory to set incentives for the manufacturing/marketing interface (Porteus and Whang 1991), might be extended to the marketing/R&D interface.
Marketing Science. There is a stream of research called product optimization. See Hauser and Simmie (1981), Green and Krieger (1992), Kohli and Sukumar (1990), and Schmalensee and Thisse (1988). This research begins with an assumption that R&D has already decided on a basic approach and must select from a set of features a product that has the best profit potential. A model of market response is estimated (either a perceptual map as in Hauser and Simmie, a conjoint model as in Green and Krieger, or a demand model as in Schmalensee and Thisse) that allows the researcher to predict, for every feature combination, the demand for the product. In some cases competition is considered explicitly while in others the analyst must perform a number of "what-if" analyses. Because the features are specified it is possible, in theory, to determine the production cost for each feature combination. Then, by searching over feature combinations based on the models of demand and cost, the analyst can recommend the feature combination with the best profit potential. These models do not address how R&D decides on which features to include in the first place, nor do they provide guidance on how to encourage creative new-product solutions.
For example, consider the case of a firm designing a spirometer. (A spirometer measures lung capacity.) Some of the basic needs of spirometry users (physicians and hospitals) are that the spirometer provides "convenient-sized output," "good printout quality," and "easy-to-interpret diagnostic information." In 1990, spirometers used either a 4 ¼" thermal printer or an 8½" x 11" letter-quality printer. If the design were limited to these options, conjoint analysis could present users with the choice of these options (vs. other features) and quantify the importance of a letter-quality printer relative to a thermal printer. If the cost of providing each
were known, the researcher could balance sales and cost to select the better printer. However, such an analysis is appropriate only if the new-product team has already limited the set to these features.
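A minimal sketch of the search-over-features logic described above, using the printer choice as the example. The market shares, price, and unit costs are hypothetical numbers invented for illustration, standing in for the demand and cost models the analyst would actually estimate.

```python
# Sketch of product-optimization's final step: given a demand prediction and
# a unit cost for each feature option, search the options for the best profit.
# All numbers here are hypothetical illustrations, not data from the
# spirometer case; a real analysis would estimate demand from a conjoint or
# perceptual-mapping study.

market_size = 10_000   # hypothetical annual unit sales in the category
price = 1_500.0        # hypothetical selling price per spirometer

options = {
    # feature option: (predicted market share, incremental unit cost)
    "4.25-inch thermal printer":     (0.30, 40.0),
    "8.5x11 letter-quality printer": (0.45, 220.0),
}

def profit(share: float, unit_cost: float) -> float:
    """Profit contribution of a feature option under the assumed demand model."""
    units = share * market_size
    return units * (price - unit_cost)

best = max(options, key=lambda feature: profit(*options[feature]))
for feature, (share, cost) in options.items():
    print(f"{feature}: profit = ${profit(share, cost):,.0f}")
print("recommend:", best)
```

With real products the search runs over full feature combinations rather than a single attribute, but the structure is the same: predict demand per combination, attach costs, and pick the combination with the best profit potential. As the text notes, this only works once the feature set has been limited in advance.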
Later in this chapter we review a method to decide on which features to consider. An important aspect of this method is that it deals with the more basic needs, such as "easy-to-interpret diagnostic information." By dealing with the more basic needs, the new-product team is encouraged to consider creative solutions. For example, when Puritan-Bennett introduced its revolutionary Renaissance spirometry system, it made it possible for the physician to use the printer that was already in the office. See Hauser (1992) for details on this case.
These marketing science models are quite effective once the final set of product features has been selected and thus are a complementary approach to the marketing/R&D interface approaches discussed later in this chapter.
**Behavioral Models**
Two groups of researchers have begun to develop behavior-based, conceptual models of marketing and R&D's interactions. They, and others, have begun testing the validity of their models in both U.S. and international firms.
Gupta et al. (1986a) propose a model and 13 propositions for studying the R&D and marketing interface in product development. (See Figure 4). According to the authors' model the degree of integration for which the firm should strive depends on the organization's strategy (prospector, analyzer, defender or reactor) and the perceived environmental uncertainty within which the firm operates. Higher environmental uncertainty and strategies targeting leading-edge technology or product positions lead to an increased need for R&D/marketing integration. A pilot study of survey-generated results produced inconclusive findings (Gupta, et al., 1986c).

Parry and Song (1991) tested the constructs of Gupta et al.'s model by surveying 411 Japanese high-technology firms to determine which, if any, of the hypotheses appeared to hold. They found that Japanese managers perceive a higher need for the integration of marketing and R&D when firms emphasized opening up new markets and new product areas ("prospector" firms, Miles and Snow, 1978). This need was higher than that for firms which pursued more cautious innovation routes ("analyzers"). In turn, analyzer firms see more need for integration between marketing and R&D than firms who place little emphasis on innovation ("defenders"). In a sample of 274 R&D managers and 264 marketing managers, Song (1991) found the correlation between the stated ideal level of integration and the achieved level of integration to be 0.55.
Parry and Song also found that high consumer demand uncertainty and high rates of technological change drive managers to believe that they need better marketing/R&D integration, and that these uncertainties have a more significant impact on the perceived need for marketing/R&D integration than do uncertainties about competitor strategies. They also found relationships between perceived levels of integration and
- the quality of R&D and marketing relations (self-stated),
- the business experience of R&D personnel,
- the encouragement by management to take risks, and
- the value that management placed on integration.
The survey has been replicated for US chemical firms, but the results are not yet in final form (Norton, et al., 1992). This research team plans to repeat the survey for German chemical firms and compare the results to those obtained for US and Japanese firms.
Ruekert and Walker (1987) provide another perspective. They offer a framework and 14 propositions for examining how, how effectively, and why marketing personnel interact with personnel in other functional areas in planning, implementing, and evaluating marketing activities. The authors have transformed each proposition into testable constructs and have tested parts of the framework by using survey responses from marketing, R&D, manufacturing, and accounting personnel in three different divisions of one *Fortune 500* firm.
Like Gupta, et al. (1986a), Ruekert and Walker (1987) start with the firm's organizational and working environments. However, in their model, these starting factors feed into the organizational and structural processes which lead to integration (See Figure 5). Their framework predicts (1) psycho-social outcomes in terms of level of conflict and perceived effectiveness and (2) functional outcomes in terms of goal accomplishment for interactions between marketing and R&D and between marketing and other functions within the firm.
Ruekert and Walker predicted that more interdependence, more task and work similarity, more formal between-group interactions, and more influence between groups would lead to higher transaction flows (flows of resources, work, and assistance), less conflict, and higher perceived effectiveness between the groups. Even in this small pilot study, they found support for
their basic proposition that marketing and R&D interaction results from and is influenced by perceived resource dependencies in getting their jobs done. The more one function believes they depend on the other, the greater the amount of interactions and resource flows across the functional boundaries and the more influence the information providing group has over the other group.
Both models (Gupta et al. and Ruekert and Walker) relate greater perceived effectiveness between marketing and R&D to the organization structure and behavior across the functional groups. The model of Gupta et al. (1986a) may prove useful in analyzing the appropriate level of marketing/R&D integration, given a firm's strategy and environment. It might be used to explain how firms with different levels of marketing/R&D integration can all be successful from an innovation-producing point of view. Ruekert and Walker's (1987) model may be more appropriate for analyzing interfaces within one company, or within a set of companies facing similar environments and using similar strategies. This model may prove useful in determining whether a particular technique a company employs for integrating across the two groups has actually changed anything, and it can be used to diagnose which aspects of integration a company should target for improvement.
**Research Challenges**
R&D Management Science. The real challenge in this field of research is the application and improvement of the models. Interesting and provocative theory has been developed to indicate what R&D should do to optimize its decisions. Through application these models might be made more practical so that they are accepted widely. The other challenge is integration with marketing science to reflect the interfunctional cooperation that is becoming important in industry.
Marketing Science product optimization. These models are very effective once a set of features is chosen. The challenges are to integrate them with the techniques to select the features in a cooperative marketing/R&D world. More importantly, there are opportunities to extend the models to deal with the formal identification of more fundamental customer needs. (The models might also be extended to incorporate manufacturing considerations and customer-satisfaction feedback.)
Behavioral models. The behavioral models provide conceptual guidance as to how the marketing/R&D interface operates. Table 2 suggests that cooperation and communication are important to new-product success and that communication is best directed at the responsibilities listed in table 1. The models of Gupta, et al., (1986a) and Ruekert and Walker (1987) are early attempts to explain how personal, corporate, and environmental factors may impact a firm's ability to integrate marketing and R&D.
Researchers are starting to test the validity of these models with both U.S. and Japanese firms. If this research is successful, it will help companies select techniques and methods to achieve the level of cross-functional integration that is appropriate to the firm's strategies and environmental conditions. This is important research that has the potential to change the marketing/R&D interface. The most important directions for these models are their development through application in use and their extensions to consider the interfunctional aspects of the marketing/R&D interface.
5. APPROACHES TO ENHANCE COMMUNICATION AND COOPERATION
The academic research is just beginning to explore new theories to enhance the communication and cooperation between marketing and R&D. On the other hand, industry has faced this problem for a number of years, and there is much we can learn from the research on management practice. Shapiro (1987), Ruekert and Walker (1987), and Allen (1978) cite five general approaches that companies have been using to integrate the efforts of marketing and R&D:
- relocation and physical facilities design,
- personnel movement,
- informal social systems,
- organizational structure, and
- formal integrative management processes.
**Relocation and Physical Facilities**
Some firms have changed their physical facilities to promote communication. We have already seen that communication drops off rapidly with distance. One solution is to relocate
people so that the distance between marketing and R&D is reduced. This provides the opportunity for, but does not by itself generate, coordination or even communication.
Allen (1986) has experimented with different layouts which enhance communication. Not only has he found that communication increases with team co-location, but he has found additional increases when the group works in non-territorial spaces. Informal meeting places, with accessible black boards or white boards and free coffee, located at strategic points throughout buildings enhance informal (and productive) communication. Little\(^5\) suggests that lounge spaces with traffic generators such as refrigerators, copiers, fax machines, and small libraries bring people together and encourage the exchange of ideas. Corning's Decker Engineering building in Corning, NY and Steelcase's Headquarters building in Grand Rapids, Michigan have been designed around Allen's architectural axioms to enhance communication\(^6\).
Providing communication opportunities through physical proximity must also be complemented by providing groups with techniques which foster the development of cross-functional relationships.
**Personnel Movement**
Human movement between functional groups is one effective technique to improve the transaction flow across functional boundaries (Roberts 1987, Roussel, et. al. 1991, pp. 163-173). People moving from one function to another bring with them contextual information which may be important in understanding why certain decisions are made even though there is no formal documentation. They also bring with them the person-to-person contacts and friendship-based links to people in their old groups. These links improve the probability of both formal and informal communication and cooperation.
As a product or service moves toward commercialization, personnel can move with the project. This downstream transfer moves experience and know-how into the receiving function to aid in problem-solving and reduces the impression that the downstream function is "stuck" with the post-transfer problems. Similarly, upstream transfers enable the upstream group to anticipate downstream problems. It creates the impression that the downstream group has inputs and ownership in the project.
However, transferring personnel between closely-related technical disciplines or between engineering and manufacturing is far easier than shifting personnel between marketing and R&D. As indicated earlier, the skills, knowledge, language, and culture required by each function create barriers that are difficult to overcome. There are some solutions. Companies may at
---
\(^5\)John D. C. Little of M.I.T., personal communication.
\(^6\)So have the common areas for both the Management of Technology Group and the Marketing Group at M.I.T.'s Sloan School of Management. Informal observation by the authors suggests that the change in physical facilities has had a major effect on cooperative research. Both groups are considering co-location of research personnel.
times find and hire those rare individuals with dual skill sets, or they may induce some of their personnel to obtain training in both areas -- the creation of "Management of Technology" programs at many leading universities is a sign that such training is occurring. Companies can also consider transfers where a marketing professional spends part of his (her) time in an R&D group as an advisor or an R&D professional spends part of his (her) time in the marketing function. Such transfers should be temporary to ensure that skills are not eroded, but they can be valuable in the transfer of perspectives without asking a professional to do a job for which he (she) has not been trained. See also discussion in Takeuchi and Nonaka (1986).
**Informal Social Systems**
Both Moore (1987, p. 12) and Feldman and Page (1984, p. 50) suggest that informal contact is important and often substitutes for formal new-product processes. While cultural differences between marketing and R&D raise barriers to cooperation, informal social networks can provide contact outside a development team, especially to those functions which are ancillary to the team. Such informal social contacts may have the requisite expertise to contribute to solving a particular problem or may identify who has the expertise. Unfortunately, group-culture change often requires a catastrophic crisis which demands change for survival (Schein 1985). Although it is difficult to force such networks to develop, managers can provide opportunities through cross-functional dinners and picnics, athletic leagues and tournaments, community volunteer projects, and other recreational activities. Retreats often serve the dual purpose of problem-solving and generating social networks. However, retreats must be planned carefully to encourage the informal social system. They can be counterproductive if viewed as an inefficient use of time.
**Organizational Structure**
Table 6 lists six characteristics of an organization that lead to improved cooperation between marketing and R&D. In a study of 80 technology-intensive companies, Gupta and Wilemon (1988) found that each of these characteristics correlates highly with credibility and cooperation. Any organizational structure should incorporate these characteristics if it is to succeed at fostering cooperation. We explore several organizational structures that have been proposed.
**Coordinating groups.** Lorsch and Lawrence (1965) advocate creating coordinating groups. Permanent coordinating groups consist of personnel who have a balanced perspective which enables them to work effectively with several specialist groups over a long period of time. Temporary cross-functional committees resolve problems between functional groups and encourage achieving specific goals. However, in case-based research Lorsch and Lawrence report mixed success. Such groups do have the potential to address the issues of table 6 if they are focused on encouraging joint efforts rather than simply resolving conflicts. Supporting part of Gupta, et al.'s interaction model, Lorsch and Lawrence also found that identical structures performed differently across organizations with different strategies and operating within different environments.
**Project or program teams.** Marquis and Straight (1965) advocate placing all functional contributors in the same group under a single leader. Project or program teams encourage the exchange of information, provide a degree of formalization, value cooperation, and provide a joint reward system. Such a project organization maximizes coordination and control toward a specific goal, but runs the risk of greater centralization. One long-term flaw with project-based organizations is that, by removing specialists from their supportive functional groups, these specialists interact less with colleagues in their own technical or market-based discipline. If the project duration is too long, the technical skills and the knowledge base of the team members erode, especially when the technology base or market structure is changing rapidly (Roberts 1987).
**Product champions.** Moore (1987, p. 14) stresses that product champions are important to the success of new-product development. Such champions develop networks of informal contacts to make sure that information is transferred between marketing and R&D and that both functions contribute to the process. Of course product champions need "protectors" who can intercede to defend the champion and to secure resources and "strategists" to focus the champion on the goals of the organization. See discussion in Urban and Hauser (1980, pp. 540-546).
**Matrix organizations.** Babcock (1991) reports that a number of firms have implemented matrix organizations in an attempt to maintain functional specialization while improving cross-functional performance. Functional specialists reside in their functional groups and report to a functional manager, but they also report on a "dotted-line" basis, frequently part-time, to one or more project leaders who need their particular expertise during some phase of a project. In theory, the project leader performs the integrating function by encouraging information exchange, providing formal reporting procedures, joint rewards, and assigning value to cooperation. However, personnel often find it difficult to balance time spent in a functional group with time spent on the cross-functional teams. Different individuals may infer different priorities from their functional and cross-functional managers. Without good coordination, such matrix organizations run the risk of becoming just "paper" matrices (Roberts 1987).
| CHARACTERISTIC | EXPLANATION |
|----------------|-------------|
| 1. Harmonious operating characteristics | Discuss important issues, resolve conflicts early, work together |
| 2. Formalization | Clear performance standards, clear responsibilities, well-defined guidelines |
| 3. Decentralization | Issues resolved quickly by "local" knowledge |
| 4. Innovativeness | Supports new ideas, tolerates failure, is responsive to change |
| 5. Cooperation is valued | Provides opportunities to exchange views and perspectives |
| 6. Joint reward system | Both marketing and R&D share in success (and do not blame the other for failure) |
Table 6. Organizational Characteristics that Enhance Cooperation (Gupta and Wilemon 1988)
Each of these organizational structures has the potential to improve marketing/R&D coordination and communication and each has worked in a variety of circumstances. However, a formal organization is not sufficient by itself to generate cooperation and communication. It must be supported by other means, such as personnel co-location, moving personnel across functions, and formal management processes.
**Formal Management Processes**
The most visible joint task requiring cooperation between marketing and R&D is new-product development. Urban and Hauser (1980) synthesize many of the new-product development processes into a sequence of opportunity identification, design, testing, launch, and profit (life cycle) management. While, on paper, such phased development processes encourage cooperation between the functions, these processes usually assign specific responsibilities to one of the functional groups and encourage timely input from the other. For example, Urban and Hauser's decision flow chart (chapter 18, page 538) describes parallel decisions and development within functional areas. Similarly, Dougherty's synthesis in figure 1 suggests parallel responsibilities with communication across boundaries. See also the review in Kotler (1991, chapter 12).
More recently, processes have been developed which combine the inputs of marketing, R&D, and even manufacturing, into a joint decision-making process. Hayes, et al. (1988) and Takeuchi and Nonaka (1986) suggest that such overlapping development processes enhance communication and reduce barriers to cooperation. An ideal process would include:
- a cross-functional effort led by an experienced business manager,
- a stable, possibly autonomous, team throughout the whole project,
- extensive overlapping of project phases and problem-solving with early release of preliminary information to all functional groups and transfer of lessons learned,
- progress which is measured by task completion, not time progression, and
- cross-functional relationships based on trust and mutual respect, with conflicts addressed early and at low levels, that is, by the team itself, not upper managers.
Griffin's (1991) research on managing the process of product development suggested that an ideal process would
- explicitly structure decision-making processes across functional groups,
- build a solidly-organized, highly motivated team, and
- move information efficiently from its origin to the ultimate user.
In the next section we review in depth one formal integrative process, Quality Function Deployment, that has been adopted widely and which has been proven to enhance communication between marketing and R&D. Sections 7 and 8 present research which demonstrates how QFD improves the process of new product development. Section 9 describes how QFD's use has implications for the way marketing research is performed in firms, and provides suggestions as to how management science research can improve QFD's application.
6. QUALITY FUNCTION DEPLOYMENT -- ONE PROCESS TO ENCOURAGE INTERFUNCTIONAL COOPERATION
Quality Function Deployment (QFD) is one formal management process that has been developed to integrate marketing and R&D. It was developed in 1972 at Mitsubishi's Kobe shipyard, brought to the United States by Ford and Xerox in 1986 and, in the last five years, has been adopted widely by Japanese, United States, and European firms\(^7\). Hauser and Clausing (1988) claim that in some applications it has reduced design time by 40% and design costs by 60% while maintaining and enhancing design quality\(^8\). Many US firms who have adopted QFD claim that it has improved relations between marketing and R&D by focusing their efforts on providing information and designing products and services that satisfy customer needs.
QFD begins with an interfunctional team that includes marketing, R&D, and other functions, such as manufacturing. The team works together under a team leader to focus on either product improvement or product development following the suggested practices of ideal processes listed above. We believe QFD works because it provides procedures and processes to enhance communication between, and structure decision-making across, marketing and R&D and because it provides a translation mechanism from the language of the customer to the language of the engineer. It overcomes many of the barriers to communication listed earlier in this chapter. The enhanced communication leads to reduced cycle time.
QFD uses four "houses" to integrate the informational needs of marketing, engineering, R&D, manufacturing, and management. Applications begin with the first house, the House of Quality (HOQ). We begin by describing the HOQ, which is shown conceptually in figure 6.
One way to look at the HOQ is as a translation between the marketing input (customer needs, customer-need importances, and customer perceptions) and the language of R&D (design attributes and engineering measures) through a relationship matrix. However, we want to stress that the HOQ (as well as QFD) is an integrative process in which marketing and R&D participate as equal partners in all aspects of the communication process including the identification of customer needs and their relationship to design attributes. Much of the benefit of the HOQ comes from the mutual understanding of the problem and of one another that comes as marketing and R&D work together for a joint solution. With this in mind we describe first what is normally considered the marketing-side input, the voice of the customer.
---
\(^7\)Among the firms reporting applications in 1989 were General Motors, Ford, Navistar, Toyota, Mazda, Mitsubishi, Procter & Gamble, Colgate, Campbell's Soup, Gillette, IBM, Xerox, Digital Equipment Corp., Hewlett-Packard, Kodak, Texas Instruments, Hancock Insurance, Fidelity Trust, Cummins Engine, Budd Co., Cirtek, Yasakawa Electric Industries, Matsushita Denko, Komatsu Cast Engineering, Fubota Electronics, Shin-Nippon Steel, Nippon Zeon, and Shimizu Construction.
\(^8\)These estimates are derived from Japanese companies using QFD.
**The Voice of the Customer**
**Identifying customer needs.** The voice of the customer begins with identifying customer needs, which are listed on the left side of the house. A customer need is a description, in the customer's own words, of the benefit which he, she, or they want fulfilled by the product or service. For example, for an automobile headlight, a customer need might be "lights up the road with a fully loaded trunk." Note that the customer need is not a solution, say halogen headlights, or a physical measurement of the need, say illumination at 20 feet, but rather a more detailed description of what the customer wants the headlight to do. This is a key distinction between QFD and more traditional marketing inputs to R&D. If the team focuses on solutions too early, they miss creative opportunities. For example, if, in the early 1950's, an automobile transmission team had set a customer need as a "smooth clutch" rather than "easy shifting," they might have overlooked the invention of an automatic transmission. Similarly, if the team focuses too quickly on physical measurements, they may miss an understanding of all of the influences on customer needs. For example, a customer's perception of how well a headlight "lights up the road" might depend upon illumination, characteristics of the windshield, interior lighting, the targeting of the lens, etc.
Customer needs are identified by talking to real customers facing real problems. For example, focus groups or experiential one-on-one interviews allow customers to explain how they use a product and how they would like to use a product under different usage scenarios. (We discuss methodological implications in section 9 of this chapter.)
Normally, discussions with customers identify 100-400 customer needs including basic needs (what the customer just assumes a headlight will do), articulated needs (what the customer will tell you that he, she, or they want the headlight to do), and excitement needs (those needs which, if fulfilled, would delight and surprise the customer). However, it is difficult for a team to work with 100-400 customer needs simultaneously.
**Structuring the needs.** To make customer needs manageable, they are structured into a hierarchy of primary, secondary, and tertiary needs. The primary needs, also known as strategic needs, are generally the five-to-ten top-level needs that set the strategic direction for the product. Secondary needs, also known as tactical needs, are elaborations of the primary needs -- each primary need is usually elaborated into three-to-ten secondary needs. These needs indicate more specifically what can be done to fulfill the corresponding strategic (primary) need. A typical list of 20-30 secondary needs is quite similar to the 20-30 "customer attributes" that are common in marketing research. But, the voice of the customer goes beyond the customer attributes of marketing research to explore the tertiary needs, also known as operational needs. The tertiary needs provide detailed requirements to R&D, so that they can deliver the benefits the customer wants. For example, if R&D wants to design a headlight that lights up the road, they might ask "Is the customer driving in the city or on a country road?," "Is the customer driving at dusk or late at night?," "Is the trunk loaded or empty?," "Are the high-beams on or off?," "Is it raining, snowing, foggy, or clear?," etc. The tertiary needs address these questions and articulate what the customer wants in each of many specific conditions. It is this last lower level of customer needs that provides the translation from a product or service strategy to a detailed product or service design. It is this last lower level that provides the means of communicating between marketing and R&D.
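The primary/secondary/tertiary structure described above maps directly onto a nested data structure. The sketch below is a minimal illustration in Python; the specific headlight needs are hypothetical examples of ours, not drawn from an actual QFD study.

```python
# Illustrative hierarchy of customer needs for a headlight, structured as
# primary (strategic) -> secondary (tactical) -> tertiary (operational).
# The specific need wordings below are hypothetical.
needs = {
    "lights up the road": {                                   # primary
        "good visibility when loaded": [                      # secondary
            "lights up the road with a fully loaded trunk",   # tertiary
            "lights up the road with an empty trunk",
        ],
        "good visibility in bad weather": [
            "lights up the road in rain or fog",
            "lights up the road in snow",
        ],
    },
}

def count_tertiary(hierarchy):
    """Count the tertiary (operational) needs across the whole hierarchy."""
    return sum(len(tertiary)
               for secondary in hierarchy.values()
               for tertiary in secondary.values())

print(count_tertiary(needs))  # 4 in this toy example; 100-400 in a real study
```

A real study would carry 5-10 primary, 20-30 secondary, and 100-400 tertiary entries in such a structure.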
**Importances of the needs.** Customers want their needs fulfilled, but some needs have higher priorities than others. These priorities help the QFD team make decisions which balance the cost of fulfilling a need and the benefit to the customer. For example, if it is equally costly to fulfill two needs, then the need which the customer rates as more important should be given higher priority. Importances are usually based upon survey measures.
**Customer perceptions.** Customer perceptions describe how customers evaluate "our" product and competitive products in terms of the products' abilities to fulfill the customer needs.
By understanding which products fulfill customer needs best, how well those customer needs are fulfilled, and whether there are any gaps between the best product and "our" product, the QFD team can provide goals and identify opportunities for product design. Furthermore, by comparing customer perceptions to a team's perceptions of a product, the team can overcome organizational biases. Customer perceptions are measured for each customer need, usually through a formal survey.
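The gap identification just described can be sketched in a few lines. The ratings below are hypothetical, with "A" and "B" standing in for competitive products and a 1-5 survey scale assumed.

```python
# Hypothetical customer-perception ratings (on an assumed 1-5 survey scale)
# of "our" product and two competitors, A and B, for each customer need.
perceptions = {
    "lights up the road with a loaded trunk": {"ours": 3.1, "A": 4.2, "B": 3.5},
    "easy to aim":                            {"ours": 4.0, "A": 3.2, "B": 3.8},
}

def gaps(perceptions):
    """Gap between the best-rated product and ours, for each customer need."""
    return {need: round(max(ratings.values()) - ratings["ours"], 2)
            for need, ratings in perceptions.items()}

# Needs with a positive gap are opportunities for product design;
# a zero gap means our product is already best on that need.
print(gaps(perceptions))
```

In this toy example the team would see an opportunity on the loaded-trunk need and parity on aiming ease.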
While customers are the primary sources of the marketing-side inputs to the HOQ, engineers provide the bulk of the technical and performance inputs to QFD. We now turn to the steps required in assembling the technical inputs to QFD.
**The Voice of the Engineer**
**Design attributes.** To fulfill customer needs, the product (or service) must fulfill measurable requirements. For example, a headlight might have, as design requirements, target values for illumination (in candle power), coverage (in square meters), and power consumption (in watts). It might have design characteristics specified under a variety of conditions, conditions which match those voiced by the customer. These design measures are listed at the top of the house. They are measured in physical measurement units that become targets for an R&D design. However, they are not product solutions. Solutions come in the second house of QFD. If solutions are specified too early, the R&D process becomes constrained to existing solutions and new, creative directions may be missed.
**Engineering measures.** Just as we measured competitive products with respect to customer needs, so do we measure competitive products on the physical units specified by the design attributes.
**Relationship matrix.** The QFD team judges which design attributes influence which customer needs. Each element of the relationship matrix indicates how much (if at all) each design attribute affects each customer need. The idea is to specify the strongest relationships leaving most of the matrix blank (60-70% blank). While it is possible to complete experiments to measure elements of the matrix, the majority of applications are based on team judgments which synthesize the combined expertise of the team and any "hard" data that may have been collected.
**Roof matrix.** Finally, the roof matrix, shown as cross-hatched lines in figure 6, quantifies the physical interrelations among the design attributes -- a brighter headlight requires more electrical power and thus impacts other subsystems in the car.
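Taken together, the importances and the relationship matrix support a simple prioritization of design attributes. The sketch below is a minimal illustration: the needs, importances, and matrix entries are hypothetical, and the 9/3/1 strength scale is a common QFD convention rather than something the text mandates.

```python
# Minimal sketch of prioritizing design attributes in the House of Quality.
# All inputs are hypothetical. Strengths: 9 = strong, 3 = moderate,
# 1 = weak, 0 = blank cell (most cells are left blank, as noted above).
needs       = ["lights road, loaded trunk", "lights road in fog", "low power drain"]
importances = [5, 3, 2]                      # survey-based need importances
attributes  = ["illumination (cd)", "coverage (m^2)", "power draw (W)"]

relationships = [   # relationships[i][j]: effect of attribute j on need i
    [9, 3, 0],
    [3, 9, 0],
    [0, 0, 9],
]

# Attribute priority = sum over needs of (need importance x relationship).
priorities = [sum(imp * row[j] for imp, row in zip(importances, relationships))
              for j in range(len(attributes))]

for attribute, priority in zip(attributes, priorities):
    print(f"{attribute}: {priority}")
```

Such a weighting is only one input to the team's target-setting; as the text emphasizes, the team also weighs cost, difficulty, and the roof-matrix interactions, and the HOQ should inform rather than supplant team judgment.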
**Using the House of Quality**
The House of Quality encourages cooperation between marketing and R&D by requiring each functional group to quantify and articulate their inputs and assumptions. QFD links the language of the customer to the language of the engineer. By specifying the language of the
customer (marketing) and the technical language of design (R&D) and the means to translate one to another (relationship matrix), the HOQ prevents misunderstanding and forces each group to clarify their own thought-world. If the entire team participates in the HOQ, all team members understand and accept these inputs and relationships.
Once the HOQ is complete, the team can use the inputs and the relationships to establish design targets, that is, specific performance values of the design attributes which the product will try to deliver. To make these decisions the team considers the cost and difficulty of achieving these targets, the influence of these targets on other design attributes, the influence of these targets on fulfilling customer needs (relative to competition), and any other relevant input of which the team is aware.
Perhaps in the future researchers will develop means to automate the HOQ while maintaining its ability to encourage creativity and foster interfunctional communication. In the applications with which we are familiar the HOQ works best as a means of communication that does not supplant the team's judgments and instincts. It can be used in parallel with more formal "what-if" analyses that quantify the cost and/or demand impacts of design decisions and it can be followed with a more-formal marketing-science product optimization model. But such applications have yet to appear in the published literature.
**The Other Houses of QFD**
If the first house links customer needs to design attributes, the second house links design attributes to solutions. For example, in the second house design attributes (illumination) are placed on the left side of the house and solutions, such as the type of headlight, the electric system that drives it, the control system in the car, the dashboard switches, the housing and closure system for the headlight, are placed at the top of the house. This house is often known as the parts-deployment house, or simply, the second house. In developing the second house, the QFD team tries to list as many alternative solutions as possible so that the best combination can be chosen to deliver the design attributes. When the second house is linked to the HOQ, these solutions are based on customer needs.
QFD recognizes that design solutions are not enough. The product must be built or the service delivered. Furthermore, marketing and R&D are not the only functions that need to be integrated. The third house links engineering solutions to process operations, thus coordinating marketing and R&D with manufacturing and service delivery. Finally, a fourth house links process operations to production requirements to complete the cycle. Together the four linked houses ensure that the customer's voice drives the entire design and manufacturing (service delivery) process and that all relevant functional areas work together toward the same goal.
This completes our short description of QFD. For a managerial discussion of QFD see Hauser and Clausing (1988); for a participant-observer ethnography of thirty-five projects at nine firms see Griffin (1989); for details and case studies see Clausing (1986), Eureka (1987), King (1987), Kogure and Akao (1983), McElroy (1987), and Sullivan (1986), as well as collections
of articles by Akao (1987), and the American Supplier Institute (1987).
In the remainder of this chapter, we present the first major research results from investigating QFD's use in the US. In section 7 we present results indicating that QFD improves communication between marketing and R&D. In section 8, we suggest when QFD does and does not work and suggest that the greatest benefits of QFD are long-term rather than short-term improvements in time and cost. Finally, in section 9 we examine the marketing research implications of QFD.
7. QFD IMPROVES COMMUNICATION BETWEEN FUNCTIONS
In studying product-development communications, we have compared QFD to a traditional "phase-review" process at a large automobile manufacturer (Griffin and Hauser 1992). One development team used QFD; the other used a phased process in which they reviewed development as the project passed certain checkpoints. The teams were chosen to be matched as closely as feasible. Both teams resided in the same organization, developing components of comparable technical complexity, with about the same number of parts, and which serve similar functions in an automobile. Both products are manufactured by outside suppliers, but are designed by the automobile manufacturer (OEM, "original equipment manufacturer"). Both teams report to the same manager two levels up and contain roughly the same number of team members with roughly the same experience. In both cases the team leader was committed to the process being used.
During the 15-week period of observation team members reported on their communications by completing, on a randomly chosen day each week, a one-page form. Following a method developed by Allen (1970, 1986), the names of potential communication partners were listed as rows on the form and the potential topics of communication were listed as columns. The team member simply indicated at the end of the day with whom he (she) had communicated and about what. In total the response rate was 85% with a reliability\(^9\) of 94.7%.
Section 2 of this chapter suggests that communication among marketing and R&D enhances success in product development. If we add manufacturing to marketing and R&D we have a "core team" within the QFD or phase-review teams. Figure 7 compares the communication patterns of the core QFD and the core phase-review teams. QFD led to more communication, more communication within functions, and more communication between functions. However, QFD slightly reduced the communication from the core team to management. When we examined more detailed patterns (see figure 4 in Griffin and Hauser 1992), we found that QFD team members communicated directly with one another rather than going through a management loop. If indeed management was serving primarily as a communication conduit, then this would imply that the QFD pattern of communication was more efficient.
---
\(^9\)A response is considered reliable if person \(i\) reports communicating with person \(j\) and person \(j\) independently reports communicating with person \(i\).
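The reliability measure defined in footnote 9 is easy to state computationally: a reported contact counts as reliable only when it is reciprocated by the partner's independent report. The names and reports in this sketch are hypothetical.

```python
# Daily communication reports as (reporter, partner) pairs; hypothetical data.
reports = {
    ("ann", "bob"), ("bob", "ann"),        # reciprocated: reliable
    ("ann", "carol"),                      # carol never reports ann: unreliable
    ("carol", "dave"), ("dave", "carol"),  # reciprocated: reliable
}

def reliability(reports):
    """Fraction of reports confirmed by the partner's independent report."""
    reciprocated = sum((j, i) in reports for (i, j) in reports)
    return reciprocated / len(reports)

print(f"{reliability(reports):.1%}")  # 80.0% for this sample
```

The study's 94.7% figure would correspond to the same calculation over the full 15 weeks of forms.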
Figure 8 compares the topics discussed. Both teams focused primarily on design issues. In part this was because communication on design issues is a key correlate of new-product success (Hise, et al. 1990) and, in part, because design was the focus during the period of our observation. More importantly for the comparison between teams, there was more communication about design, customer needs, and market information by the QFD team. On the other hand, the phase-review team allocated more effort to planning. Thus, figure 8 is consistent with the hypothesis that QFD allows the team to focus on "doing" rather than "planning to do."
Naturally, figures 7 and 8 are based on a single comparison at one manufacturing company. They are consistent with intuition and suggest that QFD may indeed provide a means to enhance communication and, by implication, the success of the marketing/R&D interface. Our own qualitative experience in over twenty applications suggests that such communication gains are typical. We have observed that QFD and the House of Quality encourage marketing and R&D to understand one another and to focus on customer needs\textsuperscript{10}.
\textsuperscript{10}These applications include computers (main-frame, mid-range, work stations, and personal), software, printers, cameras, airline service, paints, surgical instruments, diagnostic instruments, office equipment, consumer products, tools, retirement plans, movie theaters, health insurance, distribution networks, automobiles, automobile subsystems, and automobile components.
8. QFD CONTRIBUTES TO LONG-TERM DEVELOPMENT IMPROVEMENTS
Many US managers have promoted QFD based on the potential for decreased design costs and decreased time to market. However, the philosophy of QFD is one of incremental improvements with payoffs coming over the long term. To determine whether QFD could provide short-term and/or long-term benefits and to determine when QFD does and does not work, Griffin (1991) studied nine of the two-dozen US companies using QFD in 1987. In total 35 projects were studied. These projects included components, subsystems, and complex systems for products, services, and software. They varied in the number of customers (few to many) and the amount of change (incremental to "clean sheet").
The research combined retrospective interviews with members of the QFD teams, interviews with senior managers who could evaluate the success of the projects and the incremental advantages and disadvantages of QFD, and real-time observation of nine of the thirty-five teams as they used QFD.
We begin by evaluating the short-term impact of QFD. Short-term benefits, if they occur, will be changes in the cost and the time required to develop new products or improvements in the products themselves. Based on team-member and manager evaluations and quantitative measures (when available), Griffin assigned each project to one of four categories in terms of short-term, tactical success. The categories were:
- **Tactical success.** Measurable project improvements.
- **Failure.** The process was abandoned or rejected.
- **Mixed success.** Some performance aspects improve; others are worse.
- **No change.** Projects exhibit no short-term changes from expected performance.
| Tactical Success Category | No. | % with Strategic Benefits |
|---------------------------|-----|----------------------------|
| Tactical Success | 7 | 100% |
| No Change | 7 | 100% |
| Mixed Results | 8 | 100% |
| QFD Failures | 4 | 50% |
| No Information | 9 | 56% |
| TOTAL | 35 | 83% |
Table 7. Successes and Failures in Sample
The indicators of tactical short-term success were product performance (improved feature performance, increased quality levels, or increased customer satisfaction) and commercial process improvements (decreases in commercialization time and cost). A project is labeled as successful if at least one measure is better and no measure is worse.
In addition to the short-term benefits, team members and managers reported long-term strategic benefits by answering two open-ended questions: "What did QFD do for this project?" and "What benefits did using QFD produce?" Many of the teams reported intangible benefits such as "better understanding of customer needs," or "better cross-functional relationships." Such benefits may have their payoffs over many years and many projects.
Table 7 reports the percent of projects with short-term and with long-term benefits. Notice that while only 27% reported a clear tactical success, 83% reported long-term strategic advantages. These long-term advantages are summarized in table 8. Based on tables 7 and 8 we infer that QFD is better at providing (perceived) long-term rather than short-term benefits to the firm and that firms should guard against asking too much too soon from QFD in terms of quantifiable outcome measures.
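The aggregate figures quoted above can be reconstructed from table 7's counts. Two inputs are our inference rather than stated directly: the per-category counts of projects with strategic benefits (2 of the 4 failures from the 50% entry, 5 of the 9 no-information projects from the 56% entry), and the treatment of the nine no-information projects as unevaluable when computing the tactical-success rate.

```python
# Category: (number of projects, projects reporting long-term strategic
# benefits). Counts from table 7; the 2-of-4 and 5-of-9 strategic counts
# are inferred from the table's 50% and 56% entries.
table7 = {
    "tactical success": (7, 7),
    "no change":        (7, 7),
    "mixed results":    (8, 8),
    "qfd failures":     (4, 2),
    "no information":   (9, 5),
}

total     = sum(n for n, _ in table7.values())    # 35 projects in the sample
with_info = total - table7["no information"][0]   # 26 evaluable projects
strategic = sum(s for _, s in table7.values())    # 29 projects

print(round(100 * table7["tactical success"][0] / with_info))  # ~27% tactical
print(round(100 * strategic / total))                          # ~83% strategic
```

Both rounded percentages match the figures cited in the text.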
Griffin also investigated project characteristics which might be associated with short-term QFD successes and failures. These variables are summarized in table 9.
One key variable is whether firms treated QFD as an investment or a cost. All successful projects considered QFD an investment in people and information while all failures treated QFD as an expense which must be incurred, but should be minimized. The second key variable was commitment. For the successful projects, all the involved managers and team members were highly committed to using QFD as a process for development. In the less successful projects only one function (marketing, R&D, or manufacturing) was committed.
A third variable, the goal orientation of the project, seemed to be necessary, but not sufficient. Six of the seven successful QFD projects were undertaken to effect a specific change (solve a problem) in some aspect of the product or development process. However, two of the failed projects also had problem-oriented goals. Put another way, QFD is unlikely to improve short-term development outcomes when it is undertaken as a "demonstration" project to gain expertise with the method, or as a means to study a "generic" project for the entire product line, or as a means to improve process measures without a specific project goal.
Naturally, Griffin's qualitative study can provide at best hypotheses for further scientific comparison. For example, one might hypothesize that teams bought into QFD because it seemed to be providing success or that firms saw QFD as an investment because the payoffs (and success) were assured. Furthermore, the long-term benefits are based on perceptions of the product-development team rather than "hard" sales or profit measures\textsuperscript{11}. However, Griffin's study suggests the working hypotheses that QFD requires a long-term perspective, a willingness to invest in people and process, and a goal-orientation.
| | Total | Success | Fail | Mixed | No Change |
|---|---|---|---|---|---|
| **WHO PUSHES QFD USE** | | | | | |
| Top-Down | 8 | 0 | 1 | 2 | 3 |
| Bottom-Up | 10 | 4 | 1 | 1 | 1 |
| Neutral Other | 17 | 3 | 2 | 5 | 3 |
| **LEVEL OF TEAM BUY-IN OR COMMITMENT** | | | | | |
| High | 10 | 7 | 0 | 0 | 2 |
| Moderate | 12 | 0 | 0 | 5 | 4 |
| Low | 13 | 0 | 4 | 3 | 1 |
| **GOALS FOR USING QFD** | | | | | |
| Process-Oriented | 20 | 1 | 2 | 5 | 5 |
| Output-Oriented | 14 | 6 | 2 | 3 | 2 |
| **CORPORATE ATTITUDE TO QFD** | | | | | |
| Investment | 21 | 7 | 0 | 7 | 3 |
| Expense | 14 | 0 | 4 | 1 | 4 |
| **NUMBER OF FUNCTIONS INVOLVED** | | | | | |
| 1 out of 4 | 2 | 1 | 0 | 1 | 0 |
| 2 out of 4 | 8 | 3 | 2 | 1 | 0 |
| 3 out of 4 | 15 | 1 | 2 | 5 | 5 |
| 4 out of 4 | 10 | 2 | 0 | 1 | 2 |
| **TEAM FAMILIARITY** | | | | | |
| High | 3 | 0 | 0 | 0 | 2 |
| Moderate | 14 | 5 | 2 | 4 | 2 |
| Low | 18 | 2 | 2 | 4 | 3 |
Table 9. Summary of Implementation Characteristics
\textsuperscript{11}Griffin completed her study in 1987. Since then, firms have improved their use of QFD and the marketing research inputs have become more efficient and effective. Based on personal observation, we feel that the success rate of QFD has improved for both the short-term and long-term. See Griffin and Hauser (1991) for nine vignettes of successful applications and Hauser (1992) for complete details on one application that increased potential sales by a factor of five while increasing customer satisfaction and reducing the cycle time of new-product development.
### 9. MARKETING IMPLICATIONS OF QFD
Although marketing should be involved throughout the QFD process, one key input from marketing is participation in identifying, structuring, and prioritizing the voice of the customer. Many of these tasks will be familiar to our marketing readers. For example, see the discussions in chapters 7-12 of Urban and Hauser (1980). For QFD the new challenges are the detail necessary at the tertiary (operational) level (100-400 customer needs), the desire to link the customer needs to design attributes, and the manner in which the customer needs are structured into primary, secondary, and tertiary levels (strategic, tactical, and operational levels). In this section we give examples of how marketing might provide the voice of the customer. Naturally, we cannot review all of the techniques that are available, but this short review should give the reader an idea of current practice. For more details and scientific comparisons, see Griffin and Hauser (1991).
**Identifying Customer Needs**
Identifying customer needs relies upon qualitative research. For example, in the Vocalyst™ method\(^{12}\) now used by a variety of service, industrial-product, and consumer-product firms, between 10 and 30 customers (per segment) are interviewed for approximately one hour each in a one-on-one setting. During the interview customers are asked to picture themselves using the product or service. As the customer describes his or her experiences, the interviewer keeps probing, searching for more detailed explanations and more complete descriptions. Experiences are elicited and probed until the interviewer feels that no new needs will be identified. During the interview the customer is also asked to imagine future experiences and to indicate needs which should be fulfilled that are not now fulfilled. When necessary the interviewer probes higher-level needs and explores the means to achieve these needs. (Such explorations are known as laddering and/or means-end analysis. See Reynolds and Gutman 1988.) Other techniques involve focus groups (Calder 1979) and mini-groups of two-to-three customers.
Because practice is still developing there is much debate among practitioners. The two questions we hear often are: (1) Is it better to interview customers in groups or individually?, and (2) How many people (groups) should be interviewed?
**Groups or one-on-one interviews.** Many market research firms advocate group interviews (see also Calder 1979) based on the hypothesis that customers within groups will get ideas from one another and that the synergies will produce more and varied customer needs. On the other hand, if there are eight people in a two-hour focus group then, on average, each person talks for about 15 minutes. Some market research firms argue that 15 minutes may not provide sufficient time to probe deeply for a more complete understanding of the operational customer needs.
---
12Vocalyst is a trademark of Applied Marketing Science, Inc. of Waltham, MA. Experiential interviews are based on Griffin (1989) and Griffin and Hauser (1991). We are aware of other applications.
Figure 9 is one comparison of the productivity of focus groups and one-on-one interviews. The product category is a complex piece of office equipment and the users were experienced. Eight two-hour focus groups of 6-8 people each and nine one-hour interviews were undertaken. The entire set of data was analyzed by six people to produce a combined set of customer needs. Silver and Thompson (1991) then reanalyzed the data to determine, for each customer need and for each group or individual, if that group or individual had voiced that customer need. They used this data to determine, on average, how many customer needs would have been voiced by one person or group, by two people or two groups, etc.
Figure 9 indicates that a focus group provides more needs than a single interview, but that two interviews provide about as many needs as a single focus group. As one manager said when he examined the data, it is almost as if an hour of transcript time is an hour of transcript time, independent of whether it comes from a single interview or a focus group\(^{13}\). If it is less expensive to complete two one-hour interviews than one two-hour focus group, then figure 9 suggests that interviews may be more efficient. At minimum, figure 9 suggests that the hypothesized group synergies do not seem to be present.
**How many customers.** Figure 9 suggests that 10+ interviews might be sufficient for that high-technology piece of office equipment, particularly if the one-on-one interviews are supplemented by telephone interviews to probe issues the team may wish to have elaborated\(^{14}\). We do not yet know whether that conclusion generalizes beyond expert users. However, we know of one other detailed study for less-experienced users of a less-complex product, portable food-carrying devices, a.k.a. coolers.
In this study, 30 customers were interviewed. Again, a set of needs was identified and, for each need and each customer, we determined whether that customer would have given that need (Griffin and Hauser 1991). Because it was possible that 30 customers were not enough, we estimated a statistical model to determine how many needs were not identified by the thirty
---
\(^{13}\)We note that this qualitative result can also be seen in the data reported by Fern (1982). He found that eight nominal 20-minute interviews with individuals produced 75% more ideas than one 100-minute eight-person focus group. In each case approximately 1.2 ideas were generated per minute of transcript time.
\(^{14}\)As of this writing, a few market research firms are advocating such telephone supplements. Experience is sparse so we do not yet know whether such interviews live up to their promise.
customers. The results, given in figure 10, suggest that 30 customers give over 90% of the needs and that approximately 15 customers give about 80% of the needs.
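The shape of such a coverage curve can be sketched with a simple probabilistic model (a simplification of the kind of beta-binomial analysis described here; the per-need probabilities below are invented for illustration, not taken from the cooler study):

```python
# Sketch: if need i is voiced by any one customer with probability p_i,
# then with n independent customers the chance the need surfaces at least
# once is 1 - (1 - p_i)**n; expected coverage is the average over needs.

def expected_coverage(probs, n):
    """Expected fraction of needs voiced by at least one of n customers."""
    return sum(1 - (1 - p) ** n for p in probs) / len(probs)

# Hypothetical per-need probabilities: some needs common, many rare.
probs = [0.30] * 50 + [0.10] * 100 + [0.03] * 70

for n in (5, 15, 30):
    print(n, round(expected_coverage(probs, n), 2))
```

The coverage curve rises steeply at first and then flattens, which is why a moderate number of interviews captures most, but never quite all, of the needs.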
Putting figures 9 and 10 together, we hypothesize that at least 10-20 interviews are needed, more if the users are less experienced, the product category is more complex, or a particularly broad market segment is being investigated.
**Structuring Customer Needs**
Because we work with a large number of attributes in QFD, means must be found to structure them into smaller groups of similar items which can be used by the development team to drive product design. Large random attribute lists overwhelm design teams -- they don't know where to start or on which attributes to focus. Factor analysis, the standard marketing research technique for structuring attributes, is difficult to use because the large number of customer needs makes data collection difficult and because there is often ecological correlation which confounds correlation due to similarity\(^{15}\). Thus, Japanese and US firms have tended to use one of two techniques -- one based on the team's judgment and one based on statistical analysis of data collected from customers. We describe these techniques, affinity diagrams and customer-sort diagrams, which are used to structure customer needs into usable hierarchies.
**Affinity diagrams.** In most American and Japanese applications, customer needs are structured with affinity charts or K-J diagrams\(^{16}\), two variations of one of the "Seven New Tools" used in Japanese planning processes (King 1987). Affinity charts are based on a process of group consensus in which the product-development team imposes structure on the customer needs. The advantage of an affinity chart is that it assures group buy-in to the structure; the disadvantage is that there is no assurance that the team's structure represents how customers make decisions.
Each team member is given a roughly equal number of cards, each bearing one need. One team member selects a card from his (her) pile, reads it aloud, and places it on the table
---
\(^{15}\)For example, in an automobile, the total length is often correlated with engine size, legroom, and even the quality of the interior. Thus, when we survey people about needs that relate to length, engine size, legroom, and the interior we will find correlation even though these needs are quite distinct in the customer's mind. We do not rule out factor analysis; we only point out that it can not be used blindly.
\(^{16}\)K-J is the registered trademark of Jiro Kawakita for his version of the affinity chart. For the remainder of the paper we use the more generic name.
(or wall). Other members add "similar" cards to the pile with a discussion after each card. Sometimes the card is moved to a new pile, sometimes it stays where it was first placed. The process continues until the group has separated all the cards into some number of piles of similar cards, where each pile differs from the others in some way. The team then structures the cards in each pile into a hierarchical tree with more detailed needs at lower levels, and more tactical and strategic needs at the upper levels. To label a higher-order need, say a secondary need to represent a group of tertiary needs, the group can either select from among the tertiary needs or add a new card to summarize the relevant tertiary needs. Throughout the process the team can rearrange the cards, start new piles, or elaborate the hierarchy.
**Customer Sort.** In a customer sort, customers are given a deck of cards, each bearing one need. They are asked to sort the cards into piles such that each pile represents similar needs and differs from the other piles in some way. The number of piles and the exact definition of similarity is left unspecified. After completing the sort, each respondent is asked to choose a single need, called an exemplar, which best represents each pile. From the sort data we create a co-occurrence matrix in which the $i$-$j$-th element of the matrix is the number of respondents who placed need $i$ in the same pile as need $j$. We also label each need with the number of times it was chosen as an exemplar.
To develop a structured hierarchy we cluster\(^{17}\) the co-occurrence matrix. To name the clusters we use the exemplars. When there is no clearly dominant exemplar within a cluster, we either choose from among the exemplars in the cluster or add a label to the data.
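The bookkeeping for a customer sort can be sketched in a few lines (the sort data and need indices below are invented for illustration):

```python
from collections import Counter
from itertools import combinations

# Hypothetical sort data: for each respondent, piles of need indices plus
# the exemplar chosen to represent each pile.
sorts = [
    {"piles": [[0, 1], [2, 3, 4]], "exemplars": [0, 3]},
    {"piles": [[0, 1, 2], [3, 4]], "exemplars": [1, 3]},
    {"piles": [[0], [1, 2], [3, 4]], "exemplars": [0, 2, 4]},
]

n_needs = 5
cooc = [[0] * n_needs for _ in range(n_needs)]   # co-occurrence matrix
exemplar_counts = Counter()

for resp in sorts:
    for pile in resp["piles"]:
        for i, j in combinations(pile, 2):
            cooc[i][j] += 1
            cooc[j][i] += 1
    exemplar_counts.update(resp["exemplars"])

# cooc[i][j] = number of respondents placing needs i and j in one pile.
# A hierarchy is then obtained by clustering, e.g. treating
# 1 - cooc[i][j] / n_respondents as a distance and applying Ward's method.
print(cooc[0][1], exemplar_counts[0])  # → 2 2
```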
**Comparison.** We have now completed formal comparisons of structures for a consumer packaged-good and for coolers. We have also completed a formal comparison for a high-technology product between affinity groups based on team input and affinity groups based on customer input. In addition we are aware of a number of comparisons that have been completed in the field for both consumer and industrial products\(^{18}\). Table 10 reports the primary-need labels for the cooler hierarchies. Notice that the customer-sort hierarchy used more categories, that the secondary and tertiary needs are more equally distributed among categories, and that the exemplars are more equally distributed. More importantly, most managers believe that the customer-sort hierarchy provides a clearer, more-believable, and easier-to-work-with representation of customer perceptions than does the affinity hierarchy. The affinity hierarchy seems to sort needs according to how one would build a cooler; the customer-sort hierarchy
---
\(^{17}\)We have found that Ward's method, the average linkage method, and the complete linkage (farthest neighbor) method provided similar structures in our data. See Griffin (1989). For example, when comparing a Ward's-based cluster solution and an average-linkage-based cluster solution, only 3% of the customer needs appeared in different primary groupings. Single linkage (nearest neighbor) led to "chaining" in which customer needs were merged to a large cluster one at a time. Because the difference between the three clustering algorithms is slight, we chose Ward's method for the comparisons in this chapter. It is used more often in industry (Romesburg 1984) and, when shown the three solutions, the management team believed that the Ward's structure was slightly superior to the other two. (In Ward's method, clusters are merged based on the criterion of minimizing the overall sum of squared within-cluster distances.)
\(^{18}\)Personal communication with Robert Klein of Applied Marketing Science, Inc.
seems to sort needs according to how one would use a cooler.
**Affinity chart:**

| PRIMARY NEED | SECND NEEDS | TERT NEEDS | EXEMPLARS |
|-----------------------|-------------|------------|-----------|
| Price | 4 | 0 | 2 |
| Consumer Utility | 2 | 14 | 21 |
| Phys. Characteristics | 10 | 30 | 10 |
| Thermal Attributes | 4 | 34 | 3 |
| Convenience | 5 | 139 | 6 |
| Total | 25 | 217 | 42 |
| Coeff. of Variation | 0.6 | 1.4 | 0.9 |

**Customer sort:**

| PRIMARY NEED | SECND NEEDS | TERT NEEDS | EXEMPLARS |
|-----------------------|-------------|------------|-----------|
| Attractiveness | 4 | 20 | 9 |
| Carries Many Things | 4 | 21 | 9 |
| Maintains Temps. | 5 | 39 | 6 |
| Right Size | 3 | 29 | 7 |
| Easy to Move | 2 | 23 | 6 |
| Convenience | 2 | 29 | 1 |
| Works as Container | 2 | 30 | 4 |

Table 10. Comparing Affinity-chart and Customer-sort Cooler Hierarchies (from Griffin and Hauser 1991)
Based on our observations and our discussions with practitioners, most of the speculation based on table 10 generalizes to the proprietary applications. The customer-sort hierarchy does tend to spread the needs and exemplars more equally, and the resulting structure does seem to represent better how customers use the product rather than how firms build the product. Indeed, in every application which we have observed or have heard about, once the team saw the customer-sort hierarchy they felt that it, rather than the affinity diagram, was the better representation of the voice of the customer.
One argument in favor of the affinity hierarchy is that it encourages more involvement by the team and leads to greater "ownership" of the results. Faced with this criticism, recent applications of customer-sort methods have asked the team to complete the sorting task in parallel with the customers. Both sets of data are analyzed and comparisons made. During the process the team begins to ask themselves: "I know how I would sort the customer needs, but how would the customer sort these same needs?" Such involvement leads to a greater appreciation of the results and a belief by the team that they "own" the results.
**Providing Priorities for Customer Needs**
Providing priorities (importances) for customer needs, albeit for fewer needs than are necessary for QFD, has received much attention in the marketing literature. We refer the reader to Wilkie and Pessemier (1973), Lynch (1985), Shocker and Srinivasan (1979), Green (1984), and Hauser and Urban (1979). Rather than focusing on that literature, we again focus on those questions that are most common in the application of QFD: (1) Can we avoid new data collection by using frequency of mention in the qualitative research as a surrogate for importance?, (2) Do survey measures of importances have any relation to how customers make choices among products?, and (3) If frequency of mention is not a good surrogate, what is the
best survey measure?
**Is frequency of mention a good surrogate for importance?** It is a reasonable hypothesis that customers will mention most often those needs that are most important. If this were true, then we could use frequency of mention as a surrogate for importance. To test this hypothesis we measured importances for the customer needs in figure 10 (coolers) with a nine-point self-rated importance scale. We then reanalyzed the data in the same way, but for only the most important needs. The results are plotted in figure 11, where, for comparison, we have normalized the data so that 30 customers equals 100%. Figure 11 clearly suggests that important needs are no more likely to be mentioned by a customer than needs in general. Regrettably, frequency of mention does not appear to be a good surrogate for importance.
**How do survey measures relate to customer choice?** Table 11 reports one comparison of survey measures to customer interest and preference. In this study a major consumer-products company created seven product concepts, each of which emphasized improvement with respect to one of the seven primary needs that had been identified for the category. Consumers were then given the concepts and asked to express their interest in the concepts and their preference for the concepts. In addition, later in the survey they were asked to rate the importance of the customer needs. Three scales were used: (1) a direct nine-point rating scale, (2) a constant-sum scale in which customers allocated 100 points among the customer needs, and (3) an anchored scale in which customers allocated 10 points to the most important need and up to 10 points to all other needs. Each customer completed one set of scales. The sample size was 1600 customers per scale with response rates in the range of 75-78% for the three samples.
Table 11 reports that all three measures correlate highly with both interest and preference. While interest and preference are not actual choice, we do expect that they should be correlated with actual choice (should the concepts be developed into physical products) suggesting that all three measures of importance are reasonable in the sense that customers prefer those concepts that stress important needs.
**What is the best survey measure?** Table 11 is one of three formal comparisons which we have completed. In each case, there was general agreement among all of the survey measures. No clear winner has yet emerged. Qualitatively, we prefer the anchored scale; however, one must be cautious in using either it or the constant-sum scale. In both of these scales the rated importance of the primary need is cascaded down as a multiplying factor for the corresponding secondary and tertiary needs. If the primary need is poorly worded, then any
Table 11. Comparison of Interest and Preference with Importances (from Griffin and Hauser 1991)
| Primary Need | Interest | Preference | Direct | Anchored | Constant-Sum |
|--------------|----------|------------|--------|----------|-----------|
| A | 2 | 1 | 1 | 1 | 1 |
| B | 1 | 2 | 3 | 2 | 2 |
| C | 4 | 4 | 4 | 4 | 4 |
| D | 6 | 6 | 6 | 5 | 5 |
| E | 5 | 5 | 5 | 6 | 6 |
| F | 7 | 7 | 7 | 7 | 7 |
| G | 3 | 3 | 2 | 3 | 3 |
Rank correlation of each importance measure with interest and preference:

| | Direct | Anchored | Constant-Sum |
|------------|--------|----------|--------------|
| Interest | 0.89 | 0.93 | 0.93 |
| Preference | 0.96 | 0.96 | 0.96 |
measurement error affects all corresponding secondary and tertiary needs. For more details see Hauser (1991).
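The rank correlations reported with Table 11 can be checked directly from the rank columns of the table; a short sketch (plain Spearman formula, no ties):

```python
def spearman(r1, r2):
    """Spearman rank correlation for two lists of ranks (no ties)."""
    n = len(r1)
    d2 = sum((a - b) ** 2 for a, b in zip(r1, r2))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Ranks of the seven primary needs A-G, copied from Table 11.
interest   = [2, 1, 4, 6, 5, 7, 3]
preference = [1, 2, 4, 6, 5, 7, 3]
direct     = [1, 3, 4, 6, 5, 7, 2]
anchored   = [1, 2, 4, 5, 6, 7, 3]

print(round(spearman(direct, interest), 2))    # → 0.89
print(round(spearman(direct, preference), 2))  # → 0.96
```

The high correlations across all measures are what justify the conclusion that customers prefer concepts stressing the needs they rate as important.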
**Research Challenges**
Academic research on the use and improvement of QFD has just begun. There are many opportunities to improve the process through careful, scientific study. We hope that, by building upon the rich history of field experience, researchers can make many incremental improvements which, taken together, could revolutionize the practice of QFD.
**The Voice of the Customer.** For example, Griffin and Hauser (1991) have used probabilistic models (beta-binomial models) to quantify how many customer needs are missed by a given data-collection and analysis procedure. These simple models can be extended to more complex analyses. Alternatively, researchers can use the models to explore differences among product categories and data-collection techniques. For example, the total number of customer needs applicable to the product category might be affected by the product's inherent complexity (cars have more functions than shampoos) and the homogeneity of the respondent group (one niche segment will probably produce only some subset of the total needs for the entire market).
A factor worth exploring is the question of who to interview. Some respondents are extraordinarily articulate; others have difficulty expressing themselves -- the customer needs just trickle out. Even a very good interviewer sometimes has difficulty getting an inarticulate respondent to provide useful data. Also, a person's experience level in the product area may
affect the number of customer needs they can voice to an interviewer. For example, we have noticed that people who routinely rent and drive many different cars in the course of their frequent business travel can discuss their driving needs far more easily than the person who has driven only one car over the last several years. Clearly, the topic of leading-edge users bears investigating in the context of QFD.
Another challenge is the structuring of customer needs. We described above a formal clustering procedure based on judged similarity among customer needs. It is clear that clustering customer-sort data is superior to judgment-based affinity charts. However, we do not know what data collection is best, how best to correct for the fact that different customers use different numbers of piles, or what data analysis procedure is best.
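As a concrete illustration of the kind of analysis in question, here is a minimal agglomerative clustering sketch (complete linkage is used for brevity; the comparisons above used Ward's method, and the distance matrix below is invented):

```python
def agglomerate(dist, n_clusters):
    """Complete-linkage agglomerative clustering on a distance matrix."""
    clusters = [[i] for i in range(len(dist))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Complete linkage: cluster distance is the farthest pair.
                d = max(dist[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)   # merge the closest pair of clusters
    return clusters

# Invented distances, e.g. d[i][j] = 1 - cooc[i][j] / n_respondents, so
# needs that customers sort together often are "close".
dist = [
    [0.0, 0.1, 0.9, 0.8, 0.9],
    [0.1, 0.0, 0.8, 0.9, 0.9],
    [0.9, 0.8, 0.0, 0.2, 0.3],
    [0.8, 0.9, 0.2, 0.0, 0.1],
    [0.9, 0.9, 0.3, 0.1, 0.0],
]
print(agglomerate(dist, 2))  # → [[0, 1], [2, 3, 4]]
```

Repeating the merge at every level, and recording the order of merges, yields the hierarchical tree of primary, secondary, and tertiary needs.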
The question of importance measurement for large numbers of customer needs is still open. Procedures such as conjoint analysis appear promising, but the state-of-the-art frontier for conjoint analysis seems to be about 50 features (Wind et al. 1989). More critically, conjoint analysis works best when features (whether a spirometer has a thermal printer or a dot-matrix printer) rather than customer needs are specified. We have seen proprietary applications where quantal choice (logit) models have been effective with between 10 and 20 customer needs, but in those applications the firm had access to a large data base in which customers could be observed making actual choices. Such revealed preference models are promising if one can deal with the collinearity that is inherent in the 100-400 customer needs that are typical for QFD.
**The Relationship Matrix.** The relationship matrix of a House of Quality, in which the impact design attributes have on customer needs is recorded, consists of somewhere between 10,000 and 100,000 cells. Each cell represents one interaction between a design attribute and a customer need. The relationships in these cells are currently identified by consensus of a cross-functional team in a series of working meetings. The results are thus the perceived relationships between these two parameters, not necessarily the actual relationships. An interesting and highly useful area of research would be to investigate a means for producing these relationships more efficiently and grounded in data rather than perception. We are aware of one proprietary study in which an automobile manufacturer measured over 100 features of competitive automobiles and over 50 customer needs on those same automobiles. They then attempted to use multiple regression to estimate the relationship matrix. While the study gave valuable insight to the firm, the collinearity in the regressions was severe and had to be modified with judgment. Certainly Bayesian techniques are worth exploring.
**A Technical Importance Index.** In QFD, importance priorities for design attributes are often derived by summing, across all customer needs, the level of interaction between the design attribute and the customer need multiplied by the importance of the customer need. Bordley and Paryani (1991) have developed a more complete index of design attribute importance which takes into account both the gap between current performance and the cost to close the gap, as well as the original values. While their technique raises concerns about the difficulty and time required to collect the additional data it demands, they demonstrate through simulation that the highest-priority design attributes from a small set of disguised data are very robust to changes in assumptions about sales volume, strengths of relationships between design attributes and customer needs, and how correlations between design attributes affect customer feature delivery.
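The standard summation index described at the start of this subsection reduces to a weighted sum over the relationship matrix; a minimal sketch with invented numbers:

```python
# Standard QFD priority: importance of design attribute j is the sum over
# customer needs i of (relationship strength R[i][j]) x (need importance w[i]).
# The scales and values below are illustrative, not from the chapter.

needs_importance = [9, 5, 3]      # w_i for three needs, e.g. a 1-9 scale
relationship = [                  # R[i][j], e.g. the common 9/3/1/0 scale
    [9, 3, 0],
    [3, 9, 1],
    [0, 1, 9],
]

attr_priority = [
    sum(w * row[j] for w, row in zip(needs_importance, relationship))
    for j in range(len(relationship[0]))
]
print(attr_priority)  # → [96, 75, 32]
```

Indices such as Bordley and Paryani's refine this by weighting each term further, e.g. by performance gaps and the cost of closing them.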
**Summary.** QFD is just one technique which may be used to improve the marketing/R&D interface. Other product-development processes as well as other general approaches can be used to improve the interaction and work flow across the interface. The research we have presented in this section demonstrates that while some initial inroads have been made into understanding and improving upon QFD, there is significant opportunity for research to contribute both to applying QFD better, and by extension, to improving the use of other techniques.
This completes our brief discussion of marketing's role to identify, structure, and prioritize the voice of the customer. This is one key role of marketing, but we want to emphasize that QFD only works well if marketing and R&D are involved throughout the QFD process.
### 10. SUMMARY
This chapter has reviewed some of the developments at the interface between marketing and R&D. We began by recognizing that many of the day-to-day tasks of product development require cooperation. Although some of these tasks are dominated by either marketing or R&D, many critical tasks such as understanding customer needs, setting product-development goals, matching solutions to customer needs, establishing the core benefit proposition, and resolving engineering-design and customer-need tradeoffs must involve marketing and R&D. We found that the scientific literature is persuasive in the conclusion that cooperation, when it occurs, leads to greater success and more sales from new products.
However, cooperation is not always easy. There are many barriers to both communication and cooperation. Marketing and R&D exist in different thought-worlds, each with their own language, culture, accepted goals, and accepted procedures. There is some speculation that personality and organizational barriers exist. In many cases, marketing and R&D are located in different buildings or cities such that communication is physically difficult.
Faced with these barriers firms have developed means to enhance communication. Organizational structures such as coordinating groups, project teams, and matrix structures have been set up to encourage communication. Personnel have been transferred among and within groups and physical facilities have been relocated. Firms have encouraged informal social systems and, in some cases, formal integrative management systems. Each approach has met with some degree of success, if they are accepted by the groups involved and if the groups really want to cooperate.
Of the formal management systems, Quality Function Deployment has met with the most
recent success. QFD attempts to enhance communication with an interfunctional group that meets regularly and uses the House of Quality (and subsequent Houses) to focus marketing, engineering, R&D, and manufacturing on the voice of the customer. In particular, the House of Quality provides a direct link from customer needs to design attributes so that the language of the customer can be translated to the language of the builder. In one scientific study, we found that QFD did enhance communication within functions and between functions by encouraging team members to talk to one another directly rather than through management filters.
Further study of 35 projects in nine firms suggested that QFD provides strategic benefits in 83% of the projects and tactical short-term benefits in 27% of the projects. The key variables that seemed necessary for the success of QFD were (1) that firms viewed QFD as an investment rather than a cost, (2) that both management and team members were committed to QFD as a process, and (3) that QFD was used to achieve a specific product or process goal.
Because this chapter appears in a marketing handbook, we closed with a review of the marketing input to QFD. The new challenge of QFD is the large number of customer needs (100-400) that are necessary to complete the communication link between marketing and R&D. This number of customer needs is an order of magnitude larger than the number of customer needs with which marketing research normally deals. As a result, it becomes important to use a qualitative research technique that is cost effective and which provides a reasonably complete set of customer needs. Our research suggests that experiential one-on-one interviews may be the most cost-effective method because greater depth is gained relative to focus groups.
Our research also suggests that customer input is critical in the structuring of the voice of the customer. When customers are asked to sort the customer needs, they tend to sort them to reflect how they use the product. When team members are asked to sort the customer needs, they tend to sort them to reflect how they build the products. If the goal is to satisfy the needs of the customer, it is clear that the customer-sort structure is the better representation.
Finally, our research suggests that importance ratings for the customer needs can be measured via surveys and that such measures correlate well with customer interest and preference. We have not yet identified a "best" survey measure but, regrettably, we do know that frequency of mention in the qualitative research is not a good surrogate for importance.
We believe that the marketing-R&D interface is important to profitability. To date there has been much research in the R&D literature and there has been much interest by industry. QFD, one formal technique, has spread rapidly throughout American and Japanese industries and we expect that trend to continue. But QFD is not just a process; it is a living, growing, and evolving set of tools to integrate and improve the marketing-R&D interface. We hope that research on QFD and on the interface continues. Perhaps ten years from now marketing and R&D will be fully integrated to the point where they work smoothly together for improved new products and greater long-run profits to the firm.
REFERENCES
Akao, Yoji (1987), *Quality Deployment: A Series of Articles*, (Lawrence, MA: G.O.A.L., Inc.) Translated by Glen Mazur.
Aldrich, Carole and Thomas E. Morton (1975), "Optimal Funding Paths for a Class of Risky R&D Projects," *Management Science*, 21, 5, (January), 491-500.
Allen, Thomas J. (1970), "Communications Networks in R&D Laboratories," *R&D Management*, 1, 14-21.
Allen, Thomas J. (1978), *Managing the Flow of Technology: Technology Transfer and the Dissemination of Technological Information Within the R&D Organization*, (Cambridge, MA: The MIT Press).
Allen, Thomas J. (1986), *Managing the Flow of Technology*, The MIT Press, Cambridge, MA.
American Supplier Institute (1987), *Quality Function Deployment: A Collection of Presentations and QFD Case Studies*, (Dearborn, MI: American Supplier Institute, Inc.)
Babcock, Daniel L. (1991), "Chapter 15: Project Organization, Leadership, and Control," *Managing Engineering and Technology*, Prentice Hall, Englewood Cliffs, NJ.
Block, J. (1977), "Recognizing the Coherence of Personality," In D. Magnusson and N.S. Endler (eds.), *Interactional Psychology: Current Issues and Future Prospects*, Wiley, New York.
Bonnet, D. (1986), "Nature of the R&D/Marketing Cooperation in the Design of Technologically Advanced New Industrial Products," *R&D Management*, 16, 117-126.
Bordley, Robert F., and Kioumars Paryani (1991), "Prioritizing Design Improvements Using Quality Function Deployment," General Motors Corporation Working Paper, submitted to *Management Science*.
Calder, Bobby J. (1979), "Focus Groups and the Nature of Qualitative Marketing Research," *Journal of Marketing Research*, 14, (August), 353-364.
Capon, Noel, and Rashi Glazer (1987), "Marketing and Technology: A Strategic Co-alignment," *Journal of Marketing*, 51, 1-14.
Clark, Kim (1985), "The Interaction of Design Hierarchies and Market Concepts in Technological Evolution," *Research Policy*, 14, 235-251.
Clausing, Don (1986). "QFD Phase II: Parts Deployment," American Supplier Institute Publication, Warren MI.
Cooper, Robert G. (1983), "The New Product Process: An Empirically-Based Classification Scheme," *R&D Management*, 13(1), 1-13.
Cooper, Robert G. (1984a), "New Product Strategies: What Distinguishes the Top Performers?," *Journal of Product Innovation Management*, 2, 151-164.
Cooper, Robert G. (1984b), "How New Product Strategies Impact on Performance," *Journal of Product Innovation Management*, 2, 5-18.
Cooper, Robert G. and Ulricke de Brentani (1991), "New Industrial Financial Services: What Distinguishes the Winners," *Journal of Product Innovation Management*, 8, 75-90.
Cooper, Robert G., and Elko Kleinschmidt (1986), "An Investigation into the New Product Process: Steps, Deficiencies, and Impact," *Journal of Product Innovation Management*, 3, 71-85.
Cooper, Robert G., and Elko Kleinschmidt (1987), "New Products: What Separates Winners from Losers?," *Journal of Product Innovation Management*, 4, 169-184.
de Brentani, Ulrike (1989), "Success and Failure in New Industrial Services," *Journal of Product Innovation Management*, 6, 239-58.
Dougherty, Deborah (1987), "New Products in Old Organizations: The Myth of the Better Mousetrap," unpublished doctoral dissertation, Massachusetts Institute of Technology.
Dougherty, Deborah (1989), "Interpretive Barriers to Successful Product Innovation," Marketing Science Institute Report # 89-114.
Douglas, M. (1987), *How Institutions Think*, Rutledge and Kegan Paul, London.
Eureka, William E., (1987), "Introduction to Quality Function Deployment," Section III of Quality Function Deployment: A Collection of Presentations and QFD Case Studies, American Suppliers Institute Publication, January.
Feldman, Laurence P. and Albert L. Page (1984), "Principles vs. Practice in New Product Planning," *Journal of Product Innovation Management*, 1, 43-55.
Fern, Edward F. (1982), "The Use of Focus Groups for Idea Generation: The Effect of Group Size, Acquaintanceship, and Moderator on Response Quantity and Quality," *Journal of Marketing Research*, 19, (February), 1-13.
Glaser, B.G. (1967), and A.L. Strauss, *The Discovery of Grounded Theory*, Aldine Publishing, Chicago, IL.
Green, Paul E. (1984), "Hybrid Models of Conjoint Analysis: An Expository Review," *Journal of Marketing Research*, 21,2, (May), 155-169.
Green, Paul E. and Abba Kreiger (1992), "An Application of a Product Positioning Model to Pharmaceutical Products," *Marketing Science*, 11, 2, (Spring).
Green, Paul E. and V. Srinivasan (1990), "Conjoint Analysis in Marketing Research: New Developments and Directions," *Journal of Marketing*, 54, 4, (October), 3-19.
Griffin, Abbie (1989), "Functionally Integrating New Product Development," unpublished PhD thesis, Massachusetts Institute of Technology.
Griffin, Abbie (1991), "Evaluating Development Processes: QFD as an Example," University of Chicago Working Paper.
Griffin, Abbie, and John R. Hauser (1992), "Patterns of Communication Among Marketing, Engineering and Manufacturing -- A Comparison Between Two New Product Teams," accepted for publication *Management Science*, vol 38, No. 3, (March).
Griffin, Abbie, and John R. Hauser (1991), "The Voice of the Customer," Working Paper, Sloan School of Management, M.I.T., Cambridge, MA 02139.
Gupta, Ashok K., S.P. Raj, and David Wilemon (1986a), "A Model for Studying R&D-Marketing Interface in the Product Innovation Process," *Journal of Marketing*, 50(April), 7-17.
Gupta, Ashok K., S.P. Raj, and David Wilemon (1986b), "R&D and Marketing Managers in High-Tech Companies: Are They Different?" *IEEE Transactions on Engineering Management*, EM-33(1), 25-32, (February).
Gupta, Ashok K., S.P. Raj, and David Wilemon (1986c), "Managing the R&D/Marketing Interface," *Research and Technology Management*, March-April, 38-43.
Gupta, Ashok K., S.P. Raj, and David Wilemon (1985a), "The R&D-Marketing Interface in High-Technology Firms," *Journal of Product Innovation Management*, 2, 12-24.
Gupta, Ashok K., S.P. Raj, and David Wilemon (1985b), "R&D and Marketing Dialogue in High-Tech Firms," *Industrial marketing Management*, 14, 289-300.
Gupta, Ashok K. and David Wilemon (1988), "Improving the R&D/Marketing Interface," Working Paper, Syracuse University, Syracuse, NY, (February).
Hauser, John R. (1991), "Comparison of Importance Measurement Methodologies and their Relationship to Consumer Satisfaction," M.I.T. Working Paper, Sloan School of Management, Cambridge, MA, January.
Hauser, John R. (1992), "Puritan-Bennett, the Renaissance Spirometry System: Listening to the Voice of the Customer," M.I.T. Working Paper, Sloan School of Management, Cambridge, MA, January.
Hauser, John R., and Donald Clausing (1988), "The House of Quality," *Harvard Business Review*, (May-June), 63-73.
Hauser, John R., and Patricia Simmie, "Profit Maximizing Perceptual Positions: An Integrated Theory for the Selection of Product Features and Price," *Management Science*, Vol. 27, No. 2, (January 1981), pp. 33-56.
Hauser, John R. and Glen L. Urban (1979), "Assessment of Attribute Importances and Consumer Utility Functions," *Journal of Consumer Research*, 5, (March), 251-62.
Hayes, Robert H., Steven C. Wheelwright, and Kim B. Clark (1988), *Dynamic Manufacturing*, The Free Press, New York.
Hise, Richard T., Larry O'Neal, A. Parasuraman and James U. McNeal (1990), "Marketing/R&D Interaction in New Product Development: Implications for New Product Success Rates," *Journal of Product Innovation Management*, 7, 2, (June), 142-155.
Jewkes, John, David Sawers, and Richard Stillerman (1969), "The Individual Inventor," Chapter V in *The Sources of Invention (Second Edition)*, W.W. Norton & Company, New York, NY.
Kamien, Morton, I., and Nancy L. Schwartz (1972), "Timing of Innovations Under Rivalry," *Econometrica*, 40, 43-60.
King, Robert (1987), *Better Designs in Half the Time: Implementing Quality Function Deployment (QFD) in America*, (Lawrence, MA: G.O.A.L., Inc.)
Kogure, and Yoji Akao (1983), "Quality Function Deployment and CWQC," *Quality Progress*, 16:10, (October), 25-29.
Kohli, Rajeev and R. Sukumar (1990), "Heuristics for Product-line Design Using Conjoint Analysis," *Management Science*, 36, 12, (December), 1464-1478.
Kotler, Philip (1991), *Marketing Management*, Seventh Edition, Prentice-Hall, Inc., Englewood Cliffs, NJ.
Lawrence, Paul R., and Jay W. Lorsch (1967), *Organization and Environment*, Harvard Business School Press, Cambridge, MA.
Lee, Tom, and Louis L. Wilde (1980), "Market Structure and Innovation: A Reformulation," *Quarterly Journal of Economics*, 94, 429-436.
Lorsch, Jay W., and Paul R. Lawrence (1965), "Organizing for Product Innovation," *Harvard Business Review*, 109-120, (January-February).
Loury, Glenn C. (1979), "Market Structure and Innovation," *Quarterly Journal of Economics*, 93, 395-410.
Lucas, Robert E. (1971), "Optimal Management of a Research and Development Project," *Management Science*, 17, 11, (July), 679-697.
Lynch, John G., Jr. (1985), "Uniqueness Issues in the Decompositional Modeling of Multi-attribute Overall Evaluations: An Information Integration Perspective," *Journal of Marketing Research*, 22, (February), 1-19.
Marquis, Donald G., and D.L. Straight (1965), "Organizational Factors in Project Performance," MIT Sloan School of Management Working Paper.
McElroy, John, (1987), "For Whom are We Building Cars?," *Automotive Industries*, (June), 68-70.
Mehrez, Abraham (1983), "Development and Marketing Strategies for a Class of R and D Projects, with Time Independent Stochastic Returns," *R.A.I.R.O. Recherche Operationnelle/Operations Research*, 17, 1, (February), 1-13.
Miles, Raymond E., and Charles C. Snow (1978), *Organizational Strategy, Structure, and Process*, McGraw-Hill, New York, NY.
Moenaert, Rudy K. and William E. Souder (1990), "An Information Transfer Model for Integrating Marketing and R&D Personnel in New Product Development Projects," *Journal of Product Innovation Management*, 7, 2, (June), 91-107.
Moore, William L. (1987), "New Product Development Practices of Industrial Marketers," *Journal of Product Innovation Management*, 4, 6-20.
Norton, John A., Mark E. Parry, and X. Michael Song (1992), "The Impact of Firm Strategy, Environmental Uncertainty, and Organizational Structure and Climate on R&D-Marketing Integration in American Chemical And Pharmaceutical Firms," Working Paper, University of Chicago.
Park, Jaesun (1987), "Dynamic Patent Races with Risky Choices," *Management Science*, 33, 12, 1563-1575.
Parry, Mark E., and X. Michael Song (1991), "The Impact of Firm Strategy, Environmental Uncertainty, and Organizational Structure and Climate on R&D/Marketing Integration in Japanese High-Technology Firms," *Proceedings: 1991 Product Development Management Association International Conference*, Boston, MA 49-59.
Pelz, Donald C., and F.M. Andrews (1966), *Scientists in Organizations*, Revised Edition, (Ann Arbor, MI: University of Michigan Press).
Pessemier, Edgar A. (1986), *Product Management: Strategy and Organization*, (Robert E. Krieger Publishing Company, Inc.: Malabar, FL).
Pinto, Mary Beth and Jeffrey K. Pinto (1990), "Project Team Communication and Cross-Functional Cooperation in New Program Development," *Journal of Product Innovation Management*, 7, 200-212.
Porteus, Evan L. and Seugjin Whang (1991), "On Manufacturing/Marketing Incentives," 37, 9, (September), 1166-1181.
Ray, M.L. (1982), *Advertising and Communication Management*, Prentice-Hall, Inc., Englewood Cliffs, N.J., 157.
Reinganum, Jennifer F. (1982), "A Dynamic Gave of R and D: Patent Protection and Competitive Behavior," *Econometrica*, 50, 3, (May), 671-688.
Reynolds, Thomas J. and Jonathan Gutman (1988), "Laddering Theory, Method, Analysis, and Interpretation," *Journal of Advertising Research*, 28, 1, 11-31.
Roberts, Edward B. (1987), "Managing Technological Innovations: A Search for Generalizations," in Edward B. Roberts, ed., *Managing Technological Innovation*, Oxford University Press, Oxford, England.
Roberts, Edward B. (1988), "What We've Learned: Managing Invention and Innovation," *Research and Innovation Management*, 31, 11-29.
Romesburg, H. C. (1984), *Cluster Analysis for Researchers*, (Lifetime Learning Publications: Belmont, CA).
Roussel, Philip A., Kamal N. Saad, and Tamara J. Erickson (1991), *Third Generation R&D: Managing the Link to Corporate Strategy*, (Harvard Business School Press: Boston, MA).
Rothwell, R., C. Freeman, A. Horsley, V. Hervis, A. Roberston, and J. Crawford
(1974), "SAPPHO" Updated: Project SAPPHO Phase II," *Research Policy*, 32, 258-291.
Ruekert, Robert W., and Orville C. Walker (1987), "Marketing's Interaction with Other Functional Units: A Conceptual Framework and Empirical Evidence," *Journal of Marketing*, 51, (January), 1-19.
Saxberg, B. and J. W. Slocum (1968), "The Management of Scientific Manpower," *Management Science*, 14, 8, B473-B489.
Schein, Edgar H. (1985), *Organizational Culture and Leadership*, Jossey-Bass Publishers, San Francisco, CA.
Schmalensee Richard and Jacques-François Thisse (1988), "Perceptual Maps and the Optimal Location of New Products: An Integrative Essay," *International Journal of Research in Marketing*, 5, 225-249.
Shapiro, Ben P. (1987), "The New Intimacy," Harvard Business School Note, ICCH #788010 (October 1).
Shocker, Allan D. and V. Srinivasan (1979), "Multi-attribute Approaches for Product Concept Evaluation and Generation: A Critical Review," *Journal of Marketing Research*, 16, (May), 159-180.
Silver, Jonathan Alan and John Charles Thompson, Jr. (1991), "Understanding Customer Needs: A Systematic Approach to the 'Voice of the Customer,'" Master's Thesis, Sloan School of Management, M.I.T., Cambridge, MA 02139.
Siwolop, S. (1987), "R&D Scoreboard," *Business Week*, 134-155, (June 22).
Song, X. Michael (1991), "An Empirical Investigation of the Marketing/R&D Interface," Unpublished Ph.D. Dissertation, University of Virginia, Darden School, (August).
Souder, William E. (1978), "Effectiveness of Product Development Methods," *Industrial Marketing Management*, 7.
Souder, William E. (1987), "Managing the R&D/Marketing Interface," Chapter 10 in *Managing New Product Innovations*, Lexington Books, Lexington, MA.
Souder, William E. (1988), "Managing Relations Between R&D and Marketing in New Product Development Products," *Journal of Product Innovation Management*, 5, 6-19.
Sullivan, Lawrence P. (1986), "Quality Function Deployment," *Quality Progress*, 19(6), (June), 39-50.
Takeuchi, Hirotaka and Ikujiro Nonaka (1986), "The New New Product Development Game," *Harvard Business Review*, 66, 1, (January-February), 137-146.
Urban, Glen L., and John R. Hauser (1980), *Design and Marketing of New Products*, (Englewood Cliffs, NJ: Prentice-Hall, Inc.)
von Hippel, Eric (1978), "Successful Industrial Products from Consumers' Ideas," *Journal of Marketing*, 42, 1, (January), 39-49.
von Hippel, Eric (1986), "Lead Users: A Source of Novel Product Concepts," *Management Science*, 32, 7, (July), 791-805.
Webster, Frederick E., "The Changing Role of Marketing in the Corporation," Marketing Science Institute Report # 91-127, October.
Wiebecke, George, Hugo Tschirky and Eberhard Ulich (1987), "Cultural Differences at the R&D/Marketing Interface: Explaining Inter-Divisional Communication Barriers," *Proceedings of the IEEE Conference on Management and Technology*, Atlanta, GA.
Wilkie, William L. and Edgar A. Pessemier (1973), "Issues in Marketing's Use of Multi-attribute Attitude Models," *Journal of Marketing Research*, 10, (November), 428-441.
Wind, Yoram, Paul E. Green, Douglas Shifflet, and Marsha Scarbrough (1989), "Courtyard by Marriott: Designing a Hotel facility with Consumer-Based Marketing Models," *Interfaces*, 19, (January-February), 25-47. |
A STRUCTURED THRESHOLD MODEL FOR MOUNTAIN PINE BEETLE OUTBREAK
MARK A. LEWIS\textsuperscript{1}, WILLIAM NELSON\textsuperscript{2} AND CAILIN XU\textsuperscript{2}
ABSTRACT. A vigor-structured model for mountain pine beetle outbreak dynamics within a forest stand is proposed and analyzed. This model explicitly tracks the changing vigor structure in the stand. All model parameters, other than beetle vigor preference, were determined by fitting model components to empirical data. An abrupt threshold for tree mortality as a function of beetle density allows for model simplification. Depending on initial beetle density, model outcomes range from decimation of the entire stand in a single year, to inability of the beetles to infest any trees, to an intermediate outcome in which an initial infestation subsequently dies out before the entire stand is killed. A model extension, involving a stochastic formulation, is proposed for the dynamics of beetle aggregation.
1. Introduction
Mountain pine beetles (\textit{Dendroctonus ponderosae} Hopkins) are the single most destructive pine forest pest (Logan and Powell, 2001; Logan et al., 2003). Although mountain pine beetles occur naturally throughout the pine forests of western North America, their range is spreading quickly, likely due to changing climate (Logan et al., 2003).
1.1. Biological Background. Mountain pine beetle outbreaks range from isolated attacks on single trees to mass attacks of virtually all trees in a stand. Mass attacks can decimate forest stands in one or two years. While the exact determinants of successful outbreaks are unclear, initial beetle attack density and host vigor (as measured by wood production per unit of leaf area) are key factors (Figure 1). When vigor is low or beetle density is high, attacks are successful (the tree is killed). When vigor is high or beetle density is low, attacks are only partially successful (strip attack) or unsuccessful (the tree repels the attack).
The mountain pine beetle life cycle has been researched intensively, and is well-understood. The beetles attack and breed in live host trees—a process that results in host mortality. Most mountain pine beetle populations complete one generation a year (Safranyik and Carroll, 2006) (Figure 2). Recently developed adults emerge from their host trees in late summer, and take flight in search of new hosts.
The flight period for the entire population is brief, only about two weeks long. Dispersal of flying beetles through the forest is guided by two kinds of chemical signals: kairomones, produced by trees, and pheromones, produced by beetles already in the process of attacking new hosts. (See Logan et al. (1998); White and Powell (1998) for models of these dynamics.)
\textit{Date:} June 30, 2006.
\textsuperscript{1} Department of Mathematical and Statistical Sciences, University of Alberta, \textsuperscript{2} Department of Biological Sciences, University of Alberta.
Figure 1. Empirical thresholds for attack success as a function of host vigor. Each symbol is an individual tree. Solid circles indicate trees that were killed by beetles; open circles indicate trees that resisted attack and are alive; grey circles indicate trees that suffered a strip attack. The grey lines are empirically estimated thresholds for host mortality. a) Data for mountain pine beetles attacking lodgepole pine, redrawn from Waring and Pitman (1985). The threshold is based on the maximum likelihood fit of equation (2) with $\beta(\nu) = \beta_0 \exp(\beta_1 \nu)$ (see Appendix B for details). b) Data for spruce bark beetles attacking Norway spruce, from Mulock and Christiansen (1986). The axes presented here are converted from the original axes to be comparable with panel (a) (see Appendix A for details). The threshold is redrawn from the original work.
Given that attack success depends crucially upon tree vigor (Figure 1), and that beetles can sense tree vigor through kairomones, it is natural to ask whether beetles preferentially attack less vigorous trees over more vigorous ones. There is some evidence of this so-called primary attraction in mountain pine beetle attacks (Moeck and Simmons, 1991), although it is important to note that this is only part of the chemical ecology of mass attack.
Once beetles settle on a potential host, they attempt to bore through the outer bark into the phloem tissue. Healthy trees can resist attacks by producing resin to slow down or stop beetles from constructing tunnels (galleries) to lay eggs. If sufficient beetles attack a particular host, then resin defenses can be overwhelmed, and beetles successfully construct egg galleries in the phloem tissue. The density of beetles required to overcome tree defenses can be measured empirically (Figure 1). When the attack is successful, the eggs develop into larvae, which create feeding galleries that girdle and often kill the host tree. Beetle populations often over-winter as late instar larvae, and resume development in the spring. Pupation occurs in early spring and adults emerge in late summer.
1.2. Models for Beetle Dynamics. Models for mountain pine beetle population dynamics differ in their level of ecological detail—ranging from strategic models
that phenomenologically collapse the ecological interactions into a single replacement function for the population (Berryman et al., 1984), to complex spatial models that explicitly describe the processes of dispersal, aggregation and attack (Powell et al., 1996). While more complex models are arguably more realistic, it is often difficult to study them and disentangle the contributions of individual ecological features (White and Powell, 1997). Strategic models, on the other hand, are well suited to studying the general dynamics of a system, but often at the expense of realism.
Here we develop and analyze a strategic population model of the mountain pine beetle to study how host selection and mass attack influence the dynamics of beetle outbreaks. Our work is based on the models of Berryman (1979); Berryman et al. (1984), but provides greater realism by explicitly incorporating how the attack process depends on host vigor.
Early strategic models of mountain pine beetle populations were based on productivity curves for the processes of attack and reproduction at the spatial scale of a forest stand (Berryman, 1979). Productivity curves describe the density of beetles emerging from a stand in a current year, based on the density of attacking beetles in the previous year. While used to depict dynamics at the scale of a forest stand, they are based on observations of per-capita beetle fecundity on individual trees. For aggressive bark beetles, beetle fecundity curves are unimodal, reflecting
the balance between overwhelming host defenses and reducing reproduction due to overcompensatory competition. For example, if the number of beetles attacking a host is too low, then attacks fail because of host defenses (Figure 1), but if the number of attacks is too high, then reproduction is reduced because of intraspecific competition. The density of attacking beetles giving rise to the maximum per capita fecundity is the minimum density required to kill the host (Raffa and Berryman, 1983), which implies that hosts with different levels of vigor have different fecundity curves. Thus, the productivity curve for the entire stand, which is the sum of the fecundity curves over all trees, depends on the distribution of vigor in the stand.
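A sketch of this last claim, in the notation developed below in Section 2: if the per-attack productivity has the form $p(b, \nu) = \chi(b, \nu) A e^{-\gamma b}$ and the mortality transition $\chi$ approaches a step at the threshold $b = \beta(\nu)$, then

$$p(b, \nu) \to \begin{cases} 0, & b < \beta(\nu), \\ A e^{-\gamma b}, & b \geq \beta(\nu), \end{cases}$$

and since $A e^{-\gamma b}$ is strictly decreasing in $b$, per-attack productivity is maximized at exactly $b = \beta(\nu)$, the minimum attack density that kills a host of vigor $\nu$.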
Because productivity curve models describe the density of beetles emerging from a stand in a current year based on the density of attacking beetles in the previous year, they offer a straightforward way to predict beetle dynamics. However, they implicitly assume the vigor structure of a stand remains constant through time (Berryman, 1979). Such an assumption is unrealistic for mountain pine beetles because outbreak dynamics alter the forest at a much faster rate than the forest regenerates. A changing vigor structure in a forest implies a dynamic productivity curve. In this paper we analyze the changing vigor in the forest stand, developing a population model structured by host vigor. We extend the model to include beetle aggregation on the tree hosts.
1.3. A Structured-Population Model for Mass Attack. As demonstrated in Figure 1, host tree vigor plays a central role in the success or failure of beetle mass attack. Indeed, the beetle density threshold for successful mass attack depends explicitly upon host tree vigor. However, at the level of a forest stand, beetles are likely to encounter variable host tree vigor, as it varies from tree-to-tree. Because the time scale for beetle attack is much faster than the forest regeneration time scale, beetle attack can modify the year-to-year vigor structure in a stand as the attack progresses. If low vigor trees in a stand are attacked early, then the remaining trees may be better able to withstand subsequent attack. In this way, changes in the vigor structure of a stand will affect the year-to-year dynamics of attacking beetle success.
To date, mathematical models for pine beetle attack have generally excluded the vigor structure of the host tree population (but see Raffa and Berryman (1986) and Powell et al. (1996)). However, structured-population models have an established history of revealing key details of the processes governing population dynamics (Caswell et al., 1997). A particularly useful outcome of structured-population models is a simplified projection of overall population levels from one year to the next (Gurney and Nisbet, 1998).
It is the purpose of this paper to investigate the interplay between attack success and primary attraction of beetles to trees, when both of these are affected by host tree vigor. Specifically, we will investigate the conditions on beetle density and host tree vigor required for a successful mass attack. We will use a vigor- and bark area-structured model for attack. The model components will be based on a mixture of empirical evidence for attack success and theoretically derived relationships for the primary attraction to host trees. As explained in the next section, we will refer to this structured-population model as a ‘structured-threshold model’, as it couples curves of the sort shown in Figure 1 to beetle production in the subsequent generation.
| Variable | Definition and (units) |
|----------|------------------------|
| $t$ | time index in years (yr) |
| $a$ | bark area of a tree (m$^2$) |
| $\nu$ | vigor index of a tree (g m$^{-2}$ yr$^{-1}$) |
| $f_t(a, \nu)$ | distribution of trees, with respect to bark area and vigor of tree (number per unit area per unit vigor, yr g$^{-1}$) |
| $F_t(a, \nu)$ | distribution of bark area from all trees with respect to bark area and vigor of tree (per unit vigor, g$^{-1}$ m$^2$ yr) |
| $\chi(b_t, \nu)$ | proportion of hosts killed for a given level of host vigor and attacking beetle density (dimensionless) |
| $b_t(a, \nu)$ | density of attacking beetles (number per unit bark area, m$^{-2}$) |
| $p(b_t, \nu)$ | number emerging beetles per attack (dimensionless) |
| $e_t(a, \nu)$ | number of emerging beetles from a host tree (dimensionless) |
| $P_t(b_t, a, \nu)$ | total number of emerging beetles in the stand (dimensionless) |
| $k(a, \nu)$ | preference kernel for trees of given area and vigor (per unit vigor per unit area, yr g$^{-1}$) |
| $\tilde{k}(\nu)$ | preference kernel for trees of given vigor (per unit vigor, g$^{-1}$ m$^2$ yr) |
| $\phi(\nu)$ | bark area per unit vigor (area per unit vigor, g$^{-1}$ m$^4$ yr) |
Table 1. Symbols and variables used in the mountain pine beetle production model and its simplification.
The model output will include a simplified projection of total beetle population levels from one year to the next as the beetles modify the forest stand structure, destroying trees that remain susceptible to attack at that year’s beetle density. We will analyze the qualitative dynamics of the simplified projection model, showing how threshold behavior and bistable dynamics in the beetle population levels arise naturally from the underlying structured-population model.
2. Structured-threshold model
We develop a general model for host tree and beetle density, structured according to host tree vigor and host tree bark area. We then simplify the model under the assumption that beetles distribute themselves only according to host vigor and not according to host tree bark area. The model variables, their definitions and their units are given in Table 1.
2.1. Model Development. In our model we refer to a host as a single tree, and a stand is a group of trees. From the perspective of the mountain pine beetle, host vigor is described by a single index $\nu$ that characterizes the capacity for resinous defenses. As shown in Figure 1, an appropriate measure is wood produced per unit leaf area in the host tree. To remain consistent with the empirical evidence, density here refers to the number of beetles per unit area of bark $a$.
The density of attacking beetles on a particular host $i$ is represented by $b(a_i, \nu_i)$, which may vary with both the vigor and size (bark area) of a tree. The number of emerging beetles per attack from host $i$ is given by the function $p(b(a_i, \nu_i), \nu_i)$, which depends on both attack density and host vigor. This function includes the probability of host mortality, as well as the effects of intraspecific competition. In
the early literature, it is referred to as the 'productivity curve'. Empirical evidence suggests that the productivity curve is a threshold function of $b(a, \nu)$ at any particular value of $\nu$. For example, we could use
$$p(b(a, \nu), \nu) = \chi(b(a, \nu), \nu) A e^{-\gamma b(a, \nu)} \quad (1)$$
where
$$\chi(b(a, \nu), \nu) = \left( 1 + e^{-\alpha(b(a, \nu) - \beta(\nu))} \right)^{-1}. \quad (2)$$
The function $\chi(b(a, \nu), \nu)$ represents the proportion of hosts killed for a given level of host vigor and attacking beetle density, and $\beta(\nu)$ defines the line where $\chi(b(a, \nu), \nu) = 0.5$ (Figure 3). The maximum likelihood fit of $\beta(\nu) = \beta_0 \exp(\beta_1 \nu)$ to attack data is shown in Figure 1. This curve gives a slightly better fit than the original linear function of Waring and Pitman (1985).
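The maximum likelihood fit mentioned above can be sketched numerically. The fragment below is an illustrative sketch only: rather than the Waring and Pitman tree data, it uses synthetic target probabilities generated from assumed "true" parameter values (our own choices, not the fitted values of Appendix B), recovering $\beta_0$ and $\beta_1$ by a coarse grid search with the steepness $\alpha$ held fixed.

```python
import numpy as np

# Sketch of a maximum-likelihood threshold fit for equation (2).
# Synthetic data and all parameter values are illustrative assumptions.
ALPHA = 0.15                    # threshold steepness, held fixed (assumed)
B0_TRUE, B1_TRUE = 20.0, 0.02   # assumed "true" parameters of beta(nu)

def chi(b, nu, b0, b1):
    """Equation (2) with beta(nu) = b0 * exp(b1 * nu)."""
    return 1.0 / (1.0 + np.exp(-ALPHA * (b - b0 * np.exp(b1 * nu))))

# Synthetic observations: attack density and vigor for each "tree"
b_obs, nu_obs = np.meshgrid(np.linspace(5, 200, 30), np.linspace(10, 140, 30))
p_true = chi(b_obs, nu_obs, B0_TRUE, B1_TRUE)

def neg_log_lik(b0, b1):
    """Expected Bernoulli negative log-likelihood (cross-entropy)."""
    p = np.clip(chi(b_obs, nu_obs, b0, b1), 1e-12, 1 - 1e-12)
    return -(p_true * np.log(p) + (1 - p_true) * np.log(1 - p)).sum()

# Coarse grid search; the true values lie on the grid by construction,
# so the cross-entropy is minimized exactly at (B0_TRUE, B1_TRUE).
b0_grid = np.linspace(10.0, 30.0, 21)
b1_grid = np.linspace(0.0, 0.04, 21)
loss = np.array([[neg_log_lik(b0, b1) for b1 in b1_grid] for b0 in b0_grid])
i, j = np.unravel_index(loss.argmin(), loss.shape)
b0_hat, b1_hat = b0_grid[i], b1_grid[j]
```

A real fit would maximize the Bernoulli likelihood of the observed kill/survive outcomes with a proper optimizer; the grid search here only illustrates the structure of the objective.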
The parameter $\alpha$ and the function $\beta(\nu)$ describe the expected probability of host mortality $\chi$. Empirical evidence suggests that the transition from low to high probability of mortality is rapid, and that the function $\beta(\nu)$ is a monotonically increasing function of host vigor (Figure 1).
The parameters $\gamma$ and $A$ reflect the effects of intraspecific competition in the beetles. Researchers have found that the influence of competition begins at attack densities greater than the attack threshold, and that the strength of competition is independent of host vigor (e.g., Raffa and Berryman (1983)). This suggests that the processes of host defense and intraspecific competition can be modeled independently as shown in (1) (Berryman et al., 1984). Equation (1) gives a non-monotonic growth function, with the possibility of overcompensation in the beetle dynamics. In this paper we will estimate $\gamma$ and $A$ from the data in Raffa and Berryman (1983) (Table 2, Appendix B).
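The shape of the productivity curve (1)-(2) can be sketched directly. All numerical parameter values below are illustrative assumptions, not the estimates fitted in Appendix B.

```python
import numpy as np

# Sketch of the productivity curve, equations (1)-(2). All parameter
# values are illustrative assumptions, not fitted values.
ALPHA = 0.15                 # steepness of the mortality threshold (assumed)
BETA0, BETA1 = 20.0, 0.02    # beta(nu) = BETA0 * exp(BETA1 * nu) (assumed)
A_MAX, GAMMA = 60.0, 0.014   # competition parameters A and gamma (assumed)

def beta(nu):
    """Attack density at which half of the hosts of vigor nu are killed."""
    return BETA0 * np.exp(BETA1 * nu)

def chi(b, nu):
    """Equation (2): proportion of hosts killed, logistic in attack density."""
    return 1.0 / (1.0 + np.exp(-ALPHA * (b - beta(nu))))

def p(b, nu):
    """Equation (1): emerging beetles per attack, threshold times competition."""
    return chi(b, nu) * A_MAX * np.exp(-GAMMA * b)

# The curve is unimodal in b: near zero below the threshold beta(nu),
# peaking just above it, then decaying as competition dominates.
b = np.linspace(0.0, 400.0, 801)
curve = p(b, 50.0)
```

For example, at $\nu = 50$ the curve rises sharply near $b = \beta(50)$ and then decays, reproducing the non-monotonic growth function described in the text.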
The total number of beetles that emerge from a given host tree with area $a$ and vigor $\nu$ is
$$e(a, \nu) = \underbrace{p(b(a, \nu), \nu)}_{\text{beetles per attack}} \cdot \underbrace{a}_{\text{bark area}} \cdot \underbrace{b(a, \nu)}_{\text{attackers per unit area bark}}. \quad (3)$$
We define the distribution of host structure as $f(a, \nu)$. The total number of trees in the stand is
$$N = \int_0^\infty \int_0^\infty f(a, \nu) \, da \, d\nu \quad (4)$$
and $f(a, \nu)/N$ is a probability density function for the joint distribution of tree area and vigor.
The total number of emerging beetles over the entire stand in any given year $t$ is $e(a, \nu)$ weighted by $f(a, \nu)$
$$P_t = \int_0^\infty \int_0^\infty \underbrace{f_t(a, \nu)}_{\text{host type distribution}} \, a \, p(b_t(a, \nu), \nu) \, b_t(a, \nu) \, da \, d\nu \quad (5)$$
$$= \int_0^\infty \int_0^\infty \underbrace{F_t(a, \nu)}_{\text{bark area density}} \, p(b_t(a, \nu), \nu) \, b_t(a, \nu) \, da \, d\nu \quad (6)$$
where $F_t(a, \nu) = af_t(a, \nu)$ is the distribution of total bark area with respect to the bark area of a tree and vigor of a tree.
If we assume that the forest dynamics of growth, competition and non-beetle caused mortality are very slow relative to the change caused by beetle attacks, then the distribution of potential hosts is given by
$$f_{t+1}(a, \nu) = f_t(a, \nu)(1 - \chi(b_t(a, \nu), \nu))$$ \hspace{1cm} (7)
In effect, we are modeling an epidemic moving through an otherwise static forest. Multiplying equation (7) by bark area $a$ yields
$$F_{t+1}(a, \nu) = F_t(a, \nu)(1 - \chi(b_t(a, \nu), \nu)).$$ \hspace{1cm} (8)
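A minimal numerical sketch of the depletion update (7): a vigor-structured stand attacked at a uniform beetle density. The initial distribution and all parameter values below are our own illustrative assumptions.

```python
import numpy as np

# Sketch of the host-depletion update, equation (7), on a vigor grid.
# Parameter values and the initial stand distribution are assumptions.
ALPHA, BETA0, BETA1 = 0.15, 20.0, 0.02

def chi(b, nu):
    """Equation (2): proportion of hosts killed at attack density b."""
    return 1.0 / (1.0 + np.exp(-ALPHA * (b - BETA0 * np.exp(BETA1 * nu))))

nu = np.linspace(0.0, 150.0, 301)           # vigor grid (g m^-2 yr^-1)
f = np.exp(-((nu - 60.0) / 25.0) ** 2)      # assumed initial tree distribution
b_attack = 60.0                             # uniform attack density (m^-2)

f_next = f * (1.0 - chi(b_attack, nu))      # equation (7)

# Low-vigor trees are killed preferentially, so mean stand vigor rises
# from one year to the next (uniform grid, so plain sums suffice).
mean_before = (nu * f).sum() / f.sum()
mean_after = (nu * f_next).sum() / f_next.sum()
```

This illustrates the mechanism discussed in Section 1.3: attack shifts the surviving stand toward higher vigor, so the effective productivity curve changes from year to year.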
Attack densities and distribution of host structure will change from year to year. The balance equation for attack densities is the number of emerging beetles redistributed over the new hosts in the stand.
$$b_{t+1}(a, \nu) = K_{t+1}(a, \nu)P_t$$ \hspace{1cm} (9)
$$= K_{t+1}(a, \nu) \int_0^\infty \int_0^\infty F_t(a, \nu) p(b_t(a, \nu), \nu) b_t(a, \nu) da d\nu$$ \hspace{1cm} (10)
where $K_{t+1}(a, \nu)$ is a function that redistributes the attacking beetles onto the available bark area that is left after infestation. The hosts will have already suffered damage from the beetle infestation at time $t$, and so the available bark area is $F_{t+1}(a, \nu)$.
It is assumed that a proportion $S$ of the beetles survives the redistribution process. Hence, the function $K_{t+1}(a, \nu)$ must conserve the number of attacking beetles, so that the density of attacking beetles, weighted by the total available bark area, equals the total number of surviving emerged beetles
\begin{equation}
\int_0^\infty \int_0^\infty b_{t+1}(a, \nu) F_{t+1}(a, \nu) \, da \, d\nu = SP_t.
\end{equation}
Multiplying both sides of (9) by $F_{t+1}(a, \nu)$, integrating, and applying equation (11) yields
\begin{equation}
\int_0^\infty \int_0^\infty K_{t+1}(a, \nu) F_{t+1}(a, \nu) \, da \, d\nu = S.
\end{equation}
If the beetles have no preference regarding tree bark area or vigor then the choice
\begin{equation}
K_{t+1}(a, \nu) = \frac{S}{\int_0^\infty \int_0^\infty F_{t+1}(a, \nu) \, da \, d\nu}
\end{equation}
will distribute the beetles evenly with respect to these factors. In this case $K_{t+1}$ is a constant in equation (10), and hence the density of beetles per unit area bark $b_{t+1}$ is a constant. Note that the actual distribution of beetles, given by $b_{t+1}F_{t+1}(a, \nu)$, is nonconstant, and is equal to zero for those bark area and vigor values where there are no trees ($f_{t+1}(a, \nu) = 0$).
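As a concrete discrete sketch of the uniform redistribution rule (13) (our construction; the grid and parameter values are illustrative), choosing $K = S/\sum F$ makes the redistributed attack density constant while conserving the surviving beetle count:

```python
# Discrete analogue of the uniform redistribution rule (13).
# F approximates the bark-area density F_{t+1} over a few (a, nu) grid cells;
# the grid values, S and P are illustrative, not taken from the paper.
F = [0.0, 2.5, 1.0, 4.5]   # available bark area per cell (zero where no trees)
S, P = 0.6, 120.0          # dispersal survival and emerging beetle count

K = S / sum(F)             # uniform redistribution constant, equation (13)
b = K * P                  # constant attack density, equation (9)

# The actual beetle distribution b*F is zero where F is zero, and the
# redistributed beetles summed over all bark area equal S*P (equation 11):
total = sum(b * F_cell for F_cell in F)
print(b, total, S * P)
```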
Alternatively, we can define a preference function $k(a, \nu)$, with preference function values larger than one indicating more preferred tree bark area and vigor, and preference function values less than one indicating less preferred tree bark area and vigor. The modification of equation (13) that includes the preference function is
\begin{equation}
K_{t+1}(a, \nu) = \frac{Sk(a, \nu)}{\int_0^\infty \int_0^\infty k(a, \nu) F_{t+1}(a, \nu) \, da \, d\nu}.
\end{equation}
Here the preference function $k(a, \nu)$ appears both in the numerator and under the integral sign in the normalization constant in the denominator. Combining equations (10) and (14) yields
\begin{equation}
$$b_{t+1}(a, \nu) = k(a, \nu) \frac{\int_0^\infty \int_0^\infty F_t(a, \nu) \chi(b_t(a, \nu), \nu) b_t(a, \nu) SAe^{-\gamma b_t(a, \nu)} \, da \, d\nu}{\int_0^\infty \int_0^\infty k(a, \nu) F_t(a, \nu)(1 - \chi(b_t(a, \nu), \nu)) \, da \, d\nu}.$$
\end{equation}
The model system of equations, (1)–(2), (8) and (15), is a nonlinear integrodifference system. The unknown functions to be solved for are the density of attacking beetles $b_t(a, \nu)$ and the distribution of total bark area not yet destroyed by infestation $F_t(a, \nu)$.
In reality, we expect that beetle aggregation as well as tree preference will play a role in the distribution of beetles. We defer incorporation of this beetle aggregation to Section 4.
2.2. Model Simplification. Although beetles can respond to size cues for trees (Cole and McGregor, 1983), we focus on the minimal conditions under which outbreak can occur and assume that attacking beetles distribute themselves only according to host vigor, independently of tree size, and are able to attack the lower vigor trees preferentially. Although this is undoubtedly an oversimplification of the dynamics of tree preference by beetles, it can be considered a "best-case" establishment scenario, in which, by choosing low-vigor trees, beetles can establish most easily. The above system can then be simplified. Define $\phi_t(\nu)$ as the distribution of bark area of live hosts as a function of host vigor, so
$$\phi_t(\nu) = \int_0^\infty F_t(a, \nu) \, da$$
The total bark area of live hosts per hectare in the stand is given by
$$M_t = \int_0^\infty \phi_t(\nu) \, d\nu$$
If we assume that emerging beetles only redistribute themselves by host vigor, then the redistribution function $K_{t+1}$ depends only upon host vigor $\nu$. Substitution into equation (15) shows that $b_{t+1}$ depends only upon host vigor $\nu$. Therefore equation (15) becomes
$$b_{t+1}(\nu) = \tilde{k}(\nu) \frac{\int_0^\infty \phi_t(\nu) \chi(b_t(\nu), \nu) b_t(\nu) SA e^{-\gamma b_t(\nu)} \, d\nu}{\int_0^\infty \tilde{k}(\nu) \phi_t(\nu) \left( 1 - \chi(b_t(\nu), \nu) \right) \, d\nu}. \quad (18)$$
Integration of (8) yields
$$\phi_{t+1}(\nu) = \phi_t(\nu) \left( 1 - \chi(b_t(\nu), \nu) \right). \quad (19)$$
We assume that the maximum vigor level possible is $\nu_m$, and that the average level of bark area initially available is $\phi_0$. The threshold location function $\beta(\nu)$ is assumed to have the form $\beta(\nu) = \beta_0 \exp(\beta_1 \nu)$. While it would be possible to use the original Waring and Pitman (1985) straight-line function $\beta(\nu) = \beta_0 + \beta_1 \nu$, this function does not fit the data as well (Figure 1), and is no simpler in the subsequent analysis. The initial number of beetles is given by
$$N_0 = \int_0^{\nu_m} \phi_0(\nu) b_0(\nu) \, d\nu,$$
2.3. Nondimensionalization. To facilitate analysis we rescale the variables to be dimensionless
$$\nu^* = \frac{\nu}{\nu_m}, \quad b^* = \frac{b}{\beta_0}, \quad \phi^* = \frac{\phi}{\phi_0}. \quad (21)$$
| Parameter | Estimate (dimensionless) | Source |
|---------------------------|--------------------------|---------------------------------------------|
| $\alpha$ threshold steepness | 0.21 m$^2$ (4.69) | Waring and Pitman (1985) |
| $A$ beetle growth | 32.9 (32.9) | Raffa and Berryman (1983) |
| $S$ dispersal survival | variable | - |
| $\beta_0$ attack threshold | 22.34 m$^{-2}$ (1) | Waring and Pitman (1985) |
| $\beta_1$ nonlinear threshold | 0.0178 g$^{-1}$ m$^2$ yr (2.1) | Waring and Pitman (1985) |
| $\gamma$ density dependence | 0.0166 m$^2$ (0.371) | Raffa and Berryman (1983) |
| $\nu_m$, maximum vigor | 116.4 g m$^{-2}$ yr$^{-1}$ (1) | Waring and Pitman (1985) |
| $A_T$ average bark area | 12.6 m$^2$ (-) | He (2006) |
Table 2. Parameters used in the mountain pine beetle model. Details of calculation of the parameters are given in Appendix B.
Corresponding dimensionless parameters are given by
$$\gamma^* = \gamma \beta_0, \quad \alpha^* = \alpha \beta_0, \quad \beta_1^* = \beta_1 \nu_m, \quad N_0^* = \frac{N_0}{\nu_m \beta_0 \phi_0}, \quad A^* = SA \quad (22)$$
and functions by
$$\tilde{k}^*(\nu^*) = \nu_m \tilde{k}(\nu^* \nu_m), \quad \chi^*(b^*(\nu^*), \nu^*) = \left(1 + e^{-\alpha^*(b^* - \exp(\beta_1^* \nu^*))}\right)^{-1}. \quad (23)$$
Dropping asterisks for notational simplicity, we rewrite equation (18) in its dimensionless form
$$b_{t+1}(\nu) = \tilde{k}(\nu) \left(\frac{\int_0^1 \phi_t(\nu) \chi(b_t(\nu), \nu) b_t(\nu) Ae^{-\gamma b_t(\nu)} d\nu}{\int_0^1 \tilde{k}(\nu) \phi_t(\nu) (1 - \chi(b_t(\nu), \nu)) d\nu}\right), \quad (24)$$
with $0 < \nu < 1$ and the average initial bark density equal to one ($\phi_0 = 1$). Equation (19) remains unchanged.
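As a quick numerical check of the scalings (a sketch of ours, using the dimensional estimates from Table 2), the dimensionless groups reproduce the parenthesized table entries:

```python
# Dimensional estimates from Table 2 (Waring and Pitman 1985;
# Raffa and Berryman 1983); units as listed in the table.
alpha, beta0, beta1 = 0.21, 22.34, 0.0178   # m^2, m^-2, g^-1 m^2 yr
gamma, nu_m = 0.0166, 116.4                 # m^2, g m^-2 yr^-1

# Dimensionless groups of equation (22)
alpha_star = alpha * beta0   # threshold steepness, approx. 4.69
beta1_star = beta1 * nu_m    # threshold slope, approx. 2.1
gamma_star = gamma * beta0   # density dependence, approx. 0.371
print(alpha_star, beta1_star, gamma_star)
```

Note $A^* = SA$ cannot be tabulated this way because the dispersal survival $S$ is left variable.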
3. A Population Projection Model
One mathematical approach for reducing a structured-population model to a simpler, non-structured population model is the method of population projection (Powell et al., 1996). This entails assuming a particular form for the structured variable (in this case tree host vigor), which then allows for simplification. In our case, we restrict ourselves to the case where the distribution of vigor of the trees is initially uniform (i.e., each possible vigor level $0 < \nu < 1$ is equally likely, so that $\phi_0 = 1$). By way of example, we consider the case of an exponential preference function, with attraction to the lower vigor trees $\tilde{k}(\nu) = c \exp(-c\nu)$.
3.1. A recurrence relation for beetle density at low host vigor. In this section we derive a recurrence relation for beetle density at low host vigor $\tilde{b}_t$. We assume that the beetles are initially distributed according to the preference function $\tilde{k}(\nu)$ so that
$$b_0(\nu) = \tilde{b}_0 \exp(-c\nu), \quad (25)$$
where
\begin{equation}
\tilde{b}_0 = N_0 \frac{c}{1 - \exp(-c)}
\end{equation}
is chosen to ensure that \( \int_0^1 b_0(\nu) \, d\nu = N_0 \). Here \( \tilde{b}_0 \) can be interpreted as the beetle density for low host vigor. To simplify the analysis we consider the case where \( \alpha \to \infty \) so that \( \chi \to H(b(\nu) - \exp(\beta_1 \nu)) \) where \( H \) is the Heaviside step function. In other words, for a given vigor level \( \nu \), the beetles are unsuccessful (\( \chi(b(\nu), \nu) = 0 \)) if the beetle density is subthreshold (\( b(\nu) < \exp(\beta_1 \nu) \)), and are successful (\( \chi(b(\nu), \nu) = 1 \)) if the beetle density is superthreshold (\( b(\nu) > \exp(\beta_1 \nu) \)). Beetle success results in reproduction, and also in the killing of the host.
Under these assumptions, the condition for the beetle population to initially reproduce is \( b_0(\nu) - \exp(\beta_1 \nu) > 0 \) for some value of \( \nu \). The value of \( \nu \) which gives the largest left hand side is \( \nu \to 0 \), and hence the condition for initial beetle reproduction is \( \tilde{b}_0 > 1 \). This is satisfied if either enough beetles are introduced (\( N_0 \) sufficiently large), or the preference for low vigor trees is pronounced (\( c \) sufficiently large).
If the initial number of beetles introduced \( N_0 \) is very large, then the beetle population may successfully reproduce for all vigor levels. This occurs when \( b_0(\nu) - \exp(\beta_1 \nu) > 0 \) for all \( \nu \). The value of \( \nu \) that gives the smallest left hand side is \( \nu = 1 \). Hence, the condition for successful reproduction for all vigor levels becomes \( b_0(1) > \exp(\beta_1) \), or equivalently, \( \tilde{b}_0 > \exp(c + \beta_1) \). When this condition is satisfied, the entire stand is infested with beetles and destroyed in a single time step.
At intermediate values of \( N_0 \) the low vigor trees will host beetles, and the high vigor trees will not (Figure 4). In this case, there is a threshold tree vigor level \( \nu_0 \), below which the beetle reproduces (and also kills the trees), and above which the beetle does not reproduce and the trees are left unscathed. The threshold vigor value \( \nu_0 \) is calculated as the value of \( \nu \) satisfying \( b_0(\nu) = \exp(\beta_1 \nu) \). Using equation (25) this is rewritten as \( \tilde{b}_0 \exp(-c\nu) = \exp(\beta_1 \nu) \). This can be solved explicitly
\begin{equation}
\nu_0 = \frac{1}{c + \beta_1} \log(\tilde{b}_0).
\end{equation}
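For concreteness, a short numerical check (our sketch; the values of $c$ and $\tilde{b}_0$ are assumed for illustration, $\beta_1$ is the dimensionless Table 2 value) that the closed-form threshold vigor sits exactly where the attack density meets the attack threshold:

```python
import math

beta1 = 2.1        # dimensionless threshold slope (Table 2)
c = 2.0            # assumed preference strength
b0_tilde = 5.0     # assumed initial low-vigor beetle density (must exceed 1)

# Closed-form threshold vigor, as derived above
nu0 = math.log(b0_tilde) / (c + beta1)

# At nu0 the attack density b0(nu0) equals the threshold exp(beta1*nu0)
attack = b0_tilde * math.exp(-c * nu0)
threshold = math.exp(beta1 * nu0)
print(nu0, attack, threshold)
```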
Mathematically, the threshold function \( \chi = 1 \) for \( 0 < \nu < \nu_0 \) and \( \chi = 0 \) for \( \nu_0 < \nu < 1 \). Using this idea we can rewrite (24) for \( t = 0 \) as
\begin{equation}
b_1(\nu) = \tilde{b}_1 \exp(-c\nu),
\end{equation}
where
\begin{equation}
\tilde{b}_1 = \frac{\int_0^{\nu_0} \tilde{b}_0 c \exp(-c\nu) A e^{-\gamma \tilde{b}_0 \exp(-c\nu)} \, d\nu}{\int_{\nu_0}^{1} c \exp(-c\nu) \, d\nu}.
\end{equation}
Integrating (28) with respect to \( 0 < \nu < 1 \) yields the number of beetles in the stand after one time step as \( N_1 = \tilde{b}_1 (1 - \exp(-c))/c \).
If we assume that the beetle population has grown and spread through increasing vigor classes over successive generations, then \( \phi_{t+1}(\nu) \) is zero for \( 0 < \nu < \nu_{t+1} \) and
is one for $\nu_{t+1} < \nu < 1$. Here $\nu_{t+1}$ satisfies $\tilde{b}_{t+1} \exp(-c\nu) = \exp(\beta_1 \nu)$, and hence
$$\nu_{t+1} = \frac{1}{c + \beta_1} \log(\tilde{b}_{t+1}), \quad (30)$$
where
$$b_{t+1}(\nu) = \tilde{b}_{t+1} \exp(-c\nu), \quad (31)$$
and
$$\tilde{b}_{t+1} = \frac{\int_{\nu_{t-1}}^{\nu_t} \tilde{b}_t c \exp(-c\nu) A e^{-\gamma \tilde{b}_t \exp(-c\nu)} \, d\nu}{\int_{\nu_t}^1 c \exp(-c\nu) \, d\nu}. \quad (32)$$
This expression yields a positive $\tilde{b}_{t+1}$ providing $\nu_t > \nu_{t-1}$. Thus equations (30) and (32) are valid so long as $\nu_t$ is an increasing sequence with values below 1. This
generates a corresponding, increasing sequence of $\tilde{b}_t$ values. When $\nu_t \leq \nu_{t-1}$ the population dies. When $\nu_t \geq 1$ the entire stand has been invaded.
Equation (32) also satisfies equation (29) for the case $t = 0$. If the substitution $\tilde{b}_{-1} = 1$ is made, then equation (30) yields $\nu_{-1} = (c + \beta_1)^{-1} \log(\tilde{b}_{-1}) = 0$, indicating that prior to the infestation outbreak there are no infected trees. As above, $\tilde{b}_t$ can be translated into the total number of beetles in the stand $N_t$ by integrating (31) from $\nu_t$ to 1 to yield
$$N_t = \frac{\tilde{b}_t \left( \tilde{b}_t^{-c/(c+\beta_1)} - e^{-c} \right)}{c}. \quad (33)$$
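Equation (33) can be checked against direct quadrature of the defining integral $\int_{\nu_t}^1 \tilde{b}_t e^{-c\nu}\, d\nu$; the sketch below is ours, with illustrative parameter values:

```python
import math

c, beta1, b_t = 2.0, 2.1, 5.0        # illustrative values
nu_t = math.log(b_t) / (c + beta1)   # equation (30)

# Closed form, equation (33)
N_closed = b_t * (b_t ** (-c / (c + beta1)) - math.exp(-c)) / c

# Midpoint-rule quadrature of the defining integral over (nu_t, 1)
n = 100_000
h = (1.0 - nu_t) / n
N_quad = sum(b_t * math.exp(-c * (nu_t + (i + 0.5) * h)) * h for i in range(n))
print(N_closed, N_quad)
```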
Perhaps surprisingly, equation (32) can be integrated exactly. However, the cases with and without density-dependence ($\gamma$ greater than zero and equal to zero) must be treated separately. In the next section we analyze the density-independent case ($\gamma = 0$). In the following section we analyze the density-dependent case ($\gamma > 0$).
### 3.2. Linear growth model
When $\gamma = 0$, integration of equation (32) yields
$$\tilde{b}_{t+1} = A \frac{\tilde{b}_t (\exp(-c\nu_{t-1}) - \exp(-c\nu_t))}{\exp(-c\nu_t) - \exp(-c)}. \quad (34)$$
Equation (30) allows us to rewrite the right hand side in terms of $\tilde{b}_t$ and $\tilde{b}_{t-1}$
$$\tilde{b}_{t+1} = A \frac{\tilde{b}_t \left( \tilde{b}_t^{\frac{c}{c+\beta_1}} - \tilde{b}_{t-1}^{\frac{c}{c+\beta_1}} \right)}{\tilde{b}_{t-1}^{\frac{c}{c+\beta_1}} \left( 1 - \tilde{b}_t^{\frac{c}{c+\beta_1}} \exp(-c) \right)}. \quad (35)$$
Equation (35) is a discrete-time dynamical system that describes the progression of disease through a stand structured according to vigor. Starting with the initial condition $\tilde{b}_{-1} = 1$ and $\tilde{b}_0$ given by equation (26), we can evaluate $\tilde{b}_{t+1}$ for successive time steps, calculate the corresponding spread through the vigor classes from equation (30) and the corresponding total beetle numbers from (33).
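The iteration just described can be sketched as a short program (our construction, iterating equations (30) and (34) directly in $\nu$-space; parameter and initial values are illustrative, and the outcome labels are ours):

```python
import math

def simulate_linear(b0, A, c, beta1, tmax=100):
    """Iterate equations (30) and (34) until the beetle population either
    dies out (nu stops increasing) or consumes the stand (nu >= 1)."""
    nu_prev, b = 0.0, b0                         # nu_{-1} = 0 and b_0, as in the text
    for _ in range(tmax):
        nu = math.log(b) / (c + beta1)           # equation (30)
        if nu >= 1.0:
            return "stand killed"
        if nu <= nu_prev:
            return "extinct"
        b = A * b * (math.exp(-c * nu_prev) - math.exp(-c * nu)) / (
            math.exp(-c * nu) - math.exp(-c))    # equation (34)
        nu_prev = nu
    return "undecided"

# A large introduction with empirically based A = 33 overruns the stand,
# while a marginal introduction with weak growth collapses:
print(simulate_linear(50.0, A=33.0, c=2.0, beta1=2.1))
print(simulate_linear(1.05, A=2.0, c=2.0, beta1=0.0))
```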
What are the possible outcomes of such a calculation? If the beetle population is reproducing, it must be above threshold $(\chi(b(\nu), \nu) = 1)$ in a region where bark density $\phi(\nu)$ is nonzero. Because the beetle destroys all available trees at all vigor levels where it is above threshold, and these are not replaced, a reproducing population has $\tilde{b}_{t+1} > \tilde{b}_t$. In other words, in each time step a reproducing beetle population invades and destroys higher vigor classes. Hence, if the sequence of $\tilde{b}_t$ values from equation (35) starts to decline, then the beetle population has gone extinct. This can happen, even when the growth rate $A$ is much larger than unity, because the beetle is destroying its resource, the forest bark, as time progresses.
This excludes the possibility of an endemic population. However, an endemic population may be possible in a more complex model where trees change vigor classes as they mature. This is a subject for further research.
If the beetle population does not die out before the entire stand is consumed, the alternative is that the $\tilde{b}_t$ values grow monotonically past the critical value $\tilde{b}_t = \exp(c + \beta_1)$. At this point the entire structured stand (as described by all vigor classes of size less than or equal to the maximum value, one) is destroyed.
Figure 5. Forest damage as a function of initial beetle density and host preference predicted by the structured threshold model. The cumulative proportion of trees killed at the end of an infestation is shown by the gray-scale, where white represents zero host mortality and black represents the mortality of all trees in the stand (colorbar). The dashed black line depicts the minimum beetle density required to successfully kill the weakest hosts, and the dashed white line depicts the density above which all hosts are killed in the first year of the infestation. The solid white line in panel (a) shows the initial beetle numbers guaranteed to eventually kill the entire stand, calculated from equation (42). Axes values that are above and to the right of the solid line give rise to populations that grow in the second year. a) Dynamics with a constant attack threshold ($A = 2$, $\gamma = 0.8$, and $\beta = 0$) b) Dynamics using empirically derived parameters ($A = 33$, $\gamma = 0.37$, and $\beta = 2.1$). Initial beetle abundance can be translated back to dimensional units using equation (22).
3.3. Nonlinear growth model. Integration of (32) for the case $\gamma > 0$ yields
$$\tilde{b}_{t+1} = \frac{A}{\gamma} \frac{e^{-\gamma \tilde{b}_t \exp(-c v_t)} - e^{-\gamma \tilde{b}_t \exp(-c v_{t-1})}}{\exp(-c v_t) - \exp(-c)}.$$
(36)
Note that, even though this is undefined for the linear model ($\gamma = 0$), it converges to the solution of the linear model (34) as $\gamma \to 0$. This calculation can be facilitated by Taylor expansion of the exponential terms in the numerator to leading order in $\gamma$. Equation (30) allows us to rewrite the right hand side in terms of $\tilde{b}_t$ and $\tilde{b}_{t-1}$
$$\tilde{b}_{t+1} = \frac{A \tilde{b}_t^{\frac{c}{c+\beta_1}}}{\gamma} \frac{\exp\left(-\gamma \tilde{b}_t^{\frac{\beta_1}{c+\beta_1}}\right) - \exp\left(-\gamma \tilde{b}_t \tilde{b}_{t-1}^{-\frac{c}{c+\beta_1}}\right)}{1 - \tilde{b}_t^{\frac{c}{c+\beta_1}} \exp(-c)}.$$
(37)
As before, we have a discrete-time dynamical system with initial condition $\tilde{b}_{-1} = 1$ and $\tilde{b}_0$ given by equation (26), whose solution $\tilde{b}_{t+1}$, calculated for successive time steps, yields the extent of the spread through vigor classes from equation (30) and the corresponding total beetle numbers from (33).
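A single step of the nonlinear recursion can be sketched directly from equations (30) and (36), and compared against the linear step (34); the code below is our construction with illustrative values, and numerically confirms the $\gamma \to 0$ limit noted above:

```python
import math

def step_nonlinear(b, nu_prev, A, gamma, c, beta1):
    """One step of equations (30) and (36). Returns (b_next, nu_t)."""
    nu = math.log(b) / (c + beta1)                       # equation (30)
    b_next = (A / gamma) * (
        math.exp(-gamma * b * math.exp(-c * nu))
        - math.exp(-gamma * b * math.exp(-c * nu_prev))
    ) / (math.exp(-c * nu) - math.exp(-c))               # equation (36)
    return b_next, nu

def step_linear(b, nu_prev, A, c, beta1):
    """Corresponding gamma = 0 step, equation (34)."""
    nu = math.log(b) / (c + beta1)
    b_next = A * b * (math.exp(-c * nu_prev) - math.exp(-c * nu)) / (
        math.exp(-c * nu) - math.exp(-c))
    return b_next, nu

# With gamma tiny, the nonlinear step reproduces the linear one:
bn, _ = step_nonlinear(5.0, 0.0, A=33.0, gamma=1e-8, c=2.0, beta1=2.1)
bl, _ = step_linear(5.0, 0.0, A=33.0, c=2.0, beta1=2.1)
print(bn, bl)
```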
3.3.1. A Lower Bound For Mass Attack Shows Threshold Effects. To demonstrate a threshold effect, we consider a simplified system where the attack success threshold is constant ($\beta_1 = 0$, see Figure 4). We can rewrite equations (35) and (37) in terms of the ratio $r_t = \tilde{b}_t / \tilde{b}_{t-1}$ to give
$$r_{t+1} = A \frac{r_t - 1}{1 - \tilde{b}_t \exp(-c)} = f(r_t, \tilde{b}_t)$$
(38)
and
$$r_{t+1} = \frac{A}{\gamma} \frac{\exp(-\gamma) - \exp(-\gamma r_t)}{1 - \tilde{b}_t \exp(-c)} = g(r_t, \tilde{b}_t)$$
(39)
The initial conditions to these equations are given by $r_0 = \tilde{b}_0 / \tilde{b}_{-1} = \tilde{b}_0 > 1$ (equation (26)).
We consider approximations to the dynamical equations (38) and (39) which provide a lower bound $r_\ell$ for $r_t$. The approximation is
$$r_{\ell+1} = f(r_\ell, r_\ell) = A \frac{r_\ell - 1}{1 - r_\ell \exp(-c)}$$
(40)
for equation (38) and
$$r_{\ell+1} = g(r_\ell, r_\ell) = \frac{A}{\gamma} \frac{\exp(-\gamma) - \exp(-\gamma r_\ell)}{1 - r_\ell \exp(-c)}$$
(41)
for equation (39). To show that $r_\ell$ is a lower bound for $r_t$ observe that $f$ and $g$ are increasing functions of their second argument $\tilde{b}_t$. In turn $\tilde{b}_t = r_t r_{t-1} \ldots r_1 r_0$ is bounded below by $r_t$ because the increasing sequence $\tilde{b}_s$ implies $r_s = \tilde{b}_s / \tilde{b}_{s-1} > 1$ for $0 < s < t - 1$.
The equations can be analyzed graphically using cobwebbing (Figure 7). For each system there is a unique positive unstable steady state $r^*$ which is larger than one. For equation (40) the steady state is $r^* = (\sqrt{(A - 1)^2 + 4A \exp(-c)} - (A - 1))/(2 \exp(-c))$. For equation (41) the steady state must be found numerically. Providing $r_0 > r^*$, the cobwebbing shows $r_\ell > r^*$ for all $t$. This also holds true for $\tilde{b}_t$: if $\tilde{b}_0$ exceeds $r^*$ it will also remain larger than $r^*$ for all $t$. This is because $\tilde{b}_0 = r_0$ and $r_\ell$ is a lower bound for $r_t$. The implication is that when $\tilde{b}_0$ exceeds
Figure 6. Forest damage as a function of initial beetle density and host preference predicted by the structured threshold model. See Figure 5 for legend details. a) Dynamics with increased intraspecific competition ($A = 33$, $\gamma = 0.8$, and $\beta = 2.1$) b) Dynamics with decreased maximum fecundity ($A = 10$, $\gamma = 0.37$, and $\beta = 2.1$).
$r^*$ the beetle will eventually destroy the entire stand. Employing (26) to translate this to a constraint on $N_0$ yields a critical number of beetles required to eventually destroy the entire stand as
$$N_{0c} = r^* \frac{1 - \exp(-c)}{c}. \quad (42)$$
This is an upper bound for the number of beetles required to destroy the entire stand. As can be seen in Figure 5(a), fewer beetles may also suffice to destroy the entire stand.
When $r_0$ lies between 1 and $r^*$, the lower bound $r_\ell$ declines to a value below one. The threshold for declining $r_t$ lies below $r^*$ because $r_\ell$ is a lower bound for $r_t$. This threshold is calculated numerically in Figure 5(a) for the linear model.
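Both the fixed point $r^*$ and the critical beetle number of equation (42) are easy to check numerically. In the sketch below (our construction, with illustrative $A$ and $c$), the discriminant carries the factor $A$ so that $r^*$ is a true fixed point of the map (40):

```python
import math

A, c = 2.0, 2.0                       # illustrative values
ec = math.exp(-c)

def f(r):
    # The lower-bound map, equation (40)
    return A * (r - 1.0) / (1.0 - r * ec)

# Unstable fixed point of (40): the positive root of ec*r**2 + (A-1)*r - A = 0
r_star = (math.sqrt((A - 1.0) ** 2 + 4.0 * A * ec) - (A - 1.0)) / (2.0 * ec)

# Critical beetle number, equation (42)
N0c = r_star * (1.0 - math.exp(-c)) / c
print(r_star, f(r_star), N0c)
```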
4. A stochastic model with beetle aggregation
In this section we modify the simplified beetle density model (24) to account for nonuniform density of beetles due to aggregation. Here we simply propose the model, leaving its analysis and application to beetle outbreaks for future work.
The simplified beetle density model (24) assumes that, for a given vigor level $\nu$, the density of beetles (number per unit area of bark) at time $t$ is given precisely by $b_t(\nu)$. Whether a local outbreak is successful depends on whether $b_t(\nu)$ exceeds the outbreak threshold (Figure 1), yielding a value of $\chi$ near one. In reality, the density of beetles in a given unit of tree bark will vary, depending on whether the unit of tree bark contains a local aggregation of beetles. Even when average beetle levels are low, outbreaks of beetles can succeed locally if aggregations of beetles drive the local level of $\chi$ close to one.
One approach to modeling insect aggregation is to simulate its spatial structure explicitly using partial differential equations or related models (Powell et al., 2000). Another approach describes variation in the insect densities using the ideas of random variables: noninteracting insects are Poisson distributed on their hosts, while
aggregating insects are overdispersed (variance exceeds the mean). This second approach has been widely used in the context of host-parasitoid dynamics (Hassell, 1978) where the host is an insect larvae, and the aggregating insect is the parasitoid fly or wasp. We follow this second approach for modeling mountain pine beetle aggregation. In our case the host is in a given unit area of tree bark, and the aggregating insect is the pine beetle.
We describe density of beetles on bark with vigor $\nu$ as a random variable $B_t(\nu)$ with expected value $b_t(\nu)$. Recall that the nondimensionalization (21) calculates the units for $b_t(\nu)$ as density relative to the low-vigor threshold density $\beta_0$. In other words, a value of $B_t(\nu)$ exceeding one is sufficient to kill a tree, providing $\nu$ is very small. The random variable $B_t(\nu)$ can be translated into the number of beetles on a tree host of area $A_T$. This is through multiplication by the dimensionless scaling factor $\beta_0 A_T$ to give $B_t(\nu)\beta_0 A_T$. If beetles act independently of one another, the number of beetles on a tree of area $A_T$ should be Poisson distributed, with mean $b_t(\nu)\beta_0 A_T$, and a corresponding variance also equal to $b_t(\nu)\beta_0 A_T$. However, beetle aggregation will cause the beetles to overdisperse so that the variance exceeds the mean. This additional variation in beetle levels will translate into increased variation in outbreak success for any given value of $b_t(\nu)$. Translating back to the dimensionless beetle density, a Poisson distributed beetle population will have a mean of $b_t(\nu)$ and corresponding variance equal to $(\beta_0 A_T)^{-1}$, and an overdispersed beetle population will have variance exceeding $(\beta_0 A_T)^{-1}$.
Classical approaches to insect aggregation use a negative binomial random variable for the number of overdispersed insects on a host (Hassell, 1978). While this approach would be possible for mountain pine beetle, we adopt a more flexible approach, using a continuous random variable $B_t(\nu)$ with probability density function $g(b; b_t(\nu), \theta)$ that has mean $b_t(\nu)$ and dispersion parameter $\theta$.
Equation (24) is interpreted as the equation for the expected density of beetles, and is rewritten as
$$b_{t+1}(\nu) = \tilde{k}(\nu) \left( \frac{\int_0^1 \phi_t(\nu) \int_0^\infty g(b; b_t(\nu), \theta) \chi(b, \nu) b Ae^{-\gamma b} \, db \, d\nu}{\int_0^1 \tilde{k}(\nu) \phi_t(\nu) \left( 1 - \int_0^\infty g(b; b_t(\nu), \theta) \chi(b, \nu) \, db \right) \, d\nu} \right), \quad (43)$$
As the threshold function $\chi$ becomes steep ($\alpha \to \infty$) this becomes
$$b_{t+1}(\nu) = \tilde{k}(\nu) \left( \frac{\int_0^1 \phi_t(\nu) \int_{\exp(\beta_1 \nu)}^\infty g(b; b_t(\nu), \theta) b Ae^{-\gamma b} \, db \, d\nu}{\int_0^1 \tilde{k}(\nu) \phi_t(\nu) G(\exp(\beta_1 \nu); b_t(\nu), \theta) \, d\nu} \right), \quad (44)$$
where $G(b; b_t(\nu), \theta)$ is the cumulative density function for $b$. The corresponding equation for the distribution of bark area of live hosts as a function of host vigor (19) becomes
$$\phi_{t+1}(\nu) = \phi_t(\nu) G(\exp(\beta_1 \nu); b_t(\nu), \theta). \quad (45)$$
One possible form for $g(b; b_t(\nu), \theta)$, consistent with observations of beetle densities of zero in uninfested trees and approximately 60 m$^{-2}$ in infested trees, would be bimodal, with peaks at $b = 0$ and $b = b_m \approx 3$ (in nondimensional variables).
While this appears to be a fruitful avenue for development, we leave it for future research.
In some cases, the integrands in equation (44) can be simplified. For example, if the probability density function $g$ is chosen as the gamma distribution with mean $b_t(\nu)$ and variance $b_t^2(\nu)/\theta$
$$g(b; b_t(\nu), \theta) = \frac{b^{\theta-1} e^{-\theta b/b_t(\nu)}}{\Gamma(\theta)} \left( \frac{\theta}{b_t(\nu)} \right)^{\theta}, \quad (46)$$
with corresponding cumulative density function
$$\Pr(B_t < b) = G(b; b_t(\nu), \theta) = 1 - \frac{\Gamma(\theta, b\theta/b_t(\nu))}{\Gamma(\theta)}, \quad (47)$$
where $\Gamma(\cdot)$ is the gamma function and $\Gamma(\cdot, \cdot)$ is the incomplete gamma function, then in the limit $\theta \to \infty$, $g(b; b_t(\nu), \theta) \to \delta(b-b_t)$ and $G(b; b_t(\nu), \theta) \to H(b-b_t)$, and model (24) is regained (Figure 8). Using the cumulative density function (47), the numerator of (44) can be rewritten as
$$A \int_0^1 \phi_t(\nu) h(b_t(\nu)) \, d\nu$$
where
$$h(b_t(\nu)) = b_t(\nu) \left( \frac{\theta}{\theta + \gamma b_t(\nu)} \right)^{\theta+1} \left( 1 - G \left( e^{\beta_1 \nu}, \frac{(\theta+1)b_t(\nu)}{\theta + \gamma b_t(\nu)}, \theta + 1 \right) \right).$$
The variance of the gamma distribution is given as $b_t^2/\theta$ and hence the population is overdispersed when $b_t^2/\theta > (\beta_0 A_T)^{-1}$, or equivalently $\theta < b_t^2(\nu)\beta_0 A_T$. For a beetle population near attack threshold density $b_t(\nu) \approx 1$, and the inequality for an overdispersed population is given approximately by $\theta < \beta_0 A_T \approx 281$ (Table 2).
Figure 8 shows the gamma distribution for various $\theta$ values, and demonstrates that this distribution (solid line) also closely approximates the non-aggregating Poisson model (dots) for the appropriate value of $\theta$ ($\theta = 281$). For this value of $\theta$ the probability density function approaches a delta function and the cumulative density function approaches a step function, indicating that the behavior of this non-aggregating model will be close to that of the deterministic model (24).
The gamma distribution allows for the possibility that, even under heavy beetle attack, the most common observation (mode) would be trees with no attacks (Figure 8, long-dashed line). This arises in the limit $\theta \to 1$ as (46) becomes an exponential distribution $g$. The lower panel of Figure 8 illustrates how, when the average beetle density lies below that threshold level for attack success, higher levels of aggregation (lower $\theta$) mean that a larger fraction of attacks are successful.
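This effect of aggregation can be illustrated numerically. The sketch below (our construction, using midpoint-rule quadrature and illustrative values) computes the fraction of gamma-distributed attack densities (46) exceeding the threshold $\exp(\beta_1 \nu) \approx 1.86$ quoted in the Figure 8 caption; for $\theta = 1$ this fraction is $e^{-1.86} \approx 0.16$, and it shrinks rapidly as $\theta$ grows:

```python
import math

def frac_above(b_mean, theta, threshold, n=100_000, b_max=60.0):
    """P(B > threshold) for the gamma density (46) with mean b_mean and
    dispersion theta, computed by midpoint-rule quadrature."""
    rate = theta / b_mean
    norm = rate ** theta / math.gamma(theta)
    h = (b_max - threshold) / n
    total = 0.0
    for i in range(n):
        b = threshold + (i + 0.5) * h
        total += norm * b ** (theta - 1.0) * math.exp(-rate * b) * h
    return total

p_agg = frac_above(1.0, 1.0, 1.86)    # strong aggregation (exponential case)
p_none = frac_above(1.0, 10.0, 1.86)  # weak aggregation
print(p_agg, p_none)
```

With the mean attack density held at the subthreshold value one, the strongly aggregated population still mounts successful attacks on a substantial fraction of trees, while the weakly aggregated population almost never does.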
Even with the simplification of a gamma distribution for $g$, equations (44)–(45) must be solved numerically using quadrature. The numerical solution of these equations, for varying values of beetle aggregation ($\theta$), is shown in Figure 9. The $\theta$ values for beetle aggregation are as illustrated in Figure 8. The top row of Figure 9 shows no aggregation ($\theta = 281$), while the second and third rows of the figure show increasing levels of aggregation ($\theta = 10$ and $\theta = 1$, respectively). (The corresponding probability density and cumulative density functions for the aggregation functions are shown in Figure 8.) All other parameters in the model are chosen to be identical to the top panels of Figure 5 (left column) and Figure 6 (right
Figure 8. The gamma function can be used to describe beetle aggregation. The probability density function (46) (top panel) and cumulative density function (47) (bottom panel) for a gamma distribution of beetles are shown for mean beetle density $b_t(\nu) = 1$. Dispersion parameters are $\theta = 1$ (exponential, long dashed), $\theta = 10$ (dashed) and $\theta = 281$ (solid). Lower $\theta$ levels indicate higher aggregation levels. At $\theta \to \infty$ the beetle density is given by its mean, and the model becomes deterministic. The dots indicate the Poisson distribution rescaled to correspond with the $\theta = 281$ case. This is approximated closely by the solid line arising from the gamma distribution. To observe the effect of aggregation, observe that for $\beta_1 = 2.1$ and $\nu = 0.25$ the threshold for successful attack, $\exp(\beta_1 \nu) = 1.86$ is exceeded by about 15% of the beetles with dispersion parameter $\theta = 1$, but almost none of beetles with larger $\theta$.
column). Note that the top panel of the left column of Figure 9 (stochastic model with linear dynamics and no aggregation) is similar to the top panel of Figure 5 (deterministic model with linear dynamics and no aggregation), and that the top panel of the right column of Figure 9 (stochastic model with nonlinear dynamics and no aggregation) is similar to the top panel of Figure 6 (deterministic model with nonlinear dynamics and no aggregation).
As with the previous figures, the black dashed, solid white, and white dashed lines show threshold beetle abundances, as calculated for the deterministic models. They are simply reproduced in this figure for reference. Lines in the left column are identical to those in the top panel of Figure 5. The black dashed line indicates the minimum beetle abundance required to kill the weakest hosts in the deterministic model with no aggregation; the solid white line indicates the minimum beetle abundance guaranteed to eventually kill the entire stand in the deterministic model with no aggregation; and white dashed line indicates the minimum beetle abundance guaranteed to kill the stand in a single year in the deterministic model with no aggregation. In a similar manner, lines in the right column are identical to those in the top panel of Figure 6.
Note that the grey area in each panel increases with increasing aggregation. This can be seen as one moves from the top row to the bottom row in either column. This means that increased aggregation makes it more likely that a stand will be successfully attacked (white areas are replaced by grey), but also makes it more likely that a fraction of the stand will escape being killed (black areas are replaced by grey). We interpret this result as follows: at low beetle abundance, aggregation helps ensure that the attack threshold is exceeded in some trees, and hence that some individuals in a stand become infected. However, at high abundance, aggregation means that certain trees fall below the attack threshold as the beetles move to other trees, and hence the entire stand is not infected at levels above threshold.
5. Discussion
Previous productivity curve models have modeled beetle production in an entire stand. This does not easily account for the changing vigor structure as the beetle infestation progresses. In this paper we have modeled and analyzed a vigor-structured model for beetle dynamics within a stand. This model explicitly tracks the changing vigor structure in the stand. All model parameters, other than vigor preference $c$ and dispersal survival $S$, were determined by fitting model components to empirical data (Table 2). The closest existing model to the one developed here is the computer simulation model of Raffa and Berryman (1986). However, that model did not provide an explicit mathematical formulation, such as the one given here. We hope that our explicit mathematical formulation provides a foundation for further insight and analysis by other researchers.
The added detail needed to track vigor structure has generated mathematical complexity: the model (1)–(2), (8) and (15) is a system of nonstandard nonlinear integrodifference equations. However, the assumption of an abrupt threshold response ($\alpha \to \infty$) allowed for model simplification, given by reduction to a delayed discrete-time dynamical system for beetle population levels. Analysis of this simplified model allowed for predictions of model outcomes (fraction of stand killed by beetles) as a function of the two unknown quantities: the initial number of beetles introduced to the stand $N_0$ (variable) and vigor preference $c$ (see Figures 5 and 6).
Figure 9. Forest damage as a function of initial beetle density predicted by the stochastic threshold model. Dynamics parameters in the left column are $\alpha = 2$, $\gamma = 0.8$ and $\beta = 0$, and dynamics parameters in the right column are $\alpha = 33$, $\gamma = 0.8$ and $\beta = 2.1$. Aggregation parameters are $\theta = 281$ in the first row, and $\theta = 10$ and $1$ in the second and third rows. Note the grey area in each panel increases with increasing aggregation, as one moves from the top row to the bottom row.
Our assumption that beetles respond precisely to low tree vigor is a simplification, as they cue on a large number of chemical signals, of which tree kairomones are just one. In addition, the beetles may cue on larger tree size, which often corresponds with low tree vigor. As mentioned in Section 2.2, our simplification can be considered a best-case establishment scenario, where, by choosing low-vigor trees, beetles can establish most easily. In our example, we consider a vigor distribution that is initially uniform. Although this is a useful starting point, it is likely that the initial vigor distribution is structured, possibly with peaks at low and high vigor. It is intriguing to consider whether such structure could give rise to metastable dynamics, where beetles slowly build up populations in a small number of low-vigor hosts before breaking out and attacking the high-vigor hosts (cf. Safranyik and Carroll (2006)).
A model extension to account for beetle aggregation used ideas from the models for host-parasitoid dynamics with aggregating parasitoids (Hassell, 1978). This model allows researchers to use empirically derived distributions of host selection and aggregation to describe complex spatial processes. This stochastic formulation is quite general, but adds an additional level of model complexity. One possible approach would be to choose the bimodal aggregation function, described in Section 4, right after equation (45).
Although likely less realistic for mountain pine beetle than the bimodal distribution, the gamma distribution provides a flexible, but relatively tractable distribution for aggregating beetle numbers on the host trees. Using this, the effect of aggregation on outbreak dynamics can be analyzed numerically, as shown in Figure 9. Here the numerical results show that the effect of aggregation is to increase the likelihood of successful attack of a stand, but to decrease the likelihood that the entire stand is killed. Further work is needed to estimate the appropriate aggregation parameter for the mountain pine beetle.
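This qualitative effect of aggregation can be illustrated with a rough numerical sketch (this is not the paper's exact formulation; the threshold, mean loads, and shape values below are arbitrary illustrations). Per-tree beetle loads are drawn from a gamma distribution with mean $\mu$ and shape $\theta$, and we count the fraction of trees whose load exceeds an attack threshold:

```python
import random

def frac_above_threshold(mu, theta, threshold, n_trees=100_000, seed=0):
    """Fraction of trees whose gamma-distributed beetle load exceeds the
    attack threshold. Shape theta controls aggregation (small theta = more
    aggregated); scale mu/theta keeps the mean load at mu."""
    rng = random.Random(seed)
    hits = sum(rng.gammavariate(theta, mu / theta) > threshold
               for _ in range(n_trees))
    return hits / n_trees

# Mean load below threshold: aggregation pushes some trees over the threshold.
low = [frac_above_threshold(0.5, th, 1.0) for th in (100, 10, 1)]
# Mean load above threshold: aggregation leaves some trees under-attacked.
high = [frac_above_threshold(2.0, th, 1.0) for th in (100, 10, 1)]
```

With these illustrative numbers, `low` increases and `high` decreases as $\theta$ falls, mirroring the growth of the grey regions from the top to the bottom rows of Figure 9.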
The work of Nelson et al. (2008) asks why current risk models fail, concluding that the primary reason is that beetle density has been removed as a hazard index in current risk models. As shown graphically in Figures 5, 6 and 9, our model provides a framework in which beetle density is a key factor determining forest damage. Our model has underscored the role of initial population levels $N_0$, as well as the selective preference of beetles for low vigor classes $c$ and the nonlinear attack threshold $\beta_1$, in determining attack success and, when successful, attack outcomes (entire forest killed versus a fraction).
Our goal has been to develop a strategic model which can be used to relate risk of forest damage to known quantities. There are additional processes, not included in our model, that govern dynamics in natural stands. These include competition with other bark beetles (Safranyik and Carroll, 2006), variable redistribution survivorship $S$ (Burnell, 1997), evidence that host selection can change with beetle density (Wallin and Raffa, 2004), and temperature-dependent beetle phenology (Bentz et al., 1991). While strategic models that leave out too many ecological factors may be unable to predict risk (Nelson et al., 2008), we believe that the next step is to validate the vigor-structured model against mountain pine beetle infestation data. Based on the outcome of this, it may be necessary to develop the model further. This will be the subject of future work.
| Parameter | Estimate | Description |
|-----------|-----------|--------------------------------------------------|
| $H$ | 18.5 m | maximum attack height |
| $H^*$ | 1850 cm | maximum attack height (in cm) |
| $S^*$ | 25 cm$^2$ | average sapwood area |
| $\delta$ | 0.322 g cm$^{-3}$ | wood density |
| $L^*$ | 10 | relationship between sapwood area and leaf area |
Table 3. Parameters for converting the axes in Figure 1(b).
Acknowledgments
This study was funded by Natural Resources Canada–Canadian Forest Service under the Mountain Pine Beetle Initiative. Publication does not necessarily signify that the contents of this report reflect the views or policies of Natural Resources Canada–Canadian Forest Service. The first author was also supported by an NSERC Discovery grant and Canada Research Chair. The second author was also supported by NSERC and Alberta Ingenuity Fund postdoctoral fellowships. The third author was supported by a Killam postdoctoral fellowship. Thanks to three anonymous reviewers and to the Lewis Lab for helpful suggestions that have improved this paper. Thanks also to members of the TRIA Project (Mountain Pine Beetle System Genomics).
Appendix A: Notes for converting axes in Figure 1(b)
The axes used in the original manuscript by Mulock and Christiansen (1986) were $Y = \frac{A}{D}$, where $A$ is the number of successful attacks on the entire host and $D$ is the diameter at breast height of the tree, and $X = \frac{B}{S}$, where $B$ is the yearly increase in a tree's basal area and $S$ is the sapwood area. We seek to transform these data onto new axes of $Y^* = A^*$, where $A^*$ is the number of attacks per square meter of bark area, and $X^* = \frac{M}{L}$, where $M$ is the yearly mass gain of the tree and $L$ is the leaf area of the tree. To convert each data point, we would need the height, sapwood area, and diameter of each tree. Since this information is unavailable, we instead transform the data using the average value of these quantities for the stand as follows: $Y^* = \frac{100\,Y}{\pi H}$, where $H$ is the maximum height of the attacks (Mulock and Christiansen, 1986). The factor of 100 converts the diameter from centimeters to meters, giving bark area in square meters. Host vigor is converted as $X^* = \frac{X S^* H^* \delta}{100 L^*}$, where $S^*$ is the average sapwood area in a stand, $\delta$ is the density of wood, $H^*$ is the maximum attack height in centimeters, and $L^*$ is the allometric relationship relating sapwood area to leaf area in Norway spruce (Stancioiu and O'Hara, 2006).
Appendix B: Methods for estimating parameters
Parameter estimates were derived from two studies on mountain pine beetles. Beetle parameters were estimated from data in Raffa and Berryman (1983), and host parameters were estimated from data in Waring and Pitman (1985).
**Beetle parameters** ($A$, $\gamma$). Raffa and Berryman (1983) report observations of the number of pupae produced per attack as a function of attack density. If we assume that the mortality from pupae to adult is small, then these data provide the necessary information to estimate $A$ and $\gamma$ for that particular stand, in the year that the study was undertaken. The data were digitized and the parameters estimated by fitting the following non-linear function: $p(b(a,\nu),\nu) = A \exp(-\gamma b(a,\nu))$. The minimization was done using the nls function in the R statistical environment.
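A minimal stand-alone sketch of this kind of fit (using a log-linear least-squares fit rather than R's nls, and exact synthetic data rather than the digitized Raffa and Berryman observations):

```python
import math

def fit_productivity(attack_density, pupae_per_attack):
    """Estimate A and gamma in p(b) = A * exp(-gamma * b) by ordinary
    least squares on the linearized model log p = log A - gamma * b."""
    xs = list(attack_density)
    ys = [math.log(p) for p in pupae_per_attack]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return math.exp(ybar - slope * xbar), -slope  # (A, gamma)

# Noise-free synthetic data generated with A = 60, gamma = 0.02
# is recovered exactly (illustrative values, not the paper's estimates).
b = [20, 40, 60, 80, 100]
p = [60 * math.exp(-0.02 * bi) for bi in b]
A_hat, gamma_hat = fit_productivity(b, p)
```

Note that, unlike nls, the log-linear fit weights residuals on the log scale; for noisy data the two estimates will differ.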
**Host parameters** ($\alpha$, $\beta_0$, $\beta_1$, $\nu_m$). Waring and Pitman (1985) present a data set of host mortality as a function of attacking beetle density and host vigor for mountain pine beetles in a lodgepole pine stand. We use these data to obtain estimates for $\alpha$, $\beta_0$, and $\beta_1$. A small number of the data were for strip attacks, where only a portion of the tree is killed. Since the extent of the damage is not reported, and since they represent only a small proportion of the data set, we exclude them from the analysis. The expected probability of host mortality ($\pi_i$) is given by $\pi_i = (1 + \exp(-\alpha(y_i - \beta_0 \exp(\beta_1 x_i))))^{-1}$, where $x_i$ is the vigor of host $i$ and $y_i$ is the density of attacking beetles. Since the response variable is binary (host is either dead or alive), the parameters were estimated by minimizing the negative log-likelihood objective function ($L$) for binary data, $L = -\sum_{i=1}^{n}\left(z_i \log\left(\frac{\pi_i}{1-\pi_i}\right) + \log(1-\pi_i)\right)$, where $z_i$ is the observed host outcome (0 or 1). The minimization was done using the optim function in the R statistical environment. To avoid numerical errors, the predicted probability of host mortality was bounded between 0.0001 and 0.999.
The maximum vigor ($\nu_m$) was taken as the maximum vigor observed in the
stand.
**References**
B.J. Bentz, J.A. Logan, and G.D. Amman. Temperature-dependent development of mountain pine beetle (Coleoptera, Scolytidae) and simulation of its phenology. *Canadian Entomologist*, 123:1083–1094, 1991.
A.A. Berryman. Dynamics of bark beetle populations: Analysis of dispersal and redistribution. *Bulletin de la Société Entomologique Suisse*, 52:227–234, 1979.
A.A. Berryman, N.C. Stenseth, and D.J. Wollkind. Metastability of forest ecosystems infested by bark beetles. *Researches on Population Ecology*, 26:13–29, 1984.
D.G. Burnell. A dispersal-aggregation model for mountain pine beetle in lodgepole pine stands. *Researches on Population Ecology*, 19:99–106, 1997.
H. Caswell, R.M. Nisbet, A.M. de Roos, and S. Tuljapurkar. Structured-population models: Many methods, a few basic concepts. In S. Tuljapurkar and H. Caswell, editors, *Structured-population Models in Marine, Terrestrial, and Freshwater Systems*. Chapman and Hall, 1997.
W.E. Cole and M.D. McGregor. Estimating the rate and amount of tree loss from mountain pine beetle infestations. Technical Report Res. Pap. INT-318, U.S. Department of Agriculture, Forest Service, Intermountain Forest and Range Experiment Station, 1983.
W.S.C. Gurney and R.M. Nisbet. *Ecological Dynamics*. Oxford University Press, Oxford, UK, 1998.
M.P. Hassell. *The dynamics of arthropod predator-prey systems*. Princeton Monographs in Population Biology. Princeton University Press, Princeton, NJ, 1978.
F. He. Observations of mountain pine beetle infestations of lodgepole pine stands in southern British Columbia. Unpublished, 2006.
J.A. Logan and J.A. Powell. Ghost forests, global warming, and the mountain pine beetle. *American Entomologist*, 47:160–173, 2001.
J.A. Logan, P. White, B.J. Bentz, and J.A. Powell. Model analysis of spatial patterns in mountain pine beetle outbreaks. *Theoretical Population Biology*, 53: 236–255, 1998.
J.A. Logan, R. Regniere, and J.A. Powell. Assessing the impacts of global warming on forest pest dynamics. *Frontiers in Ecology and the Environment*, 1:130–137, 2003.
H.A. Moeck and C.S. Simmons. Primary attraction of mountain pine beetle *Dendroctonus ponderosae* Hopk. (Coleoptera: Scolytidae), to bolts of lodgepole pine. *Canadian Entomologist*, 123:299–304, 1991.
P. Mulock and E. Christiansen. The threshold of successful attack by *Ips typographus* on *Picea abies*: a field experiment. *Forest Ecology and Management*, 14:125–132, 1986.
W.A. Nelson, A. Potapov, M.A. Lewis, A.E. Hundsdorfer, and F. He. Balancing ecological complexity in predictive models: a reassessment of risk models in the mountain pine beetle system. *Journal of Applied Ecology*, 45:248–257, 2008.
J.A. Powell, J.A. Logan, and B.J. Bentz. Local projections for a global model for mountain pine beetle attacks. *Journal of Theoretical Biology*, 179:243–260, 1996.
J.A. Powell, B. Kennedy, P. White, B.J. Bentz, J.A. Logan, and D. Roberts. Mathematical elements of attack risk analysis for mountain pine beetles. *Journal of Theoretical Biology*, 204:601–620, 2000.
K.F. Raffa and A.A. Berryman. The role of host plant-resistance in the colonization behavior and ecology of bark beetles *Coleoptera, Scolytidae*. *Ecological Monographs*, 53:27–49, 1983.
K.F. Raffa and A.A. Berryman. A mechanistic computer-model of mountain pine-beetle populations interacting with lodgepole pine stands and its implications for forest managers. *Forest Science*, 32:789–805, 1986.
L. Safranyik and A. Carroll. The biology and epidemiology of the mountain pine beetle in lodgepole pine forests. In L. Safranyik and B. Wilson, editors, *The mountain pine beetle: A synthesis of its biology and management in lodgepole pine*. Natural Resources Canada, Canadian Forest Service, 2006.
P.T. Stancioiu and K.L. O’Hara. Leaf area and growth efficiency of regeneration in mixed species, multiaged forests of the Romanian Carpathians. *Forest Ecology and Management*, 222:55–66, 2006.
K.F. Wallin and K.F. Raffa. Feedback between individual host selection behavior and population dynamics in an eruptive herbivore. *Ecological Monographs*, 74: 101–116, 2004.
R.H. Waring and G.B. Pitman. Modifying lodgepole pine stands to change susceptibility to mountain pine beetle attack. *Ecology*, 66:889–897, 1985.
P. White and J. Powell. Phase transition from environmental to dynamic determinism in mountain pine beetle attack. *Bulletin of Mathematical Biology*, 59: 609–643, 1997.
P. White and J. Powell. Spatial invasion of pine beetles into lodgepole forests: a numerical approach. *SIAM Journal of Scientific Computing*, 20:164–184, 1998. |
Collaboration for Improving the Business
Using Teams to Implement Quality
by Ray Svenson, Karen Wallace, and Guy Wallace
Improving performance by efficiently implementing tools and techniques from the quality movement throughout an organization requires using teams — the right teams, teams in the right structure, and teams devoted to the right projects and/or processes. Each team must act within the context of the organization’s entire *quality/business* effort. Each team must have its place within an established, formal hierarchy of teams at all levels of the organization. And each must undergo its own “life cycle” from formation to excellence, a life cycle that depends on planning and support from the leadership of the organization.
**Three Types of Teams.** Teams for implementing and managing the quality effort are different from work teams. While some teams may be permanent, many are temporary. A sample overall team structure is shown below.
The hierarchy includes teams of three types:
- An executive leadership team (ELT)
- Process steering teams
- Project improvement teams (PITs).
The executive leadership team and the process steering teams are permanent parts of the organization. Project improvement teams respond to specific issues; they dissolve as the issues are resolved.
“The ELT provides overall leadership…”
The *executive leadership team* provides overall leadership to the improvement effort. (This team can go by names other than ELT; it’s the concept behind the team that’s important, not the name.) The executive leadership team:
- Establishes the mission, vision, and values
- Develops a business architecture
- Establishes executive-level owners of key processes
- Conducts company-level assessments
- Develops company-level improvement plans and priorities
- Establishes, sponsors, and charters process steering teams
- Establishes, sponsors, and charters company-level improvement project teams
- Maps company-level leadership processes
- Ensures that improvement projects integrate with overall company processes
- Manages the empowerment and accountability of subordinate teams
- Advocates and communicates TQM concepts, goals, and structure inside and outside the business; “walks the talk.”
On December 20, 1993, “the new corporate model” made the cover of *Business Week*. One of the seven key elements of the horizontal corporation — along with emphasis on organizing around process and on training for employees — is teams. “Use teams to manage everything,” states the article.
Teams *can* help business performance, but the *extent* to which teams are used effectively throughout an organization, and the *types* of teams used, depends upon the needs of the business. Reorganizing everyone into work teams to accomplish the goals of the business may not be appropriate. “Lone Rangers” may still be required — they simply have to understand key business objectives and business processes.
We’ve found that one of the main objectives for organizing into work teams is to get away from functional “siloism,” where the demands and perspective of a particular discipline — engineering, sales, etc. — tend to blind employees to the goals of the company as a whole. Multidisciplined work teams, the reasoning goes, can focus on a particular process. The problem — *process* siloism often replaces *functional* siloism. The broader corporate mission still is unrealized; the corporate *part* still is optimized rather than the corporate *whole*. Teams should exist for the sake of the business processes; teams should not exist simply for the sake of having teams.
The ELT is permanent. The business may already have this team in place under the guise of some sort of cross-functional senior committee; with enough emphasis on quality, this committee becomes the ELT. Members remain in their functional areas, providing (through their roles on the ELT) the control and direction necessary for organizational improvement.
“Each process steering team focuses on a particular process…”
Working at lower levels than the executive leadership team are *process steering teams*. Each process steering team focuses on a particular process and searches for improvement opportunities. The team prioritizes the opportunities, then determines how each opportunity fits into the overall business strategy handed down from the executive leadership team. A process steering team can be the process team or leadership team for a major location — such as a plant — or for a process such as sales. Cross-functional in either case, the process steering team:
- Identifies function/process business architecture
- Conducts assessments
- Devises improvement plans and priorities based on the company plan
- Builds a function/process-level measurement hierarchy
- Establishes, sponsors, and charters improvement team projects
- Maps and improves selected processes
- Ensures that improvements are properly integrated into the organization
- Manages the empowerment and accountability of improvement teams
- Provides support and resources to improvement teams
- Reports improvement results to the executive leadership team.
Process steering teams, like the ELT, are permanent. Members are from various functional disciplines and may be involved in the process to be improved as participants, suppliers, or customers (internal or external). Membership on the team is usually not a full-time job, although one or two of the members may spend most or all of their time in team activities.
Process steering teams deploy strategy to lower levels by sponsoring and chartering *project improvement teams* consisting of stakeholders in the areas to be improved. For example, a team with the charter to improve the product development process might include representatives from engineering, marketing, R&D, finance, business planning, and manufacturing. By incorporating the points of view and expertise from the various functional areas, the team has a much better chance of effectively implementing improvements than any single individual acting alone or any collection of individuals from a single function. The cross-functional team interaction is likely to expedite the improvement process, and it will also provide a vital synergy likely to make the improvement more effective. (Again: PITs are *not* equivalent to work teams. The members of work teams are busy performing the daily work of the business.)
Improvement teams:
- Prepare project plans
- Assess improvement potential
- Design and test improvements
- Prepare plans for deploying their improvements
- Oversee and support deployment
- Report deployment results
- Assess issues of integration
- Recommend or negotiate integration with other processes and systems.
Improvement teams make the improvements the organization is looking for. In terms of the metaphor we use in *The Quality Roadmap*, this is where “the rubber meets the road.”
Project improvement teams work at any level. An improvement team comprised of members of the executive leadership team may work on high-level projects. Other teams are active at the middle management level and at the working level. Unlike the ELT and steering teams, however, PITs are temporary, dissolving once their chartered improvements are in place.
**Not Just Teams.** Three other elements in a team structure help companies implement quality and operate with quality. Sponsors are connections between different types of teams. Sponsors charter improvement teams, act as advocates for a team, help it obtain resources, help clear roadblocks, and provide reward and recognition to the team. A sponsor is a major stakeholder in the team’s objectives; the sponsor may even be the owner of the process to be improved. It is the sponsor who is the link between the improvement team and the executive leadership team or steering team.
Also with a vital place in the team structure is the organization’s chief executive. The CEO chairs the executive leadership team and provides overall strategic direction to the TQM effort. Active leadership and a strong commitment from this person are essential to the success of a quality effort.
A third crucial element is communication. Not only must overall direction and strategy be formulated by the ELT and steering teams, these must be communicated to the project improvement teams. Otherwise the PITs will remain adrift in a sea of corporate ambiguity with no destination, no direction.
**Benefits of a Hierarchical Team Structure.** What does this hierarchical structure add to the TQM effort? It allows for a strategic deployment of resources toward improvements that will most benefit the organization. It also allows TQM efforts to be integrated by virtue of the attention being paid at the highest levels of the organization.
Each improvement team is chartered and sponsored by a higher-level team. This means that no “dangling” teams waste effort on misguided missions, projects we classify as “water cooler” projects. Each team has a specific focus; each team has a well-defined direction.
Within this framework, strategic direction and investment resources flow from the top down. Improvement results flow from the bottom up.
**Establishing Teams.** As they’re chartered and organized, teams go through a rather predictable series of stages in their development. In conjunction with our strategic partner Prism Performance Systems (PPS) and Prism Custom Development Services (PCDS), we’ve developed a model for team development — a model that can be used
to accelerate the development of teams and prevent the development of dysfunctional teams. More about that model in the next issue of *Pursuing Performance*.
Whether or not an organization reorganizes into a horizontal corporation using *work teams* (see the accompanying sidebar) depends on the nature of an organization’s business. We believe that the use of teams in implementing *quality* into an organization requires the use of properly structured and chartered ELTs, process steering teams, and PITs. These teams are crucial to any quality effort.
This article is adapted from *The Quality Roadmap: How to Get Your Company on the Quality Track — And Keep It There*, authored by the principals of SWI ♦ Svenson & Wallace, Inc. See Page 14 for more information about this book.
**When Is It Justified, When Not?**
**Making a Case for Training**
*by Guy Wallace*
In a recent project, we identified more than 500 distinct training needs within the client corporation. Approximately 50 of the needs were being addressed by existing training. Should those other 450 be addressed?
We helped another client quantify the Performance Improvement Potential (Thomas F. Gilbert’s “PIP”) in the area of CAD-CAM operations; the PIP amounted to millions of dollars annually. Should that performance gap be closed?
Just because we can *define* a training need doesn’t mean that the need should always be met. Determining and quantifying a training need/performance problem, even one costing millions of dollars a year in nonconformance, may not automatically warrant the expenditure of corporate funds to address that problem, even though the problem seems to scream out to trainers for resolution.
The decision to spend thousands or millions of dollars on training is a *business* decision, not a *training* decision. Trainers should have no vested interest in whether a particular need is addressed. One of our responsibilities as trainers is to help our clients make good business decisions about the value of training relative to other organizational opportunities.
Many, but not all, of the 500 needs we identified for the first client will be addressed. They will be addressed on the basis of return on investment (ROI), on how much value the dollars spent resolving those needs will return to the corporation compared to other potential corporate expenditures. (Training is, of course, only one bucket into which corporate resources are strategically tossed.)
In order to allow our training clients to evaluate the investments we propose, we must:
- Use the language and numbers familiar to management — e.g., ROI
- Present the language and numbers in a familiar format — e.g., a business case.
**Calculating the Return on Training Investment.** Management has a fiduciary responsibility to use corporate funds to the best advantage of the corporation. Management should only spend money on a training project if the ROI for the project compares favorably to the ROI for other potential investments.
The ROI percentage can be expressed as:
\[ \text{ROI} = \frac{\text{Return} - \text{Investment}}{\text{Investment}} \times 100\% \]
A training investment of $50,000 a year that returns $75,000 annually has an ROI of 50%.
We contend that we can substitute either Gilbert’s Performance Improvement Potential (PIP) or the Cost of Nonconformance (CONC) for “Return” in the equation above. CONC is the potential value of performance minus the actual value of performance.
The *potential* value of performance, like the return, can be very difficult and problematic to measure; we prefer to use instead a simple but real cost of performance, for example as measured by the salaries of the performers. One hundred performers earning $35,000 a year gives a benchmark of $3.5 million for 100% proficient performance.
If we assume that our 100 performers actually work at 50% proficiency, then the *actual* value of performance is $1.75 million, resulting in a CONC (or a PIP) of $1.75 million annually: $3.5M minus $1.75M.
Spending a million dollars a year to bring this workforce to a proficiency of 100% yields an ROI of 75%: [$1.75M − $1.0M] ÷ $1.0M. Bringing it to 90% proficiency yields an ROI of 40%.
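The arithmetic of this workforce example can be sketched in a few lines of code (the dollar figures are the article's illustration, not client data):

```python
def roi(return_, investment):
    """ROI as a fraction: (Return - Investment) / Investment."""
    return (return_ - investment) / investment

benchmark = 100 * 35_000        # 100 performers at $35K: $3.5M at 100% proficiency
actual_50 = 0.50 * benchmark    # value of performance at 50% proficiency
conc = benchmark - actual_50    # CONC / PIP: $1.75M per year

roi_to_100 = roi(conc, 1_000_000)                          # 0.75
roi_to_90 = roi(0.90 * benchmark - actual_50, 1_000_000)   # ~0.40
```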
Training ROI calculations are really much more complicated than this. Not all performers start at the same level of proficiency; they may not reach 100% proficiency after training; and factors such as scrap and rework, schedule slippages, sales volume, and lost opportunities may not be factored in.
We believe that a better model for calculating training ROI is one that often *understates* the return. How can the model understate the return on investment? We tend to use very conservative numbers in our calculations. Here is the algorithm we used to calculate the CONC for our CAD-CAM client.
\[
\begin{align*}
& \$30\text{K} && \text{Average loaded salary of a CAD-CAM performer} \\
\times\; & 25\% && \text{Average job time spent using CAD-CAM on the job} \\
\times\; & 800 && \text{Number of CAD-CAM performers} \\
\times\; & 50\% && \text{Average deficiency level of a CAD-CAM performer} \\
=\; & \$3\text{M} && \text{Value of deficiency of performance}
\end{align*}
\]
The assumptions for this sample calculation are that:
- The average loaded salary of a CAD-CAM performer in the organization was $30,000.
- The average CAD-CAM user spent 25% of his or her working time using the system (some used the system more than others).
- There were 800 CAD-CAM users.
- The average deficiency level was 50%.
With these numbers, we calculated the value of the performance deficiency to be $3M.
But in reality, the numbers were low. The average fully loaded salary was significantly higher than the $30,000 we used in the calculation above — it was $50,000-plus. And the number of CAD-CAM users was upward of 1500… or even 3000 — the client wasn’t sure. Plugging those numbers into the algorithm gives CONCs ranging from $9.37M up to $18.75M. These are all potential annual savings that are left on the performance table.
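Both the conservative figure and the revised range follow from the same four-factor product (the function name is ours; the inputs are the figures quoted in the article):

```python
def conc(loaded_salary, time_fraction, headcount, deficiency):
    """Cost of Nonconformance: annual salary dollars lost to the performance gap."""
    return loaded_salary * time_fraction * headcount * deficiency

conservative = conc(30_000, 0.25, 800, 0.50)    # 3_000_000   (the $3M figure)
low_revised = conc(50_000, 0.25, 1500, 0.50)    # 9_375_000   (the "$9.37M")
high_revised = conc(50_000, 0.25, 3000, 0.50)   # 18_750_000  (the "$18.75M")
```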
In addition, some estimates of performer proficiency (estimates made by the performers’ own management) were far below the estimates we used in our algorithm — as low as 25% proficiency! The hourly cost of using the CAD-CAM system wasn’t factored into our algorithm (to keep it as simple as possible). Neither were the costs of potential rework or scrap caused by deficiencies in performance.
Our relatively simple algorithm, using numbers generated by the client, showed just how serious the performance gap was — and how large the potential return on investment for the training we had scoped. Even with a multimillion dollar price tag on the training, the ROI was phenomenal — so phenomenal, in fact, that the executives we were working with did not want the number to appear on paper.
**Obtaining Management Buy-In for Training ROI.** Presenting the training ROI in a successful business case means obtaining and packaging a variety of data. Find out what is an acceptable return in your organization and what kind of ROIs management has been achieving lately. Get management to help you generate the figures that will go into your business case — that reduces later disputes over the validity of the figures. Keep the model relatively simple; you *could* factor in an increase in sales volume to your training return, but figures like that — valid or not — are open to dispute because they depend on so many other variables, such as the state of the national economy. The more complicated your models are for calculating costs and returns, the more difficult it is, in our experience, to get the models accepted.
Because of the high visibility of the quality movement, we suggest that you use terms derived from disciplines of quality, productivity, and finance.
Be aware, also, of the politics of your situation. A multimillion dollar CONC may make the “owners” of the CONC feel wary that others will perceive the gap as a measure of their own incompetence; the client may, in such cases, be reluctant to use the “real” CONC numbers. With our CAD-CAM client, the amount of scrap caused by the relatively low proficiency level of the performers became an extremely sensitive issue; the cost of scrap attributable to the performers would have been very expensive. We wound up ignoring the issue and excluding the factor from our algorithm.
When you feel that you have a training case that makes sense from a business perspective, package and present your proposal as a genuine business case in a format accepted in your organization. Help your client come to a decision based on a *business* perspective, not a training department perspective. That allows you to become a business champion, not just a training champion.
---
Portions of this article are adapted from a longer piece by Guy Wallace that appeared in ASTD’s *Technical and Skills Training Journal* in May/June of 1991. The article contains additional detail on calculating and presenting training ROI. See the Journal or contact SWI for the text of that article.
Personal Profile: Diane Wagner
by Ron Grossman
For this issue, Pursuing Performance departs from standard operating procedure. Instead of having our own writer do this issue’s employee profile, we asked Ron Grossman for his impressions of Diane Wagner, SWI Senior Associate. Ron, who shares his life with Diane, is a reporter for the Chicago Tribune.
As a shortcut to understanding what makes Diane Wagner tick, let me echo (a polite way of saying “steal”) a saying of Hillel, greatest of the ancient rabbis. Once, he was confronted by a wise-guy student willing to bet the rabbi couldn’t teach him the Torah while the impatient young man stood on one foot.
Save your money, Hillel replied.
“All you need know is this: Don’t do unto others what you wouldn’t have them do to you,” he said. “All the rest is commentary.”
All you need know about Diane is that she lives according to the good rabbi’s version of the Golden Rule.
Now here’s the commentary.
Diane Wagner
Of an evening, she and I will be sitting around reading the newspaper. To me, such is the most solitary of activities. I love to wrap the pages psychologically around me, using reports of mankind’s foibles as a screen against the outside world.
Diane reads with one hand. In the other, she holds a little clipper that she takes to one page after another, snipping out this and that article while exclaiming:
“Oh, look at this! I just know Judy (or Marlene, or Don, or Andy, etc., etc.) would love to see this.”
“Uh… yeah,” I reply in the distracted voice native to my gender.
By next morning, each of those newspaper clippings will already be enclosed with a little note in a stamped envelope, ready to be mailed to friends and clients all across the country.
Last October while we were vacationing in the south of France, Diane spotted some ceramic tiles in a shop, just perfect for a friend’s birthday present. The birthday in question was a good five months away.
It’s not hard to see where Diane comes by that bottomless capacity to empathize with others. Her parents hadn’t much in the way of material goods to bequeath her, being simple working folk. But their legacy was infinitely more valuable. Diane’s mother (who died a few years ago) and father (now 89) were immigrants from Czechoslovakia. They lived in the New World according to the same communal ethic that had sustained their peasant ancestors. Think of others first, second, and third. Never complain. Save for a rainy day. If it’s sunny, save anyway.
After growing up on Chicago’s West Side and in Cicero, Diane attended Northern Illinois University, where she studied English literature. Although the actual mileage between the neighborhood where she was born and De Kalb, Illinois, isn’t that great, a Chicago immigrant neighborhood and a college campus were cultural light years apart in the 1960s. But in bridging that gap, Diane learned to accept difference as a fact of contemporary life. Indeed, she acquired a taste for diversity that persists to the present. Our townhouse walls are covered with pottery and photographs collected on travels to various countries.
The experience of straddling two worlds — street corner society and the ivory tower — gave Diane a kind of anthropologist’s eye which, in her professional life, she has trained on the various cultures of corporate America.
“You must remove your own intellectual blinders…”
After completing an advanced degree in psychology, she helped run a community mental health center in De Kalb. That was no mean trick on the Middle American prairies, where Freud and the mysteries of the human psyche are hardly daily table talk. That taught her the basic lesson every teacher or business consultant must learn: to help people, especially those under stress, you must remove your own intellectual blinders and take a look at how the world seems to a student or client.
Eventually, Diane returned to Chicago where she worked for the Legal Services Corporation, a federally funded program to enable poor folk to have their day in court. The experience provided her with a kind of do-it-yourself course in power and politics, as she witnessed the varying clashes of interest that determine public policy.
Later, she was struck by parallels in the power relationships that define the playing field in corporations. Her professional philosophy rests on the axiom that a successful consulting engagement begins with identifying the tensions between individuals and the organizations in which they work.
From Legal Services, she went on to the IBM Corporation, where she worked as an internal consultant for more than ten years. For much of that time, she was stationed in New York, commuting back to Chicago weekends to keep up with family obligations. She also enrolled in a doctoral program in adult education at Columbia University. She’s just now processing the data for her dissertation, a study of how people actually use the much-touted, high-tech computers and communication tools currently transforming the American workplace.
Last year Diane joined SWI.
Watching her return from consulting engagements, I’ve noticed that she seems happiest not when she’s had the spotlight. Rather, she’ll be more enthusiastic when reporting that she’s helped clients learn a trick or two they never suspected themselves capable of. In a Buck Rogers age of high technology, it’s easy to overlook the human dimension of the workplace. But Diane believes that the fanciest of gadgets don’t lead to profits unless the people using them bring to work with them every day an intelligence, a spirit of cooperation, and a willingness to share experience.
So, to this stranger to the business, at least, it seems that the secret of being an effective consultant may come down to never forgetting do-unto-others lessons learned at your immigrant-parents’ kitchen table.
The SWI PACTSM Process
Phase 4 — Training Development/Acquisition
by Guy Wallace
Performance-based Accelerated Customer-driven Training
This article is the sixth in a series describing the SWI PACTSM Process for training design. We include under “training” all methods of deploying awareness, knowledge, and skills to target audiences. Training methods may include orientation, education, training, on-the-job development, cross-training, and so forth. The PACTSM Process covers both our approach to Curriculum Architecture design and to custom course development.
In previous articles in this series, we’ve covered the analysis and design that precedes the development of training courses. We’ve also covered the project planning and organization involved in such an effort. In this article we cover:
- Make/buy considerations for “filling the blanks” in the curriculum design
- The “standard” approach to acquiring training modules
- A shortcut method for acquisition
- Considerations for training module development.
The ideal approach, in our opinion, is to always go through the full-blown PACTSM Process Analysis (Phase 2) and Design (Phase 3) even if we think up front that we may “buy” modules instead of “make” them. Phases 2 and 3 give us the data we need to either develop or acquire the training modules in the curriculum. The output of Phase 3 of the PACTSM Process includes a Lesson Flow Diagram, Lesson Specs, and Activity Specs for each of the modules in the Curriculum Architecture design (see the last issue of Pursuing Performance). The approved design document describes in detail exactly what is to be developed or acquired, including the ideal lesson configuration and lesson activities for information, demonstrations, and exercises. The design should have been thoroughly reviewed by the project’s Steering Team, of course. Used correctly, this detailed and approved document can be very effective during Development/Acquisition, and can allow developers to move into Phase 4 with great confidence.
Training Modules: Make or Buy? Whether to buy or make is really an issue of balancing the impacts of having a training program that might be just “okay” versus a program that’s more on target.
For example, filling a need for presentation skills training might seem like an opportunity to take advantage of the courses available through the packaged training marketplace. You could check out a number of courses and simply buy the best one. But if you’ve done the homework dictated by a complete analysis and design, you’ll be able to assess make/purchase issues such as:
- Your learners’ applications of presentation skills
- What the training must do to help meet presentation performance objectives.
Whether you decide to buy that “okay” course or build a more optimal course will depend on the answers to a number of questions:
- How many people in the organization will be affected by your decision? The more people affected, the stronger the case may be for custom development.
- How big an investment are you considering for the purchase? The higher the investment, the more you might want to think about custom development.
- Is the content to be delivered of strategic importance to your organization? If so, the negative impact of a course slightly off target will be magnified.
- How different are the situations in which the skills are to be applied? Presentation skills for a team meeting are different than those for a television interview, a corporate board meeting, or a Senate hearing.
- Do different stakeholders have strong vendor preferences for packaged training solutions? In this case, perhaps a custom version integrating elements of several different approaches is the best way to go.
Even a simple example like presentation skills training demonstrates that the decision to purchase rather than develop may not be a “no-brainer.”
**Acquiring Training Modules.** If you have done a full analysis and design for the modules you intend to buy, you will have the ideal set of shopping criteria. Simply compare and contrast the features of each potential purchase to your design. You cannot expect to find a course configured *exactly* like your design; however, for your desired modules you should be able to effectively evaluate:
- Content items (information)
- Example items (demonstrations)
- Practice items (exercises).
What if you don’t find anything close enough? You’ll easily change paths from “acquire” to “develop.” You don’t have to start all over; simply get approval from the Steering Team, amend your project plan, and begin development.
**The Shortcut Method of Acquisition.** There *are* situations when it’s appropriate to simply buy the best program available and when the purchase may not need to be done following the full-blown analysis and design we usually recommend. You may have seen some of these situations.
For example: Was the needs analysis done *politically* rather than *professionally*? Has a high-level manager uncovered a need to be addressed by a preselected training solution, and is the manager intractable enough so that no one will challenge his or her solution? If that’s the case, go directly to “Purchase” without passing “Go,” especially if:
- Not that many people will be affected by the decision.
- The investment is moderate.
- The content to be delivered is not of strategic importance and lacks a high enough potential return on investment.
- You have a great training evaluation system in place (giving you more than just “smile sheet” data), and you will be able to later judge the purchase choice based on data and not opinion.
In this case, you have an opportunity to skip a battle and get back to dealing with higher-impact training issues. Buy a program.
If you can’t do a full and detailed analysis and design, there are shortcuts for coming up with the criteria you’ll use to shop for the training you’re going to purchase. Talk to members of your target audiences to find out what they want and don’t want in their training. Capture:
- Their preferred delivery method (CBT, classroom, readings, video, etc.)
- Minimum and maximum length of training tolerated
- The preferred amount of interaction
- Applications and situations in which the skills will be used
- Preferences for specific content, case studies, and exercises (we once had a client R&D group object to the “simple” examples used in a prospective packaged course on quality tools).
The insights you gain via these conversations will help you narrow the range of options you find in the packaged training marketplace.
**Data Gathering for Development (More Analysis?!)**. If you’ll be *developing* the courses in your Curriculum Architecture, the data you’ve gathered during analysis and design provides most of what you’ll need. The Performance Model will be especially useful. However, during development you’ll gather more targeted information from subject matter experts (SMEs) and master performers (MPs) on topics relevant to particular training modules. The Performance Model will be the vehicle for doing more targeted analysis; you won’t have to start from square one.
Most of the time, data gathering is done through interviews with subject matter experts or master performers for specific tasks or knowledge/skill items. The purposes of the data gathering step are to:
- Get specific “how-to” techniques for relevant situations
- Find real-life examples to use in the training
- Discover significant “variations” on the target task. For example, if the target task is to develop a budget, variations might include dealing with cross-department projects, currency exchange rates for international projects, a lack of available forecast data, ambiguity — whatever barriers to performance exist.
One of the Phase 4 ground rules is that only *minor* changes to the design are allowable. The Steering Team is told up front that minor tweaking might have to occur after the design is approved. SMEs and MPs who know that the design they’re seeing has been sanctioned by the Steering Team can (hopefully) be restrained in their enthusiasm for redesign. You yourself will achieve insights during this phase that will allow you to deliver a better product; and you should be allowed to do so, but your goal at this point is to keep changes evolutionary rather than revolutionary.
**To Test or Not to Test: Developmental Testing**. We feel that training developers should perform internal and informal developmental tests during this phase as they see fit. For example, it’s usually worthwhile to try out exercises to ensure that instructions are complete, that learners have enough information to answer questions, that exercises are not too difficult and not too simple, and so forth.
However, much of the time the structure of the content — and the way it’s expressed — is rather arbitrary; one approach will work just as well as another. Be aware that if you ask for opinions on content and expression during a developmental test, you will surely get those opinions, along with the consequent rework (and potential schedule slippages). Unless you feel there are substantive issues on which you’d like interim feedback, it may be better to let the pilot test in Phase 5 give you the feedback you want and need.
We suggest that for Phase 4 you subscribe to the realistic notion that you will deploy imperfection and then continuously improve rather than deferring deployment for perfection. That continuous improvement is what Phase 5, Pilot Test, is all about.
We also have an opinion on whether to conduct those infamous, time-consuming, unnecessary walk-throughs of each and every page (or screen, etc.) of the training under development. These are a developer’s nightmare. A walk-through usually degenerates into The Great Wordsmithing Contest of Arbitrary Choices and Developer Dis-Empowerment. In our experience, very few meaningful changes occur during Phase 4 walk-throughs. In fact, walk-throughs usually increase cycle times and costs, diminish value, and demean developers through the implied micro-management of their work.
**Lessons from Other Worlds.** The work we’ve done over the years has given us a good exposure to the product development process in a number of industries. We’ve found that the world outside of training has learned a number of lessons that we can apply to our own product development process if we’re not too proud. Indeed, the PACTSM Process borrows concepts, precepts, tools, and techniques from both product management and the quality movement. Some of the lessons we’ve learned are:
- Detailed planning is a must.
- Communicating to test understanding and manage expectations is critical.
- Front-end load your process with all the inputs from all of the stakeholders; don’t rush into development before getting everyone’s “stakes” placed.
- Don’t add new players (SMEs) along the way unless absolutely necessary. They disrupt the process and cause rework. If they must be brought on midproject, spend a lot of time letting them know what’s gone on before, the decisions that have been made, the tradeoffs for those decisions, and the rationales.
**Conclusion.** Whether you make or buy, develop or acquire, the work you do during preceding phases of the PACTSM Process serves you well. And when you finish Phase 4, the outputs you have are ready for Phase 5 — Pilot Testing. That’s the subject of the next article on the PACTSM Process in the next issue of *Pursuing Performance*.
What Karen Found at Barnes & Noble
"There I was at Barnes & Noble," says Karen Wallace, "in the business section, looking at books on — what else — quality. And there, right in front of me, was *The Quality Roadmap*!"
What does one say when one unexpectedly encounters one's own, just-published book for the first time in a bookstore? Does etiquette preclude an "all right"? Does one simply smile? Is it permissible to grab the arm of a clerk and claim authorship?
Karen was somewhat unprepared because the January day she saw the book was so soon after the book's expected availability date. But it was a pleasant experience for the first-time book author to discover that at least some discriminating bookstores are carrying the work she codeveloped with her SWI partners.
Barnes & Noble browsers less familiar than Karen with the book will be able to open the cover and find out more on the inside of the dust jacket:
"Whether your company is straying and struggling midway along the quality path or has just begun the journey, *The Quality Roadmap* will help you get back on the road. And you'll not only continue your journey, but you'll see your destination clearly and understand why your company is on this pilgrimage...
"It's a pragmatic, universal planning model developed by the authors and the Council for Continuous Improvement, a consortium of more than 140 quality-seeking companies. This model is based on patterns of success that both the authors and the Council found in research and was created specifically to help you take a long-range look at your company's quality efforts and figure out: Where are we now? Where do we want to be? How do we get there?"
Heavy questions for the quality aficionado. If the answers to those dust jacket questions are of interest to you, may we suggest a visit to your local discriminating bookseller or a call to the book's publisher: AMACOM Books, a division of the American Management Association, 135 W. 50th Street, New York, NY 10020; 1-800-262-9699. The book's ISBN is 0-8144-5117-9, and the price on the dust jacket is $24.95.
Dick Hill, Ph.D., is the most recent addition to the SWI ♦ Svenson & Wallace, Inc. consulting team. Dick offers a broad background of experience across a spectrum of industries. His 15 years with oil, gas, and petrochemical firms include line management, staff support, and internal consultant assignments. A recent assignment at Amoco Oil Company with the Amoco Pipeline subsidiary involved restructuring the organization from a traditional, functional, geographic structure to team-based, product-focused, strategic business units. Dick has also developed productivity and quality improvement programs across a range of industries. Dick holds a bachelor's degree in business administration, a master's in business administration, and a doctorate from the University of Wisconsin. In his free time, Dick enjoys running, cooking, reading psychological and business literature, and listening to jazz and classical music.
Janet James recently joined our production staff. She brings with her more than ten years of experience in desktop publishing, graphic arts, and spreadsheets. Janet has utilized these skills to create newsletters, brochures, education/training materials, sales and marketing tools, software documentation, and CBT courses. Before joining SWI, Janet contracted for AT&T, Arthur Andersen, General Motors, and Subaru Technical Design Center. Off the job, Janet enjoys World War II history and classical music.
Scott Carlson is the newest member of our administrative staff. Scott assists our business manager with financial records, invoices, and our time-tracking database. Scott has a financial background in both retailing and property management. He has worked as the senior property manager of a property management company where he negotiated contracts and managed the budget. In his spare time, Scott enjoys golfing, gardening, and cooking.
Ray Svenson and Mark Schroeder, Manager, Learning System-Human Resources of Amoco Production Company (APC), will present “Designing an Organizational Learning System to Achieve Strategic Business Results” at the national NSPI Conference in April. They will discuss how Amoco designed and implemented a new learning system integrating organizational development and training to help achieve its strategic business goals. This effort, led by a cross-functional team of APC personnel and supported by executive management, will allow APC to better manage and coordinate resources across operating units.
Also at NSPI, Guy Wallace and Pete Hybert will present “Curriculum Architecture Design Process,” a systems approach for organizing training products required to meet strategic business needs. This process integrates principles from the disciplines of quality, systems engineering, product management, instructional design, and performance technology in a systematic approach for addressing the skill and knowledge requirements of a business.
The Training and Development Strategic Plan Workbook, authored by Ray Svenson and Monica Rinderer, was selected as a finalist in the publications/communications category for the NSPI 1993-1994 Awards of Excellence. The Awards will be announced at the NSPI Conference in April.
Karen Wallace presented “World Class Training” at the Midwest Training Conference in March. This session described the role of training in business performance, provided a detailed description of World Class Training Systems, and outlined a proven, four-phase strategic planning approach for its achievement. Karen and Ray Svenson also presented “A Roadmap for Implementing TQM” at the ASTD-sponsored Best of America Conference in January. They also spoke about their book *The Quality Roadmap: How to Get Your Company on the Quality Track — And Keep It There*.
Dreama Perry presented “The TQM Roadmap and Operations Systems of the Future” to the Northeastern Illinois University American Production and Inventory Control Society (APICS) student chapter this past winter. Her presentation emphasized how critical it is for manufacturing companies to develop business architectures, especially in light of current trends toward globalization within the manufacturing industry. Dreama gave the audience an overview of the components of a business architecture and an explanation of some of the benefits resulting from its development.
We congratulate Diane Wagner on her new position as a member of the board of directors of ibstpi (International Board of Standards for Training, Performance, and Instruction). A nonprofit service organization, ibstpi works to improve performance in a variety of professions. The ibstpi board is engaged in ongoing research on new standards and on advances in technology and measurement systems. The next board meeting is in April at the NSPI Conference in San Francisco. For more information regarding ibstpi, please contact Heather Castline at ibstpi publications, 800-236-4303.
SWI ♦ Svenson & Wallace is pleased to announce that we have signed a new lease with our current landlord. The expansion of our office is well underway and should be complete by early April. We are happy to tell you that for the convenience of our clients and friends, our address and telephone number will not change. Our new office suite will include three meeting rooms, one large conference room, and one training room. Stop by, visit our office, and see our new facilities!
Next in *Pursuing Performance*:
♦ The SWI PACT™ Process: Phase 5
♦ Simulations in Training
♦ And more...
*Pursuing Performance* is published quarterly for the clients and friends of SWI. Our goal is to share specific applications of Human Performance Technology and Total Quality Management technologies in order that organizations may improve quality and productivity. Contributing articles are encouraged. Direct submissions and comments to Jennifer L. Freeland, SWI, 1733 Park Street, Suite 201, Naperville, IL 60563; voice (708) 416-3323; fax (708) 416-9770.
Copyright © 1994 by SWI, unless otherwise noted. All rights reserved.
Context-Sensitive Error Correction: Using Topic Models to Improve OCR
Michael Wick Michael G. Ross Erik G. Learned-Miller
University of Massachusetts Amherst
Computer Science and Psychology Departments
Amherst, MA, USA
email@example.com, firstname.lastname@example.org, email@example.com
Abstract
Modern optical character recognition software relies on human interaction to correct misrecognized characters. Even though the software often reliably identifies low-confidence output, the simple language and vocabulary models employed are insufficient to automatically correct mistakes. This paper demonstrates that topic models, which automatically detect and represent an article’s semantic context, reduce error by 7% relative to a global word distribution in a simulated OCR correction task. Detecting and leveraging context in this manner is an important step towards improving OCR.
1. Introduction
As researchers and the general public become more reliant on computer-searchable document databases, paper documents that have not been translated into computer strings are in grave danger of being forgotten [1]. Optical character recognition (OCR) software has made great strides over the past few decades, but the translation of documents into searchable strings still requires that humans manually proofread and correct the output. This paper presents a new algorithm for automatically correcting errors in OCR output. By automatically detecting the semantic context of OCRed documents, our algorithm can use topic-specific word frequency information to correct corrupted words.
While there has been much focus on improving the accuracy of OCR by incorporating language models to guide error detection and correction, these models are typically global and must be learned and applied within the same domain to be effective. For example, the distribution of words in Car & Driver differs from the distribution of words in the New England Journal of Medicine; using word frequencies observed in one periodical to correct OCR results from the other could therefore be disastrous.
Imagine proofreading the output of OCR software and encountering the string “tonque”. Given no context, it is reasonable to believe that the software could commonly misread ‘g’ as ‘q’ and that the actual word should be “tongue”. However, given that the article is about sports cars, we may want to change our beliefs; perhaps the word is more likely to be “torque” than “tongue”. While humans have an innate ability to adapt to the properties of a specific domain, a global word frequency model does not.
One possible solution is to create many independent topic-specific vocabulary models, but that imposes high training costs and requires end users to semantically classify every article prior to OCR. Additionally, it does not solve the problem of OCRing documents that contain multiple categories.
In the domains of social networks and document corpus modeling, these questions are often addressed by applying topic models. Topic models can automatically describe a document as a mixture of semantic topics, each with an independent vocabulary distribution. These models can be learned and applied automatically, dynamically determining the context of new documents without user input. The additional strength they bring to language modeling offers the prospect of improved OCR results and reduced reliance on human error correction.
This paper describes the use of a topic model to correct simulated OCR output and demonstrates that it outperforms a global word probability model across a substantial data set. This use of contextual modeling is the first step towards a number of promising new techniques in document processing.
2. Related Work
Topic models [5] come in a number of varieties; this work uses Latent Dirichlet Allocation (LDA), developed by Blei et al. [2]. LDA is a generative model that represents each document as a “bag of words” in which word order is discarded and only word frequencies are modeled. A corpus is represented by a Dirichlet distribution that indicates the probabilities of different topic mixtures. A new document is generated by selecting a topic mixture; for example, the document might be 80% about music, 10% about computers, and 10% about politics. This defines a document-specific multinomial distribution. To generate individual words, repeatedly draw a topic from this distribution and then sample from the multinomial word probability distribution associated with that topic.
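This generative story is easy to sketch in code. The following Python is a minimal illustration with made-up topics and vocabularies; it uses only two topics rather than the three in the example above, and it fixes the topic mixture directly instead of drawing it from a Dirichlet prior as full LDA does:

```python
import random

def sample_document(topic_mixture, topic_word_dists, n_words, seed=0):
    """Generate a bag of words from an LDA-style model.

    topic_mixture: per-topic probabilities for this document (in full LDA
        this mixture is itself drawn from a corpus-level Dirichlet prior).
    topic_word_dists: one {word: probability} dict per topic.
    """
    rng = random.Random(seed)
    topics = list(range(len(topic_mixture)))
    words = []
    for _ in range(n_words):
        # Draw a topic for this word from the document's mixture...
        t = rng.choices(topics, weights=topic_mixture, k=1)[0]
        # ...then draw the word from that topic's vocabulary distribution.
        vocab = list(topic_word_dists[t])
        probs = list(topic_word_dists[t].values())
        words.append(rng.choices(vocab, weights=probs, k=1)[0])
    return words

# Toy model: a document that is 80% about music and 20% about computers.
dists = [
    {"guitar": 0.5, "chord": 0.5},     # "music" topic vocabulary
    {"compiler": 0.5, "kernel": 0.5},  # "computers" topic vocabulary
]
doc = sample_document([0.8, 0.2], dists, n_words=10)
```

Because word order is discarded, the output is just a multiset of words; that is exactly the "bag of words" assumption.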
LDA models can be learned automatically from unlabeled document collections and then used to infer the topics present in a new document. No user input is required, a crucial difference between these techniques and those used by Strohmaier et al. [6] to correct OCR output with topic-specific dictionaries. Furthermore, LDA models allow a document to contain any mixture of topics, avoiding the need to artificially divide articles into fixed categories. Wei and Croft [7] have demonstrated that useful LDA models can be built from large corpora.
There have been many previous attempts to use language models to improve OCR results. Zhang and Chang [8] post-processed OCR output with a linear combination of language models to correct errors. Hull [3] used a Hidden Markov Model to incorporate syntactic information into character recognition.
3. Topic Modeling for Error Correction
3.1 Model construction
The error correction algorithm consists of two models: a topic model that provides information about word probabilities and an OCR model that represents the probability of character errors.
The LDA topic model is trained from a collection of unlabeled documents using Andrew McCallum’s MALLET software [4]. We assume that these documents are free of OCR errors, and the output of the training is two sets of probability distributions: the Dirichlet prior over topic mixtures and a set of per-topic multinomial word distributions (as discussed in Section 2). During the error correction process these distributions will be used to detect the topic mixture present in each OCR document, which will in turn enable good estimation of the relative probabilities of possible word corrections.
The OCR model represents the probability of different character corruptions in the documents. It is clear that some corruptions are much more likely than others — for example, OCR software is more likely to mistake ‘i’ for ‘j’ than to confuse ‘x’ and ‘l’. Therefore the OCR model is non-uniform. We expect OCR to produce the correct result on most character instances, so the probability of a correct recognition is relatively high. The notation $P(l^f | l^s)$ designates the probability that the OCR software generates letter $l^f$ given that the truth is letter $l^s$. This model is used both to generate simulated OCR output for testing purposes and as part of the correction process. Statistics from actual character recognition output could be used to construct an OCR model that would enable our method to be used as a post-processor for real-world OCR software.
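To make the simulation step concrete, here is a hedged sketch of how a character confusion model can corrupt clean text to produce simulated OCR output. The `CONFUSIONS` table and its probabilities are invented for illustration; the paper's actual model and values are not reproduced here:

```python
import random

# Hypothetical confusion model: for each true letter, a distribution over
# likely misreads. Probability mass not listed stays on the correct letter,
# so correct recognition remains the most likely outcome.
CONFUSIONS = {"i": {"j": 0.05}, "g": {"q": 0.04}, "o": {"c": 0.03}}

def corrupt(text, seed=1):
    """Simulate OCR output by sampling a per-character corruption."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        r = rng.random()
        for wrong, p in CONFUSIONS.get(ch, {}).items():
            if r < p:
                ch = wrong
                break
            r -= p
        out.append(ch)
    return "".join(out)

noisy = corrupt("tongue gives good grip")
```

Characters with no entry in the table are always passed through unchanged, and the output always has the same length as the input, matching the substitution-only error model used in the experiments.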
3.2 Error-correction algorithm
The algorithm takes an OCR document and a list of its incorrect words. Currently, the incorrect word list is provided by an oracle, but many OCR packages are capable of indicating low certainty words to their users.
For each incorrect word $w_i$ in the document, we generate a list of all strings that differ from $w_i$ by zero, one, or two characters. Due to the combinatorial explosion of this method, we do not consider words that are three or more characters different from the original string. For each word $w_c$ in the candidate list, we assign a score based on the model that is used and the letters that are flipped. This combines the OCR model and the probability of the candidate word into:
$$\text{Score}(w_c) = P(w_c) \prod_{j=1}^{N} P(l^f_j | l^s_j)$$
where $P(w_c)$ is the probability of the word, $N$ is the number of letters in the word, and $P(l^f_j | l^s_j)$ is the probability that letter $l^s_j$ was mistaken for $l^f_j$. For a topic model the probability of a word is
$$P(w) = \sum_{k=1}^{M} P(w | t_k) P(t_k)$$
where $w$ is a word, $M$ is the number of topics in the model, and $t_k$ is a topic. $P(t_k)$ is computed by applying the trained topic model to the correctly recognized words in the document.
After the scores of all candidates are computed, the word is corrected by substituting the highest-scoring candidate. Ties are broken randomly and corrections only occur if the selected string scores strictly higher than the original.
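The candidate-generation and scoring steps can be sketched as follows (our illustration, not the authors' code; the alphabet, the toy probabilities, and the `p_topic` inputs are assumptions). `word_prob` implements the topic-mixture formula for $P(w)$ given above:

```python
from itertools import combinations, product
import string

def candidates(word, alphabet=string.ascii_lowercase, max_flips=2):
    """All strings that differ from `word` in at most `max_flips` positions
    (the algorithm considers zero, one, or two flipped characters)."""
    out = {word}
    for k in range(1, max_flips + 1):
        for positions in combinations(range(len(word)), k):
            for letters in product(alphabet, repeat=k):
                chars = list(word)
                for p, l in zip(positions, letters):
                    chars[p] = l
                out.add("".join(chars))
    return out

def word_prob(word, p_topic, p_word_given_topic):
    """P(w) = sum_k P(w | t_k) P(t_k): the topic-mixture word probability."""
    return sum(p_topic[k] * p_word_given_topic[k].get(word, 0.0)
               for k in range(len(p_topic)))

def score(cand, observed, p_word, p_ocr):
    """Score(w_c) = P(w_c) * prod_j P(l^f_j | l^s_j), where p_ocr[(f, s)]
    is the probability of reading f when the true letter is s."""
    s = p_word(cand)
    for f, t in zip(observed, cand):
        s *= p_ocr.get((f, t), 0.0)
    return s
```

Correction then amounts to picking the highest-scoring candidate, substituting it only when it strictly beats the original string, as described above.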
4. Experiments
4.1 Data
For our experiments we use the publicly accessible 20 Newsgroups data corpus available at http://people.csail.mit.edu/jrennie/20Newsgroups/. This data set is well suited for our experiments as it contains documents from various domains. For the experiments, we used documents from the alt.atheism (480 documents), comp.graphics (588 documents), sci.space (594 documents), talk.politics.guns (549 documents), talk.politics.mideast (569 documents), talk.politics.misc (467 documents), rec.autos (595 documents), and talk.religion.misc (377 documents) newsgroups.
We tested our system on corpora containing two (comp.graphics and talk.politics.mideast), four (adding sci.space and talk.politics.guns), six (adding alt.atheism and talk.politics.misc) and eight (adding talk.religion.misc and rec.autos) newsgroups. In each case, the documents were randomly divided, setting aside 100 testing documents and using the remainder for training. The testing documents were corrupted by the OCR error model described previously and lists of the corrupted words in each document were provided to the correction algorithm.
The same model parameters were used throughout the experiments to demonstrate that no extensive parameter tuning is necessary for this method. The number of topics was fixed to 30 — even though we never test on 30 newsgroups, each newsgroup might cover several distinct, although related, topics.
Using the algorithm described previously, we evaluated two word models: a global word frequency model and an LDA topic model. The only difference between the models is in the calculation of $P(w_c)$. The global model used the same multinomial distribution for every correction of every document, while the topic model used the correctly recognized words to determine the topic probabilities and adapt $P(w_c)$ to the local context.
4.2 Results
Table 1 displays the error correction results for both global and topic-based language models while varying the number of newsgroups the documents are drawn from. The topic model outperforms the global model for every tested combination of newsgroups, reducing error by an average of 7%.
An example from the rec.autos newsgroup demonstrates how the topic model enables this improvement in error correction. It is possible to qualitatively understand the topics in the model by looking at the most probable words under each one’s distribution. In Figure 1, we see several of the most probable topics given the correct words in a particular rec.autos document. Clearly, topic 10 contains words related to cars, while the other topics seem to relate to other subjects such as science or religion.
Figure 2 shows the probabilities of each of the Figure 1 topics given the rec.autos document. Topic 10, the “cars” topic, clearly dominates this distribution. In Figure 3, we see that the topic model was able to correct several corrupt car-related strings while the global model made incorrect substitutions. Clearly, this success was the result of the document-specific contextual information provided by the topic model.
5. Conclusion and Future Work
We developed an algorithm for applying topic modeling to OCR error correction. This model outperformed a global word distribution on the error correction task on simulated data due to its ability to determine the context of each document and provide a tailored word probability model.
The initial success of using topic models to correct simulated OCR output points to a number of exciting avenues for future work. Applying it as a post-processor to real OCR output will allow us to further validate the approach, as will the collection of larger data sets. We expect that the model’s advantages over a global word frequency model will increase with the diversity of the test and training corpora.
Additionally this problem provides an excellent framework for testing advances in topic modeling. Often researchers provide lists of topic words to demonstrate their success, but OCR correction could be an objective metric of success.
The topic model approach to OCR correction relies on an error-free set of digital documents for training. Yet topic modeling can also be made practical without such a training set. Many archival OCR projects involve converting back issues of academic journals so they can be useful for future researchers. Some of these journals are in old fonts or printed on decaying paper stock, so OCR software would only recognize a few words with high confidence. Due to evolutions in vocabulary, there might be very few or no equivalent digital documents for use in topic model training.
However, with a large enough collection of related documents, an initial topic model could be formed from the relatively few words that are confidently recognized. This initial model might allow for high confidence in more words on a second pass, which would in turn lead to a more detailed topic model. Thus a topic model could be bootstrapped from a weak OCR algorithm and result in a strong OCR algorithm for difficult documents.
This iterative style is part of the general *iterative contextual modeling* (ICM) approach to OCR. We believe that ICM can provide a framework for leveraging not only language but also appearance context to advance to new levels of performance on challenging documents.
6. Acknowledgements
This work was supported in part by the Center for Intelligent Information Retrieval, in part by The Central Intelligence Agency, the National Security Agency and National Science Foundation under NSF grant #IIS-0326249, in part by The Central Intelligence Agency, the National Security Agency and National Science Foundation under NSF grant #IIS-0427594, and in part by U.S. Government contract #NBCH040171 through a subcontract with BBNT Solutions LLC. Any opinions, findings and conclusions or recommendations expressed in this material are the authors’ and do not necessarily reflect those of the sponsor.
References
[1] H. Baird. Digital libraries and document image analysis. In *International Conference on Document Analysis and Recognition*, 2003.
[2] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. *Journal of Machine Learning Research*, 3, 2003.
[3] J. Hull. Incorporating language syntax in visual text recognition with a statistical model. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 18(12), 1996.
[4] A. K. McCallum. Mallet: A machine learning for language toolkit. http://mallet.cs.umass.edu, 2002.
[5] M. Steyvers and T. Griffiths. Probabilistic topic models. In T. Landauer, D. McNamara, S. Dennis, and W. Kintsch, editors, *Latent Semantic Analysis: A Road to Meaning*. Laurence Erlbaum, 2006. In press.
[6] C. Strohmaier, C. Ringlstetter, K. Schulz, and S. Mihov. Lexical postcorrection of OCR-results: The web as a dynamic secondary dictionary? In *International Conference on Document Analysis and Recognition*, 2003.
[7] X. Wei and B. Croft. LDA-based document models for ad-hoc retrieval. In *Proceedings of SIGIR06*, 2006.
[8] D. Zhang and S. Chang. A Bayesian framework for fusing multiple word knowledge models in videotext recognition. In *IEEE Conference on Computer Vision and Pattern Recognition*, 2003.
ON THE JONES POLYNOMIAL AND ITS APPLICATIONS
ALAN CHANG
ABSTRACT. This paper is a self-contained introduction to the Jones polynomial that assumes no background in knot theory. We define the Jones polynomial, prove its invariance, and use it to tackle two problems in knot theory: detecting amphichirality and finding bounds on the crossing numbers.
1. Preliminaries
1.1. Definitions. For the most part, it is enough to think of a knot as something made physically by attaching the two ends of a string together. Since knots exist in three dimensions, when we need to draw them on paper, we often use knot diagrams. Figure 1.1 contains examples of knot diagrams.

Figure 1.1. Examples of knot diagrams
As we can see from Figure 1.1b and Figure 1.1c, different diagrams can represent the same knot. To see that these two are really the same knot, we could make Figure 1.1b out of a piece of string and move the string around in space (without cutting it) so that it looks like Figure 1.1c.
There are some restrictions on knot diagrams: (1) each crossing must involve exactly two segments of the string and (2) those segments must cross transversely. (See Figure 1.2.)

Figure 1.2. Examples of invalid knot diagrams
There are two ways to travel around a knot; these correspond to the orientations of the knot. An oriented knot is a knot with a specified orientation. On a knot diagram, we can indicate an orientation via an arrow. (See Figure 1.3.)

**Figure 1.3.** Two orientations of the trefoil
Sometimes we’ll use more than one piece of string, so we define a link to be a generalization of a knot: links can be made by multiple pieces of string. For each string, we attach the two ends together. (Note that we do not attach the ends of two different strings together.)
The number of components of a link is the number of strings used. (Observe that every knot is a link with one component.) A link diagram is a straightforward generalization of a knot diagram, and an oriented link is a link where all the components have specified orientations.
Figure 1.4 contains examples of two-component links.

**Figure 1.4.** Examples of links with two components
1.2. More mathematically... Readers who are not satisfied with the definitions given above may prefer the definition of a knot given here.
**Definition 1.1.** Let $X$ be a topological space. An isotopy of $X$ is a continuous map $h : X \times [0, 1] \to X$ such that $h(x, 0) = x$ and $h(\cdot, t)$ is a homeomorphism for each $t \in [0, 1]$.
**Definition 1.2.** A knot is a smooth embedding $f : S^1 \to \mathbb{R}^3$. Two knots are considered equivalent if they are related by a smooth isotopy of $\mathbb{R}^3$.
This definition of a knot has the advantage of placing knot theory on firm mathematical grounding. For more detailed definitions, see [Cro04], [Mur96], or [Lic97].
**Remark 1.3.** We need the embeddings in Definition 1.2 to be smooth to avoid pathological “wild” knots. Instead of requiring the maps to be smooth, we could require them
to be piecewise linear. Either of these is enough to guarantee the non-existence of wild knots. See [Cro04, Chapter 1] for what can happen if we do not assume either regularity condition.
1.3. Knot invariants. A knot invariant is something (such as a number, matrix, or polynomial) associated to a knot. A link invariant is defined similarly for links.
Example 1.4. The unknotting number of a knot $K$ is the minimum number of times that $K$ must be allowed to pass through itself to get to the unknot. This is a knot invariant.
Example 1.5. We will define something called the crossing number.
Suppose for a knot $K$, we take a diagram $D$ of $K$ and count the number of crossings in $D$. This number is not an invariant of $K$ because $K$ has many different diagrams that differ in number of crossings. For example, in Figure 1.5, we see two different diagrams of the unknot.

**Figure 1.5.** Two diagrams of the unknot, with different numbers of crossings.
Thus, we have not yet successfully defined a knot invariant. However, if we consider all diagrams of $K$ and take the minimum number of crossings over all diagrams, then we do have an invariant of $K$. This is called the crossing number of a knot. We will see this invariant again in section 4.
1.4. Reidemeister moves. In 1926, Kurt Reidemeister proved that given two diagrams $D_1$ and $D_2$ of the same knot, it is always possible to get from one diagram to the other via a finite sequence of moves, now called Reidemeister moves. These moves can be divided into three types:

**Figure 1.6.** The three types of Reidemeister moves
We can think of Type I as adding/removing a twist, Type II as crossing/uncrossing two strands, and Type III as sliding a strand past a crossing.
It is clear that these moves do not change the knot. What is important is that these three moves (along with planar isotopy) are enough for us to get from one diagram of
a knot to any other diagram of the same knot. For a proof of this remarkable fact, see [Mur96, Chapter 4].
This discovery gives us a method to prove that certain quantities are knot invariants. Suppose there is a quantity that we are trying to show is a knot invariant, but it is defined in terms of a knot diagram. (As shown in Example 1.5, we have to be careful if we try to define a knot invariant in terms of a single knot diagram of a knot.) Because of Reidemeister’s theorem, if we can show the quantity is unchanged when we alter the diagram via any Reidemeister move, then we know it is an invariant. We will use this technique in the next section.
2. The Jones Polynomial
2.1. Introduction. The Jones polynomial is an invariant\(^1\) whose discovery in 1985 brought on major advances in knot theory. For a link \(L\), the Jones polynomial of \(L\) is a Laurent polynomial in \(t^{1/2}\). (By “Laurent polynomial,” we mean that both positive and negative integral powers of \(t^{1/2}\) are allowed.)
Vaughan Jones’s construction of the polynomial was through a complicated process. Louis Kauffman developed a much easier construction by introducing another polynomial, called the bracket polynomial. This polynomial is defined in terms of link diagrams instead of links. As we will see, the bracket polynomial is not a link invariant.
2.2. Resolving a crossing. We will want to relate the bracket polynomial of a link diagram \(D\) to bracket polynomials of “simpler” link diagrams. The following definition makes the idea of simplifying a diagram precise.
**Definition 2.1.** Suppose we start with a crossing. The *0-resolution* and the *1-resolution* of this crossing are the two crossingless local pictures obtained by deleting the crossing and reconnecting its four ends with a pair of non-intersecting arcs; which reconnection is the 0-resolution and which is the 1-resolution is fixed by convention. (For example, see Figure 2.1.)

**Figure 2.1.** Resolving a crossing
A diagram may need to be rotated so that the crossing in question appears in the standard position. Note that a $90^\circ$ rotation interchanges the two resolutions: the 0-resolution of the rotated crossing is the 1-resolution of the original crossing, and vice versa.
\(^1\)More precisely, the Jones polynomial is an invariant of oriented links. However, when the link has one component (i.e. is a knot), the polynomial does not depend on the orientation. Thus, the Jones polynomial is both an (unoriented) knot invariant as well as an oriented link invariant.
Here is one way to think of a 0-resolution: if we are traveling along a knot and reach a crossing in which we are on the upper strand, then we turn left onto the lower strand. (For a 1-resolution, we would turn right instead.)
2.3. The Bracket Polynomial. The bracket polynomial of a diagram $D$ is a Laurent polynomial in one variable $A$ and is denoted $\langle D \rangle$. It is completely determined by three rules:
(BP1) \quad \langle \bigcirc \rangle = 1
(BP2) \quad \langle D \sqcup \bigcirc \rangle = (-A^2 - A^{-2}) \langle D \rangle
(BP3) \quad \langle D \rangle = A \langle D_0 \rangle + A^{-1} \langle D_1 \rangle, \text{ where } D_0 \text{ and } D_1 \text{ are the 0- and 1-resolutions of a crossing of } D
(The BP stands for “bracket polynomial.”) Let’s go through what these rules mean, one by one.
(1) The first relation (BP1) states that the bracket polynomial of the knot diagram $\bigcirc$ is the constant polynomial 1. (Note, however, that this does not mean that the bracket polynomial of any diagram depicting the unknot is 1. For example, a circle with three twists of the same handedness added is also a diagram of the unknot, but this diagram turns out to have bracket polynomial $-A^9$.)
(2) For the second relation, the expression $D \sqcup \bigcirc$ denotes a diagram $D$ with an extra circle added. Furthermore, the circle does not cross the rest of the diagram. If we do have a diagram of this form, then BP2 means that we can find its bracket polynomial by starting with the bracket polynomial of the diagram with the circle removed and multiplying it by $-A^2 - A^{-2}$. For example, using BP2 (along with BP1), we have
$$\langle \bigcirc \bigcirc \rangle = (-A^2 - A^{-2}) \langle \bigcirc \rangle = -A^2 - A^{-2}$$
(3) In order to apply the third relation, we need to resolve crossings. Start with a diagram $D$ and fix a crossing. If $D_0$ and $D_1$ are the 0- and 1-resolutions of this crossing, then BP3 states that $\langle D \rangle = A \langle D_0 \rangle + A^{-1} \langle D_1 \rangle$. (Figure 2.1 shows an example of resolving a crossing.)
The key idea of computing the bracket polynomial of a diagram $D$ lies in BP3. Using this rule we can recursively compute bracket polynomials of knot diagrams via diagrams of fewer crossings. Eventually we reach diagrams with no crossings.
Example 2.2. Let us consider the Hopf link \(\mathcal{H}\). Applying BP3 to one of its crossings, and writing \(\mathcal{H}_1\) and \(\mathcal{H}_2\) for the resulting 0- and 1-resolutions, gives us:
\[
\langle \mathcal{H} \rangle = A \langle \mathcal{H}_1 \rangle + A^{-1} \langle \mathcal{H}_2 \rangle \tag{2.1}
\]
Thus, we have reduced the problem to determining the bracket polynomials of the two diagrams on the right side of (2.1), each of which has one crossing. Using BP3 again,
\[
\langle \mathcal{H}_1 \rangle = A \langle \mathcal{H}_{11} \rangle + A^{-1} \langle \mathcal{H}_{10} \rangle, \qquad \langle \mathcal{H}_2 \rangle = A \langle \mathcal{H}_{21} \rangle + A^{-1} \langle \mathcal{H}_{20} \rangle \tag{2.2}
\]
Combining (2.1) and (2.2), we see that
\[
\langle \mathcal{H} \rangle = A^2 \langle \mathcal{H}_{11} \rangle + \langle \mathcal{H}_{10} \rangle + \langle \mathcal{H}_{21} \rangle + A^{-2} \langle \mathcal{H}_{20} \rangle
\]
Invoking BP1 and BP2 gives us
\[
\langle \mathcal{H}_{10} \rangle = \langle \mathcal{H}_{20} \rangle = 1 \quad \text{and} \quad \langle \mathcal{H}_{11} \rangle = \langle \mathcal{H}_{21} \rangle = -A^2 - A^{-2}
\]
Putting everything together gives us \(\langle \mathcal{H} \rangle = -A^4 - A^{-4}\)
In Example 2.2, we decomposed the Hopf link into four diagrams. The four diagrams correspond to the four ways of resolving the two crossings of \(\mathcal{H}\). Each of these diagrams is called a *smoothing*.
Definition 2.3. Given a link diagram \(D\), a *smoothing* of \(D\) is a diagram in which every crossing of \(D\) has been resolved (either by a 0-resolution or a 1-resolution).
We can see that in general, a link diagram \(D\) with \(n\) crossings has \(2^n\) distinct smoothings. Furthermore, we can see from Example 2.2 that these smoothings allow us to determine the bracket polynomial of \(D\).
In the general case, number the crossings 1, \ldots, \(n\). For \(\epsilon_1, \ldots, \epsilon_n \in \{0, 1\}\), let \(D_{\epsilon_1 \epsilon_2 \cdots \epsilon_n}\) be the smoothing of \(D\) where crossing \(i\) is resolved via a \(\epsilon_i\)-resolution. For example, if \(D = \mathcal{H}\), and the top crossing is labeled 1 (so the bottom crossing is labeled 2), then
\[
D_{00} = \mathcal{H}_{11}, \quad D_{01} = \mathcal{H}_{10}, \quad D_{10} = \mathcal{H}_{21}, \quad D_{11} = \mathcal{H}_{20} \tag{2.3}
\]
For a smoothing \(D_\epsilon\), where \(\epsilon = \epsilon_1 \epsilon_2 \cdots \epsilon_n\), define
\[
s_0(\epsilon) = \text{number of 0-resolutions in } D_\epsilon
\]
\[
s_1(\epsilon) = \text{number of 1-resolutions in } D_\epsilon
\]
Using this notation, we see that a smoothing $D_\epsilon$ contributes a term $A^{s_0(\epsilon) - s_1(\epsilon)} \langle D_\epsilon \rangle$ to the bracket polynomial $\langle D \rangle$. Define $\langle D, \epsilon \rangle = A^{s_0(\epsilon) - s_1(\epsilon)} \langle D_\epsilon \rangle$. Then we can summarize our results from above in the following lemma.
**Lemma 2.4.** Let $D$ be a link diagram with $n$ crossings. Then
$$\langle D \rangle = \sum_{\epsilon \in \{0,1\}^n} \langle D, \epsilon \rangle$$
**Remark 2.5.** The bracket polynomial of a smoothing $D_\epsilon$ of $D$ is particularly easy to compute. We know $D_\epsilon$ consists of a number (say $k$) of non-crossing loops. (For example, see (2.3).) By repeatedly applying BP2, we see that $\langle D_\epsilon \rangle = (-A^2 - A^{-2})^{k-1}$.
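Lemma 2.4 and Remark 2.5 together give a complete (if exponential) recipe for computing $\langle D \rangle$. The sketch below implements that state sum; the encoding of a diagram — for each crossing, the two pairs of arc labels joined by its 0- and 1-resolutions — is our own ad-hoc convention, not a standard library format:

```python
from itertools import product
from math import comb

def bracket(crossings, arcs):
    """Kauffman bracket via the state sum of Lemma 2.4.
    `crossings`: one entry per crossing, a pair (res0, res1); each resolution
    lists the (arc, arc) joins it makes.  Returns {exponent of A: coeff}."""
    poly = {}
    n = len(crossings)
    for state in product((0, 1), repeat=n):
        parent = {a: a for a in arcs}            # union-find over arc labels
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for crossing, eps in zip(crossings, state):
            for a, b in crossing[eps]:
                parent[find(a)] = find(b)
        k = len({find(a) for a in arcs})          # loops in this smoothing
        s0 = state.count(0)
        s1 = n - s0
        # term A^(s0 - s1) * (-A^2 - A^-2)^(k - 1), expanded (Remark 2.5)
        for i in range(k):
            e = (s0 - s1) + 4 * i - 2 * (k - 1)
            poly[e] = poly.get(e, 0) + (-1) ** (k - 1) * comb(k - 1, i)
    return {e: c for e, c in poly.items() if c != 0}

# A two-crossing diagram of the Hopf link in this encoding (arcs 1..4).
hopf = [(((1, 3), (2, 4)), ((1, 4), (2, 3))),
        (((1, 3), (2, 4)), ((1, 4), (2, 3)))]
```

Running `bracket(hopf, [1, 2, 3, 4])` reproduces $\langle \mathcal{H} \rangle = -A^4 - A^{-4}$ from Example 2.2, and `bracket([], [1])` gives the constant polynomial 1, matching BP1.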
2.4. **Invariance under Type II and Type III Moves.** Because of our discussion of invariants and Reidemeister moves at the end of subsection 1.4, we should study how the bracket polynomial behaves under Reidemeister moves.
**Lemma 2.6** (Invariance under Type II). If link diagrams $D$ and $D'$ are related by one application of a Type II Reidemeister move, then $\langle D \rangle = \langle D' \rangle$.
**Proof.** Let $D$ be the two-crossing diagram created by a Type II move, and let $D'$ be the crossingless pair of strands that the move replaces. Label the two crossings so that 1 is on the left and 2 is on the right. Among the four smoothings of $D$, two coincide: $D_{00}$ and $D_{11}$ are the same crossingless diagram, which we call $S$. Of the two mixed smoothings, one is $D'$ itself and the other is $S$ with an extra circle, $S \sqcup \bigcirc$.
As in Lemma 2.4, we have
$$\langle D \rangle = A^2 \langle D_{00} \rangle + \langle D_{01} \rangle + \langle D_{10} \rangle + A^{-2} \langle D_{11} \rangle \tag{2.4}$$
Using BP2, we get $\langle S \sqcup \bigcirc \rangle = (-A^2 - A^{-2}) \langle S \rangle$, so three of the four terms on the right hand side of (2.4) cancel out:
$$A^2 \langle S \rangle + \langle S \sqcup \bigcirc \rangle + A^{-2} \langle S \rangle = A^2 \langle S \rangle + \left( (-A^2 - A^{-2}) \langle S \rangle \right) + A^{-2} \langle S \rangle = 0 \tag{2.5}$$
We are left with $\langle D \rangle = \langle D' \rangle$. This proves that the bracket polynomial is unchanged under a Type II move. □
**Remark 2.7.** The coefficients in BP2 and BP3 were chosen specifically so that the three terms cancel out in (2.5).
**Lemma 2.8** (Invariance under Type III). If link diagrams $D$ and $D'$ are related by one application of a Type III Reidemeister move, then $\langle D \rangle = \langle D' \rangle$.
Proof. We could expand each of the two diagrams related by a Type III move into the 8 smoothings that correspond to the 8 possible ways of resolving their three crossings, and check directly that the two expansions agree.
However, we can make a cleaner argument by resolving only the crossing between the two upper strands. Write $D$ and $D'$ for the two diagrams related by the Type III move, and let $D_0, D_1$ (respectively $D'_0, D'_1$) denote the 0- and 1-resolutions of that crossing. Then BP3 gives
$$\langle D \rangle = A \langle D_0 \rangle + A^{-1} \langle D_1 \rangle \qquad \text{and} \qquad \langle D' \rangle = A \langle D'_0 \rangle + A^{-1} \langle D'_1 \rangle$$
To complete the proof, it suffices to show that
$$\langle D_0 \rangle = \langle D'_0 \rangle \qquad \text{and} \qquad \langle D_1 \rangle = \langle D'_1 \rangle$$
One of these equalities holds by two applications of Lemma 2.6, since the diagrams in question are related by two Type II moves. The other holds because the diagrams in question are related by a planar isotopy. □
2.5. Type I moves. We have shown that the bracket polynomial is invariant under Type II and Type III Reidemeister moves. If it were also invariant under Type I moves, then we would have a genuine link invariant. However, this is not the case, as the following computation shows. Let $D$ be a diagram containing a twist (a curl produced by a Type I move) and let $D'$ be the same diagram with the twist removed. Resolving the crossing of the twist, one resolution gives $D'$ itself, while the other gives $D'$ with an extra circle. Hence
$$\langle D \rangle = A \langle D' \rangle + A^{-1} \langle D' \sqcup \bigcirc \rangle = A \langle D' \rangle + A^{-1} \left( -A^2 - A^{-2} \right) \langle D' \rangle = -A^{-3} \langle D' \rangle \tag{2.6}$$
(For a twist of the opposite handedness, the same computation gives the factor $-A^{3}$.)
2.6. The Jones Polynomial. To resolve the issue of invariance under Type I moves, we will define another number associated to link diagrams, called the *writhe*.
Recall that an oriented link diagram is a diagram in which all components have a specified direction. Once we have an orientation, we can define positive and negative crossings by Figure 2.2.

Figure 2.2. Two types of crossings for an oriented diagram
Remark 2.9. If we reverse the direction of all components of a link, each crossing in Figure 2.2 simply has both of its arrows reversed, and its type (positive/negative) does not change. \(\diamond\)
Since a knot is a link with one component, we can define positive and negative crossings for knot diagrams without specifying an orientation on the knot. Note that this is not true for links in general. If we reverse the orientations of some (but not all) of the components of a link, then some crossing types will change.
**Example 2.10.** Consider the oriented trefoil given in Figure 2.3.

The three crossings on the left are positive crossings; the crossing on the right is a negative crossing. If we switch the orientation, the crossing types remain the same. \( \diamondsuit \)
Let \( n_+(D) \) and \( n_-(D) \) be the number of positive and negative crossings, respectively, of an oriented link diagram \( D \). For the trefoil in Figure 2.3, we have \( n_+(D) = 3 \) and \( n_-(D) = 1 \).
**Definition 2.11.** For an oriented link diagram \( D \), the *writhe* of \( D \) is \( w(D) = n_+(D) - n_-(D) \). \( \diamondsuit \)
For the trefoil in Figure 2.3, we have \( w(D) = 2 \). Observe that if we remove the twist on the right side of the trefoil in Figure 2.3, then we decrease \( n_- \) by 1. Thus, the writhe of the resulting diagram \( D' \) (a trefoil in its “typical” depiction) is \( w(D') = 3 \).
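In code, once each crossing of an oriented diagram has been classified as positive ($+1$) or negative ($-1$), Definition 2.11 is just a sum (a trivial sketch of ours):

```python
def writhe(crossing_signs):
    """w(D) = n_+(D) - n_-(D), with each crossing encoded as +1 or -1."""
    return sum(crossing_signs)
```

For the trefoil diagram of Figure 2.3, with three positive crossings and one negative one, `writhe([+1, +1, +1, -1])` gives 2, matching the computation above.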
As we have shown in the example above, the writhe is not invariant under Type I moves. Let $D$ be a diagram containing a negative twist and let $D'$ be the same diagram with the twist removed. Since removing the twist decreases $n_-$ by 1, we have
\[
w(D) = w(D') - 1 \tag{2.7}
\]
This shows that the writhe is not a link invariant. However, we can observe that like the bracket polynomial, the writhe is unchanged under Type II and Type III moves.
We know precisely how Type I moves affect the writhe and the bracket polynomial. These are given by (2.7) and (2.6), respectively. Thus, a certain combination of the writhe and the bracket polynomial will in fact give us a link invariant!
Recall that adding the twist multiplies the bracket polynomial by $-A^{-3}$, as computed in (2.6). The key idea we will use is that we can “cancel” this multiplication by using the writhe. The following lemma makes this idea precise.
**Lemma 2.12.** For an oriented link \( L \) with diagram \( D \), the polynomial \((-A)^{-3w(D)} \langle D \rangle\) is an invariant of the link \( L \).
Proof. Because both the writhe and the bracket polynomial are invariant under Type II and Type III moves, we only need to check that the quantity $(-A)^{-3w(D)}\langle D \rangle$ is unchanged under Type I moves. Let $D$ be a diagram with a negative twist and $D'$ the diagram with the twist removed; by (2.6) and (2.7) we have $\langle D \rangle = -A^{-3} \langle D' \rangle = (-A)^{-3} \langle D' \rangle$ and $w(D) = w(D') - 1$. Hence
\[
(-A)^{-3w(D)}\langle D \rangle = (-A)^{-3(w(D') - 1)} \cdot (-A)^{-3} \langle D' \rangle = (-A)^{-3w(D')} \langle D' \rangle \quad \square
\]
Remark 2.13. The importance of Lemma 2.12 is that finally, we have a polynomial associated to an oriented link \(L\) that does not depend on the choice of the diagram used to compute the polynomial.
We have pretty much given a complete construction of the Jones polynomial, as well as a proof of its invariance. The only remaining detail is that the actual Jones polynomial is normalized in a different way (because it was discovered via different means).
Definition 2.14. For an oriented link \(L\), the *Jones polynomial* of \(L\), denoted \(V_L(t)\), is obtained by taking the expression \((-A)^{-3w(D)}\langle D \rangle\) and setting \(A = t^{-1/4}\). \(\diamond\)
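Definition 2.14 is mechanical to apply once $\langle D \rangle$ and $w(D)$ are known; the substitution $A = t^{-1/4}$ turns each $A^e$ term into a quarter power of $t$. A sketch (ours, for illustration; the trefoil bracket below is the standard value for a writhe-3 trefoil diagram, taken as given):

```python
from fractions import Fraction

def jones_from_bracket(bracket, writhe):
    """V_L(t) = (-A)^{-3 w(D)} <D> evaluated at A = t^{-1/4}.
    `bracket` maps exponents of A to coefficients; returns a map from
    exponents of t (Fractions, since quarter powers appear) to coefficients."""
    sign = -1 if writhe % 2 else 1            # (-1)^{-3w} = (-1)^{w}
    V = {}
    for e, c in bracket.items():
        t_exp = Fraction(3 * writhe - e, 4)   # A^(e - 3w) with A = t^(-1/4)
        V[t_exp] = V.get(t_exp, 0) + sign * c
    return {e: c for e, c in V.items() if c != 0}

# Standard bracket polynomial of a right-handed trefoil diagram (writhe 3):
# <T> = -A^5 - A^{-3} + A^{-7}
trefoil_bracket = {5: -1, -3: -1, -7: 1}
```

Here `jones_from_bracket(trefoil_bracket, 3)` yields `{1: 1, 3: 1, 4: -1}`, i.e. $V(t) = -t^4 + t^3 + t$.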
Remark 2.15. We have shown that the Jones polynomial is an invariant of oriented links. For knots, recall that the writhe does not depend on the orientation. If we retrace our arguments above, we see that the Jones polynomial is also an invariant of (unoriented) knots. \(\diamond\)
The next two sections aim to answer the question “What is the Jones polynomial good for?” We will give two applications, the second much more involved than the first.
3. Detecting chiral knots
Definition 3.1. A knot \(K\) is *amphichiral* if it is equivalent to its mirror image (in \(\mathbb{R}^3\)). Otherwise, it is *chiral*. \(\diamond\)
Consider, for example, the two diagrams in Figure 3.1. We may ask if they represent the same knot.

(A) left-handed trefoil \hspace{2cm} (B) right-handed trefoil
**Figure 3.1.** Mirror images of a trefoil
By using the Jones polynomial, we will see that the answer is no: the left-handed and right-handed trefoils are distinct knots. (In other words, the trefoil is *chiral.*)
For a knot $K$, let $K^{\text{flip}}$ denote the mirrored knot. If $K$ has a diagram $D$, then $D^{\text{flip}}$ denotes the corresponding mirrored diagram for $K^{\text{flip}}$.
Suppose we start with a smoothing $D_\epsilon$ of $D$. What do we get when we mirror this to $(D_\epsilon)^{\text{flip}}$? The answer is a smoothing of $D^{\text{flip}}$. See the example in Figure 3.2.

**Figure 3.2.** Relations between mirroring and smoothing. (In the labeling, the three crossings are numbered from top to bottom.)
The example makes a general pattern clear. Every smoothing $\epsilon$ of $D$ corresponds to the dual smoothing $\hat{\epsilon}$ of $D^{\text{flip}}$. The dual smoothing $\hat{\epsilon}$ is obtained by reversing every resolution in $\epsilon$. (That is, interchange 0s and 1s.) For example, if $\epsilon = 011$, then $\hat{\epsilon} = 100$. We can write the relation as
\[(D^{\text{flip}})_\epsilon = D_{\hat{\epsilon}}\]
Observe that $s_0(\epsilon) = s_1(\hat{\epsilon})$ and $s_1(\epsilon) = s_0(\hat{\epsilon})$. Using (3.1), it follows that
\[\langle D^{\text{flip}} \rangle(A) = \sum_{\epsilon \in \{0,1\}^n} A^{s_0(\epsilon) - s_1(\epsilon)} \langle (D^{\text{flip}})_\epsilon \rangle(A) = \sum_{\epsilon \in \{0,1\}^n} A^{-(s_0(\hat{\epsilon}) - s_1(\hat{\epsilon}))} \langle D_{\hat{\epsilon}} \rangle(A)\]
Recall that the bracket polynomial of a smoothing $\langle D_\epsilon \rangle(A)$ is some power of $(-A^2 - A^{-2})$. Hence, $\langle D_\epsilon \rangle(A) = \langle D_\epsilon \rangle(A^{-1})$, giving us
\[\langle D^{\text{flip}} \rangle(A) = \sum_{\epsilon \in \{0,1\}^n} (A^{-1})^{s_0(\hat{\epsilon}) - s_1(\hat{\epsilon})} \langle D_{\hat{\epsilon}} \rangle(A^{-1}) = \langle D \rangle(A^{-1})\]
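The passage from $\epsilon$ to $\hat{\epsilon}$ is just a bit flip, which makes the bookkeeping above easy to check in code; a minimal sketch:

```python
def dual(eps):
    """Dual smoothing: interchange 0s and 1s in every resolution."""
    return tuple(1 - bit for bit in eps)

eps = (0, 1, 1)
print(dual(eps))  # (1, 0, 0)
# s0 and s1 swap under duality: s0(eps) = s1(dual(eps)) and vice versa.
print(eps.count(0) == dual(eps).count(1))  # True
```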
Also, $w(D) = -w(D^{\text{flip}})$, since positive crossings in $D$ become negative crossings in $D^{\text{flip}}$ and vice versa. We can translate the results we have just obtained to a statement about the Jones polynomial.
**Lemma 3.2.** For any knot $K$,
\[V_{K^{\text{flip}}}(t) = V_K(t^{-1})\]
**Example 3.3.** For the left-handed trefoil, we have $V(t) = -t^{-4} + t^{-3} + t^{-1}$. It follows from Lemma 3.2 that the Jones polynomial of the right-handed trefoil is $V(t) = -t^4 + t^3 + t$. Since the two polynomials are not the same, we can conclude that the left-handed and right-handed trefoils are distinct. \hfill \diamondsuit
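The whole pipeline, from the state sum (Lemma 2.4) to the normalized polynomial (Definition 2.14), fits in a short script. This is a sketch, not the paper's own computation: the edge labels 1..6, the per-crossing smoothing pairings, and the writhe below are hand-derived assumptions about the standard right-handed trefoil diagram, with each crossing mapping a resolution bit to the two pairs of edge labels that its smoothing joins.

```python
from itertools import product

# Hand-derived data for the standard right-handed trefoil diagram (assumed):
TREFOIL = [
    {0: [(1, 5), (4, 2)], 1: [(1, 4), (2, 5)]},
    {0: [(3, 1), (6, 4)], 1: [(3, 6), (4, 1)]},
    {0: [(5, 3), (2, 6)], 1: [(5, 2), (6, 3)]},
]
WRITHE = 3  # all three crossings positive

def circles(crossings, eps):
    """o(eps): count circles of the smoothing via union-find on edge labels."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    labels = set()
    for crossing, bit in zip(crossings, eps):
        for a, b in crossing[bit]:
            labels |= {a, b}
            parent[find(a)] = find(b)
    return len({find(x) for x in labels})

def mul(p, q):
    """Multiply Laurent polynomials stored as {exponent: coefficient}."""
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in r.items() if c}

def bracket(crossings):
    """<D> = sum over smoothings of A^(s0 - s1) * (-A^2 - A^-2)^(o - 1)."""
    n, total, loop = len(crossings), {}, {2: -1, -2: -1}
    for eps in product((0, 1), repeat=n):
        term = {n - 2 * sum(eps): 1}  # A^(s0(eps) - s1(eps))
        for _ in range(circles(crossings, eps) - 1):
            term = mul(term, loop)
        for e, c in term.items():
            total[e] = total.get(e, 0) + c
    return {e: c for e, c in total.items() if c}

def jones(crossings, w):
    """V(t): multiply <D> by (-A)^(-3w), then substitute A = t^(-1/4)."""
    f = mul({-3 * w: (-1) ** (3 * w)}, bracket(crossings))
    return {-e // 4: c for e, c in f.items()}  # A-exponents here are multiples of 4

V = jones(TREFOIL, WRITHE)
print(V == {4: -1, 3: 1, 1: 1})                       # True: -t^4 + t^3 + t
print({-e: c for e, c in V.items()})                  # mirror, via Lemma 3.2
```

Negating every exponent of $V$ implements $t \mapsto t^{-1}$, and the result is $-t^{-4} + t^{-3} + t^{-1}$, the left-handed trefoil polynomial of Example 3.3.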
Using the same reasoning as Example 3.3, we have the following result.
**Theorem 3.4.** If $K$ is a knot and $V_K(t) \neq V_K(t^{-1})$, then $K$ and $K^{\text{flip}}$ are distinct. That is, $K$ is chiral.
**Remark 3.5.** Note that Theorem 3.4 does not tell us anything about $K$ if $V_K(t) = V_K(t^{-1})$. It would be nice if $V_K(t) = V_K(t^{-1})$ implied that $K$ is amphichiral. However, there are chiral knots with symmetric Jones polynomials. (See [Mur96, Exercise 11.2.2] for an example.)
**Remark 3.6.** Recall that the Jones polynomial is not just a knot invariant but also an oriented link invariant. Thus, all the results in this section (in particular, Theorem 3.4) hold if we replace “knot” with “oriented link.”
### 4. Bound on crossing numbers
#### 4.1. Crossing number of a knot
In Example 1.5, we gave an example of a natural knot invariant. We repeat it here.
**Definition 4.1.** The *crossing number* of a knot $K$ is the minimum number of crossings needed to draw the knot in a plane. It is denoted $c(K)$.
The crossing number is a difficult invariant to work with. Suppose we start with a knot $K$ and draw a few diagrams of $K$. Suppose that out of all our diagrams, the one with the fewest crossings has $n$ crossings. Then we know $c(K) \leq n$. However, we cannot be sure that there is no diagram of $K$ with fewer crossings. Drawing more diagrams will not help; there are infinitely many ways to represent $K$ as a knot diagram.
The aim of this section is to use the Jones polynomial to give a nontrivial lower bound for $c(K)$.
#### 4.2. Reduced diagrams and removable crossings
In certain cases, it is easy to tell that a crossing can be removed, as in Figure 4.1a below:

In Figure 4.1a, the crossing in the center can easily be removed by flipping the right half over, leaving us with Figure 4.1b. More generally, consider knots of the form in Figure 4.2.
That is, there are exactly two strands in the region between $X$ and $Y$ that go from $X$ to $Y$ and that cross each other once. We can flip $Y$ to remove the crossing. This leads us to make a very natural definition.
**Definition 4.2.** A knot diagram $D$ is *reduced* if it does not have the form of Figure 4.2. The crossing in the region between $X$ and $Y$ in Figure 4.2 is called a *removable crossing*. $\diamondsuit$
The condition of being reduced is a good starting point in our attempt to determine the crossing number of knots. Having a reduced diagram $D$ of a knot $K$, however, is not enough to conclude that there is no diagram of $K$ with fewer crossings. Consider the unknot as depicted in Figure 4.3.

**Figure 4.3.** Reduced diagram of the unknot with three crossings
This diagram is reduced and contains three crossings, but the unknot can be drawn with zero crossings.
**Remark 4.3.** At this point, we might note that crossings that can be eliminated by Type II Reidemeister moves ($\bowtie \rightarrow \bowtie$) are also easy to identify in a knot diagram. Why is the situation $\bowtie$ not included in the definition of “unreduced”? It turns out not to be needed. We will see why in subsection 4.4. $\diamondsuit$
#### 4.3. Some combinatorial aspects of knots
We make a brief digression to discuss some combinatorics relating to knots. A knot diagram divides the plane into disjoint regions, or faces. (The exterior of a knot diagram is considered a face too.) We will look at two things: number of faces, and colorings of the faces.
**Lemma 4.4.** For a knot with $n$ crossings, there are $n + 2$ faces.
**Proof.** We think of the knot as a graph and apply Euler’s formula: $V - E + F = 2$. Because each crossing involves two strands, each vertex of the graph has degree four, so there are $2n$ edges. Then $V = n$ and $E = 2n$ gives $F = n + 2$. $\square$
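As a quick arithmetic check, the standard trefoil diagram has $n = 3$ crossings, so
\[
V = 3, \qquad E = 2n = 6, \qquad F = 2 - V + E = 2 - 3 + 6 = 5 = n + 2,
\]
matching the five faces of the trefoil diagram (three lobes, the center, and the exterior).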
It is always possible to color the faces of a knot in an alternating black-white pattern, as shown in Figure 4.4. (Recall that we are considering the exterior of the knot diagram to be a face as well, which explains why Figure 4.4b is a valid checkerboard coloring of the trefoil.)

**Figure 4.4.** Examples of checkerboard colorings
Observe that by resolving a crossing, we still get a checkerboard coloring. (See Figure 4.5.)

**Figure 4.5.** Resolving a crossing still gives us a checkerboard coloring.
Given a checkerboard coloring of a knot diagram, we can divide the crossings into two types. (See Figure 4.6.) The coloring in Figure 4.6a is called “0-separating” because a 0-resolution separates the two black regions. The coloring in Figure 4.6b is called “1-separating” for similar reasons.

**Figure 4.6.** Two coloring types at a crossing
We could ask the following question: Given a knot diagram, when does a checkerboard coloring consist of only 0-separating crossings? (For example, the trefoil in Figure 4.4a and the figure-8 knot in Figure 4.4c both satisfy this property.)
To answer that question, consider the portion of a knot diagram given in Figure 4.7a. Suppose we start at point $A$ and move rightwards along the horizontal strand. We will reach the crossing at the center of the diagram. (Observe that this is a 0-separating crossing.) In this crossing, the horizontal strand we are traveling along goes underneath the vertical strand.

\(^2\)Note that if a checkerboard coloring consists only of 0-separating crossings, then we can invert the colors (interchange black and white) to get a coloring that consists only of 1-separating crossings.
**Figure 4.7**
We keep moving rightwards after we pass the crossing. Eventually we will reach another crossing. (See Figure 4.7b.) There is only one way to make this crossing 0-separating. The horizontal strand must go over the vertical strand. (See Figure 4.7c.)
The general pattern is clear: if we want this knot diagram to have only 0-separating crossings, then the strand must alternate over-under-over-under.
**Definition 4.5.** A knot diagram is *alternating* if the strand alternates between going over and going under at crossings. A knot is *alternating* if there is an alternating diagram of the knot.
As it turns out, alternating knots are a very well-behaved class of knots. We have just seen that alternating knots have the following property.
**Lemma 4.6.** A knot diagram admits a checkerboard coloring consisting of only 0-separating crossings if and only if the diagram is alternating.
#### 4.4. Reduced alternating knot diagrams
**Definition 4.7.** A knot diagram $D$ is *reduced alternating* if it is both reduced and alternating.
The definition may seem trivial, but we should point out one thing: if we start with an unreduced alternating diagram (of the form Figure 4.2), we can apply the “flipping” operation to get rid of the removable crossing. What is important is that the resulting diagram will still be alternating.
Thus, to determine $c(K)$ for a given alternating knot $K$, we may start with an alternating diagram of $K$. Next, we can remove all the removable crossings to get a reduced alternating diagram $D$. If this diagram has $n$ crossings, we know that $c(K) \leq n$.
Whether it is possible to remove any more crossings is not immediately obvious: “flipping” will not help, since there are no more removable crossings. Type II Reidemeister moves ($\bowtie \rightarrow \bowtie$) will not help either: because the diagram is alternating, $\bowtie$ will not appear.
As we will show in the next subsection, there is in fact no way to lower the number of crossings from that point. The proof will not proceed by drawing more diagrams. (We have already explained in subsection 4.1 why that approach will not work.) Instead, we will use the Jones polynomial to help us.
#### 4.5. Bound on the crossing number
Recall that for a link diagram $D$, Lemma 2.4 gives the bracket polynomial in terms of the smoothings of $D$. In particular, $\langle D \rangle = \sum_\epsilon \langle D, \epsilon \rangle$, where $\langle D, \epsilon \rangle = A^{s_0(\epsilon) - s_1(\epsilon)} \langle D_\epsilon \rangle$.
Let $o(\epsilon)$ be the number of circles in $D_\epsilon$. (We use the letter $o$ because it looks like a circle!) Then $\langle D_\epsilon \rangle = (-A^2 - A^{-2})^{o(\epsilon)-1}$.
If $\epsilon$ and $\epsilon'$ differ by one resolution, then $o(\epsilon') = o(\epsilon) \pm 1$, because changing one resolution either merges two circles in $D_\epsilon$ or separates one circle into two.
**Definition 4.8.** For a Laurent polynomial $f(x)$, we define $\text{hp}(f)$ to be the highest power of $x$ that appears in $f$, and we define $\text{lp}(f)$ similarly to be the lowest power. We define $\text{span}(f) = \text{hp}(f) - \text{lp}(f)$.
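On a Laurent polynomial stored as a dict mapping exponents to coefficients, these three quantities are one-liners. A sketch; the example value is a trefoil bracket polynomial $-A^5 - A^{-3} + A^{-7}$, quoted here as an assumption rather than computed:

```python
def hp(f):
    """Highest power appearing in the Laurent polynomial f."""
    return max(f)

def lp(f):
    """Lowest power appearing in f."""
    return min(f)

def span(f):
    return hp(f) - lp(f)

# Example (assumed value): a trefoil bracket, -A^5 - A^-3 + A^-7.
D = {5: -1, -3: -1, -7: 1}
print(hp(D), lp(D), span(D))  # 5 -7 12
```

Note that $12 = 4 \cdot 3$ here, with $n = 3$ crossings.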
**Remark 4.9.** In particular, we will be looking at the span of the bracket polynomial. Why is this numerical value useful? Take a knot $K$ with diagram $D$. If we change $D$ by a Type I Reidemeister move, all the powers in the bracket polynomial are shifted by the same amount, according to (2.6). Thus, the value of $\text{span} \langle D \rangle$ only depends on the knot $K$. It does not depend on the diagram $D$, even though $\langle D \rangle$ does depend on the diagram.
We have
$$\text{hp} \langle D, \epsilon \rangle = \text{hp} A^{s_0(\epsilon) - s_1(\epsilon)}(-A^2 - A^{-2})^{o(\epsilon)-1} = s_0(\epsilon) - s_1(\epsilon) + 2o(\epsilon) - 2$$
We know that $\text{hp} \langle D \rangle \leq \max_\epsilon \text{hp} \langle D, \epsilon \rangle$, so naturally, we would be interested in determining which smoothing $\epsilon$ maximizes the expression $s_0(\epsilon) - s_1(\epsilon) + 2o(\epsilon) - 2$.
Let $n$ be the number of crossings of $D$. Choosing $\epsilon = 0$ (i.e. the smoothing with all 0-resolutions) will maximize $s_0(\epsilon) - s_1(\epsilon)$ (by setting it to $n$), but we still need to consider the $2o(\epsilon)$ term. The following lemma shows that we do not need to worry about that term.
**Lemma 4.10.** If $\epsilon$ is a smoothing, and we change a 0-resolution to a 1-resolution to get $\epsilon'$, then $\text{hp} \langle D, \epsilon \rangle \geq \text{hp} \langle D, \epsilon' \rangle$.
**Proof.** As we go from $\epsilon$ to $\epsilon'$, the quantity $s_0 - s_1$ decreases by 2. Since $|o(\epsilon) - o(\epsilon')| = 1$, the quantity $2o$ can increase by 2 or decrease by 2. In either case, we have
$$s_0(\epsilon) - s_1(\epsilon) + 2o(\epsilon) - 2 \geq s_0(\epsilon') - s_1(\epsilon') + 2o(\epsilon') - 2$$
$\square$
**Corollary 4.11.** When we let $\epsilon$ range over all smoothings, $\text{hp} \langle D, \epsilon \rangle$ achieves a maximum at $\epsilon = 0$.
As a result, we know that $\text{hp} \langle D \rangle \leq \max_\epsilon \text{hp} \langle D, \epsilon \rangle = \text{hp} \langle D, 0 \rangle = n + 2o(0) - 2$. We have the analogous statement that $\text{lp} \langle D \rangle \geq \text{lp} \langle D, 1 \rangle = -n - 2o(1) + 2$. (Here, $1$ is the smoothing with all 1-resolutions.) Thus, we have
$$\text{span} \langle D \rangle \leq 2n + 2(o(0) + o(1)) - 4$$
Furthermore, if the diagram is reduced alternating, we have nicer results, because of the following lemma.
**Lemma 4.12.** If $D$ is a reduced alternating diagram and $\epsilon$ is a smoothing with exactly one 1-resolution, then $o(0) = o(\epsilon) + 1$. (That is, as we go from $D_0$ to $D_\epsilon$, we merge two circles together.)
**Proof.** Because the diagram is alternating, we can give the diagram a checkerboard coloring so that all the crossings are 0-separating. See, for example, Figure 4.8a.
Then when we look at $D_0$, all the circles bound the shaded faces in the coloring. (See Figure 4.8b.)
When we change one of the 0-resolutions to a 1-resolution, we add a “bridge” across a white region. (See Figure 4.8c.) Because the diagram $D$ is reduced, the bridge connects two separate black regions. Thus, the number of circles decreases by one, so $o(0) = o(\epsilon) + 1$. \qed
**Remark 4.13.** The assumption that the diagram $D$ is reduced is necessary for the last step. Compare the diagrams in Figure 4.8 with the diagrams in the following example, Figure 4.9.
What ends up happening here is that the bridge formed in Figure 4.9c connects the same black region to itself, thus increasing the number of circles. This is due to the removable crossing. $\diamondsuit$
**Corollary 4.14.** If \( D \) is a reduced alternating diagram and \( \epsilon \) is a smoothing with exactly one 1-resolution, then \( \text{hp} \langle D, 0 \rangle > \text{hp} \langle D, \epsilon \rangle \).
**Proof.** Since \( o(0) = o(\epsilon) + 1 \), we can modify the proof of Lemma 4.10 to get a strict inequality. \( \square \)
**Corollary 4.15.** If \( D \) is a reduced alternating diagram, then \( \text{hp} \langle D \rangle = \text{hp} \langle D, 0 \rangle \).
**Proof.** We have \( \langle D \rangle = \langle D, 0 \rangle + \sum_{\epsilon \neq 0} \langle D, \epsilon \rangle \). By Lemma 4.10 and Corollary 4.14, we know that the highest power in \( \sum_{\epsilon \neq 0} \langle D, \epsilon \rangle \) is strictly lower than the highest power in \( \langle D, 0 \rangle \). \( \square \)
Everything stated above has an analogue for the 1-smoothing; in particular, the analogue of Corollary 4.15 is that \( \text{lp} \langle D \rangle = \text{lp} \langle D, 1 \rangle \) for reduced alternating diagrams \( D \). We summarize our results so far with the following lemma.
**Lemma 4.16.** If \( D \) is a knot diagram with \( n \) crossings, then
\[
\text{span} \langle D \rangle \leq 2n + 2(o(0) + o(1)) - 4
\]
Furthermore, if \( D \) is reduced alternating, then equality holds.
Next, we try to analyze the quantity \( o(0) + o(1) \). This is easy if \( D \) is alternating: we can color \( D \) so that it only has 0-separating crossings. Then the key observation is that \( o(0) \) counts the black faces and \( o(1) \) counts the white faces. Thus, \( o(0) + o(1) \) equals the total number of faces. Invoking Lemma 4.4, we see that \( o(0) + o(1) = n + 2 \).
In fact, alternating knots are optimal, in the sense given in the following lemma.
**Lemma 4.17.** Let \( D \) be a connected link diagram.\(^3\) If \( D \) has \( n \) crossings, then
\[
o(0) + o(1) \leq n + 2
\]
Furthermore, if \( D \) is reduced alternating, then equality holds.
We can see that this lemma contains a few subtle technicalities. We need the statement to be about connected link diagrams as opposed to knot diagrams so that we can apply induction properly. After we prove this lemma, we will only use the result for knot diagrams.
\(^3\)We say that a link diagram is *connected* if its graph is connected. For example, the usual diagram for the unlink \( \bigcirc \bigcirc \) is not connected. However, if the two components overlap in the diagram (as in \( \bigcirc \bigcirc \)), then the diagram is connected. (Note that knot diagrams are always connected.)
**Proof.** We have already proved the special case when $D$ is alternating. For a general connected link diagram $D$, we proceed by induction. For the base case ($n = 0$), we have the usual diagram for the unknot ($\bigcirc$). Since there are no crossings to resolve, we have $o(D_0) = o(D_1) = 1$, which completes the base case.\footnote{For this proof, we write $o(D_0)$ instead of $o(0)$ to emphasize that $D$ is the diagram we are taking the 0-smoothing of.}
For the inductive step, suppose that the statement is true for all connected link diagrams with $n$ crossings. Take a connected link diagram $D$ with $n + 1$ crossings.
Pick one of the crossings of $D$ (call it $x$). We can resolve $x$ via a 0- or a 1-resolution; observe that at least one of the two resulting diagrams is connected. Suppose, without loss of generality, that applying the 0-resolution on $x$ leaves us with a connected link diagram. Let this diagram be $E$. (See Figure 4.10.)
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figure10.png}
\caption{Various smoothings of $D$ and $E$ ($x$ is the crossing on the bottom of $D$)}
\end{figure}
Note that $E_0$ and $E_1$ are also smoothings of $D$. In fact:
- $E_0 = D_0$.
- $E_1$ and $D_1$ differ at exactly one crossing (namely, $x$).
Thus, $o(E_0) = o(D_0)$ and $o(E_1) = o(D_1) \pm 1$. Furthermore, $E$ is a connected link diagram with $n$ crossings, so by the inductive hypothesis, $o(E_0) + o(E_1) \leq n + 2$. Putting the results together gives us $o(D_0) + o(D_1) \leq n + 3$, which completes the induction. \qed
Combining the previous two lemmas gives us the following.
**Corollary 4.18.** If $D$ is a knot diagram with $n$ crossings, then
$$\text{span} \langle D \rangle \leq 4n$$
Furthermore, if $D$ is reduced alternating, then equality holds.
We are almost there! Recall (as we briefly discussed in Remark 4.9) that $\text{span} \langle D \rangle$ does not depend on the diagram $D$ used for a knot $K$.
If $D$ is a diagram for $K$ with exactly $c(K)$ crossings, then the corollary gives us
\begin{equation}
\text{span} \langle D \rangle \leq 4c(K)
\end{equation}
From the definition of the Jones polynomial (Definition 2.14), we have $4 \text{span} V_K = \text{span} \langle D \rangle$. Using this relationship, we can restate (4.1) without any reference to particular knot diagrams.
**Theorem 4.19.** Let $K$ be a knot. Then $c(K) \geq \text{span} V_K$. Furthermore, if $K$ is an alternating knot, then equality holds.
In other words, the span of the Jones polynomial gives a lower bound on the crossing number. Recall that lower bounds were difficult to determine in general.
Furthermore, if a knot is alternating, we can determine its crossing number as follows: start with an alternating diagram of the knot, then eliminate removable crossings until we get a reduced alternating diagram. At this point, we know there is no way to draw the knot with fewer crossings, so the number of crossings in this diagram is the crossing number of the knot.
For example, we know the crossing numbers of the trefoil and figure-8 knot are 3 and 4, respectively, because Figure 4.11a and Figure 4.11b are reduced alternating. The knot depicted in Figure 4.11c may look intimidating, but once we notice that the diagram is reduced alternating, we can conclude that the knot has crossing number 11.

**Figure 4.11.** Simply count the crossings of these reduced alternating knot diagrams and we get the crossing numbers of the knots!
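For the first two examples, Theorem 4.19 can be checked directly from the Jones polynomials. A sketch; the figure-8 polynomial below is the standard published value, quoted as an assumption since it is not derived in this paper:

```python
# Jones polynomials stored as {exponent of t: coefficient}.
V_trefoil = {4: -1, 3: 1, 1: 1}              # -t^4 + t^3 + t (Example 3.3)
V_fig8 = {2: 1, 1: -1, 0: 1, -1: -1, -2: 1}  # t^2 - t + 1 - t^-1 + t^-2 (assumed standard value)

def span(f):
    return max(f) - min(f)

# Both knots are alternating, so span V_K equals the crossing number c(K).
print(span(V_trefoil), span(V_fig8))  # 3 4
```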
### 5. More on the Jones Polynomial
The Jones polynomial gives us valuable information about knots, as the previous sections show. Our understanding of the Jones polynomial, however, is far from complete.
#### 5.1. How well does the Jones polynomial distinguish among knots?
Suppose we have two knot diagrams $D$ and $D'$, and suppose $D$ and $D'$ are diagrams of knots $K$ and $K'$, respectively. We want to check if $K = K'$ (i.e., whether $D$ and $D'$ are diagrams of the same knot). From the two diagrams alone, we are able to compute $V_K(t)$ and $V_{K'}(t)$.
If we get $V_K(t) \neq V_{K'}(t)$, then we know $K \neq K'$. However, what if we get $V_K(t) = V_{K'}(t)$? Does that imply $K = K'$? The answer is no. In other words, different knots could have the same Jones polynomial. (See [Mur96, Example 11.2.6 and Example 11.2.7] for examples.)
However, we could pose the following (weaker) question: Does $V_K(t) = 1$ imply that $K = \bigcirc$? The answer turns out to be unknown.\footnote{However, we should note that [Thi01] gives examples of non-trivial 2-component links with Jones polynomials equal to the Jones polynomial of the unlink ($\bigcirc \bigcirc$). In fact, for each $k > 1$, [EKT03] gives an infinite family of non-trivial $k$-component links with Jones polynomials equal to that of the $k$-component unlink.}
#### 5.2. “Categorifying” the Jones polynomial
In the late 1990s, Mikhail Khovanov developed a homology theory for links that generalizes the Jones polynomial by “categorifying” it. Some (qualitative) facts about Khovanov homology are:
- The Khovanov homology of an oriented link $L$, denoted $Kh(L)$, can be computed from link diagrams using rules similar to BP1, BP2, and BP3. [BN02] provides a very friendly introduction and proves that $Kh(L)$ is an oriented link invariant by showing that it is unchanged under the Reidemeister moves.\footnote{Understanding how to compute $Kh(L)$ is not too difficult; understanding the proof of invariance, however, requires some basic knowledge of homological algebra.}
- Whereas we assign each oriented link diagram to a polynomial (the bracket polynomial), Khovanov’s theory assigns each oriented link diagram to a chain complex of graded vector spaces.
- Khovanov homology is \textit{strictly} stronger than the Jones polynomial: while it is possible to recover $V_L(t)$ from $Kh(L)$, there are distinct links $L$ and $L'$ with the property that $V_L(t) = V_{L'}(t)$ but $Kh(L) \neq Kh(L')$. (See [BN02].)
- Khovanov homology \textit{does} detect the unknot. That is, if $Kh(L) = Kh(\bigcirc)$, then $L = \bigcirc$. This is proved in [KM11]. (Recall that for the Jones polynomial, this problem is unsolved.)
- Just as we used the Jones polynomial to prove a lower bound on the crossing number, Khovanov homology can be used to provide insight into other knot invariants. For example, [Ras04] uses Khovanov homology to give a lower bound on the \textit{slice genus} of a knot.
### 6. Acknowledgments
This paper is the author’s junior paper for his spring 2013 term at Princeton University. Many thanks are owed to Professor Zoltán Szabó for his patient and helpful guidance throughout the semester. The author would also like to thank Nate Dowlin, Minh-Tam Trinh, and Feng Zhu for enlightening discussions as well as useful suggestions during the writing process.
### References
[BN02] Dror Bar-Natan, On Khovanov’s categorification of the Jones polynomial, Algebraic and Geometric Topology 2 (2002), 337–370.
[Cro04] P.R. Cromwell, Knots and links, Cambridge University Press, 2004.
[EKT03] Shalom Eliahou, Louis H Kauffman, and Morwen B Thistlethwaite, Infinite families of links with trivial Jones polynomial, Topology 42 (2003), no. 1, 155–169.
[KM11] Peter B Kronheimer and Tomasz S Mrowka, Khovanov homology is an unknot-detector, Publications mathématiques de l’IHÉS 113 (2011), no. 1, 97–208.
[Lic97] W.B.R. Lickorish, An introduction to knot theory, Graduate Texts in Mathematics Series, Springer Verlag, 1997.
[Mur96] Kunio Murasugi, Knot theory and its applications, Modern Birkhäuser classics, Birkhäuser, 1996.
[Ras04] Jacob A Rasmussen, Khovanov homology and the slice genus, arXiv preprint math/0402131 (2004).
[Thi01] Morwen Thistlethwaite, Links with trivial Jones polynomial, Journal of Knot Theory and Its Ramifications 10 (2001), no. 04, 641–643. |
Distribution and Cross-Sections of Fast States on Germanium Surfaces
By C. G. B. GARRETT and W. H. BRATTAIN
(Manuscript received May 10, 1956)
A theoretical treatment of the field effect, surface photo-voltage and surface recombination phenomena has been carried out, starting with the Hall-Shockley-Read model and generalizing to the case of a continuous trap distribution. The theory is applied to the experimental results given in the previous paper. One concludes that the distribution of fast surface states is such that the density is lowest near the centre of the gap, increasing sharply as the accessible limits of surface potential are approached. From the surface photo-voltage measurements one obtains an estimate of 150 for the ratio \((\sigma_p/\sigma_n)\) of the cross-sections for transitions into a state from the valence and conduction bands, showing that the fast states are largely acceptor-type. On the assumption that surface recombination takes place through the fast states, the cross-sections are found to be: \(\sigma_p \sim 6 \times 10^{-15} \text{ cm}^2\) and \(\sigma_n \sim 4 \times 10^{-17} \text{ cm}^2\).
I. INTRODUCTION
The existence of traps, or "fast" states, on a semiconductor surface, becomes apparent from three physical experiments: measurements of field effect,\(^1\) of surface photovoltage,\(^2\) and of surface recombination velocity \(s\). Results of combined measurements of these three quantities on etched surfaces of \(p\)- and \(n\)-type germanium have been presented in the preceding paper.\(^3\) The present paper is concerned with the conclusions which may be drawn from these experiments as to the distribution in energy of these surface traps, and the distribution of cross-sections for transitions between the traps and the conduction and valence bands.
The statistics of trapping at a surface level has been developed by Brattain and Bardeen\(^2\) and by Stevenson and Keyes,\(^4\) following the work on body trapping centers of Hall\(^5\) and of Shockley and Read.\(^6\)
It is known that surface traps are numerous on a mechanically damaged surface\(^7\) or on a surface that has been bombarded but not annealed;\(^8\)
and that on an etched surface their density is comparatively low. It is also known that the available results cannot be accounted for by a single level, or even two levels, so that one is evidently dealing either with a large number of discrete states or a continuous spectrum. A given trapping centre is completely described by specifying: (i) whether it is donor-like (either neutral or positive) or acceptor-like (neutral or negative); (ii) its position in energy; and (iii) the values for the constants $C_p$ and $C_n$ (related to cross-sections) occurring in the Shockley-Read theory. In this paper we shall deduce what we can about these quantities, using the experimental results previously presented.
At the outset it must be admitted that it is by no means certain that the same set of surface states appear in the field-effect experiment and give rise to surface recombination. However, (i) it is found that such surface treatments as increase $s$ also reduce the effective mobility in the field-effect experiment; (ii) any surface trap must be able to act as a recombination centre, unless one of the quantities $C_p$ and $C_n$ is zero,\textsuperscript{9} and (iii) the capture cross-sections obtained by assuming that the field-effect traps are in fact recombination centres are, as we shall see below, eminently reasonable.
As to the nature of the surface traps, not too much can be said at the moment. The lack of sensitivity to the cycle of chemical environment used argues against their being associated with easily desorbable surface atoms; the intrinsically short time constants (Section 5) suggest that they are on or very close to the germanium surface. The possibility that the surface traps are Tamm levels\textsuperscript{10} remains; or they could be corners or dislocations. However, the reproducibility with which a given value of $s$ may be obtained by a given chemical treatment of a given sample, followed by exposure to a given ambient, suggests that there is nothing accidental about their occurrence.
II. STATISTICS OF A DISTRIBUTION OF SURFACE TRAPS
We start by quoting results from the work of Shockley and Read\textsuperscript{6} and Stevenson and Keyes\textsuperscript{4} on the occupancy factor $f_t$ and the flow $U$ of minority carriers (per unit area) into a set of traps having a single energy level and statistical weight unity:
$$f_t = \frac{(C_n n_s + C_p p_1)}{[C_n (n_s + n_1) + C_p (p_s + p_1)]} \quad (1)$$
$$U = \frac{C_n C_p (p_s n_s - n_i^2)}{[C_n (n_s + n_1) + C_p (p_s + p_1)]} \quad (2)$$
where the symbols have the following meanings:
$n_s$, $p_s$ — densities of electrons and holes present at the surface
\( n_1, p_1 \) — values which the equilibrium electron and hole densities at the surface would have if the Fermi level coincided with the trapping level
\[
C_n = N_t v_{Tn} \sigma_n; C_p = N_t v_{Tp} \sigma_p,
\]
where \( N_t \) stands for density of traps per unit area, \( v_{Tn} \) is the thermal speed for electrons and \( v_{Tp} \) that for holes, and \( \sigma_n \) and \( \sigma_p \) are the cross-sections for transitions between the traps and the conduction and valence bands respectively.
If we introduce the surface potential \( Y \) and the quantity \( \delta \), defined as \( (\Delta p / n_i) \), where \( \Delta p \) is the added carrier density in the body of the semiconductor, we may write:
\[
\begin{align*}
n_s &= \lambda^{-1} n_i e^Y (1 + \lambda \delta) \\
p_s &= \lambda n_i e^{-Y} (1 + \lambda^{-1} \delta)
\end{align*}
\]
(3)
where \( \lambda = p_0 / n_i \), \( p_0 \) being the equilibrium hole concentration in the body of the semiconductor. We further introduce the notation:
\[
\begin{align*}
n_1 &= n_i e^{-\nu} & p_1 &= n_i e^\nu \\
(C_p / C_n)^{\frac{1}{2}} &= \chi
\end{align*}
\]
(4)
The quantity \( \nu \) thus represents the energy difference, measured in units of \( (kT/e) \), between the trapping level and the centre of the gap;* and is positive for states below, negative for those above, this level. The parameter \( \chi \) will be most directly associated with whether the state is donor-like or acceptor-like. If it is donor-like (neutral or positive), a transition involving an electron in the conduction band will be aided by Coulomb attraction whereas one involving a hole will not; so one would expect \( \chi \ll 1 \). For an acceptor-like trap, (neutral or negative) the contrary holds, and one expects \( \chi \gg 1 \).
Using (4), the occupancy factor (1) becomes
\[
f_t = \frac{\chi^{-1} \lambda^{-1} e^Y (1 + \lambda \delta) + \chi e^\nu}{\chi^{-1} \lambda^{-1} e^Y (1 + \lambda \delta) + \chi^{-1} e^{-\nu} + \chi \lambda e^{-Y} (1 + \lambda^{-1} \delta) + \chi e^\nu}
\]
\[
= \frac{1}{2} \lambda^{-\frac{1}{2}} e^{\frac{1}{2} Y} e^{\frac{1}{2} \nu} \operatorname{sech} \left[ \frac{1}{2} (Y + \nu) - \frac{1}{2} \ln \lambda \right] \quad \text{for } \delta = 0
\]
(5)
Note that, in thermodynamic equilibrium, the occupancy factor does not depend in any way on the cross-sections, whereas for \( \delta \neq 0 \) it does, through the ratio \( \chi \).
* Strictly speaking, one should say “position of the Fermi level for intrinsic semiconductor” instead of “centre of the gap.” These will fail to coincide if the effective masses of holes and electrons are unequal, as they certainly are in germanium.
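This equilibrium independence of the cross-sections is easily verified numerically. The following Python sketch (with arbitrary illustrative values of \( Y \), \( \nu \) and \( \lambda \), not taken from the experiments) evaluates the general form of (5) for a donor-like and an acceptor-like value of \( \chi \), and confirms that at \( \delta = 0 \) both reduce to the same Fermi factor \( 1/(1 + \lambda e^{-(Y+\nu)}) \):

```python
import math

def f_t(Y, nu, lam, chi, delta=0.0):
    """Trap occupancy factor, general form of Eq. (5)."""
    a = math.exp(Y) * (1 + lam * delta) / (chi * lam)  # electron-capture term ~ n_s
    b = math.exp(-nu) / chi                            # electron-emission term ~ n_1
    c = chi * lam * math.exp(-Y) * (1 + delta / lam)   # hole-capture term ~ p_s
    d = chi * math.exp(nu)                             # hole-emission term ~ p_1
    return (a + d) / (a + b + c + d)

# illustrative (non-experimental) values
Y, nu, lam = 1.0, 0.5, 2.0
f_donor  = f_t(Y, nu, lam, chi=0.3)   # donor-like trap, chi << 1
f_accept = f_t(Y, nu, lam, chi=7.0)   # acceptor-like trap, chi >> 1
fermi = 1.0 / (1.0 + lam * math.exp(-(Y + nu)))  # equilibrium Fermi factor

print(f_donor, f_accept, fermi)  # all three agree at delta = 0
```

For \( \delta \neq 0 \) the same function exhibits the dependence on \( \chi \) noted in the text.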
Similarly, the flow of carrier-pairs to the surface (2) becomes:
\[
U = N_t(v_{Tn}v_{Tp})^{1/2}(\sigma_n\sigma_p)^{1/2} n_i \, \frac{(\lambda + \lambda^{-1})\delta + \delta^2}{\chi^{-1}\lambda^{-1}e^{Y}(1 + \lambda\delta) + \chi^{-1}e^{-\nu} + \chi\lambda e^{-Y}(1 + \lambda^{-1}\delta) + \chi e^{\nu}}
\]
(6)
which, for \( \delta \to 0 \), tends to the linear law \( U = sn_i\delta \), where \( s \), the surface recombination velocity, is given by:
\[ s/(v_{Tn}v_{Tp})^{1/2} = N_tS_t \]
where
\[
S_t = \frac{(\lambda + \lambda^{-1})(\sigma_n\sigma_p)^{1/2}}{2[ch(\nu + \ell n \chi) + ch(Y - \ell n \lambda - \ell n \chi)]}
\]
(7)
The surface density \( \Sigma_s \) of trapped charge is given by:
\[ \Sigma_s = N_tf_t \]
(8)
where \( f_t \) is the occupancy factor, given by (5).
Now let us turn to the question of a distribution of surface traps through the energy \( \nu \). Suppose that the density of states with \( \nu \) lying between \( \nu \) and \( \nu + d\nu \) is \( \bar{N}(\nu) \, d\nu \), expressed in units of \( n_i\mathcal{L} \). Then the total surface recombination velocity arising from all traps, and the total trapped surface charge density, are given by:
\[ s/(v_{Tn}v_{Tp})^{1/2} = n_i\mathcal{L} \int S_t(\nu)\bar{N}(\nu) \, d\nu \]
(9)
\[ \bar{\Sigma}_s = \int f_t(\nu)\bar{N}(\nu) \, d\nu \]
(10)
where \( S_t(\nu) \) and \( f_t(\nu) \) are explicit functions of \( \nu \), given by (7) and (5) respectively. The limits of the integrals in (9) and (10) are the values of \( \nu \) corresponding to the conduction and valence band edges; however, as we shall see, it is often possible to replace these limits by \( \pm \infty \).
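The legitimacy of replacing the band-edge limits by \( \pm \infty \) can be illustrated numerically: the factor \( \frac{1}{4} \, sech^2[\frac{1}{2}(\nu + Y) - \frac{1}{2}\ell n \lambda] \), obtained on differentiating \( f_t \) with respect to \( Y \), cuts the integrand off exponentially. The Python sketch below integrates that integrand at \( Y = \ell n \lambda \), taking a hyperbolic-cosine distribution of the form found later in Section III; moving the limits from roughly the band edges (about \( \pm 13 \) in units of \( kT \) for germanium at room temperature, an assumed figure) out to effectively infinite values changes the result by only a few parts in ten thousand:

```python
import math

def integrand(nu, q=0.36, B=0.8):
    # N(nu) * (1/4) sech^2(nu/2): the field-effect integrand at Y = ln(lambda),
    # with the hyperbolic-cosine trial distribution of Section III
    return math.cosh(q * nu + B) * 0.25 / math.cosh(0.5 * nu) ** 2

def simpson(f, a, b, n=4000):
    # composite Simpson rule
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

band_edges = simpson(integrand, -13.0, 13.0)   # limits at ~ the band edges
infinite   = simpson(integrand, -40.0, 40.0)   # effectively infinite limits

print(band_edges, infinite)  # differ by only a few parts in 10^4
```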
In summing up the contributions in the way represented by (9), we have implicitly ignored the possibility of inter-trap transitions, supposing that the population of each trap depends only on the rates of exchange of charge with the conduction and valence bands, and is independent of the population of any other trap of differing energy.
What kind of function do we expect \( \bar{N}(\nu) \) to be? Brattain and Bardeen postulated that \( \bar{N}(\nu) \) was of the form of two delta-functions, corresponding to discrete trapping levels high and low in the band. This assumption is not consistent with the observed facts in regard to field effect, surface
photo-voltage, or surface recombination velocity. The general difficulty is that the observed quantities usually vary less rapidly with surface potential than one would expect. It is possible to fit the field-effect observations of Brown and Montgomery\textsuperscript{11} with a larger number of discrete levels, but this would call for a “sharpening up” of the trapped charge distribution as the temperature is lowered, and this appears to be contrary to what is observed.* It is always possible that the surface is patchy, in which case almost \textit{any} variation with mean surface potential could be explained. The simplest assumption, however, seems to be that $N(\nu)$ is a rather smoothly-varying function. All we need assume for the moment is that it is everywhere finite, continuous and differentiable. We may then differentiate equation (10) with respect to $Y$ and $\delta$ under the integral sign, and get $(\partial \bar{\Sigma}_s / \partial Y)_\delta$ and $(\partial \bar{\Sigma}_s / \partial \delta)_Y$, the quantities for which experimental measurements were reported in the previous paper.\textsuperscript{3}
\begin{align}
\left( \frac{\partial \bar{\Sigma}_s}{\partial Y} \right)_\delta &= \int \frac{\bar{N}(\nu) \ d\nu}{4 \, ch^2 [\frac{1}{2} (\nu + Y) - \frac{1}{2} \ell n \lambda]} \\
\left( \frac{\partial \bar{\Sigma}_s}{\partial \delta} \right)_Y &= - \int \frac{\bar{N}(\nu) \left\{ \frac{1}{2} (\lambda^{-1} + \lambda) \, th [\frac{1}{2} (\nu - Y) + \frac{1}{2} \ell n \lambda + \ell n \chi] + \frac{1}{2} (\lambda^{-1} - \lambda) \right\} d\nu}{4 \, ch^2 [\frac{1}{2} (\nu + Y) - \frac{1}{2} \ell n \lambda]}
\end{align}
Notice that the expression in brackets in the numerator of (12) generally has the value $\lambda^{-1}$ or $-\lambda$, except near the point $\nu = Y - \ell n \lambda - 2 \ell n \chi$. This is indicative of the fact that, whatever the exact form of $\bar{N}(\nu)$, the ratio of $-(\partial \bar{\Sigma}_s / \partial \delta)_Y / (\partial \bar{\Sigma}_s / \partial Y)_\delta$ tends to these limiting values ($\lambda^{-1}$ and $-\lambda$) for sufficiently large negative and positive $Y$ respectively.
It may be verified from (7), (11) and (12) that $(\partial \bar{\Sigma}_s / \partial Y)_\delta$, found from the field effect experiment, depends only on $N(\nu)$; $(\partial \bar{\Sigma}_s / \partial \delta)_Y$, found from the surface photo-voltage, depends on $\bar{N}(\nu)$ and $\chi$; while $s$, the surface recombination velocity, depends in addition on the geometric mean cross-section $(\sigma_n \sigma_p)^{1/2}$. Both $\chi$ and $(\sigma_n \sigma_p)^{1/2}$ might themselves, of course, be functions of $\nu$. Thus relations (7), (11) and (12) are integral equations, from which the three unknown functions of $\nu$ may in principle be deduced from the experimental results. (Equation 11, in fact, may be solved explicitly. P. A. Wolff\textsuperscript{17} has shown, however, that, to determine $N(\nu)$ unambiguously, it is necessary to know $(\partial \bar{\Sigma}_s / \partial Y)_\delta$ for all values of $Y$ in the range $\pm \infty$.)
The foregoing considerations apply to “small-signal” measurements.
* There are some changes with temperature, but not what one would expect if there were only discrete surface states.
Fig. 1 — The fit between Equations (13) and (14) and the experimental data. The circles and dots give the experimental data for the \( n \) and \( p \)-type samples respectively and the solid straight lines represent Equations (13) and (14).
But it is also possible, once \( N(\nu), \chi \) and \( (\sigma_n \sigma_p)^{1/2} \) are known, to calculate the expected behavior of the surface photo-voltage and surface recombination rate at high light intensities, and compare the answer with the experimental findings. We hope to discuss this matter in a later paper.
III. ANALYSIS OF THE EXPERIMENTAL DATA BY USE OF THE DELTA-FUNCTION APPROXIMATION
Let us first consider the interpretation of our field effect measurements by means of (11). We start by finding empirical expressions that describe the observed dependence of \( (\partial \Sigma_s / \partial Y)_\delta \) on \( Y \) (Fig. 6 of the preceding paper\(^3\)). Except at values of \( (Y - \ell n \lambda) \) close to the extremes reached, one may fit quite well by a hyperbolic cosine function. Fig. 1 shows the function whose hyperbolic cosine is \( (\partial \Sigma_s / \partial Y) / (\partial \Sigma_s / \partial Y)_{\text{min}} \) plotted against \( Y - \ell n \lambda \). From this figure we find:
22.6 ohm-cm \( n \)-type:
\[
\left( \frac{\partial \Sigma_s}{\partial Y} \right)_\delta = 4.5 \, ch[0.36(Y - \ell n \lambda) - 0.8]
\]
(13)
(for \( (Y - \ell n \lambda) > -4 \))

8.1 ohm-cm \( p \)-type:
\[
\left( \frac{\partial \Sigma_s}{\partial Y} \right)_\delta = 9.7 \, ch[0.31(Y - \ell n \lambda) - 0.5]
\]
(14)
(for \( 2 > (Y - \ell n \lambda) > -4 \))
For values of \((Y - \ell n \lambda)\) less than \(-4\), it appears that \(\Sigma_s\) is changing more rapidly with \(Y\) than is indicated by (13) and (14). We shall comment on this point later. Excluding this region, we note that in both cases the variation with \(Y\) is everywhere slow in comparison with \(e^Y\), and proceed on the assumption that \(\bar{N}(\nu)\) is a function of \(\nu\) that varies everywhere slowly in comparison with \(e^\nu\). Then (11) indicates that there is one fairly sharp maximum in the integrand in the range \(\pm \infty\), occurring at that value of \(\nu\) which coincides with the Fermi level:
\[
\nu = -Y + \ell n \lambda
\]
(15)
The integral in (11) could be evaluated in series about this point (method of steepest descents). The zero-order approximation is got by replacing
\[
\frac{1}{4} \text{sech}^2 \left[ \frac{1}{2}(\nu + Y) - \frac{1}{2}\ell n \lambda \right] \quad \text{by} \quad \delta(\nu + Y - \ell n \lambda).
\]
Later we shall proceed to an exact solution, and we shall find that this delta-function approximation is not too bad. From (11) we now find:
\[
\left( \frac{\partial \bar{\Sigma}_s}{\partial Y} \right)_\delta \sim \int \bar{N}(\nu) \, \delta(\nu + Y - \ell n \lambda) \, d\nu = \bar{N}(-Y + \ell n \lambda)
\]
This mathematical procedure will be seen to be equivalent to identifying \((\partial \Sigma_s/\partial Y)_\delta\) with the density of states at the point in the gap which coincides with the Fermi-level at the surface. Using (13) and (14), one gets:
22.6 ohm-cm n-type:
\[
\bar{N}(\nu) = 4.5 \, ch(0.36\nu + 0.8)
\]
(16)
8.1 ohm-cm p-type:
\[
\bar{N}(\nu) = 9.7 \, ch(0.31\nu + 0.5)
\]
(17)
As we shall see in the next section, the exact solutions differ from (16) and (17) only in the coefficients preceding the hyperbolic cosines.
Turning to the surface photo-voltage measurements, we take (12) and again replace
\[
\frac{1}{4} \text{sech}^2 \left[ \frac{1}{2}(\nu + Y) - \frac{1}{2}\ell n \lambda \right] \quad \text{by} \quad \delta(\nu + Y - \ell n \lambda)
\]
Using (15), one gets:
\[
-\frac{\left(\overline{\partial \Sigma_s / \partial \delta}\right)_Y}{\left(\partial \Sigma_s / \partial Y\right)_\delta} = \frac{1}{2} (\lambda^{-1} + \lambda) \, t h(-Y + \ell n \lambda + \ell n \chi) + \frac{1}{2} (\lambda^{-1} - \lambda)
\]
(18)
This procedure, inaccurate as it is, has the advantage that no particular assumption need be made concerning the functional dependence of \( \chi \) on \( \nu \), it being understood that \( \chi \) in (18) has the value holding for \( \nu = -Y + \ell n \lambda \). In particular, if \( Y_0 \) is that value of \( Y \) at which the ratio \(-\left(\partial \Sigma_s / \partial \delta\right)_Y/\left(\partial \Sigma_s / \partial Y\right)_\delta\) changes sign,
\[
\ell n \chi_0 = Y_0 - \ell n \lambda + t h^{-1}[(\lambda - \lambda^{-1})/(\lambda + \lambda^{-1})]
\]
(19)
From the experimental data, one finds, for the \( n \)-type sample, \( \ell n \chi_0 \sim 2.4 \) (at \( \nu = -3.5 \)); for the \( p \)-type sample, \( \ell n \chi_0 \sim 1.0 \) (at \( \nu = 1.9 \)).
In view of the approximations made, these estimates would not be expected to be more precise than \( \pm 1 \) to 2 units. Notice that both values are positive, and that the difference between them is small in comparison with the difference in \( \nu \). This suggests that we start afresh with the assumption that \( \chi \) is independent of \( \nu \), and work out the surface photovoltage integral exactly. This is done in the next section.
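Incidentally, since \( th^{-1}[(\lambda - \lambda^{-1})/(\lambda + \lambda^{-1})] = \ell n \lambda \) identically, (19) is equivalent to \( \ell n \chi_0 = Y_0 \). As a consistency check (not part of the original analysis), the Python sketch below locates the zero of (18) by bisection for the n-type values \( \lambda = 0.34 \) and \( \ell n \chi = 2.4 \), and compares with the closed form (19):

```python
import math

def ratio18(Y, lam, chi):
    """RHS of (18): delta-function estimate of -(dSigma/d delta)/(dSigma/dY)."""
    return (0.5 * (1/lam + lam) * math.tanh(-Y + math.log(lam) + math.log(chi))
            + 0.5 * (1/lam - lam))

def zero_of_18(lam, chi, lo=-30.0, hi=30.0):
    # ratio18 decreases monotonically from 1/lam to -lam, so bisection finds Y0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if ratio18(lo, lam, chi) * ratio18(mid, lam, chi) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

lam, ln_chi = 0.34, 2.4
Y0 = zero_of_18(lam, math.exp(ln_chi))
# closed form (19), rearranged for Y0
Y0_closed = ln_chi + math.log(lam) - math.atanh((lam - 1/lam) / (lam + 1/lam))

print(Y0, Y0_closed)  # both equal ln(chi) = 2.4
```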
**IV. EXACT TREATMENT FOR THE CASE**
\[
\bar{N}(\nu) = A \, ch(q\nu + B),
\]
**WITH CONSTANT CROSS-SECTIONS**
The results of the previous section suggest the procedure of assuming that \( N(\nu) \) is of the functional form given by (16) and (17), and evaluating the integrals (9), (11) and (12) exactly. The integral for \((\partial \Sigma_s / \partial Y)\), (11), depends only on the form of \( N(\nu) \) and may be evaluated at once. To get \((\partial \Sigma_s / \partial \delta)\), (12), one must know how \( \chi \) depends on \( \nu \). On the basis of the work of the previous section, we shall suppose that \( \chi \) is independent of \( \nu \). (Properly, we need only assume that \( \chi \) varies with \( \nu \) more slowly than \( e^\nu \). Since the function \( t h[\frac{1}{2}(\nu - Y) + \frac{1}{2} \ell n \lambda + \ell n \chi] \) has one of the values \( \pm 1 \) everywhere except close to \( \nu = Y - \ell n \lambda - 2 \ell n \chi \), and since the denominator of (12) has a sharp minimum at \( \nu = -Y + \ell n \lambda \), it follows that the region in which \((\partial \Sigma_s / \partial \delta)_Y\) changes sign will be governed mainly by the value of \( \chi \) at \( \nu = -\ell n \chi \).)
To get \( s \) [(9), using (7)], one must also assume something about the geometric mean cross-section, \((\sigma_n \sigma_p)^{1/2}\). In the absence of any information on this score, we shall assume that \((\sigma_n \sigma_p)^{1/2}\) also is independent of \( \nu \), and see how the computed variation of \( s \) with \( Y \) compares with the experimental results.
We assume:
\[ \bar{N}(\nu) = A \ ch (q\nu + B) \]
(20)
and substitute in (11), (12) and (7). In view of the sharp maximum in the integrands of these expressions, it is permissible to set the limits which should correspond to the edges of the gap or of the state distribution equal to \( \pm \infty \). The integrals are conveniently evaluated by the contour method (see Appendix 1) and yield the following results:
\[
\left( \frac{\partial \bar{\Sigma}_s}{\partial Y} \right)_\delta = A \pi q \ \cosec \ \pi q \ ch [B - q(Y - \ell n \lambda)]
\]
(21)
\[
\left( \frac{\partial \bar{\Sigma}_s}{\partial \delta} \right)_Y = -A \pi q \ \cosec \ \pi q \ ch [B - q(Y - \ell n \lambda)] \times
\]
\[
\left[ \frac{1}{2} (\lambda^{-1} + \lambda) \left( -\coth y + \frac{sh \ qy \ ch \ \mathcal{B}}{q \ sh^2 y \ ch (qy - \mathcal{B})} \right) + \frac{1}{2} (\lambda^{-1} - \lambda) \right]
\]
(22)
where
\[
\begin{align*}
y &= Y - \ell n \lambda - \ell n \chi \\
\mathcal{B} &= B - q \ \ell n \chi
\end{align*}
\]
(23)
\[
\frac{s}{(v_{Tn} v_{Tp})^{1/2}} = \frac{1}{2} (\lambda + \lambda^{-1}) (\sigma_n \sigma_p)^{1/2} n_i \mathcal{L} \, 2\pi A \ sh \ qy \ ch \ \mathcal{B} \ \cosec \ \pi q \ \cosech \ y
\]
(24)
Comparing (21) with (15), we see that the delta-function approximation is in error to the extent that it replaces \( \pi q \ \cosec \ \pi q \) by 1. With the value of \( q \) found experimentally, this is not too bad; we can now, however, by fitting the right-hand side of (21) to the experimental facts, (13) and (14), obtain exact solutions for \( N(\nu) \):
22.6 ohm-cm n-type:
\[ \bar{N}(\nu) = 3.6 \ ch(0.36\nu + 0.8) \quad \text{(for } \nu < 4) \]
8.1 ohm-cm p-type:
\[ \bar{N}(\nu) = 8.3 \ ch(0.31\nu + 0.5) \quad \text{(for } \nu < 4) \]
(25)
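The size of this correction factor, and the resulting coefficients, can be checked by direct arithmetic; the short Python sketch below recovers the coefficients of (25) from the fitted values 4.5 and 9.7 of (13) and (14):

```python
import math

def correction(q):
    # from (21): fitted coefficient = A * pi*q*cosec(pi*q), so divide the fit by this
    return math.pi * q / math.sin(math.pi * q)

A_n = 4.5 / correction(0.36)   # n-type sample
A_p = 9.7 / correction(0.31)   # p-type sample

print(round(A_n, 1), round(A_p, 1))  # 3.6 and 8.2 (the text quotes 8.3; rounding)
```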
The question arises as to whether this solution for the distribution is unique. We have already pointed out that the mathematical methods fail if the distribution is discontinuous. It seems that (25) represents the only solution that is slowly-varying, in the sense used in the previous section; its correctness could presumably be checked by carrying out experiments at different temperatures. For \( \nu > 4 \), the above expressions
do not fit the observed facts, because, for $Y - \ell n \lambda < -4$, the charge in fast states is found to change more rapidly than is given by the empirical expressions in (13) and (14). The behaviour in this region is perhaps indicative of the existence of a discrete trapping level just beyond the range of $\nu$ which can be explored by our techniques. The observations (see Fig. 6 of the preceding paper\textsuperscript{3}) can be described by postulating, in addition to the continuous distribution of states given above, a level of density about $10^{11}$ cm$^{-2}$, situated at $\nu = 6$, or a higher density still further from the centre of the gap. Statz et al.,\textsuperscript{13} using the “channel” techniques, which are valuable for exploring the more remote parts of the gap, have proposed a level of density $\sim 10^{11}$ cm$^{-2}$, situated at about 0.14 volts below the centre of the gap ($\nu = 5.5$): this is not in disagreement with the foregoing.
In order to compare (22) with the experimental data derived from the surface photo-voltage, it is necessary to choose a value for $\chi$. Fig. 2 shows the comparison with the results presented in the preceding paper. On the vertical axis, the values of $(\partial \Sigma_s / \partial \delta) / (\partial \Sigma_s / \partial Y)$ plotted have been divided by $(\lambda + \lambda^{-1})$, in order to show the n and p-type results on the same scale. (Note that the limiting values of this quantity should be $\lambda / (\lambda + \lambda^{-1})$ and $-\lambda^{-1} / (\lambda + \lambda^{-1})$, so that the vertical distance between the limiting values should be 1, independent of $\lambda$). The theoretical curves have been drawn with the value $\ell n \chi = 2.5$, in order to give best fit between theory and experiment at the points at which the ordinate changes sign. (It may be seen from the form of (22) that, with the actual values of the other parameters, the main effect of adopting a different value of $\ell n \chi$ would be to shift the theoretical curve horizontally, while a change of $\lambda$ shifts it vertically, without in either case greatly modifying its shape). The fit between theory and experiment is not quite as good as might have been hoped, even taking into account the rather low accuracy of the measurements. The variation of $(\partial \Sigma_s / \partial \delta) / (\partial \Sigma_s / \partial Y)$ with $Y$ found experimentally seems to be rather slower than the theory would lead one to expect. The main points to make are: (i) the difference in $Y$ between the zeros for the two samples (5.4 ± 1) is about what it should be (4.8) on the assumption that $\ell n \chi$ is the same for both samples and of the order of unity; and (ii) paying attention mainly to the zeros, the estimate $\ell n \chi = 2.5$ is likely to be good to ±1.
Now let us consider the surface recombination velocity. Here we are on somewhat shakier ground, in that, in deriving (24), we have had to assume not only that $\chi$ is independent of $\nu$, but $(\sigma_n \sigma_p)^{1/2}$ also. First we note from (24) that the maximum value of $s$ should occur at $Y - \ell n \lambda = \ell n \chi$. Comparing with the experimental results given in the preceding paper,
Fig. 2 — Experiment and theory for
\[
\left[ \left( \frac{\partial \Sigma_s}{\partial \delta} \right) \Big/ \left( \frac{\partial \Sigma_s}{\partial Y} \right) \right] \Big/ (\lambda + \lambda^{-1})
\]
Solid lines, theory; circles and dots, with smooth curves through the points, represent experimental results for \( n \) and \( p \)-type samples, respectively.
we see maxima at \( Y - \ln \lambda = 2.0 \) for the \( p \)-type sample, and 3.5 for the \( n \)-type sample. Both these values are within the limits to \( \ln \chi \) given in the previous paragraph, thus confirming the estimate made there. Fig. 3 shows a comparison between the experimental results and (24). The graph has been fitted horizontally, by setting \( \ln \chi = 2.5 \), as found above; vertically, to agree with the mean value at that point. The agreement with experiment is reasonable, although again, just as in Fig. 2, the experimental variation of \( s \) with \( (Y - \ln \lambda) \) is rather slower than one would expect.
The fact that the experimental values, both of surface photo-voltage and of surface recombination velocity, vary more slowly than expected, is susceptible of a number of interpretations: (i) The deduced distribution of fast states might be wrong. However, the most likely alternative distributions — isolated levels, or a completely uniform distribution —
give (in at least some ranges of $Y$) a more rapid instead of a smoother variation of these quantities so long as the surface is homogeneous. (ii) The estimates of the changes in $Y$ might be too large. It is unlikely that our calibration is sufficiently in error, and other workers have obtained results comparable to ours. The only possibility would be that the mobility of carriers near the surface is larger (instead of smaller, as found by Schrieffer) than inside — which seems quite out of the question. (iii) The ratio of capture cross-sections varies with $\nu$. This, however, would only be in the right direction if one were to assume that the ratio $\chi$ increases with the height of the level in the gap — i.e., that the high states behave like acceptors, and the low ones like donors. While not quite impossible, this is an unlikely result. (iv) The surface is patchy. It is probable that a range of variation of two to four times $(kT/e)$ in surface potential would be sufficient to account for the observed slow variation of surface photo-voltage and recombination velocity with mean surface potential. We have refrained from detailed calculations of patch effects, on the grounds that, without detailed knowledge of the magnitude and distribution of the patches, it would be possible to construct a model that could indeed fit the facts, but one would have little confidence in the result. The possibility of patches warns us to view with caution the exact distribution function deduced for the fast states. It would still be conceivable, for example, that one has but two discrete states, as originally proposed by Brattain and Bardeen, and that the apparent existence of a band of states in the middle of the gap arises from the fact that there are always some parts of the surface at which the Fermi level is close to one or other of these states. Fortunately the conclusions as to the cross-sections are not too sensitive to the exact distribution function assumed.
Using the mean of the two coefficients in (25), substituting $n_i = 2.5 \times 10^{13} \text{ cm}^{-3}$, $\mathcal{L} = 1.4 \times 10^{-4} \text{ cm}$, $(v_{Tn} v_{Tp})^{1/2} = 1.0 \times 10^7 \text{ cm/sec}$, in (24), and using the experimental result (see Fig. 3) that $s_{\max}/(\lambda + \lambda^{-1}) = 1.2 \times 10^2 \text{ cm/sec}$, one obtains $(\sigma_p \sigma_n)^{1/2} = 5 \times 10^{-16} \text{ cm}^2$. Now setting $(\sigma_p/\sigma_n) = \chi^2 \sim e^5 \sim 150$, one gets for the separate cross-sections:
$$\sigma_p = 6 \times 10^{-15} \text{ cm}^2$$
$$\sigma_n = 4 \times 10^{-17} \text{ cm}^2$$
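The arithmetic is simply that of combining the fitted geometric mean with the ratio of cross-sections; the Python sketch below reproduces the quoted values:

```python
import math

geo_mean = 5e-16        # (sigma_p sigma_n)^(1/2) in cm^2, from the fit to (24)
chi = math.exp(2.5)     # ln chi = 2.5, i.e. sigma_p/sigma_n = chi^2 ~ e^5 ~ 150
sigma_p = geo_mean * chi    # geometric mean times (sigma_p/sigma_n)^(1/2)
sigma_n = geo_mean / chi

print(sigma_p, sigma_n)  # ~6e-15 cm^2 and ~4e-17 cm^2
```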
These values appear to be eminently reasonable. Burton et al., who studied recombination through body centres associated with nickel and copper in germanium, found $\sigma_p > 4 \times 10^{-15} \text{ cm}^2$, $\sigma_n = 8 \times 10^{-17} \text{ cm}^2$ for nickel, and $\sigma_p = 1 \times 10^{-16}$, $\sigma_n = 1 \times 10^{-17}$ for copper. The fact that
Fig. 3 — Experiment and theory for surface recombination. Solid curve, theory; circles and dots, experimental results for $n$ and $p$-type samples, respectively.
our estimates for $\sigma_p$ and $\sigma_n$ appear to be of the expected order of magnitudes lends strong support to the view that identifies the traps appearing in the field-effect and surface photo-voltage experiments with those responsible for surface recombination.
The result that $(\sigma_p/\sigma_n) = 150$ is good evidence that the fast states are acceptor-like. This statement must be restricted to the range $|\nu| < 4$; the states that are outside this range might be of either type. Also one might allow a rather small fraction of the states near the middle to be donor-type, without serious trouble; but the experimental results compel one to believe that most of the fast states within 0.1 volts or so of the centre of the gap are acceptor-like.
V. TRAPPING KINETICS
The foregoing considerations have concerned the steady-state solution to the surface trapping problem. If the experimental constraints are changed sufficiently rapidly, however, there may be effects arising from the finite time required for the charge in surface states to adapt itself.
to the new conditions.\textsuperscript{14} This section will concern itself with the trapping time constants (which are not directly related to the rate of recombination of minority carriers).
One case of trapping kinetics has been discussed by Haynes and Hornbeck.\textsuperscript{9} A general treatment of surface trapping kinetics is necessarily quite involved, and will be taken up in a future paper. Here we shall restrict ourselves to giving an elementary argument relating to the high-frequency field effect experiment of Montgomery.\textsuperscript{15} To simplify the discussion, we assume that the surface in question is of the "super" type; i.e., the surface excess of the bulk majority carrier is large and positive. At time $t = 0$, a large field is suddenly applied normal to the surface; the induced charge appears initially as a change in the surface excess of the bulk majority carrier; as time elapses, charge transfer between the space-charge region and the fast states takes place, until equilibrium with the fast states has been re-established. What time constant characterizes this process?
Take electrons as the majority carrier. Then the flow of electrons into the fast states must equal the rate of decrease of the surface excess of electrons. For a single level one may write:
$$U_n = N_t v_{Tn} \sigma_n [(1 - f_t) n_s - f_t n_1]$$
$$= -\dot{\Gamma}_n$$
(26)
For a continuous distribution of levels, one can say that only those levels within a few times $(kT/e)$ of the Fermi level at the surface will be effective, so that one may regard the distribution as being equivalent to a single state with $n_1 = n_i \exp (Y - \ln \lambda)$, which will be about half full. We assume further that the density of fast states is sufficient for the changes in $\Gamma_n$ to be large in comparison with those in $f_t$, as is reasonable, having regard to the relative magnitudes of the measured values of $(\partial \Sigma_s / \partial Y)_\delta$ found in the present research, and of $(\partial \Gamma_p / \partial Y)_\delta$ and $(\partial \Gamma_n / \partial Y)_\delta$. Thus $f_t$ may be treated as a constant in equation (26). Further, we may set $n_s = 4 \Gamma_n^2 / n_i \mathcal{L}^2$, as may be proved from considerations on the space-charge region.\textsuperscript{16} Solving (26) with these conditions, one finds, for the transient change in $\Gamma_n$ between the initial and the quasi-equilibrium state:
$$\Delta \Gamma_n \propto \left(1 - \text{th} \frac{t}{\tau}\right)$$
(27)
where
$$\tau = \lambda e^{-Y} \mathcal{L} / [N_t v_{Tn} \sigma_n \sqrt{2} \sqrt{f_t (1 - f_t)}]$$
To clarify the order of magnitude of time constant involved, let us substitute \( \mathcal{L} \sim 10^{-4} \) cm, \( N_t \sim 10^{11} \) cm\(^{-2}\), \( v_{Tn} \sim 10^7 \) cm/sec., \( \sigma_n \sim 10^{-15} \) cm\(^2\), \( f_t \sim 0.5 \), \( \lambda e^{-\gamma} \sim 1 \). This gives \( \tau \sim 10^{-7} \) sec, which suggests that one would be unlikely to run into trapping time effects in the field-effect experiment at frequencies less than 10 Mcyc/sec. This conclusion is consonant with the findings of Montgomery.\(^{15}\)
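The substitution can be written out explicitly (Python sketch, using the order-of-magnitude values just quoted):

```python
import math

L_eff   = 1e-4    # effective length L, cm
N_t     = 1e11    # trap density, cm^-2
v_Tn    = 1e7     # electron thermal velocity, cm/sec
sigma_n = 1e-15   # electron capture cross-section, cm^2
f_t     = 0.5     # occupancy of states near the Fermi level
lam_eY  = 1.0     # lambda * exp(-Y) taken ~ 1

# trapping time constant given above
tau = lam_eY * L_eff / (N_t * v_Tn * sigma_n * math.sqrt(2) * math.sqrt(f_t * (1 - f_t)))
print(tau)  # ~1.4e-7 sec
```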
**Appendix 1**
**Evaluation of the Integrals in Section 4**
The integrals occurring in Section 4, giving the experimentally accessible quantities \( (\partial \bar{\Sigma}_s / \partial Y) \), \( (\partial \bar{\Sigma}_s / \partial \delta) \) and \( s \) in terms of the surface trap distribution and cross-sections, are conveniently evaluated by contour integration. In view of the general applicability of this method in dealing with integrals of the sort that arise from such a distribution of traps, we include here a short note on the procedure used. The integrals needed are:
\[
I_1 = \int_{-\infty}^{+\infty} ch(cx + g) \sech^2 x \, dx
\]
\[
I_2 = \int_{-\infty}^{+\infty} th(x + b) \, ch(cx + g) \sech^2 x \, dx
\]
\[
I_3 = \int_{-\infty}^{+\infty} \frac{ch(cx + g)}{chx + chk} \, dx
\]
To evaluate \( I_1 \), we evaluate \( \int ch(cz + g) \sech^2 z \, dz \) around the contour shown in Fig. 4. The contributions from the parts \( z = \pm R \) vanish in the limit \( R \to \infty \), so that the integral has the value:
\[
(1 - \cos c\pi) \int_{-\infty}^{+\infty} ch(cx + g) \sech^2 x \, dx - i \sin c\pi \int_{-\infty}^{+\infty} sh(cx + g) \sech^2 x \, dx
\]

The integrand has one pole within the contour, at $z = \frac{1}{2}i\pi$, at which the residue is $-c(\cos \frac{1}{2}c\pi \, sh \, g + i \sin \frac{1}{2}c\pi \, ch \, g)$. Multiplying by $2\pi i$ and equating the real part to that in the above expression, one obtains:
$$I_1 = \pi c \csc \frac{1}{2}c\pi ch g$$
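The closed form can be confirmed by direct numerical quadrature (Python sketch; a composite Simpson rule suffices, and the integral converges only for \( |c| < 1 \)):

```python
import math

def I1_closed(c, g):
    # contour-integration result: I1 = pi c cosec(c pi / 2) ch g
    return math.pi * c / math.sin(0.5 * math.pi * c) * math.cosh(g)

def I1_quad(c, g, R=40.0, n=8000):
    # composite Simpson rule for ch(cx + g) sech^2 x over (-R, R);
    # the integrand falls off like exp(-(2 - |c|)|x|), so R = 40 is ample
    f = lambda x: math.cosh(c * x + g) / math.cosh(x) ** 2
    h = 2.0 * R / n
    s = f(-R) + f(R) + sum((4 if i % 2 else 2) * f(-R + i * h) for i in range(1, n))
    return s * h / 3.0

print(I1_quad(0.5, 0.3), I1_closed(0.5, 0.3))  # agree to high accuracy
```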
The same contour is used in evaluating $I_2$; there are now poles at $z = \frac{1}{2}i\pi$ and at $z = \frac{1}{2}i\pi - b$, and one obtains:
$$I_2 = \pi c \coth b \, ch \, g \csc \tfrac{1}{2}c\pi$$
$$- 2\pi \csc \tfrac{1}{2}c\pi \, \cosech^2 b \, sh \, \tfrac{1}{2}bc \, ch(\tfrac{1}{2}bc - g)$$
To evaluate $I_3$, one integrates $\int [ch(cz + g)/(chz + chk)] dz$ around the contour shown in Fig. 5. There are poles at $i\pi \pm k$. Proceeding as before, one finds:
$$I_3 = 2\pi \, sh \, ck \, ch \, g \csc \pi c \, \cosech \, k$$
**Appendix 2**
**Limitation of Surface Recombination Arising from the Space-Charge Barrier**
The question of the resistance to flow of carriers to the surface arising from the change in potential across the space-charge layer has been discussed by Brattain and Bardeen.\(^2\) Here we shall recalculate this effect by a better method, which again shows that, within the range of surface potential studied, the effect of this resistance on the surface recombination velocity is for etched surfaces quite negligible.
Let $I_p$ and $I_n$ be the hole and electron (particle) currents towards the surface, and let $x$ be the distance in a direction perpendicular to the surface, measuring $x$ positive outwards. Then the gradient of the quasi-Fermi levels $\varphi_p$ and $\varphi_n$ at any point is given by:
$$\nabla \varphi_p = -\frac{I_p}{\mu_p p}, \qquad \nabla \varphi_n = +\frac{I_n}{\mu_n n}$$ \hspace{1cm} (1)

Then the total additional change in $\varphi_p$ and $\varphi_n$ across the space-charge region, arising from the departure in uniformity in the carrier densities $p$ and $n$, is:
\[
\Delta \varphi_p = -\frac{I_p}{\mu_p} \int \left( \frac{1}{p} - \frac{1}{p_0} \right) dx \\
\Delta \varphi_n = \frac{I_n}{\mu_n} \int \left( \frac{1}{n} - \frac{1}{n_0} \right) dx
\]
(2)
Suppose now that the true surface recombination rate is infinite, so that the quasi-Fermi levels must coincide at the surface, and:
\[
\varphi_p + \Delta \varphi_p = \varphi_n + \Delta \varphi_n
\]
(3)
These equations, together with the known space-charge equations,\textsuperscript{16} complete the problem. Notice first, from (2), that $\Delta \varphi_p$ will be large only if there is a region in which $p$ is small ($Y \gg 1$), while $\Delta \varphi_n$ is large only when, in some region, $n$ is small ($Y \ll -1$). Introducing the quantity $\delta$, approximating for $\delta$ small, equating $I_p$ and $I_n$ and setting the result equal to $sn_s \delta$, one finds:
\[
Y \ll -1 \\
s \rightarrow (D_n/\mathcal{L})(\lambda^{1/2} + \lambda^{-3/2})e^{\frac{1}{2}Y}
\]
\[
Y \gg 1 \\
s \rightarrow (D_p/\mathcal{L})(\lambda^{-1/2} + \lambda^{3/2})e^{-\frac{1}{2}Y}
\]
(4)
The coefficients $(D_n/\mathcal{L})$ and $(D_p/\mathcal{L})$ are of the order of $4 \times 10^5$ cm/sec. The most extreme case encountered in our work is that occurring at the ozone extreme for the n-type sample ($\lambda = 0.34$, $Y = -6$), for which the surface recombination velocity, if limited by space-charge resistance alone, would be about one-quarter of this ($10^5$ cm/sec). The fact that the observed surface recombination velocity is lower than that by more than two orders of magnitude shows that space-charge resistance is not a limiting factor in the present experiments. Equations (4) might well hold on a sand-blasted surface, however, where the trap density is much higher.
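The quoted figure for the ozone extreme follows from the first of equations (4) by direct substitution (Python sketch):

```python
import math

D_over_L = 4e5        # (D_n / L) in cm/sec, order-of-magnitude value from the text
lam, Y = 0.34, -6.0   # ozone extreme of the n-type sample

# barrier-limited surface recombination velocity for Y << -1, first of equations (4)
s_limit = D_over_L * (lam ** 0.5 + lam ** -1.5) * math.exp(0.5 * Y)
print(s_limit)  # ~1.1e5 cm/sec, about one-quarter of 4e5
```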
**References**
1. W. L. Brown, Surface Potential and Surface Charge Distribution from Semiconductor Field Effect Measurements, Phys. Rev., 98, p. 1565, June 1, 1955.
2. W. H. Brattain and J. Bardeen, Surface Properties of Germanium, B.S.T.J., 32, pp. 1–41, Jan., 1953.
3. W. H. Brattain and C. G. B. Garrett, page 1019 of this issue.
4. D. T. Stevenson and R. J. Keyes, Measurements of Surface Recombination Velocity at Germanium Surfaces, Physica, 20, pp. 1041–1046, Nov. 1954.
5. R. N. Hall, Electron-Hole Recombination in Germanium, Phys. Rev., 87, p. 387, July 15, 1952.
6. W. Shockley and W. T. Read, Jr., Statistics of the Recombination of Holes and Electrons, Phys. Rev., 87, pp. 835–842, Sept. 1, 1952.
7. T. M. Buck and F. S. McKim, Depth of Surface Damage Due to Abrasion on Germanium, J. Elec. Chem. Soc., in press.
8. H. H. Madden and H. E. Farnsworth, Effects of Ion Bombardment Cleaning and of Oxygen Adsorption on Life Time in Germanium, Bull. Am. Phys. Soc., II, 1, p. 53, Jan., 1956.
9. J. A. Hornbeck and J. R. Haynes, Trapping of Minority Carriers in Silicon. I. P-Type Silicon. II. N-Type Silicon, Phys. Rev., 97, pp. 311–321, Jan. 15, 1955, and 100, pp. 606–615, Oct. 15, 1955.
10. Ig. Tamm, Uber eine mögliche Art der Elektronenbindung an Kristalloberflächen, Phy. Zeits. Sowj., 1, pp. 733–746, June, 1932.
11. H. C. Montgomery and W. L. Brown, Field-Induced Conductivity Changes in Germanium, Phys. Rev., 103, Aug. 15, 1956.
12. J. A. Burton, G. W. Hull, F. J. Morin and J. C. Severiens, Effects of Nickel and Copper Impurities on the Recombination of Holes and Electrons in Germanium, J. Phys. Chem., 57, pp. 853–859, Nov. 1953.
13. H. Statz, G. A. deMars, L. Davis, Jr., and H. Adams, Jr., Surface States on Silicon and Germanium Surfaces, Phys. Rev., 101, pp. 1272–1281, Feb. 15, 1956.
14. C. G. B. Garrett, The Present Status of Fundamental Studies of Semiconductor Surfaces in Relation to Semiconductor Devices, Proc. West Coast Electronics Components Conf. Los Angeles, pp. 49–51, June, 1955.
15. H. C. Montgomery and B. A. McLeod, Field Effect in Germanium at High Frequencies, Bull. Am. Phys. Soc., II, 1, p. 53, Jan., 1956.
16. C. G. B. Garrett and W. H. Brattain, Physical Theory of Semiconductor Surfaces, Phys. Rev., 99, pp. 376–387, July 15, 1955.
17. P. A. Wolff, Private communication. |
1. Introduction
High power fibre lasers (HPFLs) find applications in the material processing, automotive, medical, telecoms and defence industries. Over 1 kW of output power has been demonstrated [1], and the race to scale up the power while maintaining excellent beam quality and achieving impressive power conversion efficiency is ongoing. As the HPFL technology matures, automated simulation-based optimization is expected to contribute significantly to the formulation of optimal designs and to improve intuition for the conception of new fibre lasers. This chapter explores the common ground between computational photonics and direct search optimization methods with the prospect of proposing optimized fibre designs for HPFLs.
Published work on the subject of pump light enhancement in the active core of cladding pumped fibres could be categorized as follows:
a. Pump-absorbing ion system optimization [2-5]
b. Pumping techniques focusing on how to couple more power into the inner cladding [6-10]
c. Fibre designs that focus on maximizing the overlap between the coupled pump light and absorbent core volume [11-15]
d. Holistic solutions that attempt to address (b) and (c) simultaneously [16-18]
Schemes in (c) are usually compatible with categories (b) and (d), meaning that the special fibres proposed by (c) can be pumped by schemes in (b), or they can be modified for use in the schemes of category (d) to further increase the pump absorption.
In category (b), Koplow et al [9] list a set of evaluation criteria for pumping schemes and propose an embedded mirror side pumping scheme after discussing the contemporary pumping methods. Their technique initially appeared attractive and was therefore tested numerically within the computational environment of the simulation method proposed in [19]. It was found, however, that it does not benefit from fibre cross-section optimization because it reduces the percentage of higher order modes, resulting in absorption degradation. Another side pumping technique which, in contrast with the previous one, did not require machining of the pumped fibre was proposed by Polynkin et al [8]. A DCF was pumped via evanescent field coupling. This scheme appears to be fully compatible with the incorporation of optimized fibre topologies in place of the conventional circular inner cladding with centred core. Lassila [6] proposed a scalable side pumping scheme that could benefit from tailored axially symmetrical (presumably as far as the inner cladding is concerned) cross sections.
A pump absorption enhancing scheme that could fit in category (c) was proposed by Baek et al [14]. The authors incorporated a long period fibre grating (LPFG) in a cladding pumped configuration and measured a 35% increase in pump absorption as a direct result of the LPFG. A similar approach based on the reflection of the residual pump light was reported by Jeong et al [15]. The free end of the single-end pumped DCF was shaped into a right-angled cone that reflected more than 55% of the unabsorbed pump light, offering an 18% increase in absorbed pump power. This is one more scheme which could benefit from optimised fibre topologies. Recently, the use of a large area helical core was proposed [11] for the enhancement of pump absorption and simultaneous rejection of high order lasing modes, naturally suggesting that optimized helical solid-state holes (that may be fabricated by rotation just like the helical core) could exhibit a tapering effect [19] similar to that reported here. This could avoid the need to increase the core area when increasing the inner clad area [12] to accept more pump power.
In the category of holistic solutions, Kouznetsov and Moloney proposed [16] and modelled analytically [17] the tapered slab delivery of multimode pump light to a small diameter inner cladding. This scheme combines the specially designed pump waveguide and corresponding inner cladding that could also fit in the shallow-angle single pumping category listed in [9]. It benefits highly from the coupling of multimode light into a narrow inner cladding while potential drawbacks are the leakage of high order pump modes and the fabrication difficulties. An alternative approach is demonstrated experimentally by Peterka et al [18]. The proposed DCF is single-end pumped and has a double-D cross section with the core at the centre of its half section. The input side is processed so that signal and pump delivery fibres can be spliced on the two specially fabricated facets. Overall, a promising way forward appears to be the development of generic and modular solutions within categories (a), (b) and (c) and then the synergistic combination of the three. This would act as a practical two-stage approach that could amplify the pump absorption enhancement and consequently the laser output power.
The results reported in this chapter fit in the aforementioned second category of pump absorption enhancing schemes. The interpretation of the original NM algorithm [20] as well as the deterministic cross section shape perturbation technique [21] are presented in this chapter in the form of structured pseudocode-functions. The proposed notation serves as the background for the development and validation of improved methods. Furthermore, additional fibre topology encoding schemes at higher dimensions are introduced and a modern interpretation of NM is given prior to the proposal of stochastically enhanced NM forms described in pseudocode syntax. The proposed algorithms are compared with commercial implementations of the genetic algorithm (GA) [22], generalised pattern search (GPS) [25-27] and mesh adaptive direct search (MADS) [28-29] methods that are also tested here for their performance and suitability. All the aforementioned algorithms share a set of common characteristics: they can operate exclusively on the function values (zeroth-order or derivative-free or direct search methods) and they were tuned to their most parsimonious instances to the extent that their global convergence properties were not compromised. Here, the term global convergence is used to mean first-order convergence from an arbitrary, possibly distant, starting point. It does not mean convergence to a point $x_* : f(x_*) \leq f(x), \forall x \in \mathbb{R}^n$, adhering to the terminology in the extensive review of direct search methods by Kolda et al. [23]. A third common characteristic is that they all call a 3-dimensional (3-D) fibre simulation method, described and validated in [19], in order to evaluate the objective function.
The contributions made in this chapter are summarized below:
- First reported stochastic simulation-based optimization of DCF topologies (to the best of the authors' knowledge)
- Pseudocode descriptions of proposed algorithms for ease of verification and/or use by other researchers
- Benchmarking of several optimization algorithms with an emphasis on their statistical nature
- An optimization problem description scheme that allows the incorporation of inhomogeneous independent variables
- The proposal of perturbed stochastic search patterns as generalizations of the simplex formation pattern with possible applicability in pattern search algorithms
- The concept of implicitly constrained optimization via perturbed pattern search
- The proposal of the enhanced stochastically perturbed Nelder-Mead (ESPNM) method for implicitly constrained global optimization with simple bounds
- The unified description of NM, NM’s stochastic forms, GPS and MADS methods based on the pattern search concept
- Mostly globally (as opposed to mostly locally in [21]) optimized DCF designs with an emphasis on manufacturability and modular design
The next section describes a set of optimization schemes on relatively low dimensions and compares NM, NM’s stochastic variants (simple sampling Monte Carlo techniques), GA, GPS and MADS methods. Section 3 focuses on optimization schemes and algorithms in higher dimensions, introduces the perturbed patterns for simple and importance sampling Monte Carlo optimization and compares the algorithms introduced there. The simulation parameters as well as the settings of each algorithm are given in section 4 where the optimization results for DCFs with polymer as well as air outer cladding are also discussed. Finally, section 5 concludes this chapter.
2. Bound-constrained zeroth-order optimization algorithms
The optimization problem considered in this chapter is
\[
\min_{P \in \Omega} f(P), \quad f : \mathbb{R}^n \rightarrow \mathbb{R} \cup \{\pm \infty\} \tag{1}
\]
where,
\[
f(P) = -P_{\text{abs,tot}}(P), \tag{2}
\]
$P$ is a point in $\mathbb{R}^n$, $n$ is the number of variables and $\Omega$ is the bounded function domain. Equation (2) gives the objective function which maps a DCF topology to the corresponding negative total absorbed pump power value [19]. The current notation partly adheres to that of [23] by assuming that
\[
\Omega = \left\{ P \in \mathbb{R}^n : l \leq P \leq u \right\}, \text{ where } l, u \in \left( \mathbb{R} \cup \{\pm \infty\} \right)^n. \tag{3}
\]
The optimization domain $\Omega$ constitutes a declaration of the computational bounds and physically meaningful function domain. It acts as a barrier when applying the optimization algorithm not to $f$ but to $f_\Omega$ where
\[ f_\Omega(P) = \begin{cases}
f(P) & \text{if } P \in \Omega \\
+\infty & \text{otherwise}
\end{cases}. \tag{4} \]
The current work attempts to solve a simulation-based optimization problem where the objective function can be evaluated to only a few significant figures. This observation, along with the noise that may be present in the computed function values and the expense of these computations, renders the calculation of derivatives impossible or impractical. Hence the optimization problem is treated with direct search methods.
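The barrier construction of equation (4) is straightforward to implement. The sketch below wraps a stand-in objective with an extreme-barrier test over simple bounds; the quadratic is purely illustrative, since the chapter's real objective is the 3-D fibre simulation of [19].

```python
import math

def extreme_barrier(f, lower, upper):
    """Wrap an objective so that points outside the box [lower, upper]
    return +inf, as in the barrier construction of Eq. (4). A minimal
    sketch: the real objective is a fibre simulation, replaced here by
    a purely illustrative quadratic stub."""
    def f_omega(p):
        inside = all(l <= x <= u for x, l, u in zip(p, lower, upper))
        return f(p) if inside else math.inf
    return f_omega

f = lambda p: sum(x * x for x in p)          # hypothetical stand-in objective
f_box = extreme_barrier(f, [-1.0, -1.0], [1.0, 1.0])
print(f_box([0.5, 0.5]), f_box([2.0, 0.0]))  # 0.5 inf
```

Because the wrapped function only returns values, any of the direct search methods discussed below can consume it unchanged.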
The GA, GPS and MADS methods are implemented here via the commercially available ‘genetic algorithm and direct search toolbox’ within the MATLAB technical computing environment. The amount of subjective evaluation of the aforementioned algorithms was kept to a minimum by carefully tuning their parameters so that both global convergence and low computational cost are served in a well balanced way. Moreover, directly comparable sets of optimizations were performed by each method in order to gain statistical insight into the benefits of each algorithm and build intuition into their performance for a more objective judgment.
The NM simplicial search method has been comprehensively studied theoretically [30-32], extensively applied mostly in chemistry but also in optics [33-34], criticized for its inadequacies [35], remedied [36], enhanced [37,38] and even stochastically incorporated [39]. However, all theoretical improvements have led to a reduction in its computational efficiency. The main strength of the original algorithm is that, when it succeeds, it offers the best efficiency, indicating that the most successful modifications of the simplex descent concept, with applications in computationally intensive problems, are expected to be those that keep the number of required function evaluations to a minimum. Due to NM's susceptibility to different interpretations and the need to clearly and concisely describe the NM-based methods proposed here, its current interpretation is crystallized in algorithm NM and the associated subalgorithms NM_SimplexGener, FuncEval, SmxAssessm and NM_Step. The latter follows the modern practice examined by Lagarias [30] and is described in section 3 as a subset of subalgorithm ESPNM_Step introduced there. Algorithm NM shows distinctively its two main operations: the generation of the initial simplex ($S_0$) along orthogonal directions around the start point (at line 3) and the line search procedure recursively executed (at line 10) by calling NM_Step during an iteration (while loop: lines 8-12). The descent path is governed by the descent coefficient set (reflection, expansion, contraction, shrinkage) assigned in line 6.
Line 2 of algorithm NM implies that the generation of the initial simplex (a polytope in \( \mathbb{R}^n \) with \( n+1 \) vertices - the minimum number of points required to capture first order information) is essentially a pattern search operation along all \( n \) directions denoted by the column vectors of the \( n \times n \) pattern matrix (\( \Xi_{NM} \)), which in this case is simply the identity matrix (\( \Xi_{NM} = I_{n \times n} \)). The initial simplex is generated by the subalgorithm NM_SimplexGener with respect to the start point and the vector \( M \) where the mesh sizes of all the independent variables are stored. In this way, the simultaneous optimization of inhomogeneous variables (of different physical meaning, units and mesh size) is practically implemented. An example is the case where the diameter and refractive index of an embedded hole are simultaneously optimized. Essentially, this is the integration of a parametric optimization procedure into a more robust non-parametric optimization scheme.
An important implication of subalgorithm NM_SimplexGener is that it should form a nondegenerate initial ($j = 0$) simplex. That is,
$$\text{vol}(S_j) = \frac{\left| \det \left( P_1^{(j)} - P_{n+1}^{(j)}, P_2^{(j)} - P_{n+1}^{(j)}, \ldots, P_n^{(j)} - P_{n+1}^{(j)} \right) \right|}{n!} > 0 \tag{5}$$
The satisfaction of inequality (5) is important in order to conserve the numerical integrity of the ‘flexible polytope’ when descending in $\mathbb{R}^n$ and avoid convergence to a non-minimizer after collapsing one or more of its vertices on the hyperplane of others [35].
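Inequality (5) is easy to monitor numerically. The sketch below computes the simplex volume from the vertex differences; the example vertices are illustrative only.

```python
import math

import numpy as np

def simplex_volume(vertices):
    """Volume of a simplex in R^n from its (n+1) vertices, using the
    determinant formula of inequality (5); vertices is (n+1) x n."""
    V = np.asarray(vertices, dtype=float)
    edges = (V[:-1] - V[-1]).T        # columns P_i - P_{n+1}, i = 1..n
    n = V.shape[1]
    return abs(np.linalg.det(edges)) / math.factorial(n)

# Illustrative vertices: a nondegenerate triangle and a collapsed one.
print(simplex_volume([[0, 0], [1, 0], [0, 1]]))   # 0.5
print(simplex_volume([[0, 0], [1, 1], [2, 2]]))   # 0.0, degenerate
```

A vanishing volume flags the vertex collapse described above before the descent converges to a non-minimizer.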
A simple sampling Monte Carlo approach is exercised here by means of the stochastic Nelder-Mead method (SNM) with the prospect of increasing NM's efficiency and its probability of finding a global minimizer in low dimensions. The SNM method is partly implemented by substituting line 2 in algorithm NM with
**Algorithm NM.** Interpretation of the modern Nelder-Mead (NM) method:
$$\left[ P_b, f_b, \sigma_j \right] = \text{NM}\left( P_1, M, \sigma_{\text{tol}}, \Omega \right)$$
**Input:** (start point $P_1 = [p_{1,1}, p_{2,1}, \ldots, p_{n,1}]^T$ in $\mathbb{R}^n$, mesh size vector $M = [m_1, m_2, \ldots, m_n]^T$, stopping value $\sigma_{\text{tol}}$ for the halting criterion and optimization domain $\Omega = [B_1, B_2, \ldots, B_n]^T$ where $B_i = [p_{i,\text{min}}, p_{i,\text{max}}]^T,\ i = 1, 2, \ldots, n$ (bounds)). **Output:** [optimal point, corresponding function value, standard deviation of $\{f_i \mid i = 1, 2, \ldots, n+1\}$ after the last iteration].
1. $j := 0$ // iteration index
2. $\Xi_{\text{NM}} := [\xi_1, \xi_2, \ldots, \xi_n] = I_{n \times n}$ // initial simplex formation pattern
3. call $[S_0] = \text{NM\_SimplexGener}(P_1, \Xi_{\text{NM}}, M, n)$ // nondegenerate initial simplex
4. for each $P_i \mid i = 1, 2, \ldots, n+1$ call $[f_i] = \text{FuncEval}(P_i, \Omega)$ endfor // objective function evaluations
5. $F_0 := [f_1, f_2, \ldots, f_{n+1}]_{1 \times (n+1)}$ // initial objective matrix
6. $\{r, e, c, s\} := \{1, 2, 1/2, 1/2\}$ // descent coefficients standard values
7. call $[f_w, f_b, P_w, P_b, \bar{f}, \bar{P}] = \text{SmxAssessm}(S_j, F_j)$ // current simplex $(S_j)$ assessment
8. while $(\sigma_j \geq \sigma_{\text{tol}})$ // where $\sigma_j = \left\langle \left( f_i - \bar{f} \right)^2 \mid i = 1, 2, \ldots, n+1 \right\rangle^{1/2}$ (descent halting criterion)
9. $j := j + 1$ // increment
10. call $[S_j, F_j, \text{step}] = \text{NM\_Step}(P_w, P_b, \bar{P}, \Omega, f_w, f_b, r, e, c, s, S_{j-1}, F_{j-1})$ // NM step
11. call $[f_w, f_b, P_w, P_b, \bar{f}, \bar{P}] = \text{SmxAssessm}(S_j, F_j)$ // simplex assessment
12. endwhile // end of iteration loop
13. return $P_b, f_b, \sigma_j$ // output.
Subalgorithm NM_SimplexGener. NM initial simplex generation:
\[ \left[ S_0 \right] = \text{NM\_SimplexGener} (P_1, \Xi_{NM}, M, n) \]
Input: (start point \( P_1 \), NM pattern, mesh size vector and length of \( P_1 \)). Output: [initial simplex matrix].
1. for each simplex vertex in the set \( \{ P_i \mid i = 2, 3, \ldots, n+1 \} \)
2. \( P_i := P_1 + (M \circ \xi_{i-1}) \) // where \( \circ \) denotes the elementwise (Hadamard) product: \( [a_i]_{n \times 1} \circ [b_i]_{n \times 1} = [a_i b_i]_{n \times 1} \)
3. endfor
4. \( S_0 := \left[ P_1, P_2, \ldots, P_{n+1} \right]_{n \times (n+1)} \)
5. return \( S_0 \) // output.
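The simplex generation step above can be sketched numerically as follows, assuming the identity pattern of algorithm NM; the variable names and the example mesh sizes are illustrative and not taken from the chapter's simulations.

```python
import numpy as np

def nm_simplex_gener(p1, mesh, pattern=None):
    """Sketch of subalgorithm NM_SimplexGener: the remaining n vertices
    are formed by stepping from the start point along the pattern
    columns, scaled elementwise (Hadamard product) by the per-variable
    mesh sizes, so inhomogeneous variables get individual step lengths."""
    p1 = np.asarray(p1, dtype=float)
    mesh = np.asarray(mesh, dtype=float)
    n = p1.size
    xi = np.eye(n) if pattern is None else np.asarray(pattern, dtype=float)
    vertices = [p1] + [p1 + mesh * xi[:, i] for i in range(n)]
    return np.column_stack(vertices)   # S_0, an n x (n+1) matrix

# Inhomogeneous variables: a hole offset (microns) and a refractive
# index, each with its own mesh size; the values are illustrative only.
S0 = nm_simplex_gener(p1=[5.0, 1.43], mesh=[0.5, 0.005])
print(S0.shape)   # (2, 3)
```

Passing a different `pattern` matrix yields the stochastic variant discussed next.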
Subalgorithm FuncEval. Objective function evaluation:
\[ \left[ f_i \right] = \text{FuncEval} (P_i, \Omega) \]
Input: (point in \( \Re^n \), bounds). Output: [function value].
1. if \( P_i \in \Omega \) then
2. \( f_i := f(P_i) \) // compute (simulate)
3. else
4. \[ f_i := +\infty \] // assign a large positive value
5. endif
6. return \( f_i \) // output.
Subalgorithm SmxAssessm. Simplex assessment:
\[ \left[ f_w, f_b, P_w, P_b, \bar{f}, \bar{P} \right] = \text{SmxAssessm} (S, F) \]
Input: (simplex and objective matrices). Output: [worst and best function values, corresponding points, mean function value and centroid for \( i \neq w \)].
1. \( f_w := \max \left\{ f_i \mid i = 1, 2, \ldots, n+1 \right\} \); assign \( P_w \mid w = i(f_w) \)
2. \( f_b := \min \left\{ f_i \mid i = 1, 2, \ldots, n+1 \right\} \); assign \( P_b \mid b = i(f_b) \)
3. \( \bar{f} := \left\langle f_i \mid i = 1, 2, \ldots, n+1,\ i \neq w \right\rangle; \quad \bar{P} := \left\langle P_i \mid i = 1, 2, \ldots, n+1,\ i \neq w \right\rangle \)
4. return \( f_w, P_w, f_b, P_b, \bar{f}, \bar{P} \) // output.
\[
\Xi_{\text{SNM}} = \left[ \xi_1, \xi_2, \ldots, \xi_n \right] = \begin{pmatrix}
m_1 & 0 & \cdots & 0 \\
0 & m_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & m_n
\end{pmatrix} \tag{6}
\]
where \( \{m_i\}_{i=1,2,\ldots,n} \in [-1,1] \) is a set of uniformly distributed pseudorandom numbers. A short description of the pseudorandom number generator used is given in section 4. SNM can use subalgorithm NM_SimplexGener to generate the initial simplex after substituting \( \Xi_{\text{NM}} \) with \( \Xi_{\text{SNM}} \) in the set of input arguments. During the generation of \( S_0 \) around \( P_1 \), on the basis of \( \Xi_{\text{SNM}} \), a set of randomly signed orthogonal directions is searched while the initial mesh sizes also fluctuate randomly (between zero and their nominal values stored in \( M \)). The second and last part of the SNM implementation is to discard line 6 of algorithm NM and to add the following line
\[
\text{assign } r_j \in [0.5, 1],\; e_j \in [2, 4],\; c_j \in [0.25, 0.5],\; s_j \in [0.3, 0.7] \tag{7}
\]
just before line 10. The latter means that the descent coefficients are recursively set to uniformly distributed random values within the designated ranges. With regard to the modern understanding of the Nelder-Mead algorithm, the descent coefficients must satisfy the conditions \( r \in (0,+\infty) \), \( e \in \left(1,+\infty\right) \cap \left(r,+\infty\right) \), \( c \in [0,1] \) and \( s \in [0,1] \). According to Lagarias [30], the condition \( r \in (0,+\infty) \) is not stated explicitly in the original paper by Nelder and Mead but is implicit in the presentation of the original algorithm [20].
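Under those conditions, the randomized pattern of equation (6) and the coefficient draws of equation (7) can be sketched as follows. Python's default generator stands in here for the pseudorandom number generator described in section 4; this is a sketch, not the chapter's implementation.

```python
import random

def snm_random_pattern(n, rng=None):
    """Random diagonal pattern of Eq. (6): each identity column is
    scaled by a uniform draw from [-1, 1], so both the sign and the
    length of each initial-simplex step fluctuate between zero and the
    nominal mesh size."""
    rng = rng or random.Random()
    return [[rng.uniform(-1.0, 1.0) if i == j else 0.0
             for j in range(n)] for i in range(n)]

def snm_random_coefficients(rng=None):
    """Per-iteration descent coefficients of Eq. (7); the chosen ranges
    respect the modern NM conditions r > 0, e > 1 and e > r, and
    c, s in [0, 1]."""
    rng = rng or random.Random()
    return {"r": rng.uniform(0.5, 1.0), "e": rng.uniform(2.0, 4.0),
            "c": rng.uniform(0.25, 0.5), "s": rng.uniform(0.3, 0.7)}

print(snm_random_coefficients())
```

Seeding the generator (`random.Random(seed)`) makes individual SNM runs reproducible while preserving the run-to-run spread studied in figures 1 and 2.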
A significant role during an optimization is played by the corresponding optimization problem encoding key, which stores, in order, the independent variables of a fibre topology in a column vector (point \( P \) in \( \Re^n \)) that is read by the fibre simulator. The construction of the computation grid and/or the setting of the simulation parameters involved in the evaluation of the objective function are then based on the information encoded in the coordinates of \( P \). For fixed perimetric lines of laminas participating in a DCF cross section, the following encoding keys are used in this chapter:
\[
P = \left( y, z, y_{h,1}, z_{h,1}, y_{h,2}, z_{h,2}, \ldots, y_{h,N}, z_{h,N} \right)^\top \tag{8}
\]
in \( \Re^{2(N+1)} \) for an inner cladding topology embedding \( N \)-holes and a single active core. The first pair \( (y,z) \) of elements represents the coordinates of the core centroid on the cross section plane while each pair in the set \( \{(y_{h,i},z_{h,i})\}_{i=1,2,\ldots,N} \) appearing in \( P \), represents the centroid coordinates of the \( i \)-th hole. Equation (8) encodes a fibre topology according to the ‘Offset’ optimization scheme under which the centroid coordinates of each involved lamina are optimized independently. Following the same notation, the point
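A minimal sketch of the 'Offset' encoding key of equation (8) follows; the helper names are hypothetical, and the four-hole example mirrors the topology optimized later in \( \Re^{10} \).

```python
def encode_offset(core_xy, hole_xys):
    """'Offset' encoding key of Eq. (8): core centroid first, then each
    hole centroid, flattened into one point P in R^{2(N+1)}."""
    p = list(core_xy)
    for xy in hole_xys:
        p.extend(xy)
    return p

def decode_offset(p):
    """Inverse mapping: recover the core and hole centroids from P."""
    pairs = [(p[i], p[i + 1]) for i in range(0, len(p), 2)]
    return pairs[0], pairs[1:]

# Four holes at the corners of a centred square (illustrative values),
# giving a point in R^10 as in the figure 1 optimizations.
P = encode_offset((0.0, 0.0), [(1, 1), (-1, 1), (-1, -1), (1, -1)])
print(len(P))   # 10
```

The simulator-side decoder rebuilds the computation grid from exactly this coordinate ordering, which is what makes the key a fixed contract between optimizer and simulator.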
\[
P = \left( y, z, y_{h,1}, z_{h,1}, \ldots, y_{h,N}, z_{h,N}, d_1, \ldots, d_N \right)^\top \tag{9}
\]
in \( \Re^{3N+2} \) encodes the same topology under the ‘Offset-Diameter’ scheme that, in addition to (8), allows the simultaneous optimization of circular hole diameters or square hole side lengths. Furthermore, the point
\[
P = \left( y, z, y_{h,1}, z_{h,1}, \ldots, y_{h,N}, z_{h,N}, A_1, \ldots, A_N, B_1, \ldots, B_N \right)^\top \tag{10}
\]
in \( \Re^{2(2N+1)} \) allows, in addition to (8), the independent optimization of the horizontal and vertical characteristic dimension of each hole (major-minor axis of an ellipse for initially circular holes and length-height of a parallelepiped for initially square holes). Encoding keys (9) and (10) demonstrate the need for individually defined mesh sizes tailored to the domains within which the search for optimal values is desired. This view is reinforced by
\[
P = \left( y, z, y_{h,1}, z_{h,1}, \ldots, y_{h,N}, z_{h,N}, A_1, \ldots, A_N, B_1, \ldots, B_N, R_1, \ldots, R_N \right)^\top \tag{11}
\]
that includes variables representing refractive index values. The independent variables in (11) are inhomogeneous not only in terms of corresponding mesh size and domain but also in physical meaning and units. In this case, point \( P \) in \( \Re^{5N+2} \) represents the 'Offset-Major-Minor-Index' optimization scheme that expands (10) by including the refractive indices \( \{R_i\}_{i=1,2,\ldots,N} \) of dielectric holes embedded in the inner cladding.
Four groups of thirty optimizations are executed next in each of the \( \Re^{10} \) and \( \Re^{18} \) spaces in order to compare the performance of SNM variants with algorithm NM and in relation to the dimensionality of the optimization space. To avoid fragmentation, it is adequate for the current discussion to report that all optimizations were initiated from the same start point. The latter represents a double-clad design with a polymer outer cladding and four rods embedded in the inner cladding (solid-state circular dielectric holes) assumed to be made of CBYA alloy glass with a refractive index of 1.430 [40,19].
Figure 1 demonstrates the \( \Re^{10} \) sets of optimizations executed following different strategies. The type of search strategy is denoted by the [Initial simplex, Descent coefficients] pair where the letter D denotes deterministic as opposed to S denoting stochastic implementation. The initial circular inner cladding topology with centred core included four symmetrically embedded circular holes at the corners of a centred square and absorbed \( P_{\text{abs, tot}} = 8.60 \text{W} \).
Fig. 1. Four groups of 30 optimizations in \( \Re^{10} \) from the same starting point and under different optimization strategies: (a) SNM[D,D] = NM (here for variable mesh size). (b) SNM[S,D]. (c) SNM[D,S]. (d) SNM[S,S].
section 4 was used here after verifying that approximately the same trends were followed. The fibre was 1 cm long and 126 rays carried the pump energy while the rest of the parameters were kept constant. The graphs along the first row of figure 1 plot the values of the total absorbed pump power (optimized as a function of the core and hole offsets) while those along the second row present the corresponding number of objective function evaluations recorded prior to convergence. The mean value ($\mu$ - dashed line) and standard deviation ($\sigma$) of the plotted values are also reported in each graph. Figure 1(a) (1st column) reveals the influence of the mesh size random variance on the NM results. The SNM[S, D] strategy results are shown in figure 1(b) where the initial simplex vertices are formed stochastically while the simplex descent is based on deterministic coefficient values. Figure 1(c) corresponds to the case of a constant initial simplex (that of the first optimization in figure 1(a)) but this time the value of each optimization coefficient is recursively and randomly determined prior to each iteration during the simplex descent (SNM[D, S] strategy). Finally, figure 1(d) presents the results for the case where both the initial simplex and descent coefficients are randomly determined (SNM[S, S]). All optimizations in figure 1 were initiated from the same start point. The variations observed in figures 1(b)-(d) are attributed solely to the stochastic nature of SNM while those in figure 1(a) originate from the mesh size variations. The best performing optimization strategy in $\Re^{10}$ can be chosen on different criteria serving different applications. The strategy that delivers acceptably optimized objective function values with minimum uncertainty is preferred here. It offers the smallest spread of objective function values for the second lowest mean number of function evaluations.
Figure 2 presents the corresponding study in $\Re^{18}$ where the area and the ellipticity of the four holes are optimized in addition to the core and hole offsets previously optimized in $\Re^{10}$. The four examined strategies are presented here in the same order as in figure 1. Strategy (b) is preferred in this case because it offered the highest mean absorption at the highest certainty. This comes at the cost of the maximum mean number of function evaluations exhibiting this time the strongest spread around their mean value. In both $\Re^{10}$ and $\Re^{18}$ spaces it appears that SNM[D, S] offers the lowest number of function evaluations and, more importantly, a slower growth in function evaluations with increasing dimensions [29]. This is a highly desired feature for the optimization of expensive objective functions.

**Fig. 2.** Four groups of 30 optimizations in $\Re^{18}$ from the same starting point and under different optimization strategies: (a) SNM[D,D] = NM. (b) SNM[S,D]. (c) SNM[D,S]. (d) SNM[S,S].
Fig. 3. Four groups of 30 optimizations in $\Re^{10}$ from the same starting point and driven by different algorithms: SNM[S,S], GA[Np1], GPS[Np1,2N], MADS[Np1,2N].
The fittest SNM strategies are compared next with three global optimization methods operating in $\Re^{10}$ and $\Re^{18}$ in figures 3 and 4 correspondingly. Figure 3(a) plots again the results for algorithm SNM[S,S] while figures 3(b)-(d) report the corresponding results from the GA, GPS and MADS methods. The detailed set-up of each method is reported in section 4. The expression GA[Np1] denotes that each GA optimization started with $(n+1)$ initial population members generated by random sampling of $\Omega$. By GPS[Np1,2N] it is meant that the search pattern includes $n + 1$ directions and that the poll pattern matrix stores $2n$ directions. The GPS and MADS algorithms implement two distinct steps, namely the search and the poll. The search step can be absent or be a pattern search or any other heuristic or Monte Carlo method [41] or, preferably, a method that uses inexpensive surrogates to approximate the objective function [42]. The search step adopted in this work implements a pattern search along the directions denoted by the column vectors in the pattern matrix
$$\Xi_{GPS,Search} = \Xi_{GPS,Np1} = \begin{bmatrix}
1 & 0 & \cdots & 0 & -1 \\
0 & 1 & \cdots & 0 & -1 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & -1
\end{bmatrix}_{n \times (n+1)}. \quad (12)$$
The poll step is a compulsory pattern search that is closely linked to the convergence theory of pattern search algorithms [29]. The adopted poll patterns are represented by the column vectors in
$$\Xi_{GPS,Poll} = \Xi_{GPS,2N} = \begin{bmatrix}
1 & 0 & \cdots & 0 & -1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 & 0 & -1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1 & 0 & 0 & \cdots & -1
\end{bmatrix}_{n \times 2n}. \quad (13)$$
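The two pattern matrices (12) and (13) can be constructed mechanically; the sketch below is a direct transcription, with function names chosen here for illustration.

```python
import numpy as np

def gps_np1_pattern(n):
    """Search pattern of Eq. (12): the n coordinate directions plus one
    diagonal direction (-1, ..., -1)^T, giving an n x (n+1) matrix."""
    return np.hstack([np.eye(n), -np.ones((n, 1))])

def gps_2n_pattern(n):
    """Poll pattern of Eq. (13): the maximal positive basis of all 2n
    signed coordinate directions, giving an n x 2n matrix."""
    return np.hstack([np.eye(n), -np.eye(n)])

print(gps_np1_pattern(3).shape, gps_2n_pattern(3).shape)   # (3, 4) (3, 6)
```

Both matrices are positive spanning sets, which is what the convergence theory of pattern search requires of the poll directions.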
The GPS algorithm invokes the poll step only when the search step fails to produce a point in $\mathbb{R}^n$ that improves the optimal function value recorded so far. After a poll step, the mesh size is adapted (contracts after an unsuccessful poll and expands after a successful poll) and
Fig. 4. Four groups of 30 optimizations in $\Re^{18}$ from the same starting point and driven by different algorithms: SNM[S,S], GA[Np1], GPS[Np1,2N], MADS[Np1,2N].
A new iteration begins. MADS is a stochastic form of GPS. The $\Xi_{\text{MADS,Search}} = \Xi_{\text{MADS,Np1}}$ pattern matrix stores $n+1$ randomly generated column vectors while $\Xi_{\text{MADS,Poll}} = \Xi_{\text{MADS,2N}}$ is generated using a random permutation of an $(n \times n)$ linearly independent lower triangular matrix. Both of the above patterns are regenerated prior to each iteration according to MATLAB's documentation.
Before discussing the results in figure 3, it is informative to note that the variations in the GPS optimization results are due to the use of a different mesh size for each optimization whilst all other results exhibit variations originating from the stochastic nature of the corresponding algorithm. In $\Re^{10}$, SNM, GA, GPS and MADS achieved an average objective improvement of 56%, 45%, 84% and 81% correspondingly. In $\Re^{18}$ (figure 4) the corresponding percentages are 58%, 46%, 90% and 93%. It is obvious at this stage that GPS and MADS managed to find optimizers located in deeper valleys, indicating global convergence with higher probability than GA and SNM. On the computational expense front, in $\Re^{10}$ the GA, GPS and MADS were correspondingly 121%, 91% and 111% more expensive than SNM while in $\Re^{18}$ they were 118%, 104% and 113% more expensive than SNM. The GA is consistently the most expensive method. The reported results agree with other benchmark results [43,44] and although GA promises global convergence when evolving a large initial population [45], it is not preferred here due to its unsuitability for the optimization of expensive functions. The above analysis indicates that in the examined dimensions the most efficient strategy would be to use SNM as a first stage optimization tool, a numerical telescope that can relatively inexpensively designate the vicinity that offers the highest probability of containing a global optimizer. A second stage search with the significantly more expensive GPS or MADS methods is then justified in the SNM designated subdomains. Nevertheless, and in agreement with section 4 results, the SNM method offers the best case efficiency when it succeeds in finding a global optimizer.
3. Implicitly constrained zeroth-order optimization algorithms with simple bounds
The stochastic forms of NM proposed in this section solve optimization problems in higher dimensions that are difficult to treat with, or incompatible with, GA, GPS and MADS. In addition, they achieve global convergence at low computational cost. The 'Offset-Perimeter' encoding key ([21] gives a schematic representation) is used to map a variable perimetric line shape for each lamina comprising a fibre cross section. Under this scheme, the shape of a given cross section can be fully optimized, but at a considerably higher computational cost: the dimensionality of the objective function domain increases by at least an order of magnitude, depending on the sampling density of each lamina perimeter included in a cross section. A fibre topology that includes $N$ holes in the inner cladding is represented in $\mathbb{R}^{2(n_c + n_{h_1} + \cdots + n_{h_N} + 1)}$ by a single point of the form
$$\mathbf{P} = \begin{pmatrix}
y_c, z_c, y_{c,1}, z_{c,1}, \cdots, y_{c,n_c}, z_{c,n_c}, y_{h_1,1}, z_{h_1,1}, \cdots, y_{h_1,n_{h_1}}, z_{h_1,n_{h_1}}, \cdots, \\
y_{h_N,1}, z_{h_N,1}, \cdots, y_{h_N,n_{h_N}}, z_{h_N,n_{h_N}}
\end{pmatrix}^T$$
(14)
where $n_c$ is the number of points that sample the inner cladding perimeter and $n_{h_i}$ is the $i$-th hole perimetric point set population. The aforementioned encoding key includes the core centre coordinates but does not optimize the hole offsets. However, this is a feature that could be included into the coordinates set of $\mathbf{P}$.
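The encoding of equation (14) amounts to flattening the core centre and the sampled lamina perimeters into one coordinate vector. The sketch below is an illustrative Python transcription: the `encode` and `circle` helpers and the chosen sampling densities are hypothetical, not part of the original implementation.

```python
import numpy as np

def encode(core_centre, cladding_pts, hole_pts_list):
    """Flatten a fibre cross section into a single point P, as in equation (14).

    core_centre: (y_c, z_c); cladding_pts: (n_c, 2) perimeter samples;
    hole_pts_list: list of (n_h_i, 2) arrays, one per hole.
    """
    parts = [np.asarray(core_centre, float).ravel(),
             np.asarray(cladding_pts, float).ravel()]
    parts += [np.asarray(h, float).ravel() for h in hole_pts_list]
    return np.concatenate(parts)

def circle(radius, n, centre=(0.0, 0.0)):
    # polygonal approximation of a circular lamina perimeter
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.column_stack([centre[0] + radius * np.cos(t),
                            centre[1] + radius * np.sin(t)])

n_c, n_h = 16, 8  # hypothetical sampling densities
P = encode((5.0, -3.0), circle(300.0, n_c), [circle(40.0, n_h, (100.0, 0.0))])
print(P.shape)  # dimension is 2*(n_c + n_h + 1)
```

For one hole, the dimension is $2(n_c + n_{h_1} + 1)$, here $2(16 + 8 + 1) = 50$.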
Even for a low-resolution polygonic approximation of a smooth perimeter, all the previously compared algorithms generate trial points that abruptly perturb a smooth start point and lack physical integrity and/or manufacturability. Examples of such perturbations are given in figures 5(a)-(c), showing typical trial points that the corresponding algorithms (NM, GPS, MADS) may generate during an optimization. The most representative trial points are those of the GA algorithm, shown in figures 5(d) and 5(e) for two different bounding configurations. It becomes obvious that GA randomly scrambles the start point coordinates, failing to produce children or members of the initial population with physical integrity. Figure 5(e) suggests that a scheme capable of generating smooth perturbations is needed. An effort was
Fig. 5. Trial points (or initial population members). (a) NM. (b) GPS. (c) MADS. (d) GA with own population, bounded within [-4,4]mm. (e) GA with own population, bounded within the ±50μm zone from the start point. (f) GA with PNM initial population.
made to construct suitable constraints that would force the mapped coordinates to change in groups, forming smooth, local perturbations able to propagate along the perimeter of a lamina, but this proved to be a non-functioning approach.
Although algorithm NM is not meant for constrained optimization, it was found that it can be modified to perform implicitly constrained optimization. The outline of the related process is that, after generating a suitable pattern, the vertices of the initial simplex obey pattern-imprinted constraints which propagate all the way to the convergence point at the end of a descent. The simplest implementation of the above concept is the perturbed Nelder-Mead (PNM) algorithm, realized by virtue of subalgorithms PNM_PattGener and PNM_SmxGener. The former subalgorithm generates a pattern of the form
\[
\Xi_{\text{PNM}} = \begin{bmatrix}
v_2 & v_1 & 0 & \ldots & 0 & 0 \\
v_3 & v_2 & v_1 & \ldots & 0 & 0 \\
0 & v_3 & v_2 & \ldots & 0 & 0 \\
0 & 0 & v_3 & \ldots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \ldots & v_2 & v_1 \\
0 & 0 & 0 & \ldots & v_3 & v_2
\end{bmatrix}_{n \times n}
\]
(15)
when the perturbed element group population is \( k=3 \). Equation (15) essentially demonstrates the propagation of a constant disturbance involving \( k \) elements along the length of the additive identity (the \( n \times 1 \) zero vector). In line 1 of PNM_PattGener, the set \( \{v_q \mid q=1,2,\ldots,k\} \subset \left(0, 1/(\sigma_N \sqrt{2\pi})\right] \), with statistical median \( v_\mu = v_{(k+1)/2} \), samples the normal distribution \( N(\mu, \sigma_N^2) \), where \( \sigma_N \) is the predefined standard deviation of the distribution, with the probability density function shown in the line 1 comment. It is notable that \( \Xi_{\text{PNM}} = \Xi_{\text{NM}} = \mathbf{I} \) when \( k = 1 \) and \( \sigma_N = 1/\sqrt{2\pi} \), indicating that \( \Xi_{\text{PNM}} \) is a generalization of
Algorithm PNM. The Perturbed Nelder-Mead (PNM) method:
\[ [P_l, f_l, \sigma_j] = \text{PNM}(P_1, M, \sigma_{\text{halt}}, \Omega, \sigma_N, k) \]
Input: (as in algorithm NM but with scalar mesh size and, in addition, the standard deviation of the normal distribution and the perturbed element set population (odd positive integer; \( k = 2r + 1,\ r \in \mathbb{Z}^* \))). Output: (as in algorithm NM).
1 // same as in algorithm NM
2 call \( [\Xi_{\text{PNM}}] = \text{PNM\_PattGener}(n, \sigma_N, k) \) // PNM pattern generation
3 call \( [S_0] = \text{PNM\_SmxGener}(P_1, \Xi_{\text{PNM}}, M, n) \) // initial simplex generation
4-13 // same as in algorithm NM
Subalgorithm PNM_PattGener. Perturbed Nelder-Mead (PNM) pattern generation:
\[ \Xi_{\text{PNM}} = \text{PNM_PattGener}(n, \sigma_N, k) \]
Input: (number of variables, standard deviation of the normal distribution, perturbed element set population (odd positive integer; \( k = 2r + 1,\ r \in \mathbb{Z}^* \))). Output: [PNM pattern matrix].
1. \( N := (v_1, v_2, \cdots, v_k)^T \) // where \( v_q = \frac{1}{\sigma_N \sqrt{2\pi}} \exp\left[ -\frac{(q-\mu)^2}{2\sigma_N^2} \right],\ q = 1, 2, \ldots, k \)
2. \( \varepsilon := (k-1)/2 \) // number of elements in either bell-shape branch excluding the median (\( \mu \))
3. for each PNM pattern-matrix column in the set \( \{\xi_i \mid i=1,2,3,\ldots,n\} \)
4. \( \xi_i := (\xi_{1,i}, \xi_{2,i}, \ldots, \xi_{n,i})^T = (0,0,\ldots,0)^T \) // additive identity (\( n \times 1 \) zero vector)
5. \( (\xi_{i-\varepsilon,i}, \xi_{i-(\varepsilon-1),i}, \ldots, \xi_{i,i}, \ldots, \xi_{i+\varepsilon,i})^T := N \) // bell-shaped perturbation; element indices outside \( [1, n] \) are discarded
6. endfor
7. \( \Xi_{\text{PNM}} := (\xi_1, \xi_2, \ldots, \xi_n)_{n \times n} \) // PNM pattern matrix
8. return \( \Xi_{\text{PNM}} \) // output.
Subalgorithm PNM_SmxGener. PNM initial simplex generation:
\[ S_0 = \text{PNM_SmxGener}(P_1, \Xi_{\text{PNM}}, M, n) \]
Input: (start point \( P_1 \), PNM pattern, mesh size and length of \( P_1 \)). Output: [initial simplex matrix].
1. for each simplex vertex in the set \( \{P_i | i=2,3,\ldots,n+1\} \)
2. \( P_i := P_1 + M \left( 1/\xi_{\max,i-1} \right) \xi_{i-1} \) // where \( \xi_{\max,i-1} = \max \{ \xi_{q,i-1} \mid q=1,2,\ldots,n \} \)
3. endfor
4. \( S_0 := (P_1, P_2, \ldots, P_{n+1})_{\text{vol}(S_0) \neq 0} \); return \( S_0 \) // output.
\( \Xi_{\text{NM}} \). Subalgorithm PNM_SmxGener returns the initial simplex vertices as the superposition of the start point (\( P_1 \)) and the search directions stored in \( \Xi_{\text{PNM}} \). The practical outcome is the propagation of a bell-shaped perturbation along the elements of \( P_1 \), illustrated in figure 6, which assumes \( k = 3 \) and clearly shows the \( n \) steps of the perturbation propagation process that generates the initial simplex vertices \( P_2 \) to \( P_{n+1} \). Also clearly demonstrated is that the vertices created at the start and the end of the process bear a perturbation that is abrupt at one end. This is a drawback of the described technique, though one with a small overall effect, due to the comparatively small number of vertices bearing such a non-smooth perturbation. Soon after the start of the simplex descent, the abruptly perturbed vertices are naturally substituted by newly discovered and better performing smoothly perturbed vertices. The only cost is a comparatively small reduction in the probability of capturing optimal vertices right from the start of the process. The height of the bell shape is controlled, in subalgorithm PNM_SmxGener, via the mesh size $M$, while its full-width half-maximum is set via $\sigma_N$. The factor $1/\xi_{\max,i-1}$, used in line 2, normalizes the bell-shaped perturbation stored in the pattern to the maximum value of 1, in order to scale the perturbation height to the predefined mesh size $M$. A set of decoded initial simplex vertices is given in figure 7(a), where the start point was a cross section with a circular inner cladding embedding an offset circular hole and an offset core. For completeness, figure 5(f) shows a child produced by GA after having been initiated with the same initial population that comprised the initial simplex vertices in PNM. Here the child's features have improved compared to figures 5(d) and 5(e), but the GA algorithm still appears unable to generate a smooth optimizer.
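The pattern and simplex generation of PNM_PattGener and PNM_SmxGener can be sketched as follows. This is an illustrative Python/NumPy transcription, not the authors' MATLAB code; the edge clipping follows equation (15) and the peak normalization follows line 2 of PNM_SmxGener.

```python
import numpy as np

def pnm_pattern(n, sigma, k):
    """Equation (15): slide a k-element bell (normal pdf sampled at
    q = 1..k, centred on the median q = (k+1)/2) down the columns of an
    n-by-n zero matrix, clipping it at the matrix edges."""
    q = np.arange(1, k + 1)
    mu = (k + 1) / 2
    v = np.exp(-(q - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    xi = np.zeros((n, n))
    for i in range(n):                    # 0-based column index
        for j, vq in enumerate(v):
            row = i - (k - 1) // 2 + j
            if 0 <= row < n:              # discard indices outside the matrix
                xi[row, i] = vq
    return xi

def pnm_simplex(P1, xi, M):
    """PNM_SmxGener: superpose each pattern column, normalized so that the
    bell peak equals the mesh size M, on the start point P1."""
    P1 = np.asarray(P1, float)
    cols = [P1 + M * xi[:, i] / xi[:, i].max() for i in range(P1.size)]
    return np.column_stack([P1] + cols)   # n x (n+1) simplex matrix

# k = 1 with sigma = 1/sqrt(2*pi) recovers the identity pattern of plain NM
assert np.allclose(pnm_pattern(6, 1 / np.sqrt(2 * np.pi), 1), np.eye(6))

S0 = pnm_simplex(np.zeros(8), pnm_pattern(8, 0.8, 3), M=0.05)
edges = S0[:, 1:] - S0[:, [0]]            # the n edge vectors P_i - P_1
print(np.linalg.matrix_rank(edges))       # full rank: vol(S0) != 0
```

The rank check confirms the resulting simplex is numerically non-degenerate, as required of $S_0$.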
Following the proposal of the SNM method in section 2, algorithm PNM naturally suggests its stochastic version SPNM, which can be implemented by the simultaneous random assignment of $(M, \sigma_N)$ and/or the simplex descent coefficients. The random assignment of $(\sigma_N, M)$ is implemented just before the generation of $\Xi_{\text{SPNM}}$, which now stores, as opposed to $\Xi_{\text{PNM}}$, a set of directions that still smoothly, but this time randomly, perturb the

**Fig. 6.** Illustration of the bell shape propagation in the nonrandomized initial simplex generation scheme for perturbed vertex elements number $k = 3$. Under this scheme, the shape of the perturbation propagates along the whole vertex in $n$-steps while preserving its shape.
Fig. 7. Propagation instances of a perturbation envelope, (a) PNM method, (b) ESPNM method stochastic envelope (1st row) and random core offsets inside a selected vertex (importance sampling- 2nd row).
additive identity. Both the PNM and SPNM algorithms still call the same iteration subalgorithm (NM\_Step) as NM and SNM do. The assignment of the random $M$, $\sigma_N$ parameter values is implemented as in algorithm ESPNM (enhanced stochastically perturbed NM), proposed next. Algorithm ESPNM enhances the SPNM method by dynamically and preferentially forming the initial and also the intermediate (during a descent) simplices, as well as conditionally and adaptively regenerating the intermediate simplices. The implementation of the ESPNM method is described in algorithm ESPNM and the associated subalgorithms ESPNM\_PattGener, ESPNM\_SmxObjGener and ESPNM\_Step. Subalgorithm ESPNM\_PattGener generates a simplex formation pattern of the type
$$\Xi_{\text{ESPNM}} = \begin{bmatrix}
v_{1,1} & 0 & \cdots & 0 & 1 & \cdots & 1 \\
\vdots & v_{1,2} & \cdots & 0 & 1 & \cdots & 1 \\
v_{k,1} & \vdots & \ddots & 0 & 0 & \cdots & 0 \\
0 & v_{k,2} & \cdots & v_{1,n-(k-1)} & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots & \vdots & & \vdots \\
0 & 0 & \cdots & v_{k,n-(k-1)} & 0 & \cdots & 0
\end{bmatrix}_{n \times n}. \quad (16)$$
Algorithm ESPNM. Enhanced stochastically perturbed Nelder-Mead (ESPNM) method:
\[ [P_l, f_l, \sigma_j] = \text{ESPNM}(P_1, M, \sigma_{\text{halt}}, \Omega, \sigma_N, k, \nu_m) \]
Input: (as in algorithm PNM plus the maximum number of consecutive shrinkages (\( \nu_m \geq 2 \))). Output: [as in algorithm NM].
1. \( j := 0 \) // iteration index
2. call \( [\Xi_{\text{ESPNM}}] = \text{ESPNM\_PattGener}(n, \sigma_N, k) \) // stochastically perturbed pattern
3. call \( [S_j, F_j] = \text{ESPNM\_SmxObjGener}(P_1, \Xi_{\text{ESPNM}}, \Omega, M, n, k) \) // stochastic simplex
4. \( \nu_c := 0 \) // consecutive shrinkages number initialization
5. call \( [f_h, f_l, P_h, P_l, \overline{P}] = \text{SmxAssessm}(S_j, F_j) \) // current simplex (\( S_j \)) assessment
6. while \( (\sigma_j \geq \sigma_{\text{halt}}) \) // where \( \sigma_j = \left( \frac{1}{n+1} \sum_{i=1}^{n+1} \left( f_i - \bar{f} \right)^2 \right)^{1/2} \), with \( \bar{f} \) the mean vertex objective value (descent halting criterion)
7. \( j := j + 1 \) // iteration index increment
8. assign \( r \in [0.5, 1],\ e \in [2, 4],\ c \in [0.25, 0.5],\ s \in [0.3, 0.7] \) // random, uniformly distributed descent coefficients
9. call \( [S_j, F_j, \text{step}, \nu_c] = \text{ESPNM\_Step}(P_h, P_l, \overline{P}, \Omega, f_h, f_l, r, e, c, s, S_j, F_j, P_1, \Xi_{\text{ESPNM}}, M, n, k, \nu_c, \nu_m) \)
10. call \( [f_h, f_l, P_h, P_l, \overline{P}] = \text{SmxAssessm}(S_j, F_j) \) // simplex assessment
11. endwhile // end of iteration loop
12. return \( P_l, f_l, \sigma_j \) // output.
The concept behind \( \Xi_{\text{ESPNM}} \) is that it stores a leftmost set of direction vectors that propagate the perturbation envelope starting from the first element of \( P_1 \) (1st column of \( \Xi_{\text{ESPNM}} \)) and stopping when the opposite end of the envelope reaches the last element (the \([n-(k-1)]\)-th column). In this way there remain \( k-1 \) unfilled columns in \( \Xi_{\text{ESPNM}} \) (the \([n-(k-2)]\)-th to \( n \)-th columns), which are assigned as shown. The latter, in conjunction with subalgorithm ESPNM\_SmxObjGener, allows the selection of the best vertex so far (\( P_{\text{opt}} \)) and its subsequent perturbation with emphasis on its most influential elements (importance sampling). In this case, the aforementioned influential elements are the first two, chosen on the basis that they control the offset of the active core where the pump photon absorption takes place. Subalgorithm ESPNM\_SmxObjGener describes the stochastic assignment of each perturbation propagation instance along \( P_1 \) (line 4). In addition to the simplex matrix it also returns the objective matrix, since the simplex is generated dynamically based on the feedback from the function evaluations. It evaluates the objective function at the perturbed vertices and selects the fittest (\( P_{\text{opt}} \)) amongst them (line 8). Its final operation is to randomly scramble the core offset, along positive directions, within the optimal cross section represented by the decoded \( P_{\text{opt}} \), in search of objective-improving coordinates (lines 9-13). The initial polytope generated in this way is again a numerically non-degenerate structure.
Subalgorithm ESPNM_PattGener. ESPNM pattern generation:
\[ \Xi_{\text{ESPNM}} = \text{ESPNM_PattGener}(n, \sigma_N, k) \]
Input: (number of variables, maximum standard deviation of the normal distribution and number of perturbed variables (odd positive integer)). Output: [ESPNM pattern matrix].
1. \( \varepsilon := (k - 1)/2 \) // number of elements in either branch of the normal distribution
2. assign \( \{\sigma_{N,i}\}_{i=1,2,\ldots,n-2\varepsilon} \in [\sigma_N/2, \sigma_N] \) // uniformly distributed random values
3. for each ESPNM pattern-matrix column vector in the set \( \{\xi_{i-\varepsilon} \mid i = \varepsilon+1, \varepsilon+2, \ldots, n-\varepsilon\} \)
4. \( N_{i-\varepsilon} := (v_1, v_2, \cdots, v_k)^T \) // where \( \{v_q \mid q = 1, 2, \ldots, k\} \) samples \( N(\mu, \sigma^2_{N,i-\varepsilon}) \) in \( \left(0, 1/(\sigma_{N,i-\varepsilon}\sqrt{2\pi})\right] \)
5. \( \xi_{i-\varepsilon} := (\xi_1, \xi_2, \cdots, \xi_n)^T = (0, 0, \ldots, 0)^T \) // additive identity (\( n \times 1 \) zero vector)
6. \( (\xi_{i-\varepsilon}, \xi_{i-(\varepsilon-1)}, \ldots, \xi_i, \ldots, \xi_{i+\varepsilon})^T := N_{i-\varepsilon} \) // bell-shaped perturbation
7. endfor
8. for each ESPNM pattern-matrix column in the set \( \{\xi_i \mid i = n-(k-2), n-(k-3), \ldots, n\} \) // \( k-1 \) vectors
9. \( \xi_i := (\xi_1, \xi_2, \cdots, \xi_n)^T = (1, 1, 0, 0, \ldots, 0)^T \) // preferential perturbation pattern vectors
10. endfor
11. \( \Xi_{\text{ESPNM}} := (\xi_1, \xi_2, \cdots, \xi_n)_{n \times n} \) // ESPNM pattern matrix
12. return \( \Xi_{\text{ESPNM}} \quad // \text{output.} \)
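The construction of equation (16) can be sketched as follows. This is an illustrative Python/NumPy sketch, not the authors' MATLAB implementation; the dimensions, seed and $\sigma_N$ value are arbitrary. The first $n-(k-1)$ columns carry bells with a per-column random standard deviation in $[\sigma_N/2, \sigma_N]$, and the last $k-1$ columns are the preferential $(1,1,0,\ldots,0)^T$ vectors that target the core centre coordinates.

```python
import numpy as np

def espnm_pattern(n, sigma_max, k, rng):
    """Sketch of ESPNM_PattGener (equation (16)): bell-shaped columns with
    per-column random standard deviations, plus k-1 trailing columns that
    perturb only the first two elements (the core centre coordinates)."""
    xi = np.zeros((n, n))
    q = np.arange(1, k + 1)
    for c in range(n - (k - 1)):                  # bell-carrying columns
        s = rng.uniform(sigma_max / 2, sigma_max) # sigma_{N,i} in [s_N/2, s_N]
        v = np.exp(-(q - (k + 1) / 2) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))
        xi[c:c + k, c] = v                        # envelope starts at row c
    xi[:2, n - (k - 1):] = 1.0                    # preferential columns
    return xi

xi = espnm_pattern(n=10, sigma_max=1.0, k=3, rng=np.random.default_rng(1))
print(xi.shape)
```

For $n = 10$ and $k = 3$, columns 1 to 8 each hold a 3-element bell sliding down the diagonal, and columns 9 and 10 are the preferential vectors.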
Using this technique in high dimensions means that the initial simplex is formed by a search process with an extra element of intelligence: the selective collection of information within a subset of dimensions offering a higher probability of delivering substantially optimized objective function values and/or second-order information. In other words, a subset of simplex vertices records a certain subspace of higher interest, while keeping the coordinates in the rest of the dimensions frozen, adding an element of exploratory search right from the start of the process. The aforementioned assignment process of the initial simplex is graphically illustrated in figure 8 for a small number of perturbed elements, \( k = 5 \), selected to assist the demonstration of the selective randomization concept. It is also assumed there that \( P_2 \) performed optimally amongst the vertices from \( P_1 \) to \( P_{n+1-(k-1)} \), and as shown it is vertex \( P_2 \) that is further processed and used as the basis to assign the remaining \( k-1 \) vertices of the initial simplex \( (S_0) \). The top two elements of each of the vertices \( P_{n+1-(k-2)} \) to \( P_{n+1} \) in figure 8 show the way the represented core centre coordinates are randomly altered to capture further and better focused objective function information in the subdimensions with the highest probability of capturing optimal objective function values. A schematic visualization of the above process is given in figure 7(b), where the cross sections shown
Subalgorithm ESPNM_SmxObjGener. ESPNM simplex, objective matrices generation:
\[ [S, F] = \text{ESPNM\_SmxObjGener}(P_1, \Xi_{\text{ESPNM}}, \Omega, M, n, k) \]
Input: (start point \( P_1 \), ESPNM pattern, optimization domain, mesh size, length of \( P_1 \) and number of perturbed variables). Output: [simplex and objective matrices].
1. assign \( \{a_i \mid i=1,2,\ldots,n-(k-1)\} \in [-M, M] \) // uniformly distributed random bell-amplitude values
2. assign \( \{m_i \mid i=1,2,\ldots,2(k-1)\} \in [-M, M] \) // uniformly distributed random mesh-size values
3. for each simplex vertex in the set \( \{P_i \mid i=2,3,\ldots,(n+1)-(k-1)\} \) // perturbation propagation loop
4. \( P_i := P_1 + \left( a_{i-1}/\xi_{\max,i-1} \right) \xi_{i-1} \) // where \( \xi_{\max,i-1} = \max \{ \xi_{q,i-1} \mid q=1,2,\ldots,n \} \)
5. call \( [f_i] = \text{FuncEval}(P_i, \Omega) \) // function evaluation at the simplex vertices
6. endfor
7. call \( [f_1] = \text{FuncEval}(P_1, \Omega) \) // function evaluation at the start point
8. \( f_{\text{opt}} := \min \{f_i \mid i=1,2,\ldots,(n+1)-(k-1)\} \); assign \( P_{\text{opt}} := P_i \mid_{f_i = f_{\text{opt}}} \) // optimal vertex selection
9. for each simplex vertex in the set \( \{P_i \mid i=(n+1)-(k-2),(n+1)-(k-3),\ldots,n+1\} \)
10. \( \xi_{1,i-1} := \xi_{1,i-1}\, m_{2(i-n+k-2)-1} \); \( \xi_{2,i-1} := \xi_{2,i-1}\, m_{2(i-n+k-2)} \)
11. \( P_i := P_{\text{opt}} + \xi_{i-1} \) // stochastic perturbation of the core centre coordinates in the selected \( P_{\text{opt}} \)
12. call \( [f_i] = \text{FuncEval}(P_i, \Omega) \) // function evaluation at the perturbed \( P_{\text{opt}} \)
13. endfor
14. \( S := (P_1\, P_2 \cdots P_{n+1})_{n \times (n+1)} \); \( F := (f_1\, f_2 \cdots f_{n+1})_{1 \times (n+1)} \) // simplex and objective matrices
15. return \( S, F \) // output.
along the first row are instances of the stochastic bell shape propagation, while the second row shows the importance sampling process [46], which is practically a uniformly random search for improved core offsets in the vicinity of \( P_{\text{opt}} \). The aforementioned process is invoked once by algorithm ESPNM during the initial simplex (\( S_0 \)) generation at line 3, and then recursively during the line search process (simplex descent) at line 21 of subalgorithm ESPNM_Step. The latter is executed conditionally in the vicinity of the currently best vertex (\( P_l \)), when subsequent shrinkages are recorded, indicating descent on a problematic landscape (noisy, discontinuous, nonconvex with many narrow and deep basins). It is also executed adaptively, by halving the mesh size prior to each new simplex generation around the preserved \( P_l \), in order to accelerate convergence (ESPNM_Step line 20). This process resembles the mesh size contraction in GPS and MADS and places ESPNM in the class of methods that optimize a function by iterative processes executed on a tower of meshes [29]. An important aspect of the initial simplex generation at line 3 of algorithm ESPNM is to choose appropriate values for the \( M \) and \( \sigma_N \) parameters, such that the initial simplex spans an
Fig. 8. Illustration of the selectively randomized initial simplex generation scheme for perturbed vertex elements number $k = 5$. The last four $(k - 1)$ simplex vertices are versions of the vertex ($P_n$) that was the optimal point found, containing the core centre coordinates altered by the set of normally distributed pseudorandom coefficients $\{r_1, r_2, \ldots, r_k\}$.
area that includes many valleys (nonconvex objective function) as opposed to forming a small initial simplex with all its vertices located inside a single valley. The latter will almost certainly result in local convergence.
Subalgorithm ESPNM\_Step implements a line search operation that guides the simplex when descending in $\mathbb{R}^n$. The aforementioned subalgorithm NM\_Step is a subset of ESPNM\_Step, formed by removing the if-then-else-endif module (lines 19-25) while keeping lines 23 and 24; the seven extra input arguments and the last output argument are also removed. ESPNM\_Step includes a stronger expansion condition (line 4) and strict inequalities (lines 11 and 12). In previous work [21], the weaker expansion condition ($f_e < f_l$, as in the original algorithm [20]) was used in NM\_Step.
**Subalgorithm ESPNM\_Step.** Implementation of the ESPNM step operation:
$$[S_j, F_j, \text{step}, \nu_c] = \text{ESPNM\_Step}(P_h, P_l, \overline{P}, \Omega, f_h, f_l, r, e, c, s, S_j, F_j, P_1, \Xi_{\text{ESPNM}}, M, n, k, \nu_c, \nu_m)$$
**Input:** (worst and best points, centroid of the polytope excluding $P_h$, bounds, highest and lowest function values, reflection, expansion, contraction and shrinkage coefficients, current simplex and objective matrices, start point $P_1$, ESPNM pattern, mesh size, length of $P_1$, number of perturbed variables, consecutive and maximum consecutive shrinkages). **Output:** (current simplex and objective matrices, operation step, consecutive shrinkages).
1 $P_r := (1 + r) \overline{P} - r P_h$; call $[f_r] = \text{FuncEval}(P_r, \Omega)$ // calculate; evaluate reflection point
2 if $(f_r < f_l)$ then
3 $P_e := e P_r + (1 - e) \overline{P}$; call $[f_e] = \text{FuncEval}(P_e, \Omega)$ // calculate; evaluate expansion point
4 if $[(f_e < f_l) \land (f_e < f_r)]$ then // stronger expansion condition (modern NM)
5 $P_h := P_e$ in $S_j$; $f_h := f_e$ in $F_j$; step := 'expansion' // expansion operation
6 else
7 $P_h := P_r$ in $S_j$; $f_h := f_r$ in $F_j$; step := 'reflection' // reflection
8 endif
9 else
10 $f_m := \max \{ f_i \mid i=1,2,\ldots,n+1;\ i \neq h \}$ // second-highest function value
11 if $(f_r \geq f_m)$ then
12 if $(f_r < f_h)$ then
13 $P_h := P_r$ // improved $P_h$, to be used in line 16
14 $P_h := P_r$ in $S_j$; $f_h := f_r$ in $F_j$; step := 'reflection' // reflection
15 endif
16 $P_c := c P_h + (1 - c) \overline{P}$; call $[f_c] = \text{FuncEval}(P_c, \Omega)$ // contraction point
17 if $(f_c > f_h)$ then
18 $\{ P_i := P_l + s (P_i - P_l) \mid i=1,2,\ldots,n+1;\ i \neq l \}$; step := 'shrinkage'; $\nu_c := \nu_c + 1$ // shrinkage
19 if $(\nu_c = \nu_m)$ then
20 $P_1 := P_l$; $M := M/2$; $\nu_c := 0$ // preservation; adaptation; reset
21 call $[S_j, F_j] = \text{ESPNM\_SmxObjGener}(P_1, \Xi_{\text{ESPNM}}, \Omega, M, n, k)$ // new simplex
22 else
23 for each $\{ P_i \mid i=1,2,\ldots,n+1;\ i \neq l \}$ call $[f_i] = \text{FuncEval}(P_i, \Omega)$ endfor
24 $F_j := (f_1\, f_2 \cdots f_{n+1})_{1 \times (n+1)}$ // evaluation of the shrunk simplex
25 endif
26 else
27 $P_h := P_c$ in $S_j$; $f_h := f_c$ in $F_j$; step := 'contraction' // contraction
28 endif
29 else
30 $P_h := P_r$ in $S_j$; $f_h := f_r$ in $F_j$; step := 'reflection' // reflection
31 endif
32 endif
33 return $S_j$, $F_j$, step, $\nu_c$ // output.
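Stripped of the shrink-counting restart machinery (lines 19-25), the branching logic above is a modern Nelder-Mead step. The sketch below is an illustrative Python/NumPy condensation with fixed standard coefficients ($r=1$, $e=2$, $c=0.5$, $s=0.5$) rather than the randomized intervals of algorithm ESPNM, applied to a simple convex quadratic rather than the fibre objective.

```python
import numpy as np

def nm_step(S, F, f, r=1.0, e=2.0, c=0.5, s=0.5):
    """One modern Nelder-Mead step (the core of ESPNM_Step without the
    shrink-counting restart logic). S is n x (n+1); F holds the n+1 values."""
    order = np.argsort(F)
    S, F = S[:, order], F[order]              # sorted copies: best first
    Pl, fl = S[:, 0], F[0]                    # best vertex
    Ph, fh = S[:, -1], F[-1]                  # worst vertex
    fm = F[-2]                                # second-highest value
    Pbar = S[:, :-1].mean(axis=1)             # centroid excluding P_h
    Pr = (1 + r) * Pbar - r * Ph
    fr = f(Pr)                                # reflection point
    if fr < fl:
        Pe = e * Pr + (1 - e) * Pbar
        fe = f(Pe)                            # expansion point
        if fe < fl and fe < fr:               # stronger expansion condition
            S[:, -1], F[-1] = Pe, fe
        else:
            S[:, -1], F[-1] = Pr, fr
    elif fr < fm:
        S[:, -1], F[-1] = Pr, fr              # plain reflection
    else:
        if fr < fh:                           # improved worst point
            S[:, -1], F[-1] = Pr, fr
            Ph, fh = Pr, fr
        Pc = c * Ph + (1 - c) * Pbar
        fc = f(Pc)                            # contraction point
        if fc > fh:                           # contraction failed: shrink
            S = Pl[:, None] + s * (S - Pl[:, None])
            F = np.array([f(S[:, i]) for i in range(S.shape[1])])
        else:
            S[:, -1], F[-1] = Pc, fc
    return S, F

f = lambda x: np.sum((x - 2.0) ** 2)          # convex test objective
S = np.column_stack([np.zeros(3), np.eye(3)]) # unit simplex around the origin
F = np.array([f(S[:, i]) for i in range(4)])
for _ in range(300):
    S, F = nm_step(S, F, f)
print(F.min())
```

On this quadratic the descent converges to the minimizer at $(2, 2, 2)$; on the noisy, multi-basin fibre landscape it is the conditional regeneration and mesh halving that ESPNM adds on top of this step that keep the descent productive.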
Fig. 9. Four groups of 15 optimizations in $\mathfrak{R}^{12}$ from the same starting point, driven by different algorithms: (a) SPNM[D,D] = PNM. (b) ESPNM[S,D]. (c) SPNM[D,S]. (d) ESPNM[S,S].
Figure 9 presents a comparison of the algorithms proposed in this section. The [*, *] notation denotes {simplex generation, descent coefficients} pairs that can be either deterministically (D) or stochastically (S) assigned. The corresponding start point was a circular non-holey inner cladding with a centred core, which absorbed 5.6W of pump power. The reported results indicate that the best performing algorithm is ESPNM[S, S], because on average it delivered the optimal function values while exhibiting the lowest spread around their mean value. It demonstrated a 152% improvement of the mean $P_{abs,\text{tot}}$, compared to the 113% offered by PNM, for a 61% increase in computational cost over PNM.
4. Optimization results
The inner cladding of a conventional DCF has a numerical aperture (NA) of 0.48, while the core NA is 0.175. The core doping density is 20,000 ppm-by-volume, the launched pump power is 100W and the fibre length is 10cm for all the optimization results presented in this section. The pump light has a random modal content, its energy is propagated via 288 rays in 10 time steps and the absorption computation grid of the active core comprises 100 volume elements. The pump light wavelength is $\lambda_p = 975$nm, at which the Yb$^{3+}$ (Er$^{3+}$/Yb$^{3+}$ ion system) absorption cross section is $2.1 \times 10^{-24}$ m$^2$. The simulated fibres are single-end pumped by a 600µm diameter pure silica core delivery fibre (standard fibre-bundled pump delivery fibre) with an NA of 0.48 when pumping a fibre with a polymer outer cladding, or one assumed to be surrounded by an air outer cladding when pumping a double-clad fibre that also has an air outer cladding. This work focuses on a set of fibre topologies that are thoroughly optimized and computationally compared on a common basis that avoids confusion and develops intuition into their absorption trends. Although space restrictions did not allow comprehensive parametric optimization, a sample of parametric optimization results in $\mathfrak{R}^{12}$ is presented in figure 10, which shows a set of fairly similar optimizers exhibiting almost identical absorption characteristics. Algorithm NM converged to the reported shapes for different pairs of fibre length and pump power values. The optimization process started from the same initial cross section and ran under the same settings. Figure 10 demonstrates the generality of the optimization results reported in tables 1-4, which are approximately valid within the ranges [0.1, 1]W and [0.1, 1]m of pump power and fibre length respectively.
Fig. 10. Absorption performance of four convergence points resulting from optimization runs under different fibre length and pump power values.
The computing platform used for the optimizations reported in this chapter, is the same as the platform described in reference [19]. The CPU time consumed for the objective function evaluation at each start point is shown in the tables of this chapter for a more informative presentation. The strongest influence on the recorded CPU times originates from the total number of scattering operations which fluctuates slightly during an optimization. The computational efficiency of the 3-D fibre simulation method used was compared in [19] with other methods reported in the literature.
All Monte Carlo algorithms proposed in this chapter made use of the built-in MATLAB random number generator to produce the required sequences of uniformly distributed pseudorandom numbers. The built-in function is based on the random number generator of Marsaglia and Zaman [47], which was specifically designed to produce floating point values: it uses a lagged Fibonacci generator with a cache of 32 floating point numbers between 0 and 1, combined with a separate, independent random integer generator based on bitwise logical operations. As a result, MATLAB's built-in generator has a period of $2^{1492}$ (the number of values produced before the sequence begins to repeat itself) and can theoretically generate all floating point numbers between $2^{-53}$ and $1-2^{-53}$, all with equal probability of occurring.
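The lagged Fibonacci idea can be illustrated with a small sketch. This is a toy subtract-with-borrow generator in the spirit of the Marsaglia-Zaman scheme, not the actual MATLAB implementation: the lags (32, 13), the LCG-based seeding and the omission of the bitwise-combined integer stream are all simplifications chosen for illustration.

```python
class LaggedFibonacci:
    """Toy subtract-with-borrow lagged Fibonacci generator (illustrative only;
    lags, seeding and the missing integer stream are hypothetical)."""

    def __init__(self, seed, r=32, s=13):
        self.r, self.s = r, s
        # fill the r-number floating point cache from a simple LCG
        state, cache = seed, []
        for _ in range(r):
            state = (1103515245 * state + 12345) % 2**31
            cache.append(state / 2**31)
        self.cache = cache
        self.borrow = 0.0

    def next(self):
        # z_n = z_{n-r} - z_{n-s} - borrow  (mod 1)
        z = self.cache[-self.r] - self.cache[-self.s] - self.borrow
        if z < 0.0:
            z += 1.0
            self.borrow = 2.0 ** -53
        else:
            self.borrow = 0.0
        self.cache = self.cache[1:] + [z]
        return z

g = LaggedFibonacci(seed=42)
u = [g.next() for _ in range(1000)]
print(min(u), max(u))
```

Every output lies in $[0, 1)$ by construction, since the difference of two cache values plus the borrow correction is always wrapped back into the unit interval.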
Figure 11 demonstrates the effort to optimize the offset of the core inside a circular (1st row figures) and a square (2nd row figures) inner cladding. The CPU time required for a single function evaluation for the circular fibre was approximately 28s on the MATLAB platform. Figures 11(a) and 11(e) show the corresponding pump power absorption surfaces generated by sampling the total absorbed pump power ($P_{abs, tot}$) calculated at 49 nodes (by moving...
Fig. 11. Core offset optimization (in $\Omega^2$) inside a circular (1st row) and a square inner-clad (2nd row). (a),(e) Transverse distribution of total absorbed power. (b),(f) Interpolated objective function surface and simplex descent path on the actual surface. (c),(g) Beam overlap images. (d),(h) Cumulative absorption of the initial guess (circles) and the convergence point (triangles).
the core on each one) of a Cartesian grid covering a square area of $4900\mu m^2$, and interpolating the values on a 784-node grid covering the same area. This information is plotted here in order to observe the behaviour of the modern interpretation of the Nelder-Mead algorithm adopted in this work. For the circular inner cladding, the $P_{abs,tot}$ values exhibit the well known symmetrical distribution around the centre of the cross section, with the peak appearing near the inner-to-outer cladding interface. Figure 11(b) shows the surface that plots the corresponding values ($-P_{abs,tot}$) of the objective function and the path followed by the lowest vertex of the simplex (which is a triangle here in $\mathfrak{R}^2$). The descent started from the region of the initial guess, which was the centre of the cross section, $(y_{c,init}, z_{c,init}) = (0, 0)\mu m$, and the algorithm converged at the point $(y_{c,opt}, z_{c,opt}) = (-38, -203)\mu m$, denoting that the optimum offset of the core from the centre is at a distance of approximately 69% of the inner cladding radius for the considered operation point.
The corresponding path for the square DCF is shown in figure 11(f), on a fragment of the objective function surface. Here the simplex again started from the cross section centre and this time converged to the point $(y_{c,opt}, z_{c,opt}) = (-24, 126)\mu m$, where the core is situated at a distance from the centre that is approximately 21% of the inner cladding side length. In both figures it is apparent that the direct search method achieved better landscape resolution, and at a lower computational cost, than the initial evaluation of $P_{abs,tot}$ at the grid nodes. Furthermore, figure 11(b) suggests graphically that first-order convergence from an arbitrary starting point (global convergence) is achieved at a point
\( P_{\text{optm}} \in \mathfrak{R}^2 \) very close to an optimizer \( x_* \) that is a stationary point of the objective function satisfying the second-order sufficiency condition (\( \exists x_* : \nabla^2 f(x_*) > 0 \) for a differentiable function). The spatial distribution of \( P_{\text{abs, tot}} \) across the cross section plane of the circular DCF is also clearly followed by the lowest order standing wave that developed in the beam overlap image in figure 11(c). The corresponding surface for the square DCF shows the improved scrambling of the modes achieved by this cross section. The peak standing high above the rest on each surface denotes the location of the core within the inner cladding. Figure 11(d) shows the dramatic improvement of absorption in the offset core of the circular DCF which is the direct result of the simplex descent to a deep valley while figure 11(h) demonstrates that there is comparatively little room for improvement when offsetting the core within a square DCF.
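The quoted offset fractions can be checked arithmetically. The inner-cladding dimensions are not stated in this section; a 300 µm circular radius and a 600 µm square side are assumed here only because they reproduce the reported percentages.

```python
import math

# Hypothetical inner-cladding dimensions (not stated in this section):
# circular radius 300 um, square side 600 um.
y, z = -38.0, -203.0                 # converged core offset, circular DCF (um)
circ_frac = math.hypot(y, z) / 300.0
y, z = -24.0, 126.0                  # converged core offset, square DCF (um)
sq_frac = math.hypot(y, z) / 600.0
print(round(circ_frac, 2), round(sq_frac, 2))  # ~0.69 and ~0.21
```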
Table 1 presents the results from the simultaneous optimization of the cross section and refractive index, performed mostly by the stochastic variants of NM at relatively low dimensions. The listed schemes (column 5) optimized the offset, size, shape and refractive index of an encompassed lamina while the shape of the inner cladding remained constant. These results represent a telescopic view into the considered optimization domains, facilitated by the parsimonious nature of the NM and SNM methods. All dielectric holes shown are assumed to be made of CBYA alloy-glass [40], apart from the row 3 optimizer, which represents an attempt to search for improved refractive index values. The increased CPU times recorded for the most complicated and absorbent topologies are due to the correspondingly larger number of scatterings occurring inside them. The optimal cross section in table 1 is the row 8 optimizer, found by stochastic search in \( \Re^{18} \), where the offset as well as the ellipticity and size of four large-area holes were allowed to vary independently. The single-hole designs demonstrated high potential for achieving optimal absorption, whereas when square shapes were used for the inner cladding or the embedded holes, the absorption dropped considerably. The same was the case when air holes or hexagonal CBYA holes of variable offset and size were optimized (not shown). As far as the preliminary results in table 1 are concerned, the cross sections worth the computational expense of further optimization by the MADS method appear to be:
- The four elliptical holes scheme (row 8)
- Circular hole topologies, because they are easier to manufacture and showed improved absorption potential (row 6) after a second optimization initiated from a previous optimizer (row 5)
- The single large-hole cross section, due to its simplicity and good performance.
The most promising topologies from table 1 are taken to the next level of optimization by MADS, which promises to deliver global optimizers with higher probability, though at a significantly increased computational cost. Prior to discussing the results in table 2 it is useful to describe the algorithmic settings of the GA, MADS and GPS methods, because they had an impact on all the corresponding results. The optimizations executed by the GA in section 2 started with \( n + 1 \) members in the initial population (to match the number of vertices maintained by a simplex), the elite population size was set to the nearest integer to \( (n+1)/10 \), the crossover factor was 0.8, the migration factor was 0.2 and the migration interval was set to 20. With the need to make GPS and MADS as computationally efficient as possible with minimum negative impact on their global convergence properties, they were set up as follows. Neither complete search nor complete poll
| Start point | Optimizer | Start point $P_{\text{abs, tot}}$ (W) | Optimizer $P_{\text{abs, tot}}$ (W) | Encoding scheme | Algorithm | Optimization space | Func Evals (#) | Start point $t_{\text{CPU}}$ (s) |
|-------------|-----------|--------------------------------------|-----------------------------------|-----------------|------------|-------------------|---------------|------------------|
| | | 25.3 | 63.8 | Offset | NM | $\mathbb{R}^2$ | 78 | 27.6 |
| | | 63.6 | 69.1 | Offset-Diameter | SNM | $\mathbb{R}^5$ | 335 | 65.8 |
| | | 63.6 | 69.5 | Offset-Diameter-Index | NM | $\mathbb{R}^6$ | 328 | 65.8 |
| | | 57.9 | 64.6 | Offset | SNM | $\mathbb{R}^{10}$ | 227 | 109.0 |
| | | 57.9 | 67.0 | Offset-Diameter | SNM | $\mathbb{R}^{14}$ | 304 | 109.0 |
| | | 67.0 | 69.8 | Offset-Diameter | SNM | $\mathbb{R}^{14}$ | 425 | 101.7 |
| | | 54.4 | 58.6 | Offset-Diameter | SNM | $\mathbb{R}^{14}$ | 306 | 84.7 |
| | | 57.9 | 70.7 | Offset-Major-Minor | SNM | $\mathbb{R}^{18}$ | 558 | 109.0 |
| | | 56.2 | 65.0 | Offset-Major-Minor | SNM | $\mathbb{R}^{18}$ | 412 | 91.3 |
| | | 54.7 | 59.3 | Offset-Major-Minor | SNM | $\mathbb{R}^{18}$ | 355 | 76.4 |
Table 1. Optimization results for polymer outer-clad and holey inner-clad with NM variants.
| Start point | Optimizer | Start point $P_{\text{abs, tot}}$ (W) | Optimizer $P_{\text{abs, tot}}$ (W) | Encoding scheme | Algorithm [Np1, 2N] | Optimization space | Func Evals (#) | Start point $t_{\text{CPU}}$ (s) |
|-------------|-----------|--------------------------------------|-----------------------------------|----------------|---------------------|------------------|---------------|-----------------|
| n=1.430 | | 63.6 | 71.1 | Offset-Major-Minor-Index | MADS | $\mathbb{R}^7$ | 112 | 65.8 |
| n=1.480 | | 57.9 | 71.0 | Offset-Diameter | MADS | $\mathbb{R}^{14}$ | 441 | 109.0 |
| | | 69.8 | 69.8 | Offset-Diameter | MADS | $\mathbb{R}^{14}$ | 235 | 97.3 |
| | | 57.9 | 65.3 | Offset-Major-Minor | MADS | $\mathbb{R}^{18}$ | 452 | 109.5 |
Table 2. MADS optimization results for polymer outer-clad and holey inner-clad.
operations were allowed, resulting in an opportunistic style of direct-search iteration that stops as soon as a better point has been found. Also, the first direction of search after a successful poll or search step is set to be the one that was successful in the previous iteration (an exploratory search tactic). A so-called tabu list recording the already-visited points was maintained, so that the expense of unnecessary function re-evaluations would be avoided. This added a tabu-search metaheuristic element to MADS and GPS that was found to offer up to approximately 40% reduction in function evaluations. Tabu search is not recommended for stochastic functions, but in this case the stochastic noise was suppressed. One other setting that can reduce the computational expense is to accelerate the rate at which the mesh size is adapted after a non-successful iteration. This setting was not enabled in this work because it was found to significantly reduce the probability of discovering a global optimizer (at a benefit of only a 20% reduction in function evaluations). The last setting, shared by all optimization methods used here, is the halting criterion of the minimization. In order to achieve an equally economical minimization that avoids unnecessary function evaluations in the vicinity of an already well-approximated optimizer, all halting criteria were set to stop the minimization when saturation was observed in the improvement of the lowest recorded objective function value as a function of the number of iterations. Regarding MATLAB's 'Genetic Algorithm and Direct Search Toolbox', used to implement the GA, GPS and MADS optimizations, it was observed that the above halting condition was satisfied for 'Function Tolerance' (a parameter compared against the cumulative change in the best function value over a number of iterations) values of $10^{-6}$, $10^{-7}$ and $10^{-4}$ respectively.
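The tabu list described above can be sketched as a memoising wrapper around the objective function; this is a generic illustration of the idea (the function names and the rounding-to-mesh scheme are assumptions, not the toolbox's internal mechanism):

```python
# Tabu-list sketch for GPS/MADS polling: cache the value at every visited
# mesh point so that re-polling a point never triggers a re-evaluation.
def make_tabu_objective(f, decimals=6):
    cache = {}                       # visited points -> stored function values
    stats = {"evals": 0, "hits": 0}

    def wrapped(x):
        key = tuple(round(xi, decimals) for xi in x)  # mesh point as a key
        if key in cache:
            stats["hits"] += 1       # tabu hit: reuse the stored value
        else:
            stats["evals"] += 1      # genuine (expensive) evaluation
            cache[key] = f(x)
        return cache[key]

    return wrapped, stats

# Cheap stand-in objective: three polls of the same point cost one evaluation.
obj, stats = make_tabu_objective(lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2)
for _ in range(3):
    obj([0.5, 0.5])
print(stats)  # {'evals': 1, 'hits': 2}
```

Note that caching a stochastic objective would freeze its noise at the first sample, which is why the text restricts this tactic to the noise-suppressed case.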
In the NM algorithm and all its forms proposed in sections 2 and 3, the saturation of the fittest function value was observed for
$$\sigma_{\text{halt}} \cong (1/10)\sigma_0$$ \hspace{1cm} (17)
where $\sigma_0$ is the standard deviation of the initial objective matrix elements excluding the highest value. The success of (17) depends on the standard deviation of the function values stored in the initial objective matrix ($F_0$) not being too large, so that the simplex will reach the neighbourhood of an optimizer before the condition $\sigma_j \leq \sigma_{halt}$ is satisfied at the end of the $j$-th iteration. When the criterion halts the simplex before it has acceptably approximated an optimizer, the descent stops after a relatively small number of iterations, albeit with a large improvement in the objective (row 1 in table 3, row 5 in table 4). The process is then restarted using the discovered point as a new start point (row 2 in table 3, row 6 in table 4). In this way, the inherent tendency of NM (and of the proposed NM-based methods) to perform unnecessary iterations after having adequately approximated an optimizer was avoided.
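The halting rule (17) can be sketched numerically as follows; the sample function values are made up for illustration, and the population standard deviation is one plausible reading of the text's $\sigma$:

```python
# Sketch of halting rule (17): stop when the spread of the simplex's function
# values falls to about one tenth of sigma_0, computed over the initial
# objective matrix with its single highest value excluded.
import statistics

def sigma_excluding_worst(values):
    """Standard deviation of the function values with the highest one removed."""
    trimmed = sorted(values)[:-1]
    return statistics.pstdev(trimmed)

f0 = [63.6, 61.2, 58.9, 70.4, 55.1]           # hypothetical initial values
sigma_halt = 0.1 * sigma_excluding_worst(f0)  # rule (17)

def should_halt(current_values):
    return sigma_excluding_worst(current_values) <= sigma_halt

print(should_halt(f0))                               # False: descent continues
print(should_halt([60.0, 60.1, 59.9, 60.05, 60.0]))  # True: values saturated
```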
| Start point | Optimizer | Start point $P_{abs,tot}$ (W) | Optimizer $P_{abs,tot}$ (W) | Encoding scheme | Algorithm [S, S] | Optimization space | Func Evals (#) | Start point $t_{CPU}$ (s) |
|-------------|-----------|-------------------------------|-----------------------------|-----------------|------------------|--------------------|----------------|--------------------------|
| | | 25.3 | 64.8 | Offset-Perimeter | ESPNM | $\Re^{182}$ | | |
| | | 64.8 | 68.7 | Offset-Perimeter | ESPNM | $\Re^{182}$ | | |
| | | 61.5 | 67.2 | Offset-Perimeter | ESPNM | $\Re^{182}$ | | |
| | | 53.5 | 61.3 | Offset-Perimeter | ESPNM | $\Re^{182}$ | | |
| | | 63.6 | 69.7 | Offset-Perimeter | ESPNM | $\Re^{182}$ | | |
Table 3. Optimization results for polymer outer cladding with algorithm ESPNM.
After the direct comparison of several algorithms in section 2, the MADS method was chosen as the most successful at lower dimensions in terms of the probability of finding global optimizers. The most distinctive topologies listed in table 1 are re-optimized in table 2 under MADS. The first row of table 2 shows the results of an attempt to optimize the same start point as in row 3 of table 1, but this time with an added dimension. The discovered optimizer outperformed all optimizers from table 1, showing that an offset-core topology with a single large hole of optimal ellipticity delivers the strongest pump absorption. Although the refractive index was independently varied during the optimization, the MADS algorithm
converged to an optimizer with exactly the same hole refractive index, a manifestation of the discrete nature of pattern search. One other aspect of the MADS algorithm is that it demonstrates an inherent tendency to search preferentially along those directions that exhibit the stronger influence on the objective function values. The results in row 4 were disappointing: although the start point was the same as in row 8 of table 1, the MADS algorithm converged to an optimizer in $\Re^{18}$ that was strongly outperformed by the SNM-found optimizer (though this time at a higher cost). This observation suggests that a surprisingly low number of function evaluations is a sign of local convergence. However, the discovered optimizer indicates that if a centred-core topology is sought, then the design parameters of the holes can be optimized for improved absorption strength. Row 2 in table 2 shows a successful optimization in $\Re^{14}$ that improved over the latter, suggesting that the optimization of the hole ellipticity may not be justified if it is significantly more difficult to manufacture. An interesting result is reported in row 3 of table 2, where MADS converged back to the start point after about one third of the expected number of function evaluations. This behaviour of MADS was observed several times and showed that its success depends strongly on starting the process from a point far away from an optimizer, a property which is not shared by SNM, as suggested by the results in row 6 of table 1.
Remaining in the class of topologies with a polymer outer cladding, table 3 presents the optimization of the inner cladding and hole perimeters, along with the core offset, at high dimensions. The two-stage optimization (rows 1, 2) of a circular inner cladding with centred core resulted in a cross section with a minor spiral deformation of the inner cladding perimeter and an offset core. Row 3 adopted a start point resembling the spiral fibre proposed by Kouznetsov and Moloney [48] and converged to an inner cladding shape that is a perturbed spiral with the core located closer to the centre. The optimizer in row 3 suggests that a spiral cross section can be further improved. Row 4 demonstrates that a square fibre has limited prospects for competitive improvement, while row 5 shows a case of local convergence in $\Re^{182}$, where a global optimization is potentially very expensive due to the high dimensions. Finally, table 4 presents a set of optimization attempts for double-clad topologies with an air outer cladding. The CPU times recorded here are much higher than in the polymer outer cladding case because the air-clad designs support higher-order modes (rays of higher transmission angles under the absorption model in [19]), resulting in a significantly increased number of scattering operations on the dielectric interfaces. An interesting finding was that an optimized polymer hole (row 2) can be very efficient in decoupling the pump light from its volume. In this way the pump modes are forced to propagate inside the significantly reduced inner cladding volume, with a dramatic effect on the increase of the overlap of the pump photons with the active core volume. This design can only be used with moderate pump power levels, though, due to the low damage threshold of a polymer. However, it has been demonstrated that high glass-transition-temperature thermoplastic polymers can be thermally co-drawn into micro-sized structures without cracking or delamination [49].
A direct comparison between MADS and ESPNM is provided by the results in rows 3 and 4, where a dodecagon-shaped inner cladding with offset core is optimized. The two algorithms converged to optimizers of the same absorption performance, but ESPNM did so at a significantly lower cost. The dodecagon shape was chosen due to the small number of perimetric sampling points involved, which did not allow MADS to generate trial points without physical meaning (or lacking manufacturability), contrary to the cases in figure 5. Furthermore, an air outer cladding may be easier to fabricate around a polygonal inner cladding by means of a comb of suitably shaped air holes.
| Start point | Optimizer | Start point $P_{abs,tot}$ (W) | Optimizer $P_{abs,tot}$ (W) | Encoding scheme | Algorithm | Optimization space | Func Evals (#) | Start point $t_{CPU}$ (s) |
|-------------|-----------|-------------------------------|-----------------------------|-----------------|------------|-------------------|---------------|--------------------------|
| | | 28.1 | 66.7 | Offset | NM | $\Re^2$ | 68 | 58.9 |
| | | 71.3 | 76.3 | Offset-Major-Minor-Index | MADS [Np1, 2N] | $\Re^7$ | 126 | 155.7 |
| | | 71.3 | 73.4 | Offset-Perimeter | ESPNM [S, S] | $\Re^{26}$ | 629 | 55.3 |
| | | 71.3 | 73.4 | Offset-Perimeter | MADS [Np1, 2N] | $\Re^{26}$ | 898 | 55.3 |
| | | 28.1 | 71.2 | Offset-Perimeter | ESPNM [S, S] | $\Re^{182}$ | 669 | 58.9 |
| | | 71.2 | 73.9 | Offset-Perimeter | ESPNM [S, S] | $\Re^{182}$ | 11199 | 60.0 |
Table 4. Optimization results for air outer cladding.
The predictions reported here may be compared to the 35% pump absorption enhancement reported by Baek et al [14] and to the 18% improvement measured by Jeong et al [15] for a circular fibre with centred core. Based on the current results, in the case of polymer coated DCFs, it is predicted that the optimizer in row 1 of table 2 can offer an approximate enhancement of 180% compared to a conventional circular DCF with centred core. Against a conventional circular DCF with optimally offset core (table 1, row 1 optimizer), an enhancement of 11% is predicted. For the air outer cladding case, assuming high power operation (no polymer holes), a 160% improvement (table 4, row 6 optimizer) is predicted against a centred circular DCF and 10% enhancement compared to the circular optimizer with optimally offset core.
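The quoted percentages follow directly from the $P_{abs,tot}$ values in the tables; the short check below reproduces them (the large figures are rounded in the text to the nearest 10%):

```python
# Enhancement figures recomputed from the P_abs,tot values in tables 1, 2 and 4.
def enhancement(p_opt, p_ref):
    """Percentage improvement of an optimized absorption over a reference."""
    return 100.0 * (p_opt - p_ref) / p_ref

# Polymer outer cladding: table 2 row 1 optimizer (71.1 W) vs the centred-core
# (25.3 W) and optimally offset-core (63.8 W) circular references from table 1.
print(round(enhancement(71.1, 25.3)))  # 181 -> quoted as ~180%
print(round(enhancement(71.1, 63.8)))  # 11  -> quoted as 11%

# Air outer cladding: table 4 row 6 optimizer (73.9 W) vs the centred (28.1 W)
# and optimally offset (66.7 W) circular references from table 4.
print(round(enhancement(73.9, 28.1)))  # 163 -> quoted as ~160%
print(round(enhancement(73.9, 66.7)))  # 11  -> quoted as ~10%
```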
5. Summary
Several stochastic algorithms based on the deterministic Nelder-Mead method were proposed and benchmarked against pattern search methods and a genetic algorithm. In low dimensions, the proposed Monte Carlo NM variants offered improved computational efficiency via a simple sampling approach. Implicitly constrained search combined with importance sampling offered efficient global convergence in high dimensions. Smoothly perturbed patterns were proposed that may find theoretical support for constrained optimization. The fittest algorithms were applied to the optimization of the cross-section geometry and the corresponding refractive index profile. The identified advantages of the aforementioned pump absorption enhancement concept were:
- In the case of the holey DCFs, the size of the inner cladding can be scaled to accept more pump power without the need to increase the core size. The solid-state holes can be correspondingly scaled to retain their effect of tapering the pump light into the core volume.
- The proposed holey cross sections are compatible with the helical-core concept and with most side-pumping schemes. Multi-core ribbon lasers [12] may also benefit from optimized solid-state holes.
- No fibre machining is needed, while compatibility with standard fibre manufacturing is maintained.
The main limitation may be the low fabrication tolerance implied by the complexity of most proposed topologies. Regarding the correct prediction of their relative absorption performance, limitations are imposed by the error levels induced by stochastic and numerical noise during optimization, as well as by simulation inaccuracies during the function evaluations.
6. References
[1] Jeong Y, Sahu J, Payne D and Nilsson J 2004 Ytterbium-doped large-core fiber laser with 1.36 kW continuous-wave output power *Optics Express* 12 6088-92
[2] Yahel E, Hess O and Hardy A A 2006 Modeling and Optimization of High-Power Nd$^{3+}$-Yb$^{3+}$ Codoped Fiber Lasers *IEEE J. Lightwave Technol.* 24 1601-9
[3] Yahel E and Hardy A 2003 Modeling High-Power Er$^{3+}$-Yb$^{3+}$ Codoped Fibre Lasers *IEEE J. Lightwave Technol.* 21 2044–52
[4] Vienne G G, Caplen J E, Dong L, Minelly J D, Nilsson J and Payne D N 1998 Fabrication and Characterization of Yb$^{3+}$:Er$^{3+}$ Phosphosilicate Fibres for Lasers *IEEE J. Lightwave Technol.* 16 1990–2001
[5] Federighi M and Di Pasquale F 1995 The Effect of Pair-Induced Energy Transfer on the Performance of Silica waveguide Amplifiers with High Er$^{3+}$/Yb$^{3+}$ Concentrations *IEEE Photonics Tech. Lett.* 7 303–5
[6] Lassila E, Hernberg R and Alahautala T 2006 Axially symmetric fiber side pumping *Optics Express* 14 8638-43
[7] Yan P, Gong M, Li C, Ou P, Xu A 2005 Distributed pumping multifiber series fiber laser *Optics Express* 13 2699-706
[8] Polynkin P, Temyanko V, Mansuripur M and Peyghambarian N 2004 Efficient and Scalable Side Pumping Scheme for Short High-Power Optical Fiber Lasers and Amplifiers *IEEE Photonics Tech. Lett.* 16 2024-26
[9] Koplow J P, Moore S W and Kliner D A V 2003 A New method for Side Pumping of Double-Clad Fiber Sources *J. Quantum Electr.* 39 529-40
[10] Koplow J P, Goldberg L and Kliner D A V 1998 Compact 1-W Yb-Doped Double-Cladding Fiber Amplifier Using V-Groove Side-Pumping *IEEE Photonics Tech. Lett.* 10 793-5
[11] Wang P, Cooper L J, Sahu J K and Clarkson W A 2006 Efficient single-mode operation of a cladding-pumped ytterbium-doped helical-core fiber laser *Optics Letters* 31 226-8
[12] Wang P, Clarkson W A, Shen D Y, Cooper L J and Sahu J K 2006 Novel concepts for high-power fibre lasers *Solid state lasers and amplifiers II: Proc. SPIE* (Strasbourg, France, 5-6 April 2006) Vol. 6190, 61900I (Apr. 17, 2006) ed A. Sennaroglu *et al* pp 1-12
[13] Jiang Z and Marciante J R 2006 Mode-area scaling of helical-core, dual-clad fiber lasers and amplifiers using an improved bend-loss model *J. Opt. Soc. Am. B* 23 2051-8
[14] Baek S, Roh S, Jeong Y and Lee B 2006 Experimental Demonstration of Enhancing Pump Absorption Rate in Cladding-Pumped Ytterbium-Doped Fiber Lasers Using Pump-Coupling Long-Period Fiber Gratings *IEEE Photonics Techn. Lett.* 18 700-2
[15] Jeong Y, Baek S, Nilsson J and Lee B 2006 Simple and compact, all-fibre retro-reflector for cladding-pumped fibre lasers *Electronics Letters* 42 15-6
[16] Kouznetsov D and Moloney J V 2003 Highly Efficient, High-Gain, Short-Length, and Power-Scalable Incoherent Diode Slab-Pumped Fiber Amplifier/Laser *J. Quantum Electr.* 39 1452-61
[17] Kouznetsov D and Moloney J V 2004 Slab Delivery of Incoherent Pump Light to Double-Clad Fiber Amplifiers: An Analytic Approach *J. Quantum Electr.* 40 378-83
[18] Peterka P, Kasik I, Matejec V, Kubecek V and Dvorak P 2006 Experimental demonstration of novel end-pumping method for double-clad fiber devices *Optics Letters* 31 3240-2
[19] Dritsas I, Sun T and Grattan K T V 2006 Numerical simulation based optimization of the absorption efficiency in double-clad fibres *IoP J. Opt. A: Pure Appl. Opt.* 8 49-61
[20] Nelder J A and Mead R 1965 A simplex method for function minimization *Computer Journal* 7 308-13
[21] Dritsas I, Sun T and Grattan K T V 2006 Double-clad fibre numerical optimization with a simplex method *Solid state lasers and amplifiers II: Proc. SPIE* (Strasbourg, France, 5-6 April 2006) Vol. 6190, 61900L (Apr. 17, 2006) ed A. Sennaroglu *et al* pp 1-12
[22] Conn A R, Gould N I M and Toint P L 1991 A Globally Convergent Augmented Lagrangian Algorithm for Optimization with General Constraints and Simple Bounds *SIAM J. Numer. Analysis* 28 545-72
[23] Audet C and Dennis J E, jr. 2003 Analysis of generalized pattern searches *SIAM J. Optim.*, 13 889-903
[24] Lewis R M and Torczon V 2002 A globally convergent augmented lagrangian pattern search algorithm for optimization with general constraints and simple bounds *SIAM J. Optim.*, 12 1075-89
[25] Lewis R M and Torczon V 2000 Pattern Search Methods for Linearly Constrained Minimization *SIAM J. Optim.*, 10 917-41
[26] Lewis R M and Torczon V 1999 Pattern Search Algorithms for Bound Constrained Minimization *SIAM J. Optim.*, 9 1082-99
[27] Torczon V 1997 On the Convergence of Pattern Search Algorithms *SIAM J. Optim.*, 7 1-25
[28] Audet C and Dennis J E, jr. 2006 Mesh adaptive direct search algorithms for constrained optimization *SIAM J. Optim.*, 17 188–217
[29] Abramson M A, Audet C and Dennis J E, Jr. 2006 Nonlinear Programming by Mesh Adaptive Direct searches SIAG/OPT Views-and-news vol 17 no 1 pp 2-11
[30] Lagarias J C, Reeds J A, Wright M H and Wright P E 1998 Convergence Properties of the Nelder-Mead Simplex Method in Low Dimensions SIAM J. Optim., 9 112-47
[31] Kolda T G, Lewis R M and Torczon V 2003 Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods SIAM Review 45 385-482
[32] Torczon V 1991 On the convergence of the multidirectional search algorithm SIAM J. Optim., 1 123-45
[33] Teng C-H, Chen Y-S and Hsu W-H 2006 Camera self-calibration method suitable for variant camera constraints Applied Optics 45 688-96
[34] Lhomme F, Caucheteur C, Chah K, Blondel M and Megret P 2005 Synthesis of fiber Bragg grating parameters from experimental reflectivity: a simplex approach and its application to the determination of temperature-dependent properties Applied Optics 44 493-7
[35] McKinnon K I M 1998 Convergence of the Nelder-Mead simplex method to a nonstationary point SIAM J. Optim., 9 148–58
[36] Kelley C T 1999 Detection and remediation of stagnation in the Nelder-Mead algorithm using a sufficient decrease condition SIAM J. Optim., 10 43–55
[37] Tseng P 1999 Fortified-descent simplicial search method: A general approach SIAM J. Optim., 10 269–88
[38] Zhang R and Shi F G 2004 A Novel Algorithm for Fiber-Optic Alignment Automation IEEE Trans. Advance. Packaging 27 173-8
[39] Lin J, Wu Y and Huang T S 2004 Articulate Hand Motion Capturing Based on a Monte Carlo Nelder-Mead Simplex Tracker Proceedings of the 17th International Conference on Pattern Recognition (ICPR'04, 23-26 Aug.) 4 pp 975-8
[40] Zhang L, Gan F and Wang P 1994 Evaluation of refractive-index and material dispersion in fluoride glasses Applied Optics 33 50-6
[41] Amar J G 2006 The Monte Carlo Method in Science and Engineering IEEE Computing in Science & Engineering vol 8 no 2 pp 9-19
[42] Bandler J W, Kozieł S and Madsen K 2006 Space Mapping for Engineering Optimization SIAG/OPT Views-and-news vol 17 no 1 pp 19-26
[43] Renders J-M and Flasse S P 1996 Hybrid methods using genetic algorithms for global optimization IEEE Trans. Systems, Man and Cybernetics, Part B 26 243–58
[44] Wessel S, Trebst S and Troyer M 2005 A renormalization approach to simulations of quantum effects in nanoscale magnetic systems SIAM J. Multiscale model simul. 4 237-49
[45] Wen M and Yao J 2006 Birefringent filter design by use of a modified genetic algorithm Applied Optics 45 3940-50
[46] Luijten E 2006 Fluid Simulation with the Geometric Cluster Monte Carlo Algorithm IEEE Computing in Science & Engineering vol 8 no 2 pp 20-9
[47] Marsaglia G and Zaman A 1991 A new class of random number generators Ann. Appl. Probab. 3 462–80
[48] Kouznetsov D and Moloney J 2002 Efficiency of pump absorption in double-clad fibre amplifiers. II. Broken circular symmetry J. Opt. Soc. Am. B 19 1259–63
[49] Temelkuran B, Hart S D, Benoit G, Joannopoulos J D and Fink Y 2002 Wavelength-scalable hollow optical fibres with large photonic bandgaps for CO$_2$ laser transmission *Nature* 420 650-3
How to reference
In order to correctly reference this scholarly work, feel free to copy and paste the following:
Ioannis Dritsas, Tong Sun and Ken Grattan (2011). Global Optimization of Conventional and Holey Double-Clad Fibres by Stochastic Search, Stochastic Optimization - Seeing the Optimal for the Uncertain, Dr. Ioannis Dritsas (Ed.), ISBN: 978-953-307-829-8, InTech, Available from: http://www.intechopen.com/books/stochastic-optimization-seeing-the-optimal-for-the-uncertain/global-optimization-of-conventional-and-holey-double-clad-fibres-by-stochastic-search
© 2011 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike-3.0 License, which permits use, distribution and reproduction for non-commercial purposes, provided the original is properly cited and derivative works building on this content are distributed under the same license. |
F. E. IVES.
PHOTOGRAPHIC ATTACHMENT FOR OPTICAL INSTRUMENTS.
(Application filed June 30, 1902.)
[Drawing sheets 1-4 of 4, showing Figs. 1-8; each sheet signed by the witnesses and by the inventor, Frederic E. Ives, by his attorneys, and bearing the imprint of The Norris Peters Co., Photo-Litho., Washington, D.C.]
To all whom it may concern:
Be it known that I, Frederic E. Ives, a citizen of the United States, residing in Philadelphia, Pennsylvania, have invented certain Improvements in Photographic Attachments for Optical Instruments, of which the following is a specification.
The object of my invention is to facilitate the production of photomicrographs or photographs of the images seen in the eyepieces of other optical instruments—such as spectroscopes, polariscopes, &c.—and also to secure the same optical conditions for photography as for vision, my aim being to provide for reproducing whatever the eye may see at any time in the ordinary use of the microscope or other optical instrument without disturbing the position or adjustment of the latter and at the same time obtain the same nominal amplification without measurement and the same definition without refocusing. This object I attain in the manner hereinafter set forth, reference being had to the accompanying drawings, in which—
Figure 1 is a view showing the application of my photographic attachment to an ordinary form of microscope, the latter being shown as adjusted to an inclined position. Fig. 2 is a similar view showing the application of the attachment to the microscope when the same is adjusted to a vertical position. Fig. 3 is a sectional plan view on the line \(a\ a\), Fig. 1. Fig. 4 is a sectional plan view of the camera, illustrating the construction which it is preferred to employ when a camera having an adjustable extension is desired. Fig. 5 is a detached view of part of the camera-carrying element of the attachment, and Figs. 6, 7, and 8 are views illustrating certain modifications in the construction of the camera attachment.
An ordinary form of microscope is illustrated at 1 in the drawings, the tube of the microscope being carried by an arm 2, attached to a pillar and stage, which is pivotally mounted upon a post 3, projecting upwardly from a base 4, of horseshoe form, and having a rearwardly-extending projection 5, as shown in Fig. 3.
The camera consists of a box 6, having at the rear end any ordinary form of plate-holder and provided at the front end with a fixed-focus lens 7, preferably a lens of ten-inch focus, so that the amplification of the image in the photograph will be exactly what would be calculated for the microscope-image, the usual microscope rule being to assume an image distance of ten inches.
Another reason for using the camera having a fixed focus of ten inches is to render the definition in the photograph equal to that seen in the microscope, which would not usually be the case if the camera were not provided with a lens focusing for parallel rays, because the alteration in the focus which would then be necessary involves a departure from the conditions assumed as a basis for calculating the best construction for objectives and eyepieces. A camera thus constructed when applied to the eyepiece of the microscope will reproduce the image seen in the microscope correct in definition and amplification, provided the microscope has been focused with an emmetropic eye and that the photograph is made by the action of the same light-rays that form the visible image. If the microscope-images are focused by a shortsighted or by an abnormally-far-sighted eye, a slight readjustment of the focus of the microscope may be necessary to make the image perfectly sharp upon the ground glass of the camera; but to avoid this possibility the myopic or hypermetropic microscopist may focus the microscope through a glass which corrects the eye for parallel rays.
In order to provide for the ready application of the camera to the microscope without disturbing the latter and for the equally ready removal of the camera from the microscope when the photographic exposure has been made, I mount the camera-box 6 so that it can slide longitudinally upon a frame 8 or upon suitable arms acting as a substitute for said frame, the latter being pivoted concentrically with the pivot of the microscope. Hence the axis of the camera can be readily aligned with that of the microscope, and the camera can be longitudinally adjusted upon its carrier to accord with any longitudinal adjustment of the microscope necessary for the proper focusing of the same.
In the present instance the arms 8\(a\) of the supporting-frame 8, which straddle the body 2 of the microscope, have pins 9, which can
be dropped into slots at the upper ends of plates 10, projecting upwardly from a base structure 11, which has projecting arms 11a for bearing against the sides of the horseshoe-base 4 of the microscope, so as to insure the proper lateral adjustment of the camera-supporting frame in respect to the same, the rear portion 11 of the base structure having laterally-adjustable arms 12 for engaging the rearward projection 5 of the microscope-base and also having a forwardly-projecting setscrew 13, which by contact with said rearward projection 5 will so govern the longitudinal adjustment of the base structure 11 in respect to the microscope-base as to insure the proper alinement of the pivot of the microscope and the sockets which receive the pivot-pins 9 of the camera-carrying frame.
In order to steady the camera-support 8 in any position of angular adjustment which it may assume, I use a strut 14, pivoted to the base 11, and having at the upper end an antifriction-roller 15, which runs in contact with the under face of the frame 8, as shown in Fig. 1, said strut being acted upon by a coiled spring 16, so that its roller has a constant bearing upon the under side of the frame 8 in any position of angular adjustment assumed by said frame, a slotted brace 17 and clamping-screw 18 serving to lock the strut 14 in position when the angular position of the camera has been finally determined.
When the microscope and camera are adjusted to a vertical position, the camera-supporting frame can be locked in such vertical position by engagement of a pivoted hook 19 on the strut with an eye 20 on the under side of the supporting-frame 8, as shown in Fig. 2. Similar hooks 21 may be employed for retaining the pivot-pins 9 of the camera-frame within the sockets of the supporting-plates 10.
Longitudinal adjustment of the camera upon the supporting-frame 8 may be effected in any suitable manner. For instance, in Fig. 1 the camera has guides 22 embracing the edges of the frame and is provided with a clamping-screw 23, whereby it may be locked to said frame in any position of longitudinal adjustment thereon.
If a camera with adjustable extension is desired, I prefer to adopt the construction shown in Fig. 4, in which one box 24 slides within another box 25, a clamping-screw 26 on the latter engaging with a spring-plate 27 on the box 24, so as to secure the latter in position after adjustment.
In that form of the invention shown in Fig. 5 a rack-and-pinion adjusting device 28 29 is employed for effecting longitudinal adjustment of the camera 6 upon the supporting-frame 8, and a telescopic strut 29, pivoted to the camera, bears at its lower end against a rearward extension 30 of the base structure 11 of the attachment. Either of these constructions, adapted for use with the ordinary horseshoe-base microscope, may, after once being adjusted to the inclination and extension of the microscope, be instantly removed as a whole by a single rectilinear movement, effected by one hand, and when replaced with equal facility will be ready for instant use without readjustment, if meanwhile the inclination or extension of the microscope has not been altered.
It is obvious that different forms of microscope-bases may call for modifications in the shape or construction of the base of the attachment or in the support for the pivot-pins which carry the camera. For instance, these pivot-supports might in some cases project from the base of the microscope itself, as at 10a in Fig. 7, or from a base upon which the microscope rests in such position as not to interfere with the comfortable use of the microscope when the camera is not attached thereto; and microscope-stands which have no pivots and are therefore always used in the upright position require neither pivots nor strut for the camera-support, but only a simple standard.

It is also obvious that one may choose to make the amplification in the photograph different from that in the microscope—say one-half or one and one-half times linear—in which case a camera having a five-inch focus or one having a fifteen-inch focus could be substituted for the camera having a ten-inch focus, still utilizing other features of my invention; or the camera may have an adjustable extension, as in Fig. 4, with lenses of various foci for various amplifications; or some of the features of my invention may be used for photography without the eyepiece or camera-lens.

It should also be understood that while the construction described is preferable, the same principles may be embodied and the same results attained with some modifications of construction. For example, the camera need not necessarily be movable as a rigid whole, and the extension-arms, instead of being attached to the rigid base carrying the camera, may pass through the guides 22', attached to the bottom or sides of the camera or its base, which is then movable by sliding along the arms, as shown in Fig. 7; or the extension-arms may, as shown at 8' in Fig. 8, be telescopic instead of sliding through guides on the camera or attaching to a separate base upon which the camera is movable.
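The substitution rule stated above (a five-inch focus for one-half, a fifteen-inch focus for one and one-half times the visual amplification) follows from treating the eyepiece's output as nominally parallel rays brought to a focus by the camera lens, with the visual magnification referred to the conventional ten-inch (about 250 mm) viewing distance. A sketch of the relation, under those assumptions:

```latex
% Photographic amplification relative to the visual image, assuming the
% eyepiece delivers nearly parallel rays and the visual magnification is
% referred to the conventional 10-inch (approx. 250 mm) viewing distance:
\[
  M_{\text{photo}} \;=\; M_{\text{visual}} \cdot \frac{f_{\text{camera}}}{10\,\text{in}}
\]
% so that
%   f = 5  in  gives  M_photo = 0.5 M_visual
%   f = 10 in  gives  M_photo =     M_visual  (the case the patent prefers)
%   f = 15 in  gives  M_photo = 1.5 M_visual
```

This is consistent with the patent's preference for a ten-inch fixed focus: at that focal length the photograph reproduces the visual image without change of amplification.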
The construction which I have shown in Fig. 1 has the advantage, among others, of providing a fixed central position for the attachment or bearing of a supporting-strut, which is the most convenient means for securing a positive and rigid, though adjustable, support for the camera, whereby all danger of vibration is overcome.
Having thus described my invention, I claim and desire to secure by Letters Patent—
1. The combination of an optical instrument having an eyepiece, with a camera complete with lens having a fixed focus similar to the image distance of the instrument, whereby the image seen in the eyepiece is reproduced without alteration of focus, definition or amplification, substantially as specified.
2. A camera focused for parallel rays and movable to and fro in the line of its optical axis, in combination with a support whereby said camera is mounted in position for use in connection with an optical instrument, substantially as specified.
3. A camera focused for parallel rays and movable to and fro as a rigid whole in the line of its optical axis, in combination with a support whereby said camera is mounted in position for use in connection with an optical instrument, substantially as specified.
4. A camera focused for parallel rays and movable to and fro in the line of its optical axis, in combination with a support whereby said camera is mounted in position for use in connection with an optical instrument, said support having as an element a base constructed to engage the base of said instrument, substantially as specified.
5. A camera focused for parallel rays and movable to and fro in the line of its optical axis, in combination with a support whereby the camera is mounted in position for use in connection with an optical instrument, one of the elements of said support being a strut whereby the support is braced and stiffened when the camera is in position for use, substantially as specified.
6. A camera focused for parallel rays and movable to and fro in the line of its optical axis, in combination with a support whereby the camera is mounted in position for use in connection with an optical instrument, one of the elements of said support being a pivoted strut whereby the support is braced and stiffened when the camera is in position for use, substantially as specified.
7. A camera focused for parallel rays and movable to and fro in the line of its optical axis, in combination with a support whereby the camera is mounted in position for use in connection with an optical instrument, elements of said support being a base constructed for engagement with the base of said instrument, and a strut whereby the support is braced and stiffened when the camera is adjusted in position for use, substantially as described.
8. A camera focused for parallel rays and movable to and fro in the line of its optical axis, in combination with a support whereby the camera is mounted in position for use in connection with an optical instrument, elements of said support being a base constructed for engagement with the base of said instrument, and a pivoted strut whereby the support is braced and stiffened when the camera is adjusted in position for use, substantially as specified.
9. A camera focused for parallel rays and movable to and fro in the line of its optical axis, and a support whereby said camera is mounted in position for use in connection with an optical instrument, elements of said support being a base engaging the base of the instrument, and a strut interposed between said base and that portion of the support upon which the camera is mounted, substantially as specified.
10. A camera focused for parallel rays and movable to and fro in the line of its optical axis, and a support whereby said camera is mounted in position for use in connection with an optical instrument, elements of said support being a base engaging the base of the instrument, and a pivoted strut interposed between said base and that portion of the support upon which the camera is mounted, substantially as specified.
11. The combination of a pivotally-mounted optical instrument, with a camera movable in the line of its optical axis, and a pivoted and swinging support for said camera having its pivotal axis concentric with the pivotal axis of the instrument, substantially as specified.
12. The combination of a pivotally-mounted optical instrument, with a camera focused for parallel rays and movable in the line of its optical axis, and a pivoted and swinging support for said camera having its pivotal axis concentric with the pivotal axis of the instrument, substantially as specified.
13. The combination of a pivotally-mounted optical instrument, with a camera focused for parallel rays and movable as a rigid whole in the line of its optical axis, and a pivoted and swinging support for said camera having its pivotal axis concentric with the pivotal axis of the instrument, substantially as specified.
14. The combination of an optical instrument having a pivotal mounting, with a camera movable in the line of its optical axis, and a support for said camera comprising a base engaging the base of the instrument, and a camera-carrying element pivotally mounted upon said base in line with the pivotal axis of the instrument, substantially as specified.
15. The combination of an optical instrument having a pivotal mounting, with a camera focused for parallel rays and movable as a rigid whole in the line of its optical axis, and a support for said camera, comprising a base engaging the base of the instrument, and a camera-carrying element pivotally mounted upon said base in line with the pivotal axis of the instrument, substantially as specified.
16. The combination of an optical instrument having a pivotal mounting, with a camera movable in the line of its optical axis, a pivoted and swinging support for said camera having its pivotal axis concentric with that of the instrument, and a strut for bracing said pivotal camera-support when the camera is adjusted for use, substantially as specified.
17. The combination of an optical instrument having a pivotal mounting, with a camera focused for parallel rays and movable as a rigid whole in the line of its optical axis, a pivoted and swinging support for said camera having its pivotal axis concentric with that of the instrument and a strut for bracing said pivotal camera-support when the camera is adjusted for use, substantially as specified.
18. The combination of an optical instrument having a pivotal mounting, with a camera movable in the line of its optical axis, a pivoted and swinging support for said camera having its pivotal axis concentric with that of the instrument and a pivotal strut for bracing said pivotal camera-support when the camera is adjusted for use, substantially as specified.
19. The combination of an optical instrument having a pivotal mounting, with a camera focused for parallel rays and movable as a rigid whole in the line of its optical axis, a pivoted and swinging support for said camera having its pivotal axis concentric with that of the instrument, and a pivoted strut for bracing said pivotal camera-support when the camera is adjusted for use, substantially as specified.
20. The combination of an optical instrument having a pivotal mounting, with a camera movable in the line of its optical axis, and a support for said camera comprising a base constructed to engage the base of the instrument, a camera-carrying element pivoted to said base in line concentrically with the pivot of the instrument, and a strut interposed between the said camera-carrying element of the support and the base, substantially as specified.
21. The combination of an optical instrument having a pivotal mounting, with a camera focused for parallel rays and movable as a rigid whole in the line of its optical axis, and a support for said camera comprising a base constructed to engage the base of the instrument, a camera-carrying element pivoted to said base in line concentrically with the pivot of the instrument, and a strut interposed between said camera-carrying element of the support and the base, substantially as specified.
22. The combination of an optical instrument having a pivotal mounting, with a camera movable in the line of its optical axis, and a support for said camera comprising a base constructed to engage the base of the instrument, a camera-carrying element pivoted to said base in line concentrically with the pivot of the instrument, and a pivoted strut interposed between the said camera-carrying element of the support and the base, substantially as specified.
23. The combination of an optical instrument having a pivotal mounting, with a camera focused for parallel rays and movable as a rigid whole in the line of its optical axis, and a support for said camera comprising a base constructed to engage the base of the instrument, a camera-carrying element pivoted to said base in line concentrically with the pivot of the instrument, and a pivoted strut interposed between the said camera-carrying element of the support and the base, substantially as specified.
24. The combination of an optical instrument having a pivotal mounting, with a camera movable in the line of its optical axis, a support for said camera, and pivotal mountings for said support located on opposite sides of the camera-mounting, and providing pivotal axes for the camera-support concentric with that of the instrument, substantially as specified.
25. The combination of an optical instrument having a pivotal mounting, with a camera focused for parallel rays and movable as a rigid whole in the line of its optical axis, a support for said camera, and pivotal mountings for said support located on opposite sides of the camera-mounting and providing pivotal axes for the camera-support concentric with that of the instrument, substantially as specified.
26. The combination of an optical instrument having pivotal mounting, with a camera movable in the line of its optical axis, and a support for said camera comprising a camera-carrying element and a base engaging the base of the instrument and having pivotal bearings for the camera-carrying element concentric with that of the instrument-mounting, said base also having an adjustable member for bearing upon the base of the instrument, substantially as specified.
27. The combination of an optical instrument having a pivotal mounting, with a camera movable in the line of its optical axis, and a support for said camera comprising a camera-carrying element and a base engaging the base of the instrument and having on opposite sides of the latter pivotal bearings for the camera-carrying element which bearings are concentric with that of the instrument-mounting, substantially as specified.
28. The combination of an optical instrument having a pivotal mounting, with a camera focused for parallel rays and movable as a rigid whole in the line of its optical axis, and a support for said camera comprising a camera-carrying element and a base engaging the base of the instrument and having on opposite sides of the latter pivotal bearings for the camera-carrying element, which bearings are concentric with that of the instrument-mounting, substantially as specified.
29. The combination of an optical instrument having pivotal mounting, with a camera movable in the line of its optical axis, and a support for said camera detachably mounted in pivotal bearings concentric with those of the instrument-mounting, substantially as specified.
30. The combination of an optical instrument having pivotal mounting, with a camera focused for parallel rays and movable as a rigid whole in the line of its optical axis, and a support for said camera detachably mounted in pivotal bearings concentric with those of the instrument-mounting, substantially as specified.
31. The combination of an optical instrument having pivotal mounting, with a camera movable in the line of its optical axis, and a support for said camera comprising a camera-carrying element and a base engaging the base of the instrument, said camera-carrying element being detachably mounted in pivotal bearings on said base, which bearings are concentric with that of the instrument-mounting, substantially as specified.
32. The combination of an optical instrument having pivotal mounting, with a camera movable in the line of its optical axis, and a support for said camera comprising a camera-carrying element, a base engaging the base of the instrument and having pivotal bearings for said camera-carrying element which are concentric with the axis of the instrument-mounting, a pivoted strut interposed between the base and said camera-carrying portion of the support, a spring for acting upon said strut and causing it to accommodate itself to the various angular adjustments of said camera-carrying element of the support, and means for locking said strut in its different positions of adjustment, substantially as specified.
33. The combination of an optical instrument having a pivotal mounting, with a camera movable in the line of its optical axis, and a support for said camera comprising a camera-carrying element, a base engaging the base of the instrument and having pivotal bearings for the camera-carrying element concentric with the axis of the instrument-mounting, a pivoted strut interposed between the base and the camera-carrying element of the support, and means for locking said strut in its different positions of adjustment, substantially as specified.
34. The combination of an optical instrument having a pivotal mounting, with a camera focused for parallel rays and movable as a rigid whole in the line of its optical axis, and a support for said camera comprising a camera-carrying element, a base engaging the base of the instrument and having pivotal bearings for the camera-carrying element concentric with the axis of the instrument-mounting, a pivoted strut interposed between the base and the camera-carrying element of the support, a spring for acting upon said strut and causing it to accommodate itself to the various angular adjustments of said camera-carrying element of the support, and means for locking said strut in its different positions of adjustment, substantially as specified.
In testimony whereof I have signed my name to this specification in the presence of two subscribing witnesses.
FREDERIC E. IVES.
Witnesses:
F. E. BECHTOLD,
FLORENCE HILLMAN.
Market Forces and the Changing Behaviour of Media Houses in Contemporary Scenario: An Analytical Study
* Dr. Ajai Pal Sharma
Effulgence
Vol. 12 No. 1
January - June, 2014
Rukmini Devi Institute of Advanced Studies
E-mail : email@example.com, Website : www.rdias.ac.in
http://effulgence.rdias.ac.in/user/default.aspX
https://dx.doi.org/10.33601/effulgence.rdias/v12/i1/2014/97-103
Abstract
The power of expression has allowed man to exchange knowledge from one generation to another and has helped him learn from both success and failure for the progress and growth of mankind. Everyone in this universe has the freedom and power to express happiness or show discontent with the status quo and demand change. The right to freedom of speech is recognized as a human right under Article 19 of the UDHR (Universal Declaration of Human Rights) and in international human rights law in the International Covenant on Civil and Political Rights (ICCPR). Media is considered the fourth pillar of democracy, but questions are being raised on media today that it is not showing what needs to be shown in its real form. There must be different types of pressures from different sections of society which compel the media houses to speak their language in order to survive in the competitive market. Journalism may claim that it is the voice of the nation, but people do not believe this, as there is enough evidence that it is the mouthpiece of business houses. Various newspapers and magazines are owned by corporate houses and speak their language. It can therefore be said that, in the contemporary environment, media houses are under various pressures which compel them to write or speak whatever brings them more business. This paper is an attempt to find out the various factors and reasons which have impacted media houses in contemporary times and changed their priorities.
Keywords: Discontent, UDHR, ICCPR, Contemporary, Pillar
INTRODUCTION AND REVIEW OF THE PROBLEM
Freedom of communicating ideas and opinions is one of the most precious of the rights of mankind, and it can be discussed from various perspectives, one of which is the real existence of democracy as well as human dignity. It is very much true that the power of expression has allowed man to exchange knowledge from one generation to another and has helped him learn from both success and failure for the progress and growth of mankind. This expression may take different forms: written, oral, pictorial, cartoons, or other signs and means. Everyone in this universe has the freedom and power to express one's happiness or show discontent with the status quo and demand change. The right to freedom of speech is recognized as a human right under Article 19 of the UDHR (Universal Declaration of Human Rights) and in international human rights law in the International Covenant on Civil and Political Rights (ICCPR).
This paper was presented in National Conference on ‘Freedom of Expression: Ethical Parameters and Market Forces in Media Industry’ organized by Maharaja Agrasen College, University of Delhi on 8-9 Feb 2013.
The author is indebted and thankful to Prof. HS Chandalia of Central University of Haryana for providing his valuable insights in writing this paper.
Assistant Professor, Department of Management Studies, School of Law, Governance, Public Policy and Management, Central University of Haryana, Jund-Pali Villages, Mahendergarh
But what is happening today is a real matter of concern and needs serious discussion. Media is considered the fourth pillar of democracy, but questions are being raised on media today that it is not showing what needs to be shown in its real form. There must be different types of pressures from different sections of society which compel the media houses to speak their language. Both print and electronic media have been at the receiving end of gagging acts by various market forces. An American Commission on Freedom of Press stated in its report (1),
“Freedom of Press is essential to political liberty, where men cannot freely convey their thoughts to one another, no freedom is secure. Where freedom of expression exists, the beginning of a free society and means for retention of liberty, are already present. Free expression is, therefore, unique among liberties. It is the ‘matrix’, the indispensable condition of nearly every other form of freedom.”
But today the freedom of press seems to be missing, possibly due to various market forces which compel the media houses to bow before these pressures. Media serves not just to disseminate information to the masses but is also the maker of public opinion. The impact of media is such that it influences public policies and sets an agenda after filtering the information. It is at the level of selecting what the media offers that reporters and editors give final shape to the information to be disseminated to the general public. McCombs and Shaw defined it as (2),
“The impact of the mass media—the ability to effect cognitive change among individuals, to structure their thinking—has been labeled the agenda-setting function of mass communication. Here may lie the most important effect of mass communication, its ability to mentally order and organize our world for us. In short, the mass media may not be successful in telling us what to think, but they are stunningly successful in telling us what to think about.”
Agenda setting operates at three levels: the media agenda, the process of assigning priorities to the subjects to be discussed by the media; the public agenda, the impact of these priorities on public opinion; and the policy agenda, how these opinions and reactions help give shape to government policies. But the media operates in a societal set-up and a competitive environment, surrounded directly or indirectly by different types of market forces, and cannot afford to oppose them if it has to survive. The records say that there are 812 licensed television channels, of which 338 are news channels (215 of them Hindi news channels), and that the market size of television is nearly Rs. 38,000 crore, comprising Rs. 25,000 crore from pay channels and Rs. 13,000 crore from advertising. It is well known that a lot of money is required to run any channel or newspaper, which raises a big question: where will the money come from to run all these channels? The answer is that only the market, through advertising spending, can keep the channels running on a 24-hour basis. So when it comes to survival and beating the competitors in the market, there is no way a channel can avoid the market; rather, it would like to take advantage of it. In the process the channels start depending on advertisers to meet their increasing expenditures and are bound to move as per the advertisers' requirements. This can be noticed while watching news channels, where the newsreaders make a point of emphasizing the commercial breaks, a clear sign of the demanding position of the advertiser who has paid heavy money to that channel. Let us discuss and deliberate upon the various market forces which may be responsible for these developments in media houses.
Rationale and Objectives of the Study
Today it is clearly seen how media houses are being hijacked and overpowered by various market forces, with the result that they meet the objectives of those forces rather than bringing out the real and emerging issues of society; informing society and making it aware has taken a back seat in both mediums, electronic and print. The economic and business implications of this new phenomenon have attracted promoters and advertisers towards media houses in order to grab some share and reach out to larger audiences through the media. This article aims to find out the various market forces mainly responsible for this concern and, if possible, what the alternatives to overcome this problem can be.
Research Methodology
The methodology used for writing this paper is purely exploratory in nature, drawing on views from various experts in this field, television debates, various newspapers, and inputs received from conferences and seminars based on this theme; the subjective ideas of the author have, of course, also been included. The paper was presented at a National Conference held by one of the leading colleges of the University of Delhi, and the inputs received from the experts at that conference have also been considered and incorporated.
Various Market Forces Impacting Behaviour of Media Houses
Advancement and Application of Technology
No doubt the advancement of technology has brought a revolution to the world of journalism too: it has made presentation better and more competitive and has given an opportunity to improve the content, quality and appeal of programmes. But the emergence of new communication technology and increasing globalization have also brought forth a set of new challenges and opportunities for the media houses. In particular, the emergence and increasing use of the internet has posed a challenge for the print media. This has happened because the reading habits of consumers are changing: most of the young generation and techno-savvy people, who are constrained for time, turn to the internet whether they need to update themselves about happenings in the country or make any kind of purchase. These changing habits, and the increasing attraction of improved technology, have given media houses the chance to tie up with advertisers so that they can reach the readers of internet information. The result is that whenever we open any website we are bound to see advertisements, whether we like them or not.
Increasing Market Oriented Competition
The increasing competition to expand the media industry has accelerated its growth, resulting in many mergers and alliances. Through these alliances the corporates are investing in media houses so that their interests can be protected the way they desire. This is driven purely by market-oriented competition in view of the shared interest of the shareholders, i.e. the corporations. But this is not a healthy sign for the media industry, and it requires serious deliberation on how to keep the dominance of market forces away from the media's freedom to express itself as it wants. As far as television is concerned, the fierce competition for viewership and advertising is calling the shots, and all channels are being viewed as profit-making ventures. In addition, the increasing number of media houses is itself one of the strong market forces that sees the 'packaging of news' as an essential requirement to earn more and more money. Hence increasing competition has decreased the quality of content, and the most prominent places in the newspaper (including the front page) are given to the advertisers. It is reported that 75 percent of newspaper readers (of 16 English dailies) are shared by the Times of India and Hindustan Times, 5 percent by the Economic Times, and the remaining 20 percent by all other newspapers. The weight of the TOI is increasing day by day only because it needs to create more and more advertising space, and that clearly shows the impact of marketing on the media houses.
Globalization and Foreign Direct Investment
Due to globalization, global media barons have got access to invest in and dominate Indian media houses. After the government allowed Foreign Direct Investment in media, more multinational corporations started investing in Indian media, making the competition tougher. One of the most disturbing developments of recent years is that the media is spinning out of control, blown whichever way the winds of consumerism and globalization take it. Today we live in a globalized world, and as more and more FDI is allowed across the country, the investors need space to advertise themselves; there can be no better vehicle than the electronic or print media, and so they buy into the media houses and start running them as per their own wishes and direction.
Revenue Generation the Sole Objective
For all the media houses, revenue generation has become the sole motive, irrespective of how it is generated. To achieve this objective they do not even hesitate to adopt unhealthy social practices, for instance by promoting harmful consumer products. In this race, social and serious content is generally replaced by celebrity gossip, sexually attractive pictures and other colourful stories. The economic and commercial compulsions of the free market have pushed newspapers to give more space to entertainment news, which attracts the urban class more than serious content on social issues does, with revenue generation becoming the sole objective and driver of the market. It is also evident that a major portion of the audience, especially the rural class, cannot afford all this. The records say that nearly 90 percent of the income of media houses is generated through advertising, and if that is true there is no doubt that media houses are being highly impacted by market forces in the contemporary scenario.
Catering to the Needs of Handful Market Forces
The growing involvement of corporations in media houses has pressured the latter to speak the corporations' language. Extra pages are added to newspapers for special coverage of subjects such as environment, health, science, gender, and law, which were largely ignored before, in order to give space to those who have overpowered the media through money power. Most magazine covers, too, are dominated by a wide range of family events so as to earn more and more from advertisers, and the habit hardens until, in the long run, the press speaks the language of a handful of corporate forces. All of this happens to fulfill the desires of the handful of people who run the corporations, including those who pretend to be media houses.
Dramatic Increase in Advertising
Today we live in a market economy and are compelled to move with the winds of the market. Imtiaz Alam, General Secretary of the South Asian Free Media Association, agrees that the corporatization of media is an increasing fashion in which both format and content are decided by advertisers and sponsors instead of editors. N. Ram, Editor-in-Chief of the English newspaper The Hindu, concludes that "we can't have walls between the editorial functions and the marketing functions but have to draw a line," though he did not explain the features of this so-called line. One can see that the line between advertisements and news is fading quickly: advertising is the main source of income for market-oriented media corporations, which makes them dependent on advertisers. Advertisers, in turn, demand steeper and quicker results from the media industry and intrude into media content, and media houses that refuse risk losing market share; the fear of losing it compels them to go the way the advertiser wants.
Changing Priorities of the Media Houses
The priorities of the channels have changed compared with earlier times. According to a reliable media source (name not disclosed), Shri Rajdeep Sardesai, the well-known journalist, once used to shut the doors of the meeting room to the marketing people, because top priority was given to content and to nothing else; as times changed, the same person began cutting meetings short whenever the marketing people needed to see him about selling space to advertisers and marketers. This became possible only because the channels' priorities changed: a channel must be run, a channel cannot run without money, and the money is made available by advertisers who find a suitable place in the channel, newspaper, or magazine. This is the impact of market forces on media houses, and it shows how their priorities have shifted over time.
Conflicting Demands of Sponsors and Audiences
Media houses are under constant pressure to put the sponsors' interests ahead of everything, and the public interest is subordinated as a result. Everyone knows that a sponsor's interests are short term, whereas the public interest is long term, comprising various social and developmental issues; yet the public interest is overlooked whenever it collides with the sponsors'. Take the example of the Times of India: one of the leading newspapers in India, once considered a serious pro-establishment paper, it has become an urban glamour sheet. The line between management and editorial policy has blurred even as its circulation touches huge numbers. The media industry increasingly defines itself by what audiences are interested in, but analyzed seriously, it is in a real sense running away from its responsibilities. Research shows that the owner of the Times of India, Bennett & Coleman, invests up to 15% equity in various entrepreneurial companies and in return promotes their brands at special rates and gives them coverage. Media houses and corporations thus help each other to take maximum benefit to their own credit, serving their own interests rather than those of the general public.
More to Please the Advertisers than to Inform the Audiences
Under growing pressure from advertisers, even the auto section of a newspaper is designed to create a marketplace for them, and the practice of selecting news in order to make advertising more effective has become so common that it approaches the status of scientific precision. The media depends on advertising for survival and in turn obeys the dictates of a tiny community of big business houses and corporations. An estimated 80 percent of revenue for media houses comes from advertising. According to research by the Centre for Media Studies (CMS), only 8 percent of prime-time television news covered development issues in 2008, which indicates the relationship between media houses and the corporate and political giants behind them. In today's competitive environment the front pages of most newspapers are full of advertising, confirmed proof of how the market has impacted media houses.
Appointment of Editors
Even the appointment of editors is influenced by market forces: an editor is expected to bring in business, and his job may be at risk otherwise. Today editors cannot enjoy their freedom of expression because of the many pressures in a media world that has become purely profit oriented. In earlier times editors were free to choose the news that went into their newspapers; today they must give priority to advertisements over news, because advertising brings heavy money to the media houses, whether from the corporate world or from other sources.
Journalism may claim to be the voice of the nation, but people do not believe it, for there is ample evidence that it is the mouthpiece of business houses. Various newspapers and magazines are owned by corporate houses and speak their language. In the contemporary environment, then, media houses are under pressures that compel them to write or broadcast whatever brings them more business. This raises many questions: Does an editor's freedom end where the proprietor's eyebrow begins to rise? Is an editor the custodian of the proprietor's interest? It cannot be denied that a newspaper's value will continue to rest on its content, and that responsibility lies squarely on the shoulders of the editor.
After deliberating on these market forces, it can be said that we should not expect much from market-driven media. Uday Shankar, Chief Executive Officer of Star India, bluntly confesses that the media has been business oriented from day one, adding that it cannot run without the help of advertisers because, after all, money is required to meet expenses. The media may not be a tool for social reform, but if it is a business, it must carry some corporate social responsibility; to fulfill it, the media must not forget the issues of social development that matter most to rural people and those living below the poverty line. If the rape of one urban woman can be telecast around the clock, then a woman from a rural, marginalized community deserves the same treatment, at least from the media houses; regrettably, that is not happening.
The Way Ahead
Media conglomerates are run largely for business interests, not for charity, but one should not forget that a second-stage revolution is possible through alternative media, which generally originates from the people themselves. The term 'alternative media' is presumed to come from Western countries, where it describes the right to criticize government actions and policies, but that is not always so: it can equally be a medium for expressing the concerns of the common man that the mainstream media generally ignores. There are many such alternatives in journalism: blogs; smaller publications such as Samyantar, the Hindi magazine owned by Mr. Pankaj Bist; Frontline, which can also be placed in this category for regularly raising issues of public concern; art films; and many more.
Consider the film industry, reckoned the most glamorous world of all, oriented toward profit and forever chasing markets that can raise its ratings. Yet even in this world there are people who not only think but reflect their thinking in their films. Peepli Live is a great example of showing how real issues are hijacked by the media houses, and Paan Singh Tomar raises real issues in the sports industry (itself heavily market oriented) without a single gratuitous incident of violence; in the contemporary environment it is very difficult to find films that raise issues of this nature.
Community Radio is another such initiative, taken up by the Government of India, in which people are encouraged to select programs for community development and to act as anchors and producers themselves, within the limited area or range in which they live. There are also art films, the true representatives of the people ignored by the mainstream media in its race for maximum market share. It cannot be denied, however, that in the race of competition the space for alternative media is shrinking, and this more people-friendly media needs saving.
CONCLUSION
In conclusion, the most disturbing development is that the winds of consumerism and globalization blow so hard that the media is spinning out of the editors' grip and into the control of various market forces. In cricket coverage, the first and last balls of an over are sacrificed to the advertisers, because the match is played with the money those advertisers pay, and the broadcasters are bound to oblige them. Not only in cricket or other sports: even news channels like Aaj Tak, the highest-rated among them, show a minute or a minute and a half of news followed by six or seven minutes of advertisements, and the other channels sail in the same boat. Senior journalists are not untouched by this race: Mr. Prasoon Vajpai recently shifted from Zee News to Aaj Tak, Prabhu Chawla from Aaj Tak to IBN 7, and there are any number of such cases driven by the power of market and money. A solution is now needed to bring the media houses back on track and to help them raise the issues of the common man. Nothing can serve better than alternative media, considered the real face of the people; it is shrinking under the growing weight of the mainstream media and needs support. But such media is still alive, people still run it, and the day is not far when it will show its colors again, as it has in the past.
Effects of caffeine, fructose, and glucose ingestion on muscle glycogen utilization during exercise
by Mark Alan Erickson
A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Physical Education
Montana State University
© Copyright by Mark Alan Erickson (1985)
Abstract:
Five competitive cyclists (four male and one female) were studied during 95 min of bicycle ergometer exercise (approx. 65% VO2max) to determine the effects of ingesting caffeine before exercise (CAF) (5 mg/kg body weight), fructose before exercise (FRU) (1 g/kg), glucose during exercise (GLU) (1 g/kg), a combination of caffeine/fructose before plus glucose during exercise (CFG) (same quantities as other trials), and a control (CON) on muscle glycogen utilization during exercise. Each subject performed all trials with not less than seven days and not more than fourteen days between trials. Preexercise ingestion occurred one hour prior to exercise and ingestion during exercise began fifteen minutes into the ride. Muscle biopsies were performed before initial ingestion (BIM) and following exercise (FEM).
Muscle glycogen levels were similar in all five trials, both before ingestion (CON = 152.0 umol/gr w.w., CAF = 144.6, CFG = 135.7, FRU = 146.5, GLU = 138.2) and following exercise (CON = 60.66, CAF = 81.44, CFG = 68.72, FRU = 79.86, GLU = 76.40). Muscle glycogen utilization, however, was greater (P < 0.05) during trial CON than trials CAF and GLU. Although not statistically significant, there was a trend (P < 0.1) towards lower glycogen utilization in trials CFG and FRU when compared with trial CON. No significant differences were observed between trials CAF, CFG, FRU, and GLU. These data indicate that caffeine ingestion before exercise and glucose ingestion during exercise can decrease muscle glycogen utilization.
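The utilization comparisons follow directly from the pre- and post-exercise means quoted above; a minimal sketch of that arithmetic:

```python
# Sketch: glycogen utilization (pre - post) for each trial, using the
# mean values reported in the abstract (umol glucosyl units / gr w.w.).
pre  = {"CON": 152.0, "CAF": 144.6, "CFG": 135.7, "FRU": 146.5, "GLU": 138.2}
post = {"CON": 60.66, "CAF": 81.44, "CFG": 68.72, "FRU": 79.86, "GLU": 76.40}

utilization = {trial: round(pre[trial] - post[trial], 2) for trial in pre}
print(utilization["CON"])  # 91.34 -- the control depleted the most glycogen
```

The control shows the largest depletion of the five trials, consistent with the glycogen-sparing effect reported for CAF and GLU.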
EFFECTS OF CAFFEINE, FRUCTOSE, AND GLUCOSE INGESTION ON MUSCLE GLYCOGEN UTILIZATION DURING EXERCISE
by
Mark Alan Erickson
A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Physical Education
MONTANA STATE UNIVERSITY
Bozeman, Montana
December 1985
APPROVAL
of a thesis submitted by
Mark Alan Erickson
This thesis has been read by each member of the thesis committee and has been found to be satisfactory regarding content, English usage, format, citations, bibliographic style, and consistency, and is ready for submission to the College of Graduate Studies.
12/11/85
Date
Robert J. Schuengel
Chairperson, Graduate Committee
Approved for the Major Department
12/13/85
Date
Head, Major Department
Approved for the College of Graduate Studies
1/14/86
Date
Wang L. Parsons
Graduate Dean
STATEMENT OF PERMISSION TO USE
In presenting this thesis in partial fulfillment of the requirements for a master's degree at Montana State University, I agree that the library shall make it available to borrowers under rules of the library. Brief quotations from this thesis are allowable without special permission, provided that accurate acknowledgment of source is made.
Permission for extensive quotation from or reproduction of this thesis may be granted by my major professor, or in his/her absence, by the Director of Libraries when, in the opinion of either, the proposed use of the material is for scholarly purposes. Any copying or use of the material in this thesis for financial gain shall not be allowed without my written permission.
Signature ____________________________
Date 12-16-85
ACKNOWLEDGMENTS
I would like to express my gratitude to everyone involved with this project. Their contributions of time, facilities, supplies, and knowledge provided me with the support needed to complete this thesis.
Special thanks to Dr. Bob Schwarzkopf, thesis chairperson, he provided me with guidance while leaving me responsible for decisions concerning the thesis.
Dr. Robert McKenzie put more time and energy into this thesis than could be expected of anyone. The donation of his time and skill in performing the muscle biopsies helped make this thesis possible.
Dr. Jack Catlin was always available for advice and supplies, both of which I was in constant need.
Thanks to Dr. Sam Rogers for the use of his lab, and to Dr. Jack Robbins for the use of his marbles.
The five subjects gave all that was asked and more in the form of blood, sweat, and muscle. This was what the study was all about.
Finally I would like to thank my wife Libbi for her constant encouragement, and prodding to get me out of bed and to the lab at 5:30am.
# TABLE OF CONTENTS
| Section | Page |
|----------------------------------------------|------|
| LIST OF TABLES | vii |
| ABSTRACT | viii |
| 1. INTRODUCTION | 1 |
| Problem Statement | 2 |
| Hypothesis | 2 |
| Delimitations | 2 |
| Limitations | 3 |
| Definitions | 4 |
| 2. REVIEW OF RELATED LITERATURE | 6 |
| 3. METHODOLOGY | 19 |
| Research Method | 19 |
| Sample | 19 |
| Testing Battery | 21 |
| Maximal Oxygen Consumption | 22 |
| Anthropometric Measurements | 23 |
| Blood Samples | 23 |
| Muscle Glycogen Levels | 24 |
| Respiratory Gas Analysis | 25 |
| Heart Rate | 25 |
| Relative Perceived Exertion | 26 |
| Control and Experimental Tests | 26 |
| Statistical Treatment of Data | 29 |
| 4. RESULTS | 30 |
| HR, VO2, RPE, and RQ | 30 |
| Blood Glucose | 35 |
| Blood FFA | 36 |
| Muscle Glycogen | 37 |
| Section | Page |
|------------------------------------------------------------------------|------|
| 5. DISCUSSION | |
| Caffeine | 39 |
| Fructose | 41 |
| Glucose | 43 |
| Caffeine, Fructose, Glucose | 44 |
| 6. SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS | |
| Summary | 45 |
| Conclusions | 47 |
| Recommendations | 48 |
| BIBLIOGRAPHY | 50 |
| APPENDICES | |
| Appendix A | 58 |
| Subject Questionnaire | 59 |
| Human Subject Consent Form | 61 |
| Gas Analysis Calculations Form | 62 |
| Appendix B - Borg Scale | 63 |
| Appendix C - Time Schedule | 65 |
LIST OF TABLES
| Table | Title | Page |
|-------|--------------------------------------------|------|
| 1 | PHYSICAL CHARACTERISTICS OF SUBJECTS | 21 |
| 2 | HEART RATE (HR) | 31 |
| 3 | OXYGEN UPTAKE (VO2) | 32 |
| 4 | RELATIVE PERCEIVED EXERTION (RPE) | 33 |
| 5 | RESPIRATORY QUOTIENT (RQ) | 34 |
| 6 | BLOOD GLUCOSE | 35 |
| 7 | BLOOD FFA | 36 |
| 8 | MUSCLE GLYCOGEN | 37 |
CHAPTER 1
INTRODUCTION
Endurance athletes and coaches at all levels of competition are interested in increasing performance. One factor limiting endurance performance is muscle glycogen depletion. In an attempt to increase performance and decrease glycogen usage, some athletes ingest carbohydrates and/or caffeine before and/or during exercise. Uninformed or misinformed ingestion of these substances could result in increased endurance performance and glycogen sparing (9,21,35,36,44), increased glycogen depletion (28,41,44), physical discomfort (6,54,41,44), physical harm (6,54), or even death (6). To gain optimal benefit and reduce the risk of complications, the relative effects of these substances should be known before use in competition. This study will attempt to clarify these effects.
Problem Statement
What are the relative effects of: 1) fructose ingestion before exercise, 2) glucose ingestion during exercise, 3) caffeine ingestion before exercise, 4) fructose/caffeine ingestion before exercise plus glucose ingestion during exercise, and 5) control ingestion on muscle glycogen usage and other measured parameters?
Hypothesis
There will be no significant difference in muscle glycogen utilization or the other measured parameters while using the tested dietary manipulations.
Delimitations
This study was delimited to a group of five selected competitive cyclists living in the Bozeman area, 1984-1985.
Limitations
1. One subject was female and four subjects were male.
2. Subjects were highly trained competitive cyclists.
3. A Schwinn Bio-Dyne bicycle ergometer provided the exercise work load.
4. All tested substances and solutions were administered according to body weight.
5. Only one concentration of each tested substance was used.
6. Exercise started sixty minutes following the initial ingestion of solution.
7. The blood sample taken following exercise was taken within three minutes after exercise stopped.
8. The muscle biopsy taken following the ninety minute exercise period was taken within ten minutes after exercise stopped.
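Since all substances were administered according to body weight (limitation 4), the per-subject quantities are simple to compute. A small sketch, using an assumed 70 kg rider for illustration (the study dosed each subject according to his or her own weight):

```python
# Sketch: per-subject doses as administered in the trials. The 70 kg
# body weight below is an assumed example, not a subject from the study.
def doses(body_mass_kg: float) -> dict:
    return {
        "caffeine_mg": 5.0 * body_mass_kg,  # CAF: 5 mg/kg, one hour pre-exercise
        "fructose_g":  1.0 * body_mass_kg,  # FRU: 1 g/kg, one hour pre-exercise
        "glucose_g":   1.0 * body_mass_kg,  # GLU: 1 g/kg, during exercise
    }

print(doses(70.0))  # {'caffeine_mg': 350.0, 'fructose_g': 70.0, 'glucose_g': 70.0}
```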
Definitions
Caffeine: a bitter alkaloid, usually obtained from coffee or tea; used chiefly as a stimulant and diuretic (50).
Carbohydrate: a food substance that includes various sugars and starches and is found in the body in the form of glucose and glycogen (57).
Endurance: the ability to resist fatigue. Includes muscular endurance, which is local or specific endurance, and cardiovascular endurance, which is a more general, total body endurance (57).
Ergometer: a device for exercising the subject in a manner in which the physical work performed can be measured, e.g., bicycle ergometer (57).
Fatigue: inability to continue work, due to any one or a combination of factors (57).
Fatty Acid: along with glycerol, the product of the breakdown of fats (57).
Fructose: a monosaccharide, sometimes known as fruit sugar (56).
Glucose: a simple sugar which is transported in the blood and metabolized in the tissues (57).
Glycogen: the storage form of carbohydrates (CHO) in the body, found predominately in the muscles and liver (57).
Highly trained competitive cyclist: (for the purpose of this study) males with a VO2max greater than 60 ml/kg x min or females with a VO2max greater than 50 ml/kg x min, and having competitive experience at the state level.
Hypoglycemia: abnormally low blood glucose, or sugar, levels (57).
Maximal oxygen uptake (VO2max): the best physiological index of total body endurance. Also referred to as aerobic power, maximal oxygen intake, maximal oxygen consumption, and cardiovascular endurance capacity (57).
Respiration: the exchange of gases at both the level of the lung and tissue (57).
Respiratory exchange ratio (R or RER): the ratio of carbon dioxide (CO2) expired to oxygen consumed, at the level of the lungs (57).
Respiratory quotient (RQ): the ratio of the CO2 produced in the tissues to the oxygen consumed by the tissues (57).
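The last two definitions describe the same ratio measured at different sites (the lungs versus the tissues); a minimal sketch of the calculation, with illustrative gas volumes rather than data from this study:

```python
# Sketch: the ratio VCO2/VO2, called RER when measured at the lungs
# and RQ when referred to the tissues. Numbers are illustrative only.
def rer(vco2_l_min: float, vo2_l_min: float) -> float:
    """~0.70 for pure fat oxidation, ~1.00 for pure CHO oxidation."""
    return vco2_l_min / vo2_l_min

print(round(rer(1.71, 1.90), 2))  # 0.9
```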
CHAPTER 2
REVIEW of RELATED LITERATURE
Competition has probably been around as long as mankind itself. With competition man has wanted to go faster and farther. This has invited the attention of scientists for many years concerning the effects of diet and other factors on athletic performance. Over one hundred years ago vonLeibig (45) suggested that protein might be the fuel for muscular work. However, Cathcart (11) showed urinary excretion of nitrogen was not affected by prolonged strenuous exercise. This has been confirmed repeatedly (12,30) indicating that protein was not used as a fuel for muscular work to any great extent.
Krogh and Lindhard (42), using respiratory quotient (RQ) measurements, found that fat and carbohydrate (CHO) were used as energy sources for muscular work. The proportion of these substrates depended on both the intensity and duration of exercise (4,13,52). The importance of CHO in improving work capacity was demonstrated by Dill, Edwards, and Talbott. They administered CHO to dogs, resulting in considerably increased work times before the animals were exhausted. By varying the diet prior to work, Christensen and Hansen (15) produced marked improvements in work times, a CHO-rich diet giving work times two to three times longer than a high-fat diet.
More recent studies using direct methods, involving needle biopsy of the working muscle, have shown that glycogen stores decreased during work (13, 17, 38, 52) and were almost emptied during exhaustive work (38). Further, the initial glycogen content was closely correlated to work time to exhaustion (38), the more glycogen stored in the muscle the longer the time to exhaustion. This fact prompted the practice of CHO loading, which induced elevated glycogen levels in the tissues and enhanced performance time (38). Foster, et al. (28) reported that endurance performance could be determined by the rate of muscle glycogen utilization along with the preexercise muscle glycogen content.
To further increase work time, athletes have sought dietary manipulations that could decrease the rate of muscle glycogen utilization. Through such manipulations, athletes have attempted to spare glycogen by supplementation with exogenous CHO, and by increasing free fatty acid (FFA) availability and metabolism with caffeine. Manipulations of diet have included: 1) CHO ingestion before exercise, 2) CHO ingestion during exercise, 3) caffeine ingestion before and/or during exercise and, 4) combinations of the preceding.
The effects of CHO ingestion before exercise on endurance performance and its relationship to glycogen sparing are varied. In an attempt to provide CHO to contracting muscles, it has been observed that glucose ingestion before exercise resulted in increased insulin secretion. This hyperinsulinemia was followed by an exercise induced rapid decrease in blood glucose concentration and greater depletion of muscle glycogen (18,44). Several studies that have compared fructose ingestion with that of glucose or sucrose have noted significantly lower elevations in plasma glucose and little or no elevation in insulin after fructose ingestion. The normal hypoglycemic effect after glucose ingestion was also avoided with fructose (5,23,41,44). Because of these findings, it was proposed that fructose ingestion before submaximal exercise could have a glycogen sparing effect (44).
To examine the effects of various carbohydrates on the metabolic and hormonal response to exercise, Koivisto, et al. (41) administered 75 grams of glucose, fructose, or placebo to nine well trained males (VO2max = 60 ml/kg x min) forty-five minutes before cycle ergometer exercise performed at 75% maximal oxygen uptake (VO2max) for thirty minutes. After glucose ingestion, the rise in plasma glucose was three times greater (P<0.005) and in plasma insulin 2.5 times greater (P<0.01) than after fructose. During exercise, plasma glucose fell from 5.3 to 2.5 millimole/liter (mM/l) after glucose administration (P<0.001) and from 4.5 to 3.9 mM/l after fructose (P<0.05). The fall in plasma glucose was closely related to the preexercise levels of plasma insulin (r=0.82, P<0.001) and glucose (r=0.81, P<0.001). Both glucose and fructose ingestion decreased FFA levels by 40-50% (P<0.005), and during exercise they remained 30-40% lower after CHO than after placebo administration (P<0.02). The researchers suggested that glucose ingestion prior to exercise resulted in hypoglycemia during vigorous exercise; this rapid fall in plasma glucose was mediated, at least in part, by hyperinsulinemia. Fructose ingestion, in contrast, was associated with a modest rise in plasma insulin and did not result in hypoglycemia during exercise.
Levine, et al. (44) examined substrate utilization after fructose, glucose, or water ingestion in four male and four female subjects during three treadmill
runs at approximately 75% VO2max. Each test was preceded by three days of a CHO rich diet. The runs were thirty minutes long and at least one week apart. Exercise began forty-five minutes after ingestion of 300 milliliters (ml) of randomly assigned 75 gram fructose(F), 75 gram glucose(G), or control(C) solution. Muscle glycogen depletion determined by preexercise and postexercise muscle biopsies (gastrocnemius muscle) was significantly (P<0.05) less during the F trial than during the C or G trials. Venous blood samples revealed significant increases in serum glucose (P<0.05) and insulin (P<0.01) within forty-five minutes after the G drink, followed by a significant decrease (P<0.05) in serum glucose during the first fifteen minutes of exercise. These changes were not observed in the C or F trials. R.Q. was higher (P<0.05) during the G than the C or F trials for the first five minutes of exercise and lower (P<0.05) during the C trial compared with the G or F trials for the last fifteen minutes of exercise. The higher R.Q. values indicated an increased CHO utilization. Using these results, Levine suggested that fructose ingestion before thirty minutes of submaximal exercise maintains stable blood glucose and insulin concentrations, which may lead to the observed sparing of muscle glycogen.
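The link Levine draws between higher R.Q. values and increased CHO utilization can be made quantitative with the standard nonprotein-RQ interpolation; this formula is a textbook approximation, not one taken from the thesis itself:

```python
# Sketch: fraction of oxidized energy coming from CHO, via linear
# interpolation between the nonprotein RQ endpoints (fat = 0.70,
# CHO = 1.00). A textbook approximation, not from this thesis.
def cho_fraction(rq: float) -> float:
    return max(0.0, min(1.0, (rq - 0.70) / 0.30))

print(round(cho_fraction(0.91), 2))  # 0.7
```

By this approximation, an R.Q. of 0.91 implies roughly 70% of the energy is coming from carbohydrate, so even small R.Q. differences between trials reflect meaningful shifts in fuel mix.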
Replenishment of fluid and nutrients during exercise has been reported to be mainly limited by gastric emptying, which was controlled by many independent factors (16,27). In order to further analyze some of these factors, Costill, et al. (19) undertook a series of studies to determine the effects of solute volume (200,400,600,800 ml), temperature (5,15,25,35C), and glucose concentration (139,278,556,834 mM) on the rate of gastric emptying. Additional observations were made to assess the effects of exercise intensity on the rate at which 400 ml of a glucose (139 mM) solution was removed from the stomach. At rest the addition of even small amounts of glucose (>139 mM) induced a marked reduction in the rate of gastric emptying. The volume of fluid remaining in the stomach fifteen minutes after ingestion was somewhat greater when the solution was thirty-five degrees centigrade as opposed to colder fluids (P<0.05). The rate of gastric emptying increased in proportion to the volume ingested, with a maximal rate of emptying attained at a volume of 600 ml. In general, exercise had no effect on gastric emptying until the working intensity exceeded 70% VO2 max. Costill emphasized the importance of minimizing the glucose content of solutions ingested in order to obtain an optimal rate
of fluid replacement, since in combination with high intensity exercise even small amounts of carbohydrate could slow gastric emptying.
By the use of naturally enriched carbon-13 glucose as a metabolic tracer, Pirnay, et al. (48) investigated the utilization of exogenous glucose ingested during muscular exercise. Four subjects walked on an uphill treadmill for two hours, and three others for four hours. The energy expenditure, close to 50% of the individual VO2max, varied from 1.9 to 2.1 liters of oxygen/minute, while the heart rate ranged between 142 and 165 beats/minute. The subjects, who were on a mixed diet and had fasted overnight, were given 100 grams of naturally labeled glucose. Following this intake, the expired CO2 became rapidly enriched in carbon-13. The increase was observed as early as fifteen minutes after the oral intake, and reached a maximum within one to two hours, when utilization of exogenous glucose varied between 500 and 650 mg/min, representing as much as 55% of the carbohydrate metabolism and 24% of the total energy expenditure.
Oral glucose administration was studied by Costill, et al. (18) at rest and during prolonged physical activity (60-72% VO2max). Seven male subjects ingested an aqueous solution of glucose (31.8 grams in
300 ml) that was tagged with 40 microcuries of uniformly labeled glucose carbon-14 (14C). In the exercise test the men ran or cycled for thirty minutes before and sixty minutes after glucose feeding. In both the exercise and resting experiments venous blood samples were taken at: 0, 3, 5, 7, 9, 11, 13, 15, 20, 30, 40, 50, 60, 90, and 180 minutes after the glucose administration. Expired air was sampled and analyzed for O2, CO2, and 14CO2. Compared to the control studies (resting) the serum glucose-14C and expired 14CO2 data suggested that exercise had little or no effect on intestinal glucose absorption. From thirty to seventy-five minutes after the glucose ingestion, the liver appeared to decrease its contribution to blood glucose. The initial appearance of 14C in the serum and expired air occurred five to seven minutes after the glucose feeding both at rest and during exercise. The glucose 14C was utilized 6.5 times faster during exercise than at rest. During the final 20 minutes of exercise, the ingested glucose was found to comprise about 5% of the total carbohydrate oxidation.
Brooke, et al. (9) conducted a study to determine the effects of normal and glucose work diets on the performance of racing cyclists. After preliminary trials, eight male racing cyclists performed rides at
loads eliciting approximately 67% VO2max in the laboratory under four dietary treatments during work: (1) a glucose syrup drink, (2) an isocaloric "normal" feeding containing rice pudding and sucrose, (3) a low energy (< 4 kcal) isovolumetric drink, and (4) no food. Both high energy diets delayed the onset of exhaustion and provided more combusted carbohydrate, as shown by elevated RQ and blood glucose levels. More efficient work was performed when the glucose syrup drink was taken in comparison to the other diets. The researchers concluded that CHO ingestion during prolonged exercise increased performance, as measured by time to exhaustion.
Coyle, et al. (22) conducted a study to determine whether CHO feeding during exercise could delay the development of fatigue. Ten trained cyclists performed two bicycle ergometer exercise tests one week apart. The initial work rate required 74% of VO2max. The point of fatigue was defined as the time at which the exercise intensity the subjects could maintain decreased below their initial work rate by 10% of VO2max. During one exercise test the subjects were fed a glucose polymer solution beginning twenty minutes after the onset of exercise; during the other test they were given a placebo. Blood glucose concentration was
20-40% higher during the exercise after glucose ingestion than during the placebo trial. The RQ was unchanged by the glucose feeding. Fatigue was postponed by carbohydrate feeding in seven of the ten subjects. This effect appeared to be mediated by prevention of hypoglycemia in only two subjects. The exercise time to fatigue for the ten subjects averaged 134 minutes without and 157 minutes with glucose feeding. The increased time to fatigue with glucose feeding was significant (P<0.01). From these results Coyle concluded that CHO feeding during prolonged strenuous exercise could delay fatigue.
Observations with rats and humans have demonstrated a sparing of muscle glycogen and an enhanced capacity for endurance performance when FFA were elevated (20,32,51). In each of these studies the increased endurance performance and diminished rate of CHO metabolism followed an injection of heparin, which stimulated an increase in plasma FFA with a subsequent increase in fatty acid oxidation. Since caffeine was known to stimulate the mobilization of FFA (2), studies were undertaken to determine the effects of caffeine ingestion on endurance performance and metabolism (21,35).
In an effort to assess the effects of caffeine ingestion on metabolism and performance during prolonged exercise, Costill, et al. (21) had nine competitive cyclists (two females and seven males) exercise to exhaustion on a bicycle ergometer at 80% VO2max. One trial was performed one hour after ingesting decaffeinated coffee (D), while a second trial (C) required that each subject consume coffee containing 330 milligrams of caffeine sixty minutes before the exercise. Following the ingestion of caffeine (C), the subjects were able to perform an average of 90.2 minutes of cycling as compared to an average of 75.5 minutes in trial D. Measurements of plasma free fatty acids, plasma glycerol, and R.Q. were reported by the researchers as evidence of a greater rate of lipid metabolism during the caffeine trial as compared to the decaffeinated exercise treatment. Calculations of CHO metabolism from RQ data revealed that the subjects oxidized roughly 240 grams of CHO in both trials. Fat oxidation, however, was significantly higher (P<0.05) during the C trial (118 grams or 1.31 grams/minute) than in the D trial (57 grams or 0.75 grams/minute). On the average the participants rated (Perceived Exertion Scale) their effort during the C trial to be significantly (P<0.05) easier than the
demands of the D treatment. From their findings the researchers concluded that the enhanced performance observed in the C trial was likely the result of the combined effects of caffeine on lipolysis and its positive influence on nerve impulse transmission.
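Substrate oxidation figures of this kind are typically derived stoichiometrically from the measured VO2 and VCO2. As a sketch only — the Frayn-type equations below are one common approximation, not necessarily the calculation used in the cited study, and the gas-exchange values are illustrative:

```python
def substrate_oxidation(vo2_l_min, vco2_l_min):
    """Estimate whole-body substrate oxidation (g/min) from gas exchange.

    Uses Frayn-type stoichiometric equations, neglecting protein
    oxidation. VO2 and VCO2 are in liters per minute (STPD).
    """
    cho = 4.55 * vco2_l_min - 3.21 * vo2_l_min   # carbohydrate, g/min
    fat = 1.67 * vo2_l_min - 1.67 * vco2_l_min   # fat, g/min
    return cho, fat

# Illustrative values only: VO2 = 3.0 L/min at RQ = 0.90 (VCO2 = 2.7 L/min)
cho, fat = substrate_oxidation(3.0, 2.7)
```

At an RQ of 0.90 and a VO2 of 3.0 L/min this yields roughly 2.7 g/min of carbohydrate, i.e. on the order of 240 g over a 90-minute ride — consistent in magnitude with the figure quoted above.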
Nine trained cyclists were studied by Ivy, et al. (35) to determine the effects of caffeine (CAF) and glucose polymer (GP) feedings on work production (kpm) during two hours of isokinetic cycling exercise (80 rpm). Ingestion of 250 milligrams of CAF sixty minutes prior to the ride was followed by ingestion of an additional 250 milligrams in divided doses fed at fifteen minute intervals over the first ninety minutes of the exercise. This treatment significantly increased work production by 7.4% and VO2 by 7.3% as compared to control (C) while the subjects' perception of exertion remained unchanged. Ingestion of approximately ninety grams of GP during the first ninety minutes (12.8 g/15min) of the exercise had no effect on total work production or VO2. It was, however, effective in reducing the rate of fatigue over the last thirty minutes of cycling. Although GP maintained blood glucose and insulin levels (P<0.05) above those of the C and CAF trials, total CHO utilization did not differ between treatments. During
the last seventy minutes of the CAF trial, however, fat oxidation was elevated 31% and appeared to provide the substrate needed for the increased work production during this period of exercise. The researchers reported an enhanced rate of lipid catabolism and work production following the ingestion of caffeine.
In summary, the experimental findings of previous researchers indicate that glycogen sparing and/or endurance performance may be increased through the use of fructose ingestion before exercise (44), glucose ingestion during exercise (9,22,35,36), and caffeine ingestion before exercise (21,35). More testing is needed to determine the relative effectiveness of these substances and their combined effects on glycogen sparing and endurance performance.
CHAPTER 3
METHODOLOGY
**Research Method**
The experimental research design used for this study was a crossover single-blind protocol with five subjects, each acting as their own control. The study was designed to measure the relative effects of ingesting: 1) a fructose solution before exercise (FRU), 2) a glucose solution during exercise (GLU), 3) a caffeine solution before exercise (CAF), 4) a fructose/caffeine solution before plus a glucose solution during exercise (CFG), and 5) a control solution (CON) on muscle glycogen utilization, blood glucose and free fatty acid levels, and other measured parameters.
**Sample**
The sample consisted of one female and four male competitive cyclists who volunteered from the Bozeman area. All subjects completed a basic personal data questionnaire before participation in the study. A sample subject questionnaire is found in Appendix A. All subjects had competition experience at the state level. Physical characteristics (VO2max, age, height, weight, sex, and body composition) of each subject and of the total sample are included in Table 1. There were no reported cases of diabetes mellitus in the subjects' immediate families, and none of the subjects reported any sugar tolerance problems. None of the subjects reported taking any prescription drugs during the test period. Training levels remained constant throughout the test period. All subjects were informed of the nature of the study, including the number and type of blood and muscle biopsy samples to be taken, the intensity required for each test ride, the time requirements, and the nature of the various ingested substances, before giving their verbal and written consent to participate. A sample consent form is found in Appendix A. All subjects were told they could withdraw from the study at any time.
Table 1. Physical Characteristics of Subjects.
| Subject | Sex | Age (yr) | VO2max (ml/kg x min) | % Body Fat | Height (cm) | Weight (kg) |
|---------|-----|----------|----------------------|------------|-------------|-------------|
| JW | M | 24 | 73.3 | 4.1 | 178 | 75.5 |
| BG | M | 35 | 68.7 | 5.8 | 170 | 70.9 |
| BO | F | 25 | 57.8 | 12.5 | 166 | 60.0 |
| SJ | M | 29 | 63.5 | 5.6 | 190 | 79.3 |
| RC | M | 24 | 62.8 | 6.6 | 185 | 77.3 |
| MEAN | | 27.4 | 65.2 | 6.9 | 178 | 72.6 |
| SE (+ or -) | | (2.1) | (2.7) | (1.4) | (4.5) | (3.4) |
**Testing Battery**
1. Maximal oxygen consumption (VO2max).
2. Anthropometric measurements.
A. Height (HT)
B. Body weight (BW)
C. Abdominal skinfold (AB), (males only).
D. Chest skinfold (CH), (males only).
E. Suprailiac skinfold (SI), (female only).
F. Thigh skinfold (TH).
G. Triceps skinfold (TR), (female only).
3. Blood samples.
A. Blood glucose levels (BGL).
B. Blood fatty acid levels (BFA).
4. Muscle glycogen levels.
5. Respiratory gas analysis.
A. Percent VO2max (%VO2max).
B. Respiratory Exchange Ratio (RER).
6. Heart Rate (HR).
7. Relative perceived exertion (RPE).
8. Control and experimental tests.
A. Control (CON).
B. Fructose ingestion before exercise (FRU).
C. Glucose ingestion during exercise (GLU).
D. Caffeine ingestion before exercise (CAF).
E. Caffeine/fructose ingestion before exercise plus glucose ingestion during exercise (CFG).
**Maximal Oxygen Consumption**
VO2max was measured using a standardized open circuit procedure patterned after Fox, et al. (29) on a Schwinn Bio-Dyne bicycle ergometer with a pedal frequency of 90 rpm. Gases were collected in a 350 liter Collins Tissot and analyzed with Beckman E2 (oxygen) and LB2 (carbon dioxide) analyzers calibrated with standard gases prior to each test. The criteria for a maximal value were a plateau or decline in oxygen consumption at a subsequent higher-intensity exercise bout and an RER greater than 1.0. The peak value was used when the maximal criteria were not attained.
Standard calculation procedures were followed for computation of VO2max (ml/kg x min), corrected to standard temperature and pressure, dry (STPD) (46). A sample calculation form is found in Appendix A.
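The STPD correction itself reduces each collected gas volume to 0 C, 760 mmHg, and zero water vapor. A minimal sketch of that step — the barometric pressure, collection temperature, and water vapor pressure below are illustrative assumptions, not values from this study:

```python
def vol_stpd(vol_atps_l, barometric_mmhg, temp_c, ph2o_mmhg):
    """Correct a gas volume from ATPS (ambient temperature and
    pressure, saturated) to STPD (0 C, 760 mmHg, dry)."""
    return (vol_atps_l
            * (barometric_mmhg - ph2o_mmhg) / 760.0
            * 273.0 / (273.0 + temp_c))

# Hypothetical collection: 100 L at 640 mmHg, 22 C, saturated
# (saturated water vapor pressure at 22 C is about 19.8 mmHg)
v = vol_stpd(100.0, 640.0, 22.0, 19.8)  # about 75.5 L STPD
```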
**Anthropometric Measurements**
Height and body weight were determined using a Detecto-Medic. Height was measured with the subject barefoot and recorded in centimeters (cm). Body weight was measured with the subject wearing cycling shorts and a T-shirt, and recorded in kilograms (kg).
All skinfolds were measured by the same trained technologist using Lange skinfold calipers and recorded in millimeters (mm). The sites described by Behnke, et al. (1) were used to collect the skinfold data. Three measurements were taken at each site to the nearest 0.5 mm. The average of the three scores was recorded unless one deviated by more than 1.0 mm, in which case the site was remeasured. Percent body fat was then determined using the tables of Pollock (49).
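The three-reading rule above can be expressed as a small check. This sketch assumes the 1.0 mm tolerance is measured against the mean of the three readings, which the text does not state explicitly:

```python
def skinfold_value(readings_mm):
    """Return the recorded value for one skinfold site: the mean of
    three caliper readings, or None when any reading deviates from
    the mean by more than 1.0 mm and the site must be remeasured."""
    mean = sum(readings_mm) / len(readings_mm)
    if any(abs(r - mean) > 1.0 for r in readings_mm):
        return None  # remeasure the site
    return mean

v = skinfold_value([5.0, 5.5, 5.0])    # accepted: all within 1.0 mm of the mean
bad = skinfold_value([5.0, 7.5, 5.0])  # rejected: 7.5 deviates by more than 1.0 mm
```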
**Blood Samples**
All blood samples were taken by the same medical technologist. Samples were drawn before ingestion of the initial solution (BIB), before exercise began (BEB), and following exercise (FEB). They were allowed to clot for thirty minutes, centrifuged for ten minutes, and separated. The serum was then frozen in liquid nitrogen until analysis. The serum was analyzed, by a trained laboratory technologist, for glucose (BGL) using a hexokinase method (53) and for free fatty acids (BFA) using a colorimetric method (43). BGL was reported as milligram percent (mg%) and BFA as micromoles per liter (umol/l).
**Muscle Glycogen Levels**
Muscle glycogen levels were determined from muscle samples obtained by biopsy. The biopsies were performed by the same medical doctor. Biopsies were performed following the BIB (BIM) and FEB (FEM) blood samples. The "suction" method described by Evans, et al. (26), using a 5mm biopsy needle, was used to perform all biopsies. All samples were taken from the vastus lateralis. Samples were frozen in liquid nitrogen within thirty seconds of removal and stored in liquid nitrogen until analyzed.
Analysis of muscle glycogen levels was performed by the same trained laboratory technologist using an acid hydrolysis method described by Passonneau, et al. (47). Adipose tissue and blood were removed from the sample in a cold chamber (-40C), and the sample was
then weighed on a torsion balance. After the muscle was hydrolyzed and centrifuged, and the supernatant fluid neutralized to pH 6-7 with KOH, the glycogen concentration was measured as glucose using a hexokinase technique (53). Glucose concentration was reported as micromoles per gram wet muscle weight (umol/g w.w.).
**Respiratory Gas Analysis**
Respiratory gases were collected and analyzed every ten minutes during exercise using the same procedure used for VO2max. Standard calculation procedures were used to calculate %VO2max and RER (46). A sample calculation form is found in Appendix A.
**Heart Rate**
Heart rate (HR) was monitored every ten minutes during exercise following the initial warmup using a Schwinn pulse monitor.
**Relative Perceived Exertion**
A perceived exertion scale developed by Borg (8) was used to rate each subject's relative perceived exertion (RPE) every ten minutes during exercise following the initial warm up. A sample is found in Appendix B.
**Control and Experimental Tests**
Each subject performed the control ride (CON), and the four experimental rides (FRU, CAF, GLU, and CFG) in a randomized order. A standardized procedure was followed for each test with a minimum of seven days and a maximum of fourteen days between tests. Preconditions necessary for controlled testing were followed by each subject prior to each test.
Each subject reported to the laboratory, between 6:30am and 8:00am, following a normal night's sleep and eight to twelve hours postabsorptive. Each subject was given recommendations concerning the quantity and content of food intake for the two days prior to testing. Balanced meals high in CHO and of slightly greater quantity than normal (for the subject) were recommended. Instructions were given to abstain from the ingestion of any foods or liquids (other than water) during the eight hours preceding each test.
Training was restricted the two days preceding each test. Two days prior to each test an easy workout was allowed. The subjects abstained from training the day before testing. This procedure of diet and exercise was an attempt to prevent low pretest muscle glycogen levels.
A blood sample (BIB) and muscle biopsy (BIM) sample were taken from each subject when he/she reported to the laboratory, before ingestion of the initial solution; a time schedule can be found in Appendix C. All solutions were administered as milliliters of solution per kilogram body weight (ml/kg), glucose and fructose were administered as grams of substance per kilogram body weight (g/kg), and caffeine was administered as milligrams of caffeine per kilogram body weight (mg/kg). Following the initial blood and biopsy samples, each subject was given a grape Kool-Aid solution (10 ml/kg) containing either: 1) fructose (1 g/kg) (FRU), 2) Nutra Sweet (CON and GLU), 3) caffeine (5 mg/kg) (CAF), or 4) fructose (1 g/kg) and caffeine (5 mg/kg) (CFG), with instructions to ingest the solution within five minutes. All solutions were served cold. The subject was not informed of the contents of the solution. The subject was then placed in a quiet area for forty-five minutes. At this time
he/she returned to the laboratory and the second blood sample (BEB) was taken.
Exercise started sixty minutes following ingestion. A five minute warm up at approximately 50% VO2max and ninety rpm was followed by ninety minutes of exercise at approximately 65%-70% VO2max and ninety rpm. Respiratory gases were monitored every ten minutes during exercise. Heart rate and RPE were monitored every ten minutes during exercise, following the initial warm up. The subject was given a grape Kool-Aid solution (4 ml/kg) containing Nutra Sweet (CON, FRU, CAF) or glucose (0.25 g/kg) (GLU, CFG), with instructions to ingest the solution within two minutes. This solution was administered fifteen minutes, thirty minutes, forty-five minutes, and sixty minutes into exercise. Total fluid intake during exercise was twelve ml/kg and total glucose intake (GLU, CFG) was one g/kg. The subject then dismounted the Bio-Dyne, and a blood sample (FEB) and muscle biopsy (FEM) were taken.
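The per-kilogram dosing scheme above scales every dose to body mass. A sketch for a hypothetical 70 kg rider on the CFG trial, using only the ml/kg, g/kg, and mg/kg values quoted above:

```python
def cfg_doses(body_mass_kg):
    """Doses for the CFG trial under the per-kilogram scheme above."""
    return {
        "pre_drink_ml":          10.0 * body_mass_kg,  # 10 ml/kg grape drink
        "fructose_g":             1.0 * body_mass_kg,  # 1 g/kg before exercise
        "caffeine_mg":            5.0 * body_mass_kg,  # 5 mg/kg before exercise
        "glucose_per_feeding_g": 0.25 * body_mass_kg,  # fed at 15, 30, 45, 60 min
        "glucose_total_g":  4 * 0.25 * body_mass_kg,   # 1 g/kg over the ride
    }

d = cfg_doses(70.0)
# -> 700 ml pre-exercise drink, 70 g fructose, 350 mg caffeine,
#    17.5 g glucose per feeding (70 g total)
```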
**Statistical Treatment of Data**
An analysis of variance with multiple factors (MSUSTAT AVMF) was used to assess differences, with each subject receiving all combinations of "factors". Standard error (SE + or -) was used to indicate variation in individual means. Multiple comparisons of factor means were performed using the Least Significant Difference by Student's t to locate differences between means, and a probability level of 0.05 was chosen as the criterion for acceptance of statistical significance.
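With equal replication, the least-significant-difference criterion reduces to a single threshold against which every pairwise difference of means is compared. A sketch — the critical t value, mean square error, and group size below are illustrative, not values from this analysis:

```python
import math

def lsd(t_crit, mse, n_per_mean):
    """Least significant difference for comparing two factor-level
    means with n observations each: pairs of means differing by more
    than this threshold are declared significant."""
    return t_crit * math.sqrt(2.0 * mse / n_per_mean)

# Hypothetical: t(0.975, 16 df) ~= 2.120, MSE = 50.0, n = 5 subjects
threshold = lsd(2.120, 50.0, 5)  # about 9.48
```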
CHAPTER 4
RESULTS
The results will be presented in four sections. The first section will include HR, VO2, RPE, and RQ. Blood glucose, blood FFA, and muscle glycogen follow in separate sections.
**HR, VO2, RPE, and RQ**
No significant differences were observed between trials for heart rate (Table 2) or oxygen uptake (Table 3). There was a significant difference (P < 0.05) in RPE after thirty minutes of exercise between the CAF and FRU trials (Table 4). Significant differences (P < 0.05) in RQ were observed after twenty minutes of exercise between the FRU trial and the CON, GLU, and CAF trials (Table 5).
Table 2. Heart rate (HR) (SE + or -) in beats per minute versus time.
| Time (min.) | CON | CAF | CFG | FRU | GLU |
|-------------|-------|-------|-------|-------|-------|
| 15 | 147.8 | 143.0 | 151.4 | 151.0 | 145.2 |
| | (6.9) | (6.0) | (5.1) | (4.8) | (3.9) |
| 25 | 150.6 | 142.6 | 153.2 | 151.4 | 148.2 |
| | (7.3) | (6.4) | (5.2) | (5.0) | (4.4) |
| 35 | 148.4 | 144.2 | 157.2 | 152.2 | 149.8 |
| | (7.7) | (7.1) | (4.8) | (4.8) | (4.7) |
| 45 | 150.4 | 146.0 | 156.6 | 151.4 | 153.0 |
| | (7.8) | (7.1) | (4.4) | (5.2) | (4.3) |
| 55 | 151.2 | 146.8 | 158.4 | 151.2 | 153.4 |
| | (7.5) | (6.8) | (4.3) | (4.8) | (4.0) |
| 65 | 150.4 | 146.0 | 157.4 | 148.6 | 154.6 |
| | (7.8) | (6.7) | (4.4) | (5.4) | (4.0) |
| 75 | 150.8 | 145.8 | 157.6 | 150.0 | 154.8 |
| | (8.5) | (6.3) | (4.5) | (5.1) | (3.4) |
| 85 | 152.0 | 148.0 | 156.2 | 150.2 | 155.5 |
| | (7.9) | (6.2) | (5.5) | (5.4) | (3.8) |
| 95 | 154.4 | 151.8 | 155.8 | 153.4 | 155.2 |
| | (8.2) | (6.8) | (5.5) | (5.8) | (3.3) |
Table 3. Oxygen uptake (VO2) (SE + or -) in liters per minute versus time.
| Time (min.) | CON | CAF | CFG | FRU | GLU |
|------------|-------|-------|-------|-------|-------|
| 10 | 2.996 | 3.032 | 3.026 | 3.058 | 2.874 |
| | (0.278)| (0.284)| (0.269)| (0.329)| (0.260)|
| 20 | 3.128 | 3.198 | 3.114 | 2.996 | 2.994 |
| | (0.303)| (0.309)| (0.257)| (0.306)| (0.262)|
| 30 | 3.112 | 3.092 | 3.172 | 3.110 | 3.076 |
| | (0.278)| (0.251)| (0.259)| (0.261)| (0.280)|
| 40 | 3.156 | 3.178 | 3.114 | 3.110 | 3.116 |
| | (0.302)| (0.318)| (0.259)| (0.281)| (0.279)|
| 50 | 3.138 | 3.234 | 3.144 | 3.150 | 3.088 |
| | (0.293)| (0.280)| (0.255)| (0.272)| (0.277)|
| 60 | 3.148 | 3.200 | 3.160 | 3.176 | 3.142 |
| | (0.305)| (0.272)| (0.265)| (0.279)| (0.267)|
| 70 | 3.148 | 3.282 | 3.220 | 3.152 | 3.150 |
| | (0.310)| (0.288)| (0.283)| (0.292)| (0.267)|
| 80 | 3.258 | 3.262 | 3.200 | 3.162 | 3.170 |
| | (0.324)| (0.306)| (0.269)| (0.282)| (0.275)|
| 90 | 3.260 | 3.288 | 3.200 | 3.240 | 3.194 |
| | (0.312)| (0.256)| (0.238)| (0.312)| (0.262)|
Table 4. Relative perceived exertion (RPE) (SE + or -) versus time.
| Time (min.) | CON | CAF | CFG | FRU | GLU |
|------------|-------|-------|-------|-------|-------|
| 15 | 12.40 | 11.00 | 11.80 | 12.40 | 12.00 |
| | (0.51)| (0.45)| (0.37)| (0.60)| (0.45)|
| 25 | 12.60 | 11.60 | 11.80 | 12.60 | 12.00 |
| | (0.51)| (0.24)| (0.37)| (0.51)| (0.45)|
| 35 | 13.00 | 11.80 | 12.40 | 13.40 | 12.80 |
| | (0.63)| (0.37)| (0.40)| (0.75)| (0.37)|
| 45 | 13.40 | 12.00 | 13.00 | 13.00 | 13.40 |
| | (0.81)| (0.45)| (0.55)| (0.45)| (0.40)|
| 55 | 13.60 | 12.80 | 13.20 | 13.80 | 13.60 |
| | (0.93)| (0.37)| (0.49)| (0.58)| (0.51)|
| 65 | 14.40 | 13.20 | 13.60 | 14.00 | 13.60 |
| | (0.81)| (0.20)| (0.51)| (0.55)| (0.60)|
| 75 | 14.20 | 13.00 | 13.80 | 14.60 | 13.80 |
| | (0.73)| (0.32)| (0.37)| (0.60)| (0.73)|
| 85 | 14.80 | 13.40 | 13.80 | 14.80 | 13.60 |
| | (0.97)| (0.40)| (0.37)| (0.58)| (0.68)|
| 95 | 15.40 | 13.80 | 13.80 | 14.80 | 13.80 |
| | (1.03)| (0.49)| (0.58)| (0.58)| (0.58)|
Table 5. Respiratory quotient (RQ) (SE + or -) versus time.
| Time (min.) | CON | CAF | CFG | FRU | GLU |
|------------|-------|-------|-------|-------|-------|
| 10 | 0.896 | 0.910 | 0.925 | 0.925 | 0.894 |
| | (0.012)| (0.025)| (0.014)| (0.007)| (0.016)|
| 20 | 0.900 | 0.909 | 0.929 | 0.947 | 0.900 |
| | (0.007)| (0.015)| (0.009)| (0.015)| (0.012)|
| 30 | 0.903 | 0.896 | 0.928 | 0.922 | 0.898 |
| | (0.010)| (0.014)| (0.010)| (0.007)| (0.011)|
| 40 | 0.899 | 0.897 | 0.915 | 0.912 | 0.893 |
| | (0.008)| (0.015)| (0.010)| (0.007)| (0.012)|
| 50 | 0.893 | 0.889 | 0.916 | 0.924 | 0.901 |
| | (0.009)| (0.019)| (0.010)| (0.009)| (0.009)|
| 60 | 0.892 | 0.886 | 0.913 | 0.913 | 0.904 |
| | (0.008)| (0.016)| (0.015)| (0.015)| (0.010)|
| 70 | 0.889 | 0.886 | 0.912 | 0.905 | 0.909 |
| | (0.010)| (0.015)| (0.015)| (0.009)| (0.010)|
| 80 | 0.887 | 0.878 | 0.906 | 0.902 | 0.908 |
| | (0.013)| (0.016)| (0.015)| (0.013)| (0.013)|
| 90 | 0.886 | 0.876 | 0.917 | 0.901 | 0.906 |
| | (0.012)| (0.017)| (0.015)| (0.017)| (0.009)|
**Blood Glucose**
As illustrated in Table 6, there were no significant differences between the before ingestion blood glucose (BIBG) levels. Before exercise blood glucose (BEBG) levels for the CAF and CFG trials were significantly higher ($P < 0.05$) than that of the FRU trial. This difference resulted from an increase in blood glucose in the CAF (7%) and CFG (9%) with a contrasting 9% decrease in blood glucose observed in the FRU trial. From BEBG to following exercise blood glucose (FEBG), blood glucose levels increased 29% for GLU, 21% for CFG, 12% for FRU and 9% for CON, and decreased 9% for CAF. This resulted in FEBG levels for the CFG and GLU trials that were significantly higher ($P < 0.05$) than those for the CON, CAF, and FRU trials.
Table 6. Blood glucose (SE + or -) in mg%.
| Sample | CON | CAF | CFG | FRU | GLU |
|--------|-------|-------|-------|-------|-------|
| BIBG | 84.44 | 91.88 | 89.08 | 85.48 | 87.26 |
| | (3.20)| (4.66)| (3.95)| (3.60)| (2.10)|
| BEBG | 84.38 | 98.04 | 96.82 | 77.42 | 84.42 |
| | (1.50)| (5.64)| (7.91)| (7.18)| (2.04)|
| FEBG | 92.18 | 89.44 | 117.4 | 86.38 | 109.0 |
| | (3.27)| (3.41)| (8.6) | (3.39)| (3.58)|
**Blood FFA**
No differences were observed between the before ingestion blood fatty acid (BIFA) levels, as illustrated in Table 7. Between the BIFA and the before exercise blood fatty acid (BEFA) samples, blood FFA levels decreased 30% for FRU, 15% for GLU, and 5% for CON. In contrast, FFA levels increased 24% for CFG and 8% for CAF. The BEFA levels of CFG were higher ($P < 0.05$) than those of FRU. During exercise, FFA levels increased in each trial, producing a significant ($P < 0.05$) time effect even though the amount of increase varied. Following exercise blood fatty acid (FEFA) levels for the CON, CAF, and FRU rides were significantly higher ($P < 0.05$) than the FEFA levels for the CFG and GLU rides.
Table 7. Blood FFA (SE + or -) in umol/liter.
| Sample | CON | CAF | CFG | FRU | GLU |
|--------|-------|-------|-------|-------|-------|
| BIFA | 476.2 | 417.2 | 434.2 | 474.2 | 454.4 |
| | (39.0)| (54.8)| (59.7)| (61.1)| (29.5)|
| BEFA | 451.4 | 451.4 | 537.2 | 328.6 | 385.6 |
| | (18.9)| (14.7)| (147.4)| (10.1)| (28.9)|
| FEFA | 1336 | 1126 | 737.2 | 1034 | 714.4 |
| | (229) | (123) | (138.8)| (129) | (51.5) |
**Muscle Glycogen**
Muscle glycogen levels were similar in all five trials, both before ingestion (BIMG) and following exercise (FEMG), as illustrated by Table 8. Muscle glycogen utilization (BIMG-FEMG), however, was greater \((P < 0.05)\) during trial CON than trials CAF and GLU. Although not statistically significant, there was a trend \((P < 0.1)\) toward lower glycogen utilization in trials CFG and FRU when compared with trial CON. No significant differences were observed between trials CAF, CFG, FRU, and GLU.
Table 8. Muscle glycogen (SE + or -) in umol/g w.w.
| Sample | CON | CAF | CFG | FRU | GLU |
|----------|---------|---------|---------|---------|---------|
| BIMG | 152.0 | 144.6 | 135.7 | 146.5 | 138.2 |
| | (19.5) | (7.6) | (13.1) | (7.1) | (13.6) |
| FEMG | 60.66 | 81.44 | 68.72 | 79.68 | 76.40 |
| | (17.72) | (7.21) | (12.76) | (8.39) | (14.09) |
| BIMG-FEMG| 91.36 | 63.14 | 66.98 | 66.80 | 61.84 |
| | (10.01) | (7.90) | (10.78) | (12.05) | (5.25) |
CHAPTER 5
DISCUSSION
It has been demonstrated that during prolonged exercise, the onset of fatigue is delayed if muscle glycogen is spared (20, 28, 32, 51). With this in mind, the present investigation was undertaken to determine the relative effects of: 1) caffeine ingestion before exercise, 2) fructose ingestion before exercise, 3) glucose ingestion during exercise, and 4) caffeine/fructose ingestion before exercise plus glucose during exercise on muscle glycogen utilization and other measured parameters. Knowledge of the effects of the various treatments could help athletes at all levels of competition make a more educated decision as to their optimal use. The subjects used in this study were highly trained competitive cyclists, therefore the information is most directly applied to this group of athletes. However, athletes from other disciplines may also find the information relevant.
**Caffeine**
Caffeine has become popular as a potential ergogenic aid for endurance athletes based on evidence that caffeine increases lipid utilization in trained cyclists (21,35). In other studies, however, caffeine failed to increase lipid oxidation or inhibit carbohydrate utilization (10,40). In the present study, muscle glycogen utilization was significantly lower during CAF than during CON; however, neither increased blood FFA levels nor decreased RQ values were observed when compared to CON.
Several explanations are possible for the failure of caffeine to increase estimated FFA utilization in the present study. Based on past competition results, these subjects are capable of prolonged cycling using lipid as a significant energy source at approximately 65% of VO2max. Lipolysis is increased during exercise due to the release of epinephrine, at a rate proportional to exercise intensity (55). Bellet (3) has shown that caffeine also increases lipolysis by increasing epinephrine release. It is possible that exercise-induced release of epinephrine is a more influential mediator of plasma FFA concentration than caffeine-induced release in some well trained athletes. Caffeine may play a more important role in promoting
lipid oxidation at a higher exercise intensity when glycogen sparing is critical. For example, during cycling requiring approximately 80% of VO2max, and presumably involving a significant anaerobic contribution to energy needs based on reported RQ-values, Essig et al. (25) observed increased lipid utilization and muscle glycogen sparing. The possibility that caffeine promotes lipid oxidation at higher exercise intensities is also indirectly supported by the knowledge that long-term training enhances lipid utilization due to increased lipolytic enzyme activity and mitochondrial size (33). Thus, it is possible that long-term endurance training confers optimal ability to utilize FFA during cycling at 65% of VO2max, allowing little potentiation by caffeine. If the exercise intensity had been higher or the fitness level of the subjects lower, caffeine may have had a greater influence on FFA mobilization and utilization.
Blood glucose increased 7% from BIBG to BEBG, possibly due to caffeine's stimulatory effect on the adrenal glands (6), then dropped 9% between BEBG and FEBG. These results may indicate that the decreased utilization of muscle glycogen was at the expense of increased liver glycogen utilization.
Four of the five subjects reported feeling very
motivated during the first thirty minutes of exercise, and the tester had to monitor pedal rpm to prevent excessive spinning. Although not significant (except at thirty minutes, compared to FRU), CAF consistently had the lowest RPE readings. These observations probably resulted from caffeine's stimulatory effect on the central nervous system.
**Fructose**
Since preexercise glucose feedings have been shown to increase the rate of muscle glycogen utilization during exercise (44) and decrease exercise time to exhaustion (28), a source of CHO without these negative effects would be preferred as a preexercise feeding. Fructose has been considered as a possible source for preexercise CHO intake. Absorption of fructose from the gut occurs more slowly than does that of glucose and, in healthy humans, 70% - 90% of ingested fructose enters the portal circulation as fructose. In the fasted state most of the glucose formed in the liver is converted to glycogen and subsequently there is no significant rise in plasma glucose or insulin levels (5,23). In a previous study, Levine (44) reported a decrease in muscle glycogen utilization using fructose compared to a control.
The present study supports the glycogen findings of Levine. Although not significant, there was a trend toward lower glycogen utilization ($P < 0.1$) with FRU compared to CON. No significant difference was observed between GLU, CAF, CFG, or FRU. Blood glucose decreased between BIBG and BEBG, while previous studies reported a slight increase in blood glucose following fructose ingestion (31,41,44). This variation may be due to a time factor, since this study measured blood glucose sixty minutes after ingestion while the previous studies took samples at forty-five minutes following ingestion.
RQ remained above 0.9 during the entire exercise period, indicating that CHO was available and being metabolized for fuel.
The high RPE values reported (significantly higher than CAF after thirty minutes of exercise) were due to gastric upset experienced by three of the five subjects. Because of its slower absorption from the intestine, it is not uncommon for osmotic diarrhea to occur after fructose in doses as large as those used in the present study (14). Before an athletic event, fructose supplementation would likely be smaller than that employed experimentally.
**Glucose**
It has been demonstrated that the time of glucose ingestion is critical to its effects on physical performance (9,20,44), and that ingested glucose is present in the blood in as little as five minutes (18). It has been previously reported that ingested glucose is metabolized during prolonged exercise (18,48), may decrease the liver's contribution to blood glucose (18), and that glucose ingested during prolonged exercise can delay fatigue (9,22). It was demonstrated in the present study that muscle glycogen utilization was significantly decreased when glucose (GLU trial compared to CON) was ingested during ninety minutes of exercise. FEBG levels were higher than BEBG levels, and RQ remained very stable at approximately the 0.9 level throughout exercise. FEFA levels for GLU and CFG were significantly lower than the FEFA level of CON. This may indicate that the ingested glucose was used as a fuel source, thus saving the body's fuel reserves. Also of interest was the finding that RPE levels remained stable from the thirty minute reading to the end of exercise. This finding was reinforced by comments from two subjects that the exercise felt very consistent during GLU and continuing would not be difficult.
**Caffeine, Fructose, and Glucose**
The CFG trial was included to explore the possibility that the various treatments were additive in regard to glycogen sparing. It was found that glycogen utilization tended to be less in comparison to CON, but not different from the remaining treatments. More variability was present between individual BEBG, FEBG, and BEFA values with CFG than with any other treatment, indicating that individual reaction to the ingestion of multiple substances is a complex problem. Blood glucose increased from BIBG to BEBG to FEBG; this, along with the consistently high RQ readings, indicates that CHO was readily available as a fuel source. Also of interest was the finding that the rise in blood FFA was modest even though caffeine was ingested; this was previously observed after caffeine and sucrose ingestion at rest (2). These findings raise many questions concerning the effects of ingesting multiple substances before and during exercise.
CHAPTER 6
SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS
**Summary**
Five competitive cyclists (four male and one female) were studied during 95 min of bicycle ergometer exercise (approx. 65% VO2max) to determine the effects of ingesting caffeine before exercise (CAF) (5 mg/kg body weight), fructose before exercise (FRU) (1 g/kg), glucose during exercise (GLU) (1 g/kg), a combination of caffeine/fructose before plus glucose during exercise (CFG) (same quantities as in the other trials), and a control (CON) on muscle glycogen utilization during exercise. Each subject performed all trials, with not less than seven days and not more than fourteen days between trials. Preexercise ingestion occurred one hour prior to exercise, and ingestion during exercise began fifteen minutes into the ride. Muscle biopsies were performed before initial ingestion (BIM) and following exercise (FEM). Blood samples were taken before ingestion (BIB), before exercise began (BEB), and following exercise (FEB). Blood samples were analyzed for glucose and FFA.
There were no significant differences between BIB glucose levels. BEB glucose levels for the CAF and CFG trials were significantly higher (P < 0.05) than that of the FRU trial. FEB glucose levels for the CFG and GLU trials were significantly higher (P < 0.05) than those for the CON, CAF, and FRU trials.
No differences were observed between the BIB FFA levels. The BEB FFA levels of CFG were higher (P < 0.05) than those of FRU. During exercise, FFA levels increased in each trial, producing a significant (P < 0.05) time effect even though the amount of increase varied. FEB FFA levels for the CON, CAF, and FRU rides were significantly higher (P < 0.05) than those for the CFG and GLU rides.
Muscle glycogen levels were similar in all five trials, both before ingestion and following exercise. Muscle glycogen utilization, however, was greater (P < 0.05) during trial CON than trials CAF and GLU. Although not statistically significant, there was a trend (P < 0.1) toward lower glycogen utilization in trials CFG and FRU when compared with trial CON. No significant differences were observed between trials CAF, CFG, FRU, and GLU.
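The utilization comparison above can be sketched as follows. The pre/post glycogen values here are hypothetical illustration values, not the study's data; only the arithmetic (utilization = pre-ingestion biopsy minus post-exercise biopsy, and percent sparing relative to CON) follows the text.

```python
# Hypothetical pre/post muscle glycogen values (mmol glucosyl units/kg wet wt)
pre  = {"CON": 110.0, "CAF": 108.0, "GLU": 112.0, "FRU": 109.0, "CFG": 111.0}
post = {"CON": 40.0,  "CAF": 62.0,  "GLU": 66.0,  "FRU": 55.0,  "CFG": 58.0}

# Utilization = biopsy before ingestion (BIM) minus biopsy following exercise (FEM)
utilization = {trial: pre[trial] - post[trial] for trial in pre}

# Percent "sparing" of each treatment relative to the control ride
sparing = {trial: 100.0 * (1.0 - utilization[trial] / utilization["CON"])
           for trial in utilization if trial != "CON"}
```

With these example numbers, every treatment shows positive sparing relative to CON, mirroring the direction (though not the magnitude) of the reported results.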
These data indicate that caffeine ingestion before exercise and glucose ingestion during exercise can decrease muscle glycogen utilization. This decrease in muscle glycogen utilization could result in an increase in endurance performance.
**Conclusions**
This study was conducted with a small group of competitive cyclists; therefore, broad generalizations cannot be drawn. Based on the limitations of this study, the following conclusions appear justified.
1. Due to the variability among subjects, the use of any of the tested substances should be limited to training until the effects on the individual are known.
2. Glucose was a safe and consistent performer, providing a decreased utilization of muscle glycogen compared to CON. This could lead to an increase in endurance performance. The concentration ingested by an athlete should meet the needs of CHO supplementation and fluid replacement.
3. Caffeine ingestion decreased muscle glycogen utilization and received the lowest scores for RPE. These findings indicate that caffeine ingestion may increase endurance performance. Individual reactions to caffeine varied, and side effects are possible. Its use needs to be restricted to safe levels, and individual tolerance should be determined.
4. Fructose ingestion caused gastric upset at the levels ingested in this study; therefore, the amounts used in competition would probably be smaller. Individual tolerance levels and effects need to be determined before use in competition.
5. Ingestion of multiple substances is a complex problem requiring additional research.
**Recommendations**
Based on the results of this study, further research in the area of ingesting caffeine, fructose, glucose, or combinations of these seems warranted. The following recommendations are presented.
1. Strict control of dietary intake and training load during the testing period, to control preexercise muscle glycogen levels, would be beneficial for statistical reasons.
2. Varying the amount of ingestion to determine if a dose-response relationship exists for the individual substances would be valuable.
3. The time of ingestion prior to and during exercise should be investigated to determine the optimal time of ingestion.
4. The relation between percent VO2max, fitness level, and substrate utilization should be examined to determine the role caffeine plays in glycogen sparing.
5. The relative effects the tested substances have on endurance performance as measured by time to exhaustion should be investigated.
BIBLIOGRAPHY
1. Behnke, A., and J. Wilmore. *Evaluation and Regulation of Body Build and Composition*. Englewood Cliffs, N.J.: Prentice-Hall, 1974.
2. Bellet, S., A. Kershbaum, and E. Finch. "Response of free fatty acids to coffee and caffeine." *Metabolism* 17: 702-707, 1968.
3. Bellet, S., L. Roman, O. De Castro, K. Eunkim, and A. Kershbaum. "Effect of coffee ingestion on catecholamine release." *Metabolism* 18: 288-291, 1969.
4. Bock, A., C. Vancoulaert, D. Dill, A. Folling, and L. Hurzthal. "Studies in muscular activity. Pt. IV: The 'Steady State' and R.Q. during work." *Journal of Physiology* 66: 162, 1928.
5. Bohannon, N., J. Karam, and P. Forsham. "Endocrine responses to sugar ingestion in man." *J. Am. Diet. Assoc.* 23: 555-560, 1980.
6. Bolton, S., et al. "Caffeine: its effects, uses and abuses." *Journal of Applied Nutrition* 33: 35-53, 1981.
7. Bonen, A., et al. "Effects of menstrual cycle on metabolic response to exercise." *Journal of Applied Physiology* 55: 1506-1513, 1983.
8. Borg, G. "Perceived exertion: A note on history and methods." *Medicine and Science in Sports* 5: 90-93, 1973.
9. Brooke, J., G. Davies, and L. Green. "The effects of normal and glucose syrup work diets on the performance of racing cyclists." *Journal of Sports Medicine* 15: 257-265, 1975.
10. Casal, D., and A. Leon. "Failure of caffeine to affect substrate utilization during prolonged running." *Medicine and Science in Sports and Exercise* 17(1): 174-179, 1985.
11. Cathcart, E. "The influence of work on protein metabolism." *Physiology Review* 5: 225, 1925.
12. Cathcart, E., and W. Burnett. "The influence of muscle work on metabolism in varying conditions of diet." *Proc. Roy. Soc.* (London) B99: 405, 1926.
13. Chasiotis, D., et al. "Regulation of glycogenolysis in human muscle at rest and during exercise." *Journal of Applied Physiology* 53: 708-715, 1982.
14. Chen, M., and R. Whistler. "Metabolism of D-Fructose." *Advanced Carbohydrate Chemistry and Biochemistry* 34: 285-343, 1977.
15. Christensen, E., and O. Hansen. "Respiratorischer Quotient und O2-Aufnahme." *Scandinavian Archives of Physiology* 81: 180, 1939.
16. Costill, D., W. Krammer, and A. Fisher. "Fluid ingestion during distance running." *Archives of Environmental Health* 21: 520-525, 1970.
17. Costill, D., R. Bowers, K. Sparks, and C. Turner. "Muscle glycogen utilization during prolonged running." *Journal of Applied Physiology* 31: 353-356, 1971.
18. Costill, D., A. Bennett, G. Branam, and D. Eddy. "Glucose ingestion at rest and during prolonged exercise." *Journal of Applied Physiology* 34(6): 764-769, 1973.
19. Costill, D., and B. Saltin. "Factors limiting gastric emptying during rest and exercise." *Journal of Applied Physiology* 37(5): 679-683, 1974.
20. Costill, D., E. Coyle, G. Dalsky, W. Evans, W. Fink, and D. Hoopes. "Effects of elevated plasma FFA and insulin on muscle glycogen usage." *Journal of Applied Physiology* 43(4): 695-699, 1977.
21. Costill, D., et al. "Effects of caffeine ingestion on metabolism and exercise performance." *Medicine and Science in Sports* 10(3): 155-158, 1978.
22. Coyle, E., et al. "Carbohydrate feeding during prolonged strenuous exercise can delay fatigue." *Journal of Applied Physiology* 55: 230-235, 1983.
23. Crapo, P., O. Kolterman, and J. Olefsky. "Effects of oral fructose in normal, diabetic, and impaired glucose tolerance subjects." *Diabetes Care* 3: 575-582, 1980.
24. Dill, D., H. Edwards, and J. Talbott. "Studies in muscular activity" *Journal of Physiology* (London) 77: 49-54, 1932.
25. Essig, D., D. Costill, and P. Van Handel. "Effects of caffeine ingestion on utilization of muscle glycogen and lipid during leg ergometer cycling." *International Journal of Sports Medicine* 1: 86-90, 1980.
26. Evans, W., S. Phinney, and V. Young. "Suction applied to a muscle biopsy maximizes sample size." *Medicine and Science in Sports and Exercise* 14(1): 101-102, 1982.
27. Fordtran, J., and B. Saltin. "Gastric emptying and intestinal absorption during prolonged severe exercise." *Journal of Applied Physiology* 23: 331-335, 1967.
28. Foster, C., D. Costill, and W. Fink. "Effects of preexercise feedings on endurance performance." *Medicine and Science in Sports* 11(1): 1-5, 1979.
29. Fox, E., R. Bartels, C. Billings, D. Mathews, R. Bason, and W. Webb. "Intensity and distance of interval training programs and changes in aerobic power." *Medicine and Science in Sports* 5: 18, 1972.
30. Garry, R. "The static effort and the excretion of uric acid." *Journal of Physiology* 62: 364, 1927.
31. Hargreaves, M., D. Costill, A. Katz, and W. Fink. "Effect of fructose ingestion on muscle glycogen usage during exercise." *Medicine and Science in Sports and Exercise* 17(3): 360-363.
32. Hickson, R., M. Rennie, R. Conlee, W. Winder, and J. Holloszy. "Effects of increased plasma fatty acids on glycogen utilization and endurance." *Journal of Applied Physiology* 43: 829-833, 1977.
33. Holloszy, J. "Biochemical adaptations to endurance exercise," *Exerc. Sports Sci. Rev.* 1: 45-71, 1973.
34. Howald, H., et al. "Nutrient intake and energy regulation in physical exercise." *Experientia* [Suppl.] 44: 77-88, 1983.
35. Ivy, J., et al. "Influence of caffeine and carbohydrate feedings on endurance performance." *Medicine and Science in Sports* 11(1): 6-11, 1979.
36. Ivy, J., et al. "Endurance improved by ingestion of a glucose polymer supplement." *Medicine and Science in Sports and Exercise* 15(6): 466-471, 1983.
37. Jurkowski, J., et al. "Effects of menstrual cycle on blood lactate, O2 delivery, and performance during exercise." *Journal of Applied Physiology* 51: 1493-1499, 1981.
38. Karlsson, J., and B. Saltin. "Diet, muscle glycogen, and endurance performance." *Journal of Applied Physiology* 31(2): 203-206, 1971.
39. Kirby, R., A. Bonen, A. Belcastro, and C. Campbell. "Needle muscle biopsy: techniques to increase sample sizes, and complications." *Archives of Physical and Medical Rehabilitation* 63: 264-268, 1982.
40. Knapik, J., B. Jones, M. Toner, W. Daniels, and W. Evans. "Influence of caffeine on serum substrate changes during running in trained and untrained individuals." *Biochem. Exerc.* 13: 514-519, 1983.
41. Koivisto, V., S. Karonen, and E. Nikkila. "Carbohydrate ingestion before exercise: comparison of glucose, fructose, and sweet placebo." *Journal of Applied Physiology* 51(4): 783-787, 1981.
42. Krogh, A., and J. Lindhard. "Relative value of fat and carbohydrate as a source of muscular energy." *Biochemical Journal* 14: 290, 1920.
43. Lauwerys, R. "Colorimetric determination of free fatty acids." *Analytical Biochemistry* 32: 331-333, 1969.
44. Levine, L. W. Evans, B. Cadarette, E. Fisher, and B. Bullen. "Fructose and glucose ingestion and muscle glycogen use during submaximal exercise." *Journal of Applied Physiology* 55(6): 1767-1771, 1983.
45. von Liebig, J. *Letters on Chemistry*, 1841.
46. McArdle, W., F. Katch, and V. Katch. *Exercise Physiology: Energy, Nutrition, and Human Performance*. Philadelphia: Lea & Febiger, 1981.
47. Passonneau, J., and V. Lauderdale. "A comparison of three methods of glycogen measurement in tissues." *Analytical Biochemistry* 60: 405-412, 1974.
48. Pirnay, F., M. Lacroix, F. Mosora, A. Luyckx, and P. Lefebvre. "Glucose oxidation during prolonged exercise evaluated with naturally labeled [13C]glucose." *Journal of Applied Physiology* 43(2): 258-261, 1977.
49. Pollock, M., D. Schmidt, and A. Jackson. "Measurement of cardiorespiratory fitness and body composition in the clinical setting." *Comprehensive Therapy* 6(9), 1980.
50. *The Random House Dictionary* New York: Random House, Inc., 1980.
51. Rennie, M., W. Winder, and J. Holloszy. "A sparing effect of increased plasma fatty acids on muscle and liver glycogen content in exercising rat." *Biochemical Journal* 156: 647-655, 1976.
52. Richter, E., et al. "Muscle glycogenolysis during exercise; dual control by epinephrine and contractions." *American Journal of Physiology* 242: E25-32, 1982.
53. SIGMA Glucose Hexokinase Kit. St. Louis: Sigma Chemical Company, n.d.
54. von Borstel, R., et al. "Caffeine metabolism." *Food Technology* 37: 40-43, 1983.
55. Von Euler, U. "Sympatho-adrenal activity in physical exercise." *Medicine and Science in Sports and Exercise* 6: 165-170, 1974.
56. Whitney, E., and C. Cataldo. *Understanding Normal and Clinical Nutrition*. St. Paul: West Publishing Co., 1983.
57. Wilmore, J. *Training for Sport and Activity*. Boston: Allyn and Bacon, Inc., 1982.
APPENDICES
APPENDIX A
SAMPLE FORMS
Subject Questionnaire
Name: ___________________________ Phone: _______________
Address: ____________________________________________
Age: ________ Height (cm): ________ Weight (kg): ________
(Circle correct answer)
1. Is anyone in your immediate family diabetic? (Yes, No)
If yes, then who? (yourself, father, mother, brother, sister)
2. Have you cycled competitively at the state level in the last year? (Yes, No)
3. Do you drink coffee? (Yes, No)
If yes, cups per day? ________
4. Do you drink non-herbal tea? (Yes, No)
If yes, cups per day? ________
5. Do you drink soft drinks? (Yes, No)
If yes, cups per day? ________
6. Have you ever experienced any type of bleeding disorder, easy bruising, or slow healing? (Yes, No)
If yes, please explain. ______________________________________
7. Are you using any type of prescriptive drug? (Yes, No)
If yes, please list. _______________________________________
8. Do you plan on participating in any type of competition between January 1st and February 28th? (Yes, No)
If yes, please list dates. ______________________________________
9. Please list two days of the week (first and second choices) you are free for testing from 7:00am to 10:00am.
First: _________________ Second: _________________
10. Has your training been consistent for the last month, and do you expect it to remain constant throughout testing? (Yes, No)
If no please explain.
11. Please list present weekly training regimen.
## Gas Analysis Calculations
Name: ___________________________ Date: _______________

Age: ________ Weight (kg): ________ Height (cm): ________

Exercise Level: ________ Speed (rpm): ________ Exercise (min): ________ Rest (min): ________

RPE: ________ Heart Rate: ________
### Volume
1. $T_2 = \text{Tissot, Final}$
2. $T_1 = \text{Tissot, initial}$
3. $T_d = \text{Tissot, difference}$
4. $T_d \times 3.244 = VE$ uncorrected
5. $BP = \text{Barometric Pressure}$
6. $T_c = \text{Tissot Temperature}$
7. $STPD$
8. $(7) \times (4) = VE$ corrected
### Oxygen Consumption
9. $E_2$
10. $(9) \times 21/1000 = O_2\%$
11. $20.08 - (10) = O_2$ Extract
12. $(9) \times (11)/100 = O_2$ l/min
13. $(12) \times 1000/BW(kg) = O_2$ ml/kg min
14. $(13)/VO_{2\text{max}} \times 100 = \%VO_{2\text{max}}$
15. $CO_2\% = CO_2\%$ production
16. $(15)/(11) = RQ$
17. $(4)/(13) = VE$
$$STPD = \frac{273}{273 + T_c} \times \frac{BP - VP}{760}$$
| $T_c$ | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 |
|-------|----|----|----|----|----|----|----|----|----|----|
| $VP$ | 12.8 | 13.6 | 14.5 | 15.5 | 16.5 | 17.5 | 18.7 | 19.8 | 21.1 | 22.4 |
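The STPD (standard temperature and pressure, dry) correction on this worksheet can be sketched directly. The barometric pressure (640 mmHg) and uncorrected VE (120 l/min) below are hypothetical example values; the vapor pressure for 20 C is read from the VP table above.

```python
def stpd_factor(t_c, bp, vp):
    """STPD correction factor: [273 / (273 + Tc)] * [(BP - VP) / 760].

    t_c: Tissot temperature in degrees C
    bp:  barometric pressure in mmHg
    vp:  saturated water vapor pressure at t_c, in mmHg
    """
    return (273.0 / (273.0 + t_c)) * ((bp - vp) / 760.0)

# Hypothetical example: Tissot temperature 20 C, barometric pressure 640 mmHg,
# vapor pressure 17.5 mmHg (from the table above for Tc = 20)
factor = stpd_factor(20.0, 640.0, 17.5)

# Worksheet step 8: corrected VE = STPD factor x uncorrected VE (120 l/min assumed)
ve_corrected = factor * 120.0
```

At moderate altitude (lower BP) the factor drops well below 1, so skipping the correction would noticeably overstate VE.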
MONTANA STATE UNIVERSITY
Committee on Human Subjects in Research
CONSENT FORM
Title of Project:
Name of Researcher:
Name of Person Briefing Subject:
I, ____________________________ (please print name of participant), am a willing participant in this project and have been informed of the following items:
I. I have read or had the opportunity to read the proposal which includes the general description of this research project, its purpose and benefits;
II. I have read the research document and been given an explanation of all procedures to be followed and why I have been asked to participate;
III. I have been given an explanation of my specific involvement and any foreseeable risks or discomfort;
IV. I have been given a description of any benefits which may be expected from the research;
V. I have been given a disclosure of appropriate alternative procedures or courses of treatment that might be advantageous;
VI. I have been assured of confidentiality of records identifying me as a subject;
VII. I have been given assurance that minimal risk is involved in my participation in this study;
VIII. I understand as a voluntary participant that I may withdraw from the experiment at any time that I desire without any loss of benefits to which I am entitled;
IX. I have been given the opportunity to ask questions at any time about the experiment, my rights and whom to contact in the event of research-related injury to me and all questions have been answered to my satisfaction.
Signature
If the person giving consent is not the participant, a statement that he/she is legally authorized to represent the participant must be included, e.g.
I am the parent or legal guardian of _______________________________________
If drugs are involved, this form will not be used as we need specific information about the drug being used.
APPENDIX B
RELATIVE PERCEIVED EXERTION SCALE
| How did the exercise feel? | Rating |
|------------------------------------|--------|
| | 6 |
| Very, very light | 7 |
| | 8 |
| Very light | 9 |
| | 10 |
| Fairly light | 11 |
| | 12 |
| Somewhat hard | 13 |
| | 14 |
| Hard | 15 |
| | 16 |
| Very hard | 17 |
| | 18 |
| Very, very hard | 19 |
| | 20 |
APPENDIX C
TIME SCHEDULE
| Minute | Activity |
|--------|----------|
| -80 to -60 | Preingestion blood sample, muscle biopsy |
| -60 to -55 | Ingest initial solution (10 ml/kg) |
| -10 to -05 | Preexercise blood sample |
| 00 | Start warm up (50% VO2max at 90 rpm) |
| 05 | Start exercise (70% VO2max at 90 rpm) |
| 09 to 10 | Collect respiratory gases |
| 15 | Record RPE and HR |
| 15 to 17 | Ingest solution (4 ml/kg) |
| 19 to 20 | Collect respiratory gases |
| 25 | Record RPE and HR |
| 29 to 30 | Collect respiratory gases |
| 30 to 32 | Ingest solution (4 ml/kg) |
| 35 | Record RPE and HR |
| 39 to 40 | Collect respiratory gases |
| 45 | Record RPE and HR |
| 45 to 47 | Ingest solution (4 ml/kg) |
| 49 to 50 | Collect respiratory gases |
| 55 | Record RPE and HR |
| 59 to 60 | Collect respiratory gases |
| 60 to 62 | Ingest solution |
| 65 | Record RPE and HR |
| 69 to 70 | Collect respiratory gases |
| 75 | Record RPE and HR |
| 79 to 80 | Collect respiratory gases |
| 85 | Record RPE and HR |
| 89 to 90 | Collect respiratory gases |
| 95 | Record RPE and HR |
| 95 to 115 | Blood sample, muscle biopsy |
Hashing, or learning binary embeddings of data, is frequently used in nearest neighbor retrieval. In this paper, we develop learning to rank formulations for hashing, aimed at directly optimizing ranking-based evaluation metrics such as Average Precision (AP) and Normalized Discounted Cumulative Gain (NDCG). We first observe that the integer-valued Hamming distance often leads to tied rankings, and propose to use tie-aware versions of AP and NDCG to evaluate hashing for retrieval. Then, to optimize tie-aware ranking metrics, we derive their continuous relaxations, and perform gradient-based optimization with deep neural networks. Our results establish the new state-of-the-art for image retrieval by Hamming ranking in common benchmarks.
1. Introduction
In this paper, we consider the problem of hashing, which is concerned with learning binary embeddings of data in order to enable fast approximate nearest neighbor retrieval. We take a task-driven approach, and seek to optimize learning objectives that closely match test-time performance measures. Nearest neighbor retrieval performance is frequently measured using ranking-based evaluation metrics, such as Average Precision (AP) and Normalized Discounted Cumulative Gain (NDCG) [26], but the optimization of such metrics has been deemed difficult in the hashing literature [30]. We propose a novel learning to rank formulation to tackle these difficult optimization problems, and our main contribution is a gradient-based method that directly optimizes ranking metrics for hashing. Coupled with deep neural networks, this method achieves state-of-the-art results.
Our formulation is inspired by a simple observation. When performing retrieval with binary vector encodings and the integer-valued Hamming distance, the resulting ranking usually contains ties, and different tie-breaking strategies can lead to different results (Fig. 1). In fact, ties are a common problem in ranking, and much attention has been paid to it, including in Kendall’s classical work on rank correlation [15], and in the modern information retrieval literature [3, 28]. Unfortunately, the learning to hash literature largely lacks tie-awareness, and current evaluation protocols rarely take tie-breaking into account. Thus, we advocate using tie-aware ranking evaluation metrics, which implicitly average over all permutations of tied items, and permit efficient closed-form evaluation.
Our natural next step is to learn hash functions by optimizing tie-aware ranking metrics. This can be seen as an instance of learning to rank with listwise loss functions, which is advantageous compared to many other ranking-inspired hashing formulations. To solve the associated discrete and NP-hard optimization problems, we relax the problems into their continuous counterparts where closed-form gradients are available, and then perform gradient-based optimization with deep neural networks. We specifically study the optimization of AP and NDCG, two ranking metrics that are...
widely used in evaluating nearest neighbor retrieval performance. Our results establish the new state-of-the-art for these metrics in common image retrieval benchmarks.
2. Related Work
Hashing is a widely used approach for practical nearest neighbor retrieval [39], thanks to the efficiency of evaluating Hamming distances using bitwise operations, as well as the low memory and storage footprint. It has been theoretically demonstrated [1] that data-dependent hashing methods outperform data-independent ones such as Locality Sensitive Hashing [14]. We tackle the supervised hashing problem, also known as affinity-based hashing [18, 25, 30], where supervision is given in the form of pairwise affinities. Regarding optimization, the discrete nature of hashing usually results in NP-hard problems. Our solution uses continuous relaxations, which is in line with relaxation-based methods, e.g., [4, 18, 25], but differs from alternating methods that preserve the discrete constraints [22, 29, 30] and two-step methods [6, 23, 48].
Supervised hashing can be cast as a distance metric learning problem [29], which itself can be formulated as learning to rank [21, 27]. Optimizing ranking metrics such as AP and NDCG has received much attention in the learning to rank literature. For instance, surrogates of AP and NDCG can be optimized in the structural SVM framework [10, 45], and bound optimization algorithms exist for NDCG [38]. Alternatively, there are gradient-based methods based on smoothing or approximating these metrics [2, 11, 19, 35]. Recently, [36] tackles few-shot classification by optimizing AP using the direct loss minimization framework [34]. These methods did not consider applications in hashing.
In the learning to hash literature, different strategies have been proposed to handle the difficulties in optimizing listwise ranking metrics. For example, [40] decomposes listwise supervision into local triplets, [22, 44] use structural SVMs to optimize surrogate losses, [33] maximizes precision at the top, and [41, 47] optimize NDCG surrogates. In other recent methods using deep neural networks, the learning objectives are not designed to match ranking evaluation metrics, e.g., [4, 20, 42, 48]. In contrast, we directly optimize listwise ranking metrics using deep neural networks.
Key to our formulation is the observation that the integer-valued Hamming distance results in rankings with ties. However, this fact is not widely taken into consideration in previous work. Ties can be sidestepped by using weighted Hamming distance [22, 46], but at the cost of reduced efficiency. Fortunately, tie-aware versions of common ranking metrics have been found in the information retrieval literature [28]. Inspired by such results, we propose to optimize tie-aware ranking metrics on Hamming distances. Our gradient-based optimization uses a recent differentiable histogram binning technique [4, 5, 37].
3. Hashing as Tie-Aware Ranking
3.1. Preliminaries
Learning to hash. In learning to hash, we wish to learn a hash mapping $\Phi : \mathcal{X} \rightarrow \mathcal{H}^b$, where $\mathcal{X}$ is the feature space, and $\mathcal{H}^b = \{-1, 1\}^b$ is the $b$-dimensional Hamming space. A hash mapping $\Phi$ induces the Hamming distance $d_\Phi : \mathcal{X} \times \mathcal{X} \rightarrow \{0, 1, \ldots, b\}$ as\footnote{Although the usual implementation is by counting bit differences, this equivalent formulation has the advantage of being differentiable.}
$$d_\Phi(x, x') = \frac{1}{2} (b - \Phi(x)^\top \Phi(x')).$$
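The equivalence between this inner-product form and the usual bit-counting implementation can be checked with a small sketch (the random codes and bit width are arbitrary illustration values):

```python
import numpy as np

rng = np.random.default_rng(0)
b = 16  # number of bits (arbitrary for this example)
phi_x = rng.choice([-1, 1], size=b)  # hash codes in {-1, +1}^b
phi_y = rng.choice([-1, 1], size=b)

# Inner-product form from the text: d = (b - <Phi(x), Phi(x')>) / 2
d_inner = (b - int(phi_x @ phi_y)) // 2

# Usual implementation: map {-1, +1} -> {0, 1} and count differing bits
d_bits = int(np.sum((phi_x > 0) != (phi_y > 0)))

assert d_inner == d_bits
```

The inner product equals (agreements minus disagreements), so subtracting it from b and halving recovers exactly the number of differing bits, which is why the differentiable form is usable for gradient-based learning.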
We consider a supervised learning setting, or supervised hashing, where supervision is specified using pairwise affinities. Formally, we assume access to an affinity oracle $\mathcal{A}$, whose value indicates a notion of similarity: two examples $x_i, x_j \in \mathcal{X}$ are called similar if $\mathcal{A}(x_i, x_j) > 0$, and dissimilar when $\mathcal{A}(x_i, x_j) = 0$. In this paper, we restrict $\mathcal{A}$ to take values from a finite set $\mathcal{V}$, which covers two important special cases. First, $\mathcal{V} = \{0, 1\}$, or binary affinities, are extensively studied in the current literature. Binary affinities can be derived from agreement of class labels, or by thresholding the original Euclidean distance in $\mathcal{X}$.\footnote{The latter is sometimes referred to as “unsupervised hashing” in the literature due to the absence of class labels.} The second case is multi-level affinities, where $\mathcal{V}$ consists of non-negative integers. This more fine-grained model of similarity is frequently considered in information retrieval tasks, including in web search engines.
Throughout this paper we assume the setup where a query $x_q \in \mathcal{X}$ is retrieved against some database $S \subseteq \mathcal{X}$. Retrieval is performed by ranking the instances in $S$ by increasing distance to $x_q$, using $d_\Phi$ as the distance metric. This is termed "retrieval by Hamming ranking" in the hashing literature. The ranking can be represented by an index vector $R$, whose elements form a permutation of $\{1, \ldots, |S|\}$. Below, let $R_k$ be the $k$-th element in $R$, and $\mathcal{A}_q(i) = \mathcal{A}(x_q, x_i)$. Unless otherwise noted, we implicitly assume dependency on $x_q, S$, and $\Phi$ in our notation.
Ranking-based evaluation. Ranking-based metrics usually measure some form of agreement between the ranking and ground truth affinities, capturing the intuition that retrievals with high affinity to the query should be ranked high. First, in the case of binary affinity, we define $N^+ = |\{x_i \in S | \mathcal{A}_q(i) = 1\}|$. Average Precision (AP) averages the precision at cutoff $k$ over all cutoffs:
$$\text{AP}(R) = \frac{1}{N^+} \sum_{k=1}^{|S|} \mathcal{A}_q(R_k) \left[ \frac{1}{k} \sum_{j=1}^{k} \mathcal{A}_q(R_j) \right].$$
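This definition translates directly into code: AP is the mean of precision@k taken at the relevant positions. A minimal sketch (the `ranking`/`affinity` representation is our own choice, not prescribed by the paper):

```python
def average_precision(ranking, affinity):
    """AP as defined above: average of precision@k over relevant cutoffs.

    ranking:  list of database indices, best-ranked first
    affinity: sequence mapping index -> binary relevance to the query
    """
    n_pos = sum(affinity[i] for i in ranking)
    hits, total = 0, 0.0
    for k, i in enumerate(ranking, start=1):
        if affinity[i]:          # A_q(R_k) = 1: a relevant item at rank k
            hits += 1
            total += hits / k    # precision at cutoff k
    return total / n_pos if n_pos else 0.0

# Relevant items at ranks 1 and 3: AP = (1/1 + 2/3) / 2 = 5/6
ap_example = average_precision([0, 1, 2, 3], [1, 0, 1, 0])
```

Note the double sum in the definition collapses to a single pass because the inner sum is just the running count of relevant items.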
Next, for integer-valued affinities, Discounted Cumulative
Gain (DCG) is defined as
\[
DCG(R) = \sum_{k=1}^{|S|} G(A_q(R_k))D(k),
\]
where \( G(a) = 2^a - 1, \ D(k) = \frac{1}{\log_2(k + 1)} \).
\( G \) and \( D \) are called gain and (logarithmic) discount, respectively. Normalized DCG (NDCG) divides DCG by its maximum possible value, ensuring a range of \([0, 1]\):
\[
NDCG(R) = \frac{DCG(R)}{\max_{R'} DCG(R')}.
\]
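The gain/discount structure of (N)DCG can likewise be sketched in a few lines; the affinity encoding is our own illustration, and the ideal ranking is obtained by sorting on affinity:

```python
import math

def dcg(ranking, affinity):
    """DCG with gain G(a) = 2^a - 1 and discount D(k) = 1/log2(k + 1)."""
    return sum((2 ** affinity[i] - 1) / math.log2(k + 1)
               for k, i in enumerate(ranking, start=1))

def ndcg(ranking, affinity):
    """Normalize by the maximum DCG, attained by the affinity-sorted ranking."""
    ideal = sorted(ranking, key=lambda i: -affinity[i])
    best = dcg(ideal, affinity)
    return dcg(ranking, affinity) / best if best > 0 else 0.0
```

The exponential gain makes high-affinity items dominate, while the logarithmic discount makes early ranks matter most, which is why NDCG suits multi-level affinities.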
### 3.2. Tie-Awareness in Hashing
When evaluating information retrieval systems, special attention is required when there exist ties in the distances [3, 28]. In this case, the ranking \( R \) is not unique as the tied items can be ordered arbitrarily, and the tie-breaking strategy may have a sizable impact on the result. We have given an example in Fig. 1. Surprisingly, we found that current ranking-based hashing evaluation protocols usually do not take tie-breaking into account, which could result in ambiguous comparisons or even unfair exploitation. Perhaps more importantly, ties render the formulation of direct optimization unclear: what tie-breaking strategy should we assume when using AP or NDCG as optimization objectives? Thus, we believe that it is important to seek tie-aware evaluation metrics for hashing.
Rather than picking a fixed tie-breaking strategy or relying on randomization, the tie-aware solution that we propose is to average the value of the ranking metric over all possible permutations of tied items. This solution is appealing in several ways: it is deterministic, it is unambiguous and cannot be exploited, and it reduces to the ordinary version when there are no ties. However, there is one caveat: generating all permutations for \( n \) tied items requires \( O(n!) \) time, which is super-exponential and prohibitive. Fortunately, [28] observes that the average can be computed implicitly for commonly used ranking metrics, and gives their tie-aware versions in closed form. Based on this result, we further describe how to efficiently compute tie-aware ranking metrics by exploiting the structure of the Hamming distance.
We focus on AP and NDCG, and denote the tie-aware versions of AP and (N)DCG as \( AP_T \) and \( (N)DCG_T \), respectively. First, we define some notation. With integer-valued Hamming distances, we redefine the ranking \( R \) to be a collection of \((b + 1)\) “ties”, i.e. \( R = \{R^{(0)}, \ldots, R^{(b)}\} \), where \( R^{(d)} = \{i \,|\, d_h(x_q, x_i) = d\} \) is the set of retrievals having Hamming distance \( d \) to the query. We define a set of discrete histograms conditioned on affinity values, \((n_{0,v}, \ldots, n_{b,v})\), where \( n_{d,v} = |R^{(d)} \cap \{i \,|\, \mathcal{A}_q(i) = v\}|\), \( \forall v \in V \), and their cumulative sums \((N_{0,v}, \ldots, N_{b,v})\), where \( N_{d,v} = \sum_{j \leq d} n_{j,v} \). We also define the total histograms \( n_d = \sum_{v \in V} n_{d,v} \) with cumulative sums \( N_d = \sum_{j \leq d} n_j \).
Next, Proposition 1 gives the closed forms of \( AP_T \) and \( DCG_T \); the proof is given in the appendix.
**Time Complexity Analysis.** Let \(|S| = N\). Given the Hamming distances \(\{d_h(x_q, x) | x \in S\}\), the first step is to generate the ranking \( R \), or populate the ties \(\{R^{(d)}\}\). This step is essentially the counting sort for integers, which has \(O(bN)\) time complexity. Computing either \(AP_T\) or \(DCG_T\) then takes \(O(\sum_d n_d) = O(N)\) time, which makes the total time complexity \(O(bN)\). In our formulation, the number of bits \( b \) is a constant, and therefore the complexity is linear in \( N \). In contrast, for real-valued distances, sorting generally takes \(O(N \log N)\) time and is the dominating factor.
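The counting-sort step can be sketched as follows (a minimal illustration with names of our own choosing; `b` is the code length). It buckets database items by Hamming distance to the query and builds the per-affinity histograms $n_{d,v}$ with their cumulative sums, all in $O(bN)$ time:

```python
from itertools import accumulate

def build_ties(hamming_dists, affinities, b):
    """Bucket items into ties R^(d) and build histograms n_{d,v}, N_{d,v}."""
    ties = [[] for _ in range(b + 1)]          # R^(d): items at distance d
    for i, d in enumerate(hamming_dists):
        ties[d].append(i)
    values = sorted(set(affinities))
    n = {v: [0] * (b + 1) for v in values}     # n_{d,v}: counts per (distance, affinity)
    for d, bucket in enumerate(ties):
        for i in bucket:
            n[affinities[i]][d] += 1
    N = {v: list(accumulate(n[v])) for v in values}   # N_{d,v}: cumulative sums
    return ties, n, N

# toy query with 5 database items, 3-bit codes
ties, n, N = build_ties([0, 2, 2, 1, 2], [1, 0, 1, 1, 0], b=3)
```

No comparison-based sort is performed; with $b$ fixed, the whole construction is linear in the database size.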
For the normalized \(NDCG_T\), the normalizing factor is unaffected by ties, but computing it still requires sorting the gain values in descending order. Under the assumption that the set of affinity values \(V\) consists of non-negative integers, the number of unique gain values is \(|V|\), and counting sort can be applied in \(O(|V|N)\) time. The total time complexity is thus \(O((b + |V|)N)\), which is also linear in \(N\) provided that \(|V|\) is known. We note that counting sort on Hamming distances is also used by Lin et al. [22] to speed up loss-augmented inference for their NDCG surrogate loss.

---

**Proposition 1.** Both \(AP_T\) and \(DCG_T\) decompose additively over the ties. For \(V = \{0, 1\}\), let \(n^+_d \triangleq n_{d,1}\), \(N^+_d \triangleq N_{d,1}\), and \(N^+ = \sum_d n^+_d\); then the contribution of each tie \(R^{(d)}\) to \(AP_T\) is
\[
AP_T(R^{(d)}) = \frac{n^+_d}{n_d N^+} \sum_{t=N_{d-1}+1}^{N_d} \frac{1}{t}\left[N^+_{d-1} + \frac{(t - N_{d-1} - 1)(n^+_d - 1)}{n_d - 1} + 1\right].
\]
For \(DCG_T\), the contribution of \(R^{(d)}\) is
\[
DCG_T(R^{(d)}) = \sum_{i \in R^{(d)}} \frac{G(\mathcal{A}_q(i))}{n_d} \sum_{t=N_{d-1}+1}^{N_d} D(t) = \sum_{v \in V} \frac{G(v)n_{d,v}}{n_d} \sum_{t=N_{d-1}+1}^{N_d} D(t).
\]
**Proof.** See appendix. \(\square\)
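To make the averaging-over-permutations semantics concrete, the following sketch (our own code, not the authors') implements the per-tie $AP_T$ term from Proposition 1 and checks it against explicit enumeration of within-tie orderings; by linearity of expectation the two agree exactly:

```python
import itertools

def ap_tie_aware(buckets):
    """AP_T via the closed form: buckets[d] lists the binary affinities
    of items tied at Hamming distance d."""
    Np = sum(sum(bk) for bk in buckets)        # total number of relevant items
    total, N_prev, Np_prev = 0.0, 0, 0
    for bk in buckets:
        nd, ndp = len(bk), sum(bk)
        if nd == 0:
            continue
        s = 0.0
        for t in range(N_prev + 1, N_prev + nd + 1):
            # expected count of relevant tie-mates ranked before position t
            frac = (t - N_prev - 1) * (ndp - 1) / (nd - 1) if nd > 1 else 0.0
            s += (Np_prev + frac + 1) / t
        total += ndp / (nd * Np) * s
        N_prev += nd
        Np_prev += ndp
    return total

def ap_plain(flat):
    """Ordinary AP of one fully ordered list of binary affinities."""
    Np = sum(flat); hits = 0; ap = 0.0
    for k, a in enumerate(flat, 1):
        if a:
            hits += 1; ap += hits / k
    return ap / Np

def ap_brute(buckets):
    """Average of ordinary AP over every within-tie permutation (O(n!))."""
    perms = [list(itertools.permutations(bk)) for bk in buckets]
    vals = [ap_plain([a for bk in combo for a in bk])
            for combo in itertools.product(*perms)]
    return sum(vals) / len(vals)
```

For instance, a single tie containing one relevant and one irrelevant item yields $AP_T = (1 + 1/2)/2 = 0.75$, the average of the two possible orderings.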
### 3.3. The Learning to Rank View
Since we focus on optimizing ranking metrics, our work has connections to learning to rank [24]. Many supervised hashing formulations use loss functions defined on pairs or triplets of training examples, which correspond to *pointwise* and *pairwise* approaches in learning to rank terminology. We collectively refer to these as local ranking losses. Since we optimize evaluation metrics defined on a ranked list, our approach falls into the *listwise* category, and it is well-known [9, 40, 44] that listwise ranking approaches are generally superior to pointwise and pairwise approaches.
We further note that there exists a mismatch between optimizing local ranking losses and optimizing for evaluation performance. This is because listwise evaluation metrics are *position-sensitive*: errors made on individual pairs/triplets impact results differently depending on the position in the list, and more so near the top. To address this mismatch, local ranking methods often need nontrivial weighting or sampling heuristics to focus on errors made near the top. In fact, the sampling is especially crucial in triplet-based methods, e.g. [22, 42, 48], since the set of possible triplets is of size \(O(N^3)\) for \(N\) training examples, which can be prohibitive to enumerate. Triplet-based methods are also popular in the metric learning literature, and it is similarly observed [43] that careful sampling and weighting are key to stable learning. In contrast, we directly optimize listwise ranking metrics, without requiring sampling or weighting heuristics: the minibatches are sampled at random, and no weighting on training instances is used.
## 4. Optimizing Tie-Aware Ranking Metrics
In this section, we describe our approach to optimizing tie-aware ranking metrics. For discrete hashing, such optimization is NP-hard, since it involves combinatorial search over all configurations of the binary bits. Instead, we pursue a relaxation approach: we apply continuous relaxations to the discrete optimization problems, which enables gradient-based training of deep neural networks.
#### 4.1. Continuous Relaxations
Our continuous relaxation needs to address two types of discrete variables. First, as is universal in hashing formulations, the bits in the hash code are binary. Second, the tie-aware metrics involve integer-valued histogram bin counts \(\{n_{d,v}\}\).
We first tackle the binary bits. Commonly, bits in the hash code are generated by a thresholding operation using the sgn function,
\[
\Phi(x) = (\phi_1(x), \ldots, \phi_b(x)),
\]
\[
\phi_i(x) = \text{sgn}(f_i(x; w)) \in \{-1, 1\}, \forall i,
\]
where in our case \(f_i\) are neural network activations, parameterized by \(w\). We smoothly approximate the sgn function using the tanh function, which is a standard technique in hashing [4, 8, 20, 25, 41, 42]:
\[
\phi_i(x) \approx \hat{\phi}_i(x) = \tanh(\alpha f_i(x; w)) \in (-1, 1).
\]
The constant \(\alpha\) is a scaling parameter.
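The relaxation can be illustrated in a few lines (our own sketch; we additionally assume the standard inner-product form of the Hamming distance for $\pm 1$ codes, $d(u, v) = (b - u^\top v)/2$, which becomes real-valued under relaxed codes):

```python
import math

def phi_hat(f, alpha):
    """Relaxed bit tanh(alpha * f): lies in (-1, 1), approaches sgn(f) as alpha grows."""
    return math.tanh(alpha * f)

def hamming_relaxed(u, v):
    """Hamming distance for ±1 codes via inner product: (b - <u, v>) / 2."""
    return 0.5 * (len(u) - sum(ui * vi for ui, vi in zip(u, v)))
```

Increasing the scaling parameter $\alpha$ sharpens the approximation: `phi_hat(0.3, 100)` is numerically indistinguishable from the discrete bit $+1$, while small $\alpha$ keeps the surrogate smooth and easy to optimize.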
As a result of this relaxation, both the hash mapping and the distance function (1) are now real-valued, and will be denoted \(\hat{\Phi}\) and \(\hat{d}_\Phi\), respectively. The remaining discreteness is from the histogram bin counts \(\{n_{d,v}\}\). We also relax them into real-valued “soft histograms” \(\{c_{d,v}\}\) (described below), whose cumulative sums are denoted \(\{C_{d,v}\}\). However, we face another difficulty: the definitions of AP_T (6) and DCG_T (7) both involve a finite sum with lower and upper limits \(N_{d-1} + 1\) and \(N_d\), which themselves are variables to be relaxed. We approximate these finite sums by continuous integrals, removing the second source of discreteness. We outline the results in Proposition 2, and leave the proof and error analysis to the appendix.
Importantly, both relaxations have closed-form derivatives. The differentiation for AP_T (11) is straightforward, while differentiating DCG_T removes the integral in (12).
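For intuition on the integral appearing in the DCG_T relaxation (Proposition 2), the following sketch compares the exact per-tie discount sum with $\ln 2 \int dt/\ln t$ evaluated by a simple trapezoidal rule (the quadrature choice is ours, purely for illustration):

```python
import math

def discount_sum(N_prev, N_d):
    """Exact tie contribution: sum of D(t) = 1/log2(t+1) over tie positions."""
    return sum(1.0 / math.log2(t + 1) for t in range(N_prev + 1, N_d + 1))

def discount_integral(C_prev, C_d, steps=1000):
    """Relaxed counterpart: ln(2) * integral of dt/ln(t) over [C_prev+1, C_d+1],
    evaluated here with the trapezoidal rule."""
    a, b = C_prev + 1.0, C_d + 1.0
    h = (b - a) / steps
    xs = [a + k * h for k in range(steps + 1)]
    ys = [1.0 / math.log(x) for x in xs]
    trap = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return math.log(2) * trap
```

On a tie spanning positions 5 through 10, the integral approximation agrees with the exact sum to within a few percent, and unlike the finite sum it is differentiable in its (relaxed) limits.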
#### 4.2. End-to-End Learning
We perform end-to-end learning with gradient ascent. First, as mentioned above, the continuous relaxations AP_T and DCG_T have closed-form partial derivatives with respect to the soft histograms $\{c_{d,v}\}$.

---

**Proposition 2.** The continuous relaxations of AP_T and DCG_T, denoted as AP_T and DCG_T respectively, are as follows:
\[
AP_T(R^{(d)}) = \frac{c_d^+(c_d^+ - 1)}{N^+(c_d - 1)} + \frac{c_d^+}{N^+ c_d} \left[ C_{d-1}^+ + 1 - \frac{c_d^+ - 1}{c_d - 1} (C_{d-1} + 1) \right] \ln \frac{C_d}{C_{d-1}},
\]
\[
DCG_T(R^{(d)}) = \ln 2 \sum_{v \in V} \frac{G(v)c_{d,v}}{c_d} \int_{C_{d-1} + 1}^{C_d + 1} \frac{dt}{\ln t}.
\]
**Proof.** See appendix. \(\square\)

---

Next, we consider differentiating the histogram entries. Note that before relaxation, the discrete histogram $(n_{0,v}, \ldots, n_{b,v})$ for each $v \in V$ is constructed as follows:
$$n_{d,v} = \sum_{x_i | A_q(i) = v} 1[d_\Phi(x_q, x_i) = d], \quad d = 0, \ldots, b.$$
(13)
To relax $n_{d,v}$ into $c_{d,v}$, we employ a technique from [4, 37], where the binary indicator $1[\cdot]$ is replaced by a differentiable function $\delta(d_\Phi(x_q, x_i), d)$ with easy-to-compute gradients. Specifically, $\delta$ linearly interpolates $\hat{d}_\Phi(x_q, x_i)$ into the $d$-th bin, using a triangular kernel of half-width $\Delta > 0$:
$$\forall z \in \mathbb{R}, \delta(z, d) = \begin{cases}
1 - \frac{|z - d|}{\Delta}, & |z - d| \leq \Delta, \\
0, & \text{otherwise}.
\end{cases}$$
(14)
Note that $\delta$ approaches the indicator function as $\Delta \to 0$. We now have the soft histogram $c_{d,v}$ as
$$c_{d,v} = \sum_{x_i | A_q(i) = v} \delta(\hat{d}_\Phi(x_q, x_i), d),$$
(15)
and we differentiate $c_{d,v}$ using the chain rule, e.g.
$$\frac{\partial c_{d,v}}{\partial \hat{\Phi}(x_q)} = \sum_{x_i | \mathcal{A}_q(i) = v} \frac{\partial \delta(\hat{d}_\Phi(x_q, x_i), d)}{\partial \hat{d}_\Phi(x_q, x_i)} \cdot \left( -\frac{\hat{\Phi}(x_i)}{2} \right).$$
(16)
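A minimal sketch of the soft histogram (names are ours). Note that with $\Delta = 1$ each item distributes exactly unit mass between its two nearest bins, and for exactly integer distances with small $\Delta$ the soft counts recover the discrete histogram:

```python
def delta(z, d, Delta):
    """Triangular interpolation kernel (Eq. 14): 1 at z == d,
    linear falloff, zero outside |z - d| <= Delta."""
    r = abs(z - d)
    return 1.0 - r / Delta if r <= Delta else 0.0

def soft_histogram(dists, b, Delta):
    """Soft bin counts c_d (Eq. 15) from relaxed distances in [0, b]."""
    return [sum(delta(z, d, Delta) for z in dists) for d in range(b + 1)]
```

The kernel is piecewise linear, so its derivative with respect to the relaxed distance is $\pm 1/\Delta$ inside the support and $0$ outside, which is what makes the gradient in (16) cheap to evaluate.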
The next and final step is to back-propagate gradients to the parameters of the relaxed hash mapping $\hat{\Phi}$, which amounts to differentiating the tanh function.
As shown in Fig. 2, we train our models using minibatch-based stochastic gradient ascent. Within a minibatch, each example is retrieved against the rest of the minibatch. That is, each example in a minibatch of size $M$ is used as the query $x_q$ once, and participates in the database for some other example $M - 1$ times. Then, the objective is averaged over the $M$ queries.
## 5. Experiments
### 5.1. Experimental Setup
We conduct experiments on image retrieval datasets that are commonly used in the hashing literature: CIFAR-10 [16], NUS-WIDE [13], 22K LabelMe [31], and ImageNet100 [8]. Each dataset is split into a test set and a database, and examples from the database are used in training. At test time, queries from the test set are used to perform Hamming ranking on the database, and the performance metric is averaged over the test set.
- **CIFAR-10** is a canonical benchmark for image classification and retrieval, with 60K single-labeled images from 10 classes. Following [42], we consider two experimental settings. In the first setting, the test set is constructed with 100 random images from each class (total: 1K); the rest is used as database, and 500 images per class are used for training (total: 5K). The second setting uses the standard 10K/50K split and the entire database is used in training.
- **NUS-WIDE** is a multi-label dataset with 270K Flickr images. For the database, we use a subset of 196K images associated with the most frequent 21 labels as in [20, 42]. 100 images per label are sampled to construct a test set of size 2.1K, and the training set contains 500 images per label (total: 10.5K).
- **LabelMe** is an unlabeled dataset of 22K images. As in [7], we randomly split LabelMe into a test set of size 2K and database of 20K. We sample 5K examples from the database for training.
- **ImageNet100** is a subset of ImageNet, containing all the images from 100 classes. We use the same setup as in [8]: 130 images per class, totaling 130K images, are sampled for training, and all images in the selected classes from the ILSVRC 2012 validation set are used as queries.
Retrieval-based evaluation of supervised hashing was recently put into question by [32], which points out that for multi-class datasets, binary encoding of classifier outputs is already a competitive solution. While this is an important point, deriving pairwise affinities from multi-class label agreement is a special case in our formulation. As mentioned in Sec. 3.1, our formulation uses a general pairwise affinity oracle $\mathcal{A}$, which may or may not be derived from labels, and can be either binary or multi-level. In fact, the datasets we consider range from multi-class/single-label (CIFAR-10, ImageNet100) to multi-label (NUS-WIDE) and unlabeled (LabelMe), and only the first case can be addressed by multi-class classification. For multi-level affinities, we also propose a new evaluation protocol using NDCG.
We term our method TALR (Tie-Aware Learning to Rank), and compare it against a range of classical and state-of-the-art hashing methods. Due to the vast hashing literature, an exhaustive comparison is unfortunately not feasible. Focusing on the learning to rank aspect, we select representative methods from all three categories:
- **Pointwise (pair-based).** Methods that define loss functions on instance pairs: Binary Reconstructive Embeddings (BRE) [18], Fast Supervised Hashing (FastHash) [23], Hashing using Auxiliary Coordinates (MACHash) [30], Deep Pair-Supervised Hashing (DPSH) [20], and Hashing by Continuation (HashNet) [8].
- **Pairwise (triplet-based).** We include a recent method, Deep Triplet-Supervised Hashing (DTSH) [42].
- **Listwise (list-based).** We compare to two listwise ranking methods: Structured Hashing (StructHash) [22] which optimizes an NDCG surrogate, and Hashing with Mutual Information (MIHash) [4] which optimizes mutual information as a ranking surrogate for binary affinities.
These selected methods include recent ones that achieve state-of-the-art results on CIFAR-10 (MIHash, DTSH), NUS-WIDE (DTSH, HashNet) and ImageNet100 (HashNet).
Since tie-aware evaluation of Hamming ranking performance has not been reported in the hashing literature, we re-train and evaluate all methods using publicly available implementations.
### 5.2. AP Optimization
We evaluate AP optimization on the three labeled datasets, CIFAR-10, NUS-WIDE, and ImageNet100. As we mentioned earlier, for labeled data, affinities can be inferred from label agreements. Specifically, in CIFAR-10 and ImageNet100, two examples are neighbors (i.e. have pairwise affinity 1) if they share the same class label. In the multilabeled NUS-WIDE, two examples are neighbors if they share at least one label.
#### 5.2.1 CIFAR-10 and NUS-WIDE
We first carry out AP optimization experiments on the two well-studied datasets, CIFAR-10 and NUS-WIDE. For these experiments, we perform finetuning using the ImageNet-pretrained VGG-F network [12], which is used in DPSH and DTSH, two recent top-performing methods. For methods that are not amenable to end-to-end training, we train them on fc7-layer features from VGG-F. On CIFAR-10, we compare all methods in the first setting, and in the second setting we compare the end-to-end methods: DPSH, DTSH, MIHash, and ours. We do not include HashNet as it uses a different network architecture (AlexNet), but will compare to it later on ImageNet100.
We present AP optimization results in Table 1. By optimizing the relaxation of $AP_T$ in an end-to-end fashion, our method (TALR-AP) achieves the new state-of-the-art in AP on both datasets, outperforming all the pair-based and triplet-based methods by significant margins. Compared to listwise ranking solutions, TALR-AP outperforms StructHash significantly by taking advantage of deep learning, and outperforms MIHash by matching the training objective to the evaluation metric. A side note is that for NUS-WIDE, it is customary in previous work [20, 42] to report AP evaluated at maximum cutoff of 5K (AP@5K), since ranking the full database is inefficient using general-purpose sorting algorithms. However, focusing on the top of the ranking overestimates the true AP, as seen in Table 1. Using counting sort, we are able to evaluate $AP_T$ on the full database efficiently, and TALR-AP also outperforms other methods in terms of AP@5K.
#### 5.2.2 ImageNet100
For ImageNet100 experiments, we closely follow the setup in HashNet [8] and fine-tune the AlexNet architecture [17] pretrained on ImageNet. Due to space limitations, we report comparisons against recent state-of-the-art methods on ImageNet100. The first competitor is HashNet, which is empirically superior to a wide range of classical and recent methods, and was previously the state-of-the-art method on ImageNet100. We also compare to MIHash, as it is the second-best method on CIFAR-10 and NUS-WIDE in the previous experiment. As in [8], the minibatch size is set to 256 for all methods, and the learning rates for the pre-trained convolutional and fully connected layers are scaled down, since the model is fine-tuned on the same dataset that it was originally trained on. AP at cutoff 1000 (AP@1000) is used as the evaluation metric.
ImageNet100 results are summarized in Table 2. TALR-AP outperforms both competing methods, and the improvement is especially significant with short hash codes (16 and 32 bits). This indicates that our direct optimization approach produces better compact binary representations that preserve desired rankings. The state-of-the-art performance with compact codes has important implications for cases where memory and storage resources are restricted (e.g., mobile applications), and for indexing large-scale databases.

| Method | 12 Bits | 24 Bits | 32 Bits | 48 Bits | S1 (AP$_T$) | AP$_T$ |
|-----------------|---------|---------|---------|---------|-------------|--------|
| BRE [18] | 0.361 | 0.448 | 0.502 | 0.533 | 0.561 | 0.578 |
| MACHash [30] | 0.628 | 0.707 | 0.726 | 0.734 | 0.361 | 0.361 |
| FastHash [23] | 0.678 | 0.729 | 0.742 | 0.757 | 0.646 | 0.686 |
| StructHash [22] | 0.664 | 0.693 | 0.691 | 0.700 | 0.639 | 0.645 |
| DPSH [20]* | 0.720 | 0.757 | 0.757 | 0.767 | 0.658 | 0.674 |
| DTSH [42] | 0.725 | 0.773 | 0.781 | 0.810 | 0.660 | 0.700 |
| MIHash [4] | 0.687 | 0.775 | 0.786 | 0.822 | 0.652 | 0.693 |
| TALR-AP | **0.732** | **0.789** | **0.800** | **0.826** | **0.709** | **0.734** |

* Trained using parameters recommended by authors of DTSH.

Table 1: AP comparison on CIFAR-10 and NUS-WIDE with VGG-F architecture. On CIFAR-10, we compare all methods in the first setting (S1), and deep learning methods in the second (S2). We report the tie-aware AP$_T$, and additionally AP@5K for NUS-WIDE. TALR-AP optimizes tie-aware AP using stochastic gradient ascent, and achieves state-of-the-art performance.

| Method | 16 Bits | 24 Bits | 32 Bits | 48 Bits | S2 (AP$_T$) | AP@5K |
|-----------------|---------|---------|---------|---------|-------------|-------|
| DPSH [20]* | 0.908 | 0.909 | 0.917 | 0.932 | 0.758 | 0.818 |
| DTSH [42] | 0.916 | 0.924 | 0.927 | 0.934 | 0.773 | 0.820 |
| MIHash [4] | 0.929 | 0.933 | 0.938 | 0.942 | 0.767 | 0.809 |
| TALR-AP | **0.939** | **0.941** | **0.943** | **0.945** | **0.795** | **0.848** |

Table 2: AP@1000 results on ImageNet100 with AlexNet. TALR-AP outperforms state-of-the-art solutions using mutual information [4] and continuation methods [8].
### 5.3. NDCG Optimization
We evaluate NDCG optimization with a multi-level affinity setup, i.e., the set of affinity values $\mathcal{V}$ is a finite set of non-negative integers. Multi-level affinities are common in information retrieval tasks, and offer more fine-grained specification of the desired structure of the learned Hamming space. To our knowledge, this setup has not been considered in the hashing literature.
In the multi-label NUS-WIDE dataset, we define the affinity value between two examples as the number of labels they share, and keep other settings the same as in the AP experiment. For the unlabeled LabelMe dataset, we derive affinities by thresholding the Euclidean distances between examples. Inspired by an existing binary affinity setup [7] that defines neighbors as having Euclidean distance within the top 5% on the training set, we use four thresholds \{5%, 1%, 0.2%, 0.1%\} and assign affinity values \{1, 2, 5, 10\}. This emphasizes assigning high ranks to the closest neighbors in the original feature space. We learn shallow models on precomputed GIST features on LabelMe. For gradient-based methods, this means using linear hash functions, i.e., $f_i(x; w) = w^T x$, in (9). For methods that are not designed to use multi-level affinities (FastHash, MACHash, DPSH, MIHash), we convert the affinities into binary values; this reduces to the standard binary affinity setup on both datasets.
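The multi-level affinity construction on unlabeled data can be sketched as follows (the cutoff values below are made-up placeholders standing in for the dataset's distance percentiles; the mapping of percentile tiers to affinity values {1, 2, 5, 10} follows the text):

```python
def affinity_from_distance(dist, thresholds):
    """Map a Euclidean distance to a multi-level affinity value.
    `thresholds` is a list of (cutoff, value) pairs, tightest cutoff first."""
    for cutoff, value in thresholds:
        if dist <= cutoff:
            return value
    return 0  # non-neighbor

# hypothetical cutoffs for the top {0.1%, 0.2%, 1%, 5%} distance percentiles
levels = [(0.1, 10), (0.2, 5), (1.0, 2), (5.0, 1)]
```

Because the tightest tier carries the largest affinity (and hence the largest NDCG gain), this construction emphasizes assigning high ranks to the closest neighbors in the original feature space.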
We give NDCG results in Table 3. Again, our method with the tie-aware NDCG objective (TALR-NDCG) outperforms all competing methods on both datasets. Interestingly, on LabelMe, where all methods are restricted to learning shallow models on GIST features, we observe slightly different trends compared to other datasets. For example, without learning deep representations, DPSH and DTSH appear to perform less competitively, indicating a mismatch between their objectives and the evaluation metric. The closest competitors to TALR-NDCG on LabelMe are indeed the two listwise ranking methods: StructHash, which optimizes an NDCG surrogate using boosted decision trees, and MIHash, which is designed for binary affinities. TALR-NDCG outperforms both methods, and notably does so with linear hash functions, which have lower learning capacity compared to StructHash's boosted decision trees. This highlights the benefit of our direct optimization formulation.
### 5.4. Effects of Tie-Breaking
We lastly discuss the effect of tie-breaking in evaluating hashing algorithms. As mentioned in Sec. 3.2, tie-breaking is an uncontrolled parameter in current evaluation protocols, which can affect results and even be exploited. To demonstrate this, we consider, for example, the AP experiment in CIFAR-10's first setting, presented in Sec. 5.2. For each method included in this experiment, we plot the range of test set mAP spanned by all possible tie-breaking strategies. As can be seen in Fig. 3, the ranges corresponding to different methods generally overlap; therefore, without controlling for tie-breaking, relative performance comparison between different methods is essentially ambiguous. The ranges shrink as code length increases, since the number of ties generally decreases when there are more bins in the histogram.

| Method | NUS-WIDE 16 Bits | NUS-WIDE 32 Bits | NUS-WIDE 48 Bits | NUS-WIDE 64 Bits | LabelMe 16 Bits | LabelMe 32 Bits | LabelMe 48 Bits | LabelMe 64 Bits |
|-----------------|------------------|------------------|------------------|------------------|----------------|----------------|----------------|----------------|
| BRE [18] | 0.805 | 0.817 | 0.827 | 0.834 | 0.807 | 0.848 | 0.871 | 0.880 |
| MACHash [30] | 0.821 | 0.821 | 0.821 | 0.821 | 0.683 | 0.683 | 0.683 | 0.687 |
| FastHash [23] | 0.885 | 0.896 | 0.899 | 0.902 | 0.844 | 0.868 | 0.855 | 0.864 |
| DPSH [20] | 0.895 | 0.905 | 0.909 | 0.909 | 0.844 | 0.856 | 0.871 | 0.874 |
| DTSH [42] | 0.896 | 0.905 | 0.911 | 0.913 | 0.838 | 0.852 | 0.859 | 0.862 |
| StructHash [22] | 0.889 | 0.893 | 0.894 | 0.898 | 0.857 | 0.888 | 0.904 | 0.915 |
| MIHash [4] | 0.886 | 0.903 | 0.909 | 0.912 | 0.860 | 0.889 | 0.907 | 0.914 |
| TALR-NDCG | **0.903** | **0.910** | **0.916** | **0.927** | **0.866** | **0.895** | **0.908** | **0.917** |

* Evaluated on the 5K training subset due to kernel-based formulation.

Table 3: NDCG comparison on NUS-WIDE (VGG-F architecture) and LabelMe (shallow models on GIST features). TALR-NDCG optimizes tie-aware NDCG using stochastic gradient ascent, and consistently outperforms competing methods.

Figure 3: Effects of tie-breaking: we plot the ranges of test-time mAP values spanned by all possible tie-breaking strategies, for all methods considered in the CIFAR-10 experiment (first setting). Horizontal axis: mAP. Black dots: values of tie-aware AP_T. Without controlling for tie-breaking, relative performance comparison between different methods is essentially ambiguous. The ambiguity is eliminated by tie-awareness.
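The endpoints of these tie-breaking ranges have a simple closed form for AP: within each tie, rank the relevant items first (best case) or last (worst case). A short sketch of our own for computing them:

```python
def ap_list(flat):
    """Plain (tie-unaware) AP of a fully ordered list of binary affinities."""
    n_pos = sum(flat)
    hits, ap = 0, 0.0
    for k, a in enumerate(flat, start=1):
        if a:
            hits += 1
            ap += hits / k
    return ap / n_pos

def ap_range(buckets):
    """(worst, best) AP over all tie-breaking strategies; buckets[d] holds
    the binary affinities of items tied at Hamming distance d."""
    best = [a for bucket in buckets for a in sorted(bucket, reverse=True)]
    worst = [a for bucket in buckets for a in sorted(bucket)]
    return ap_list(worst), ap_list(best)
```

The tie-aware AP_T, being the average over all within-tie permutations, always lies inside this interval; when every tie is a singleton, the interval collapses to a point.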
Current hashing methods usually compute test-time AP and NDCG using random tie-breaking and general-purpose sorting algorithms. Interestingly, in all of our experiments, we observe that this produces values very close to the tie-aware AP_T and NDCG_T. The reason is that with a randomly ordered database, averaging the tie-unaware metric over a sufficiently large test set behaves similarly to the tie-aware solution of averaging over all permutations. Therefore, the results reported in the current literature are indeed quite fair, and so far we have found no evidence of exploitation of tie-breaking strategies. Still, we recommend using tie-aware ranking metrics in evaluation, as they completely eliminate ambiguity, and counting sort on Hamming distances is much more efficient than general-purpose sorting.
We note that although random tie-breaking is an approximation to tie-awareness at test time, it does not answer the question of how to optimize the ranking metrics during training. Our original motivation is to optimize ranking metrics for hashing, and the existence of closed-form tie-aware ranking metrics is what makes direct optimization feasible.
## 6. Conclusion
We have proposed a new approach to hashing for nearest neighbor retrieval, with an emphasis on directly optimizing evaluation metrics used at test-time. A study into the commonly used retrieval by Hamming ranking setup led us to consider the issue of ties, and we advocate for using tie-aware versions of ranking metrics. We then make the novel contribution of optimizing tie-aware ranking metrics for hashing, focusing on the important special cases of AP and NDCG. To tackle the resulting discrete and NP-hard optimization problems, we derive their continuous relaxations. Then, we perform end-to-end stochastic gradient ascent with deep neural networks. This results in the new state-of-the-art for common image retrieval benchmarks.
## Acknowledgements
The authors would like to thank Qinxun Bai, Peter Gacs, and Dora Erdos for helpful discussions. This work is supported in part by a BU IGNITION award, NSF grant 1029430, and gifts from Nvidia.
## References
[1] Alexandr Andoni and Ilya Razenshteyn. Optimal data-dependent hashing for approximate near neighbors. In *Proc. ACM Symposium on Theory of Computing (STOC)*, 2015.
[2] Christopher J. Burges, Robert Ragno, and Quoc V. Le. Learning to rank with nonsmooth cost functions. In *Advances in Neural Information Processing Systems (NIPS)*, 2007.
[3] Guillaume Cabanac, Gilles Hubert, Mohand Boughanem, and Claude Chrisment. Tie-breaking bias: Effect of an uncontrolled parameter on information retrieval evaluation. In *International Conference of the Cross-Language Evaluation Forum*, 2010.
[4] Fatih Cakir, Kun He, Sarah Adel Bargal, and Stan Sclaroff. MIHash: Online hashing with mutual information. In *Proc. IEEE International Conference on Computer Vision (ICCV)*, 2017.
[5] Fatih Cakir, Kun He, Sarah Adel Bargal, and Stan Sclaroff. Hashing with mutual information. *arXiv preprint arXiv:1803.00974*, 2018.
[6] Fatih Cakir and Stan Sclaroff. Supervised hashing with error correcting codes. In *Proc. ACM International Conference on Multimedia*. ACM, 2014.
[7] Fatih Cakir and Stan Sclaroff. Adaptive hashing for fast similarity search. In *Proc. IEEE International Conference on Computer Vision (ICCV)*, 2015.
[8] Zhangjie Cao, Mingsheng Long, Jianmin Wang, and Philip S Yu. HashNet: Deep learning to hash by continuation. In *Proc. IEEE International Conference on Computer Vision (ICCV)*, 2017.
[9] Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. Learning to rank: from pairwise approach to listwise approach. In *Proc. International Conference on Machine Learning (ICML)*, 2007.
[10] Soumen Chakrabarti, Rajiv Khanna, Uma Sawant, and Chiru Bhattacharyya. Structured learning for non-smooth ranking losses. In *ACM SIGKDD Conference on Knowledge Discovery and Data Mining*, 2008.
[11] Olivier Chapelle and Mingrui Wu. Gradient descent optimization of smoothed information retrieval metrics. *Information Retrieval*, 13(3):216–235, 2010.
[12] Ken Chatfield, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In *Proc. British Machine Vision Conference (BMVC)*, 2014.
[13] Tat-Seng Chua, Jinhui Tang, Richang Hong, Haojie Li, Zhiping Luo, and Yan-Tao Zheng. NUS-WIDE: A real-world web image database from National University of Singapore. In *Proc. ACM International Conference on Image and Video Retrieval (CIVR)*, 2009.
[14] Aristides Gionis, Piotr Indyk, and Rajeev Motwani. Similarity search in high dimensions via hashing. In *Proc. International Conference on Very Large Data Bases (VLDB)*, 1999.
[15] Maurice G Kendall. *Rank correlation methods*. Griffin, 1948.
[16] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009.
[17] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In *Advances in Neural Information Processing Systems (NIPS)*, 2012.
[18] Brian Kulis and Trevor Darrell. Learning to hash with binary reconstructive embeddings. In *Advances in Neural Information Processing Systems (NIPS)*, 2009.
[19] Andrey Kustarev, Yury Ustinovsky, Yury Logachev, Evgeny Grechinikov, Ilya Segalovich, and Pavel Serdyukov. Smoothing NDCG metrics using tied scores. In *Proc. ACM CIKM*, 2011.
[20] Wu-Jun Li, Sheng Wang, and Wang-Cheng Kang. Feature learning based deep supervised hashing with pairwise labels. In *Proc. International Joint Conference on Artificial Intelligence (IJCAI)*, 2016.
[21] Daryl Lim and Gert Lanckriet. Efficient learning of mahalanobis metrics for ranking. In *Proc. International Conference on Machine Learning (ICML)*, 2014.
[22] Guosheng Lin, Fayao Liu, Chunhua Shen, Jianxin Wu, and Heng Tao Shen. Structured learning of binary codes with column generation for optimizing ranking measures. *International Journal of Computer Vision (IJCV)*, 2016.
1. On April 24, 2009, we dismissed the appeal herein, affirmed the judgment of Sinclair-Haynes, J. (Ag.) delivered on May 10, 2005, and awarded costs of the appeal to the respondent to be agreed or taxed. We promised then to put our reasons in writing. This we now do.
2. The respondent filed a claim on September 22, 2004, against the appellant (his father) for $4.8 million with interest in the amount of $3.84 million, being sums he claimed the appellant acknowledged as due and owing to him (the respondent).
3. The particulars of claim indicate that in or about October, 1988, the respondent, while residing in the United States of America, transmitted the sum of one hundred thousand United States dollars (US$100,000.00) to the appellant for it to be applied towards the purchase of a house for the respondent. Although the appellant led the respondent to believe that a house had been purchased, that was not the case. When the respondent returned to Jamaica in October, 2003 and discovered that no house had been purchased, he demanded the return of his money. The particulars further indicate that by letter dated July 6, 2004, the appellant wrote to his attorney-at-law, copying same to the respondent, acknowledging the debt with interest.
4. A defence was filed out of time on November 12, 2004, denying the claim. In this defence, the appellant's wife, purporting to have a power of attorney, asserted that the appellant was of unsound mind at the time of the letter of July 6, 2004 and that the document is not the appellant's deed. Non est factum was the plea. Alternatively, the appellant contends that he was unduly influenced by the respondent who had taken him to his (the appellant's) attorney-at-law for the preparation of the letter.
5. The respondent filed an application for court orders, namely, the striking out of the defence and the entry of summary judgment in his favour. The learned judge noted that the defence had been filed outside the prescribed time, and that the appellant had not applied for an extension of time to file same. She considered whether she would have been dealing justly, fairly and expeditiously with the matter were she to have allowed the appellant to file the defence. It was her view that there had been non-compliance with the rules through "sheer ignorance", but she reminded herself that ignorance of the law was no excuse.
6. The learned judge then proceeded to consider whether there was a real prospect of the defence succeeding. In doing so, she considered the evidence of Mrs. Evadney Lyle, particularly that she was unable to disprove the respondent's case as to the remittance of the sums of money and the purpose for which they were sent. She concluded thus:
"In the circumstances, even if I had been mindful to exercise my discretion to allow the defendant's defence to stand, Mrs. Lyle, through her own admission, could not prove her allegations."
In view of Mrs. Lyle's lack of knowledge of the facts so as to be able to dispute the claim and to sustain the defences of undue influence and non est factum, as well as what she saw as Mrs. Lyle's lack of standing to
defend the action, the learned judge felt obliged to enter judgment for the respondent. She ordered as follows:
"1. Summary Judgment in favour of the Claimant on the Claim herein.
2. Defence of the Respondent/Defendant struck out.
3. That Judgment be entered for the Claimant in terms of the Claim Form filed herein."
**Grounds of Appeal**
7. The following grounds of appeal were filed:
"a. The learned judge erred in law and/or misdirected herself when she granted summary judgment in circumstances where the Claimant failed to establish that the Defendant had no real prospect of successfully defending the claim having regard to:
i. That there was an issue on the defence as to whether the Defendant was mentally competent to provide the alleged letter of acknowledgment dated the 6th July 2004.
ii. The statement made by Evadne Lyle that the Defendant was of unsound mind prior to and at the date of the alleged letter of acknowledgement of 6th July 2004 and this statement was not tested in cross-examination at the summary judgment application and in the circumstances is an issue that should have been determined at trial.
iii. The issue as to whether Dr. McKenzie, being a general practitioner, was capable of providing an opinion of the mental condition
of the Defendant was a matter that ought not to have been determined on a summary judgment application.
b. The Claimant’s claim was statute-barred and there was no proper acknowledgement of the said debt capable of reviving the claim.
c. The learned trial judge erred in law in granting summary judgment in circumstances where the debt and/or claim was discharged and extinguished by an accord and satisfaction and/or compromise.
d. The learned trial judge erred in law and/or misdirected herself in granting summary judgment where the issue of the Claimant procuring the alleged letter of acknowledgment of 6th July 2004 by undue influence or a catching and unconscientious bargain, which said issue could not have been determined at a summary judgment application but at trial where the said issues could be properly investigated.
e. The learned judge in granting summary judgment failed to appreciate that the Defendant was a patient suffering from mental disorder as defined under the Mental Health Act, and as a consequence and pursuant to Part 23 of the Civil Procedure Rules a next friend ought to have been appointed to represent the Defendant prior to the Claimant proceeding with his application for summary judgment.”
8. In written submissions, the appellant contended that the following were issues for determination at a trial:
I. The circumstances of the actual creation of the debt;
II. The mental capacity of the appellant with particular reference to the letter of July 6, 2004;
III. The knowledge of the respondent as regards the appellant's mental capacity;
IV. Whether there had been a relationship of trust and confidence between the parties, and whether there had been an abuse of that relationship;
V. Whether there had been accord and satisfaction thereby extinguishing the original debt.
9. From this summary of the issues, it is clear that the creation and continued existence of the debt, as well as the mental capacity of the appellant were the matters that the appellant wished us to focus on. It is also clear that these matters related to different periods of time.
The creation of the debt
10. The contention of the appellant was that the judge's first consideration ought to have been ensuring that the debt had in fact been created. According to the submission, the only acknowledgment of the debt was the letter of July 6, 2004, the validity of which was under challenge given the mental capacity of the appellant. The respondent however pointed to other correspondence between the parties and submitted that there has been clear acknowledgment of the debt.
11. On December 16, 2003, the appellant's attorney-at-law wrote to the respondent's attorneys-at-law thus:
"It is our understanding that Mrs. Evadne Lyle, the wife of our client has agreed to settle the outstanding indebtedness to Mr. Allan Lyle. We now write to request that whatever arrangements that are being made by Mrs. Lyle in this matter insofar as they affect the assets or interest of Mr. Lyle must first be referred to us, as Mrs. Lyle has no authority to pledge the credit or assets of Mr. Lyle."
This letter did not deny the existence of a debt. On the contrary, it acknowledged the debt as outstanding, and expressed the preference for the arrangements for its settlement to be referred to the appellant's attorney-at-law, since the appellant's wife had no authority to pledge the appellant's credit or assets.
12. Notwithstanding this letter, the appellant's wife's attorneys-at-law wrote in the following terms to the respondent's attorneys-at-law on February 10, 2004:
"Our client is the wife of Mr. Vernal Lyle, who is the father of your client. In or about 1996 your client while living in the United States of America sent funds totaling Eighty Thousand Dollars United States currency (US$80,000.00) to his father in Jamaica for safe keeping.
Your client recently returned ...
Your client has demanded the return of the above-stated funds from his father...this matter has created a very uncomfortable atmosphere within the home.
Your client has indicated his willingness to accept this reimbursement in Jamaican currency...
Our client has taken steps to put these funds together and should be in a position to reimburse your client in full within five (5) days...
We shall be happy to hear from you in respect of the foregoing as soon as possible so that this matter can proceed to an amicable conclusion."
This letter is not only further acknowledgment of the existence of the debt but it also indicates an intention and willingness to settle it. The appellant's wife in the said letter put forward as conditions for settlement:
(a) the handing over of all keys to the premises by the respondent, upon reimbursement; and
(b) the respondent's undertaking not to return to the premises unless invited.
It ill behoves the appellant's wife, now with a limited power of attorney, to seek to deny this acknowledgment of the debt and to request a trial in circumstances where she would not be able to contradict in any respect the existence of the debt. In the circumstances, there was no merit seen in this ground of appeal.
The mental capacity of the appellant
13. The tenor of the appellant's submission in this regard was that the respondent's claim was based on the letter of July 6, 2004, but the
appellant was not in a mental condition to give the instructions contained therein. According to the submission, the appellant's signature on the letter was a mere mark as the mind of the appellant did not go with the writing. Hence, the learned judge should have regarded the appellant's mental state as an issue for determination, and so summary judgment should not have been entered. The respondent has countered by saying that, taken to its logical conclusion, if the appellant is declared non compos mentis, he would be unable to give evidence at a trial to dispute the claim. The debt arose before July 6, 2004, and so that date could not be the date from which the Court would assess the merits of the claim.
14. In the documents placed before Sinclair-Haynes, J. (Ag.) was a certificate dated September 29, 2004, signed by Dr. Clive McKenzie, physician and surgeon, to the effect that the appellant:
(a) had been his patient since October 4, 2003;
(b) was suffering from hypertension, debilitating osteoarthritis, Alzheimer’s disease, poor vision and amnesia;
(c) was at times disorientated in time, place and person; and
(d) was unable to make any sober decision at the time of the certificate.
15. It should be noted that the certificate does not eliminate the possibility of the appellant giving instructions to his attorney-at-law, and
signing the letter in question. Amnesia relates to the inability to recollect past events and, no doubt, there are degrees of such a condition. It does not mean that there is an inability to give instructions or to sign documents. The disorientation that he suffers is also "at times". In any event, the important point is that whatever medical challenges the appellant may have faced since he became a patient of Dr. McKenzie in October, 2003, they would have been of no relevance to the debt which he incurred several years before. The existence of this debt was obviously communicated by him to his wife who acknowledged same in discussions and correspondence with the respondent. It is this communication that would have enabled the wife, with her limited power of attorney, to make the detailed proposals she made to the respondent with a view to resolving the dispute.
16. The respondent returned to Jamaica in October, 2003. That was the very month in which the appellant came under the care of Dr. McKenzie. The revelation of the misuse of the respondent's funds was made upon the respondent's return. Since then, the appellant has been continually represented by an attorney-at-law. There is absolutely no evidence that the appellant was unable to give instructions to his attorneys-at-law in this matter at any stage. In the absence of such evidence, it has to be assumed that the instructions he gave were
properly given. Dr. McKenzie was careful not to say that the appellant was of unsound mind. This allegation that was put before Sinclair-Haynes, J. (Ag.) was sheer speculation. In the circumstances, the ground as to mental capacity was of no moment.
17. In our view, the failure of these grounds sufficiently disposed of the appeal. It was rather ingenious of the appellant to advance on appeal before us grounds in relation to accord and satisfaction as well as the claim being statute-barred. These were never made part of the defence that was filed on November 12, 2004; nor were they raised before the learned judge. In any event, the acknowledgment of the debt referred to earlier defeats the point as to the claim being statute-barred; and the absence of satisfaction means that there has been no consideration, even if indeed there was an accord.
18. In the light of the above, we concluded that the appeal was without merit and had to be dismissed, with costs to the respondent to be agreed or taxed.
SCHEDULE 1
Part 2 – Plans of the Agreement Area
Original Information compiled from BS/82 SP156403, SP156404, SP156405, SP156406, SP156407, SP156408, SP156409, SP156410, SP156411, SP156412, SP156413, SP156414, the Department of Natural Resources and Mines
Boundaries measured with QASS
Datum: Australian National Datum 1985
Lot 1 excludes the tidal lands of Wakooka Creek and Saltwater Creek. Lot also excludes land below ordinary low water at Spring tides (vide section 3(1)(c) of the Aboriginal Land Act 1991).
Lot 4 includes tidal lands of all other rivers, creeks and streams within Lot 4 vide Schedule 1 of the Aboriginal Land Regulation 1991.
See table A for diagrams A–E; sheet 1 for diagrams A & B; sheet 2 for diagrams A & C; sheet 3 for diagrams B & D; sheet 4 for diagrams C & E; sheet 5 for diagrams D & F; sheet 6 for diagrams E & G; sheet 7 for diagrams G & H; sheet 8 for explained points, reference marks, permanent marks & some information; and sheet 9 for survey & ancillary boundary reports
Exempt Land
Under Section 68(1)(g) & 95(1)(b)
of the SMI Act 2003 (Protected Area)
Tidal boundary defined using new source material under Section 89(3) of the SMI Act 2003
Tidal boundaries determined from SPOTMaps (2004–2010) satellite imagery in the Department of Natural Resources and Mines
Plan of Lots 1–5
Cancelling Lot 4 on NPW531, Lot 2 on AP12349, Lot 5 on SP156403 and Lot 37 on USL8141
LOCAL GOVERNMENT: Cook Shire Council LOCALITY: Starcke & Lakefield
Meridian: MGA (Zone 55) vide SP156403 Survey Records No
Scale: 1:300 000
Format: STANDARD
Copyright protects the plan/s being ordered by you. Unauthorised reproduction or amendments are not permitted.
1. Certificate of Registered Owners or Lessees.
I/We The State of Queensland (represented by Department of National Parks, Recreation, Sport and Racing and Department of Natural Resources and Mines—Land Act)
Signed by Jason Jacobi
* as Registered Owners of this land agree to this plan and dedicate the Public Use Land as shown herein in accordance with Section 50 of the Land Title Act 1994.
* as Lessees of the land agree to this plan.
Signed by (Applicant) 11/9/13
2. Planning Body Approval.
* hereby approves this plan in accordance with the:
Dated this day of
#
#
* Insert the name of the Planning Body. % Insert applicable approving legislation.
# Insert designation of signatory or delegation
3. Plans with Community Management Statement:
CMS Number:
Name:
4. References:
Dept Ref.: CNS 929057
Local Govt:
Surveyor:
5. Lodged by:
Department of National Parks, Peninsula Tenure Resolution Program
PO Box 45897
Cairns, QLD 4870
Action Officer: L Morrissey
ELVAS Ref: 2013/000081
(Include address, phone number, reference, and Lodger Code)
6. Existing
| Title Reference | Description | New Lots | Road | Secondary Interests |
|-----------------|----------------------|----------|------|---------------------|
| 47001684 | Lot 37 on USL8141 | 4 | – | – |
| 47502020 | Lot 4 on NPW531 | 4 | – | – |
| 47502020 | Lot 2 on AP12349 | 1–5 | – | – |
| 47502020 | Lot 5 on SP156403 | 4 | – | – |
7. Orig Grant Allocation:
8. Map Reference:
7869–32312
9. Parish:
As shown
10. County:
Melville
11. Passed & Endorsed:
Department of Natural Resources and Mines
Date: 18–04–2013
Signed:
Designation: Principal Surveyor
12. Building Format Plans only.
13. Lodgement Fees:
| Item | Amount |
|-----------------------|--------|
| Survey Deposit | $ |
| Lodgement | $ |
| ...New titles | $ |
| Photocopy | $ |
| Postage | $ |
| TOTAL | $ |
14. Insert Plan Number
SP252501
NOTE – Permit to Occupy 713598161 is cancelled
| Line | Bearing | Dist |
|------|---------------|------|
| 1-29 | 95°04'15" | 345.64 |
| 1-30 | 187°00'15" | 636.47 |
| 3-34 | 183°11' | 60.05 |
| 4-34 | 183°11' | 60.05 |
| 5-50 | 208°02'15" | 61.802 |
| 6-60 | 194°12'30" | 68.032 |
| 7-70 | 180°00' | 65.317 |
| 8-80 | 185°45' | 60.455 |
| 9-90 | 180°00'20" | 60.065 |
| 10-100 | 175°44' | 54.744 |
| 11-110 | 223°34'20" | 60.589 |
| 12-120 | 223°34'20" | 60.589 |
| 13-130 | 237°51'50" | 61.823 |
| 14-140 | 203°09'55" | 58.653 |
| 15-150 | 180°00'10" | 60.019 |
| 16-160 | 114°10'35" | 64.352 |
| 17-170 | 100°00' | 60.000 |
| 18-180 | 179°33'25" | 64.536 |
| 19-190 | 207°55'07" | 60.878 |
| 20-200 | 180°00'05" | 60.005 |
| 21-210 | 168°21'40" | 76.82 |
| 22-220 | 156°10'10" | 170.465|
| 23-230 | 156°10'10" | 170.465|
| 23-24 | 11°09'15" | 376.348|
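The bearing and distance table above records plane traverse connections in degrees-minutes-seconds, measured clockwise from grid north. As an illustrative sketch only (not part of the plan), the easting/northing change along a connection such as line 1-29 can be computed like this:

```python
import math

def dms_to_degrees(deg: int, minutes: int = 0, seconds: float = 0.0) -> float:
    """Convert a degrees-minutes-seconds bearing to decimal degrees."""
    return deg + minutes / 60.0 + seconds / 3600.0

def traverse_delta(bearing_deg: float, distance_m: float):
    """Return (delta_easting, delta_northing) for a plane bearing
    measured clockwise from grid north, as in the table above."""
    theta = math.radians(bearing_deg)
    return distance_m * math.sin(theta), distance_m * math.cos(theta)

# Line 1-29: bearing 95°04'15" over 345.64 m
de, dn = traverse_delta(dms_to_degrees(95, 4, 15), 345.64)
print(f"dE = {de:.2f} m, dN = {dn:.2f} m")
```

A bearing just past 90° gives a large positive change in easting and a small negative change in northing, which is the expected plane-geometry behaviour for this connection.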
**Diagram B**
Scale: 1:25,000
Insert Plan Number SP252501
Diagram L
Scale: 1:10,000
Diagram C
Scale: 1:25,000
Land Title Act 1994 - Land Act 1994
Form 21A Version I
State copyright reserved.
Insert Plan Number SP252501
DIAGRAM E
Scale - 1:25 000
DIAGRAM F
Scale - 1:25 000
DIAGRAM H
Scale - 1:6000
Insert
Plan
Number
SP252501
Diagram G
Scale - 1:25,000
Diagram M
Scale - 1:10,000
Insert Plan Number SP252501
SURVEY REPORT
This plan is prepared for the transfer of Lot 4 under the Aboriginal Land Act 1991 and the re-creation of National Park over the grant under the Aboriginal Land Act 1991 and the Nature Conservation Act 1992. The land is exempt land under Sections 66(1)(d) and 95(1)(b) of the Survey and Mapping Infrastructure Act 2003 (Protected Land).
The land is currently National Park, with the exception of Lot 37 on USL8141. All the land is currently transferable land under the Aboriginal Land Act 1991. Lot 4 excludes an esplanade 60 metres wide along the coast between Stn 8 near Cape Bowen and Stn 85, and 20 metres wide along the left bank of the Jeannie River between Stn C and Stn D.
The esplanade along the Jeannie River was surveyed by Surveyor MacLean on BS192. NPW531 excludes the Jeannie River adjacent to and for the length of the esplanade. Surveyor MacLean's survey was plotted using existing survey control and overlaid on aerial photography. In all sections where the river was located by Surveyor MacLean, good agreement with the location of the existing river bank visible in the SPOTMaps (2004-2010) imagery was evident. This indicates the river to be very stable in this area. However, the survey on BS192 is very deficient in the number of points located in the northern section of the original traverse. The limited offsets in the field notes would place the esplanade within the tidal waters of the river in this section. The original survey of the river was therefore supplemented with additional river points determined from the imagery to properly define the esplanade as intended on BS192. It is noted that Surveyor MacLean did leave a number of blank pages along his traverse in the field notes which he appears not to have completed. The esplanade points on sheet 8 have been derived by offsetting the 20 metre width from the river location, using the field notes of BS192 and the points obtained from the imagery, in accordance with Section 3.3.1 "Remote Area Surveys" of the Cadastral Survey Requirements Version 6.
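The derivation of esplanade points described above amounts to a perpendicular offset of a located bank line. A minimal sketch of that geometry follows, using hypothetical bank coordinates and performing no joining of adjacent offset segments (which survey software would normally apply):

```python
import math

def offset_polyline(points, dist):
    """Offset each segment of a polyline by `dist` metres to its
    left-hand side (looking along the direction of travel).
    Returns one offset segment per input segment."""
    segments = []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        dx, dy = x2 - x1, y2 - y1
        length = math.hypot(dx, dy)
        # unit normal pointing to the left of the direction of travel
        nx, ny = -dy / length, dx / length
        segments.append(((x1 + nx * dist, y1 + ny * dist),
                         (x2 + nx * dist, y2 + ny * dist)))
    return segments

# hypothetical bank points in plane (easting, northing) coordinates
bank = [(0.0, 0.0), (100.0, 0.0), (100.0, 80.0)]
for seg in offset_polyline(bank, 20.0):
    print(seg)
```

Each offset segment sits exactly 20 metres from its source segment; a production computation would also intersect or fillet adjacent offsets at bends and substitute real bank coordinates from the field notes or imagery.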
The esplanade between Stn 85 and 86 has been defined using new source material under Section 89(3) of the SMI Act by offsetting the 60 metres from the location of the tidal boundary evidenced in SPOTMaps (2004-2010) imagery. This tidal boundary is Ordinary High Water Mark at Mean Spring Tides (MHWS).
Lots 1-5 are to be excised from the existing Cape Melville National Park as the road formation encroaches on these areas. The lots will be dedicated as road after the excision is complete. The formation is a new formation built within the last 3-4 years. The road to the SW between Stns 22 and 23 does not exist on the ground. An extensive search in the locality did not find any formation or significant evidence of the existence of a road in this location. The boundaries of the road have been determined from the DCDB. It is noted the terminal point of the road near Stn 33 is on top of a high hill and practicable access would not be possible to this location. If at any time in the future a road was to be built to provide access to Lot 8 on AP12349 and the SW corner of Lot 9 on SP104583, action would need to be taken to excise this road from the new deed and close this existing dedicated road.
The Mt Webb Wakooka Road from Stn 1 south to Stn 30 and the Wakooka Creek Road have been surveyed in their current dedicated location, which is consistent with the long-standing location of the existing track. Small deviations due to seasonal washouts have been ignored. The current location of the track is mostly within the surveyed corridor. The easternmost end of the Wakooka Creek Road is located on coastal saltpans and is not traversable. The location of this part of the road has been derived from the DCDB. It would not be practicable to construct a road in this location.
AMBULATORY BOUNDARY REPORT
Lot 4 on NPW531 excludes the tidal lands of Dead Dog, Saltwater, Wakooka and Rocky Creeks, and the Jeannie River but includes the tidal lands of the Howick River and other tidal creeks and streams. This has been maintained on this plan and the tidal lands of the Howick River and other tidal creeks not excluded above are included in Lot 4. The tidal boundary of the coastline is Ordinary High Water Mark at Mean Spring tides which equates to MHWS. The tidal lands included in Lot 4 are those lands between MHWS and MLWS. Any land below ordinary low water at Spring Tides (waters of the sea and sea bed) is excluded from Lot 4 vide section 31(1)(a) of the Aboriginal Land Act 1991.
The coastal boundary has been determined from SPOTMaps (2004-2010) satellite imagery using new source material under Section 89(3) of the SMI Act 2003. The tidal boundary is MHWS. The land is exempt land under Sections 66(1)(d) and 95(1)(b) of the SMI Act 2003 (Protected Land). The location of MHWS identified in the imagery is either: on the upper part of sandy beaches; at the colour change on the rocky headlands, predominantly caused by being covered by water at most tides; or at the landward edge of the main mangroves where mangroves exist. In the excluded river systems where no mangroves exist, the tidal boundary is generally the top of the steep bank containing the MHWS tides. Extensive salt pans exist around these rivers and creeks and these are mostly above MHWS and therefore included in Lot 4. The upper reach of the Jeannie River is also excluded from Lot 4 to the south-western extent of the 20 metre wide esplanade along the right bank. The Jeannie River is not tidal from the SW end of the esplanade for about 500 metres to some falls. The river is tidal below the falls. The non-tidal river boundary is the top of the high bank as picked up by Surveyor MacLean on BS192.
## SURVEY PLAN
### APPROX M.G.A. COORDINATES:
| STATION | EASTING | NORTHING | ZONE |
|---------|----------|----------|------|
| A | 204 862 | 8 425 133| 55 |
| B | 204 901 | 8 427 133| 55 |
| C | 202 478 | 8 427 458| 55 |
| D | 200 685 | 8 427 200| 55 |
| H | 203 863 | 8 442 473| 55 |
Positional accuracy 50 metres
### PERMANENT MARKS
| STN | PM No. | BEARING | DIST | TYPE | EASTING | NORTHING | ZONE | ORDER | CLASS |
|-----|--------|------------|------|-----------------|-------------|---------------|------|-------|-------|
| E | 701094 | at station | – | Standard | 204 909.943 | 8 429 601.972 | 55 | 4th | D |
| F | 702555 | at station | – | Standard | 201 033.736 | 8 435 822.949 | 55 | 4th | D |
| G | 700800 | at station | – | Unknown | 201 026.928 | 8 441 117.648 | 55 | 4th | D |
| J | 701642 | at station | – | Cen. lighthouse | 211 875.069 | 8 441 117.648 | 55 | 4th | D |
### Exempt Land Under Section 66(1)(d) of the SMI Act 2003 (Protected Area):
- **CLACK ISLAND**
- **KING ISLAND**
### Scale: 1:60000 - Lengths are in Metres.
**Plan of Lots 1 - 9**
Cancelling Lot 413 on NPW606
**LOCAL GOVERNMENT:** Cook S.C.
**LOCALITY:** LAKEFIELD
Meridian: MGA Zone 55
Survey Records: No
**Format:** STANDARD
**Date:** 22-02-2013
**Custodian Surveyor**
---
Copyright protects the plan/s being ordered by you. Unauthorised reproduction or amendments are not permitted.
1. Certificate of Registered Owners or Lessees.
I/We THE STATE OF QUEENSLAND (represented by Department of National Parks, Recreation, Sport and Racing)
Signed by Jason Jacobs
(Names in full)
*as Registered Owners of this land agree to this plan and dedicate the Public Use Land as shown hereon in accordance with Section 50 of the Land Title Act 1994.
*as lessees of this land agree to this plan.
Signed by (ADDQ QPWS) 11/9/12
* Rule out whichever is inapplicable
2. Planning Body Approval.
* hereby approves this plan in accordance with the:
% Dated this ........................................... day of ...........................................
.................................................. # .................................................. #
* Insert the name of the Planning Body. % Insert applicable approving legislation.
# Insert designation of signatory or delegation
3. Plans with Community Management Statement:
CMS Number:
Name:
4. References:
Dept File - 2013/000872
Local Govt:
Surveyor:
5. Lodged by DATSIMA
CAPE YORK PENINSULA TENURE RESOLUTION PROGRAM
PO Box 4547
CAIRNS QLD 4870
ACTION OFFICER: L MORRISSEY
ELVAS Ref: 2013/000872
(Include address, phone number, reference, and Lodger Code) CS495
6. Existing Created
| Title Reference | Description | New Lots | Road | Secondary Interests |
|-----------------|-------------|----------|------|---------------------|
| 47502181 | Lot 413 on NPW606 | 1–9 | | |
7. Orig Grant Allocation:
8. Map Reference: 7769
9. Parish: MACLEAR
10. County: Melville
11. Passed & Endorsed:
By: Dept Natural Resources & Mines
Date: 22/2/13
Signed: M.Wallace D.Malley
Designation: Cadastral Surveyor
12. Building Format Plans only.
I certify that:
* As far as it is practicable to determine, no part of the building shown on this plan encroaches onto adjoining lots or roads.
* Parts of the building shown on this plan encroach onto adjoining lots and road
Cadastral Surveyor/Director* Date
13. Lodgement Fees:
| Description | Amount |
|----------------------|--------|
| Survey Deposit | $ |
| Lodgement | $ |
| New Titles | $ |
| Photocopy | $ |
| Postage | $ |
| TOTAL | $ |
14. Insert Plan Number SP220299
Diagram A
Scale 1:15000 - Lengths are in Metres.
Diagram B
Scale 1:6000 - Lengths are in Metres.
DIAGRAM H
1:15000
SOUTH PACIFIC OCEAN
CORAL SEA
FLINDERS ISLANDS
BLACKWOOD ISLAND (Pt)
182.9 ha
Rattlesnake Channel
Fly Channel
Scale 1:15000 - Lengths are in Metres.
Insert Plan Number SP220299
DIAGRAM J
1:4000
SOUTH PACIFIC OCEAN
CORAL SEA
8 (Pt) 4625 m²
8 (Pt) 1.218 ha
1.862 ha Total
8 (Pt) 1815 m²
7 (Pt) 455 m²
7 (Pt) 2.366 ha
2.4115 ha Total
CLACK ISLAND
Scale 1:4000 - Lengths are in Metres.
Insert Plan Number SP220299
DIAGRAM K
1:10000
SOUTH PACIFIC OCEAN
CORAL SEA
9 (Pt) 2935 m²
9658 m²
4203 m²
3.593 ha
2031 m²
906 m²
1338 m²
10.35 ha
19.6421 ha Total
KING ISLAND
Scale 1:10000 - Lengths are in Metres.
Insert Plan Number SP220299
Survey Report SP220299
This Plan has been prepared for the issue of a freehold title of Lots 1-9 under the *Aboriginal Land Act 1991* and to re-create National Park over the grant under the *Nature Conservation Act 1992* and the *Aboriginal Land Act 1991*. The land is currently National Park and the survey was carried out in accordance with Section 3.3.1 of the Cadastral Survey Requirements (Remote Area Surveys).
The tidal boundary of Lots 1-9 is High Water Mark at Mean Spring Tides which equates to Mean High Water Spring (MHWS). The nearest site with published tide prediction is Leggitt Island which has published values of 2.7 metres for MHHW (Diurnal tides). MHHW is by definition close to MHWS (semi diurnal tides).
The location of the tidal boundaries of Lots 1-6 has been interpreted using Spot Maps (2004-2010) 2.5 m satellite imagery in the Department of Natural Resources and Mines. Lots 1-6 are mainland islands, and the position of the tidal boundary has been interpreted using knowledge of the tidal intersects on the adjacent mainland and observation of the tide on Clack Island. The position of the HWM intersect lies towards the top of sandy beaches, at the landward edge of the mangroves and at the change in colouration on the rocky headland. The location of the boundary has been interpreted from the imagery with no site verification. Future definitions of the boundary on the ground may vary depending on the accuracy of the interpretation of the feature adopted and identified in the image.
The tidal boundaries of Lots 7-9 have been positioned using Spot Maps (2004-2010) 2.5 m satellite imagery in the Department of Natural Resources and Mines. Ground truthing was carried out using Class PE GNSS observation in December 2012, when the tide height was close to 2.7 metres on Leggitt Island. The tide on the day was therefore at a position that would generally be very close to High Water Mark at Mean Spring Tides. All these islands are coral cay islands with the exception of Clack Island. Most of the islands contain extensive mangrove systems, all of which were observed to lie below the tidal boundary, i.e. the base of the mangroves was covered by the sea at a tide of 2.7 metres and therefore seaward of the tidal boundary.
The extensive mangrove systems on these Islands are seaward of the tidal boundary and within the State Marine Park.
The presentation of NPW 606 gives no indication as to the definition of the boundary of the Islands and shows the adjoiners as being the Coral Sea. The land immediately adjacent to the tidal boundary of the Islands is State Marine Park. The extent of the State Marine Park is normally Low Water Mark at Mean Spring Tides to High Water Mark at Mean Spring Tides. The boundary of National Park Islands is normally High Water Mark at Mean Spring Tides. There is no indication on NPW 606 that the boundary is anything other than High Water Mark at Mean Spring Tides, which is therefore defined as the boundary of these Islands.
Also note under Section 7 of the *Administrative Boundaries Terminology Act 1985*, which was in force at the time (and subsequently repealed by section 142 of the *Survey and Mapping Infrastructure Act 2003*), any coastline shown on a map shall be High Water Mark. High Water Mark is defined as Mean High Water Spring tide under section 3 of that act.
The land is deemed to be exempt land under section 66(1) (d) of the *SMI Act 2003* (protected area) and the new source material section 89(3) of the *SMI Act 2003* allows for the intersection of a tidal plane in this case, as the land will become a protected area under the *Nature Conservation Act 1992* and indigenous land under the *Aboriginal Land Act 1991*.
**Plan of Lots 1-9**
Cancelling Lot 414 on NPW607
**LOCAL GOVERNMENT:** COOK S.C.
**LOCALITY:** STARCKE
**Meridian:** MGA Zone 55
**Survey Record:** No
**Scale:** 1:125000
**Format:** STANDARD
**Permanently Marked Points:**
| PM | BEARING | DIST | TYPE | STN | EASTING | NORTHING | ZONE | ORDER CLASS |
|----|---------|------|------|-----|---------|----------|------|-------------|
| A-TM | 18°34' | 950508 | Station in Rock | 1503 | 294 | 833.79 | 8 | 46 | 275.545 |
| C-TM | 18°34' | 709033 | Station in Rock | 1503 | 294 | 833.79 | 8 | 46 | 275.545 |
| F-TM | 18°34' | 702150 | Station in Rock | 1503 | 294 | 833.79 | 8 | 46 | 275.545 |
**Approximate M.G.A. Coordinates:**
| STATION | EASTING | NORTHING | ZONE |
|---------|---------|----------|------|
| B | 875.894 | 8 | 45 |
| C | 875.894 | 8 | 45 |
| D | 271.323 | 8 | 45 |
| E | 271.323 | 8 | 45 |
| F | 263.379 | 8 | 45 |
| G | 271.323 | 8 | 45 |
*Positional accuracy ±20 metres*
---
**Exempt Land Under Section 66(1)(d) of the SMI Act 2003 (Protected Area).**
*See Survey Record Sheet 8*
1. Certificate of Registered Owners or Lessees.
I/we The State of Queensland (Represented by Department of National Parks, Recreation, Sport and Racing)
Signed by Jason Jacobi
*as Registered Owners of this land agree to this plan and dedicate the Public Use Land as shown hereon in accordance with Section 50 of the Land Title Act 1994.
*as Lessees of this land agree to this plan.
Signature of Registered Owners
* Rule out whichever is inapplicable
2. Planning Body Approval.
* hereby approves this plan in accordance with the:
%
Dated this day of ...........................................
......................................................... #
......................................................... #
* Insert the name of the Planning Body. % Insert applicable approving legislation.
# Insert designation of signatory or delegation
3. Plans with Community Management Statement:
CMS Number:
Name:
4. References:
Dept File: 2013/000870
Local Govt:
Surveyor:
5. Lodged by: DATSIMA CAPE YORK PENINSULA TENURE RESOLUTION PROGRAM PO Box 4547 CAIRNS QLD 4870 ACTION OFFICER: L MORRISSEY ELVAS REF 2013/000870
6. Existing Created
| Title Reference | Description | New Lots | Road | Secondary Interests |
|-----------------|-------------|----------|------|---------------------|
| 47502074 | Lot 414 on NPW607 | 1-9 | | |
Encroachment Notice issued to the owners of esplanade (SLAM in DNRM) and the Lighthouse (Australian Maritime Safety Authority) on 22/2/2013 in accordance with s10 of the Survey and Mapping Infrastructure Regulation 2004.
12. Building Format Plans only.
I certify that:
* As far as it is practicable to determine, no part of the building shown on this plan encroaches onto adjoining lots or roads.
* Part of the building shown on this plan encroaches onto adjoining lots and road
Cadastral Surveyor/Director* Date
*Delete words not required
13. Lodgement Fees:
STATE ACTION
Survey Deposit $0
Lodgement $0
New Titles $0
Photocopy $0
Postage $0
TOTAL $0
14. Insert Plan Number SP220300
Diagram A
1:1250
Diagram B
Not to Scale
3411 m²
SOUTH BARROW ISLAND
ESPLANADE
75x75 mm plastic pegs placed at Stations 1-4.
No Mark placed at all other new corners.
Permanent Marks
| Line | Bearing | Dist | Sth | Thrs | Easting | Northing | Class |
|------|---------|------|-----|------|---------|----------|-------|
| A-1 | 28°44' | 2.813| | | | | |
| 2-11 | 29°00' | 0.777| | | | | |
| 4-10 | 29°00' | 0.777| | | | | |
| 6-10 | 29°00' | 0.777| | | | | |
| 8-11 | 29°00' | 0.777| | | | | |
SP220300
State copyright reserved
Insert Plan Number
Diagram C
Diagram D
Diagram 4
Diagram 3
Diagram 2
Diagram 1
Approximate Coordinates
| Station | Easting | Northing |
|---------|---------|----------|
| B | 208.644 | 8 415.916 |
| C | 217.123 | 8 415.916 |
| D | 217.123 | 8 415.916 |
Positional accuracy ±20 metres
Scale 1:4000 - Lengths are in Metres.
Insert Plan Number SP220300
| Station | Easting | Northing | Zone |
|---------|---------|----------|------|
| 2 | 273 145 | 8 402 967| 50 |
Positional accuracy ±50 metres
**Diagram G**
5 (Pt)
Tide bay
64 m²
Beanley Island
**Diagram F**
5 (Pt)
406 m²
470 m² Total
Beanley Island
**Diagram H**
5 (Pt)
406 m²
Beanley Island
**Diagram I**
5 (Pt)
406 m²
Beanley Island
---
**Scale 1:5000 - Lengths are in Metres**
Insert Plan Number SP220300
Survey Report SP220300
This Plan has been prepared for the issue of a freehold title of Lots 1-9 under the *Aboriginal Land Act 1991* and to re-create National Park over the grant under the *Nature Conservation Act 1992* and the *Aboriginal Land Act 1991*. The land is currently National Park and the survey was carried out in accordance with Section 3.3.1 of the Cadastral Survey Requirements (Remote Area Surveys).
The right line boundary of Lot 1 is derived by offsetting the ellipsoidal width of 30.175 metres from the location of the MHWS tidal plane intersect with the island, as was the intention of plan ME2. The large discrepancy in area occurred because of the apparent misidentification of this tidal boundary by plan ME2. The foreshore around the island is generally rock formation and hence very stable. The beaches on the northern end of the island are well vegetated with mangroves and movement would be minimal. The tidal boundary was located using GPS RTK techniques and was generally just below the landward edge of the mangroves and just below the change in colouration on the rocks due to the long-term tidal influence.
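Offsetting a boundary point by a fixed width along a plane bearing reduces to simple trigonometry on grid coordinates. The sketch below is illustrative only: the starting coordinate and the due-east bearing are hypothetical, and only the 30.175-metre offset width comes from the survey report above.

```python
import math

def offset_point(easting, northing, bearing_deg, distance_m):
    """Offset a grid coordinate along a plane bearing.

    Cadastral plane bearings are measured clockwise from grid north,
    so the easting change uses sin() and the northing change uses cos().
    """
    theta = math.radians(bearing_deg)
    return (easting + distance_m * math.sin(theta),
            northing + distance_m * math.cos(theta))

# Hypothetical tidal-plane intersect point and bearing;
# only the 30.175 m offset width is taken from the plan.
e, n = offset_point(294833.79, 8462275.55, 90.0, 30.175)
```

For a due-east bearing the full offset falls on the easting; any other bearing splits it between the two components.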
The tidal boundary of Lots 2-9 is High Water Mark at Mean Spring Tides which equates to Mean High Water Spring (MHWS). The nearest site with published tide prediction is Leggitt Island which has published values of 2.7 metres for MHHW (Diurnal tides). MHHW is by definition close to MHWS (semi diurnal tides).
The tidal boundaries of Lots 2-9 have been positioned using Spot Maps (2004-2010) 2.5 m satellite imagery in the Department of Natural Resources and Mines. Ground truthing was carried out using Class PE GNSS observation in December 2012, when the tide height was close to 2.7 metres on Leggitt Island. The tide on the day was therefore at a position that would generally be very close to High Water Mark at Mean Spring Tides. All these islands are coral cay islands. Most of the islands contain extensive mangrove systems, all of which were observed to lie below the tidal boundary, i.e. the base of the mangroves was covered by the sea at a tide of 2.7 metres and therefore seaward of the tidal boundary.
The extensive mangrove systems on these Islands are seaward of the tidal boundary and within the State Marine Park.
The presentation of NPW 607 gives no indication as to the definition of the boundary of the Islands and shows the adjoiners as being the Coral Sea. The land immediately adjacent to the tidal boundary of the Islands is State Marine Park. The extent of the State Marine Park is normally Low Water Mark at Mean Spring Tides to High Water Mark at Mean Spring Tides. The boundary of National Park Islands is normally High Water Mark at Mean Spring Tides. There is no indication on NPW 607 that the boundary is anything other than High Water Mark at Mean Spring Tides, which is therefore defined as the boundary of these Islands.
Also note under Section 7 of the *Administrative Boundaries Terminology Act 1985*, which was in force at the time (and subsequently repealed by section 142 of the *Survey and Mapping Infrastructure Act 2003*), any coastline shown on a map shall be High Water Mark. High Water Mark is defined as Mean High Water Spring tide under section 3 of that act.
The land is deemed to be exempt land under section 66(1) (d) of the *SMI Act 2003* (protected area) and the new source material section 89(3) of the *SMI Act 2003* allows for the intersection of a tidal plane in this case, as the land will become a protected area under the *Nature Conservation Act 1992* and indigenous land under the *Aboriginal Land Act 1991*.
I, Leslie Cyrus Fehlhaber, hereby certify that the land comprised in this plan was surveyed by me personally, and by Lyle van Tenhoven, Cadastral Surveyor, for whom I am responsible, and that the plan is accurate, that the said survey was performed in accordance with the Survey and Mapping Infrastructure Act 2003 and Surveyors Act 2003 and associated Regulations and Standards, and that the said survey was completed on 25-11-2012.
Leslie Cyrus Fehlhaber
Cadastral Surveyor
Date: 1-3-2013
LOCAL GOVERNMENT: Cook Shire Council
LOCALITY: Starcke
Meridian: MGA (Zone 55) by GNSS obser'n
Survey Records No
Scale: 1:250 000
Format: STANDARD
Exempt Land
Under Section 95(1)(b)
of the SMI Act 2003
(Protected Area)
SP252507
1. Certificate of Registered Owners or Lessees.
I/We The State of Queensland (represented by Department of National Parks, Recreation, Sport and Racing).
Signed by Jason Jacob
*as Registered Owners of this land agree to this plan and dedicate the Public Use Land as shown herein in accordance with Section 50 of the Land Title Act 1994.
*Lessees of this land agree to this plan.
Signed by Registered Owners Lessees
* Rule out whichever is inapplicable
2. Planning Body Approval.
* hereby approves this plan in accordance with the:
%
Dated this day of
#
#
* Insert the name of the Planning Body. % Insert applicable approving legislation.
# Insert designation of signatory or delegation
3. Plans with Community Management Statement:
CMS Number:
Name:
4. References:
Dept File: 2013/1008290
Local Govt:
Surveyor: 707
5. Lodged by DATSIMA CAPE YORK PENINSULA TENURE RESOLUTION PROGRAM PO Box 4597 CAIRNS QLD 4870 ACTION OFFICER: L MORRISSEY ELNRS REF: 2013/000829 (Include address, phone number, reference, and Lodger Code) CS2341
6. Existing Created
| Title Reference | Description | New Lots | Road | Secondary Interests |
|-----------------|-------------------|----------|------|---------------------|
| 47502077 | Lot 8 on AP12349 | 8 | - | - |
Benefiting Easement
| Easement | Lots to be benefited |
|----------|----------------------|
| 710898291| 8 |
7. Orig Grant Allocation:
8. Map Reference: 7868-32411
9. Parish: As shown
10. County: As shown
11. Passed & Endorsed: Department of Natural Resources and Mines
Date: 4-3-2013
Signed: Principal Surveyor
12. Building Format Plans only.
I certify that:
- As far as it is practical to determine, no part of the building shown on this plan encroaches onto adjoining lots or roads.
- Part of the building shown on this plan encroaches onto adjoining lots and road
Cadastral Surveyor/Director* Date
13. Lodgement Fees: STATE ACTION No Fee
Survey Deposit $..........
Lodgement $..........
New titles $..........
Photocopy $..........
Postage $..........
TOTAL $..........
14. Insert Plan Number SP252507
I, Leslie Clyde Fehlhaber, hereby certify that I and Arminia Joy Compton, Registered Surveyors, have made a careful survey of the land comprised in this plan and have made this plan under Section 17 of the Survey and Mapping Infrastructure Regulation 2003, that the survey was performed in accordance with the Survey and Mapping Infrastructure Act 2003 and Surveyors Act 2003 and associated Regulations and Standards, and that this plan is correct to the best of our knowledge.
Compiled from C157229, B531, B592, B582, J571063 and SP104579 in the Department of Natural Resources and Mines
See sheet 2 for river and
creek points table and
ambulatory boundary report
Exempt Land
Under Section (95)(1)(b) of the
SMI Act 2003 ("Protected Area")
Plan of Lot 203
Cancelling Lot 203 on NPW535
LOCAL GOVERNMENT: Cook Shire Council LOCALITY: Starcke
Meridian: MGA (Zone 55) vide SP104579 Survey Records No
Scale: 1:12 500
Format: STANDARD
Copyright protects the plan/s being ordered by you. Unauthorised reproduction or amendments are not permitted.
1. Certificate of Registered Owners or Lessees.
I/We The State of Queensland (Represented by Department of National Parks, Recreation, Sport and Racing)
Signed by Jason Jacobi
*as Registered Owners of this land agree to this plan and dedicate the Public Use Land as shown hereon in accordance with Section 50 of the Land Title Act 1994.
*as Lessee(s) of this land agree to this plan.
Signature of Registered Owners *Lessee(s)
* Rule out whichever is inapplicable
2. Planning Body Approval.
* hereby approves this plan in accordance with the:
Dated this ........................................... day of ...........................................
........................................... #
........................................... #
* Insert the name of the Planning Body. % Insert applicable approving legislation.
Insert designation of signatory or delegation
3. Plans with Community Management Statement:
CMS Number:
Name:
4. References:
Dept File: 2013/000829
Local Govt:
Surveyor: 73
5. Lodged by DATSIMA
CAPE YORK PENINSULA TENURE RESOLUTION PROGRAM
PO Box 4597
CAIRNS QLD 4870
ACTION OFFICER: L MORRISSEY
ELWAS REF: 2013/000829
[Include address, phone number, reference, and Lodger Code] CS2341
6. Existing
| Title Reference | Description | New Lots | Road | Secondary Interests |
|-----------------|-------------------|----------|------|---------------------|
| 47502113 | Lot 203 on NPW535 | 203 | - | - |
7. Orig Grant Allocation:
8. Map Reference:
7967-44211
9. Parish: Tupia
10. County: Banks
11. Passed & Endorsed:
Department of Natural Resources and Mines
Date: 4-2-2013
Signed:
Designation: Principal Surveyor
12. Building Format Plans only.
I certify that:
* As far as it is practicable to determine, no part of the building shown on this plan encroaches onto adjoining lots or roads.
* Part of the building shown on this plan encroaches onto adjoining lots and road
Cadastral Surveyor/Director* Date
13. Lodgement Fees: STATE ACTION
Survey Deposit: $1
Lodgement: $
New titles: $
Photocopy: $
Postage: $
TOTAL: $
14. Insert Plan Number SP252496
### Table A
**Orig River & Creek Points**
| Bearing | Dist |
|---------|-------|
| A | |
| 28°02'35" | 17.094 |
| 28°06'15" | 17.094 |
| 27°24'45" | 37.692 |
| 25°54'45" | 5.431 |
| 23°30'45" | 1.491 |
| 22°52'15" | 17.742 |
| C | |
| 26°17'45" | 9.449 |
| 31°09'35" | 142.247|
| 30°28'45" | 108.708|
| 26°06'45" | 57.688 |
| 26°06'20" | 41.031 |
| 27°34'35" | 81.093 |
| 48°12'50" | 76.253 |
| 41°25'50" | 25.187 |
| 31°30'45" | 57.471 |
| 35°44'45" | 106.605|
| 11°58'20" | 80.467 |
| 33°29'25" | 81.966 |
| 30°09'25" | 29.459 |
| 32°55'08" | 12.26 |
| 32°45'25" | 03.694 |
| 31°49'45" | 73.239 |
| 41°00'25" | 61.083 |
| 34°42'10" | 35.817 |
| 26°09'31" | 35.553 |
| 24°50'50" | 56.923 |
| 34°22'55" | 71.038 |
| E | |
| 33°14'25" | 37.957 |
| 33°13'15" | 18.334 |
| F | |
| 30°04'45" | 57.536 |
| 31°19'25" | 20.217 |
| 30°10'20" | 26.681 |
| 31°04'15" | 21.647 |
| 33°08'35" | 15.368 |
| 30°09'50" | 42.679 |
| 32°17'45" | 18.393 |
| 31°19'20" | 20.232 |
| 31°07'25" | 19.02 |
| G | |
| 30°07'28" | 38.839 |
| 31°04'30" | 4.565 |
| 25°03'50" | 36.964 |
| 22°03'50" | 21.069 |
| 47°15'04" | 62.579 |
---
**Ambulatory Boundary Report**
**Exempt Land under section 95 (1) (b) of the SMI Act (Protected Area)**
Land exempt under section 95(1)(b) of the Survey and Mapping Infrastructure Act as the land is currently National Park and will be issued as Deed of Grant under the Aboriginal Land Act 1991 and a National Park created over the Grant under the Nature Conservation Act 1992 and the Aboriginal Land Act 1991.
The Ambulatory Boundary has been compiled from BS27, BS31, BS192, C157189, C157229 & SP104579 in the Department of Natural Resources and Mines.
The location of the originally surveyed ambulatory Morgan River boundary has been plotted over SPOTMaps (2004-2010) satellite imagery and positioned using control on PM121475 (Str Jm 13 on SP104579). The previously surveyed plotted position is consistent with a mass of rainforest which exists along both sides of the Morgan River. The mass of forest is evident within the imagery. The accurate location of the current river bank is not readily identifiable on the image due to the denseness of the existing forest.
Parts of the River were first surveyed by Surveyor Starcke on C157189 in 1882 and by Surveyor Coban on BS27 in 1922 and on BS31 in 1923. I connected to the river on SP104579 at Str J and at Str B (on this plan) in 1998 and no significant difference was noted. Currently I am carrying out surveys on Lot 208 on BK15758 on the lower side of the River adjacent to this lot and a major bend in the River. Measurements taken to the River in the NW corner of Lot 208 show there is no significant difference to the position of the river since the original survey by Surveyor Amos in 1885.
The River in this area is heavily vegetated with rainforest and the banks rising from the river bed are quite steep and high. Given these features there is unlikely to be significant change in the river over time. On a number of occasions, including in the original surveys, the post and river were shown coincidental or only a short distance apart, which would indicate that the measurements were to the top of the high bank of the river, and that the high bank is quite stable.
Given the stable nature of the River the original river traverse surveys have been accepted to define the ambulatory boundary on this plan.
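The river and creek points tables on these plans record each boundary segment as a bearing and distance. Such legs can be reduced to grid coordinates by accumulating them from a starting point. This is a minimal sketch: the (0, 0) origin is arbitrary, the function names are illustrative, and only the two sample legs are taken from Table A (the legs after point A).

```python
import math

def dms(d, m=0, s=0):
    """Convert degrees/minutes/seconds to decimal degrees."""
    return d + m / 60 + s / 3600

def traverse(start_e, start_n, legs):
    """Accumulate (bearing_deg, distance_m) legs into grid coordinates.

    Bearings are plane bearings measured clockwise from grid north,
    as recorded on the plan.
    """
    points = [(start_e, start_n)]
    e, n = start_e, start_n
    for bearing_deg, dist in legs:
        t = math.radians(bearing_deg)
        e += dist * math.sin(t)
        n += dist * math.cos(t)
        points.append((e, n))
    return points

# First two legs after point A in Table A; the 0,0 origin is arbitrary.
pts = traverse(0.0, 0.0, [(dms(28, 2, 35), 17.094),
                          (dms(27, 24, 45), 37.692)])
```

Each accumulated point preserves the leg length: the straight-line distance between consecutive points equals the recorded distance for that leg.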
See sheet 2 for diagram, traverses, reference marks and permanent marks and sheet 3 for tables A and B (creek points), survey report and ambulatory boundary report.
Peg pld at stations 5, 5a and 22-27
Exempt Land
Under Section 95(1)(b) of the SMI Act 2003 (Protected Area)
Plan of Lot 2
Cancelling Lot 2 on SP189914
LOCAL GOVERNMENT: Cook Shire Council LOCALITY: Cooktown
Meridian: MGA (Zone 55) vide GNSS obsr'n
Scale: 1:80 000
Format: STANDARD
SP252508
1. Certificate of Registered Owners or Lessees.
I/we The State of Queensland (Represented by Department of National Parks, Recreation, Sport and Racing)
Signed by Jason Jacob
2. Planning Body Approval.
The Planning Body hereby approves this plan in accordance with the:
3. Plans with Community Management Statement.
4. References:
CMS Number: 715314447
Dept File: GNS 2287
Local Govt: Cairns Regional Council
Surveyor:
5. Lodged by DATSIMA CAPE YORK PENINSULA TENURE RESOLUTION PROGRAM
PO Box 459
CAIRNS QLD 4870
ACTION OFFICER: L MORRISSEY
ELMS REF 2013/000861
6. Existing Created
| Title Reference | Description | New Lots | Road | Secondary Interests |
|-----------------|-------------|----------|------|---------------------|
| 47502096 | Lot 2 on SP189914 | 2 | - | |
7. Building Format Plans only
I certify that:
- As far as it is practicable to determine, no part of the building shown on this plan encroaches onto adjoining lots or roads.
- Part of the building shown on this plan encroaches onto adjoining lots and road.
8. Lodgement Fees
| Action | Amount |
|--------|--------|
| Survey Deposit | $ |
| Lodgement | $ |
| New Titles | $ |
| Photocopy | $ |
| Postage | $ |
| TOTAL | $ |
9. Insert Plan Number
SP252508
| Sym | To | Reference Marks | Bearing | Dist |
|-----|----|-----------------|---------|------|
| 5 | Pin| SP89914 | 270°36' | 0.69 |
| 6 | Pin| SP89914 | 270°36' | 0.992|
| 7 | Pin| SP89914 | 24°27'39"45" | 0.923|
| 8 | Pin| SP89914 | 32°37' | 0.98 |
| 9 | Pin| SP89914 | 32°37' | 0.98 |
| 10 | Pin| SP89914 | 32°37' | 0.98 |
| 11 | Pin| SP89914 | 32°37' | 0.98 |
| 12 | Pin| SP89914 | 32°37' | 0.98 |
| 13 | Pin| SP89914 | 32°37' | 0.98 |
| 14 | Pin| SP89914 | 32°37' | 0.98 |
| 15 | Pin| SP89914 | 32°37' | 0.98 |
| 16 | Pin| SP89914 | 32°37' | 0.98 |
| 17 | Pin| SP89914 | 32°37' | 0.98 |
| 18 | Pin| SP89914 | 32°37' | 0.98 |
| 19 | Pin| SP89914 | 32°37' | 0.98 |
| 20 | Pin| SP89914 | 32°37' | 0.98 |
| 21 | Pin| SP89914 | 32°37' | 0.98 |
| 22 | Pin| SP89914 | 32°37' | 0.98 |
| 23 | Pin| SP89914 | 32°37' | 0.98 |
| 24 | Pin| SP89914 | 32°37' | 0.98 |
| 25 | Pin| SP89914 | 32°37' | 0.98 |
| 26 | Pin| SP89914 | 32°37' | 0.98 |
| 27 | Pin| SP89914 | 32°37' | 0.98 |
| 28 | Pin| SP89914 | 32°37' | 0.98 |
**Traverses etc**
| Sym | Traversing | Dist |
|-----|------------|------|
| 5 | 6 | 0.327|
| 7 | 8 | 0.327|
| 9 | 10 | 0.327|
| 11 | 12 | 0.327|
| 13 | 14 | 0.327|
| 15 | 16 | 0.327|
| 17 | 18 | 0.327|
| 19 | 20 | 0.327|
| 21 | 22 | 0.327|
| 23 | 24 | 0.327|
| 25 | 26 | 0.327|
| 27 | 28 | 0.327|
**Diagram**
Scale: 1:2500
Insert Plan Number: SP252508
### Table B
| Bearing | Dist |
|---------|------|
| 62°53' | 273.0|
| 70°47' | 246.0|
| 69°17' | 218.0|
| 89°53' | 209.0|
| 86°14' | 180.0|
| 55°01' | 247.0|
| 39°16' | 177.0|
| 49°55' | 155.0|
| 358°09' | 200.0|
| 334°27' | 106.0|
| 344°30' | 19.0 |
| 359°08' | 247.0|
| 304°49' | 278.0|
| 237°35' | 239.0|
| 104°47' | 176.0|
| 349°59' | 247.0|
| 05°36' | 233.0|
| 26°17' | 261.0|
| 80°27' | 261.0|
| 83°13' | 269.0|
| 02°44' | 476.0|
| 71°41' | 232.0|
| 10°00' | 124.0|
| 49°05' | 214.0|
| 20°08' | 217.0|
| 51°12' | 179.0|
| 40°10' | 114.0|
| 358°12' | 114.0|
| 54°47' | 157.0|
| 09°37' | 170.0|
| 34°31' | 100.0|
| 350°28' | 189.0|
| 04°51' | 189.0|
| 30°33' | 209.0|
| 33°59' | 157.0|
| 40°45' | 245.0|
| 39°51' | 225.0|
| 43°20' | 239.0|
| 44°32' | 256.0|
| 40°24' | 225.0|
| 07°11' | 233.0|
| 13°04' | 161.0|
| 34°42' | 100.0|
| 35°31' | 107.0|
| 26°27' | 100.0|
| 33°28' | 100.0|
| 31°24' | 149.0|
| 19°02' | 222.0|
| 24°42' | 156.0|
| 30°11' | 153.0|
| 44°43' | 100.0|
| 32°23' | 107.0|
| 21°13' | 134.0|
| 35°04' | 100.0|
| 32°28' | 63.0 |
| 32°45' | 109.0|
| 02°51' | 100.0|
| 20°55' | 76.0 |
| 34°48' | 100.0|
| 25°09' | 135.0|
| 243°38' | 125.0|
| 70°51' | 189.0|
| 70°08' | 131.0|
| 33°42' | 100.0|
| 35°09' | 100.0|
| 45°59' | 100.0|
| 14°33' | 100.0|
| 43°15' | 103.0|
| 20°16' | 83.0 |
| 21°09' | 106.0|
| 330°07' | 100.0|
| 355°30' | 128.0|
| 344°07' | 133.0|
| 344°49' | 133.0|
| 74°48' | 81.0 |
| 74°52' | 73.0 |
| 42°32' | 69.0 |
| 60°33' | 69.0 |
| 25°23' | 70.0 |
| 40°33' | 123.0|
| 21°53' | 106.0|
| 29°33' | 73.0|
| 34°04' | 100.0|
| 57°08' | 100.0|
| 57°47' | 106.0|
| 34°53' | 44.0 |
| 41°42' | 62.0 |
| 23°36' | 82.0 |
| 35°04' | 100.0|
| 344°36' | 83.0 |
| 330°48' | 135.0|
| 134°49' | 70.0 |
| 355°45' | 221.0|
| 10°17' | 124.0|
| 24°47' | 199.0|
| 30°04' | 90.0 |
| 40°40' | 130.0|
| 23°54' | 240.0|
### Table B cont'd
| Bearing | Dist |
|---------|------|
| 9°41' | 152.0|
| 33°15' | 78.0 |
| 33°44' | 106.0|
| 33°36' | 134.0|
| 44°38' | 64.0 |
| 92°5 | 134.0|
| 34°16' | 92.0 |
| 342°56' | 85.0 |
| 10°36' | 100.0|
| 31°00' | 238.0|
| 31°00' | 142.0|
| 31°00' | 83.0 |
| 48°33' | 132.0|
| 110°2 | 110.0|
| 143°37' | 85.0 |
| 49°21' | 105.0|
| 18°27' | 48.0 |
| 13°56' | 59.0 |
| 26°05' | 68.0 |
| 49°30' | 69.0 |
| 49°52' | 148.0|
| 33°47' | 78.0 |
| 65°00' | 86.0 |
| 49°10' | 160.0|
| 51°20' | 15.0 |
| 68°33' | 186.0|
| 69°39' | 75.0 |
| 55°57' | 66.0 |
| 359°57' | 45.0 |
| 76°0 | 76.0 |
| 67°59' | 99.0 |
| 61°43' | 71.0 |
| 189°33' | 104.0|
| 259°54' | 66.0 |
| 109°36' | 100.0|
| 45°47' | 90.0 |
| 17°11' | 120.0|
| 29°37' | 94.0 |
| 33°38' | 98.0 |
| 30°92' | 51.0 |
| 33°33' | 63.0 |
| 29°04' | 102.0|
| 34°02' | 102.0|
| 31°22' | 80.0 |
| 33°08' | 70.0 |
| 30°00' | 93.0 |
| 29°02' | 122.0|
| 235°40' | 78.0 |
| 125°0 | 76.0 |
| 31°06' | 39.0 |
| 38°00' | 80.0 |
| 290°35' | 88.0 |
| 31°06' | 43.0 |
| 149°30' | 132.0|
| 302°55' | 125.0|
| 309°55' | 95.0 |
| 257°55' | 44.0 |
| 34°28' | 62.0 |
| 37°40' | 94.0 |
| 346°00' | 94.0 |
| 30°00' | 19.0 |
| 189°30' | 34.0 |
| 323°13' | 100.0|
| 197°55' | 103.0|
| 307°56' | 130.0|
| 318°00' | 128.0|
| 303°00' | 190.0|
| 308°11' | 187.0|
| 337°59' | 52.0 |
| 347°50' | 45.0 |
| 348°01' | 89.0 |
| 344°01' | 94.0 |
| 38°44' | 51.0 |
| 109°56' | 42.0 |
| 347°56' | 42.0 |
| 359°56' | 44.0 |
| 12°51' | 32.0 |
| 53°51' | 51.0 |
### Table A
| Bearing | Dist |
|---------|------|
| 302°36' | 39.0 |
| 68°36' | 21.0 |
| 63°36' | 25.0 |
| 43°16' | 12.0 |
| 31°06' | 65.0 |
| 59°20' | 90.0 |
| 13°46' | 143.0|
| 65°33' | 22.0 |
| 53°53' | 181.0|
**Ambulatory Boundary Report**
This plan has been prepared for transfer of Lot 2 under the Aboriginal Land Act 1991 and re-creation of the National Park under the Cape York Peninsula Heritage Act 2007 over the grant. The ambulatory boundary at Bridge Creek is defined by new source material and has been determined from SPOT Maps (2004-2010) satellite imagery in accordance with Sect 3.1 of the Cadastral Survey Requirements.
The Bridge Creek ambulatory boundary is the top of the river bank, a feature which is evident on the imagery used to compile the boundary. The positioning of the river in relation to the cadastral boundaries has been made by plotting the position of the marks placed onto the imagery. The position of the monuments on the ground in relation to the ambulatory boundary was consistent with the plotting of the points, confirming the relative positional accuracy of the imagery.
River points shown on the plan are determined from the imagery and not surveyed by measurement on the ground. Future definitions of the boundary may vary depending on the accuracy of the interpretation of the feature adopted and identified in the image.
Watershed feature boundary determined from 1:50,000 topographical maps Mount Cook and Mt Webb.
Survey was carried out in accordance with the Cadastral Survey Requirements, Section 3.3.1 (Remote Area Surveys).
8039 ha
Parish of Munburra
Parish of Bitwon
County of Mebulla
Parish of Cowton
Parish of Tucia
Original information compiled from SP104579, SP104582 and DP25207 in the Department of Natural Resources and Mines
See sheet 2 for reference marks, permanent marks, watershed points and ambulatory boundary report
Plan of Lot 215
Cancelling Lot 215 on NPW46
Scale: 1:60 000
Format: STANDARD
Meridian: MGA (Zone 55) vide SP104579
Survey Records No
LOCAL GOVERNMENT: Cook Shire Council LOCALITY: Starcke
Cadastral Surveyor
Date: 13/02/2013
Copyright protects the plan/s being ordered by you. Unauthorised reproduction or amendments are not permitted.
### Table A
**Watershed Points**
| Bearing | Dist |
|---------|------|
| 310°11' | 817.0 |
| 340°25' | 375.0 |
| 352°25' | 293.0 |
| 353°10' | 260.0 |
| 29°36' | 345.0 |
| 28°40' | 376.0 |
| 27°45' | 374.0 |
| 77°35' | 206.0 |
| 120° | 120.0 |
| 30°35'55"| 107.0 |
| 336°55' | 143.0 |
| 336°11' | 141.0 |
| 332°44'8"| 367.0 |
| 343°54' | 475.0 |
| 345°14' | 373.0 |
| 328°32' | 135.0 |
| 170° | 170.0 |
| 9°45' | 165.0 |
| 4°12' | 269.0 |
| 10° | 10.0 |
| 344°28' | 113.0 |
| 306° | 297.0 |
| 39°10' | 263.0 |
| 333°35' | 319.0 |
| 333°11' | 319.0 |
| 313°25' | 321.0 |
| 342°42' | 304.0 |
| 346°40' | 307.0 |
| 228°33' | 282.0 |
| 241°30' | 285.0 |
| 264°40' | 295.0 |
| 233°43' | 136.0 |
| 233°27' | 136.0 |
| 216°53' | 260.0 |
| 239°38' | 262.0 |
| 227°50' | 437.0 |
| 234°26' | 197.0 |
| 194°34' | 184.0 |
| 244°38' | 165.0 |
| 264°33' | 225.0 |
| 241°31' | 271.0 |
| 295°38' | 274.0 |
| 321°19' | 125.0 |
| 321°11' | 125.0 |
| 251°46' | 223.0 |
| 313°10' | 463.0 |
| 263°34' | 163.0 |
| 255°27' | 151.0 |
| 245°10' | 163.0 |
| 229°58' | 35.0 |
| 260°06' | 115.0 |
| 260°00' | 120.0 |
| 272°13' | 158.0 |
| 255°00' | 160.0 |
| 255°04' | 227.0 |
| 239°29' | 309.0 |
| 239°13' | 125.0 |
| 242°13' | 74.0 |
| 348°40' | 153.0 |
| 348°40' | 153.0 |
| 306°54' | 29.0 |
| 260°15' | 160.0 |
| 260°55' | 49.0 |
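The watershed points above are whole-circle bearings (measured clockwise from grid north) paired with plane distances in metres. As an illustrative aside only, not part of the registered plan, such a bearing/distance pair can be reduced to easting/northing offsets on the MGA grid:

```python
import math

def offset(bearing_dms: str, dist_m: float):
    """Convert a whole-circle bearing string (e.g. "310°11'") and a
    distance in metres into (delta_easting, delta_northing)."""
    # Parse degrees and optional minutes/seconds from the bearing string.
    parts = bearing_dms.replace('"', "'").split("°")
    deg = float(parts[0])
    mins = secs = 0.0
    if len(parts) > 1 and parts[1]:
        ms = [p for p in parts[1].split("'") if p]
        if ms:
            mins = float(ms[0])
        if len(ms) > 1:
            secs = float(ms[1])
    theta = math.radians(deg + mins / 60 + secs / 3600)
    # Bearings run clockwise from grid north, so
    # east = d*sin(theta), north = d*cos(theta).
    return dist_m * math.sin(theta), dist_m * math.cos(theta)

# First watershed point in Table A: 310°11', 817.0 m
dE, dN = offset("310°11'", 817.0)  # ≈ (-624.2, 527.2)
```

Summing these offsets around a closed boundary gives the traverse misclose, which is one way the relative accuracy quoted in the boundary reports can be checked.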
### Reference Marks
| Stn | To | Origin | Bearing | Dist |
|-----|----|--------|---------|------|
| 2 | Pin fd | Bloodwood fd | 317°52' | 156.3 |
| 2 | Bloodwood fd | | 330°48' | 3.557 |
| 3 | OIP | 89°/SP104579 | 3°26'12" | 10 |
| 4 | OIP ORT | 89°/SP104579 | 162°48' | 66 |
| 5 | OIP | #2/SP104578 | 17°33' | 0.307 |
### Permanent Marks
| Stn | Origin | Bearing | Dist | PM N° |
|-----|--------|---------|------|-------|
| 1 | OPM 84/SP104579 | at station | - | 21490 |
| 1 | OPM 84/SP104579 | 72°16'30" | 961.28 | 21478 |
| 2 | OPM (gone) 64/SP104579 | 54°34'55" | 921.48 | 21477 |
| 4 | OPM | 43°38'40" | 243.583 | 90746 |
| 6 | OPM 40/SP104580 | at station | - | 22372 |
### Ambulatory Boundary Report
This plan has been prepared for transfer of Lot 215 under the Aboriginal Land Act 1991 and re-creation of the National Park under the Nature Conservation Act 1992 and the Aboriginal Land Act 1991 over the grant. The watershed boundary of Lot 215 was determined from the 1:50 000 Mount Cookbar and Starcke River topographic maps and compiled from DP252507.
The positioning of the watershed boundary in relation to the cadastral boundaries has been made by plotting the position of the marks placed onto the mapping. The position of the monuments on the ground in relation to the watershed boundary is consistent with the plotting of the points, confirming the relative positional accuracy of the mapping. Ambulatory boundary points shown on the plan are determined from the mapping and not surveyed by measurement on the ground. Future definitions of the boundary may vary depending on the accuracy of the interpretation of the feature adopted and identified on the topographic maps.
## Survey Report
This plan has been prepared for the issue of a freehold title of Lot 3 under the Aboriginal Land Act 1991. The survey was carried out in accordance with the Cadastral Survey Requirements Section 3.31 'Remote Area Surveys'.
The tidal boundary has been determined from SPOTMaps (2004 - 2010), 2.5 satellite imagery held in the DNRM. Imagery available through Google Earth, on which features were clearer, was also used to assist in interpretation of the SPOTMaps imagery.
The subject land is USL and is currently transferable under section 177 (1) (g) of the Aboriginal Land Act 1991.
The boundaries on lot 3 on CP887717 are unsurveyed; therefore subdivisions 2 - 4 of Part 7 division 2 of the SMI Act 2003 do not apply. Consequently, the tidal boundary is determined using new source materials under Part 7 division 7 subdivision 5, section 89 of the SMI Act 2003.
For the most part, the island is surrounded by rocky beaches. The natural features in accordance with the ambulatory boundary principles that define the tidal boundary are evidenced by the following:
- Where the foreshore is a sandy beach, the landward edge of the top of the beach is adopted.
- Where the foreshore is a rocky beach, the change in colouration due to the long term tidal influence on the rocks has been adopted. This is also evidenced by a high flotsam line in some places.
- Where the foreshore is mangroves (on part of the south eastern edge of the island) the landward edge of the mangroves has been adopted.
The tidal boundary as adopted is not subject to tidal inundation under any combination of astronomical conditions and average meteorological conditions.
The features are in a stable location that has been observed to have long term sustainability under normal seasonal events, e.g. the seaward edge of the vegetation, the landward edge of mangroves or the upper extent of the tidal stain on the stable rocky beach.
The location of the tidal boundary is consistent with the public interest as defined in Part 7 of the SMI Act 2003.
---
### Table A
| Bearing | Dist |
|---------|------|
| 18°19'45" | 6.9 |
| 135°45' | 7.6 |
| 141°10' | 6.6 |
| 164°45' | 6.1 |
| 180°00' | 6.7 |
| 172°30' | 6.6 |
| 173°33' | 6.1 |
| 174°08' | 6.4 |
| 134°05' | 8.7 |
| 126°25' | 7.4 |
| 155°49' | 9.0 |
| 179°49' | 6.6 |
| 200°03' | 6.6 |
| 175°25' | 9.0 |
| 100°14' | 6.3 |
| 244°11' | 6.8 |
| 234°49' | 9.4 |
| 265°00' | 6.5 |
| 062°45' | 9.2 |
| 244°00' | 6.5 |
| 276°14' | 9.0 |
| 264°16' | 11.7 |
| 275°29' | 6.6 |
| 269°02' | 10.5 |
| 266°30' | 6.2 |
| 274°00' | 10.5 |
| 273°33' | 8.7 |
| 275°25' | 10.1 |
| 294°30' | 12.2 |
| 303°06' | 10.8 |
| 318°33' | 10.5 |
| 318°10' | 5.5 |
| 290°10' | 10.5 |
| 265°20' | 6.0 |
| 253°48' | 5.4 |
| 234°01' | 8.2 |
| 255°23' | 10.0 |
| 274°00' | 7.3 |
| 260°17' | 14.4 |
| 326°55' | 5.6 |
| 327°55' | 10.7 |
| 338°37' | 14.0 |
| 295°17' | 5.1 |
| 352°36' | 7.3 |
| 316°10' | 5.4 |
| 324°00' | 5.4 |
| 335°02' | 6.1 |
| 355°17' | 10.7 |
| 247°36' | 10.5 |
| 216°16' | 6.0 |
| 269°00' | 6.0 |
| 333°50' | 6.0 |
| 286°05' | 6.0 |
| 202° | 5.0 |
| 519°33' | 2.7 |
| 075°15' | 3.3 |
| 316°15' | 3.1 |
| 343°48' | 4.8 |
| 327°45' | 6.5 |
| 308°28' | 5.0 |
| 117°5 | 10.5 |
| 317°58' | 3.0 |
| 336°00' | 6.6 |
| 225°11' | 10.5 |
| 333°33' | 10.5 |
| 344°08' | 7.5 |
| 333°32' | 7.0 |
| 201°41' | 10.5 |
| 233°47' | 13.7 |
| 357°41' | 2.5 |
| 49°15' | 5.5 |
| 286°58' | 3.2 |
| 304°48' | 4.8 |
| 314°07' | 4.1 |
| 279°49' | 9.0 |
| 280°49' | 13.1 |
| 282°55' | 11.3 |
| 293°12' | 12.8 |
| 264°05' | 14.8 |
| 316°25' | 8.3 |
| 289°42' | 7.0 |
| 309°22' | 7.7 |
| 305°11' | 3.9 |
| 303°32' | 6.6 |
---
### Table B
| Bearing | Dist |
|---------|------|
| 263°28' | 7.1 |
| 324°04' | 8.6 |
| 206°35' | 10.3 |
| 119°37' | 6.3 |
| 333°45' | 11.1 |
| 374°41' | 6.6 |
| 309°49' | 16.2 |
| 322°48' | 10.8 |
| 300°05' | 7.9 |
| 345°14' | 5.0 |
| 316°25' | 7.9 |
| 334°47' | 6.6 |
| 320°55' | 3.5 |
| 320°55' | 3.5 |
| 493°33' | 9.3 |
| 63°34' | 7.7 |
| 73°42' | 9.1 |
| 64°55' | 6.5 |
| 3°57' | 6.5 |
| 90°36' | 5.2 |
| 65°00' | 6.6 |
| 79°22' | 6.6 |
| 64°46' | 6.6 |
| 98°59' | 6.7 |
| 124°27' | 6.9 |
| 102°38' | 6.9 |
| 97°10' | 6.0 |
| 101°16' | 6.0 |
| 118°41' | 6.9 |
| 99°32' | 6.9 |
| 82°50' | 9.0 |
| 82°24' | 9.0 |
| 102°38' | 7.3 |
| 78°32' | 6.4 |
| 53°32' | 3.9 |
| 110°36' | 6.9 |
| 107°44' | 6.4 |
| 73°42' | 6.0 |
| 64°59' | 9.5 |
| 12°04' | 6.5 |
| 87°55' | 6.4 |
| 125°45' | 7.4 |
| 109°48' | 9.2 |
| 170°44' | 7.4 |
| 105°55' | 7.9 |
| 154°44' | 6.6 |
| 105°33' | 9.4 |
| 139°00' | 10.2 |
| 137°36' | 9.0 |
| 102°22' | 9.0 |
| 128°18' | 9.0 |
| 102°22' | 9.0 |
| 152°00' | 9.9 |
| 1354°47' | 8.9 |
| 1353°47' | 9.7 |
| 157°57' | 9.2 |
| 148°33' | 8.3 |
| 149°00' | 7.3 |
| 135°18' | 6.6 |
| 105°55' | 10.3 |
| 142°00' | 8.1 |
| 163°00' | 8.0 |
| 125°22' | 5.6 |
| 135°44' | 5.6 |
| 123°55' | 5.1 |
| 154°35' | 7.5 |
| 147°22' | 10.4 |
| 145°42' | 10.1 |
| 150°00' | 7.5 |
| 154°55' | 8.0 |
| 156°44' | 6.4 |
| 142°20' | 6.8 |
| 161°59' | 7.5 |
| 148°33' | 10.6 |
| 135°44' | 4.6 |
| 134°45' | 7.7 |
| 125°34' | 9.6 |
| 123°27' | 5.3 |
| 82°50' | 8.3 |
| 70°32' | 7.6 |
| 341° | 5.6 |
| 48°00' | 7.3 |
| 48°00' | 7.3 |
---
### Plan of Lot 3
Cancelling Lot 3 on CP887717
**LOCAL GOVERNMENT:** Cook Shire Council
**LOCALITY:** Starcke
Meridian: Approx MGA (Zone 55)
Survey Records No
Scale: 1:2500
Format: STANDARD
---
I, Lyle van Tienhoven, hereby certify that the land comprised in this plan was surveyed by me personally, that the plan is accurate, that the said survey was performed in accordance with the provisions of the Surveying and Mapping Infrastructure Act 2003 and the Surveyors Act 2003 and associated Regulations and Standards, and that the said survey was completed on 18/2/2013.
Cadastral Surveyor
11/2/2013
Date
1. Certificate of Registered Owners or Lessees.
I/We The State of Queensland (represented by Department of Natural Resources and Mines—Land Act)
*as Registered Owners of this land agree to this plan and dedicate the Public Use Land as shown hereon in accordance with Section 50 of the Land Title Act 1994.
*as Lessees of this land agree to this plan.
Signature of *Registered Owners *Lessees
* Rule out whichever is inapplicable
2. Planning Body Approval.
* hereby approves this plan in accordance with the:
%
Dated this day of
#
#
* Insert the name of the Planning Body. # Insert designation of signatory or delegation % Insert applicable approving legislation.
3. Plans with Community Management Statement:
CMS Number:
Name:
Dept File: 2013/000800
Local Govt:
Surveyor: 706
4. References:
WARNING: Folded or Mutilated Plans will not be accepted. Plans may be rolled. Information may not be placed in the outer margins.
5. Lodged by DATSIMA CAPE YORK PENINSULA TENURE RESOLUTION PROGRAM PO Box 4597 CAIRNS QLD 4870 ACTION OFFICER L MORRISSEY ELVRS Ref: 2013/000800 (Include address, phone number, reference, and Lodger Code) C0341
6. Existing Created
| Title Reference | Description | New Lots | Road | Secondary Interests |
|-----------------|-------------|----------|------|---------------------|
| 47005130 | Lot 3 on CP887717 | 3 | - | - |
Administrative Advices
| Dealing | Lots to be encumbered |
|---------|-----------------------|
| 710624787 | 3 |
7. Orig Grant Allocation:
8. Map Reference: 7869-31342
9. Parish: Ninian
10. County: Melville
11. Passed & Endorsed: Department of Natural Resources and Mines
Dated: 14-03-2013
Signed: [Signature]
Designation: Principal Surveyor
12. Building Format Plans only.
I certify that:
- As far as it is practicable to determine, no part of any building shown on this plan encroaches onto adjoining lots or roads.
- Part of the building shown on this plan encroaches onto adjoining lots and road.
Cadastral Surveyor/Director * Date
13. Lodgement Fees: Single Action No Fees Penalties
Survey Deposit: $
Lodgement: $
New Titles: $
Photocopy: $
Postage: $
TOTAL: $
14. Insert Plan Number SP252519
Survey Report
This plan has been prepared for the issue of a freehold title of Lot 5 under the Aboriginal Land Act 1991. The survey was carried out in accordance with the Cadastral Survey Requirements Section 3.31 'Remote Area Surveys'.
The tidal boundary has been determined from SPOTMaps (2004 - 2010) 2.5 satellite imagery held in the DNRM. The subject land is USL and is currently transferable under section 177 (1) (g) of the Aboriginal Land Act 1991. The boundaries on lot 5 on CP887718 are unsurveyed; therefore subdivisions 2 - 4 of Part 7 division 2 of the SMI Act 2003 do not apply. Consequently, the tidal boundary is determined using new source materials under Part 7 division 7 subdivision 5, section 89 of the SMI Act 2003.
For the most part, the island is surrounded by rocky beaches. The natural features in accordance with the ambulatory boundary principles that define the tidal boundary are evidenced by the following.
- Where the foreshore is a sandy beach, the landward side of the top of the beach is adopted. This mostly coincides with the seaward edge of the vegetation.
- Where the foreshore is a rocky beach, the change in colouration due to the long term tidal influence on the rocks has been adopted.
- Where the foreshore is mangroves, the landward edge of the mangroves has been adopted.
The tidal boundary is not subject to tidal inundation under any combination of astronomical conditions and average meteorological conditions.
The features are in a stable location that has been shown to have long term sustainability under normal seasonal events, ie landward edge of mangroves or the tidal stain on a stable rocky beach.
The location of the tidal boundary is consistent with the public interest as defined in Part 7 of the SMI Act 2003.
Tidal boundary defined using new source material under Section 89 of the SMI Act 2003
1. Certificate of Registered Owners or Lessees.
I/We The State of Queensland (Represented by Department of Natural Resources and Mines—Land Act)
*as Registered Owners of this land agree to this plan and dedicate the Public Use Land as shown hereon in accordance with Section 50 of the Land Title Act 1994.
*as Lessees of this land agree to this plan.
Signature of *Registered Owners *Lessees
* Rule out whichever is inapplicable
2. Planning Body Approval.
* hereby approves this plan in accordance with the:
% Dated this ................................... day of ...................................
........................................ # ........................................ #
* Insert the name of the Planning Body. % Insert applicable approving legislation.
# Insert designation of signatory or delegation
3. Plans with Community Management Statement:
CMS Number:
Name:
Orig File: 2013/000800
Local Govt:
Surveyor: 709
4. References:
5. Lodged by DATSIMA Cape York Peninsula Tenure Resolution Program PO Box 4587 CAIRNS QLD 4870 ACTION OFFICER: L. MORRISSEY eLVAS Ref 2013/000800 (Include address, phone number, reference, and Lodger Code) CS2341
6. Existing Created
| Title Reference | Description | New Lots | Road | Secondary Interests |
|-----------------|-------------|----------|------|---------------------|
| 47005129 | Lot 5 on CP887718 | 5 | - | - |
Administrative Advices
| Lease | Lots to be encumbered |
|-------|-----------------------|
| 710624787 | 5 |
7. Orig Grant Allocation:
8. Map Reference: 7869–43234
9. Parish: Melville
10. County: Melville
11. Passed & Endorsed: Department of Natural Resources and Mines
Date: 12.01.2013
Signed: Designation: Principal Surveyor
12. Building Format Plans only.
I certify that:
- As far as it is practicable to determine, no part of the building shown on this plan encroaches onto adjoining lots or roads.
- Part of the building shown on this plan encroaches onto adjoining lots and road
Cadastral Surveyor/Director * Date
13. Lodgement Fees: Strike Action No Fee Required
Survey Deposit: $
Lodgement: $
New Titles: $
Photocopy: $
Postage: $
TOTAL: $
14. Insert Plan Number SP252520
Survey Report
This plan has been prepared for the issue of a freehold title of Lot 4 under the Aboriginal Land Act 1991. The survey was carried out in accordance with the Cadastral Survey Requirements Section 3.31 'Remote Area Surveys'.
The tidal boundary has been determined from SPOTMaps (2004 - 2010) 2.5 satellite imagery held in the DNRM. The subject land is USL and is currently transferable under section 177 (1) (g) of the Aboriginal Land Act 1991. The boundaries on lot 4 on CP887719 are unsurveyed; therefore subdivisions 2 - 4 of Part 7 division 2 of the SMI Act 2003 do not apply. Consequently, the tidal boundary is determined using new source materials under Part 7 division 7 subdivision 5, section 89 of the SMI Act 2003.
For the most part, the island is surrounded by rocky beaches. The natural features in accordance with the ambulatory boundary principles that define the tidal boundary are evidenced by the following:
- Where the foreshore is a sandy beach, the landward side of the top of the beach is adopted. This mostly coincides with the seaward edge of the vegetation.
- Where the foreshore is a rocky beach, the change in colouration due to the long term tidal influence on the rocks has been adopted.
- Where the foreshore is mangroves, the landward edge of the mangroves has been adopted.
The tidal boundary is not subject to tidal inundation under any combination of astronomical conditions and average meteorological conditions.
The features are in a stable location that has been shown to have long term sustainability under normal seasonal events, ie landward edge of mangroves or the tidal stain on a stable rocky beach.
The location of the tidal boundary is consistent with the public interest as defined in Part 7 of the SMI Act 2003.
---
**Table A**
Tidal boundary points
| Bearing | Dist |
|---------|------|
| 10°9'33" | 1/2 |
| 15°20'08" | 1/6 |
| 20°17'0" | 1/5.5 |
| 22°34'1" | 1/8.4 |
| 25°24'3" | 6/2.8 |
| 8°29'0" | 2/5.6 |
| 30°05'6" | 2/1 |
| 32°45'0" | 2/1.5 |
| 35°35'0" | 3/3 |
| 0°1'4" | 10/5 |
| 15°50'0" | 1/4.5 |
| 34°40'1" | 1/8 |
| 35°55'0" | 1/3.5 |
| 40°1" | 1/6 |
| 35°20'04" | 10/5 |
| 33°37'0" | 8/4 |
| 21°58" | 13/0 |
| 13°57" | 1/0 |
| 53°39" | 1/9 |
| 28°0" | 1/8 |
| 38°41" | 13/3 |
| 89°42" | 18/9 |
| 93°00" | 13/1 |
| 12°0" | 18/5 |
| 17°42" | 14/5 |
| 46°2" | 13/2 |
| 170°58" | 2/4 |
| 170°55" | 3/4.5 |
| 175°38" | 14/9 |
---
**Approx MGA Co-Ordinates Zone 55**
| Stn | Easting | Northing |
|-----|---------|----------|
| D | 234 200 | 8 430 500 |
---
I, Lyle van Tienhoven, hereby certify that the land comprised in this plan was surveyed by me personally, that the plan is accurate, that the said survey was performed in accordance with the Surveying and Mapping Infrastructure Act 2003, the Surveyors Act 2003 and associated Regulations and Standards, and that the said survey was completed on 9.2.2013.
Cadastral Surveyor
11.3.2013
---
**Plan of Lot 4**
NPW531
DP252501
Tidal boundary defined using new source material under Section 89 of the SMI Act 2003
---
**LOCAL GOVERNMENT: Cook Shire Council**
**LOCALITY: Starcke**
Meridian: Approx MGA (Zone 55)
Survey Records No
Scale: 1:4000
Format: STANDARD
SP252521
1. Certificate of Registered Owners or Lessees.
I/We The State of Queensland (Represented by Department of Natural Resources and Mines - Land Act).
*as Registered Owners of this land agree to this plan and dedicate the Public Use Land as shown hereon in accordance with Section 50 of the Land Title Act 1994.
*as Lessee of this land agree to this plan.
Signature of *Registered Owners *Lessee
* Rule out whichever is inapplicable
2. Planning Body Approval.
* hereby approves this plan in accordance with the:
%
Dated this day of
#
#
* Insert the name of the Planning Body. % Insert applicable approving legislation.
# Insert designation of signatory or delegation
3. Plans with Community Management Statement:
CMS Number:
Name:
4. References:
Dept Ref: 2013/000800
Local Govt:
Surveyor: 710
5. Lodged by DATSIMA CAPE YORK PENINSULA TENURE RESOLUTION PROGRAM PO Box 4547 CAIRNS QLD 4870 ACTION OFFICER L MORRISSEY ELWAS REF: 2013/000800 (Include address, phone number, reference, and Lodger Code) CS2 341
6. Existing Created
| Title Reference | Description | New Lots | Road | Secondary Interests |
|-----------------|-------------|----------|------|---------------------|
| 47005128 | Lot 4 on CP887719 | 4 | - | - |
Administrative Advices
| Dealing | Lots to be encumbered |
|---------|-----------------------|
| 710624787 | 4 |
7. Orig Grant Allocation:
8. Map Reference: 7869-4.34.23
9. Parish: Melville
10. County: Melville
11. Passed & Endorsed: Department of Natural Resources and Mines
Date: 12-03-2013
Signed: [Signature]
Designation: Principal Surveyor
12. Building Format Plans only.
I certify that:
- As far as it is practicable to determine, no part of the building shown on this plan encroaches onto adjoining lots or roads.
- Part of the building shown on this plan encroaches onto adjoining lots and road.
Cadastral Surveyor/Director * Date
13. Lodgement Fees: STRIP ACTION NO FEE PAYABLE
Survey Deposit $ ..........
Lodgement $ ..........
New Titles $ ..........
Photocopy $ ..........
Postage $ ..........
TOTAL $ ..........
14. Insert Plan Number SP252521
Survey Report
This plan has been prepared for the issue of a freehold title of Lots 1 and 2 under the Aboriginal Land Act 1991. The survey was carried out in accordance with the Cadastral Survey Requirements Section 3.31 'Remote Area Surveys'.
The tidal boundary has been determined from SPOTMaps (2004 - 2010) 2.5 satellite imagery held in the DNRM. Imagery available through Google Earth, on which features were clearer, was also used to assist in interpretation of the SPOTMaps imagery.
The subject land is USL and is currently transferable under section 177 (1) (g) of the Aboriginal Land Act 1991. The boundaries on Lots 1 and 2 on CP887590 are unsurveyed; therefore subdivisions 2 - 4 of Part 7 division 2 of the SMI Act 2003 do not apply. Consequently, the tidal boundary is determined using new source materials under Part 7 division 7 subdivision 5, section 89 of the SMI Act 2003.
For the most part, the islands are rock without beaches. The natural feature in accordance with the ambulatory boundary principles that define the tidal boundary is evidenced by the change in colouration due to the long term tidal influence on the rocks.
The location of the tidal boundary is consistent with the public interest as defined in Part 7 of the SMI Act 2003.
---
**Table A**
| Bearing | Dist |
|---------|------|
| 220°10' | 3.9 |
| 230°10' | 4.5 |
| 216°55' | 7.8 |
| 235°35' | 6.8 |
| 273°33' | 6.2 |
| 273°34' | 5.1 |
| 326°55' | 5.7 |
| 333°49' | 3.5 |
| 445°45' | 4.4 |
| 20°58' | 3.5 |
| 49°55' | 5.4 |
| 49°54' | 5.4 |
| 49°37' | 5.2 |
| 50°55' | 5.3 |
| 70°03' | 3.4 |
| 147°13' | 3.5 |
| 147°05' | 4.4 |
| 147°56' | 1.8 |
| 170°35' | 2.0 |
| 164°17' | 2.7 |
| 159°30' | 3.8 |
**Table B**
| Bearing | Dist |
|---------|------|
| 344°17' | 2.2 |
| 293°47' | 2.9 |
| 78°45' | 3.9 |
| 103°33' | 3.5 |
| 29°43' | 3.2 |
| 29°59' | 3.5 |
| 55°26' | 3.5 |
| 60°00' | 6.0 |
| 103°34' | 3.1 |
| 185°34' | 4.3 |
| 89°25' | 4.0 |
| 78°44' | 2.3 |
| 127°05' | 3.0 |
| 168°48' | 2.3 |
| 158°33' | 1.7 |
| 173°44' | 2.0 |
| 269°24' | 2.0 |
| 19°50' | 2.3 |
| 19°33' | 2.8 |
| 224°48' | 2.9 |
| 226°44' | 3.3 |
| 203°55' | 3.9 |
| 237°44' | 3.0 |
| 273°33' | 3.1 |
| 285°53' | 3.7 |
| 320°03' | 3.1 |
| 315°04' | 3.9 |
| 169°17' | 2.0 |
| 157°54' | 2.0 |
| 225°04' | 3.0 |
| 225°05' | 2.4 |
| 234°50' | 2.2 |
---
**Plan of Lots 1 & 2**
Cancelling Lots 1 & 2 on CP887590
LOCAL GOVERNMENT: Cook Shire Council LOCALITY: Starcke
Meridian: Approx MGA (Zone 55) Survey Records No
Scale: 1:2500
Format: STANDARD
---
I, Lyle van Tienhoven, hereby certify that the land comprised in this plan was surveyed by me personally or under my immediate supervision, that the said survey was performed in accordance with the Surveying and Mapping Infrastructure Act 2003 and the Surveyors Act 2003 and relevant Regulations and Standards, and that the said survey was completed on 28.8.2013.
Lyle van Tienhoven
Cadastral Surveyor
Date: 11.3.2013.
1. Certificate of Registered Owners or Lessees.
I/We The State of Queensland (Represented by Department of Natural Resources and Mines—Land Act)
(Names in full)
*as Registered Owners of this land agree to this plan and dedicate the Public Use Land as shown hereon in accordance with Section 50 of the Land Title Act 1994.
*as Lessees of this land agree to this plan:
Signature of *Registered Owners *Lessees
* Rule out whichever is inapplicable
2. Planning Body Approval.
* hereby approves this plan in accordance with the: %
Dated this day of
# # #
* Insert the name of the Planning Body. % Insert applicable approving legislation.
# Insert designation of signatory or delegation
3. Plans with Community Management Statement:
CMS Number:
Name:
Dept File: 2013/000800
Local Govt:
Surveyor: 708
4. References:
WARNING: Folded or Mutilated Plans will not be accepted. Plans may be rolled. Information may not be placed in the outer margins.
5. Lodged by DATSIMA
CAPE YORK PENINSULA TENURE RESOLUTION PROGRAM
PO Box 4597
CAIRNS QLD 4870
Action Officer: L. MORRISSEY
eLVRS Ref: 2013/000800
(Include address, phone number, reference, and Lodger Code) C62341
6. Existing Created
| Title Reference | Description | New Lots | Road | Secondary Interests |
|-----------------|-------------------|----------|------|---------------------|
| 47005131 | Lot 2 on CP887590 | 2 | - | - |
| 47005132 | Lot 1 on CP887590 | 1 | - | - |
Administrative Advices
| Dealing | Lots to be encumbered |
|---------|-----------------------|
| 710624787 | 1 & 2 |
12. Building Format Plans only.
I certify that:
- As far as it is practicable to determine, no part of the building shown on this plan encroaches onto adjoining lots or roads.
- Part of the building shown on this plan encroaches onto adjoining lots and road.
Cadastral Surveyor/Director * Date
13. Lodgement Fees: STRIP ACTION NO FEES PAYABLE
Survey Deposit
Lodgement $
New Titles $
Photocopy $
Postage $
TOTAL $
14. Insert Plan Number SP252522
Survey Report
This plan has been prepared for the issue of a freehold title of Lot 1 under the Aboriginal Land Act 1991. The survey was carried out in accordance with the Cadastral Survey Requirements Section 3.31 'Remote Area Surveys'.
The tidal boundary has been determined from SPOTMaps (2004 - 2010) 2.5 satellite imagery held in the DNRM. Imagery available through Google Earth, on which features were much clearer, was also used to assist in interpretation of the SPOTMaps imagery.
The subject land is USL and is currently transferable under section 177(1)(g) of the Aboriginal Land Act 1991.
The boundaries on lot 1 on CP887589 are unsurveyed; therefore subdivisions 2 - 4 of Part 7 division 2 of the SMI Act 2003 do not apply.
Consequently, the tidal boundary is determined using new source materials under Part 7 Division 7 Subdivision 5, section 89 of the SMI Act 2003.
For the subject lot, the land is surrounded by rocks and rocky beaches. The natural features in accordance with the ambulatory boundary principles that define the tidal boundary are evidenced by the following:
- Where the foreshore is a sandy beach, the landward side of the top of the beach is adopted. This mostly coincides with the seaward edge of the vegetation.
- Where the foreshore is a rocky beach, the change in colouration due to the long term tidal influence on the rocks has been adopted.
The tidal boundary is not subject to tidal inundation under any combination of astronomical conditions and average meteorological conditions.
The features are in a stable location that has been shown to have long term sustainability under normal seasonal events, ie seaward edge of vegetation or the tidal stain on rocks on the beach.
The location of the tidal boundary is consistent with the public interest as defined in Part 7 of the SMI Act 2003.
---
**Table A**
| Bearing | Dist |
|---------|------|
| 254° | 2.9 |
| 269°16' | 2.9 |
| 334°16' | 4.25 |
| 349°16' | 5.1 |
| 96°55' | 3.9 |
| reverse | 3.8 |
| 186°24' | 4.2 |
**Table B**
| Bearing | Dist |
|---------|------|
| 132° | 9.7 |
| 198°35' | 7.0 |
| 149°02' | 7.1 |
| 221°09' | 9.9 |
| 237°13' | 9.1 |
| 238°13' | 9.5 |
| 296°41' | 13.0 |
| 309°55' | 7.1 |
| 273°57' | 7.0 |
| 310°48' | 9.3 |
| 221°13' | 7.7 |
| 282°42' | 5.5 |
| 336°11' | 9.2 |
| 45°16' | 6.5 |
| 355°23' | 7.4 |
| 30°07' | 7.5 |
| 69°20' | 8.2 |
| 22°03' | 13.6 |
| 34°41' | 8.4 |
| 52°37' | 5.1 |
| 09°17' | 10.7 |
| 106°32' | 7.7 |
| 134°16' | 11.6 |
| 128°16' | 7.3 |
| 198°33' | 7.9 |
| 175°00' | 6.9 |
| 127°31' | 13.7 |
**Approx MGA Co-Ordinates Zone 55**
| Stn | Easting | Northing |
|-----|---------|----------|
| C | 228 320 | 8 434 230|
| D | 228 350 | 8 434 150|
Tidal boundary defined using new source material under Section 89 of the SMI Act 2003
---
I, Lyle van Tienhoven, hereby certify that the land comprised in this plan was surveyed by me personally, that the plan is accurate, that the said survey was performed in accordance with the Surveying and Mapping Infrastructure Act 2003 and the Surveyors Act 2003 and associated Regulations and Standards, and that the said survey was completed on 28.2.2013.
Lyle van Tienhoven
Cadastral Surveyor
Date: 11-3-2013
---
LOCAL GOVERNMENT: Cook Shire Council
LOCALITY: Starcke
Meridian: Approx MGA (Zone 55)
Survey Records No
Plan of Lot 1
Scale: 1:1500
Format: STANDARD
Cancelling Lot 1 on CP887589
SP252523
Survey Report
This plan has been prepared for the issue of a freehold title of Lot 2 under the Aboriginal Land Act 1991. The survey was carried out in accordance with the Cadastral Survey Requirements Section 3.31 "Remote Area Surveys".
The tidal boundary was determined from SPOTMaps (2004 - 2010) 2.5 satellite imagery held in the DNRM. Imagery available through Google Earth, on which features were much clearer, was also used to assist in interpretation of the SPOTMaps imagery.
The subject land is USL and is currently transferable under section 177 (1) (g) of the Aboriginal Land Act 1991.
The boundaries on lot 2 on CP887589 are unsurveyed; therefore subdivisions 2 - 4 of Part 7 division 2 of the SMI Act 2003 do not apply. Consequently, the tidal boundary is determined using new source materials under Part 7 division 7 subdivision 5, section 89 of the SMI Act 2003.
For the most part, the island is surrounded by rocks and rocky beaches. The natural features in accordance with the ambulatory boundary principles that define the tidal boundary are evidenced by the following:
- Where the foreshore is a sandy beach, the landward side of the top of the beach is adopted. This mostly coincides with the seaward edge of the vegetation.
- Where the foreshore is a rocky beach, the change in colouration due to the long term tidal influence on the rocks has been adopted.
The tidal boundary is not subject to tidal inundation under any combination of astronomical conditions and average meteorological conditions.
The features are in a stable location that has been shown to have long term sustainability under normal seasonal events, ie seaward edge of vegetation or the tidal stain on rocks on the beach.
The location of the tidal boundary is consistent with the public interest as defined in Part 7 of the SMI Act 2003.
I, Lyle van Tienhoven, hereby certify that the land comprised in this plan was surveyed by me personally, that the plan is accurate, that the said survey was performed in accordance with the Surveying and Mapping Infrastructure Act 2003 and the Surveyors Act 2003 and associated Regulations and Standards, and that the said survey was completed on 26/8/2013.
Cadastral Surveyor
Date: 1-3-2013
Plan of Lot 2
Cancelling Lot 2 on CP887589
LOCAL GOVERNMENT: Cook Shire Council LOCALITY: Starcke
Meridian: Approx MGA (Zone 55)
Survey Records No
Scale: 1:3000
Format: STANDARD
Tidal boundary defined using new source material under Section 89 of the SMI Act 2003
1. Certificate of Registered Owners or Lessees.
I/We The State of Queensland (Represented by Department of Natural Resources and Mines - Land Act).
(Names in full)
*as Registered Owners of this land agree to this plan and dedicate the Public Use Land as shown hereon in accordance with Section 50 of the Land Title Act 1994.
*as Lessees of this land agree to this plan.
Signature of *Registered Owners *Lessees
* Rule out whichever is inapplicable
2. Planning Body Approval.
* hereby approves this plan in accordance with the:
%
Dated this ........................................ day of .........................................
........................................ # ........................................ #
* Insert the name of the Planning Body. # Insert designation of signatory or delegation % Insert applicable approving legislation.
3. Plans with Community Management Statement:
CMS Number: ........................................
Name: ........................................
Dept# : 2013/000800
Local Govt: ........................................
Surveyor: 711
4. References:
6. Existing Created
| Title Reference | Description | New Lots | Road | Secondary Interests |
|-----------------|-------------------|----------|------|---------------------|
| 47005133 | Lot 2 on CP887589 | 2 | | |
Administrative Advices
| Dealing | Lots to be encumbered |
|---------|-----------------------|
| 710624787 | 2 |
7. Orig Grant Allocation :
8. Map Reference :
7769 - 12121
9. Parish : Melville
10. County : Melville
11. Passed & Endorsed :
By : Department of Natural Resources and Mines
Date : 12-22-2013
Signed : ........................................
Designation : Principal Surveyor
12. Building Formal Plans only.
I certify that:
• As far as it is practicable to determine, no part of the building shown on this plan encroaches onto adjoining lots or road.
• Part of the building shown on this plan encroaches onto adjoining lots and road.
13. Lodgement Fees :
Lodgement $ ........................................
New Titles $ ........................................
Photocopy $ ........................................
Postage $ ........................................
TOTAL $ ........................................
14. Insert Plan Number SP252524
PLAN OF KALPOWAR NATURE REFUGE (KEY DIAGRAM)
Kalpowar Nature Refuge (Abt 28 855 ha) shown
Scale - 1:500 000
MGA Zone 55
PARISHES:
BARMUN, CALMURRA, HARMAL
KALPOWAR, LAKEFIELD, WATCHA
WIRA and YIMPOOR
COUNTIES:
MELVILLE
LOCALITIES:
BIRTHDAY PLAINS
LAKEFIELD and STARCKE
EPA DISTRICT: CAIRNS
LOCAL GOVERNMENT: COOK SC
QPWS DISTRICT/REGION: CAPE YORK/DRY TROPICS
STATE ELECTORATE: COOK
Prepared by Tenure Actions Branch, EPA Date: 12/03/2007
Produced to delineate the boundaries of the Nature Refuge under the provisions of the Nature Conservation Act 1992 and the Survey & Mapping Infrastructure Act 2003.
© The State of Queensland, Environmental Protection Agency.
Map Reference
1:100 000
7767, 7768, 7769 & 7868
PLAN PA255A
SHEETS 1 - 5
ITEM 2
PLAN OF KALPOWAR NATURE REFUGE (SHEET1)
Kalpowar Nature Refuge (Abt 28 855 ha) shown.
Scale - 1:100 000
MGA Zone 55
| PARISH: | CALMURRA, HARMAL |
| COUNTY: | MELVILLE |
| LOCALITY: | BIRTHDAY PLAINS and LAKEFIELD |
EPA DISTRICT: CAIRNS
LOCAL GOVERNMENT: COOK SC
QPWS DISTRICT/REGION: CAPE YORK/DRY TROPICS NORTHERN
STATE ELECTORATE: COOK
Prepared by: Tenure Actions Branch, EPA Date: 12/03/2007
Produced to delineate the boundaries of the Nature Refuge under the provisions of the Nature Conservation Act 1992 and the Survey & Mapping Infrastructure Act 2003.
© The State of Queensland, Environmental Protection Agency.
Map Reference
1:100 000 7768
Whilst every care is taken to ensure the accuracy of this product the Environmental Protection Agency makes no representations or warranties about its accuracy, reliability, completeness or suitability for any particular purpose. Additionally, the Environmental Protection Agency disclaims all responsibility and all liability (including without limitation, liability in negligence) for all expenses, losses, damages (including indirect or consequential damage) and costs which might be incurred as a result of the product being inaccurate or incomplete in any way and for any reason.
PLAN OF KALPOWAR NATURE REFUGE (SHEET2)
Kalpowar Nature Refuge (Abt 28 855 ha) shown
Scale - 1:100 000
MGA Zone 55
| PARISH: | KALPOWAR, WATCHA |
| COUNTY: | MELVILLE |
| LOCALITY: | BIRTHDAY PLAINS and LAKEFIELD |
EPA DISTRICT: CAIRNS
LOCAL GOVERNMENT: COOK SC
QPWS DISTRICT/REGION: CAPE YORK/DRY TROPICS NORTHERN
STATE ELECTORATE: COOK
Prepared by Tenure Actions Branch, EPA Date: 12/03/2007
Produced to delineate the boundaries of the Nature Refuge under the provisions of the Nature Conservation Act 1992 and the Survey & Mapping Infrastructure Act 2003.
© The State of Queensland. Environmental Protection Agency.
Map Reference 1:100 000 7768
PLAN PA255A SHEET 2
ITEM 2
PLAN OF KALPOWAR NATURE REFUGE (SHEET 4)
Kalpowar Nature Refuge (Abt 28 855 ha) shown.
Scale - 1:100 000
MGA Zone 55
PARISHES: BARMUN, LAKEFIELD and YIMPOOR
COUNTIES: MELVILLE
LOCALITIES: LAKEFIELD and STARCKE
EPA DISTRICT: CAIRNS
LOCAL GOVERNMENT: COOK SC
QPWS DISTRICT/REGION: CAPE YORK/DRY TROPICS NORTHERN
STATE ELECTORATE: COOK
Prepared by Tenure Actions Branch, EPA Date: 12/03/2007
Produced to delineate the boundaries of the Nature Refuge under the provisions of the Nature Conservation Act 1992 and the Survey & Mapping Infrastructure Act 2003.
© The State of Queensland. Environmental Protection Agency.
Map Reference
1:100 000
7767,7768
PLAN PA255A
SHEET 4
ITEM 2
PLAN OF KALPOWAR NATURE REFUGE (SHEET 3)
Kalpowar Nature Refuge (Abt 28 855 ha) shown.
Scale - 1:100 000
MGA Zone 55
| PARISH: | LAKEFIELD, WATCHA and YIMPOOR |
|------------------|--------------------------------|
| COUNTY: | MELVILLE |
| LOCALITY: | BIRTHDAY PLAINS and LAKEFIELD |
EPA DISTRICT: CAIRNS
LOCAL GOVERNMENT: COOK SC
QPWS DISTRICT/REGION: CAPE YORK/DRY TROPICS
STATE ELECTORATE: COOK
Prepared by: Tenure Actions Branch, EPA Date: 12/03/2007
Produced to delineate the boundaries of the Nature Refuge under the provisions of the Nature Conservation Act 1992 and the Survey & Mapping Infrastructure Act 2003.
© The State of Queensland, Environmental Protection Agency.
Map Reference
1:100 000 7768
PLAN OF KALPOWAR NATURE REFUGE (SHEET 5)
Kalpowar Nature Refuge (Abt 28 855 ha) shown
Scale - 1:100 000
MGA Zone 55
| PARISH: | Wira |
| COUNTY: | Melville |
| LOCALITY: | Starcke |
EPA DISTRICT: Cairns
LOCAL GOVERNMENT: Cook SC
QPWS DISTRICT/REGION: Cape York/Dry Tropics Northern
STATE ELECTORATE: Cook
Prepared by: Tenure Actions Branch, EPA Date: 12/03/2007
Produced to delineate the boundaries of the Nature Refuge under the provisions of the Nature Conservation Act 1992 and the Survey & Mapping Infrastructure Act 2003.
© The State of Queensland, Environmental Protection Agency.
Map Reference 1:100 000 7868
MELSONBY NATURE REFUGE
Abt 3610 ha
LEGEND
- Melsonby Nature Refuge
- Exclusion Area
| POINT | EASTING | NORTHING |
|-------|---------|----------|
| A | 263943 | 831477 |
| B | 264958 | 8329983 |
| C | 269046 | 8320844 |
| D | 269046 | 8320844 |
| E | 265501 | 8313905 |
| F | 264584 | 8313405 |
| G | 264289 | 8313160 |
| H | 262029 | 8311384 |
LOCALITY: COOKTOWN
LOCAL GOVERNMENT: COOK SC
QPWS DISTRICT: CAPE YORK / DRY TROPICS
EPA DISTRICT: CAIRNS
STATE ELECTORATE: COOK
Produced to delineate the boundaries of the Nature Refuge under the provisions of the Nature Conservation Act 1992, and the Survey and Mapping Infrastructure Act 2003.
The State of Queensland Environmental Protection Agency Prepared by Tenure Action Group, EPA Date: 16/10/2006
I, Leslie Clyde Fehlhaber hereby certify that the land comprised in this plan was surveyed by me personally and by Scott David Littleton, Surveying Graduate, for whose work I accept responsibility, and that the plan is accurate, that the said survey was performed in accordance with the Survey and Mapping Infrastructure Act 2003 and Surveyors Act 2003 and associated Regulations and Standards and that the survey was completed on 21-7-2005.
Cadastral Surveyor
17-3-2006
Plan of Emts A & D in Lot 7 on SP156403
PARISH: LAKEFIELD & YIMPOOR
COUNTY: Mosman
Meridian: Of SP156403
Scale: 1:80000
Format: STANDARD
SP171862
Copyright protects the plan/s being ordered by you. Unauthorised reproduction or amendments are not permitted.
Diagram A
Scale - 1:8000
Diagram D
Scale - 1:20,000
State copyright reserved.
Insert Plan Number SP171862
| FM | Origin | Permanent Marks | Bearing | Dist | No |
|----|--------|-----------------|---------|------|----|
| 1 | SP156403 | OPM at station | 156/6/5 | 2 | 2 |
| 20 | | Pyl pid at station | 66/369 | 2 | 2 |
| 21 | | Pyl pid at station | 16/597 | 2 | 2 |
| 22 | | Pyl pid at station | 16/592 | 2 | 2 |
| 23 | | Pyl pid at station | 16/593 | 2 | 2 |
| 24 | | OPM at station | 16/594 | 2 | 2 |
| 25 | | SP156403 | | | |
**Diagram C**
Scale: 1:20,000
Parish of Lakefield
Parish of Ympoor
EMT A
EMT D
RIVER
NORMANBY
Insert Plan Number: SP171862
High intensity aspects of the J-PARC facility
Tadashi Koseki for the J-PARC accelerator group
J-PARC center, KEK and JAEA
1. Overview of the J-PARC facility
2. Status of high intensity operation of the Linac/RCS
3. Status of the Main Ring
3-1. High intensity operation of fast extraction
3-2. Slow extraction commissioning
3-3. Improvements for high intensity operation
4. Energy upgrade of the linac
5. Summary
J-PARC: Joint project between KEK & JAEA
Neutrino beams to SK
MLF (Material and Life science experimental Facility)
MR
JFY 2006 / 2007
JFY 2008
JFY 2009
Bird's eye photo in Jan. 2008
Hadron experimental hall
Linac
- **Particle:** H⁻
- **Energy:**
- 181 MeV at present
- 400 MeV by installing ACS in 2012
(Construction of ACS has been started.)
- **Peak current:**
- 30 mA at 181 MeV
- 50 mA at 400 MeV in the future
- **Repetition:** 25 Hz
- **Pulse width:** 0.5 msec
Front-end = IS + LEBT + RFQ + MEBT
Front-end 50 MeV 181 MeV
DTL (27 m) SDTL (84 m)
3 MeV
Debuncher 1 ACS
Debuncher 2
90-deg dump
100-deg dump
30-deg dump
0-deg dump
RCS (Rapid Cycling Synchrotron)
Multi-purpose machine:
- Neutron/muon source
- Booster of the MR injection
- Circumference 348.3 m
- Injection energy 181 MeV (400 MeV)
- Extraction energy 3.0 GeV
- Repetition rate 25 Hz
- Output power 1.0 MW
To MLF
To MR
Main parameters of MR
| Parameter | Value |
|----------------------------|--------------------------------------------|
| Circumference | 1567.5 m |
| Repetition rate | ~0.3 Hz |
| Injection energy | 3 GeV |
| Extraction energy | 30 GeV (1st phase) |
| | 50 GeV (2nd phase) |
| Superperiodicity | 3 |
| Harmonic number h | 9 |
| Number of bunches | 8 |
| Rf frequency | 1.67 - 1.72 MHz |
| Transition γ | j 31.7 (typical) |
| Number of dipoles | 96 |
| quadrupoles | 216 (11 families) |
| sextupoles | 72 (3 families) |
| steerings | 186 |
| Number of cavities | 5 |
Three dispersion free straight sections of 116-m long:
- Injection and collimator systems
- Slow extraction (SX)
to Hadron experimental Hall
- MA loaded rf cavities and Fast extraction (FX) (beam is extracted inside/outside of the ring)
outside: Beam abort line
inside: Neutrino beamline (intense ν beam is sent to SK)
Status of high intensity operation of the Linac / RCS
Performance recovery of LINAC-RFQ
Since the autumn of 2008, the most urgent issue for the linac has been discharge in the RFQ. The RCS beam power for users was limited to 20 kW due to the RFQ problem.
In the 2009 summer shutdown,
- improved vacuum system
- performed in-situ baking
- Base pressure is ~several x $10^{-7}$ Pa
- Hydro-carbon components in the residual gas gradually decrease during rf conditioning
**IMPROVED ITEMS**
- Down-sized ion source aperture (reduces gas flow from upstream)
- Moisture-free filter
- Oil-free rough pump system
**RFQ PUMP SPEED [l/sec]:** 3,300 ➔ 12,500
**ION SOURCE PUMP SPEED [l/sec]:** 6,000 ➔ 9,000
Painting injection of RCS
Transverse painting
- Momentum offset (=offset of rf frequency);
- 0~0.2% in momentum
- Superposition of 2nd harmonic rf voltage;
- 80% of the amplitude of the fundamental one
- Phase sweep of the 2nd harmonic rf voltage;
- -80 to 0 deg relative to the fundamental one
Longitudinal painting
RCS beam ellipse
Injection beam
Correlated Anti-correlated
Painting emittance; 0~216 π mm mrad
H. Hotchi
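The painting sweep above can be sketched numerically. The snippet below is an illustrative sketch only: it assumes the painted transverse emittance is swept linearly from 0 to the quoted 216 π mm mrad, so the bump amplitude scales as √t, and it assumes a beta function of 10 m at the injection foil (a placeholder value, not taken from these slides).

```python
import math

def paint_bump(eps_max_pi_mm_mrad, beta_m, t_frac):
    """Illustrative painting-bump amplitude [mm] at injection-time fraction t_frac.

    The painted emittance grows linearly from 0 to eps_max, so the bump
    amplitude x = sqrt(eps * beta) scales as sqrt(t).  beta_m is an
    assumed beta function at the foil, not a slide value.
    """
    eps_t = eps_max_pi_mm_mrad * t_frac              # painted emittance [pi mm mrad]
    return math.sqrt(eps_t * 1e-6 * beta_m) * 1e3    # amplitude [mm]

# Sweep the 0-216 pi mm mrad range quoted for the RCS (beta = 10 m assumed)
amps = [paint_bump(216.0, 10.0, f / 10.0) for f in range(11)]
```

With these assumptions the amplitude grows from 0 to roughly 46 mm over the injection period; the actual RCS bump program and optics are more involved.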
Beam loss reduction by the painting
Intensity loss observed for 300 kW-equivalent intensity beam;
- ~7% with no painting – (5)
- ~5% with the transverse painting - (6) \((\varepsilon_{tp}; 100\pi \text{ mm mrad, correlated})\)
- ~1% by adding the longitudinal painting – (9) \((V_{2nd}; 80\%/Δφ; -80 \text{ deg}/Δp; -0.2\%)\)
Red - measured with DCCT
Blue - simulations
H. Hotchi
After the recovery of Linac-RFQ, high power operation of the RCS has become possible and 120 kW operation has started for the MLF users.
Neutron beamline: 12 beamlines are now under commissioning and open for users.
Muon beamline: The highest intensity pulsed muon source in the world with the 120 kW beam.
300 kW operation: achievement and issues
On Dec. 10, a 300 kW, 1-hour beam delivery from the RCS to the MLF was successfully demonstrated.
The Laslett tune shift at the injection energy of 181 MeV for the 300 kW operation is equivalent to the value at the injection energy of 400 MeV for 1 MW operation, the design goal of the RCS.
\[ \Delta \nu = -\frac{r_p N}{2\pi \beta^2 \gamma^3 \varepsilon B_f} \sim -0.15 \]
( \( B_f = 0.4, \varepsilon = 216\pi \text{ mm mrad} \) )
Design goal:
400 MeV
50 mA Linac current:
4.2E13 ppb
→1 MW
At present:
181 MeV
15 mA Linac current
1.3E13 ppb
→0.3 MW
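The quoted tune shift can be checked directly against the Laslett formula above. The sketch below assumes two RCS bunches of 1.3×10¹³ protons (the bunch count is an inference from the RCS parameters, not stated on this slide) and reads "216π mm mrad" in the usual convention where the π is absorbed, i.e. ε = 216×10⁻⁶ m rad; with those assumptions the formula reproduces Δν ≈ −0.15.

```python
import math

M_P = 938.272e6      # proton rest energy [eV]
R_P = 1.535e-18      # classical proton radius [m]

def laslett_tune_shift(t_kin_ev, n_total, eps_pi_mm_mrad, b_f):
    """Incoherent (Laslett) space-charge tune shift.

    eps is quoted as "x pi mm mrad"; with the pi absorbed it enters the
    formula as x * 1e-6 m rad.  b_f is the bunching factor.
    """
    gamma = 1.0 + t_kin_ev / M_P
    beta2 = 1.0 - 1.0 / gamma**2
    eps = eps_pi_mm_mrad * 1e-6                      # [m rad]
    return -R_P * n_total / (2 * math.pi * beta2 * gamma**3 * eps * b_f)

# 300 kW case: 181 MeV injection, 1.3e13 ppb x 2 bunches (bunch count assumed)
dnu = laslett_tune_shift(181e6, 2 * 1.3e13, 216.0, 0.4)
```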
The 300 kW demonstration showed that beam loss issues must be solved before starting user operation. The following improvements are in progress:
1. Installation of the small foil (40 mm → 15 mm in vertical) to reduce the number of foil hits during painting injection
2. Installation of AC power supplies for sextupoles
Before the 2010 summer shutdown, the sextupoles were driven by DC power supplies and chromaticity was corrected only at the injection energy. AC power supplies are necessary to reduce beam loss during acceleration.
Status of the Main Ring
Operating points of the FX and SX
Simulation result with various ring imperfections, field errors, high field components, fringe fields, alignment errors. (A. Molodozhentsev)
Measured results with 3 GeV DC beam of $4 \times 10^{11}$ ppb $\times$ 1 bunch (1% intensity)
Beam survival after 1.9 sec storage
On the linear coupling resonance, we have large beam loss. Correction of the linear coupling resonance is important for high power operation in the MR.
Correction of linear coupling resonance
Linear coupling resonance correction is performed using vertical local bumps in two SDs, SDA019 and SDB028.
Measured beam loss on the linear coupling resonance for various pairs of bump heights.
A pair of +4 mm in SDA019 and -5 mm in SDB028 is effective for the correction.
(22.2, 20.8)
With correction
Without correction
High power operation with the fast extraction
Super-Kamiokande (ICRR, Univ. Tokyo)
T2K experiment:
Tokai-to-Kamioka(T2K) long baseline neutrino oscillation experiment
J-PARC Main Ring (KEK-JAEA, Tokai)
Goal is discovery of $\nu_e$ appearance ($\theta_{13}$)
History of beam delivery to the T2K experiment
The T2K group started physics data taking on January 17, 2010.
- Beam power up to 100 kW is delivered to T2K experiment.
- Power of long term stable operation is limited up to ~50 kW due to kick angle drift.
The kicker system is replaced in the 2010 summer shutdown period.
100 kW operation
100 kW continuous beam delivery to the NU beam line was demonstrated in June 2010.
The number of particles extracted to the NU beamline is $7.5 \times 10^{13}$ ppp:
the world's highest level of ppp among synchrotrons.
Beam loss localizes on the ring collimator section.
Typical injection loss is 100~200 W, below the present collimator limit of 450 W.
Layout of beam lines at the hadron experimental hall.
Three beamlines (K1.8, K1.8BR, K1.1) are in operation. K1.1BR will be commissioned in October 2010.
Slow extraction
FT: 0.7-2.63 sec
Main magnet pattern
Res. Sextupoles (8 in arc)
SX bumps(4)
ESS's (2)
Mag. Septa (10) in DC
QFN (48 quads. in arc)
Tune ramping by QFN:
(22.30, 20.78) -> (22.35, 20.78)
\[ 3\nu_x = 67 \]
Spill feedback system
Spill feedback using EQ, RQ and DSP system was installed in the 2009 summer shutdown.
Beam spill signal is fed into the DSP system and current pattern of the correction is sent to PSs of feedback quadrupoles.
EQ: for constant spill structure ( < 100 Hz)
RQ: for ripple compensation ( < 3 kHz)
SX operation
Beam commissioning of the SX with the spill feedback system started in October 2009. Commissioning of secondary beam lines in the HD hall and some user experiments were carried out from October 2009 to February 2010.
So far, the maximum beam power of 2.6 kW has been delivered to the HD facility. Estimated extraction efficiency from the BLM counts is $\sim 98.5\%$.
Spill structure of the extracted beam
The spill measured by PMT with scintillator in the HD beam line
The spill has many sharp peaks, which come from fluctuation of tune due to current ripple of magnet power supplies.
\[ \text{Duty} = \frac{\left( \int_0^T I dt \right)^2}{\int_0^T dt \int_0^T I^2 dt} \approx 11\% \]
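The duty (spill) factor defined above can be evaluated from a sampled spill signal; for uniformly spaced samples the integrals reduce to sample means and the dt factors cancel. The spill traces below are synthetic illustrations, not measured data.

```python
def duty_factor(spill):
    """Spill duty factor: (integral I dt)^2 / (T * integral I^2 dt).

    For uniformly spaced samples this is mean(I)^2 / mean(I^2).
    A perfectly flat spill gives 1.0 (100%); a spiky spill, as in the
    measured traces, gives a small value.
    """
    n = len(spill)
    mean_i = sum(spill) / n
    mean_i2 = sum(x * x for x in spill) / n
    return mean_i * mean_i / mean_i2

flat = [1.0] * 1000                                        # ideal constant spill
spiky = [10.0 if k % 100 == 0 else 0.1 for k in range(1000)]  # sharp-peaked spill
```

The flat trace gives a duty factor of 1.0, while the spiky one falls to a few percent, illustrating why ripple compensation matters for the extracted beam quality.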
For improvement of the spill structure;
- operation with transverse rf noise (rf knock out)
- main PS tuning to reduce 600 Hz ripple using a trap filter
The slow extraction study will be resumed in October run.
Improvements performed in 2010 summer shutdown for higher intensity operation of MR
1. New FX kicker system in the 2010 summer shutdown
2. Additional shields for the collimators:
- 3-50 BT collimators in the 2010 summer
(ring collimators in the 2011 and 2012 summer shutdown periods)
3. Installation of 6th rf system:
It can be operated as a 2nd harmonic system for manipulation of longitudinal bunch form to reduce the effect of space charge force
4. Impedance recovery of the MA loaded rf cavities
Impedance reduction of the cavities was observed. Polishing and coating of cutting surfaces of the MA cores recover the impedance
5. Main magnet tuning for higher rep. rate:
3.52 s (present) -> 3.2 s (from November of 2010) -> 2.6 sec (Before 2011 summer)
6. Physical aperture of the injection dump section is enlarged by replacing the duct:
Narrow aperture of dump septum caused high residual activation.
New FX kicker magnets
Original injection scheme:
4 batch injection from the RCS to MR
-> 8 bunch operation in the MR
The harmonic number of the MR is nine, and one vacant bucket leaves 1.1 μs for the rise time of the kicker.
Before the 2010 summer shutdown, the MR operated with 6 bunches, limited by the performance of the extraction kicker magnets: the pulse rise time of 1.6 μsec is too long to accept 8 bunches. → A new kicker system with a rise time shorter than 1.1 μsec was installed in the 2010 summer shutdown.
Circuit and performance of the new FX kicker
- Thyratron & PFN capacitor should be in a coaxial copper shield for faster rise time
- All connections should be made with Cu plate to reduce parasitic inductance
- The connector for the high voltage cable should be “Low Inductance”
- A saturable inductor should be installed for faster rise time & to reduce beam impedance
Measured rise time using a dummy load
Short rise time less than 1 μsec is achieved.
Orbit drift of the extracted beam occurred during continuous operation with beam power > 50 kW. The horizontal beam position drifted ~1 mm (the tolerable limit) on the graphite target and ~10 mm at the muon monitor of the neutrino facility during 1~2 hours of continuous 65 kW operation.
When operation was resumed after ~30 min with the beam off, the beam came back to its initial position. The orbit drift comes from kick angle drift due to heating of the ferrite cores by the beam induced field.
Solution of the heating problem of ferrite cores
Reduction of the beam coupling impedance by damping resistor.
Circuit of the kicker system
Beam induced wall current goes to the chamber GND. It cancels the induced magnetic fluxes in the ferrite cores.
Measurement of rf power spectrum in test bench
The old kicker (before the 2010 summer)
New kicker with damping resistor
Beam power spectrum: $R_e[Z_L] \times I_B(\omega)^2$
Impedance of kicker: $R_e[Z_L]$
Estimated power loss for 80 kW beam
Beam energy: 30 GeV, bunch width: 47 ns
number of particles: $10^{13}$ ppb x 6 bunches
using measured bunch form, duty factor: 60%
| Old Kicker (before 2010 summer) | New kicker with damping resistor |
|---------------------------------|---------------------------------|
| 1900 W loss | 210 W loss |
| | (20 W loss in ferrite) |
Power loss in the new kicker is estimated to be ~10% of the old kicker.
Water cooling channels are attached on the ferrite cores.
It is expected to reduce the temperature rise to ~1/5 for the case of 1kW power loss.
Installation of additional shields of 3-50 BT collimators
Loss power capacity increased from 0.45 kW to 2 kW
Shield thickness:
0.72 m for roof, 0.25 m for others
Concerns
• Radiation limit at ground level (Hakken Doro) 0.5 µSv/h
• Maintenance of the magnets and collimator devices
Solutions
• More shield 20 ~ 50 cm thick (iron equivalent)
• Potentiometers and switches with radiation hardness
MARS calculation has indicated that hands on maintenance and accidental jaw replacement are possible. Radiation doses have been estimated to be ~10 mSv/h at the surface of 30 cm thick iron.
Installation has been completed on September 6.
Residual activation data taken on the duct surface and at 30 cm after RUN#34 (run in June 2010). RUN#34 (for the T2K exp., 50-70 kW beam delivery) stopped at 7:00 on June 26.
At dispersion peaks: due to impedance reduction of two cavities
The aperture of the 3-50 BT collimator was set larger (~70 pi), because the installation of the additional shield was scheduled to start in the beginning of July. It can be used to reduce the loss in the ring collimators from the autumn run.
Energy upgrade of the linac
The full potential of the J-PARC facility cannot be realized with a 181 MeV linac. (e.g. 1MW@RCS, 0.75MW@MR)
The construction of 181 to 400MeV part of the linac was funded through the supplementary budget of JFY2008 (four years).
**181 MeV LINAC**
- IS
- RFQ
- DTL
- SDTL
- DB1
- DB2
3.1 m 27.1 m 91.2 m
(Two SDTL tanks are used as a debuncher temporarily.)
MEBT1
3 MeV 50.1 MeV 181 MeV
To 3-GeV RCS
L3BT
(Linac 3-GeV Beam Transport)
**400 MeV LINAC**
- IS
- RFQ
- DTL
- SDTL
- B1,2
- ACS
- DB1,2
15.9 m 108.3 m
MEBT2
Two bunchers
ACS
21 Acc. Modules
L3BT
Two debunchers
190.8 MeV 400 MeV
To 3-GeV RCS
ACS accelerating modules
Two acc. tanks have the same geometrical $\beta$.
25 ACS cavities will be manufactured in 3 years.
Final brazing and assembling
Module set into a vacuum furnace
Modules being assembled on the support
So far, the construction is on schedule. The ACS will be installed in the 2012 summer shutdown, and beam commissioning at 400 MeV will start in the 2012 autumn/winter.
Summary
The linac and RCS deliver the high power and stable beam to the downstream facilities.
Recent highlights:
- 120 kW beam delivery to the MLF
- 300 kW operation for 1 hour was successfully demonstrated
160 kW beam delivery to the MLF is planned from December 2010
200 kW from January 2011
Recent highlights of the MR:
- Beam delivery of 100 kW in maximum to the NU beam line by FX
- Beam delivery of 2.6 kW in maximum to the HD hall by SX.
Continuous beam delivery > 100 kW will be started in the 2010 autumn.
Beam delivery > 5 kW will start this autumn, limited by the radiation shield of the beam dump of the HD hall. It will be increased to 50 kW in the 2011 summer shutdown.
The construction of the ACS cavities is well in progress.
The beam commissioning of the 400 MeV operation is scheduled to start in 2012.
Thank you for your attention
Power upgrade plan of RCS and MR(FX)
For 8 bunches, 30 GeV at MR: \( P_{MR} = 1.6 \times (P_{RCS} / T_{MR}) \)
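The factor 1.6 follows from the MR taking four RCS batches per cycle (8 bunches) and raising their energy tenfold (3 → 30 GeV) while the RCS runs at 25 Hz: P_MR = 4 × (P_RCS / 25 Hz) × 10 / T_MR = 1.6 × P_RCS / T_MR. A minimal sketch of this scaling, evaluated at the MR cycle times quoted earlier in these slides (the resulting powers are computed from the relation, not slide numbers):

```python
def p_mr_from_rcs(p_rcs_w, t_mr_s, rcs_rep_hz=25.0, batches=4, e_gain=10.0):
    """FX beam power of the MR fed by the RCS.

    Four RCS batches per MR cycle, each accelerated 3 -> 30 GeV (x10
    energy gain); with the defaults this reduces to 1.6 * P_RCS / T_MR.
    """
    return batches * (p_rcs_w / rcs_rep_hz) * e_gain / t_mr_s

# 300 kW RCS beam with the 3.52 s / 3.2 s / 2.6 s MR cycle times
powers = {t: p_mr_from_rcs(300e3, t) for t in (3.52, 3.2, 2.6)}
```

Shortening the cycle time is thus a direct power lever: the same RCS beam yields proportionally more MR power as T_MR drops.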
- 3-50BT collimator shields, RF (1st HH), FX kickers
- Ring collimator shields, RF (6th F, 2nd HH), Inj. Sep 1
- ACS Installation in JFY2012
- 400 MeV injection in the RCS
- RF (3rd HH), Inj. Sep 2, FX Septa, ..
February RUN (RUN#30):
Total deliver time to HD is 122 hrs. (5 days) : 1 kW(106.5 hrs.), 2 kW (2 hrs.), 1.5 kW (13.5 hrs.)
Total deliver time to NU is 72 hrs. (3 days) : 18 kW(19 hrs.), 27 kW (21 hrs.), 31 kW (32 hrs.)
Survey: 4 hours after the beam stop, measured by contact on the beam ducts.
(7 days after the stop of the beam delivery to HD)
The residual activation in SX section 1 week after beam stop is less than 100 μSv/h on contact.
(The guide line of activation max. is 1 mSv/h to allow hands on maintenance.)
Residual activation in the injection straight section
Red: duct surface
Blue: at 30 cm
Injection kicker (u): 5.8
1st collimator: 16
2nd collimator (u): 1000, 30
2nd collimator (d): 3000, 300
Absorber shield (u): 7000, 500
Absorber shield (d): 4500, 300
Injection beam dump (3 GeV, 3 kW)
3rd collimator (u): 1000, 100
3rd collimator (d): 9000, 400
Dump kicker1 (u): 3000, 520
Dump kicker1 (d): 1400, 330
Dump septum1 (u): 3500, 270
Collimator section
Injection dump system has high activation level because of the narrow aperture.
We replace the beam ducts with larger ones in this summer shutdown.
Measured tune during the tune ramping by QFN
Horizontal tune
$3\nu_X = 67$
Tune fluctuation $\sim \pm 0.003$
The cause is current ripple of quadrupoles PSs.
Circulating beam intensity [a. u.]
Variation of circulating beam intensity during slow extraction for two shots
Because of the tune fluctuation, the circulating beam decreases in the step-like shape.
Spill monitor signal in HD beam line
Extracted beam has many sharp peaks.
Duty $= \frac{\left( \int_0^T I dt \right)^2}{\int_0^T dt \int_0^T I^2 dt} \sim 1\%$
The T2K group started physics data taking in January 2010.
Circulating beam intensity measured by DCCT for 65 kW operation. Cycle time is 3.52 sec.
RCS:
Transverse painting: 150 π mm.mrad
Longitudinal painting: Momentum offset 0.2 %
Phase sweep -100 deg
2nd Harmonics ON
MR:
Ring collimator aperture: 54 π for both H and V
RF: 80 -> 160 kV (100 msec)
Mountain plot of WCM signal: Time variation of longitudinal profile for two bunches.
The water cooling system decreases the temperature rise in the ferrite core to approximately 1/5.
Impedance reduction in MR cavities
- Impedance reduction was observed in all the cavities.
- Atmospheric exposure recovers the impedance. This procedure was regularly performed from January to June 2010.
- Oxidization/Deoxidization of cutting surface of the cores may be related to the impedance reduction.
All cores were replaced for Cav. #1 and #3
Impedance reduction in MR cavities (cont’d.)
Cut cores in the cooling water tank
Cutting surfaces of the cut core: damaged due to severe corrosion
- Polishing the cutting surface recovers the impedance.
The cutting surfaces of all the damaged cores were re-polished in this shutdown period.
- Coating of the cutting surface is under development. SiO2 coating seems to be effective and is now being tested.
Residual radiation level after beam shutdown
5 hour after 120 kW operation (June-July, 2010)
Red: measured at the chamber surface
Blue: measured at 30 cm
Unit: μSv/h
Front-end (7 m)
DTL (27 m)
SDTL (84 m)
181 MeV
Front-end = IS + LEBT+ RFQ + MEBT
DB 1 entrance: 85, 4.0
DB 2 entrance: 500, 16
Future ACS section: <240, 7.0
First bend: 30, 1.0
3 MeV 50 MeV
25, 1.0
35, 1.0
30, 1.3
200, 8.0
Residual radiation level after beam shutdown 5 hour after 120kW operation (July, 2009)
Red: measured at the chamber surface
Blue: measured at 30 cm
Unit: μSv/h
(A) Injection area
(B) Collimator section
K. Yamamoto |
AMERICAN INSTITUTE OF ARCHITECTS
MAR 15 1974
LIBRARY
The end of the old run around.
GAF puts all these services under one roof.
Until now all of you logical architects have had to put up with a very illogical way of getting supplies and equipment. Running all over town.
Now you can come to a place designers can really appreciate, The GAF® Business Systems Center in Park Fletcher. Here, under one roof, you'll find:
Whiteprints, Diazo equipment and supplies, Copy service, Copy machines and supplies, Business forms, Microfilm equipment and service, Audio visual slides, transparencies and overhead projectors, Drafting equipment and supplies, and offset printing of all kinds.
No more run-around. You'll be dealing with GAF's highly trained professionals armed with first quality equipment and services. So your needs will get the care and attention they deserve. The GAF Business Systems Center. It's better by, and for design. Call the GAF Business Systems Center (241-2877) for a demonstration. Or better yet, come on out. We're at 2260 Distributors Drive, Park Fletcher Industrial Park, Indianapolis, Indiana 46241.
OFFICERS
Carlton C. Wilson, AIA, President
Donald E. Sporleder, AIA, Vice President/President-Elect
Henry G. Meier, AIA, Secretary
Keith L. Reinert, AIA, Treasurer
BOARD OF DIRECTORS
David M. Bowen, AIA (President, Indianapolis Chapter)
Charles E. Parrott, AIA (Vice President, Indianapolis Chapter)
Michael L. Fox, AIA (President, Central-Southern Chapter)
Charles D. Gardner, AIA (Vice President, Central-Southern Chapter)
Paul O. Tanck, AIA (President, Northern Indiana Chapter)
Kenneth D. Cole, AIA (Vice President, Northern Indiana Chapter)
Robert N. Kennedy, AIA (Delegate, Indianapolis Chapter)
James P. Lowry, AIA (Delegate, Indianapolis Chapter)
John Guyer, AIA (Delegate, Indianapolis Chapter)
Charles M. Sappenfield, AIA (Delegate, Central-Southern Chapter)
Richard L. Hartung, AIA (Delegate, Central-Southern Chapter)
Raymond Komorkoski, AIA (Delegate, Northern Indiana Chapter)
Conrad C. Jankowski, AIA (Delegate, Northern Indiana Chapter)
Ingvar H. Loefgren, AIA (Delegate, Northern Indiana Chapter)
Arthur L. Burns, AIA (Ex-officio)
EDITORIAL STAFF
Chairman, Editorial Policy Committee
ARTHUR J. MATOTT, AIA
Executive Director and Managing Editor
HAROLD WANG
Editorial Assistant
BETTY SWORD
CONTENTS
5 THE VIOLIN AND THE VIOLIN CASE
Louis I. Kahn's First Theatre
The Community Arts Center
Fort Wayne
14 NECROLOGY
COVER
The drawing of the Fort Wayne Community Arts Center on the cover of this issue and the site drawing on page 6 were prepared for and are reproduced with the permission of the Fort Wayne Fine Arts Foundation.
THE INDIANA ARCHITECT is the sole property of the Indiana Society of Architects, a state association of The American Institute of Architects. It is edited and published every other month in Indianapolis, Indiana. Periodicals Advertising Representative: 219 N. Pennsylvania St., Indianapolis, Indiana 46202; phone 635-5478. Subscription price: $10.00 per year, including all resident registered Indiana architects, school faculty members and students, libraries, public officials, and members of the construction industry. Detailed information available on request.
NEW DIMENSIONS OF DESIGN
It's a fact that BELDEN provides over 200 variations in color, texture and size of brick — the largest selection in the industry to free the imagination of the creative architect for limitless scope of design. Your nearest BELDEN Dealer will gladly show you the facts in the form of samples, or write us at P. O. Box 910, Canton, Ohio 44701.
THE Belden Brick COMPANY / CANTON, OHIO
THE VIOLIN
AND THE VIOLIN CASE
Louis I. Kahn’s first theatre
The Community Arts Center
Fort Wayne
The week of October 1-6, 1973 was Dedication Week for The Community Arts Center, Fort Wayne, Indiana. The weather smiled, for the most part, on the many students, citizens, musicians, artists, architects and dignitaries who came to experience or to be part of the interesting and multifaceted Dedication Program. Drawing from Fort Wayne’s fine arts community as well as from other areas, the program was designed to demonstrate the potentials of the new center as well as to honor those who had had a part in its development.
The Center is the partial realization of a dream of The Fort Wayne Fine Arts Foundation, of those in the arts, and of the citizens of Fort Wayne. At least partly forgotten in the celebration of its opening were the delays, the reduction in scope, the redesigns, the cost overruns and the criticisms of its architecture. Here, after twelve years of often frustrated effort by many persons, was a building to use, and the beginning of a possible larger arts center. Along with the City-County Building, the remodeled Allen County Court House, Freimann Square and the Central Fire Station, The Arts Center has become an element in Fort Wayne’s new Civic Center which is rising on redeveloped land.
The Community Arts Center is operated by The Fort Wayne Fine Arts Foundation, a privately funded organization founded in 1955, whose members include Fort Wayne Ballet, Fort Wayne Civic Theatre, Fort Wayne Art Institute, Fort Wayne Museum of Art, Fort Wayne Philharmonic Orchestra and Fort Wayne Community Concerts. The four and one-half million dollars required for the Center’s construction were raised largely by private donation and local solicitation, The Freimann Trust and other bequests representing a smaller part of the funding. Fort Wayne and Allen County have voted funds to partially defray operating costs, but no public monies were used for construction.
Louis I. Kahn at the dedication of the Community Arts Center, Fort Wayne. Above, with tour group—"A tall fountain should be here." Below, on stage, Kahn discusses the design of the theatre.
Site Plan above shows Art Center's location in Fort Wayne Civic Center. Kahn's theatre is seen thru the trees of Freimann Square, below.
That a city the size of Fort Wayne (about 200,000) has been able, through unfailing support of private citizens, to underwrite a Fine Arts Organization of nationally recognized quality is regarded everywhere as something of a miracle. The opening of the new Center, then, was the very real concern of the whole community, much more so than if the building had been the outright gift of a wealthy benefactor.
In 1961, after a series of interviews with other firms, The Fine Arts Foundation appointed architect Louis I. Kahn of Philadelphia to design what was then to be a complex of buildings housing all of the fine arts organizations on one site. This site—two city blocks in Fort Wayne's older commercial district—had been purchased by The Fort Wayne Redevelopment Commission with HUD funds and was a portion of its Downtown Redevelopment Project. The Fine Arts Foundation took option on the site which was gradually cleared of blighted 75-100 year old brick and wood party-wall structures.
There were programs and schemes, sketches and models, brochures and fund drives and many meetings with Kahn in Fort Wayne during a period when his name was gaining world renown. There were readjustments within the Fine Arts group. The Historical Museum, at first an element, withdrew. Kahn had to begin anew many times, but finally, in June, 1966 he presented a new model of a leaner complex with great hopes that perhaps this could be accepted.
Clustered, sometimes connected, buildings were arranged around a central Court of Entrance. A Music Hall of 2,500 seats was a pivotal element. Perimeter walls were perforated by inward-growing multi-story light wells. Kahn felt very strongly at this presentation that the entire complex should rise at one time and that he should be commissioned to prepare complete documents for all of the buildings.
Estimates made for the entire complex, however, indicated costs believed at the time to be completely beyond the conceivable base of private resources. After more than five years of struggle The Fine Arts Foundation now found itself in an indefensible position: the public could not accept the estimates and The Redevelopment Commission was pressing for purchase of optioned land. A difficult decision was made—the scope of the project was reduced and approximately half the site area was relinquished to the Freimann Trust as a site for a new urban park which was to become known as Freimann Square.
Architect Kahn then devised new schemes which contemplated only one theatre and a site development which could accommodate only some of the fine arts organizations. Drawings were produced for a structure housing most of the performing arts needs, but which could not serve as a Philharmonic Music Hall. The program was essentially that used for the eventual building. Again, however, preliminary costs were too high and the structure was redesigned. When high bids were received on the redesigned structure, several weeks of negotiations produced economies and an upward adjustment of the budget, largely possible through an additional bequest by The Freimann Trust. The site was purchased after many extensions of the option deadline by The Redevelopment Commission, and construction went ahead even though complete funding was not in hand.
Floor Plans, above, and Kahn's sectional model, left, explain final Art Center design.
Fort Wayne’s Community Arts Center has been described as an intimate setting designed for drama, music and dance. The 767-seat auditorium with an orchestra pit convertible to a thrust stage will accommodate any Ballet or Civic Theatre performance. It is very suitable for the chamber music performances of the Philharmonic Orchestra—indeed the acoustics of the room drew raves when the full Orchestra performed there during Dedication Week. A three-story high scenery fly and a large set workroom supplement the stage proper. George Izenour, Yale professor of Theatre Design and Engineering, Kahn’s consultant in these areas, describes the theatre as “a highly articulated, artistically unique instrument created for stage speech and chamber music—one of the great public spaces of North America.”
Many people have seen the street facade of the building as a theatre masque and feel that one enters through the “mouth” of the masque. Kahn has said that this was not his intention, but that the unique forms resulted from efforts to restrain the thrusts of the arches which soar over its openings. The entry vestibule gives right and left onto stair loggias which extend full height down the sides of the auditorium. There is a dramatic sense of height and openness in these tall flanking spaces—a sense of excitement and a sense of rhythm in the gaited staircases that lead to the many single doors of the auditorium. Behind the entry-boxoffice vestibule are cloak and concession areas. Above is the Gallery which opens to the stair loggias and to the cityscape thru a series of Kahn’s great brick arches.
Behind the stage are two Rehearsal Halls, dressing rooms, an actor’s house or “Green Room,” a passage to a balcony which runs along the rear wall of the stage. An almost separate “power house” contains heating and mechanical equipment.
The Center is constructed of brick, site-cast and precast concrete and concrete block. All exterior walls are load-bearing brick with block infill. Brick is extended into the interior for use in the great arched openings and at jambs and spandrels of tiered windows on the east and west. Precast tees span 115’ clear above the Auditorium, resting on exterior walls. This is the shell, or “Violin Case,” as Kahn likens it.
Enclosed by but completely free of the case, the “Violin,” or acoustically tuned auditorium, is a separate structure of site-cast concrete. Its walls are splayed and folded into intricate vertical forms. These provide acoustic correction and form light and sound traps at 14 single entrance doors (to minimize latecomer intrusions on the performance). Overhead stage-type downlights cast dramatic light-and-shadow patterns on these folded walls. Theatrical lighting access corridors form the auditorium ceiling. These corridors are concrete tubes spanning the auditorium and are independent of the roof.
Kahn feels that this “building-within-a-building” concept was necessary to guarantee that harsh exterior noise—the busy nearby elevated railway, traffic, aircraft—would not disturb even the most delicate passages of a performance. This has been completely realized.
Carefully planned form tie depressions and incised form joints in the site-cast concrete create line and pattern in the great vertical masses. Concrete, concrete block and oak are as unashamed as the brick and are seen in their natural state throughout. Honesty in such large doses was very difficult for most people to swallow, however, and Kahn was frequently asked by the public when these materials would be "finished." "I think they are finished," he would reply, "Well-made concrete deserves to be exposed." The special block was not defended in this fashion, however, and the precast is too smooth and too high to offend.
Long a teacher of architecture at the University of Pennsylvania, in Philadelphia, Louis I. Kahn was born on the Isle of Osel in the Baltic in 1901. He came to the United States in 1905 and became a citizen in 1915. Kahn received his Bachelor of Architecture degree at the University of Pennsylvania in 1924 and holds many high and honorary degrees, medals and awards from many sources including The Royal Swedish Academy of Fine Arts, The World Academy of Arts and Sciences and the National Institute of Arts and Letters. Kahn was the 1971 recipient of the Gold Medal of The American Institute of Architects. Dedication speaker John B. Hightower, President of The Associated Councils of the Arts, New York City, called Kahn "the greatest living architect in the world today." Famous Kahn buildings in the United States are The Salk Institute for Biological Sciences, La Jolla, California; The Yale Art Center; The Kimbell Art Museum, Fort Worth; and the recent library of the Phillips Exeter Academy, Exeter, New Hampshire.
Held by many to be magnificent architecture, the Fort Wayne Community Arts Center, Kahn's first theatre, lies in the shadow of disappointment, frustration and misunderstanding. Kahn came to Fort Wayne for the dedication and was beset by many critical questions regarding his design. He conducted a seminar for architectural students (open to the public) in the West Rehearsal Room after the dedication ceremony. Following a film on his work, Kahn discussed the nature of Architecture and of its special meanings for him. In an almost mystically philosophical mood, he spoke quietly of "beginnings" and of influences and values rather than of examples of his work.
Later, at each of the Center's four corners, Kahn appeared and spoke to gathered groups about his building and about architecture in general. He attempted to answer questions, many of which were polite, but some of which were piercing and provoked strong defensive answers. Most of the answers were genuine, patient and philosophical, however, and often delivered with twinkling eyes were the familiar Kahnisms: "If you ask a brick what it wants to be it would say 'an arch,'" "The sun never knew how great it was until it shone upon a brick wall," "Concrete is molten stone, a magnificent material—it would be criminal to cover it with anything," "An artist always makes apparent how he makes things—he never disguises or veneers," and "Trees are too fragile."
Lightly veiled were Kahn's annoyances with his working situation during the project's planning, with the budget limitations he had to observe, with the concrete block he was forced to use inside instead of the brick he loves, and with the softening effect of Freimann Square's trees through which one sees his building. And through it all ran the deep and understandable regret that his first scheme was not carried out. (continued on p. 13)
A report on what is being done about the gas situation.
This is one of a series of reports about the gas situation. The Gas Utilities of Central Indiana want you to know what is being done to assure future supplies and what can be done to conserve present supplies.
The solutions to the gas energy problem will not be easy.
The earth still contains tremendous reserves of natural gas, but we will need new wells off-shore and much deeper wells on land to tap them. And we will need new systems to deliver gas from reserves in Alaska and foreign lands.
The gas industry is working with the government to develop new technologies to derive clean "pipeline quality" gas from the abundant deposits of coal in this country; pilot plants are already in operation. In other plants around the country, gas is being produced from oil and naphtha. All of these new methods require great investments of time and money, and the clean gas they produce will cost more.
But while we work together today to conserve our nation's resources of energy, be assured that the gas industry is exploring ways to meet the need for clean energy tomorrow, too.
GAS UTILITIES OF CENTRAL INDIANA:
Central Indiana Gas Company
Citizens Gas & Coke Utility
Hoosier Gas Corporation
Indiana Gas Company, Inc.
Kokomo Gas and Fuel Co.
Richmond Gas Corporation
Terre Haute Gas Corporation
Save those calories
- with Styrofoam insulation
Predictions are that shortages of fuel will filter down to the industrial level this winter. The President has ordered builders of public buildings to reduce heat losses by 40%. It's said that the era of cheap energy may be coming to an end. To save both energy and money for as long as the building stands, we recommend the permanent insulation value of Styrofoam. Call on our free advisory services for any type of structure you may be planning.
calorie: the heat required to raise one gram of water one degree centigrade.
CALL FOR TECHNICAL INFORMATION
SEWARD SALES CORPORATION
1516 Middlebury
Elkhart, Indiana 46514
(219) 238-8507
3660 Michigan Street
Cincinnati, Ohio 45208
(615) 321-4140
2070 East 56th Street
Indianapolis, Indiana 46220
(317) 253-3239
THE VIOLIN
AND THE VIOLIN CASE
(continued from page 10)
It is there now, Kahn's building, and will be there for the generations. It is his "violin case" and the theatre within the violin—the two never touching. It is controversial for most of Fort Wayne, this strange building with the "face" and the complexion problems which will disappear after adolescence. It will age gracefully and settle into the city and be used, used, used—for it does and will continue to work well—and therein lies its real greatness.
Any new building must pass through a period during which people become used to it, but even when it is old and grimy and accepted, Kahn's theatre will be different from its neighbors—for in its great spaces will always be the philosophy and the mysticism of the "beginnings" of architecture and the presence of Louis I. Kahn who, often when many others did not, believed in his building and felt it was "a good work."—ARTHUR J. MATOTT
PHOTOGRAPHS: Fort Wayne News-Sentinel
Except p. 6, p. 9 right, p. 10 right by Gabriel De Lobbe
and p. 7 by Technika, Inc.
Tour group during Dedication Week. Kahn replies to questions, below.
NECROLOGY
RICHARD E. BISHOP, AIA
1892 - 1973
Mr. Bishop, 5940 Sherman Drive, Indianapolis, passed away in early November 1973. He was best known as the designer of most of the buildings in the Indiana state park system. His principal works include the Nancy Hanks Lincoln Memorial and the Children's Camp, Lincoln State Park; the Potawatomi Inn, Pokagon State Park; and the Abe Martin Lodge.
From 1934-1939 Mr. Bishop served the National Park Services as State Supervisor of Park and Recreational Planning for Indiana, Illinois, Michigan and Wisconsin. He was Planning Director for the Indiana Department of Conservation from 1942-1946 and afterwards went into private practice at which time he designed a Kentucky State Park. Mr. Bishop closed his office in 1971.
MERRITT HARRISON, FAIA
1886 - 1973
Mr. Harrison passed away 24 July 1973 in Westminster Village, Greenwood.
Mr. Harrison was a founder of the Indiana Society of Architects and was instrumental in uniting it with the old Indiana Chapter, AIA. He helped organize the Building Congress of Indiana in 1929 and served as chairman of the Building Congress of the United States from 1951-1954. He received his B.A. from Cornell University School of Architecture in 1911 and opened his own office in the Board of Trade Building in 1916. In partnership with William E. Russ from 1934 until the death of Mr. Russ in 1950, he maintained his own office until 1971.
Mr. Harrison's principal works include the Coliseum at the State Fair Grounds, Crispus Attucks High School, the Meridian Street Methodist Church and the Hillcrest and Broadmoor Country Clubs.
Writing A Specification?
Get Manual Dexterity
The complexity of air systems being introduced into contemporary construction requires the complete step by step cooperation of everyone involved. To aid in this important designer/builder relationship, SMACNA has prepared a series of technical standards and manuals covering many areas of sheet metal construction. Using these standards in your specifications provides for equitable bidding on consistent standards of construction, saving all parties valuable time and money.
For more information and copies of the standards and specification outlines applicable to your work, contact:
Jim Miller (Ft. Wayne) 219-489-4541
Ralph Potesta (Hammond) 219-838-5480
Bill Finney (Indianapolis) 317-546-4055
Don Golichowski (South Bend) 219-289-7380
A USEFUL GUIDE FOR SEPARATE SPECIFICATIONS AND SEPARATE BIDS FOR AIR HANDLING SYSTEMS.
INDIANA SHEET METAL COUNCIL BOX 55533 INDIANAPOLIS, INDIANA 46205
There's more to this than meets the eye.
Often the way a situation stacks up depends on your outlook. Ideally, the fewer the preconceptions, the more free the creative vision to see the best solution. Your client depends on your problem-solving experience to discover the best solutions.
Yet sometimes past solutions keep the disciplined mind from climbing new heights. That's when our Architect and Engineer Liaison representative can help. There are some stimulating electric ideas that may open your imagination to new steps in problem solving.
Maybe there's more to your problem than meets the eye. We'd like to help. Call us today. Phone Architects and Engineers Liaison, 635-6868, Ext. 2-264. |
Split Decisions
Guidance for Measuring Locality Preservation in District Maps
November 2021
The Center for Democracy & Technology (CDT) is a 25-year-old 501(c)(3) nonpartisan nonprofit organization working to promote democratic values by shaping technology policy and architecture. The organization is headquartered in Washington, D.C. and has a Europe Office in Brussels, Belgium.
Authors
Jacob Wachspress
William T. Adler
WITH CONTRIBUTIONS BY
Kyle Barnes, Samir Jain, Ari Goldberg, and Tim Hoagland.
ACKNOWLEDGEMENTS
We thank Bernard Grofman for his feedback and suggestions.
Footnotes in this report include original links as well as links archived and shortened by the Perma.cc service. The Perma.cc links also contain information on the date of retrieval and archive.
# Contents

**Introduction**

**Why keep localities whole?**
- Simplify election administration
- Inform voters
- Obstruct extreme partisan gerrymandering
- Empower communities
- Statutory requirements

**The metrics**
- Geography-based
- Population-based

**Why are locality-splitting metrics helpful?**
- Guide redistricting officials
- Legal tool for challenging bad maps
- Public engagement

**Choosing (and using) a metric**
- Some metrics take population into account
- Some metrics are easier to explain
- Population-based metrics agree most of the time
- Recommendation

**Conclusion**

**Appendix**
- Formulas for population-based metrics
- More detail on the distinguishability of metrics
- More detail on punishing splits that divide a larger fraction of people in a locality
- Accounting for populous localities and small districts
Introduction
Every ten years, states redraw their congressional and state legislative district boundaries to account for the shifting population, a process known as redistricting. Redistricting can have an enormous impact on election outcomes. By carefully drawing voters into specific districts, mapmakers can, for example, change the partisan makeup\(^1\) of the U.S. House of Representatives by several seats or dramatically diminish minority representation.\(^2\) New maps can have a similarly large impact on who is elected to state legislatures throughout the country.
In the aftermath of the 2020 election, voters split along party lines\(^3\) when asked whether they trust the American election system. As recently as September 2021, 36% of Americans said that President Biden did not legitimately get enough votes to win the presidency.\(^4\) Unfortunately, the usual process for how districts are drawn is unlikely to bolster voters’ confidence that elections are free and fair. Typically, state legislatures enact maps as they do any other legislation. If one party controls the process, it can draw district lines to maximize its share of seats (i.e., partisan gerrymandering). Or both parties can collaborate to ensure safe districts for incumbents, who can cruise towards an easy re-election (i.e., bipartisan gerrymandering). All forms of gerrymandering undermine the idea that voters should choose their representatives, rather than the other way around – and therefore undermine trust in democracy.
As we enter the decennial redistricting period, there is bad news and there is good news.
First, the bad news. In 2019,\(^5\) the U.S. Supreme Court held, in a 5–4 decision, that claims of partisan gerrymandering are not reviewable by federal courts. This is worrisome, because, as a result of the 2020 state legislative elections, the majority of
---
\(^1\) The Cook Political Report. (n.d.). *Road Map to Redistricting 2021-2022*. [perma.cc/8NML-AC3B]
\(^2\) Soffen, K. (2016, June 8). *How racial gerrymandering deprives black people of political power*. Washington Post. [perma.cc/BE36-KDLH]
\(^3\) Morning Consult. (n.d.). *How Voters’ Trust in Elections Shifted in Response to Biden’s Victory*. [perma.cc/FN2Z-FR2V]
\(^4\) Agiesta, J., Edwards-Levy, A. (2021, September 15). *CNN Poll: Most Americans feel democracy is under attack in the US*. CNN. [perma.cc/F658-7WRP]
\(^5\) *Rucho v. Common Cause*, 588 U.S. ___ (2019). [perma.cc/Q8N9-TCSM]
the population\(^6\) currently lives in a state where one party will have full control of redistricting. In those states, one party will be free to maximize partisan gain, unencumbered by the other party or, now, by federal courts.
But the good news is that several large states,\(^7\) including Colorado, Michigan, New York, Ohio, and others, have enacted redistricting reform since the last redistricting, ranging from bipartisan redistricting commissions, to citizens’ commissions, to new statutory and constitutional fairness requirements. Additionally, new software has created avenues for the public to engage in the process, such as by submitting maps, evaluating maps,\(^8\) and giving public input to redistricters – for example, by indicating their communities of interest (COIs) – geographically contiguous groups with shared cultural or economic characteristics that create common representational interests.\(^9\)
When analyzing the effect that redistricting can have on representation, it is essential to determine which groups of voters are kept whole within a district, and which groups are split across districts. A large group of voters may have their electoral power needlessly diminished if they are concentrated within a single district – quarantining voter power that could otherwise be spread across multiple districts (i.e., “packing”) – or if they are fractured across districts such that no representative prioritizes their interests (i.e., “cracking”).
Groups of voters are not only defined by party, but also by race, ethnicity, language, economic interests, environmental interests, culture, history, shared government services, or other common legislative concerns.
This paper focuses specifically on measuring the extent to which a district map splits voters within a locality. In this paper, “localities” refers to contiguous geographic entities such as counties, cities, towns, and municipalities,\(^11\) as well as COIs defined by the public.
---
6 Wolf, S. (2021, August 11). *The Daily Kos Elections guide to how redistricting will unfold in all 50 states*. Daily Kos. [perma.cc/F2N6-S6JL]
7 Associated Press. (2020, March 5). *More states to use redistricting reforms after 2020 census*. Associated Press. [perma.cc/ZD8N-AO57]
Districtr: (n.d.). *Districtr*. [perma.cc/3U6T-Q2MU]
Dave’s Redistricting: (n.d.). *About DRA*. [perma.cc/SS3B-TUPB]
Princeton Gerrymandering Project: (n.d.). *Redistricting Report Card*. [perma.cc/JS7B-N8A5]
PlanScore: (n.d.). *PlanScore*. [perma.cc/7ZJH-MCX9]
9 Representable, (n.d.). *Representable*. [perma.cc/H2ZL-X652]
10 The Freedom to Vote Act, introduced by Senate Democrats in September 2021, defines communities of interest as “an area for which the record before the entity responsible for developing and adopting the redistricting plan demonstrates the existence of broadly shared interests and representational needs, including shared interests and representational needs rooted in common ethnic, racial, economic, Indian, social, cultural, geographic, or historic identities, or sharing similar socioeconomic conditions. The term communities of interest may, if the record warrants, include political subdivisions such as counties, municipalities, Indian lands, or school districts, but shall not include common relationships with political parties or political candidates.” *Freedom to Vote Act*, S.2747, 117th Cong. (2021). [perma.cc/8BC6-5QPU]
11 The U.S. Census Bureau refers to localities that provide governmental services as “incorporated places” and recognizes other unincorporated communities as “census designated places.” *Census Designated Places (CDPs) for the 2020 Census-Final Criteria*, 83 Fed. Reg. 56290 (November 13, 2018). [perma.cc/AM8B-8G9H]
Keeping localities whole has several benefits to democracy. Accordingly, some states require that district maps preserve localities to the maximum extent possible. However, there is not a single best, commonly accepted way to measure the degree to which a district map splits localities. In this paper, we discuss the motivations for preserving localities and review current methods for measuring locality-splitting.
Some commonly-used metrics for measuring locality-splitting are entirely geography-based; they do not take into account where voters actually live. We recommend against using these metrics. We describe several population-based alternatives, introduce a new one which may have benefits, and provide additional guidance to those drawing or evaluating maps.
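The geography/population distinction can be made concrete with a small sketch. The sample data, function names, and the particular population-based formulation below (the chance that two randomly chosen residents of the same locality land in different districts) are illustrative assumptions, not the report's definitions; a real analysis would start from census-block district assignments.

```python
from collections import defaultdict

# Each record: (locality, district, population) for one census block.
# The data below is purely illustrative.
blocks = [
    ("County A", 1, 600), ("County A", 1, 400),  # County A kept whole
    ("County B", 1, 100), ("County B", 2, 900),  # County B split 100/900
    ("County C", 2, 500), ("County C", 3, 500),  # County C split evenly
]

def localities_split(blocks):
    """Geography-based: count localities touched by more than one district,
    ignoring how many people live on each side of the split."""
    districts = defaultdict(set)
    for loc, dist, _ in blocks:
        districts[loc].add(dist)
    return sum(1 for d in districts.values() if len(d) > 1)

def split_pair_rate(blocks):
    """Population-based: probability that two randomly chosen residents of
    the same locality are assigned to different districts."""
    pop = defaultdict(lambda: defaultdict(int))
    for loc, dist, p in blocks:
        pop[loc][dist] += p
    same_pairs, total_pairs = 0, 0
    for dists in pop.values():
        n = sum(dists.values())
        total_pairs += n * (n - 1)
        same_pairs += sum(p * (p - 1) for p in dists.values())
    return 1 - same_pairs / total_pairs

print(localities_split(blocks))            # Counties B and C are split -> 2
print(round(split_pair_rate(blocks), 3))   # -> 0.227
```

Note that the geography-based count treats County B's lopsided 100/900 split exactly like County C's even split, while the population-based rate penalizes the even split more heavily; that difference is the crux of the recommendation discussed later in the report.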
We hope that this guidance will enable redistricting officials and the public to select appropriate locality-splitting metrics, evaluate choices made in redistricting, and bring about fairer representation.
Why keep localities whole?
Simplify election administration
Respecting local political boundaries makes it easier to administer elections. Election officials must create a unique ballot style for each set of different contests that a voter could be eligible for (including statewide, congressional, state legislative, municipal, and local races). If a district spans many different political boundaries, it increases the number of ballot styles, increasing the burden on election officials to ensure voters receive the right ballot. If election officials accidentally give some voters the wrong ballot, this can influence election outcomes and undermine confidence in the democratic process.
For example, in 2018, Virginia election officials accidentally assigned over a hundred voters to the wrong House of Delegates district.\(^{12}\) This mistake may have changed the winner of the election and the party that controlled the chamber. By reducing the number of ballot styles required, the establishment of district boundaries that follow county and/or municipality lines can mitigate administrative problems by simplifying ballot assignment and tabulation.
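The ballot-style arithmetic above can be sketched in a few lines: a precinct's ballot style is determined by the combination of contests its voters are eligible for, so each district boundary that cuts through a locality multiplies the number of distinct combinations. The precinct and district labels below are hypothetical.

```python
# Hypothetical precincts mapped to (congressional, state senate, state house)
# district assignments; each distinct tuple requires its own ballot style.
precincts = {
    "P1": ("CD-1", "SD-4", "HD-12"),
    "P2": ("CD-1", "SD-4", "HD-12"),  # same combination as P1: no new style
    "P3": ("CD-1", "SD-4", "HD-13"),  # one boundary crossing -> new style
    "P4": ("CD-2", "SD-5", "HD-13"),
}

ballot_styles = set(precincts.values())
print(len(ballot_styles))  # 3 distinct ballot styles to print and distribute
```

Districts that follow county or municipal lines collapse many of these tuples into one, which is the administrative simplification the paragraph describes.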
***
Inform voters
Keeping localities whole helps voters stay informed\(^{13}\) about their representatives. A 2010 study\(^{14}\) found that voters in better-preserved counties were more likely to be able to correctly name congressional candidates in their district. One possible explanation for this phenomenon is that TV advertising markets follow county boundaries, meaning that voters in preserved counties are more likely to only see news and advertisements about their specific congressional race. Another explanation is that voters get information from talking to their friends and neighbors, and that a county or a municipality may emerge as conversational shorthand for understanding local politics and representation.
***
\(^{12}\) Branscome, J. (2018, February 8). *Fredericksburg voters quietly drop lawsuit requesting new 28th District election*. The Free-Lance Star. [perma.cc/7757-4TT9]
\(^{13}\) Stephanopoulos, N. (2012). *Spatial Diversity*. Harvard Law Review, 125, 1903-2010. [perma.cc/4SME-H6RX]
\(^{14}\) Winburn, J., & Wagner, M. W. (2010). *Carving Voters Out: Redistricting's Influence on Political Information, Turnout, and Voting Behavior*. Political Research Quarterly, 63(2), 373-386. [perma.cc/UQG9-EN7G]
Obstruct extreme partisan gerrymandering
When map drawers must preserve localities with pre-existing boundaries, it can make extreme partisan gerrymandering more difficult. The map that is best for a particular party has no inherent reason to preserve county boundaries, so map drawers who must avoid county splitting have less freedom to slice up voters in the optimal way for their desired partisan outcome. It is important to note that preserving counties does not, on its own, guarantee partisan fairness (Figure 1).
Figure 1. North Carolina counties (outlined) and congressional districts for the 113th Congress and the 115th Congress (colored; top and bottom, respectively). After Republicans’ partisan gerrymander of North Carolina’s congressional districts in 2011 (top), a federal court\(^{15}\) tossed out the map, and the state legislature’s 2016 revision (bottom) split about one-third as many counties. However, the new map performed as well as or better at protecting Republican seats, even during the Democratic Party’s strong 2018 midterms.\(^{16}\) While limitations on county splitting make extreme partisan gerrymanders more difficult, they are far from a sufficient constraint to prevent them.
Source: U.S. Census Bureau.
***
\(^{15}\) Harris v. Cooper. (2016). [perma.cc/QC5E-WMS7]
\(^{16}\) Adler, W. T., & Thompson, S. A. (2018, November 7) The ‘Blue Wave’ Wasn’t Enough to Overcome Republican Gerrymanders. *New York Times*. [perma.cc/EC5P-3DAL]
Empower communities
Sometimes the reason to preserve a community is not because of shared political boundaries, but shared characteristics of the community’s voters. Historically, map drawers have marginalized minority voters\(^{17}\) (most often, Black voters in the Jim Crow South) by splitting them among many districts. In order to give minority voters more power to choose their representatives, the Voting Rights Act of 1965 requires that, in certain cases, map drawers must create minority “opportunity districts.”\(^{18}\)
Some states\(^{19}\) give similar consideration to COIs. Examples may be a school district, a historically Cuban neighborhood, or a mining town. These communities stand to benefit from the power to choose their representative and lobby for specific legislation. In states that consider COIs, citizens have the opportunity to engage in the redistricting process by defining the communities that are important to them. Community-mapping platform Representable\(^{20}\) allows communities to define their boundaries online and shares the data with redistricting officials. The mappability of COIs facilitates public testimony during the redistricting process and provides redistricting officials with the boundary data they need to preserve COIs.
***
Statutory requirements
Given the important reasons to preserve localities, many states have statutory or constitutional requirements to avoid excessive locality splitting. A majority of states mandate that districts account for political boundaries (like counties and municipalities). A smaller but growing number of states require COIs to be kept whole when possible.\(^{21}\) The Freedom to Vote Act, proposed by Democrats in the U.S. Senate in September 2021, would create federal requirements to preserve localities, including COIs, counties, and other political subdivisions.\(^{22}\)
Sometimes, state redistricting law defines exactly what it means to respect locality boundaries. For example, Ohio’s constitution provides detailed rules for how many counties and municipalities may be split and the manner in which they may be split.\(^{23}\) But ambiguous provisions are much more common. An example is an Idaho law requiring, “[t]o the maximum extent possible, districts shall preserve traditional neighborhoods and local communities of interest.”\(^{24}\) This imprecise statement leaves a lot of room for judgment, especially due to inevitable trade-offs between locality preservation and other requirements like equal population and compactness of districts. It also raises questions
---
\(^{17}\) Prokop, A. (2018, November 14). *What is racial gerrymandering?* Vox. [perma.cc/7CSN-V9R4]
\(^{18}\) Altman, M., & McDonald, M. (n.d.). *The Voting Rights Act.* Public Mapping Project. [perma.cc/RYF9-TUW6]
\(^{19}\) National Conference of State Legislatures. (2021, July 16). *Redistricting Criteria.* [perma.cc/S3KX-W87S]
\(^{20}\) Representable. (n.d.). *Representable.* [perma.cc/H2ZL-X652]
\(^{21}\) National Conference of State Legislatures. (2021, July 16). *Redistricting Criteria.* [perma.cc/S3KX-W87S]
\(^{22}\) *Freedom to Vote Act,* S.2747, 117th Cong. (2021). [perma.cc/B8C6-5QRU]
\(^{23}\) *Ohio Const.* art XIX, pt. 2. [perma.cc/D65L-AFR8]
\(^{24}\) *Idaho Stat.* § 72-1506. [perma.cc/9LSA-UFB3]
about how to define “traditional neighborhoods and local communities of interest.” Later, we will touch on this issue and the risks involved with requiring preservation of localities without pre-existing boundaries.
The metrics
Without specific guidance from redistricting law, it is tempting (and easy) to measure locality splitting by counting the number of localities that are split. However, this does not capture all the information about how many people are affected and how severely. As a result, several different splitting metrics appear in court documents and the redistricting literature.
Here we summarize five different metrics and introduce a sixth, describe the reasoning behind each one, and explain the similarities and differences in how they quantify locality splitting. In a later section, we will offer guidance on how a redistricting commission, journalist, or member of the public might choose a metric. For technical readers, we include mathematical definitions and detailed examples in the appendix. In addition, everything we discuss in this section is implemented in our GitHub repository.\(^{25}\)
***
**Geography-based**
*Localities split*
This is a very simple and commonly-used way to measure locality splitting: count the number of localities that are split into more than one district. In New Hampshire’s two congressional districts (Figure 2) there are five split counties: Grafton, Belknap, Merrimack, Hillsborough, and Rockingham.

\(^{25}\) Wachspress, J., Moffatt, C., & Adler, W. T. *Metrics of locality splitting in political districting.* (Version 0.21) [Computer software] [perma.cc/82DN-S35M]
**Locality intersections**
A shortcoming of the previous metric is that a locality that spans (i.e., intersects) three or more districts is treated exactly the same as a locality that spans just two districts. An alternative is to calculate the number of districts that intersect the locality. This way, splitting a locality into five districts is punished much more harshly than splitting it into two.\(^{26}\) (See Figure 3.)
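Both geography-based metrics can be computed directly from assignments of census units to districts and to localities. The following is a minimal Python sketch of that bookkeeping (our own illustration, with hypothetical names; it is not code from the paper's repository):

```python
from collections import defaultdict

def geography_metrics(district_of_unit, locality_of_unit):
    """Count localities split and locality intersections, given mappings
    from each census unit to its district and to its locality."""
    # Map each locality to the set of districts it intersects.
    districts_in_locality = defaultdict(set)
    for unit, district in district_of_unit.items():
        districts_in_locality[locality_of_unit[unit]].add(district)

    # "Localities split": localities intersecting more than one district.
    localities_split = sum(
        1 for ds in districts_in_locality.values() if len(ds) > 1
    )
    # "Locality intersections": district-locality intersections,
    # summed across all localities.
    locality_intersections = sum(
        len(ds) for ds in districts_in_locality.values()
    )
    return localities_split, locality_intersections

# Toy example: locality "A" spans districts 1, 2, and 3; "B" sits in district 2.
splits, intersections = geography_metrics(
    {"b1": 1, "b2": 1, "b3": 2, "b4": 3, "b5": 2, "b6": 2},
    {"b1": "A", "b2": "A", "b3": "A", "b4": "A", "b5": "B", "b6": "B"},
)
```

Note that, as with the metrics themselves, only the set of districts touching each locality matters here, not the geography of the pieces.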

***
**Population-based**
The previous two metrics treat every locality split the same, regardless of where people are actually located. This could under-penalize a split that separates a heavily-populated region (and therefore affects many people), and over-penalize a split involving a lightly-populated region (which affects fewer people). The previous metrics may also under-penalize splits that substantially divide a locality and over-penalize a split that peels off only a small fraction of people (Figure 4). For this reason, we suggest using metrics that work explicitly with population counts. These metrics measure the extent to which people in the same locality
---
\(^{26}\) Note that the number of contiguous geographical pieces does not matter for this calculation; for example, in the New Hampshire map above (Fig. 2), even though Rockingham County is split into three “pieces,” the locality intersections metric only counts the number of districts that the county intersects. Non-contiguous county pieces in the same district may be a sign of a gerrymandering tactic called “fracking,” but this is not one of the statewide splitting scores that we evaluate in this report. Cervas, J., Grofman, B., Horgan, T., & Freimer, R. (2021). *Fracking: A Contiguity-Related Redistricting Metric*. SSRN. [perma.cc/HU7C-2Q27]
are split by a districting plan and punish plans that affect more people.

**Figure 4.** Partial map of North Carolina counties (outlined in white) and congressional districts for the 113th Congress and the 115th Congress (colored; left and right, respectively). Iredell County is split into two districts. Numbers indicate the population of Iredell County residing in each district. Geography-based splitting metrics score each of these county splits equally, despite the first map splitting Iredell County more significantly. **Source:** U.S. Census Bureau.
### Effective splits
The “effective splits” metric was proposed for measuring COI splitting by Wang et al. (2021)\(^{27}\) and has roots in the political science literature of the 1970s.\(^{28}\) It can be used to measure the splitting of any kind of locality, not just COIs. One way of thinking about this metric is: each person has a different perception of how split up their locality “feels,” which depends on the proportion of the locality’s people that are in that person’s district. This metric attempts to aggregate each person’s perception.
If a locality is split once, into two equally populated halves, each person feels as if the locality has been split once, for an effective splits score of 1. If it is split into three equal parts (each with 33.3% of the population), each person feels as if the locality is split twice, for an effective splits score of 2. If a locality is split into three parts constituting 80%, 10%, and
---
\(^{27}\) Wang, S., Chen, S. J., Ober, R., Grofman, B., Barnes, K., & Cervas, J. (2021). *Turning Communities Of Interest Into A Rigorous Standard For Fair Districting*. SSRN. [perma.cc/68FR-BMBE]
\(^{28}\) Laakso, M., & Taagepera, R. (1979). “Effective” Number of Parties: A Measure with Application to West Europe. *Comparative Political Studies*, 12(1), 3-27. [perma.cc/NHV3-F3GC]
10% of the population, the vast majority of people will feel relatively unsplit and the effective splits score will be lower (in this case, about 0.5).
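These worked numbers follow from the formula given in the appendix, \(1 / \sum_i p_i^2 - 1\), where \(p_i\) is the share of the locality's population in district \(i\). A minimal Python sketch (the function name is our own, not the repository's API):

```python
def effective_splits(populations):
    """Effective splits of one locality, from its per-district populations
    (appendix formula: 1 / sum of squared population shares, minus 1)."""
    total = sum(populations)
    return 1 / sum((v / total) ** 2 for v in populations) - 1

one_split = effective_splits([50, 50])        # two equal halves: 1
two_splits = effective_splits([10, 10, 10])   # three equal parts: 2
uneven = effective_splits([80, 10, 10])       # one dominant part: about 0.5
```

The uneven 80-10-10 split scores close to 0.5 because the 80% of residents in the dominant district pull the "felt" number of splits down.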
**Conditional entropy**
The “conditional entropy” metric proposed by Guth, Nieh, and Weighill (2020) quantifies the extra amount of information created by the district boundaries once the locality boundaries are known.\(^{29}\)
The idea behind entropy in this case is to assign a “surprise score” to each person in the locality. If a person only knows the number of people from their locality in each district, how surprised will she be to learn which district she is in? If the locality is not split, no one will be surprised at all. If the locality is split into three parts constituting 90%, 5%, and 5% of the population, the people in the 90% part will not be surprised, but the people in the two 5% parts will be very surprised.
In order to quantify this surprise, conditional entropy divides 100% by the proportion of the locality’s people in the same district. Since 100% divided by 90% is about 1.11 and 100% divided by 5% is 20, the people in the more populous part are much less surprised. For somewhat technical reasons related to quantifying information, the entropy metric then takes the base 2 logarithm of these numbers and reports the average across all people.
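Concretely, the appendix formula for this average is \(\sum_i p_i \log_2(1/p_i)\). A short Python sketch (our own naming, not the repository's API):

```python
import math

def conditional_entropy(populations):
    """Conditional entropy of one locality, from its per-district populations:
    the average, over residents, of log2(1 / share of the locality's people
    in that resident's district)."""
    total = sum(populations)
    return sum((v / total) * math.log2(total / v) for v in populations)

mostly_whole = conditional_entropy([90, 5, 5])  # low: most residents unsurprised
even_halves = conditional_entropy([50, 50])     # one full "bit" of surprise
unsplit = conditional_entropy([100])            # zero: no surprise at all
```

An unsplit locality scores exactly 0, and a locality split into two equal halves scores exactly 1 bit.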
**Square root entropy**
Duchin (2018) proposed a slight modification of the conditional entropy metric in order to punish low-population splits more strongly.\(^{30}\) The metric is known as “square root entropy” because the modification to the formula includes a square root sign.
**Split pairs**
With the goal of providing a simpler and more interpretable population-based metric than those in the literature, we are introducing the “split pairs” metric here. The metric calculates, among all pairs of people in the same locality, what proportion of them are split into two different districts. As a simple example, let’s say that a small, rural locality called Alphabetville has 8 residents: A, B, C, D, E, F, G, and H. Suppose that A, B, C, and D are in one district, E and F are in another, and G and H are in yet another (Figure 5).
---
\(^{29}\) Guth, L., Nieh, A., & Weighill, T. (2020). *Three Applications of Entropy to Gerrymandering*. arXiv. [perma.cc/Y8WU-SMFM]
\(^{30}\) Duchin, M. (2018, February). *Outlier analysis for Pennsylvania congressional redistricting*. [perma.cc/A3AP-L84Z]
Figure 5. Map of a very small hypothetical locality, Alphabetville. Alphabetville’s eight residents are divided into three separate districts (colored).
Then the following pairs of people are split into different districts:
AE, AF, AG, AH, BE, BF, BG, BH, CE, CF, CG, CH, DE, DF, DG, DH, EG, EH, FG, FH
while the following pairs are not:
AB, AC, AD, BC, BD, CD, EF, GH.
This makes 20 split pairs out of 28, for a split pairs score of $20/28=0.714$. If all of the people were placed in the same district, there would be no split pairs, and the score would be 0.
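The counting above can be done directly from the per-district populations (4, 2, and 2) without enumerating the pairs by hand. A short Python sketch (the function name is our own, not the repository's API):

```python
from math import comb

def split_pairs(populations):
    """Split pairs score of one locality, from its per-district populations:
    the fraction of all pairs of locality residents who are placed in
    different districts."""
    total = sum(populations)
    # Pairs of residents who share a district, summed over districts.
    same_district = sum(comb(v, 2) for v in populations)
    return 1 - same_district / comb(total, 2)

# Alphabetville: 4, 2, and 2 residents in three districts.
score = split_pairs([4, 2, 2])  # 20/28, about 0.714
```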
The split pairs score can be summarized through the following hypothetical story: A random person does not remember his congressional district, so he picks a person randomly from his locality, asks what that person’s district is, and guesses that he lives in the same district. The probability of guessing wrong is exactly the split pairs score.\(^{31}\)
---
\(^{31}\) A slightly different version of this metric was formulated by Saxon (working paper). [perma.cc/9LSB-STRQ]
Why are locality-splitting metrics helpful?
**Guide redistricting officials**
Though state legislatures often prioritize partisan and incumbent interests in the redistricting process, redistricting officials do sometimes adhere to best practices when determining which map to enact. In particular, this is typically the mandate of independent redistricting commissions. Redistricting officials should use a principled, population-based measure of locality splitting to assess possible maps. And to the extent that redistricting officials use sampling algorithms to provide or evaluate different options, the locality-splitting metrics can be used to influence which randomly generated maps the algorithm should accept or reject.
***
**Legal tool for challenging bad maps**
Locality-splitting metrics provide a legal tool for challenging redistricting plans, especially in states that require preserving localities “to the extent possible” but do not give more precise instructions. If a redistricting plan splits localities in an egregious way, a plaintiff should have no problem finding an alternative map that is better on *all* of the metrics in the literature. If this alternative also improves on other legally-prescribed attributes (e.g., compactness of districts), this adds to the body of evidence that the enacted map does not comply with redistricting law.
When evaluating how badly a map splits localities, it may be important to analyze why the localities are split. For example: Was a particular locality split to avoid a split somewhere else? Was the split necessary for balancing population across districts? Was it split to achieve a partisan goal, such as creating a competitive district – or maximizing gain for one party? Was it split to achieve a racial goal, such as creating a minority opportunity district – or cracking a racial group?
***
**Public engagement**
An engaged public can put pressure on politicians to consider more than their own partisan and personal interests. This is especially relevant for COIs who want to advocate for their community to be kept whole. Through a collaboration with Representable, members of the public will be able to see any districting plan’s splitting scores for COIs. This will allow community members, coalitions of communities, and advocacy
---
32 Daley, D. (2016). *Rattled: Why Your Vote Doesn’t Count*. Liveright. [perma.cc/D2MB-NDR8]
33 DeFord, D., & Duchin, M. (2019). *Redistricting Reform in Virginia: Districting Criteria in Context*. *Virginia Policy Review*, 12(2), 120-146. [perma.cc/BVQ5-XYEE]
34 Representable. (n.d.). *Representable*. [perma.cc/H2ZL-X652]
organizations to experiment with alternative maps and hold redistricting officials accountable if their proposal splits communities excessively or unnecessarily.
**Risks and limitations**
Allowing the public to advocate for locality preservation or to draw their own COIs is not without its drawbacks. On the one hand, it allows regular citizens more input in the redistricting process than they have ever had. On the other hand, partisan actors can infiltrate this process, anonymously advocating for preserving particular localities only when it would favor their party. For example, in 2011, California Democratic House members reportedly coordinated efforts to influence the California Citizens Redistricting Commission in order to create a map that would protect Democratic incumbents.\(^{35}\) This effort included advocacy for preserving specific counties and municipalities, and submissions from a fake nonprofit organization (with an accompanying Facebook page). In response, the Commission recommended that future Commissions “discuss and make decisions about the potential manipulation of the input process.”\(^{36}\) Reportedly, partisan actors have recently provided comments to the 2021 Commission without disclosing their affiliations.\(^{37}\)
The threat of partisan actors “hacking” this process makes it even more important for regular citizens to be involved in a robust community-mapping process. This way, the public input phase actually represents real communities’ interests. And it is also important that mapmakers or other entities do as much due diligence as possible in validating the identities and possible partisan affiliations of those submitting input. Representable vets its submissions by requiring mappers to provide detailed explanations of the characteristics (cultural, economic, historical, etc.) that unite their community.\(^{38}\) They give this information to mapmakers and strongly encourage them to use it while assessing the validity of the submitted COIs. But redistricting commissions may want to go even further in ensuring that submitters’ partisan affiliations, if any, are made clear.
---
\(^{35}\) Pierce, O., & Larson, J. (2011, December 21). *How Democrats Fooled California’s Redistricting Commission*. ProPublica. [perma.cc/5TE2-76NG]
\(^{36}\) Aguirre, G. (2016, April). *Summary report and compilation of 2010 Commission actions and suggestions for future Citizens Redistricting Commissions*. California Citizens Redistricting Commission. [perma.cc/6N4D-EWAP]
\(^{37}\) Christopher, B., & Kamal, S. (2021, September 28). *Between the lines: Hidden partisans try to influence California’s independent redistricting*. CalMatters. [perma.cc/8TY4-YYXP]
\(^{38}\) Representable. (n.d.). *Representable*. [perma.cc/H2ZL-X652]
Choosing (and using) a metric
This section provides some guidance on how to use locality-splitting metrics to assess statewide redistricting plans, including which metric to select and other choices to make. Many choices depend on the user’s priorities, but we will make general recommendations where appropriate.
***
Some metrics take population into account
As described above, the most commonly used methods for measuring locality splitting – such as counting the number of localities split by a district plan – do not take population into account (Fig. 4). Population-based metrics instead measure the degree to which a district plan divides the population. Given that redistricting is fundamentally about the representation of people, we recommend the use of population-based metrics, except when redistricters are bound by statute to use other metrics. (Where statutes require the use of geography-based metrics, we recommend that state legislatures consider altering the law to allow redistricters to use population-based metrics.)
***
Some metrics are easier to explain
Each metric quantifies something slightly different and may have benefits in different scenarios. But the importance of interpretability should not be overlooked. Advocates interacting with the press or lobbying elected officials require easily understandable metrics to get their point across. In certain circumstances, redistricting lawyers should be wary of bringing mathematical formulas into court, since judges may prefer standards that are simple and broadly applicable. (For example, Chief Justice John Roberts once referred to the “efficiency gap,” a relatively simple mathematical measure of partisan fairness, as “sociological gobbledygook.”)\(^{39}\)
A redistricting commission may opt for a complicated metric that aligns with its priorities, but most others in the redistricting community will likely prefer using an easily explainable metric. To be sure, ease of explanation may bring tradeoffs: for example, while the geography-based metrics are the simplest to explain, their failure to consider people makes them much less desirable than the population-based metrics (unless state law dictates that this is how splitting should be measured). We think that the split pairs metric may strike an appropriate balance.
***
\(^{39}\) *Gill v. Whitford*, 585 U.S. ___ (2018). [perma.cc/45MB-B2KP]
Population-based metrics agree most of the time
Among the population-based metrics, the particular choice matters but may not make a substantial difference in assessing maps. We analyzed the scores for Congressional and state legislative district maps before and after the 2010 redistricting (doing 123 comparisons), and found that any given pair of population-based metrics was in agreement 76–90% of the time on which map split counties worse.\(^{40}\) (See appendix for more detail.)
The occasional disagreements between the metrics often occur when trying to discern the difference between similar-looking maps. In the cases where people are likely to notice or complain about locality splitting, the metrics will convey the desired information. For example, after court-ordered Congressional redistricting\(^{41}\) in Florida (2015), Virginia (2016), North Carolina (2016) and Pennsylvania (2018), all of these metrics moved in the same direction on county splitting – they improved.
The reasons for the differences between the population-based metrics are subtle, and we will explain the two main factors here.
Some metrics punish more harshly splits that divide a larger fraction of people in a locality
A notable benefit of the population-based splitting metrics is that they treat locality splits differently depending on how many people they affect. Recall that the geography-based metrics do not differentiate a 96-4 split of a locality’s people from a 70-30 split (Fig. 4). However, there is no “correct answer” for how much worse it is to split up more people, and the splitting metrics all treat this question differently. If redistricting officials want to punish splits that affect a larger fraction of the locality’s population *a lot more harshly*, they might consider using effective splits. If they would rather punish these splits just *a little more harshly*, square root entropy is the best choice. (See appendix for more detail.)
| Metric | Punishment for dividing a large fraction of a locality's people |
|---|---|
| Effective splits | most harsh |
| Split pairs | |
| Conditional entropy | |
| Square root entropy | |
| Localities split, locality intersections (*tie*) | least harsh |

**Table 1.** Metrics ordered by how harshly they punish splits that divide a large fraction of the people in a locality.
\(^{40}\) The two geography-based metrics agree with each other 87% of the time, and agree with the population-based metrics 60–79% of the time.
\(^{41}\) National Conference of State Legislatures. (2020, December 1). \textit{Redistricting Case Summaries | 2010 – Present}. [perma.cc/7PF5-JXJA]
Deciding how to aggregate scores to the statewide level
Any method for assessing an entire districting plan will require some method for aggregating the metrics for each locality of a particular type (such as each county, municipality, or COI) into a single plan score. The choice of aggregation method reflects a judgment about how to penalize splitting in high-population localities versus low-population localities. For localities split, locality intersections, and effective splits, we recommend simply adding up all the splitting scores. For conditional entropy, square root entropy, and split pairs, we recommend taking a population-weighted average of the locality scores, so that each locality impacts the statewide score in proportion to its population.
We recommend this because the first three metrics measure “splitting events,” treating each locality the same, while the logic behind the last three metrics operates on a “per person” level and only generalizes to the entire state if population weighting is used. (This is also consistent with the literature on conditional entropy and square root entropy.) The above analysis of pairwise metric agreement followed these conventions.\(^{42}\)
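The two aggregation conventions can be sketched as a single helper (our own illustration; the paper's repository may organize this differently):

```python
def aggregate_statewide(locality_scores, locality_populations=None):
    """Aggregate per-locality splitting scores into one statewide score.

    Following the recommendations in the text: omit populations to simply
    sum the scores (localities split, locality intersections, effective
    splits); pass populations to take a population-weighted average
    (conditional entropy, square root entropy, split pairs)."""
    if locality_populations is None:
        return sum(locality_scores)
    total = sum(locality_populations)
    return sum(
        score * pop
        for score, pop in zip(locality_scores, locality_populations)
    ) / total

# "Splitting event" metrics: sum across localities.
summed = aggregate_statewide([1, 2, 0])
# "Per person" metrics: weight each locality by its population.
weighted = aggregate_statewide([1.0, 0.0], [100, 300])
```

Under population weighting, a score of 1.0 in a county of 100 people and 0.0 in a county of 300 people averages to 0.25, so populous localities dominate the statewide score in proportion to their size.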
***
Recommendation
The geography-based metrics have a major shortcoming; they do not account for population. As such, we only recommend their use if state law dictates it (like in Ohio\(^{43}\)). Among the population-based metrics, we will stop short of recommending a single one as the “best” way to calculate locality splits. In general, these metrics give similar results, so the choice is not particularly important, but they do reflect different choices about how much more to penalize splits that affect a large portion of a locality’s population and splits that occur in more populous localities.
We proposed the new metric, split pairs, in an effort to provide a population-based metric that is as simple and interpretable as possible. It answers the relatively simple question: “How likely is a person to be in a different district than some randomly chosen person in her locality?” For the ease of explanation and the interpretable 0-to-1 scale, we believe this metric has promise. However, if a user wants to punish small splits, as in a map that takes tiny nibbles out of multiple localities, they may want to use a metric like square root entropy (Table 1).
---
\(^{42}\) It is worth noting that, among the types of localities that can be split by a district map, counties appear to be a special case: every person should reside in exactly one county. However, some people may not reside in a municipality, city, town, or other entity designated by the Census as an incorporated or unincorporated “place.” And, of course, people may additionally reside in zero, one, or multiple COIs, depending on the set of COIs that the user chooses to include. This means that when aggregating scores across a state for any category of non-county locality, it is likely that some people will belong to more localities than others. Additionally, there may be a tension between preserving different kinds of localities that overlap. For instance, many municipalities span multiple counties, so it is impossible to preserve both the county and the municipality. See: U.S. Census Bureau, *Census Designated Places (CDPs) for the 2020 Census – Final Criteria*, 83 Fed. Reg. 56290 (November 13, 2018); *List of U.S. municipalities in multiple counties* (2021, September 30), in Wikipedia.
\(^{43}\) Ohio Const. art XIX, pt. 2. [perma.cc/DE5L-AFR8]
Conclusion
With the August 12, 2021, release of the Census “redistricting file,” redistricting season began.\(^{44}\) As of publication time, only a few states have enacted or proposed any maps.\(^{45}\) We expect the pace of map releases to increase as states reach their deadlines to implement new maps.\(^{46}\)
Redistricters have always had many criteria to consider, with hard tradeoffs to be made.\(^{47}\) This time around, fair redistricting may be even more complex than usual. With increased public access to software enabling meaningful engagement, and with requirements in 25 states to solicit public feedback, redistricters have a lot on their plates.\(^{48}\) Clearly, legislators and commissions need quantitative tools to make sense of the decisions before them, including how to consider locality boundaries, whether pre-existing or submitted by the public. Additionally, the public benefits when tools to evaluate maps are publicly available. One such tool is the Princeton Gerrymandering Project’s Redistricting Report Card, which incorporates several metrics, including the split pairs metric introduced here.\(^{49}\)
For both the map-drawing process and the inevitable litigation over maps in the years to come, having principled metrics for evaluating map-making criteria is essential. A great deal of work has gone into creating methods for evaluating the partisan fairness of a map. Relatively less work has gone into creating or organizing methods for measuring locality splitting. We hope that this paper organizes and contextualizes that work and provides a valuable resource for redistricters and courts alike.
\(^{44}\) U.S. Census Bureau. (2021, August 12). *2020 Census Redistricting Data Files Press Kit* [Press release]. [perma.cc/UMV2-LKSE]
\(^{45}\) FiveThirtyEight. (2021, September 28). *What Redistricting Looks Like In Every State*. [perma.cc/VABE-XNMe]
\(^{46}\) National Conference of State Legislatures. (2021, March 29). *State Redistricting Deadlines*. [perma.cc/74HW-TGE3]
\(^{47}\) National Conference of State Legislatures. (2021, July 16). *Redistricting Criteria*. [perma.cc/S3KX-W87S]
\(^{48}\) Hernández, K. (2021, August 31). *DIY Redistricting Allows Public to Draw Maps in More States*. Stateline. [perma.cc/6SVH-4QCG]
\(^{49}\) Princeton Gerrymandering Project. (n.d.). *Redistricting Report Card*. [perma.cc/SR5B-BBF2]
Appendix
Formulas for population-based metrics
General definitions
For a given locality, let $D_1, D_2, \ldots, D_n$ be all the districts that have people in the locality.
For $i \in \{1, \ldots, n\}$, let $V_i$ be the number of locality residents in $D_i$.
Define $V := \sum V_i$ (i.e., the total population of the locality) and $p_i := \frac{V_i}{V}$ (i.e., the proportion of locality residents in district $i$).
Recalling our Alphabetville example from the body of the paper (Fig. 6), we would have $V_1 = 4$, $V_2 = 2$, $V_3 = 2$, $V = 8$, $p_1 = \frac{1}{2}$, $p_2 = \frac{1}{4}$, and $p_3 = \frac{1}{4}$.

Effective splits
The formula for effective splits is given as:
$$\text{Effective splits} = \frac{1}{\sum_{i=1}^{n} p_i^2} - 1.$$
Note that if a locality is split into two equally-populated parts, this formula becomes $\frac{1}{\frac{1}{4} + \frac{1}{4}} - 1 = 1$ effective split. In general, a locality that is split into $k$ equally-populated parts has $k - 1$ effective splits. Splits into unequal parts are punished more lightly.
The people of Alphabetville are split into districts that comprise $\frac{1}{2}$, $\frac{1}{4}$, and $\frac{1}{4}$ of the locality’s population (Fig. 7). This gives an effective splits score of
$$\frac{1}{\frac{1}{4} + \frac{1}{16} + \frac{1}{16}} - 1 = \frac{16}{6} - 1 = \frac{10}{6} \approx 1.67.$$
To convert the effective splits scores from all of a state’s localities into a single score for the entire districting plan, we recommend adding the scores for each locality. This is because this metric attempts to represent “splitting events,” treating all localities equally.
**Conditional entropy**
The formula for conditional entropy is given as:
$$\text{Conditional entropy} = \sum_{i=1}^{n} p_i \log_2 \left( \frac{1}{p_i} \right)$$
Perhaps it is useful to think of this metric as an “average entropy per person.” The amount of entropy that each person in district $D_i$ contributes to the average is given by the formula $\log_2 \left( \frac{1}{p_i} \right)$ and shown in Fig. 7. Notice that if a person’s locality is kept whole, then $\frac{1}{p_i} = 1$ for everyone, and the amount of entropy contributed to the average is $\log_2 1 = 0$.
In Alphabetville, persons A, B, C, and D each have $p_i = \frac{1}{2}$, while persons E, F, G, and H each have $p_i = \frac{1}{4}$. Thus, the average conditional entropy per person is $\frac{1}{8} (4 \log_2 2 + 4 \log_2 4) = 1.5$.
To convert the conditional entropy scores from all of a state’s localities into a single score for the entire districting plan, we recommend taking the population-weighted average of the scores for each locality.
**Square root entropy**
The formula for square root entropy is given as:
$$\text{Square root entropy} = \sum_{i=1}^{n} p_i \sqrt{\frac{1}{p_i}} = \sum_{i=1}^{n} \sqrt{p_i}$$
Thinking of this metric as an average per person, we note that each person in district $D_i$ contributes $\sqrt{\frac{1}{p_i}}$ to the average. Note that this changes the score for non-split localities from 0 to 1.

In the context of our example, persons A, B, C, and D each contribute $\sqrt{2}$ to the score, while persons E, F, G, and H each contribute $\sqrt{4} = 2$. This gives an average of 1.71 per person.
To convert the square root entropy scores from all of a state’s localities into a single score for the entire districting plan, we recommend taking the population-weighted average of the scores for each locality.
**Split pairs**
The formula for split pairs is given as:
$$\text{Split pairs} = 1 - \frac{\sum_{i=1}^{n} \binom{V_i}{2}}{\binom{K}{2}}$$
where $V_i$ is the number of people in the locality who live in district $D_i$, $K$ is the locality's total population, and $\binom{K}{2}$ denotes $\frac{K(K-1)}{2}$.
Observe that the fraction is the number of pairs of people in the locality that are in the same district divided by the total number of pairs of people in the locality. By subtracting this from 1, we get the probability that a random person in the locality is in a different district from a random other person.
In Alphabetville, this expression becomes:
\[ 1 - \frac{\binom{4}{2} + \binom{2}{2} + \binom{2}{2}}{\binom{8}{2}} = 1 - \frac{6+1+1}{28} = \frac{20}{28} \approx 0.714. \]
To convert the split pairs scores from all of a state’s localities into a single score for the entire districting plan, we recommend taking the population-weighted average of the scores for each locality.
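The pair-counting interpretation translates directly into code. This sketch (function name ours) reproduces the Alphabetville value of $20/28$ from the 4/2/2 split:

```python
from math import comb

def split_pairs(pops):
    """Probability that two randomly chosen residents of a locality
    live in different districts. pops[i] = residents in district D_i."""
    total = sum(pops)
    same_district_pairs = sum(comb(v, 2) for v in pops)
    return 1 - same_district_pairs / comb(total, 2)

# Alphabetville: 8 residents split 4/2/2
print(split_pairs([4, 2, 2]))  # -> 0.7142857142857143 (i.e. 20/28)
print(split_pairs([8]))        # -> 0.0 (unsplit locality)
```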
***
**More detail on the distinguishability of metrics**
To determine how different these metrics are from each other, we checked how often they agreed about whether redistricting plans scored better or worse for county splitting after 2010 redistricting. To ensure an apples-to-apples comparison, we omitted the congressional maps where the number of districts changed after the 2010 Census. We also omitted comparisons between metrics when at least one of the metrics gave the same score to both plans. This includes, for example, congressional maps with only a single congressional district. The sample size was 123 for comparisons between population-based metrics and between 85 and 108 for comparisons involving the geography-based metrics (due to several occasions when the number of splits or intersections did not change).
| | Locs. split | Loc. inters. | Eff. splits | Cond. entropy | Sqrt. ent. | Split pairs |
|------------------|-------------|--------------|-------------|---------------|------------|-------------|
| **Locs. split** | 1 | 0.87 | 0.78 | 0.62 | 0.60 | 0.68 |
| **Loc. inters.** | | 1 | 0.79 | 0.76 | 0.77 | 0.75 |
| **Eff. splits** | | | 1 | 0.85 | 0.76 | 0.85 |
| **Cond. entropy** | | | | 1 | 0.89 | 0.90 |
| **Sqrt. ent.** | | | | | 1 | 0.79 |
| **Split pairs** | | | | | | 1 |
Table 2. The frequency with which a pair of metrics agree on whether redistricting plans scored better or worse for county splitting after 2010 redistricting. Orange indicates geography-based metrics, and blue indicates population-based metrics.
Table 2 shows the frequency with which two metrics agreed on the direction of the change after redistricting. It indicates that, a substantial majority of the time, the population-based metrics agree with one another on which of two maps is better, but that they occasionally disagree. There is much more disagreement between the population-based metrics and simply counting the number of splits. This occurs in part because counting splits is a relatively crude metric that ignores population and in part because of the different choices for how to aggregate locality scores into a statewide score. (This is why effective splits is the population-based metric that is most similar to the geography-based metrics.)
***
**More detail on punishing splits that divide a larger fraction of people in a locality**
To see how each metric answers the question, “How much worse is it to separate more people in a locality?” we compared the penalty for a 90/10 split to the penalty for a 50/50 split (Table 3). The ratios are in the rightmost column of the chart below. Higher ratios indicate the degree to which a metric punishes 50/50 splits more harshly than 90/10 splits. Notice the ratios of 1.0x for the geography-based metrics, which do not consider population.
| | 90/10 split penalty | 50/50 split penalty | Ratio |
|----------------------|---------------------|---------------------|-------|
| **Effective splits** | 0.22 | 1 | 4.6x |
| **Split pairs** | 0.18 | 0.50 | 2.8x |
| **Conditional entropy** | 0.47 | 1 | 2.3x |
| **Square root entropy** | 0.26 | 0.41 | 1.6x |
| **Localities split** | 1 | 1 | 1x |
| **Locality intersections** | 1 | 1 | 1x |
*Table 3.* The penalties each metric imposes on a 90/10 split or a 50/50 split, as well as the ratio of those penalties. Penalties are calculated by subtracting the score for an unsplit locality from the score for a locality that is split 90/10 or 50/50. Orange indicates geography-based metrics, and blue indicates population-based metrics.
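The population-based rows of Table 3 can be reproduced from the formulas above. In the large-locality limit the split-pairs fraction for shares $(a, 1-a)$ reduces to $2a(1-a)$, and we take effective splits to be $(\sum_i p_i^2)^{-1} - 1$ (the form defined earlier in the report; an assumption here, but consistent with the 4.6x ratio shown). A sketch:

```python
import math

def effective_splits(shares):
    # Assumed form: inverse of the Herfindahl sum, minus one
    return 1 / sum(p * p for p in shares) - 1

def split_pairs_frac(shares):
    # Large-locality limit of the split-pairs score: 1 - sum(p_i^2)
    return 1 - sum(p * p for p in shares)

def cond_entropy(shares):
    return sum(p * math.log2(1 / p) for p in shares)

def sqrt_entropy(shares):
    return sum(math.sqrt(p) for p in shares)

# penalty = score of the split locality minus score of an unsplit one
for name, fn, unsplit in [("effective splits", effective_splits, 0.0),
                          ("split pairs", split_pairs_frac, 0.0),
                          ("cond. entropy", cond_entropy, 0.0),
                          ("sqrt entropy", sqrt_entropy, 1.0)]:
    p90 = fn([0.9, 0.1]) - unsplit
    p50 = fn([0.5, 0.5]) - unsplit
    print(f"{name}: 90/10 penalty = {p90:.2f}, 50/50 penalty = {p50:.2f}")
```

Rounded to two decimals, the printed penalties match the corresponding entries in Table 3.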
**Accounting for populous localities and small districts**
When a locality is much more populous than the required district population (as in Los Angeles County; see Fig. 8), there is no way to avoid splitting the locality.
---
50 Though there are several other splitting scenarios that may be worthy of future investigation as well. Duchin, M. (2018, February). *Outlier analysis for Pennsylvania congressional redistricting*. [perma.cc/A3AP-L84Z]
In these cases, some have proposed assessing maps on the extent to which districts are kept within a single locality. (In other words, swapping the roles of “locality” and “district” in all the splitting metrics.) In her report to the Pennsylvania governor, Duchin (2018) calculated county-splitting scores both ways and took the average. This symmetric method has the advantage of explicitly considering the treatment of very populous localities. However, some of the reasons for preserving localities (e.g. election administration, voter engagement) don’t apply to preserving districts. In practice, we found that it is uncommon for the two methods to disagree about whether a redistricting plan scores better than the state’s previous map (see Table 4).
---
51 Duchin, M. (2018, February). *Outlier analysis for Pennsylvania congressional redistricting*. [perma.cc/A3AP-L84Z]
| Metric | Agreement |
|------------------------|-----------|
| Locality intersections | 100% |
| Effective splits | 98% |
| Conditional entropy | 97% |
| Square root entropy | 93% |
| Split pairs | 91% |
| Localities split | 91% |
Table 4. The likelihood that each metric agrees with the symmetric version of itself on whether a map scored better or worse for county splitting after the 2010 redistricting. Orange indicates geography-based metrics, and blue indicates population-based metrics. Note that the “intersections” metric is already symmetric by definition, as it does not differentiate between localities and districts.
cdt.org
cdt.org/contact
Center for Democracy & Technology
1401 K Street NW, Suite 200
Washington, D.C. 20005
202-637-9800
@CenDemTech
EFFECTS OF ANGIOTENSIN II ON PLASMA CONCENTRATIONS OF VASOPRESSIN AND GLUCOCORTICOIDS IN THE CONSCIOUS RABBIT
by
Dallas A. Carter
A THESIS
Presented to the Department of Physiology and the Oregon Health Sciences University School of Medicine in partial fulfillment of the requirements for the degree of Master of Science
May, 1987
APPROVED:
(Dr. Virginia Brooks, Professor in Charge of Thesis)
(Dr. John Resko, Chairman, Graduate Council)
# TABLE OF CONTENTS
| Section | Page |
|------------------------------------------------------------------------|------|
| LIST OF TABLES | v |
| LIST OF FIGURES | vi |
| ACKNOWLEDGEMENTS | vii |
| ABSTRACT | viii |
| INTRODUCTION | 1 |
| GENERAL BACKGROUND | 2 |
| EXPERIMENTAL APPROACHES | 4 |
| AII ACTION ON AVP RELEASE | 5 |
| Site of Action of AII in AVP Release | 6 |
| Physiologic Role of AII in AVP Release | 8 |
| AII ACTION ON ACTH RELEASE | 10 |
| Evidence for Direct Pituitary Effects | 10 |
| Role of AVP | 11 |
| Role of CRF | 13 |
| Evidence for Effects on the Central Nervous System | 14 |
| Adrenal Actions of AII on Corticosteroid Release | 16 |
| Physiological Significance | 18 |
| Summary | 21 |
| RABBIT STUDIES | 21 |
| METHODS AND MATERIALS | 22 |
| CATHETERIZATION | 22 |
| PROTOCOLS | 24 |
| 1. AII Infusion Experiments | 24 |
| 2. AII + Nitroprusside Infusion Experiments | 25 |
| 3. Nitroprusside Infusion Experiment | 25 |
| Section | Page |
|------------------------------------------------------------------------|------|
| 4. Hemorrhage Experiments | 26 |
| 5. Sodium Deprivation | 27 |
| REUSE OF ANIMALS | 28 |
| RANDOMIZATION | 28 |
| ASSAYS | 28 |
| ANALYSIS OF DATA | 29 |
| RESULTS | 31 |
| ANGIOTENSIN II INFUSION EXPERIMENTS | 31 |
| 1. Effects on Plasma AII Concentrations | 31 |
| 2. Effects on Blood Pressure | 32 |
| 3. Effects on Heart Rate | 32 |
| 4. Effects on Plasma Corticosteroid Concentration | 32 |
| 5. Effects on Plasma AVP Concentration | 33 |
| NITROPRUSSIDE INFUSION EXPERIMENT | 33 |
| HEMORRHAGE EXPERIMENTS | 34 |
| SODIUM DEPRIVATION EXPERIMENTS | 34 |
| DISCUSSION | 36 |
| SUMMARY AND CONCLUSIONS | 42 |
| REFERENCES | 43 |
| APPENDIX A | 69 |
| APPENDIX B | 72 |
LIST OF TABLES
1. Effect of AII Infusion on Plasma AII Concentrations
2. Effect of AII Infusion on Blood Pressure
3. Effects of AII & NP Infusion on Blood Pressure
4. Effect of AII Infusion on Heart Rate
5. Effect of AII & NP Infusion on Heart Rate
6. Effect of AII Infusion on Plasma Glucocorticoid Concentration
7. Effect of AII & NP Infusion on Plasma Glucocorticosteroid Concentration
8. Effects of AII Infusion on Plasma Vasopressin Concentration
9. Effects of AII & NP Infusion on Plasma Vasopressin Concentration
10. Effects of Hemorrhage on BP, HR, Plasma AII, Corticosteroid and AVP Concentration
11. Plasma Osmolality During AII Infusions
LIST OF FIGURES
FIGURE 1. Experimental Set Up
2. AII Infusion Protocol
3. The Change in Mean Arterial Pressure From Control During Infusion of Two Doses of Angiotensin II Either Alone or In Combination with Nitroprusside
4. Effect of 20 ng·kg⁻¹·min⁻¹ Angiotensin II Infusion Either Alone or In Combination with 3 µg·kg⁻¹·min⁻¹ Nitroprusside on Plasma Corticosteroid Concentration
5. Effect of 40 ng·kg⁻¹·min⁻¹ Angiotensin II Infusion Either Alone or In Combination with Nitroprusside on Plasma Corticosteroid Concentration
6. Effect of 40 ng·kg⁻¹·min⁻¹ Angiotensin II plus 3-6 µg·kg⁻¹·min⁻¹ Nitroprusside Infusion on Plasma AVP Concentration
ACKNOWLEDGEMENTS
I am grateful to Dr. Virginia Brooks for her continual guidance, support, and patience throughout this project. It was an honor and a pleasure to work closely with her. I also wish to thank Ms. Pat McDaniel for her work with the various assays used in this project, plus her invaluable assistance in the preparation of the tables, charts, and graphs of this thesis. Without her technical assistance, I doubt that this project would have succeeded. I am indebted to Ms. Jackie Niemi for the work she did in preparing my manuscripts. Her help was invaluable. Further appreciation must be given to Dr. Lanny Keil, who most graciously agreed to assay plasma vasopressin levels. Finally, I must express my appreciation for the helpful suggestions, support, and patient understanding of my wife, Deborah Reid.
This research was made possible by funding from the Steinburg Fellowship and by an award from the Tarter Trust.
ABSTRACT
Angiotensin II (AII) can act in rats, dogs, and other animals to increase plasma levels of vasopressin (AVP) and adrenocorticotropic hormone (ACTH), but so far no studies of these effects have been completed using rabbits. Rabbits may serve as excellent subjects for further research in this area. The present experiments were designed to begin development of a conscious rabbit model by determining the responses of plasma AVP and glucocorticoids to increases in plasma AII. Plasma glucocorticoid concentrations were used as an index of ACTH release from the pituitary. Experiments were also conducted to determine whether the responses of AVP and ACTH levels were increased when the pressor effect of AII was negated. Catheters were implanted into the central ear artery and marginal ear vein, and angiotensin II was infused intravenously at rates of 10, 20, or 40 ng·kg$^{-1}$·min$^{-1}$ for one hour (each dose was given on a different day). Some rabbits receiving 20 or 40 ng·kg$^{-1}$·min$^{-1}$ infusions of AII also received 3-6 μg·kg$^{-1}$·min$^{-1}$ of nitroprusside to counteract the pressor effect of AII. Only infusions of 40 ng·kg$^{-1}$·min$^{-1}$ of AII combined with nitroprusside caused significant rises in plasma AVP or glucocorticoid concentrations. AVP levels rose from 2.6 ± 0.7 pg/ml to 4.4 ± 1.3 pg/ml (p<0.05, n=5) and glucocorticoid levels from 44 ± 8 ng/ml to 64 ± 5 ng/ml (p<0.005, n=5) after 30 minutes of AII + NP infusion. The rise in glucocorticoids was sustained for 60 minutes, but the AVP increase was not. Further studies were done to estimate the physiologic range of plasma AII concentrations in the rabbit. A 30% hemorrhage over 30 minutes decreased mean arterial pressure from 73 ± 4 mm Hg to 63 ± 2 mm Hg (p<0.05, n=3) and caused a large increase in plasma AVP levels, but the plasma concentration of AII was not significantly affected. Ten days of a low sodium diet with diuretic injections did cause a mild rise in rabbit plasma AII levels to 67 ± 31 pg/ml, but this value was lower than effective plasma AII levels for AVP or corticosteroid release. These results indicate that AII is able to stimulate AVP and probably ACTH release in the rabbit. They also support the hypothesis that the pressor effect of AII counteracts its stimulatory effects on AVP and ACTH release. This study did not establish that the plasma level of AII found effective for promoting the release of AVP and ACTH is within the physiologic range of the rabbit. The methods developed in this study and the findings mentioned will be important in the design and execution of further experiments in this area using the rabbit.
INTRODUCTION
A growing body of evidence indicates that angiotensin II (AII) stimulates the release of both arginine vasopressin (AVP) and adrenocorticotropic hormone (ACTH) from the mammalian pituitary (for review, see Reid, 1984). The mechanisms, sites of action, and physiologic significance of these effects are the subjects of a considerable amount of ongoing research. Most studies in this area have used dogs, rats, and humans, with whole animal studies favoring dogs and most *in vitro* work done in rats. Despite these efforts, it is still unclear how and where AII has these effects. Also unclear is the importance of these effects in mammalian homeostasis and the relevance of these findings to human pathophysiology.
Until now, rabbits have been neglected as subjects for experiments in this area. The effects of AII on AVP and ACTH release in the rabbit have never been published. The rabbit presents several advantages for more advanced studies in this area. Arterial and venous catheterization is fairly simple, and it is possible to infuse substances into the carotid and vertebral arteries (Dickinson & Yu, 1967; Undesser et al., 1985b) as well as the cerebral ventricles (Rosendorff et al., 1970). Sinoaortic denervation, vagotomy, hypophysectomy, and brainstem and hypothalamic lesioning are more advanced procedures that are useful in the study of this area and can be performed easily in the rabbit (Guo et al., 1982; Undesser et al., 1985a). Rabbits are also inexpensive enough to be suitable for *in vitro* studies of pituitary and other tissues. This advantage permits
direct comparison of results of *in vivo* and *in vitro* studies without concern for interspecies differences.
Although rabbits are promising subjects for advanced studies in this area, the groundwork must first be laid by more basic studies to determine whether the effects of AII in rabbits are similar to those observed in other animals. The purpose of the present study was to examine the effects of elevated plasma AII concentrations on ACTH or AVP release in conscious rabbits. It also investigated the physiologic significance of these effects by estimating the physiologic range of plasma AII levels in rabbits and comparing that range to the AII concentrations produced by exogenous AII administration.
**GENERAL BACKGROUND**
Angiotensin II is well known as the active product of the renin-angiotensin system (Ganong, 1981; Reid, 1984). Its formation is regulated by the release of a proteolytic enzyme, renin, from the renal juxtaglomerular cells. Renin cleaves the decapeptide, angiotensin I, from its circulating precursor protein globulin, angiotensinogen. Circulating angiotensin I is then rapidly transformed into angiotensin II by an angiotensin converting enzyme in endothelial cells. This active octapeptide has a very short half-life in plasma (less than one minute) and is broken down rapidly into smaller peptide fragments.
The AII-regulating enzyme, renin, is released from the kidney in response to a number of conditions. Hypotension and sodium deprivation cause intrarenal mechanisms to stimulate its release (Ganong, 1981). In addition, extrarenal mechanisms including increases in renal sympathetic nerve activity and circulating catecholamines may also stimulate renin release. As a result, plasma AII levels increase in
response to stress in general as well as specifically to stresses that decrease plasma volume or kidney perfusion pressure.
Angiotensin II acts in several ways to increase or maintain blood pressure and enhance positive fluid balance (Ganong, 1981). Its action on the adrenal cortex to promote aldosterone release increases sodium retention. Its peripheral vasoconstrictor action directly increases systolic and diastolic arterial pressure.
Angiotensin II also acts within the central nervous system to increase blood volume and pressure (Reid, 1984; Phillips, 1987). It increases sympathetic output, causing additional vasoconstriction. It also inhibits parasympathetic activity to the heart, increasing cardiac output. Another well established central effect of AII is its dipsogenic effect, which contributes to positive fluid balance. Finally, the stimulation of AVP and ACTH release by AII is generally regarded as a central effect.
Both AVP and ACTH act to maintain blood volume and pressure. AVP is a hormone released primarily from the posterior pituitary, which strongly promotes renal fluid retention and also has powerful pressor effects (Ganong, 1981). ACTH, an anterior pituitary hormone, increases blood pressure when chronically administered (Whitworth et al., 1981). Its action on the adrenal cortex to increase aldosterone secretion also increases sodium retention (Ganong, 1981). Furthermore, its stimulation of adrenal cortical production and release of corticosteroids is important in the restoration of blood volume after acute hemorrhage (Gann & Pirkle, 1975).
EXPERIMENTAL APPROACHES
Generally, three basic approaches have been used to study the effects of AII on AVP and ACTH in whole animals. (1) Exogenous AII has been either injected by bolus or infused into animals to raise plasma and/or tissue concentrations of AII. (2) Animals have also been subjected to physiologic stresses that cause increased endogenous AII production. (3) Finally, animals with elevated AII levels have been treated with competitive AII antagonists such as saralasin or given inhibitors of converting enzyme to prevent the production of AII. The second and third approaches are often used in studies attempting to assess the physiologic importance of AII's effects in intact animals. The first approach can also evaluate the physiologic significance of the effects of AII by comparing the plasma concentrations of AII during infusions with those occurring naturally in the animal. This requires a knowledge of the physiologic range of plasma AII levels.
Physiologic range is the range of responses of plasma or tissue AII levels to various physiologic states producing extremes in the production of circulating AII. If exogenous infusions of AII have effects only when they produce plasma levels outside the physiologic range, the physiologic importance of these effects is questionable. Since interspecies differences may occur in physiologic AII ranges, the physiologic range for each particular species must be estimated before results of AII infusion experiments in the species can be interpreted accurately.
Increases in plasma levels of AVP and ACTH are generally assumed to be indicative of increases in pituitary release of these hormones. Plasma levels of AVP and ACTH are generally assayed by radioimmunoassay, but levels of plasma glucocorticoids have also been
used as indicators of ACTH release. This method is less direct than radioimmunoassay but may be a more sensitive indicator of the normally pulsatile ACTH output (Wood et al., 1982). However, the method is complicated during AII infusions by the observation that AII can act directly on the adrenals to increase glucocorticoid output (Braverman & Davis, 1973; Reid, 1984).
**AII Action on AVP Release**
Intravenous infusions of AII producing plasma levels within the physiologic range have been shown to increase plasma levels of AVP in some studies of dogs, humans, and rats (Ramsay et al., 1978; Uhlich et al., 1975; Klingbeil et al., 1986). Other studies have only found this increase with supraphysiologic plasma AII levels (Reid et al., 1982; Usberti et al., 1985; Knepel & Meyer, 1980; Padfield & Morton, 1977). A few studies demonstrate no AVP reaction to even high doses of AII (Cowley et al., 1981; Hammer et al., 1980).
One possible reason for these discrepancies is suggested by evidence that AII produces larger increases in AVP in animals with elevated plasma osmolality. It is well known that increased plasma osmolality is a powerful stimulant of AVP release (Ganong, 1981). The possibility that AII may modulate this reaction was raised by Shimizu et al. (1973) when they found that intracarotid infusions of AII increased the rise in plasma AVP produced by intravenous hypertonic saline infusions in anesthetized dogs. A more recent study found that AII infusion increased AVP responsiveness to intracarotid administration of hypertonic saline in conscious dogs (Wade et al., 1986). *In vitro* studies using organ culture rat hypothalamo-neurohypophyseal systems also showed that AII plays an important role in modulating AVP release in response to increasing culture medium osmolality (Ishikawa et al., 1980; Sladek et al., 1982). Results such as these have led some researchers to suggest that the variability in the results of intravenous AII infusion experiments may have been related to the fluid status of the animals at the time of experimentation. The lack of information on plasma osmolality in these studies makes these suggestions purely speculative. Nevertheless, it is apparent that further studies of AII's action on AVP must carefully monitor or control the osmolality of the culture medium or the blood.
**Site of Action of AII in AVP Release**
Many studies have been performed to determine the site of action of AII to increase AVP release. Intracerebroventricular infusions of AII have been found to consistently produce rises in plasma AVP (Ganong et al., 1982; Keil et al., 1975; Mouw et al., 1971; Fisher & Brown, 1984). This suggests that circulating AII may act directly on the brain to modulate AVP release. Since AII does not cross the blood brain barrier, the circumventricular organs have been considered likely candidates as sites of action. These areas lack a blood brain barrier and are known to possess specific AII receptors (Van Houten et al., 1980; 1983). Furthermore, there is evidence that intraventricular and blood-borne AII may both act on these receptors (Van Houten et al., 1983).
The circumventricular organs are located at sites in both the brainstem and forebrain (Van Houten et al., 1980). Hypothalamic nuclei are especially prominent candidates for sites of action, considering
the importance of hypothalamic factors in the release of anterior pituitary hormones and the extension of hypothalamic neurons into the posterior pituitary. Seeking to differentiate a brainstem from a forebrain site of action, Reid et al. (1982) used intravertebral and intracarotid infusions of AII in dogs. They found that plasma AVP is increased in response to supraphysiological levels of intracarotid AII but is not increased during intravertebral infusions. Because carotid blood does not perfuse the brainstem (Reid et al., 1982), these data support the hypothesis that circulating AII acts on forebrain sites to effect a release of AVP.
A leading candidate for a site of action is the subfornical organ. The subfornical organ is a circumventricular organ that is closely connected with hypothalamic nuclei which produce AVP (Phillips, 1987). Lesions of this organ attenuate the AVP response to AII infusion (Thrasher & Keil, 1986). On the other hand, there is also some evidence favoring the neurohypophysis and the organum vasculosum of the lamina terminalis as sites of action (Reid, 1984; Phillips, 1987). For example, AII stimulated AVP release from hypothalamo-neurohypophysial explants; these explants do not contain the subfornical organ (Sladek et al., 1982). Thus, the exact site or mechanism by which AII stimulates AVP release has not been definitely established.
Rabbits would be especially good animals for further studies of AII's site of action in AVP release. Their size allows for chronic catheterization with the capacity for multiple blood samples over time. Their size is also advantageous for brain lesion studies, and they are much less expensive than the larger animals used for central nervous system ablation studies.
Physiologic Role of AII in AVP Release
There has been much debate concerning the possibility that endogenous AII may play a role in the regulation of AVP release. Some previously mentioned studies support this possibility. Exogenous AII infusions producing plasma concentrations within the physiologic range have been demonstrated to result in increased AVP concentrations (Ramsay et al., 1978; Uhlich et al., 1975; Klingbeil et al., 1986). On the other hand, other studies required supraphysiological levels to induce detectable AVP release (Padfield & Morton, 1977; Knepel & Meyer, 1980; Reid et al., 1982; Usberti et al., 1985).
It is important to emphasize that studies evaluating the physiologic role of AII in the control of AVP release may underestimate the potency of endogenous AII. This is because AII may be acting on AVP release through multiple and sometimes opposing mechanisms. Like renin, AVP release is inversely related to blood pressure, and elevations in arterial pressure suppress vasopressin secretion (Reid, 1984). Since infused AII has a potent pressor effect, it may be acting to inhibit AVP release at the same time it is exerting stimulatory effects. This possibility was investigated by Brooks et al. (1986) with experiments in conscious dogs. These investigators used nitroprusside infusions concurrently with AII infusions to counteract the pressor effects of AII. Physiologic plasma levels of AII produced larger increases in plasma AVP concentrations when the pressor effect of AII was negated.
Another aspect of this study (Brooks et al., 1986) used sodium depletion to chronically elevate endogenous AII levels in dogs. Despite these high plasma AII levels, AVP levels were not different
than sodium replete controls. The infusion of the AII blocker, saralasin, produced a sharp fall in arterial pressure in these animals, emphasizing the importance of endogenous AII in the maintenance of blood pressure in these conditions. Remarkably, this fall in blood pressure had no effect on plasma AVP levels which would be expected to increase. The most likely explanation for this finding was that saralasin also blocked a tonic stimulation of AVP release by AII, counteracting the stimulatory effect of hypotension. If blood pressure was reduced by nitroprusside to levels similar to those produced by saralasin administration, significant increases in plasma AVP levels occurred. These findings support the importance of endogenous AII in the maintenance of AVP levels in the sodium deficient state.
The importance of AII in the vasopressin response to hemorrhage has been studied. Hemorrhage is a potent stimulator of AVP release in dogs, and also can stimulate the renin-angiotensin system (Claybaugh & Share, 1972). Mild hemorrhage can increase plasma AVP levels without affecting plasma renin activity (Claybaugh & Share, 1973). During more severe hemorrhages, blockade of the increase in plasma AII by a converting enzyme inhibitor does not affect the AVP response (Morton et al., 1977). Another study used renal blood vessel occlusion to prevent rises in plasma renin activity during hemorrhage (Claybaugh & Share, 1972). The suppression of renin activity had no effect on the response of AVP to hemorrhage. However, this study used anesthetized dogs and circulating AII does not increase AVP release during anesthesia (Claybaugh et al., 1972; Share, 1979).
Another potent stimulator of AVP release is hypotension (Lee et al., 1986). The increase in vasopressin during nitroprusside induced hypotension in conscious dogs is not affected by the intravenous
administration of saralasin (Brooks et al., 1986). Collectively, these studies suggest that AII is not necessary for the AVP response to either hypotension or hypovolemia.
In summary, there is some evidence that AII has a physiologic role in the regulation of AVP release in mammals, especially when plasma AII levels are high. It is not clear what site of action is important for this role. Further studies using rabbits may help to locate this site of action and help to define AII's role in AVP release.
**AII Action on ACTH Release**
Intravenous, intracarotid, and intracerebroventricular infusions of AII can promote the release of ACTH and/or 11-hydroxycorticosteroids into the circulation of conscious dogs (Ramsay et al., 1978; Maran & Yates, 1977; Reid et al., 1982; Keller-Wood et al., 1986; Raff et al., 1985; Klingbeil et al., 1986), rats (Daniels-Severs et al., 1971; Rivier & Vale, 1983; Ganong et al., 1982; Spinedi & Negro-Vilar, 1984) and humans (Rayyis, 1971). The mechanism and site of action for this effect have been investigated in numerous studies. Attention has been focused on the interrelationships between AII and other, more potent corticotropin releasing compounds such as AVP and corticotropin releasing factor (CRF).
**Evidence for Direct Pituitary Effects**
One possible site of AII action is the anterior pituitary. Maran and Yates (1977) have shown that intrapituitary infusions of AII into conscious dogs stimulate ACTH release. Furthermore, these effects were induced by AII infusion rates which were ineffective when given intravenously.
It also is well established that angiotensin II can act on cultured anterior pituitary cells to promote the release of ACTH (Sobel & Vagnucci, 1982; Sobel, 1983; Gaillard et al., 1981; Aguilera et al., 1983). However, it is much less potent than a number of other substances, most notably CRF and AVP (Hashimoto et al., 1979). This lack of potency requires the use of high concentrations of AII to demonstrate any effect. Only two studies have demonstrated AII effects at concentrations ($10^{-9}$ M) low enough to approach the upper limits of endogenous plasma AII levels in rats (Gaillard et al., 1981; Spinedi & Negro-Vilar, 1984). Some studies have found that AII at concentrations ineffective for ACTH release can potentiate the effect of CRF on this release (Vale et al., 1983; Schoenenberg et al., 1987). Still, these potentiating concentrations were at least ten times higher than recorded peak plasma concentrations of AII in the rat.
Thus, AII may directly stimulate pituitary ACTH release. However, the high doses of AII required suggest that AII may act primarily by another mechanism, such as through the release of AVP or CRF.
**Role of AVP**
The observation that AVP and ACTH release are affected similarly by AII infusion suggests the possibility that the release of ACTH may be dependent on increased pituitary levels of AVP. Indeed, AVP is known to have significant corticotropin releasing activity, much more than AII itself (Hashimoto et al., 1979). One recent study in conscious dogs addressed this possibility. Klingbeil et al. (1986) found that AII produces larger increases in AVP release in water deprived dogs compared to water replete dogs. This increased response of AVP was not accompanied by an increased ACTH response. Furthermore, when water replete dogs were infused with a vasopressin receptor (V1) antagonist, the plasma ACTH response to AII was unchanged. Finally, they simultaneously infused AII and AVP into dogs and found no synergy in the two hormones' action on ACTH. These researchers therefore concluded that AVP was not necessary for AII's action on ACTH release.
A study in freely moving Sprague-Dawley rats also showed that the vasopressin antagonist dPTyr(Me)AVP did not affect the action of AII on ACTH (Rivier & Vale, 1983). This lack of importance of AVP was also supported by a study comparing normal and homozygous Brattleboro rats (Spinedi & Negro-Vilar, 1984). Brattleboro rats have extremely low levels of plasma AVP, an inherited diabetes insipidus. Despite this deficiency, their ACTH response to bolus intravenous injections of AII was not different than the response of normal rats. This suggests that AVP release is not an important factor in the stimulation of ACTH release by AII. These researchers also removed and cultured the pituitaries from both strains of rats. Both responded with ACTH release equally well to AII introduced into their media. It was also found that the Brattleboro pituitary cells were far more sensitive to CRF than the normals. This observation raises the possibility that Brattleboro rat cells could be compensating for a deficiency in vasopressin by increasing the sensitivity of the corticotrophs to the other major corticotropin releaser, CRF. This compensation could mask the loss of AVP as one mediator of the ACTH response to AII. However, this possibility is unsupported and entirely speculative. Thus, the weight of the evidence argues against a significant effect of AVP as a mediator between AII and ACTH release.
**Role of CRF**
Evidence is strong that CRF plays an important role in the action of AII on ACTH. CRF may play two roles: (1) AII may stimulate CRF release, allowing CRF to then exert its own powerful corticotropin releasing activity; (2) CRF may also be a necessary cofactor for a direct AII action on the pituitary.
Studies of rats anesthetized by a method preventing endogenous CRF release showed that subsequent intravenous or intraperitoneal AII infusions were ineffective in raising plasma ACTH levels (Rivier & Vale, 1983; Spinedi & Negro-Vilar, 1984). Simultaneous infusion of CRF with AII in similarly anesthetized rats produced a modest synergism in the release of ACTH (Rivier & Vale, 1983). Another study in freely moving unanesthetized rats showed that anti-CRF serum blocked AII stimulation of ACTH release (Rivier & Vale, 1983).
It is important to consider the previously mentioned study comparing normal and Brattleboro rats (Spinedi & Negro-Vilar, 1984). Brattleboro rats were found to have much larger ACTH release responses to intravenous CRF infusions than normals. This increased sensitivity to CRF was also found in isolated cultured Brattleboro pituitaries. If AII stimulated ACTH release by stimulating CRF release, Brattleboro rats would be expected to have increased sensitivity to AII. However, when AII was administered both in vivo and in vitro, no difference in the ACTH responses of normal and Brattleboro rats was observed. These data therefore argue against an AII effect on CRF alone and suggest that AII may act directly on the pituitary with CRF as a required cofactor. However, it is not known whether CRF release in the Brattleboro rat is blunted, which could counteract increases in pituitary sensitivity.
The possibility that elevated CRF levels are necessary for AII's effects was investigated in conscious dogs by Keller-Wood et al. (1986). Intravenous AII infusions producing physiologic plasma concentrations were found to increase corticosteroid but not ACTH levels in plasma. Coinfusion of CRF with AII was shown to produce a dose-response relationship between AII infusion rate and ACTH or corticosteroid plasma levels. These experiments confirmed that AII and CRF can interact to stimulate ACTH release. They did not, however, define whether CRF was acting as a cofactor or an intermediate for the effect of AII. The possibility remains that AII could both increase the release of CRF and then interact with CRF at the pituitary to increase ACTH output.
Further studies must be done to clearly define the role of CRF in AII stimulation of ACTH release. One possible study is to block endogenous CRF release in an animal and then infuse doses of exogenous CRF to maintain constant plasma CRF levels. ACTH release in response to AII infusions could then be determined. If the ACTH response to AII were blocked, an active role of CRF would be indicated. Findings of continued AII effectiveness would indicate that CRF was a permissive agent for AII's mechanism of action. The methodological difficulty in this experiment is in blocking endogenous CRF without altering the effects of exogenous CRF. This could be done with anesthetics or by surgical ablation of the median eminence. The rabbit would be an excellent subject for studies of this type.
**Evidence for Effects on the Central Nervous System**
Lending support to the possibility that AII acts by increasing CRF release, studies have indicated that central nervous system receptors are important sites of action for AII. AII has been found to be effective in raising plasma 11-hydroxycorticoid levels when infused into the cerebral ventricles of dogs (Maran & Yates, 1977; Brooks & Malvin, 1979) and rats (Daniels-Severs et al., 1971; Ganong et al., 1982). Hypophysectomy blocks this response, indicating that ACTH release may be involved. Furthermore, when saralasin (an AII blocker) is infused into the cerebral ventricles of sodium deprived dogs, plasma cortisol levels decrease (Brooks & Malvin, 1979). This suggests that endogenous AII acts on receptors in the central nervous system to increase ACTH release through the mediating effect of CRF.
Possible sites of action include the organum vasculosum of the lamina terminalis and the subfornical organ. Both are circumventricular organs bearing AII receptors, and both are either within (organum vasculosum of the lamina terminalis) or closely related (subfornical organ) to the hypothalamus. To evaluate the subfornical organ, one group surgically ablated this area in dogs (Thrasher & Keil, 1986). They found that the lesions attenuated ACTH secretion in response to intravenously administered AII.
Another possible central site of AII action on CRF release is the median eminence. It too is a circumventricular organ with AII receptors, and it is well known as the site of hypothalamic CRF release into the hypophyseal portal system (Ganong, 1981). Gann (1979) studied the importance of the median eminence in AII stimulation of ACTH release by comparing two groups of anesthetized dogs. One group underwent complete brain removal sparing the areas of the median eminence and pituitary. The other group also underwent brain removal, but this time with removal of the median eminence and posterior pituitary. The group with intact median eminences and pituitaries had much greater cortisol responses to intravenous AII infusions. Those animals with only the anterior pituitary spared had responses similar to hypophysectomized dogs. It was concluded that the median eminence was necessary for the effect of AII on ACTH.
Since the median eminence is the site of CRF secretion into the portal system, these findings agree with those that found CRF important for AII's actions on ACTH. It remains uncertain whether AII acts directly on the median eminence, on the other circumventricular organs mentioned, or both.
**Adrenal Actions of AII on Corticosteroid Release**
Some studies have demonstrated that AII infusion can increase corticosteroid levels when ACTH levels are suppressed. Hypophysectomized rats with low basal corticosterone levels experienced an 80% fall in these levels when infused with an AII blocker (saralasin) (Davis & Freeman, 1976). Rabbits treated with dexamethasone to suppress ACTH release had low rates of corticosterone release. In these rabbits, high doses of AII (100 ng·kg\(^{-1}\)·min\(^{-1}\)) had a mild but significant stimulatory effect on corticosterone output (Braverman & Davis, 1973). Anesthetized, nephrectomized, dexamethasone treated dogs were also shown to increase cortisol output in response to AII (Bravo et al., 1975). These data suggest that AII may directly stimulate adrenal corticosteroid release.
There is also evidence both in vitro and in vivo that AII may act on the adrenal cortex synergistically with ACTH to stimulate corticosteroid release. This interaction was studied in conscious dogs pretreated with dexamethasone, which abolished the ACTH and cortisol response to AII (Brooks et al., 1984). Infusions of ACTH (0.3 ng·kg$^{-1}$·min$^{-1}$) to increase plasma levels to those noted in control animals failed to reestablish the cortisol response to AII infusion. However, larger doses of ACTH (0.4 ng·kg$^{-1}$·min$^{-1}$) permitted a dose response effect of AII on corticosteroid levels. This study indicates that ACTH must be increased above resting levels for AII to produce a significant increase in glucocorticoids. It also suggests that infused AII cannot act on the adrenal alone but must also act to increase ACTH release to exert its effects on corticosteroid release.
AII's effect on ACTH release may be a more potent stimulator of corticosterone release than its direct adrenal effect. When dogs are hypophysectomized (Maran & Yates, 1977) or treated with dexamethasone (Brooks et al., 1984), the cortisol response to AII infusion is abolished. Reid et al. (1982) investigated the importance of central versus peripheral effects in conscious dogs. Intracarotid AII infusions were found to stimulate corticosteroid release at far lower doses than intravenously administered AII. However, another study using sodium deprived dogs demonstrated decreases in corticosteroids when an AII blocker (saralasin) was infused intravenously but no decreases when intracarotid or intravertebral saralasin was infused (Brooks & Reid, 1983). This group then concluded that AII's primary effects were peripheral, not central.
In summary, evidence is strong that ACTH is involved in AII induced corticosteroid release. The relative importance of AII's stimulatory effect on ACTH release and its interaction with ACTH at the adrenal cortex to release corticosteroids is presently not well established.
**Physiologic Significance**
Although evidence is strong that AII can cause a release of ACTH from the pituitary, it is less well established that this effect is physiologically relevant. Some studies of intravenous or intracarotid AII infusions in the dog have required supraphysiologic plasma concentrations to induce ACTH or corticosteroid release (Reid et al., 1982). The studies of intracerebral ventricular infusions also raise AII levels in the cerebrospinal fluid far above levels of endogenous AII recorded in animals (Reid, 1977).
As with AVP release, the effect of blood pressure on ACTH complicates the interpretation of AII infusion experiments. ACTH release is inversely related to blood pressure (Gann, 1981; Brooks & Reid, 1986), and the pressor effect of AII may inhibit ACTH release. This inhibition could then counteract more direct stimulation of ACTH release by AII. Brooks & Reid (1986) studied this possibility. Conscious dogs were given infusions of AII simultaneously with sufficient nitroprusside (NP) to maintain arterial pressure near control levels. This infusion resulted in a larger increase in ACTH release than AII alone. The angiotensin II was infused at a rate maintaining plasma AII concentrations well within the physiologic range. The possibility that the nitroprusside was producing an effect not related to its modulation of blood pressure was tested. A different vasodilator, hydralazine, was used in further experiments to negate the AII induced blood pressure rise. These experiments showed results similar to the AII + NP infusions. Thus, it appears that the pressor effect of exogenous AII does counteract an effect to stimulate ACTH release.
Experiments on sodium depleted dogs were also done using an AII blocker, saralasin (Brooks & Reid, 1986). These studies showed that despite a large drop in blood pressure induced by saralasin infusion, plasma corticosteroid levels remained unchanged. This indicates that AII may well be acting to maintain the output of corticosteroids in this physiologic state. The potency of these animals' ACTH releasing response to hypotension was demonstrated with infusions of nitroprusside. A decrease in blood pressure to similar levels as induced by saralasin was accompanied by large increases in plasma ACTH concentrations. It is unlikely that AII simply uncouples the relationship between blood pressure and ACTH release. When nitroprusside was added to saralasin infusions to lower pressure further, large increases in ACTH again occurred, reconfirming the potency of nitroprusside induced hypotension in saralasin treated animals.
Studies utilizing infusions of saralasin into the cerebral ventricles of sodium deprived dogs have demonstrated significant decreases in plasma corticosteroid concentrations (Brooks & Malvin, 1979). These studies indicate that AII may play a physiologic role in central mechanisms of stimulating glucocorticoid release during sodium deprivation.
Another physiologic state in which AII may influence the adrenal-pituitary axis is that of high renin hypertension. Humans with high renin or renovascular hypertension have been found to have higher morning (0800) cortisol peaks than those with low renin hypertension or normotension (Atlas et al., 1981). Ten days of administration of captopril (an inhibitor of angiotensin converting enzyme) to the high renin subjects not only decreased mean arterial pressure but also caused cortisol peak levels to fall. Analysis of the data of another study of six patients with essential hypertension (Angeli et al., 1981) showed a correlation between plasma renin activity and cortisol levels. Cortisol concentration then decreased in the three patients with high renin hypertension when treated with captopril. Experiments in dogs with one-clip Goldblatt hypertension showed that chronic activation of the renin angiotensin system produced no changes in plasma corticosteroid levels and that AII blockade with saralasin or captopril did not decrease corticosteroid concentrations (Ben et al., 1984). This result conflicts with the previous human studies but also raises the possibility that differences in AII actions on corticosteroids may exist between dogs and humans.
Another physiologic state that has been investigated is hemorrhage. Hemorrhage is known to stimulate the renin-angiotensin system (Claybaugh & Share, 1972). Aguilera et al. (1983) investigated the role of angiotensin in the ACTH response to hemorrhage by infusing converting enzyme inhibitor into the cerebral ventricles of conscious rats. This infusion prevented a previously demonstrated increase in plasma ACTH levels during hemorrhage. This indicated the importance of centrally produced AII in this response. If peripherally generated AII were important, the central blockade of converting enzyme would be ineffective. Therefore, although AII may play a role in the ACTH response to hemorrhage, the role of the peripheral renin-angiotensin system is uncertain.
In conclusion, endogenous AII may promote ACTH release in a number of physiologic states, and this action may be important in normal responses to certain physiologic stresses.
**Summary**
Intravenous, intracarotid, and intracerebroventricular infusions of AII can promote the release of ACTH and/or 11-hydroxycorticosteroids into the circulation of conscious dogs, rats, and humans. The relative importance of three potential sites of AII action has been studied. AII may affect ACTH release by interaction with receptors in the anterior pituitary or in areas of the central nervous system that regulate the release of CRF into the hypophyseal portal system. The possibility of direct effects of AII on the adrenal cortex for the release of 11-hydroxycorticosteroids must also be considered.
Rabbits may be excellent subjects for further studies of the mechanism and site of AII's action on ACTH and/or corticosteroid release.
RABBIT STUDIES
The purpose of the present study was to examine the effects of increased plasma AII concentrations on the release of AVP and ACTH from the rabbit pituitary. This project used two different approaches. The first approach involved intravenous infusion of AII into conscious rabbits and addressed the question: Does AII stimulate ACTH and AVP release in rabbits?
The second approach involved subjecting rabbits to physiologic stresses known to raise plasma AII concentrations in other animals. Hemorrhage and sodium depletion were used to assess the range of endogenous AII concentrations that were possible in rabbits. This approach addressed the question: Can rabbit endogenous plasma AII levels rise to the levels found effective in stimulating AVP and/or ACTH release during AII infusions?
Twenty-seven male New Zealand White rabbits weighing 2.0-4.0 kg were studied. They were housed in individual cages in the OHSU Animal Care Department and fed approximately 1 cup (150 g) of Purina rabbit chow (DG-5315, sodium content 0.25-0.50%) daily unless otherwise specified. They also received tap water ad libitum. Rabbits were brought to the lab daily and placed in the stainless steel restrainer boxes used for experiments. They were given at least four days to become familiar with the lab environment before experimentation.
CATHETERIZATION
Arterial catheters were implanted for blood pressure and heart rate monitoring as well as blood sampling. Venous catheters were used for infusion of AII and nitroprusside. The central arteries and marginal veins of the rabbit ears were selected for catheterization. Catheters were introduced in the morning of the day of experimentation and removed immediately upon completion of the experiment.
Arterial catheterization was begun by infusing subcutaneous 1% lidocaine (Elkins-Sinn, Inc.) bilaterally to the central ear artery. This maneuver was mildly irritating to the rabbit but prevented any further pain secondary to the catheterization. After local anesthesia, an 18 or 20 gauge catheter (3.2 cm Quik-Cath [Intravascular Over-the-Needle Teflon catheter], Travenol) was introduced into the vessel and advanced 1 cm toward the base of the ear. It was then flushed with 1-2 ml heparinized saline (heparin, Elkins-Sinn, Inc., 1000 u/ml diluted ten-fold in normal saline; 100 USP units/ml). This flush was easily seen in the superficial ear vasculature and served as an indication of successful cannulation. The catheter was then secured with cloth tape.
If the first attempt at arterial catheterization failed, a second attempt was made using the opposite ear. If this failed, a third attempt was occasionally made in the first ear. For further details of catheterization techniques, see Appendix A.
Following arterial catheterization, a 20-gauge catheter was placed in the marginal vein of the opposite ear. Lidocaine (1%) was used for local anesthesia. Catheter placement was aided by the placement of a paper clip on the proximal marginal ear to act as a tourniquet. The catheter (Quik-Cath, Travenol) was inserted in a similar manner to the arterial catheter.
Arterial catheters were connected by tubing (Tygon Micro Bore, I.D. 040) filled with heparinized saline to a pressure transducer (Microswitch, Model 135 PC 05 G1, Honeywell) linked to a Grass (Model 7D) polygraph (Figure 1). Pressure readings were recorded on two channels, one highly damped to reflect mean arterial pressure, and the other minimally damped to monitor pulse amplitude and rate. Pulse rate was recorded by a tachograph. The transducer and polygraph were calibrated before experiments using a water manometer. The internally calibrated tachograph readings were periodically compared with visual counts of inflections of the pressure tracing.
The venous catheter was connected by tubing to a 6 ml syringe filled with infusate. This syringe was placed in an infusion pump (Harvard Model 901) calibrated to deliver 0.080 ml/min.
Rabbits were allowed at least two hours after catheterization before an experiment began. Occasionally, severe vasoconstriction of the ear arteries delayed experiments several hours. The mean delay between catheterization and experimentation was 3 hrs and 40 min ± 74 min. This vasoconstriction was sometimes overcome by intra-arterial flushes of 1-2 ml of warm (40°C) saline. This technique was also useful when vasoconstriction during experiments interfered with blood sample withdrawal. When a stable baseline pressure had been maintained for at least thirty minutes, one of the following experiments was begun.
PROTOCOLS
1. AII INFUSION EXPERIMENTS (Figure 2)
These experiments were conducted to determine the dose-response relationship between intravenous AII infusions and plasma levels of AII, AVP and corticosteroids. Experiments used 60-minute infusions of either 0, 10, 20, or 40 ng·kg⁻¹·min⁻¹ of AII in a 5% dextrose in water solution, infused at a rate of 0.08 ml/min. An initial blood sample (4 ml) was withdrawn and replaced with 3.0 ml normal saline and 0.5 ml heparinized normal saline, which also flushed the arterial line. The infusion was then begun. After 30 minutes, the second arterial blood sample (2 ml) was withdrawn and replaced with 2.0 ml normal saline and 0.5 ml heparinized normal saline. Another 30 minutes of infusion passed and a third arterial sample (4 ml) was withdrawn and replaced. Infusion was then halted. After a 30-minute recovery period, a fourth and final arterial sample (4 ml) was withdrawn.
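Because the pump ran at a fixed 0.08 ml/min, each dose and body weight implied a particular syringe concentration. A minimal sketch of that arithmetic (the function name and the example weight are hypothetical; the pump rate and doses are those of the protocol):

```python
# Back-of-the-envelope check: syringe concentration needed to deliver a
# target AII dose at the fixed pump rate of 0.08 ml/min (from the methods).
PUMP_RATE_ML_MIN = 0.08

def infusate_conc_ng_per_ml(dose_ng_kg_min: float, weight_kg: float) -> float:
    """Concentration (ng/ml) so the pump delivers dose_ng_kg_min to this rabbit."""
    return dose_ng_kg_min * weight_kg / PUMP_RATE_ML_MIN

# e.g. a hypothetical 3.0 kg rabbit at the 20 ng/kg/min dose:
conc = infusate_conc_ng_per_ml(20, 3.0)
print(conc)  # 750.0 ng/ml
```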
Samples were immediately put on ice. The 4 ml samples were divided, with 1.8 ml placed in a tube with EDTA (0.2 ml of 0.3 M EDTA solution) and the remainder placed in a tube with two drops of heparin (1000 USP units/ml). All of the second sample (2 ml) was put in a heparin tube. At the end of the experiment, the samples were centrifuged at 4°C and the plasma was separated from the red blood cells. Erythrocytes from the heparinized samples were resuspended in normal saline and reinfused into the rabbits. The plasma samples were separated into aliquots and frozen at -20°C for later assay of AII, AVP, and corticosteroid levels. Osmolality was also measured.
2. AII + NITROPRUSSIDE INFUSION EXPERIMENTS (Figure 2)
The purpose of these experiments was to ascertain the effect of AII on ACTH and AVP release independent of AII's pressor effect. Nitroprusside (NP) was infused simultaneously with AII at rates minimizing the elevation of blood pressure during the infusion. One set of experiments combined 3 μg·kg⁻¹·min⁻¹ NP with 20 ng·kg⁻¹·min⁻¹ AII. The other set combined 3-6 μg·kg⁻¹·min⁻¹ NP with 40 ng·kg⁻¹·min⁻¹ AII. The simultaneous AII and NP infusions were performed for one hour and blood samples were collected as described above for AII infusions. Solutions were infused at the rate of 0.08 ml/min.
3. NITROPRUSSIDE INFUSION EXPERIMENT
After one rabbit received an infusion of 20 ng·kg⁻¹·min⁻¹ AII plus 3 μg·kg⁻¹·min⁻¹ NP, it was allowed thirty minutes to recover and an infusion of nitroprusside alone was begun. The nitroprusside infusion rate was increased to a level that produced a 20 mm Hg drop in mean arterial pressure. This rate was calculated to be 60 μg·kg⁻¹·min⁻¹. When a 20 mm Hg fall was achieved, a 4 ml blood sample was drawn. The infusion was continued at a rate of 60 μg·kg⁻¹·min⁻¹ for ten more minutes, then a second 4 ml blood sample was drawn.
4. HEMORRHAGE EXPERIMENTS
These experiments were performed to measure the rise in plasma AII levels occurring during a three-step 30% hemorrhage. Estimated blood volume was considered to be 65 ml/kg (Yamazaki & Sagawa, 1985). A catheter was placed in the central ear artery of each rabbit in the manner previously described. This catheter was also connected to the transducer and polygraph apparatus used before. After catheterization, the rabbits were given an hour to recover before experimentation was begun.
Ten minutes before hemorrhage, a control sample of arterial blood (4 ml) was drawn. At the beginning of the first bleed, a second sample of 4 ml was drawn. Blood was then dripped into a graduated tube with a small amount of heparin added. A total of 10% of the estimated blood volume was withdrawn (including the two samples) and the line was then flushed with 0.5 ml heparinized normal saline. Ten minutes after the start of the first hemorrhage, the second bleed was begun. Again, the first 4 ml of blood was placed in sample tubes and the remainder of another 10% of the estimated blood volume was dripped into a larger graduated tube. Ten minutes later this routine was repeated for the third 10% bleed. Ten minutes after the beginning of the third bleed, a final sample of 4 ml was withdrawn. Most bleeds required only two to three minutes although one required six, and another eight, minutes.
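The bleed volumes follow directly from the 65 ml/kg estimate and the three 10% steps. A sketch under those assumptions (the 3.0 kg example weight is hypothetical):

```python
# Graded hemorrhage arithmetic: 65 ml/kg estimated blood volume (from the
# protocol), removed in three 10% steps for a 30% total hemorrhage.
EST_BLOOD_VOL_ML_PER_KG = 65

def bleed_steps(weight_kg: float, n_steps: int = 3, frac_per_step: float = 0.10):
    """Volume (ml) removed at each step of the graded hemorrhage."""
    total_volume = EST_BLOOD_VOL_ML_PER_KG * weight_kg
    step = total_volume * frac_per_step
    return [step] * n_steps

steps = bleed_steps(3.0)  # hypothetical 3.0 kg rabbit
print(steps, sum(steps))  # three bleeds of 19.5 ml each, 58.5 ml total (30%)
```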
Blood samples were drawn and separated in a similar manner to that described for the AII infusion experiments. The samples were centrifuged after the experiment and the heparinized erythrocytes were resuspended in saline and reinfused into the rabbits. Blood volume was also then restored with "non-sample" blood. The five plasma samples were aliquoted and frozen and later assayed for AII, AVP, and corticosteroid levels.
5. SODIUM DEPRIVATION
This experiment was designed to determine the response of plasma AII levels to chronic sodium deprivation. Blood samples were taken from six rabbits for the measurement of baseline AII, AVP, and corticosteroid concentrations. These rabbits were then fed a low sodium diet (Purina Modified Lab Rabbit Chow with No Added Sodium, sodium content 0.05%, providing 3 mEq sodium per day). On the third day of the low sodium diet, 4 mg of furosemide was given intramuscularly to each rabbit and repeated every other day until the ninth day, inclusive. This dose of furosemide per kg body weight was similar to the dose given to dogs during other sodium deprivation experiments (Brooks & Reid, 1986). On the tenth day, a catheter was placed either in an ear artery or vein. Arterial catheterizations allowed blood pressure measurements. Blood samples were collected up to an hour after arterial catheterization but immediately after venous catheterization. Venous blood withdrawal was impossible in one rabbit (#1553), requiring the use of a cardiac puncture technique. Anesthesia was induced with 0.6 ml pentobarbital (64.8 mg/ml, Fort Dodge Laboratories, Inc.). Two 4 ml samples were collected from each rabbit and placed into iced tubes. Each sample was divided in half, with 2 ml collected in a tube containing heparin and 2 ml collected in a tube containing EDTA. Samples were then centrifuged and handled as described before. Instead of using part of the heparinized plasma for assay of AVP, this aliquot was used for determination of plasma sodium content. The remaining plasma was used for measures of plasma AII, glucocorticoids and osmolality.
**REUSE OF ANIMALS**
After infusion experiments, catheters were removed and animals returned to their quarters. A recovery period of seven days was allowed before further experimentation. During this period, further training was done. Rabbits were often subjects for both AII and AII + NP infusion experiments. None were used for more than four experiments because of increased difficulties with arterial catheterization.
**RANDOMIZATION**
The infusion protocol for each of the first eighteen experiments was selected by a random die roll from the protocols for 0, 10, 20, and 40 ng·kg\(^{-1}\)·min\(^{-1}\) AII infusions as well as the 20 ng·kg\(^{-1}\)·min\(^{-1}\) AII plus 3 μg·kg\(^{-1}\)·min\(^{-1}\) NP infusion protocol. After fourteen experiments, the protocol for the 40 ng·kg\(^{-1}\)·min\(^{-1}\) AII + 3-6 μg·kg\(^{-1}\)·min\(^{-1}\) NP infusion was added to the other protocols selected at random. In the final fourteen infusion experiments, a protocol was chosen before each rabbit was brought to the lab for catheterization. Details of dates, rabbits used and randomization for each protocol can be found in Appendix B.
**ASSAYS**
Plasma samples collected in heparinized tubes were assayed for AVP and corticosteroids. AVP was extracted from plasma, dried, frozen, and shipped to another lab (LC Keil, USF, San Francisco, CA) for radioimmunoassay (Keil et al., 1977). Corticosteroid concentrations were measured by competitive protein binding radioassay (Murphy, 1967). The plasma samples anticoagulated with EDTA were assayed for angiotensin II. AII was extracted from plasma and measured by radioimmunoassay (Reid, 1981; Deschepper & Ganong, 1986).
Plasma sodium was measured by a Nova 1 Sodium/Potassium Analyzer (Nova Biomedical). Plasma osmolality was measured by a freezing point osmometer (Advanced Digimatic Osmometer, Model 3D2, Advanced Instruments Co).
ANALYSIS OF DATA
In AII or AII + NP infusion experiments, rabbits with high (>100 ng/ml) baseline corticosteroid levels were excluded from the statistical analysis. Rabbits with baseline corticosteroid levels below 100 ng/ml were not excluded, even if subsequent levels rose above 100 ng/ml. It was noticed in preliminary experiments that rabbits which appeared more agitated and had higher baseline blood pressure also tended to have glucocorticoid levels above 100 ng/ml. This value is comparable with the elevation in glucocorticoid levels in other rabbits exposed acutely to handling stress (Redgate et al., 1981). Therefore, this criterion was used to exclude rabbits that presumably were undergoing a stress reaction to the preparation for the experiment, since this reaction could mask an effect of AII on glucocorticoid levels.
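The exclusion rule amounts to a simple threshold filter on baseline glucocorticoid values. A hypothetical illustration (record names and values are invented, not data from these experiments):

```python
# Illustrative filter for the >100 ng/ml baseline glucocorticoid exclusion
# criterion. The experiment records below are hypothetical.
CORT_CUTOFF_NG_ML = 100

experiments = [
    {"id": "r1", "baseline_cort": 45},
    {"id": "r2", "baseline_cort": 132},  # presumed stress reaction: excluded
    {"id": "r3", "baseline_cort": 98},
]

# Keep only experiments whose baseline value does not exceed the cutoff.
included = [e for e in experiments if e["baseline_cort"] <= CORT_CUTOFF_NG_ML]
print([e["id"] for e in included])  # ['r1', 'r3']
```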
The mean blood pressure value given for each experimental period was determined by calculating the average value of the polygraph tracing over the 20-minute period preceding the blood sample. Mean heart rate values were determined in a similar manner.
Cardiovascular and endocrine responses to angiotensin II infusion or hemorrhage were statistically evaluated with one-way analysis of variance for repeated measures (Winer, 1971). If changes over the course of experiments were statistically significant, a post hoc Duncan's multiple-range test (Winer, 1971) was used to assess which values differed from the control value.
The increases in blood pressure produced by $20 \text{ ng} \cdot \text{kg}^{-1} \cdot \text{min}^{-1}$ AII infusions were compared with those produced by $20 \text{ ng} \cdot \text{kg}^{-1} \cdot \text{min}^{-1}$ AII + 3 $\mu\text{g} \cdot \text{kg}^{-1} \cdot \text{min}^{-1}$ NP infusions with an unpaired t test (Winer, 1971). A similar comparison was made between $40 \text{ ng} \cdot \text{kg}^{-1} \cdot \text{min}^{-1}$ AII infusions and $40 \text{ ng} \cdot \text{kg}^{-1} \cdot \text{min}^{-1}$ AII + 3-6 $\mu\text{g} \cdot \text{kg}^{-1} \cdot \text{min}^{-1}$ NP infusions.
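The unpaired t test cited here is the classical pooled-variance form described by Winer (1971). A self-contained sketch (the example pressor responses are hypothetical, not data from these experiments):

```python
import math

def unpaired_t(xs, ys):
    """Student's two-sample t statistic with pooled variance (equal-variance form)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    # Sample variances (n - 1 denominator)
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    pooled = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    se = math.sqrt(pooled * (1 / nx + 1 / ny))
    return (mx - my) / se, nx + ny - 2  # t statistic and degrees of freedom

# Hypothetical pressor responses (mm Hg rise), AII alone vs AII + NP:
t, df = unpaired_t([18, 22, 20, 25], [3, 5, 2, 6])  # t ≈ 9.86, df = 6
```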
Results of sodium deprivation experiments required a different analysis. Sodium depleted AII values were compared with the sodium replete values of the same rabbits along with control values of sodium replete rabbits used for other experiments, using an unpaired t test (Winer, 1971). Sodium depleted corticosteroid values were compared to controls with a paired t test (Winer, 1971). Plasma sodium values for sodium depleted rabbits were compared to values for different, sodium replete control rabbits with an unpaired t test (Winer, 1971).
Values in figures and tables are presented as means plus or minus standard error ($\pm$ SE).
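The mean ± SE summaries reduce to the sample mean and SD/√n. A minimal sketch (the example pressures are hypothetical):

```python
import math

def mean_se(values):
    """Mean and standard error (sample SD / sqrt(n)), as used for figures and tables."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return mean, sd / math.sqrt(n)

# Hypothetical mean arterial pressures (mm Hg):
m, se = mean_se([70, 72, 74])
print(m, se)  # 72.0, ≈1.15
```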
RESULTS
Thirty-one AII or AII + NP infusion experiments were performed. High baseline plasma glucocorticoid levels (>100 ng/ml) excluded nine experiments from analysis. These nine also tended to have elevated baseline blood pressures and plasma AVP levels. Mean arterial pressure for control periods of rejected experiments was 80 ± 3 mm Hg compared to the value of 71 ± 2 mm Hg for control periods of included experiments (p<0.05, unpaired t test). The mean control AVP level of excluded experiments was 17 ± 5 pg/ml, whereas included rabbits had a level of 6 ± 3 pg/ml (p=0.05).
ANGIOTENSIN II INFUSION EXPERIMENTS
1. Effects on Plasma AII Concentration (Table 1)
Plasma AII concentration did not rise from baseline (41 ± 15 pg/ml) levels during or after 60-minute infusions of the 5% dextrose in water vehicle (p>0.10, n=3). AII infusions of 20 ng·kg⁻¹·min⁻¹ increased plasma AII levels in all experiments, but this was followed by an elevated recovery level (176 pg/ml) in one rabbit. This produced a large recovery standard error and p>0.05 by one-way analysis of variance. However, the AII concentration during infusion (131 ± 2 pg/ml) was much higher than the control level of 25 ± 3 pg/ml (p<0.001, n=3) when compared by a paired t test. When 20 ng·kg⁻¹·min⁻¹ AII infusions were accompanied by 3 µg·kg⁻¹·min⁻¹ of nitroprusside, plasma AII rose significantly (p<0.001) to levels not different from those achieved by infusions of 20 ng·kg⁻¹·min⁻¹ of AII alone (p>0.10, unpaired t test). Infusions of 40 ng·kg⁻¹·min⁻¹ AII also raised plasma AII levels when administered either alone (191 ± 35 pg/ml, p<0.005) or in combination with nitroprusside (194 ± 22 pg/ml, p<0.005). These two mean peak values were not different (p>0.10, unpaired t test). In all AII infusions except 20 ng·kg⁻¹·min⁻¹ alone, recovery plasma AII concentrations were similar to preinfusion controls.
2. Effects on blood pressure (Tables 2 and 3)
When the 5% dextrose in water vehicle was infused alone, arterial pressure did not change significantly. Each dose of AII produced an increase in arterial blood pressure. When nitroprusside was infused simultaneously with either 20 or 40 ng·kg⁻¹·min⁻¹ AII, arterial pressure did not change significantly from control values.
Figure 3 compares the pressor effects of AII infusions with those of AII + NP infusions. AII infusion at 40 ng·kg⁻¹·min⁻¹ resulted in a mean pressor response significantly greater than the response to 40 ng·kg⁻¹·min⁻¹ AII + 3-6 µg·kg⁻¹·min⁻¹ NP infusions (p<0.05, unpaired t test).
3. Effects on heart rate (Tables 4 and 5)
Heart rate was not significantly affected by AII infusion, either alone or in combination with nitroprusside. However, during the recovery periods following the 20 and 40 ng·kg⁻¹·min⁻¹ AII infusions, heart rate was elevated above control levels (p<0.05).
4. Effects on plasma corticosteroid concentration (Tables 6 and 7)
Infusions of AII alone had no significant effect on plasma glucocorticoid concentration (Table 6 and Figures 4 and 5). Plasma glucocorticoid concentration was also unaltered by simultaneous infusions of 20 ng·kg⁻¹·min⁻¹ AII and 3 µg·kg⁻¹·min⁻¹ NP. However, 40 ng·kg⁻¹·min⁻¹ AII + 3-6 µg·kg⁻¹·min⁻¹ NP infusions produced a 50% rise in mean glucocorticoid level (p<0.005) that was sustained throughout the infusions. Glucocorticoid levels then returned to near the control concentration during the post-infusion recovery period (Table 7 and Figure 5).
5. Effects on plasma AVP concentration and osmolality
Infusions of AII alone had no significant effect on plasma AVP levels (Table 8). Simultaneous infusion of 40 ng·kg⁻¹·min⁻¹ AII and 3-6 µg·kg⁻¹·min⁻¹ NP did produce a small but significant increase in AVP (p<0.05) (Table 9 and Figure 6). This increase did not persist; by the end of the 60-minute infusion, AVP levels had fallen significantly to near control levels, where they remained after the recovery period. Infusions of 20 ng·kg⁻¹·min⁻¹ AII + NP had no effect on AVP levels (Table 9). Baseline osmolalities did not differ between protocol groups, and no changes in mean osmolality were found over the time course of any protocol group (Table 11). The mean baseline osmolality for all groups was 279 ± 5 mOsm/kg.
NITROPRUSSIDE INFUSION EXPERIMENT
Nitroprusside was infused into one rabbit at a rate of 30-60 µg·kg⁻¹·min⁻¹, which produced a transient fall in mean arterial pressure from 62 to 42 mm Hg over the course of 5 minutes. A blood sample was then taken (requiring 1.4 minutes), and when the arterial line was returned to the pressure transducer, mean pressure had returned to 55 mm Hg. Mean pressure then rose gradually over the next ten minutes to 58 mm Hg immediately before the second sample was drawn. Plasma AII concentration rose from a preinfusion level of 56 pg/ml to 340 pg/ml in the first sample and 330 pg/ml in the second. Plasma corticosteroid concentration rose from 64 ng/ml to 76 and 81 ng/ml, respectively. AVP levels were not measured.
HEMORRHAGE EXPERIMENTS
Table 10 illustrates the effects of hemorrhage on blood pressure, heart rate, and plasma AII, corticosteroid, and AVP concentrations. Mean blood pressure was not significantly lower than control levels until after the third hemorrhage, when it decreased from 73 ± 4 to 63 ± 2 mm Hg (p<0.05). Heart rate and plasma AII levels did not change significantly during the hemorrhage.
Corticosteroid levels began at 60 ± 8 ng/ml and fell to 58 ± 7 ng/ml after the first 10% bleed. The value after the final bleed was 69 ± 8 ng/ml. One-way analysis of variance with repeated measures indicated that values changed over the course of the experiment. Duncan's multiple-range test indicated that the value after the 10% hemorrhage was significantly different from the final corticosteroid value (p<0.05); however, no corticosteroid level differed significantly from the control value.
AVP levels rose precipitously after the third 10% hemorrhage (p<0.01). Two of the three animals measured had final plasma AVP levels above the upper range of the assay (62.5 pg/ml); both were therefore assigned a value of 62.5 pg/ml for purposes of data analysis.
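The ceiling rule described above, assigning the assay's upper limit to out-of-range samples, can be sketched as follows; the raw readings are hypothetical:

```python
# Sketch of the censoring rule used for the AVP assay: readings above the
# assay's upper range are assigned the limit itself before averaging.
# The raw readings below are hypothetical, not the study's data.
ASSAY_CEILING = 62.5  # pg/ml, upper range of the AVP assay

raw_avp = [48.0, 70.2, 81.5]  # two readings exceed the assay range

censored = [min(v, ASSAY_CEILING) for v in raw_avp]
mean_avp = sum(censored) / len(censored)
print(censored, round(mean_avp, 1))
```

Because out-of-range values are floored at the assay limit, a mean computed this way is effectively a lower bound on the true post-hemorrhage mean.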
SODIUM DEPRIVATION EXPERIMENTS
Sodium deprivation plus diuretic injections increased the mean plasma AII level to 67 ± 31 pg/ml (n=6), a value significantly higher than the level of 30 ± 4 pg/ml (n=26) found in sodium-replete controls (p<0.05, unpaired t test). The rabbit that was bled while anesthetized had a much higher AII level (221 pg/ml) than the other five sodium-deprived rabbits.
Plasma sodium concentration was lower in sodium-depleted rabbits (138.6 ± 1.2 mM, n=6) than in sodium-replete controls (142.1 ± 0.4 mM, n=9; p<0.05, unpaired t test). One sodium-deprived rabbit had a far lower level (132.6 mM) than the others; this was the same rabbit with the extremely elevated AII level (221 pg/ml).
Corticosteroid levels were not altered by sodium deprivation: control values of 47 ± 10 ng/ml (n=6) in the sodium-replete state compared with 42 ± 6 ng/ml (n=6) when the same rabbits were depleted (p>0.10, paired t test).
DISCUSSION
This study served three purposes. The first was to develop the conscious rabbit as a model for investigating the actions of angiotensin II (AII) on the release of adrenocorticotropic hormone (ACTH) and arginine vasopressin (AVP) from the pituitary. The second was to compare the actions of AII in the rabbit with those previously described in other animals, such as the rat and dog. Finally, this study sought to determine whether the actions of AII on ACTH and AVP release are physiologically significant in the rabbit.
A major difficulty encountered during this study was overcoming the tendency for ACTH and corticosteroid levels to increase due to incidental stresses of experimentation. Corticosteroids are elevated in rabbits during such diverse situations as exposure to a new environment, handling, and venipuncture (Redgate et al., 1981; Fenske et al., 1982). It was therefore uncertain at the outset of this project whether rabbits could be studied within a few hours of catheterization.
A major finding of this study was that if properly handled and trained, conscious rabbits are suitable for acute studies of ACTH release. Although 28% of infusion experiments were rejected due to elevated baseline corticosteroid levels, the incidence of rejection fell markedly as refinements in training and experimental technique were implemented.
Because of the small number of rabbits included in each experimental group, further findings of this study must be regarded as preliminary, and more work will be necessary before they can be solidly established. Despite this, some of the results reported in this study
reached high levels of significance; it is expected that these findings will be confirmed in the future.
The second major finding of this study was that AII infusions can induce rises in plasma corticosteroid and AVP levels in rabbits. However, this effect could be demonstrated only when the highest dose of AII (40 ng·kg⁻¹·min⁻¹) was infused and its pressor effect was negated by concurrent nitroprusside coinfusion. This suggests that the pressor effect of AII may inhibit AVP and ACTH release, counteracting its other stimulatory effects. The enhancement of these actions of AII by nitroprusside agrees with findings of similar experiments in dogs (Brooks & Reid, 1986; Brooks et al., 1986). Still, the infusion rate necessary for these effects in rabbits was much higher than the rate found effective for both effects in dogs (10 ng·kg⁻¹·min⁻¹).
It is conceivable that nitroprusside promotes AVP and corticosteroid secretion through a mechanism other than decreasing blood pressure. The fact that plasma AVP and corticosteroid levels were unaltered by infusion of nitroprusside with 20 ng·kg⁻¹·min⁻¹ AII argues against a direct action of nitroprusside on AVP or ACTH release. Evidence in dogs shows that the vasodilator hydralazine, when infused simultaneously with AII, is as effective as nitroprusside in stimulating AVP and ACTH (Brooks & Reid, 1986; Brooks et al., 1986). Brooks and colleagues also reported that sinoaortic baroreceptor denervation eliminated the effects of nitroprusside infusion on AVP and ACTH. This is further evidence that the action of nitroprusside on AVP and ACTH release is mediated by its effect on blood pressure alone. However, another mechanism of nitroprusside action cannot be decisively ruled out by these studies.
One interpretation of the finding that AII infusions can increase plasma corticosteroid levels is that AII stimulates the release of ACTH from the pituitary, since corticosteroid levels are generally considered good indicators of pituitary ACTH release (Reid, 1984). However, the possibility of a direct stimulatory effect of AII on the adrenal cortex cannot be ignored; such an effect could cause corticosteroid levels to rise independently of ACTH release. A study by Braverman and Davis (1973) showed that AII infusions increase corticosteroid output in adrenal veins of dexamethasone-treated rabbits. Since dexamethasone suppresses ACTH release, this was evidence of a direct effect of AII on the adrenal cortex. It should be noted that this effect was seen at an infusion rate of 100 ng·kg⁻¹·min⁻¹; in the present study the highest rate used was 40 ng·kg⁻¹·min⁻¹. However, Braverman and Davis also found that the glucocorticoid response to ACTH infusion was blunted by dexamethasone, raising the possibility that dexamethasone was acting at the adrenal cortex to inhibit glucocorticoid release. This could explain why high doses were needed to stimulate an increase in glucocorticoid output in their experiments. The finding of a direct adrenal effect does not preclude the possibility that AII also increases ACTH release. Resolution of this issue awaits studies measuring ACTH release more directly in the rabbit.
This study presents evidence that AII, when infused at rates similar to those used in dogs, produces lower plasma AII levels. Infusions of 40 ng·kg⁻¹·min⁻¹ AII in rabbits raised plasma AII concentrations to about 190 pg/ml, a value similar to that found in dogs during infusions of only 10 ng·kg⁻¹·min⁻¹. Infusion rates of 20 ng·kg⁻¹·min⁻¹ AII produced even lower plasma AII concentrations in rabbits. Thus a given dose of AII is much less effective in raising plasma AII levels in rabbits than in dogs. This also accounts for the need to use larger doses of AII in rabbits than in dogs to produce measurable rises in plasma AVP and ACTH levels.
Although AII is apparently able to increase plasma levels of AVP and glucocorticoids in the rabbit, it remains uncertain whether the plasma levels of AII required for these effects are within the physiologic range. Two attempts were made to increase endogenous AII levels above those found effective during AII infusion.
The first attempt utilized a hypotensive hemorrhage of 30% of the rabbit's estimated blood volume. This hemorrhage produced no discernible increase in plasma AII levels, despite a 10 mm Hg drop in arterial blood pressure and a precipitous rise in AVP levels. This finding is consistent with a study in dogs (Claybaugh & Share, 1973) demonstrating that hemorrhage can increase plasma AVP levels without affecting plasma renin activity, although the present hemorrhage was more severe than hemorrhage rates in the dog that were effective in increasing both AVP and AII levels. The present result also conflicts with a study in conscious rabbits showing that a 20% hemorrhage raised plasma renin activity almost three-fold (Bartley & Anderson, 1984).
The second attempt to raise endogenous plasma AII levels utilized a ten-day sodium deprivation regimen. Sodium deprivation is an effective stimulus of the renin-angiotensin system in dogs (Brooks & Reid, 1983), rats (Semple, 1980), and rabbits (Braverman & Davis, 1973). In the latter study, 4-6 days of a sodium-deficient diet plus daily diuretic injections raised plasma renin activity in rabbits five-fold. The results of the present study were less impressive, with ten days of sodium deprivation producing a milder, although significant, rise in plasma AII levels. A comparison of the daily sodium intake of the two rabbit groups may explain the difference: the sodium content of the low-sodium chow used in the present experiments was 22 mEq/kg, compared with 7 mEq/kg in the chow used by Braverman and Davis (1973). Further experiments with more severe sodium restriction are needed to raise endogenous plasma AII levels maximally.
Supporting a physiologic role for AII in the release of AVP and ACTH in rabbits is the comparison of effective AII levels in rabbits (191 pg/ml) with those produced endogenously during sodium deprivation in the dog (172 pg/ml) and during hemorrhage in rats (500-1000 pg/ml). In addition, two of the rabbits in this study had very high plasma AII levels. One sodium-depleted rabbit required anesthesia for a cardiac puncture; it had a plasma AII level of 221 pg/ml. The other rabbit, given nitroprusside alone, experienced a 20 mm Hg fall in mean arterial pressure; it had a plasma AII level of 340 pg/ml. These preliminary data indicate that AII increases AVP and corticosteroid release at plasma levels similar to endogenous levels seen in other animals, and that the rabbit is capable of producing such levels. Nevertheless, it is possible that these high plasma AII levels cannot be induced in the rabbit by physiological stimuli.
Cardiovascular responses to AII were also observed in this study. The pressor response to AII infusion was similar to other published results in rabbits (Bartley & Anderson, 1984). It is notable that AII infusions did not significantly depress heart rate despite increases in mean arterial pressure. When the higher-dose AII infusions ended and blood pressure fell to near control levels, heart rate increased markedly. A likely explanation is that the sustained elevation in arterial pressure raised the baroreflex set point, resulting in reflex increases in heart rate when blood pressure fell back to normal levels. Rapid arterial baroreceptor resetting has been observed after only 15 minutes of pressure elevation in rabbits (Dorward et al., 1982).
SUMMARY AND CONCLUSIONS
It was found that intravenous infusions of AII could stimulate rises in plasma AVP and glucocorticoid concentrations in the conscious rabbit. These effects of AII could be demonstrated only when nitroprusside was infused simultaneously with the highest AII infusion rate. This rate induced plasma AII levels similar to those found effective for the stimulation of AVP and ACTH release in the dog.
Another finding was that, for a given rate of AII infusion, plasma AII levels rise far less in rabbits than in dogs. This necessitated higher infusion rates in rabbits to reach the same effective plasma levels.
Attempts to raise endogenous AII levels in conscious rabbits by hemorrhage and sodium deprivation were only mildly successful. Neither procedure increased plasma AII concentrations to levels similar to those found effective for AVP and glucocorticoid release.
It is therefore concluded that AII may increase the release of AVP and ACTH from the rabbit pituitary when its pressor effect is negated, and that this effect occurs at plasma levels within the physiologic range of other animals. However, this study neither supports nor disproves the hypothesis that these effects of AII are physiologically relevant in the rabbit. These findings should contribute to further studies of AII-induced AVP and ACTH release in conscious rabbits.
REFERENCES
Aguilera G, CC Chueh, MK Mohan, KJ Catt: Role of angiotensin II in the regulation of ACTH secretion. Endocrinology 112A:90, 1983 [abstract]
Angeli A, B Orlandi, P Paccotti, R Tabasso, C Tamagnone, G Lavezzaro: Pituitary-adrenocortical function in patients during treatment with the angiotensin-converting enzyme inhibitor captopril. Clin Endocrinol 15:555-565, 1981
Atlas SA, DB Case, JE Sealey, JH Laragh: Relationship between plasma renin and cortisol in hypertensive patients. Clin Sci 61:265s-268s, 1981
Bartley P, W Anderson: Prostaglandins and the renal responses to haemorrhage, angiotensin II and methoxamine in conscious rabbits. Clin Exp Pharm Physiol 11:71-80, 1984
Ben LK, J Maselli, LC Keil, IA Reid: Role of the renin-angiotensin system in the control of vasopressin and ACTH secretion during the development of renal hypertension in dogs. Hypertension 6:35-41, 1984
Braverman B, J Davis: Adrenal steroid secretion in the rabbit: Sodium depletion, angiotensin II, and ACTH. Am J Physiol 225:1306-1310, 1973
Bravo EL, MC Khosla, FM Bumpus: Vascular and adrenocortical responses to a specific antagonist of angiotensin II. Am J Physiol 228:110-114, 1975
Brooks VL, L Daneshvar, IA Reid: Mechanism of the rise in plasma
corticosteroids after intravenous angiotensin II infusion in conscious dogs. Fed Proc 43:717, 1984 [abstract]
Brooks VL, LC Keil, IA Reid: Role of the renin-angiotensin system in the control of vasopressin secretion in conscious dogs. Circ Res 58:829-838, 1986
Brooks VL, RL Malvin: An intracerebral, physiologic role for angiotensin: Effects of central blockade. Fed Proc 38:2272-2275, 1979
Brooks VL, IA Reid: Effects of blockade of brain and angiotensin II receptors in conscious, sodium-deprived dogs. Am J Physiol 245:R881-R887, 1983
Brooks VL, IA Reid: Interaction between angiotensin II and the baroreceptor reflex in the control of adrenocorticotropic hormone secretion and heart rate in conscious dogs. Circ Res 58:816-828, 1986
Claybaugh JR, L Share: Role of renin-angiotensin system in the vasopressin response to hemorrhage. Endocrinology 90:453-460, 1972
Claybaugh JR, L Share: Vasopressin, renin and cardiovascular responses to continuous slow hemorrhage. Am J Physiol 224:519-523, 1973
Claybaugh JR, L Share, K Shimizu: The inability of infusions of angiotensin to elevate plasma vasopressin concentration in the anesthetized dog. Endocrinology 90:1647-1652, 1972
Cowley AW, Jr, SJ Switzer, MM Skelton: Vasopressin, fluid, and electrolyte response to chronic angiotensin II infusion. Am J Physiol 240:R130-R138, 1981
Daniels-Severs A, E Ogden, J Vernikos-Danellis: Effects of centrally
administered angiotensin II in the unanesthetized rat. Physiol Behav 7:785-787, 1971
Davis JO, RH Freeman: The use of angiotensin II blockade to study adrenal steroid secretion. Fed Proc 35:2508-2511, 1976
Deschepper CF, WF Ganong: Interference of eluates from octadecyl cartridges with an angiotensin II radioassay. Peptides 1:365-367, 1986
Dickinson CJ, R Yu: Mechanisms involved in the progressive pressor response to very small amounts of angiotensin in conscious rabbits. Circ Res 20-21 (Suppl II):157-163, 1967
Dorward PK, MC Andresen, SL Burke, JR Oliver, PI Korner: Rapid resetting of the aortic baroreceptors in the rabbit and its implications for short-term and longer term reflex control. Circ Res 50:428-439, 1982
Fenske M, E Fuchs, B Probst: Corticosteroid, catecholamine and glucose levels in rabbits after repeated exposure to a novel environment or administration of (1-24) ACTH or insulin. Life Sci 31:127-132, 1982
Fisher LA, MR Brown: Corticotropin-releasing factor and angiotensin II: Comparison of CNS actions to influence neuroendocrine and cardiovascular function. Brain Res 296:41-47, 1984
Gaillard RC, A Grossman, G Gillies, LH Rees, GM Besser: Angiotensin II stimulates the release of ACTH from dispersed rat anterior pituitary cells. Clin Endocrinol 15:573-578, 1981
Gann DS: Cortisol secretion after hemorrhage: Multiple mechanisms. Nephron 23:119-124, 1979
Gann DS, MF Dallman, WC Engeland: Reflex control and modulation of ACTH and corticosteroids. Int Rev Physiol 24:157-199, 1981
Gann DS, JC Pirkle: Role of cortisol in the restitution of blood volume after hemorrhage. Am J Surg 130:565-569, 1975
Ganong WF: Review of Medical Physiology. Lange Medical Publications, Los Altos, CA, pp 178-197 and 364-370, 1981
Ganong WF, J Shinsako, IA Reid, LC Keil, DL Hoffman, EA Zimmerman: Role of vasopressin in the renin and ACTH responses to intraventricular angiotensin II. Ann NY Acad Sci 394:619-624, 1982
Guo GB, MD Thames, FM Abboud: Differential baroreflex control of heart rate and vascular resistance in rabbits: Relative role of carotid, aortic, and cardiopulmonary baroreceptors. Circ Res 50:554-565, 1982
Hammer M, K Olgaard, S Madsen: The inability of angiotensin II infusion to raise plasma vasopressin levels in haemodialysis patients. Acta Endocrinol 95:422-426, 1980
Hashimoto K, S Yunoki, H Hosogi, J Takahara, T Ofuji: Specificity of cultured anterior pituitary cells in detecting corticotropin releasing factor(s): The effect of biologically active peptides and neurotransmitter substances on ACTH release in pituitary cell cultures. Acta Med Okayama 33:81-90, 1979
Ishikawa S-E, T Saito, S Yoshida: The effect of osmotic pressure and angiotensin II on arginine vasopressin release from guinea pig hypothalamo-neurohypophyseal complex in organ culture. Endocrinology 106:1571-1578, 1980
Keil LC, J Summy-Long, WB Severs: Release of vasopressin by angiotensin II. Endocrinology 96:1063-1065, 1975
Keller-Wood M, B Kimura, J Shinsako, MI Phillips: Interaction between
CRF and angiotensin II in control of ACTH and adrenal steroids.
Am J Physiol 250:R396-R402, 1986
Klingbeil CK, LC Keil, D Chang, IA Reid: Role of vasopressin in stimulation of ACTH secretion by angiotensin II in conscious dogs.
Am J Physiol 251:E52-E57, 1986
Knepel W, DK Meyer: Role of the renin-angiotensin system in isoprenaline-induced vasopressin release. J Cardiovasc Pharmacol 2:815-824, 1980
Laragh J, J Sealey: The renin-angiotensin-aldosterone hormonal system and regulation of sodium, potassium and blood pressure homeostasis. In: Handbook of Physiology, Section 8: Renal Physiology, edited by J Orloff and R Berliner. American Physiological Society: Washington, DC, p 849, 1973
Lee M, TN Thrasher, LC Keil, DJ Ramsay: Cardiac receptors, vasopressin, and corticosteroid release during arterial hypotension in dogs. Am J Physiol 251:R614-R620, 1986
Maran JW, FE Yates: Cortisol secretion during intrapituitary infusion of angiotensin II in conscious dogs. Am J Physiol 233:E273-E285, 1977
Morton JJ, PF Semple, IM Ledingham, B Stuart, MA Tehrani, AR Garcia, G McGarrity: Effect of angiotensin-converting enzyme inhibitor (SQ 20881) on the plasma concentration of angiotensin I, angiotensin II, and arginine vasopressin in the dog during hemorrhagic shock. Circ Res 41:301-308, 1977
Mouw D, JP Bonjour, RL Malvin, A Vander: Central action of angiotensin in stimulating ADH release. Am J Physiol 220:239-242, 1971
Murphy BEP: Some studies of the protein-binding of steroids and their application to the routine micro and ultramicro measurement of
various steroids in body fluids by competitive protein-binding radioassay. *J Clin Endocr* 27:973-990, 1967
Padfield PL, JJ Morton: Effects of angiotensin II on arginine-vasopressin in physiological and pathological situations in man. *J Endocrinol* 74:251-259, 1977
Phillips MI: Functions of angiotensin in the central nervous system [review]. *Ann Rev Physiol* 49:413-435, 1987
Raff H, J Shinsako, CE Wade, LC Keil, MF Dallman: Acute volume expansion decreases adrenocortical sensitivity to ACTH and angiotensin II. *Am J Physiol* 249:R611-R616, 1985
Ramsay DJ, LC Keil, MC Sharpe, J Shinsako: Angiotensin II infusion increases vasopressin, ACTH, and 11-hydroxycorticosteroid secretion. *Am J Physiol* 234:R66-R71, 1978
Rayyis SS, R Horton: Effect of angiotensin II on adrenal and pituitary function in man. *J Clin Endocrinol* 32:539-546, 1971
Redgate ES, RR Fox, FH Taylor: Strain and age effects on immobilization stress in Jax rabbits. *Proc Soc Exp Biol Med* 166:442-448, 1981
Reid IA: Is there a brain renin-angiotensin system? *Circ Res* 41:147-153, 1977
Reid IA: The renin angiotensin system. In: *Hypertension Research: Methods and Models*, edited by RM Radzialowski. Dekker: New York, pp 101-137, 1981
Reid IA: Actions of angiotensin II on the brain: Mechanisms and physiologic role [editorial review]. *Am J Physiol* 246:F533-F543, 1984
Reid IA, VL Brooks, CD Rudolph, LC Keil: Analysis of the actions of
angiotensin on the central nervous system of conscious dogs. *Am J Physiol* 243:R82-R91, 1982
Rivier C, W Vale: Effect of angiotensin II on ACTH release in vivo: Role of corticotropin-releasing factor. *Regul Pept* 7:253-258, 1983
Rosendorff C, RD Lowe, H Lavery, WI Cranston: Cardiovascular effects of angiotensin mediated by the central nervous system of the rabbit. *Cardiovasc Res* 4:36-43, 1970
Schoenenberg P, P Kehrer, AF Muller, RC Gaillard: Angiotensin II potentiates corticotropin releasing activity of CRF-41 in rat anterior pituitary cells: Mechanism of action. *Neuroendocrinology* 45:86-90, 1987
Semple PF: The effects of hemorrhage and sodium depletion on plasma concentrations of angiotensin II and [des-Asp¹] angiotensin II in the rat. *Endocrinology* 107:771-773, 1980
Share L: Interrelations between vasopressin and the renin-angiotensin system. *Fed Proc* 38:2267-2271, 1979
Shimizu K, L Share, JR Claybaugh: Potentiation by AII of the vasopressin response to an increasing plasma osmolality. *Endocrinology* 93:42-50, 1973
Sladek CD, ML Blair, DJ Ramsay: Further studies on the role of angiotensin in the osmotic control of vasopressin release by the organ-cultured rat hypothalamo-neurohypophyseal system. *Endocrinology* 111:599-607, 1982
Sobel DO: Characterization of angiotensin-mediated ACTH release. *Neuroendocrinology* 36:249-253, 1983
Sobel D, A Vagnucci: Angiotensin II mediated ACTH release in rat pituitary cell culture. *Life Sci* 30:1281-1286, 1982
Spinedi E, A Negro-Vilar: Angiotensin II increases ACTH release in the absence of endogenous arginine-vasopressin. Life Sci 34:721-729, 1984
Thrasher TN, LC Keil: Effect of subfornical organ ablation on secretion of arginine vasopressin and adrenocorticotropic hormone in response to angiotensin II in conscious dogs. Fed Proc 45:166, 1986 [abstract]
Uhlich E, P Weber, J Eigler, U Groschel-Stewart: Angiotensin stimulated AVP-release in humans. Klin Wochenschr 53:177-180, 1975
Undesser KP, EM Hasser, JR Haywood, AK Johnson, VS Bishop: Interactions of vasopressin with the area postrema in arterial baroreflex function in conscious rabbits. Circ Res 56:410-417, 1985a
Undesser KP, P Jing-Yun, MP Lynn, VS Bishop: Baroreflex control of sympathetic nerve activity after elevations of pressure in conscious rabbits. Am J Physiol 248:H827-H834, 1985b
Usberti M, S Federico, G Di Minno, B Ungaro, G Ardillo, C Pecoraro, B Cianciaruso, A Cerbone, F Cirillo, M Pannain, A Gargiulo, V Andreucci: Effects of angiotensin II on plasma ADH, prostaglandin synthesis, and water excretion in normal humans. Am J Physiol 248:F254-F259, 1985
Vale W, J Vaughan, M Smith, G Yamamoto, J Rivier, C Rivier: Effects of synthetic ovine corticotropin-releasing factor, glucocorticoids, catecholamines, neurohypophyseal peptides, and other substances on cultured corticotropic cells. Endocrinology 113:1121-1131, 1983
Van Houten M, EL Schiffrin, JFE Mann, BI Posner, R Boucher:
Radioautographic localization of specific binding sites for blood-borne angiotensin II in the rat brain. Brain Res 186:480-485, 1980
Van Houten M, ML Mangiapane, IA Reid, WF Ganong: [Sar¹, Ala⁸] angiotensin II in cerebrospinal fluid blocks the binding of blood-borne [¹²⁵I] angiotensin II to the circumventricular organs. Neuroscience 10:1421-1426, 1983
Wade CE, LC Keil, DJ Ramsay: Effects of sodium depletion and angiotensin II on osmotic regulation of vasopressin. Am J Physiol 250:R287-R291, 1986
Whitworth JA, D Saines, R Thatcher, A Butkus, BA Scoggins, JP Coghlan: Blood pressure, renal and metabolic effects of ACTH in normotensive man. Clin Sci 61:269s-272s, 1981
Winer BJ: Statistical Principles in Experimental Design. McGraw-Hill, New York, 1971
Wood CE, J Shinsako, MF Dallman: Comparison of canine corticosteroid responses to mean and phasic increases in ACTH. Am J Physiol 242:E102-E108, 1982
Yamazaki T, K Sagawa: Hypotension 1.5 min after 10% hemorrhage permits evaluation of rabbit's baroreflex. Am J Physiol 249:H450-H456, 1985
Table 1: EFFECT OF AII INFUSION ON PLASMA AII CONCENTRATION (pg/ml)

| INFUSION | C | E₂ | R | P | n |
|----------|---|----|---|---|---|
| AII 20 ng·kg⁻¹·min⁻¹ | 25±3 | 131±2 | 83±47 | >.05 | 3 |
| AII 20 ng·kg⁻¹·min⁻¹ + 3 µg·kg⁻¹·min⁻¹ NP | 35±7 | 149±13* | 42±8 | <.001 | 4 |
| AII 40 ng·kg⁻¹·min⁻¹ | 40±24 | 191±35* | 35±12 | <.005 | 3 |
| AII 40 ng·kg⁻¹·min⁻¹ + 3-6 µg·kg⁻¹·min⁻¹ NP | 22±6 | 194±22* | 24±5 | <.001 | 5 |

AII=Angiotensin II; NP=Nitroprusside; C=Control; E₂=sample drawn after 60 min of AII infusion; R=Recovery (90 min); P is from one-way analysis of variance. *Different from control value, p<.001.
Sample values are means±SE.
Table 2: EFFECT OF AII INFUSION ON BLOOD PRESSURE (mm Hg)

| AII DOSE | C | E₁ | E₂ | R | P | F | n |
|----------|---|----|----|---|---|---|---|
| 0 (Control) | 65 | 64 | 71 | 75 | | | 2 |
| 10 ng·kg⁻¹·min⁻¹ | 85 | 93 | 97 | 78 | | | 1 |
| 20 ng·kg⁻¹·min⁻¹ | 75±6 | 88±10* | 85±8* | 71±4 | <.05 | 8.582 | 3 |
| 40 ng·kg⁻¹·min⁻¹ | 72±4 | 98±3# | 99±6# | 77±4 | <.001 | 62.55 | 3 |

C=Control; E₁=30 min; E₂=60 min; R=Recovery (90 min). *Different from control value, p<0.05; #different from control value, p<.001. AII=Angiotensin II.
Sample values are means±SE.
Table 3: EFFECTS OF AII & NP INFUSION ON BLOOD PRESSURE (mm Hg)

| DOSE | C | E₁ | E₂ | R | P | F | n |
|------|---|----|----|---|---|---|---|
| 20 ng·kg⁻¹·min⁻¹ AII + 3 µg·kg⁻¹·min⁻¹ NP | 73±5 | 71±6 | 73±8 | 72±5 | >.10 | .208 | 3 |
| 40 ng·kg⁻¹·min⁻¹ AII + 3-6 µg·kg⁻¹·min⁻¹ NP | 63±6 | 68±8 | 69±7 | 65±4 | >.10 | .592 | 5 |

C=Control; E₁=30 min; E₂=60 min; R=Recovery (90 min); P and F are from one-way analysis of variance; AII=Angiotensin II; NP=Nitroprusside. Sample values are means±SE.
Table 4: EFFECT OF AII INFUSION ON HEART RATE (bpm)

| AII DOSE | C | E₁ | E₂ | R | P | F | n |
|----------|---|----|----|---|---|---|---|
| 0 (Control) | 242±13 | 246±10 | 245±8 | 252±9 | >.10 | .3521 | 3 |
| 20 ng·kg⁻¹·min⁻¹ | 225±12 | 210±9 | 240±20 | 280±20* | <.05 | 10.87 | 3 |
| 40 ng·kg⁻¹·min⁻¹ | 220±37 | 205±34 | 230±48 | 275±38* | <.05 | 5.401 | 3 |

C=Control; E₁=30 min; E₂=60 min; R=Recovery (90 min); AII=Angiotensin II; bpm=beats per minute. *Different from control value, p<0.05. Sample values are means±SE.
Table 5: EFFECT OF AII & NP INFUSION ON HEART RATE (bpm)
| DOSE | C | E₁ | E₂ | R | P | F | n |
|------|-----|-----|-----|-----|------|-------|---|
| 20ng·kg⁻¹·min⁻¹ AII + 3μg·kg⁻¹·min⁻¹ NP | 228±15 | 243±4 | 252±3 | 255±10 | >.05 | 3.942 | 3 |
| 40ng·kg⁻¹·min⁻¹ AII + 3–6μg·kg⁻¹·min⁻¹ NP | 217±14 | 238±19 | 231±16 | 255±21 | >.05 | 3.346 | 5 |
C=Control; E₁=30min; E₂=60min; R=Recovery(90min). AII=Angiotensin II; NP=Nitroprusside; bpm=beats per minute.
Table 6: EFFECT OF AII INFUSION ON PLASMA GLUCOCORTICOID CONC. (ng/ml)
| AII DOSE | C | E₁ | E₂ | R | P | F | n |
|----------|-----|-----|-----|-----|-----|------|---|
| 0 (Control) | 49±10 | 62±13 | 74±19 | 76±21 | >.10 | .914 | 4 |
| 10 ng·kg⁻¹·min⁻¹ | 51 | 64 | 64 | 51 | | | 2 |
| 20 ng·kg⁻¹·min⁻¹ | 53±19 | 51±13 | 42±16 | 45±16 | >.10 | 0.251 | 3 |
| 40 ng·kg⁻¹·min⁻¹ | 69±12 | 80±13 | 81±12 | 87±17 | >.05 | 3.474 | 4 |
Sample values are means±SE. C=Control; E₁=30 min; E₂=60 min; R=Recovery (90 min). AII=Angiotensin II.
Table 7: EFFECT OF AII & NP INFUSION ON PLASMA GLUCOCORTICOID CONCENTRATION (ng/ml)
| DOSE | C | E₁ | E₂ | R | P | F | n |
|------|-----|-----|-----|-----|-------|------|---|
| 20ng·kg⁻¹·min⁻¹ AII + 3μg·kg⁻¹·min⁻¹ NP | 70±10 | 73±10 | 74±8 | 59±13 | >.10 | .901 | 4 |
| 40ng·kg⁻¹·min⁻¹ AII + 3–6μg·kg⁻¹·min⁻¹ NP | 44±8 | 64±5** | 66±6** | 44±12 | <.005 | 7.26 | 5 |
Sample values are means±SE. C=Control; E₁=30min; E₂=60min; R=Recovery (90min). AII=Angiotensin II; NP=Nitroprusside. **different than control level—P<0.01.
Table 8: EFFECTS OF AII INFUSION ON PLASMA VASOPRESSIN CONC. (pg/ml)
| AII DOSE | C | E₁ | E₂ | R | P | n |
|----------|-------|-------|-------|-------|------|---|
| 0 ng·kg⁻¹·min⁻¹ | 2.4±.7 | 3.9±.9 | 2.9±1.0 | 1.8±.8 | >.10 | 3 |
| 10 ng·kg⁻¹·min⁻¹ | 11.2 | 7.7 | 3.7 | | | 2 |
| 20 ng·kg⁻¹·min⁻¹ | 1.8±1.2 | 2.8±1.1 | 3.8±1.6 | 9.2±8.1 | >.10 | 3 |
| 40 ng·kg⁻¹·min⁻¹ | 14.5±11.7 | 6.0±3.0 | 4.4±1.2 | 9.5±6.0 | >.10 | 4 |
Sample values are means±SE. C=Control; E₁=30min; E₂=60min; R=Recovery(90min); AII=Angiotensin II.
Table 9: EFFECTS OF AII & NP INFUSION ON PLASMA VASOPRESSIN CONC. (pg/ml)
| DOSE | C | E₁ | E₂ | R | P | n |
|------|-------|-------|-------|-------|------|---|
| 20ng·kg⁻¹·min⁻¹ AII + 3μg·kg⁻¹·min⁻¹ NP | 7.0±2.8 | 7.3±2.5 | 7.9±3.4 | 3.2±.9 | >.05 | 4 |
| 40ng·kg⁻¹·min⁻¹ AII + 3–6μg·kg⁻¹·min⁻¹ NP | 2.6±.7 | 4.4±1.3* | 3.0±.6 | 2.7±.7 | <.05 | 5 |
Sample values are means±SE. C=Control; E₁=30min; E₂=60min; R=Recovery(90min);
AII=Angiotensin II; NP=Nitroprusside. * different than control value—p<.05.
Table 10: EFFECTS OF HEMORRHAGE ON BP, HR, PLASMA AII, CORTICOSTEROID AND AVP CONCENTRATION
| | C | H3 | H4 | H5 | P | n |
|---------------|-------|-------|-------|-------|------|---|
| BP (mm Hg) | 73±4 | 69±4 | 67±2 | 63±2* | <.05 | 3 |
| HR (bpm) | 237±4 | 254±15 | 292±16 | 296±19 | >.05 | 3 |
| AII (pg/ml) | 24±5 | 31±8 | 42±16 | 47±15 | >.05 | 4 |
| Corts (ng/ml) | 60±8 | 58±7 | 67±10 | 69±8 | <.05 | 4 |
| AVP (pg/ml) | 4.0±.6 | 3.8±.6 | 15±3 | 49±13 | <.01 | 3 |
C=Control; EBV=Estimated blood volume; BP=Blood pressure; HR=Heart rate; bpm=Beats per minute; AII=Angiotensin II; Corts=Corticosteroids; AVP=Arginine vasopressin. Samples H3–H5 taken after % EBV removed. * different than control value—p<.05. Sample values are means±SE.
| Infusion Rate (ng·kg⁻¹·min⁻¹) | Agent | C | E₁ | E₂ | R | n |
|-------------------------------|----------|-------|-------|-------|-------|---|
| 0 | AII | 277 | 276±2 | 275±2 | 278±2 | 3 |
| 10 | AII | 281 | 282 | 283 | 283 | 2 |
| 20 | AII | 278±2 | 278±2 | 278±2 | 281±2 | 3 |
| 40 | AII | 276±4 | 277±3 | 276±3 | 278±3 | 4 |
| 20 + 3 | AII + NP | 280±2 | 280±1 | 281±1 | 284±1 | 4 |
| 40 + 3–6 | AII + NP | 276±3 | 276±3 | 278±4 | 280±3 | 4 |
See tables 1-10 for abbreviations.
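The one-way analyses of variance reported in Tables 1–10 compare the C, E₁, E₂, and R samples within each infusion group. The sketch below shows how the reported F statistics and degrees of freedom are obtained; since the tables give only means ± SE, the per-animal readings used here are hypothetical, and the helper name `one_way_anova` is an illustrative invention, not part of the thesis.

```python
# One-way ANOVA from raw group values (hypothetical data; the thesis
# tables report only group means ± SE, not individual readings).

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of sample groups."""
    k = len(groups)                       # number of groups (e.g. C, E1, E2, R)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: weighted squared deviation of group means.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: deviation of each value from its group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n_total - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical blood pressures (mm Hg) for three rabbits at C, E1, E2, R:
samples = [[75, 70, 80], [88, 98, 78], [85, 93, 77], [71, 75, 67]]
F, df_b, df_w = one_way_anova(samples)
```

The resulting F is then compared against the F distribution with (df_between, df_within) degrees of freedom to obtain the P values listed in the tables.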
FIGURE 1: Experimental Set Up. [Diagram: infusion pump, rabbit in box, transducer, polygraph.]
FIGURE 2: AII Infusion Protocol. AII = angiotensin II, NP = nitroprusside, AVP = vasopressin, CORTS = glucocorticoids. AVP, AII, and CORTS refer to assays applied to the plasma samples.
[Timeline diagram: AII or AII + NP infused from 0 to 90 min. Sample C (control) drawn at −30 min, E1 at 30 min, E2 at 60 min, and R at 90 min. AVP and CORTS are assayed at every sample; AII is assayed at C, E2, and R.]
FIGURE 3: The change (Δ) in mean arterial pressure from control (mean ± SE) during infusion of two doses of angiotensin II (AII) either alone or in combination with nitroprusside. * indicates that blood pressure responses to infusions 3 and 4 differed significantly (p<0.05, unpaired t test).
[Plot: Δ blood pressure (mm Hg) vs. time. Curves: 1: 20ng·kg⁻¹·min⁻¹ AII, n=3; 2: 20ng·kg⁻¹·min⁻¹ AII + 3µg·kg⁻¹·min⁻¹ NP, n=3; 3: 40ng·kg⁻¹·min⁻¹ AII, n=5; 4: 40ng·kg⁻¹·min⁻¹ AII + 3–6µg·kg⁻¹·min⁻¹ NP, n=5.]
FIGURE 4: Effect of 20 ng·kg⁻¹·min⁻¹ angiotensin II (AII) infusion either alone or in combination with 3 μg·kg⁻¹·min⁻¹ nitroprusside (NP) on plasma corticosteroid concentration. Values are means ± SE.
[Plot: plasma corticosteroid concentration (ng/ml) vs. time (min).]
FIGURE 5: Effect of 40 ng·kg⁻¹·min⁻¹ angiotensin II (AII) infusion either alone or in combination with nitroprusside (NP) on plasma corticosteroid concentration. Values are means ± SE. * Indicates values significantly different than controls (p<0.005, n=5).
[Plot: plasma corticosteroid concentration (ng/ml) vs. time (min).]
FIGURE 6: Effect of 40 ng·kg⁻¹·min⁻¹ angiotensin II (AII) + 3 μg·kg⁻¹·min⁻¹ nitroprusside (NP) infusion on plasma AVP concentration. Values are means ± SE. * Indicates value significantly different than control (p<0.05, n=5).
[Plot: arginine vasopressin concentration (pg/ml) vs. time (min).]
Suggestions for implanting 18 or 20 gauge catheters (Quik-Cath, Intravascular Over-the-Needle Catheter, Travenol) into the central ear artery of the rabbit.
1. Shave ears the day before the catheterization.
2. Attempt arterial catheterization before venous catheterization when both are needed. Arterial catheterization has a much higher failure rate.
3. Infuse lidocaine without epinephrine subcutaneously on both sides of the catheterization site.
4. Avoid overuse of lidocaine, since it tends to vasoconstrict the ear in local areas of subcutaneous infusion.
5. Use a 25-gauge needle to infuse lidocaine.
6. Avoid puncturing small but visible blood vessels lateral to the central artery. Hematoma formation also is associated with central artery constriction.
7. Prepare heparinized saline filled tubing and heparinized saline flush solutions before catheterization. Also prepare cloth tape strips for securing the catheter.
8. 18-gauge catheters seem to perform better in providing consistent blood pressure readings but 20-gauge may be necessary for catheterization of small or somewhat constricted vessels.
9. Attempt to maximize arterial vasodilation immediately before inserting the catheter. This can be done by gently stroking the ear along the course of the artery or applying slight pressure with thumb and forefinger on both sides of
the ear and sliding along the course of the artery toward the base of the ear, emptying the artery. When fingers are removed in the latter maneuver, the artery often refills to a larger diameter.
10. Another technique for increasing arterial diameter is that of flicking the ear several times, which usually causes transient arterial engorgement. However, this is short-lived and the maneuver tends to irritate the rabbit.
11. The catheter should be inserted into the skin bevel up directly over the artery. Apply traction to the skin with the thumb and forefinger of the opposite hand.
12. When the catheter enters the arterial lumen, a flashback of blood will occur into the transparent chamber of the catheter needle. At this time the angle of the catheter should be adjusted slightly upward so that the needle does not pass through the opposite side of the vessel.
13. The catheter assembly should then be advanced into the vessel until the shoulders of the teflon catheter have entered the vessel (1-2 mm). The teflon catheter may then be advanced over the needle approximately 3 mm, burying the point. Then both the needle and catheter should be advanced another centimeter into the vessel for improved security. If the catheter is in the lumen, it should advance easily.
14. Immediately after removing the needle, flush catheter with heparinized normal saline and secure with cloth tape.
15. Venous catheterization may be done in a similar manner as arterial. Usually unilateral administration of subcutaneous local anesthesia is sufficient.
APPENDIX B: EXPERIMENTAL DATES AND RANDOMIZATION
PROTOCOL: CONTROL INFUSIONS (5% DEXTROSE IN WATER)
| DATE | RABBIT | RANDOMIZED | EXCLUDED |
|------------|--------|------------|----------|
| 8/12/86 | 873 | * | |
| 10/3/86 | 931 | * | |
| 11/14/86 | 102 | * | |
| 12/23/86 | 912 | * | |
| 12/29/86 | 228 | * | |
| 1/5/87 | 443 | * | |
| 4/8/87 | 690 | | |
| 5/4/87 | 711 | | |
PROTOCOL: 10 NG·KG⁻¹·MIN⁻¹ ANGIOTENSIN II INFUSIONS
| DATE | RABBIT | RANDOMIZED | EXCLUDED |
|------------|--------|------------|----------|
| 11/13/86 | 952 | * | |
| 12/19/86 | 951 | * | |
PROTOCOL: 20 NG·KG⁻¹·MIN⁻¹ ANGIOTENSIN II INFUSIONS
| DATE | RABBIT | RANDOMIZED | EXCLUDED |
|------------|--------|------------|----------|
| 9/2/86 | 311 | * | |
| 11/18/86 | 951 | * | |
| 1/2/87 | 446 | * | |
| 1/15/87 | 443 | * | |
| 1/19/87 | 445 | | |
PROTOCOL: 40 NG·KG⁻¹·MIN⁻¹ ANGIOTENSIN II INFUSIONS
| DATE | RABBIT | RANDOMIZED | EXCLUDED |
|------------|--------|------------|----------|
| 1/20/87 | 449 | | |
| 1/21/87 | 443 | | |
| 1/30/87 | 449 | | |
| 4/24/87 | 776 | | |
| 4/25/87 | 711 | | |
APPENDIX B, CONTINUED.
PROTOCOL: 20 NG·KG⁻¹·MIN⁻¹ ANGIOTENSIN II PLUS
3 MICROGRAMS·KG⁻¹·MIN⁻¹ NITROPRUSSIDE INFUSIONS
| DATE | RABBIT | RANDOMIZED | EXCLUDED |
|----------|--------|------------|----------|
| 11/11/86 | 951 | * | * |
| 12/7/86 | 952 | * | |
| 12/16/86 | 101 | * | * |
| 1/15/87 | 449 | * | |
| 5/5/87 | 736 | | |
| 5/7/87 | 738 | | |
PROTOCOL: 40 NG·KG⁻¹·MIN⁻¹ ANGIOTENSIN II PLUS
3-6 MICROGRAMS·KG⁻¹·MIN⁻¹ NITROPRUSSIDE INFUSIONS
| DATE | RABBIT | RANDOMIZED | EXCLUDED |
|----------|--------|------------|----------|
| 1/12/87 | 251 | * | * |
| 1/22/87 | 448 | | |
| 1/27/87 | 449 | | |
| 3/27/87 | 690 | | |
| 4/15/87 | 779 | | |
| 4/20/87 | 780 | | |
DEVIANT EVOLUTION
Ashram Kain
DEVIANT EVOLUTION
A D20 MODERN SETTING
This game contains references to graphic violence and war, supernatural forces, magic, and religion. Parents are encouraged to review this game to make sure it is right for their family.
Deviant Evolution
Written and designed by Ashram Kain
Cover by Ashram Kain
Illustrations by Ashram Kain
# TABLE OF CONTENTS
**PREFACE** .......................................................................................................................... 1
**A BRIEF HISTORY OF GHESTAL** .................................................................................. 2
**THE WORLD STAGE: ACREAN ERA 1964** ..................................................................... 3
- **ALLIED PROTECTION OPERATIONS COMMAND** .......................................................... 3
- **DRACIAN FEDERATION** ................................................................................................. 4
- **THE DEVIANT CORPS** .................................................................................................... 5
- **THE SOCIALIST REPUBLIC OF SIDONIA** ....................................................................... 6
**GEOLOGY AND ECOLOGY** .............................................................................................. 7
- **GEOGRAPHY** .................................................................................................................. 7
- **ASTROGRAPHY** .............................................................................................................. 8
- **MONSTERS** ................................................................................................................... 8
- **FLORA** .......................................................................................................................... 8
**MAGIC** ............................................................................................................................ 10
- **MANA** ........................................................................................................................... 10
- **LENZ** ........................................................................................................................... 12
- **MIASMA** ....................................................................................................................... 14
**DEVIANT CHARACTERS** ............................................................................................... 15
- **CLASSES** ..................................................................................................................... 16
- **BACKGROUND PROFESSION** ..................................................................................... 17
- **FEATS** ........................................................................................................................ 19
- **SKILLS** ....................................................................................................................... 25
**EQUIPMENT AND TECHNOLOGY** ............................................................................... 27
- **STARTING EQUIPMENT** ............................................................................................... 28
- **CONVENTIONAL EQUIPMENT** .................................................................................... 28
- **MANATECHNOLOGY** .................................................................................................... 36
- **LENZ HARNESSES** .................................................................................................. 36
- **CASTERS** ................................................................................................................... 37
- **SYNTECH EQUIPMENT** .............................................................................................. 38
- **MECHANETICS** .......................................................................................................... 41
**VEHICLES** ..................................................................................................................... 46
**LENZ** ............................................................................................................................ 47
- **SPELL LENZ** .............................................................................................................. 48
- **ABILITY LENZ** ............................................................................................................ 57
- **POWER LENZ** ............................................................................................................. 58
- **DRAGON LENZ** .......................................................................................................... 61
- **BLACK LENZ** .............................................................................................................. 62
- **LENZ FACETS** ............................................................................................................ 64
**ADVANCED & LEGENDARY CLASSES** ........................................................................ 66
- **ADVANCED CLASSES** ............................................................................................... 66
- **LEGENDARY CLASSES** .............................................................................................. 76
**BESTIARY** .................................................................................................................. 87
In the early nineteenth century, the reparations demanded by the great nations following the Attican War drove a state of severe economic imbalance into one of severe depression. It should be no surprise that the Kingdom of Ardin would fall victim to the violent rhetoric of its young prince. In a few short years, the crown transformed the failing state into an industrial super-power. Ardin was a nation with a rich history of industry and the sciences, and was the birthplace of modern nuclear physics. As such, it was the first to enrich and develop weapons-grade nuclear material. Fearing the devastating power of atomic weaponry in the hands of the would-be empire of Ardin, the enemies of the Imperial Alliance rushed to create their own bombs first. Hoping that the threat of total annihilation would be the End War, Greater Ridmar used their atomic bombs.
Imperialistic aggression escalated to atomic war. Fueling the conflict was the energy known as Mana, and its refined form known as Lenz, both used to unleash a devastating power: magic. In the aftermath of the war, the nightmare birthed by the bombs was realized. The consequence of mana and radioactive fallout was the Miasma – cursed clouds that brought not only death, but undead hordes – the Horrors.
In the face of a world ruined, the remaining nations joined forces to create the Allied Protection Operations Command – A.P.O.C. – to defend against the Horrors and contain the Miasma. In order to survive, humanity was forced to take refuge in ever-expanding walled fortresses, many of these city-states under the A.P.O.C.'s authoritarian rule. As the ashes settled, the wild frontier was reclaimed by nature.
Twenty years have passed since the bombs fell. The constant threat from monsters, horrors, and miasma has given rise to a unique class of para-military mercenaries who make their living fighting back the darkness and those who would exploit the ruins and wilderness of the old world:
The Deviants.
A BRIEF HISTORY OF GHESTAL
Abstract from the Journal of Natural Sciences
“Mutation as an answer to the Caldaran Bloom”
Dr. Galid Minswin
The geological epoch known as the Caldaran Bloom continues to pose a significant problem to the natural sciences. While Wildharn's theory of natural selection is demonstrable, the sudden eruption of biodiversity during such a short period of time between 18,000 and 25,000 years ago challenges what we know about evolution and the emergence of species. In light of the ecological changes in the northern hemisphere in the aftermath of the Great War, our study suggests that Mana-fueled mutation is a likely cause for the sudden emergence of new flora and fauna, including humans. However, this still does not explain exactly how hominids emerged with no known common ancestor.
While geologists and archeologists estimate that Ghestal is between 5 and 8 billion years old, with a rich fossil history, the biodiversity – specifically among mega-fauna – only truly explodes between 18 and 25 thousand years ago with the emergence of several thousand species, including hominids. This has given rise to many conspiracy theories regarding the advent of life. Though there are many debates as to the truth of early history, it is undisputed that an apocalyptic celestial event took place between twelve and ten thousand years ago (10th to 8th millennium EC). The impacts and following ice age ended virtually every early culture, save for a few scattered tribes of survivors.
Preceding this event there were many hominid subspecies in the fossil record, a number of which some believe survived the period of destruction. However, this hypothesis is not favored in modern archeological science. Still, legends of dwarves and elves persist in modern folktales and popular culture. Although so much was lost culturally in the north following the war, the royal family of Ridmar claimed to be of Elven descent, and were thus named the Elf-Kings of Ridmar.
The name Ghestal comes from the ancient Amonath religion, being the name of the earth dragon. The Amonath were one of the greatest cultures of classical antiquity, and are considered the originators of northern civilization. This culture gave rise to early philosophy, the sciences, and written language. As such, many of the geological and celestial names echo the ancient Amonath language. However, between the 6th and 17th centuries it was the Acrean religion that held sway over most of the northern world.
Born from a tribal movement in the 4th century Hassisan desert, the Acrean faith believed in the teachings of a messianic figure reputed to have supernatural powers. In the late 17th century, the brilliant mathematician and natural philosopher Saican Tewon showed that the figure had been an early Lenz user, and a fraud. As one of the fathers of modern Lenz science, Tewon is considered one of the most important scientists in history. He discovered and codified the use of the Lenz Arts as they are known today.
In the north, the natural sciences had become paramount to progress and gave birth to an industrial revolution. Driven by the struggle to establish a dominant economy, imperialistic competition swiftly turned to hostilities. In the early portion of this century the complicated economic and political climate led to the Attican War, a military dispute over the succession of the Attican princess that eventually engulfed the globe. In the aftermath, the great nations imposed strict economic restrictions on the Ardinian Republic, which was seen as responsible for the war. Less than a generation later, the Prince of Ardin had created a hyper-nationalistic movement and plunged the north into a second great war.
Nations rushed to build Mana powered super-weapons, and soldiers augmented by Lenz.
THE WORLD STAGE: ACREAN ERA 1964
From “The End War”
Professor Ishaan Zan
In understanding the past one must always remain aware that history is often told from the stance of the victorious. In the years following the war, four major political bodies emerged from the ashes and destruction to make up the current world stage, along with a handful of independent states and organizations that fill in the gaps.
ALLIED PROTECTION OPERATIONS COMMAND
In the first year following the Empire of Ardin's reprisal on Greater Ridmar, the Great War came to an abrupt end. Both the Imperial Alliance and the Union of Democratic Nations experienced severe Miasma activity near the sites of the bombs, and then came the first wave of Horrors. Originally, the A.P.O.C. was formed as an intergovernmental peacekeeping commission to help relocate and protect refugees, and secure areas affected by the Miasma. Four years later, Greater Ridmar, Attica, and Gaidia – the largest founding members – collapsed, largely due to Miasma and Horror activity. Faced with a crisis on par with the initial aftermath of the bombs, the Commissioner General formed the Directorate and assumed control of those nations' remaining military and industrial facilities. By 1952, the A.P.O.C. represented the largest governmental organization, with many city-states willingly surrendering autonomy for the sake of A.P.O.C. protection.
Today the A.P.O.C. Directorate governs virtually all of the major City-Structures. It maintains and manages the railways, the defense systems, the containment systems, and the military police. Each City-State, with the exception of Cyport, has a regional governor, elected by the citizens but answering to the Directorate. While the Directorate claims that it is an intergovernmental body, it is closer to a unified world government. For all intents and purposes, it is a socialist dictatorship under the command of the Commissioner General, who is elected by the Council. The Council was once made up of the representatives or heads of state of the sovereign members; now, however, it is largely indistinguishable from the Directorate, which is composed of members of the military, private industry, and a few political leaders.
The current Commissioner General is Yosh Kitase, who has held the position for twelve years following the death of the previous Commissioner General. Under Kitase, several distinct organizations have become cornerstones within the A.P.O.C. and hold significant power and autonomy. The first is the Teishin Shudan, the secret police, an organization directly descended from the intelligence apparatus of the Empire of Jurai, one of the major southern supporters of the original APOC. The second is SOLDAT – Special Operations Logistics Division Assault Teams – the most elite military force of the APOC. SOLDAT arose from the super-soldier programs of many of the APOC member states following the war. Darker whispers have it that the augmented warriors of SOLDAT were born from the incredible abilities demonstrated by veteran Deviants. Nonetheless, Deviants end up taking many of the missions and operations that SOLDAT either can't, or won't.
Membership in the APOC Guard (the expeditionary military and civil defense force) is compulsory for citizenship in the protectorate. This also means that some Deviants have a history of military service in the Guard, or even SOLDAT or the Teishin Shudan, although this is extremely rare. Because Deviants occupy a unique place socially, the protectorate does not smile upon citizens or elites who elect to become Deviants. That said, the APOC could not effectively operate in the frontier without the aid of these soldiers of fortune, and could not meet its long-range strategic and tactical objectives without them.
DRACIAN FEDERATION
Representing a number of border states that were forced to survive without APOC protection – and rule – the Dracian Federation was officially founded in 1949, when fourteen colonial city-states allied to repel an invasion of horrors in a series of battles called the Black Wars. These city-states held a significant volume of essential resources but lacked manpower, and were forced to rely heavily on mercenaries; this event caused the APOC to legislate the growing deviant population – in essence creating the Deviant Corps.
Today the Dracian Federation claims to be the last bastion of democracy. This may be true, but the provincial walled cities lack the standard of living of the industrial super-cities – of course, they also provide huge amounts of crops and livestock to those cities in exchange for power and machinery, so who is to say. Ironically, in their independence, the Dracian Federation cities are even more reliant on the Deviant Corps than APOC states, as they lack the military-industrial complex to support long-standing expeditionary military actions. The core of the Dracian Federation's peacekeeping force in the frontier are the Martial Rangers, roaming lawmen who protect the outer communities of the federation.
This makes the frontier of the federation a dangerous place, where even farmers and herdsmen often have to fend off the lesser horrors and monsters. The APOC paints these outlying communities as lawless places where the federation's Marshals rarely visit, leaving it to the local communities to enforce the laws and protect their own. The fiction about the hearty herdsmen and vengeful gunslingers of the Dracian Frontier is actually quite popular in the APOC. It is not exactly true – most of the outer communities are regularly patrolled by Martial Rangers and Deviant mercenaries. Yet even regular patrols cannot hope to secure such a huge land border.
The vast geography of the Federation is due to its nature as a conglomerate of city-states. As a government, the federation is a member democracy, with each of the city-states electing a local mayor and a representative to the Federal Senate. The seat of the Senate is Dracia, the largest city, as cosmopolitan and advanced as many in the APOC. Dracia is the exception, not the rule, however; most of the city-states and frontier communities are smaller agrarian settlements. Unlike the APOC, which focused on securing key areas and then building rapid transit to connect the super-cities, Dracia has pushed to secure large areas of land – easily three times the total land area of the APOC. However, the area the APOC actually covers is still considerably more than that of any other political body.
The current President of the Federation is the charismatic Anaria Zan Tessa, known as the Maiden of Revolution – she led the reformist party in her youth and was the one who negotiated the alliance of the original federation. Currently in her seventh term as president, she is rumored by many to be a dictator for life. While this may not be far from the truth, her reign has cemented a lasting security for her people, and that gives her considerable power and clout in the current environment. The Senate is made up of representatives of the member city-states, but the current political climate is tense, as many of the older senators and representatives are aging out or retiring, leaving many young and reckless new leaders.
THE DEVIANT CORPS
In the final years of the Great War, outcasts, drifters, and social pariahs began enlisting in mercenary corps in large numbers. These deviants often could not enlist in their nation's military, and could make better money as soldiers of fortune. Following the bombs, the corps turned to rescue and containment to help the affected areas, and were hired en masse by the smaller communities for protection. Outcasts became heroes, and soon the constant battles against horrors began to reveal extraordinary powers in many of these soldiers. Fearing this army of super-soldiers, the A.P.O.C. officially branded the aberrations Deviants.
As the A.P.O.C. began to impose strict regulations and legislation on the "Deviant" mercenaries, the largest outfits joined together to establish the Joint Command Corps. By 1952 the Corps were colloquially known as Deviants even in the frontier, a result of an unsuccessful A.P.O.C. campaign to discredit them, and the JCC restructured into the Deviant Corps, embracing the motley-crew image and building a command structure to best help them operate in the field.
Modern Deviants come from all walks of life – some are in it for the money, others for a place to belong, some simply revel in the danger and excitement of frontier life. Being a Deviant is about more than simply fighting for a living, the Corps have come to represent freedom, power, and to a great many – hope. The hope that there is a chance to fight against the impossible monsters and horrors, the hope that men and women can rise up against the worst the world has to offer, and win.
Deviants are paid on a per-assignment basis, and form small cells – units that operate autonomously, but answer to the Corps Command. Each Deviant is ranked in order of power and seniority: E, D, C, B, A, and R; above R there are either S ranks 1 through 10, or X ranks 1 through 10. The highest-ranking Deviant in a cell dictates what missions that unit can accept. S ranks are exclusive to the command infrastructure of the Deviant Corps. These Deviants rarely take assignments, and operate the organization as a whole. The X ranks are reserved for high-level field unit commanders. These are the Deviants sent in to accomplish the impossible.
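For game masters tracking mixed cells, the rank ladder above can be sketched in code. This is an illustrative helper only: the rank names and their ordering come from the text, while the numeric encoding, the equal weighting of the S and X tiers, and the helper names are hypothetical.

```python
# Hypothetical helper for comparing Deviant ranks as described above.
# The numeric encoding is an assumption for illustration only.

BASE_RANKS = ["E", "D", "C", "B", "A", "R"]   # lowest to highest

def rank_value(rank):
    """Map a rank string ('E'..'R', 'S1'-'S10', 'X1'-'X10') to a sortable number."""
    if rank in BASE_RANKS:
        return BASE_RANKS.index(rank)
    tier, level = rank[0], int(rank[1:])
    if tier in ("S", "X") and 1 <= level <= 10:
        # S and X both sit above R; the text treats them as parallel
        # tracks, so here they are simply given equal weight.
        return len(BASE_RANKS) + level
    raise ValueError(f"unknown rank: {rank}")

def cell_leader(cell):
    """Return the (name, rank) member whose rank dictates the cell's missions."""
    return max(cell, key=lambda member: rank_value(member[1]))

party = [("Ash", "B"), ("Mira", "X3"), ("Joss", "R")]
```

With this encoding, `cell_leader(party)` picks Mira at X3, since any S or X rank outranks the base ladder.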
THE SOCIALIST REPUBLIC OF SIDONIA
During the Attican War the Empire of Sidonia, located in the northern polar region of Resudas, fell to a violent and bloody socialist revolution led by a mysterious figure known only as Pavov. As the Great War escalated, the Socialist Republic of Sidonia was born at the height of the conflict. In the aftermath of the bombs, refugees flooded the republic. At this time, Pavov vanished from the scene, and his right-hand lieutenant became the de facto autocrat of greater Sidonia. Suddenly, the nation began to topple and conquer its neighbors and engulf the tide of refugees. Those capable of working were put in camps; those too sick or old were killed. By 1950 the SRS had transformed from an agrarian and fishing culture into an industrial powerhouse with factories and assembly lines churning out products for the People’s Army.
Ironically, the SRS has faced far more, and worse, Miasma and Horror events as a product of this expansion, forcing it to continually develop and escalate its weapons technology. Sidonian Casters are reputed to be extremely powerful, and the SRS produces significantly more of them than the APOC. SRS Dragoons come in a number of frightening varieties, and these exotic suits of power armor form the foundation of the Sidonia People’s Army and are often seen patrolling the coastline. Beyond that, very little is known of the SRS; the few defectors that have made it out claim that every city is an industrial monster, and that every man, woman, and child is expected to work in the communist state.
The leadership of the republic is the Workers United Reform Majority. At the head of WURM is Arkov Zoul – the right hand of the socialist revolutionary Pavov, and now president-for-life of the People’s Nation. Zoul is a terrifying man who has sacrificed countless lives to build the socialist superpower and conquer every nation on the northern continent. Separated from the APOC and Dracia by the Borean Ocean, most of the frontier, and the badlands, Sidonia’s isolation has allowed Zoul to operate with complete impunity.
The horrors and marvels that Zoul’s Scientific Directive has created include machines that can burn miasma for energy, soldiers that are more machine than human, and weapons of Lenz implanted inside the body. More than one rumor persists that they have performed hideous and violent experiments on children in hopes of discovering the source of preternatural powers.
GEOLOGY AND ECOLOGY
GEOGRAPHY
The world of Ghestal is known to have a mean radius of 6,768 km, and an orbital period of 290 days, each 26 hours long. The average surface temperature ranges from -40 degrees in the polar regions to as high as 70 in the worst of the badlands. The orbital period is broken into ten months corresponding to the spin of the geosynchronous moon. Generally, the monthly period is only broken apart by the day, numbered I through XXIX. The ten months are based on the ten elements in classical Amonath mythology: Stine, Sole, Met, Wass, Frie, Strumer, Ziet, Dun, and Raum.
Ghestal has two major land masses. The first is Ghestallandis – a vast stretch of land that encircles the globe. Technically Ghestallandis is divided into three continental masses, and its equatorial region is a nearly impassable desert known colloquially as the Badlands, mapped into three major deserts. This impassable terrain has prevented many overland journeys, and eventually led to the advent of airships to traverse the region. The second is the northern polar continent of Resudas. While there is a very large island archipelago to the south, Jurai, it is not generally considered a continental mass.
In the north, one of the most important regions is the Middlands. Here the industrial revolution took place, and the major powers rose during the colonial period. These nations developed Manatech and the first artificial Lenz, but also fought the worst wars in history. Today, much of the Middlands lies abandoned or destroyed save for a few outposts of the APOC. To the east of the Middlands is the Wilderun, also known as the Dragon Lands. This region of jungles, deep canyons, and pillar-like mountains is separated from the Middlands by a field of high crags that is virtually impossible to cross by land. However, the region has an abundance of mana and resources, which led to many attempts to colonize it during the early eighteenth century.
To the southwest of the Middlands are the plains and steppes of the Acuran region. Heavily colonized by Ridmar in the 16th century, this area is harsh scrub punctuated by deep green valleys. Separated from the Middlands by a series of deep canyons in a seismically unstable desert, it was thought impossible to reach until the advent of airships. The colonies seceded during the Attican Wars, and became major arms manufacturers during the Great War – what they lacked in mana they made up for in metals and petroleum.
Calacairn is the region south across the badlands from the Middlands, an enormous bay with one island, Cyport. Believed to be the birthplace of human civilization, Cyport has been inhabited for eons, and prior to the war was part of the kingdom of Al’Saidim. During the Miasma winter of ’45, tens of thousands of refugees fled from the north, and the Jurai military occupied the city. To the east of Calacairn is the Jurai Archipelago, home to the Jurai Empire, an otherwise feudal nation which underwent a rapid industrial revolution just prior to the Attican Wars to become one of the greatest industrialized nations in the world. As the economic and political powers in the north collapsed, it was arguably the Jurai Empire that fared the best. When the APOC began seizing control, its deputy Consul was deeply connected in the Jurai government, and may have facilitated the geopolitical coup.
Ghestal has two oceans – the Borean Ocean to the north, which encircles the planet and separates Ghestallandis from Resudas, and to the south the much larger Argean Ocean, filled with islands. There are nine inland seas on Ghestal, thought to have formed in recent geological history.
Historically, Baerishim marked the latitudinal mean on most northern maps; this remains mostly true today, although the center point is often said to be Cyport. Ironically, both cities share the exact same latitude, which means the moon, a geosynchronous satellite, remains over both at all times, albeit in different regions of the sky.
**ASTROGRAPHY**
The first to calculate the orbits of the celestial bodies and the force of gravity, Saican Tewon observed that the planet rests in the middle of a great arch of a luminous galaxy, named for the Amonath celestial dragon Kaisierious. The star of the solar system is a bright yellow dwarf, orbited by six observed planets.
The most prominent feature of the night sky is the moon, named Armos, though usually simply referred to as the moon. The position of the moon poses significant problems for astronomers: it is both geosynchronous and stationary, orbiting at an exact and specific period to maintain its position directly above Cyport. Telescopes have revealed that the cratered surface is also marred by what many geologists believe are riverbeds, or cracks in the surface. Interestingly, it is believed that the moon is large enough to sustain an atmosphere – if a thin one – and that this is what gives the moon its blue hue.
Ghestal has two other lesser satellites in large elliptical orbits: Renom, which appears in the night sky for three days out of every fourteen, and Raos, which disappears behind Armos for a day on the 29th day of the month. Amonath mythology had a host of stories associated with the celestial bodies and constellations, and a kind of zodiac built up around the elements and the dragon gods. These names are particularly common throughout the north, as in the eighth to eleventh centuries there was a resurgence of classical culture under an Ardinarin (the predecessor to Ardin) king.
In the equatorial region an important time of day is the Second Night, when the sun is blocked for a brief period near the middle of the day as it passes behind the moon. The symbolism of the eclipse is one of the most important to the tribes of the badlands.
**MONSTERS**
An interesting aspect of the ecological balance of Ghestal is the presence of mutated super-predators: monsters. When fauna are exposed to excessive mana, it is not uncommon for them to take on extraordinary abilities and mutations, also becoming increasingly hostile and predatory. This has resulted in an interesting balance, as these creatures only seem interested in hunting humans. Some have even gone so far as to say that they are antibodies created by the mana stream, but that is non-scientific nonsense.
In addition to the extraordinary abilities granted by mutation, most monsters eat Lenz when they can, junctioning the material to gain magical abilities. One of the most common traits of monsters is their immunity to mundane weapons. Among hunters and Deviants this has resulted in a resurgence of Syntech-enchanted melee weapons made explicitly for fighting such creatures.
**FLORA**
Many of the plants, crops, and herbs known in the world would be familiar, with a few exceptions. First among them is a crop of extreme industrial value: Crystal Trees. An important natural resource, Crystal Trees can grow in most places, but large orchards are found almost exclusively in the far south or in thickets around the inland seas. Crystal Trees are biologically distinctive – not exactly plant, fungus, or animal. The trees have a myriad of subspecies, but all bear the same general life cycle and properties. The Crystal Tree gets its name from its silica-aragonite crystal “wood”, a lattice of transparent material very similar in structure to nacre. The crystal itself is not the actual plant; rather, the gummy sap in the hollow parts of the tree is the organism. The crystal wood is like the shell of a mollusk, grown to protect the organism and help it collect sunlight and grow.
Crystal wood is clear and usually colorless, but hues of blue, red, yellow, and green have been seen in some species. The wood is light – in fact slightly lighter than good pine – but it is very hard, and slightly brittle. Carved more like stone than wood, crystal wood makes for excellent construction material and natural glass. Crystal wood has a Hardness of 7 and 5 hit points per pound.
The gummy sap from inside the crystal is another useful commodity. When exposed to air and heat the organism quickly petrifies, resulting in a very strong resin. Crystal Tree resin is an amazing substance, harder and lighter than most plastics but with very similar properties. The resin was widely used as the skin for the first airships and low-speed aircraft. Even today, versions of the organism created in the lab are used to produce specific varieties of the resin for manufacturing purposes. Resin has an average Hardness of 8 and about 10 hit points per square foot at a quarter of an inch thick.
Both the wood and the resin sap are almost impervious to heat and make excellent insulation. Their exceedingly high flash points mean crystal wood and resin have three times the usual hardness versus fire and heat, and cut crystal wood is often used to make a light and strong glass.
MAGIC
Dr. Cassious Arn
Socialist Republic of Sidonia
Report on the Progress of Project Hybrid
The presence of iridium and palladium elements in the sediment layers at the excavation site indicates beyond doubt that significant geological destruction occurred between ten and twelve thousand years ago. So far, the teams have found fourteen complete or near-complete erect hominids. Tissue analysis shows high levels of anomalies that contain charged particulate Mana crystallization, similar to the project’s original test subject. However, this ratio is significantly lower than that of the sample found by the Area 7 excavation. Thus there is no other logical conclusion than to assume that, to some degree or another, all humankind possesses the anomalous genes, and bears a genetic relationship to the Area 7 specimen. However, no human is able to induce the effects in the Mana stream that Specimen 13 demonstrated, and no cloning attempt has yet succeeded. Thus it is inevitable that we proceed with the Hybrid operation as outlined by the Area 7 research director.
A thousand years ago, Mana and Lenz were considered useless natural phenomena, as dangerous as lightning and ten times as common. Though Mana Pools could heal wounds, they were touchy and prone to eruption, and so were things to be avoided. As for Lenz, in its unaltered state it was very beautiful, and used in jewelry and expensive ornaments – a rare, but otherwise useless material. Today Manatech is one of the most important technological and industrial powers in the world.
MANA
In its natural state Mana is invisible, intangible, and everywhere. It is in every breath, every stone, and every tree – in all things. Because it is ever present, Mana Reactors can draw in and condense the eldritch current into its liquid state for fuel, or enrich it further to produce Syntech Lenz. As a fuel, mana can produce limitless energy; in a Syntech machine, mana can be transformed into a number of arcane effects and tools. Those skilled in Lenz Arts can tap this great current directly, using a Junctioned Lenz to release powerful spells.
Refined Mana is a luminous, iridescent liquid that sheds a constant dim glow. The color shifts in ranges of blue and green with occasional moments of violet or purple. In its liquid state, mana can only be contained in special receptacles, usually of glass or steel tempered in mana. Mana refined or siphoned from a Mana Pool can burn when exposed to a specific electrical current, producing witchfire. In the early 19th century this was used in primitive steam and combustion engines, but it is a highly inefficient use of the resource.
Mana also has many dangers. It is volatile and toxic: drinking refined Mana, or being submerged in Mana for under a minute, calls for a DC 30 Fortitude save to avoid contracting Mana Poisoning. Those who save become nauseous, disoriented, dizzy, and weak (reduce all stats by 6, minimum 1) for 1D8 hours. Those who fail the saving throw become disoriented, drooling invalids (all stats become 2, movement is one fourth). This condition lasts 2D4 weeks, after which the victim makes the save again; those who succeed recover overnight, those who fail are dead. On a natural 3 or under, or should the poor dreg be in a pool for more than five minutes or so, the victim mutates. At first the mutation is subtle, seeming exactly like the lethal Mana Poisoning, but within 24 or so hours the mutation fully blooms and the victim becomes a monster.
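For GMs automating exposure, the resolution above can be sketched as a single function. This is only an illustrative sketch, assuming a standard d20 Fortitude save where `fort_bonus` is the victim’s total Fortitude modifier:

```python
import random

def mana_poisoning_save(fort_bonus, rng=random):
    """Resolve one exposure to refined Mana (DC 30 Fortitude save)."""
    roll = rng.randint(1, 20)          # the d20 roll
    if roll <= 3:
        return "mutation"              # natural 3 or under: the victim mutates
    if roll + fort_bonus >= 30:
        return "sickened"              # success: -6 to all stats for 1D8 hours
    return "invalid"                   # failure: all stats become 2 for 2D4 weeks
```

A victim submerged in a pool for more than five minutes would bypass the roll entirely and mutate.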
Liquid mana is commonly sold in fuel canisters of varying sizes. These standardized cartridges are used in weapons, vehicles, and just about everything that uses mana as an energy source. Many machines use the energy created by Lenz to exploit physical phenomena from internal combustion to electrical generation. This has allowed for a number of significant technological advances.
Mana Pools
Though it flows through the world in a great current, on occasion, in a place of beauty or serenity, the Mana coalesces, running together. The Mana literally puddles on the ground, in a tree hollow, in a rock indent, or wherever. This breathtaking sight glows with its own light, emitting rainbow hues and slow, multicolored fragments of light, often called Mana fairies. This is a Mana Pool.
Some are as small as a drop of dew, others like iridescent lakes. It is widely known that drinking a cupped handful of fresh Mana is like a full night’s rest: it recovers any light injuries, eases pain, and cures any natural illness. Liquid Mana can only be contained by living tissue, or in jars or bottles made from or coated inside with Lenz – an expensive item – as it seems to leak out of anything else. However, 1D4 rounds after it is removed from the pool, naturally condensed mana takes on the properties of Refined Mana.
Mana pools are innately temporary. At the end of a pool’s life its fairies begin to shoot off rapidly, slowly increasing in speed until they are blinding streaks – then the pool explodes. The pool’s explosion is called a fountain, and can be lethal. The size and damage of the fountain are directly related to the size of the pool. Those too close may make a Reflex save (DC 15) to take half damage; on a natural twenty no damage is taken whatsoever.
Keep in mind that the fountain only damages living cells; inanimate things are completely immune. On a natural one the victim contracts Mana Poisoning, and will likely die anyway. Mana pools last 10D6 hours per foot of diameter; those under a foot across last 5D6 hours. That means that some of the largest might last years.
| Pool Size | Fountain Radius | Subdual Damage | Chance of Lenz |
|-----------------|-----------------|----------------|----------------|
| Under 6 in. | 1 foot | 1D4 | 0% |
| 6 in. to 3 ft. | 3 feet | 1D6 | 3% |
| 4 to 9 ft. | 5 feet | 2D6 | 10% |
| 10 to 30 ft. | 10 feet | 4D6 | 15% |
| 30 to 100 ft. | 25 feet | 6D6 | 20% |
| 100 to 300 ft. | 50 feet | 8D6 | 25% |
| 300 to 1,000 ft. | 100 feet | 10D6 | 30% |
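For quick reference at the table, the fountain tiers and the lifespan rule can be encoded as a small lookup. The tiers, dice, and chances below are taken straight from the table above; the lifespan averages assume the mean of a d6 (3.5):

```python
# (max pool size in feet, fountain radius in feet, subdual damage, Lenz chance)
FOUNTAIN_TIERS = [
    (0.5, 1, "1D4", 0.00),
    (3, 3, "1D6", 0.03),
    (9, 5, "2D6", 0.10),
    (30, 10, "4D6", 0.15),
    (100, 25, "6D6", 0.20),
    (300, 50, "8D6", 0.25),
    (1000, 100, "10D6", 0.30),
]

def fountain_for(pool_ft):
    """Return (radius, damage, lenz_chance) for a pool of the given size."""
    for max_size, radius, damage, lenz_chance in FOUNTAIN_TIERS:
        if pool_ft <= max_size:
            return radius, damage, lenz_chance
    raise ValueError("no fountain data for pools over 1,000 feet")

def average_lifespan_hours(pool_ft):
    """10D6 hours (average 35) per foot across; 5D6 (average 17.5) under a foot."""
    return 35 * pool_ft if pool_ft >= 1 else 17.5
```

A 1,000-foot pool averages 35,000 hours – about four years – which is why the largest pools can outlast their discoverers.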
Witchfire
Mana exposed to a very specific electrical current can ignite, producing an effect colloquially called witchfire. This phenomenon rarely occurs naturally, though it has been reported throughout history, and is often seen around monsters. Witchfire was first reproduced in the 1870s and used in primitive mana engines as a source of steam power.
The effect takes on the appearance of a faint turquoise fire around the fuel source and lasts only a few moments (depending on the amount of fuel). Witchfire can severely damage most materials, causing damage comparable to flame. It is believed witchfire is the result of the rapid dispersion of mana back into its natural, ubiquitous state.
LENZ
The crystalized form of mana is known as Lenz. This two-inch orb is an etched, cloudy stone filled with light, colorful patterns. While Lenz is the scientific term, many people still refer to them as Mana Stones, or Moon Stones in some places. In nature, Lenz is created by the violent force of very large Mana Fountains. Today, virtually all Lenz is synthesized from Mana enriched in a Mana Reactor. Lenz can be used to focus liquid mana and create effects; this is the foundation of Syntech. Mana is injected into a chamber with the Lenz and an electrical current is applied, causing the Lenz to absorb and transmute the mana. This is put to a number of everyday uses, as well as weapons and transportation.
Naturally occurring Lenz is extremely rare. Virtually all Lenz is Syntech Lenz produced by Magicite Industries in the APOC, with a few smaller outfits in the Dracian Federation making some basic offensive Lenz. There is also legendary Dragon Lenz, formed inside ancient dragons over many years; these super Lenz are rumored to have extraordinary powers. For more on using Lenz see the Lenz Arts skill, below.
Besides its powers, the physical properties of Lenz raise significant and unanswered questions about the physical laws of the universe. The material seems to defy explanation, and there has yet to be any significant breakthrough in understanding what exactly Lenz is composed of. One of the prominent, and unpopular, hypotheses is that Lenz is some kind of noetic matter. Because Lenz, as a material, is non-reactive to other matter, some have even postulated that it is dark matter. Of course, this is all conjecture.
**Powers**
There are three types of Lenz: Spell, Power, and Ability. Spell Lenz allow the user to manipulate the mana stream to create and control energy, Ability Lenz increase the user’s natural ability scores, and Power Lenz have a constant effect or a number of uses per day. There are more than thirty Lenz that can be created in Mana Reactors, and rumors that many more may exist in nature.
All Lenz have a base effect and then a level, from +0 to +5 (like most enchanted items). As the level increases, so too do the effects of the Lenz – be it a spell or a bonus. Lenz created in a Mana Reactor for consumers usually have a level of +0, whereas naturally occurring Lenz tend to be more potent, with a level of +1 to +4 (roll 1D4).
The most common type of Lenz is Spell Lenz; these allow people to create magical effects such as bolts of energy or healing wounds. The base DC to resist a Spell from a Lenz is 10 + the Lenz bonus level + the caster’s Lenz Arts rank.
Junctioning
Lenz can also be bonded to a creature, who can then use the orb to gain incredible powers; this is called Junctioning. At any given time, a creature can Junction up to its Constitution modifier + 1 Lenz.
To Junction a Lenz, the prospective user must grip the Lenz tightly for 1D6 rounds and succeed at a Lenz Arts check (the DC depends on the Lenz). At the end of that time the Lenz will pulse in time to its new master’s heartbeat for about an hour, and the matrix will glow bright blue, green, or violet (depending on the type of Lenz). A Junctioned Lenz remains synced to a master for as long as it remains close, and for 1D12 days after it leaves their possession, after which one of the user’s Junction slots opens up and the Lenz must be Junctioned again.
To activate or gain the benefits of a Lenz, it must be in physical contact with the user’s skin. Many companies manufacture specialized Lenz harnesses for this purpose: bracers, bracelets, and neck pieces are the most common. A Spell Lenz must be activated each time it is used, as a full-round action. An Ability Lenz need only be in contact with the user for them to gain its benefits. A Power Lenz can be used at will or a specific number of times per day, depending on the Lenz, so long as it is held.
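The junction-capacity and spell-resistance rules above reduce to two short formulas. A sketch, assuming the standard d20 ability modifier of (score − 10) ÷ 2, rounded down:

```python
def max_junctioned_lenz(con_score):
    """A creature can Junction up to its Constitution modifier + 1 Lenz."""
    con_mod = (con_score - 10) // 2    # standard d20 ability modifier
    return con_mod + 1                 # note: a very low Con can mean zero slots

def spell_lenz_save_dc(lenz_bonus, lenz_arts_rank):
    """Base DC to resist a Spell Lenz: 10 + Lenz bonus + caster's Lenz Arts rank."""
    return 10 + lenz_bonus + lenz_arts_rank
```

So a Con 14 Deviant can keep three Lenz Junctioned, and a +3 Spell Lenz wielded by a caster with 5 ranks in Lenz Arts forces a DC 18 save.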
Lenz Forging
One of the key aspects of Lenz Arts is evolving Lenz. As a Lenz is used and its master grows in strength, it is possible to channel some of that experience and power into the Lenz, increasing its potency. This process is known as Lenz Forging. At any time a character may dedicate 8 uninterrupted hours of meditation and focus to increasing the power of a Lenz they have Junctioned.
During this period the character must succeed at a Lenz Arts check against a DC of 15 + twice the intended total bonus. The experience point costs in the table below are totals, not cumulative: Forging a +0 Lenz into a +5 Lenz requires a total of 5,000 XP, Forging a +3 Lenz into a +6 Lenz requires a total of 7,200 XP, and so on.
| Lenz Bonus | Exp Sacrifice Required | Min. Level |
|------------|------------------------|------------|
| +1 | 200 XP | 2nd |
| +2 | 800 XP | 4th |
| +3 | 1,800 XP | 6th |
| +4 | 3,200 XP | 8th |
| +5 | 5,000 XP | 10th |
| +6* | 7,200 XP | 11th |
| +7* | 9,800 XP | 12th |
| +8* | 12,800 XP | 13th |
| +9* | 16,200 XP | 14th |
| +10* | 20,000 XP | 15th |
*Lenz, like most enchanted items, has a maximum bonus of +5, however, there are a number of Facets available to change the properties of the Lenz, much like enchanted arms and armor.
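The printed table follows a clean pattern: each total XP value equals 200 × bonus², and the minimum level is 2 × bonus up to +5 and bonus + 5 beyond that. These closed forms are observations fitting the table above, not official rules text; a sketch:

```python
def forge_total_xp(bonus):
    """Total XP sacrifice for a given bonus; the table values fit 200 * bonus**2."""
    return 200 * bonus * bonus

def forge_min_level(bonus):
    """Minimum character level: 2 * bonus up to +5, bonus + 5 from +6 onward."""
    return 2 * bonus if bonus <= 5 else bonus + 5

def forge_check_dc(target_bonus):
    """Lenz Arts check DC for a Forging attempt: 15 + 2 * the intended total bonus."""
    return 15 + 2 * target_bonus
```

Because costs are totals, a GM can credit the XP already sacrificed on a lower bonus rather than charging the full amount again.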
Dragon Lenz
On rare occasion, a Lenz that has gestated in an ancient dragon will be found. These breathtakingly rare artifacts have only been discovered on a handful of occasions – usually by Deviants. Dragon Lenz are rumored to have extraordinary powers, including releasing a person’s inner power, summoning and commanding dragons, or unleashing devastating energy. With so few samples of supposed Dragon Lenz ever recovered, very little is known about their properties or powers.
Black Lenz
It is said that when an old and powerful horror feasts on miasma and death, it can form a condensed kind of Miasma, much like a Dragon Lenz, colloquially called Black Lenz. Rumor has it that Black Lenz is identical to ordinary Lenz in size and shape, but dark, with red matrix and formations set against cloudy black. The effects and power of Black Lenz, if it exists, are unknown, yet rumors persist that these stones can give the user power over horrors – at a terrible cost.
Miasma
Just as Mana may coalesce in places of great beauty and serenity, so too, around places of death and fear, the Miasma may form. This dense black and red fog fills an area of 1D6×100 cubic meters and drifts through the area, lasting 1D6 hours. The vapors automatically kill any living creature with 2 or fewer HD (no save). A living creature with 3 to 5 HD is slain unless it succeeds on a DC 35 Fortitude save (in which case it takes 1D4 points of Constitution damage on its turn each round while in the cloud). A living creature with 6 or more HD takes 1D4 points of Constitution damage on its turn each round while in the cloud (a successful Fortitude save halves this damage). Holding one’s breath doesn’t help. Having no Con score, horrors are immune to the effects.
Once formed, the Miasma moves randomly 10 feet per round, rolling along the surface of the ground. Because the miasma is heavier than air, it sinks to the lowest level of the land, even pouring down den or sinkhole openings; this also means that when it forms in a city it ends up pouring through the buildings and into the under-streets.
Creatures killed by the Miasma have a significant chance of reanimating as mindless undead horrors (13 or higher on a D20). Those that take 9 or more points of Constitution damage must succeed on a second Fortitude save (DC 35) or be doomed to a hideous fate as a Horror: such a victim suffers vomiting and shakes for 1D4 hours before abruptly dying and reanimating as a horror.
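The per-round effect of the cloud branches only on Hit Dice and the save result, so it can be summarized in one function; an illustrative sketch of the rules above:

```python
def miasma_effect(hit_dice, saved):
    """Per-round effect of the Miasma, by Hit Dice and DC 35 Fortitude result."""
    if hit_dice <= 2:
        return "slain (no save)"                     # 2 or fewer HD die outright
    if hit_dice <= 5:
        # 3-5 HD: a successful save downgrades death to Con damage
        return "1D4 Con damage per round" if saved else "slain"
    # 6+ HD: Con damage every round; a successful save halves it
    return ("1D4 Con damage per round (halved)" if saved
            else "1D4 Con damage per round")
```

Any victim accumulating 9 or more points of Constitution damage would then trigger the second Fortitude save described above.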
DEVIANT CHARACTERS
D20 Modern features several rules, different from D&D, that you should familiarize yourself with.
**Action Points:** All characters begin with a number of Action Points, and gain more each level. Once used, Action Points are lost, but they can do a number of things, such as adding a bonus to a roll, adding an action during a round, or activating some special class powers.
**Defense:** As characters gain class levels they gain a class bonus to Armor Class, which is called Defense in D20 Modern. This is an inherent bonus and is only lost when a character is completely incapacitated.
**Reputation:** The world of Deviant Evolution is a connected place, with computers, a sort of internet, and a significant media. As a character’s accomplishments grow, so too will their reputation, and the chance that people will recognize and respect them.
**Wealth:** Currency is generally reckoned in Credits – a universal system that every APOC city and most Dracian cities use. The SRS has its own currency, but usually accepts credits. Players do not count their individual creds; rather, they use a wealth check system – like a skill roll – when purchasing. Characters begin play with a Wealth score of 1D4+4, plus their Profession bonus, plus the Windfall feat bonus (if taken). Characters who start with any ranks in Profession gain an additional +1 bonus to starting Wealth.
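The starting Wealth roll is a straightforward sum; a sketch, where the +3 for Windfall is an assumed placeholder (use the bonus printed in the feat itself):

```python
import random

def starting_wealth(profession_bonus=0, has_windfall=False,
                    has_profession_ranks=False, rng=random):
    """Starting Wealth: 1D4 + 4, plus Profession bonus and Windfall (if taken)."""
    wealth = rng.randint(1, 4) + 4 + profession_bonus
    if has_windfall:
        wealth += 3   # assumed Windfall value; substitute the feat's actual bonus
    if has_profession_ranks:
        wealth += 1   # +1 for starting with any ranks in Profession
    return wealth
```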
Deviant characters are created in a similar manner to standard D20 Modern characters. However, all characters are **recent** enlistees in the Deviant Corps. As Deviants, characters have the dispensation to take missions through the Corps and leave the cities. But the first thing you need is a concept and a motivation for your character. What part of the world were they from before becoming a Deviant? Why did they join the Corps?
Once you have an idea, there are six steps to the Deviant Evolution D20 character creation process:
1. **Ability Scores:** All characters’ ability scores are rolled on 4D6, dropping the lowest die. Roll 12 times, choosing the highest 6 results and placing them in any order you choose.
2. **Class:** Next select the character’s starting Class. There are six starting classes: Strong Hero, Smart Hero, Fast Hero, Tough Hero, Dedicated Hero, and Charming Hero. These are just the Basic Classes; many Deviants select Advanced Classes after level 1, or achieve the Limit Break and are able to advance in Legendary Classes. More about the Limit and what it means can be found below.
3. **Background Profession:** Now that you have a class, select a Profession. This represents the character’s chosen career and education prior to enlisting in the Deviant Corps. Each Profession grants a number of skills or bonuses, and a bonus feat selected from several options. Regardless of their past, the character is now a Deviant – a hero to some, and a mercenary to others.
4. **Feats:** Players start with at least three feats at creation: a Deviant bonus feat, bonus feats for being human, and the 1st-level feat, which they are free to select from any available.
5. **Skills:** Characters receive a number of skill points from their class and Intelligence, and receive a number of skills from their Profession.
6. **Equipment:** Finally, you will buy equipment. You can take 10 on these rolls, but may not take 20, and must roll for expensive equipment, which can reduce your starting Wealth bonus.
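The ability score method in step 1 can be sketched in a few lines; an illustrative sketch only, for players who want to roll digitally:

```python
import random

def roll_ability_score(rng=random):
    """Roll 4D6 and drop the lowest die."""
    dice = sorted(rng.randint(1, 6) for _ in range(4))
    return sum(dice[1:])   # discard the smallest of the four dice

def roll_ability_array(rng=random):
    """Roll twelve scores and keep the six highest, to be placed in any order."""
    rolls = sorted((roll_ability_score(rng) for _ in range(12)), reverse=True)
    return rolls[:6]
```

Each kept score falls between 3 and 18, and taking the best six of twelve skews the array well above the 4D6-drop-lowest average.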
CLASSES
In D20 Modern, the character classes are frameworks for adding special abilities and powers to build a unique hero. The Basic Classes (Strong Hero, Fast Hero, Tough Hero, Smart Hero, Dedicated Hero, and Charming Hero) speak more to the character’s personality and natural aptitudes than to training, background, or objectives. There are a few special rules surrounding classes in Deviant Evolution.
Limit Break
Throughout modern history, there has been a limit to what people were capable of. This inborn deterrent prevents most adults from reaching 5 Hit Dice. Now, genetic science has discovered that there is in fact a genetic limiter that suppresses people’s abilities. This phenomenon is known as the Limit.
Yet there are those who excel to powers that seem legendary: the Limit Break. Before the war, this was mostly a legend. In 1944 the first modern Limit Break was confirmed in a mercenary named Ridium Ziess, who went on to become a legendary dragon hunter.
There are a number of ways to achieve the Limit Break. The first is feats, several of which will cause the Limit Break. The next is the special ability of some Advanced Classes presented below. The final method is to use a Dragon Soul Lenz, a semi-legendary Lenz forged inside ancient dragons and rumored to have tremendous powers.
It is also rumored that the Limit Break may be achieved through genetic manipulation, which some use to explain the power of SOLDAT operatives.
Limit Break has a number of unique effects:
- Level cap is removed
- Reputation gains a one-time bonus of +1
- Any time a character with Limit Break gains a level they may spend an Action Point to maximize their Hit Die for that level (rather than rolling, simply take the maximum value)
- Characters can access Legendary Classes (if they meet the other prerequisites)
Advanced Classes
All of the Advanced Classes presented in Chapter Six of the D20 Modern Rulebook are available to characters unchanged, even if they have not achieved the Limit Break. Additionally, several new Advanced Classes are available. Because many of these classes have requirements that may put them at the upper end of the Limit, it is not surprising to find characters with only 1 or 2 levels in an Advanced Class.
Legendary Classes
Some classes require more than just skill and ability – some require the power to overcome the limits of nature entirely. Legendary Classes are just that – legendary. They persist in popular culture and in rumors, but few, if any, are known to actually exist. To qualify for a Legendary Class, characters must meet high prerequisites, and above all, have achieved the Limit Break.
BACKGROUND PROFESSION
At the turn of the century every culture had legends about people who had awoken something more inside themselves. It was a cultural idea so prevalent that it occupied the place of the monomyth. Then, as the first Miasma appeared, seemingly ordinary people found that something in them stirred. But before you enlisted, what did you do?
All the Professions found on pages 31 through 34 of the D20 Modern base book are available, largely unchanged. Additionally, several new professions are available, listed below:
DRIFTER
Drifters are aimless wanderers and world-wise jacks-of-all-trades who move between the frontier and the cities, often illegally, working odd jobs until boredom or fate leads them elsewhere. Along the way they learn strange customs and pick up interesting and diverse skills.
Prerequisite: Age 15+.
Skills: Choose four of the following skills as permanent class skills. If a skill you select is already a class skill, you gain a +1 competence bonus on checks using that skill: Bluff (Cha), Decipher Script (Int), Disable Device (Int), Disguise (Cha), Forgery (Int), Gamble (Wis), Gather Information (Cha), Hide (Dex), Knowledge (streetwise) (Int), Navigate (Int), Sense Motive (Wis), Sleight of Hand (Dex).
Wealth Bonus Increase: +1.
LENZ STUDENT
A Lenz Student is someone who has spent a great deal of time studying the use and practice of Lenz, with or without the benefit of formal training or mentoring in the use of magic. This is the only way, short of feats and prestige classes, to gain Lenz Arts as a class skill.
Prerequisite: Intelligence 12
Skills: Choose three of the following skills as permanent class skills. If a skill you select is already a class skill, you receive a +1 competence bonus when using that skill: Concentration, Craft (chemical or writing), Decipher Script, Gather Information, Knowledge (arcane lore, art, earth and life sciences, history, or physical sciences), Lenz Arts, Research.
Wealth Bonus Increase: +2.
MUTANT
Two parts outcast, one part drifter, a mutant is a character who is slightly less than human. This character has been forced to move often and stay on the outside for fear that their true nature will become known.
Prerequisite: Age 15+
Skills: Choose two of the following skills as permanent class skills. If a skill you select is already a class skill, you receive a +1 competence bonus on checks using that skill: Bluff (Cha), Concentration (Con), Disguise (Cha), Gather Information (Int), Hide (Dex), Sense Motive (Wis), or Survival (Wis).
Bonus Feats: Toughness and either Legacy of Horror or Monstrous Legacy.
Wealth Bonus Increase: +0
ON THE RUN
A character on the run is a fugitive being chased by law enforcement, a shadowy government agency, or a sinister but well-connected secret society. Such a character might be wanted for a crime she didn’t commit – or one she did. Whatever the reason for her fugitive status, the character has developed skills that make her a tough quarry to catch.
**Prerequisite:** Age 20+
**Skills:** Choose three of the following skills as permanent class skills. If a skill you select is already a class skill, you receive a +1 competence bonus on checks using that skill: Bluff, Disguise, Escape Artist, Forgery, Gather Information, Hide, Knowledge (streetwise), Move Silently, Sense Motive.
**Bonus Feat:** Select one of the following: Brawl, Dodge, Low Profile, Personal Firearms Proficiency, Stealthy.
**Wealth Bonus Increase:** +1.
SQUIRE
Squires serve as assistants to warriors who specialize in hunting Horrors and Monsters, in hopes of learning the skills they will need to become warriors themselves one day. Squiring is particularly common under exceptionally powerful or notable Deviants who wield archaic weapons that can harm the monsters.
**Prerequisite:** Strength 13 or Dexterity 13.
**Skills:** Choose one of the following skills as a permanent class skill. If the skill you select is already a class skill, you receive a +1 competence bonus on checks using that skill: Balance, Climb, Handle Animal, Jump, Ride, Tumble.
**Bonus Feats:** Archaic Weapons Proficiency and Armor Proficiency (light).
**Wealth Bonus Increase:** +1.
TRIBAL
The character hails from one of the arid desert tribes, or even a tribe from deep in the dragon lands. A tribal character might be a native of a primitive culture who somehow wound up in the big city. He might be an A.P.O.C. citizen or Dracian who was lost in the jungle as a child and reared by a hunter-gatherer culture (or by animals like Wyverns or apes), then returned to civilization.
**Prerequisite:** Age 15+
**Skills:** Choose three of the following skills as permanent class skills. If a skill you select is already a class skill, you receive a +1 competence bonus on checks using that skill: Balance, Climb, Concentration, Handle Animal, Jump, Survival, Swim.
**Bonus Feats:** Archaic Weapons Proficiency and Track.
**Wealth Bonus Increase:** +0.
FEATS
In addition to the feats presented in the D20 Modern Rulebook, there are many new feats added below.
Action Oriented
You never commit halfway.
Benefit: When you spend an action point, you roll d8s instead of d6s for the action result.
Ancient Blood
You are closer to your ancient inhuman heritage than others.
Prerequisites: Either Legacy of Grace, Legacy of Iron, or Legacy of Might
Benefit: The morphological characteristics of your heritage are considerably more pronounced, be it pointed ears, short stature, or pronounced tusks.
Legacy of Grace: You can see twice as far as a human in starlight, moonlight, torchlight, and similar conditions of poor illumination. You are immune to Sleep and Paralysis.
Legacy of Iron: You gain darkvision out to 15 feet and a +2 save vs Spells and Lenz effects.
Legacy of Might: You gain darkvision out to 30 feet and an additional +1 Strength, at the cost of -1 Charisma.
Ancient Legacy
The enigmatic First Race ruled Ghestal thousands of years ago; you carry a fragment of their genetic legacy.
Prerequisites: 1st Level, no ability scores below 13.
Benefit: The genetic memory of the First Race lives in you. This feat is a prerequisite for the Limit Break feat. You gain a +2 bonus on Fortitude saving throws to resist poisons, diseases, and radiation sickness. Furthermore, any permanent ability drain inflicted upon you is treated as temporary ability damage instead. Monsters and Horrors are always hostile towards you, and will always attack you first in combat to the exclusion of other targets.
Armor Proficiency: Powered
You are proficient in the use of Powered Armors – huge heavily armored suits of robotic armor.
Prerequisites: Medium Armor Proficiency
Benefit: You receive all defense and ability score benefits from a suit of powered armor.
Normal: A character who uses an armor with which he or she is not proficient receives a reduced defense bonus and no equipment bonuses to ability scores.
Assault Grip
You are able to effortlessly fire rifles and shotguns with one hand.
Prerequisite: Personal Firearms Proficiency, Str 16
Benefit: You can wield two-handed Medium-size firearms with one hand. You cannot wield a larger weapon in your off hand, and you cannot use this feat with an exotic firearm.
Assault Martial Arts
Your unarmed attacks are especially violent.
Prerequisite: Combat Martial Arts, Base Attack Bonus +3
Benefit: Increase the damage of your unarmed attacks by one die step, to a maximum of 1d12.
Badlander
You are a native of one of the fearsome badlands tribes, generally called the beyrahda.
Prerequisites: 1st level
Benefit: Normal humans cannot survive long in the deep desert, but you can. Your dark complexion and hearty nature give you Resist Fire 5. Your Fortitude save DC to avoid subdual damage from heat is reduced to 10. In addition, you can go without water for a number of hours equal to 24 + twice your Constitution score.
Caster Proficiency
You are proficient in the use of Casters, manatech weapons that use mana packs to activate an integrated Spell Lenz.
**Prerequisites:** None
**Benefit:** You make attack rolls with the weapon normally.
**Normal:** A character who uses a weapon with which he or she is not proficient takes a -4 penalty on attack rolls.
Class Plus
You are an unparalleled natural at your chosen Basic Class – Strong, Fast, Tough, Smart, Dedicated, or Charismatic.
**Benefit:** You gain two talents from one of your hero’s Basic Class talent trees. The talents must be selected from the following lists; you cannot select more than one talent from a single talent tree, and you must meet all the prerequisites of a talent to select it.
**Charismatic:** Favor, Captivate, Dazzle, Taunt, Inspiration, Greater Inspiration.
**Dedicated:** Improved Aid Another, Intuition, Healing Touch 1, Healing Touch 2, Aware, Faith, Cool Under Pressure.
**Fast:** Uncanny Dodge 1, Uncanny Dodge 2, Defensive Roll, Opportunist, Improved Increased Speed, Advanced Increased Speed.
**Smart:** Savant, Linguist, Exploit Weakness, Plan, Trick.
**Strong:** Improved Extreme Effort, Advanced Extreme Effort, Improved Ignore Hardness, Advanced Ignore Hardness, Improved Melee Smash, Advanced Melee Smash.
**Tough:** Damage Reduction 2/-, Damage Reduction 3/-, Energy Resistance (choose one energy type), Remain Conscious, Second Wind, Stamina.
**Special:** You may select this feat multiple times. Each time you select this feat, you must choose a different pair of talents from one of your basic classes.
Craft Mechanetics
You can construct Mechanetic attachments.
**Prerequisites:** Craft (electrical) 6 ranks, Craft (mechanical) 6 ranks, Knowledge (life sciences) 6 ranks.
**Benefits:** You can build any of the mechanetic attachments described under Mechanetics, below. You must first make a Wealth check against the purchase DC of the attachment -2 (to acquire the necessary components), then invest 24 man-hours in the construction. At the end of that time, you must succeed at a Craft (mechanical) check (DC 30) and a Craft (electrical) check (DC 30). If both Craft checks succeed, the mechanetic attachment functions properly and can be installed at any time (see the Mechanetic Surgery feat, below). If either or both checks fail, the attachment’s design is flawed; another 24 hours must be spent fixing the problems, and two new checks must be made at the end of that time.
Craft Syntech
**Prerequisites:** Craft (Electrical) and Craft (mechanical) 6 Ranks or Craft (electrical), Craft (mechanical), and Lenz Arts 4 ranks.
**Benefit:** You can attempt to create any Caster, Weapon, or Enchantment whose components you possess (see Syntech under Equipment, below). These arms and armors take one day per point of their Purchase DC to create. To complete these Lenz-powered devices, you must spend 100 times the total Purchase DC in XP and succeed on a Wealth check at one-half of this total price. The equipment requires a Lenz, which you provide and which is destroyed in the creation of the device; its cost is not included in the cost above.
Mechanetic Surgery
You can graft mechanetic attachments onto living tissue as well as safely remove them.
**Prerequisites:** Treat Injury 6 ranks, Surgery.
**Benefit:** You can make a Treat Injury check (DC 20) to install or remove a mechanetic attachment. If you do not have a surgery kit or access to a medical facility, you take a -4 penalty on the check. Mechanetic surgery takes 1d4 hours.
The consequences of failure are severe: If your check result fails by 5 or more, the installation or removal of the mechanetic attachment causes undue physical trauma to the patient, who suffers 1d4 points of Constitution damage. If the check result fails by 10 or more, the Constitution damage is treated as Constitution drain instead. A character who undergoes mechanetic surgery (successful or not) is fatigued for 24 hours. Reduce this time by 2 hours for every point by which the surgeon’s check exceeds the DC. The period of fatigue can never be reduced below 6 hours in this fashion.
**Normal:** Characters without this feat take a -8 penalty on Treat Injury checks made to perform mechanetic surgery (-4 penalty if they have the Surgery feat).
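The fatigue arithmetic above (24 hours, minus 2 per point over the DC, never below 6) can be sketched in Python; the function name is ours, not part of the rules:

```python
def surgery_fatigue_hours(check_result: int, dc: int = 20) -> int:
    """Hours of fatigue after mechanetic surgery: 24, reduced by 2 hours
    per point the surgeon's Treat Injury result exceeds the DC, minimum 6."""
    margin = max(0, check_result - dc)
    return max(6, 24 - 2 * margin)

# e.g. a result of 25 against DC 20 leaves 24 - 2*5 = 14 hours of fatigue
```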
### Dedicated Mechanetic
You can incorporate more mechanetic attachments into your body than normal without suffering ill effects.
**Benefit:** The maximum number of mechanetic attachments you can have without suffering negative levels increases by 1 (see Tolerance, below).
**Special:** You can gain this feat multiple times. Its effects stack.
### Dragon Eater
According to legend, the first Amonath Emperors consumed the magic of dragons for their power. You can draw upon the living essence of mutants to empower your Lenz.
**Prerequisites:** Lenz Master, Lenz Arts 9 ranks
**Benefit:** You are a Dragon Eater, able to consume the life energy of monsters to empower Lenz. Whenever you would spend XP to empower a Lenz, you can draw upon the life force of a nearby Monster or Horror to reduce the XP cost to you. Before beginning the process, you must secure a Monster or Horror whose Hit Dice equal or exceed the minimum level necessary for the bonus of the Lenz in question. The creature must remain within 30 feet for the entire period, and can attempt a Fortitude save (DC 10 + 1/2 your character level + your Cha modifier) to resist the removal of its essence. Success negates your use of this feat and forces you to either pay the full cost yourself or abort. Successfully drawing essence from a monster reduces the XP cost of the growth by one-half, but it complicates the process: the DC of the Lenz Arts check to Forge the Lenz is increased by the monster’s HD. The process is draining, causing 1d6 points of Constitution damage to the master and 1d4 per plus of the Lenz to the monster, potentially killing it. Undead horrors are destroyed automatically, but have a 30% + 10% per HD chance of Corrupting the Lenz.
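As a quick illustration of the two formulas in Dragon Eater (the resistance DC and the halved XP cost), here is a minimal Python sketch; the function names are ours:

```python
def essence_save_dc(master_level: int, cha_score: int) -> int:
    """Fortitude DC for the creature to resist essence removal:
    10 + 1/2 the Dragon Eater's character level + Cha modifier."""
    cha_mod = (cha_score - 10) // 2  # standard d20 ability modifier
    return 10 + master_level // 2 + cha_mod

def reduced_xp_cost(full_xp_cost: int) -> int:
    """Successfully drawing essence halves the XP cost of the growth."""
    return full_xp_cost // 2

# e.g. a 10th-level Dragon Eater with Cha 16 forces a DC 18 Fortitude save
```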
### Dragon Grip
You are able to wield weapons normally too large for you.
**Prerequisite:** Base attack bonus +1, STR 16
**Benefit:** You can wield two handed Medium sized melee weapons with one hand. You cannot wield a larger weapon in your off hand, and you cannot use this feat with a double weapon.
### Failed Soldat
The process by which the augmented warriors of Soldat are created is a mystery, but it is well known that failure of the program means death in almost every case. Somehow, you survived, although you have no memory of the procedure.
**Prerequisites:** Military, 1st level, Con 13, Str 13
**Benefit:** You completed the first leg of Soldat training and augmentation, increasing your strength, speed, and stamina. Your Strength is considered 4 points higher for the purpose of determining carrying capacity, and your overland movement speed is increased by 10. You receive a +2 racial bonus on Climb, Jump, and Swim checks, and a +1 dodge bonus to Defense. However, the process has left you with iridescent eyes and a weakness to magic (-2 on saves vs. Spell Lenz and effects).
Legacy of Grace
Legend has it that long ago a fair and mystical race walked among men. If the legends are to be believed, you have awoken a glimmer of that heritage within yourself.
**Prerequisites:** 1st level, Dex 15
**Benefit:** You have the last glimmer of elf in you, resulting in a number of benefits in addition to your ever-so-slightly pointed ears: a +2 racial bonus on saving throws against Sleep and Paralysis, and a +1 racial bonus on Listen, Search, and Spot checks. Characters may only have one Legacy feat.
---
Legacy of Iron
Long ago, the Jurai had legends of a race of stout masters of iron, capable of creating arcane steel. Today, Dwarves are a myth to most. To you, they are ancestors.
**Prerequisites:** 1st level, Con 15
**Benefit:** You have the last heritage of the Dwarven Lords, giving you a number of benefits in addition to your short build and awesome beard: +2 racial bonus on saving throws against poison and +1 racial bonus on Craft, Repair, and Demolition checks. Characters may only have one Legacy feat.
---
Legacy of Hope
The Acrean religion is long dead, and few remember the ancient teachings. You are among them.
**Prerequisites:** 1st Level, Cha 15
**Benefit:** You have learned the ancient healing prayers of the Acrean Light. Each day, as a standard action, you can use Prayer to heal a total number of hit points of damage equal to your total level + Charisma bonus. For example, a 7th-level Deviant with a 16 Charisma (+3 bonus) can heal 10 points of damage per day. You may choose to divide your healing among multiple recipients, and need not use it all at once. Prayer is a supernatural ability.
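The daily healing pool is simple arithmetic; a minimal Python sketch (function name is ours, not part of the rules):

```python
def prayer_pool(level: int, cha_score: int) -> int:
    """Hit points healable per day via Prayer: total level + Charisma bonus."""
    cha_bonus = (cha_score - 10) // 2  # standard d20 ability modifier
    return level + cha_bonus

# The rulebook's own example: a 7th-level Deviant with Charisma 16 (+3)
# can heal 7 + 3 = 10 points of damage per day.
```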
---
Legacy of Horror
They could not save your mother from the Miasma, but by some miracle you were spared that fate.
**Prerequisites:** 1st level
**Benefit:** You were born from a woman who had become a Horror while still pregnant with you. Your irises are black and red and emit an eerie glow, giving you a -1 racial penalty on Bluff and Diplomacy checks. Mindless undead see you as an undead creature and will ignore you unless attacked. Your dark nature gives you a +1 circumstance bonus on saving throws against mind-affecting Spell Lenz and abilities, poison, sleep, paralysis, stunning, and disease. Characters may only have one Legacy feat.
---
Legacy of Might
The nations in the frozen north have many legends regarding the powerful tusked giants known as orcs.
**Prerequisites:** 1st level, Str 15
**Benefit:** You have the blood of the orc in your veins, and are considerably larger than the average human. You get a one-time +1 inherent bonus to Strength and Resist Cold 5. A character may only have one Legacy feat.
---
Lenz Focus
Choose one Spell Lenz. You have a special aptitude with this type of Lenz.
**Prerequisites:** Lenz Arts rank 1
**Benefit:** You receive a +2 feat bonus on all Lenz Arts checks with your chosen Spell Lenz.
**Special:** You may select this feat multiple times, each time you do you may select a different Spell Lenz.
---
Lenz Master
Either through natural aptitude or rigorous training, you have mastered Lenz Arts.
**Prerequisites:** Lenz Focus, Lenz Arts rank 4
**Benefit:** Lenz Arts is a class skill for you regardless of your current class. You receive a +2 feat bonus on all Lenz Arts checks, and increase the number of Lenz you can Junction by 1.
**Normal:** Lenz arts is only a class skill for a few Advanced and Legendary Classes.
**Special:** You may select this feat multiple times; each time you do, you gain either an additional +1 feat bonus on Lenz Arts checks or increase the number of Lenz you may Junction by 1.
### Limit Break
**Prerequisites:** Ancient Legacy or Ancient Blood, 3rd level
**Benefit:** Accessing the power in your genetic memory, you have achieved the Limit Break and overcome your genetic limiters. At the level you select this feat, rather than rolling for hit points, you simply take the maximum value of your Hit Die.
**Normal:** Without this feat or a similar ability, characters are unable to progress past 5th level.
**Special:** At 5th level, if you meet the prerequisites and have enough experience points to reach 6th level but have not taken this feat, you may take it as your bonus feat for 6th level and progress into level 6 and beyond normally.
### Mastercraft
You are adept at creating mastercraft electronic and mechanical devices (including tools, vehicles, weapons, mechanetics, and armor).
**Prerequisites:** Craft (electrical) 8 ranks, Craft (mechanical) 8 ranks.
**Benefit:** When successfully completed, a mastercraft electronic or mechanical object provides an equipment bonus on skill checks made to use the object (in the case of mastercraft vehicles, this includes Drive or Pilot checks). A mastercraft weapon provides a bonus on attack or damage rolls (your choice). A mastercraft suit of armor improves the armor’s equipment bonus to Defense. In each case, the bonus can be +1, +2, or +3, and no single object can have more than one mastercraft feature. (For instance, you cannot build a mastercraft weapon that gains a bonus on both attack rolls and damage rolls.) On average, it takes twice as long to build a mastercraft object as it does to build an ordinary object of the same type. The cost to build a mastercraft object is equal to the purchase DC for the object (or its components) + the bonus provided by the mastercraft feature (+1, +2, or +3).
In addition to the Wealth check, you must also pay a cost in experience points equal to 250 x the bonus provided by the mastercraft feature. The experience points must be paid before making the Craft check. If the expenditure of these experience points would drop you below the minimum needed for your current level, then the experience points can’t be paid and you can’t make the mastercraft object until you have sufficient experience points to remain at your current level after the expenditure is made.
Apply the following modifiers to the Craft check DC for mastercraft items:
| Mastercraft Feature | DC Modifier |
|---------------------|-------------|
| Mastercraft (+1) | +5 |
| Mastercraft (+2) | +10 |
| Mastercraft (+3) | +15 |
You can add the mastercraft feature to an existing ordinary object or a lower-grade mastercraft object by making a Wealth check and then making the Craft check as though you were constructing the object from scratch. Normally, you cannot add mastercraft features to Syntech equipment.
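Pulling the Mastercraft numbers together (Wealth DC, XP cost, and the Craft DC modifier from the table), a small Python sketch; the function name is ours:

```python
def mastercraft_costs(base_purchase_dc: int, bonus: int):
    """Costs for adding a mastercraft feature (+1, +2, or +3) to an object:
    Wealth check DC, XP paid before the Craft check, and Craft DC modifier."""
    assert bonus in (1, 2, 3), "mastercraft bonus must be +1, +2, or +3"
    wealth_dc = base_purchase_dc + bonus  # purchase DC + mastercraft bonus
    xp_cost = 250 * bonus                 # 250 XP per point of bonus
    craft_dc_mod = 5 * bonus              # +5 / +10 / +15 per the table
    return wealth_dc, xp_cost, craft_dc_mod

# e.g. a +2 mastercraft feature on a purchase DC 15 item:
# Wealth DC 17, 500 XP, and +10 to the Craft check DC
```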
### Monstrous Legacy
While early in her pregnancy, your mother suffered a lethal dose of mana poisoning and became a monster. You manifest some minor mutations as a result.
**Prerequisites:** 1st level
**Benefit:** You were born from a woman who had become a mutant while still pregnant with you. You possess the iridescent turquoise eyes of a monster and a natural affinity with mana-mutated beasts. Magical beasts are automatically neutral to you unless threatened. Choose a monstrous manifestation when you take this feat; it cannot be changed.
**Night Senses (Ex):** You gain low-light vision.
**Claws of the Beast (Ex):** You grow a pair of claws that deal 1d4 points of damage.
**Predator’s Leap (Ex):** You can make a running jump without needing to run 10 feet before you jump.
**Wild Instinct (Ex):** You gain a +2 bonus on initiative checks and a +2 bonus on Survival skill checks.
Characters may only have one Legacy feat.
---
**Runic Weapon**
You use a Spell Lenz to fill your weapon with mana energy.
**Prerequisites:** Lenz Arts Rank 2, a Junctioned Spell Lenz
**Benefits:** As a swift action, you can activate a Lenz to imbue your melee weapons with mana. For 1 round, your weapons are treated as magic for the purpose of overcoming damage reduction. For every level of the Lenz you activate, the weapon gains a damage bonus of +1, to a maximum of +5. While your weapons are filled with energy you cannot otherwise activate the Lenz you have chosen for this feat.
---
**Ruinous Power**
You have awakened the monster within you.
**Prerequisites:** Character Level 3rd, Legacy of Horror or Monstrous Legacy.
**Benefit:** You’ve awoken to your mutated power. You can no longer be mutated by mana or miasma. When you select this feat your type changes to Monstrous Humanoid and you select one of the following powers:
**Crimson Prince (Su):** You take on the Crimson Noble template presented in the bestiary, below.
**Horrible Flesh (Ex):** You gain a Damage Reduction of 1/- plus your Constitution Modifier. This stacks with DR gained from class or feats.
**Mana Disruption (Ex):** You gain Spell Resistance 10, plus your Wisdom Modifier.
**Monstrous Resistance (Ex):** You gain Resist 5 against Fire, Lightning, and Acid. This stacks with any other energy resistance you may have.
**Soul Eater (Su):** Whenever you use the coup de grace action to kill a creature, or kill a creature with a critical blow, that creature cannot be restored to life by any means until you are slain. You gain 2 temporary hit points per Hit Die of the slain creature. These temporary hit points last for up to 1 hour.
**Ride the Current (Su):** You gain a fly speed (with average maneuverability) equal to your base land speed. You can fly for a number of consecutive rounds equal to your Hit Dice + your Constitution modifier (minimum 1 round); between these uses you cannot fly for 1 round.
**Witchfire (Su):** You can cause the mana around yourself or a target within 30 feet to ignite in turquoise fire. The flame can either shed a ghostly light as long as you concentrate, or be used to attack for 1d6 damage once per round; a Reflex save halves the damage.
This feat constitutes a Limit Break, and you can progress beyond 5th level once you have taken it.
---
**Spell Penetration**
Your Lenz spells break through spell resistance more easily than most.
**Prerequisites:** Lenz Arts Rank 1
**Benefit:** You get a +3 bonus on caster level checks (1d20 + caster level) made to overcome a creature’s spell resistance, and may add the spell Lenz’s bonus, if any, to this roll.
---
**Swift Spell**
By accepting increased difficulty, you may activate a specific Spell Lenz with a standard action.
**Prerequisites:** Lenz Focus, Lenz Arts Rank 6
**Benefit:** You may activate a Spell Lenz that you have the Lenz Focus feat for with a standard action, increasing the DC to activate by 5.
**Normal:** Activating a spell Lenz is a full round action.
SKILLS
All of the standard skills presented in the D20 Modern Rulebook are available, largely unchanged. Knowledge (Lenz and Mana) can be used to identify a piece of Lenz and gauge its approximate power level (DC 20), or to identify the approximate time until the destruction of a Mana Pool (DC 25). Knowledge (technology) can identify Syntech weapons and Casters. While characters with Craft skills can create items that consume mana for combustive energy, they cannot build Syntech items without the appropriate feat.
Knowledge (Lenz and Mana) (Int)
You can make a Knowledge (Lenz and Mana) check to correctly identify Lenz, Lenz level of power, active spell effects, and any other use similar to Knowledge (Arcane) which this skill replaces. This includes a study of ancient mysteries, magic traditions, arcane symbols, Lenz properties, dragons, and magical beasts.
Check: The DCs for identifying or recognizing any particular magical phenomenon vary. You can use this skill to identify monsters and their basic powers or vulnerabilities. In general, the DC of such a check equals 10 + the monster's CR. Recalling a monster’s specific supernatural abilities in detail requires a check against DC 15 + the monster's CR. You can detect Miasma within 500 feet with a DC 25 check.
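The two monster-identification DCs scale the same way, so they can be sketched in one small Python function (the name is ours, not from the rules):

```python
def monster_id_dc(cr: int, detailed: bool = False) -> int:
    """Knowledge (Lenz and Mana) DC to identify a monster.
    Basic powers/vulnerabilities: 10 + CR; detailed supernatural
    abilities: 15 + CR."""
    return (15 if detailed else 10) + cr

# e.g. a CR 5 monster: DC 15 to identify, DC 20 for full details
```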
Knowledge (Technology) (Int)
You can make a Knowledge (technology) check to correctly identify manatech, Syntech weapons, and mechanetic attachments, as well as identify unfamiliar technological devices.
Check: The DCs for identifying technological items vary depending on the type of information required:
- Identifying a Syntech Weapon (not an enchantment), caster, or armor: DC 10.
- Determining the function or purpose of a Syntech Enchantment or a Mechanetic attachment: DC 15.
- Understanding the abilities and effects of a highly customized Syntech weapon: DC 20.
Language
Ghestal has a number of distinct languages and cultures. That said, Rigari is the common language used almost everywhere, simply due to the spread of the tongue from the north during the height of the colonial period. In the A.P.O.C., a symbol of class is the Jurai language Tohogo, which is commonly used in business, media, and the military, making it the second most common language. Other tongues include classical and modern Amonati, Zweispeka, and Attican in the frontier and federations, and Ute unga and Hal-el in the Wilderun and badlands, respectively.
**Lenz Arts (Con)**
*Trained Only*
The ability to Junction, Activate, and Forge a piece of Lenz is collectively known as Lenz Arts. The teachings of this skill are largely philosophical and mystical, rather than a scientific understanding of Lenz.
To use a piece of Lenz you must have a minimum Constitution score of 10 + the Lenz’s total level (+0, +1, +2, etc). Before a Lenz can be used by a character, it must be Junctioned. The total number of Lenz a character may have actively Junctioned is equal to their Constitution Modifier +1. Spells cast through Lenz have no chance of arcane spell failure, but are subject to Spell Resistance and effects like Dispel magic.
**Check:** Make a check to Activate, Junction, or Forge a piece of Lenz in your possession. You may **not** take 10 or take 20 when Junctioning, Activating, or Forging a Lenz.
| Action | DC |
|---------------------------------|---------------------|
| Activate a Spell Lenz | 10† + Lenz Level |
| Forge a Lenz | 15 + 2x Lenz Level |
| Junction an Ability Lenz | 15 + 2x Lenz Level |
| Junction a Power Lenz | 15 + 2x Lenz Level |
| Junction a Spell Lenz | 10 + 2x Lenz Level |
†Some, particularly powerful Lenz may have a higher DC to activate. These will be called out as either Hard (DC 15) or Severe (DC 20) in the description of the Lenz.
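The numbers above (the Constitution floor, the Junction limit, and the DC table) can be sketched in Python; the function names are ours, not part of the rules:

```python
def min_con_to_use(lenz_level: int) -> int:
    """Minimum Constitution to use a Lenz: 10 + the Lenz's total level."""
    return 10 + lenz_level

def max_junctioned(con_score: int) -> int:
    """Lenz a character may have actively Junctioned: Con modifier + 1."""
    return (con_score - 10) // 2 + 1

def junction_dc(lenz_level: int, lenz_type: str) -> int:
    """Junction DC from the table: Spell Lenz are 10 + 2x level;
    Ability and Power Lenz are 15 + 2x level."""
    base = 10 if lenz_type == "spell" else 15
    return base + 2 * lenz_level

# e.g. a +2 Power Lenz needs Con 12 to use and a DC 19 check to Junction
```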
**Activate Spell Lenz:** Activating a spell Lenz is a full round action. While it does not provoke an attack of opportunity, being attacked while casting a spell through Lenz requires a concentration check to maintain focus and prevent the spell from being disrupted. Things like armor, being wounded, and distractions can impact a Lenz Arts casting check.
| Activation Modifier | Check Modifier |
|-----------------------|----------------------|
| Wearing Armor | Armor Penalty |
| Wounded | -1 per 3 damage |
| Fatigued | -2 to check |
| Exhausted | -4 to check |
**Junction:** A successful roll links the character to the Lenz; failure means that the Lenz rejects the character. A character rejected by a Lenz cannot attempt to junction that Lenz again for 24 hours.
**Forging:** Increasing the power of a Lenz takes considerable time and effort. A character disrupted for more than five minutes while Forging a Lenz must start over from scratch. Being attacked disrupts the process.
**Repair (Int)**
You can use this skill to repair manatech machines, Syntech weapons, power armor and mechanetic attachments.
**Check:** Repairing damage takes 1 hour of work, a mechanical tool kit, and a proper facility such as a workshop or hangar bay; without a tool kit, you take a -4 penalty on your Repair check. At the end of the hour, make a Repair check (DC 30). Success repairs 1d10 points of damage. If damage remains, you may continue to make repairs for as many hours as it takes to restore the machine to full hit points.
EQUIPMENT AND TECHNOLOGY
Technology on the world of Ghestal is comparable to what we would recognize as the late fifties and early sixties. Much of the equipment in the D20 Modern Rulebook is fully available. The power of mana and Lenz also allows for a number of technological marvels years ahead of the normal scientific understanding, many of them technologies of the energy age. This includes mobile phones, Mechanetics, and genetic manipulation. These modern wonders depend on what, for lack of a better term, can only be described as magic. If you have access to the D20 Future book: stripped of these marvels, the world is only comparable to a late Progress Level 4 civilization.
Of these technologies, Casters and Syntech Enchantments are of particular interest to players. A Caster is a device that consumes mana to activate an integrated Spell Lenz. Casters are commonly used by the Deviant Corps and the military because mastering Lenz is extremely challenging, whereas learning to fire a Caster is comparatively easy; this makes Casters the most common military use for Lenz. Because they deal magical damage, Casters are capable of harming Horrors and Monsters far more effectively than conventional firearms, as they bypass damage reduction.
Due to the incredible resistance of monsters, another weapons technology is widely used: Syntech Enchantments. These augment a melee weapon, allowing it to overcome Damage Reduction and deal damage comparable to a light firearm. Because they do not require proficiency with Casters, Syntech Enchantments on simple weapons are extremely common.
STARTING EQUIPMENT
Characters begin play with a Wealth score of 1d4+4, plus their Profession bonus, plus the Windfall feat bonus (if taken). Characters who start with any ranks in Profession gain a +1 bonus to starting Wealth. Players can begin with any item whose purchase DC is less than their current Wealth bonus – it is considered a small enough expense that it does not lower their Wealth score. For more serious starting equipment, players can take 10 or take 20 on anything before the game starts, but remember that if you purchase anything with a cost above your current Wealth bonus, there is a chance your Wealth may go down.
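The starting-Wealth roll can be sketched in Python as follows; the function name and parameters are ours, and the Windfall bonus is left as an input rather than hard-coded:

```python
import random

def starting_wealth(profession_bonus=0, windfall_bonus=0,
                    has_profession_ranks=False, rng=random):
    """Starting Wealth score: 1d4 + 4, plus the Profession bonus, plus any
    Windfall feat bonus; +1 more if the character has ranks in Profession."""
    total = rng.randint(1, 4) + 4 + profession_bonus + windfall_bonus
    if has_profession_ranks:
        total += 1
    return total

# e.g. a character with a +2 Profession bonus and ranks in Profession
# starts with Wealth between 8 (rolled a 1) and 11 (rolled a 4)
```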
Generally, Deviants are expected to provide their own equipment and supplies. There are also commissaries that sell consumer-grade Lenz, Casters, and other Syntech arms; after all, arms dealing in a world under constant threat is a lucrative business. That said, the A.P.O.C. makes a point of confiscating powerful or rare Lenz found in the possession of unauthorized persons in its territory, usually without compensation.
CONVENTIONAL EQUIPMENT
Tools, weapons, and armor presented in the D20 Modern book are relevant in this setting, even if the specific brands are not. Still, firearms in the Deviant Evolution world use different calibers and damage values, and armor serves a different purpose. Vehicles like cars, trucks, and motorcycles exist, but most aircraft are lighter-than-air or similar craft meant for long voyages across a world that is mostly land. And many devices stand out as years ahead of the rest of the world’s technology.
ARMS
The firearms presented below are modular frames based on the firing receiver and ammunition type. The largest arms manufacturers in the A.P.O.C. have standardized the munition types, so other manufacturers base their weapons on the same receiver types and cartridge calibers. Most manufacturers use the same basic frames, then add different features that affect characteristics such as range and damage.
.28 REVOLVER
The best-selling revolver for frontier personal defense, the .28 firing mechanism is cheap and easy to manufacture. Because it can use a cheap version of the common rifle ammo, home-brewed .28 revolvers are surprisingly common.
.35 REVOLVER
Using the same rounds as the common automatic variant, the .35 revolver was ubiquitous throughout the great war. The semi-auto has since become far more common, but as a reliable weapon, thousands of these revolvers remain in circulation.
| Conventional Firearm | Damage | Critical | Rate of Fire | Range Increment | Ammo | Size | Weight | Cost DC |
|-------------------------------|--------|----------|--------------|-----------------|--------|------|--------|---------|
| .28 Revolver | 2d4 | 20 | Single | 10 ft. | 8 cyl. | Small| 3 lb. | 8 |
| .35 Revolver | 2d6 | 20 | Single | 30 ft. | 6 cyl. | Med | 3 lb. | 13 |
| .55 Magnum Revolver | 2d8 | 20 | Single | 30 ft. | 6 cyl. | Med | 5 lb. | 15 |
| .35 Semi-auto Pistol | 2d6 | 20 | S | 40 ft. | 13 clp.| Small| 3 lb. | 16 |
| Dragon Slayer .55 | 2d8 | 19-20 | S | 40 ft. | 9 clp. | Med | 6 lb. | 19 |
| .225 Submachine gun | 2d8 | 20 | S, A | 40 ft. | 60 box | Large| 8 lb. | 18 |
| .225 Auto Battle Rifle | 2d10 | 20 | S, A | 60 ft. | 45 box | Large| 12 lb. | 16 |
| .28 Hunting Rifle | 2d10 | 20 | Single | 80 ft. | 15 clp.| Large| 10 lb. | 14 |
| .66 Machinegun | 2d12 | 20 | S, A | 50 ft. | 180 box| Large| 44 lb. | 24 |
| Longshot .66 | 2d10 | 19-20 | S | 100 ft. | 8 clp. | Large| 26 lb. | 22 |
**.55 Magnum Revolver**
This huge revolver fires a giant .55 round and is common in the frontier. Popular for its stopping power and range, the .55 round is mass-produced for SOLDAT, and the .55 Magnum is its civilian counterpart.
**.35 Semi-Auto Pistol**
The most popular and common side arm in the world is the .35 semi-automatic pistol. Nearly identical models are built by every manufacturer, and the rounds can be found in every store in the world.
**SOLDAT Dragon Slayer .55**
Built exclusively for the A.P.O.C. special combat units, the Dragon Slayer is a devastating semi-automatic weapon that fires .55 rounds. Characters with a Str score of less than 13 suffer a -1 penalty to hit with these weapons.
**.225 Submachine Gun**
Built on a bullpup frame (one where the magazine sits behind the grip), the .225 submachine gun, with its automatic firing mode, is a recent technical development intended as a tactical weapon.
**.225 Automatic Battle Rifle**
The standard-issue assault rifle of the A.P.O.C. military is the .225 battle rifle. While these weapons come in multiple configurations, this is the most common rifle worldwide; even the SRS issues a .225 combat rifle as its soldiers’ standard armament.
**.28 Hunting Rifle**
Based on munition designs common before the war, the .28 hunting rifle (any single-action variant of the .28 long gun) is the most common civilian firearm in the Federation and most of the frontier. The rifle can fire either the .28 Hunter or the .28 Shorty, and the design is remarkably reliable and sturdy.
**.66 Machinegun**
Variations on the .66 machine gun are used on vehicles and aircraft in the A.P.O.C., and some frontier manufacturers produce a man-portable heavy infantry variation. All versions of this weapon require the exotic weapon proficiency.
**Longshot .66 Semi-automatic Rifle**
A semi-automatic rifle designed to fire the .66 high caliber rounds, the Longshot is commonly equipped as a sniper rifle or light anti-armor rifle.
---
**Arms Gadgets**
Gadgets are modular components that can be applied to weapons to improve or alter their characteristics. All the generic weapons above can be equipped with gadgets. Existing weapon accessories, such as scopes, suppressors, and lights, are reprinted below to incorporate them into the gadget system.
**Alternate Weapon**
Some weapons are capable of serving multiple purposes by integrating two types of weapon into one. This can encompass everything from having a bayonet installed on a rifle to allowing a weapon to switch between two different energy types at any given time. When dealing with firearms and other ranged weapons, this usually involves only mixing like types; for example, energy weapons are only combined with energy weapons, and ballistic weapons are only combined with ballistic weapons. This is not a hard-and-fast limitation but rather a suggestion based on the logistics of designing such a weapon. When selecting the alternate weapon gadget, choose a second weapon. That weapon is integrated into the base weapon and can be used at any time. Additionally, you must choose at the time of purchase whether the alternate weapon may be physically separated from the base weapon. This gadget may be selected multiple times, each time adding a single additional weapon to the base model.
**Restrictions:** The character must also purchase the weapon to be integrated separately from the primary weapon, before the gadget modification is made.
**Purchase DC Modifier:** +4
---
**Collapsible**
In situations that call for stealth and deception, it is of great value to be able to separate an item into its parts and transport them in their broken down state. A weapon that makes use of the collapsible gadget is easily disassembled and reassembled at a moment’s notice. Breaking down a weapon into its individual parts requires a full-round action, while reassembling them in the correct order requires another full-round action. Obviously, the weapons must be fully assembled to be used. In its disassembled state, a weapon is not easily identified; a Knowledge (technology) check (DC 17) is required to identify a collapsed weapon for what it really is.
**Restrictions:** None.
**Purchase DC Modifier:** +2.
---
**Expanded Magazine**
Some weapon engineers recognize that stopping to reload a weapon in combat is a dangerous and potentially life-threatening maneuver. Taking steps to reduce the amount of time required to keep the weapon full, these engineers have increased the ammunition capacity of the weapon to reduce the frequency with which it must be reloaded. Any weapon with the expanded magazine gadget doubles its normal magazine capacity. This gadget may only be taken once per weapon.
**Restrictions:** Ranged weapons only.
**Purchase DC Modifier:** +2.
---
**Illuminator**
An illuminator is a small flashlight that mounts to a firearm, freeing up one of the user’s hands. It functions as a standard flashlight.
**Restrictions:** None.
**Purchase DC Modifier:** +2.
**Ported Barrel**
Carefully engineered barrels and firing chambers can recapture much of the recoil energy and use it to increase the pressure in the barrel behind the bullet, noticeably increasing the round’s muzzle exit velocity. Unfortunately, this is exclusive to medium-size or larger weapons and cannot be applied to revolvers. The ported barrel gadget increases the weapon’s range increment by 10 ft.
**Restrictions:** Semi-automatic weapons only, medium or larger size only.
**Purchase DC Modifier:** +2.
**Recoil Compensator**
A series of springs, weights, and servos absorbs a significant amount of the recoil from this weapon, allowing for easier handling and better control. A weapon that features recoil compensation gains no benefit on the first attack in a round, but grants a +1 bonus to attack on additional attacks made in the same round as part of a full attack action.
**Restrictions:** Ranged weapons only.
**Purchase DC Modifier:** +4.
**Scope**
A scope is a sighting device that makes it easier to hit targets at long range. However, although a scope magnifies the image of the target, it has a very limited field of view, making it difficult to use. A standard scope increases the range increment for a ranged weapon by one-half (multiply by 1.5). However, to use a scope a character must spend an attack action acquiring his or her target. If the character changes targets or otherwise loses sight of the target, he or she must reacquire the target to gain the benefit of the scope.
**Restrictions:** Ranged weapons only.
**Purchase DC Modifier:** +4.
**Suppressor**
A suppressor fits on the end of a firearm, capturing the gases traveling at supersonic speed that propel a bullet as it is fired. This eliminates the noise from the bullet’s firing, dramatically reducing the sound the weapon makes when it is used. For handguns, the only sound is the mechanical action of the weapon (Listen check, DC 15, to notice). For longarms, the supersonic speed of the bullet itself still makes noise. However, it’s difficult to tell where the sound is coming from, requiring a Listen check (DC 15) to locate the source of the gunfire.
**Restrictions:** Ranged weapons only.
**Purchase DC Modifier:** +6.
Armor
The following suits of armor are common throughout the world. Combat armors like the tactical assault armor and hard armor are issued to troops by the A.P.O.C. and SRS, whereas the far more personalized hunter-style armor is used primarily by frontiersmen. Because armor offers skilled fighters only a low level of protection, heavy suits of armor are practically unheard of. After all, decreased mobility is meaningless against ranged touch attacks, which most offensive spells are.
Hard Armor
Common scout and mobile unit armor, Hard Armor provides less protection than assault types, but also allows increased mobility. Most light combat armors consist of a reinforced blast vest, shoulder and upper arm pads, thigh and abdomen pads, and kneepads. Some light combat armors also include helmets and visors, though not all incorporate this aspect of the armor.
Hunter Armor
Popular with Deviants and those in the badlands, this armor is prepared from multiple layers of leather and Monster hides. Typically built upon modern ballistic fabrics and incorporating pieces of hard armor, bone, and scale, Hunter armor takes its name from the fact that many hunters wear trophies of their kills.
Tactical Assault Armor
Designed for heavy warfare and dangerous situations, TAA is a medium combat armor that covers the user almost head to toe in armor plating. This armor comes with a helmet that fits snugly on the head and does not interfere with the soldier’s field of vision, and has multiple points to attach tactical optics.
Armor Gadgets
As with weapons, armor can feature a number of additional features depending on its manufacturer and purpose.
Camouflaged
Camouflaged armor is painted with camouflage patterns: woodland, desert, winter (primarily white), urban (gray patterned), and black are available. When worn in an appropriate setting, the armor grants a +2 bonus on Hide checks.
Restrictions: This armor is only camouflaged for one environment type.
Purchase DC Modifier: +1.
Easy On
Armor with this feature can be quickly donned and removed. Fastenings, snaps, and zippers let the armor come off very quickly, and go on just as fast. A character who is proficient can remove or put on this armor as a full-round action.
Restrictions: None.
Purchase DC Modifier: +2.
| Medium Armors | Type | Equip. Bonus | Non prof. Bonus | Max Dex Bonus | Armor Penalty | Speed 30 ft. | Weight | Purchase DC |
|---------------------|----------|--------------|-----------------|---------------|---------------|--------------|--------|-------------|
| Hard Armor | Tactical | +3 | +1 | +5 | -2 | 30 ft. | 6 lb. | 12 |
| Hunter Armor | Archaic | +4 | +1 | +4 | -4 | 30 ft. | 10 lb. | 14 |
| Tactical Assault Armor | Tactical | +6 | +2 | +2 | -5 | 20 ft. | 28 lb. | 19 |
INTEGRATED EQUIPMENT
A particular piece of nonweapon equipment has been integrated into the armor and can be used by the armor’s wearer at any time. This gadget is often used to add features such as Lenz harnesses, glow-lamps, or radios to the armor, though it is not limited to those applications. When selecting the integrated equipment gadget, choose a piece of equipment. That equipment is integrated into the base armor and can be used at any time. Additionally, you must choose at the time of purchase whether the equipment may be physically separated from the base armor. This gadget may be selected multiple times, each time adding a single additional piece of equipment to the base model.
Restrictions: The character must also purchase the piece of equipment to be integrated separately from the armor, before the gadget modification is made.
Purchase DC Modifier: +2.
MASSIVE IMPACT DAMPENING
This armor incorporates strategically placed non-Newtonian fluid absorption pads to lessen the impact of massive damage and falls. This gives the wearer a +2 to save vs. Massive Damage and Falling damage.
Restrictions: Medium or Heavy armor.
Purchase DC Modifier: +2.
MANA HARDENING
Portions of this armor’s material were exposed to raw mana to temper it against those energies. Such armor provides a +2 bonus on saves vs. Mana exposure and Mana fountains.
Purchase DC Modifier: +2.
**Power Armor**
Powered Armor, sometimes called Dragoons, is exclusive to the highest tech regions. These machines offer tremendous protection and power, but are often fueled by Mana and require special training to use effectively. Regardless of the make or model, all Powered Armor offers a number of unique benefits above and beyond simple protection to characters who are proficient in their use:
- **Augmented Strength**: A Powered suit provides a +4 equipment bonus to the wearer’s strength.
- **Movement Bonus**: All powered suits provide a modest 10 ft. speed bonus that offsets the weight.
- **Sensor Systems**: In addition to a short wave radio comm and primitive heads up display, all powered armor suits feature some basic sensor systems that provide a +1 to Search, Spot, and Listen.
- **Environmental Filters**: These armors are designed to filter out many environmental hazards such as poison gases, radiation, bright lights, and ordinary heat and cold. Against such effects the armor provides a +2 equipment bonus to saving throws.
- **Power supply**: Powered armor runs on Mana canisters. The average suit will operate for about 6 minutes per charge, meaning an industrial canister can power the suit for 10 hours.
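The power-supply arithmetic is easy to verify: at roughly 6 minutes per charge, an industrial canister’s 100 charges (see Mana Cartridges below) yield 600 minutes, or 10 hours. A one-line sketch, with invented names, for anyone tracking fuel at the table:

```python
def operating_minutes(charges, minutes_per_charge=6):
    # Powered armor runs about 6 minutes per Mana charge.
    return charges * minutes_per_charge

# An industrial canister (100 charges) powers a suit for 600 minutes = 10 hours.
```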
---
**Dragoon Mark II**
The first powered armor was a huge, slow, over-armored personal unit deployed in the great war by Jurai to make the best use of their limited resources. Its tank-like armor combined with infantry-like mobility actually made it a viable weapon. Since that time the technology has improved drastically. The first Mark II Dragoons began production in ‘58, specifically for deployment as deep-field operations units. The Mark II is a sleek, effective, and powerful weapons platform.
**Heavy Environmental Exoskeleton Assault Armor**
Built upon the original Dragoon Mark I design, the HEX Assault Armor was built to resist Miasma and operate in Miasma-drenched areas. Unfortunately, it proved to offer only limited protection against the fog, and as such, units began to appear all over the black market. With the technology falling into the hands of the SRS, the Federation, and other lesser powers, knock-offs and modern versions began to appear on the market. The added environmental protections in HEX armor provide a +2 bonus to saves against Miasma, and Resistance to Fire and Cold of 5.
**Mobile Artillery Armor**
The latest models from the A.P.O.C. and SRS are classified as mobile artillery platforms. A character proficient with this armor can use Huge-size weapons without penalty, though the effort to use the weapon does not change. Though slow, the armor supports its own weight; for the purposes of carrying capacity and encumbrance, the wearer treats their Strength as 10 points higher (not counting the +4 equipment bonus from powered armor) than their actual score.
| Powered Armors | Type | Equip. Bonus | Non prof. Bonus | Max Dex Bonus | Armor Penalty | Speed 30 ft. | Weight | Purchase DC |
|-------------------------|--------|--------------|-----------------|---------------|---------------|--------------|--------|-------------|
| Dragoon Mark II | Power | +10 | +3 | +2 | −6 | 20 ft.* | 30 lb. | 19 |
| H.E.X. Assault Armor | Power | +9 | +3 | +1 | −7 | 15 ft.* | 40 lb. | 17 |
| Mobile Artillery Armor | Power | +13 | +1 | 0 | −10 | 10 ft.* | 68 lb. | 22 |
Standard Ammunition
While the basic principles of firearm ammunition on Ghestal are recognizable, the calibers and compositions are not. Following the war, munition types were standardized by the A.P.O.C. to fit the weapons they could mass-produce. This led to a heavily standardized model that new companies followed. Below is a table outlining munitions and cost per box.
| Ammunition Type | Purchase DC |
|----------------------------------|-------------|
| .225 Caliber (20 box) | 4 |
| .28 Hunter (20 box) | 4 |
| .28 Shorty (20 box) | 4 |
| .35 Caliber (50 box) | 5 |
| .35 Special (20 box) | 5 |
| .55 Magnum (20 box) | 6 |
| .66 Caliber (50 box) | 6 |
**.225 Caliber**
This is the standard rifle ammunition of A.P.O.C. forces and will fit most automatic and combat rifles. The round has a large casing and a small, long bullet designed for fully automatic weapons.
**.28 Caliber**
Generally, the .28 Hunter rounds are 0.28x1.15 rifle cartridges common in single-action long guns from the Dracian Federation. These are common in the frontier because they are simple to manufacture and fit dozens of different rifles. A pistol version of the .28 caliber called the Shorty is manufactured for light revolvers in the Federation but is unheard of in the A.P.O.C.
**.35 Caliber**
Primarily manufactured in the A.P.O.C. as a handgun round, these are probably the most popular munitions in the world. Considered a “universal round,” the cartridge is based on an older Ridmar design. The .35 Special is a longer casing with a higher charge designed for revolvers, used primarily in the frontier because of its high stopping power. The longer shell and higher recoil prevent its use in semi-automatic weapons.
**.55 Magnum**
Billed by manufacturers as the Personal Anti-monster Munition, the .55 is a massive cartridge designed for the largest handguns on the market. The side arms of SOLDAT fire .55 rounds, and revolvers that fire the rounds are very popular in the Deviant Corps.
**.66 Caliber**
The .66 round is a standard heavy armor-piercing round used by ordnance machineguns on vehicles and the largest man-portable machineguns, as well as semi-automatic long-range sniper rifles.
**Manatechnology**
Manatechnology refers to any device that uses Mana as a fuel source or carries an active Mana Current. Circulating mana in a device can produce a number of effects, mechanical force being the least of them. Weapons and armor crafted to incorporate Lenz, and either a mana circuit or mana consumption, are known as Syntech. Characters can create Syntech equipment provided they have the appropriate feat. Syntech weapons and armor are one of the only defenses against horrors and monsters that cannot be harmed by conventional weapons.
**Mana Cartridges**
Most Mana-powered devices take Mana Cartridges. Most consumer cartridges hold between 15 and 50 charges. Cartridges are usually not expended unless the device consumes the mana, as a caster does. The normal rules for reloading are used when reloading a device with a mana cartridge.
| Mana Cartridge | Size | Purchase DC |
|----------------------|--------|-------------|
| Light Cartridge | Tiny | 2 |
| Heavy Cartridge | Small | 6 |
| Industrial Cartridge | Medium | 10 |
**Light Cartridge**
Light cartridges have 15 charges and fit most consumer casters and harnesses. They are consumable and cannot be refilled, but they can be recycled for their components.
**Heavy Cartridge**
More common in the militaries and Deviant Corps, Heavy Cartridges have 30 charges, and are designed for most military equipment. These standard cartridges are designed to be recharged.
**Industrial Cartridge**
These large canisters are used to power personal transports and power-tools. Such reusable canisters have 100 charges, but are also volatile and can explode if damaged.
**Lenz Harnesses**
Devices used to affix Lenz to a Lenz Master, harnesses come in many forms and serve many purposes. A Harness has one to three sockets for Lenz; any Lenz will fit, as they are uniform in size.
| Lenz Harness | Size | Lenz Sockets | Purchase DC |
|------------------|--------|--------------|-------------|
| Belt | Medium | 3 | 12 |
| Bracelet | Tiny | 1 | 4 |
| Bracer | Small | 2 | 8 |
| Chest piece | Medium | 3 | 12 |
| Crown | Small | 2 | 8 |
| Necklace         | Tiny   | 1            | 4           |
CASTERS
A caster is a Syntech weapon that resembles a firearm, yet rather than launching a projectile, it shoots magic. To function, casters incorporate a bolt Lenz that is exposed to an electric charge and mana; circuitry and mechanisms then focus and direct the resulting bolt of mystical energy. The damage, range, and type depend on the Lenz used in the creation. The type of caster also affects the purchase cost.
Because casters release a magical attack, the target is entitled to Spell Resistance; casters are considered to have a caster level of 1. When a caster bolt hits a creature with Spell Resistance, the caster’s user rolls 1d20+1 to overcome the target’s Spell Resistance. If the roll fails, the bolt does not damage the target.
A caster does not benefit from feats such as Spell Penetration.
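The Spell Resistance check above can be expressed as a short Python sketch (function name invented for the example; the fixed +1 reflects the caster level of 1, with no Spell Penetration bonus):

```python
import random

def caster_overcomes_sr(spell_resistance):
    # Casters have an effective caster level of 1: roll 1d20 + 1
    # against the target's Spell Resistance.
    roll = random.randint(1, 20) + 1
    # On a failure, the bolt deals no damage to the target.
    return roll >= spell_resistance
```

Note that the roll always totals between 2 and 21, so a caster automatically penetrates SR 2 or less and can never penetrate SR 22 or more.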
Casters are by nature somewhat modular, each with a number of configurations based on its components: The Frame, the Lenz, and any gadgets. When creating a caster select a Caster Frame to determine the size, range, and ammo capacity, and select the Lenz to determine the damage and the effect of Critical Hits. As with standard equipment, Gadgets can be added to the weapon to modify its performance.
Caster Frames
There are three models of Caster Frames: Light, Tactical, and Assault. Each has its own purpose and advantages over the others.
LIGHT CASTER
A light caster is a one-handed, pistol-like weapon that fires a medium-range bolt of energy. Because light casters can be used one-handed, the user can hold a small weapon in the off hand and make two-weapon attacks normally. Because light casters are Medium-sized, however, it takes special skill to use two casters at the same time.
TACTICAL CASTER
The rifle-like tactical caster is a two-handed weapon designed for heavy combat and military exploits. The major differences between a light caster and a tactical caster are the mana canister size and range. Tactical casters come in a number of configurations, from close-combat models to extended-range sniper casters.
ASSAULT
The heavy weapon among casters, the assault caster is a staff-like weapon designed for heavy engagements. The assault caster can hold up to three separate Lenz in the chamber, although only one Lenz can be fired at a time; changing the active Lenz is a move action.
Caster Lenz
| Caster Lenz | Damage | Damage Type | Purchase DC |
|-----------------|--------|-------------|-------------|
| Fire Bolt | 2d6 | Fire | +2 |
| Force Bolt | 2d4 | Force | +2 |
| Ice Bolt | 1d8 | Cold | +2 |
| Lightning | 2d6 | Electric | +3 |
| Mana bolt | 2d8 | Energy | +5 |
| Radiant Bolt | 4d6 | Energy | +8 |
| Caster Type   | Damage  | Critical | Rate of Fire | Range Increment | Cartridge | Size   | Weight | Purchase DC |
|---------------|---------|----------|--------------|-----------------|-----------|--------|--------|-------------|
| Light         | By Lenz | 20       | S            | 30 ft.          | Light     | Medium | 4 lb.  | 15          |
| Tactical      | By Lenz | 20       | S            | 60 ft.          | Heavy     | Large  | 9 lb.  | 17          |
| Assault       | By Lenz | 20       | S            | 80 ft.          | Heavy     | Large  | 18 lb. | 20          |
SYNTECH EQUIPMENT
Syntech Enchantments
Syntech Enchantments are special gadgets that can be added to any conventional melee weapon, building a Lenz and a mana circuit into the weapon. The result effectively enchants the weapon, enabling it to overcome damage reduction, like that possessed by horrors. Each Syntech Enchantment is a specific weapon modification, and a weapon can have only one.
ASSAULT
Purchase DC: 17
Component: Mana, Fire, Ice, or Lightning Bolt Lenz
An assault enchantment surrounds the weapon in an aura of energy: cold, electricity, fire, or force. This aura adds 1d6 energy damage to all attacks made with the weapon and has an additional effect on a successful critical hit:
Cold Assault: Target's speed is reduced by 10 feet for 1 round, to a minimum speed of 5 feet (multiple critical hits on the same creature stack).
Electricity Assault: Target is dazzled for 1 round.
Fire Assault: Target takes an additional 1d6 points of fire damage at the beginning of the following round (multiple critical hits on the same creature can increase the next round's damage beyond 1d6).
Force Assault: Large and smaller targets are knocked prone on a critical hit. Huge and bigger creatures suffer a -1 penalty to their initiative on the next round.
BANISHING
Purchase DC: 22
Component: Rebuke Lenz or +1 Cure Lenz
A weapon with the Banishing Syntech Enchantment is augmented against undead. When such a horror is struck with a Banishing weapon, it takes an extra 1d6 points of damage. Further, the weapon can deliver sneak attacks and critical hits against undead as if they were living creatures!
BRIGHT
Purchase DC: 19
Component: Bright Lenz
The enchantment sheds bright illumination in a 20-foot radius and shadowy illumination for 20 feet beyond that. On a successful critical hit, a Bright weapon releases a Blinding Flash: anyone save the wielder within 20 feet must make a DC 14 Reflex save or be blinded for 1d4 rounds.
LIFE DRINKING
Purchase DC: 26
Component: Drain Lenz
Each hit against a living target with a Life Drinking Syntech weapon deals an extra 1d4 damage and heals the wielder the same amount. On a critical hit, this damage, and the amount healed, are doubled. A life drinker can restore up to 40 hit points per day in this manner.
RADIANT
Purchase DC: 24
Component: Radiant Lenz or +1 Fire Bolt Lenz
This Syntech adds 1d6 energy damage to damage rolls made with the weapon, and the weapon ignores hardness when damaging objects.
SHATTERMANTLE
Purchase DC: 24
Component: Seal Lenz
A shattermantle weapon damages a foe's spell resistance, like that possessed by monsters and horrors. Each time the weapon strikes a foe that has spell resistance, the value of that spell resistance is reduced by 2 for 1 round. The penalties for multiple hits during the same round stack. For example, if you succeed on three attacks in the same round against the same foe, that foe's spell resistance is reduced by 6 until the beginning of your next turn.
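The stacking in the shattermantle example can be captured in a small sketch (function name invented for the illustration):

```python
def effective_sr(base_sr, shattermantle_hits):
    # Each shattermantle hit this round lowers the target's Spell
    # Resistance by 2 until the start of the attacker's next turn;
    # penalties from multiple hits in the same round stack.
    return max(0, base_sr - 2 * shattermantle_hits)

# Three hits against a foe with SR 18 leave it at SR 12 for the round.
```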
**Spelleater**
**Purchase DC:** 25
**Component:** Dispel Lenz
A weapon fitted with the Spelleater Syntech enchantment can be used to attack magical effects on a target. On a successful strike against a target, the target is subject to a dispel, using the character’s attack roll. The character must declare they are attempting to dispel, and if the attack roll fails, so does the dispel. This Enchantment can be activated up to three times per day, success or fail. Each successful dispel recovers one of those uses, however.
**Vicious**
**Purchase DC:** 20
**Component:** Fire Bolt Lenz
A Vicious weapon has double the threat range of a normal weapon of its type. For example, a vicious quarterstaff scores a threat on a roll of 19-20, and a vicious heavy flail scores a threat on a roll of 17-20. This effect doesn’t stack with any other effect that expands a weapon’s critical threat range.
**Witchlight**
**Purchase DC:** 15
**Component:** None
Named for the burning mana phenomenon seen around many monsters, a Witchlight weapon does not include a Lenz. Instead, the device consumes Mana from a mana canister to produce an aura of Witchlight around the weapon. Each charge activates the aura for one round. While the Witchlight aura is active, the weapon deals 1d4 points of mana damage and bypasses damage reduction.
**Wounding**
**Purchase DC:** 24
**Component:** Wound Lenz
A weapon equipped with the Wounding Syntech enchantment causes severe bleeding. Each strike with the weapon causes 1d4 additional damage to the target the following round. Undead and constructs are immune to this effect.
Syntech Arms
Beyond creating augmenting enchantments for a weapon, Syntech can be used to create a few specific mana-fueled weapons. These devices imitate technology well beyond the capability of the world’s scientific minds and are possible only through the power of magic.
Beam Sword
Born from attempts to create a Radiant Syntech enchantment, the beam sword arose when enterprising engineers realized they could contain a beam of energy in a specialized mana field. Beam swords use two Lenz in their construction, making them very expensive; however, they are devastating melee weapons.
Giant’s Weapon
This is a huge sword, axe, or hammer equipped with a Flight Lenz that allows the weapon to be wielded by a Medium-sized creature.
Plasma Rifle
Plasma occurs when gases become electrically charged after losing electrons. Plasma weapons condense this electrically charged gas into a destructive force that can eat through solid objects and cause severe damage. Most plasma weapons generate their destructive ammunition by combining force or pressure, mana, and bolts from a Fire Lenz.
| Syntech Melee Weapon | Damage | Critical | Damage Type | Weapon prof. | Size | Weight | Purchase DC |
|----------------------|--------|----------|-------------|--------------|------|--------|-------------|
| Beam Sword | 2d8 | 20/x3 | Fire | Exotic | S | 2 | 20 |
| Giant’s Weapon | 3d6 | 19-20 | Impact | Archaic | H | 45 | 16 |
| Ranged Weapon        | Damage | Critical | Rate of Fire | Range Increment | Cartridge | Size   | Weight  | Purchase DC |
|----------------------|--------|----------|--------------|----------------|-----------|--------|---------|-------------|
| Plasma Rifle | 3d10 | 20 | S, A | 80 feet | Heavy | Large | 8 lb. | 22 |
| Rail Gun | 3d12 | 20 | S | 100 feet | 20 box | Large | 18 lb. | 26 |
| X-Ray Laser | 5d12 | 20 | Single | 200 feet | Industrial| Huge | 28 lb. | 30 |
MECHANETICS
Mana technology and magical healing allow for a number of technological feats that far exceed the general level of technical understanding. One of these is Mechanetics: mechanical prosthetics that can replace, and even augment, parts of a person. Members of SOLDAT receive considerable skeletal enhancements, for example. Unlike the cybernetics of more advanced civilizations, Mechanetics are crude mechanical attachments, more like armor filled with mechanical parts than the advanced prosthetics of the future. In Deviant Evolution, there are few or no discrete implants, only larger, more obvious attachments. It is obvious when one possesses a Mechanetic; hiding or concealing one requires a DC 17 Disguise check.
Construction and Repair
Mechanetic attachments are complex instruments with both electrical and mechanical components. Consequently, a character must have the Craft Mechanetics feat (see page 12) to build a Mechanetic attachment.
Repairing a damaged or nonfunctional Mechanetic attachment requires 10 hours of work and a successful Repair check (DC 25). A character needs both an electrical tool kit and a mechanical tool kit to facilitate repairs. Without one or the other, a character takes a -4 penalty on the check; without both kits, the penalty increases to -8.
Attachment and Removal
Installing or removing a Mechanetic attachment, regardless of whether it is a replacement or enhancement, requires a successful Treat Injury check. A character with the Mechanetic Surgery feat suffers no penalty on the check (see the feat’s description on page 12). Removing a Mechanetic attachment without proper surgery causes lasting physical trauma to the patient’s body, dealing 1d4 points of permanent Constitution drain.
Tolerance
Only living creatures can have Mechanetic attachments. In addition, a living creature can have a maximum number of Mechanetic attachments equal to 1 + the creature's Constitution modifier (minimum 0). For example, a creature with a Constitution of 14 (+2 modifier) can have a maximum of three Mechanetic attachments, while a creature with a Constitution of 9 (-1 modifier) can bear none.
A creature may have more Mechanetic attachments installed on its body than it can bear. However, the creature gains 1 negative level per Mechanetic attachment that exceeds its maximum allowed. For each negative level, the creature takes a -1 penalty on all skill checks and ability checks, attack rolls, and saving throws, and loses one effective level or Hit Die whenever level is used in a die roll or calculation. Negative levels caused by having too many Mechanetic attachments remain until the character has the offending attachment(s) removed.
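The tolerance and overload rules above reduce to simple arithmetic. A minimal sketch in Python (the function names are my own, not from the rules):

```python
def max_attachments(con_modifier: int) -> int:
    """Maximum Mechanetic attachments a creature can bear:
    1 + Constitution modifier, minimum 0."""
    return max(0, 1 + con_modifier)

def negative_levels(installed: int, con_modifier: int) -> int:
    """Negative levels gained: 1 per attachment over the maximum allowed."""
    return max(0, installed - max_attachments(con_modifier))
```

For example, a Con 14 character (`con_modifier=2`) can bear three attachments; installing a fourth would impose one negative level until it is removed.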
Susceptibility to Attack
External Mechanetic attachments are subject to attacks as if they were objects worn by their recipients: see the rules for attacking objects on pages 149-150 of the d20 Modern Roleplaying Game.
Benefits of Mechanetic Conversion
When a character receives a Mechanetic spine plus two or more limbs, they begin to receive Mechanetic Conversion bonuses. Conversion benefits include increased strength, a bonus to Defense, improved resistance to massive damage, and damage reduction.
A Partial Conversion occurs when a character receives a Mechanetic spine plus two limbs, or a skeletal enhancement plus two limbs.
A Total Conversion character has received four limbs plus a spine, or four limbs plus a skeletal fortification. Because so much of the character is now mechanical, they suffer from reduced healing: only two-thirds of their hit points can be recovered naturally or through magical healing. The rest is considered damage to their components, which requires a Mechanetic repair check as well as replacement components.
| Type | Str. Bonus | Defense Bonus | Massive Damage Save | Damage Reduction |
|--------|------------|---------------|---------------------|-----------------|
| Partial| +1 | +0 | +1 | +1/- |
| Total | +2 | +1 | +3 | +3/- |
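The conversion classifications and the bonus table above can be sketched as a small lookup. A hypothetical helper (names are mine; it assumes the spine and the skeletal enhancement/fortification are interchangeable for qualification, as the rules above suggest):

```python
# Conversion bonuses from the table above, keyed by conversion type.
CONVERSION_BONUSES = {
    "partial": {"str": 1, "defense": 0, "massive_damage_save": 1, "damage_reduction": 1},
    "total":   {"str": 2, "defense": 1, "massive_damage_save": 3, "damage_reduction": 3},
}

def conversion_type(limbs: int, has_spine: bool, has_skeleton: bool):
    """Classify a character's Mechanetic conversion per the rules above.

    Returns "partial", "total", or None if the character does not qualify.
    """
    if limbs >= 4 and (has_spine or has_skeleton):
        return "total"
    if limbs >= 2 and (has_spine or has_skeleton):
        return "partial"
    return None
```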
Mechanetic Prosthetics
These machine limbs replace lost biological counterparts. The limbs have no inherent bonuses on their own; however, in combination they can grant a number of bonuses (see Mechanetic Conversion, above).
Arm Upgrade
Benefit: This complicated mechanical augmentation attaches and replaces a missing arm. Its strength matches the original; alone it provides no special game benefits.
Type: External.
Hardness/Hit Points: 5/10.
Base Purchase DC: 19.
Restriction: None.
Leg Upgrade
Benefit: This mechanical limb upgrade duplicates the function of its biological counterpart but has a greater hardness and more hit points than a flesh limb. Its strength matches the original; it provides no special game benefits.
Type: External.
Hardness/Hit Points: 5/15.
Base Purchase DC: 19.
Restriction: None.
Extra Arm
Benefit: An extreme mechanical augmentation attaches a fully functional additional arm. Its strength matches the character’s Strength score. The arm does not give the character any extra attacks or actions per round, though the arm can wield a weapon and make attacks as part of the normal attack routine (using two-weapon fighting). The arm has its own slots for weapons or Lenz sockets. A character with two additional arms may make one extra offhand attack per round (per two-weapon fighting).
Type: External.
Hardness/Hit Points: 5/10.
Base Purchase DC: 22.
Restriction: None.
Spine Upgrade
Benefit: A mechanical attachment that replaces a damaged spinal column. This attachment is not subtle, but it is much more resistant to massive damage than its biological counterpart, providing a +3 bonus on Fortitude saves against massive damage.
Type: External.
Hardness/Hit Points: 5/15.
Base Purchase DC: 22.
Restriction: None.
Mechanetic Enhancements
Beyond simple prosthetics, some mechanetics can even be used to augment and enhance a warrior. These so-called mechanetic enhancements can provide a number of bonuses to characters with mechanetic prosthetics as well as those without.
Augmented Eye
A modified polished Lenz along with a number of sensors and components replaces the eye, giving the character the ability to see the emanations of mana.
**Benefit:** An Augmented Eye grants the character detect mana with a range of 30 feet as a constant ability. Lenz, Syntech, Monsters, and horrors all radiate an aura of Mana with a strength based on their level of power. There are no schools of magic to detect but the character can determine whether the mana is natural, artificial, or miasma in origin.
**Type:** Internal.
**Hardness/Hit Points:** -/2.
**Base Purchase DC:** 24
**Restriction:** None.
External Body Armor
External body armor consists of a series of rigid armor plates connected to the exterior of prosthetic limbs and the spine.
**Benefit:** The character gains a natural armor bonus to Defense. The bonus depends on the number of armored limbs: One limb or spine +1, two limbs +2, three limbs +4, four limbs +6, all limbs plus spine +8.
**Type:** External.
**Hardness/Hit Points:** -/varies. The armor has one-quarter the maximum hit points of the recipient.
**Base Purchase DC:** 12 per limb
**Restriction:** Military (+3).
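The armor-bonus schedule above is a straight lookup by number of armored pieces. A sketch that treats the spine as one piece alongside the four limbs (my reading, since the schedule mixes "limbs" and "spine"):

```python
# Natural armor bonus to Defense by number of armored prosthetic pieces,
# counting the spine as one piece: 1 -> +1, 2 -> +2, 3 -> +4, 4 -> +6, 5 -> +8.
ARMOR_BONUS = {0: 0, 1: 1, 2: 2, 3: 4, 4: 6, 5: 8}

def body_armor_bonus(armored_pieces: int) -> int:
    """Look up the natural armor bonus for the given number of armored pieces."""
    return ARMOR_BONUS[min(armored_pieces, 5)]
```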
External Lenz Mount
One of the recipient’s prosthetic limbs or their Mechanetic spine features a Lenz socket. A mana circuit connects the socket to the user’s nervous system.
**Benefit:** The user can change out Lenz as a standard action, just as with a normal harness.
**Special:** Normally Lenz Harnesses do not work when worn on prosthetics.
**Type:** External.
**Hardness/Hit Points:** 3/5 (mount only).
**Base Purchase DC:** 12
**Restriction:** Res (+2).
External Weapon Mount
The recipient’s prosthetic arm ends in a weapon instead of a hand.
**Benefit:** The recipient has a melee or ranged weapon attached to a prosthetic arm. Attempts to disarm the recipient of the attached weapon automatically fail, though the weapon can still be attacked (like any other weapon) in an attempt to destroy it.
**Type:** External.
**Hardness/Hit Points:** 10/5 (mount only).
**Base Purchase DC:** Melee weapon mount 12, ranged weapon mount 15 (the purchase DC does not include the prosthetic arm or weapon).
**Restriction:** Military (+3).
Fortified Skeleton
The recipient’s skeleton is fortified with high-impact polymers and metals, then healed magically, increasing his ability to shrug off physical damage considerably.
**Benefit:** The recipient gains damage reduction 4/-.
**Type:** Internal.
**Hardness/Hit Points:** -/varies. The skeletal reinforcement has one-quarter the maximum hit points of the recipient.
**Base Purchase DC:** 32.
**Restriction:** Military (+3).
**INTERNAL LENZ MOUNT**
One of the recipient's prosthetic limbs or their mechanetic spine features a Lenz socket. A mana circuit connects the socket to the user's nervous system. An internal Lenz mount cannot be changed out without a repair check (DC 20), as it is integrated into the machine.
**Special:** Spotting an internal Lenz mount requires a successful Spot check opposed by the recipient's Disguise check (with a +10 bonus).
**Type:** Internal.
**Hardness/Hit Points:** 10/5 (mount only).
**Base Purchase DC:** 19.
**Restriction:** Military (+3).
---
**INTERNAL TOOL MOUNT**
The recipient has a hidden piece of equipment or equipment kit embedded in her prosthetic forearm or hand. The tool extends from the prosthesis and is visible when in use. Often, a tool kit is designed so that each finger of a hand is a different tool.
**Benefit:** Attempts to disarm the recipient of the attached equipment automatically fail, and the equipment itself cannot be attacked unless it is extended. Extending or retracting the equipment is a free action. Spotting a subcutaneous piece of equipment requires a successful Spot check opposed by the recipient's Sleight of Hand check. The tool's size applies a modifier to the Sleight of Hand check (see Table 4–3: Concealing Weapons and Objects on page 95 of the d20 Modern Roleplaying Game).
**Type:** Internal.
**Hardness/Hit Points:** 10/5 (mount only).
**Base Purchase DC:** Melee weapon mount 15, ranged weapon mount 17 (the purchase DC does not include the prosthesis or weapon).
**Restriction:** Military (+3).
---
**MECHANETIC AUGMENTATION**
This is a series of modifications and augmentations to a mechanetic arm or leg, making it stronger.
**Benefit:** If attached to a prosthetic leg, the prosthetic enhancer increases the recipient's base speed by +5 feet. In addition, any unarmed attack made with an enhanced prosthetic leg deals an additional 1 point of damage.
If attached to a prosthetic arm, the prosthetic enhancer grants a +2 bonus on Strength and Dexterity-based ability checks and skill checks. In addition, any unarmed attack made with an enhanced prosthetic arm deals an additional 1 point of damage.
**Special:** Each Mechanetic limb can receive this Augmentation. The effects stack.
**Type:** Internal.
**Hardness/Hit Points:** -/2.
**Base Purchase DC:** 22.
**Restriction:** None.
**Power Augmentation**
This augmentation can only be applied to characters with two or more Mechanetic limbs, or a mechanetic spine and Fortified Skeleton.
**Benefit:** A character that receives a Power Augmentation has their Strength score increased by +2. This is an equipment bonus, and it is lost if one or more of the augmented limbs is damaged.
**Type:** Internal.
**Hardness/Hit Points:** -/4.
**Base Purchase DC:** 22.
**Restriction:** Res(+2).
---
**Speed Augmentation**
This augmentation can only be applied to characters with two or more Mechanetic limbs, or a mechanetic spine and Fortified Skeleton.
**Benefit:** A character that receives a Speed Augmentation has their Dexterity score increased by +2. This is an equipment bonus, and it is lost if one or more of the augmented limbs is damaged.
**Type:** Internal.
**Hardness/Hit Points:** -/4.
**Base Purchase DC:** 22.
**Restriction:** Res(+2).
VEHICLES
Vehicles are described by a number of statistics in the d20 Modern core rulebook. However, most of the vehicles described in that book are simply not available on Ghestal. Below are a number of vehicles that are common throughout the world.
AIRSHIPS & PLANES
Civilian aircraft are not uncommon, as airships and planes are essential for crossing significant portions of the planet. All the aircraft presented below are governed by the Pilot skill.
AUTOS
Civilian land vehicles are particularly common outside the city-structures with advanced public transit. Land autos are governed by the Drive skill. Vehicles such as APCs and land cruisers require the Surface Vehicle Operation feat to use effectively.
| Name | Crew | Pass | Cargo | Init | Manvr. | Top Speed | Defense | Hardness/Hit Points | Size | Cost DC |
|-----------------------|------|------|-----------|------|--------|-------------|---------|---------------------|------|---------|
| **Aircraft** | | | | | | | | | | |
| Airtank Quadcopter | 2 | 10 | 1,000 lb. | -4 | -4 | 200 (20) | 6 | 15/45 | H | 36 |
| APOC Enforcer Airship | 7 | 25 | 8 tons | -4 | -4 | 100 (10) | 4 | 10/120 | E | 44 |
| APOC Leviathan Airship| 30 | 91 | 50 tons | -6 | -6 | 140 (14) | 2 | 20/300 | E | 55 |
| APOC Wyvern | 1 | 3 | 120 lb. | -4 | -2 | 210 (21) | 8 | 4/28 | H | 30 |
| Corvette Airship | 4 | 20 | 6 tons | -4 | -4 | 80 (8) | 4 | 5/30 | E | |
| Dracian Ranger | 1 | 4 | 250 lb. | -4 | -4 | 245 (25) | 6 | 5 | G | 28 |
| Ironhawk Carrier Jet | 2 | 13 | 5,000 lb. | -4 | -4 | 1100 (110) | 10 | 5 | G | 36 |
| **Autos** | | | | | | | | | | |
| Armored truck | 2 | 2 | 3,600 lb. | -2 | -2 | 175 (17) | 8 | 10/36 | H | 34 |
| Off-road truck | 1 | 3 | 1,700 lb. | -2 | -2 | 175 (17) | 8 | 5/36 | H | |
| APOC Tactical Auto | 1 | 3 | 600 lb. | +0 | +0 | 185 (18) | 8 | 8/34 | H | |
| Urban Motorcycle | 1 | 1 | 0 lb. | -1 | +1 | 275 (27) | 9 | 5/22 | L | 26 |
| Wasteland motorcycle | 1 | 1 | 0 lb. | +0 | +2 | 165 (16) | 10 | 5/18 | L | 23 |
| **Land Cruisers** | | | | | | | | | | |
| Tactical tracked tank | 4 | 2 | 425 lb. | -4 | -4 | 80 (8) | 6 | 20/64 | G | 47 |
| Mobile Command Carrier| 16 | 30 | 14 tons | -6 | -8 | 75 (8) | 4 | 20/200 | G | 55 |
| Dreadnought land Carrier | 30 | 60 | 50 tons | -6 | -10 | 140 (14) | 2 | 35/300 | E | 55 |
LENZ
Mystical orbs of crystallized mana. A Lenz can be identified with a successful DC 25 Lenz Arts check, or a DC 20 Knowledge (Lenz and Mana) check. Most consumer Lenz are sold already identified.
Interestingly, it is virtually impossible to predict or control the type of Lenz produced in a mana reactor. Simply because a Lenz can be readily produced does not mean it can always be purchased; some Lenz are restricted or very rare. A Lenz has a hardness of 4 and 2 hit points. While not extremely fragile, it cannot be cut or shaped, and it bursts in blinding light when destroyed, leaving no trace behind.
| Spell Lenz | Purchase DC | Availability |
|---------------------|-------------|--------------|
| Armor | 14 | Common |
| Apocalypse | 34 | Limited |
| Blessing | 16 | Common |
| Confusion | 24 | Limited |
| Cure | 14 | Common |
| Curse | 18 | Rare |
| Dispel | 26 | Rare |
| Dominion | 26 | Rare |
| Drain | 22 | Rare |
| Earthquake | 30 | Limited |
| Fire Bolt | 14 | Common |
| Flight | 26 | Rare |
| Force Bolt | 16 | Common |
| Haste | 18 | Common |
| Heal | 20 | Rare |
| Hex | 16 | Rare |
| Ice Bolt | 14 | Common |
| Lightning Bolt | 16 | Common |
| Mana bolt | 18 | Rare |
| Mirage | 22 | Rare |
| Petrify | 26 | Rare |
| Radiant Bolt | 24 | Rare |
| Restoration | 30 | Limited |
| Seal | 28 | Rare |
| Shell | 22 | Common |
| Shield | 20 | Common |
| Sleep | 22 | Common |
| Slow | 18 | Common |
| Stun | 24 | Common |
| Storm | 26 | Limited |
| Thunder | 24 | Limited |
| Wall of Mana | 28 | Limited |
| Ward | 20 | Common |
| Wound | 24 | Rare |
| Ability Lenz | Purchase DC | Availability |
|---------------------|-------------|--------------|
| Strength | 25 | Rare |
| Dexterity | 25 | Rare |
| Constitution | 25 | Rare |
| Intelligence | 25 | Rare |
| Wisdom | 25 | Rare |
| Charisma | 25 | Rare |
| Toughness | 30 | Limited |
| Power Lenz | Purchase DC | Availability |
|---------------------|-------------|--------------|
| Animal Empathy | 29 | Limited |
| Aura of Protection | 26 | Rare |
| Bright | 20 | Rare |
| Familiar Bond | 30 | Limited |
| Fear | 22 | Rare |
| Immortality | 40 | Limited |
| Quickening | 26 | Rare |
| Rage | 22 | Rare |
| Rebuke | 28 | Rare |
| Sense | 20 | Rare |
| Shroud | 28 | Limited |
| Soulbind | 30 | Limited |
| Spirit Walk | 30 | Limited |
| Witchlight | 22 | Rare |
| Dragon Lenz | Purchase DC | Availability |
|---------------------|-------------|--------------|
| Dragon Call | 48 | Limited |
| Dragon Fire | 42 | Limited |
| Dragon Soul | 54 | Limited |
| Ragnarök | 44 | Limited |
| Black Lenz | Purchase DC | Availability |
|---------------------|-------------|--------------|
| Conjure Horror | 28 | Limited |
| Command | 26 | Limited |
**Common** Lenz are regularly manufactured in mana reactors and are relatively easy to come by at any major outfitter or commissary. On the frontier they are the only Lenz available, if any are at all.
**Rare** Lenz are highly sought after, expensive, and probably only available from boutique stores or directly from the manufacturer at auction.
**Limited** Lenz are one in a million. They are generally not simply for sale and carry a considerable price tag. If anything, these Lenz go to the military, never to consumers.
---
### SPELL LENZ
The most common kind of Lenz fashioned in mana reactors, and arguably the most useful, spell Lenz can be used to cast spells. Spell Lenz share the same descriptive components, listed below. Generally, a Lenz with a bonus (+1, +2, etc.) is more powerful; this bonus is often referred to as the Lenz level. Remember, the maximum bonus is +5 (excluding Facets).
**Name:** The name of the specific Lenz, followed by a brief description. Lenz that are more difficult to activate are noted in italics under the name.
**Effect:** The effects of the Lenz. This description will include the effects of +1 and higher Lenz. If the Lenz has any special Facets, it will be listed here.
**Range:** The range of the spell.
**Personal:** The spell affects only you.
**Touch:** You must touch a creature or object to affect it. A touch spell that deals damage can score a critical hit just as a weapon can. A touch spell threatens a critical hit on a natural roll of 20 and deals double damage on a successful critical hit.
**Close:** The spell reaches as far as 25 feet away from you. The maximum range increases by 10 feet for every +1 of the Lenz.
**Medium:** The spell reaches as far as 100 feet + 25 feet per +1 of the Lenz.
**Long:** The spell reaches as far as 400 feet + 100 feet per +1 of the Lenz.
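The three distance-based range categories above reduce to linear formulas in the Lenz bonus. A hypothetical calculator (the function name is mine):

```python
def spell_range(category: str, lenz_bonus: int = 0) -> int:
    """Maximum spell range in feet for a given range category and Lenz bonus.

    Close:  25 ft + 10 ft per +1 of the Lenz.
    Medium: 100 ft + 25 ft per +1 of the Lenz.
    Long:   400 ft + 100 ft per +1 of the Lenz.
    """
    if category == "close":
        return 25 + 10 * lenz_bonus
    if category == "medium":
        return 100 + 25 * lenz_bonus
    if category == "long":
        return 400 + 100 * lenz_bonus
    raise ValueError("personal and touch ranges have no distance in feet")
```

So a +2 close-range Lenz reaches 45 feet, while a +5 long-range Lenz reaches 900 feet.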
**Duration:** How long the spell effect lasts.
**Timed Durations:** Many durations are measured in rounds, minutes, hours, or other increments. When the time is up, the magic goes away and the spell ends.
**Instantaneous:** The spell energy comes and goes the instant the spell is cast, though the consequences might be long-lasting.
**Concentration:** The spell lasts as long as you concentrate on it. Concentrating to maintain a spell is a standard action that does not provoke attacks of opportunity. Anything that could break your concentration when casting a spell can also break your concentration while you're maintaining one, causing the spell to end. See concentration.
**Saving Throw:** The type of saving throw allowed (Reflex, Will, or Fortitude) and the effect of a successful save (negates, half).
**Spell Resistance:** Whether or not spell resistance affects the spell.
---
### Armor
This spell Lenz generates an invisible but tangible protective field around the caster.
**Effect:** The protective armor grants a +3 Defense bonus while active. This bonus increases by 1 with each +1 level of the Lenz. This bonus does not stack with artificial armor; only the highest Defense bonus applies. However, the magical bonuses from such armor, if any, still count.
*Guardian (+1 Facet):* This spell can be cast on another creature by touch.
**Range:** Personal
**Duration:** 1 hour, +1 hour per Lenz bonus
**Saving Throw:** None
**Spell Resistance:** None
---
### Apocalypse
**Severe (DC 20)**
Activating this Lenz causes four burning mana meteors to rain from the sky around the caster.
**Effect:** Raising your hand to the sky, you call down four blue-green mana meteors in spots you select. If you aim a meteor at a specific creature, you may make a ranged touch attack to strike the target with the meteor. Any creature struck by a meteor takes 2d6 points of bludgeoning damage (no save) and takes a –4 penalty on the saving throw against the meteor’s explosion (see below). If a targeted meteor misses its target, it simply explodes at the nearest corner of the target’s space. You may aim more than one meteor at the same target. When each meteor impacts at the end of your turn, it explodes in a 40-foot-radius spread, dealing 1d6 points of mana damage to each creature in the area. If a creature is within the area of more than one explosion, it must save separately against each. The damage from the explosion increases by 1d6 for each +1 of the Lenz.
**Meteor (+1 Facet):** This facet adds one additional meteor to the apocalypse.
**Range:** Long
**Duration:** Instantaneous
**Saving Throw:** Reflex
**Spell Resistance:** Yes
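Apocalypse's meteor count and per-meteor explosion damage can be sketched with simple dice rolls. A minimal sketch (helper names are mine; the `rng` parameter just makes the dice testable):

```python
import random

def meteor_count(meteor_facets: int = 0) -> int:
    """Four meteors, plus one per Meteor facet on the Lenz."""
    return 4 + meteor_facets

def apocalypse_explosion_damage(lenz_bonus: int, rng=random) -> int:
    """Explosion damage for one meteor: 1d6 mana damage, +1d6 per +1 of the Lenz."""
    dice = 1 + lenz_bonus
    return sum(rng.randint(1, 6) for _ in range(dice))
```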
### Blessing
Activating this Lenz fills allies within range with energy and confidence.
**Effect:** All allies within range receive a +1 morale bonus on attack rolls and saving throws for the duration. This bonus increases by +1 for each level of the Lenz.
**Mighty Blessing (+1 Facet):** Those blessed by this magic are filled with preternatural strength, gaining a +4 enhancement bonus to Strength for the duration.
**Range:** Close
**Duration:** 1 minute, +1 minute per Lenz bonus
**Saving Throw:** None
**Spell Resistance:** None
### Confusion
This Lenz creates a disorienting burst that confuses all the targets in the area.
**Effect:** All creatures within a 15-foot burst in range must make a Will save or become confused. Victims cannot control their actions; roll on the following table at the start of each subject's turn to see what it does that round.
| d% | Behavior |
|----|--------------------------------------------------------------------------|
| 01–25 | Act normally |
| 26–50 | Do nothing but babble incoherently |
| 51–75 | Deal 1d6 points of damage + Str modifier to self with item in hand |
| 76–100 | Attack nearest creature (for this purpose, a familiar will count as part of the subject's self, and not be attacked.) |
A confused creature who can't carry out the indicated action does nothing but babble incoherently. Attackers are not at any special advantage when attacking a confused character. Any confused character who is attacked automatically attacks its attackers on its next turn, as long as it is still confused when its turn comes. Note that a confused character will not make attacks of opportunity against any creature that it is not already attacking (either because of its most recent action or because it has just been attacked).
**Range:** Medium
**Duration:** 3 rounds, +1 round per Lenz bonus
**Saving Throw:** Will negates
**Spell Resistance:** Yes
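The d% behavior table above can be rolled directly. A sketch using Python's random module (the returned strings paraphrase the table entries; the `rng` parameter is only there to make the roll testable):

```python
import random

def confusion_behavior(rng=random) -> str:
    """Roll d% on the Confusion behavior table and return the result."""
    roll = rng.randint(1, 100)
    if roll <= 25:
        return "act normally"
    if roll <= 50:
        return "babble incoherently"
    if roll <= 75:
        return "harm self (1d6 + Str modifier)"
    return "attack nearest creature"
```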
### Cure
Invigorate the target with a bolt of healing energy.
**Effect:** One creature within range is healed 2d6 points of damage. This increases by 2d6 for each level of the Lenz. This magical healing cannot regrow limbs and does leave scars. Against undead this spell instead deals damage; an undead target may make a Reflex save to dodge the bolt.
**Range:** Close
**Duration:** Instantaneous
**Saving Throw:** Reflex Negates (undead only)
**Spell Resistance:** Yes
### Curse
**Hard (DC 15)**
Poison the target’s mana to prevent healing and regeneration.
**Effect:** Select one target in range. This creature cannot recover HP through any means for the duration of the magic. This includes fast healing, magical healing, and mundane treatment. Dying targets (those at negative hit points and not stabilized) are instantly killed by this spell.
*Killing Curse (+3 Facet):* The target of this spell takes 1 point of damage per round, in addition to being unable to heal. Targets reduced to –1 hit points by this spell are struck dead.
**Range:** Close
**Duration:** 1 minute, +1 minute per Lenz bonus
**Saving Throw:** Fortitude Negates
**Spell Resistance:** Yes
---
**Dispel**
This magic abruptly ends a spell in effect on the target.
**Effect:** One object, creature, or spell is the target of Dispel. Make one dispel check (1d20 + your Lenz Arts rank) and compare it to the spell with the highest caster level on the target (DC = 11 + that spell's caster's Lenz Arts rank). If successful, that spell ends. If not, compare the same result to the spell with the next highest caster level. Repeat this process until you have dispelled one spell affecting the target, or you have failed against every spell. For each +1 of this Lenz, an additional spell on one target, or an additional target, may be selected.
**Range:** Close
**Duration:** Instantaneous
**Saving Throw:** None
**Spell Resistance:** None
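The iterative dispel check above — one roll compared against each spell on the target, from highest caster level down — can be sketched as follows (names are mine; `target_spells` holds each active spell's caster's Lenz Arts rank):

```python
import random

def dispel(caster_rank: int, target_spells, rng=random):
    """Attempt to dispel: roll once (1d20 + the dispeller's Lenz Arts rank),
    then compare against each spell in descending order of caster rank
    (DC = 11 + that spell's caster's rank). Returns the rank of the first
    spell dispelled, or None if every comparison fails."""
    check = rng.randint(1, 20) + caster_rank
    for rank in sorted(target_spells, reverse=True):
        if check >= 11 + rank:
            return rank
    return None
```

Note the single roll: a strong dispeller who whiffs against the most powerful spell may still strip a weaker one in the same attempt.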
---
**Dominion**
Take mental command of Monsters for a short time.
**Effect:** This spell allows the caster to enchant a monster or magical beast (of animal or low intelligence) with up to twice your Hit Dice and direct it with simple commands such as “Attack,” “Run,” and “Fetch.” Suicidal or self-destructive commands (including an order to attack a creature two or more size categories larger than the dominated monster) are simply ignored. This magic establishes a mental link between you and the subject creature. The monster can be directed by silent mental command as long as it remains in range. You need not see the creature to control it. You do not receive direct sensory input from the creature, but you know what it is experiencing. Because you are directing the monster with your own intelligence, it may be able to undertake actions normally beyond its own comprehension. You must concentrate exclusively on controlling the creature to maintain this magic, and as soon as your concentration ends, so does the magic. Changing your instructions or giving a dominated creature a new command is the equivalent of redirecting a spell, so it is a move action. This spell has no effect on natural animals, undead, and Horrors. For each level of the Lenz, an additional monster can be controlled.
*Thrall (+4 Facet):* This power allows the caster to mentally enslave the target, increasing the duration to 24 hours and no longer requiring concentration. At the end of this time, the target may make another saving throw; if that save is successful, the spell ends. If the target fails its save, the effect continues for one week, and at the end of this period the target may save again to end the dominion, and again each week until the monster succeeds or dies.
**Range:** Close
**Duration:** Concentration
**Saving Throw:** Will negates
**Spell Resistance:** Yes
---
**Drain**
Steal the vital life force of a target with a touch attack.
**Effect:** Activating this spell allows you to make a melee touch attack on one target. If you land the blow, you drain the target of 1d6 hit points and recover that same amount. If you are at full hit points, these are temporary hit points that last for one hour. Each level of the Lenz increases this damage by 1d6, which you also heal. Undead are healed by this spell, which instead harms the caster.
**Crimson Fist (+2 Facet):** Rather than simply draining hit points, this magic causes the target to gain 1d4 negative levels, plus 1 per level of the Lenz, healing the caster 5 hit points for each negative level inflicted on the target. The negative levels last 24 hours and have a chance of becoming permanent.
**Range:** Touch
**Duration:** Instantaneous
**Saving Throw:** None
**Spell Resistance:** Yes
---
**Earthquake**
**Hard (DC 15)**
Rend the earth asunder with powerful magic.
**Effect:** When you cast earthquake, an intense but highly localized tremor rips the ground. The powerful shockwave created by this spell knocks creatures down, collapses structures, opens cracks in the ground, and more in an 80 foot area you target. The effect lasts for 1 round, during which time creatures on the ground can't move or attack. A Lenz master on the ground must make a Concentration check (DC 20 + Lenz level) or lose any spell he or she tries to cast. The earthquake affects all terrain, vegetation, structures, and creatures in the area. The specific effect of an earthquake spell depends on the nature of the terrain where it is cast.
*Cave, Cavern, or Tunnel:* The roof collapses, dealing 5d6 points of damage plus 1d6 per Lenz level to any creature caught under the cave-in (Reflex DC 15 half) and pinning that creature beneath the rubble (see below). An earthquake cast on the roof of a very large cavern could also endanger those outside the actual area but below the falling debris and rubble.
*Cliffs:* Earthquake causes a cliff to crumble, creating a landslide that travels horizontally as far as it falls vertically. Any creature in the path takes 5d6 points of bludgeoning damage, plus 1d6 per lenz level (Reflex DC 15 half) and is pinned beneath the rubble (see below).
*Open Ground:* Each creature standing in the area must make a DC 15 Reflex save or fall down. Fissures open in the earth, and every creature on the ground has a 25% chance to fall into one (Reflex DC 20 to avoid a fissure). The fissures are 40 feet deep. At the end of the spell, all fissures grind shut. Treat all trapped creatures as if they were in the bury zone of an avalanche, trapped without air (see Environment for more details).
*Structure:* Any structure standing on open ground takes 50 points of damage, +10 per Lenz level, enough to collapse a typical wooden or masonry building, but not a structure built of concrete and metal. Hardness does not reduce this damage, nor is it halved as damage dealt to objects normally is. Any creature caught inside a collapsing structure takes 5d6 points of bludgeoning damage plus 1d6 per Lenz level (Reflex DC 15 half) and is pinned beneath the rubble (see below).
*River, Lake, or Marsh:* Fissures open under the water, draining away the water from that area and forming muddy ground. Soggy marsh or swampland becomes quicksand for the duration of the spell, sucking down creatures and structures. Each creature in the area must make a DC 15 Reflex save or sink down in the mud and quicksand. At the end of the spell, the rest of the body of water rushes in to replace the drained water, possibly drowning those caught in the mud.
*Pinned Beneath Rubble:* Any creature pinned beneath rubble takes 1d6 points of nonlethal damage per minute while pinned. If a pinned character falls unconscious, he or she must make a DC 15 Constitution check or take 1d6 points of lethal damage each minute thereafter until freed or dead.
**Range:** Long
**Duration:** 1 Round
**Saving Throw:** See text
**Spell Resistance:** No
---
**Fire Bolt**
Launch a bolt of mystical fire at the target.
**Effect:** This spell fires a high-speed bolt of mystical fire at the target. The Fire Bolt deals 2d6 fire damage. This damage increases by 2d6 per level of the Lenz. There is no need to roll to hit, although a successful Reflex save by the target will dodge the spell. Flammable objects can be ignited with this spell.
**Range:** Medium
**Duration:** Instantaneous
**Saving Throw:** Reflex Negates
**Spell Resistance:** Yes
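Fire Bolt's damage scaling (2d6, +2d6 per level of the Lenz) can be rolled as below — a hypothetical helper, not an official formula; the `rng` parameter only makes the dice testable:

```python
import random

def fire_bolt_damage(lenz_level: int, rng=random) -> int:
    """Fire Bolt deals 2d6 fire damage, +2d6 per level of the Lenz."""
    dice = 2 + 2 * lenz_level
    return sum(rng.randint(1, 6) for _ in range(dice))
```

A +1 Fire Bolt Lenz therefore rolls 4d6 (4 to 24 damage), and a +5 Lenz rolls 12d6.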
### Flight
The target can hover and fly for a short period.
**Effect:** The subject can fly at a speed of 60 feet (or 40 feet if it wears medium or heavy armor, or if it carries a medium or heavy load). It can ascend at half speed and descend at double speed, and its maneuverability is perfect. Using a fly spell requires only as much concentration as walking, so the subject can attack or cast spells normally. The subject of a fly spell can charge but not run, and it cannot carry aloft more weight than its maximum load, plus any armor it wears.
**Range:** Touch
**Duration:** 4 minutes, +4 per Lenz level
**Saving Throw:** none
**Spell Resistance:** Yes
### Force Bolt
Fire a bolt of pure force at the target.
**Effect:** This spell assaults the target with an invisible bolt of force. The Force Bolt deals 2d4 impact damage. This damage increases by 2d4 per level of the Lenz. There is no need to roll to hit, although a successful Reflex save by the target will dodge the spell.
*Arcane Missile (+1 Facet):* The projectile can no longer be saved against and always strikes true.
**Range:** Close
**Duration:** Instantaneous
**Saving Throw:** Reflex Negates
**Spell Resistance:** Yes
### Haste
Greatly increase the speed of the target.
**Effect:** The target of this spell can move and act more quickly than normal. When making a full attack action, a hasted creature may make one extra attack. The attack is made using the creature’s full base attack bonus, plus any modifiers appropriate to the situation. (This effect is not cumulative with similar effects, such as that provided by a speed weapon, nor does it actually grant an extra action, so you can’t use it to cast a second spell or otherwise take an extra action in the round.)
A hasted creature gains a +1 bonus on attack rolls and a +1 dodge bonus to Defense and Reflex saves. Any condition that makes you lose your Dexterity bonus to Defense (if any) also makes you lose dodge bonuses.
All of the hasted creature’s modes of movement (including land movement, burrow, climb, fly, and swim) are doubled. This increase counts as an enhancement bonus, and it affects the creature’s jumping distance as normal for increased speed. Multiple haste effects don’t stack. Haste dispels and counters slow.
**Range:** Close
**Duration:** 2 rounds, +2 per Lenz level.
**Saving Throw:** Will Negates
**Spell Resistance:** Yes
### Heal
*Hard (DC 15)*
Channel mana into the target to remove injuries and illnesses.
**Effect:** Touching one creature and casting heal will immediately end any and all of the following adverse conditions affecting the target: temporary negative levels, ability damage, blinded, confused, dazed, dazzled, deafened, diseased, exhausted, fatigued, feebleminded, insanity, nauseated, poisoned, sickened, and stunned. It also cures 25 hit points, +25 per Level of the Lenz.
*Aura (+2 Facet):* The range becomes Close, and the caster may Heal one target per level of the Lenz.
**Range:** Touch
**Duration:** Instantaneous
**Saving Throw:** Fortitude Negates
**Spell Resistance:** Yes
---
**Hex**
Shatter the target’s morale and drain them of energy.
**Effect:** All enemies within range must succeed on a Will save or suffer a -1 morale penalty on attack rolls, Defense, and Saving Throws for the duration. This penalty increases by 1 for each level of the Lenz.
*Crushing Hex (+1 Facet):* Victims of this hex also find that they are drained of their strength and vitality, suffering a -4 penalty to Strength for the duration.
**Range:** Close
**Duration:** 1 minute, +1 minute per Lenz level
**Saving Throw:** Will Negates
**Spell Resistance:** Yes
---
**Ice Bolt**
Forge a freezing lance of ice and hurl it at your target.
**Effect:** This spell releases a freezing bolt that strikes one target in range for 1d8 damage and staggers them for 1d4 rounds. Each level of the Lenz increases this damage by 1d8. If the bolt strikes water, it freezes a 4-foot area, 6 inches deep, around the impact site for 1 minute per level of the Lenz.
**Range:** Medium
**Duration:** Instantaneous
**Saving Throw:** Reflex Negates
**Spell Resistance:** Yes
---
**Lightning Bolt**
Release a bolt of crackling Jovian lightning.
**Effect:** Activating this Lenz creates a crackling beam of lightning 1 foot wide, stretching the entire range of the spell. Any creature in the path of the beam must make a Reflex save or suffer 2d6 damage. The damage increases by 2d6 per level of the Lenz.
**Range:** Medium
**Duration:** Instantaneous
**Saving Throw:** Reflex Negates
**Spell Resistance:** Yes
---
**Mana Bolt**
Fire a bolt of pure mana at an enemy.
**Effect:** This Lenz fires a bolt of pure eldritch energy as a ranged touch attack against one target in range, dealing 2d8 damage. Each level of the Lenz increases this damage by 2d8.
*Ultima Echo (+5 Facet)* (Severe DC 20): This facet doubles the damage of the spell and transforms the Mana Bolt into the ultimate expression of destruction, igniting the mana field around the target on impact in a 50-foot burst of energy. The target may not save and suffers full damage. All other creatures in the blast radius are entitled to a Fortitude save for half damage. The Ultima Echo ignores energy resistance and damage reduction. If fired at a mana pool, it will cause the mana pool to fountain.
**Range:** Close
**Duration:** Instantaneous
**Saving Throw:** None, ranged touch attack.
**Spell Resistance:** Yes
---
**Mirage**
Bend light around you to become harder to detect and hit.
**Effect:** Mirage generates a shifting field of warped light around the caster. In addition to a +4 Defense bonus against ranged attacks, the caster receives a +4 bonus on Hide checks while the field is active. While this Lenz does not grant true invisibility, the caster’s features, clothing, and details are obscured beyond recognition by the spell.
**Range:** Personal
**Duration:** 1 Minute, +1 minute per level of the Lenz
**Saving Throw:** None
### Petrify
*Hard (DC 15)*
Transform one target into calcified stone.
**Effect:** The target, but none of its carried gear, is transformed into a mindless, inert statue. If the statue resulting from this spell is broken or damaged, the subject (if ever returned to its original state) has similar damage or deformities. The creature is not dead, but neither does it seem to be alive. Only biological material is affected by this spell; machines and armor are immune. Only a Restoration spell can reverse this effect.
**Range:** Close
**Duration:** Permanent
**Saving Throw:** Fortitude Negates
**Spell Resistance:** Yes
### Radiant Bolt
*Severe (DC 20)*
Unleash a beam of radiant light at the target.
**Effect:** This spell fires a scintillating beam of radiance 1 foot wide at the target, stretching the entire range of the spell. The target of the spell, and any creature in the path of the beam, takes 4d6 damage. The damage increases by 4d6 per level of the Lenz. The target may make a Reflex save for half damage; others in the beam may make a Reflex save to dodge the spell.
*Disintegration (+5 Facet):* The Beam becomes a raging beam of disintegration, dealing double damage and completely disintegrating any target reduced to 0 HP. This beam can disintegrate objects and ignores all DR and Hardness.
**Range:** Medium
**Duration:** Instantaneous
**Saving Throw:** Reflex Partial (see text)
**Spell Resistance:** Yes
### Restoration
*Severe (DC 20)*
Revive and restore a target with powerful healing magic.
**Effect:** When cast this powerful restorative magic can have one of a number of effects:
- Restore to life a creature dead for less than 1 minute per Lenz level (at 1 H.P.)
- Reattach a severed limb (the limb must be present and intact)
- Remove Petrification, Paralysis, Wound, or Curse
- Restore 1 point of permanent ability score damage per Lenz level to one ability score
- Restore all temporary ability score damage to one ability score per Lenz level
- Remove one permanent negative level per Lenz level
- Remove three temporary negative levels per Lenz level
- Remove mana poisoning
**Range:** Touch
**Duration:** Instantaneous
**Saving Throw:** Fortitude Negates
**Spell Resistance:** Yes
### Seal
Temporarily seal away a creature’s ability to manipulate the mana stream.
**Effect:** Select one target within range. That target is no longer able to activate Lenz, nor use any Supernatural ability for the duration of the spell. This includes spell like abilities and powers commonly associated with Monsters and Horrors.
**Range:** Close
**Duration:** 1 minute, +1 minute per Lenz bonus
**Saving Throw:** Fortitude Negates
**Spell Resistance:** Yes
### Shell
Once activated, this Lenz creates a protective barrier that absorbs energy damage for a short time.
**Effect:** The subject gains Resist 10 (+2 per Lenz level) against one type of energy: fire, lightning, or cold. The type of energy a Shell Lenz protects against is set and cannot be changed; for example, an Ice Shell Lenz only protects against cold, and so on.
**Range:** Touch
**Duration:** 10 minutes, +10 minutes per Lenz bonus
**Saving Throw:** None
**Spell Resistance:** None
### Shield
Once activated, this Lenz creates a protective barrier that absorbs damage for a short time.
**Effect:** The subject gains DR 5/- (+2 per Lenz level), ignoring that many points of damage each time it takes damage from a weapon. Once the spell has prevented a total of 10 points of damage (+20 per bonus of the Lenz), it is discharged.
*Energy Shield (+1 Facet):* Rather than protect against physical damage, the Shield provides energy resistance equal to twice the damage reduction, and can absorb twice the energy damage, from one energy type. Select either Fire, Electricity, Cold, or Acid when this facet is forged.
**Range:** Touch
**Duration:** 10 minutes, +10 minutes per Lenz bonus
**Saving Throw:** None
**Spell Resistance:** None
### Sleep
Send creatures into a deep magical slumber.
**Effect:** All creatures within a 15 foot radius of the caster’s choosing must make a Will save or fall into a magical sleep. Sleeping creatures are helpless. Slapping or wounding awakens an affected creature, but normal noise does not. Awakening a creature is a standard action (an application of the aid another action). Sleep does not target unconscious creatures, constructs, or undead creatures.
**Range:** Medium
**Duration:** 4 minutes, +4 per Lenz Level
**Saving Throw:** Will Negates
**Spell Resistance:** Yes
### Slow
Greatly reduce a creature’s speed.
**Effect:** The target of this spell moves and attacks at a drastically slowed rate. Slowed creatures are staggered and can take only a single move action or standard action each turn, but not both (nor may it take full-round actions). Additionally, it takes a –1 penalty on attack rolls, AC, and Reflex saves. A slowed creature moves at half its normal speed (round down to the next 5-foot increment), which affects the creature's jumping distance as normal for decreased speed. Multiple slow effects don't stack. Slow counters and dispels haste.
**Range:** Close
**Duration:** 2 rounds, +2 per Lenz level.
**Saving Throw:** Will Negates
**Spell Resistance:** Yes
### Storm
Conjure a violent mana storm in the area.
**Effect:** Once this spell is active a violent storm forms overhead in 1d4 minutes, covering a 1-mile circle centered on the caster. As the storm gathers to blot out the sun, echoing bolts of blue-green lightning trace across the sky. Once the mana storm has fully formed, the caster can call down a bolt of lightning once per round. The lightning deals 3d10 points of electricity damage. The bolt of lightning flashes down in a vertical stroke at whatever target point you choose within the spell's range (measured from your position at the time). Any creature in the target square or in the path of the bolt is affected. You need not call a bolt of lightning immediately; other actions, even spellcasting, can be performed first. Each round after the first you may use a standard action (concentrating on the spell) to call a bolt. You may call a total number of bolts equal to 2 plus twice the Lenz level. Creatures targeted by a lightning bolt are entitled to a Reflex save for half damage. After the spell ends, the storm will remain for 1d4 hours before dissipating.
**Range:** Long
**Duration:** Special, see text
**Saving Throw:** Reflex Half
**Spell Resistance:** Yes
---
**Stun**
Paralyze a target for a short time.
**Effect:** The target of this spell is paralyzed: frozen in place and unable to move or act. A paralyzed character has effective Dexterity and Strength scores of 0 and is helpless, but can take purely mental actions. A winged creature flying in the air at the time it becomes paralyzed cannot flap its wings and falls. A paralyzed swimmer can’t swim and may drown.
**Range:** Close
**Duration:** 1d6 rounds, +2 rounds per Lenz level.
**Saving Throw:** Fortitude Negates
**Spell Resistance:** Yes
---
**Thunder**
Release a deafening boom to shake and damage your enemies.
**Effect:** This spell creates a deafening sonic boom in a close cone-shaped burst in front of the caster. Any creature within the area is deafened for 2d6 rounds and takes 2d6 points of sonic damage, +2d6 per Lenz level. A successful save negates the deafness and reduces the damage by half. Any exposed brittle or crystalline object or crystalline creature takes 4d6 points of sonic damage, +4d6 per Lenz level. An affected creature is allowed a Fortitude save to reduce the damage by half, and a creature holding fragile objects can negate damage to them with a successful Reflex save.
**Range:** Close cone shaped burst
**Duration:** Instantaneous.
**Saving Throw:** Fortitude Partial, Reflex Negates (objects)
**Spell Resistance:** Yes
---
**Wall of Mana**
Create a wall or dome of pure eldritch energy.
**Effect:** This spell erects an iridescent blue-green wall of energy that is virtually impossible to destroy. The wall is 20 square feet, plus 20 per Lenz level, and once fashioned, cannot be moved. Nothing can pass through the wall, and the Wall of Mana is immune to dispel, but it can be damaged normally. The wall has a hardness of 30 and 50 H.P., +50 per Lenz level.
**Range:** Close
**Duration:** 1 Minute, +1 minute per level of the Lenz
**Saving Throw:** None
**Spell Resistance:** No
---
**Ward**
Surround the target with an anti-magic field.
**Effect:** The subject of this spell is surrounded by a field that disrupts all magic for a period of time. While active, the Ward grants the target Spell Resistance of 15 (+1 per Lenz level). The Ward cannot be deactivated, however, so even helpful magic is disrupted for the duration.
**Range:** Touch
**Duration:** 10 minutes, +10 minutes per Lenz level
**Saving Throw:** None
**Spell Resistance:** None
---
**Wound**
Cause bleeding damage every time the target is hurt.
**Effect:** For the duration of the spell, every time the target takes damage from a weapon or a force spell, they bleed for 1d4 damage on the following round. The number of rounds the target bleeds is increased by 4 for each level of the Lenz. Bleeding effects that would extend beyond the spell’s duration continue to bleed, but no new bleeding is added after the spell ends.
**Range:** Close
**Duration:** 1 minute, +1 minute per Lenz bonus
**Saving Throw:** Fortitude Negates
**Spell Resistance:** Yes
**ABILITY LENZ**
Far less common than Spell Lenz, and just as useful, Ability Lenz grant the Lenz master an enhancement bonus to one of their ability scores or to their toughness. These Lenz are not activated; rather, they grant a constant magical effect as long as they are Junctioned and in contact with their master. If the Lenz is removed from the character, the bonus is lost.
**Ability Score Lenz**
An Ability Score Lenz grants increased capability with one ability score.
**Skill Bonus:** Each Ability Score Lenz is keyed to one of the Ability Scores. Once Junctioned a +0 Ability Lenz will grant a +1 Enhancement Bonus to all Skill Checks that use that ability score so long as it remains Junctioned to the character, and in contact with them.
**Ability Score Bonus:** Once an Ability Score Lenz is forged to +1 or above, it adds its Lenz bonus as an Enhancement Bonus to the appropriate ability score. So a +2 Strength Lenz would add a +2 enhancement bonus to the Strength score, and a +1 enhancement bonus to all Strength checks. Remember, all Lenz have a maximum effective bonus of +5, like enchanted arms and armor. Special abilities of the Lenz, called Facets, may increase its total bonus as high as +10.
**Toughness Lenz**
These Lenz make the user more resilient to harm.
**Hit Point Bonus:** A +0 Toughness Lenz grants 3 temporary hit points at the beginning of each day. These hit points last 24 hours, or until depleted. As temporary hit points, they are lost before real hit point damage is taken. These hit points are restored automatically every midnight, but cannot be otherwise regenerated.
**Hit Dice Bonus:** A Toughness Lenz forged to +1 or higher grants bonus Hit Dice, rolled just like the character’s Hit Dice from their current class. The number of bonus Hit Dice is equal to the Lenz level, and all of these hit points are temporary. These bonus hit points are rolled every morning and always use the Hit Die of the character’s current class (the last class in which they gained a level). Thus a level 3 Tough Hero with a +2 Toughness Lenz would gain 2d10+3 temporary hit points each morning from his Toughness Lenz.
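Reading the 2d10+3 worked example above, the bonus Hit Dice appear to stack on top of the flat 3 temporary hit points from the base Lenz. A short sketch under that interpretation (the rules text does not state it explicitly):

```python
import random

def toughness_temp_hp(lenz_level, class_hit_die, rng=random):
    """Morning temporary HP from a Toughness Lenz: one class Hit Die per
    Lenz level, plus the flat 3 from the base Lenz (interpretation based
    on the 2d10+3 example; at +0 the empty roll leaves just the flat 3)."""
    bonus_dice = sum(rng.randint(1, class_hit_die) for _ in range(lenz_level))
    return bonus_dice + 3

# +2 Toughness Lenz on a Tough Hero (d10 Hit Die): 2d10+3, i.e. 5 to 23.
hp = toughness_temp_hp(2, 10)
assert 5 <= hp <= 23
```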
**POWER LENZ**
Some Lenz are like a mixture of a Spell and Ability Lenz – rather than granting a spell that can be activated or a continuous bonus, they grant a specific power. Some of these powers are in constant effect; others can be called upon a number of times per day. It wholly depends on the individual Lenz.
The descriptions for power Lenz follow the same format, much like Spell Lenz.
**Name:** This is the Name of the Lenz
**Effect:** This will describe the powers and uses of the Lenz. No two Power Lenz are the same; each has its own facets and effects as it grows.
**Facet (cost):** All power Lenz have one or more unique facets that can be applied to the Lenz when it is forged to +1 or beyond. Unlike spell Lenz these are all specific to the Lenz, and not generic.
**Bond Animal (+2 Facet):** Once junctioned, the master can use this Lenz to form a bond with an animal. This creature becomes a trusted and loyal companion. Like any follower, if mistreated by the Lenz master, the creature may leave, and the character may not bond another.
**Aura of Protection**
An Aura Lenz creates a field of invisible protection around the character. A +0 Aura Lenz grants a +1 bonus on all saving throws. Aura Lenz of +1 or higher grant a bonus to defense equal to the Lenz Level.
**Bright**
Upon mental command, a Bright Lenz can emit brilliant light. A simple thought causes the gem to shed light as a hooded lantern. It continues to emit light until the master wills the Lenz to extinguish the illumination. Once per day, plus once per Lenz level, the Lenz master can cause the Bright Lenz to flare in a blinding flash of light that fills a 30-foot cone. Although this glare lasts but a moment, any creature within the cone must make a DC 14 Fortitude save or be blinded for 1d4 rounds.
**Sunbeam (+2 Facet):** As a standard action you can evoke a dazzling beam of intense light each round. You can call forth one beam, +1 per Lenz level. Each creature in the beam is blinded and takes 4d6 points of damage. Any creatures to which sunlight is harmful or unnatural take double damage. Undead suffer 4d6 damage per Lenz level. A successful Reflex save negates the blindness and reduces the damage by half.
**Familiar Bond**
The first time this Lenz is Junctioned it is Junctioned to two creatures simultaneously, the Lenz master and a natural animal, creating a familiar. The animal retains the appearance, Hit Dice, base attack bonus, base save bonuses, skills, and feats of the normal animal it once was, but is now a magical beast for the purpose of effects that depend on its type. Only a normal, unmodified animal may become a familiar.
Familiar Basics: Use the basic statistics for a creature of the familiar’s kind, but with the following changes.
Hit Dice: For the purpose of effects related to number of Hit Dice, use the master’s character level or the familiar’s normal HD total, whichever is higher.
Hit Points: The familiar has half the master’s total hit points (not including temporary hit points), rounded down, regardless of its actual Hit Dice.
Attacks: Use the master’s base attack bonus, as calculated from all his classes. Use the familiar’s Dexterity or Strength modifier, whichever is greater, to calculate the familiar’s melee attack bonus with natural weapons. Damage equals that of a normal creature of the familiar’s kind.
Defense: The Familiar gains a natural bonus to defense equal to half to twice the Lenz Level.
Saving Throws: For each saving throw, use either the familiar’s base save bonus (Fortitude +2, Reflex +2, Will +0) or the master’s (as calculated from all his classes), whichever is better. The familiar uses its own ability modifiers to saves, and it doesn’t share any of the other bonuses that the master might have on saves.
Intelligence: The stronger the Lenz the more intelligent the familiar will become. The Familiar’s Intelligence score increases to 5 + 2 per Level (to a maximum of +10 Int for a +5 Lenz Level).
Skills: For each skill in which either the master or the familiar has ranks, use either the normal skill ranks for an animal of that type or the master’s skill ranks, whichever is better. In either case, the familiar uses its own ability modifiers. Regardless of a familiar’s total skill modifiers, some skills may remain beyond the familiar’s ability to use. Familiars treat Climb, Hide, Move Silently, Listen, Spot, and Swim as class skills.
Empathic Link (Su): The master has an empathic link with his familiar out to a distance of 1 mile. The master can communicate empathically with the familiar, but cannot see through its eyes. Because of the link’s limited nature, only general emotions can be shared.
Deliver Spells (+1 Facet): With this facet, a familiar can deliver spells for the master. If the master and the familiar are in contact at the time the master casts a spell, he can designate his familiar as the “Caster.” The familiar can then deliver the spell just as the master would.
Speak with Master (+1 Facet): So Forged, the Familiar Lenz allows a familiar and the master to communicate verbally as if they were using a common language. Other creatures do not understand the communication without magical help.
Speak with Animals of Its Kind (+1 Facet): By forging this facet a familiar can communicate with animals of approximately the same kind as itself (including dire varieties): bats with bats, cats with felines, hawks and owls and ravens with birds, lizards and snakes with reptiles, monkeys with other simians, rats with rodents, toads with amphibians, and weasels with ermines and minks. Such communication is limited by the Intelligence of the conversing creatures.
Spell Resistance (+2 Facet): A familiar gains spell resistance equal to the master’s level + 5 with this facet. To affect the familiar with a spell, another spellcaster must get a result on a caster check that equals or exceeds the familiar’s spell resistance.
**Fear**
Once this Lenz is Junctioned, the master can generate a 5-foot-radius (per Lenz Level) fear aura as a free action. Enemies in the area must succeed on a Will save (DC 10 + 2x Lenz Level + her Cha modifier) or become shaken. A creature who successfully saves cannot be affected by that fear aura for 24 hours.
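The Fear aura’s numbers are pure arithmetic. A worked sketch, taking the radius as 5 feet per Lenz level, which is one reading of the parenthetical above:

```python
def fear_aura_dc(lenz_level, cha_mod):
    """Will save DC of the Fear aura: 10 + 2 x Lenz level + Cha modifier."""
    return 10 + 2 * lenz_level + cha_mod

def fear_aura_radius(lenz_level):
    """Aura radius in feet, read as 5 feet per Lenz level (an assumption)."""
    return 5 * lenz_level

# A +3 Fear Lenz on a master with Cha 14 (+2): DC 18 aura, 15-foot radius.
assert fear_aura_dc(3, 2) == 18
assert fear_aura_radius(3) == 15
```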
**Immortality**
This Lenz is largely believed to be a rumor. Once Junctioned, the Lenz continually allows a living wearer to heal 1 point of damage per Lenz level per minute, and an equal amount of nonlethal damage. In addition, he is immune to bleed damage while the Lenz is Junctioned. If the master loses a limb, an organ, or any other body part while Junctioned to, and in contact with, this Lenz, the Lenz regenerates it. In either case, only damage taken while the Lenz is active and Junctioned is regenerated.
**Ageless (+3 Facet):** While Junctioned, the Lenz master ceases to age.
**Immunity (+2 Facet):** This facet nullifies all poisons and toxins while the Lenz is Junctioned.
**Quickening**
Once Junctioned, the master will find that they are faster, but that is only the beginning, as the Lenz augments reflexes and agility as well. A Junctioned Quickening Lenz grants the master a +2 bonus to initiative, and the master can avoid even magical and unusual attacks with great agility: if she makes a successful Reflex saving throw against an attack that normally deals half damage on a successful save, she instead takes no damage. Each level of the Lenz grants a +1 bonus to Reflex saves.
**Uncanny Dodge (+2 Facet):** With this facet the master can react to danger before her senses would normally allow. She cannot be caught flat-footed, nor does she lose her Dex bonus to Defense if the attacker is invisible.
**Swift (+1 Facet):** Increase base movement speed by 10 feet.
**Rage**
Up to three times per day this Lenz can fill the master with uncontrollable fury. While in a rage the Lenz master gains a +6 morale bonus to her Strength and +4 to Constitution, as well as a +2 morale bonus on Will saves. In addition, she takes a -2 penalty to Defense. The increase to Constitution grants the master 2 hit points per Hit Die, but these disappear when the rage ends and are not lost first like temporary hit points. While in a rage, the master cannot use any Charisma-, Dexterity-, or Intelligence-based skills (except Intimidate and Ride) or any ability that requires patience or concentration. The rage lasts for 4 rounds, +2 per Lenz level, and the master is fatigued afterward for a number of rounds equal to 2 times the number of rounds spent in the rage. The master cannot enter a new rage while fatigued or exhausted, but can otherwise rage multiple times during a single encounter or combat.
**Fearless Rage (+2 Facet):** While raging, the master is immune to the shaken and frightened conditions.
**Terrifying Howl (+2 Facet):** With this facet the master can unleash a terrifying howl as a standard action while in a rage. All enemies within 30 feet must make a Will save (DC equal to 10 + Lenz level + the master’s Strength modifier) or be panicked for 1d4+1 rounds. Once an enemy has made a save against Terrifying Howl (successful or not), it is immune to this power for 24 hours.
**Rebuke**
Three times per day per Lenz level, as a standard action, this Lenz can cause all undead within 30 feet of you to flee, as if panicked. Undead receive a Will save to negate the effect. The DC for this Will save is equal to 11 + Lenz level + your Charisma modifier. Undead that fail their save flee for 1 minute. Intelligent undead receive a new saving throw each round to end the effect.
**Holy (+5 Facet):** In addition to fleeing panicked, undead in the area are scorched by a brilliant burst of energy and take 4d6 damage, plus 4d6 per level of the Lenz. A Reflex save halves this damage.
**Sense**
This Lenz allows the master to see the emanations of magical energy within 30 feet, plus 30 feet per Lenz level. The aura depends on the subject; mana pools, multiple types of magic, or strong local magical emanations may distort or conceal weaker auras. As a free action the master can see any Lenz, mana canisters, magical beasts, and active spells. The Lenz or spell with the highest level is seen first, and any Lenz or spells with half that aura or less are concealed. A magical creature’s aura depends on its Hit Dice; if multiple creatures are present, any with half or less the Hit Dice of the strongest are concealed. Having this Lenz Junctioned grants a bonus to Spot and Search checks to find magic or mana equal to the Lenz level. Sense can penetrate barriers: up to 1 foot of stone, 1 inch of common metal, a thin sheet of lead, or 3 feet of wood or dirt per Lenz level.
**Shroud**
Three times per day as a move action, the master of this Lenz can become nearly undetectable – sound is muffled and all that is seen is a shifting blur. This grants a +5 bonus to Hide and Move Silently checks, plus the Lenz level. While the shroud is active, the character receives a +4 dodge bonus to Defense, and an additional +1 per Lenz level against ranged attacks. Shroud remains active for 1 minute, plus 1 minute per Lenz level.
**Soulbind**
Once this Lenz is Junctioned, the master’s soul is bound to the Lenz, sealed away for safekeeping. This renders the character highly resistant to the energy drain effects of undead and some monsters. The character can save against temporary energy drain once per hour, and permanent energy drain is treated as a temporary energy drain effect. Each Lenz level grants a +1 bonus to saves against energy drain.
**Spirit Walk**
Once per day this Lenz allows the master to walk the world unchained by freeing the spirit from the physical body, projecting their spirit into the mana stream. While the physical body is left in a state of suspended animation, the Lenz projects the master’s consciousness and essence into the Mana stream as an ethereal creature: invisible, insubstantial, and capable of moving in any direction, even up or down, albeit at half normal speed. As an insubstantial creature, the Lenz master can move through solid objects, including living creatures. An ethereal creature can see and hear, but everything looks gray and ephemeral. Mana and Force effects affect an ethereal creature normally. An ethereal creature can’t attack material creatures, and spells you cast while ethereal affect only other ethereal things. Each Lenz level allows an additional use of this ability per day.
**Witchlight**
With this Lenz, the master can, as a standard action, cause some or all of their body or an unattended object up to Medium-size that they touch to glow with witchlight, a harmless supernatural flame that sheds light as a candle and appears pale turquoise. Maintaining witchlight requires concentration, and you can maintain its effect on an object as long as it is within 20 feet, +20 feet per Lenz level of you.
*Extended Witchlight (+1 Facet):* Your witchlight lasts as long as you concentrate + 10 minutes.
*Hot Witchlight (+1 Facet):* Your witchlight deals 1d6 points of fire damage every round to the target. The target can attempt to extinguish the flames with a successful Fortitude save.
*Bright Witchlight (+1 Facet):* Your witchlight sheds light as a torch.
**DRAGON LENZ**
**Dragon Call**
*Severe (DC 20)*
Activating this Lenz calls out to any nearby dragons, who will answer the call.
**Effect:** Raising the Lenz, you call forth to the nearest dragon. One dragon, so long as it is within five hundred miles, will arrive at the scene of the Lenz within 1d6 hours. The dragon is under no obligation to obey the character who called it, but will arrive in a neutral posture, interested in who activated the beacon and why. The power of the dragon that arrives depends on the level of the Lenz: a +0 Dragon Call will attract a Juvenile, and each level of the Lenz increases the age category of the dragon called by one.
**Dragon Strike (+5 Facet):** With this facet, 1d10 dragons will answer the call, all within the age category of the Lenz.
**Range:** Special
**Duration:** Instantaneous
**Saving Throw:** None
**Spell Resistance:** No
---
**Dragon Fire**
*Severe (DC 20)*
This Lenz releases the fabled fire of the dragons.
**Effect:** This spell releases a cone of dragon-fire 30 feet long and 15 feet wide at the end. All creatures within the cone suffer 5d6 fire damage; a Reflex save reduces this damage by half. Creatures resistant to fire get only half their energy resistance (round down) against dragon-fire. Each Lenz level increases the damage of the dragon-fire by 5d6 and extends the cone by 10 feet in length and 5 feet in width.
**Incineration (+5 Facet):** Dragon Fire becomes impossibly hot plasma, completely disintegrating any target reduced to 0 HP. This dragon fire can disintegrate objects and ignores all fire resistance and Hardness.
**Range:** 30’ cone.
**Duration:** Instantaneous
**Saving Throw:** Reflex Half
**Spell Resistance:** Yes
---
**Dragon Soul**
Once Junctioned, this power Lenz can be activated at will as a free action. Activating the Dragon Soul will release a torrent of magical energy around the character resulting in what appears to be a mana fountain. This destroys the Dragon Soul Lenz, but will result in **one** of the following effects:
- Create a +0 Spell Lenz of Choice (Lenz Arts Check DC 20)
- Forge any Lenz Junctioned to the character by +1
- Grant a +1 inherent bonus to an Ability Score
- Grant the Limit Break to the character
While the Dragon Soul Lenz can be forged, doing so has no impact on the above effects of the Lenz. It is hypothesized that there are powers to this Lenz that could be unlocked by fully forging it, but none have ever succeeded in doing so.
---
**Ragnarök**
*Severe (DC 20)*
Rumor has it that casting this spell brings about uncontrollable devastation around the caster.
**Effect:** Activating this Lenz causes the effects of Storm and Earthquake in a 1d6-mile radius centered on the caster. The caster has no control over where the lightning will fall or where the violent shaking will take place. The effects remain for 1d4 hours before dissipating.
**Range:** Special
**Duration:** Special
**Saving Throw:** Special, see text
**Spell Resistance:** Yes
---
**BLACK LENZ**
**Command**
Three times per day per Lenz level, as a standard action, this Power Lenz can cause all undead within 30 feet of you to become subject to your will. Undead receive a Will save to negate the effect. The DC for this Will save is equal to 11 + Lenz level + your Charisma modifier. Undead that fail their save are completely under your command for 1 minute. Intelligent undead receive a new saving throw each round to end the effect.
**Unholy (+5 Facet):** The power of this Lenz is so great that undead that fail the save are affected as per the Dominion spell.
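The save DC and usage arithmetic above reduce to a pair of small formulas, sketched here as hypothetical helpers (an illustration, not rules text):

```python
def command_dc(lenz_level: int, cha_mod: int) -> int:
    """Will save DC to resist Command: 11 + Lenz level + Cha modifier."""
    return 11 + lenz_level + cha_mod

def command_uses_per_day(lenz_level: int) -> int:
    """Command is usable three times per day per Lenz level."""
    return 3 * lenz_level

# A +2 Command Lenz wielded by a character with Cha 16 (+3):
print(command_dc(2, 3))         # DC 16
print(command_uses_per_day(2))  # 6 uses per day
```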
---
**Conjure Horror**
**Hard (DC 15)**
**Effect:** This spell summons forth a stream of concentrated miasma that infuses the bones or bodies of dead creatures, raising them as undead skeletons or zombies that follow your spoken commands. The undead can follow you, or they can remain in an area and attack any creature (or just a specific kind of creature) entering the place. They remain animated until they are destroyed. Regardless of the type of undead you create with this spell, you can animate up to 10 HD of creatures, plus 6 per Lenz level, with any one casting. The undead you create remain under your control indefinitely. No matter how many times you use this spell, however, you can control only 4 HD worth of undead creatures per caster level. If you exceed this number, all the newly created creatures fall under your control, and any excess undead from previous castings become uncontrolled. (You choose which creatures are released.)
**Skeletons:** A skeleton can be created only from a mostly intact corpse or skeleton. The corpse must have bones, so creating a skeleton from a purple worm, for example, is not possible. If a skeleton is made from a corpse, the flesh falls off the bones. The statistics for a skeleton depend on its size; they do not depend on what abilities the creature may have had while alive.
**Zombies:** A zombie can be created only from a mostly intact corpse. The corpse must be that of a creature with a true anatomy, so a dead gelatinous cube, for example, cannot be animated as a zombie. The statistics for a zombie depend on its size, not on what abilities the creature may have had while alive. The Monster Manual has game statistics for zombies.
**Range:** Touch
**Duration:** Instantaneous
**Saving Throw:** None
**Spell Resistance:** No
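The HD limits of Conjure Horror can be expressed as two small formulas, sketched here with hypothetical helper names (an illustration of the stated limits, not rules text):

```python
def animate_limit(lenz_level: int) -> int:
    """Maximum HD of undead animated by a single casting:
    10 HD plus 6 per Lenz level."""
    return 10 + 6 * lenz_level

def control_cap(caster_level: int) -> int:
    """Maximum HD of undead controlled at once: 4 HD per caster level."""
    return 4 * caster_level

# A +2 Lenz casting by a 5th-level caster:
print(animate_limit(2))  # 22 HD raised by this casting
print(control_cap(5))    # 20 HD controllable overall
```

Note that a single casting can raise more HD than the caster can control, in which case older undead are released.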
LENZ FACETS
When a spell or power Lenz is forged beyond +1, unique properties known as Facets can be added to the Lenz. Each Facet has a bonus from +1 to +4 that is added to the overall level when forged. Each Power Lenz has its own Facets; Spell Lenz, on the other hand, have a select number of Facets that can be applied to any Spell Lenz when forged. Lenz are very rarely sold with any Forged bonuses or Facets; when they are, they often demand extremely high prices.
| Spell Facets | Bonus | Effect |
|--------------|---------|--------|
| Aimed | +1 | Spell becomes a ranged touch attack |
| Banishing | +2 | Exceptional damage to Undead |
| Burst | +2 | Bursts on target for added effect |
| Charging | +1 | Cast at higher Lenz level by charging up |
| Corrupt | - | Lenz causes Con damage when used |
| Easy | +1 | + 4 to Lenz Arts Activation Check |
| Expand | +1 | Covers a greater area of effect |
| Extend | +1 | Uses next range category up |
| Iounic | +2 | Activate the Lenz if it is within 3 feet |
| Lasting | +2 | Extend the duration of the effects |
| Piercing | +2 | +5 to penetrate Spell Resistance |
| Selective | +2 | Exclude targets in the area of effect |
| Swift | +3 | Can be activated as a Standard Action |
| Vicious | +2 | All saves against the spell are at -4 |
Aimed
This facet turns any Bolt spell, or Cure, into a ranged touch attack. Characters in the path of a beam, such as Lightning Bolt or Radiant Bolt, are still entitled to a Reflex save, but the target is not.
Banishing
This facet can be applied to any spell that deals damage, as well as Cure. Against undead targets, this spell’s damage dice are doubled.
Burst
This facet can be applied to any ranged spell with a single target. The effect of the spell now takes effect in a 15-foot-diameter burst. All creatures in this burst must roll a saving throw. A spell that deals damage, such as Fire Bolt, now deals damage to all creatures in the area, and they may save for half damage. Spells that have a singular effect other than damage allow the save to negate.
Charging
A Charging Lenz can build up a tremendous effect by charging it for additional rounds. Each additional round spent charging the Lenz increases the effective Lenz level by +1 when the spell is cast. A Lenz can be charged up to a maximum of its current level +5. Each round spent charging, the Lenz master must succeed on a Lenz Activation skill check or the energy built up is harmlessly discharged. A Charging Lenz can only be overcharged 3 times per day. The Charging facet cannot be combined with the Swift facet.
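The charging sequence above can be simulated with a small loop (hypothetical helper; `activation_check` stands in for the Lenz Activation skill check, and the 3-per-day overcharge limit is left to the table):

```python
from typing import Callable, Optional

def charge_lenz(base_level: int, rounds: int,
                activation_check: Callable[[], bool]) -> Optional[int]:
    """Charge a Lenz for extra rounds. Each round requires a successful
    activation check or the stored energy discharges harmlessly (None).
    Effective level is capped at base_level + 5."""
    effective = base_level
    for _ in range(rounds):
        if not activation_check():
            return None  # charge lost
        effective = min(effective + 1, base_level + 5)
    return effective

# A +3 Lenz charged for 2 rounds with guaranteed checks:
print(charge_lenz(3, 2, lambda: True))   # 5
# Charging past the cap stops at base + 5:
print(charge_lenz(3, 10, lambda: True))  # 8
```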
Corrupt
A Lenz that has been poisoned by Miasma or improper forging can become corrupted. Corrupted Lenz function like normal Spell Lenz, but have the added side effect of causing 1 point of Constitution damage each time they are activated. This ability damage is temporary, but can kill.
Easy
A Lenz with this facet is very easy to activate, and all such checks receive a +4 enhancement bonus. This bonus is only added to activation, not to checks to junction or forge the Lenz. Easy cannot be added to any Lenz in the severe category.
Selective
This Facet can be added to any spell Lenz with an area of effect. In such a case the caster may choose whom to include and whom to exclude from the spell’s effects when the Lenz is activated.
Expand
The Expand Facet can be added to any spell that normally has an area of effect such as Hex. This area of effect is doubled. This facet can increase the area of effect created by the Burst facet, but in such a case it is a +2 facet.
Swift
A Swift Lenz can be activated with a standard action, rather than a full-round action. This effect stacks with the Swift Spell feat: if a character with that feat activates a Swift Lenz (including the -5 penalty), they are able to activate the Lenz as a move action.
Extend
A spell with the Extend Facet increases its range to the next category, to a maximum of Long. Touch becomes a Close-ranged touch attack, Close becomes Medium, and Medium becomes Long. Spells with a range of Long cannot benefit from this facet.
Iounic
An Iounic Spell Lenz can be activated so long as it is within three feet of the Lenz master, and it need not be touching his skin.
Lasting
This Facet can be added to any Lenz with a non-instantaneous, non-permanent duration. These spell Lenz find their base duration doubled.
Piercing
Of most use to offensive Lenz, any caster roll made to overcome spell resistance with this Lenz receives a +5 bonus. This Facet increases the effective Caster level for the purpose of overcoming Spell Resistance only.
ADVANCED & LEGENDARY CLASSES
Below is a list of classes unique to the world of Deviant Evolution. These classes are available to any deviants who meet the requirements.
ADVANCED CLASSES
The following classes are generally available. While the requirements mean that a character without the limit break cannot progress to a high level, they can select a level or two of one of these classes. Of the advanced classes only the Augmented Mechanetic grants the limit break, albeit only in a very limited form.
AUGMENTED MECHANETIC
Not everyone is born with a special heritage, or has the power to master supernatural forces. For some the only path to power is through augmentation. An augmented mechanetic is a full conversion mechanetic borg that specializes in using their mechanized bodies to full effect – above and beyond what normal augmented soldiers accomplish. This is the perfect advanced class for heroes who want the power and ability of heavy hitters without relying on arcane forces and Lenz. Strong and Tough heroes are good entry points to this class, although virtually any class can meet the primary requisites.
REQUIREMENTS
In order to qualify for this advanced class a character must fulfill the following criteria.
Skills: Knowledge (tactics) 6 Ranks
Feats: Cybertaker
Special: Total Conversion Mechanetic
CLASS INFORMATION
Hit Dice
The Augmented Mechanetic gains 1d12 hit points per level. The character’s constitution modifier applies.
Action Points
An Augmented Mechanetic gains a number of action points equal to 6 + one-half her character level, rounded down, every time she attains a new level in this class.
Class Skills
The Augmented Mechanetic’s class skills are as follows. Balance (Dex), Computer Use (Int), Demolitions (Int), Drive (Dex), Intimidate (Cha), Jump (Str), Knowledge (current events, tactics) (Int), Listen (Wis), Navigate (Int), Pilot (Dex), Read/Write Language (none), Repair (Int), Speak Language (none), Spot (Wis), Survival (Wis).
Skill Points at Each Level: 4 + Int modifier.
Damage Reduction (Ex): All total conversion Mechanetics have a Damage Reduction of at least 7/-. At level 5 the character learns to make the best of their metal bodies, and increases their DR by +3 to at least 10/-.
Increased Tolerance (Ex): At levels 3 and 7 the Augmented Mechanetic increases his Mechanetic Tolerance by +1. This is in addition to the effects of the Cybertaker feat.
Improved Healing (Ex): Normally, Total Conversion Mechanetics cannot recover more than ¾ of their maximum hit points without the aid of someone capable of Mechanetic repairs. At 4th level the character can recover all hit points with the aid of a simple Repair check (DC 25). The Mechanetic Surgery feat is not required, although a tool kit is. Starting at level 8 the character’s Mechanetics are no longer subject to attack as if they were equipment.
Mechanetic Bonus (Ex): At First level the Augmented Mechanetic learns to take full advantage of their enhanced mechanical body. This has a number of beneficial effects:
- +2 Fortitude save vs. Petrify, Stun, and Wound.
- Unarmed attacks deal lethal damage.
- +4 to save vs. Massive Damage.
- **Limit Break:** So long as all levels beyond level 5 are in the Augmented Mechanetic advanced class, a character can progress in this class up to level 10.
Power Surge (Ex): Starting at 2nd level the Mechanetic can temporarily increase his Strength and Dexterity by overloading his system, at the cost of a penalty to saving throws. The Mechanetic gains a +6 morale bonus to both Strength and Dexterity, but takes a -2 penalty on all saving throws. Activating a power surge is a free action, and the surge lasts for as many rounds as the character has Augmented Mechanetic levels. Following a power surge, the Mechanetic is fatigued (-2 to Strength and Dexterity) for as many rounds as he surged, but may negate this penalty as a free action by spending an action point. The Mechanetic may use the power surge once per day at 2nd level, twice per day at 6th level, and three times per day at 8th level.
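The surge numbers above can be summarized in two hypothetical helpers (an illustration, not rules text):

```python
def surges_per_day(class_level: int) -> int:
    """Power Surge uses: one at 2nd level, two at 6th, three at 8th."""
    if class_level < 2:
        return 0
    return 1 + (class_level >= 6) + (class_level >= 8)

def surge_effects(class_level: int) -> dict:
    """Snapshot of an active surge: +6 Str/Dex, -2 to saves, lasting one
    round per Augmented Mechanetic level, with matching fatigue after
    (negatable by spending an action point)."""
    return {"str_dex_bonus": 6, "save_penalty": -2,
            "surge_rounds": class_level, "fatigue_rounds": class_level}

print(surges_per_day(7))  # 2
```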
DRAGON HUNTER
Few hunters are as dedicated to the cause of slaying monsters as the Dragon hunters. The name of this class is more of a goal than a label – the ultimate goal of all dragon hunters is to slay a dragon, something few do. To that end the hunters focus on combating magical beasts of all kinds – the monsters that hunt the wilds and the beasts created by lunatics who intentionally expose animals to Mana in hopes of creating weapons. The fastest path to this class is through the Strong hero, although Fast and tough are good too.
REQUIREMENTS
To qualify as a Dragon Hunter a character must fulfill the following criteria.
Base Attack Bonus: +2
Skills: Knowledge (lenz and mana) 3 Ranks, Survival 3 Ranks
Feats: Weapon Focus (archaic)
CLASS INFORMATION
Hit Dice
A Dragon Hunter gains 1d10 hit points per level. The character’s Constitution modifier applies.
Action Points
A Dragon Hunter gains a number of action points equal to 6 + one-half her character level, rounded down, every time she attains a new level in this class.
Class Skills
The Dragon Hunter’s class skills are as follows.
Balance (Dex), Climb (Str), Intimidate (Cha), Jump (Str), Handle Animal (Cha), Hide (Dex), Knowledge (life science, lenz and mana, tactics) (Int), Move Silently (Dex), Pilot (Dex), Read/Write Language (none), Ride (Dex), Speak Language (none), Sense Motive (Wis), Survival (Wis), Treat Injury (Wis).
Skill Points at Each Level: 4 + Int modifier.
Bonus Feats (Ex): At levels 3, 6, and 9 the character receives a bonus feat. However, this feat must be selected from the following list. You must meet all prerequisites to select a feat.
Action Boost, Acrobatic, Alertness, Animal Affinity, Athletic, Attentive, Blind-Fight, Builder, Cautious, Confident, Creative, Deceptive, Dodge, Endurance, Focused, Frightful Presence, Gearhead, Great Fortitude, Guide, Heroic Surge, Improved Damage Threshold, Improved Initiative, Iron Will, Lightning Reflexes, Low Profile, Medical Expert, Meticulous, Nimble, Renown, Run, Runic Weapon, Stealthy, Toughness, Trustworthy, Weapon Finesse, Weapon Focus.
Damage Bonus (Ex): A dragon hunter gains a bonus on weapon damage rolls against Magical Beasts equal to ½ his class level in the Dragon Hunter class.
Energy Resistance (Su): At 5th level, a dragon hunter gains resistance to acid 5, cold 5, electricity 5, fire 5, and sonic 5. These resistances improve to 10 at 10th level.
| Level | Base Attack Bonus | Fort Save | Ref Save | Will Save | Special Abilities | Defense Bonus | Reputation Bonus |
|-------|-------------------|-----------|----------|-----------|--------------------------------------------------------|---------------|-----------------|
| 1st | +1 | +1 | +1 | +0 | Track, Damage Bonus, Predator’s Aura | +1 | +1 |
| 2nd | +2 | +2 | +2 | +0 | Sense Magical Beasts | +1 | +1 |
| 3rd | +3 | +2 | +2 | +1 | Bonus Feat | +2 | +1 |
| 4th | +4 | +2 | +2 | +1 | Evasion | +2 | +2 |
| 5th | +5 | +3 | +3 | +1 | Energy Resistance 5 | +3 | +2 |
| 6th | +6 | +3 | +3 | +2 | Bonus Feat | +3 | +2 |
| 7th | +7 | +4 | +4 | +2 | -- | +4 | +3 |
| 8th | +8 | +4 | +4 | +2 | Improved Evasion | +4 | +3 |
| 9th | +9 | +4 | +4 | +3 | Bonus Feat | +5 | +3 |
| 10th | +10 | +5 | +5 | +3 | Energy Resistance 10 | +5 | +4 |
These resistances stack with any other similar effects gained from classes or natural abilities, but not magic or items.
**Evasion (Ex):** When he reaches 4th level, a hunter can avoid even magical and unusual attacks with great agility. If he makes a successful Reflex saving throw against an attack that normally deals half damage on a successful save, he instead takes no damage. Evasion can be used only if the hunter is wearing light armor, medium armor, or no armor. A helpless hunter does not gain the benefit of evasion.
**Improved Evasion (Ex):** This ability works like evasion, except that while the hunter still takes no damage on a successful Reflex saving throw against attacks, he henceforth takes only half damage on a failed save. A helpless hunter does not gain the benefit of improved evasion.
**Predator’s Aura (Su):** A Dragon Hunter learns to mimic the aura of a magical monster. Natural animals and monsters can detect this aura, allowing the Dragon Hunter to use Intimidate to unnerve monsters in combat. This does not work on mindless creatures or undead.
**Sense Magical Beasts (Su):** As a full-round action the hunter can concentrate and feel the presence of any creature with the Magical Beast or Monstrous Humanoid creature type.
**Track (Ex):** A hunter adds half his level (minimum 1) to Survival skill checks made to follow tracks.
HUNTER OF THE DEAD
Only the bravest – or most dedicated – specialize in hunting the Horrors and undead created by the miasma. Legend has it that the first deviants, before the corps, were those warriors who could, and would, fight in the depths of the Miasma to hunt the undead. Make no mistake: though these warriors hate and hunt the undead and horrors above all others, this does not automatically make them shining examples of light and good. The opposite is often true, as these killers must harden their hearts to do what they do. The Dedicated hero is the fastest route to this advanced class.
REQUIREMENTS
To qualify as a Hunter of the Dead a character must fulfill the following criteria.
Base Attack Bonus: +2
Skills: Knowledge (Miasma and Horrors) 3 Ranks, Survival 6 ranks, and Lenz Arts 2 ranks
Feats: Lenz Focus (Rebuke or Cure)
CLASS INFORMATION
Hit Dice
A Hunter of the Dead gains 1d8 hit points per level. The character’s constitution modifier applies.
Action Points
Hunters of the Dead gain a number of action points equal to 6 + one-half her character level, rounded down, every time she attains a new level in this class.
Class Skills
The Hunter of the Dead’s class skills are as follows.
Concentration (Con), Jump (Str), Hide (Dex), Investigate (Int), Knowledge (Tactics, Theology and Miasma) (Int), Lenz Arts (Con), Move Silently (Dex), Pilot (Dex), Read/Write Language (none), Ride (Dex), Speak Language (none), Sense Motive (Wis), Survival (Wis), Treat Injury (Wis).
| Level | Base Attack Bonus | Fort Save | Ref Save | Will Save | Special Abilities | Defense Bonus | Reputation Bonus |
|-------|-------------------|-----------|----------|-----------|------------------------------------|---------------|-----------------|
| 1st | +1 | +1 | +0 | +1 | Detect undead | +1 | +0 |
| 2nd | +2 | +2 | +0 | +2 | Smite undead | +1 | +0 |
| 3rd | +3 | +2 | +1 | +2 | Miasma Walk | +2 | +1 |
| 4th | +4 | +2 | +1 | +2 | — | +2 | +1 |
| 5th | +5 | +3 | +1 | +3 | True death | +3 | +1 |
Skill Points at Each Level: 5 + Int modifier.
Detect Undead (Sp): At will, a hunter of the dead can detect the aura that surrounds undead creatures. The amount of information revealed depends on how long you study a particular area.
1st Round: Presence or absence of undead auras.
2nd Round: Number of undead auras in the area and the strength of the strongest undead aura present. If the strongest creature’s HD are at least twice your own, you are stunned for 1 round.
3rd Round: The strength and location of each undead aura. If an aura is outside your line of sight, then you discern its direction but not its exact location.
Miasma Walk (Su): A hunter of the dead of 3rd level or higher is immune to the Constitution draining effects of Miasma. Further, the hunter applies her Wisdom modifier (if positive) as an additional bonus on all saving throws against effects and spells used by undead. This bonus stacks with the Wisdom modifier already applied to Will saves.
Smite Undead (Su): Starting at 2nd level, the hunter can attempt to smite undead with one normal melee attack. She adds her Wisdom modifier (if positive) to her attack roll and deals 2 extra points of damage per class level. For example, a 4th-level hunter of the dead armed with a longsword would deal 1d8 + 8 points of damage, plus any additional bonuses for Strength and magical effects that normally apply. If a hunter of the dead accidentally smites a creature that is not undead, the smite has no effect but still uses up a daily charge. The hunter of the dead may attempt to smite undead a number of times per day equal to her Wisdom modifier plus her class level.
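The smite arithmetic reduces to a few sums; this sketch (hypothetical helper, not rules text) reproduces the 4th-level example:

```python
def smite_undead(class_level: int, wis_mod: int):
    """Smite Undead numbers: attack bonus (Wis mod if positive),
    extra damage (2 per class level), and daily uses (Wis mod + level)."""
    atk_bonus = max(wis_mod, 0)
    extra_damage = 2 * class_level
    uses_per_day = wis_mod + class_level
    return atk_bonus, extra_damage, uses_per_day

# A 4th-level hunter with Wis 14 (+2): longsword smite deals 1d8 + 8
print(smite_undead(4, 2))  # (2, 8, 6)
```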
**True Death (Su):** Undead slain by a hunter of the dead of 5th level or higher, either by melee attacks or spells, can never rise again as undead. They are forever destroyed.
### Tactical Sorcerer
The Tactical Sorcerer is the undisputed master of Casters – weapons that consume mana to produce spell-like effects. Most people view casters as little more than weapons, albeit ones that can harm monsters and horrors otherwise invulnerable to conventional firearms. The Tactical Sorcerer, on the other hand, sees Casters for what they truly are – living weapons with exceptional versatility and power. Select this advanced class if you want a magical warrior with the power of a Lenz master and the panache of a gunslinger. The Fast hero is the easiest ingress into this class, but Tough and Dedicated heroes can work.
### Requirements
In order to qualify as a Tactical Sorcerer a character must fulfill the following criteria.
**Base Attack Bonus:** +2
**Skills:** Knowledge (tactics) 4 Ranks, Repair 6 Ranks
**Feats:** Caster Proficiency, Lightning Reflexes
| Level | Base Attack Bonus | Fort Save | Ref Save | Will Save | Special Abilities | Defense Bonus | Reputation Bonus |
|-------|-------------------|-----------|----------|-----------|--------------------------------------------------------|---------------|-----------------|
| 1st | +0 | +1 | +1 | +0 | Caster Specialization, Awaken Caster | +1 | +0 |
| 2nd | +1 | +2 | +2 | +0 | — | +1 | +0 |
| 3rd | +2 | +2 | +2 | +1 | Piercing Caster, Rapid Reload | +2 | +1 |
| 4th | +2 | +2 | +2 | +1 | — | +2 | +1 |
| 5th | +3 | +3 | +3 | +1 | Fuel Caster | +3 | +1 |
| 6th | +4 | +3 | +3 | +2 | — | +3 | +2 |
| 7th | +4 | +4 | +4 | +2 | Improved Caster Critical | +4 | +2 |
| 8th | +5 | +4 | +4 | +2 | — | +4 | +2 |
| 9th | +6 | +4 | +4 | +3 | Overload Caster | +5 | +3 |
| 10th | +6 | +5 | +5 | +3 | — | +5 | +3 |
### Class Information
**Hit Dice**
A Tactical Sorcerer gains 1d8 hit points per level. The character’s constitution modifier applies.
Action Points
A Tactical Sorcerer gains a number of action points equal to 6 + one-half her character level, rounded down, every time she attains a new level in this class.
Class Skills
The Tactical Sorcerer’s class skills are as follows. Bluff (Cha), Climb (Str), Concentration (Con), Jump (Str), Intimidate (Cha), Knowledge (Tactics, Technology) (Int), Move Silently (Dex), Pilot (Dex), Read/Write Language (none), Repair (Int), Speak Language (none), Sense Motive (Wis), Survival (Wis), Treat Injury (Wis), Tumble (Dex).
Skill Points at Each Level: 5 + Int modifier.
Awaken Caster (Su): At first level the Tactical Sorcerer can form a bond with a specific caster, Awakening the Lenz inside. Awakening a caster requires 8 hours of concentration, meditation, and work. At the end of this time the caster becomes bonded to the Tactical Sorcerer – much like how a Lenz becomes Junctioned with its master. An Awakened caster in fact takes one of the Tactical Sorcerer’s Lenz slots.
Once a Caster is Awakened, the Tactical Sorcerer can spend experience points to increase the Lenz Level of the caster within. However, unlike forging a Lenz, an awakened caster cannot have Facets.
| Caster Bonus | Experience Required | Sacrifice | Min. Level |
|--------------|---------------------|-----------|------------|
| +1 | 400 XP | | 2nd |
| +2 | 1,000 XP | | 4th |
| +3 | 2,200 XP | | 6th |
| +4 | 5,000 XP | | 8th |
| +5 | 6,400 XP | | 10th |
An Awakened caster’s damage dice increase just like those of the Lenz used in its creation. Additionally, an Awakened caster gains a bonus on attack rolls equal to its level (+1 at +1, and so on).
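Under one reading of the table above (assuming each step’s cost is paid separately, which the text does not specify), the XP to raise an awakened caster can be tallied like this (hypothetical helper names):

```python
# XP costs from the Awaken Caster table, keyed by caster bonus.
AWAKEN_XP = {1: 400, 2: 1000, 3: 2200, 4: 5000, 5: 6400}

def xp_to_raise(current: int, target: int) -> int:
    """Total XP to raise an awakened caster from one bonus to another,
    assuming each step is paid separately (an assumption, not rules text)."""
    return sum(AWAKEN_XP[b] for b in range(current + 1, target + 1))

print(xp_to_raise(0, 2))  # 1400 XP from +0 to +2 under this reading
```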
Caster Specialization (Ex): The Tactical Sorcerer’s dedication to casters allows him to expertly deal damage with the type of caster on which he is focused. He gains a +2 bonus on all damage rolls made with that caster type.
Fuel Caster (Su): Starting at 5th level the Tactical Sorcerer can attempt to fuel an awakened caster with their own mana, rather than charges from a mana canister. The Caster must be awakened and in hand to do this, in such a case firing the Caster becomes a full round action.
Improved Caster Critical (Ex): At level 7 a Tactical Sorcerer Crits on a 19-20 with any Caster.
Overload Caster (Su): A 9th-level Tactical Sorcerer can overload an awakened caster with his own mana. This consumes a charge as normal, as well as requiring a full-round action. The overloaded caster shot deals maximum damage based on its dice. For example, a +3 Lightning Caster would deal 36 damage on a successful Overcharged shot. A normal attack roll is required.
Piercing Caster (Ex): When using an Awakened Caster, the Tactical Sorcerer of 3rd level or higher may add his Class level to rolls to overcome spell resistance.
Rapid Reload (Ex): At 3rd level the Tactical Sorcerer becomes readily proficient in reloading his casters, and may swap out a mana canister as a free action.
MANATECH ENGINEER
It takes a special mind to combine the intuitive understanding of Lenz and mana with technology. She uses her mechanical aptitude to solve everyday problems, and she uses her understanding of mana and Lenz to create and upgrade almost magical devices. The Manatech Engineer is the brains behind the party’s best toys, able to build and repair all of the weapons and equipment deviants need in the field, but also upgrade and augment these weapons in new and amazing ways. Select this advanced class if you want your character to excel at building, modifying, repairing, and disabling mana-technology of all kinds, including weapons. The fastest path into this advanced class is from the Smart hero basic class, though other paths are possible.
REQUIREMENTS
To qualify to become a Manatech Engineer a character must fulfill the following criteria.
Skills: Computer Use 6 ranks, Craft (electrical) 6 ranks, Craft (mechanical) 6 ranks, Knowledge (technology) 6 ranks, Repair 6 ranks.
Feats: Craft Syntech.
CLASS INFORMATION
Hit Dice
A Manatech Engineer gains 1d6 hit points per level. The character’s constitution modifier applies.
Action Points
A Manatech Engineer gains a number of action points equal to 6 + one-half her character level, rounded down, every time she attains a new level in this class.
Class Skills
The Manatech Engineer’s class skills are as follows. Computer Use (Int), Craft (electronic, mechanical, structural) (Int), Disable Device (Int), Drive (Dex), Knowledge (lenz and mana, physical sciences, technology) (Int), Navigate (Int), Pilot (Dex), Profession (Wis), Read/Write Language (none), Repair (Int), Search (Int), Speak Language (none).
Skill Points at Each Level: 7 + Int modifier.
Bonus Feats: At 1st, 3rd, 6th, and 9th level, the Manatech Engineer gets a bonus feat. The bonus feat must be selected from the following list, and the Engineer must meet all the prerequisites of the feat to select it. Aircraft Operation, Caster Proficiency, Craft Mechanetics, Lenz Master, Builder, Cautious, Gearhead, Surface Vehicle Operation, Vehicle Expert.
Craft XP Reserve (Ex): Starting at 5th level, a Manatech Engineer can create Syntech or mastercraft weapons or items without investing as much of himself in the process. At 5th level and every level thereafter, an Engineer gains a special reserve of experience points equal to 100 x his Engineer class level. These extra experience points are separate from experience gained through level advancement and can only be used to make mastercraft items; they do not count toward level gain. An Engineer must spend the extra experience points he gains at each level, for when the Engineer gains a level, he loses any unspent experience points in his reserve. For example, at 6th level, the Engineer gains 600 XP to spend on making Syntech or mastercraft items; any unspent experience points in his reserve from the previous level are lost.
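The reserve arithmetic above is a single multiplication; a sketch (hypothetical helper name, not rules text):

```python
def craft_reserve(engineer_level: int) -> int:
    """Craft XP reserve gained at each Manatech Engineer level of 5th
    or higher: 100 x class level, usable only for Syntech/mastercraft
    items and lost if unspent when the next level is gained."""
    return 100 * engineer_level if engineer_level >= 5 else 0

print(craft_reserve(6))  # 600 XP, matching the example in the text
```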
**Dual Feature (Ex):** At 10th level the Manatech Engineer is an absolute master of Syntech equipment, and can add two Mastercraft bonuses to a Syntech device, though no more than +3 to any one effect or roll. For example, an Engineer could create Syntech Stealth armor with a +3 to Defense and a +3 to Hide, but not a +6 to Defense.
**Quick Craft (Ex):** At 2nd level, an Engineer learns how to craft ordinary scratch-built electronic, mechanical, and structural objects more quickly than normal. When using the Craft (electronic), Craft (mechanical), or Craft (structural) skill to build an ordinary scratch-built item, the Manatech Engineer reduces the building time by one-quarter. For example, a complex electronic device that normally takes 24 hours to build takes the Engineer 18 hours to build. At 5th level, the Manatech Engineer reduces the building time of Mastercraft objects and Syntech objects by one-quarter.
**Reconfigure Syntech (Ex):** At 4th level, the Manatech Engineer can reconfigure a syntech weapon, changing its Syntech qualities. Reconfiguring a Syntech weapon requires 1 hour of work and a successful Repair check (DC 20); reconfiguring a mastercraft Syntech weapon is slightly harder (DC 20 + the weapon’s mastercraft bonus feature). An Engineer may take 10 or take 20 on this check. Weapons can be reconfigured multiple times; each time a weapon is reconfigured, it imparts a new benefit. The reconfiguration imposes a -1 penalty on attack rolls made with the weapon but grants one of the following benefits indefinitely:
**Augment Lenz:** The Engineer can attempt to replace a normal Lenz with a Forged Lenz in a Caster. Add the Lenz’s forged bonus to the Repair check. This destroys the Lenz, but treat the weapon as an Awakened Caster (see Tactical Sorcerer, above).
**Change Lenz:** The engineer can swap out the Lenz, or change a fire, ice, or lightning Lenz to another of those three without removing the Lenz by retuning it.
**Greater Mana Capacity:** The reconfigured device consumes less mana than normal, increasing the effective charge capacity of canisters in the weapon by 50%. This benefit applies to casters and other devices that consume mana.
**Greater Concealment:** The reconfiguration grants a +2 bonus on Sleight of Hand checks made to conceal the reconfigured weapon.
**Greater Range Increment:** The reconfigured weapon's range increment increases by 10 feet. This benefit applies only to weapons with range increments.
**Recover Lenz:** The Engineer can attempt to recover a viable Lenz from a piece of Syntech equipment, destroying the Syntech device completely. This has a DC of 25. The Lenz from Awakened Casters cannot be recovered.
**Signature Shooter:** The weapon is reconfigured for a single individual's use only and is treated as a unique exotic weapon. Anyone else who uses the weapon takes a -4 nonproficient penalty on attack rolls.
**Syntech Enchantment:** The Engineer can swap out an enchantment, provided he has the components (i.e., the appropriate Lenz). Assault enchantments can simply be “retuned” like a caster; other enchantments are destroyed.
**Sabotage (Ex):** At 3rd level and beyond, the Engineer can sabotage a mana-powered machine so that it operates poorly. The Engineer must succeed on a Disable Device check (DC 20) to accomplish the downgrade; sabotaging a mastercraft object is slightly harder (DC 20 + the mastercraft object’s bonus feature). Noticing the Engineer’s handiwork without first testing the sabotaged device requires a successful Search check (DC = the Engineer’s Disable Device check result). Fixing the sabotaged item requires a successful Repair check (see the Repair skill description on page 70 of the d20 Modern Roleplaying Game).
**Sabotage Device:** As a full-round action, the Engineer can reconfigure a device with electrical or mechanical components (such as a computer, a tool kit, or a vehicle) so that anyone who uses it suffers a penalty equal to the Engineer’s class level on skill checks made to use the device.
**Sabotage Weapon:** As a full-round action, the Engineer can sabotage a weapon so that it misfires or breaks the next time it is used. A sabotaged weapon cannot be used effectively until repaired.
**Superior Repair (Ex):** At 2nd level an Engineer learns improved ways of repairing weapons, armor, and mechanetic attachments. An Engineer with a mechanical tool kit and an appropriate facility (a workshop, garage, or hangar) can repair damage to a weapon, armor, or mechanetic attachment. (Without a mechanical tool kit, the Engineer takes a -4 penalty on the Repair check.) With 1 hour of work, the engineer can restore a number of hit points based on his Repair check result, as shown in the table below for Superior Repair. If damage remains, the Engineer may continue to make repairs for as many hours as needed to fully repair the object.
| Repair Check | Damage Repaired |
|--------------|--------------------------|
| Less than 20 | None |
| 20-29 | 2d6 + Engineer class level |
| 30-39 | 3d6 + Engineer class level |
| 40+ | 4d6 + Engineer class level |
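The Superior Repair table maps check results to dice; a sketch of that lookup (hypothetical helper, with the random number generator supplied by the caller):

```python
import random

def superior_repair(check_result: int, engineer_level: int,
                    rng: random.Random) -> int:
    """Hit points restored by one hour of Superior Repair, per the
    table: 2d6/3d6/4d6 + Engineer class level by Repair check result."""
    if check_result < 20:
        return 0
    dice = 2 + min((check_result - 20) // 10, 2)  # 2d6, 3d6, or 4d6
    return sum(rng.randint(1, 6) for _ in range(dice)) + engineer_level

# A 3rd-level Engineer rolling a 25 repairs 2d6 + 3 hit points:
print(superior_repair(25, 3, random.Random(0)))
```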
**Syntech MasterCrafter (Su):** At first level, a Manatech Engineer who possesses the feat may add Mastercraft features to Syntech weapons and equipment.
**Quick Fix (Ex):** At 7th level, the Engineer can repair a Syntech device in half the normal time: see the Repair skill description on page 70 of the d20 Modern Roleplaying Game for normal repair times. However, cutting the repair time increases the Repair check DC by 5.
**Weapon Upgrade (Su):** At 8th level an Engineer can upgrade ordinary handheld or mechanetic-installed weapons with Syntech, or add additional features to existing Syntech weapons.
| Weapon Upgrade | DC |
|-----------------------------------------------------|------|
| Weapon also dazes target for 1 round | 25 |
| Weapon also knocks target prone | 30 |
| Weapon leaves target shaken for 1d4 rounds | 35 |
| Weapon also stuns target for 1d4 rounds | 40 |
| Weapon deals an extra dice of damage | 25 |
| Weapon deals an extra two dice of damage | 40 |
| Ordinary Weapon ignores 5 points of target's hardness/DR | 30 |
| Weapon's critical hit multiplier increases by 1 | 35 |
| Ordinary Weapon ignores 10 points of target's hardness/DR | 40 |
The Engineer must spend 1 hour tinkering with the weapon and have at least one mana canister, after which he must succeed on a Craft (mechanical) check. The DC varies depending on how the weapon is modified, as shown on the table above. If the skill check fails, the attempt to modify the weapon also fails, although the Engineer may try again. (The Engineer may take 20 on the skill check, but the upgrade then takes 20 hours to complete.) An upgraded weapon has a 10% chance of breaking after each time it is used; a broken weapon cannot be used again until repaired, and repairing it requires 1 hour and a successful Repair check (DC 40).
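The upgrade attempt and the post-upgrade break chance can be modeled as follows. A minimal Python sketch, with invented function names; it assumes a d20 Craft (mechanical) roll against the table DC, taking 20 as an automatic 20, and the flat 10% break chance per use described above.

```python
import random

def attempt_upgrade(craft_bonus: int, dc: int, take_20: bool = False) -> bool:
    """One Weapon Upgrade attempt: Craft (mechanical) check vs. the table DC.

    take_20 trades 20 hours of work for an automatic roll of 20.
    """
    roll = 20 if take_20 else random.randint(1, 20)
    return roll + craft_bonus >= dc

def weapon_breaks() -> bool:
    """10% chance the upgraded weapon breaks after each use."""
    return random.random() < 0.10
```

For example, an Engineer with a +10 Craft (mechanical) bonus can reliably add a dazing or hardness-ignoring upgrade (DC 25-30) by taking 20, but cannot reach the DC 40 upgrades that way.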
LEGENDARY CLASSES
Where the advanced classes above are available to anyone with sufficient skill and ability, Legendary Classes are only available to those with the will to power – the Limit Break.
ABOMINATION
What becomes of those born from a poisoned line? Some fight against their own mutation, struggling to hide it, struggling to secret away their magical nature. Others embrace it, becoming more. Born from a monster, an Abomination chooses to fully embrace their connection to the mana field. More than human, less than a monster, Abominations develop powers not unlike those of true monsters. This is the class for characters who want to progress their monstrous legacy to its full potential, fighting fire with fire.
REQUIREMENTS
Becoming an Abomination is no simple task.
Skills: Intimidate 6 ranks, Survival 8 ranks.
Feats: Monstrous Legacy, Ruinous Power
Special: The character cannot have the Crimson Prince ability from Ruinous Power
CLASS INFORMATION
Hit Dice
An abomination gains 1d12 hit points per level. The character’s constitution modifier applies.
Action Points
Each level an Abomination gains a number of action points equal to 7 + one-half her character level, rounded down.
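The action point formula ("7 + one-half her character level, rounded down") recurs for every class in this chapter, so it is worth pinning down with a sketch. An illustrative Python helper (the function name is invented here):

```python
def action_points_gained(character_level: int) -> int:
    """Action points gained at each new class level:
    7 + one-half character level, rounded down."""
    return 7 + character_level // 2
```

So a 5th-level character gains 9 action points at her new level, and a 10th-level character gains 12.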
Class Skills
An Abomination’s class skills are as follows.
Balance (Dex), Climb (Str), Concentration (Con), Escape Artist (Dex), Hide (Dex), Intimidate (Cha), Jump (Str), Listen (Wis), Move Silently (Dex), Search (Int), Sense Motive (Wis), Sleight of Hand (Dex), Spot (Wis), Survival (Wis), Swim (Str), Tumble (Dex).
Skill Points at Each Level: 4 + Int modifier.
Ability Bonus: Starting at 2nd level, and every even level thereafter, the character receives an ability bonus.
Blindsense (Ex): At 5th level, the abomination gains blindsense with a range of 30 feet. Using nonvisual senses, such as acute smell or hearing, the abomination notices things it cannot see. He usually does not need to make Spot or Listen checks to notice and pinpoint the location of creatures within range of his blindsense ability, provided that he has line of effect to that creature. Any opponent the abomination cannot see still has total concealment against him, and the abomination still has the normal miss chance when attacking foes that have concealment. Visibility still
affects the movement of a creature with blindsense. A creature with blindsense is still denied its Dexterity bonus to Defense against attacks from creatures it cannot see. At 10th level, the range of this ability increases to 60 feet.
**Command Monsters (Su):** At 7th level, the Abomination can command monsters in a manner almost identical to the Dominion Lenz. The total number of Hit Dice of monsters that the Abomination can command at any time is equal to twice their class level.
**Devour Lenz (Ex):** Starting at 1st level, the Abomination learns to devour Lenz like true monsters, swallowing a Lenz whole. Once swallowed, the Lenz is Junctioned just like any other Lenz and can be Forged normally. However, once devoured a Lenz cannot be changed and becomes forever part of the character, filling one of the character's Lenz slots. The devoured Lenz takes on the Quick Cast property without increasing its level, and grants a +1 bonus to Casting Checks made with it.
**Mutation:** At levels 3, 6, and 9 the Abomination takes on an additional mutation. These mutations are similar to, but more powerful than, those granted by the Ruinous Power feat.
**Beast Flesh (Ex):** You gain a DR 1/- per Abomination Level. This stacks with DR from class or feats.
**Mana Disruption (Ex):** You gain Spell Resistance 10, plus your Class level. If you already possess Spell resistance increase it by your class level.
**Monstrous Resistance (Ex):** You gain Resist 5 against Fire, Lightning, and Cold. This stacks with any other energy resistance you may have.
**Life Drain (Su):** You gain the ability to deal Con damage with melee attacks. Each successful melee attack will cause 1 point of temporary Con damage to the victim, and give the abomination 2 temporary hit points. These temporary hit points last for up to 1 hour.
**Starspawn (Su):** You gain a fly speed (with average maneuverability) equal to your base land speed. If you already possess flight, you gain the ability to fly at perfect maneuverability.
**Witchclaws (Su):** At will you can cause the mana around your natural weapons to ignite in turquoise fire. The flames shed a ghostly light and add 2d6 damage to your natural attacks.
**Natural Weapons (Ex):** At 2nd level, an Abomination gains claw and bite attacks if he does not already have them. The claws deal 1d6 damage, and the bite deals 1d4. The Abomination is considered proficient with these attacks. When making a full attack, he uses his full base attack bonus with the bite attack but takes a -5 penalty on claw attacks. The Multiattack feat reduces this penalty to -2.
**True Monster (Ex):** At level 10 the Abomination becomes a true monster. He gains +4 to Strength and -2 to Charisma. His natural armor Defense bonus increases to +4, and he acquires low-light vision, 60-foot darkvision, immunity to sleep and paralysis effects, and immunity to mana energy type damage.
BLOOD KING
In the aftermath of the great war, a shadowy cult of mutants known as the Crimson Nobles emerged into the horrifying destruction of the northern nations. Using their power and resources, these beings were able to stabilize large sections of Ridmar and Gaidia as the governments collapsed. From their mountain estates the powerful and ancient Crimson Nobles ruled these territories, until the arrival of the first Blood King. More than a mutant, less than a horror, the Blood King possessed awesome supernatural powers over miasma and horrors, and was rumored to be able to create his own. Only those descended from the Crimson Princes can elevate their powers, becoming something worse than the shambling horrors that wander the badlands: a predator that feeds upon life and death in equal measure.
REQUIREMENTS
Only those with great will and the blood of the Crimson Nobility can become true Blood Kings.
Skills: Intimidate 12 ranks, Bluff 8 ranks, Sense Motive 8 ranks.
Feats: Legacy of Horror, Ruinous Power
Special: The character must possess the Crimson Prince ability from Ruinous Power.
CLASS INFORMATION
Hit Dice
Each level a Blood King gains 1d8 hit points. The character’s Constitution modifier applies.
| Level | Base Attack Bonus | Fort Save | Ref Save | Will Save | Special Abilities | Defense Bonus | Reputation Bonus |
|-------|------------------|-----------|----------|-----------|--------------------------------------------------------|---------------|-----------------|
| 1st | +0 | +1 | +0 | +1 | Blood Drain, Blood Might, Slam | +1 | +0 |
| 2nd | +1 | +2 | +0 | +2 | Energy Resistance (5), Undead Soul | +1 | +1 |
| 3rd | +2 | +2 | +1 | +2 | Blood Might, Domination | +2 | +1 |
| 4th | +3 | +2 | +1 | +2 | Fast Healing (1), Undead Vitality | +2 | +1 |
| 5th | +3 | +3 | +1 | +3 | Command Horrors, Miasma Walk | +3 | +2 |
| 6th | +4 | +3 | +2 | +3 | Blood Might, Undead Fortitude | +3 | +2 |
| 7th | +5 | +4 | +2 | +4 | Life Drain, Create Spawn | +4 | +2 |
| 8th | +6 | +4 | +2 | +4 | Fast Healing (2), Undead Resistance | +4 | +3 |
| 9th | +6 | +4 | +3 | +4 | Blood Might, Energy Resistance (10) | +5 | +3 |
| 10th | +7 | +5 | +3 | +5 | Unholy King, Summon Miasma | +5 | +3 |
Action Points
Each level a Blood King gains a number of action points equal to 7 + one-half her character level, rounded down.
Class Skills
The Class Skills of the Blood King are as follows.
Bluff (Cha), Climb (Str), Diplomacy (Cha), Hide (Dex), Intimidate (Cha), Jump (Str), Knowledge (current events, history, miasma, popular culture, streetwise, tactics, technology, theology and philosophy) (Int), Profession (Wis), Read/Write Language (none), Listen (Wis), Move Silently (Dex), Search (Int), Sense Motive (Wis), Spot (Wis), Speak Language (none), Tumble (Dex).
Skill Points at Each Level: 4 + Int modifier.
Blood Drain (Su): A Blood King can suck blood from a grappled opponent. If the Blood King successfully bites the target and drinks its blood following a pin, the target suffers 1d4 points of Constitution damage in addition to the damage from the bite and blood loss. The Blood King heals an additional 5 hit points or gains 5 temporary hit points for 1 hour (up to a maximum number of temporary hit points equal to its full normal hit points) each round it drains blood.
Blood Might (Su): At 1st level, and at levels 3, 6, and 9, the Blood King gains a +1 enhancement bonus to Str, Dex, and Cha, to a total of +4 at 9th level.
**Command Horrors (Su):** Upon attaining 5th level, a Blood King can use their Domination gaze attack to command mindless undead (no save) and intelligent horrors (saving throw allowed).
**Create Spawn (Su):** Once a Blood King has reached 7th level, they can use their own blood to attempt to create spawn. Once per day the Blood King can feed some of their own blood to a humanoid reduced to 0 Constitution by their Blood Drain. The ritual takes at least one minute and requires the Blood King to spend an Action Point. The creature is entitled to a Fortitude save (with no bonus) against DC 10 + the Blood King's level + her Cha modifier. If the save fails, the creature slips into a death-like coma for 14 days. When the victim awakens, they possess the Crimson Noble template. However, the creature is now bound to the Blood King as per Domination (see below) until released by their creator or until the creator is killed.
**Domination (Su):** When a Blood King reaches 3rd level, it gains the ability to use its gaze attack to dominate the minds of mortal creatures as a standard action. Anyone the Blood King targets must succeed on a Will save or fall instantly under the Blood King's influence. If the Blood King and the subject have a common language, the Blood King can generally force the subject to perform as desired, within the limits of its abilities. If no common language exists, only basic commands, such as "Come here," "Go there," "Fight," and "Stand still" can be conveyed. Through this mental link, the Blood King knows what the subject is experiencing, but does not receive direct sensory input from it, nor can it communicate telepathically.
Once you have given a dominated creature a command, it continues to attempt to carry out that command to the exclusion of all other activities except those necessary for day-to-day survival (such as sleeping, eating, and so forth). Because of this limited range of activity, a Sense Motive check against DC 15 (rather than DC 25) can determine that the subject’s behavior is being influenced by an enchantment effect (see the Sense Motive skill description). Changing orders or giving a dominated creature a new command is a move action.
The Domination lasts one day per Blood King level. Subjects resist this control, and any subject forced to take actions against its nature receives a new saving throw with a +2 bonus. Obviously self-destructive orders are not carried out. Once control is established, the range at which it can be exercised is unlimited; the Blood King need not see the subject to control it. The Blood King must spend at least 1 round concentrating on the domination each day, or the subject receives a new saving throw to throw off the domination. The ability has a range of 30 feet.
**Energy Resistance (Su):** At 2nd level, the Blood King gains Resist 5 to Cold and Electricity. At level 9 this increases to Resist 10.
**Fast Healing (Ex):** Beginning at 4th level a Blood King who is not starving gains fast healing 1, recovering 1 hit point per round. At 8th level this increases to fast healing 2. A Blood King reduced to 0 hit points instantly stabilizes and recovers hit points normally. If a Blood King is reduced to -1 hit points or lower, they instantly stabilize but no longer gain the benefits of fast healing.
**Life Drain (Su):** Beginning at 7th level the Blood King can spend an Action Point to use their bite attack to drain the life energy of mortal targets. Each successful grapple and bite now drains 1d6 Constitution from the target, in addition to the damage from the bite and blood loss. If the Blood King kills a target through Constitution drain from this attack, they gain a +4 morale bonus to Str and Dex for a number of hours equal to the Hit Dice of the creature drained. Multiple kills do not increase the bonus, but they prolong the effect.
Miasma Walk (Su): Once a Blood King reaches 5th Level they are completely immune to the effects of Miasma.
Slam (Ex): A Blood King’s Unarmed attacks deal lethal damage, and are considered natural weapons.
Summon Miasma (Su): A 10th level Blood King may summon a Miasma once per day as a full-round action. They have no command over the Miasma, and it is indiscriminate in its effects. The Miasma forms with the Blood King as its center point.
Undead Fortitude (Su): At 6th level the Blood King is fortified by their unholy power and has a 25% chance to resist critical hits. When a critical hit or sneak attack is scored against the character, there is a 25% chance that the critical hit or sneak attack is negated and damage is instead rolled normally. In addition, the Blood King is not at risk of death from massive damage.
Undead Resistance (Su): Upon reaching 8th level the Blood King becomes extremely resistant to harm. The creature's Damage Reduction increases to 10, and they gain an additional +2 on saves against mind-affecting spells, Lenz, and abilities, as well as poison, paralysis, stunning, radiation, and disease. At this level the Blood King takes only 1d6 damage per round from sunlight.
Undead Soul (Su): At 2nd level, the Blood King fills their soul with darkness, taking on the spiritual characteristics of the undead: they are healed by negative energy and harmed by positive energy as if they were undead. This means Cure deals damage to them, whereas Drain heals them and harms the caster.
Undead Vitality (Su): Beginning at 4th level, the Blood King becomes closer to a horror, gaining a damage reduction of 5. At this level the creature does not need to sleep, and is immune to magic sleep effects. Finally, the blood king becomes immune to bleeding or wound damage.
Unholy King (Su): Upon attaining 10th level, the blood king becomes the lord of darkness. At this Level the Blood King no longer ages, but may be killed by violence. The creature becomes immune to Critical Hits.
## Lenz Mage
Few have the power to truly master Lenz. For those that do, the awesome power of Lenz opens to them, allowing these very few to become masters of the arcane forces of the world, wielding spells and powers. Those who rise to master this magic are rewarded with power greater than that of mere dabblers: they can command their Lenz at range, junction numerous Lenz, and Forge them with ease. The requirements for this class are so high that the last true Lenz Mage is believed to have lived in pre-history, and today only rumors of their powers and abilities remain. If you want to become the undisputed master of Lenz, this is the class. No character can match the number of Lenz a Lenz Mage can junction, and the number of unique powers they receive is beyond almost any other class.
### Requirements
Almost no other class has as strict and high requirements as the Lenz Mage.
**Skills:** Concentration 12 Ranks, Lenz Arts 12 Ranks.
**Feats:** Lenz Master
### Class Information
#### Hit Dice
Lenz Mages gain 1d6 hit points per level. The character’s constitution modifier applies.
#### Action Points
Lenz mages gain a number of action points equal to 7 plus one half their character level, rounded down, every time they advance a level in this class.
#### Class Skills
The Lenz mage's class skills are as follows:
- Computer Use (Int)
- Concentration (Con)
- Craft (chemical, electronic, mechanical, pharmaceutical) (Int)
- Decipher Script (Int)
- Demolitions (Int)
- Disable Device (Int)
- Investigate (Int)
- Knowledge (arcane lore, Art, behavioral sciences, business, civics, current events, earth and life sciences, history, Physical sciences, popular culture, streetwise, technology, theology and philosophy) (Int)
- Profession (Wis)
- Read/Write Language (none)
- Repair (Int)
- Research (Int)
- Speak Language (none)
**Skill Points at Each Level:** 7 + Int modifier.
Attune Lenz (Ex): Beginning at 1st level, the Lenz Mage increases the number of Lenz she may Junction: she adds her class level to the number of Lenz slots she may junction.
Forge XP Reserve (Ex): Starting at 5th level, a Lenz Mage can Forge her Lenz without investing as much of herself in the process. At 5th level and every level thereafter, a Lenz Mage gains a special reserve of experience points equal to 100 x her class level. These extra experience points are separate from experience gained through level advancement and can only be used to Forge Lenz; they do not count toward level gain. A Lenz Mage must spend the extra experience points she gains at each level: when the Lenz Mage gains a level, she loses any unspent experience points in her reserve. For example, at 6th level the Lenz Mage gains 600 XP to spend on Forging Lenz; any unspent points remaining from the previous level are lost.
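The use-it-or-lose-it reserve can be sketched in a few lines. An illustrative Python helper (invented function name), assuming the rule as written: 100 XP per class level from 5th level on, with nothing carried over between levels.

```python
def forge_xp_reserve(class_level: int) -> int:
    """Forge XP reserve granted at a new Lenz Mage level.

    From 5th level on, the reserve is 100 x class level; before
    that, there is no reserve. Unspent points vanish when the
    next level is gained, so the reserve never accumulates.
    """
    return 100 * class_level if class_level >= 5 else 0
```

Note the reserve is recalculated fresh each level: a Lenz Mage who banks nothing at 5th level (500 XP) still only has 600 XP to spend at 6th, not 1,100.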
Iounic Lenz (Su): Starting at 3rd Level the Lenz Mage can activate any Lenz she has Junctioned so long as it is within 3 feet – it need not be in physical contact or in a harness. At will, the Lenz mage can cause any Lenz she has Junctioned to float in orbit of her head in a lazy circle.
Meta-lenz Powers (Su): The defining characteristic of the Lenz Mage is the power to change the nature of a spell Lenz as it is activated. At 2nd level, and every even level thereafter, the Lenz Mage may select a Meta-Lenz power. Each Meta-Lenz power increases the difficulty of the activation roll by a set amount and changes the effects of the Lenz.
Arc Blast (+4): An Arc Blast fills an offensive spell with burning mana. An Arc Blast spell ignores 5 points of a creature's energy resistance against the spell's damage type.
Embody Lenz (+4): Activating an energy bolt Lenz, you wreathe your body with the energy for 1 round per level of the Lenz. You are immune to the energy generated, and your natural attacks and attacks made with weapons deal an extra 1d6 points of damage of the appropriate type. Creatures that attempt to grapple you or that successfully attack you with a natural weapon or an unarmed strike take 1d6 points of damage for each hit or round of sustained contact.
Energy Substitution (+2): Choose one type of energy (cold, electricity, or fire). You can then modify any spell with an energy descriptor to use the chosen type of energy instead. The spell’s descriptor changes to the new energy type.
Enlarge Lenz (+2): You can alter a spell with a range of close, medium, or long to increase its range by 100%. An enlarged spell with a range of close now has a range of 50 ft. + 5 ft./level, while medium-range spells have a range of 200 ft. + 20 ft./level and long-range spells have a range of 800 ft. + 80 ft./level.
Extend Lenz (+2): An extended spell lasts twice as long as normal. A spell with a duration of concentration, instantaneous, or permanent is not affected by this power.
Heighten Lenz (+2): The spell Lenz is cast at +1 level, to a maximum of +5. All effects dependent on spell level (such as damage, range, and duration) are calculated according to the heightened level.
Witchfire Lenz (+8): The Lenz mage overloads an offensive Lenz with mana adding an equal amount of the Mana energy damage to the spell’s normal effects. The altered spell works normally in all respects except for the type and amount of damage dealt, with each type of energy counting separately toward the spell’s damage. Thus, a witchfire Ice bolt from a +2 Lenz deals 3d8 points of Ice damage and 3d8 points of Mana damage (rolled separately).
Wounding Lenz (+4): When altered by this power, a spell that deals damage to a creature also inflicts a bleeding wound that does not heal normally. On each subsequent round, the victim loses 1 hit point at the beginning of your turn. The continuing hit point loss can be stopped with a Treat Injury check (DC equal to the spell's save DC) or a cure spell.
Flare (Su): The final power of the Lenz Mage is rumored to be the power to critically overload a piece of Lenz, resulting in a massive release of energy. Activating a Flare requires the expenditure of an Action Point and one full round of concentration. At the beginning of the Lenz Mage's next turn, he may magically launch the Lenz at a target within long range. This is a ranged touch attack. When the Lenz reaches its destination, it explodes, completely obliterating itself in the process. The Flare deals 10d10 damage, plus 10d10 per plus (up to +10), to everything within 80 feet of the center of the explosion. A successful Reflex save reduces this damage by half. If the Lenz has an energy descriptor, the explosion deals that damage type. Cure, Heal, and Restoration Lenz instead release a pulse of healing energy. An overloaded Bright Lenz has a blast radius of 160 feet and deals fire damage.
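The Flare damage scaling is steep enough to be worth a sketch. An illustrative Python helper (invented function name), assuming the rule as written: 10d10 base, plus 10d10 per plus of the Lenz up to +10, halved on a successful Reflex save.

```python
import random

def flare_damage(lenz_plus: int, made_save: bool = False) -> int:
    """Damage from a Flare: 10d10, plus 10d10 per plus (max +10).

    A successful Reflex save halves the rolled total.
    """
    num_dice = 10 * (1 + min(lenz_plus, 10))
    total = sum(random.randint(1, 10) for _ in range(num_dice))
    return total // 2 if made_save else total
```

At the cap, a +10 Lenz detonates for 110d10: roughly 605 damage on average, which is why the Lenz is obliterated in the process.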
WITCHFIRE SHAMAN
Legends persist in the badlands and among the tribes at the edge of the dragon lands of individuals who have learned to harness and direct the mana field through will and ritual alone. These so-called Witchfire Shamans hold religious significance among the indigenous populations of the Dracian federation and the dragon lands, and ancient Jurai myths speak of powerful mystics clad in witchfire. Beyond the legends, rumors insist that a lone figure roaming the edge of the Borean Ocean has demonstrated incredible powers over mana, seemingly capable of magic without Lenz. The A.P.O.C. has largely dismissed such rumors as monster sightings, but that does not change the fact that, for those willing, it is said that deep in the dragon lands stands a temple where masters of the old religion dwell, willing to teach the brave, or the foolish, the secrets of the Witchfire Lenz, which they call the First Lenz, and how to unlock incredible powers from this simple magic.
REQUIREMENTS
To become a witchfire shaman a character must meet the following prerequisites.
Skills: Concentration 12 ranks, Decipher Script 8 Ranks, Knowledge (Lenz and Mana) 8 ranks.
Feats: Iron Will
Special: The character must possess a Witchlight Lenz, which is sacrificed upon taking this class.
CLASS INFORMATION
Hit Dice
Witchfire Shamans gain 1d6 hit points per level. The character’s constitution modifier applies.
Action Points
At each new level the Witchfire Shaman gains 7 + one-half her character level Action Points, rounded down.
| Level | Base Attack Bonus | Fort Save | Ref Save | Will Save | Special Abilities | Defense Bonus | Reputation Bonus |
|-------|-------------------|-----------|----------|-----------|--------------------------------------------------------|---------------|-----------------|
| 1st | +0 | +0 | +0 | +1 | Witchfire Blast, Witchlight, Meditation | +0 | +0 |
| 2nd | +1 | +0 | +0 | +2 | Spell | +0 | +0 |
| 3rd | +2 | +1 | +1 | +2 | Blast Energy, Spell Resistance | +1 | +1 |
| 4th | +3 | +1 | +1 | +2 | Spell | +1 | +1 |
| 5th | +3 | +1 | +1 | +3 | Blast Energy | +2 | +1 |
| 6th | +4 | +2 | +2 | +3 | Spell | +2 | +2 |
| 7th | +5 | +2 | +2 | +4 | Blast Energy | +3 | +2 |
| 8th | +6 | +2 | +2 | +4 | Spell | +3 | +2 |
| 9th | +6 | +3 | +3 | +4 | Blast Energy | +4 | +3 |
| 10th | +7 | +3 | +3 | +5 | Ultima | +4 | +3 |
Class Skills
The Class Skills of the Witchfire Shaman are as follows. Bluff (Cha), Concentration (Con), Decipher Script (Int), Investigate (Int), Knowledge (current events, history, mana, popular culture, streetwise, technology, theology and philosophy) (Int), Listen (Wis), Profession (Wis), Read/Write Language (none), Search (Int), Sense Motive (Wis), Spot (Wis), Speak Language (none).
Skill Points at Each Level: 3 + Int modifier.
Blast Energy (Sp): Beginning at 3rd level, the Witchfire Shaman is able to change the energy composition of her Witchfire Blast, adding an elemental effect to the bolt of mana. At 3rd level, and every other level thereafter, the shaman selects a new energy type she may add to her Witchfire Blast. The shaman can only use one energy blast at a time, and must declare which before the attack roll is made.
Bewitching Blast: Any creature struck by a bewitching blast must succeed on a Will save or be confused for 1 round in addition to the normal damage from the blast. This is a mind influencing effect.
Brimstone Blast: A brimstone blast deals fire damage. Any creature struck by a brimstone blast must succeed on a Reflex save or catch on fire, taking 2d6 points of fire damage per round until it takes a full-round action to extinguish the flames or the duration expires. The fire damage persists for 1 round per three class levels you have.
Hellrime Blast: A hellrime blast deals cold damage. Any creature struck by the attack must make a Fortitude save or take a –4 penalty to Dexterity for 10 minutes. The Dexterity penalties from multiple hellrime blasts do not stack.
Hammering Blast: A hammering blast deals force damage and damages inanimate objects normally. Any Medium or smaller creature struck by a hammering blast must make a Reflex save or be hurled 1d6x5 feet (1d6 squares) directly away from you and be knocked prone by the impact of the attack. If the creature strikes a solid object, it stops prematurely, taking 1d6 points of damage per 10 feet hurled, and it is still knocked prone. Movement from this blast does not provoke attacks of opportunity.
Thundering Blast: A thundering blast deals electricity damage. Any target struck by the great arcing bolt must succeed on a Fortitude saving throw or be stunned for 1 round; undead are immune to this effect.
Meditation (Ex): All shamans learn to calm their minds and resonate with the mana field. This is a skill called meditation, and allows the Witchfire Shaman to use the Concentration Skill in a number of interesting ways:
Memorize (DC 15): You can attempt to memorize a long string of numbers, a long passage of verse, or some other particularly difficult piece of information (but you can’t memorize magical writing or similarly exotic scripts). Each successful check allows you to memorize a single page of text (up to 800 words), numbers, diagrams, or sigils (even if you don’t recognize their meaning). If a document is longer than one page, you can make additional checks for each additional page. You always retain this information; however, you can recall it only with another successful Meditation check.
Resist Dying (DC 15): You can attempt to subconsciously prevent yourself from dying. If you have negative hit points and are losing hit points (at 1 per round or 1 per hour), you can substitute a DC 15 Meditation check for your d% roll to see if you become stable. If the check is successful, you stop losing hit points (you do not gain any hit points, however, as a result of the check). You can substitute this check for the d% roll in later rounds if you are initially unsuccessful.
Resist Fear (Fear Effect DC): In response to any fear effect, you make a saving throw normally. If you fail the saving throw, you can make a Meditation check on your next round even while overcome by fear. If your Meditation check meets or beats the DC for the fear effect, you shrug off the fear. On a failed check, the fear affects you normally, and you gain no further attempts to shrug off that particular fear effect.
Tolerate Poison (Poison’s DC): You can choose to substitute a Meditation check for a saving throw against any standard poison’s secondary damage or effect. This skill has no effect on the initial saving throw against poison.
Willpower (DC 20): If reduced to 0 hit points (disabled), you can make a Meditation check. If successful, you can take a normal action while at 0 hit points without taking 1 point of damage. You must make a check for each strenuous action you want to take. A failed Meditation check in this circumstance carries no direct penalty; you can choose not to take the strenuous action and thus avoid the hit point loss. If you take the action anyway, you drop to -1 hit points, as normal when disabled.
Spell Resistance (Su): Beginning at 3rd level, the shaman develops a resistance to magic equal to 10 + his Shaman class level. This does not stack with other spell resistance; use the higher value.
Spell (Sp): Beginning at 2nd level, and every even level thereafter, the Witchfire Shaman learns to emulate the effects of one spell Lenz from the following list as a spell-like ability. Activating a Witchfire spell is a full-round action and provokes attacks of opportunity. The spell does not require any kind of check and may be used as often as the shaman desires. The level of each Witchfire spell is equal to half your Witchfire Shaman level, rounded down; this means the first spell starts at +1, and so on. You cannot Forge Witchfire spells and cannot add facets; however, feats such as Spell Penetration and Lenz Focus can be applied to the character's Witchfire spells. Each time the Witchfire Shaman learns a new spell, she may replace one she has already selected.
The list of available spells is: Armor, Blessing, Confusion, Cure, Dispel, Flight, Haste, Hex, Mirage, Seal, Shell, Shield, Sleep, Stun, Wound.
Ultima (Sp): The Witchfire Shaman's ultimate attack: you fire a burst of energy that ignites the mana field in a huge radius around the blast point, dealing tremendous damage.
Effect: Activating Ultima is a full-round action and requires 1 Action Point. This modified Witchfire Blast explodes on impact in a 100-foot burst of energy, dealing 20d6 damage to all creatures. If you targeted a specific creature and hit with the Witchfire Blast, that target may not save and suffers full damage. All other creatures in the blast radius are entitled to a Fortitude save for half damage. Ultima ignores energy resistance and damage reduction. If fired at a mana pool, it causes the pool to fountain.
Range: Medium
Duration: Instantaneous
Saving Throw: Fortitude.
Spell Resistance: No
Witchfire Blast (Sp): One of the first expressions of natural magic that the Witchfire Shaman learns is the Witchfire Blast. A witchfire blast is a ray with a range of 60 feet. It is a ranged touch attack that affects a single target, allowing no saving throw. A witchfire blast deals 1d6 points of damage per class level. A witchfire blast is the equivalent of a spell whose level is equal to one-half the shaman's class level (rounded down), with a minimum spell level of +1 and a maximum of +5 when the shaman reaches 10th level. The witchfire blast is subject to spell resistance, although the Spell Penetration feat and other effects that improve caster level checks to overcome spell resistance also apply to the witchfire blast. A witchfire blast deals half damage to objects. Feats such as Weapon Focus (ray) can be applied to the witchfire blast.
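The blast's damage and equivalent spell level scale in lockstep with class level, which is easy to get wrong at the table. An illustrative Python sketch (invented function names), assuming 1d6 per class level, half damage to objects, and a spell level of half class level clamped to +1 through +5:

```python
import random

def witchfire_blast(shaman_level: int, target_is_object: bool = False) -> int:
    """Witchfire Blast damage: 1d6 per class level, halved vs. objects."""
    dmg = sum(random.randint(1, 6) for _ in range(shaman_level))
    return dmg // 2 if target_is_object else dmg

def blast_spell_level(shaman_level: int) -> int:
    """Equivalent spell level: half class level, minimum +1, maximum +5."""
    return max(1, min(5, shaman_level // 2))
```

So a 7th-level shaman's blast deals 7d6 and counts as a +3 spell for purposes such as spell resistance.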
Witchlight (Sp): At level 1, a Witchfire Shaman can emulate all of the effects of the Witchfire Lenz, including all of its facets. The effective Lenz level of this ability is equal to one-half the class level, minimum of +1 and maximum of +5.
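The same "half class level, minimum +1, maximum +5" progression governs the shaman's blast, Witchlight, and learned spells. A minimal sketch of that arithmetic (the function name is mine, not from the rules):

```python
def witchfire_effective_level(class_level: int) -> int:
    """Effective spell/Lenz level of a Witchfire Shaman ability:
    half class level rounded down, minimum +1, maximum +5."""
    return max(1, min(5, class_level // 2))

# A 1st-level shaman's abilities act as +1; the cap of +5 is
# reached at 10th level and does not grow past it.
for lvl in (1, 4, 7, 10, 12):
    print(lvl, witchfire_effective_level(lvl))
```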
Ghestal is an alien world, with few animals that would be recognizable. Because of the unique ecology and environment, many of these animals have unique characteristics that warrant explanation.
**ANIMALS**
**Livestock**
There are three major livestock animals on Ghestal: the Bachura, which closely resemble large clawed rabbits or kangaroos; the Tockta, which are for all intents and purposes a cross between a bison and a boar; and the Basaks, great serpents used as beasts of burden or as mounts.
**Bachura**
Docile herbivores standing around 5 feet at the head, these furry mammals attack with their long claws only in defense, using them chiefly to dig through the soil for the bulbs and roots that make up most of their diet.
**Tockta**
The main source of livestock, the Tockta are huge, slow, grazing animals. These large animals produce a shaggy wool, sturdy leather, and ivory from their tusks.
**Basaks**
Great serpents with thick, carapace-like scales and gaping maws, the Basaks are generally not hostile. Roaming the wastelands and shores in small packs, these creatures feed on Tockta and other animals in the wild. In domestication, Basaks are used as beasts of burden.
| | Bachura | Tockta | Basaks |
|--------------------------|---------------|---------------|----------------|
| **Hit Dice:** | 3d8+6 (19 hp) | 5d8+15 (37 hp)| 11d8+14 (63 hp)|
| **Initiative:** | +1 | +0 | +3 |
| **Speed:** | 40 ft. (8 squares) | 40 ft. (8 squares) | 30 ft. (6 squares), climb 20 ft., burrow 20 ft. |
| **Defense:** | 13 (+1 Dex, +2 natural), touch 11, flat-footed 12 | 13 (-1 size, +4 natural), touch 9, flat-footed 13 | 16 (-2 size, +1 Dex, +7 natural), touch 9, flat-footed 15 |
| **Base Attack:** | +2/+6 | +3/+13 | +8/+23 |
| **Attack:** | Claw +6 melee (1d4+4) | Gore +8 melee (1d8+9) | Bite +13 melee (2d8+12) or tail slap +13 melee (1d12+12) |
| **Full Attack:** | 2 claws +6 melee (1d4+4) and kick +1 melee (1d4+2) | Gore +8 melee (1d8+9) | Bite +13 melee (2d8+12) or tail slap +11 melee (1d12+12) |
| **Space/Reach:** | 5 ft./5 ft. | 10 ft./5 ft. | 15 ft./10 ft. |
| **Special Attacks:** | -- | Stampede | Constrict 1d8+10, Improved grab |
| **Special Qualities:** | Low-light vision, scent | Low-light vision, scent | Hold breath, low-light vision |
| **Saves:** | Fort +5, Ref +4, Will +2 | Fort +7, Ref +4, Will +1 | Fort +9, Ref +10, Will +4 |
| **Abilities:** | Str 19, Dex 13, Con 15, Int 2, Wis 12, Cha 6 | Str 22, Dex 10, Con 16, Int 2, Wis 11, Cha 4 | Str 27, Dex 17, Con 19, Int 1, Wis 12, Cha 2 |
| **Skills:** | Jump +8, Listen +5, Spot +5 | Listen +7, Spot +5 | Balance +11, Climb +17, Hide +10, Listen +9, Spot +9 |
| **Feats:** | Endurance, Run | Alertness, Endurance | Alertness, Endurance, Toughness |
| **Environment:** | Steppes, desert | Plains, grasslands | Mountains, deserts |
| **Organization:** | Solitary or jump (2-8) | Solitary or herd (6-30) | Solitary or clutch (2-5) |
| **Challenge Rating:** | 2 | 2 | 6 |
| **Advancement:** | 4-5 HD | 6-7 HD | 12–16 HD (Huge); 17–33 HD (Gargantuan) |
## Predators
| | Sand Runner | Vatha Cat | Worg |
|--------------------------|-------------|-----------|------|
| **Hit Dice:** | 3d8+6 (19 hp) | 5d8+15 (37 hp) | 4d10+8 (30 hp) |
| **Initiative:** | +1 | +0 | +2 |
| **Speed:** | 40 ft. (8 squares) | 40 ft. (8 squares) | 50 ft. (10 squares) |
| **Defense:** | 13 (+1 Dex, +2 natural), touch 11, flat-footed 12 | 14 (-1 size, +2 Dex, +3 natural), touch 11, flat-footed 12 | 14 (+2 Dex, +2 natural), touch 12, flat-footed 12 |
| **Base Attack:** | +3/+11 | +4/+14 | +4/+7 |
| **Attack:** | Talons +6 melee (2d6+4) | Claw +9 melee (1d8+6) | Bite +7 melee (1d6+4) |
| **Full Attack:** | Talons +6 melee (2d6+4) and 2 foreclaws +1 melee (1d3+2) and bite +1 melee (2d4+2) | 2 claws +9 melee (1d8+6) and bite +4 melee (2d6+3) | Bite +7 melee (1d6+4) |
| **Space/Reach:** | 10 ft./5 ft. | 10 ft./5 ft. | 5 ft./5 ft. |
| **Special Attacks:** | Pounce | Improved grab, pounce, rake 1d8+3 | Trip |
| **Special Qualities:** | Low-light vision, scent | Low-light vision, scent | Darkvision 60 ft., low-light vision, scent |
| **Saves:** | Fort +8, Ref +6, Will +2 | Fort +8, Ref +7, Will +3 | Fort +6, Ref +6, Will +3 |
| **Abilities:** | Str 19, Dex 15, Con 19, Int 2, Wis 12, Cha 10 | Str 23, Dex 15, Con 17, Int 4, Wis 12, Cha 6 | Str 17, Dex 15, Con 15, Int 6, Wis 14, Cha 10 |
| **Skills:** | Hide +8, Jump +26, Listen +10, Spot +10, Survival +10 | Balance +6, Hide +3*, Listen +3, Move Silently +9, Spot +3, Swim +11 | Hide +4, Listen +6, Move Silently +6, Spot +6, Survival +2* |
| **Feats:** | Endurance, Run, Track | Alertness, Improved Natural Attack (bite), Improved Natural Attack (claw) | Alertness, Endurance, Track |
| **Environment:** | Steppes, desert | Plains, grasslands, jungle | Temperate plains and forests |
| **Organization:** | Solitary or pack (2-8) | Solitary or pair | Solitary, pair, or pack (6–11) |
| **Challenge Rating:** | 3 | 4 | 2 |
| **Advancement:** | 5–8 HD (Large) | 7–12 HD (Large); 13–18 HD (Huge) | 5–6 HD (Medium); 7–12 HD (Large) |
### Sand Runner
These swift bipedal cousins of wyverns are among the most common predators of the deserts and steppes. Not as bright as wyverns, but extremely hostile, these flightless lizards run in small packs through the vast badlands and dunes, hunting virtually anything and everything.
**Combat**
A Sand Runner uses a combination of speed, grasping forearms, large teeth, and hind legs with ripping talons. It hunts by running at prey, leaping, and ripping with its rear talons as it claws and bites.
### Vatha Cat
These huge creatures resemble nothing so much as a cross between a tiger and a bear. Common in the jungles and deep canyons, these great cats stand more than 4 feet tall at the shoulder and are about 9 feet long. They weigh from 400 to 600 pounds.
**Combat**
*Improved Grab (Ex):* To use this ability, a Vatha cat must hit with a claw or bite attack. It can then attempt to start a grapple as a free action without provoking an attack of opportunity. If it wins the grapple check, it establishes a hold and can rake.
*Pounce (Ex):* If a Vatha cat charges a foe, it can make a full attack, including two rake attacks.
*Rake (Ex)*: Attack bonus +9 melee, damage 1d8+3.
### Worg
Large, highly intelligent canine-like predators, worgs usually live and hunt in packs. Their favored prey is large herbivores. Although they typically stalk and kill young, sick, or weak animals, they don’t hesitate to hunt humanoids, particularly when game is scarce. Worgs may stalk humanoid prey for hours or even days before attacking, and choose the most advantageous terrain and time of day to do so (during the predawn hours, for example). A typical worg has gray or black fur, grows to 5 feet long, and stands 3 feet tall at the shoulder. It weighs 300 pounds.
**Combat**
Mated pairs or packs work together to bring down large game, while lone worgs usually chase down creatures smaller than themselves. Both often use hit-and-run tactics to exhaust their quarry. A pack usually circles a larger opponent: Each worg attacks in turn, biting and retreating, until the creature is exhausted, at which point the pack moves in for the kill. If they get impatient or heavily outnumber the prey, worgs attempt to pin it.
*Trip (Ex)*: A worg that hits with a bite attack can attempt to trip the opponent (+3 check modifier) as a free action without making a touch attack or provoking an attack of opportunity. If the attempt fails, the opponent cannot react to trip the worg.
**Skills**: A worg has a +4 racial bonus on Survival checks when tracking by scent.
HORRORS
Mindless, terrifying spawn of the Miasma, Horrors are shambling things that crave only to feast upon the living. They are not pleasant to look upon. Robbed of the peace of their graves, half decayed and partially consumed by worms, they wear the tattered remains of whatever they died in. A rank odor of death hangs heavy in the air around them.
Creating Horrors
Horror is an acquired template that can be added to any corporeal vertebrate animal or humanoid (referred to hereafter as the base creature).
Size and Type: The creature’s type changes to undead. It retains any subtypes except alignment subtypes (such as good) and subtypes that indicate kind. It does not gain the augmented subtype. It uses all the base creature’s statistics and special abilities except as noted here. All feats, skills, and knowledge are lost.
Hit Dice: Drop any Hit Dice from class levels (to a minimum of 1), double the number of Hit Dice left, and raise them to d12s. If the base creature has more than 10 Hit Dice (not counting those gained with experience), it can’t be made into a Horror with the Conjure Horror spell, but can still be animated by potent Miasma.
Speed: Reduce Speed by half. If the base creature can fly, its maneuverability rating drops to clumsy.
Defense: Natural armor bonus increases by a number based on the Horror’s size (see table below).
Base Attack: A Horror has a base attack bonus equal to 1/2 its Hit Dice.
Attacks: A Horror retains all the natural weapons, manufactured weapon attacks, and weapon proficiencies of the base creature. A Horror also gains a slam attack.
Damage: Natural and manufactured weapons deal damage normally. A slam attack deals damage depending on the Horror’s size. (Use the base creature’s slam damage if it’s better. See table below)
Special Attacks: A Horror retains none of the base creature’s special attacks, but does gain the following special attacks:
Energy Drain (Su): Living creatures hit by a Horror’s slam attack gain one negative level. The DC is 14 for the Fortitude save to remove a negative level. For each such negative level bestowed, the Horror gains 5 temporary hit points.
Special Qualities: A Horror loses any supernatural qualities. A Horror gains the following special qualities:
Damage Reduction (Su): Horrors gain DR based on the base creatures size (see table below).
Devour Lenz (Su): Like monsters, Horrors can devour Lenz to junction them. This corrupts the Lenz. Horrors can devour a number of Lenz based on their size (see table below). Each devoured Lenz increases the Horror’s Intelligence score by 2 and its Charisma by 1.
Single Actions Only (Ex): Horrors have poor reflexes and can perform only a single move action or attack action each round. A Horror can move up to its speed and attack in the same round, but only if it attempts a charge. Horrors with 2 or more Lenz lose this flaw.
| Size | Defense Bonus | Slam Damage | DR | Lenz |
|---------------|---------------|-------------|------|------|
| Tiny or smaller | +0 | 1d3 | – | 0 |
| Small | +1 | 1d4 | – | 1 |
| Medium | +2 | 1d6 | 5/– | 2 |
| Large | +3 | 1d8 | 10/– | 3 |
| Huge | +4 | 2d6 | 10/– | 5 |
| Gargantuan | +7 | 2d8 | 15/– | 7 |
| Colossal | +11 | 4d6 | 20/– | 9 |
Saves: Base save bonuses are Fort +1/3 HD, Ref +1/3 HD, and Will +1/2 HD + 2.
Abilities: A Horror’s Strength increases by +2, its Dexterity decreases by 2, it has no Constitution or Intelligence score, its Wisdom changes to 10, and its Charisma changes to 1.
Skills: A Horror has no skills.
Feats: A Horror loses all feats of the base creature and gains Toughness.
Environment: Any land and underground.
Challenge Rating: As original creature +3.
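The numeric steps of the Horror template can be summarized in a short sketch. This is an illustrative helper (the function and field names are my own, not part of the rules): racial Hit Dice are doubled and become d12s, speed is halved, base attack is half Hit Dice, and the Defense, slam, DR, and Lenz values come from the size table above.

```python
SIZE_TABLE = {
    # size: (defense bonus, slam damage, damage reduction, lenz capacity)
    "Tiny":       (0,  "1d3", None,   0),
    "Small":      (1,  "1d4", None,   1),
    "Medium":     (2,  "1d6", "5/-",  2),
    "Large":      (3,  "1d8", "10/-", 3),
    "Huge":       (4,  "2d6", "10/-", 5),
    "Gargantuan": (7,  "2d8", "15/-", 7),
    "Colossal":   (11, "4d6", "20/-", 9),
}

def make_horror(racial_hd: int, speed: int, size: str) -> dict:
    """Apply the Horror template's numeric changes. Class Hit Dice
    are assumed to have been dropped (to a minimum of 1) already."""
    hd = max(1, racial_hd) * 2          # double remaining HD, now d12s
    defense, slam, dr, lenz = SIZE_TABLE[size]
    return {
        "hit_dice": f"{hd}d12",
        "speed": speed // 2,            # flyers also drop to clumsy
        "base_attack": hd // 2,         # 1/2 Hit Dice
        "saves": {"fort": hd // 3, "ref": hd // 3, "will": hd // 2 + 2},
        "natural_armor_bonus": defense,
        "slam": slam, "dr": dr, "lenz_capacity": lenz,
    }

# A Medium 2-HD humanoid becomes a 4d12 Horror with base attack +2.
print(make_horror(2, 30, "Medium"))
```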
MONSTERS
When animals are exposed to mana, most die. Occasionally, as with people, some mutate, becoming something both more and less than they were. Monsters are ecologically unique: once an animal becomes a monster, it tends to hunt humans exclusively, ignoring its original diet and behavior, instead moving toward human settlements and preying upon them to the exclusion of all else. Even when faced with starvation, many monsters will not eat anything but human flesh.
Berserkers
The most common and most dangerous kind of monster, berserkers are mindlessly aggressive and filled with bloodlust. Berserker Monster is an inherited template that can be added to any corporeal animal (referred to hereafter as the base creature). A Berserker Monster uses all the base creature’s statistics and abilities except as noted here. Do not recalculate the creature’s Hit Dice, base attack bonus, saves, or skill points if its type changes.
Size and Type: Animals or vermin with this template become magical beasts, but otherwise the creature type is unchanged. Size is unchanged.
Special Attacks: A Berserker Monster retains all the special attacks of the base creature and also gains the following attack.
Witchfire (Su): Once per day a monster can ignite with an aura of burning mana known as witchfire. For one round, all of its attacks deal an extra 2 points of mana damage per HD.
Special Qualities: A Berserker Monster retains all the special qualities of the base creature and also gains the following qualities.
Darkvision (Ex): Monsters gain Darkvision out to 60 ft.
Damage Reduction (Su): A Monster gains Damage reduction (see the table below).
Devour Lenz (Su): A berserker monster can devour Lenz to junction the material. The monster can devour as many Lenz as it has junction slots (1 + Con modifier).
Frenzy (Ex): Three times per day, a monster can work itself into a frenzy of bloodlust and rage. During this frenzy the monster gains +6 Strength, +4 Constitution, and a +2 morale bonus on Will saves, but takes a -2 penalty to Defense. The frenzy lasts for 6 rounds, and the monster suffers no ill effects afterward.
Energy Resistance (Ex): Monsters develop Resistance to acid, cold, and electricity (see the table below).
Spell Resistance (Ex): Berserk Monsters develop Spell resistance equal to HD + 5 (maximum 25).
| Hit Dice | Energy Resistance | Damage Reduction |
|----------|-------------------|------------------|
| 1–2 | 5 | – |
| 3–5 | 5 | 5/– |
| 6–8 | 10 | 10/– |
| 9–12 | 10 | 10/– |
| 13 or more | 15 | 15/– |
If the base creature already has one or more of these special qualities, use the better value.
Natural Weapon Bonus (Ex): A monster gains Weapon Focus with its primary natural attack, and its natural weapons are treated as magic weapons for the purpose of overcoming damage reduction.
Abilities: Same as the base creature, but Intelligence is at least 3.
Environment: Any.
Challenge Rating: As base creature +4.
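The Berserker Monster's defensive numbers are simple lookups on the base creature's Hit Dice. A sketch of that arithmetic (helper names are mine, values from the table and text above):

```python
def berserker_defenses(hd: int):
    """Return (energy resistance, damage reduction) for a Berserker
    Monster with the given Hit Dice, per the HD table above."""
    if hd <= 2:
        return 5, None
    if hd <= 5:
        return 5, "5/-"
    if hd <= 12:
        return 10, "10/-"
    return 15, "15/-"

def berserker_sr(hd: int) -> int:
    """Spell resistance is HD + 5, to a maximum of 25."""
    return min(25, hd + 5)
```

If the base creature already has better values, those are kept instead, as the note under the table says.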
MUTANTS
Generally, the difference between a monster and a mutant is the base creature: mutants are humans who, after exposure to mana, become savage, feral beasts intent only on death. Or so the APOC says.
Beastly Mutant
The Beastly Mutant is an imbued template that can be applied to any natural humanoid, referred to hereafter as the base creature, and uses all the base creature’s statistics and abilities except as noted here.
Size and Type: The creature’s type changes to Monstrous Humanoid. Do not recalculate the creature’s Hit Dice, base attack bonus, saves, or skill points. The creature gains the augmented subtype if necessary. Size is unchanged.
Defense: Natural armor bonus to Defense improves by 6 (this stacks with any defense or natural armor bonus the base creature has).
Special Qualities: A Beastly Mutant creature has all the special qualities of the base creature, plus the following special qualities.
Damage Reduction (Su): A Beastly Mutant creature gains damage reduction equal to 5 plus its Hit Dice (DR [5 + HD]/-). This stacks with any existing damage reduction of the base creature.
Devour Lenz (Su): A Beastly monster can devour Lenz to junction the material. The mutant can devour as many Lenz as it has junction slots (1+ Con Modifier).
Natural Weapons (Ex): All Beastly Mutants develop a number of hideous physical mutations, including claws, pronounced teeth, horns, and thrashing tails, among others. Generally, this manifests as three natural attacks dealing 1d4/1d6/1d6 damage.
Spell Resistance (Su): A Beastly Mutant creature gains spell resistance equal to 11 + its Hit Dice. If the creature already has spell resistance, use the greater of the two values.
Bloodlust Frenzy (Ex): Three times per day, a Beastly Mutant can work itself into a frenzy of bloodlust and rage. During this frenzy the mutant gains +4 Strength, +4 Constitution, and a +2 morale bonus on Will saves, but takes a -2 penalty to Defense. The frenzy lasts for 6 rounds, and the mutant suffers no ill effects afterward.
Abilities: Increase from the base creature as follows: Str +6, Con +4, Int -6 (minimum 3).
Challenge Rating: HD 3 or less, as base creature +2; HD 4 to 10, as base creature +3; HD 11 or more, as base creature +5.
Level Adjustment: +3.
Crimson Noble
The Crimson Nobles are a unique caste of human mutants who have managed to maintain a stabilized mutant bloodline. Among the goals of the Crimson Nobility were longevity and supernatural powers. It is rumored that following the Great War, the patriarchs of the Crimson Nobles attained Horror-like powers and found a way to cheat even death. The Crimson Noble template is an inherited template that can be applied to any humanoid.
Size and Type: The creature’s type changes to Monstrous Humanoid, and the creature gains the augmented subtype. Do not recalculate the creature’s Hit Dice, base attack bonus, saves, or skill points. Size is unchanged.
Defense: A Crimson Noble gains a +1 dodge bonus to Defense. Any effect that would cause the creature to lose its Dexterity bonus to Defense also negates this bonus.
Special Attacks: Crimson Nobles possess all the attacks of the base creature, plus the following:
Bite (Ex): Crimson Nobles possess pronounced fangs and can make a bite attack for 1d6 damage. If the base creature gets multiple attacks in a round, it can bite multiple times. The bite is treated as a natural weapon and does not provoke attacks of opportunity.
Gaze Attack (Su): As a move action, the creature can make a special gaze attack against one creature within 30 feet. The target must succeed on a Will save (DC 10 + one-half Hit Dice + Charisma modifier) or be shaken for 2d6 rounds. This gaze attack is a mind-affecting compulsion, and any creature that successfully saves against it cannot be affected again for 24 hours.
**Special Qualities:** Crimson Nobles possess all the qualities of the base creature, plus the following.
*Darkvision (Ex):* The Crimson Noble gains darkvision out to 60 feet, as well as low-light vision.
*Fortitude (Ex):* The Crimson Noble gains a +2 bonus on Fortitude saving throws to resist poisons, diseases, and radiation sickness. Furthermore, any permanent life energy drain inflicted upon the creature is treated as temporary energy damage instead. Finally, the creature receives a +2 bonus on saves against massive damage.
*Bloodlust (Ex):* A Crimson Noble must drain a pint of blood from a living creature once every 24 hours. Doing so is an attack action, and the noble can only drain blood from a willing, helpless, or dying (but not dead) creature. The bitten creature takes normal damage from the bite attack plus an extra 1d6 points of damage from the blood loss. So long as the Crimson Noble is not starving (see below), it instantly heals the same amount of damage that its victim has taken from blood loss. If the noble goes 24 hours without consuming blood, it becomes starving and takes 1d4 points of Constitution damage. Drinking a pint of blood cures the ability damage caused by blood deprivation in 1d6 rounds, but does not restore hit points. Ability damage caused by blood deprivation cannot otherwise be restored through natural or magical healing.
*Light Sensitivity (Ex):* Ultraviolet light (including direct sunlight) burns the Crimson Noble for 2d6 points of fire damage per round and causes any light, flammable clothing it is wearing to ignite (see the d20 Modern Roleplaying Game, page 213, for rules about catching on fire). Further, abrupt exposure to bright light (such as sunlight) blinds the creature for 1 round. On subsequent rounds, it takes a -1 penalty on attack rolls, Search checks, and Spot checks as long as it remains in the affected area.
**Abilities:** As base creature.
**Challenge Rating:** As base creature +1
**Level Adjustment:** -
Wyverns and Wyrms
Of all life on Ghestal, the wyverns have the most unique life cycle. In the wild, old wyverns who have mated and lived out their natural lives will often spend their last days foraging for naturally occurring Lenz. If they live long enough to find one, the Wyvern will eat the Lenz and as much mana as they can. This triggers a remarkable metamorphosis, and the beast will enter into an extended hibernation resembling Mana Poisoning, emerging as a wyrm.
Wyvern
*Large Dragon*
**Hit Dice:** 7d12+14 (59 hp)
**Initiative:** +3
**Speed:** 20 ft. (4 squares), fly 60 ft. (poor)
**Defense:** 20 (−1 size, +3 Dex, +8 natural), touch 10, flat-footed 17
**Base Attack/Grapple:** +7/+15
**Attack:** Talon +10 melee (2d6+4), bite +10 melee (2d8+4), or tail slap +10 melee (1d8+4)
**Full Attack:** Bite +10 melee (2d8+4) and 2 wings +8 melee (1d8+2), or 2 talons +8 melee (2d6+4) and tail slap +8 melee (1d8+4)
**Space/Reach:** 10 ft./5 ft.
**Special Attacks:** Frightful presence, improved grab
**Special Qualities:** Darkvision 60 ft., immunity to sleep and paralysis, low-light vision, scent.
**Saves:** Fort +7, Ref +6, Will +6
**Abilities:** Str 19, Dex 16, Con 16, Int 6, Wis 12, Cha 16
**Skills:** Hide +7, Listen +13, Move Silently +11, Spot +16
**Feats:** Frightful Presence, Alertness, Flyby Attack, Multiattack
**Environment:** Warm cliffs, jungle, mountains
**Organization:** Solitary, pair, or flight (3–6)
**Challenge Rating:** 6
**Advancement:** 8–10 HD (Huge); 11–21 HD (Gargantuan)
The apex predator of the deserts and badlands, wyverns are large flying reptiles renowned for their ferocity and cunning. The legends of wyverns and dragons go back to the beginning of civilization; the earliest cultures held them as sacred. These animals are cunning and very capable hunters. Unfortunately, this intelligence makes the wyverns nearly impossible to train and tame. While they are capable of learning much, they are intelligent enough to recognize captivity and turn on their keepers.
These two-legged beasts have great leathery wings as front legs and long whip-like tails. An average wyvern’s body is 15 feet long and dark brown to gray; half that length is tail. Its wingspan is about 30 feet. A bull male wyvern may weigh as much as one ton.
**Combat**
Wyverns are cunning, careful predators, but extremely territorial: They attack nearly anything that violates their territory, ideally eating it. A wyvern dives from the air, snatching the opponent with its talons and biting it to death. A wyvern can slash with its talons only when making a flyby attack.
**Frightful Presence (Ex):** Up close, wyverns are terrifying: when one attacks, all opponents within reach who have seven Hit Dice or fewer must make a Will save (DC 18) or become shaken, suffering a -2 penalty on attack rolls, saving throws, and skill checks for 1d6+2 rounds.
**Improved Grab (Ex):** To use this ability, a wyvern must hit with its talons. It can then attempt to start a grapple as a free action without provoking an attack of opportunity. If it wins the grapple check, it establishes a hold and bites.
**Skills:** Wyverns have a +3 racial bonus on Spot checks.
---
**Wyrm**
*Huge Dragon*
**Hit Dice:** 12d12+36 (114 hp)
**Initiative:** +3
**Speed:** 20 ft. (4 squares), fly 60 ft. (poor)
**Defense:** 20 (−1 size, +3 Dex, +8 natural), touch 10, flat-footed 17
**Base Attack/Grapple:** +7/+15
**Attack:** Talon +10 melee (2d6+4), bite +10 melee (2d8+4), or tail slap +10 melee (1d8+4)
**Full Attack:** Bite +10 melee (2d8+4) and 2 wings +8 melee (1d8+2), or 2 talons +8 melee (2d6+4) and tail slap +8 melee (1d8+4)
**Space/Reach:** 10 ft./5 ft.
**Special Attacks:** Breath weapon, Frightful presence, improved grab
**Special Qualities:** Darkvision 60 ft., immunity to sleep and paralysis, low-light vision, scent.
**Saves:** Fort +7, Ref +6, Will +6
**Abilities:** Str 19, Dex 16, Con 16, Int 6, Wis 12, Cha 16
**Skills:** Hide +7, Listen +13, Move Silently +11, Spot +16
**Feats:** Frightful Presence, Alertness, Flyby Attack, Multiattack
**Environment:** Warm cliffs, jungle, mountains
**Organization:** Solitary, pair, or flight (3–6)
**Challenge Rating:** 6
**Advancement:** See Below
One in every ten thousand wyverns will successfully find an energy Lenz in its twilight years, gobbling up the stone and then gorging on mana. Inevitably the creature becomes mana poisoned and finds a safe place to hibernate through the 2d4-week coma, usually its den or a new cave far from its original territory. Following the metamorphosis, the wyvern emerges as an alpha monster: a true dragon.
These creatures gain remarkable powers, and it is rumored that they become increasingly intelligent with age. Some claim the oldest can speak, but this is just a legend. Yet with each passing year, dragons become more powerful. It is unknown if they ever die.
**Combat**
**Breath Weapon (Su):** A wyrm can breathe a 60-foot-long, 5-foot-wide line of witchfire every 1d4 rounds as an attack action. Any creature in the line of fire takes mana damage. The damage depends on the wyrm’s age.
**Darkvision (Ex):** Wyrms gain darkvision out to 200 ft.
**Damage Reduction (Su):** A wyrm gains damage reduction (see the advancement table below).
**Devour Lenz (Su):** Dragons can devour Lenz to junction the material. A junctioned Lenz is required to initiate the transformation into a dragon. The wyrm can devour as many Lenz as it has junction slots (1 + Con modifier).
**Energy Resistance (Ex):** Wyrms develop Resistance to fire, cold, and electricity (see the advancement table below).
**Spell Resistance (Ex):** Like all monsters, wyrms develop spell resistance equal to HD + 5.
**Improved Grab (Ex):** To use this ability, a wyrm must hit with both talon attacks. If it gets a hold, it hangs on and bites. If a wyrm grabs a creature two or more size categories smaller than itself, it automatically deals damage with both talons and its bite each round the hold is maintained. See Improved Grab.
**Fling (Ex):** A wyrm can drop a creature it has grabbed or use an attack action to fling it aside. A flung creature travels 30 feet and takes 3d6 points of damage. If the wyrm flings it while flying, the creature takes this amount or falling damage, whichever is greater.
**Frightful Presence (Ex):** Wyrms are even more terrifying than wyverns. The ability takes effect automatically whenever the dragon attacks or flies overhead. Creatures within the radius are subject to the effect if they have fewer Hit Dice than the dragon. A potentially affected creature that succeeds at a Will save (DC noted below) remains immune to that dragon's fear aura for one day. On a failure, creatures with 4 or fewer HD become panicked for 4d6 rounds and those with 5 or more HD become shaken for 4d6 rounds. Dragons ignore the fear aura of other dragons.
**Scent (Ex):** This ability allows a wyrm to detect approaching enemies, sniff out hidden foes, and track by sense of smell. See Special Qualities for more information.
**Immunities (Ex):** Wyrms are immune to sleep, hold, and paralysis effects.
**Skill Bonus:** Wyrms receive a +3 species bonus on Spot checks during daylight hours.
| Wyrm Age | Size | Base Hit Dice | Breath Weapon | Fear Aura | Talons/Bite | Tail Sweep | Energy Resist | DR | SR |
|----------|------|---------------|---------------|-----------|-------------|------------|---------------|----|----|
| 0–25 | Huge | 12d12 | 8d10 (DC 23) | 50 ft. (DC 23) | 2d6/2d8 | 1d8 | 5 | 5/– | 15 |
| 26–75 | Huge | 16d12 | 10d10 (DC 26) | 75 ft. (DC 26) | 2d6/2d8 | 1d8 | 5 | 10/– | 19 |
| 76–150 | Huge | 20d12 | 12d10 (DC 27) | 100 ft. (DC 29) | 2d8/2d10 | 1d10 | 10 | 10/– | 23 |
| 151–250 | Huge | 24d12 | 14d10 (DC 30) | 150 ft. (DC 32) | 2d8/2d10 | 1d10 | 10 | 15/– | 27 |
| 251–300 | Gargantuan | 28d12 | 16d10 (DC 31) | 200 ft. (DC 35) | 3d6/3d8 | 2d6 | 10 | 15/– | 29 |
| 301–500 | Gargantuan | 32d12 | 18d10 (DC 34) | 250 ft. (DC 38) | 3d6/3d8 | 2d6 | 15 | 20/– | 31 |
| 501–900 | Gargantuan | 36d12 | 20d10 (DC 36) | 300 ft. (DC 41) | 3d8/3d10 | 2d6 | 15 | 20/– | 33 |
| 900+ | Colossal | 40d12 | 22d10 (DC 39) | 350 ft. (DC 44) | 3d8/3d10 | 2d8 | 20 | 25/– | 35 |
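A wyrm's key numbers are a single lookup on its age. A minimal sketch of that lookup (helper and band names are mine; the values come from the advancement table above, with a subset of columns shown):

```python
AGE_BANDS = [
    # (max age, base Hit Dice, breath dice, breath DC, spell resistance)
    (25,   12, "8d10",  23, 15),
    (75,   16, "10d10", 26, 19),
    (150,  20, "12d10", 27, 23),
    (250,  24, "14d10", 30, 27),
    (300,  28, "16d10", 31, 29),
    (500,  32, "18d10", 34, 31),
    (900,  36, "20d10", 36, 33),
]

def wyrm_stats(age: int):
    """Return (base HD, breath weapon dice, breath DC, SR) for a
    wyrm of the given age in years, per the advancement table."""
    for max_age, hd, breath, dc, sr in AGE_BANDS:
        if age <= max_age:
            return hd, breath, dc, sr
    return 40, "22d10", 39, 35      # 900+ years: Colossal cap

# A 100-year-old wyrm breathes 12d10 witchfire (DC 27).
print(wyrm_stats(100))
```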
Thank you for reading!
If you enjoy this setting and would like to see more like it,
please consider checking out more work at Ogretree.com and supporting
great content like this.
Ashram Kain
http://www.ogretree.com
Legal Information
Copyright 2016-2017 – Ashram Kain. This is a FREE PRODUCT distributed for entertainment and enjoyment, not for profit.
**Product Identity:** The items described below are identified as Product Identity, as defined in the Open Game License 1.0a, Section 1(e), and are not Open Content: All trademarks, registered trademarks, proper names (characters, organizations, etc.), dialogue, storylines, locations, characters, artworks, and designs. (Elements that have previously been designated as Open Game Content are not included in this declaration.)
**Open Content:** Except for material designated as Product Identity (see above), all of the mechanical elements of this product are Open Game Content, as defined in the Open Game License version 1.0a Section 1(d). No portion of this work other than the material designated as Open Game Content may be reproduced in any form without written permission.
**OPEN GAME LICENSE VERSION 1.0A**
The following text is the property of Wizards of the Coast, Inc. and is Copyright 2000 Wizards of the Coast, Inc ("Wizards"). All Rights Reserved.
1. Definitions: (a)"Contributors" means the copyright and/or trademark owners who have contributed Open Game Content; (b)"Derivative Material" means copyrighted material including derivative works and translations (including into other computer languages), potation, modification, correction, addition, extension, upgrade, improvement, compilation, abridgment or other form in which an existing work may be recast, transformed or adapted; (c) "Distribute" means to reproduce, license, rent, lease, sell, broadcast, publicly display, transmit or otherwise distribute; (d)"Open Game Content" means the game mechanic and includes the methods, procedures, processes and routines to the extent such content does not embody the Product Identity and is an enhancement over the prior art and any additional content clearly identified as Open Game Content by the Contributor, and means any work covered by this License, including translations and derivative works under copyright law, but specifically excludes Product Identity. 
(e) "Product Identity" means product and product line names, logos and identifying marks including trade dress; artifacts; creatures characters; stories, storylines, plots, thematic elements, dialogue, incidents, language, artwork, symbols, designs, depictions, likenesses, formats, poses, concepts, themes and graphic, photographic and other visual or audio representations; names and descriptions of characters, spells, enchantments, personalities, teams, personas, likenesses and special abilities; places, locations, environments, creatures, equipment, magical or supernatural abilities or effects, logos, symbols, or graphic designs; and any other trademark or registered trademark clearly identified as Product identity by the owner of the Product Identity, and which specifically excludes the Open Game Content; (f) "Trademark" means the logos, names, mark, sign, motto, designs that are used by a Contributor to identify itself or its products or the associated products contributed to the Open Game License by the Contributor (g) "Use", "Used" or "Using" means to use, Distribute, copy, edit, format, modify, translate and otherwise create Derivative Material of Open Game Content. (h) "You" or "Your" means the licensee in terms of this agreement.
2. The License: This License applies to any Open Game Content that contains a notice indicating that the Open Game Content may only be Used under and in terms of this License. You must affix such a notice to any Open Game Content that you Use. No terms may be added to or subtracted from this License except as described by the License itself. No other terms or conditions may be applied to any Open Game Content distributed using this License.
3. Offer and Acceptance: By Using the Open Game Content You indicate Your acceptance of the terms of this License.
4. Grant and Consideration: In consideration for agreeing to use this License, the Contributors grant You a perpetual, worldwide, royalty-free, non-exclusive license with the exact terms of this License to Use, the Open Game Content.
5. Representation of Authority to Contribute: If You are contributing original material as Open Game Content, You represent that Your Contributions are Your original creation and/or You have sufficient rights to grant the rights conveyed by this License.
6. Notice of License Copyright: You must update the COPYRIGHT NOTICE portion of this License to include the exact text of the COPYRIGHT NOTICE of any Open Game Content You are copying, modifying or distributing, and You must add the title, the copyright date, and the copyright holder's name to the COPYRIGHT NOTICE of any original Open Game Content you Distribute.
7. Use of Product Identity: You agree not to Use any Product Identity, including as an indication as to compatibility, except as expressly licensed in another, independent Agreement with the owner of each element of that Product Identity. You agree not to indicate compatibility or co-adaptability
with any Trademark or Registered Trademark in conjunction with a work containing Open Game Content except as expressly licensed in another, independent Agreement with the owner of such Trademark or Registered Trademark. The use of any Product Identity in Open Game Content does not constitute a challenge to the ownership of that Product Identity. The owner of any Product Identity used in Open Game Content shall retain all rights, title and interest in and to that Product Identity.
8. Identification: If you distribute Open Game Content You must clearly indicate which portions of the work that you are distributing are Open Game Content.
9. Updating the License: Wizards or its designated Agents may publish updated versions of this License. You may use any authorized version of this License to copy, modify and distribute any Open Game Content originally distributed under any version of this License.
10. Copy of this License: You MUST include a copy of this License with every copy of the Open Game Content You Distribute.
11. Use of Contributor Credits: You may not market or advertise the Open Game Content using the name of any Contributor unless You have written permission from the Contributor to do so.
12. Inability to Comply: If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Open Game Content due to statute, judicial order, or governmental regulation then You may not Use any Open Game Material so affected.
13. Termination: This License will terminate automatically if You fail to comply with all terms herein and fail to cure such breach within 30 days of becoming aware of the breach. All sublicenses shall survive the termination of this License.
14. Reformation: If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable.
15. COPYRIGHT NOTICE
Open Game License v 1.0a Copyright 2000, Wizards of the Coast, Inc.
D20 Modern Copyright 2002-2004, Wizards of the Coast, Inc.; Authors Bill Slavicsek, Jeff Grubb, Rich Redman, Charles Ryan, Eric Cagle, David Noonan, Stan!, Christopher Perkins, Rodney Thompson, and JD Wiker, based on material by Jonathan Tweet, Monte Cook, Skip Williams, Richard Baker, Peter Adkison, Bruce R. Cordell, John Tynes, Andy Collins, and JD Wiker.
Portions of the work are interpretations of rules presented in Dungeons & Dragons, D20 Future, and D20 Urban Arcana.
END OF LICENSE
Enter into a world of danger, adventure, magic and science.
Deviant Evolution is a setting book created for the D20 Modern Role Playing Game.
Adaptive form-finding method for form-fixed spatial network structures
Cheng Lan\textsuperscript{1} · Xi Tu\textsuperscript{2} · Junqing Xue\textsuperscript{3}\textsuperscript{✉} · Bruno Briseghella\textsuperscript{3} · Tobia Zordan\textsuperscript{1}
Received: 8 August 2017 / Accepted: 2 January 2018
© The Author(s) 2018. This article is an open access publication
Abstract
An effective form-finding method for form-fixed spatial network structures is presented in this paper. The adaptive form-finding method is introduced through the example of designing an ellipsoidal network dome with bar length variations as small as possible. A typical spherical geodesic network, having bar lengths in a limited number of groups, is selected as the initial state. Next, this network is transformed into the desired ellipsoidal shape by applying compressions on bars according to the bar length variations caused by the transformation. Afterwards, the dynamic relaxation method is employed to explicitly integrate the node positions under the residual forces. During the form-finding process, the boundary condition constraining the nodes to the ellipsoid surface is innovatively treated as reactions along the normal direction of the surface at the node positions, which balance the normal components of the nodal forces induced by the compressions on bars. The node positions are also corrected according to the fixed-form condition in each explicit iteration step. The optimal solution is then found from the time history of states by properly choosing convergence criteria, and the presented form-finding procedure is shown to be applicable to form-fixed problems.
Keywords Form-finding · Spatial network structure · Form-fixed · Structural optimization · Dynamic relaxation · Explicit integration
Introduction
Since the 1960s, the form-finding process, also called the shape-finding process, has been used to find a structural state of desired geometry, an equilibrium state, or both, under specified boundary conditions. Currently, the process is progressively adopted for structural-engineering applications, such as buildings and bridges (Allahdadian and Boroomand 2010; Huang and Xie 2008; Neves et al. 1995; Stromberg et al. 2010; Zordan et al. 2010; Briseghella et al. 2010, 2013a, b; Zordan et al. 2014), and it is widely applied in the architectural design of structures that transfer their loads almost purely through their shapes (Adriaenssens et al. 2014; Basso and Del Grosso 2011; Fenu et al. 2015; Briseghella et al. 2016). Such structures mainly include unstrained grid-shells (components in compression), cable nets or membranes (components in tension) and tensegrity structures (components both in compression and tension) (Barnes 1977; Topping and Ivanyi 2007; Tran and Lee 2010; Koohestani 2013). These structures are considered to be designed not only in a highly efficient manner from the structural point of view but also in an aesthetically pleasing one (Lucerga and Armisén 2012). Several form-finding methods have been developed through decades of practice (Nouri-Baranger 2004). The existing well-known methods can be categorized into the following main families:
– Stiffness matrix methods are based on using the standard elastic and geometric stiffness matrices that were adapted from structural analysis. These methods account for the material properties in computation, which may lead to difficulty in operations of matrices and control of (stable) convergence.
– Geometric stiffness methods are material-independent, with only a geometric stiffness. However, these methods are applied in their linear form and produce results that are not constructionally practicable (Barnes 1977); thus, they can serve only as a preliminary result.
– Force density methods. Because the ratio of force to length is the central quantity in their mathematics, force density methods can be considered subtypes of the geometric stiffness methods. As with geometric stiffness methods, additional iterations are necessary for uniform or geodesic networks or shape-dependent loading, making the method non-linear (Barnes 1977; Haber and Abel 1982; Tan 1989; Lewis 2008; Koohestani 2014). Recently, the so-called thrust network analysis derived from the force density method has been used to find the shape of a discrete membrane restrained to a given geometric limitation (Block 2009; Marmo and Rosati 2017).
– Dynamic equilibrium or relaxation methods solve a problem of dynamic equilibrium to arrive at a steady-state solution that is equivalent to the solution of static equilibrium. As adapted from explicit time-series integration, time step parameters are required to control stability and convergence (Barnes 1977; Lewis 1989; Baraff and Witkin 1998). The main advantage of the dynamic relaxation method is that no assembled structural stiffness matrix is required; hence, it is suitable for highly nonlinear problems (Topping and Ivanyi 2007). The iterations in dynamic relaxation methods simulate the physical evolutionary process of structures with feasible geometric configurations. Furthermore, with the development of computer techniques, the time and resource consumption of the iterations and result storage has been significantly reduced. Such dynamic relaxation methods are becoming more popular (Olsson 2012; Bagrianski and Halpern 2014) and will be the basis for the method developed in this paper.
Similar categorizations can be found in other works with different names (Topping and Ivanyi 2007; Basso and Del Grosso 2011; Veenendaal and Block 2012). In applying these methods, the general shapes or forms of structures are unknown in advance, the so-called free-form structures (Liu and Shimoda 2014). By fixing boundary conditions, where the forces end (transferred to constraints/supports), the final optimal forms will be found through continuous force paths (Fig. 1). In the phase of considering boundary conditions in the form-finding method or finite element method, typically, the degrees of freedom (DOFs) in a Cartesian coordinate system will be separated into the following two groups: interior ones (free to move) and exterior ones (fixed). Correspondingly, the mass matrix, stiffness matrix, displacement vectors, force vectors and so on, will be divided into sub-matrices and sub-vectors to solve the static system equations.
In the case that the general network form is known or fixed, for example, finding a network in the shape of the desired ellipsoidal dome, as shown in Fig. 2, the “form” to find is the position of the joints/nodes in the network, while the “form” surface of network is already fixed. If the typical form-finding method is applied, then the nodal supports should be movable in the surface with changing supports directions that are normal to the surface, that is, the boundary conditions change at every iteration step. This situation could make it difficult to correctly describe the boundary conditions in the Cartesian coordinate system (with the exception of specific desired shapes, e.g., a cylinder shape could be described in a cylindrical coordinates system, a spherical shape could be described in a spherical coordinates system, and so on; otherwise, stiffness matrices in the corresponding surface coordinates must be developed). Moreover, the calculation matrices will be very complicated and might cause singularity problems in the matrix operations.
In this case, the dynamic relaxation method will be applied, thus avoiding the inversion of stiffness matrices of complicated and varied geometries; however, the typical method needs to be adapted to easily update the position-dependent boundary conditions. Therefore, the objective of this paper is to construct, based on the dynamic relaxation method, an adaptive form-finding method for a bar network on a form-fixed surface, with boundary conditions updated in each time step.
**Objective and framework**
To demonstrate the framework of the presented method, the example of designing an ellipsoid-shaped geodesic network dome is employed. The lengths of the semi-principal axes of the ellipsoid are $a = 15$ m, $b = 11$ m, and $c = 12$ m (height). The target is to obtain a geodesic network dome in this desired ellipsoidal shape, having bars with as few length variations as possible, with each bar length approximately 3 m, for economy and convenience of construction.
Geodesic network
As is known to architectural designers, in a typical geodesic network in the form of a sphere or half-sphere, besides the aesthetic appeal, the lengths of all bars (also called struts, beams, or edges) of the network fall into a limited number of groups, according to the frequency of the network, as shown in Table 1. The “frequency” here is defined by the subdivisions of one basic triangular face of an icosahedron: the higher the frequency, the more triangles there are in the geodesic dome. In icosahedron-based geodesic domes, flat bottoms are available from half-spheres of even frequencies, whereas no flat bottoms are available from those of odd frequencies. Thus, geodesic domes can be fabricated from a few types of nodes (also called joints, hubs, or vertices) and bars and then assembled on-site with a lower cost of construction.
Based on the characteristics of the geodesic network, the design target is considered to have a distribution of bars similar to that of a geodesic dome at a frequency of 6 V with a spherical radius of 15 m, where bar lengths in nine groups vary from 2.439 to 3.429 m. This network (designated as the initial state or zero state in the following) is transformed into an ellipsoidal geodesic network by scaling in the x, y, and z directions with factors of 15/15, 11/15, and 12/15, respectively (designated as the first state). The next step is to find new node positions on the current ellipsoidal surface that allow the bar lengths to “return” to the corresponding lengths of the geodesic network before transformation. To achieve this goal, the adaptive form-finding procedure is conducted, as presented in the following section.
Form-finding procedure
In the first state, bars are given pre-compressions calculated from the length changes during the transformation from the initial state, $F_1 = EA(L_1 - L_0)/L_0$. It can be predicted that, after the bar forces are released, the network will expand along the surface and end up covering more surface than a half-dome;
**Table 1** Properties of icosahedron-based geodesic domes by frequency

| Frequency (V) | Number of length groups | Number of bars (struts, beams) | Number of nodes (joints, hubs) |
|---------------|-------------------------|--------------------------------|--------------------------------|
| 2             | 2                       | 65                             | 26                             |
| 4             | 6                       | 250                            | 91                             |
| 6             | 9                       | 555                            | 196                            |
| 8             | 19                      | 980                            | 341                            |
thus, the “over-design” part will be cut off properly to fit the target network.
In the form-finding procedure, the pre-compressions in bars are the only loadings considered on the network structure. Regarding the nodal forces induced by releasing the pre-compressed bars, their components in the directions of surface normal at the nodes should be balanced with the reaction forces from the surface constraining boundary conditions. The residual forces of the nodal forces and the reaction forces, therefore, are the components of the nodal forces in the tangent plane of the surface at nodes and can be calculated according to the nodal forces and the positions of the nodes. In this manner, for any state of the network, the boundary conditions are considered in the calculation of the residual forces without recognizing the degrees of freedom that are fixed or not fixed. Moreover, the operations on vectors and matrices are unified for different geometries.
As the dynamic relaxation method is applied, the network undergoes a structural dynamic process, where the explicit integration of time-dependent variables is performed to obtain the evolutionary node positions over a given time duration until the convergence criteria are met. As ideally expected, at the end of an iteration, if all nodes are at stable positions, then the network in that state will have more or less the same bar length groups as those in the initial state. However, due to the inconstant curvatures of the desired ellipsoidal surface, with the principal radii varying over each quarter, the bars obviously will not have the same lengths as those in the spherical dome. The general objective of this form-finding is to find a state of the structure with the most uniform bar length variations, where the forces in the bars will be the most balanced. Therefore, the bar length or force will be the main criterion for convergence. The framework of this procedure is summarized in the flowchart of Fig. 3. The detailed procedure and relative formulations are discussed in the next section.
**Procedure and formulations**
**Geometry state**
In each step of the calculation, the state variables are computed mainly from the geometric state of the network. Generally, a bar-node matrix $C$ is used to describe the topology of a network of bars and nodes (Schek 1974). For a network with $m$ bars and $n$ nodes in three-dimensional space, a bar-node matrix $C$ is constructed, where the entries of the $i$-th row and $j$-th column of the $m \times n$ matrix $C$ are as follows:
$$C_{ij} = \begin{cases} +1 & \text{if node } j \text{ is the start of bar } i \\ -1 & \text{if node } j \text{ is the end of bar } i \\ 0 & \text{otherwise} \end{cases}$$
(1)
The $n \times 3$ nodal coordinate matrix $\mathbf{x}$ is as follows:
$$\mathbf{x} = \begin{bmatrix} \mathbf{x} & \mathbf{y} & \mathbf{z} \end{bmatrix} = \begin{bmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ \vdots & \vdots & \vdots \\ x_n & y_n & z_n \end{bmatrix}$$
(2)
The coordinate difference vectors (bar directions in rows) $\mathbf{u}$ can be written as a function of $C$ and coordinate vectors $\mathbf{x}$ as follows:
$$\mathbf{u} = \begin{bmatrix} \overline{u} & \overline{v} & \overline{w} \end{bmatrix} = C \mathbf{x}$$
(3)
**Fig. 3** Flowchart of the framework of the form-finding procedure (the right column shows the calculations in each iteration; the left column shows the updated state variables for calculation)
where $\overline{u}$, $\overline{v}$ and $\overline{w}$ are vectors, each containing $m$ coordinate differences in the corresponding Cartesian direction. With coordinate difference vectors, the bar lengths $L$ are as follows:
$$L = \left( \text{diag}(\overline{u})^2 + \text{diag}(\overline{v})^2 + \text{diag}(\overline{w})^2 \right)^{\frac{1}{2}}$$ \hspace{1cm} (4)
where the function $\text{diag}()$ returns the diagonal matrix from the vector or the diagonal elements from the matrix. Therefore, $L$ is an $m \times m$ matrix that contains only diagonal elements that are bar lengths.
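Eqs. (1)–(4) can be sketched numerically. The following is a minimal NumPy illustration on a hypothetical three-bar triangle; the bar list, coordinates and dimensions are invented for demonstration and are not taken from the design example:

```python
import numpy as np

# Hypothetical 3-bar, 3-node triangle, used only to illustrate Eqs. (1)-(4).
# Bars as (start node, end node): 0:(n0->n1), 1:(n1->n2), 2:(n2->n0)
bars = [(0, 1), (1, 2), (2, 0)]
m, n = len(bars), 3

# Bar-node matrix C (Eq. 1): +1 at the start node, -1 at the end node.
C = np.zeros((m, n))
for i, (j_start, j_end) in enumerate(bars):
    C[i, j_start] = +1.0
    C[i, j_end] = -1.0

# Nodal coordinate matrix x (Eq. 2), one row per node.
x = np.array([[0.0, 0.0, 0.0],
              [3.0, 0.0, 0.0],
              [0.0, 4.0, 0.0]])

# Coordinate differences u = C x (Eq. 3), one row per bar.
u = C @ x

# Bar lengths as the diagonal matrix L (Eq. 4).
L = np.diag(np.linalg.norm(u, axis=1))
print(np.diag(L))  # -> [3. 5. 4.]
```

The same `C`, `x`, `u` and `L` objects carry through the remaining formulas of this section unchanged.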
The masses of the structure are considered to be lumped at the nodes. The lumped mass matrix can be written as follows:
$$M = m_l |C|^T L$$ \hspace{1cm} (5)
where $m_l$ is the mass per unit length of the bars, expressed in terms of material density and bar section area as $m_l = \rho A$.
The form-finding problem is, in principle, a geometric one, i.e., material-independent (Barnes 1999). After convergence, the forces on bars with the $m$ desired stiffnesses $EA$ do not disturb the state of equilibrium (Gründig et al. 2000). In this procedure, the bar stiffness $EA$ is considered constant for all bars and is applied directly in all steps of the calculation of the forces on bars, expressed as follows:
$$F = EA \left( LL_0^{-1} - I \right)$$ \hspace{1cm} (6)
where $L_0$ is calculated via Eqs. (2)–(4) with coordinates $x_0$ of the geodesic dome of a unit sphere; and $I$ is an identity matrix. Thus, the force-to-length ratios, also called force densities or tension coefficients (Veenendaal and Block 2012), are given by the following:
$$q = FL^{-1}$$ \hspace{1cm} (7)
The resulting forces on nodes are as follows:
$$P = -C^T qu = -C^T FL^{-1} C x$$ \hspace{1cm} (8)
Thus, the secant stiffness matrix can be written as follows:
$$K = -P(x - x_0)^{-1} = C^T FL^{-1} C x (x - x_0)^{-1}$$ \hspace{1cm} (9)
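Eqs. (6)–(8) can be exercised on the same kind of toy triangle network; the stiffness value and the assumed 10 % shortened unstressed lengths below are illustrative only:

```python
import numpy as np

# Toy triangle network (same layout as the Eqs. (1)-(4) sketch).
bars = [(0, 1), (1, 2), (2, 0)]
C = np.zeros((3, 3))
for i, (s, e) in enumerate(bars):
    C[i, s], C[i, e] = 1.0, -1.0
x = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
EA = 1.0e6  # illustrative axial stiffness

u = C @ x
L = np.diag(np.linalg.norm(u, axis=1))
# Assumed unstressed lengths L0: 10 % shorter than current, so all bars pull.
L0 = 0.9 * L

# Bar forces F = EA (L L0^-1 - I), Eq. (6), and force densities q = F L^-1, Eq. (7).
F = EA * (L @ np.linalg.inv(L0) - np.eye(3))
q = F @ np.linalg.inv(L)

# Resulting nodal forces P = -C^T q C x, Eq. (8).
P = -C.T @ q @ C @ x

# The internal forces are self-equilibrated: the columns of P sum to zero.
print(np.allclose(P.sum(axis=0), 0.0))  # -> True
```

The zero column sums follow from each row of $C$ containing one $+1$ and one $-1$, so $C\mathbf{1} = \mathbf{0}$.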
**Boundary conditions/constraints update**
The boundary conditions are updated through achieving the equilibrium of forces, as introduced above. In the design example, the nodes are constrained on an ellipsoidal surface, where movements in the normal directions of the surface are fixed, whereas movements in the tangent planes are free. The ellipsoidal surface function in the Cartesian coordinates system can be written as follows:
$$F([x, y, z]) = \left( \frac{x}{a} \right)^2 + \left( \frac{y}{b} \right)^2 + \left( \frac{z}{c} \right)^2 - 1 = 0$$ \hspace{1cm} (10)
where $[x, y, z]$ is an arbitrary point on the surface. Thus, the normal direction vector of the ellipsoidal surface at point $[x, y, z]$ is as follows:
$$\begin{bmatrix}
\frac{\partial F}{\partial x} & \frac{\partial F}{\partial y} & \frac{\partial F}{\partial z}
\end{bmatrix} = \begin{bmatrix}
\frac{2x}{a^2} & \frac{2y}{b^2} & \frac{2z}{c^2}
\end{bmatrix}$$ \hspace{1cm} (11)
or, expressed as vectors as follows:
$$n = x \begin{bmatrix}
2/a^2 \\
2/b^2 \\
2/c^2
\end{bmatrix}$$ \hspace{1cm} (12)
The normal vectors can be normalized as follows:
$$\tilde{n} = n \text{diag}(\|n\|)^{-1}$$ \hspace{1cm} (13)
where the expression of $\|n\|$ returns the second order norm of each row in $n$. The projections of nodal forces $P$ on normal $\tilde{n}$ are given by the following:
$$P_{pn} = \text{diag}\left( \text{diag}(P \tilde{n}^T) \right) \tilde{n}$$ \hspace{1cm} (14)
Because the reaction forces at nodes are balanced with $P_{pn}$, the reaction forces are $-P_{pn}$. The resulting residual forces are as follows:
$$R = P - P_{pn} = P - \text{diag}\left( \text{diag}(P \tilde{n}^T) \right) \tilde{n}$$ \hspace{1cm} (15)
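As a sketch of Eqs. (12)–(15), the following computes the normalized normals, the normal force components and the tangential residuals for two sample nodes on the design ellipsoid; the nodal forces `P` are arbitrary illustrative values:

```python
import numpy as np

a, b, c = 15.0, 11.0, 12.0  # ellipsoid semi-axes from the design example

# Two sample nodes on the ellipsoid and arbitrary illustrative nodal forces P.
x = np.array([[a, 0.0, 0.0],
              [0.0, 0.0, c]])
P = np.array([[1.0, 2.0, 3.0],
              [-4.0, 0.5, 6.0]])

# Surface normals n = x diag(2/a^2, 2/b^2, 2/c^2), Eq. (12),
# normalized row-by-row, Eq. (13).
n = x * np.array([2.0 / a**2, 2.0 / b**2, 2.0 / c**2])
n_hat = n / np.linalg.norm(n, axis=1, keepdims=True)

# Normal components P_pn (Eq. 14) and tangential residuals R (Eq. 15).
P_pn = np.sum(P * n_hat, axis=1, keepdims=True) * n_hat
R = P - P_pn

# The residuals lie in the tangent planes: R . n_hat = 0 at every node.
print(np.allclose(np.sum(R * n_hat, axis=1), 0.0))  # -> True
```

The row-wise dot products replace the `diag(diag(...))` bookkeeping of Eq. (14) without changing its meaning.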
**Explicit integration**
The relaxation of the bar forces initiates the structural dynamic time history process. The state variables of a new step will be explicitly integrated from the current state according to the governing ordinary differential equations. Typically, the integration implementation uses either the explicit classic 4th order Runge–Kutta Method (Baraff and Witkin 1998) or the Central Finite Difference Method (Barnes 1999). Here, a simple conditionally stable explicit method based on the Modified Trapezoidal Rule Method (Pezeshk and Camp 1995) is used.
The structural equation of motion at step time of $t$ can be expressed as follows:
$$M_t a_t + K_t x_t = P_{e,t}$$ \hspace{1cm} (16)
where the subscript of $t$ denotes the present variable calculated based on the geometric state at time of $t$. The mass matrix is constant over time; the external forces $P_e$ are the reaction forces; the displacement induced internal forces $Kx$ are the nodal forces resulting from the bar forces. Therefore, Eq. (16) can be rewritten as follows:
$$Ma_t = R_t$$ \hspace{1cm} (17)
The integration using the Modified Trapezoidal Rule Method is accomplished as follows:
\[
\mathbf{a}_t = \mathbf{M}^{-1}\mathbf{R}_t \\
\mathbf{v}_{t+\Delta t/2} = \left( \mathbf{I} - \frac{1}{2}\mathbf{M}_t^{-1}\mathbf{K}_t \right) \mathbf{v}_{t-\Delta t/2} + \Delta t \mathbf{a}_t \\
\mathbf{x}_{t+\Delta t} = \mathbf{x}_t + \frac{\Delta t}{2} (\mathbf{v}_{t-\Delta t/2} + \mathbf{v}_{t+\Delta t/2})
\] (18)
Considering the constraints again here, the resulting coordinates \( \mathbf{x}_{t+\Delta t} \) will lie outside the ellipsoid surface, because the nodal accelerations cause displacements and velocities along the tangent directions of the previous positions \( \mathbf{x}_t \). Therefore, the coordinates will be projected back onto the constraint ellipsoid surface, and the velocities will be updated by removing their components along the updated normal directions \( \tilde{\mathbf{n}}_{t+\Delta t} \) obtained via Eqs. (12) and (13). The updated coordinates and velocities will be used for updating the other state variables and for integration in the next iteration.
\[
\mathbf{x}^*_{t+\Delta t} = \mathbf{x}_{t+\Delta t} \begin{bmatrix}
\frac{1}{k/a^2+1} & 0 & 0 \\
0 & \frac{1}{k/b^2+1} & 0 \\
0 & 0 & \frac{1}{k/c^2+1}
\end{bmatrix} \\
\mathbf{v}^*_{t+\Delta t/2} = \mathbf{v}_{t+\Delta t/2} - \text{diag} \left( \text{diag} \left( \mathbf{v}_{t+\Delta t/2} \tilde{\mathbf{n}}_{t+\Delta t}^T \right) \right) \tilde{\mathbf{n}}_{t+\Delta t}
\] (19)
where \( k \) is the root of minimum absolute value of the surface constraint function:
\[
F \left( \mathbf{x}_{t+\Delta t} \begin{bmatrix}
\frac{1}{k/a^2+1} & 0 & 0 \\
0 & \frac{1}{k/b^2+1} & 0 \\
0 & 0 & \frac{1}{k/c^2+1}
\end{bmatrix} \right) = 0
\] (20)
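The back-projection of Eqs. (19)–(20) amounts, per node, to a one-dimensional root search for $k$. A minimal NumPy sketch could use bisection, under the assumption that the drifted point lies outside the surface so that $k \ge 0$ (the sample point and tolerance are illustrative):

```python
import numpy as np

a, b, c = 15.0, 11.0, 12.0
axes2 = np.array([a, b, c]) ** 2

def project_to_ellipsoid(p, tol=1e-12):
    """Scale p by diag(1/(1+k/a^2), ...) per Eqs. (19)-(20), solving the
    scalar constraint F(p*) = 0 for k by bisection.  A sketch for points
    outside the surface (k >= 0), as after an outward drift."""
    def g(k):
        return np.sum((p / (1.0 + k / axes2)) ** 2 / axes2) - 1.0
    lo, hi = 0.0, 1.0
    while g(hi) > 0.0:          # grow the bracket until the root is inside
        hi *= 2.0
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
    k = 0.5 * (lo + hi)
    return p / (1.0 + k / axes2)

# A drifted node slightly off the surface, projected back onto it.
p = np.array([10.0, 5.0, 8.0]) * 1.02
p_star = project_to_ellipsoid(p)
print(abs(np.sum(p_star**2 / axes2) - 1.0) < 1e-9)  # -> True
```

The corresponding velocity correction of Eq. (19) is the same tangential projection already shown for the residual forces, applied with the updated normals \( \tilde{\mathbf{n}}_{t+\Delta t} \).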
The condition of stability of the integration is to satisfy the maximum time step length (Pezeshk and Camp 1995) as follows:
\[
\Delta t < \frac{T}{\pi} = 2 \sqrt{\frac{m_l L}{EA/L}} = 2L \sqrt{\frac{\rho}{E}}
\] (21)
where \( L \) is the minimum bar length and \( \rho \) is the material density of the bars. In this case, all bars are made of structural steel and are longer than 2 m; thus, a time step of \( \Delta t = 1 \times 10^{-5} \) s \( < 2 \times 2 \times (7850/(2 \times 10^{11}))^{1/2} \) s \( \approx 8 \times 10^{-4} \) s satisfies the condition.
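The numerical check of Eq. (21) for the steel example can be reproduced directly:

```python
import math

rho, E = 7850.0, 2.0e11   # structural steel density (kg/m^3) and modulus (Pa)
L_min = 2.0               # all bars are longer than 2 m

# Maximum stable time step from Eq. (21): dt < 2 * L * sqrt(rho / E).
dt_max = 2.0 * L_min * math.sqrt(rho / E)
dt = 1.0e-5               # chosen step length

print(dt < dt_max)  # -> True (dt_max is roughly 8e-4 s)
```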
**Fig. 4** Time history of the maximum displacements between successive iterations
**Convergence criteria**
To solve the convergence of form-finding problem, the following criteria (or some of them) are usually adopted (Veenendaal and Block 2012) as follows:
1. small variations in the displacements between successive iterations \( \left( \| \mathbf{x}_{t+\Delta t} - \mathbf{x}_t \| < \varepsilon \right) \);
2. small variations of the bar forces (or bar lengths) between successive iterations \( \left( \| \mathbf{F}_{t+\Delta t} - \mathbf{F}_t \| < \varepsilon \right) \);
3. small values of the residual forces \( \left( \| \mathbf{R}_t \| < \varepsilon \right) \);
4. small values of the kinetic energy \( \left( \left\| \frac{1}{2} \mathbf{M} \text{diag} \left( \| \mathbf{v}_t \| \right)^2 \right\| < \varepsilon \right) \);
5. maximum number of iterations (or maximum time duration) reached.
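The five criteria above can be gathered into a small helper. This is only a sketch — the function name, argument shapes and threshold handling are assumptions, and which subset of criteria to apply is a design choice:

```python
import numpy as np

def convergence_checks(x_new, x_old, F_new, F_old, R, M, v,
                       eps, step, max_steps):
    """Evaluate the five convergence criteria (a sketch; names and shapes
    are assumptions).  x_*, R, v are (n, 3) arrays, F_* are per-bar force
    arrays, and M is the (n,) lumped nodal mass vector."""
    return {
        "displacements": np.max(np.linalg.norm(x_new - x_old, axis=1)) < eps,  # criterion 1
        "bar_forces":    np.max(np.abs(F_new - F_old)) < eps,                  # criterion 2
        "residuals":     np.max(np.linalg.norm(R, axis=1)) < eps,              # criterion 3
        "kinetic":       0.5 * np.sum(M * np.sum(v**2, axis=1)) < eps,         # criterion 4
        "max_steps":     step >= max_steps,                                    # criterion 5
    }

# A network at rest trivially satisfies criteria 1-4.
z = np.zeros((4, 3))
checks = convergence_checks(z, z, np.zeros(6), np.zeros(6), z,
                            np.ones(4), z, eps=1e-6, step=10, max_steps=500)
print(all(v for k, v in checks.items() if k != "max_steps"))  # -> True
```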
The dynamic relaxation process may not converge numerically to a still (rest) state of the structure, because no damping is introduced: the equation of motion, Eq. (16), represents an undamped structural system to simplify the calculation. However, with a preset maximum number of iterations, the calculation will “converge” even if no damping exists in the system.
After the calculation has converged, the optimal (best) solution for the network will be selected according to the convergence criterion that the calculation achieved. Recalling the objective of the design example, the state of minimum residual forces over the entire calculation duration is selected as the optimal solution. The maximum acceptable errors were decided according to structural-design as well as numerical considerations. If no numerical convergence is achieved within the relevant time duration (i.e., the calculation converged by criterion 5), the state of minimum residual forces may not lie at the end of the time history; the state variables must then be evaluated to check whether the resulting network is acceptable. Otherwise, the time duration must be extended in a new calculation, or the near-target state from the current calculation can be used as the initial state for a new one.
Solutions and evaluation
Optimal solution
For the dome network fixed on the ellipsoidal surface with $a = 15$ m, $b = 11$ m, and $c = 12$ m, the procedure presented above was run considering all five convergence criteria, with a time step length of $\Delta t = 1 \times 10^{-5}$ s and a time duration of $500\Delta t$ (500 iteration steps). The time histories of the state variables — maximum displacements, maximum residual forces, and maximum, average, and standard deviation of the bar length variations — are shown in Figs. 4, 5, 6, 7 and 8. The state variables are calculated and compared between representative states over the time history; the selected states are shown in Table 2.
Fig. 5 Time history of the maximum residual force
Fig. 6 Time history of the maximum value of the length variation $(L_i - L_0)/L_0$
Fig. 7 Time history of the average value of the length variation $(L_i - L_0)/L_0$
Fig. 8 Time history of the standard deviation of the length variation $(L_i - L_0)/L_0$
Table 2 Representative states during the calculation
| Time step | Max. residual force (MN) | Max. length variation (%) | Average length variation (%) | Standard deviation (%) |
|-----------|--------------------------|---------------------------|------------------------------|------------------------|
| 1         | 149.2                    | 26.7                      | 15.0                         | 7.3                    |
| 20        | 74.6                     | 27.0                      | 14.1                         | 7.0                    |
| 74        | 63.2                     | 25.3                      | 10.7                         | 6.5                    |
| 181       | 53.8                     | 22.7                      | 5.5                          | 5.2                    |
| 330       | 69.2                     | 45.3                      | 11.8                         | 10.0                   |
It is observed that in this calculation no numerical convergence was achieved; nevertheless, the maximum residual force reached its minimum at the 181st step of the dynamic process. At the same step, the bar length variation simultaneously reached its minimum average and standard deviation. Thus, the presented procedure is proved to be applicable.
Therefore, the 181st state is considered to be the optimal solution to the network structure (Fig. 9). Thus, the final network can be the part above the ground level by cutting off either directly at the ground level or at the horizontal bars closest to the ground level. Here, as shown in Fig. 9, if the part above the bands that connect the centres of lower pentagons is kept as the final solution to the design goal, then the possible design would be as shown in Fig. 10.
calculated during dynamic relaxation will be higher and induce time states with lower resolution. Thus, the initial geometry is suggested to be as close to the target as possible.
In the explicit integration, as stated above, the studies on the explicit dynamic process revealed the effect of the time step length on the stability of the integration. If the procedure is calculated in a material-independent manner, then the time step must also be less than the dimension unit of the structure elements to ensure that the explicit integration remains stable.
The convergence criteria chosen for the form-finding procedure are also important. In the example presented, the target is to minimize the length variations, which leads to the state of minimum residual force. The other convergence criteria (except criterion 5) are trivially satisfied at the initial state and are obviously not applicable in this case. Finally, with the optimal-solution state found, the chosen form-finding method and convergence criterion are proved to be effectively applicable.
**Conclusions**
The adaptive form-finding method presented in this paper is a simple and effective procedure for computational design of spatial network structures with a fixed form. According to the limitation of the research, the following conclusions can be drawn:
1. The dynamic relaxation method is applicable to the form-finding process of form-fixed spatial network structures, and it avoids inverse operations on complicated stiffness matrices in each step;
2. During the form-finding process, the boundary conditions can be applied as reaction forces through force equilibrium, avoiding varying operations on vectors and matrices, especially for cases of time-dependent varying boundary conditions;
3. In spatial form-fixed problems, the coordinates and velocities integrated from the accelerations can be positioned outside the constraint form. To keep the solution feasible, it is necessary to update the coordinates and velocities after each explicit integration;
4. The choice of initial state strongly affects the states obtained over the time duration, so the initial state should be chosen as close as possible to the target state. The time step length is important both for the stability of the explicit integration and for finding the optimal solution accurately. The convergence criteria chosen for the time-step iteration influence the characteristics of the target state; therefore, the convergence criteria need to be chosen properly, especially in undamped systems.
**Funding** National Natural Science Foundation of China (51508103) and Department of Education of Fujian Province (CN) (JA15074) by Junqing XUE, National Natural Science Foundation of China (51778148) and Recruitment Program of Global Experts Foundation (TM2012-27) by Bruno Briseghella, and National Natural Science Foundation of China (51508053) by Xi Tu.
**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
αRoute: A Name Based Routing Scheme for Information Centric Networks
Reaz Ahmed*, Md. Faizul Bari*, Shihabur Rahman Chowdhury*, Md. Golam Rabbani*,
Raouf Boutaba*, and Bertrand Mathieu†
*David R. Cheriton School of Computer Science, University of Waterloo
{mfbari | sr2chowdhury | m6rabbani | rAhmed | email@example.com
†Orange Labs, Lannion, France
firstname.lastname@example.org
Abstract—One of the crucial building blocks for Information Centric Networking (ICN) is a name-based routing scheme that can route directly on content names instead of IP addresses. However, moving the address space from IP addresses to content names raises scalability issues at a whole new level, for two reasons. First, name aggregation is not as trivial a task as IP address aggregation in BGP routing. Second, the number of addressable contents in the Internet is several orders of magnitude higher than the number of IP addresses. At the current size of the Internet, name-based anycast routing is very challenging, especially when routing efficiency is of prime importance. We propose a novel name-based routing scheme (αRoute) for ICN that offers efficient bandwidth usage, guaranteed content lookup, and scalable routing table size.
I. INTRODUCTION
Information Centric Networking (ICN) has recently received significant attention in the research community. The ICN philosophy prioritizes content (“what”) over its location (“where”). To realize this separation of content from location, a name-based routing mechanism is essential. However, a number of crucial issues and challenges related to name-based routing are yet to be addressed in order to successfully realize a content-oriented networking model for the future Internet.
Today’s Internet exists as an interconnection of thousands of Autonomous Systems (ASs) from around the globe. The biggest Internet routing table contains around $4 \times 10^5$ Border Gateway Protocol (BGP) [1] routes for covering about $3.8 \times 10^9$ IPv4 addresses and $6 \times 10^8$ hosts. This $10^4$ scaling factor between IPv4 addresses and BGP routes is achieved by prefix based routing and route aggregation. However, the number of addressable ICN contents is expected to be several orders of magnitude higher. Google has indexed approximately $10^{12}$ URLs [2], which would impose 7 orders of magnitude scalability requirement on a routing scheme similar to BGP.
The routing scalability issue in ICN is related to how contents are named and how inter-AS and intra-AS routing protocols process these names. Even if an inter-AS ICN routing protocol like BGP covers only the top-level domains as prefixes, it will need to carry approximately $2 \times 10^8$ unique prefix routes [3], as no aggregation is possible at this level. So, the crux of the problem lies in the fact that ICN requires Internet routers to maintain a Brobdingnagian amount of routing state, which does not seem to be possible with existing technology. However, in reality the scalability requirement will be much higher for the following reasons: (i) content names are not as aggregatable as IP addresses, (ii) names with same prefixes may not be advertised from nearby network locations, (iii) routing cannot depend on topological prefix binding as content retrieval should be location independent, (iv) restricting the content name to some form of specialized format limits the usability of the system, and finally, (v) supporting content replication, caching, and mobility reduces the degree of route aggregation that can be applied, as multiple routes for the same content need to be maintained in the routing table.
In this paper we address the routing scalability issue for ICN. We propose a name-based overlay routing scheme named αRoute, which is scalable and offers content lookup guarantee (Section II). Both the routing table size and the number of hops for content lookup in αRoute are logarithmically bounded by network size. For Internet inter-domain routing using αRoute, we propose a distributed overlay-to-underlay mapping scheme that enables near shortest path routing in underlay (AS-network) by preserving the adjacency relations in the overlay graph (Section III). We also provide qualitative comparison of our approach with existing approaches for ICN routing in Section IV. Finally, we conclude and outline future research directions in Section V.
II. αROUTE: A NAME-BASED DHT
A DHT essentially maps a key to a value in a distributed manner. A DHT design involves two components: (a) partitioning: segregating the entire key space into subspaces and assigning each subspace to a physical node; and (b) routing: a mechanism for locating any key in a bounded number of hops. We now present these two components for αRoute.
A. Partitioning
Our partitioning policy has three desirable characteristics: first, it places similar names in the same partition; second, it provides an upper bound on the number of partitions; and third, it creates non-overlapping partitions. We treat strings as unordered sets of characters; e.g., the string “www.rocket.com” will be treated as $\{w, r, o, c, k, e, t, m\}$. Now we can classify all the strings in the key space into separate partitions, based on the presence or absence of characters in a string. At first, we create some partitioning sets $(S_i)$ over the 36
Non-English characters in a string may be treated in two alternative ways. First, if we want to limit ourselves to the 36 alpha-numeric characters in English, then non-English characters can be mapped to English characters using some predefined rules. Alternatively, we can incorporate non-English characters in the partitioning process, which may increase routing overhead. It is worth noting that we can accommodate around 64 billion nodes using the partitioning tree of 36 alphanumeric characters.
**B. Routing**
Our routing mechanism has two components: routing table and message forwarding mechanism. Ideally the routing table should be logarithmic on network size, while the forwarding mechanism should ensure shortest path routing using local information only. In this section we present an overlay routing architecture which achieves both of these goals. In the next section we will present a mapping algorithm to achieve these goals in an underlay network as closely as possible.
**Routing table**: Each partition in the aforementioned partitioning tree can be identified by a pattern $s_1 s_2 \ldots s_h$, where $s_i$ is a character presence combination over the characters of $S_i$ and $h$ is the height of the partitioning tree. Each leaf node of the aforementioned tree corresponds to an AS in the Internet. For example, if $S_i = \{r, c\}$ then $s_i$ can be any of $rc$, $r\bar{c}$, $\bar{r}c$, or $\bar{r}\bar{c}$. We define a prefix of a pattern as a leftmost sub-pattern of any length, e.g., $s_1 s_2 \ldots s_t$, where $t \leq h$.
Now we describe the routing table entries for the AS responsible for partition $s_1 s_2 \ldots s_i \ldots s_h$. For some level $i$, the AS’s routing table will have $2^{|S_i|} - 1$ routing links corresponding to the partitions $s_1 s_2 \ldots t_i \ldots s_h$, where $t_i$ is a character presence combination over the characters of $S_i$ and $t_i \neq s_i$. In general, the routing table at each AS will have $\sum_{i=1}^{h}(2^{|S_i|} - 1)$ entries.
We can better describe the routing table entries with an example. Consider the shaded AS in Fig. 1 with prefix $\bar{r}c - \bar{e} - kt$. This AS will have a total of $7\ (= (2^2 - 1) + (2^1 - 1) + (2^2 - 1))$ routing links to the ASs marked 1 to 7 in the figure. The first three routing links are computed by taking the character presence combinations over the characters in $S_1 = \{r, c\}$, which gives us $rc - \bar{e} - kt$, $r\bar{c} - \bar{e} - kt$, and $\bar{r}\bar{c} - \bar{e} - kt$. Note that for the first and the third links, the tree has not been fully expanded to level 3, so the links will point to the nodes $rc - \bar{e}$ and $\bar{r}\bar{c} - \bar{e}$, respectively. For computing link 4, the prefix characters corresponding to $S_1$ and $S_3$ remain unchanged, while the character(s) in $S_2$ are complemented, and so on. A slightly different situation can arise if an AS has a shorter prefix than other ASs, e.g., the AS with prefix $rc - \bar{e}$; its first routing entry would be $r\bar{c} - \bar{e}$, which is an internal (logical) node. Here, any AS with prefix $r\bar{c} - \bar{e}$ can be considered as the first link for $rc - \bar{e}$.
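The routing-table bound $\sum_{i=1}^{h}(2^{|S_i|} - 1)$ can be checked numerically. The two-character sets in the second call are placeholders; only the set sizes matter for the count.

```python
def routing_table_size(partitioning_sets):
    """Number of routing links per AS: sum over levels of 2^|S_i| - 1."""
    return sum(2 ** len(S) - 1 for S in partitioning_sets)

# the worked example: S1 = {r, c}, S2 = {e}, S3 = {k, t} gives 7 links
print(routing_table_size([{"r", "c"}, {"e"}, {"k", "t"}]))  # 7

# with the 36 alphanumeric characters split into 18 pairs (|S_i| = 2),
# each AS keeps only 18 * 3 = 54 links while the tree distinguishes
# 4^18 = 2^36 ~ 6.9e10 partitions, matching the "around 64 billion
# nodes" figure mentioned above
print(routing_table_size([{"a", "b"}] * 18))  # 54
```

This illustrates the trade-off noted in the conclusion: larger $|S_i|$ means a shallower tree (fewer hops) at the cost of more routing links per level.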
**Message forwarding**: We can define a simple message forwarding mechanism based on the above described routing table. A lookup string is converted to a set of characters corresponding to the partitioning set, $S_i$. The lookup request will be forwarded to the AS responsible for the queried characters in a multi-hop path. This path is obtained by
gradually transforming the prefix of the current AS into the lookup pattern. Following the previous example in Fig. 1, suppose the dark shaded node is looking for the AS responsible for the string “rectangle”, which is mapped to an AS with prefix $rc - e - \bar{k}t$. In the first step, node $\bar{r}c - \bar{e} - kt$ will forward the query to the AS with prefix $rc - \bar{e}$ using routing link 1. The $2^{nd}$ routing link of $rc - \bar{e}$ will have the prefix of some AS, say $rc - e - kt$, under AS $rc - e$. Thus the query will be forwarded to $rc - e - kt$, which will finally forward the query to $rc - e - \bar{k}t$.
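The hop-by-hop prefix transformation can be sketched as below. Patterns are tuples of per-level presence combinations with `~` standing for the overbar; the sketch assumes a fully expanded tree, ignoring the shorter-prefix case where a link lands on a partially expanded subtree.

```python
def forward_path(src, dst):
    """Greedy overlay forwarding (sketch): at each hop, correct the leftmost
    level where the current prefix differs from the lookup pattern."""
    hops, cur = [src], list(src)
    for i, (c, d) in enumerate(zip(src, dst)):
        if c != d:
            cur[i] = d              # take the routing link that flips level i
            hops.append(tuple(cur))
    return hops

# the "rectangle" lookup from the text: ~rc-~e-kt routes toward rc-e-~kt
src = ("~rc", "~e", "kt")
dst = ("rc", "e", "~kt")
for hop in forward_path(src, dst):
    print("-".join(hop))
```

Each hop fixes one level, so the number of overlay hops is bounded by the Hamming distance between the two patterns, and hence by the tree height $h$.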
It is worth noting that the partition tree, as in the example of Fig. 1, does not exist in terms of network links, because the logical nodes in the tree are not assigned to any physical entity in the network. Rather, the tree exists logically at each indexing AS as a prefix string to the root. The overlay network is composed of the routing links (dashed lines in Fig. 1) between the indexing nodes.
C. Join protocol
To join the network, a new AS, say $X$, has to know an existing AS in the system, say $M$. $X$ will query $M$ for its neighbor with the shortest prefix, say $M_1$. Next, $X$ will query $M_1$ for the neighbor with the shortest prefix. In this way $X$ crawls the network and finds a local minimum, i.e., a node with a shorter prefix than all of its neighbors. In case of a tie, $X$ chooses the node storing the higher number of index records. Once the local minimum, say $Y$, is found, $X$ requests $Y$ to increase its prefix by one step. If the prefix of $Y$ is $s_1 s_2 \ldots s_t$ and $Y$ has $2^{|S_t|}$ siblings at level $t$, then $Y$ increases its prefix to $s_1 s_2 \ldots s_{t+1}$ and $X$ becomes a new sibling of $Y$; otherwise $X$ becomes a sibling of $Y$ at level $t$. Accordingly, $X$ populates its routing table using the routing information at $Y$. It can be trivially proven that all the neighbors of $X$ will be within 2 hops of $Y$.
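The crawl toward a local minimum can be sketched as follows. The toy overlay (prefixes, adjacency, and record counts) is entirely hypothetical, and prefix "length" is taken as string length for simplicity.

```python
def find_local_minimum(start, prefix_of, neighbors_of, records_of):
    """Sketch of the join crawl: repeatedly hop to the neighbor with the
    shortest prefix, breaking ties by the higher index-record count, until
    no neighbor has a shorter prefix than the current node."""
    cur = start
    while True:
        best = min(neighbors_of(cur),
                   key=lambda n: (len(prefix_of(n)), -records_of(n)))
        if len(prefix_of(best)) >= len(prefix_of(cur)):
            return cur      # local minimum: shorter than all of its neighbors
        cur = best

# hypothetical overlay: prefixes, adjacency, and record counts
prefixes = {"A": "rc", "B": "rc-e", "C": "rc-e-kt", "D": "~rc"}
graph = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B"], "D": ["A"]}
records = {"A": 10, "B": 5, "C": 2, "D": 7}

# starting from C, the crawl visits C -> B -> A and stops at A,
# whose neighbors (B and D) both have longer prefixes
print(find_local_minimum("C", prefixes.get, graph.get, records.get))
```

In the real protocol the crawl is performed by remote queries rather than local lookups, but the stopping condition is the same.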
III. FROM OVERLAY TO UNDERLAY
In order to route using $\alpha$Route, we have to map the nodes in the $\alpha$Route overlay graph to the AS topology. In this section we first explain the impact of Internet topology on the mapping process (Section III-A), then we present the mapping algorithm (Section III-B) followed by the lookup (Section III-C) and caching (Section III-D) mechanisms in the underlay network.
A. Topology Considerations for Mapping
The inter-domain AS network is based on the Border Gateway Protocol (BGP), while each AS controls its intra-domain routing protocol independently. Hence, it will be inappropriate to use $\alpha$Route for both inter- and intra-domain routing in the future information centric Internet. Instead, following the current tradition, we assume that ASs will collaborate using $\alpha$Route for inter-domain routing, while for intra-domain routing an AS may extend its own $\alpha$Route prefix or it may use a separate intra-domain routing protocol.
Node degree distributions of the overlay ($\alpha$Route) and underlay (AS-topology) graphs have a profound impact on the mapping process. According to Fig. 1, each indexing node of a uniformly grown partitioning tree should have a similar number of routing links. In other words, the overlay graph is a nearly regular graph. On the contrary, it has been reported in [4] that the node degree distribution in the AS-topology exhibits a power-law relationship with the number of ASs. We exploit this dissimilarity in node degree distribution during the mapping process. Recent studies [5] on Internet topology have revealed that a small number (around 12 to 16) of high-degree ASs form an almost completely connected core. The rest of the ASs have multiple physical links to the core, which results in many triangles in the AS graph. It is also reported [6] that the interconnect graph between the non-core ASs is sparse. For the mapping process, we treat the core ASs as Tier-1 ASs, while the ASs directly connected to at least one Tier-1 AS are treated as Tier-2 ASs, and so on. Hence, a Tier-2 AS directly connected to multiple Tier-1 ASs can route a lookup request to a core AS with an appropriate prefix, or it may use its peering links with other Tier-2 or lower-tier ASs. This process recurs for the lower-tier ASs as well.
Fig. 2 depicts a conceptual overview of $\alpha$Route prefix distribution over the ASs. To exploit the heterogeneous inter-AS
connectivity, we assign short prefixes to the highly connected top tier ASs. A lower tier AS, on the other hand, extends a prefix of an upper tier AS. In contrast to the partitioning tree introduced in Fig. 1, selected logical nodes (partitions) at different levels are assigned to highly connected upper tier ASs. In addition to having the regular $\alpha$Route links (as presented by dashed arrows in Fig. 2), an upper tier AS will have physical links to the lower tier ASs that extend its prefix.
B. Mapping Algorithm
The mapping procedure is initiated by a centralized entity referred to as the Name Assignment Authority (NAA). The NAA chooses a set of prefixes and assigns them to the Tier-1 ASs. The prefixes are selected in such a way that the expected name-resolution processing load on each Tier-1 AS is distributed proportionally to its capacity. In the next step, each Tier-1 AS executes Algorithm 1 to assign prefixes to the Tier-2 ASs. Each Tier-1 AS extends its own prefix to generate a set $S_{patterns}$ of patterns that are not yet mapped and start with its own pattern; e.g., if a Tier-1 AS is assigned prefix $r\bar{c}$, then its $S_{patterns}$ set contains all unmapped patterns starting with $r\bar{c}$. Next, the AS finds the neighbor ($nbr$) that has the highest number of mapped neighbors. $nbr$ is then assigned a prefix in such a way that its distance (in the Hamming space) from all its neighbors is minimized, and the process goes on until all neighbors are mapped. After the Tier-1 ASs have executed this mapping process, the already mapped Tier-2 ASs map their neighbors using the same algorithm. The process goes on in a nested recursive manner until each AS is mapped to a prefix.
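The greedy neighbor-mapping step can be sketched as below. This is not the paper's Algorithm 1 but a plausible reading of it; the bit-string prefixes, the tiny graph, and the free-prefix pool are all hypothetical.

```python
def hamming(p, q):
    """Hamming distance between two equal-length prefix strings."""
    return sum(a != b for a, b in zip(p, q))

def map_neighbors(root, root_prefix, graph, free_prefixes):
    """Greedy sketch of the mapping step: repeatedly pick the unmapped
    neighbor with the most already-mapped neighbors, and give it the free
    prefix minimizing its total Hamming distance to those mapped neighbors."""
    assigned = {root: root_prefix}
    pending = [n for n in graph[root] if n not in assigned]
    while pending and free_prefixes:
        nbr = max(pending, key=lambda n: sum(m in assigned for m in graph[n]))
        mapped = [assigned[m] for m in graph[nbr] if m in assigned]
        best = min(free_prefixes,
                   key=lambda p: sum(hamming(p, q) for q in mapped))
        assigned[nbr] = best
        free_prefixes.remove(best)
        pending.remove(nbr)
    return assigned

# hypothetical Tier-1 AS "T1" with two physically connected neighbors
graph = {"T1": ["a", "b"], "a": ["T1", "b"], "b": ["T1", "a"]}
print(map_neighbors("T1", "110", graph, ["100", "101", "111"]))
```

Minimizing the Hamming distance to physical neighbors is what drives logical links toward one-to-one (or compressed) mappings in the next paragraph's classification.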
In terms of mapping an edge in the overlay graph (a logical link) to the underlay network, this mapping strategy can produce three scenarios: equal, compression, and expansion. In most cases, a logical link will be mapped to a physical link, resulting in an equal, or one-to-one, mapping. Recall that if two logical nodes in the overlay space have more than one mismatch (Hamming distance) between their prefixes, the result is a multi-hop path between them in the overlay routing space. Therefore, when two physical neighbors have more than one mismatch in their assigned prefixes, a logical path between them in the overlay space is mapped onto the physical link between them. In this case, traversing a physical link makes a jump in the overlay space while routing. This essentially results in the compression of an overlay path into a physical link. Finally, in a few cases, adjacent overlay nodes will be more than one hop apart in the underlay, which degrades the mapping performance due to the expansion of a logical link in the $\alpha$Route graph into a physical path in the underlay AS-topology. In the experimental results section, we will provide quantitative measures for these three cases.
C. Content lookup
To lookup a content, we first create a pattern depending on the presence or absence of the letters in the given name (or keywords) matching the partitioning strings ($S_i$s). Then we can use $\alpha$Route to route the lookup request to the AS indexing the names matching this pattern. At the indexing AS, we will find one or more index records of the form $<N_l, P_l>$, which indicates that the content with name $N_l$ is stored at AS with pattern $P_l$. Now we use $\alpha$Route to reach the AS responsible for pattern $P_l$.
Each AS has to maintain a routing table (as explained in Section III-A) for routing messages to the AS responsible for any given pattern. For each logical routing link $L_k$ (corresponding to the dashed lines in Fig. 1 and Fig. 2), the routing table will contain an entry of the form $<L_k, I_k, h_k>$. Here, $I_k$ is the inter-AS link that should be used for routing to the AS responsible for pattern $L_k$, and $h_k$ is the number of ASs to be traversed to reach $L_k$. With a good mapping algorithm, $h_k$ will be 1 in most cases. In addition to the logical links, an AS keeps a separate routing entry of the form $<P_k, I_k, 1>$ for each physical neighbor. Here, $P_k$ is the pattern of the neighbor AS reachable through the inter-AS link $I_k$.
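The two kinds of entries can be represented as below. The patterns, link identifiers (`if0`, …), and hop counts are hypothetical and not taken from the paper's figures; `~` stands for the overbar.

```python
# sketch of the per-AS routing structures: logical entries <L_k, I_k, h_k>
# and one <P_k, I_k, 1> entry per physical neighbor (all values illustrative)
logical_links = {
    ("rc", "~e"):  {"link": "if0", "hops": 1},   # mapped one-to-one
    ("r~c", "~e"): {"link": "if1", "hops": 2},   # an expanded logical link
}
physical_neighbors = {
    ("~r~c", "~e", "kt"): "if2",
}

def choose_link(target):
    """Pick an outgoing inter-AS link for `target`: a physical neighbor that
    matches exactly wins; otherwise take the logical link whose pattern is a
    prefix of the target, preferring the fewest underlay hops (h_k)."""
    if target in physical_neighbors:
        return physical_neighbors[target]
    candidates = [(e["hops"], e["link"])
                  for pat, e in logical_links.items()
                  if target[:len(pat)] == pat]
    return min(candidates)[1] if candidates else None

print(choose_link(("rc", "~e", "kt")))      # resolved via a logical link
print(choose_link(("~r~c", "~e", "kt")))    # resolved via a physical neighbor
```

Policy preferences such as "peering link before provider link" (next paragraph) would slot into the candidate-selection step.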
Similar to BGP, $\alpha$Route supports policy-based routing. $\alpha$Route can be augmented with different policies during the route selection process. In the current implementation we adhere to the following policy. If a lookup request can be resolved using a peering link (usually free of cost) we route using that link. Otherwise, the request has to be forwarded to a provider AS, which usually incurs cost to the requesting AS.
D. Indexing and Caching
Strict index-placement restriction is a major disadvantage of any DHT approach. To enable efficient content lookup, we have to place a content’s index at a specific network location. In addition, this introduces two-step routing: first route to the index, then route to the content. We can mitigate both of these problems (i.e., regain index-placement freedom and avoid the two-step lookup) by intelligently caching indexes and contents. Moreover, such caching policies reduce expensive inter-AS traffic.
**Index caching**: ASs may not agree to store any content’s index for several reasons, including legal implications and high query traffic for a popular content. If the content is illegal or access restricted then this behavior of an AS is appropriate. But, for a popular content, such behavior can decrease the content’s reachability. To minimize the volume of lookup traffic for a popular content, each AS can cache the indexes returned by outgoing lookup requests for resolving future lookup requests. This index caching strategy will effectively reduce the popular content lookup traffic at the rendezvous indexing AS.
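A per-AS index cache of this kind can be sketched with a simple least-recently-used structure; the LRU eviction policy and capacity bound are assumptions, since the paper does not specify a replacement strategy.

```python
from collections import OrderedDict

class IndexCache:
    """Sketch of per-AS index caching: remember <name, index record> pairs
    seen in outgoing lookup replies, evicting the least-recently-used entry
    when the cache is full (replacement policy is our assumption)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def put(self, name, index_record):
        self.entries[name] = index_record
        self.entries.move_to_end(name)        # mark as most recently used
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the LRU entry

    def get(self, name):
        if name in self.entries:
            self.entries.move_to_end(name)    # refresh recency on a hit
            return self.entries[name]
        return None                           # miss: fall back to αRoute lookup
```

A hit resolves the lookup locally, so repeated requests for a popular name never leave the AS, which is exactly the traffic reduction at the rendezvous indexing AS described above.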
**Content caching**: As previously reported [7], [8], content popularity in the Internet, and hence the lookup rate, follows a power-law distribution. We can use this property to improve response time by caching popular contents at the AS storing the content’s index. This allows a content to be accessed in one DHT lookup. However, we may face two barriers in deploying this strategy. First, a content owner may not allow the indexing AS to cache and serve its content, for financial and legal reasons. Second, an AS may be overloaded if the distribution of popular contents over the ASs is not uniform. The second obstacle can be reduced if ASs cache a popular, permitted content and update the content’s index by adding a link to the cached copy. In the latter strategy, a user can look up the indexing AS to find a list of ASs caching the desired content and access the content from a nearby AS. This approach can be effective when accessing large contents.
IV. RELATED WORK
The last few years have witnessed a significant number of research efforts in the field of ICN. Several of these works identify content naming and routing as key research challenges in ICN and have proposed different solutions for these problems. A comprehensive survey of the naming and routing schemes proposed for ICN can be found in [9]. DONA [10] provides a hierarchical name resolution infrastructure comprising new network entities named Resolution Handlers (RHs), each of which stores routing information for the domain it is attached to. The root RH needs to maintain routing information for all content in the network, which severely limits the scalability of this mechanism. NetInf [11] and LANES [12] both propose a hierarchical DHT-based approach for ICN routing. However, the topmost level in the DHT hierarchy in NetInf, called REX, needs to store an index for all the contents in the network, which results in a performance bottleneck and a scalability issue. CCN [13], CBCB [14], and TRIAD [15] use gossip-based routing protocols, which incur significant management and control overhead. CCN proposes to replace the IP address prefixes in the BGP routing table with content name prefixes and to route content requests by performing longest-prefix matching on the routing table. Routing in CBCB [14], on the other hand, is based on controlled flooding of attribute-value pairs.
V. CONCLUSION AND FUTURE WORK
In this paper we proposed a novel name-based overlay routing scheme, $\alpha$Route, and an effective strategy for mapping the overlay network onto the physical AS-topology. $\alpha$Route guarantees content lookup while ensuring efficient bandwidth usage and a small routing table size. The proposed mapping strategy, in turn, produces only a small expansion in the routing path. Compared to existing routing techniques, our approach has a number of advantages. First, routing can be done on names without sacrificing efficiency or completeness. Second, after finding the node responsible for a query name, it is easy to find other names within 1 or 2 edit distance, since the nodes responsible for storing those names will be 1 or 2 overlay hops away from the query target. Third, in contrast to hierarchical routing mechanisms, there is no bottleneck node in the proposed system. A capacity-proportional load distribution can be achieved by placing the ASs at different levels of the partitioning tree based on capacity. Fourth, compared to other tree-based routing approaches, we can conveniently select the size of the partitioning sets ($|S_i|$) to tune the depth of the tree. This allows us to easily decrease routing hops by increasing the number of routing links, and vice versa. However, the proposed partitioning algorithm for constructing the $S_i$s has a shortcoming. We currently select the $S_i$s off-line in such a way that the sample names are uniformly distributed over the leaves of the tree. For a fairly large sample size, off-line computation should give a nearly uniform distribution of names over the resolution nodes. We intend to investigate other techniques for online computation of the $S_i$s. In addition, the performance of $\alpha$Route can be greatly improved by adopting the caching strategies proposed in Section III-D. We intend to investigate $\alpha$Route’s performance in the presence of index and content caching, and to experiment in a large-scale testbed.
REFERENCES
[1] “BGP Routing Table Analysis Reports,” http://bgp.potaroo.net/.
[2] “We Knew The Web Was Big.” [Online]. Available: http://googleblog.blogspot.com/2008/07/we-knew-web-was-big.html
[3] “Domain Counts & Internet Statistics,” http://www.domaintools.com/internet-statistics.
[4] M. Faloutsos, P. Faloutsos, and C. Faloutsos, “On power-law relationships of the Internet topology,” SIGCOMM Comput. Commun. Rev., vol. 29, no. 4, pp. 251–262, Aug. 1999.
[5] M. Boguñá, F. Papadopoulos, and D. Krioukov, “Sustaining the internet with hyperbolic mapping,” Nature Communications, vol. 1, p. 62, 2010.
[6] L. Subramanian, S. Agarwal, J. Rexford, and R. Katz, “Characterizing the Internet hierarchy from multiple vantage points,” in IEEE INFOCOM, vol. 2, 2002, pp. 618–627.
[7] S. A. Krashakov, A. B. Teslyuk, and L. N. Shchur, “On the universality of rank distributions of website popularity,” Comput. Netw., vol. 50, no. 11, pp. 1769–1780, Aug. 2006.
[8] O. Saleh and M. Hefeeda, “Modeling and Caching of Peer-to-Peer Traffic,” in ICNP 2006, pp. 249 – 258.
[9] M. F. Bari, S. R. Chowdhury, R. Ahmed, R. Boutaba, and B. Mathieu, “A survey of naming and routing in information-centric networks,” IEEE Communications Magazine, vol. 50, no. 12, pp. 44–53, 2012.
[10] T. Koponen, M. Chawla, B.-G. Chun, A. Ermolinskiy, K. H. Kim, S. Shenker, and I. Stoica, “A data-oriented (and beyond) network architecture,” SIGCOMM Comput. Commun. Rev., vol. 37, pp. 181–192, August 2007.
[11] C. Dannewitz, “NetInf: An Information-Centric Design for the Future Internet,” in Proc. GI/ITG KuVS Workshop on The Future Internet, 2009.
[12] K. Visala, D. Lagutin, and S. Tarkoma, “LANES: an inter-domain data-oriented routing architecture,” in Proc ReArch 2009, pp. 55–60.
[13] V. Jacobson, D. K. Smetters, J. D. Thornton, M. F. Plass, N. H. Briggs, and R. Braynard, “Networking named content,” in CoNEXT, 2009.
[14] A. Carzaniga, M. J. Rutherford, and A. L. Wolf, “A Routing Scheme for Content-Based Networking,” in INFOCOM 2004.
[15] D. Cheriton and M. Gritter, “TRIAD: a scalable deployable NAT-based Internet architecture,” Technical Report, January 2000.
Anodes, cathodes, and separators for batteries (electrochemical energy storage devices). The anodes are Li metal anodes having lithiated carbon films (Li-MWCNT) (as dendrite suppressors and protective coatings for the Li metal anodes). The cathodes are sulfurized carbon cathodes. The separators are GNR-coated (or modified) separators. The invention includes each of these separately (as well as in combination both with each other and with other anodes, cathodes, and separators) and the methods of making each of these separately (and in combination). The invention further includes a battery that uses at least one of (a) the anode having a lithiated carbon film, (b) the sulfurized carbon
cathode, and (c) the GNR-modified separator in the anode/cathode/separator arrangement. For instance, a full battery can include the sulfurized carbon cathode in combination with the Li-MWCNT anode or a full battery can include the sulfurized carbon cathode in combination with other anodes (such as a GCNT-Li anode).
4 Claims, 42 Drawing Sheets
(51) Int. Cl.
H01M 4/04 (2006.01)
H01M 4/36 (2010.01)
H01M 4/38 (2006.01)
H01M 4/58 (2010.01)
H01M 4/583 (2010.01)
H01M 4/62 (2006.01)
H01M 10/0525 (2010.01)
H01M 50/451 (2021.01)
(52) U.S. Cl.
CPC ............ H01M 4/36 (2013.01); H01M 4/38 (2013.01); H01M 4/58 (2013.01); H01M 4/583 (2013.01); H01M 4/625 (2013.01); H01M 10/0525 (2013.01); H01M 50/451 (2021.01); H01M 2004/028 (2013.01); Y02T 10/70 (2013.01)
References Cited
U.S. PATENT DOCUMENTS
6,326,104 B1 12/2001 Cain et al.
6,576,370 B1 6/2003 Nakagiri et al.
8,665,539 B2 3/2014 Fischer et al.
8,735,873 B2 5/2014 Huang et al.
9,096,437 B2 8/2015 Tour et al.
9,455,094 B2 9/2016 Tour et al.
9,885,487 B2 2/2018 Singh et al.
10,023,234 B2 3/2018 Eitouni et al.
10,044,064 B2 8/2018 Eitouni et al.
10,056,618 B2 8/2018 Li et al.
10,151,321 B2 12/2018 Kim et al.
2003/0136241 A1 7/2003 Kim et al.
2003/0118908 A1 6/2003 Ishikawa et al.
2009/0053594 A1 2/2009 Johnson et al.
2009/0246625 A1* 10/2009 Lu ..................... H01M 4/1393 977/734
2011/0183206 A1 7/2011 Davis et al.
2011/0262807 A1 10/2011 Boren et al.
2011/0318654 A1* 12/2011 Janssen .............. H01M 4/622 429/338
2012/0077084 A1 3/2012 Christensen et al.
2012/0171574 A1 7/2012 Zhamu et al.
2012/0231426 A1 9/2012 Baxwal et al.
2013/0050420 A1 3/2013 Chen et al.
2013/0157128 A1 6/2013 Solan et al.
2013/0164626 A1 6/2013 Manthiram et al.
2013/0175102 A1 7/2013 Chen et al.
2013/0190082 A1 7/2013 Kozlak et al.
2013/0196235 A1 8/2013 Prieto et al.
2013/0202961 A1 8/2013 Hagen et al.
2013/0220817 A1 8/2013 Walker et al.
2013/0224594 A1 8/2013 Yushin et al.
2013/0244007 A1* 9/2013 Leitner ............. H01M 4/1397 429/188
2013/0260246 A1 10/2013 Chen et al.
2014/0015474 A1 1/2014 Tour et al.
2014/0147738 A1 5/2014 Chen et al.
2014/0178688 A1 6/2014 Tour et al.
2014/0313636 A1* 10/2014 Tour ..................... H01G 11/36 361/502
2014/0332731 A1 11/2014 Ma et al.
FOREIGN PATENT DOCUMENTS
CN 105515646 A 1/2014
104362394 A 2/2015
105350054 A 2/2016
105789563 A 7/2016
JP 2005294028 A 10/2005
WO 201011011 A1 12/2010
WO 2015084945 A1 6/2015
WO 2016201101 A1 12/2016
WO 2017011052 A2 1/2017
WO 2017012309 A1 3/2017
WO 2017062950 A1 4/2017
WO 2017120391 A1 7/2017
WO 2017164963 A2 9/2017
WO 2018045226 A1 3/2018
OTHER PUBLICATIONS
Officer Maruska Galatiote; International Search Report and Written Opinion; PCT/US2017/049719; date of mailing Oct. 20, 2017; 13 pages.
Armand, M. et al. “Building Better Batteries”, Nature 2008, 451 (7179), 652-657 (“Armand 2008”), 6 pages.
Aurbach, D. et al. “A Short Review of Failure Mechanisms of Lithium Metal and Lithiated Graphitic Anodes in Liquid Electrolyte Solutions” Solid State Ionics 2002, 148, 403-416 (“Aurbach 2002”), 12 pages.
Bai, P. et al. “Transition of Lithium Growth Mechanisms in Liquid Electrolytes” Energy Environ. Sci. 2016, 9, 3221-3229 (“Bai 2016”), 9 pages.
Basile, A. et al. “Stabilizing Lithium Metal Using Ionic Liquids for Long-Lived Batteries”. Nature Comm. 2016, 7, 11794, 11 pages.
Bates, J. et al. “Fabrication and Characterization of Amorphous Lithium Electrolyte Thin Films and Rechargeable Thin-Film Batteries”. J. Power Sources 1993, 43 (1-3), 103-110 (“Bates 1993”), 8 pages.
Bauer, I. et al. “Thin-film Lithium and Lithium-Ion Batteries”, Solid State Ionics 2000, 135, 33-45; 13 pages.
Besenhard, J. et al. “Inorganic Film-Forming Electrolyte Additives Improving the Cycling Behavior of Metallic Lithium Electrodes and the Self-Discharge of Carbon-Based Electrodes”. J. Power Sources 1996, 44 (1-3), 413-420 (“Besenhard 1996”), 8 pages.
Bouchet, R. “Batteries: A Stable Lithium Metal Interface”. Nat. Nanotechnol. 2014, 9, 572-573 (“Bouchet 2014”), 2 pages.
Bouchet, R. et al. “Single-Ion BAB Triblock Copolymers as Highly Efficient Electrolytes for Lithium-Metal Batteries”, Nature Mater. 2013, 12, 47-54, 8 pages.
Bruce, P. et al. “Li–O2 and Li–S Batteries with High Energy Storage”. Nat. Mater. 2011, 11 (2), 172-172 (“Bruce 2011”), 2 pages.
Cavallio, L. et al. “A free-standing reduced graphene oxide aerogel as supporting electrode in a fluorine-free Li2S8 catholytic Li—S battery”. Journal of Power Sources, Feb. 5, 2010, 7 pages.
Chebam, R. et al. “Comparison of the chemical stability of the high energy density cathodes of lithium-ion-batteries,” Electrochemistry Communications 2001, 3 (11), 624-627. 4 pages.
Cheon, S. et al. “Rechargeable Lithium Sulfur Battery: II. Rate Capability and Cycle Characteristics,” Journal of The Electrochemical Society 2003, 150 (6), A800-A805, 7 pages.
Claye, A. et al. “Solid-State Electrochemistry of the Li Single Wall Carbon Nanotube System”. J. Electrochem. Soc. 2000, 147, 2845-2852 (“Claye 2000”), 9 pages.
Cohen, Y. et al. “Micromorphological Studies of Lithium Electrodes in Alkyl Carbonate Solutions Using In Situ Atomic Force Microscopy”. J. Phys. Chem. B 2000, 104 (51), 12282-12291 (“Cohen 2000”), 10 pages.
Crowther, O. et al. “Effect of Electrolyte Composition on Lithium Dendrite Growth”. J. Electrochem. Soc. 2008, 155, A806-A811 (“Crowther 2008”), 7 pages.
Ding, F. et al. “Dendrite-Free Lithium Deposition via Self-Healing Electrostatic Shield Mechanism”. J. Am. Chem. Soc. 2013, 135 (11), 4450-4456 (“Ding II 2013”), 7 pages.
Ding, F. et al. “Effects of Carbonate Solvents and Lithium Solus on Morphology and Coulombic Efficiency of Lithium Electrode”. J. Electrochem. Soc. 2013, 160 (10), A1894-A1901 (“Ding I 2013”), 9 pages.
Dresselhaus, M. et al. “Raman Spectroscopy on Isolated Single Wall Carbon Nanotubes”. Carbon 2002, 40, 2043-2061 (“Dresselhaus 2002”), 19 pages.
Dunn, B. et al. “Electrical Energy Storage for the Grid: A Battery of Choices”. Science (80.) 2011, 334 (6058), 928-935 (“Dunn 2011”), 9 pages.
Ebbesen, T. et al. “Electrical Conductivity of Individual Carbon Nanotubes”. Nature 1996, 382, 54-56 (“Ebbesen 1996”), 3 pages.
Evarts, E. “Lithium Batteries: To the Limits of Lithium”. Nature 2015, 526, S93-S95 (“Evarts 2015”) 4 pages.
Girishkumar, G. et al. “Lithium-Air Battery: Promise and Challenges”. J. Phys. Chem. Lett. 2010, 1 (14), 2193-2203 (“Girishkumar 2010”), 11 pages.
Goodenough, J. et al., “The Li-Ion Rechargeable Battery: A Perspective”. J. Am. Chem. Soc. 2013, 135 (4), 1167-1176 (“Goodenough 2013”) 10 pages.
Hao, X. et al. “Ultrastrong Polyacrylate Nanofiber Membranes for Dendrite-Proof and Hermetic Battery Separators”. Nano Lett. 2016, 16, 2981-2987 (“Hao 2016”), 7 pages.
Hirai, T. et al. “Effect of Additives on Lithium Cycling Efficiency”. J. Electrochem. Soc. 1994, 141, 2300-2305 (“Hirai 1994”), 7 pages.
Hirai, F. et al. “Lithiation Strategies for Rechargeable Energy Storage: Nanophysics Concepts, Promises and Challenges”. Batteries. Jan. 23, 2018, 39 pages.
Hou, J. et al “Graphene-based electrochemical energy conversion and storage: fuel cells, supercapacitors and lithium ion batteries”, Synthetic Chemistry Chemical Physics. vol. 13, No. 34, Jan. 1, 2011, pp. 15384-15402, 19 pages.
Hutchinson, M. “New chemistry promises better lithium sulfur batteries”. PV Magazine. Jun. 22, 2020, 5 pages.
Ji, X. et al. “Advances in Li–S batteries”. Journal of Materials Chemistry 2010, 20 (44), 9821-9826; 6 pages.
Jin, S. et al. “Covalent Conjugated Carbon Nanostructures for Current Collectors in Both the Cathode and Anode of Li–S Batteries”. Adv. Mater. 2016, 28, 9094-9102 (“Jin 2016”), 9 pages.
Jin, S. et al. “Efficient Activation of High-Loading Sulfur by Small CNFs Confined Inside a Large CNT for High-Capacity and High-Rate Lithium-Sulfur Batteries”. Nano Lett. 2015, acs.nanolett.5b04105 (“Jin 2015”), 8 pages.
Kaneko, N. et al. “Lithium Superionic Conductor”, Nature Mater. 2011, 10, 682; 5 pages.
Kang, N. et al. “Cathode porosity is a missing key parameter to optimize lithium-sulfur battery energy density”. Nature Communications. 2017, 10, 10 pages.
Kanno, R. et al. “Lithium Ionic Conductor Thio-LISICON: The Li2S–GeS2–P2S5 System”. J. Electrochem. Soc. 2001, 148, A742, 6 pages.
Kim, J. et al. “Controlled Lithium Dendrite Growth by a Synergistic Effect of Multilayered Graphene Coating and an Electrolyte Additive”. Chem. Mater. 2015, 27 (8), 2780-2787 (“Kim 2015”), 8 pages.
Kim, M. et al. “A fast and efficient pre-doping approach to high energy density lithium-ion hybrid capacitors”. Journal of Materials Chemistry A Of The Royal Society Of Chemistry. Mar. 2014, 6 pages (10029-10033); 6 pages.
Kozen, A. et al. “Next-Generation Lithium Metal Anode Engineering via Atomic Layer Deposition”, ACS Nano 2015, 9 (6), 5884-5892 (“Kozen 2015”), 9 pages.
Landi, B. et al. “Carbon Nanotubes for Lithium Ion Batteries”. Energy Environ. Sci. 2009, 2, 638-654 (“Landi 2009”), 18 pages.
Landi, B. et al. “Lithium Ion Capacity of Single Wall Carbon Nanotube Paper Electrodes”. J. Phys. Chem. C 2008, 112, 7509-7515 (“Landi 2008”); 7 pages.
Lee, H. et al. “Simple Composite Protective Layer Coating that Enhances the Cyclic Stability of Lithium Metal Batteries”. J. Power Sources 2015, 284, 102-108. (“Lee 2015”), 6 pages.
Li, F. et al. “Identification of the Constituents of Double-Walled Carbon Nanotubes Using Raman Spectra Taken with Different Laser-Excitation Energies”. J. Mater. Res. 2003, 18, 1251-1258 (“Li 2003”), 9 pages.
Li, N. et al. “An Artificial Solid Electrolyte Interphase Layer for Stable Lithium Metal Anodes”. Adv. Mater. 2016, 28(9), 1853-1858 (“Li 2016”), 7 pages.
Li, Y. et al. “The Synergetic Effect of Lithium Polysulfide and Lithium Nitrate to Prevent Lithium Dendrite Growth”. Nat. Commun. 2015, 6 (May), 7436 (“Li 2015”), 8 pages.
Liang, Z. et al. “Composite Lithium Metal Anode by Melt Infusion of Lithium into a 3D Conducting Scaffold with Lithiophilic Coating”. Proc. Natl. Acad. Sci. U. S. A. 2016, 113, 2862-2867 (“Liang 2016”), 6 pages.
Lin, D. et al. “Layered Reduced Graphene Oxide with Nanoscale Interlayer Gaps as a Stable Host for Lithium Metal Anodes”, Nat. Nanotechnol. 2016, 11, 626-632 (“Lin 2016”), 8 pages.
Lin, D. et al. “Reviving the Lithium Metal Anode for High-Energy Batteries”, Nat. Publ. Gr. 2017, 12 (3), 194-206 (“Lin 2017”), 13 pages.
Lin, D. et al. “Three-Dimensional Stable Lithium Metal Anode with Nanoscale Lithium Islands Embedded in Ionically Conductive Matrix”, Proc. Natl. Acad. Sci. U. S. A. 2017, 114, 4613-4618 (“Lin II 2017”), 6 pages.
Lin, J. et al. “3-Dimensional Graphene Carbon Nanotube Carpet-Based Microsupercapacitors with High Electrochemical Performance”. Nano Lett. 2013, 13, 72-82 (“Lin 2013”), 7 pages.
Lin, Y. et al. “Artificial Solid Electrolyte Interphases with High Li-Ion Conductivity, Mechanical Strength, and Flexibility for Stable Lithium Metal Anodes”. Adv. Mater. 2017, 29, 1605531 (“Lin 2017”), 8 pages.
Liu, Y. et al. “Lithium-Coated Polymer Matrix as a Minimum Volume Electrolyte and Durable Lithium Metal Anode”, Nat. Commun. 2016, 7, 10992 (“Liu 2016”), 9 pages.
Lu, J. et al. “Free-Standing Copper Nanowire Network Current Collector for Improving Lithium Anode Performance”, Nano Lett. 2016, 16, 4431; 7 pages.
Lu, Y. et al. “Stable Lithium Electrodeposition in Liquid and Nanoporous Solid Electrolytes”. Nat. Mater. 2014, 13, 961-969 (“Lu 2014”), 9 pages.
Luo, C. et al. “A chemically stabilized sulfur cathode for lean electrolyte lithium sulfur batteries”, Proceedings of the National Academy of Sciences (PNAS.org). May 15, 2020, 9 pages.
Mahmood, A. et al. “Nanoscale Lithium Anode Materials for Lithium Ion Batteries: Progress, Challenge and Perspective”. Adv. Energy Mater. 2016, 6, 1600374 (“Mahmood 2016”), 22 pages.
Manthiram, A. et al. “Lithium-Sulfur Batteries: Progress and Prospects”. Adv. Mater. 2015, 27 (12), 1980-2006 (“Manthiram 2015”), 27 pages.
Mikhaylik, Y. et al. “Polysulfide Shuttle Study in the Li/S Battery System”. Journal of The Electrochemical Society 2004, 151 (11), A1969-A1976, 9 pages.
Murugan, R. et al. “Fast Lithium Ion Conduction in Garnet-Type Li7La3Zr2O12”. Angew. Chem. Int. Ed. 2007, 46, 7778; 4 pages.
Noorden, R. “The Rechargeable Revolution: A Better Battery”. Nature 2014, 507, 26-28 (“Noorden 2014”), 3 pages.
Osaka, T. et al. “Surface Characterization of Electrodeposited Lithium Anode with Enhanced Cycleability Obtained by CO2 Addition”, J. Electrochem. Soc. 1997, 144 (5), 1709 (“Osaka 1997”), 6 pages.
Peigney, A. et al. “Specific Surface Area of Carbon Nanotubes and Bundles of Carbon Nanotubes” Carbon 2001, 39, 507-514 (“Peigney 2001”), 9 pages.
Qian, D. et al. “High Rate and Stable Cycling of Lithium Metal Anodes”. Nat. Commun. 2015, 6, 6362 (“Qian 2015”), 9 pages.
Ren, Z. et al. “Synthesis of Large Arrays of Well-Aligned Carbon Nanotubes on Glass”. Science 1998, 282, 1105-1107 (“Ren 1998”), 4 pages.
Roy, P. et al. “Nanostructured Anode Materials for Lithium Ion Batteries”. J. Mater. Chem. A 2015, 3, 2454-2484 (“Roy 2015”), 31 pages.
Salvatierra, R. et al. “Graphene Carbon Nanotube Carpets Grown Using Binary Catalysts for High-Performance Lithium-Ion Capacitors”. ACS Nano 2017, 11, 2724-2733 (“Salvatierra 2017”), 10 pages.
Stone, G. et al. “Resolution of the Modulus Versus Adhesion Dilemma in Solid Polymer Electrolytes for Rechargeable Lithium Metal Batteries”. J. Electrochem. Soc. 2001, 159, A222, 7 pages.
Su, Y. et al. “Lithium-ion batteries with a microporous carbon paper as bifunctional interlayer.” Nature Communications 2012, 3, 1166, 6 pages.
Sun, Z. et al. “Large-Area Bernal-Stacked Bi-, Tri-, and Tetralayer Graphene”. ACS Nano 2012, 6, 9790-9796 (“Sun 2012”), 7 pages.
Thess, A. et al. “Cryogenic Ropes of Metallic Carbon Nanotubes”. Science 1996, 273, 483-486 (“Thess 1996”), 6 pages.
Tung, S. et al. “A Dendrite-Suppressing Composite Ion Conductor from Aramid Nanofibres”. Nat. Commun. 2015, 6, 6152 (“Tung 2015”), 7 pages.
Wang, C. et al. “Suppression of Lithium-Dendrite Formation by Using a Ag/CoPdO (LiTFSI) Composite Solid Electrolyte and Lithium Metal Anode Modified by PEO (LiTFSI) in All-Solid-State Lithium Batteries”. ACS Appl. Mater. Interfaces 2017, acsmi.7b00336 (“Wang 2017”), 9 pages.
Wei, S. et al. “Metal-Sulfur Battery Cathodes Based on Pan-Sulfur Composites”. J. Am. Chem. Soc. 2015, 137, 12143-12152 (“Wei 2015”), 10 pages.
Whittingham, M. “History, Evolution, and Future Status of Energy Storage”. Proc. IEEE 2012, 100 (Special Centennial Issue), 1518-1534 (“Whittingham 2012”), 17 pages.
Wikipedia: Lithium metal battery. Retrieved from https://en.wikipedia.org/wiki/Lithium-metal_battery&oldid=963354052, last edited on Jun. 19, 2020, at 10:29 (UTC). 9 pages.
Xu, W. et al. “Lithium Metal Anodes for Rechargeable Batteries”, Energy Environ. Sci. 2014, 7 (2), 513-537 (“Xu 2014”), 25 pages.
Yan, K. et al. “Selective Deposition and Stable Encapsulation of Lithium through Interfacial Seed Seeded Growth”. Nat. Energy 2016, 1, 16010 (“Yan 2016”), 8 pages.
Yan, Z. et al. “Three-Dimensional Metal Graphene Nanotube Multifunctional Hybrid Materials”. ACS Nano 2013, 7, 58-64. DOI: 10.1021/nn3015882; 7 pages.
Yang, C. et al. “Accommodating Lithium into 3D Current Collectors with a Submicron Skeleton Towards Long-Life Lithium Metal Anodes”. Nat. Commun. 2015, 6, 8058 (“Yang 2015”), 9 pages.
Yang, Y. et al. “Nanostructures for Lithium Batteries”. Chem Soc Rev of The Royal Society of Chemistry 2018, 47(18):3018-3032; 15 pages.
Yazami, R. et al. “A Reversible Graphite-Lithium-Ion Battery Electrode for Electrochemical Generators”. J. Power Sources 1983, 9, 365-371 (“Yazami 1983”), 7 pages.
Zhang, J. et al. “Three-Dimensional Bicrystallin Ultrafast-Charge and Discharge Bulk Lithium Electrodes”. Nat. Nanotechnol. 2011, 6, 277-281 (“Zhang 2011”), 5 pages.
Zhang, J. et al. “Lithium Metal Anodes and Rechargeable Lithium Metal Batteries”. In: ed. Hull, R. et. Eds.; Springer International Publishing AG; 2017 (“Zhang 2017”), 10 pages.
Zhang, R. et al. “Conductive Nanostructured Scaffolds Render Low Local Current Density to Inhibit Lithium Dendrite Growth”. Adv. Mater. 2016, 28, 2155-2162 (“Zhang 2016”), 8 pages.
Zhang, S. “Sulfurized carbon: a class of cathode materials for high performance lithium-sulfur batteries.” Frontiers in Energy Research; Dec. 2013, 10 pages.
Zhang, S. et al. “Charge and Discharge Characteristics of a Commercial LiCoO2-Based 18650 Li-ion Battery”. J. Power Sources 2006, 160, 1403-1409 (“Zhang 2006”), 7 pages.
Zheng, Y. et al. “Carbon-Based Current Collector with Surface Protection for Li Metal Anode”. Nano Res. 2017, 10, 1356-1365 (“Y. Zhang 2017”), 11 pages.
Zheng, Y. et al. “High-Capacity, Low-Tortuosity, and Channel-Guided Lithium Metal Anode”. Proc. Natl. Acad. Sci. U. S. A. 2017, 114, 3580-3586 (“Y. Zhang 2017”), 7 pages.
Zheng, G. et al. “Interconnected Hollow Carbon Nanospheres for Stable Lithium Metal Anodes”. Nat. J. Nanotechnol. 2014, advance on (8), 618-623 (“Zheng 2014”), 6 pages.
Zhou, W. et al. “Plating a Dendrite-Free Lithium Anode with a Polymer-Centered Polymeric-Solvated Electrolyte”. J. Am. Chem. Soc. 2016, 138 (30), 9385-9388 (“Zhou 2016”), 4 pages.
Zhu, Y. et al. “A seamless three-dimensional carbon nanotube graphene hybrid material.” Nature Communications 2012, 3, 1225, 7 pages.
European Patent Office, European Search Report for Application No. EP18845141, dated May 9, 2019, 9 pages.
International Searching Authority, International Preliminary Report on Patentability for PCT/US2016/056270, mailed Apr. 10, 2018, 9 pages.
International Searching Authority, International Search Report and Written Opinion for PCT/US2016/056270, mailed on Dec. 22, 2016, 11 pages.
Unpublished U.S. Appl. No. 17/061,223 “Alkali-Metal Anode with Alloy Coating Applied by Friction”, Tour, J., et al., filed Oct. 1, 2020.
Examination Report and Search Report from the Taiwan Intellectual Property Office (TIPO) for Patent Application No. 106 129755, dated Oct. 2021, 21 pages.
China National Intellectual Property Administration, Notice on the First Office Action for CN Application No. 201780067483.9; date of mailing May 24, 2021, 13 pages.
Decision of the Intellectual Property Office, Reasons for the Rejection for Taiwan Patent Application No. 106129755; dated Oct. 5, 2022, 2 pages.
* cited by examiner
[Drawing sheets (FIGS. 1–29B) omitted; only the recoverable panel titles and axis labels are summarized here. They include voltage-vs.-time cycling plots for bare Li and Li-MWCNT electrodes (FIGS. 5–8), in situ Raman spectra recorded during discharge and charge (FIGS. 9A–9B), mass loss vs. temperature (FIG. 12), voltage/specific-capacity profiles and cycling data for SC/GNR cathodes in 4 M LiFSI in DME and 1 M LiPF₆ in EC:DEC electrolytes (FIGS. 13–19), electrochemical data for GCNT-Li/SC full cells, including capacity, energy density, and power density (FIGS. 23A–23F), a schematic of Li-MWCNT formation from Li foil and an MWCNT film in 4 M LiFSI/DME (FIGS. 24A–24C), spectroscopic characterization (FIGS. 25A–26B), rate-dependent cycling of rLiSC cells (FIGS. 27–28D), and energy-density/power-density and impedance plots (FIGS. 29A–29B). FIG. 3 is labeled Prior Art.]
ANODES, CATHODES, AND SEPARATORS FOR BATTERIES AND METHODS TO MAKE AND USE SAME
CROSS-REFERENCE TO RELATED PATENT APPLICATIONS
This application is a 35 U.S.C. § 371 national application of PCT Application No. PCT/US17/49719, filed on Aug. 31, 2017, entitled “Anodes, Cathodes, And Separators For Batteries And Methods To Make And Use Same”, which claims priority to U.S. Patent Appl. 62/381,782, entitled “Sulfurized Carbon As Stable High Capacity Cathodes In High Concentrated Electrolytes”, filed Aug. 31, 2016; and U.S. Patent Appl. 62/460,985, entitled “Anodes, Cathodes, And Separators For Batteries And Methods To Make And Use Same,” filed Feb. 20, 2017, which patent applications are commonly owned by the owner of the present invention. These patent applications are hereby incorporated by reference in their entirety for all purposes.
GOVERNMENT INTEREST
This invention was made with government support under Grant Nos. FA9550-14-1-0111 and FA9550-12-1-0035, awarded by the U.S. Department of Defense, Air Force Office of Scientific Research. The United States government has certain rights in the invention.
FIELD OF INVENTION
Anodes, cathodes, and separators for batteries (electrochemical energy storage devices), and more particularly (a) Li metal anodes having lithiated carbon films (as dendrite suppressors and protective coatings for the Li metal anodes), (b) sulfurized carbon cathodes, and (c) graphene nanoribbon (GNR) coated (or modified) separators. This includes the methods of making each of these anodes, cathodes, and separators, and the methods of using each of these alone or in combination with one another, such as in batteries.
BACKGROUND OF INVENTION
Lithium-ion batteries are today’s energy storage technology of choice for electronic devices and electric vehicles. Since its commercialization in 1991, the lithium-ion battery (LIB) has enabled wireless electronic devices, revolutionizing global communications. Almost three decades later, the LIB is expected to facilitate the integration of renewable energy into the electrical grid, as well as to allow affordable electric transportation. [Goodenough 2013; Noorden 2014; Dunn 2011; J. Zhang 2017]. However, these applications demand energy storage capabilities that LIBs may be unable to meet because of their limited cell energy density. Therefore, new battery chemistries with higher energy densities, such as lithium-air (Li—O$_2$) and lithium-sulfur (Li—S), have attracted the attention of the scientific community, together with efforts to resolve the current limitations of pure lithium anodes, commonly called lithium metal anodes. Li-air (Li—O$_2$) and Li-sulfur (Li—S) systems have shown great promise, as their energy densities are almost one order of magnitude higher than that of the LIB. [Noorden 2014; Bruce 2011; Girishkumar 2010; Manthiram 2015; Armand 2008].
In Li-air and Li-sulfur batteries, the positive electrode or cathode is coupled with Li as the negative electrode or anode. [Bruce 2011] Li metal (as opposed to the lithium ion, Li$^+$) possesses one of the highest theoretical specific capacities (3,860 mAh g$^{-1}$) and the lowest electrochemical potential (-3.040 V vs. the standard hydrogen electrode) of all candidate anode materials [Xu 2014], far surpassing the graphite anodes presently used in LIBs. While Li metal was extensively investigated as an anode material in the late 1980s, safety issues associated with its use and the short life of the battery hampered its commercialization, and Li metal was eventually replaced by graphite and lithium-ion intercalation. [Goodenough 2013; Xu 2014; Whittingham 2012]. The growing demand for energy storage has prompted renewed attempts to overcome the safety and lifetime issues of Li metal anodes.
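The theoretical specific capacities quoted above follow directly from Faraday’s law. As an illustrative sketch (not part of the patent; the Faraday constant and molar masses are standard reference values):

```python
# Theoretical specific capacity from Faraday's law:
#   C (mAh/g) = n * F / (3.6 * M)
# where n = electrons transferred per formula unit, F = Faraday constant
# (96485 C/mol), M = molar mass (g/mol), and 3.6 converts C/g to mAh/g.
F = 96485.0  # C/mol

def specific_capacity_mAh_per_g(n_electrons: float, molar_mass: float) -> float:
    return n_electrons * F / (3.6 * molar_mass)

# Li metal: Li -> Li+ + e-, M = 6.94 g/mol  -> ~3,860 mAh/g, as quoted above
li = specific_capacity_mAh_per_g(1, 6.94)

# Graphite for comparison: LiC6 -> 6 C + Li+ + e-, M = 6 * 12.011 g/mol
graphite = specific_capacity_mAh_per_g(1, 6 * 12.011)

print(f"Li metal: {li:.0f} mAh/g, graphite: {graphite:.0f} mAh/g")
```

This makes concrete why Li metal “far surpasses” graphite: roughly a tenfold difference in theoretical specific capacity.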
A main challenge of the Li metal anode is its tendency to form whisker- and needle-like structures, often called “dendrites,” during the charging process. These dendrites can either isolate Li, shortening the life of the battery, or penetrate through the separator, forming an internal short circuit. The formation of dendrites is related to the reactivity between Li and the electrolyte; the low electrochemical potential of Li makes possible the instantaneous reduction of the electrolyte on its surface, creating a passive layer, the solid electrolyte interphase (SEI). This SEI layer is typically inhomogeneous and can easily break as a result of the volume change during the charge-discharge cycle, which promotes the growth of dendrites through the fractures, as well as the production of dead Li (electrically isolated Li). [Xu 2014; Cohen 2000; Lin I 2017].
Thus, in spite of these advantages, the practical application of Li metal in commercial batteries has been hindered by the safety concerns associated with Li dendrite growth upon repeated charge/discharge cycling. In contact with the electrolyte, Li forms an inhomogeneous solid electrolyte interphase, which provides nucleation sites for dendrite formation at any current density, in addition to the parasitic reactions that occur. These issues give rise to low coulombic efficiency, gap formation between the anode and the interface layer, depletion of the electrolyte, and short circuits that can generate fires and explosions.
The different approaches that have been tried to suppress the formation of Li dendrites can be divided into three categories. The first category is based on modification of the electrolyte composition to improve ion transport and, consequently, the protection of the SEI. [Qian 2015; Besenhard 1996; Ding I 2013; Oyama 2017; Lee 2015; Jin 2015; Ding II 2013]. The second involves the development of solid electrolytes that act as barriers to stop dendrite propagation without compromising ion transport. [Bates 1993; Zhou 2016; Wang 2017]. The third category focuses on protecting the Li-electrolyte interface by forming a protective layer that controls the Li deposition. [Zheng 2014; Lee 2015; Kim 2015; Kozen 2015; Li 2016]. To ensure a homogeneous deposition of Li, this protective layer needs to be electrically conductive, mechanically stable, and able to control the flow of Li ions. Different carbon materials, metal oxides, and polymers have been used and proven to form a stable protective layer that prevents Li dendrite formation. Nevertheless, current collectors and complex fabrication methods are usually required, and in most cases a gap still forms between the anode and the protective layer, giving rise to dendrite formation.
Another interesting strategy is the use of three-dimensional (3D) porous frameworks as host structures for Li metal. In this approach, Li metal is electrodeposited in a 3D structure, where it is accommodated and distributed in the empty volume of the porous framework, which reduces local current density and minimizes Li dendrite formation. [Zhang 2016; Yang 2015; Y. Zhang I 2017; Lin 2016; Liu 2016; Lin II 2017; Y. Zhang II 2017; Tour PCT '052 application]. The use of scaffolds or 3D frameworks implies that the gravimetric or volumetric capacity of the Li metal anode is reduced by including the mass or volume of the framework component. An ideal framework structure for Li dendrite suppression would involve a high-surface-area, low-density material with a homogeneously conductive surface for Li deposition that would maximize the gravimetric capacity of the Li metal anode. In addition, a non-tortuous path for Li plating/stripping is desired for reversible operation and high-rate applications.
Furthermore, improved high-capacity cathodes are desired for lithium ion batteries (LIBs) to achieve batteries with improved energy density and cost effectiveness. Commercially available cathodes such as lithiated metal oxides (e.g., LiCoO$_2$, LiMnO$_2$, LiFePO$_4$, and the like) present comparatively low gravimetric capacity. However, such cathodes display high-voltage operation (>3 V vs. Li/Li$^+$) during charge/discharge processes, thereby leading to batteries with high energy density compared to other battery technologies.
Newer cathodes such as those based on elemental sulfur can lead to much higher energy density because the specific capacity to store Li ions is much higher (1675 mAh g$^{-1}$) compared to lithiated metal oxide cathodes (<200 mAh g$^{-1}$), even though sulfur's voltage operation is lower (~2.1 V vs. Li/Li$^+$).
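The trade-off between sulfur’s higher capacity and lower operating voltage can be made concrete with a cathode-material-only figure of merit. A sketch (the sulfur values are the ones quoted above; the LiCoO$_2$ values are typical literature figures, not from the patent, and cell-level energy densities are far lower once anode, electrolyte, and packaging mass are included):

```python
# Cathode-material-only energy figure of merit:
#   E (Wh per kg of cathode material) = specific capacity (mAh/g) * avg voltage (V)
# since mAh/g * V is numerically equal to Wh/kg.
def cathode_energy_Wh_per_kg(capacity_mAh_per_g: float, avg_voltage: float) -> float:
    return capacity_mAh_per_g * avg_voltage

# Sulfur: 1675 mAh/g at ~2.1 V vs. Li/Li+ (figures quoted in the text)
sulfur = cathode_energy_Wh_per_kg(1675, 2.1)

# LiCoO2: ~200 mAh/g ceiling at ~3.9 V vs. Li/Li+ (typical literature values)
lco = cathode_energy_Wh_per_kg(200, 3.9)

print(f"S: {sulfur:.0f} Wh/kg vs LiCoO2: {lco:.0f} Wh/kg "
      f"({sulfur / lco:.1f}x at the material level)")
```

Even after the voltage penalty, sulfur retains a severalfold advantage at the material level, which is the basis of the “much higher energy density” claim above.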
Sulfur cathodes are also especially attractive considering their cost, low toxicity and abundance when compared to metal oxide cathodes. However, the challenge posed by sulfur cathodes is to control and suppress the "shuttle" effect, by which lithiated species of sulfur, namely lithium polysulfides (i.e., linear chains of sulfur bonded to lithium ion at their ends, Li$_x$S$_y$ with x=2, y=4-8), can be dissolved into the battery electrolyte.
Moreover, the dissolution of Li polysulfides into the electrolyte has two main deleterious effects: first, it depletes sulfur content from the cathode; second, the Li polysulfides react cumulatively on the surface of the Li metal anode. These two factors lead to a fast capacity drop in Li–S batteries with a concomitant increase in resistance to lithium deposition.
The approaches to mitigate the Li polysulfide dissolution have been focused on four main strategies: (1) chemical/physical blocking barriers to slow/stop diffusion of Li polysulfides, (2) coated sulfur particles to suppress Li polysulfide dissolution, (3) solid electrolytes, and (4) sulfurized carbon species. Among these options, the sulfurized carbon species involves the chemical covalent bonding between sulfur and a carbon species, in which elemental sulfur is no longer present.
A separator is generally a porous membrane that functions to keep an anode and a cathode apart electrically while still allowing the transport of ionic charge carriers between them. Improved or modified separators are also desired to improve the cycling stability and decrease the self-discharge effect in batteries. The surface modification of the separator prevents the diffusion of undesirable materials between an anode and a cathode of a battery.
SUMMARY OF INVENTION
The present invention includes new anodes and new processes for modifying the Li metal surface, enabling its safe use in lithium metal batteries. The modification includes coating the Li metal surface with a free-standing thin film of multi-walled carbon nanotubes (MWCNTs) (or graphene nanoribbons, single-walled nanotubes, or ultrathin carbon films) and an electrolyte; the resulting electrode is referred to as "Li-MWCNT" (alternatively "MWCNT-Li" or "rLi$^+$"). ("rLi$^+$", i.e., "red lithium," is indicative of the MWCNT acquiring a dark red color as a result of the lithiation (doping) process.) The thin film is typically 20 to 80 μm in thickness, but it could be thinner or thicker as desired. This thin film coating becomes a lithiated carbon nanotube layer on top of the Li surface, driven by the surface reaction between the Li metal and the MWCNT film. The entire MWCNT thin film then becomes doped by the Li metal. The Li-doped MWCNT then becomes the surface that ejects Li ions toward the cathode upon discharging. This Li-doped MWCNT layer protects the underlying Li metal from parasitic reactions, preventing the formation of dendrites on the surface of the Li at practical current densities of 1 and 2 mA cm$^{-2}$ and high areal capacities, such as 2 and 4 mAh cm$^{-2}$, considering one side of the electrode. These numbers could be much broader and are merely illustrative. The lithiated MWCNT layer that is in direct contact with the lithium metal also eliminates the creation of potential gaps or inhomogeneities between the solid electrolyte interphase layer and the Li metal anode, because the MWCNT layer is electrostatically drawn to the lithium metal by the doping process, further reducing the possibility of dendrite formation and loss of coulombic efficiency.
Thus, among other things, the Li-MWCNT protects lithium from the electrolyte and lithium polysulfides in Li-S batteries. The Li-MWCNT can also be utilized in lithium-air (Li-O$_2$) batteries to protect the lithium from the dissolved oxygen.
In some embodiments, the present invention encompasses full batteries and new processes that combine these anodes with sulfurized carbon as stable, high-capacity cathodes. In some embodiments, the present invention includes a full battery (FB) that combines the GCNT-Li anode with a sulfurized carbon (SC) cathode with high sulfur content (up to 60 wt %). This affords a stable device with an operation voltage of >2 V, high energy density (752 Wh kg$^{-1}$, based on the total electrode mass (GCNT-Li+SC+ribbons)), high areal capacity (2 mAh cm$^{-2}$), and good cyclability (80% retention at >500 cycles), and the system is free of the Li polysulfides and dendrites that would cause severe capacity fade. In some embodiments, the full batteries of the present disclosure also include high concentration electrolytes. In some embodiments, the cathodes of the present disclosure also include additional additives, such as graphene nanoribbons (GNRs) (SC/GNR).
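For orientation, the theoretical ceiling behind a >2 V, 752 Wh kg$^{-1}$ device can be sketched from the figures already quoted (an average of ~2.1 V vs. Li/Li$^+$ and 1675 mAh g$^{-1}$ for sulfur); the stoichiometry below is an illustrative assumption, not a measured value:

```python
# Illustrative upper-bound energy density for the Li/S couple, using only the
# figures quoted in the text; real devices (e.g., 752 Wh/kg here) sit well
# below this ceiling once electrolyte, carbon, binder and excess Li are added.
V_avg = 2.1        # average discharge voltage vs Li/Li+, V
q_S = 1675.0       # theoretical sulfur capacity, mAh/g

e_S = V_avg * q_S  # Wh per kg of sulfur alone (~3520 Wh/kg)

# Add the lithium consumed per gram of S for full conversion to Li2S
# (2 mol Li per mol S; molar masses 6.94 and 32.06 g/mol).
m_Li_per_g_S = 2 * 6.94 / 32.06
e_pair = e_S / (1 + m_Li_per_g_S)  # Wh per kg of (Li + S) active mass

print(round(e_S), round(e_pair))
```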
The present invention further includes new separators and new processes for making separators having a thin coating of graphene nanoribbons (GNRs). This thin coating could also be made of MWCNTs, single-walled carbon nanotubes (SWCNTs), graphene that is not in a ribbon shape, graphene oxide, or another form of carbon that can form a barrier to prevent sulfur species from migrating through the membrane. Here, a ribbon is defined as having a length-to-width aspect ratio of at least 3:1.
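The ribbon definition above reduces to a simple aspect-ratio test; a trivial sketch (the example flake dimensions are hypothetical):

```python
def is_ribbon(length, width):
    """Per the definition above: a 'ribbon' has a length-to-width
    aspect ratio of at least 3:1 (units cancel, so any unit works)."""
    return length / width >= 3

# Hypothetical flakes: a 300 nm x 50 nm strip qualifies; a 90 nm x 50 nm one does not.
print(is_ribbon(300, 50), is_ribbon(90, 50))  # True False
```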
The present invention further includes batteries that include one or more of the anodes, cathodes, and separators described above and methods of using same.
In general, in one embodiment, the invention features a lithium metal anode that includes a lithium metal coated with a lithiated carbon material.
In general, in another embodiment, the invention features a cathode that includes a sulfurized carbon cathode.
In general, in another embodiment, the invention features a GNR-modified separator that includes a polymer material coated with a layer of GNRs. The GNR-modified separator is operable for use as a separator in a battery.
In general, in another embodiment, the invention features a battery that includes an anode, a cathode, and a separator positioned between the anode and the cathode. The battery comprises a component selected from the group consisting of: (a) a lithium metal anode that includes a lithium metal coated with a lithiated carbon material; (b) a cathode that includes a sulfurized carbon cathode; (c) a GNR-modified separator that includes a polymer material coated with a layer of GNRs in which the GNR-modified separator is operable for use as a separator in a battery; and (d) combinations thereof.
Implementations of the invention can include one or more of the following features:
- The battery can include the lithium metal anode that includes the lithium metal coated with the lithiated carbon material.
- The battery can include (a) the lithium metal anode that includes the lithium metal coated with the lithiated carbon material and (b) the cathode that includes the sulfurized carbon cathode.
- The battery can include (a) the lithium metal anode that includes the lithium metal coated with the lithiated carbon material, (b) the cathode that includes the sulfurized carbon cathode, and (c) the GNR-modified separator that includes a polymer material coated with a layer of GNRs in which the GNR-modified separator is operable for use as a separator in a battery.
- The battery can include the cathode that includes the sulfurized carbon cathode.
- The battery can include (a) the cathode that includes the sulfurized carbon cathode, and (b) the GNR-modified separator that includes a polymer material coated with a layer of GNRs in which the GNR-modified separator is operable for use as a separator in a battery.
- The battery can include a GNR-modified separator that includes a polymer material coated with a layer of GNRs in which the GNR-modified separator is operable for use as a separator in a battery.
In general, in another embodiment, the invention features a method that includes making a lithium metal anode. The method includes selecting a lithium metal having a surface. The method further includes coating the surface of the lithium metal with a carbon material and an electrolyte. The method further includes performing a reaction involving the lithium metal, the carbon material, and the electrolyte to form a lithiated carbon material on top of the lithium metal.
In general, in another embodiment, the invention features a method that includes making a sulfurized carbon cathode.
In general, in another embodiment, the invention features a method that includes selecting a polymer material operable for use as a separator in a battery, and modifying the polymer material by adding a layer of GNRs to form a GNR-modified separator.
In general, in another embodiment, the invention features a method of forming a battery that includes the steps of combining an anode, a cathode, and a separator positioned between the anode and cathode. The method further includes the step selected from the group consisting of: (a) making the lithium metal anode as set forth above, (b) making the sulfurized carbon cathode as set forth above; (c) making the GNR-modified separator as set forth above; and (d) combinations thereof.
Implementations of the invention can include one or more of the following features:
- In the method of forming the battery, the anode can be made by making the lithium metal anode as set forth above.
- In the method of forming the battery, (a) the anode can be made by making the lithium metal anode as set forth above and (b) the cathode can be made by making the sulfurized carbon cathode as set forth above.
- In the method of forming the battery, (a) the anode can be made by making the lithium metal anode as set forth above, (b) the cathode can be made by making the sulfurized carbon cathode as set forth above, and (c) the GNR-modified separator can be made as set forth above.
- In the method of forming the battery, the cathode can be made by making the sulfurized carbon cathode as set forth above.
- In the method of forming the battery, (a) the cathode can be made by making the sulfurized carbon cathode as set forth above and (b) the GNR-modified separator can be made as set forth above.
- In the method of forming the battery, the GNR-modified separator can be made as set forth above.
In general, in another embodiment, the invention features a method of forming a battery that includes the steps of combining an anode, a cathode, and a separator positioned between the anode and the cathode. The battery comprises a component selected from the group consisting of: (a) a lithium metal anode that includes a lithium metal coated with a lithiated carbon material; (b) a cathode that includes a sulfurized carbon cathode; (c) a GNR-modified separator that includes a polymer material coated with a layer of GNRs in which the GNR-modified separator is operable for use as a separator in a battery; and (d) combinations thereof.
Implementations of the invention can include one or more of the following features:
- The method of forming the battery can include the lithium metal anode that includes the lithium metal coated with the lithiated carbon material.
- The method of forming the battery can include (a) the lithium metal anode that includes the lithium metal coated with the lithiated carbon material and (b) the cathode that includes the sulfurized carbon cathode.
- The method of forming the battery can include (a) the lithium metal anode that includes the lithium metal coated with the lithiated carbon material, (b) the cathode that includes the sulfurized carbon cathode, and (c) the GNR-modified separator that includes a polymer material coated with a layer of GNRs in which the GNR-modified separator is operable for use as a separator in a battery.
- The method of forming the battery can include the cathode that includes the sulfurized carbon cathode.
- The method of forming the battery can include (a) the cathode that includes the sulfurized carbon cathode, and (b) the GNR-modified separator that includes a polymer material coated with a layer of GNRs in which the GNR-modified separator is operable for use as a separator in a battery.
- The method of forming the battery can include the GNR-modified separator that includes a polymer material coated with a layer of GNRs in which the GNR-modified separator is operable for use as a separator in a battery.
Implementations of the invention can include one or more of the following features:
- The lithium metal can be in the form of a lithium foil.
- The carbon material can include multi-walled carbon nanotubes.
- The multi-walled carbon nanotubes can be in the form of a bucky paper.
- The carbon material can include graphene nanoribbons.
- The nanoribbons can be in the form of a filtered nanoribbon paper.
- The carbon material can be selected from a group consisting of multi-walled carbon nanotubes, single-walled carbon nanotubes, few-walled carbon nanotubes, graphene nanoribbons, graphene oxide, graphene oxide nanoribbons, graphite, graphite nanoplatelets, activated carbon, thermally treated asphalt, amorphous carbon, carbon black, and mixtures thereof.
- The carbon materials can further be treated with a polymer to make the carbon materials more flexible without cracking.
- The polymer can include polydimethylsiloxane.
- The polymer can be selected from a group consisting of polydimethylsiloxane, polyurethane, thermoplastic polyurethane, polybutadiene, poly(styrene butadiene), poly(styrene butadiene) copolymer, polyethylene, polyimine, polyfluorinated systems, poly(methyl methacrylate), poly(ethylene glycol), poly(ethylene oxide), polyacrylates, vinyl polymers, chain growth polymers, step growth polymers, condensation polymers, and mixtures thereof.
- The electrolyte can be selected from the group consisting of lithium bis(trifluoromethanesulfonyl)imide (LiTFSI), dimethoxyethane (DME), 1,3-dioxolane (DOL), and mixtures thereof.
- The electrolyte can include 1 mol L\(^{-1}\) lithium bis(trifluoromethanesulfonyl)imide (LiTFSI) in a 1:1 mixture of the dimethoxyethane (DME) and the 1,3-dioxolane (DOL).
- The electrolyte can be an ionic liquid or a mixture of the ionic liquid with an organic solvent.
- The electrolyte can be formed from a salt in a solvent. The salt can be selected from the group consisting of lithium hexafluorophosphate, lithium perchlorate, lithium bis(fluorosulfonyl)imide, lithium bis(oxalato)borate, lithium tetrafluoroborate, and combinations thereof. The solvent can be selected from the group consisting of ethylene carbonate, propylene carbonate, butylene carbonate, vinyl methyl carbonate, dimethyl carbonate, methyl ethyl carbonate, diethyl ether, carbon dioxide, ethylene glycol dimethyl ether, and combinations thereof.
- The electrolyte can be placed on or between the carbon material and the lithium metal in the initial phases of the method.
- The electrolyte can be in a high concentration.
- The electrolyte can be between 0.5 and 10 mol/L of lithium bis(fluorosulfonyl)imide (LiFSI) in dimethoxyethane (DME).
- The electrolyte can be between 2 and 8 mol/L of the lithium bis(fluorosulfonyl)imide (LiFSI) in the dimethoxyethane (DME).
- The electrolyte can be between 3 and 5 mol/L of the lithium bis(fluorosulfonyl)imide (LiFSI) in the dimethoxyethane (DME).
- The electrolyte can be 4 mol/L of the lithium bis(fluorosulfonyl)imide (LiFSI) in the dimethoxyethane (DME).
- An electrolyte can be added to the battery in combination with the anode.
- The electrolyte can be between 0.5 and 10 mol/L of lithium bis(fluorosulfonyl)imide (LiFSI) in dimethoxyethane (DME).
- The lithium metal can dope the carbon material.
- The carbon material can become red or silver in color.
- The carbon material can be operable to suppress lithium dendrite formation of the lithium metal anode.
- The doped carbon material can become the source of lithium ions injected into the electrolyte and then into a cathode.
- The lithium metal can be a metallic Li foil. The doped carbon material can act as a buffer between an SEI layer and the metallic Li foil.
- The buffer can eliminate any gap formation between the SEI layer and the metallic Li foil.
- The lithium metal, the carbon material, and the electrolyte can be part of a battery.
- The lithium metal, the carbon material, and the electrolyte can be part of a battery anode.
- The battery can include a sulfur cathode.
- The sulfurized carbon cathode can include sulfur, carbon, and thermally treated polyacrylonitrile.
- The sulfurized carbon cathode can include sulfur in an amount between about 47 wt % and about 60 wt %.
- The amount of the sulfur in the sulfurized carbon cathode can be between about 47 wt % and about 57 wt %.
- The amount of the sulfur in the sulfurized carbon cathode can be between about 55 wt % and about 60 wt %.
- The cathode can lack elemental sulfur.
- The cathode can include a carbon additive that is a conductive filler.
- The carbon additive can be selected from the group consisting of carbon black, graphene, carbon nanotubes, graphene nanoribbons, and combinations thereof.
- The method of making the sulfurized carbon cathode can include heat treating elemental sulfur with a carbon source.
- The carbon source can include PAN.
- The step of heat treating can occur in the presence of an additive.
- The additive can be selected from a group consisting of carbon black, graphene, carbon nanotubes, graphene nanoribbons, and combinations thereof.
- The step of heat treating can occur at a temperature of at least about 100° C.
- The step of heat treating can occur at a temperature of at least about 200° C.
- The step of heat treating can occur for at least about 3 hours.
- The method of making the sulfurized carbon cathode can include forming a powder that includes elemental sulfur, a carbon source, and an additive. The method of making the sulfurized carbon cathode can include heat treating the powder at a temperature of at least about 450° C. for at least three hours.
- The carbon source can include PAN. The additive can include graphene nanoribbons.
- The sulfurized carbon cathode can be part of a seamless hybrid of nanotubes grown from a graphene layer.
- The polymer materials can include at least one of polypropylene (PP) and polyethylene (PE).
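As a practical aside, the high-concentration LiFSI/DME electrolytes enumerated above imply substantial salt loadings. A minimal sketch of the preparation arithmetic, assuming a LiFSI molar mass of 187.07 g/mol and molarity defined on the final solution volume:

```python
# Salt mass needed for the high-concentration LiFSI/DME electrolytes above.
M_LiFSI = 187.07  # g/mol, lithium bis(fluorosulfonyl)imide

def lifsi_grams(conc_mol_per_L, volume_mL):
    """Grams of LiFSI for a target molarity (based on final solution volume)."""
    return conc_mol_per_L * M_LiFSI * volume_mL / 1000.0

# The 0.5-10 mol/L range spans a 20x spread in salt mass per 10 mL batch:
for c in (0.5, 4.0, 10.0):
    print(f"{c:>4} mol/L -> {lifsi_grams(c, 10):.2f} g per 10 mL")
```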
In general, in another embodiment, the invention features a method to form an anode that includes selecting a lithium metal having a surface. The method further includes coating the surface of the lithium metal with a carbon material and an electrolyte. The method further includes forming a lithiated carbon material by lithiating the carbon material with lithium from the lithium metal.
Implementations of the invention can include one or more of the following features:
- The method can further include continuing the step of lithiating the carbon material until there is no remaining lithium in the lithium metal. The lithiated carbon material can be the anode.
In general, in another embodiment, the invention features a lithium metal anode. The lithium metal is coated with a thin film material and an electrolyte.
The foregoing has outlined rather broadly the features and technical advantages of the invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter that form the subject of the claims of the invention. It should be appreciated by those skilled in the art that the conception and the specific embodiments disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the invention. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims.
It is also to be understood that the invention is not limited in its application to the details of construction and to the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments and of being practiced and carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of the description and should not be regarded as limiting.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram of an arrangement of an anode, cathode, and separator in a battery that can utilize one or more of the new anodes, cathodes, and separators disclosed herein.
FIGS. 2A-2B are illustrations of an embodiment of the present invention that can utilize one or more of the new anodes, cathodes, and separators disclosed herein. FIG. 2A illustrates the embodiment during the discharge portion of the discharge/charge cycle. FIG. 2B illustrates the embodiment during the charge portion of the discharge/charge cycle.
FIG. 3 is a diagram of a prior art battery that has a short circuit pathway due to dendrites.
FIGS. 4A-4D are SEM images of the unmodified and lithiated-MWCNT modified Li foil surface after Li plating/stripping cycling. FIGS. 4A-4B are, respectively, low magnification images of (a) the pristine Li surface and (b) the Li-MWCNT surface. FIGS. 4C-4D are, respectively, high magnification images of (c) the pristine Li surface and (d) the Li-MWCNT surface. Dendrites 401 are shown in FIG. 4C.
FIG. 5 is a graph that shows the comparison of the cycling stability of Li-MWCNT and bare Li symmetric cells at a current density of 1 mA cm\(^{-2}\) for a total capacity of 2 mAh cm\(^{-2}\).
FIG. 6 is a graph showing the comparison of the cycling stability of the bare Li and the Li-MWCNT at a current density of 2 mA cm\(^{-2}\) and a total capacity of 4 mAh cm\(^{-2}\). 2 mg of S were added to the electrolyte in the form of lithium polysulfides (Li\(_{x}\)S\(_{y}\)) for chemical resistance testing.
FIG. 7 is a graph showing the rate performance of the bare Li 701 and the Li-MWCNT 702 (current densities from 1 to 5 mA cm\(^{-2}\)) for a total capacity of 4 mAh cm\(^{-2}\).
FIG. 8 is a graph showing the plating/stripping test with other carbon nanomaterials as thin films, namely CNT 801—MWCNTs 70-80 nm diameter (NTL Composites); SWCNT 802—HiPco single-walled CNTs (Rice); GO 803—Graphene oxide (AZ Electronics/EMD Merck); and GNR 804—Pristine graphene nanoribbons (AZ Electronics/EMD Merck).
FIG. 9A is a graph showing in situ Raman measurements of a lithiated carbon layer used in anodes of the present invention over different discharge periods.
FIG. 9B is a graph showing in situ Raman measurements of a lithiated carbon layer used in anodes of the present invention over different charge periods.
FIGS. 10A-10B are illustrations of a stainless steel substrate with a lithiated carbon nanotube film before and after, respectively, the plating of lithium.
FIGS. 11A-11E are SEM images showing the stainless steel substrate with the lithiated carbon nanotubes film after the plating of lithium.
FIG. 12 is a graph showing thermogravimetric curves of SC/GNR treated at different times (3, 6 and 15 h) and an elemental sulfur/carbon (sulfur/C) cathode.
FIG. 13 is a graph showing galvanostatic charge/discharge curves of SC/GNR treated at different times (3, 6 and 15 h) using 4 mol L\(^{-1}\) LiFSI in DME.
FIGS. 14A-14B are graphs showing cycling stability and coulombic efficiency, respectively, of SC/GNR-3 h, 6 h and 15 h using 4 mol L\(^{-1}\) LiFSI in DME.
FIG. 15 is a graph showing galvanostatic charge/discharge curves SC/GNR-6 h tested in different electrolytes.
FIG. 16 is a graph showing galvanostatic charge/discharge curves (charge and discharge portions) of the discharge/charge cycle for a battery with (a) an anode having a Li foil/lithiated carbon film and (b) a sulfurized carbon cathode.
FIG. 17 is a graph showing a rate test of the battery utilized to generate the curves in FIG. 16 over different cycle rates. FIG. 17 shows rate performance (from 0.2 to 60 C) of the full cell.
FIG. 18 is an illustration of a GNR-coated separator.
FIG. 19 is an SEM image of a GNR-coated separator.
FIG. 20 is a graph that compares the cycling stability between a standard separator and a GNR-coated separator when utilizing an elemental sulfur based cathode.
FIGS. 21A-21D are photographs taken at time 0 minutes, 30 minutes, 60 minutes, and 180 minutes, respectively, of a standard separator to show the diffusion of Li polysulfides over time.
FIGS. 22A-22D are photographs taken at time 0 minutes, 30 minutes, 60 minutes, and 180 minutes, respectively, of a GNR coated separator to show the diffusion of Li polysulfides over time.
FIGS. 23A-23F are graphs and an image related to a full battery (FB) with a GCNT-Li anode and SC cathode. FIG. 23A is a graph that shows cyclic voltammograms (CVs) of GCNT-Li and SC electrodes in 1 M LiFSI/DME at 0.5 mV s\(^{-1}\). FIG. 23B is a graph that shows galvanostatic charge/discharge curves of the FB at 0.1 C with an areal capacity of 2 mAh cm\(^{-2}\). FIG. 23C is a photograph of a FB prototype powering an LED. FIG. 23D is a graph that shows a sequential rate performance test (0.2 to 9 C) and cycling stability of the FB. The inset shows CE (%) of the rate and stability test. FIG. 23E is a graph that shows self-discharge (SD) tests of the FB after 8 h and 1 week, showing the charge curve followed by the continuous discharge curve during and after the open circuit period. The inset shows voltage vs. capacity of the SD tests. FIG. 23F is a graph that shows a Ragone plot of the GCNT-Li/SC FB, considering the combined mass of the anode and cathode active materials (Li and S) and the full electrode mass (including binder, carbon additives, GCNT, excess of Li), excluding the current collector.
FIGS. 24A-24C are schematics and an image of the fabrication of an electrode composed of a Li metal coated with a Li-doped MWCNT film (Li-MWCNT). FIG. 24A shows the fabrication process of the Li-MWCNT electrode, which consists of wetting the MWCNT film with a highly concentrated electrolyte and pressing it against the Li foil to dope the carbon nanotubes.
FIG. 24B is a photograph of MWCNT film after being doped with Li (dark red color). FIG. 24C is a scheme of the spontaneous lithiation of MWCNTs and the corresponding redox reaction.
FIGS. 25A-25C show the morphology of the Li-MWCNT film. FIG. 25A shows the morphology of the pristine MWCNT film characterized by SEM. FIGS. 25B-25C show the morphology of the Li-doped MWCNT film characterized by SEM.
FIGS. 26A-26B are graphs that show the electron paramagnetic resonance (EPR) and Raman spectroscopy, respectively, of MWCNT and Li-MWCNT. The Raman spectra (532 nm) compare the vibrational spectra of the pristine MWCNT, the surface of the Li metal, and the resulting Li-MWCNT.
FIG. 27 is a graph that shows cycles of Li-MWCNT/SC battery at different current densities (cycle stability).
FIGS. 28A-28D are graphs that show an alternated rate test. FIG. 28A is a graph that shows single discharges of a Li-MWCNT/SC full battery under different alternated rate conditions. FIGS. 28B-28C are graphs that show variations in current vs. time.
FIGS. 29A-29B are graphs that show electrochemical characteristics of a full cell with Li-MWCNT as the anode and sulfurized carbon (SC) as the cathode. FIG. 29A is a graph that shows the Ragone plot based on the cathode and anode-cathode weight. FIG. 29B is a graph that shows the electrochemical impedance spectrum.
DETAILED DESCRIPTION
The present invention is directed to anodes, cathodes, and separators for improved batteries (electrochemical energy storage devices), and more particularly to (a) Li metal anodes having lithiated carbon films (as dendrite suppressors and protective coatings for the Li metal anodes), (b) sulfurized carbon cathodes, and (c) GNR-coated separators. This includes the methods of making each and the methods of using each of these alone or in combination with one another, such as in batteries.
As used herein, a “lithiated carbon film” is a carbon film in which lithium is bound to, or doped into, the carbon material. Furthermore, the lithium can be in the 0 or +1 oxidation state (lithium metal or lithium ions, respectively), depending on its interaction with the carbon material.
FIG. 1 is a diagram of an arrangement of anode 101, cathode 102, and separator 103 in a battery that can utilize one or more of the new anodes, cathodes, and separators disclosed herein. The separator 103 electrically insulates the anode 101 from the cathode 102 but allows the transport of ions between anode 101 and cathode 102.
FIGS. 2A-2B are illustrations of battery 200 having anode 201, cathode 202, and separator 203. One or more of anode 201, cathode 202, and separator 203 can be an anode, cathode, or separator disclosed herein. In FIG. 2A, battery 200 is shown during the discharge portion of the discharge/charge cycle. Load 204 provides for electron flow to flow from anode 201 to cathode 202 as shown by arrows 205 with current flow from the cathode 202 to anode 201 as shown by arrows 206.
FIG. 2B illustrates battery 200 during the charge portion of the discharge/charge cycle. Charger 207 provides for electron flow from cathode 202 to anode 201 as shown by arrows 208, with current flow from the anode 201 to cathode 202 as shown by arrows 209. For instance, when the anode 201 is a Li metal anode having a lithiated MWCNT film (as described herein), it is believed that the MWCNT reacts with the lithium cations to form lithiated MWCNT and, upon discharge, the Li cations return to the cathode.
As noted above, the batteries shown in FIGS. 1 and 2A-2B can use an anode, cathode, or separator described herein, alone or in combination with one another. One advantage of doing so is, for example, to prevent the growth of dendrites on the anode that would short-circuit the battery. For example, FIG. 3 shows a combination of a prior art anode 301, cathode 302, and separator 303 in battery 300, in which dendrites 304 have formed that penetrate the separator 303, resulting in a short circuit pathway 305.
Anodes Having Lithiated Carbon Films
The present invention demonstrates that lithiated MWCNT, as an example, can act as a layer that effectively protects the Li surface against parasitic reactions and suppresses the formation of Li dendrites on the surface of the Li foil. The lithiation of the MWCNT film is achieved by contacting the Li surface and the MWCNT film with the use of electrolyte (4 mol L\(^{-1}\) of lithium bis(fluorosulfonyl)imide (LiFSI) in dimethoxyethane), as an example. The lithiation reaction is spontaneous (complete in less than 30 minutes), and the MWCNT film turns red as a result of the lithium (Li) doping process. It is believed that the lithiated carbon layer can act as an ion/electron transport medium mediating the Li plating and stripping processes, thus suppressing dendrite growth and rendering the Li surface more chemically resistant against parasitic reactions with the liquid electrolyte. The dendrite suppression ability was observed by Li plating/stripping experiments between two Li foils in a 2032 coin cell configuration. FIGS. 4A-4D show scanning electron microscopy (SEM) images of the Li surface and the lithiated-MWCNT-modified Li surface (Li-MWCNT) after continuous plating/stripping of Li under the same conditions. Li dendrites 401 are clearly observed in the unprotected Li foil (FIG. 4C) as a result of non-homogeneous Li deposition. The Li-MWCNT surface shows no signs of Li dendrites; instead, the Li is evenly distributed over the lithiated CNT layer.
The stripping/plating process was investigated using a common electrolyte for sulfur cathodes: 1 M lithium bis(trifluoromethanesulfonyl)imide (LiTFSI) in a 1:1 mixture of dimethoxyethane (DME) and 1,3-dioxolane (DOL). Moreover, to investigate protection against parasitic reactions, the same electrolyte with S added in the form of lithium polysulfides (Li\(_x\)S\(_y\)) was used. The control experiments consisted of a bare Li foil under the same conditions. The additive lithium nitrate (LiNO\(_3\)), commonly used to protect Li, was not used in the electrolyte.
FIGS. 5-6 show the cycling performance at a current density of 1 mA cm\(^{-2}\) for a total capacity of 2 mAh cm\(^{-2}\). In FIG. 5, curves 501-502 show the comparison of the cycling stability of Li-MWCNT and bare Li symmetric cells, respectively, with inset 503 showing a magnified portion of curves 501-502. The symmetrical cell of bare Li showed a larger Li stripping/plating overpotential (>40 mV vs Li/Li\(^+\)) than the Li-MWCNT (<40 mV vs Li/Li\(^+\)). The voltage profile of the
bare Li cell showed fluctuations that can be attributed to possible dendrite-induced soft short circuits. In FIG. 6, curves 601-602 are for Li-MWCNT and bare Li, respectively. 2 mg of S were added to the electrolyte in the form of lithium polysulfides (Li$_2$S$_x$) for chemical resistance testing. The symmetrical cells were exposed to Li$_2$S$_x$ to simulate the chemical environment of a Li—S battery. The cell of bare Li showed a larger Li stripping/plating overpotential (>300 mV vs Li/Li$^+$) than the Li-MWCNT (<110 mV vs Li/Li$^+$). The voltage plateaus for the Li-MWCNT were more defined and stable, which indicates the suppression of dendrite formation. The Li-MWCNT exhibited a stripping/plating overpotential ~16 times lower than that of the bare Li. In addition, the bare Li anode showed more voltage fluctuation after a few cycles, which might be attributed to the formation of Li dendrites and an increase in the gap between the Li foil and the interface layer, while the Li-MWCNT anode maintained a constant voltage profile. The same behavior was observed when sulfur was present in the electrolyte, simulating the chemical environment of a Li—S battery.
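The cycling conditions in FIGS. 5-6 also fix the duration of each plating (or stripping) half-cycle, since time is simply areal capacity divided by current density; a quick check:

```python
def half_cycle_hours(capacity_mAh_cm2, current_mA_cm2):
    """Length of one plating or stripping half-cycle: capacity / current."""
    return capacity_mAh_cm2 / current_mA_cm2

print(half_cycle_hours(2, 1))  # FIG. 5 conditions: 2 h per half-cycle
print(half_cycle_hours(4, 2))  # FIG. 6 conditions: also 2 h per half-cycle
```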
Although lithium polysulfides have been found to improve the stability of Li by forming a strong SEI layer, the bare Li anode still exhibited fluctuations in the voltage profile and a larger overpotential, indicating the enhanced chemical resistance of the Li-MWCNT. The cycling stability at different current densities was investigated. Li-MWCNT exhibited lower overpotential and fewer fluctuations than the bare Li (FIG. 7). FIG. 8 shows a comparison when single-walled carbon nanotubes 802, graphene oxide 803, and graphene nanoribbons 804 are used instead. Graphene nanoribbons can work well, being nearly equivalent to the MWCNTs 801. Mixtures of single-walled and MWCNTs can also be used, as well as chemically or mechanically shortened versions, or double- or triple-walled carbon nanotube versions.
For certain anodes of the present invention, the MWCNT film preparation was performed as follows: Pristine MWCNTs (diameter ~70-80 nm; M-grade; NanoTech Labs, Inc.) were dispersed in N-methyl-2-pyrrolidone (Sigma-Aldrich) using tip sonication. Alternatively, a dispersion of MWCNTs can be achieved in a water and 2-propanol mixture (4:1 volume ratio, respectively). The homogeneous dispersion was then filtered through a porous aluminum membrane to produce a homogeneous film (1.5 mg/cm$^2$), often referred to as a buckypaper. To obtain the free-standing film, the aluminum was dissolved in a 1 N sodium hydroxide solution.
For certain anodes of the present invention, the Li coating was carried out in an argon-filled glove box; 25 μL of 4 M LiFSI (Oakwood Products, Inc.) in DME (Sigma-Aldrich) were placed on a Li foil (thickness ~0.45 mm, MTI Corporation), followed by the MWCNT film. The Li foil surface had been cleaned beforehand by scraping it to remove any oxide layer. Another 25 μL of 4 M LiFSI was added, followed by a second Li foil with the same thickness as the first Li foil. After 30 min, the upper Li foil was removed, resulting in a Li foil covered by a lithiated MWCNT film. (See FIGS. 4B and 4D.)
For the battery assembly regarding certain embodiments of the present invention, to study the Li stripping/plating, symmetric cells were prepared by assembling the Li-MWCNT electrodes into 2032-type coin cells. The electrolytes employed were either 1 M LiTFSI (Sigma-Aldrich) in a 1:1 ratio of DME and DOL (Sigma-Aldrich) or 4 M LiFSI in DME. The separator used was Celgard K2045. For the controls, symmetric cells using freshly scraped Li foils were assembled. A constant current was applied to the electrodes, and the potential and coulombic efficiencies were recorded over time.
For these embodiments, it was found that the MWCNT film was preferably well-attached to the Li surface. Furthermore, the MWCNT films could become brittle after lithiation. Also, though not required, the lithiation of the MWCNTs was facilitated by a highly concentrated electrolyte.
In further anodes of the present invention, different carbon nanomaterials can be used (single-walled CNTs, graphene oxide, graphene nanoribbons, high-porosity ultra-thin graphite films, porous ultra-thin conductive films). Of these alternative carbon materials, the graphene nanoribbons appeared to provide the best performance among those tested, and they also appear to perform as well as the MWCNTs.
Furthermore, in further embodiments of the present invention, a thin, porous polymer, such as polydimethylsiloxane (PDMS), can be coated onto the MWCNT films to render them more flexible and foldable (which would generally be desirable for large batteries, curved electrodes, and flexible devices).
The present invention will enable safe use of pure Li metal anodes in advanced battery technologies required for the next generation of high energy density batteries, such as rechargeable metal, sulfur, and oxygen batteries. By solving the lithium dendrite formation problem with a simple fabrication process, the fabrication of batteries that can safely provide more energy storage will be possible. Furthermore, the fabrication process is simple and allows scalable production. This protective (buffer) layer can also be used to protect other types of electrodes, such as sodium, potassium, magnesium, sulfur, or selenium.
Moreover, the lithiated carbon film is not only a protection layer; it also assists in mediating the lithiation (plating and stripping reactions). FIG. 9A shows in situ Raman measurements of lithiated carbon nanotubes during discharge for time periods from 0 to 50 minutes (every 10 minutes), which reveal changes in the frequency of C—C modes (at approximately 1580 cm$^{-1}$, namely peaks at 1569 cm$^{-1}$ and 1608 cm$^{-1}$). FIG. 9B shows in situ Raman measurements of lithiated carbon nanotubes during charge for the same time periods from 0 to 50 minutes (every 10 minutes), which reveal changes in the frequency of C—C modes (at approximately 1580 cm$^{-1}$, namely at 1585 cm$^{-1}$ and 1610 cm$^{-1}$). These measurements indicated that the lithiated carbon nanotubes participated in the lithiation/delithiation reaction.
FIGS. 10A-10B are illustrations of a stainless steel substrate 1001 with a lithiated carbon nanotube film 1002 before and after, respectively, the plating of lithium. The lithiated carbon nanotube film 1002 (as illustrated in FIG. 10A) was plated with lithium at 4 mAh per cm$^2$ for a total of 8 mAh. FIGS. 11A-11E show the stainless steel substrate 1001 with the lithiated carbon nanotube film 1002 after the plating of lithium at 4 mAh per cm$^2$ for a total of 8 mAh. FIG. 11A shows a further magnified portion showing CNTs 1103. These images indicated that, as shown in FIG. 10B, the plated Li metal 1003 was located mainly under the lithiated carbon nanotube film 1002.
In other embodiments of the present invention, a MWCNT mat can be positioned on the anode without lithium. Either electrochemically or by evaporation, a Li layer can be applied atop the MWCNT layer, which can then diffuse to the underside of the MWCNT layer.
In addition to the simple fabrication process, the present invention also has advantages over the prior art that include: Li dendrite suppression using mediated lithiation; mediation of Li deposition by the MWCNTs during the charge/discharge cycle; and creating an ion/electron conductive/protective layer that evenly distributes the Li metal deposition.
It should further be noted that, in extreme discharge, all of the lithium metal might end up in the lithiated carbon material, such that there is no need for lithium metal (such as the lithium foil) under the lithiated carbon material. Such lithiated carbon material (without any underlying lithium metal) can be utilized as an anode in the present invention.
Sulfurized Carbon Cathodes
Further embodiments of the present invention utilize sulfurized carbon cathodes. The cathodes can further include high concentrations of electrolytes. The cathodes can also further include additional additives, such as graphene nanoribbons (GNRs) (SC/GNR).
The cathodes can have a sulfur content of more than about 50 wt % (e.g., between about 47 and about 56 wt % of S in mass related to the mass of the electrode excluding the mass contribution of the current collector). The sulfur content in the cathodes can lack any elemental sulfur. The sulfur content in the cathodes can contain minimal amounts of elemental sulfur.
The cathodes can be associated with electrolyte concentrations of more than about 1 mol L\(^{-1}\) (e.g., about 4 mol L\(^{-1}\)). The cathodes can be associated with various types of electrolytes. The electrolytes can include commercial electrolytes, such as lithium hexafluorophosphate in ethylene carbonate/diethyl carbonate (LiPF\(_6\) in EC/DEC), lithium bis (fluorosulfonyl)imide in dimethoxyethane, and combinations thereof.
The cathodes can include various types of carbon additives as conductive fillers. For instance, in some embodiments, the additives can be carbon black, graphene, carbon nanotubes, or graphene nanoribbons, among others.
The cathodes can be fabricated by various methods. For instance, in some embodiments, the cathodes of the present disclosure can be fabricated by heat treating elemental sulfur with a carbon source (e.g., PAN). In some embodiments, the heat treatment can occur in the presence of additives, such as GNRs. In some embodiments, the heat treatment can occur at temperatures of more than about 100° C. (e.g., about 450° C. and higher). In more specific embodiments, the cathodes can be fabricated by slowly heat treating elemental sulfur, PAN and GNRs (such as at 450° C.) in a sealed container and an inert atmosphere. The final material can have about 55 wt % to about 60 wt % of S in mass. In some embodiments, the final material can be further heat treated with an additional amount of sulfur.
For example, a sulfurized carbon cathode can be prepared as follows: A powder can be prepared by grinding elemental sulfur, PAN (Sigma-Aldrich, 150000 molecular weight) and GNRs in the mass proportion of 55:1:1 for 10 minutes. The powder is then submitted to heat treatment in a sealed tube at 450° C. First, the powder is loaded into an alumina boat. The alumina boat is inserted in the tube and the tube is evacuated to remove air. Next, the tube is filled with argon until it reaches room pressure. At this point, the tube is sealed. The heating from room temperature (25° C.) to 450° C. proceeds at a rate of 5° C. min\(^{-1}\). The heat treatment at 450° C. proceeds for 3-15 h, and then the mixture is allowed to cool to room temperature.
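The ramp and hold times in the schedule above can be checked with a short calculation (a minimal sketch; the `ramp_minutes` helper is ours, while the temperatures, ramp rate, and hold times are taken from the text):

```python
# Heat-treatment schedule arithmetic for the sulfurized carbon preparation.
# Hypothetical helper; temperatures and ramp rate are from the text.

def ramp_minutes(t_start_c: float, t_end_c: float, rate_c_per_min: float) -> float:
    """Minutes needed to ramp between two temperatures at a fixed rate."""
    return (t_end_c - t_start_c) / rate_c_per_min

ramp = ramp_minutes(25, 450, 5)   # room temperature to 450 C at 5 C/min -> 85 min
hold_short = 3 * 60               # 3 h hold, in minutes
hold_long = 15 * 60               # 15 h hold, in minutes

print(f"ramp: {ramp:.0f} min; hold: {hold_short}-{hold_long} min")
```

The ramp alone takes 85 minutes, which is small compared to the 3-15 h hold that the text identifies as the variable of interest.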
If, for example, this sulfurized carbon cathode is to be included in a battery assembly, this can be further performed as follows: Electrodes can be prepared from 80 wt % of SC/GNR, 10 wt % carbon black (Black Pearl 2000) and 10 wt % PVDF (polyvinylidene fluoride) as binder, prepared as a slurry. The slurry is coated over steel foil and dried under vacuum at 60° C. for 12 hours. Half-cells are assembled inside a glove box (oxygen and water level <2 ppm) as coin cells (2032) with Celgard K2045 as separator and Li foil as counter and reference electrode (two-electrode configuration). The electrolyte can be 1 mol L\(^{-1}\) LiPF\(_6\) in EC/DEC or 4 mol L\(^{-1}\) LiFSI in DME. The charge-discharge can be tested at 0.1 C (only the mass of sulfur was considered when calculating the current density) with the voltage limits of 1 to 3 V (vs. Li/Li\(^+\)).
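The overall sulfur fraction of such an electrode follows directly from the slurry composition; a minimal sketch (the helper name is ours; the 80 wt % SC/GNR loading and the 55-60 wt % S content of the SC/GNR powder are from the text):

```python
# Overall S mass fraction of the finished electrode (binder and carbon
# black contribute no sulfur), given the SC/GNR fraction in the slurry
# and the S fraction within the SC/GNR powder.

def electrode_sulfur_fraction(sc_gnr_fraction: float, s_in_sc_gnr: float) -> float:
    return sc_gnr_fraction * s_in_sc_gnr

low = electrode_sulfur_fraction(0.80, 0.55)   # 0.44 -> 44 wt % S
high = electrode_sulfur_fraction(0.80, 0.60)  # 0.48 -> 48 wt % S
print(low, high)
```

This reproduces the roughly 45-50 wt % overall S content quoted elsewhere in the disclosure for the full electrode.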
The cathodes can have various advantageous properties. For instance, in some embodiments, the cathodes display very stable behavior during continuous charge/discharge cycles (i.e., minimal capacity loss over cycles) and compatibility in different electrolytes, in which a better performance (i.e., high capacity and stability) is observed in high concentration electrolytes. In some specific embodiments, the cathodes of the present disclosure are able to deliver a capacity of 704 mAh g\(^{-1}\) using common commercial electrolytes (e.g., 1 mol L\(^{-1}\) LiPF\(_6\) in EC/DEC (lithium hexafluorophosphate in ethylene carbonate/diethyl carbonate)) and 1050 mAh g\(^{-1}\) using high concentration electrolytes (e.g., 4 mol L\(^{-1}\) lithium bis (fluorosulfonyl)imide in dimethoxyethane).
Furthermore, the carbon in the sulfurized carbon cathode can be part of a seamless hybrid of nanotubes grown from a graphene layer disclosed and taught in U.S. Pat. No. 9,243,094, issued Sep. 27, 2016, to Tour et al. (“the ‘094 Tour patent”). See also Appendix A, at p. 7 (discussing several commonly-owned patent applications, including U.S. Patent Appl. Publication No. 2014/0313636, which issued as the ‘094 Tour patent).
The produced GNR-containing cathodes (SC/GNRs) can act as efficient cathodes without the problems associated with typical elemental sulfur cathodes. The sulfur species in this cathode embodiment corresponds to 55% to 60% of the mass of the material, according to thermogravimetric (TG) curves (FIG. 12).
According to the literature, the sulfur in sulfurized carbon species is believed to be composed mainly of small sulfur chains (S\(_n\)—S\(_n\)) chemically bonded to the sp\(^2\) carbon lattice produced by the decomposition of the PAN, thereby suppressing Li polysulfide dissolution. The TG curves (curves 1201-1203) in FIG. 12 show that no elemental sulfur is present in the SC/GNR samples. A mass loss is observed only after 700° C. (seen in curves 1201-1203), attributed to the breaking of C—S bonds. For comparison, a mixture of sulfur and carbon black (Black Pearls 2000) is presented in the same graph of FIG. 12 (in curve 1204) to show that the mass loss attributed to elemental sulfur occurs at a much lower temperature (i.e., ~300° C.).
The heat treatment time of S, PAN and GNRs was varied from 3 to 15 hours. According to FIG. 12, the heating time does not significantly affect the amount of S in the SC/GNR, as long as it is not much shorter than three hours; otherwise, sulfur can sublime out before it reacts with the PAN.
However, the capacity of half-cell batteries using SC/GNR cathodes produced with 3, 6 and 15 hour heat treatment times (SC/GNR-3 h, SC/GNR-6 h and SC/GNR-15 h) presented very different electrochemical behavior, as observed in FIG. 13. Curves 1301-1303 are the galvanostatic charge curves for SC/GNR-3 h, SC/GNR-6 h and SC/GNR-15 h, respectively. Curves 1304-1306 are the galvanostatic discharge curves for SC/GNR-3 h, SC/GNR-6 h and SC/GNR-15 h, respectively.
The tests were conducted in 4 mol L\(^{-1}\) LiFSI (lithium bis[fluorosulfonyl]imide in DME (dimethoxyethane)) as electrolyte. The capacity of the sample SC/GNR-15 h is lower (\( \sim 600 \text{ mAh g}^{-1} \)) than the samples SC/GNR-3 h/6 h (\( \sim 1000 \text{ mAh g}^{-1} \)). The cycling stability and coulombic efficiency (CE) of these tests are presented in the graphs of FIGS. 14A-14B, respectively. The triangles 1401, circles 1402, and squares 1403 reflect the cycling stability of SC/GNR-3 h, SC/GNR-6 h and SC/GNR-15 h, respectively. The dark squares 1404, triangles 1405, and light squares 1406 reflect the coulombic efficiency of SC/GNR-3 h, SC/GNR-6 h and SC/GNR-15 h, respectively.
The samples SC/GNR-6 h and SC/GNR-15 h present stable behavior during continuous cycling compared to the sample SC/GNR-3 h. See FIG. 14A. This is expressed also in the CE. See FIG. 14B. The SC/GNR samples present high CE, achieving 99.99% in SC/GNR-15 h and 99.9% in SC/GNR-6 h. The sample SC/GNR-6 h presents the best trade-off between stability and capacity of these samples.
Using the sample SC/GNR-6 h, the compatibility of the cathode was tested in a common commercial electrolyte composed of 1 mol L\(^{-1}\) LiPF\(_6\) (lithium hexafluorophosphate) in EC:DEC (ethylene carbonate:diethyl carbonate), and the performance was compared with the high concentration electrolyte (4 mol L\(^{-1}\) LiFSI in DME). See FIG. 15 (showing galvanostatic charge curves 1501-1502 for SC/GNR-6 h tested in electrolytes EC:DEC and DME, respectively, and further showing galvanostatic discharge curves 1503-1504 for SC/GNR-6 h tested in electrolytes EC:DEC and DME, respectively). The comparison demonstrates that this cathode material has a 42% higher capacity in the highly concentrated electrolyte than in the commercial electrolyte (\( \sim 1000 \text{ mAh g}^{-1} \) compared to \( \sim 700 \text{ mAh g}^{-1} \)), tested at the same rate (0.1 C, in which 1 C = 1675 mA g\(^{-1}\)). This underscores the advantage of the high electrolyte concentrations.
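The rate and capacity figures above are easy to cross-check; a minimal sketch (function and variable names are ours; the 1675 mA g⁻¹ definition of 1 C and the approximate ~1000 vs ~700 mAh g⁻¹ capacities are from the text):

```python
# C-rate to current density for a sulfur cathode, where the text defines
# 1 C = 1675 mA per gram of sulfur (its theoretical capacity).
C_RATE_BASE_MA_PER_G = 1675.0

def current_density_ma_per_g(c_rate: float) -> float:
    return c_rate * C_RATE_BASE_MA_PER_G

i_at_0p1c = current_density_ma_per_g(0.1)  # 167.5 mA per g of S

# Relative capacity gain of 4 M LiFSI/DME over 1 M LiPF6/EC:DEC at the
# same rate, using the rounded capacities quoted in the text:
gain = (1000.0 - 700.0) / 700.0  # ~0.43, consistent with the quoted ~42%
print(i_at_0p1c, gain)
```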
FIG. 16 is a graph showing discharge and charge portions of the discharge/charge cycle for a battery with (a) an anode having a Li foil/lithiated carbon film and (b) a sulfurized carbon cathode. The Li foil was extracted from a commercial Li metal primary battery (Energizer Ultimate Lithium®) and had a thickness of ~130 µm. The Li metal was paired with a sulfurized-carbon (SC) cathode in a 4 mol L\(^{-1}\) LiFSI/DME electrolyte (lithium bis[fluorosulfonyl]imide salt in dimethoxyethane).
Curve 1601 is the discharge curve during the first cycle. Curves 1602 reflect subsequent discharge curves (going from ~3 volts to ~1 volt with a specific capacity of ~800 mAh g\(^{-1}\)). Curves 1603 reflect subsequent charge curves (going from ~1 volt to 3 volts with a specific capacity of again around ~800 mAh g\(^{-1}\)). The same cathode and anode in a battery could also operate under other concentrations (0.5 M to 10 M), Li salts, and other electrolyte compositions.
FIG. 17 is a graph showing a rate test of the same battery utilized to generate the curves in FIG. 16 over different cycle rates. The curves 1701-1705 correspond to cycle rates of 0.2 C, 0.6 C, 3 C, 13 C, and 60 C, respectively, with 1 C representing a full charge over a one hour period. Accordingly, cycle rates of 0.2 C, 0.6 C, 3 C, 13 C, and 60 C correspond, respectively, to full charges over 5 hours, 100 minutes, 20 minutes, around 4.6 minutes, and 1 minute. The open circles in the curves correspond to the discharge rate for the cycle, and the solid squares correspond to the charge rate for the cycle.
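The C-rate-to-duration conversion used above can be sketched as follows (the helper name is ours; the rates are the ones from the text):

```python
# Nominal full-charge duration for a given C-rate, with 1 C defined as a
# full charge in one hour (so an n C rate takes 60/n minutes).

def full_charge_minutes(c_rate: float) -> float:
    return 60.0 / c_rate

# The five rates from the FIG. 17 rate test:
for c in (0.2, 0.6, 3, 13, 60):
    print(f"{c} C -> {full_charge_minutes(c):.1f} min")
```

This reproduces the 5 h, 100 min, 20 min, ~4.6 min, and 1 min figures stated in the text.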
Sulfurized carbon cathodes have various utilities. For example, anodes and cathodes with high capacity and optimal rate performance are desired to compose batteries that have much higher energy density compared to the current technology. In some embodiments, the compatibility and optimal performance of sulfurized carbon cathodes with high concentration electrolytes make them compatible with high capacity and advanced anodes, such as Li metal anodes, allowing the possibility to replace both anodes and cathodes.
Moreover, the methods of making the sulfurized carbon cathodes are facile, thereby allowing scalable production of the cathode. In some embodiments, the resulting material also has a high proportion of S (about 55 wt % to about 60 wt %) and N (12 wt %), which could be of interest in other catalytic applications.
Sulfurized carbon cathodes solve issues related to cathodes by using a formulation combination of sulfur covalently bound to carbon in the presence of high electrolyte concentrations.
The sulfurized carbon cathodes complement the many developments in anodes, providing the other requisite half of the battery configuration. The sulfurized carbon cathodes demonstrate that covalent sulfur carbon species can afford a stable high capacity cathode if the electrolyte used is at much higher concentrations than typically disclosed by others in the literature.
In some embodiments, the use of such high concentrations of electrolytes along with sulfur-based cathodes produce enhanced effects.
In some embodiments, the sulfurized carbon cathodes can be used in conjunction with Applicants' monolithic seamless graphene-carbon nanotube hybrid electrodes (GCNTs) to afford optimal properties for GCNT-containing cathodes, just as GCNTs benefited anodes as formerly disclosed. [See, e.g., Tour '052 application and Tour '636 application]. In some embodiments, the sulfurized carbon cathodes can also be used in conjunction with ultrahigh surface area carbons (e.g., uGil-900 made from asphalt and KOH activation) to afford optimal properties for asphalt-derived cathodes, just as asphalt-derived carbons benefited anodes as formerly disclosed. [See, e.g., Tour PCT '950 application].
In some embodiments, the high concentration electrolyte for the sulfurized carbon cathodes resembles the electrolyte concentration and type that has been shown to work well for Li-GCNT anodes [e.g., Tour PCT '052 application] and Li-asphalt derived anodes [e.g., Tour PCT '950 application]. As such, in some embodiments, the sulfurized carbon cathodes complement the afore-mentioned systems, which are now permitted as both cathodes and anodes to work in unison, as required.
In some embodiments where GCNT electrodes are utilized, the sulfurized carbon cathodes could be fabricated by the methods of the present disclosure through the use of sulfur, PAN, and GCNT (with or without GNRs). In some embodiments where asphalt-derived electrodes are utilized, then the sulfurized carbon cathodes could be fabricated by using sulfur, PAN, and uGil-900 high surface area carbon from KOH activation of asphalt (with or without the GNRs).
In some embodiments, the inclusion of a small proportion of elemental S to the SC/GNR can increase the total capacity of the sulfurized carbon cathodes. In some embodiments, additives other than GNRs can be utilized. In some embodiments, the additives can include, without limitation, carbon nanotubes, graphene, carbon black, and combinations
thereof. In some embodiments, mixtures of Se and S can be included during the preparation of sulfurized carbon. In some embodiments, use of GCNT with PAN and S can be efficacious with or without GNR additives. In some embodiments, ultrahigh surface area carbons such as uG100 can be used in conjunction with PAN and S with or without GNRs. In some embodiments, the content of sulfur is about 55 wt % to about 60 wt %, making the overall content of S in the sulfurized carbon cathodes about 45 wt % to about 50 wt % (including the binder and carbon additives), which reduces the overall capacity of the cathode. In some embodiments, the voltage of discharge (~2 V) is less flat than an elemental sulfur cathode, even though it is much more stable.
GNR-Modified Separators
A separator is utilized to keep the cathode and anode electrically insulated from one another, but allows the transport of electrolyte and ions within. Standard separators are made from materials such as polypropylene (PP) and polyethylene (PE).
The present invention can utilize a separator with a coating on one or both sides, which coating selectively blocks materials from moving from one side to the other (i.e., moving from the anode side to the cathode side or vice versa). As shown in FIG. 18, such a separator 1801 can be modified by adding a layer of graphene nanoribbons (GNRs) 1802 to yield a light-weight GNR-coated separator 1800. FIG. 19 is an SEM image of a GNR-coated separator. While the layer of graphene nanoribbons 1802 is illustrated on one side of the GNR-coated separator 1800, such a layer can be provided on both sides. In some embodiments, the GNR-coated separator 1800 is oriented within the battery with a layer of the graphene nanoribbons facing the cathode, such as when the cathode is a sulfurized carbon cathode or an elemental sulfur based cathode.
The GNR-coated separator was fabricated as follows: Pristine GNR (AZ Electronic Materials) were dispersed in N-methyl-2-pyrrolidone (NMP) via 10 min of tip sonication. Then, the dispersion was vacuum-filtered through a Celgard separator and dried at 60° C. under vacuum for 12 h. This fabrication method makes possible large scale applications and can produce GNR-coated separators with different thicknesses by only changing the concentration of GNR in the dispersion.
Such a GNR-coated separator decreases the diffusion of unwanted materials from one side of the battery to the other (such as lithium polysulfides from a sulfur-based cathode traversing through the separator to the anode). In addition, the electrical conductivity of the GNRs provides new electron pathways that reactivate the intercepted material, thus improving the capacity retention. This means that, because the GNRs are conductive, they can transfer electrons to the trapped species (lithium polysulfides and lithium sulfide). Had these intercepted materials not been reactivated, a severe agglomeration of Li₂S after cycling would have been observed and the capacity retention would instead have been similar to the cell without a coated separator.
FIG. 20 is a graph that compares the cycling stability between a standard separator (curve 2001 showing charge and discharge) and a GNR-coated separator (curve 2002 showing charge and discharge) when utilizing an elemental sulfur based cathode. Such curves show that after 100 cycles, the capacity using the standard separator has gone from around 800 to 400 mAh g⁻¹, while the capacity using the GNR-coated separator has gone only from around 900 to 800 mAh g⁻¹. Curves 2003 are the coulombic efficiencies of both the standard and GNR-coated separators, reflecting that these remained the same regardless of the separator utilized. The graph of FIG. 20 thus shows an improved cycling stability with GNR-modified separators for elemental sulfur-based cathodes. The improved GNR-modified separator can likewise be used with the sulfurized-carbon (SC) cathode described above.
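Those endpoints translate into very different capacity retentions; a minimal sketch (the helper is ours; the approximate start and end capacities are read from the FIG. 20 description in the text):

```python
# Capacity retention after 100 cycles, from the approximate endpoint
# capacities quoted for FIG. 20.

def retention(initial_mah_g: float, final_mah_g: float) -> float:
    return final_mah_g / initial_mah_g

standard = retention(800, 400)    # 0.50 -> 50 % retained
gnr_coated = retention(900, 800)  # ~0.89 -> ~89 % retained
print(standard, gnr_coated)
```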
FIGS. 21A-21D are photographs taken at time 0 minutes, 30 minutes, 60 minutes, and 180 minutes, respectively, of a standard separator to show the diffusion of Li polysulfides over time. FIGS. 22A-22D are photographs taken at time 0 minutes, 30 minutes, 60 minutes, and 180 minutes, respectively, of a GNR-coated separator to show the diffusion of Li polysulfides over time. At time zero (shown in FIGS. 21A and 22A), these reflect (a) on the left side 1702 of the apparatus, a relatively clear fluid of LiTFSI (1 M) and LiNO₃ (0.16 M) in DME-DOL; and (b) on the right side 1701 of the apparatus, a relatively dark fluid of Li₂S₆ (1 M), LiTFSI (1 M) and LiNO₃ (0.16 M) in DME-DOL. The difference between these apparatus is that the apparatus of FIGS. 21A-21D utilizes a standard (or unmodified) separator 2103, while the apparatus of FIGS. 22A-22D utilizes a GNR-modified separator 2203. It is evident from comparing FIG. 21B with FIG. 22B (both taken at time t=30 minutes), FIG. 21C with FIG. 22C (both taken at time t=60 minutes), and FIG. 21D with FIG. 22D (both taken at time t=180 minutes) that the rate of diffusion was reduced for the apparatus having the GNR-modified separator (utilized in FIGS. 22A-22D).
By reducing the diffusion of such unwanted materials (such as sulfur or lithium polysulfides), the GNR coating better enables the use of such materials in the cathodes and anodes (such as sulfur-based cathodes).
Batteries
Batteries can be utilized that utilize one or more of the improved anodes, cathodes, and separators and their modifications described herein. In some embodiments, the battery includes an anode having a lithiated carbon film, a sulfurized carbon cathode, and a GNR-modified separator. In other embodiments, the battery has just two of the three, with the remaining component being a standard/commercial one (i.e., the anode having a lithiated carbon film and a sulfurized carbon cathode, an anode having a lithiated carbon film and a GNR-modified separator, or a sulfurized carbon cathode and a GNR-modified separator).
FB with GCNT-Li Anode and Sulfurized Carbon Cathode
A full battery (FB) was assembled by combining a GCNT-Li anode [Zhu 2012; Lin 2015] with a sulfurized carbon cathode.
For the graphene-carbon nanotube preparation, the preparation of GCNT was similar to the previously reported methods. [Zhu 2012; Lin 2015]. First, Bernal-stacked multilayer graphene was grown on copper foil (25 μm) using the CVD method as reported elsewhere. [Sun 2012]. The catalysts for CNT growth are deposited by e-beam evaporation over the graphene/Cu foil in the order graphene/Fe (1 nm)/Al₂O₃ (3 nm). The CNT growth was conducted under reduced pressure using a water-assisted CVD method at 750° C. First, the catalyst is activated by using atomic hydrogen (H) generated in situ by H₂ decomposition on the surface of a hot filament (0.25 mm W wire, 10 A, 30 W) for 30 s, under 25 Torr (210 sccm H₂, 2 sccm C₂H₄ and water vapor generated by bubbling 200 sccm of H₂ through ultrapure water). After the activation of the catalyst for 30 s, the pressure is reduced to 8.5 Torr and the growth is carried out for 15 min.
For the electrochemical plating/stripping of Li into/from GCNT, the electrochemical reaction was performed in 2035 coin-type cells using GCNT substrates and Li foil as both counter and reference electrodes. The GCNT substrates are circular with total area of ~2 cm². The electrolyte used was 4 M lithium bis(fluorosulfonyl)imide (LiFSI) (Oakwood Inc.) in 1,2-dimethoxyethane (DME). The LiFSI salt is vacuum dried (<20 Torr) at 100°C for 24 h, and DME was distilled over Na strips. The GCNT substrate was preplated by putting one drop of electrolyte on the surface of GCNT, pressing a Li coin gently against the GCNT and leaving it with the Li coin on top for 3 h. Adding excessive amounts of the electrolyte solution during the pretreatment was found to yield ineffective prelithiation due to poor contact between the GCNT and the Li. After the prelithiation, the GCNT was assembled in a coin cell using the same Li coin used in the prelithiation.
For the sulfurized carbon cathode preparation, the sulfurized carbon cathode was prepared by the decomposition of polyacrylonitrile (PAN) (Sigma-Aldrich, Mw 150 k) in the presence of excess elemental sulfur. PAN, S, and graphene nanoribbons (GNRs) (EMD-Merck) in the mass ratio of 55:11:1 were ground together using a mortar and pestle. (The GNRs improved the conductivity of the final material.) The resulting powder is heated from room temperature to 450°C at a rate of 5°C·min⁻¹ in an argon atmosphere (1 atm). After 6 h, the sulfurized carbon powder was removed and used without purification. The sulfurized carbon powder had approximately 60 wt % S. The sulfurized carbon cathodes were prepared by mixing the powder with carbon black (Black Pearls 2000, Cabot Corp.) and polyvinylidene fluoride (PVDF, Sigma-Aldrich) in a mass proportion of 8:1:1, resulting in a total S content in the electrode of 48 wt %. Typical mass loading was 4-5 mg in 1 cm² electrodes.
For the full battery assembly, the FB was assembled by combining the GCNT-Li anode and the sulfurized carbon cathode using a 4 M LiFSI/DME electrolyte and Celgard K2045 as separator. The electrodes were ~1 cm². The areal capacity of the GCNT-Li was set to match the 30% irreversible capacity loss of the first cycle of the sulfurized carbon cathode.
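The capacity-matching step can be sketched numerically (a minimal sketch under stated assumptions: the helper name is ours, the ~2 mAh cm⁻² areal capacity is the half-cell value quoted later in the text, and the 30% first-cycle loss is from the sentence above):

```python
# Sizing the extra anode areal capacity needed to cover the cathode's
# irreversible first-cycle capacity loss.

def anode_excess_mah_cm2(cathode_mah_cm2: float, irreversible_loss: float) -> float:
    """Areal capacity consumed irreversibly on the first cycle."""
    return cathode_mah_cm2 * irreversible_loss

# ~2 mAh/cm2 cathode with a 30% first-cycle loss:
excess = anode_excess_mah_cm2(2.0, 0.30)  # 0.6 mAh/cm2 of Li consumed
print(excess)
```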
By this means, the full battery of FIG. 21B can be obtained by combining the dendrite-free GCNT-Li anode with a sulfurized carbon cathode having an S content of ~60 wt %. The S content in the cathode was reduced to 48 wt % with the addition of binder and carbon additives. Cathodes based on sulfurized carbon have advantages over elemental sulfur (S₈) cathodes, such as high compatibility with different electrolytes and the absence of Li polysulfide diffusion [Wei 2015]; the latter generally leads to capacity fading over cycling in elemental sulfur cathodes. [Yang 2013].
FIG. 21C shows the cyclic voltammograms (CVs) of the GCNT-Li and the sulfurized carbon cathode (third cycle) half-cells, each with total areal capacity of ~2 mAh cm⁻². The first cycle of the sulfurized carbon cathode half-cell (curve 2301) has a CE of 83%, and the first cycle of the GCNT-Li anode half-cell (curve 2302) has an average CE of 85%, both requiring a small excess of Li from the anode in the FB. The galvanostatic charge/discharge curves (curves 2303-2304, respectively) of the FB in FIG. 21B show that the discharge curve 2304 extends from 2.1 to 1.7 V. The specific capacity based on S mass is very close to that observed in the half-cells.
A pouch FB 2305 based on GCNT-Li/SC is shown in FIG. 22C. The FB can be cycled continuously at different rates from 0.2 to 9 C (curves 2306-2310, respectively), with 1 C corresponding to a one-hour discharge time. A cycle stability over 500 cycles is obtained at 1 C with ~80% capacity retention (curve 2311) and a CE close to 99.9% (curve 2313 in inset 2312).
As shown in FIG. 22E, the self-discharge (SD) was also tested in the FB, in which a stable voltage of 2.15 V can be achieved even after 1 week (curve 2315). (Curve 2314 is the voltage for the first 8 h.) A capacity retention of 94% and 81% is measured after 8 h and 1 week of SD, respectively (shown in curves 2316 and 2317 of inset 2318). Also, the Ragone plot is calculated and presented in FIG. 23I for specific energy and power densities (curves 2319-2320 for the FB (active materials) and the FB (full electrodes), respectively).
At the lowest power density, the energy density of the GCNT-Li/SC full-cell is 1423 Wh kg⁻¹ active materials (752 Wh kg⁻¹ total electrodes), where active materials = Li + S only and total electrodes = GCNT-Li + sulfurized carbon + carbon additives + binder. This is a 3x higher energy density than that seen in Li—S full-cells with respect to the mass of active materials (Li—S). [Jin 2016]. Moreover, the data appear attractive when compared to commercial LIB performances of 310 Wh kg⁻¹ active materials (220 Wh kg⁻¹ total electrodes) [Zhang 2006], where active materials = graphite + LiCoO₂; total electrodes = graphite + LiCoO₂ + carbon additives + binder.
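Those figures can be compared directly; a minimal sketch (variable names are ours; all Wh kg⁻¹ values are the ones quoted in the text):

```python
# Ratio of the reported GCNT-Li/SC full-cell energy densities to the
# quoted commercial Li-ion benchmark, on both mass bases.
gcnt_li_sc = {"active": 1423.0, "electrodes": 752.0}   # Wh/kg
commercial = {"active": 310.0, "electrodes": 220.0}    # Wh/kg

ratio_active = gcnt_li_sc["active"] / commercial["active"]              # ~4.6x
ratio_electrodes = gcnt_li_sc["electrodes"] / commercial["electrodes"]  # ~3.4x
print(ratio_active, ratio_electrodes)
```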
However, a definitive comparison with a commercial cell is difficult at this stage because commercial cells are dual-sided and stacked, designed to minimize the contribution of current collectors and packaging materials.
In a non-optimized device, a volumetric energy density of 234 Wh L⁻¹ (total electrodes) was achieved. There is no dendritic or mossy Li in the full-cell electrodes after 500 cycles. These results represent a significant achievement for a Li battery free of polysulfides and dendrites.
The present invention thus achieves superior energy density due to the near theoretical Li storage capacity and serves as the basis for the demonstrated sulfurized carbon/GCNT-Li full-cell in a high concentration electrolyte to produce a safe, stable, and high-performance battery.
FB with Li-MWCNT Anode and Sulfurized Carbon Cathode
A full battery (FB) was assembled by combining a Li-MWCNT anode with a sulfurized carbon cathode.
For the MWCNT film preparation, free-standing carbon nanotube films were prepared by dispersing MWCNTs (NTI, C-agglomerate, 800 nm diameter) in N-methylpyrrolidone (NMP). The MWCNTs were used as received without further purification. MWCNTs (68 mg) were dispersed in NMP (250 mL) using tip sonication, and the as-prepared dispersion was vacuum filtered through a porous Al membrane (9 cm diameter). The MWCNTs were trapped on the surface, forming a MWCNT film. The resulting film was rinsed with methanol and dried overnight at 70 °C. The Al membrane was later dissolved using an aqueous etching solution of HF (2.5 v/v %) and HCl (2.5 v/v %) in a round-bottom flask. After the Al was completely dissolved, the MWCNT film was removed from the solution, washed with water and ethanol, and dried overnight at 70 °C. The porous Al membrane itself was prepared beforehand by etching commercial Al foil (60 µm thickness, Fisher Scientific Inc.) in the aqueous etching solution mentioned above for approximately 10 min.
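As a worked check of the resulting film's areal mass loading (68 mg of MWCNTs spread over a 9 cm diameter membrane):

```python
import math

# Areal loading of the filtered MWCNT film: total MWCNT mass divided by
# the membrane's filtration area.
mass_mg = 68.0
diameter_cm = 9.0
area_cm2 = math.pi * (diameter_cm / 2.0) ** 2  # ~63.6 cm^2
loading = mass_mg / area_cm2                   # mg per cm^2
print(f"{loading:.2f} mg/cm^2")                # ~1.07 mg/cm^2
```

This ~1 mg cm⁻² carbon layer is thin compared with the 3-5 mg cm⁻² cathode loadings reported below.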
For the lithiated-MWCNT preparation, Li metal foils from MTI Corporation (1.6 cm diameter chips, 230 µm thickness) were used, or foils were extracted from an Ultimate Lithium AA battery (Duracell Inc.; 25×3.5 cm, 130 µm thick). The Li foils were cleaned before use by scraping the surface until a shiny metallic surface appeared. The lithiated MWCNT film (Li-MWCNT) was produced by placing the MWCNT film between two Li foils wetted by 50 μL of 4 M lithium bis(fluorosulfonyl)imide (LiFSI) in dimethoxyethane (DME). The lithiation process took approximately 10 min and could be visualized by the reddish color acquired by the CNT film.
For the sulfurized carbon preparation, the sulfurized carbon powder was prepared by grinding polyacrylonitrile (PAN) (Sigma-Aldrich, Mw 150 k), elemental sulfur (S₈) and graphene nanoribbons (GNRs, EMD-Merck) in a mass ratio of 55:11:1 (S:PAN:GNR, respectively). The mixture was heated at 450 °C for 6 h under an argon atmosphere (1 atm) at a heating rate of 5 °C min⁻¹. After the heat treatment, the resulting sulfurized carbon powder was used without further purification. A content of ~60 wt % S was measured in the sulfurized carbon powder by thermogravimetric analysis (TGA). The cathode slurry was prepared by mixing the sulfurized carbon, carbon black (Black Pearls 2000, Cabot Corp.) and polyvinylidene fluoride (PVDF, Sigma-Aldrich) in a mass ratio of 8:1:1 in NMP. The slurry was used to coat stainless steel foils (30 μm thick, 40 mg cm⁻²) or carbon-coated Al foils (10 μm, 5.5 mg cm⁻², MTI Corp.). The typical mass loading of sulfurized carbon cathodes was 3-5 mg cm⁻², with a final S content of 47-50 wt %.
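The final sulfur content follows directly from the stated formulation, as a quick consistency check shows: a powder that is ~60 wt % S, diluted to 80 wt % of the electrode by the 8:1:1 slurry ratio, gives ~48 wt % S.

```python
# Consistency check: sulfur fraction in the finished cathode.
# Sulfurized carbon (SC) powder: ~60 wt% S by TGA.
# Electrode composition: SC : carbon black : PVDF = 8 : 1 : 1 by mass.
s_in_sc = 0.60
sc, cb, pvdf = 8.0, 1.0, 1.0
sc_fraction = sc / (sc + cb + pvdf)     # 0.8 of the electrode is SC
s_in_electrode = sc_fraction * s_in_sc  # 0.48
assert 0.47 <= s_in_electrode <= 0.50   # matches the stated 47-50 wt%
```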
FIG. 24A shows the fabrication process 2404 of the Li-MWCNT electrode, which includes wetting the MWCNT film 2402 with a highly concentrated electrolyte 2403 and pressing it against the Li foil 2401 to dope the carbon nanotubes. The thin film of MWCNTs acquired a red color after the lithiation reaction, attributed to the Li-doped MWCNTs. The reaction took at least 10 min to complete, and the red color was only observable with a high electrolyte concentration. Such Li foil modification is scalable. The doped MWCNTs act as a protective layer for the Li foil, functioning as an enhanced solid electrolyte interphase (SEI) layer.
FIG. 24B is a photograph of the MWCNT film after being doped with Li (dark red color). Again, the MWCNT film became red as a result of the lithiation reaction; the red color appears only where a Li foil surface is available.
FIG. 24C is a scheme of the spontaneous lithiation of MWCNTs and the corresponding redox reaction. Energy diagrams demonstrate that the driving force for the reduction of MWCNTs arises from the difference in Fermi energy levels relative to vacuum (work functions) of Li metal (~2 eV vs. vacuum) and MWCNTs (~5 eV vs. vacuum). A voltage develops because of this difference in Fermi levels: when the two materials are in contact, electrons flow from the material with the higher Fermi level (Li, ~2 eV) until the two Fermi levels equilibrate. The highly concentrated electrolyte enabled maximum lithiation; the same reaction was not possible using a 1 M electrolyte or pure (dry) Li foil.
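As a rough numerical illustration of this energy-diagram argument, using the two work-function values stated in the text (the variable names are ours):

```python
# Driving force for spontaneous lithiation, estimated from the work
# functions quoted in the text: ~2 eV for Li metal and ~5 eV for MWCNTs
# (both vs. vacuum). Electrons flow from the higher Fermi level (Li)
# to the lower one (MWCNT) until the levels equilibrate.
phi_li_ev = 2.0      # Li work function, value as stated in the text
phi_mwcnt_ev = 5.0   # MWCNT work function, value as stated in the text

driving_voltage = phi_mwcnt_ev - phi_li_ev  # eV per electron -> volts
assert driving_voltage == 3.0               # ~3 V available to drive doping
```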
FIGS. 25A-25C show the morphology of the Li-MWCNT film. FIG. 25A shows the morphology of the pristine MWCNT film characterized by SEM. FIGS. 25B-25C show the morphology of the Li-doped MWCNT film characterized by SEM. The mat structure of the MWCNT thin film was unaffected by lithiation, although the individual MWCNTs looked swollen and the lithiation created a more compact MWCNT layer.
FIGS. 26A-26B are graphs that show the electron paramagnetic resonance (EPR) and Raman spectroscopy, respectively, of MWCNT and Li-MWCNT. The Raman spectra (532 nm excitation) compare the vibrational spectra of the pristine MWCNTs, the surface of Li metal, and the resulting Li-MWCNT.
For FIG. 26A, the EPR measurement elucidated the nature of the Li-doped MWCNTs. The EPR of pristine MWCNTs (curve 2601) indicated high purity of the MWCNT sample (no signals observed), while the EPR of Li-doped MWCNTs (curve 2602) produced a high-intensity peak. The g factor of 1.988 indicates the formation of stable radicals induced by MWCNT reduction; because this value is far from that expected for free-electron systems (g = 2.0023), the electrons in Li-doped MWCNTs may occupy more localized electron states.
For FIG. 26B, the Raman spectrum of Li-doped MWCNTs (curve 2603) showed low intensity of the sp² carbon modes (D, G, 2D), indicating that the band structure of the MWCNTs was altered by Li-doped MWCNT formation. Curves 2604 and 2605 are for Li metal and pristine MWCNTs, respectively. The Raman spectra corroborate the EPR and XPS data.
As discussed above, FIGS. 5-6 show the cycling performance at a current density of 2 mA cm⁻² for a total capacity of 4 mAh cm⁻², comparing bare Li and Li-MWCNT symmetric cells. In FIG. 5, the voltage profile of the bare Li cell shows fluctuations that can be attributed to possible dendrite-induced soft short-circuiting. In FIG. 6, the voltage plateaus of the Li-MWCNT symmetric cell are more uniform and stable, which indicates the suppression of dendrite formation.
FIG. 7 is a graph that shows rate performance (current densities from 1 to 5 mA cm⁻²) for a total capacity of 2 mAh cm⁻². Curves 701-702 are for Li-MWCNT and bare Li, respectively. The symmetric cell of bare Li showed a larger Li stripping/plating overpotential than the Li-MWCNT cell at the different current densities. After returning to 2 mA cm⁻², the bare Li cell overpotential was ~2.7 times higher, whereas in the Li-MWCNT cell the overpotential was approximately unchanged.
FIG. 27 shows cycling of the Li-MWCNT/SC ("rLi/rSC") battery at different current densities (cycle stability), with curves 2701-2704 corresponding to rLi/rSC at 0.4, 1, 2, and 3 C (C/D), respectively. Charge and discharge were performed at the same rate; for example, "rLi/rSC at 1 C (C/D)" means that charge and discharge were each performed at a current density enabling a full charge or discharge in approximately 1 h.
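The C-rate convention used here can be made explicit with a short sketch (function names are ours, for illustration only):

```python
# C-rate bookkeeping as used in the text: at "1 C (C/D)" the cell is fully
# charged or discharged in ~1 h, so the current numerically equals the
# nominal capacity.

def c_rate_current(capacity_mah, c_rate):
    """Current in mA needed to (dis)charge `capacity_mah` at `c_rate`."""
    return capacity_mah * c_rate

def hours_per_half_cycle(c_rate):
    """Approximate time in hours for one full charge or discharge."""
    return 1.0 / c_rate

assert c_rate_current(2.0, 1.0) == 2.0   # 2 mAh cell at 1 C -> 2 mA
assert c_rate_current(2.0, 3.0) == 6.0   # 3 C -> 6 mA, ~20 min per half-cycle
assert hours_per_half_cycle(0.4) == 2.5  # 0.4 C -> 2.5 h per half-cycle
```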
FIGS. 28A-28D show an alternated-rate test. FIG. 28A is a graph that shows a single discharge of a rLi/rSC full battery in which the current density was continuously alternated: starting at a 0.4 C current density for 10 minutes, then discharging for 10 s at a higher current density (20, 40 or 60 times the original current density for curves 2801-2803, respectively), then returning to the original 0.4 C rate until the battery reached the lower cut-off limit (1 V). The battery was cycled (charge and discharge) at 0.4 C three times between the alternated-rate tests. FIGS. 28B-28D are graphs that show the variations in current vs. time (20, 40, and 60 times the original current density, respectively).
FIGS. 16-17 and 29A-29H are graphs that show the electrochemical performance of a full cell with Li-MWCNT as the anode and sulfurized carbon as the cathode.
FIG. 16 is a graph that shows galvanostatic charge/discharge curves for the full cell. FIG. 16 shows a high reversible capacity (~1000 mAh g⁻¹, based on S mass), a low irreversible capacity at the first cycle (~30%), and an average flat discharge voltage at 1.9 V; only the first cycle presented a lower voltage (~1.5 V), which is related to the activation of sulfur species in the sulfurized carbon cathode. The charge extends up to 3 V (flat voltage at 2.3 V).
FIG. 17 is a graph that shows the rate performance (from 0.2 C to 60 C) of the full cell. Curves 1701-1705 are for 0.2 C, 0.6 C, 3 C, 13 C, and 60 C, respectively. This showed that rates from 0.2 to 60 C were possible and that the lower rates could be recovered after the high-rate test. FIG. 17 also showed long-term stability (i.e., the battery was still running at the time of final submission).
FIG. 29A is a graph that shows the Ragone plot based on the cathode and anode+cathode weight. The energy and power density were calculated in terms of the mass of cathode active material (S mass) (curve 2901), the mass of the sulfurized carbon electrode (curve 2902), and the mass of both electrodes (anode+cathode, including current collectors) (curve 2903). The battery had high power and energy density capability (a projection of 340 Wh kg⁻¹ for the full cell, assuming both sides of the cathode current collector are coated). The full mass (including current collectors) was considered in the calculations.
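A Ragone-plot point is derived from a galvanostatic discharge by integrating V dQ for the energy and dividing by the discharge time for the power. The sketch below illustrates the calculation on a synthetic, linearly sloping discharge curve (not the measured data; the mass value is a placeholder).

```python
import numpy as np

# Synthetic 1 h galvanostatic discharge at constant current.
t_h = np.linspace(0.0, 1.0, 200)   # time in hours
current_ma = 2.0                   # mA (constant)
v = 2.1 - 0.4 * t_h                # V, toy sloping plateau
q_mah = current_ma * t_h           # cumulative charge in mAh

# Energy = integral of V dQ (trapezoidal rule), in mWh.
energy_mwh = float(np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(q_mah)))

mass_g = 0.004                     # g of active material (hypothetical)
e_density = energy_mwh / mass_g    # mWh/g, numerically equal to Wh/kg
p_density = e_density / t_h[-1]    # (Wh/kg) / h = W/kg
```

Running at a higher C-rate shortens `t_h[-1]`, raising the power density while (for a real cell) lowering the delivered energy, which traces out the Ragone curve.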
FIG. 29B is a graph that shows the electrochemical impedance spectrum (with a magnified portion of curve 2904 from 0 to 48 Ω shown in inset 2905). An equivalent circuit was fitted to the experimental data, revealing a low internal resistance (~8 Ω) and a low charge-transfer resistance (~13 Ω).
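The patent does not specify the equivalent circuit used; a common minimal choice for such spectra is a Randles-type model, a series resistance R_s plus a charge-transfer resistance R_ct in parallel with a double-layer capacitance. The sketch below uses the rounded resistance values from the text and an assumed capacitance.

```python
import numpy as np

# Minimal Randles-type impedance model (our assumption, not necessarily the
# circuit fitted in the patent): Z(w) = R_s + R_ct / (1 + j*w*R_ct*C_dl).
def z_model(freq_hz, r_s=8.0, r_ct=13.0, c_dl=1e-5):
    w = 2.0 * np.pi * freq_hz
    z_par = r_ct / (1.0 + 1j * w * r_ct * c_dl)  # R_ct parallel with C_dl
    return r_s + z_par

# Limiting behavior: at high frequency only R_s remains; at low frequency
# the real part approaches R_s + R_ct, i.e. ~8 + ~13 = ~21 Ohm.
assert abs(z_model(1e9).real - 8.0) < 0.1
assert abs(z_model(1e-3).real - 21.0) < 0.1
```

On a Nyquist plot this model traces a semicircle of diameter R_ct offset by R_s, matching the ~8 Ω intercept and ~13 Ω charge-transfer resistance quoted above.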
While embodiments of the invention have been shown and described, modifications thereof can be made by one skilled in the art without departing from the spirit and teachings of the invention. The embodiments described and the examples provided herein are exemplary only, and are not intended to be limiting. Many variations and modifications of the invention disclosed herein are possible and are within the scope of the invention. Accordingly, other embodiments are within the scope of the following claims. The scope of protection is not limited by the description set out above.
The disclosures of all patents, patent applications, and publications cited herein are hereby incorporated herein by reference in their entirety, to the extent that they provide exemplary, procedural, or other details supplementary to those set forth herein.
Concentrations, amounts, and other numerical data may be presented herein in a range format. It is to be understood that such range format is used merely for convenience and brevity and should be interpreted flexibly to include not only the numerical values explicitly recited as the limits of the range, but also to include all the individual numerical values or sub-ranges encompassed within that range as if each numerical value and sub-range is explicitly disclosed. For example, a numerical range of approximately 1 to approximately 4.5 should be interpreted to include not only the explicitly recited limits of 1 to approximately 4.5, but also to include individual numerals such as 2, 3, 4, and sub-ranges such as 1 to 3, 2 to 4, etc. The same principle applies to ranges reciting only one numerical value, such as “less than approximately 4.5,” which should be interpreted to include all of the above-recited values and ranges. Further, such an interpretation should apply regardless of the breadth of the range or the context in which the range is described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood to one of ordinary skill in the art to which the presently disclosed subject matter belongs. Although any methods, devices, and materials similar or equivalent to those described herein can be used in the practice or testing of the presently disclosed subject matter, representative methods, devices, and materials are now described.
Following long-standing patent law convention, the terms “a” and “an” mean “one or more” when used in this application, including the claims.
Unless otherwise indicated, all numbers expressing quantities of ingredients, reaction conditions, and so forth used in the specification and claims are to be understood as being modified in all instances by the term “about.” Accordingly, unless indicated to the contrary, the numerical parameters set forth in this specification and attached claims are approximations that can vary depending upon the desired properties sought to be obtained by the presently disclosed subject matter.
As used herein, the term “about,” when referring to a value or to an amount of mass, weight, time, volume, concentration or percentage is meant to encompass variations of in some embodiments ±20%, in some embodiments ±10%, in some embodiments ±5%, in some embodiments ±1%, in some embodiments ±0.5%, and in some embodiments ±0.1% from the specified amount, as such variations are appropriate to perform the disclosed method.
As used herein, the term “and/or” when used in the context of a listing of entities, refers to the entities being present singly or in combination. Thus, for example, the phrase “A, B, C, and/or D” includes A, B, C, and D individually, but also includes any and all combinations and subcombinations of A, B, C, and D.
REFERENCES
Tour J. M. et al., U.S. Pat. No. 9,455,094, “Graphene-Carbon Nanotube Hybrid Materials And Use As Electrodes,” issued Sep. 27, 2016. (“the ‘094 Tour patent”).
Tour J. M. et al., Vertically Aligned Carbon Nanotube Arrays As Electrodes, PCT Int’l Patent Publ. No. WO/2017/011052, filed Apr. 25, 2016 (“Tour PCT ‘052 application”).
Tour J. M. et al., High Surface Area Porous Carbon Materials As Electrodes, PCT Int’l Patent Publ. No. WO/2017/062950, filed Oct. 8, 2015 (“Tour PCT ‘950 application”).
Tour J. M. et al., Graphene-Carbon Nanotube Hybrid Materials And Use As Electrodes, U.S. Patent Publ. No. 20140313636, published Oct. 23, 2014 (“Tour ‘636 application”).
Eustace, D. J. et al., Li/TiS\(_2\) Current Producing System, U.S. Pat. No. 4,416,960, issued Nov. 22, 1983 (“Eustace ‘960 patent”).
Armand, M. et al., Building Better Batteries. Nature 2008, 451 (7179), 652-657 (“Armand 2008”).
Aurbach, D. et al., A Short Review of Failure Mechanisms of Lithium Metal and Lithiated Graphite Anodes in Liquid Electrolyte Solutions. Solid State Ionics 2002, 148, 405-416 (“Aurbach 2002”).
Bai, P. et al., Transition of Lithium Growth Mechanisms in Liquid Electrolytes. Energy Environ. Sci. 2016, 9, 3221-3229 (“Bai 2016”).
Bates, J. B. et al., Fabrication and Characterization of Amorphous Lithium Electrolyte Thin Films and Rechargeable Thin-Film Batteries. J. Power Sources 1993, 43 (1-3), 103-110 (“Bates 1993”).
Bessenhard, J. O. et al., Inorganic Film-Forming Electrolyte Additives Improving the Cycling Behaviour of Metallic Lithium Electrodes and the Self-Discharge of Carbon-Lithium Electrodes. J. Power Sources 1993, 44 (1-3), 413-420 (“Bessenhard 1993”).
Boucher, R., Batteries: A Stable Lithium Metal Interface. Nat. Nanotechnol. 2014, 9, 572-573 (“Boucher 2014”).
Bruce, P. G. et al., Li—O\(_2\) and Li—S Batteries with High Energy Storage. Nat. Mater. 2011, 11, 19-29 (“Bruce 2011”).
Claye, A. S. et al., Solid-State Electrochemistry of the Li Single-Wall Carbon Nanotube System. J. Electrochem. Soc. 2000, 147, 2845-2852 (“Claye 2000”).
Cohen, Y. S. et al., Micromorphological Studies of Lithium Electrodes in Alkyl Carbonate Solutions Using In Situ Atomic Force Microscopy. *J. Phys. Chem. B* 2000, 104 (51), 12282-12291 ("Cohen 2000").
Crowther, O. et al., Effect of Electrolyte Composition on Lithium Dendrite Growth. *J. Electrochem. Soc.* 2008, 155, A806-A811 ("Crowther 2008").
Ding, F. et al., Effects of Carbonate Solvents and Lithium Salts on Morphology and Coulombic Efficiency of Lithium Electrode. *J. Electrochem. Soc.* 2013, 160 (10), A1894-A1901 ("Ding I 2013").
Ding, F. et al., Dendrite-Free Lithium Deposition via Self-Healing Electrostatic Shield Mechanism. *J. Am. Chem. Soc.* 2013, 135 (11), 4450-4456 ("Ding II 2013").
Dresselhaus, M. S. et al., Raman Spectroscopy on Isolated Single Wall Carbon Nanotubes. *Carbon* 2002, 40, 2043-2061 ("Dresselhaus 2002").
Dunn, B.; Kamath, H. et al., Electrical Energy Storage for the Grid: A Battery of Choices. *Science* 2011, 334 (6058), 928-935 ("Dunn 2011").
Ebbesen, T. W. et al., Electrical Conductivity of Individual Carbon Nanotubes. *Nature* 1996, 382, 54-56 ("Ebbesen 1996").
Evarts, E. C., Lithium Batteries: To the Limits of Lithium. *Nature* 2015, 526, S93-S95 ("Evarts 2015").
Goodenough, J. B. et al., The Li-Ion Rechargeable Battery: A Perspective. *J. Am. Chem. Soc.* 2013, 135 (4), 1167-1176 ("Goodenough 2013").
Girishkumar, G. et al., Lithium-Air Battery: Promise and Challenges. *J. Phys. Chem. Lett.* 2010, 1 (14), 2193-2203 ("Girishkumar 2010").
Hao, X. et al., Ultrastrong Polyoxazole NanoFiber Membranes for Dendrite-Proof and Heat-Resistant Battery Separators. *Nano Lett.* 2016, 16, 2981-2987 ("Hao 2016").
Hirai, T. et al., Effect of Additives on Lithium Cycling Efficiency. *J. Electrochem. Soc.* 1994, 141, 2300-2305 ("Hirai 1994").
Jin, F. et al., Efficient Activation of High-Loading Sulfur by Small CNTs Confined Inside a Large CNT for High-Capacity and High-Rate Lithium-Sulfur Batteries. *Nano Lett.* 2015, acs.nanolett.5b04105 ("Jin 2015").
Jin, S. et al., Covalently Connected Carbon Nanostructures for Current Collection in Both the Cathode and Anode of Li–S Batteries. *Adv. Mater.* 2016, 28, 9094-9102 ("Jin 2016").
Kim, J. S. et al., Controlled Lithium Dendrite Growth by a Synergistic Effect of Multilayered Graphene Coating and an Electrolyte Additive. *Chem. Mater.* 2015, 27 (8), 2780-2787 ("Kim 2015").
Kozen, A. C.; et al., Next-Generation Lithium Metal Anode Engineering via Atomic Layer Deposition. *ACS Nano* 2015, 9 (6), 5884-5892 ("Kozen 2015").
Landi, B. J. et al., Carbon Nanofibres for Lithium Ion Batteries. *Energy Environ. Sci.* 2009, 2, 638-654 ("Landi 2009").
Landi, B. J. et al., Lithium Ion Capacity of Single Wall Carbon Nanotube Paper Electrodes. *J. Phys. Chem. C* 2008, 112, 7509-7515 ("Landi 2008").
Lee, H.; et al., Simple Composite Protective Layer Coating That Enhances the Cycling Stability of Lithium Metal Batteries. *J. Power Sources* 2015, 284, 103-108 ("Lee 2015").
Li, F. et al., Identification of the Constituents of Double-Walled Carbon Nanotubes Using Raman Spectra Taken with Different Laser-Excitation Energies. *J. Mater. Res.* 2003, 18, 1251-1258 ("Li 2003").
Li, N. W. et al., An Artificial Solid Electrolyte Interphase Layer for Stable Lithium Metal Anodes. *Adv. Mater.* 2016, 28 (9), 1853-1858 ("Li 2016").
Li, W., The Synergetic Effect of Lithium Polysulfide and Lithium Nitrate to Prevent Lithium Dendrite Growth. *Nat. Commun.* 2015, 6 (May), 7436 ("Li 2015").
Liang, Z. et al., Composite Lithium Metal Anode by Melt Infusion of Lithium into a 3D Conducting Scaffold with Lithophilic Coating. *Proc. Natl. Acad. Sci. U.S.A* 2016, 113, 2862-2867 ("Liang 2016").
Lin, D. et al., Reviving the Lithium Metal Anode for High-Energy Batteries. *Nat. Nanotechnol.* 2017, 12 (3), 194-206 ("Lin I 2017").
Lin, D. et al., Three-Dimensional Stable Lithium Metal Anode with Nanoscale Lithium Islands Embedded in Ionically Conductive Solid Matrix. *Proc. Natl. Acad. Sci. U.S.A.* 2017, 114, 4613-4618 ("Lin II 2017").
Lin, D. et al., Layered Reduced Graphene Oxide with Nanoscale Interlayer Gaps as a Stable Host for Lithium Metal Anodes. *Nat. Nanotechnol.* 2016, 11, 626-632 ("Lin 2016").
Lin, J. et al., 3-Dimensional Graphene Carbon Nanotube Carpet-Based Microsupercapacitors with High Electrochemical Performance. *Nano Lett.* 2013, 13, 72-78 ("Lin 2015").
Liu, Y. et al., An Artificial Solid Electrolyte Interphase with High Li-Ion Conductivity, Mechanical Strength, and Flexibility for Stable Lithium Metal Anodes. *Adv. Mater.* 2017, 29, 1605531 ("Liu 2017").
Liu, Y. et al., Lithium-Coated Polymeric Matrix as a Minimum Volume-Change and Dendrite-Free Lithium Metal Anode. *Nat. Commun.* 2016, 7, 10992 ("Liu 2016").
Lu, Y. et al., Stable Lithium Electrodeposition in Liquid and Nanoporous Solid Electrolytes. *Nat. Mater.* 2014, 13, 961-969 ("Lu 2014").
Mahmood, N. et al., Nanostructured Anode Materials for Lithium Ion Batteries: Progress, Challenge and Perspective. *Adv. Energy Mater.* 2016, 6, 1600374 ("Mahmood 2016").
Manthiram, A. et al., Lithium-Sulfur Batteries: Progress and Prospects. *Adv. Mater.* 2015, 27 (12), 1980-2006 ("Manthiram 2015").
Nordgren, R. et al., The Rechargeable Revolution: A Better Battery. *Nature* 2014, 507, 26-28 ("Nordgren 2014").
Osaka, T., Surface Characterization of Electrodeposited Lithium Anode with Enhanced Cycleability Obtained by CO₂ Addition. *J. Electrochem. Soc.* 1997, 144 (5), 1709 ("Osaka 1997").
Peigney, A. et al., Specific Surface Area of Carbon Nanotubes and Bundles of Carbon Nanotubes. *Carbon* 2001, 39, 507-514 ("Peigney 2001").
Qian, J. et al., High Rate and Stable Cycling of Lithium Metal Anode. *Nat. Commun.* 2015, 6, 6362 ("Qian 2015").
Ren, Z. F. et al., Synthesis of Large Arrays of Well-Aligned Carbon Nanotubes on Glass. *Science* 1998, 282, 1105-1107 ("Ren 1998").
Roy, P. et al., Nanostructured Anode Materials for Lithium Ion Batteries. *J. Mater. Chem. A* 2015, 3, 2454-2484 ("Roy 2015").
Salvatierra, R. V. et al., Graphene Carbon Nanotube Carpets Grown Using Binary Catalysts for High-Performance Lithium-Ion Capacitors. *ACS Nano* 2017, 11, 2724-2733 ("Salvatierra 2017").
Sun, Z. et al., Large-Area Bernal-Stacked Bi-, Tri-, and Tetralayer Graphene. *ACS Nano* 2012, 6, 9790-9796 ("Sun 2012").
Thess, A. et al., Crystalline Ropes of Metallic Carbon Nanotubes. *Science* 1996, 273, 483-487 (“Thess 1996”).
Tung, S.-O. et al., A Dendrite-Suppressing Composite Ion Conductor from Aramid Nanofibres. *Nat. Commun.* 2015, 6, 6152 (“Tung 2015”).
Wang, C. et al., Suppression of Lithium-Dendrite Formation by Using LAGP-PEO (LiTFSI) Composite Solid Electrolyte and Lithium Metal Anode Modified by PEO (LiTFSI) in All-Solid-State Lithium Batteries. *ACS Appl. Mater. Interfaces* 2017, acsami.7b00336 ("Wang 2017").
Wei, S. et al., Metal-Sulfur Battery Cathodes Based on Pan-Sulfur Composites. *J. Am. Chem. Soc.* 2015, 137, 12143-12152 (“Wei 2015”).
Whittingham, M. S., History, Evolution, and Future Status of Energy Storage. *Proc. IEEE* 2012, 100 (Special Centennial Issue), 1518-1534 (“Whittingham 2012”).
Xu, W. et al., Lithium Metal Anodes for Rechargeable Batteries. *Energy Environ. Sci.* 2014, 7 (2), 513-537 (“Xu 2014”).
Yan, K. et al., Selective Deposition and Stable Encapsulation of Lithium through Heterogeneous Seeded Growth. *Nat. Energy* 2016, 1, 16010 (“Yan 2016”).
Yang, C.-P. et al., Accommodating Lithium into 3D Current Collectors with a Submicron Skeleton Towards Long-Life Lithium Metal Anodes. *Nat. Commun.* 2015, 6, 8058 (“Yang 2015”).
Yang, Y. et al., Nanostructured Sulfur Cathodes. *Chem. Soc. Rev.* 2013, 42, 3018-3032 (“Yang 2013”).
Yazami, R. et al., Reversible Graphite-Lithium Negative Electrode for Electrochemical Generators. *J. Power Sources* 1983, 9, 365-371 (“Yazami 1983”).
Zhang, H. et al., Three-Dimensional Bicontinuous Ultrafast-Charge and -Discharge Bulk Battery Electrodes. *Nat. Nanotechnol.* 2011, 6, 277-281 (“Zhang 2011”).
Zhang, J.-G. et al., *Lithium Metal Anodes and Rechargeable Lithium Metal Batteries*, 1st ed.; Hull, R. et al., Eds.; Springer International Publishing, 2017 (“J. Zhang 2017”).
Zhang, R. et al., Conductive Nanostructured Scaffolds Reducing Low Local Current Density to Inhibit Lithium Dendrite Growth. *Adv. Mater.* 2016, 28, 2155-2162 (“Zhang 2016”).
Zhang, S. S. et al., Charge and Discharge Characteristics of a Commercial LiCoO$_2$—Based 18650 Li-Ion Battery. *J. Power Sources* 2006, 160, 1403-1409 (“Zhang 2006”).
Zhang, Y. et al., High-Capacity, Low-Tortuosity, and Channel-Guided Lithium Metal Anode. *Proc. Natl. Acad. Sci. U.S.A.* 2017, 114, 3584-3589 (“Y. Zhang I 2017”).
Zhang, Y. et al., A Carbon-Based 3D Current Collector with Sulfurized Protected Lithium Metal Anode. *Nano Res.* 2017, 10, 1356-1365 (“Y. Zhang II 2017”).
Zheng, G. et al., Interconnected Hollow Carbon Nanospheres for Stable Lithium Metal Anodes. *Nat. Nanotechnol.* 2014, 9, 618-623 ("Zheng 2014").
Zhou, W. et al., Plating a Dendrite-Free Lithium Anode with a Polymer/Ceramic/Polymer Sandwich Electrolyte. *J. Am. Chem. Soc.* 2016, 138 (30), 9385-9388 (“Zhou 2016”).
Zhu, Y. et al., A Seamless Three-Dimensional Carbon Nanotube Graphene Hybrid Material. *Nat. Commun.* 2012, 3, 1225 (“Zhu 2012”).
What is claimed is:
1. A sulfurized carbon cathode comprising:
(a) sulfur;
(b) carbon, wherein the carbon is part of a seamless hybrid of carbon nanotubes grown from a graphene layer; and
(c) a thermally treated polymer, wherein
(i) the cathode lacks elemental sulfur; and
(ii) all of the sulfur in the sulfurized carbon cathode is directly or indirectly covalently bound to the carbon.
2. The sulfurized carbon cathode of claim 1, wherein the thermally treated polymer comprises thermally treated polyacrylonitrile.
3. The sulfurized carbon cathode of claim 1, wherein
(a) the cathode comprises a carbon additive that is a conductive filler, and
(b) the conductive filler is selected from the group consisting of carbon black, graphene nanoribbons, and combinations thereof.
4. The sulfurized carbon cathode of claim 1, wherein
(a) the majority of the sulfur in the sulfurized carbon cathode is small sulfur chains directly covalently bound to sp$^2$ carbon lattices of the carbon, and
(b) the small sulfur chains comprise no more than three sulfur atoms. |
X-Pipeline: an analysis package for autonomous gravitational-wave burst searches
Patrick J Sutton$^{1,9}$, Gareth Jones$^1$, Shourov Chatterji$^2$, Peter Kalmus$^3$, Isabel Leonor$^4$, Stephen Poprocki$^5$, Jameson Rollins$^6$, Antony Searle$^3$, Leo Stein$^2$, Massimo Tinto$^7$ and Michal Was$^8$
$^1$ School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA, UK
$^2$ Massachusetts Institute of Technology, Cambridge, MA 02139, USA
$^3$ California Institute of Technology, Pasadena, CA 91125, USA
$^4$ University of Oregon, Eugene, OR 97403, USA
$^5$ Department of Physics, Cornell University, Ithaca, NY 14853, USA
$^6$ Columbia University, New York, NY 10027, USA
$^7$ Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA
$^8$ LAL, Université Paris-Sud, CNRS/IN2P3, Orsay, France
E-mail: firstname.lastname@example.org
*New Journal of Physics* **12** (2010) 053034 (32pp)
Received 15 September 2009
Published 21 May 2010
Online at http://www.njp.org/
doi:10.1088/1367-2630/12/5/053034
**Abstract.** Autonomous gravitational-wave searches—fully automated analyses of data that run without human intervention or assistance—are desirable for a number of reasons. They are necessary for the rapid identification of gravitational-wave burst candidates, which in turn will allow for follow-up observations by other observatories and the maximum exploitation of their scientific potential. A fully automated analysis would also circumvent the traditional ‘by hand’ setup and tuning of burst searches that is both laborious and time consuming. We demonstrate a fully automated search with X-Pipeline, a software package for the coherent analysis of data from networks of interferometers for detecting bursts associated with gamma-ray bursts (GRBs) and other astrophysical triggers. We discuss the methods X-Pipeline uses for automated running, including background estimation, efficiency studies, unbiased optimal tuning of search thresholds and prediction of upper limits. These are all done automatically.
$^9$ Author to whom any correspondence should be addressed.
via Monte Carlo with multiple independent data samples and without requiring human intervention. As a demonstration of the power of this approach, we apply X-Pipeline to LIGO data to compute the sensitivity to gravitational-wave emission associated with GRB 031108. We find that X-Pipeline is sensitive to signals approximately a factor of 2 weaker in amplitude than those detectable by the cross-correlation technique used in LIGO searches to date. We conclude with comments on the status of X-Pipeline as a fully autonomous, near-real-time-triggered burst search in the current LSC-Virgo Science Run.
Contents
1. Introduction .................................................. 2
2. Coherent analysis for GWB detection ......................... 4
2.1. Formulation ........................................... 4
2.2. Standard likelihood .................................. 6
2.3. Projection operators and the null energy ............. 8
2.4. Dominant polarization frame and other likelihoods .... 8
2.5. Statistical properties ................................ 10
2.6. Incoherent energies and background rejection ......... 12
3. Overview of X-Pipeline .................................... 14
3.1. Preliminaries ....................................... 14
3.2. Time–frequency maps ................................. 15
3.3. Clustering and event identification .................. 16
3.4. Glitch rejection ..................................... 17
3.5. Triggered search: tuning and upper limits .......... 19
4. GRB 031108 .................................................. 23
4.1. Analysis .............................................. 24
5. Autonomous running ....................................... 29
5.1. Automated launch of X-Pipeline by GCN triggers .... 29
6. Summary ..................................................... 30
Acknowledgments ............................................... 30
References ...................................................... 31
1. Introduction
Gravitational-wave bursts (GWBs) are one of the most interesting classes of signals being sought by the new generation of gravitational-wave detectors. Possible sources include core-collapse supernovae [1], the merger of binaries containing black holes or neutron stars [2], gamma-ray bursts [3] and other relativistic systems; see [4] for a brief overview. These systems typically involve matter at neutron-star densities and very strong gravitational fields, making GWBs potentially rich sources of information on relativistic astrophysics.
The maximum exploitation of a GWB detection would occur when the system is observed by other ‘messengers’ besides gravitational waves, such as optical light, gamma rays, or neutrinos [5]. Indeed, the first detection of a GWB might rely on independent confirmation by
other observatories, and efforts are under way to develop collaborations between gravitational-wave detectors, electromagnetic telescopes and neutrino observatories (see for example [6, 7]). The rapid and confident identification of candidate GWBs by gravitational-wave detectors will be vital for these efforts.
Unfortunately, the analysis of gravitational-wave data tends to be a slow process, with a typical latency of several years between the collection of the data and the publication of results. For example, searches for gravitational-wave transients in the first year (2005–2006) of the LIGO Science Run 5/Virgo Science Run 1 (S5-VSR1) have only recently been published [8, 9]. One of the fastest such analyses has been the search for a gravitational-wave signal associated with GRB 070201 [10], which was published 9 months after the event.
The rapid analysis of gravitational-wave data is not trivial, particularly given the non-stationary nature of the background noise in gravitational-wave detectors and the lack of accurate and comprehensive waveform models for GWB signals. Specifically, we need methods that can detect weak signals with \textit{a priori} unknown waveforms, while being insensitive to the background noise ‘glitches’ that are common in data from gravitational-wave detectors. Glitch rejection is particularly important since it is the limiting factor in the sensitivity of current burst searches, and a confident detection of a GWB will depend critically on robust background estimation. Detector characterization [11, 12] and search optimization tend to be laborious and time consuming, as is accounting for other systematic effects such as uncertainties in detector calibration.
These considerations motivate the deployment of data analysis packages that can process data rapidly, yet comprehensively. The ideal scenario is a \textit{fully autonomous} search—one that runs continuously and without human intervention. This requires an analysis that is self-tuning, adjusting search parameters to changes in the detector network and accounting for variations in the properties of the background noise around the time of candidate events.
We present X-\textsc{Pipeline} [13, 14], a software package designed for performing autonomous searches for unmodelled gravitational-wave bursts. X-\textsc{Pipeline} targets GWBs associated with external astrophysical ‘triggers’ such as gamma-ray bursts (GRBs) and has been used to search for GWBs associated with more than 100 GRBs that were observed during S5-VSR1 [15]. It performs a fully coherent analysis of data from arbitrary networks of gravitational-wave detectors, while being robust against noise-induced glitches. We emphasize the novel features of X-\textsc{Pipeline}, particularly a procedure for automated tuning of the background rejection tests. This allows the analysis of each external trigger to be optimized independently, based on background noise characteristics and detector performance at the time of the trigger, maximizing the search sensitivity and the chances of making a detection. The tuning uses independent data samples for tuning and for estimating the significance of candidate events, ensuring unbiased selection of GWB candidates. (See also [16] for a Bayesian-inspired technique for automated tuning.) X-\textsc{Pipeline} can also account automatically for effects such as uncertainty in the sky position of the astrophysical trigger and detector calibration uncertainties. Furthermore, for the ongoing S6-VSR2 run, we are preparing the next step in the evolution of GWB searches: a fully autonomous search, wherein X-\textsc{Pipeline} is triggered automatically by email reports of GRBs and wherein data are analysed and candidate GWBs identified without human intervention. Our goal is the complete analysis of each GRB within 24 h of the receipt of the GRB notice. Such a rapid analysis would be fast enough to allow further follow-up observations to be prompted by the GWB candidate.
We begin in section 2 with a brief discussion of the theory of coherent analysis in GWB detection. In section 3, we discuss the main steps followed in an X-Pipeline-triggered coherent search. In section 4, we demonstrate the sensitivity of X-Pipeline on GRB 031108 using actual LIGO data, and compare it to the upper limits set by the cross-correlation technique used in the published LIGO search for gravitational waves associated with the same GRB. In section 5, we discuss the status of autonomous running of X-Pipeline during the current S6-VSR2 science run of LIGO and Virgo. We conclude with a few brief comments in section 6.
2. Coherent analysis for GWB detection
Most algorithms currently used in GWB detection can be grouped into two broad classes. In incoherent methods [17, 18], candidate events typically are constructed from each detector data stream independently, and one looks for events with similar duration and frequency band that occur in all detectors simultaneously. By contrast, coherent methods [14, 17], [19]–[33] combine data from multiple detectors before processing, and create a single list of candidate events for the whole network. Coherent methods have some advantages over incoherent methods, such as demonstrated usefulness in rejecting background noise ‘glitches’ [14, 23, 24] and for reconstructing GWB waveforms [19, 29]. A less-recognized advantage of coherent methods is that they are relatively easy to tune. For example, time–frequency coincidence windows for comparing candidate GWBs in different detectors are not necessary. Detectors are naturally weighted by their relative sensitivity, so there is no need to tune the relative thresholds for generating candidate events in each detector. This ease of tuning makes coherent methods particularly useful for rapid searches.
That said, there are also drawbacks to coherent methods, the most significant being computational cost. Coherent combinations are typically a function of the sky position of the GWB source; there are $\gtrsim 10^4$ resolvable directions on the sky for a worldwide detector network [34]. This cost is compounded by the need to estimate the background due to noise, which requires repeated re-analysis of the data using time shifts. Fortunately, in triggered searches, the sky position of the source is often known to high accuracy, and the amount of data to be analysed is relatively small (typically hours), so the computational cost of a fully coherent analysis is modest. This allows triggered searches to take advantage of the benefits of coherent methods while avoiding or minimizing most of the drawbacks.
In this section, we give a brief review of some of the main principles of coherent network analysis as implemented in X-Pipeline.
2.1. Formulation
A rigorous treatment of gravitational waves is based on linearized perturbations of the spacetime metric around a fixed background (see for example [35]). In the linearized theory based on flat spacetime, when working in a suitable gauge, the perturbations representing the gravitational waves can be shown to obey the ordinary wave equation. The gravitational waves are transverse and travel at the speed of light. They have two independent polarizations, commonly referred to as ‘plus’ (+) and ‘cross’ ($\times$). Their physical manifestation is a quadrupolar change in the distance between freely falling test particles (approximated in interferometric gravitational-wave detectors by the mirrors in the interferometer arms). Explicit definitions of the plus and cross polarization states can be found, for example, in [17].
The interferometers currently used to try to detect these waves are based on a laser, a beamsplitter and mirrors at the ends of each arm, which serve as test masses. Data from each interferometer record the length difference of the arms and, when calibrated, measure the strain induced by a gravitational wave. The LIGO detectors are kilometre-scale power-recycled Michelson interferometers with orthogonal Fabry–Perot arms [36, 37]. There are two LIGO observatories: one located at Hanford, WA and the other at Livingston, LA. The Hanford site houses two interferometers: one with 4 km arms (H1) and the other with 2 km arms (H2). The Livingston observatory has one 4 km interferometer (L1). The Virgo detector (V1) is in Cascina near Pisa, Italy. It is a 3-km-long power-recycled Michelson interferometer with orthogonal Fabry–Perot arms [38]. The GEO 600 detector [39], located near Hannover, Germany, is also operational, although with a lower sensitivity than LIGO and Virgo. These instruments are all designed to detect gravitational waves with frequencies ranging from $\sim 30$ Hz to several kHz.
Consider a gravitational wave $h_+(t, \vec{x}), h_\times(t, \vec{x})$ from a direction $\hat{\Omega}$. The output of detector $\alpha \in \{1, \ldots, D\}$ is a linear combination of this signal and noise $n_\alpha$:
$$d_\alpha(t + \Delta t_\alpha(\hat{\Omega})) = F^+_\alpha(\hat{\Omega})h_+(t) + F^\times_\alpha(\hat{\Omega})h_\times(t) + n_\alpha(t + \Delta t_\alpha(\hat{\Omega})). \quad (2.1)$$
Here $F^+(\hat{\Omega}), F^\times(\hat{\Omega})$ are the antenna response functions describing the sensitivity of the detector to the plus and cross polarizations (note that the choice of polarization basis is arbitrary; we use the $\psi = 0$ choice of appendix B of [17]). Also, $\Delta t_\alpha(\hat{\Omega})$ is the time delay between the position $\vec{r}_\alpha$ of detector $\alpha$ and an arbitrary reference position $\vec{r}_0$:
$$\Delta t_\alpha(\hat{\Omega}) = \frac{1}{c}(\vec{r}_0 - \vec{r}_\alpha) \cdot \hat{\Omega}. \quad (2.2)$$
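As a quick numerical illustration of (2.2) (not part of the paper's analysis), the sketch below computes the delay for a hypothetical detector position; the coordinates are made up for illustration and are not the surveyed LIGO/Virgo locations:

```python
import numpy as np

C = 299792458.0  # speed of light (m/s)

def time_delay(r_det, r_ref, omega_hat):
    """Time delay (2.2): Delta t = (r_ref - r_det) . Omega_hat / c."""
    return np.dot(r_ref - r_det, omega_hat) / C

# Illustrative (not surveyed) Earth-fixed positions, in metres:
r_ref = np.zeros(3)                       # arbitrary reference position r_0
r_det = np.array([3.0e6, -1.0e6, 2.0e6])  # hypothetical detector position
omega = np.array([0.0, 0.0, 1.0])         # wave propagation direction along +z

dt = time_delay(r_det, r_ref, omega)

# Any delay must be bounded by the light-travel time across the Earth (~42 ms)
assert abs(dt) < 0.05
```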
For brevity, we suppress explicit mention of the time delay and understand the data streams to be time-shifted by the appropriate amount prior to analysis. We also write $h_{+, \times}(t) \equiv h_{+, \times}(t, \vec{r}_0)$.
Since the detector data are sampled discretely, we use discrete notation henceforth. The discrete Fourier transform $\tilde{x}[k]$ of a time series $x[j]$ is
$$\tilde{x}[k] = \sum_{j=0}^{N-1} x[j] e^{-i2\pi jk/N}, \quad (2.3)$$
$$x[j] = \frac{1}{N} \sum_{k=0}^{N-1} \tilde{x}[k] e^{i2\pi jk/N},$$
where $N$ is the number of data points in the time domain. Denoting the sampling rate by $f_s$, we can convert from continuous to discrete notation using $x(t) \rightarrow x[j], \tilde{x}(f) \rightarrow f_s^{-1}\tilde{x}[k], \int dt \rightarrow f_s^{-1} \sum_j, \int df \rightarrow f_s N^{-1} \sum_k, \delta(t - t') \rightarrow f_s \delta_{jj'},$ and $\delta(f - f') \rightarrow N f_s^{-1} \delta_{kk'}$. For example, the one-sided noise power spectral density $S_\alpha[k]$ of the noise $\tilde{n}_\alpha$ is
$$\langle \tilde{n}_\alpha^*[k]\tilde{n}_\beta[k'] \rangle = \frac{N}{2} \delta_{\alpha\beta} \delta_{kk'} S_\alpha[k], \quad (2.4)$$
where the angle brackets indicate an average over noise instantiations.
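The conversion rules above can be sanity-checked numerically: NumPy's FFT routines use exactly the convention of (2.3), including the placement of the $1/N$ factor in the inverse transform. A minimal, illustrative check:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
x = rng.standard_normal(N)

# numpy.fft.fft implements exactly (2.3): X[k] = sum_j x[j] exp(-i 2 pi j k / N)
xt = np.fft.fft(x)

# Check one bin against a direct evaluation of the sum
k = 5
direct = np.sum(x * np.exp(-2j * np.pi * np.arange(N) * k / N))
assert np.allclose(xt[k], direct)

# The inverse transform carries the 1/N factor, as in the second line of (2.3)
assert np.allclose(np.fft.ifft(xt), x)

# Parseval under this convention: sum |x|^2 = (1/N) sum |X|^2
assert np.isclose(np.sum(np.abs(x)**2), np.sum(np.abs(xt)**2) / N)
```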
It is conceptually convenient to define the noise-spectrum-weighted quantities
$$\tilde{d}_{w\alpha}[k] = \frac{\tilde{d}_\alpha[k]}{\sqrt{\frac{N}{2} S_\alpha[k]}}, \quad (2.5)$$

$$\tilde{n}_{w\alpha}[k] = \frac{\tilde{n}_\alpha[k]}{\sqrt{\frac{N}{2} S_\alpha[k]}}, \quad (2.6)$$

$$F_{w\alpha}^{+, \times}(\hat{\Omega}, k) = \frac{F_\alpha^{+, \times}(\hat{\Omega})}{\sqrt{\frac{N}{2} S_\alpha[k]}}. \quad (2.7)$$
The normalization of the whitened noise is\(^{10}\)
\[
\langle \tilde{n}_{w\alpha}^*[k] \tilde{n}_{w\beta}[k'] \rangle = \delta_{\alpha\beta} \delta_{kk'}.
\]
(2.8)
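As an illustrative check of the normalization (2.8), one can whiten simulated white Gaussian noise according to (2.6) and verify that every frequency bin of the whitened series has unit mean-square. The sketch below assumes a flat spectrum; the value $S[k] = 2\sigma^2$ follows from (2.4) for white noise of variance $\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma, ntrials = 512, 3.0, 2000

# White Gaussian noise of variance sigma^2: under convention (2.4),
# <|n~[k]|^2> = N sigma^2, i.e. the discrete one-sided PSD is S[k] = 2 sigma^2.
S = 2.0 * sigma**2 * np.ones(N)

nt = np.fft.fft(rng.standard_normal((ntrials, N)) * sigma, axis=1)
nw = nt / np.sqrt(N / 2.0 * S)   # whitening, equation (2.6)

# Normalization (2.8): <|n_w[k]|^2> = 1 in every frequency bin
mean_sq = np.mean(np.abs(nw)**2, axis=0)
assert np.allclose(mean_sq, 1.0, atol=0.2)
```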
With this notation, equation (2.1) for the data measured from a set of \(D\) detectors can be written in the simple matrix form
\[
\tilde{d} = F \tilde{h} + \tilde{n},
\]
(2.9)
where we have dropped the explicit indices for frequency and sky position. We use the boldface symbols \(\tilde{d}\), \(F\) and \(\tilde{n}\) to refer to noise-weighted quantities that are vectors or matrices on the space of detectors (note that \(\tilde{h}\) is not noise-weighted and is not in the space of the detectors):
\[
\tilde{d} \equiv \begin{bmatrix}
\tilde{d}_{w1} \\
\tilde{d}_{w2} \\
\vdots \\
\tilde{d}_{wD}
\end{bmatrix}, \quad \tilde{h} \equiv \begin{bmatrix}
\tilde{h}_+ \\
\tilde{h}_\times
\end{bmatrix}, \quad \tilde{n} \equiv \begin{bmatrix}
\tilde{n}_{w1} \\
\tilde{n}_{w2} \\
\vdots \\
\tilde{n}_{wD}
\end{bmatrix},
\]
(2.10)
and
\[
F \equiv \begin{bmatrix}
F^+ & F^\times
\end{bmatrix} \equiv \begin{bmatrix}
F_{w1}^+ & F_{w1}^\times \\
F_{w2}^+ & F_{w2}^\times \\
\vdots & \vdots \\
F_{wD}^+ & F_{wD}^\times
\end{bmatrix}.
\]
(2.11)
(See table 1 for a list of the dimensions of all of the quantities used in this section.) Note that each of these quantities is a function of both frequency and (through the antenna response or implied time shift) sky position. As a consequence, coherent combinations typically have to be re-computed for every frequency bin as well as for every sky position. Note also that, because of the noise-spectrum weighting, the whitened noise is isotropically distributed in the space of detectors (equation (2.8)). Therefore, all information on the sensitivity of the network both as a function of frequency and of sky position is contained in the matrix \(F\) defined by equation (2.11).
2.2. Standard likelihood
In this section, we describe some of the simpler coherent likelihoods: those that can be computed from projections of the data. These are the main ones used for signal detection in X-Pipeline. We begin with the simplest coherent likelihood of all: the standard or maximum likelihood, first derived in [17, 21].
---
\(^{10}\) More precisely, the \(k\)-dependent term on the right-hand side of (2.8) (the two-point spectral correlation function) is proportional to the Fourier transform of the window that was applied to the data before transforming to the frequency domain to suppress leakage. We use \(\delta_{kk'}\) in (2.8) as an approximation to simplify the notation.
Table 1. Dimensionality of various quantities used in this section. $D$ is the number of detectors in the network.
| Quantity | Dimensions |
|---------------------------------|-----------------------------|
| $\tilde{h}$, $\tilde{h}_+$, $\tilde{h}_{\text{max}}$ | $2 \times 1$ vectors |
| $F$ | $D \times 2$ matrix |
| $P^{\text{GW}}$, $P^{\text{null}}$, $I$ | $D \times D$ matrices |
| All other boldface symbols: $\tilde{d}$, $\tilde{n}$, $F^+$, $F^\times$, $f^+$, $f^\times$, $e^+$, $e^\times$, etc | $D \times 1$ vectors |
Let $P(\tilde{d}|\tilde{h})$ be the probability of obtaining the whitened data $\tilde{d}$ in one time–frequency pixel in the presence of a known gravitational wave $\tilde{h}$ from a known direction. Assuming Gaussian noise,
$$P(\tilde{d}|\tilde{h}) = \frac{1}{(2\pi)^{D/2}} \exp \left[ -\frac{1}{2} \left| \tilde{d} - F \tilde{h} \right|^2 \right]. \quad (2.12)$$
For a set $(\tilde{d})$ of $N_p$ time–frequency pixels,
$$P((\tilde{d})|(\tilde{h})) = \frac{1}{(2\pi)^{N_pD/2}} \exp \left[ -\frac{1}{2} \sum_k \left| \tilde{d}[k] - F[k] \tilde{h}[k] \right|^2 \right], \quad (2.13)$$
where $k$ indexes the pixels. The likelihood ratio $L$ is defined by the log-ratio of this probability to the corresponding probability under the null hypothesis,
$$L \equiv \ln \frac{P((\tilde{d})|(\tilde{h}))}{P((\tilde{d})|(0))} = \frac{1}{2} \sum_k \left[ \left| \tilde{d} \right|^2 - \left| \tilde{d} - F \tilde{h} \right|^2 \right], \quad (2.14)$$
where $P((\tilde{d})|(0))$ is the probability of measuring the data $(\tilde{d})$ when no GWB is present ($\tilde{h} = 0$).
In practice, the signal waveform $\tilde{h}$ is not known \textit{a priori}, so it is not clear how to compute the likelihood ratio (2.14). One approach is to treat the waveform values $\tilde{h} = (\tilde{h}_+, \tilde{h}_\times)$ in each time–frequency pixel as free parameters to be fitted to the data. The best-fit values $\tilde{h}_{\text{max}}$ are those that maximize the likelihood ratio:
$$0 = \frac{\partial L}{\partial \tilde{h}} \bigg|_{\tilde{h} = \tilde{h}_{\text{max}}}. \quad (2.15)$$
Because the likelihood ratio $L$ is quadratic in $\tilde{h}$, (2.15) gives a linear equation for $\tilde{h}_{\text{max}}$. The solution is
$$\tilde{h}_{\text{max}} = (F^\dagger F)^{-1} F^\dagger \tilde{d}, \quad (2.16)$$
where we use the superscript $\dagger$ to denote the conjugate transpose. ($F$ is real, but other quantities such as the data vector $\tilde{d}$ are complex.) Substituting the solution for $\tilde{h}_{\text{max}}$ in (2.14) gives the \textit{standard likelihood},
$$E_{SL} \equiv 2L(\tilde{h}_{\text{max}}) = \sum_k \tilde{d}^\dagger P^{\text{GW}} \tilde{d}, \quad (2.17)$$
where we define
$$P^{\text{GW}} \equiv F (F^\dagger F)^{-1} F^\dagger \quad (2.18)$$
and we have used the fact that $P^{\text{GW}}$ is Hermitian. (The factor of 2 in the definition of $E_{SL}$ is purely a matter of taste.)
2.3. Projection operators and the null energy
It is easy to show that $P^{\text{GW}}$ is a projection operator that projects the data onto the subspace spanned by $F^+$ and $F^\times$. We know from equation (2.1) or (2.9)–(2.11) that the contribution made to $\tilde{d}$ by any gravitational wave from a fixed sky position is restricted to this subspace. The standard likelihood is therefore the maximum amount of energy\(^{11}\) in the whitened data that is consistent with the hypothesis of a gravitational wave from a given sky position.
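The projection property is easy to verify numerically for a random antenna-response matrix. The following sketch (illustrative only, with an arbitrary $F$ rather than real antenna responses) checks idempotence, Hermiticity, the dimensionality of the projected subspace, and that a signal of the form $F\tilde{h}$ is left untouched:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 4                                # number of detectors (e.g. H1, H2, L1, V1)
F = rng.standard_normal((D, 2))      # stand-in for the response matrix (2.11)

# P^GW = F (F^T F)^-1 F^T, equation (2.18); F is real, so dagger -> transpose
P_gw = F @ np.linalg.inv(F.T @ F) @ F.T

assert np.allclose(P_gw @ P_gw, P_gw)    # idempotent: a projection operator
assert np.allclose(P_gw, P_gw.T)         # Hermitian
assert np.isclose(np.trace(P_gw), 2.0)   # projects onto a 2-d subspace

# Any signal F h lies entirely in the subspace: P^GW (F h) = F h
h = rng.standard_normal(2) + 1j * rng.standard_normal(2)
assert np.allclose(P_gw @ (F @ h), F @ h)
```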
Contrast this with the *total energy* in the data, which is simply
$$E_{tot} = \sum_k |\tilde{d}|^2.$$
(2.19)
The total energy is an incoherent statistic in the sense that it contains only autocorrelation terms and no cross-correlation terms. In the limit of a one-detector network, this is the quantity one computes for each time–frequency pixel in an excess-power search [17].
The projection operator $P^{\text{null}} \equiv (I - P^{\text{GW}})$, which is orthogonal to $P^{\text{GW}}$, cancels the gravitational-wave signal. This yields the *null stream* with energy
$$E_{\text{null}} \equiv E_{\text{tot}} - E_{SL} = \sum_k \tilde{d}^\dagger P^{\text{null}} \tilde{d}.$$
(2.20)
The null energy is the minimum amount of energy in the whitened data that is inconsistent with the hypothesis of a gravitational wave from a given sky position.
One advantage of coherent analysis is that the projection from the full data space with energy $E_{tot}$ to the subspace spanned by $F^+$ and $F^\times$ with energy $E_{SL}$ removes some fraction of the noise, with energy $E_{null}$, without removing any of the signal component (small errors in calibration, sky position or power spectra change $F$, but this affects the signal energy only at second order). This means that a signal can be detected with higher confidence. An important caveat is that the full benefit is gained only if the sky position is known *a priori*, as in GRB searches. If the sky position of the source is not known *a priori*, one typically repeats the calculation of the likelihood for a set of directions spanning the entire sky ($\gtrsim 10^3$ directions). Since $F^+$ and $F^\times$ vary with the sky position, many different projection operators will be applied to the data, which incurs a false-alarm penalty.
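The decomposition $E_{\text{tot}} = E_{SL} + E_{\text{null}}$ and the exact cancellation of the signal in the null stream can be demonstrated in a few lines of NumPy. This is an illustrative sketch with a random stand-in network, not X-Pipeline code:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 3
F = rng.standard_normal((D, 2))                  # stand-in response matrix
P_gw = F @ np.linalg.inv(F.T @ F) @ F.T          # equation (2.18)
P_null = np.eye(D) - P_gw                        # null-space projector

# Whitened data: gravitational wave F h plus unit-variance complex noise
h = 5.0 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
n = (rng.standard_normal(D) + 1j * rng.standard_normal(D)) / np.sqrt(2)
d = F @ h + n

E_tot = np.vdot(d, d).real                       # equation (2.19)
E_sl = np.vdot(d, P_gw @ d).real                 # equation (2.17)
E_null = np.vdot(d, P_null @ d).real             # equation (2.20)

assert np.isclose(E_tot, E_sl + E_null)          # energies decompose exactly
assert np.allclose(P_null @ (F @ h), 0.0)        # the signal cancels in the null stream
# The null energy is therefore the energy of the noise alone in the null space
assert np.isclose(E_null, np.vdot(P_null @ n, P_null @ n).real)
```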
2.4. Dominant polarization frame and other likelihoods
For a single time–frequency pixel, the data from a set of $D$ detectors form a vector in a $D$-dimensional complex space. One basis of this space is formed by the set of single-detector strains (the basis in which all equations have been written so far); however, this is not the most convenient basis for writing detection statistics. The two-dimensional subspace defined by $F^+$ and $F^\times$ is a natural starting point for the construction of a better basis. If we examine the properties of this two-dimensional space, we find there is a direction (a choice of polarization
\(^{11}\) More precisely, since it is defined in terms of the noise-weighted data, the standard likelihood is the maximum possible squared signal-to-noise ratio $\rho^2$ that is consistent with the hypothesis of a gravitational wave from a given sky position. See section 2.5.
angle) in which the detector network has the maximum antenna response and an orthogonal direction in which the network has minimum antenna response. Choosing those two directions as basis vectors, and completing them with an orthonormal basis for the null space, yields a very convenient basis in which to construct detection statistics. To further simplify things it is possible to define the + and × polarizations so that $F^+$ lies along the first basis vector and $F^\times$ along the second. This choice of polarization definition is called the dominant polarization frame or DPF [25, 26]. Note that while searches for modelled signals such as binary inspirals often select the polarization basis with reference to the source, the DPF polarization basis is tailored to the detector network at each frequency. This makes it a particularly convenient choice when searching for more general GWB signals.
To see how one constructs the DPF, recall that the antenna response vectors in two frames separated by a polarization angle $\psi$ are related by
$$F^+(\psi) = \cos 2\psi F^+(0) + \sin 2\psi F^\times(0),$$
$$F^\times(\psi) = -\sin 2\psi F^+(0) + \cos 2\psi F^\times(0)$$
(see for example equations (B9) and (B10) of [17]). It is straightforward to show that for any direction on the sky, one can always choose a polarization frame such that $F^+(\psi)$ and $F^\times(\psi)$ are orthogonal and $|F^+(\psi)| > |F^\times(\psi)|$. Explicitly, given $F^+(0)$ and $F^\times(0)$ in the original polarization frame, the rotation angle $\psi_{DP}$ giving the dominant polarization frame is
$$\psi_{DP}(\hat{\Omega}, k) = \frac{1}{4} \text{atan2} \left( 2F^+(0) \cdot F^\times(0), \ |F^+(0)|^2 - |F^\times(0)|^2 \right),$$
where $\text{atan2}(y, x)$ is the arctangent function with range $(-\pi, \pi]$. Note that $\psi_{DP}$ is a function of both sky position and frequency (through the noise weighting of $F^+$ and $F^\times$).
We denote the antenna response vectors in the DPF by the lower-case symbols $f^+$ and $f^\times$. They have the properties
$$|f^+|^2 \geq |f^\times|^2,$$
$$f^+ \cdot f^\times = 0.$$
In the DPF the unit vectors $e^+ \equiv f^+/|f^+|$, $e^\times \equiv f^\times/|f^\times|$ are part of an orthonormal coordinate system; see figure 1. Indeed, the DPF can be viewed as the natural coordinate system in the space of detector data for understanding the sensitivity of the network. Mathematically, rotating to the DPF is the same as doing a singular value decomposition of the matrix $F$. The singular values are $|f^+|^2$ and $|f^\times|^2$, i.e. the magnitudes of the antenna response evaluated in the DPF.
It should be noted that the DPF does not specify any particular choice of basis for the null space. Convenient choices for the null basis can be motivated by how the null energy is used in the search, but we do not consider this issue here.
In the DPF, the projection operator $P^{GW}$ takes on the very simple form
$$P^{GW} = e^+ e^{+ \dagger} + e^\times e^{\times \dagger}.$$
The standard likelihood (2.17) becomes
$$E_{SL} = \sum_k \left[ |e^+ \cdot \tilde{d}|^2 + |e^\times \cdot \tilde{d}|^2 \right],$$
where we use the notation \( \boldsymbol{a} \cdot \boldsymbol{b} \) to denote the familiar dot product between \( D \times 1 \) dimensional vectors \( \boldsymbol{a} \) and \( \boldsymbol{b} \). The plus energy or hard constraint likelihood [25, 26] is the energy in the \( h_+ \) polarization in the DPF:
\[
E_+ \equiv \sum_k \left| e^+ \cdot \tilde{\boldsymbol{d}} \right|^2.
\]
The cross energy is defined analogously:
\[
E_\times \equiv \sum_k \left| e^\times \cdot \tilde{\boldsymbol{d}} \right|^2.
\]
The soft constraint likelihood [25, 26] (not a projection likelihood) is
\[
E_{\text{soft}} \equiv \sum_k \left[ \left| e^+ \cdot \tilde{\boldsymbol{d}} \right|^2 + \epsilon \left| e^\times \cdot \tilde{\boldsymbol{d}} \right|^2 \right],
\]
where the weighting factor \( \epsilon \) is defined in the DPF as
\[
\epsilon \equiv \frac{|f^\times|^2}{|f^+|^2} \in [0, 1].
\]
Typical values are \( \epsilon \sim 0.01–0.1 \) for the LIGO network.
Numerous other likelihood-based coherent statistics have been introduced in the literature, such as the Tikhonov regularized statistic [28], a sky-map variability statistic [30], and modified constraint likelihood statistics [33]. Also, comprehensive Bayesian formulations of the problem of GWB detection and waveform estimation are described in [29, 31, 32]. While some of these statistics are available in X-Pipeline, we do not consider them here.
2.5. Statistical properties
One convenient property of the projection likelihoods \( E_+ \), \( E_\times \), \( E_{SL} \), \( E_{\text{null}} \) and \( E_{\text{tot}} \) is that their statistical properties for signals in Gaussian background noise are very simple. Specifically, for
a set of time–frequency pixels and a sky position chosen *a priori*, each of these energies follows a $\chi^2$ distribution with $2N_pD_{\text{proj}}$ degrees of freedom:
$$2E \sim \chi^2_{2N_pD_{\text{proj}}}(\lambda). \quad (2.32)$$
Here $N_p$ is the number of pixels (or time–frequency volume) and $D_{\text{proj}}$ is the number of dimensions of the projection: 1 for $E_+$ and $E_\times$, 2 for $E_{\text{SL}}$, and $D$ for $E_{\text{tot}}$. Note that $D_{\text{proj}} = D - 2$ for $E_{\text{null}}$, except when the null stream is constructed as the difference of the data streams from the two co-aligned LIGO-Hanford detectors, H1 and H2, in which case it is $D - 1$ (the H1–H2 sub-network is only sensitive to a single gravitational-wave polarization, so only one dimension is removed in forming the null stream). The factor of 2 in the degrees of freedom occurs because the data are complex. The non-centrality parameter $\lambda$ is the expected squared signal-to-noise ratio of a matched filter for the waveform restricted to the time–frequency region in question\footnote{Equations (2.33)–(2.36) assume that the sum is restricted to positive frequencies; if negative frequencies are included, then the non-centrality parameters should be doubled ($\lambda \rightarrow 2\lambda$). In both cases, $\rho$ is to be understood as the expected output of a matched filter.} and after projection by the appropriate likelihood projection operator, summed over the network:
$$\lambda_+ = 2 \sum_k |f^+|^2 |\tilde{h}_+[k]|^2 = \frac{4}{N} \sum_\alpha \sum_k \frac{\left| F_\alpha^+(\psi_{\text{DP}})\, \tilde{h}_+[k] \right|^2}{S_\alpha[k]} =: \rho_+^2, \quad (2.33)$$
$$\lambda_\times = 2 \sum_k |f^\times|^2 |\tilde{h}_\times[k]|^2 = \frac{4}{N} \sum_\alpha \sum_k \frac{\left| F_\alpha^\times(\psi_{\text{DP}})\, \tilde{h}_\times[k] \right|^2}{S_\alpha[k]} =: \rho_\times^2, \quad (2.34)$$
$$\lambda_{\text{SL}} = 2 \sum_k \left[ |f^+|^2 |\tilde{h}_+[k]|^2 + |f^\times|^2 |\tilde{h}_\times[k]|^2 \right] = \frac{4}{N} \sum_\alpha \sum_k \frac{\left| F_\alpha^+ \tilde{h}_+[k] + F_\alpha^\times \tilde{h}_\times[k] \right|^2}{S_\alpha[k]} =: \rho^2, \quad (2.35)$$
$$\lambda_{\text{tot}} = \rho^2, \quad (2.36)$$
$$\lambda_{\text{null}} = 0. \quad (2.37)$$
Note that in (2.33) and (2.34), the antenna responses and waveforms are defined in the DPF. Equation (2.35) is actually independent of the polarization basis used.
The mean and standard deviation of the non-central $\chi^2$ distribution (2.32) are $(2N_pD_{\text{proj}} + \lambda)$ and $2\sqrt{N_pD_{\text{proj}} + \lambda}$, respectively. Consequently, one expects a signal to be detectable by a given coherent statistic when
$$\frac{\lambda}{\sqrt{2N_pD_{\text{proj}}}} \gg 1. \quad (2.38)$$
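The $\chi^2$ statistics of (2.32) are straightforward to verify by Monte Carlo. The sketch below (illustrative parameters only) draws whitened Gaussian noise obeying (2.8) and checks that $2E_{\text{tot}}$ has the mean and variance of a central $\chi^2$ with $2N_pD$ degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(5)
D, Np, ntrials = 3, 8, 20000

# Unit-variance complex whitened noise, Np pixels for D detectors, per (2.8)
d = (rng.standard_normal((ntrials, Np, D))
     + 1j * rng.standard_normal((ntrials, Np, D))) / np.sqrt(2)

# Total energy (2.19) summed over pixels; 2*E_tot should follow a chi^2
# distribution with 2*Np*D degrees of freedom (lambda = 0 for pure noise)
two_E = 2.0 * np.sum(np.abs(d)**2, axis=(1, 2))

dof = 2 * Np * D
assert abs(np.mean(two_E) - dof) < 0.5       # chi^2 mean = dof
assert abs(np.var(two_E) - 2 * dof) < 5.0    # chi^2 variance = 2*dof
```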
Table 2. Expected signal and noise contributions to various coherent energies. The ‘mean(2E)’ column shows the contribution to the mean energy due to a gravitational wave, evaluated in the dominant polarization frame. See equations (2.31) and (2.33)–(2.36). The ‘std(2E)’ column shows the standard deviation due to noise fluctuations assuming a non-aligned detector network (i.e., \(|f^\times| > 0\)). The values of \(E_{\text{soft}}\) are written as approximate because the weighting factor \(\epsilon\) is itself a function of frequency. All of these values assume the time–frequency region to sum over and the correct sky location are known \textit{a priori}.
| Energy measure | mean(2E)\(_{\text{GWB}}\) | std(2E)\(_{\text{noise}}\) |
|----------------|---------------------------|--------------------------|
| \(E_{\text{tot}}\) | \(\rho^2 = \rho_+^2 + \rho_\times^2\) | \(\sqrt{D}\) |
| \(E_{\text{SL}}\) | \(\rho^2 = \rho_+^2 + \rho_\times^2\) | \(\sqrt{2}\) |
| \(E_+\) | \(\rho_+^2\) | \(1\) |
| \(E_\times\) | \(\rho_\times^2\) | \(1\) |
| \(E_{\text{soft}}\) | \(\simeq (\rho_+^2 + \epsilon \rho_\times^2)\) | \(\sqrt{1 + \epsilon}\) |
Table 2 shows the mean and standard deviation of various energy measures when the correct sky position and the time–frequency region are known \textit{a priori}.
For a circularly polarized or unpolarized gravitational wave, \(\rho_\times^2 / \rho_+^2 \simeq \epsilon \ll 1\) for typical sky positions. For example, for the LIGO–Virgo network of detectors H1–H2–L1–V1, assuming H2 is half as sensitive as H1, L1 and V1, the median value of \(\epsilon\) is 0.1, while for the LIGO network H1–H2–L1, the median value is 0.02. As a consequence, for many signals \(\rho_\times^2\) is negligible. (An exception is linearly polarized GWBs; for these the random polarization angle can make \(\rho_\times^2 > \rho_+^2\) in the H1–H2–L1 network for approximately 10% of signals for a typical sky position.) Since all of the energies except \(E_\times\) in table 2 include \(\rho_\times^2\), their relative performance is dominated by the level of noise fluctuations. The noise fluctuations in the energies scale as the square root of the number of orthogonal directions used to compute the energy. As a consequence, we expect those statistics that project the data down to fewer dimensions to perform better for GWB detection. For \(E_\times\) the data are projected onto a single direction. \(E_{\text{SL}}\) and \(E_{\text{soft}}\) use data along two directions, and so have higher noise. The total energy \(E_{\text{tot}}\) uses all of the data and therefore incorporates the largest contributions from noise. In practice, coherent consistency tests (discussed in the next section) can be used to reduce the noise background, allowing statistics like \(E_{\text{SL}}\) to be used effectively, so that all of the signal-to-noise ratio of a GWB (\(\rho_+^2\) and \(\rho_\times^2\)) can be included in the detection statistic.
2.6. Incoherent energies and background rejection
The various likelihood measures \(E_{\text{SL}}\), \(E_+\), etc, are motivated as detection statistics under the assumption of stationary Gaussian background noise. Real detectors do not have purely Gaussian noise. Rather, real detector noise contains \textit{glitches}: short transients of excess strain that can masquerade as GWB signals. In practice, without a means to distinguish noise glitches from true GW signals, the sensitivity of a burst search will be limited by such glitches. Coherent analyses can be particularly susceptible to such false alarms, since even a glitch in a single detector will produce large values for likelihoods such as \(E_{\text{SL}}\). In this section we outline a technique for the effective suppression of such false alarms in coherent analyses.
As shown in Chatterji et al [14], one can use the autocorrelation component of coherent energies to construct tests that are effective at rejecting glitches. This coherent veto test is based on the null space—the subspace orthogonal to that used to define the standard likelihood. The projection of \( \mathbf{d} \) on this subspace contains only noise, and the presence or absence of GWs should not affect this projection in any way. By contrast, glitches do not couple into the data streams with any particular relationship to \( \mathbf{F}^+ \) and \( \mathbf{F}^\times \). As a result, glitches will generally be present in the null space projection. This provides a way to distinguish true GWs from glitches, by requiring the null energy to be small for a transient to be considered a GW [24].
To see how an effective test can be constructed, note that we can write equation (2.20) for the null energy as
\[
E_{\text{null}} = \sum_k \sum_{\alpha,\beta} \tilde{d}_\alpha^* P_{\alpha\beta}^{\text{null}} \tilde{d}_\beta.
\]
(2.39)
As pointed out in Chatterji et al [14], the null energy is composed of cross-correlation terms \( \tilde{d}_\alpha^* \tilde{d}_\beta \) and auto-correlation terms \( \tilde{d}_\alpha^* \tilde{d}_\alpha \). If the transient signal is not correlated between detectors (as is expected for glitches), then the cross-correlation terms will be small compared to the auto-correlation terms. As a consequence, for a glitch we expect the null energy to be dominated by the auto-correlation components:
\[
E_{\text{null}} \simeq I_{\text{null}} \equiv \sum_k \sum_\alpha P_{\alpha\alpha}^{\text{null}} |\tilde{d}_\alpha|^2 \quad (\text{glitches}).
\]
(2.40)
This auto-correlation part of the null energy is called the *incoherent energy*.
By contrast, for a GW signal, the transient is correlated between the detectors according to equations (2.1) or (2.9)–(2.11). By construction of the null projection operator, these correlations cancel in the null stream, leaving only Gaussian noise. They cannot cancel in \( I_{\text{null}} \) however, since that is a purely incoherent statistic. Therefore, for a strong GW signal we expect
\[
E_{\text{null}} \ll I_{\text{null}} \quad (\text{GW}).
\]
(2.41)
Based on these considerations, the coherent veto test introduced by Chatterji et al [14] is to keep only transients with
\[
I_{\text{null}} / E_{\text{null}} > C,
\]
(2.42)
where \( C \) is some constant greater than 1. This test is particularly effective at eliminating large-amplitude glitches. For smaller amplitude glitches, \( E_{\text{null}} \) can be small compared to \( I_{\text{null}} \) due to statistical fluctuations; for this reason, in X-Pipeline we use a modified test where the effective threshold \( C \) varies with the event energy, as discussed in section 3.4.
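The ratio test of equation (2.42) can be sketched in a few lines. The event names and energy values below are made-up numbers chosen to mimic a glitch (for which \(E_{\text{null}} \simeq I_{\text{null}}\)) and a GW-like transient (for which \(E_{\text{null}} \ll I_{\text{null}}\)); this is an illustration, not X-Pipeline code:

```python
# Apply the ratio veto of equation (2.42): keep events with I_null/E_null > C.
C = 1.5   # threshold; in a real analysis this is tuned on background data

events = [
    {"name": "glitch-like", "E_null": 95.0, "I_null": 100.0},  # I/E ~ 1
    {"name": "gw-like",     "E_null": 20.0, "I_null": 100.0},  # I/E = 5
]

kept = [ev["name"] for ev in events if ev["I_null"] / ev["E_null"] > C]
print(kept)   # -> ['gw-like']
```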
Analogous tests can be imposed on the other coherent energies, \( E_+ \), \( E_\times \), etc. We define the corresponding incoherent energies by
\[
I_+ \equiv \sum_k \sum_\alpha |e_\alpha^+ \tilde{d}_\alpha|^2,
\]
(2.43)
\[
I_\times \equiv \sum_k \sum_\alpha |e_\alpha^\times \tilde{d}_\alpha|^2,
\]
(2.44)
\[
I_{\text{SL}} \equiv \sum_k \sum_\alpha \left[ |e_\alpha^+ \tilde{d}_\alpha|^2 + |e_\alpha^\times \tilde{d}_\alpha|^2 \right] = I_+ + I_\times.
\]
(2.45)
In each case, we compare the coherent energy $E$ to its incoherent counterpart $I$, making use of the expectation that for a glitch, $E \simeq I$. For a strong GW, the signal summed over both polarizations should build coherently, so one will find
$$E_{SL} > I_{SL} \quad (\text{GW}). \quad (2.46)$$
By contrast, one may find $E_+ > I_+$ or $E_\times < I_\times$ depending on the polarization of the GW signal. Specifically, if the GW signal is predominantly in the + polarization in the DPF, then one will find
$$\begin{align*}
E_+ &> I_+ \\
E_\times &< I_\times \quad (\text{signal predominantly } h_+). \quad (2.47)
\end{align*}$$
If the GW signal is predominantly in the $\times$ polarization in the DPF, then one will find the reverse:
$$\begin{align*}
E_+ &< I_+ \\
E_\times &> I_\times \quad (\text{signal predominantly } h_\times). \quad (2.48)
\end{align*}$$
In general, a GW will be characterized by at least one of $E_+ > I_+$ or $E_\times > I_\times$, i.e. at least one of the polarizations will show a coherent buildup of signal-to-noise ratio across detectors. This allows us to impose coherent glitch rejection tests even in the case where a null stream is not available, such as the H1–L1 network of LIGO detectors. Specific examples of coherent consistency tests are discussed in sections 3.4 and 4.
These incoherent energies are not defined as the magnitude of a projection. As a result, they do not obey $\chi^2$ statistics. They do, however, obey a simple relation with the coherent energies:
$$I_+ + I_\times + I_{\text{null}} = E_+ + E_\times + E_{\text{null}} = E_{\text{tot}}. \quad (2.49)$$
Equivalently, the sum of the cross-correlation contributions to $E_+$, $E_\times$ and $E_{\text{null}}$ cancel:
$$\begin{align*}
0 &= (E_+ - I_+) + (E_\times - I_\times) + (E_{\text{null}} - I_{\text{null}}) \\
&= (E_{\text{SL}} - I_{\text{SL}}) + (E_{\text{null}} - I_{\text{null}}). \quad (2.50)
\end{align*}$$
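The closure relation (2.49) can be verified numerically. The sketch below uses randomly chosen antenna-response vectors and data as stand-ins for the noise-weighted quantities of section 2, orthonormalizes them into DPF-like unit vectors, and checks that the coherent and incoherent energies both sum to the total energy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random stand-ins for the noise-weighted antenna responses of a 3-detector
# network (illustrative values, not a real network configuration).
D = 3
Fp, Fx = rng.standard_normal(D), rng.standard_normal(D)

# Gram-Schmidt orthonormalization gives DPF-like unit vectors e_+ and e_x.
ep = Fp / np.linalg.norm(Fp)
ex = Fx - (ep @ Fx) * ep
ex /= np.linalg.norm(ex)

d = rng.standard_normal(D) + 1j * rng.standard_normal(D)  # one whitened pixel

E_tot = np.sum(np.abs(d) ** 2)
E_p = np.abs(ep @ d) ** 2            # coherent energies: squared projections
E_x = np.abs(ex @ d) ** 2
E_null = E_tot - E_p - E_x           # remainder lives in the null space

I_p = np.sum(np.abs(ep * d) ** 2)    # incoherent (auto-correlation) energies
I_x = np.sum(np.abs(ex * d) ** 2)
P_null_diag = 1.0 - ep ** 2 - ex ** 2
I_null = np.sum(P_null_diag * np.abs(d) ** 2)

assert np.isclose(I_p + I_x + I_null, E_tot)
assert np.isclose(E_p + E_x + E_null, E_tot)
print("closure relation (2.49) holds")
```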
3. Overview of X-Pipeline
X-Pipeline is a MATLAB-based software package for performing coherent searches for GWBs in data from arbitrary networks of detectors. In this section, we give an overview of the main steps followed in a triggered burst search, describing how the data are processed and how candidate GWBs are identified. In section 5, we discuss how an X-Pipeline analysis is triggered.
3.1. Preliminaries
X-Pipeline performs the coherent analyses described in section 2. The user (a human or automated triggering software) specifies:
1. a set of detectors;
2. one or more intervals of data to be analysed;
3. a set of coherent energies to compute;
4. a set of sky positions; and
5. a list of parameters (such as FFT lengths) for the analysis.
In standard usage, X-Pipeline processes the data and produces lists of candidate gravitational-wave signals for each of the specified sky positions. It does this by first constructing time–frequency maps of the various energies in the reconstructed $h_+$, $h_\times$ and null streams. X-Pipeline then identifies clusters of pixels with large values of one of the coherent energies, such as $E_{SL}$ or $E_+$.
3.2. Time–frequency maps
X-Pipeline typically processes data in 256 s blocks. First, it loads the requested data. It constructs a zero-phase linear predictor error filter to whiten the data and estimate the power spectrum [14, 40]. For each sky position, X-Pipeline time-shifts the data from each detector according to equations (2.1) and (2.2). The data are divided into overlapping segments and Fourier-transformed, producing time–frequency maps for each detector. Given the time–frequency maps for the individual detector data streams $\tilde{d}$, X-Pipeline coherently sums and squares these maps in each pixel to produce time–frequency maps of the desired coherent energies; see figure 2. This representation gives easy access to the temporal evolution of the spectral properties of the signal, and to all statistics and other quantities that are functions of time and frequency.
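The map construction described above (overlapping segments, Fourier transform, squared magnitude) can be sketched as follows. This is an illustration with an assumed sample rate and 50% overlap and a Hann window, not X-Pipeline's actual implementation:

```python
import numpy as np

def tf_map(x, seg_len):
    """Return an (n_segments, n_frequencies) map of |FFT|^2 values."""
    hop = seg_len // 2                    # 50% overlap between segments
    window = np.hanning(seg_len)
    n_seg = (len(x) - seg_len) // hop + 1
    rows = [np.abs(np.fft.rfft(x[i * hop : i * hop + seg_len] * window)) ** 2
            for i in range(n_seg)]
    return np.array(rows)

fs = 1024                                 # sample rate in Hz (assumed)
x = np.random.default_rng(2).standard_normal(4 * fs)  # 4 s of white noise
m = tf_map(x, seg_len=fs // 8)            # 1/8 s analysis time
print(m.shape)                            # -> (63, 65)
```

Pixel aspect ratio follows directly from `seg_len`: doubling it halves the time resolution and doubles the frequency resolution, which is the trade-off discussed in section 3.3.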
3.3. Clustering and event identification
Given time–frequency maps of each of the coherent energies, the challenge is then to identify potential gravitational-wave signals in these maps.
The approach used in X-Pipeline is pixel clustering [18]. The user singles out one of the energy measures—typically $E_{SL}$, the summed energy in the reconstructed $h_+$ and $h_\times$ streams—as the detection statistic. A threshold is applied to the detection statistic map so that a fixed percentage (e.g. 1%) of the pixels with the highest value in the current map are marked as black pixels; see figure 2. Following the method of [18], black pixels that share a common side (nearest neighbours) are grouped together into clusters; see figure 3 for an example. (As allowed in [18], the user may specify a different connectivity criterion, such as next-nearest neighbors, or apply the ‘generalized clustering’ procedure.) This clustering technique is appropriate for a GWB whose shape in the time–frequency plane is connected, as opposed to consisting of well-separated ‘blobs’. This assumption is valid for many well-modelled signals such as low-mass inspirals and ringdowns.
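The thresholding and nearest-neighbour grouping can be sketched as below. The flood fill stands in for the clustering of [18] (`scipy.ndimage.label` would do the same job); the map and injected region are synthetic examples:

```python
import numpy as np

def cluster_map(energy, black_fraction=0.01):
    """Mark the loudest fraction of pixels 'black', then group side-sharing
    black pixels into labelled clusters via a flood fill."""
    thresh = np.quantile(energy, 1.0 - black_fraction)
    black = energy >= thresh
    labels = np.zeros(energy.shape, dtype=int)
    n_clusters = 0
    for start in zip(*np.nonzero(black)):
        if labels[start]:
            continue                       # already assigned to a cluster
        n_clusters += 1
        stack = [start]
        while stack:                       # flood fill over nearest neighbours
            i, j = stack.pop()
            if not (0 <= i < energy.shape[0] and 0 <= j < energy.shape[1]):
                continue
            if not black[i, j] or labels[i, j]:
                continue
            labels[i, j] = n_clusters
            stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return labels

rng = np.random.default_rng(3)
e = rng.exponential(size=(64, 64))         # stand-in for an E_SL map
e[10:13, 20:24] += 50.0                    # a bright, connected "signal" region
labels = cluster_map(e)
# The injected region survives thresholding as a single connected cluster:
print(labels[10, 20] > 0 and labels[10, 20] == labels[12, 23])  # -> True
```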
Each cluster is considered a candidate detection event. Each is assigned a detection statistic value from its constituent pixels by simply summing the values of the statistic in the pixels. This is motivated by the additive property of the log-likelihood ratio—the inherited detection statistic is exactly the detection statistic for the area defined by the cluster. Each cluster is also assigned an approximate statistical significance $S$ based on the $\chi^2$ distribution; see equation (2.32). This significance is used when comparing different clusters to determine which is the ‘loudest’—the best candidate for being a gravitational wave signal. Finally, the energy at the same time–frequency locations in maps of each of the other requested likelihoods is also computed and recorded for each cluster.
The clusters are saved for later post-processing. The analysis of time shifting (see equation (2.2)), FFTing, and cluster identification is then repeated for each of the other sky positions and for each of the requested FFT lengths.
One other important feature of the time–frequency maps is the Fourier transform length or analysis time $T$, which determines the aspect ratio of the pixels. A longer time gives pixels with poor time resolution but good frequency resolution; a shorter time gives pixels with good time resolution but poor frequency resolution. Depending on the signal duration, different analysis times may be optimal. Since each pixel has the same noise distribution (assuming Gaussian statistics), the optimal pixel size is the size for which the signal spans the smallest number of pixels, so that the statistic is least polluted by noise.
Since the optimal analysis time for the incoming signal is not known, X-Pipeline uses several analysis times and applies a second layer of clustering between analysis times. For this second layer of clustering, clusters made of black pixels at two different analysis times that overlap in time and frequency are compared. The cluster that has the largest significance is kept as a candidate event; the less significant overlapping clusters are discarded.
3.4. Glitch rejection
As noted in section 2, noise glitches tend to have a strong correlation between each coherent energy $E_{\text{null}}$, $E_+$, $E_\times$ and its corresponding incoherent energy $I_{\text{null}}$, $I_+$, $I_\times$. X-Pipeline compares the coherent and incoherent energies to veto events that have properties similar to the noise background. These coherent veto tests are applied in post-processing (i.e. after candidate events from the different analysis times are generated and combined).
Two types of coherent veto are available in X-Pipeline. Both are pass/fail tests. The simplest is a threshold on the ratio $I/E$. Following the discussion in section 2.6, a cluster passes the coherent test if
$$I_{\text{null}}/E_{\text{null}} \geq r_{\text{null}},$$ \hspace{1cm} (3.1)
$$|\log_{10}(I_+/E_+)| \geq \log_{10}(r_+),$$ \hspace{1cm} (3.2)
$$|\log_{10}(I_\times/E_\times)| \geq \log_{10}(r_\times),$$ \hspace{1cm} (3.3)
where the thresholds $r_{\text{null}}$, $r_+$ and $r_\times$ may be specified by the user or chosen automatically by X-Pipeline. The forms of equations (3.2) and (3.3) make these tests two-sided, i.e. they pass clusters that are sufficiently far above or below the diagonal.
The second type of coherent veto test in X-Pipeline is called the median-tracking veto test. In this test, the exclusion curve is nonlinear and designed to approximately follow the measured distribution of background clusters.
Examination of scatter plots of $I$ versus $E$ for background clusters shows that while $I \simeq E$ for loud glitches, there is a bias to $I > E$ at low amplitudes. Furthermore, the width of the distribution of background events around the diagonal varies with $E$. A simple scaling argument shows that for large-amplitude uncorrelated glitches, we expect
$$\langle (E - I)^2 \rangle \propto I.$$ \hspace{1cm} (3.4)
Specifically, for a large single-detector glitch $\tilde{g}(f)$, the correlation with the noise $\tilde{n}(f)$ in another detector will have mean zero and variance $\propto |\tilde{g}|^2 \propto I$. Consequently, we expect noise events to be scattered about the diagonal with a width that is proportional to $I^{1/2}$ (recall that the energies are dimensionless quantities). The median-tracking test uses this information by estimating the median value of $I$ as a function of $E$ for background events. For each cluster to
be tested, it computes the following simple measure $n_\sigma$ of how far the cluster is above or below the median:
$$n_\sigma \equiv \frac{I - I_{\text{med}}(E)}{I^{1/2}}.$$ \hspace{1cm} (3.5)
An event is passed if
$$n_{\text{null}} > r_{\text{null}},$$ \hspace{1cm} (3.6)
$$|n_+| > r_+,$$ \hspace{1cm} (3.7)
$$|n_\times| > r_\times.$$ \hspace{1cm} (3.8)
As in the ratio test, the thresholds for each energy type are independent and may be specified by the user or selected automatically by X-Pipeline.
The median function $I_{\text{med}}(E)$ is estimated as follows. First, a set of background clusters are binned in $\log_{10} E$ and the median values of $\log_{10} E$ and $\log_{10} I$ in each bin are measured. A quadratic curve of the form
$$\log_{10} I = a (\log_{10} E)^2 + c$$ \hspace{1cm} (3.9)
is fitted to these sampled medians. The quadratic is merged smoothly to the diagonal $I = E$ above some value of $E$. This shape is entirely ad hoc, but in practice it provides a good fit to the observed distribution of glitches.
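The binning, median fit and \(n_\sigma\) score of equations (3.5) and (3.9) can be sketched with synthetic background data (a toy model with log-normal scatter, not measured glitch distributions):

```python
import numpy as np

rng = np.random.default_rng(4)

E = 10 ** rng.uniform(1, 4, size=5000)            # background coherent energies
I = E * 10 ** (0.1 * rng.standard_normal(5000))   # I ~ E with log-normal scatter

# Bin clusters in log10(E) and measure per-bin medians.
logE, logI = np.log10(E), np.log10(I)
edges = np.linspace(1, 4, 13)
med_x, med_y = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (logE >= lo) & (logE < hi)
    med_x.append(np.median(logE[sel]))
    med_y.append(np.median(logI[sel]))
med_x, med_y = np.array(med_x), np.array(med_y)

# Least-squares fit of log10(I) = a * (log10 E)^2 + c, as in equation (3.9).
A = np.column_stack([med_x ** 2, np.ones_like(med_x)])
(a, c), *_ = np.linalg.lstsq(A, med_y, rcond=None)

def I_med(E_val):
    return 10 ** (a * np.log10(E_val) ** 2 + c)

def n_sigma(E_val, I_val):
    """Distance above/below the background median, as in equation (3.5)."""
    return (I_val - I_med(E_val)) / np.sqrt(I_val)

# A cluster with I well above the background median scores a large n_sigma.
print(n_sigma(100.0, 500.0))
```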
An example of the median-tracking coherent glitch veto is shown in figure 4. Each plus symbol (+) denotes a background cluster, coloured by its significance $\log_{10} S$. The large mass of light points at the lower left are weak background noise events. The darkly coloured points extending along the diagonal to the upper right are strong background noise events. Also shown are clusters due to a series of simulated gravitational-wave signals added to the data, denoted by squares (□). Even though many of these simulated signals are weaker (lighter) than the strong background noise glitches, they are well separated from the background noise population in the two-dimensional \((E_{\text{null}}, I_{\text{null}})\) space. The dashed line shows the coherent veto threshold placed on \((E_{\text{null}}, I_{\text{null}})\); points below this line are discarded. Scatter plots of \(I_+\) versus \(E_+\) and \(I_\times\) versus \(E_\times\) have a similar appearance; see section 4 for examples.
In addition to the coherent glitch vetoes, clusters may also be rejected because they overlap data quality vetoes. These are periods when one or more detectors showed evidence of being disturbed by non-gravitational effects that are known to produce noise glitches. Such sources include environmental noise and instabilities in the detector control systems. These data quality vetoes are defined by studies of the data independently of X-Pipeline and hence are outside of the scope of the present paper. See [11, 12] for recent reviews of data quality and detector characterization efforts in the LIGO Scientific Collaboration and the Virgo Collaboration.
3.5. Triggered search: tuning and upper limits
We now focus on the strategy for conducting triggered searches with X-Pipeline, specifically searches for gravitational waves associated with GRBs. As pointed out by Hayama et al [30], GRB searches are an excellent case for the application of coherent analysis, since the sky position of the source is known a priori to high accuracy. We can therefore take full advantage of coherent combinations of the data streams without the false-alarm or computational penalties of scanning over thousands of trial sky directions.
3.5.1. Detection procedure
For the purpose of a search for unmodelled gravitational-wave emission, a GRB source is characterized by its sky position \(\hat{\Omega}\), the time of onset of gamma-ray emission (the trigger time) \(t_0\) and the range of possible time delays \(\Delta t\) between the gamma-ray emission and the associated gravitational-wave emission. The latter quantity is referred to as the on-source window for the GRB; this is the time interval that is analysed for candidate signals. LIGO searches for gravitational wave bursts associated with GRBs [10, 41, 42] have traditionally used an asymmetric on-source window of \([t_0 - 120\,\text{s}, t_0 + 60\,\text{s}]\), which is conservative enough to encompass most theoretical models of gravitational-wave emission for this source, as well as uncertainties associated with \(t_0\) [3, 41].
In order to claim the detection of a gravitational wave, we need to be able to establish with high confidence that a candidate event is statistically inconsistent with the noise background. In X-Pipeline GRB searches, we use the loudest event statistic [43, 44] to characterize the outcome of the experiment. The loudest event is the cluster in the on-source interval that has the largest significance (after application of vetoes); let us denote its significance by \(S_{\text{max}}^{\text{on}}\). We compare \(S_{\text{max}}^{\text{on}}\) to the cumulative distribution \(C(S_{\text{max}})\) of loudest significances measured using background noise (discussed below). We set a threshold on \(C(S_{\text{max}})\) such that the probability of background noise producing a cluster in the on-source interval with significance above this threshold is a specific small value (for example, a 1% chance). The on-source data are then analysed. If the significance \(C(S_{\text{max}}^{\text{on}})\) of the loudest cluster is greater than our threshold, we consider the cluster as a possible gravitational wave detection. We can also set an upper limit on the strength of gravitational-wave emission associated with the GRB in question.
In principle, the cumulative distribution $C(S_{\text{max}})$ of loudest-event significances for clusters produced by Gaussian background noise can be estimated \textit{a priori}. In practice, however, real detector data are non-Gaussian. The most straightforward procedure for estimating the background distribution is then simply to analyse additional data from times near the GRB, but outside the on-source interval. These data are referred to as \textit{off-source}. The off-source clusters will not contain a gravitational-wave signal associated with the GRB and so they can be treated as samples of the noise background. In X-Pipeline, we divide the off-source data into segments of the same length as that used for the on-source data and analyse each segment in exactly the same manner as the on-source data (using, for example, the same source direction relative to the detectors for computing coherent combinations). For each segment, we determine the significance of the loudest event after applying vetoes. This collection of loudest-event significances from the off-source data then serves as the empirical measurement of $C(S_{\text{max}})$.
In X-Pipeline we typically set the off-source data to be all data within $\pm 1.5$ h of the GRB time, excluding the on-source interval. This time range is limited enough so that the detectors should be in a similar state of operation as during the GRB on-source interval, but long enough to provide typically $\sim 50$ off-source segments for sampling $C(S_{\text{max}})$, thereby allowing the estimation of probabilities as low as $\sim 2\%$. To get still better estimates of the background distribution, we also analyse off-source data after artificially time-shifting the data from one or more detectors by different amounts ranging from a few seconds to several hundred seconds. These shifts can give up to approximately 1000 times the on-source data for background estimation, allowing estimation of probabilities at the sub-1% level.
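The loudest-event comparison amounts to an empirical tail probability. The sketch below builds a toy background distribution \(C(S_{\max})\) from simulated off-source loudest significances (the Gumbel numbers are stand-ins for measured values) and converts an on-source loudest event into a background probability:

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy loudest-event significances from ~50 off-source segments; extreme-value
# (Gumbel) scatter is a plausible but assumed model for this illustration.
S_max_off = rng.gumbel(loc=30.0, scale=3.0, size=50)

def p_value(S_on, S_off):
    """Fraction of off-source segments whose loudest event is at least as loud."""
    return float(np.mean(np.asarray(S_off) >= S_on))

# A loud on-source event gets a small background probability.
print(p_value(45.0, S_max_off))
```

With only ~50 segments the smallest resolvable probability is ~2%, which is why time slides are used to extend the background sample.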
Networks containing both the LIGO-Hanford detectors, H1 and H2, present a special case for background estimation, as local environmental disturbances can produce simultaneous background glitches which are not accounted for in time slides. We therefore do not time-shift H1 relative to H2 unless they are the only detectors operating. In that case, the local probability is computed both with and without time slides to allow a consistency check on the background estimation. (Triggered searches with second-scale on-source windows have the advantage of not requiring time shifts at all; see for example [45].) In practice, we do not see significant differences due to correlated environmental disturbances. We attribute this robustness to the coherent glitch rejection tests described in section 3.4.
3.5.2. Upper limits
The comparison of the largest significance measured in the on-source data, $S_{\text{max}}^{\text{on}}$, to the cumulative distribution $C(S_{\text{max}})$ estimated from the off-source data allows us to determine if there is a statistically significant transient associated with the GRB. If no statistically significant signal is present, we set a frequentist upper limit on the strength of gravitational waves associated with the GRB. For a given gravitational-wave signal model, we define the 90% confidence level upper limit on the signal amplitude as the minimum amplitude for which there is a 90% or greater chance that such a signal, if present in the on-source region, would have produced a cluster with significance larger than the largest value $S_{\text{max}}^{\text{on}}$ actually measured.
We adopt the measure of signal amplitude that is standard for LIGO burst searches, the root-sum-squared amplitude $h_{\text{rss}}$, defined by
$$h_{\text{rss}} = \sqrt{\int_{-\infty}^{\infty} \mathrm{d}t \left[ h_{+}^{2}(t) + h_{\times}^{2}(t) \right]}$$
$$= \sqrt{2 \int_{0}^{\infty} \mathrm{d}f \left[ |\tilde{h}_{+}(f)|^{2} + |\tilde{h}_{\times}(f)|^{2} \right]}. \quad (3.10)$$
The units of $h_{\text{rss}}$ are Hz$^{-1/2}$, the same as for amplitude spectra, making it a convenient quantity for comparing to detector noise curves. For narrow-band signals, the $h_{\text{rss}}$ can also be linked to the energy emitted in gravitational waves under the assumption of isotropic radiation via [46]
$$E_{\text{GW}}^{\text{iso}} \simeq \frac{\pi^2 c^3}{G} D^2 f_0^2 h_{\text{rss}}^2,$$
(3.11)
where $D$ is the distance to the source and $f_0$ is the dominant frequency of the radiation. One drawback of $h_{\text{rss}}$ is that it does not involve the detector sensitivity (either antenna response or noise spectrum). As a result, upper limits phrased in terms of $h_{\text{rss}}$ will depend on the family and frequency of waveforms used and also on the sky position of the source.
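Equation (3.11) can be evaluated for representative numbers. The source parameters below (a 150 Hz signal at 10 kpc with \(h_{\text{rss}} = 10^{-21}\,\text{Hz}^{-1/2}\)) are example values chosen for this sketch, not results from the paper:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
kpc = 3.086e19         # kiloparsec in metres

D = 10.0 * kpc         # distance to the source (assumed)
f0 = 150.0             # dominant gravitational-wave frequency, Hz (assumed)
h_rss = 1.0e-21        # root-sum-squared amplitude, Hz^-1/2 (assumed)

# Isotropic-emission energy estimate of equation (3.11), in joules.
E_gw = math.pi ** 2 * c ** 3 / G * D ** 2 * f0 ** 2 * h_rss ** 2
print(f"E_GW^iso ~ {E_gw:.1e} J")
```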
To set the upper limit, we need to determine how strong a real gravitational-wave signal needs to be in order to appear with a given significance. We do this using a third set of clusters, one that contains sample gravitational-wave signals. Specifically, we repeatedly re-analyse the on-source data after adding ('injecting') simulated gravitational-wave signals to the data from each detector. The data are then analysed as before, producing lists of clusters. The significance associated with a given injection is the largest significance of all clusters that were observed within a short time window (typically 0.1 s) of the injection time, after applying vetoes.
The procedure for setting an upper limit is:
1. Select one or more families of waveforms for which the upper limit will be set. For example, a common choice in LIGO is linearly polarized, Gaussian-modulated sinusoids ('sine-Gaussians') with fixed central frequency and quality factor, and random peak time and polarization angle.
2. Find the significance $S_{\text{max}}^{\text{on}}$ of the loudest event in the on-source data, after applying the coherent glitch veto (section 3.4) and any data-quality vetoes.
3. For each waveform family:
(a) Generate random parameter values for a large number of waveforms from the family (e.g. specific peak times and polarization angles for the sine-Gaussian case), and with fixed $h_{\text{rss}}$ amplitude.
(b) Add the waveforms one by one to the on-source data and determine the largest significance of any surviving cluster (after vetoes) associated with each injection.
(c) Compute the percentage of the injections that have $S \geq S_{\text{max}}^{\text{on}}$.
(d) Repeat 3(a)–3(c) using the same waveform family but with different $h_{\text{rss}}$ amplitudes. The 90% confidence-level upper limit is that $h_{\text{rss}}$ value for which 90% of the injections have $S \geq S_{\text{max}}^{\text{on}}$.
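Step 3(d) is essentially a scan over injection amplitude. In the sketch below, `detection_fraction()` is a toy stand-in for the full injection-and-recovery analysis: its quadratic scaling of significance with amplitude and all numerical values are illustrative assumptions, not X-Pipeline behaviour:

```python
import numpy as np

rng = np.random.default_rng(6)

def detection_fraction(h_rss, S_on, n_inj=2000):
    """Fraction of simulated injections recovered louder than the on-source
    loudest event (toy significance model)."""
    S_inj = (h_rss / 1e-22) ** 2 + 5.0 * rng.standard_normal(n_inj)
    return np.mean(S_inj >= S_on)

S_on = 50.0                                       # loudest on-source significance
amplitudes = np.logspace(-22, -20, 41)            # trial h_rss values, Hz^-1/2
fractions = np.array([detection_fraction(h, S_on) for h in amplitudes])

# The 90% UL is the smallest amplitude at which >= 90% of injections beat S_on.
upper_limit = amplitudes[np.argmax(fractions >= 0.90)]
print(f"90% UL ~ {upper_limit:.2e} Hz^-1/2")
```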
3.5.3. Tuning and closed-box analyses
The sensitivity of the pipeline is determined by the relative significance of the clusters produced by real gravitational-wave signals to those produced by background noise. This in turn depends on the details of how the analysis is carried out. In particular, the thresholds used for the coherent glitch rejection tests will have a significant impact on the sensitivity. Too low a threshold will allow background noise glitches to survive and possibly appear louder than a real gravitational-wave signal. Too high a threshold may reject the gravitational-wave signals we seek.
To improve the sensitivity of X-Pipeline searches, we tune the coherent glitch test to optimize the trade-off between glitch rejection and signal acceptance. We do this using a closed-box analysis. A closed-box analysis estimates the pipeline sensitivity using the off-source and
injection data, but not the on-source data. This *blind* tuning avoids the possibility of biasing the upper limit.
The procedure used for a closed-box analysis follows that used for computing an upper limit, except that an off-source segment is used as a substitute for the true on-source segment. We then test different thresholds for the coherent veto tests and select the threshold set that gives us the best average ‘upper limit’ estimated from the off-source segments. Specifically, we do the following:
1. For each coherent veto test \((E_+ \text{ versus } I_+, \ E_\times \text{ versus } I_\times, \ E_{\text{null}} \text{ versus } I_{\text{null}})\) we select a discrete set of trial veto thresholds to test.
2. The off-source segments and the injection clusters are divided randomly into two equal sets: one for tuning and one for upper-limit estimation.
3. For each distinct combination of trial thresholds \((r_+, \ r_\times, \ r_{\text{null}})\), we do the following:
(a) We apply the coherent veto test (and any data quality vetoes) to the background clusters from each of the tuning off-source segments. The collection of loudest surviving events from each segment gives us \(C(S_{\text{max}})\) for that set of trial thresholds.
(b) We determine the off-source segment that gives the loudest event closest to the 95th percentile of the off-source \(S_{\text{max}}\) (i.e. closest to \(C(S_{\text{max}}) = 0.95\)). This off-source segment is termed the *dummy on-source segment*. (Different background segments may serve as the dummy on-source for different trial values of the coherent veto thresholds.)
(c) The dummy on-source clusters and the tuning injection clusters are read, and the coherent vetoes and data-quality vetoes are applied to each. The upper limit is computed, treating the dummy clusters as the true on-source clusters.
4. The final, tuned veto thresholds are the ones that give the lowest upper limit based on the dummy on-source clusters. (If testing multiple waveform families, the upper limits may be averaged across families for deciding the optimal tuning.)
5. To get an unbiased estimate of the expected upper limit, we apply the tuned vetoes to the second set of off-source and injection clusters, which were not used for tuning. Steps 3(a)–3(c) are repeated using the final thresholds and using the 50th percentile of \(S_{\text{max}}\) to choose the dummy on-source segment. The upper limit estimated from the dummy on-source segment in this second data set is the predicted upper limit for the GRB; equivalently, it may be interpreted as the sensitivity of the search.
We choose the 95th percentile of \(S_{\text{max}}\) for tuning to focus on eliminating the tail of high-significance background glitches. This is a deliberate choice, because to be accepted as a detection, a GWB will need to stand well clear of the background. We choose the 50th percentile of \(S_{\text{max}}\) as the dummy on-source value for sensitivity estimation because this is our best prediction for the typical value of \(S_{\text{max}}\) in the on-source data under the null hypothesis. Separate data sets are used in tuning and sensitivity estimation to avoid bias from tuning the cuts on the same data used to estimate the sensitivity. The data set used for closed-box sensitivity estimates is later re-used for computing event probabilities and upper limits for the ‘open-box’ (true on-source) data; this introduces no bias because no tuning decisions are made based on the closed-box sensitivity estimate.
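Selecting the dummy on-source segment is a simple nearest-to-percentile lookup. The significances below are made-up example values; the function mirrors the selection rule described above (95th percentile for tuning, 50th for sensitivity estimation):

```python
import numpy as np

def dummy_segment(S_max_off, percentile):
    """Index of the off-source segment whose loudest-event significance is
    closest to the given percentile of the S_max distribution."""
    target = np.percentile(S_max_off, percentile)
    return int(np.argmin(np.abs(np.asarray(S_max_off) - target)))

# Toy loudest-event significances from seven off-source segments.
S_max_off = [31.0, 28.5, 40.2, 33.3, 29.9, 52.7, 30.4]
print(dummy_segment(S_max_off, 95))   # -> 5 (the 52.7 segment, for tuning)
print(dummy_segment(S_max_off, 50))   # -> 0 (the median 31.0 segment)
```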
In X-Pipeline, the tuning and upper limit calculations are automated. The closed-box analysis is performed first using a pre-selected range of trial thresholds for the coherent glitch
test. A web page is generated automatically reporting the details of the closed-box analysis, including the optimized threshold values and the predicted upper limits. For the S5/VSR1 search, the user re-runs the post-processing on the on-source data with the fixed optimized thresholds, and another web page report is generated listing detection candidates and upper limits. For the S6/VSR2 search, we have automated this ‘box opening’ as well, so that the on-source events are scanned for candidate GWBs immediately once the closed-box tuning analysis has finished.
3.5.4. Statistical and systematic errors. There are several sources of error that can affect our analysis. The principal ones are calibration uncertainties (amplitude and phase response of the detectors and relative timing errors), and uncertainty in the sky position of the GRB.
X-Pipeline is able to account for these effects automatically in tuning and upper limit estimation. Specifically, X-Pipeline’s built-in simulation engine for injecting GWB signals is able to perturb the amplitude, phase and time delays for each injection in each detector. The perturbations are drawn from Gaussian distributions with mean and variance matching the calibration uncertainties for each detector. Furthermore, the GRB sky position can be perturbed in a random direction by a Gaussian-distributed angle with standard deviation set to the GRB error box width reported by the GCN. Tuning and upper limits based on the perturbed injections are effectively marginalized over these sources of error.
For the S5/VSR1 GRB search, the capability for perturbed injections was not available at the time of the original data analysis, and so the impact of the errors was estimated by re-analysis of a small subset of the full GRB sample. For the S6/VSR2 search, we include calibration and sky-position uncertainties in simulations for all GRBs from the beginning, removing the need to do any additional error analysis.
4. GRB 031108
GRB 031108 [47] was a long GRB observed by Ulysses, Konus-Wind, Mars Odyssey (HEND and GRS), and RHESSI. As observed by Ulysses, it had a duration of approximately 22 s, a 25–100 keV fluence of approximately $2.5 \times 10^{-5} \text{ erg cm}^{-2}$ and a peak flux of approximately $1.8 \times 10^{-6} \text{ erg cm}^{-2}\,\text{s}^{-1}$ over 0.50 s. It was triangulated to a three-sigma error box with approximate area 1600 square arcminutes, with centre coordinates $4 \text{ h } 26 \text{ m } 54.86 \text{ s}$, $-5^\circ 55' 49.00''$.
GRB 031108 occurred during the third science run of the LIGO Scientific Collaboration (‘S3’). At that time, the two LIGO-Hanford detectors H1 and H2 were operating, while the Livingston detector L1 was not. A search for gravitational waves associated with the GRB was performed using a cross-correlation algorithm and is reported in Abbott et al [42].
To demonstrate X-Pipeline, we perform a closed-box analysis\(^1\) of the LIGO H1–H2 data to search for gravitational waves associated with GRB 031108. We tune the search and estimate its sensitivity to gravitational-wave emission as discussed in section 3.5, using the same simulated waveforms as in Abbott et al. We compare the sensitivity results to those of the cross-correlation search in Abbott et al. We estimate the 90% confidence upper limits from X-Pipeline to be typically 40% lower than those from the cross-correlation search.
\(^1\) We restrict ourselves to closed-box results because the policies governing LIGO data use do not permit the publication of open-box analysis results in methodological papers.
4.1. Analysis
At the time of GRB 031108, the two LIGO-Hanford detectors H1 and H2 were operating. Figure 5 shows the noise level in the detectors at that time.
Since the H1 and H2 detectors have identical antenna responses, the network is sensitive to only one of the two gravitational-wave polarizations from any given sky direction. In the DPF, this means that $f^\times = 0$. As a consequence, the cross energy also vanishes identically, $E_\times = 0$ and $E_{SL} = E_+$. Each event cluster is therefore characterized by the two coherent energies $E_+$ and $E_{\text{null}}$ and their associated incoherent components $I_+$ and $I_{\text{null}}$. Figure 6 shows the weighting factors $e^+$ as a function of frequency.
X-Pipeline was run on all data within $\pm 1$ h of the GRB time for background estimation. Clusters were generated using Fourier transform lengths of 1/8 s, 1/16 s, 1/32 s, 1/64 s, 1/128 s and 1/256 s. Figure 7 shows scatter plots of $I_+$ versus $E_+$ and $I_{\text{null}}$ versus $E_{\text{null}}$ for the half of the off-source clusters that were used for upper limit estimation (i.e. after tuning). Also shown are the clusters produced by simulated sine-Gaussian GWBs at 150 Hz, one of the types tested in [42]. These injections had amplitudes of $6.3 \times 10^{-21}$ Hz$^{-1/2}$, approximately equal to the $h_{\text{rss}}$ upper limit estimated from the closed-box analysis.
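The multi-resolution clustering input described above can be sketched by building energy maps at several Fourier transform lengths. This simplified version uses non-overlapping, un-whitened segments; X-Pipeline's actual data conditioning is more involved:

```python
import numpy as np

def tf_map(data, fs, fft_len_s):
    """Energy map |FFT|^2 over non-overlapping windowed segments of
    duration fft_len_s.  A simplified stand-in for X-Pipeline's
    time-frequency map construction (no whitening, no overlap)."""
    n = int(fs * fft_len_s)
    n_seg = len(data) // n
    segs = data[: n_seg * n].reshape(n_seg, n)
    spec = np.fft.rfft(segs * np.hanning(n), axis=1)
    return np.abs(spec) ** 2  # rows: time bins, columns: frequency bins

fs = 4096
data = np.random.default_rng(1).normal(size=fs * 2)  # 2 s of white noise
maps = {L: tf_map(data, fs, L) for L in (1 / 8, 1 / 16, 1 / 32)}
for L, m in maps.items():
    print(L, m.shape)
```

Shorter transform lengths trade frequency resolution for time resolution, which is why several lengths are analysed in parallel and clusters are drawn from all of them.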
As expected, loud background triggers fall close to the diagonal in both of these plots. The simulated gravitational waves also fall close to the diagonal for $I_+$ versus $E_+$; this is due to the fact that H2 is significantly less sensitive than H1 and so receives very little weighting in the calculation of $E_+$. In turn, this means that the H1–H2 cross terms in $E_+$ are small compared to the H1–H1 term, so that $E_+$ is dominated by the diagonal components and so is very similar to $I_+$. For the null stream, however, the weightings are reversed, and H2 is weighted higher than H1. As a consequence, gravitational waves lie above the diagonal in the $I_{\text{null}}$ versus $E_{\text{null}}$ plot, and it is possible to separate the injections from the background clusters in $(E_{\text{null}}, I_{\text{null}})$ space. X-Pipeline’s automated tuning procedure recognizes both of these facts; when run using the median-tracking veto test, it estimates that the best sensitivity will come from requiring a
Figure 6. Normalized contributions to $e^+ \equiv f^+ / |f^+|$ (in the DPF). Both detectors have the same antenna response, so the coherent weighting at each frequency is determined entirely by the relative noise spectra. Because $f^\times = 0$ for this network, the normalized contribution of detector $i$ to the null stream is simply $1 - (e_i^+)^2$, i.e. identical to this figure with the H1 and H2 curves swapped.
threshold of $r_+ = 5$ on $(E_{\text{null}}, I_{\text{null}})$, and imposing no condition on $I_+$ versus $E_+$. The $(E_{\text{null}}, I_{\text{null}})$ threshold is indicated in figure 7 by the dashed line; points below this line are discarded. As can be seen, this test rejects the majority of the loud off-source clusters, while accepting most of the simulated gravitational wave clusters. The off-source clusters that survive the test tend to be of low significance and therefore will not affect the loudest-event upper limit. Figure 8 shows the distribution of $S_{\text{max}}$ before and after the null-stream test.
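The pass/fail structure of such a coherent consistency test can be sketched as follows. The linear-threshold form and the function name are illustrative assumptions; the exact shape of the test used by X-Pipeline may differ:

```python
def passes_null_veto(e_null, i_null, r=5.0):
    """Coherent consistency test sketch: background glitches have
    E_null ~ I_null (on the diagonal), while real GWBs are cancelled in
    the null stream, so I_null exceeds E_null.  Keep only clusters
    sufficiently far above the diagonal."""
    return i_null > r * e_null

# A glitch-like cluster (E ~ I) is vetoed; a signal-like one survives.
print(passes_null_veto(100.0, 110.0))  # -> False
print(passes_null_veto(10.0, 120.0))   # -> True
```

In the automated tuning, the threshold `r` is scanned over the off-source and injection samples to maximize the expected sensitivity.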
The closed-box analysis discussed in section 3.5 was used to tune the coherent veto test and estimate the expected upper limit from X-Pipeline. Figure 9 shows a scatter plot of the ‘dummy’ on-source clusters. Recall that the dummy on-source region is selected as the background segment that gives the median loudest event surviving the coherent veto test. It therefore represents the expected typical result under the null hypothesis, averaging over noise instantiations, and so is a more robust way to estimate the pipeline sensitivity than, e.g., picking a random segment (or even the on-source segment).
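The dummy on-source selection can be sketched as follows (hypothetical helper; segments with no surviving events are assigned a loudest significance of zero):

```python
import numpy as np

def dummy_onsource_index(loudest_per_segment):
    """Pick the off-source segment whose loudest surviving event
    significance is the median over all segments, as described in the
    text for the dummy on-source region."""
    s = np.asarray(loudest_per_segment, dtype=float)
    order = np.argsort(s)
    return int(order[len(s) // 2])  # index of the median segment

s_max = [3.1, 0.0, 7.4, 2.2, 5.0]
i = dummy_onsource_index(s_max)
print(i, s_max[i])  # -> 0 3.1
```

Using the median segment rather than a random one makes the sensitivity estimate robust against atypically quiet or glitchy noise instantiations.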
The predicted $h_{\text{rss}}$ upper limits at 90% confidence for narrow-band sine-Gaussian waveforms of different central frequencies are shown in table 3 and figure 10. Table 3 also shows the actual upper limits from the cross-correlation search reported in [42]. The predicted X-Pipeline sensitivity is approximately a factor of 1.7 better than that of the cross-correlation pipeline, corresponding to an increase in search volume of a factor of $1.7^3 \simeq 5$. Similar improvements were seen in the open-box analysis of GRBs in the S5-VSR1 run (2005–2007) [15].
As can be seen in figure 10, the limiting amplitudes for this GRB track the noise spectrum of H2 and correspond to a matched-filter signal-to-noise ratio of approximately 5 in H2. This occurs because the sensitivity of the analysis is limited by the coherent glitch rejection test.
Figure 7. Scatter plots of off-source (+) and simulation (□) cluster likelihoods: $I_+$ versus $E_+$ (top) and $I_{\text{null}}$ versus $E_{\text{null}}$ (bottom). The colour denotes $\log_{10}(S)$. Loud background triggers fall close to the diagonal. Simulated gravitational waves also fall close to the diagonal for $I_+$ versus $E_+$, but above the diagonal for $I_{\text{null}}$ versus $E_{\text{null}}$. The dashed line denotes the coherent consistency threshold on $(E_{\text{null}}, I_{\text{null}})$ that is selected by X-Pipeline’s automated tuning procedure; points below this line are discarded. This test rejects the majority of the loud off-source clusters, while accepting most of the simulated gravitational-wave clusters, even if the GWB significance is typical of background events. The simulated signals in this plot have $h_{\text{rss}} = 6.3 \times 10^{-21} \text{ Hz}^{-1/2}$, approximately equal to the upper limit estimated from the closed-box analysis.
Figure 8. Distribution of the loudest event significance $S_{\text{max}}$ seen in each of the off-source segments used for upper limit estimation, before and after the coherent glitch rejection test. Only 56 of the 391 off-source segments have events that survive the test.
Figure 9. $I_{\text{null}}$ versus $E_{\text{null}}$ scatter plot of the dummy on-source (+) and simulation (□) cluster likelihoods used to estimate the upper limit. The colour denotes $\log_{10}(S)$. No background events survive the coherent consistency test. The simulated signals in this plot have $h_{\text{rss}} = 6.3 \times 10^{-21} \text{ Hz}^{-1/2}$, approximately equal to the estimated upper limit.
Table 3. Estimated $h_{\text{rss}}^{90\%}$ amplitude upper limits from X-Pipeline and the best upper limits from the actual cross-correlation search [42]. The units are $10^{-21} \, \text{Hz}^{-1/2}$. The simulated waveforms are circularly polarized sine-Gaussians as described in [42].
| Frequency (Hz) | 100 | 150 | 250 | 554 | 1000 | 1850 |
|---------------|-----|-----|-----|-----|------|------|
| Cross-correlation | 18.4 | 11.3 | 10.9 | 12.5 | 20.4 | 51.5 |
| X-Pipeline | 11.1 | 6.1 | 6.5 | 7.5 | 12.6 | 36.7 |
Figure 10. The expected 90% confidence level upper limits on the GW amplitude ($\bullet$) from X-Pipeline for narrow-band circularly polarized sine-Gaussian bursts. The detector noise spectra are also shown for reference.
This test requires a measurable correlation between the detectors, which in turn requires that the GWB have some minimal signal-to-noise ratio in each. This behaviour is typical of tuning using the 95th percentile of $S_{\text{max}}$, which is an aggressive choice designed to suppress the loud background. While the upper limits tend to be limited by such a strong background rejection, our ability to detect a GWB is enhanced, since a GWB candidate will undoubtedly need a significance higher than some very high percentile of the background to be claimed as an actual gravitational wave.
The factor of 1.7 sensitivity improvement of X-Pipeline relative to the cross-correlation search in [42] can be attributed in part to two factors. We estimate that a factor of approximately 1.3 comes from using $E_{\text{SL}}$ rather than the cross-correlation as the detection statistic. $E_{\text{SL}}$ includes the auto-correlation terms $(\tilde{d}_{H1}^* \tilde{d}_{H1}, \tilde{d}_{H2}^* \tilde{d}_{H2})$ in addition to the cross-correlation terms $(\tilde{d}_{H1}^* \tilde{d}_{H2})$ when combining the H1 and H2 data streams. This gives a net increase in the signal-to-noise ratio. More precisely, one can compute the ratio of the expected contribution to $E_{\text{SL}}$ due to a GWB to the standard deviation in $E_{\text{SL}}$ due to Gaussian noise; see section 2.5. Performing the same calculation for the cross-correlation statistic, one finds the per-pixel ratio for $E_{\text{SL}}$.
to be $1.8 \simeq 1.3^2$ times larger than that for the cross-correlation (assuming a 2:1 ratio in the noise amplitudes for H2:H1). Another factor of $\sim 1.2$ can be attributed to the clustering, which restricts the likelihood calculation to pixels that show significant signal power (and thus tends to exclude pixels that contain only background noise). The cross-correlation statistic in [42] was computed on a minimum time–frequency volume (number of pixels) of approximately 50. By contrast, the typical cluster size in X-Pipeline was found to be 10–30 for injections at the 90% upper limit amplitude. As seen in section 2.5 and [17], the amplitude sensitivity in Gaussian noise scales as $N^{-1/4}$. The factor of $\sim 2$ smaller number of pixels used by X-Pipeline should therefore give a factor of $\sim 2^{1/4} = 1.2$ sensitivity improvement. Combined with the previous factor of 1.3, this gives a total improvement of about 1.6. While this is very close to the average measured improvement, one should keep in mind that these rough estimates do not properly account for the non-Gaussianity of the background (which will decrease the sensitivity of both pipelines), or for the tendency of the coherent glitch rejection test to limit the X-Pipeline sensitivity in the absence of strong background glitches. These other effects are presumably also important.
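These factors can be checked with a few lines of arithmetic, using only the numbers quoted in the text and in table 3:

```python
# Cross-check of the quoted sensitivity factors (numbers taken from
# table 3 and the text of this section; arithmetic only, no new analysis).
cross_corr = [18.4, 11.3, 10.9, 12.5, 20.4, 51.5]
x_pipeline = [11.1, 6.1, 6.5, 7.5, 12.6, 36.7]
mean_ratio = sum(c / x for c, x in zip(cross_corr, x_pipeline)) / len(cross_corr)
print(round(mean_ratio, 2))  # ~1.65, consistent with the quoted factor of ~1.7

# Rough decomposition: detection-statistic gain and clustering gain.
statistic_gain = 1.8 ** 0.5   # per-pixel energy ratio 1.8 -> ~1.34 in amplitude
clustering_gain = 2 ** 0.25   # ~2x fewer pixels, N^(-1/4) scaling -> ~1.19
print(round(statistic_gain * clustering_gain, 2))  # -> 1.6
```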
5. Autonomous running
X-Pipeline has been used to process data from S5-VSR1 (2005–2007). This is an ‘offline’ search, being completed almost 2 years after the last of the GRBs in question was observed. In parallel, X-Pipeline is being improved for the S6-VSR2 run, which started in July 2009. Our goal for S6-VSR2 is fully autonomous running, with a complete analysis of each GRB within 24 h of the trigger. To achieve this goal requires automatic triggering of X-Pipeline.
5.1. Automated launch of X-Pipeline by GCN triggers
Most of the information for sources that are analysed by various externally triggered burst searches in LIGO-Virgo comes from the GRB Coordinates Network (GCN) [48]. GCN notices and circulars are received in real time by LIGO-Virgo, and the information needed for the search analyses is parsed automatically by Perl scripts, which are launched each time a GCN notice or circular is received. The information parsed includes: the time and date of the event, the source position (right ascension and declination), the position error and the duration of the event. For each source, these parameters are compiled and written to a trigger file.
Concurrently, a Perl script runs at a central computing site and regularly checks if there are new source events listed in the trigger file. When there are new triggers, the script checks for availability of the LIGO-Virgo data that are necessary for analysing the source. If the needed data are available, the script launches X-Pipeline event-generation jobs (which include simulation and off-source analyses) on the computing cluster. These jobs are monitored continuously to automatically determine when the jobs have finished. Once they are completed, the post-processing (tuning and detection/upper limit) jobs are automatically launched and likewise monitored. Successful completion of these steps results in a web page in which the results of the analysis are presented, and an email notification is sent to human analysts. Additionally, the scripts that monitor the status of the search and post-processing jobs log that progress for each source event and regularly write this information to a summary status web page. These GCN parsing and triggering scripts are now operational, and X-Pipeline is
currently autonomously analysing GRBs from the *Swift* [49] satellite. Open-box results are available in as little as 6 h following a GCN alert.
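One polling pass of such an automation loop might be sketched as follows. All callables here are hypothetical stand-ins for the Perl scripts described above, not the actual implementation:

```python
def process_new_triggers(read_triggers, data_available, launch, done, notify):
    """One polling pass of a hypothetical automation loop mirroring the
    GCN-driven workflow in the text: new triggers are launched once their
    detector data are available; finished jobs trigger a notification."""
    launched = []
    for trig in read_triggers():
        if trig["status"] == "new" and data_available(trig):
            launch(trig)                    # event-generation + simulations
            trig["status"] = "running"
            launched.append(trig["id"])
        elif trig["status"] == "running" and done(trig):
            trig["status"] = "complete"
            notify(trig)                    # post-processing + email/web page
    return launched

triggers = [{"id": "GRB090101", "status": "new"}]
out = process_new_triggers(lambda: triggers, lambda t: True,
                           lambda t: None, lambda t: False, lambda t: None)
print(out, triggers[0]["status"])  # -> ['GRB090101'] running
```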
Other modifications currently being made to X-*Pipeline* focus on the larger sky-position error boxes from the Fermi satellite [50]. For S6-VSR2, most of the GRB triggers come from the GBM instrument on Fermi, which gives a typical position uncertainty of several degrees. This is much larger than the typical uncertainty of a few arcmin for GRBs from *Swift* in S5-VSR1. The X-*Pipeline* launch scripts are currently being modified to set up a grid of sky positions covering this error region, and the handling of events is being modified to minimize the additional computational time required. Finally, the suite of simulated waveforms has been expanded to include binary neutron star and black-hole–neutron-star binary inspirals, since these systems are widely thought to be the progenitors of short GRBs.
6. Summary
X-*Pipeline* is a software package designed to perform autonomous searches for gravitational-wave bursts associated with astrophysical triggers such as GRBs. It performs a fully coherent analysis of data from arbitrary networks of detectors to sensitively search small patches of the sky for gravitational-wave bursts. X-*Pipeline* features automated tuning of background rejection tests, and a built-in simulation engine with the ability to simulate effects such as calibration uncertainties and sky-position errors. X-*Pipeline* can be launched automatically by receipt of a GCN email, performing a complete analysis of data, including tuning and identification of GWB candidates, without human intervention. Each astrophysical trigger is analysed as a separate search, with background estimation and tuning performed using independent data samples local to the trigger. In a test on actual detector data for a real GRB, we find that X-*Pipeline* is sensitive to signals approximately a factor of 1.7 weaker than those detectable by the cross-correlation technique used in previous LIGO searches. X-*Pipeline* has recently been used for the analysis of GRBs from the LIGO–Virgo S5-VSR1 run and is currently running autonomously during the S6-VSR2 run to search for gravitational waves associated with GRBs observed electromagnetically. Our goal is the rapid identification of possible GWBs on time scales short enough to prompt additional follow-up observations by other observatories.
Acknowledgments
We thank Kipp Cannon and Ray Frey for valuable comments on an earlier draft of this paper. PJS and GJ were supported in part by STFC grant number PP/F001096/1. For MT, the research was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. MT was supported under research task number 05-BEFS05-0014. MW was supported by the California Institute of Technology and the École normale supérieure Paris. LS and SP were supported by an NSF REU Site grant. We thank the LIGO Scientific Collaboration for permission to use data from the time of GRB 031108 for our tests. LIGO was constructed by the California Institute of Technology and Massachusetts Institute of Technology with funding from the National Science Foundation and operates under cooperative agreement number PHY-0107417. This paper has been assigned LIGO document number LIGO-P0900097-v4.
References
[1] Ott C D 2009 *Class. Quantum Gravity* **26** 063001 (arXiv:0809.0695)
[2] Cutler C *et al* 1993 *Phys. Rev. Lett.* **70** 2984
[3] Mészáros P 2006 *Rept. Prog. Phys.* **69** 2259 (arXiv:astro-ph/0605208)
[4] Cutler C and Thorne K S 2001 *General Relativity and Gravitation, Proc. GR16 (Durban, South Africa)* ed N T Bishop and S D Maharaj (Singapore: World Scientific) pp 72–111 (arXiv:gr-qc/0204090)
[5] Bloom J S *et al* 2009 (arXiv:0902.1527)
[6] Kanner J *et al* 2008 *Class. Quantum Gravity* **25** 184034 (arXiv:0803.0312)
[7] Van Elewyck V *et al* 2009 *Int. J. Mod. Phys.* D **18** 1655 (arXiv:0906.4957)
[8] Abbott B P *et al* 2009 *Phys. Rev. D* **80** 102001 (arXiv:0905.0020)
[9] Abbott B P *et al* 2009 *Phys. Rev. D* **79** 122001 (arXiv:0901.0302)
[10] Abbott B *et al* 2008 *Astrophys. J.* **681** 1419 (arXiv:0711.1163)
[11] Blackburn L *et al* 2008 *Class. Quantum Gravity* **25** 184004 (arXiv:0804.0800)
[12] Leroy N (LIGO Scientific Collaboration and Virgo Collaboration) 2009 *Class. Quantum Gravity* **26** 204007
[13] https://geco.phys.columbia.edu/xpipeline/wiki
[14] Chatterji S, Lazzarini A, Stein L, Sutton P, Searle A and Tinto M 2006 *Phys. Rev. D* **74** 082005
[15] Abbott B P *et al* 2010 *Astrophys. J.* to appear (arXiv:0908.3824)
[16] Cannon K C 2008 *Class. Quantum Gravity* **25** 105024
[17] Anderson W G, Brady P R, Creighton J D E and Flanagan E E 2001 *Phys. Rev. D* **63** 042003
[18] Sylvestre J 2002 *Phys. Rev. D* **66** 102004
[19] Gursel Y and Tinto M 1989 *Phys. Rev. D* **40** 3884
[20] Tinto M 1996 *Proceedings of the International Conference on Gravitational Waves: Source and Detectors* (Singapore: World Scientific)
[21] Flanagan E E and Hughes S A 1998 *Phys. Rev. D* **57** 4566
[22] Sylvestre J 2003 *Phys. Rev. D* **68** 102005
[23] Cadonati L 2004 *Class. Quantum Gravity* **21** S1695
[24] Wen L and Schutz B 2005 *Class. Quantum Gravity* **22** S1321
[25] Klimenko S, Mohanty S, Rakhmanov M and Mitselmakher G 2005 *Phys. Rev. D* **72** 122002
[26] Klimenko S, Mohanty S, Rakhmanov M and Mitselmakher G 2006 *J. Phys. Conf. Ser.* **32** 12
[27] Mohanty S, Rakhmanov M, Klimenko S and Mitselmakher G 2006 *Class. Quantum Gravity* **23** 4799 (arXiv:gr-qc/0601076)
[28] Rakhmanov M 2006 *Class. Quantum Gravity* **23** S673 (arXiv:gr-qc/0604005)
[29] Summerscales T Z, Burrows A, Finn L S and Ott C D 2008 *Astrophys. J.* **678** 1142 (arXiv:0704.2157)
[30] Hayama K, Mohanty S D, Rakhmanov M and Desai S 2007 *Class. Quantum Gravity* **24** S681 (arXiv:0709.0940)
[31] Searle A C, Sutton P J, Tinto M and Woan G 2008 *Class. Quantum Gravity* **25** 114038 (arXiv:0712.0196)
[32] Searle A C, Sutton P J and Tinto M 2009 *Class. Quantum Gravity* **26** 155017 (arXiv:0809.2809)
[33] Klimenko S, Yakushin I, Mercer A and Mitselmakher G 2008 *Class. Quantum Gravity* **25** 114029 (arXiv:0802.3232)
[34] Fairhurst S 2009 *New J. Phys.* **11** 123006
[35] Flanagan E E and Hughes S A 2005 *New J. Phys.* **7** 204 (arXiv:gr-qc/0501041)
[36] Abbott B *et al* 2004 *Nucl. Instrum. Methods Phys. Res.* A **517** 154
[37] Abbott B *et al* 2009 *Rep. Prog. Phys.* **72** 076901 (arXiv:0711.3041)
[38] Acernese F *et al* 2008 *Class. Quantum Gravity* **25** 114045
[39] Grote H *et al* 2008 *Class. Quantum Gravity* **25** 114043
[40] Chatterji S, Blackburn L, Martin G and Katsavounidis E 2004 *Class. Quantum Gravity* **21** S1809
[41] Abbott B *et al* 2005 *Phys. Rev. D* **72** 042002 (arXiv:gr-qc/0501068)
[42] Abbott B *et al* 2008 *Phys. Rev. D* **77** 062004 (arXiv:0709.0766)
[43] Brady P R, Creighton J D E and Wiseman A G 2004 *Class. Quantum Gravity* **21** S1775 (arXiv:gr-qc/0405044)
[44] Biswas R, Brady P R, Creighton J D E and Fairhurst S 2009 *Class. Quantum Gravity* **26** 175009 (arXiv:0710.0465)
[45] Abbott B et al 2008 *Phys. Rev. Lett.* **101** 211102
[46] Riles K 2004 LIGO Document No. LIGO-T040055-00-Z (http://www.ligo.caltech.edu/docs/T/T040055-00.pdf)
[47] Hurley K et al http://gcn.gsfc.nasa.gov/gcn3/2441.gcn3
[48] http://gcn.gsfc.nasa.gov/
[49] Gehrels N et al 2004 *Astrophys. J.* **611** 1005
[50] http://fermi.gsfc.nasa.gov/ |
Coordinated Market Clearing for Combined Thermal and Electric Distribution Grid Operation
Sebastian Troitzsch\textsuperscript{1*}, Kai Zhang\textsuperscript{1†}, Tobias Massier\textsuperscript{1‡} and Thomas Hamacher\textsuperscript{2§}
\textsuperscript{1}TUMCREATE, Singapore, \textsuperscript{2}Technical University of Munich (TUM), Garching, Germany
Email: *email@example.com, †firstname.lastname@example.org,
‡email@example.com, §firstname.lastname@example.org
Abstract—To economically dispatch distributed energy resources (DERs) while addressing operational concerns of the electric grid, distribution locational marginal price (DLMP)-based market frameworks have been formulated for electric distribution grids. This paper proposes to extend this methodology to thermal grids, i.e. district heating or cooling systems, and presents a market-clearing mechanism based on the alternating direction method of multipliers (ADMM) for coordinating between thermal grid, electric grid and DER operation. The ability of the proposed mechanism to achieve the market equilibrium for a combined thermal and electric grid is demonstrated for a test case with 22 flexible loads (FLs).
I. INTRODUCTION
With the increasing integration of distributed energy resources (DERs) such as flexible loads (FLs) and distributed generators (DGs) into the electric distribution grid, local energy markets are receiving increased research attention. If not operated appropriately, DERs may lead to increased losses and congestion in the electric distribution grid. Therefore, it is necessary to design appropriate market organization and pricing mechanisms that allow for the fair allocation of the operation cost of the electric grid as well as the cost-effective integration of DERs. In this scope, distribution locational marginal prices (DLMPs) have proved to aid with loss reduction [1] and congestion management [2], [3]. DLMPs essentially translate information on losses and congestion into incentive signals for the DERs to encourage operational contributions, e.g. the rescheduling of loads. The DLMP method naturally lends itself as a framework for local electric distribution grid markets, since it relates operational constraints to monetary objectives. DLMP-based market frameworks have been studied for the electric distribution grid in theoretical studies as well as real-life projects [4], [5], [6].
In district heating and cooling systems, i.e. thermal grids, thermal DERs in the form of distributed heating and cooling sources are leading to a paradigm shift similar to that in the electric distribution grid. Particularly, combined heat and power (CHP) plants, heat pumps (HPs) and solar-thermal plants have motivated research into bi-directional thermal grids [7]. On the demand side, buildings equipped with heating, ventilation and air-conditioning (HVAC) systems have already been proposed as FLs for the electric grid and can be retrofitted in a similar fashion to increase operational flexibility in the thermal grid. Our recent work [8] formulates a thermal grid model and methodology for deriving DLMPs for thermal grid operation. The thermal grid DLMPs were shown to adequately represent congestion and pumping losses in the thermal grid, similarly to electric grid DLMPs. With the availability of the DLMP technique in both the thermal grid and the electric grid, there remains the issue of local market organization across thermal grid and electric grid boundaries.
The combined operation problem of the thermal grid, the electric grid and the DERs can in principle be formulated as a centralized economic dispatch problem, i.e. an optimization problem for the minimization of energy costs with respect to the operational constraints of the thermal grid, the electric grid and the DERs. However, the operation of each system is typically managed by a dedicated organization, i.e. the thermal grid operator, the electric grid operator and DER aggregators, where each entity may prefer to retain control of and data on their respective systems. To this end, distributed optimization techniques such as the alternating direction method of multipliers (ADMM) can be utilized to facilitate the energy trade between different entities while retaining organizational boundaries [1]. In general, ADMM serves as a framework for distributed optimization and is widely proposed for applications in power systems due to its good scalability and robustness [9].
In this work, we propose a decentralized local market organization for combined thermal and electric distribution grid operation, enabled by the ADMM solution methodology as a market-clearing mechanism. Along with this, linear model formulations for the thermal grid, the electric grid and FLs are presented. The proposed market-clearing mechanism is shown to achieve the market equilibrium for a test case with 22 FLs connected to both the thermal and the electric grid, even in the presence of congestion.
II. MODELLING
A. Thermal grid model
The thermal grid is modelled with a linear approximate model as:
\[
h_t = h^{ref} + M^{h,p^{th}} \Delta p^{th}_t \\
v_t = v^{ref} + M^{v,p^{th}} \Delta p^{th}_t \\
p^{pu}_t = p^{pu,ref} + M^{p^{pu},p^{th}} \Delta p^{th}_t
\]
(1)
The vectors \(h_t \in \mathbb{R}^{N^{th}}, v_t \in \mathbb{R}^{B^{th}}\) are the pressure head at thermal grid nodes \(n^{th} \in N^{th}\) and the branch volume flow at thermal grid branches \(b^{th} \in B^{th}\) for time step \(t \in T\).
The scalar $p^{pu}_t$ denotes the total electric pumping power demand of the thermal grid for time step $t$. The reference point for each property is denoted by $(\cdot)^{ref}$, which in the following is chosen to be the nominal operation point of the thermal grid. The vector $\Delta p^{th}_t \in \mathbb{R}^J$ is the thermal power change at FLs $f \in F$ for time step $t$. The matrices $M^{h,p^{th}} \in \mathbb{R}^{N^{th} \times J}$, $M^{v,p^{th}} \in \mathbb{R}^{B^{th} \times J}$ and $M^{p^{pu},p^{th}} \in \mathbb{R}^{1 \times J}$ are the sensitivity matrices for the change of the respective properties with respect to the thermal power change, which in turn is defined as $\Delta p^{th}_t = p^{th}_t - p^{th,ref}$. The vector $p^{th}_t \in \mathbb{R}^J$ is the absolute thermal power demand and the vector $p^{th,ref} \in \mathbb{R}^J$ is the thermal power demand reference, which is chosen to be the nominal load, i.e. peak load, of the FLs $f$. The reference properties for the linear approximate model in eq. (1) are then obtained by solving the reference power flow problem of the thermal grid. The detailed model formulation is omitted here for the sake of brevity, but is discussed in detail in [8].
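A minimal numerical sketch of evaluating the linear approximate thermal grid model of eq. (1) follows; all dimensions and sensitivity values are purely illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative linear thermal grid model: properties respond linearly to
# FL thermal power deviations from the reference (nominal) operating point.
n_nodes, n_fl = 4, 2
h_ref = np.array([10.0, 9.5, 9.0, 8.5])   # reference pressure heads (m)
M_h = np.full((n_nodes, n_fl), -0.01)      # head sensitivity to FL power
p_th_ref = np.array([100.0, 80.0])         # nominal (peak) FL thermal load (kW)

p_th = np.array([90.0, 70.0])              # dispatched thermal power (kW)
delta_p_th = p_th - p_th_ref               # deviation from reference
h = h_ref + M_h @ delta_p_th               # first row of eq. (1)
print(h)
```

Reducing the thermal load below the peak raises the pressure heads here, because the illustrative sensitivities are negative.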
B. Electric grid model
The electric grid is modelled with a linear approximate model as:
$$u_t = u^{ref} + M^{u,p} \Delta p_t + M^{u,q} \Delta q_t,$$
$$|s_t^{fr}|^2 = |s^{fr,ref}|^2 + M^{s^{fr},p} \Delta p_t + M^{s^{fr},q} \Delta q_t,$$
$$|s_t^{to}|^2 = |s^{to,ref}|^2 + M^{s^{to},p} \Delta p_t + M^{s^{to},q} \Delta q_t,$$
$$p_t^{loss} = p^{loss,ref} + M^{p^{loss},p} \Delta p_t + M^{p^{loss},q} \Delta q_t,$$
$$q_t^{loss} = q^{loss,ref} + M^{q^{loss},p} \Delta p_t + M^{q^{loss},q} \Delta q_t.$$
(2)
The vectors $u_t \in \mathbb{R}^{N^{el}}$, $|s_t^{fr}|^2, |s_t^{to}|^2 \in \mathbb{R}^{B^{el}}$ are the voltage magnitude at electric grid nodes $n^{el} \in N^{el}$ and the squared branch power flow in “from” and “to” direction at electric grid branches $b^{el} \in B^{el}$ for time step $t \in T$. The scalars $p_t^{loss}, q_t^{loss}$ denote the total active and reactive power loss for time step $t$. The reference point for each property is denoted by $(\cdot)^{ref}$, which in the following is chosen to be the nominal operation point of the electric grid. The vectors $\Delta p_t, \Delta q_t \in \mathbb{R}^J$ are the active and reactive power change at FLs $f \in F$ for time step $t$. The matrices $M^{u,p}, M^{u,q} \in \mathbb{R}^{N^{el} \times J}$, $M^{s^{fr},p}, M^{s^{fr},q}, M^{s^{to},p}, M^{s^{to},q} \in \mathbb{R}^{B^{el} \times J}$ and $M^{p^{loss},p}, M^{p^{loss},q}, M^{q^{loss},p}, M^{q^{loss},q} \in \mathbb{R}^{1 \times J}$ are the sensitivity matrices for the change of the respective properties with respect to the active and reactive power change, which in turn are defined as $\Delta p_t = p_t - p^{ref}$, $\Delta q_t = q_t - q^{ref}$. The vectors $p_t, q_t \in \mathbb{R}^J$ are the absolute active and reactive power demand of the FLs $f$. The vectors $p^{ref}, q^{ref}$ are the active and reactive power demand reference, which is chosen to be the nominal load, i.e. peak load, of the FLs $f$. The reference properties for the linear approximate model in eq. (2) are then obtained by solving the reference power flow problem of the electric grid, e.g. through a fixed-point solution methodology, and a global approximation for the sensitivity matrices can be obtained as a function of the reference properties. The detailed formulation is omitted here for the sake of brevity, but can be obtained from [3].
C. Flexible load model
The FL model is expressed in state space form as:
$$x_{f,t+1} = A_f x_{f,t} + B_f^c c_{f,t} + B_f^d d_{f,t}$$
$$y_{f,t} = C_f x_{f,t} + D_f^c c_{f,t} + D_f^d d_{f,t}$$
(3)
The vectors $x_{f,t}$, $c_{f,t}$ and $d_{f,t}$ are the state, control and disturbance vectors for FL $f$ at time step $t$. The matrices $A_f$, $C_f$ are the state and output matrices, and $B_f^c$, $D_f^c$ and $B_f^d$, $D_f^d$ are the input and feed-through matrices for the control and disturbance vectors, respectively. Note that the control vector is a function of the thermal, active and reactive power dispatch of the FL, $c_{f,t} = c_{f,t}(p_{f,t}^{th}, p_{f,t}, q_{f,t})$. Therefore, the FL model serves as the interconnection between the thermal and electric grids.
For the presented test case (section IV), air-conditioned buildings serve as FLs and a fixed power factor, i.e. a fixed ratio between active and apparent power, is assumed. The detailed formulation of the state space model for such buildings can be obtained from [10].
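A minimal sketch of simulating the state space model of eq. (3) for a single-zone air-conditioned building follows; all matrices and values are illustrative assumptions, not taken from [10]:

```python
import numpy as np

# Illustrative single-zone building: state = zone temperature, control =
# cooling power, disturbance = ambient temperature.  Values are invented.
A = np.array([[0.9]])       # thermal inertia (per-step retention)
B_c = np.array([[-0.05]])   # cooling power lowers the zone temperature
B_d = np.array([[0.1]])     # ambient temperature raises it
C = np.array([[1.0]])       # output = zone temperature

x = np.array([26.0])        # initial zone temperature (deg C)
for t in range(24):
    c = np.array([10.0])    # constant cooling dispatch (kW)
    d = np.array([30.0])    # constant ambient temperature (deg C)
    x = A @ x + B_c @ c + B_d @ d   # eq. (3), state update
y = C @ x
print(np.round(y, 2))       # -> [25.08], approaching the steady state of 25
```

Varying the cooling dispatch `c` over time while keeping the output `y` within comfort bounds is exactly the flexibility that the FL aggregator trades in the local markets.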
III. MARKET ORGANIZATION
As depicted in figure (fig.) 1, the considered participants in the local thermal energy market and the local electric energy market on district level are 1) thermal grid operator, 2) electric grid operator, and 3) flexible load aggregator. Note that for notational brevity, the following formulations are for a single flexible load aggregator, although there may in fact be several aggregators within a district. Also note that while not explicitly considered in this paper, the flexible load aggregator can be transformed into a DER aggregator to take into account the dispatch of other DER types, e.g. generators, fixed loads or energy storage systems.
The flexible load aggregator is responsible for dispatching the flexible loads with respect to their operational constraints and correspondingly trading thermal power and active / reactive power with the thermal grid operator and the electric grid operator, respectively. The thermal grid operator is responsible for maintaining the operational constraints of the thermal grid, while trading thermal power with the flexible load aggregator and obtaining any active / reactive generation from the wholesale electricity market. The electric grid operator is responsible for maintaining the operational constraints of the electric grid, while trading active / reactive power with the flexible load aggregator and obtaining active / reactive generation from the wholesale electricity market. Note that the wholesale electricity market is assumed to be cleared independently, in advance of the local market clearing. Thus, the scalar $c_t^{el}$ denotes the cleared wholesale electric energy price.
The local electric and thermal energy markets are cleared by utilizing the alternating direction method of multipliers (ADMM). ADMM serves as a framework for distributed optimization and can be interpreted as a price coordination mechanism that achieves the Walrasian tâtonnement process [11]; it only requires the exchange of proxy variables, without revealing sensitive local information such as system models and constraints. To this end, the market participants exchange negotiation variables with the local thermal and electric energy markets.
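The price-coordination interpretation of ADMM can be illustrated with a toy two-party exchange between a load (quadratic discomfort around a preferred consumption) and a supplier (linear cost). This is a stand-in for the mechanism described in the text, not the paper's actual sub-problem formulation:

```python
# Toy ADMM price coordination: a load prefers consuming 10 units
# (quadratic discomfort (p - 10)^2), the supplier's cost is 2 per unit.
# The coupling constraint p = s is enforced via the scaled dual u, whose
# converged value rho * u is the clearing price.
rho = 1.0
p, s, u = 0.0, 0.0, 0.0       # demand proxy, supply proxy, scaled price
for _ in range(60):
    p = (2 * 10 + rho * (s - u)) / (2 + rho)   # load sub-problem (argmin)
    s = max(0.0, p + u - 2 / rho)              # supplier sub-problem (argmin)
    u += p - s                                  # dual (price) update
print(round(p, 3), round(s, 3), round(rho * u, 3))  # -> 9.0 9.0 2.0
```

At convergence, demand and supply proxies agree and the dual variable recovers the marginal supply cost, i.e. the market-clearing price.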
The negotiation variables are depicted with dashed arrows in fig. 1. The vectors $p_t^{th,fl}$ and $p_t^{th,th}$ denote the thermal power demand requested by the flexible load aggregator and the thermal power supply offered by the thermal grid operator. The vectors $p_t^{th,ex}$, $\pi_t^{th,fl}$ and $\pi_t^{th,th}$ are the resulting thermal power exchange and the nodal prices for thermal power demand and supply. The vectors $p_t^{fl} / q_t^{fl}$ and $p_t^{el} / q_t^{el}$ denote the active / reactive power requested by the flexible load aggregator and the active / reactive power supply offered by the electric grid operator. The vectors $p_t^{ex} / q_t^{ex}$, $\pi_t^{p,fl} / \pi_t^{q,fl}$ and $\pi_t^{p,el} / \pi_t^{q,el}$ are the resulting active / reactive power exchange and the nodal prices for active / reactive power demand and supply.
The final cleared power and price schedules are denoted with solid arrows in fig. 1. The vectors $p_t^{th}$ and $\pi_t^{th}$ denote the final cleared thermal power exchange and the associated nodal price for the thermal energy trade between the thermal grid operator and the flexible load aggregator. The vectors $p_t, q_t$ as well as $\pi_t^p, \pi_t^q$ denote the final cleared active / reactive power exchange as well as the associated nodal prices for the active / reactive energy trade between the electric grid operator and the flexible load aggregator for time step $t$.
A. Combined optimal operation problem
The combined optimal operation problem addresses the economic dispatch of FLs subject to the operational constraints of the thermal grid, the electric grid and the FLs. The ADMM-based market-clearing algorithm (alg.) is applied as a distributed optimization solution methodology for the combined optimal operation problem. To form the basis for deriving the individual sub-problems of the market clearing alg., first, the combined optimal operation is formulated as follows:
$$\min_{p_t, q_t, p_t^{th}} \sum_{t \in T} c_t^{ref} \left( 1^\top p_t + \frac{1}{\eta^{th}} 1^\top p_t^{th} \right)$$
s.t. $(\forall t \in T)$
$$h^- \leq h_t, \quad v_t \leq v^+$$
(4c)
$$p_t^{th,src} - \frac{1}{\eta^{th}} 1^\top p_t^{th} = p_t^{pm}$$
(4d)
$$u^- \leq u_t \leq u^+, \quad |s_t^{f}|^2 \leq |s^{f,+}|^2, \quad |s_t^{q}|^2 \leq |s^{q,+}|^2$$
(4e)
$$p_t^{src} - 1^\top p_t = p_t^b, \quad q_t^{src} - 1^\top q_t = q_t^b$$
(4f)
$$y_{f,t}^- \leq y_{f,t} \leq y_{f,t}^+$$
(4g)
The scalar $c_t^{ref}$ is the electric energy price at the reference node and $\eta^{th}$ is the coefficient of performance (COP) of the district heating or cooling plant. Note that this formulation translates to assuming 1) a constant-efficiency model for the performance of the heat pump at the district heating or cooling plant and 2) that the electric power required for district heating or cooling is drawn at the source node of the electric grid. The vectors $u^-, u^+ \in \mathbb{R}^{n_{el}}$ and $|s^{f,+}|^2, |s^{q,+}|^2 \in \mathbb{R}^{b_{el}}$ describe the electric grid voltage and branch loading limits, whereas $h^- \in \mathbb{R}^{n_{th}}, v^+ \in \mathbb{R}^{b_{th}}$ describe the thermal grid head and volume flow limits. The scalars $p_t^{src}, q_t^{src}$ describe the total active and reactive power demand of the electric grid, and $p_t^{th,src}$ describes the electric power demand of the thermal grid. The vectors $y_{f,t}^-, y_{f,t}^+$ describe the operational constraints of the FLs $f$, which may be time-dependent to reflect set-back periods. Note that this formulation of the combined operation assumes that the district heating or cooling plant of the thermal grid is co-located with the electric grid source node, such that the thermal grid source node is not subjected to any electric grid constraints. Therefore, the operation of the thermal grid (in eqs. (1), (4c) and (4d)) and the electric grid (in eqs. (2), (4e) and (4f)) are coupled only through the FL operation (in eqs. (3) and (4g)).
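To make the objective concrete, a minimal numerical sketch evaluates the electricity cost of one time step; all values ($c^{ref}$, $\eta^{th}$, the demand vectors) are illustrative assumptions, not the paper's test-case data:

```python
import numpy as np

# Minimal sketch of the combined objective for a single time step;
# all numbers are illustrative, not the paper's test-case data.
c_ref = 0.1                  # electric energy price at the reference node
eta_th = 3.0                 # COP of the district heating / cooling plant
p = np.array([1.0, 2.0])     # active power demand of the flexible loads
p_th = np.array([3.0])       # thermal power demand of the flexible loads

# Cost = price * (electric demand + electricity needed to serve the thermal
# demand, i.e. thermal demand divided by the COP).
cost = c_ref * (p.sum() + p_th.sum() / eta_th)
print(cost)  # 0.1 * (3.0 + 1.0) = 0.4
```

The division by the COP captures why a high-efficiency plant makes thermal demand cheap to serve electrically.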
B. Market clearing
For the ADMM-based market clearing, individual sub-problems are formulated for 1) thermal grid operator, 2) electric grid operator and 3) flexible load aggregator. Each sub-problem consists of a subset of the objective and constraints from eq. (4) which are augmented by ADMM Lagrangian terms. In the following, $\rho > 0$ denotes the ADMM penalty factor, which serves as the convergence tuning parameter for the market clearing alg. and remains constant during ADMM iterations.
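For reference, these augmented Lagrangian terms follow the scaled-form ADMM of [11]: for a generic consensus problem $\min_{x,z} f(x) + g(z)$ subject to $x = z$, each iteration performs

\[
\begin{align*}
x^{k+1} &= \arg\min_x \left( f(x) + \frac{\rho}{2} \left\| x - z^k + u^k \right\|_2^2 \right), \\
z^{k+1} &= \arg\min_z \left( g(z) + \frac{\rho}{2} \left\| x^{k+1} - z + u^k \right\|_2^2 \right), \\
u^{k+1} &= u^k + x^{k+1} - z^{k+1},
\end{align*}
\]

where the scaled dual variable $u^k$ relates to the negotiated prices via $\pi = \rho u$. The grid-operator and aggregator sub-problems instantiate this pattern, with the exchanged power schedules acting as the consensus variables.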
1) Thermal grid operator: The optimal operation problem of the thermal grid operator is formulated as:
$$\min_{p_t^{th,th}} \sum_{t \in T} \left( \frac{c_t^{ref}}{\eta^{th}} 1^\top p_t^{th,th} + \left( \pi_t^{th,th} \right)^\top \left( p_t^{th,th} - p_t^{th,ex} \right) + \frac{\rho}{2} \left\| p_t^{th,th} - p_t^{th,ex} \right\|_2^2 \right)$$
(5)
s.t. (1), (4c), (4d) $(\forall t \in T)$
The vector $p_t^{th,th}$ is the thermal grid sub-problem variable for the thermal power demand $p_t^{th}$. The vectors $\pi_t^{th,th}$ and $p_t^{th,ex}$ are the nodal thermal power price and the thermal power exchange which are obtained in the previous market-clearing iteration in alg. 1. Note that $\pi_t^{th,th}$ represents the prices which are perceived by the thermal grid operator during market clearing, but upon convergence of alg. 1, the cleared
nodal thermal power price is identical for the thermal grid operator and the flexible load aggregator.
2) **Electric grid operator**: The optimal operation problem of the electric grid operator is formulated as:
\[
\begin{align*}
\min_{p_t^{el}, q_t^{el}} & \sum_{t \in T} \left( c_t^{el} 1^T p_t^{el} \right. \\
& + \left( \pi_t^{p,el} \right)^T (p_t^{el} - p_t^{ex}) + \frac{\rho}{2} \left\| p_t^{el} - p_t^{ex} \right\|_2^2 \\
& + \left( \pi_t^{q,el} \right)^T (q_t^{el} - q_t^{ex}) + \frac{\rho}{2} \left\| q_t^{el} - q_t^{ex} \right\|_2^2 \right) \\
\text{s.t.} & \quad (2), (4e), (4f)
\end{align*}
\]
(6)
The vectors $p_t^{el}$ and $q_t^{el}$ are the electric grid sub-problem variables for the active power demand $p_t$ and reactive power demand $q_t$. The vectors $\pi_t^{p,el}$ and $\pi_t^{q,el}$ are the nodal active power price and nodal reactive power price. Similar to the thermal grid operator above, the vectors $\pi_t^{p,el}$ and $\pi_t^{q,el}$ represent the prices which are perceived by the electric grid operator during market clearing, but upon convergence of alg. 1, the cleared nodal price is identical for the electric grid operator and the flexible load aggregator. The vectors $p_t^{ex}$ and $q_t^{ex}$ are the active and reactive power demand which are obtained in the previous market clearing iteration in alg. 1.
3) **Flexible load aggregator**: The optimal operation problem of the flexible load aggregator is formulated as:
\[
\begin{align*}
\min_{p_t^{th,fl}, p_t^{fl}, q_t^{fl}} & \sum_{t \in T} \left( c_t^{el} 1^\top p_t^{th,fl} + \left( \pi_t^{th,fl} \right)^\top (p_t^{th,fl} - p_t^{th,ex}) \right. \\
& + \frac{\rho}{2} \left\| p_t^{th,fl} - p_t^{th,ex} \right\|_2^2 \\
& + \left( \pi_t^{p,fl} \right)^\top (p_t^{fl} - p_t^{ex}) + \frac{\rho}{2} \left\| p_t^{fl} - p_t^{ex} \right\|_2^2 \\
& + \left( \pi_t^{q,fl} \right)^\top (q_t^{fl} - q_t^{ex}) + \frac{\rho}{2} \left\| q_t^{fl} - q_t^{ex} \right\|_2^2 \right) \\
\text{s.t.} & \quad (3), (4g)
\end{align*}
\]
(7)
The vectors $p_t^{th,fl}$, $p_t^{fl}$ and $q_t^{fl}$ are the flexible load aggregator sub-problem variables for the thermal power demand $p_t^{th}$, active power demand $p_t$ and reactive power demand $q_t$. The vectors $\pi_t^{th,fl}$, $\pi_t^{p,fl}$ and $\pi_t^{q,fl}$ are the nodal thermal power price, nodal active power price and nodal reactive power price, as perceived by the flexible load aggregator during market clearing. The vectors $p_t^{th,ex}$, $p_t^{ex}$ and $q_t^{ex}$ are the thermal, active and reactive power exchanges which are obtained in the previous market-clearing iteration in alg. 1.
4) **Market-clearing algorithm**: Alg. 1 presents the market-clearing alg. based on ADMM which is utilised to coordinate the solution of eqs. (5) to (7) to obtain the market equilibrium. Note that all ADMM iteration variables are initialized to zero at the beginning of the ADMM loop. The scalar $r$ denotes the primal residual, which is defined as $r(a, b) = \sum_{t \in T} \| a_t - b_t \|$, for thermal power $(p_t^{th,th}, p_t^{th,fl})$, active power $(p_t^{el}, p_t^{fl})$ and reactive power $(q_t^{el}, q_t^{fl})$. Since the residuals approach zero in the final solution [11], the ADMM loop is terminated once the residuals reach the desired termination threshold given by $\epsilon > 0$. The ADMM-based decomposition method scales reasonably well with the increasing number of complicating variables, as it was shown through several examples to be well suited for a wide variety of large-scale distributed optimization problems [11].
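To illustrate the negotiation loop of alg. 1, the following sketch clears a single scalar power exchange between one grid operator and one aggregator; the quadratic cost and utility models (coefficients `a`, `b`, `c`, `p_des`) are illustrative assumptions, not the paper's grid and FL models:

```python
# Sketch of alg. 1's negotiation loop for ONE time step and ONE scalar power
# exchange. The quadratic models (a, b, c, p_des) are illustrative
# assumptions, not the paper's grid and flexible-load models.
a, p_des = 2.0, 10.0   # aggregator: discomfort (a/2) * (p - p_des)^2
c, b = 1.0, 0.5        # grid operator: supply cost c*s + (b/2) * s^2
rho, eps = 1.0, 1e-8   # ADMM penalty factor and termination threshold

s = p = u = 0.0
for _ in range(10_000):
    # Grid-operator sub-problem (closed-form minimizer of cost + penalty).
    s = (rho * (p - u) - c) / (b + rho)
    # Aggregator sub-problem (closed-form minimizer of discomfort + penalty).
    p = (a * p_des + rho * (s + u)) / (a + rho)
    # Dual update: the scaled dual variable u encodes the cleared price.
    u += s - p
    if abs(s - p) < eps:   # primal residual r -> 0 at the equilibrium
        break

price = -rho * u
# Analytic check: marginal cost = marginal utility at s = p gives
# p* = (a*p_des - c) / (a + b) = 7.6 and price* = c + b*p* = 4.8.
print(round(p, 4), round(price, 4))  # 7.6 4.8
```

The cleared quantity and price agree with the analytic market equilibrium, mirroring how the residual-based termination in alg. 1 certifies convergence.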
IV. RESULTS AND DISCUSSION
The presented methodology is demonstrated on a test case for a neighbourhood in Singapore, which was developed as part of [12]. The test case consists of thermal and electric grids with identical layout according to fig. 2 and 22 commercial buildings modelled as FLs according to [10], where the thermal grid serves as a district cooling system (DCS). The source node electricity price $c_t^{el}$ is derived from one day of the Universal Singapore Energy Price (USEP). In the presented scenario, an artificial branch volume flow constraint is defined in the thermal grid at the branch between nodes “A” and “B” (see fig. 2).
Figures 3 and 4 depict the dispatch and price schedules for thermal and reactive power which are obtained from centralized operation, thermal grid operator, electric grid operator and flexible load aggregator. In this context, the centralized solution of the combined optimal operation problem in eq. (4) serves as the benchmark for the distributed market-clearing results.
V. CONCLUSION
This work presented a local market organization for combined thermal and electric distribution grid operation along with an ADMM solution methodology as a market-clearing mechanism. The proposed market-clearing mechanism was shown to achieve the market equilibrium in the presence of congestion. The presented models and test case are implemented in [13] and available open source\(^1\).
ACKNOWLEDGMENT
This work was financially supported by the Singapore National Research Foundation under its Campus for Research Excellence And Technological Enterprise (CREATE) programme.
NOMENCLATURE
Let \( \mathbb{R} \) be the domain of real numbers. Non-bold letters \( x, X \) denote scalars in \( \mathbb{R} \), bold lowercase letters \( x \) denote vectors in \( \mathbb{R}^n \) and bold uppercase letters \( X \) denote matrices in \( \mathbb{R}^{m \times n} \). Bold numbers 0 and 1 denote vectors or matrices of zeros and ones of appropriate sizes. The transpose of a vector or matrix is denoted by \( (\cdot)^\top \) and the \( p \)-norm of a vector is denoted by \( \|x\|_p \).
REFERENCES
[1] K. Zhang, S. Hanif, C. M. Hackl, and T. Hamacher, “A framework for multi-regional real-time pricing in distribution grids,” *IEEE Trans. Smart Grid*, vol. 10, no. 6, pp. 6826–6838, 2019.
[2] L. Bai, J. Wang, C. Wang, C. Chen, and F. Li, “Distribution locational marginal pricing (DLMP) for congestion management and voltage support,” *IEEE Trans. Power Syst.*, vol. 33, no. 4, pp. 4065–4073, 2018.
[3] S. Hanif, K. Zhang, C. M. Hackl, M. Barati, H. B. Gooi, and T. Hamacher, “Decomposition and equilibrium achieving distribution locational marginal prices using trust-region method,” *IEEE Trans. Smart Grid*, vol. 10, no. 3, pp. 3269–3281, 2019.
[4] K. Zhang, S. Troitzsch, S. Hanif, and T. Hamacher, “Coordinated market clearing for combined thermal and electric ancillary services in distribution grids,” *IEEE Trans. Smart Grid*, 2020.
[5] R. Li, Q. Wu, and S. S. Oren, “Distribution locational marginal pricing for optimal electric vehicle charging management,” *IEEE Trans. Power Syst.*, vol. 29, no. 1, pp. 204–213, 2014.
[6] S. Palandecherad, R. Pinto, and A. Branco, “On the development of organized nodal local energy markets and a framework for the ISO-DSO coordination,” *Electric Power Systems Research*, vol. 189, 2020.
[7] F. Bünning, M. Wetter, M. Fuchs, and D. Müller, “Bidirectional low temperature district energy systems with agent-based control: Performance comparison and operation optimization,” *Applied Energy*, vol. 209, pp. 502–515, 2018.
[8] S. Troitzsch, M. Grussmann, K. Zhang, and T. Hamacher, “Distribution locational marginal pricing for combined thermal and electric grid operation,” in *IEEE PES Innovative Smart Grid Technologies Conference Europe*, 2020.
[9] D. K. Molzahn, F. Dörfler, H. Sandberg, S. H. Low, S. Chakrabarti, R. Baldick, and J. Lavaei, “A survey of distributed optimization and control algorithms for electric power systems,” *IEEE Trans. Smart Grid*, vol. 8, no. 6, pp. 2595–2607, 2017.
[10] S. Troitzsch and T. Hamacher, “Control-oriented thermal building modelling,” in *IEEE PES General Meeting*, 2020.
[11] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” *Foundations and Trends in Machine Learning*, vol. 3, no. 1, pp. 1–122, 2011.
[12] S. Troitzsch, B. K. Sreepathi, P. Huynh, A. Moine, S. Hanif, J. Fonseca, and T. Hamacher, “Optimal electric-distribution-grid planning considering the demand-side flexibility of thermal building systems for a test case in Singapore,” *Applied Energy*, vol. 273, 2020.
[13] S. Troitzsch, “FLEDGE - Flexible Distribution Grid Demonstrator,” Version 0.4.0, 2020. [Online]. Available: https://doi.org/10.5281/zenodo.3523563
\(^1\)https://github.com/TUMCREATE-ESTL/fledge |
Quantum Information Scrambling: From Holography to Quantum Simulators
Arpan Bhattacharyya\textsuperscript{a,1}, Lata Kh Joshi\textsuperscript{b,2,3}, Bhuvanesh Sundar \textsuperscript{c,4,5}
\textsuperscript{1}Indian Institute of Technology, Gandhinagar, Gujarat-382355, India
\textsuperscript{2}Center for Quantum Physics, University of Innsbruck, Innsbruck A-6020, Austria
\textsuperscript{3}Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, Innsbruck A-6020, Austria
\textsuperscript{4}JILA, Department of Physics, University of Colorado, Boulder, CO 80309, USA
\textsuperscript{5}Center for Theory of Quantum Matter, University of Colorado, Boulder, CO 80309, USA
Received: date / Accepted: date
Abstract In this review, we present the ongoing developments in bridging the gap between holography and experiments. To this end, we discuss information scrambling and models of quantum teleportation via the Gao-Jafferis-Wall traversable wormhole. We review the essential basics and summarize some of the recent results that have so far been obtained in quantum simulators, working towards the goal of realizing analogous models of holography in a lab.
1 Introduction
Holographic correspondence has been the most surprising and celebrated conjecture [1–4] for almost three decades now. It connects special quantum field theories (called the boundary theory) to gravity living in one extra dimension (called the bulk theory). Using the holographic toolbox, several advances have been made in the physics of strongly coupled quantum field theories—the transport properties in hydrodynamics [5–14], renormalization group flow [15–28], and entanglement entropy [29–40], to name a few. At a more microscopic level, the relations established between geometry and quantum entanglement through the entanglement entropy proposal of Ryu and Takayanagi [29] and ER=EPR [41, 42] have been suggestive of the fact that gravity is an emergent phenomenon [43–57].
On the other side of the duality are gravity and black holes. The duality has also helped advance our understanding of the quantum nature of black holes [58] through quantum information processing in the boundary quantum systems. In recent years, the simplicity and analytic amenability of the duality between the Sachdev-Ye-Kitaev (SYK) model and nearly Anti-de Sitter spacetime [59–64] has served as a guiding lamp-post for many developments in our understanding of black holes. This includes, but is not limited to, the quantum chaotic properties of black holes [65–69] and recent progress towards the black hole information paradox [70, 71].
Towards the information content of the Hawking radiation, Hayden and Preskill [72] proposed a fascinating thought experiment wherein information thrown into an old black hole can be recovered quickly after observing only a few quanta of Hawking radiation. This proposal was later made concrete for generic quantum systems by providing mechanisms for decoding the intended information [73]. At first thought, one can visualize decoding of information in a quantum circuit as a form of teleportation of information from the input to the output. Whether or when this is true constitutes part of this review. It has recently been argued that the Hayden-Preskill-inspired information decoding circuits for generic quantum channels are actually similar (and identical in some limits) to the circuits inspired by teleportation through a wormhole [74–76].
In the first part of this review we discuss these concepts and provide a summary of recent developments on wormhole-teleportation-inspired quantum circuits. We begin with the holographic dictionary connecting eternal black holes to the thermofield double state (TFD) [77], where the two asymptotic regions of the left and right black holes are causally disconnected. This means that any perturbation on one side cannot travel to the other; thus such two-sided wormholes are not traversable (see
more on wormholes in [78]). Traversable wormholes have long fascinated researchers [79]; however, it is also known that we need to violate null energy conditions, or inject negative energy, in order to achieve traversability [80–82]. To this end, Gao, Jafferis, and Wall [83], followed by [84], put forth a seminal work proposing a coupling between the two sides of the geometry that renders the wormhole traversable.
Remarkably, the Hayden-Preskill and Gao-Jafferis-Wall protocols are quite generally applicable to quantum many-body systems, and can be realized in the lab using programmable quantum devices. This is possible due to tremendous experimental advances in noisy intermediate-scale quantum (NISQ) devices [85, 86], which provide a powerful toolset for analog and universal digital quantum simulation.
In the second part of our review, we describe how these protocols can be implemented in a lab with quantum simulators. Geared towards the goal of observing quantum gravity in a lab, one first requires a bridge translating the tools of holography into the language of many-body dynamics; see also [87] for a review of the connection between holography and quantum many-body dynamics. Quantum simulators provide unique opportunities to study the time evolution of many-body systems in highly controlled laboratory settings. In this direction, we describe two out of many quantum simulation platforms – based on trapped ions [88, 89] and Rydberg atoms [90]. We emphasize that while an observation of dynamics dual to black holes is still far away, the preparation and benchmarking steps provide promising directions for future experiments. Examples include teleportation protocols [91–93], preparation of TFD states [94–97], observation of the Hayden-Preskill variant of quantum teleportation [98], and theoretical proposals [93, 99–108] and experimental observations of out-of-time-ordered correlators (OTOCs) [98, 109–116] in small-scale quantum simulators.
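As a reference for the OTOC measurements discussed later, the infinite-temperature OTOC can be computed exactly for a handful of spins. The sketch below uses a mixed-field Ising chain with illustrative parameters (an assumption for demonstration, not tied to any of the cited experiments):

```python
import numpy as np

# Sketch: exact infinite-temperature OTOC F(t) = Tr(W(t) V W(t) V) / 2^N for
# a small mixed-field Ising chain. The model and parameters (N, J, hx, hz)
# are illustrative choices.
N, J, hx, hz = 4, 1.0, 1.05, 0.5
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Z = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

def op(site, P):
    """Embed the single-site operator P at `site` in the N-spin Hilbert space."""
    out = np.eye(1, dtype=complex)
    for i in range(N):
        out = np.kron(out, P if i == site else I2)
    return out

# Mixed-field Ising Hamiltonian: nearest-neighbour ZZ plus X and Z fields.
H = sum(-J * op(i, Z) @ op(i + 1, Z) for i in range(N - 1))
H += sum(-hx * op(i, X) - hz * op(i, Z) for i in range(N))
evals, evecs = np.linalg.eigh(H)

W, V = op(0, Z), op(N - 1, Z)  # probe operators on the two end sites

def otoc(t):
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    Wt = U.conj().T @ W @ U        # Heisenberg-picture W(t)
    return (np.trace(Wt @ V @ Wt @ V) / 2**N).real

print(round(otoc(0.0), 6))  # 1.0, since [W(0), V] = 0 for distant sites
```

At later times the operator $W(t)$ spreads across the chain and $F(t)$ drops below one, which is the scrambling signature that the experiments aim to detect.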
Overview: This review is organized as follows: In Section 2, we review some basics of the holographic correspondence. To be concrete, we present the example of duality between eternal black holes and TFD states and discuss how the wormholes are made traversable by introducing double trace deformation. In Section 3, we discuss and set up basic notations regarding the information spread in quantum systems. We describe that the spread of initial information and the measures of it are the central mechanisms to understand teleportation in quantum circuits. In this section we also review the Hayden-Preskill protocol, and its variant generically applicable to quantum dynamics. In Section 4, we discuss the circuits, motivated from the wormhole teleportation, as teleportation circuits for many-body dynamics. We present a mechanism of transfer based on operator size and summarize the recent results. In Section 5, we describe in detail two platforms for quantum simulation, and present realization of many-body models in Section 6. We then present the measurement protocols, directly accessible in experiments, to measure OTOC and perform many-body teleportation in Section 7. We conclude in Section 8 with some additional remarks and future prospects.
2 AdS$_{d+1}$/CFT$_d$ and Wormhole
AdS/CFT correspondence can be embodied in various avatars, but we will only briefly review some aspects of it which will be relevant for the rest of the review. Essentially, the AdS/CFT duality links two different theories: a conformal field theory (CFT) which is strongly coupled (typically a large $N$ gauge theory) and a weakly coupled gravity theory defined on the background of Anti-de Sitter (AdS) spacetime, which is a spacetime with negative curvature [1–3]. The $d + 1$ dimensional AdS spacetime is the maximally symmetric solution of the Einstein field equations with a negative cosmological constant $\Lambda = -\frac{d(d-1)}{2L^2}$, where $L$ is the AdS radius. The best-understood example of this duality comes from string theory. It has been demonstrated in [1–4] that there exists an equivalence between strongly coupled $\mathcal{N} = 4$ supersymmetric $SU(N)$ Yang-Mills (SYM) theory and Type IIB string theory on $AdS_5 \times S^5$ in the large $N$ limit, where $N$ is the rank of the gauge group. In this context, one first starts with a stack of $N$ $D3$-branes. Its low energy dynamics is described by $\mathcal{N} = 4$ SYM with gauge group $SU(N)$ and 't Hooft coupling $\lambda = g_{YM}^2 N$, where $g_{YM}$ denotes the Yang-Mills coupling. We can analyze this theory perturbatively when $\lambda \ll 1$. On the other hand, we can have a 10-dimensional metric solution emerging from the low energy description of Type IIB string theory,
$$ds^2 = \alpha' \left[ \frac{r^2}{\sqrt{4\pi g_s N}} (-dt^2 + dx_1^2 + dx_2^2 + dx_3^2) + \sqrt{4\pi g_s N}\, \frac{dr^2}{r^2} + \sqrt{4\pi g_s N}\, d\Omega_5^2 \right],$$
(1)
where $g_s$ is the string coupling. We work in the $\alpha' \to 0$ limit, where $\alpha'$ is the square of the string length (the inverse string tension). In this limit we can effectively neglect any stringy effects and hence work in the supergravity limit (which is essentially Type IIB supergravity for this case). In the AdS/CFT duality the couplings on the two sides are related by
$$\lambda = g_{YM}^2 N = 2\pi g_s N.$$
(2)
We can identify $L^2 = \alpha' \sqrt{2 g_{YM}^2 N}$. With this, we can easily see that the spacetime described by (1) is nothing but $AdS_5 \times S^5$. The supergravity limit necessarily implies,
$$\left(\frac{L}{l_s}\right)^4 = 2 g_{YM}^2 N \gg 1,$$
(3)
where we have used the fact that the string length $l_s = \sqrt{\alpha'}$. This equation simply tells us that the classical gravity description is valid when the AdS length scale is much bigger than the string length, and from our previous identification the 't Hooft coupling becomes very large. We thus have a classical (and weakly coupled) gravity description in the bulk in the limit described in (3), and it is equivalent to a strongly coupled gauge theory at the boundary, for which standard perturbation theory no longer works. Also, the 10-dimensional Newton's constant, which is the coupling of the gravity theory, can be shown to be,
$$8 \pi G_{10} = (2\pi)^7 \alpha'^4 g_s^2.$$
(4)
From this, it is evident that the gravity theory is weakly coupled. The 5-dimensional Newton's constant $G_5$ can then be related to $G_{10}$ by simply dividing by the volume of the 5-sphere [117].
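Explicitly, since the compactification sphere has radius $L$ and a unit 5-sphere has volume $\pi^3$, the reduction gives

$$G_5 = \frac{G_{10}}{\mathrm{Vol}(S^5)} = \frac{G_{10}}{\pi^3 L^5}.$$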
Although this conjecture has not been proven yet, it passes several essential checks, such as matching the spectrum of chiral operators and correlation functions. One obtains a precise dictionary between field theory correlators and correlators of fields living inside the AdS spacetime [1–3, 117–119]². Holography is being used to study hydrodynamic transport coefficients, phase transitions in condensed matter systems, some aspects of QCD, open quantum systems, quantum chaos, black hole information paradox etc [13, 40, 120–128]³.
Although evidence supports the holographic principle, it is still not clear how gravity emerges from field theory. In recent times, information-theoretic quantities, e.g. entanglement, have provided a more profound insight into the inner workings of AdS/CFT; see the recent review [129] on bulk emergence and quantum error correction in holography. Following holography, a plethora of interesting studies has resulted [40, 123], and several setups based on quantum information scrambling have been proposed to test certain predictions coming from holography. In the rest of this review, we will discuss some of them. We work in natural units, setting $c = \hbar = k_B = 1$.
2.1 ER=EPR and Wormholes
We know that quantum mechanics allows for Einstein-Podolsky-Rosen (EPR) correlations [130], which basically stem from the underlying entanglement structure of the wavefunction describing the system. On the other hand, one can find solutions in general relativity that connect faraway points of spacetime via wormholes [131], called Einstein-Rosen (ER) bridges [132]. These two phenomena seem to challenge the notion of locality [130]. Locality plays an important role in physics, primarily because we cannot send a signal faster than light; from the point of view of spacetime, not all points of spacetime are causally connected. Maldacena and Susskind proposed in [41, 42] that these two effects are related. In the context of AdS/CFT duality, two entangled copies of a conformal field theory having EPR-type correlations have a bulk dual that connects them through a wormhole. In particular, two black holes that are spatially far away but have EPR correlations between their microstates described by the CFT are actually connected through an ER bridge. To elaborate a little more, let us take an analogy from quantum mechanics. Consider two CFTs on two spatially disconnected regions $A$ and $B$, and the following wavefunction,
$$|\psi\rangle = |\psi_A\rangle \otimes |\psi_B\rangle,$$
(5)
where $|\psi_A\rangle$ and $|\psi_B\rangle$ are the wavefunctions of the two non-interacting CFTs at $A$ and $B$. From (5), it is evident that $|\psi\rangle$ does not have any entanglement as it is a direct product state. This can be confirmed by computing von-Neumann entropy by tracing out either $A$ or $B$. This state corresponds to two disconnected geometries in the context of holography [133].

**Fig. 1** Following Ref. [133], the figure depicts that the entangled state of two disconnected CFTs corresponds to a connected geometry, the Penrose diagram of which is shown on the right and explained later in Fig. 2.
Now following [133], we can consider two CFTs placed on $S^n$ and let us denote the $i^{th}$ energy eigenstate of each CFT by $|E_i\rangle$. Then let us consider the following wavefunction (up to some normalization),
$$|\psi\rangle \propto \sum_{i=1}^{n} e^{-\beta E_i/2} |E_i\rangle \otimes |E_i\rangle.$$
(6)
This is basically a sum of product states \(|E_i\rangle \otimes |E_i\rangle\). From (6) it is evident that this state does contain some amount of entanglement, which can be estimated by computing the von Neumann entropy after tracing out one of the CFTs. This is a particular example of the so-called “thermofield double” state. In the context of holography, it can be shown to be dual to a Euclidean “eternal black hole” geometry [77], as shown in Fig. 1, which is basically a two-sided Euclidean black hole. So a quantum superposition of states of two classically disconnected CFTs corresponds to a classically connected geometry (in our case, the two sides are connected by an ER bridge). Next, we briefly discuss the geometry of this two-sided black hole.
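These claims are easy to check numerically. For a toy two-level spectrum (the energies and $\beta$ below are illustrative assumptions), the state of eq. (6) has a thermal reduced density matrix and non-zero entanglement entropy:

```python
import numpy as np

# Sketch: the TFD-like state of eq. (6) for a toy two-level spectrum (the
# energies and beta are illustrative). Tracing out one copy yields a thermal
# density matrix with non-zero von Neumann entropy.
E = np.array([0.0, 1.0])      # toy energy eigenvalues
beta = 1.0                     # inverse temperature

psi = np.diag(np.exp(-beta * E / 2))   # psi[i, j] = amplitude of |E_i>|E_j>
psi /= np.linalg.norm(psi)             # normalization (1 / sqrt(Z))

rho_L = psi @ psi.conj().T             # partial trace over the right copy
rho_thermal = np.diag(np.exp(-beta * E)) / np.exp(-beta * E).sum()
eigs = np.linalg.eigvalsh(rho_L)
entropy = -sum(p * np.log(p) for p in eigs if p > 1e-12)

print(np.allclose(rho_L, rho_thermal), round(entropy, 3))  # True 0.582
```

By contrast, repeating the computation for a product state as in eq. (5) gives exactly zero entropy, confirming the distinction drawn above.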

**Fig. 2** Penrose diagram representing an eternal Schwarzschild-AdS black hole [58]. Also shown are the left and right boundaries where the CFT lies and which the system is dual to. The diagonal lines represent the left and right black hole’s horizons. \(r = 0\) corresponds to the singularity of the spacetime. The original exterior region is the right one (\(R\)) and the new exterior is the left one (\(L\)). No radial null geodesic can escape the future interior into one of the exterior and no null geodesic can connect the left and right exterior.
**Eternal black hole:** We consider the eternal AdS black hole with two asymptotic regions. Its Penrose diagram is depicted in Fig. 2. An eternal black hole consists of two causally disconnected black holes that share a common time [77]. The separated spaces have non-interacting degrees of freedom, but the two black holes are highly entangled [58], and they form a wormhole that connects both of them [58]. To elaborate a little more, let us consider the example of Euclidean non-rotating Banados-Teitelboim-Zanelli (BTZ) metric\(^4\),
\[
ds^2 = f(r)d\tau^2 + \frac{dr^2}{f(r)} + r^2d\phi^2,
\]
where \(f(r) = \frac{r^2 - r_+^2}{L^2}\), \(L\) is the AdS radius, and \(r = r_+\) is the horizon where \(f(r)\) vanishes. The period of the \(\tau\) coordinate, \(\beta = \frac{2\pi L^2}{r_+}\), is identified with the inverse of the temperature \(T\) of the black hole. The period of \(\phi\) is \(2\pi\). Together, \(\tau\) and \(\phi\) provide the coordinates for the space on which the dual CFT is defined. The metric becomes ill-defined at \(r = r_+\), but this is just a coordinate singularity. One can define the following coordinate transformation,
\[
U = -e^{-\kappa u}, V = e^{\kappa v},
\]
where \(\kappa = \frac{r_+}{L^2}\) is the surface gravity and \(u, v = i\tau \mp r_*\), with,
\[
r_* = -\int_r^\infty \frac{dr'}{f(r')} = \frac{L^2}{2r_+} \log \left[ \frac{r - r_+}{r + r_+} \right].
\]
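The logarithm in \(r_*\) follows from a partial-fraction decomposition of the BTZ blackening factor \(f(r) = \frac{r^2 - r_+^2}{L^2}\):

\[
\frac{1}{f(r)} = \frac{L^2}{r^2 - r_+^2} = \frac{L^2}{2 r_+} \left( \frac{1}{r - r_+} - \frac{1}{r + r_+} \right),
\]

which integrates to the logarithm above, the contribution from the upper limit \(r' \to \infty\) being zero.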
This is nothing but a Kruskal transformation [58]. The metric becomes,
\[
ds^2 = -\frac{4L^2 dU dV}{(1 + UV)^2} + \frac{r_+^2 (1 - UV)^2}{(1 + UV)^2} d\phi^2.
\]
\(U = 0\) and \(V = 0\) are the two horizons. From (9), it is evident that the metric is well defined even when either \(U = 0\) or \(V = 0\). While doing the coordinate transformation, we implicitly assumed that \(r > r_+\), thereby making \(U\) negative and \(V\) positive. Similarly, for the region \(r < r_+\) we can perform the same type of coordinate transformation, with the only difference that in that case \(U\) will be positive and \(V\) will be negative. We then again end up with the same form of the metric as shown in (9). Finally, the Penrose diagram for the spacetime looks as shown in Fig. 2.\(^5\) The spacetime now has four regions, as shown in Fig. 2. The two singularities occur at \(UV = 1\) (\(r = 0\)), and \(UV = -1\) (\(r = \infty\)) corresponds to the two asymptotic AdS boundaries. Combining all four regions, we can now interpret the full two-sided Euclidean BTZ space as a wormhole connecting the two asymptotically-AdS spaces. The wormhole is non-traversable in the sense that no signal can be sent from region \(L\) to region \(R\) in Fig. 2, but two people, Alice and Bob, can jump in from the two sides, reach the middle point (the bifurcation point where the \(U = 0\) and \(V = 0\) lines intersect in Fig. 2) and exchange notes. Although we have used mainly the BTZ
\(^4\)For the Euclidean case, we have analytically continued the Lorentzian time: \(t \to i\tau\).
\(^5\)Note that, to draw the Penrose diagram, we need to do further a conformal compactification of the metric defined in (9). This can be done by using a particular coordinate transformation and then throwing out an overall conformal factor. Interested readers are referred to [58, 134] for more details. Also, we have ignored the angular coordinate. Each of the points on Fig. 2 corresponds to a \(S^1\).
metric, all these analyses can be extended to higher dimensions.
**Thermofield double state:** Since AdS/CFT is a two-way street, we now briefly discuss the dual of this geometry. Within the context of holography, each geometry corresponds to a certain state of the dual field theory. From the boundary point of view, the CFT lives on a space described by two coordinates, both of which are periodic. The space looks like a product of two spheres: $S^1_\tau \times S^{d-1}$, where $S^1_\tau$ comes from the $\tau$ coordinate and $S^{d-1}$ comes from the remaining angular coordinates. For the (eternal) BTZ, we have $d = 2$, and for a constant time slice the boundary is the union of two disconnected circles, $S^1 + S^1$. The Euclidean time direction then connects these two circles. Following [77], we can write down the dual state as,
$$|\psi\rangle = \frac{1}{\sqrt{Z(\beta)}} \sum_j e^{-\beta E_j/2} |E_j\rangle_L \otimes |E_j^\star\rangle_R,$$
(10)
where $|E_j\rangle$ denotes an energy eigenstate of the CFT placed on the sphere, $L$ and $R$ indicate the two asymptotic regions, the sum over $j$ runs over all the eigenstates\footnote{At this point, we are still in the field theory limit. Hence this sum goes up to $\infty$, as we are dealing with an infinite-dimensional Hilbert space.} and $Z(\beta)$ is the thermal partition function for one copy of the CFT. The star denotes CPT conjugation. From (10) it is evident that this state is an entangled state defined on a Hilbert space of the form $\mathcal{H} = \mathcal{H}_L \otimes \mathcal{H}_R$. In general finite-dimensional quantum systems, the TFD is a useful way to purify a given thermal state; we discuss this aspect in Section 3.
Due to the presence of the factor $e^{-\beta E_j/2}$, one can easily see (by computing the von Neumann entropy after tracing out one of the subsystems, either $L$ or $R$) that $|\psi\rangle$ possesses non-vanishing entanglement. From the wave-function $|\psi\rangle$ in (10), we can compute the thermal expectation value of any operator in the following way,
$$\langle \psi | O_L |\psi\rangle = Tr(\rho_L^{\beta} O_L),$$
(11)
where $O_L$\footnote{This should be read as $O_L \otimes I_R$, where $I_R$ is the identity operator acting on the right region.} is an operator acting on the left asymptotic boundary. One can then trace over the right region, and effectively the expectation value of this operator is given by the trace of the reduced density matrix of the left region ($\rho_L^{\beta}$) times the operator $O_L$. The reduced density matrix $\rho_L^{\beta}$ arises because we have traced out the right region entirely. The subscript $\beta$ denotes the fact that it is a thermal density matrix, arising from the entanglement between the two copies. Similarly, one can compute higher-point correlation functions. On the dual side, one can use the standard techniques of holography to compute these correlators. Following [77, 135], we quote below the result for the two-point function of two spinless primary operators of scaling dimension $\Delta$, acting on the $L$ and $R$ boundaries (both at $t = 0$), respectively\footnote{In the context of AdS/CFT, this corresponds to scalar fields on AdS with a certain mass.}.
$$\langle \psi | O_R(0, \phi_R) O_L(0, \phi_L) |\psi\rangle \sim \sum_{n=-\infty}^{\infty} \frac{1}{\left[1 + \cosh \left(\frac{2\pi (\phi_R - \phi_L + 2\pi n)}{\beta}\right)\right]^{2\Delta}},$$
(12)
From (12), it is evident that we indeed get non-vanishing correlations between two operators acting on two disconnected CFTs. This is because the underlying geometry and the dual state carry entanglement, even though the two boundary regions are causally disconnected. This provides evidence for the ER=EPR conjecture discussed previously.
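The relation (11) between the TFD expectation value and the thermal trace can be checked numerically for a small toy system. The sketch below uses a random real-symmetric matrix as a stand-in Hamiltonian (so that $|E^\star\rangle = |E\rangle$); all dimensions and couplings are illustrative, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
d, beta = 4, 1.3   # toy Hilbert-space dimension and inverse temperature

# Random real-symmetric "Hamiltonian", so its eigenvectors are real and |E*> = |E>.
A = rng.normal(size=(d, d))
H = (A + A.T) / 2
E, P = np.linalg.eigh(H)   # columns of P are the eigenvectors |E_j>

# TFD state of Eq. (10): |psi> = Z^{-1/2} sum_j e^{-beta E_j / 2} |E_j>_L x |E_j>_R
Z = np.sum(np.exp(-beta * E))
psi = sum(np.exp(-beta * E[j] / 2) * np.kron(P[:, j], P[:, j])
          for j in range(d)) / np.sqrt(Z)

# Left-boundary expectation value versus the thermal trace, Eq. (11)
B = rng.normal(size=(d, d))
O_L = (B + B.T) / 2                                 # a Hermitian observable
lhs = psi @ np.kron(O_L, np.eye(d)) @ psi
rho_L = P @ np.diag(np.exp(-beta * E) / Z) @ P.T    # thermal density matrix
rhs = np.trace(rho_L @ O_L)
print(np.isclose(lhs, rhs))  # True
```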
### 2.2 Teleportation through traversable Wormholes
The rest of the review will mainly focus on quantum information spreading and its implications for holography. Particularly, we will focus on the teleportation of quantum information and the corresponding holographic model. This provides us with an interesting playground to test some of the predictions from holography in the experimental setting. It is evident from our previous discussion that wormholes provide an ideal setting for quantum teleportation [136] because they have EPR-like correlations. However, the wormhole that we have discussed previously is not traversable [58, 131].
As shown in the Fig. 3, Alice sends a signal from the left boundary at some time $-t$. She is accelerating near the left boundary, as shown by the hyperbolic trajectory. She hopes that Bob, who is accelerating near the right boundary, will receive the signal. But as evident from the diagram, as the signal moves at the speed of light, it will always hit the singularity and Bob will never receive it. So we cannot send a signal through this non-traversable wormhole even if it possesses EPR-like correlation.
For teleportation, we need a traversable wormhole [137]. The exact protocols for quantum teleportation through a traversable wormhole will be reviewed in detail in later sections. In this section, we briefly discuss the argument put forward in [83, 84] for making a wormhole traversable. It is well known that in general relativity a traversable wormhole only occurs when the stress tensor of the matter sector violates the null energy condition [83, 138–140]. In the context of AdS/CFT, there is a precise protocol to achieve this, and we discuss it for the eternal AdS black hole following Ref. [83]. We first deform the system by adding a relevant double trace deformation, so that the change in the (boundary CFT) action is given by\footnote{Time runs in opposite directions in the two exterior wedges of the eternal black hole geometry. Hence $\mathcal{O}_R$ and $\mathcal{O}_L$ are inserted at $t$ and $-t$, respectively.},
$$\delta S = \int dt \, d^{d-1}x \, h(t,x)\mathcal{O}_R(t,x)\mathcal{O}_L(-t,x),$$
(13)
where $\mathcal{O}_L$ and $\mathcal{O}_R$ are scalar operators with scaling dimension less than $\frac{d}{2}$, acting on the left and right boundary, respectively. For the case of the eternal BTZ black hole [134], $d+1=3$ and $x$ is the azimuthal coordinate $\phi$. By the AdS/CFT dictionary, these two operators are dual to a scalar field $\varphi$ of a certain mass propagating in the bulk spacetime. We also recall that time runs in opposite directions in the left and right wedges of this eternal geometry. The function $h(t,x)$ is turned on only after a certain time, referred to as the “turn-on” time. The integral over time ensures that we do not get contributions from very high energy states. In the path integral, this deformation contributes a factor of the form $\sim e^{i\delta S}$. In the subsequent sections, following [84], we will ignore this time integral; the contribution to the path integral is then simply $\sim e^{i\tilde{g}\,\mathcal{O}_R(0)\mathcal{O}_L(0)}$, where $\tilde{g}$ is an overall coupling constant, and $\mathcal{O}_L(0)$ and $\mathcal{O}_R(0)$ are inserted at the two asymptotic boundaries at $t=0$.
One can further compute the stress-energy tensor of this scalar field $\varphi$ in the bulk spacetime
$$T_{\mu\nu} = \partial_\mu \varphi \partial_\nu \varphi - \frac{1}{2} g_{\mu\nu} (\partial \varphi)^2 - \frac{1}{2} g_{\mu\nu} m^2 \varphi^2,$$
(14)
where $m$ is the mass of the scalar field. From this, one can compute the 1-loop expectation value of this stress tensor. Following [83], we get,
$$\langle T_{\mu\nu} \rangle = \lim_{x \to x'} \left[ \partial_\mu \partial'_\nu G(x,x') - \frac{1}{2} g_{\mu\nu} \partial_\alpha \partial'^\alpha G(x,x') - \frac{1}{2} g_{\mu\nu} m^2 G(x,x') \right].$$
(15)
One uses the point-splitting method to compute this stress tensor, and one has to renormalize it to get a finite result. $G(x,x')$ is a two-point function of the scalar field; one such two-point function, in the absence of the double trace deformation, is shown in (12). In the presence of the deformation it gets modified; a detailed calculation is given in [83]. Now, as mentioned earlier, to make the wormhole traversable we need to break the null energy condition; in this case, we have to violate the average null energy condition [83]. Let $k^\mu$ be the tangent vector of a null geodesic passing through the wormhole and let $\lambda$ be the affine parameter; then the average null energy condition (ANEC) reads,
$$\int_{-\infty}^{\infty} \langle T_{\mu\nu} \rangle k^\mu k^\nu d\lambda \geq 0.$$
(16)
In our Kruskal coordinate, $\partial_U$ is the tangent vector to the infinite null geodesic along the horizon $V=0$ and we can choose $U$ as the affine parameter. So the violation of ANEC implies,
$$\int dU \langle T_{UU} \rangle < 0.$$
(17)
Now this will back react to the geometry, and for a small spherically symmetric perturbation from the relevant component of the linearized Einstein equation, one can find that at $V=0$ [83],
$$\frac{(d-1)}{4} \left[ \left( \frac{(d-2)}{r_h^2} + \frac{d}{L^2} \right) \left( \delta g_{UU} + \partial_U (U \delta g_{UU}) \right) - \frac{2}{r_h^2} \partial_U^2 \delta g_{\phi\phi} \right] = 8\pi G_N \langle T_{UU} \rangle,$$
(18)
where $r_h$ is the black hole horizon radius and $\phi$ denotes the azimuthal angle. $\delta g_{UU}$ is the linearized fluctuation of the metric. For the BTZ, $d+1=3$ and $r_h=r_+$ which
follows from (9). Again following [83], we can argue that the perturbation reaches a stationary state with respect to the Killing symmetry $U \partial_U$ after the scrambling time, since the deformation is small. Consequently, $T_{UU}$ decays faster than $\frac{1}{U^2}$, and so do all the other terms in equation (18). We then integrate (18) and drop all the total derivative terms, since they vanish at the end points. We get,
\[
\frac{(d-1)}{4} \left( \frac{(d-2)}{r_h^2} + \frac{d}{L^2} \right) \int dU \, \delta g_{UU} = 8 \pi G_N \int dU \langle T_{UU} \rangle.
\]
(19)
This equation relates the integral of $\langle T_{UU} \rangle$ to the integral of $\delta g_{UU}$. We also know that up to linear order in perturbation,
\[V(U) = -\frac{1}{2g_{UV}^0} \int_{-\infty}^{U} dU' \, \delta g_{UU}(U').\]
(20)
Note that $g_{UV}^0$, the original $UV$ component of the metric, is negative on the $V = 0$ slice. Now we can impose the ANEC condition. If the ANEC is violated, then from (18) the integral over $\delta g_{UU}$ is also negative (note that the prefactor $\frac{(d-1)}{4} \left( \frac{(d-2)}{r_h^2} + \frac{d}{L^2} \right)$ in (18) is positive for $d \geq 2$). Following [83], we can conclude that whenever the ANEC is violated, $V(+\infty) < 0$, so that a light ray from the left boundary reaches the right boundary after a finite time. Furthermore, one can also calculate the deviation of this light ray from the horizon ($\Delta V$) by computing the Shapiro time delay (in our case it is actually a time advance\footnote{Here a shockwave backreacts on the geometry, generating this time advance. This effect has also been used in other contexts, for example to discuss causality constraints [143].}), and one can show that it is proportional to $h(t, x)$ [83] as defined in (13). For more details, interested readers are referred to [83].
Before we end the section, we give an intuitive picture of the exchange of classical information in the quantum teleportation protocol realized in the bulk dual through the classical coupling introduced to the system. We briefly sketch the argument provided in [84, 141, 142]. As shown in Fig. 4, Alice first sends her message into the left horizon while accelerating near the left boundary (in our context, this message can be a scalar field propagating towards the black hole horizon). At time $t = 0$, she measures a part of the Hawking radiation emitted from the black hole. Remember, the Hawking radiation is generated due to vacuum fluctuations. Suppose that Alice measures the positive Hawking radiation energy, which corresponds to the positively charged particle of the Hawking pair created near the horizon. She then sends the result of her measurement to Bob, who is accelerating near the right horizon. So a classical communication takes place. Based on the result of Alice’s measurement, Bob now has a sense of what the positive energy particle is, and then he can measure the
Hawking radiation to identify the negative energy particle. This is possible since Alice and Bob share an entangled state (in our case, it corresponds to a thermofield double state). Then Bob can throw a negative energy pulse into the horizon from the right boundary, as shown in Fig. 4. This negative energy pulse causes the singularity to recede and helps the signal from Alice to speed up. Specifically, signals (in our case, a scalar field propagating across the bulk) get advanced, rather than delayed, by the negative energy shock; in general relativity, the corresponding delay effect is known as the Shapiro time delay [144]. This advancement happens because the double-trace coupling $O_L O_R$, turned on for a certain time interval, results in the ANEC violation. So finally, the signal speeds up, and instead of hitting the singularity, it reaches Bob!
So far, in the present section, we have discussed teleportation through a wormhole from the point of view of the bulk gravity. The coupling and the teleportation in gravity have a straightforward representation in the boundary theory described by a TFD: by coupling the two Hamiltonians, information is teleported from one Hilbert space to another. In quantum simulators, which can realize very general states and engineer interesting evolutions, one can ask how general such a gravity-inspired teleportation scheme is. We review some recent developments in understanding the underlying mechanism of teleportation and its applicability in general many-body models in Section 4. In the next section, Section 3, we first set up some useful notation and summarize important results on quantum information scrambling, which forms the basis for the following sections.
## 3 Quantum information spreading
Consider a Heisenberg operator $W$ evolving under a local Hamiltonian $H$ acting on a lattice, such that $W(t) = e^{iHt} W e^{-iHt}$. As a function of time, this operator can be expanded using the Baker–Campbell–Hausdorff formula as
$$W(t) = W + it[H, W] - \frac{t^2}{2!} [H, [H, W]] - \frac{i t^3}{3!} [H, [H, [H, W]]] + \cdots$$
(21)
Thus, as time grows, the operator $W$ contains sums of many products of local operators. For example, if we consider a local Hamiltonian with interactions only between neighboring sites, the operator $W$ spreads to farther and farther sites as time evolves. This is referred to as quantum information spreading, and has been a central topic of many studies in recent years, involving operator growth and out-of-time-ordered correlators (OTOCs; more details follow). Before continuing towards operator growth and spreading, it will be useful to introduce some notation and diagrammatic representations which we use in several places. For the diagrammatic notation we follow Ref. [145].
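The growth of the support of $W(t)$ can be illustrated by exact diagonalization of a small chain. The sketch below uses a transverse-field Ising chain as a generic local Hamiltonian (the model, couplings, and times are illustrative choices, not taken from the text) and shows that a probe on the far end of the chain, which commutes with $W$ at $t=0$, fails to commute once $W(t)$ has spread:

```python
import numpy as np
from functools import reduce

# Embed a single-site 2x2 operator at site i of an N-qubit chain.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
def site_op(op, i, N):
    return reduce(np.kron, [op if j == i else I2 for j in range(N)])

# A generic nearest-neighbour Hamiltonian (transverse-field Ising, couplings = 1).
N = 5
H = sum(site_op(X, i, N) @ site_op(X, i + 1, N) for i in range(N - 1)) \
  + sum(site_op(Z, i, N) for i in range(N))
E, P = np.linalg.eigh(H)

def heisenberg(W, t):
    """W(t) = e^{iHt} W e^{-iHt}, via exact diagonalization."""
    U = P @ np.diag(np.exp(-1j * E * t)) @ P.conj().T
    return U.conj().T @ W @ U

W0 = site_op(Z, 0, N)        # perturbation on the first site
V = site_op(Z, N - 1, N)     # probe on the last site
norms = {}
for t in (0.0, 2.0):
    Wt = heisenberg(W0, t)
    norms[t] = np.linalg.norm(Wt @ V - V @ Wt)
print(norms)  # zero at t = 0; nonzero once W(t) has spread across the chain
```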
### 3.1 Operator-State correspondence
An operator $W$, in a Hilbert space, can be expressed as
$$W = \sum_{i,j=1}^{d} W_{ij} |i\rangle \langle j|,$$
(22)
where $|i\rangle$, $|j\rangle$ denote the basis elements of the Hilbert space whose regularized dimension is $d$, and thus $i, j = 1, 2, \cdots d$. The coefficients $W_{ij} = \langle i|W|j\rangle$ denote the elements in the matrix representation of $W$ in this basis. In Fig. 5(a) this operator is represented with an input leg $i$ and an output leg $j$.
The operator-state correspondence relates an operator of the above form to a state in the doubled Hilbert space, $\mathcal{H} \otimes \mathcal{H}$, given as
$$|W\rangle = \frac{1}{\sqrt{\text{Tr}(W^\dagger W)}} \sum_{i,j=1}^{d} W_{ij} |i\rangle \otimes |j^*\rangle.$$
(23)
The basis states with a star, $|j^*\rangle$, are the time-reversed (or equivalently, complex-conjugated) states. These are related to $|j\rangle$ by an anti-unitary operator, $|j^*\rangle = \Theta |j\rangle$. The prefactor $1/\sqrt{\text{Tr}(W^\dagger W)}$ is the normalization constant. The above map from an operator on a single Hilbert space to a state in a doubled Hilbert space is also known as ‘purification’, since the state $|W\rangle$ is a pure state, i.e. $\text{Tr}[(|W\rangle \langle W|)^2] = 1$. We denote this state by a bent input line, as shown in Fig. 5(b).
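In a concrete basis, the map of Eq. (23) is nothing but a reshape of the matrix $W$ into a vector. A minimal sketch, using a random matrix as a stand-in for $W$ and working in the computational basis (where $\Theta$ acts as complex conjugation, so $|j^*\rangle$ coincides with $|j\rangle$):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
W = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))  # stand-in operator

# Eq. (23) in the computational basis: |W> ∝ sum_ij W_ij |i> x |j>,
# i.e. the flattened (row-major) matrix W, normalized by sqrt(Tr W^dag W).
vecW = W.flatten() / np.sqrt(np.trace(W.conj().T @ W).real)

# |W> is a normalized pure state: Tr[(|W><W|)^2] = 1
rho = np.outer(vecW, vecW.conj())
purity = np.trace(rho @ rho).real
print(np.linalg.norm(vecW), purity)  # both equal 1
```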
An example of a pure state in the doubled Hilbert space is the EPR state. In its simplest form it can be understood as the product of $N$ Bell pairs, $|\text{EPR}\rangle = |\Phi^+\rangle^{\otimes N}$, where $|\Phi^+\rangle = (|00\rangle + |11\rangle)/\sqrt{2}$ is a maximally entangled state of a pair consisting of one qubit from each Hilbert space; here $\{|0\rangle, |1\rangle\}$ is the computational (qubit) basis. This definition can be rewritten using the basis elements of each Hilbert space as
$$|\text{EPR}\rangle = \frac{1}{\sqrt{d}} \sum_j |j\rangle \otimes |j^*\rangle.$$
(24)
Fig. 5 Operator-state correspondence in diagrammatic form. a) An operator $W$ is represented by an ingoing and an outgoing index. b) In the state representation both the ingoing and outgoing index are treated similarly, and each of them denotes a basis state in its respective Hilbert space. c) The state $\rho$ is also related to the EPR state by the relation Eq. (23). The dashed box denotes the EPR state (24). d) The TFD is the finite-temperature generalization of the EPR state, where $\rho = e^{-\beta H}/\text{Tr}(e^{-\beta H})$ is the thermal density matrix in either the left or the right Hilbert space.

Comparing with Eq. (23), we note that the EPR state is a purification of the identity operator $\mathbb{1}$, which is also the density matrix of a state at infinite temperature, $\rho_\infty = \mathbb{1}/d$. Therefore, the EPR state denotes an infinite-temperature state. In what follows we denote the EPR state with the notation shown in the dashed box in Fig. 5(c). Using this definition, we can further write the state $|W\rangle$ in Eq. (23) as
$$|W\rangle = \sqrt{\frac{d}{\text{Tr}(W^\dagger W)}}(W \otimes \mathbb{1})|\text{EPR}\rangle.$$
(25)
The EPR state has a special property, often termed as operator shifting, i.e., an operator acting on the left is the same as the operator transpose acting on the right,
$$(O_L \otimes \mathbb{1})|\text{EPR}\rangle = (\mathbb{1} \otimes O_R^T)|\text{EPR}\rangle,$$
(26)
where the subscripts $L$ and $R$ denote the two copies, as in the case of the two asymptotic regions in holography. These subscripts label the side on which an operator $O$ acts. This identity is a direct consequence of the definition Eq. (22), which implies $W^T = \sum_{i,j} W_{ij}|j\rangle\langle i|$, combined with the definition of the EPR state. We can now revisit the finite-temperature generalization of the EPR state, i.e., the thermofield double state (TFD).
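The operator-shifting identity Eq. (26) is straightforward to check numerically in the computational basis, where $\Theta$ acts as complex conjugation and the EPR state of Eq. (24) is a reshaped identity matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
# EPR state, Eq. (24), in the computational basis: (1/sqrt d) sum_j |j> x |j>
epr = np.eye(d).flatten() / np.sqrt(d)

# A generic (complex, non-Hermitian) operator O as a stand-in
O = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
left = np.kron(O, np.eye(d)) @ epr        # (O_L x 1)|EPR>
right = np.kron(np.eye(d), O.T) @ epr     # (1 x O_R^T)|EPR>
print(np.allclose(left, right))  # True -- Eq. (26)
```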
**Thermofield Double States (TFD):** In the context of CFTs, we listed the TFD state, in the previous section, as the holographic dual to an eternal black hole. On the doubled Hilbert space $\mathcal{H}_L \otimes \mathcal{H}_R$, with finite dimensional Hilbert spaces, the TFD state at temperature $T \equiv 1/(k_B \beta)$ is an entangled state on $2N$ qubits, defined as
$$|\text{TFD}\rangle = \frac{1}{\sqrt{Z}} \sum_{i=1}^{d} e^{-\beta E_i/2}|E_i\rangle_L \otimes |E_i^*\rangle_R,$$
(27)
where $Z = \text{Tr}[\exp(-\beta H)]$. The sum in the TFD runs over the eigenstates $|E\rangle$ of $H$, labeled by $i$, with respective eigenvalues $E$, i.e., $H|E\rangle = E|E\rangle$. The time-reversed states $|E^*\rangle$ satisfy $H^*|E^*\rangle = E|E^*\rangle$. There have been many interesting works using the TFD state, in particular on black holes [146], in quantum field theory [147], and more recently in connection with holography [77, 84, 148] and beyond. Some of the main properties that make it a valuable subject are:
– It is a pure state. Constructing a density matrix $\rho_{\text{TFD}} = |\text{TFD}\rangle\langle\text{TFD}|$, one notes that $\text{Tr}(\rho_{\text{TFD}}^2) = 1$.
– By tracing out one part of the system, we obtain $\text{Tr}_R(|\text{TFD}\rangle\langle\text{TFD}|) = \rho_L$, where $\rho_L$ is the thermal density matrix of the left system with Hamiltonian $H_L$, $\rho_L = \exp(-\beta H_L)/Z$.
– Since the state is defined on a product Hilbert space, expectation values of operators on one Hilbert space stay as thermal expectation values in that system.
For example, for an operator in the left system, $\langle\text{TFD}|O_L|\text{TFD}\rangle = \text{Tr}(\rho_L O_L)$, as already mentioned in the previous section, Eq. (11).
In Fig. 5(d), the TFD is written in terms of the EPR state such that,
$$|\text{TFD}\rangle = \sqrt{d}\sqrt{\rho_L}|\text{EPR}\rangle = \sqrt{d}\sqrt{\rho_R^T}|\text{EPR}\rangle.$$
(28)
Similar to the relation (26), for the TFD state we find,
$$(O_L \otimes \mathbb{1})|\text{TFD}\rangle = (O_L \otimes \mathbb{1})\sqrt{d}\sqrt{\rho_R^T}|\text{EPR}\rangle,$$
$$= \sqrt{d}\sqrt{\rho_R^T}(\mathbb{1} \otimes O_R^T)|\text{EPR}\rangle,$$
and $$(\mathbb{1} \otimes O_R)|\text{TFD}\rangle = \sqrt{d}\sqrt{\rho_L}(O_L^T \otimes \mathbb{1})|\text{EPR}\rangle.$$
(29)
These relations will be useful in the next sections, where we discuss measures of information scrambling and the many-body teleportation circuit. For this purpose, in the next subsection we return to quantifying information scrambling using out-of-time-ordered correlators.
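Relation (28) between the TFD and the EPR state can likewise be verified for a small toy system. The sketch below uses a random real-symmetric Hamiltonian (so that eigenvectors are real and $|E^*\rangle = |E\rangle$); dimension and temperature are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
d, beta = 4, 0.7
A = rng.normal(size=(d, d))
H = (A + A.T) / 2                 # real-symmetric toy Hamiltonian
E, P = np.linalg.eigh(H)

Z = np.sum(np.exp(-beta * E))
# sqrt(rho) for the thermal density matrix rho = e^{-beta H}/Z
sqrt_rho = P @ np.diag(np.exp(-beta * E / 2) / np.sqrt(Z)) @ P.T

epr = np.eye(d).flatten() / np.sqrt(d)                  # Eq. (24)
tfd = np.sqrt(d) * np.kron(sqrt_rho, np.eye(d)) @ epr   # Eq. (28)

# Direct construction, Eq. (27) (real eigenvectors, so |E*> = |E>)
tfd_direct = sum(np.exp(-beta * E[i] / 2) * np.kron(P[:, i], P[:, i])
                 for i in range(d)) / np.sqrt(Z)
print(np.allclose(tfd, tfd_direct))  # True
```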
### 3.2 Out of time ordered correlators
To quantify the spread of information in a quantum system, we can phrase the question in terms of a commutator between the information-carrying operator and a probe. The effect of an initial perturbation, say $W$, on a later measurement of another operator $V$ can be understood by computing the commutator $[W(0), V(t)]$. Even if the operators $W$ and $V$ commute at $t = 0$, after the time evolution (21) the operators need not commute. As an observable, it is meaningful to consider
$$C(t) = \langle[W(0), V(t)]^\dagger[W(0), V(t)]\rangle,$$
(30)
We remark that the above definition of the thermal OTOC is one of several regularizations\textsuperscript{13} often considered to introduce finite temperature [154]. In particular, in the seminal work proposing a bound on the growth of $C(t)$ [67], the finite-temperature OTOC is of the form $W^\dagger \rho^{1/4} V^T(t) \rho^{1/4} W \rho^{1/4} V^\dagger(t) \rho^{1/4}$. However, we work with the form (36) of the thermal OTOC for two reasons: firstly, because of its accessibility in experiments [93, 116], where one only needs to perform local measurements of the operators $W$ and $V^T$ on a prepared state in the two copies (for the detailed measurement protocol, see Section 7); and secondly because, as we will see in Section 4, an averaged form of this thermal OTOC is related to the operator size, which is central to the teleportation mechanism in many-body systems.
### 3.2.2 Illustration in many-body dynamics
To gain intuition about the properties of the OTOC and its dependence on the temperature, let us take an example. We consider the transverse field Ising Hamiltonian in presence of longitudinal fields, on a lattice of $N$ spin-1/2s,
$$H = J \sum_{i=1}^{N-1} \sigma_i^x \sigma_{i+1}^x + \sum_{i=1}^{N} (b \sigma_i^z + h \sigma_i^y), \quad (37)$$
where $\sigma^a$, $a \in \{x, y, z\}$ is the Pauli operator. The coefficient $J$ denotes the interaction strength between neighboring spins, and $b$, $h$ are transverse and longitudinal field strengths respectively. For concreteness, we choose $b = J$ and $h = J/2$.
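A minimal exact-diagonalization sketch of the infinite-temperature version of $C(t)$, Eq. (30), in the model of Eq. (37); the system size and evolution times are illustrative, and the thermal OTOC of the text would additionally insert factors of $\sqrt{\rho}$:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sy = np.array([[0.0, -1j], [1j, 0.0]])
sz = np.diag([1.0, -1.0])
def op(o, i, N):
    return reduce(np.kron, [o if j == i else I2 for j in range(N)])

# Mixed-field Ising chain of Eq. (37), with J = b = 1 and h = 1/2 (hbar = 1).
N, J, b, h = 6, 1.0, 1.0, 0.5
H = J * sum(op(sx, i, N) @ op(sx, i + 1, N) for i in range(N - 1)) \
  + sum(b * op(sz, i, N) + h * op(sy, i, N) for i in range(N))
E, P = np.linalg.eigh(H)

W = op(sx, N // 2 - 1, N)   # Pauli-x operators on adjacent middle sites
V = op(sx, N // 2, N)
d = 2 ** N
C = {}
for t in (0.0, 1.0):
    U = P @ np.diag(np.exp(-1j * E * t)) @ P.conj().T
    Vt = U.conj().T @ V @ U
    comm = W @ Vt - Vt @ W
    # Infinite-temperature average: <...> = Tr(...) / d
    C[t] = np.trace(comm.conj().T @ comm).real / d
print(C)  # C[0.0] = 0; C[1.0] > 0 once the operators fail to commute
```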
In Fig. 6 we plot the numerically calculated OTOC in this model. As shown in Fig. 6(a), we choose the operators $V$ and $W$ to be Pauli operators $\sigma^x$ on adjacent qubits. The initial time dependence of the OTOC depends on the spatial positioning of the operators; here we have chosen the operators in the middle of the 1D lattice chain, separated by unit lattice distance, as seen in panel (a). For generic operators we can choose to normalize such that $O_{th}(t = 0) = 1$. We plot $\tilde{O}_{th}(t) = O_{th}(t)/O_{th}(t = 0)$ in Fig. 6(b), and note that it decays from an initial value of 1. Upon subsequent time evolution the correlations between $W$ and $V$ decay, finally reaching the late-time thermal expectation value. The late-time ($tJ/\hbar \sim 4$) behavior of the OTOCs in Fig. 6(b) is also affected by finite-size effects.
We have also presented the behavior at different temperatures. It is best seen by plotting the slope of the OTOC when it becomes half of its initial value, i.e., the slope when $\tilde{O}_{th}(t) = 0.5$. The numerically computed slope, $(d\tilde{O}_{th}(t)/dt)|_{\tilde{O}_{th}=0.5}$, is presented as a function of temperature in Fig. 6(c).
The decay of the OTOC as discussed above, with the example of a local Hamiltonian on $N = 10$ sites, is a generic feature of OTOCs, expected to hold in all systems which scramble information. In Section 5, we discuss in detail the experimental platforms that can realize the Hamiltonian (37) as well as measure the OTOC, via the protocols discussed in Section 7.
Quantum information scrambling has been central in the studies of the quantum nature of black holes. In this direction, we next briefly recapitulate the Hayden-Preskill recovery protocol [72] for information sent into the black holes and its generalization [73] to general quantum channels.
### 3.3 Hayden-Preskill recovery protocol
According to the original calculation using the Schwarzschild black hole, the Hawking radiation contains information only about macroscopic details, such as the mass (equivalently, the temperature) of the black hole. Since then, questions about the information content of the black hole interior have been explored in many directions [71], in particular revolving around the question of how the thermal radiation can reveal any information about the formation of a black hole. While this can be a difficult
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{fig6.png}
\caption{Thermal OTOC in a many-body model given by Eq. (37) for a system of size $N = 10$ qubits. a) The $W$ and $V$ are chosen to be Pauli operators at adjacent sites. b) The decay of the normalized OTOC $\tilde{O}_{th}(t) = O_{th}(t)/O_{th}(0)$, is shown for different temperatures $T$. c) The temperature dependence can be studied by inferring the slope when $\tilde{O}_{th}(t) = 0.5$. We see that the rate of decay increases with temperature, settling at a constant for large temperatures.}
\end{figure}
\textsuperscript{13}For example, following the traditional definition, the expectation value of an operator $O$ in a thermal state is given by
$$O_{th}(\beta, t) = \frac{1}{Z} \text{Tr}\left(e^{-\beta H} O(t)\right),$$
which, in the diagrammatic notation of Fig. 5, amounts to inserting $\sqrt{\rho}$ on both sides of $O(t)$, and corresponds to a different thermal OTOC.
Fig. 7 Information recovery from a black hole. a) In Page’s calculation, an initial black hole $B$ evaporates into radiation $R$ with time. The growing size of the radiation should be compared with forward time, here denoted by the upward arrow. Modeling the black hole dynamics by Haar-random unitaries, Page showed that, to learn about the black hole from the radiation, one has to wait for the black hole to evaporate half of its entropy, which takes a time of the order $\sim M^3$. b) The Hayden-Preskill protocol begins with a maximally entangled pair between the black hole $B$ and the old radiation $B'$. The initial input from Alice $A$ is maximally entangled with a reference system $N$. Bob collects the radiation $R$, and the conditions for the recovery of Alice’s input information are analyzed. c) The Yoshida-Kitaev protocol, formulated as a quantum circuit applicable to any unitary, has a setting similar to Hayden-Preskill, but the information-recovery procedure is made more concrete. Two possible ways to recover the information are discussed: (i) a probabilistic protocol (denoted by PD, green oval here) and (ii) a deterministic protocol. See text for details.
problem, the black hole thermodynamics suggests that, on average, black holes show similar thermodynamic properties as generally expected in unitary quantum mechanics. For example, they have a finite entropy $S$, proportional to the horizon area, using which a Hilbert space with dimension $d = \exp(S)$ is associated with the black holes. Page considered black holes as quantum objects whose dynamics in the long time limit can be mimicked by Haar random unitaries [155, 156].
Let us denote the initial state of the black hole by a random pure state $|\Psi\rangle$, and consider it evaporating with time. In Fig. 7(a) such a set up is schematically drawn, where $U$ denotes Haar random unitary describing black hole internal dynamics. At initial time we have a pure black hole which with time evaporates into radiation $R$, here the upward direction denotes time, which should be thought of as the growing size of the radiation subspace. Associating dimensions $d_R, d_C$ to the Hilbert spaces of emitted radiation ($R$) and remaining black hole $C$, it holds that, $d_R d_C = d$. The density matrix describing the radiation should be
$$\rho_{\text{rad}} = \text{Tr}_C(|\Psi\rangle \langle \Psi|). \quad (38)$$
For a small amount of radiation, $d_R \ll d_C$, Page showed that $\rho_{\text{rad}} \approx \mathbb{1}/d_R$. Thus, the radiation remains maximally mixed, and one cannot access information about the black hole just by looking at the radiation itself. However, once the black hole evaporates half of its entropy away and the point $d_R = d_C$ is reached, the radiation becomes maximally entangled with the remaining black hole. After this point we have $d_R > d_C$, and the correlations between the remaining black hole and the radiation are sufficient to learn the information from the black hole. However, in Page’s setting, to reach this halfway point one has to wait a time which scales as the cube of the black hole mass ($\sim M^3$), which is impractical for all purposes.
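Page's statement can be illustrated by sampling a Haar-random pure state and tracing out the complement of the "radiation" qubits; the qubit numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8                          # toy "black hole" of n qubits, d = 2^n
d = 2 ** n
# A Haar-random pure state: a normalized complex Gaussian vector.
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

dists = {}
for n_R in (1, 4, 7):          # qubits assigned to the "radiation" R
    d_R, d_C = 2 ** n_R, 2 ** (n - n_R)
    M = psi.reshape(d_R, d_C)
    rho_rad = M @ M.conj().T   # Eq. (38): trace out the remaining hole C
    # Frobenius distance from the maximally mixed state 1/d_R
    dists[n_R] = np.linalg.norm(rho_rad - np.eye(d_R) / d_R)
print(dists)  # small for d_R << d_C, large once d_R > d_C
```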
The problem that Hayden and Preskill discussed [72] in the context of the information in a black hole is as follows. They consider an old black hole which has evaporated at least up to half of its entropy. Bob has been collecting all this radiation and Hayden-Preskill protocol begins with considering maximal entanglement between black hole $B$ and $B'$ which is the radiation collected by Bob, see schematics in Fig. 7(b). Furthermore, Bob has access to all the future radiation. Alice ($A$) wants to hide her quantum information $(\psi)$ by throwing it into the black hole.
The information recovery problem can be further simplified by considering the black hole as a quantum object of $N$ qubits, such that $d = 2^N$, with dynamics given by a unitary $U$ drawn from the circular unitary ensemble of dimension $d$, i.e., the unitary group equipped with the Haar measure. We take Alice’s state to be composed of $k$ qubits; the questions that Hayden and Preskill answered are then (i) how many qubits does Bob need to collect to recover Alice’s state, and (ii) how long does he need to wait to do so?
The analysis of the problem further reduces if one considers Alice to be in maximal entanglement with $N$.
The information content of Alice’s diary will be entirely in the radiation $R$, and information-theoretically it will be possible for Bob to learn about \( \psi \) only when the black hole evaporates to a point after which there is no entanglement between $N$ and the remaining black hole $C$. This translates to the case when the combined density matrix of the $NC$ system separates as
\[
\rho_{NC} = \rho_N \otimes \rho_C ,
\]
(39)
where we find the density matrix in a system by partial tracing every other system, as also done in (38). Without going into more details, we summarize the answers to the above questions here, as they will be directly related to the theme of this review.
(i) How many qubits does Bob need? To answer this, we need to find whether and when Eq. (39) holds. Ref. [72] used the $L_1$ norm \( || \cdots || \) of states; states that are close in the $L_1$ norm are nearly indistinguishable in measurements. Assuming Haar-random evolution of the black hole, they showed that,
\[
\int dU || \rho_{NC} - \rho_N \otimes \rho_C ||^2 \leq 2^{2k-2s} ,
\]
(40)
where \( dU \) is the Haar measure and \( s \) is the number of qubits collected by Bob. Clearly, when \( s > k \), the condition (39) holds up to an exponentially small tolerance. So Bob only needs to collect slightly more qubits than Alice threw in.
(ii) How long does it take? The time needed was shown to be \( t_{sc} \) plus the time needed to radiate \( s \) qubits.
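The decoupling bound (40) can be checked numerically for a toy black hole of a few qubits. Below is a minimal sketch (our own construction, not code from Ref. [72]): a $k$-qubit reference is maximally entangled with part of a $(k+n)$-qubit "black hole", the rest of which is maximally entangled with Bob's early radiation $B'$; a Haar-random unitary scrambles the black hole, $s$ qubits are radiated, and the $L_1$ distance of Eq. (40) is evaluated. The helper names `haar_unitary` and `hp_residual` are ours.

```python
import numpy as np

def haar_unitary(d, rng):
    # Haar-random unitary via QR of a complex Ginibre matrix, with phase correction
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def hp_residual(U, k, n, s):
    """L1 distance ||rho_NC - rho_N x rho_C||_1 after radiating s qubits.

    k: Alice's qubits (entangled with reference N); n: old-black-hole qubits
    (entangled with Bob's early radiation B'); U: unitary on the (k+n)-qubit black hole."""
    dk, dn = 2 ** k, 2 ** n
    ds, dc = 2 ** s, 2 ** (k + n - s)
    # |psi> = sum_{a,b} |a>_N |a,b>_BH |b>_B' / sqrt(dk * dn)
    psi = np.zeros((dk, dk * dn, dn), complex)
    for a in range(dk):
        for b in range(dn):
            psi[a, a * dn + b, b] = 1.0
    psi /= np.sqrt(dk * dn)
    psi = np.einsum('ij,ajb->aib', U, psi)          # scramble the black hole
    psi = psi.reshape(dk, ds, dc, dn)               # split BH into radiation R and remainder C
    rho_nc = np.einsum('arcb,xryb->acxy', psi, psi.conj()).reshape(dk * dc, dk * dc)
    rho_n = np.einsum('arcb,xrcb->ax', psi, psi.conj())
    rho_c = np.einsum('arcb,aryb->cy', psi, psi.conj())
    diff = rho_nc - np.kron(rho_n, rho_c)
    return np.abs(np.linalg.eigvalsh(diff)).sum()   # trace (L1) norm of a Hermitian matrix

rng = np.random.default_rng(7)
U = haar_unitary(2 ** 5, rng)                       # k + n = 5 black-hole qubits
d0 = hp_residual(U, k=1, n=4, s=0)                  # no radiation collected: large residual
d4 = hp_residual(U, k=1, n=4, s=4)                  # s > k: decoupled up to Eq. (40) tolerance
```

With $s > k$ the residual drops below the scale $2^{k-s}$ set by Eq. (40), while with $s = 0$ the reference and the remaining black hole stay strongly correlated.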
Even though these answer the basic questions, how the information recovery (also called decoding) is actually carried out was shown, in a variation of the HP protocol applicable to generic quantum channels, by Yoshida and Kitaev [73], see Fig. 7(c). The recovery protocol assumes that the dynamics is sufficiently mixing, i.e., any initial local information spreads to all degrees of freedom, referred to as maximally mixing.
As drawn in Fig. 7(c), the Yoshida-Kitaev protocol begins with the black hole unitary \( U \) and Bob's unitary \( U^* \). Alice's input is at A, and the black hole B is maximally entangled with B', which is a subsystem of the system Bob possesses. There is a reference system N maximally entangled with A, and another maximally entangled pair A' and N' with Bob. The protocol offers two ways of decoding the information; both algorithms work as long as the dimension of the D subsystem satisfies \( d_D \geq d_A^2 \). The derivation of this bound uses an averaged OTOC of the form (32); we refer the reader to the detailed calculations in Ref. [73].
**Probabilistic decoding:** In probabilistic decoding, after the input of the initial information, both systems ABCD and A'B'C'D' are evolved forward with \( U \) and \( U^* \) respectively, after which the probabilistic decoding is performed (green oval labeled PD in Fig. 7(c)). This involves projecting the combined system DD' onto an EPR pair, while leaving C' and N' untouched. Recall that only D, D', C', N' are in Bob's possession, so all decoding operations can only be performed on these subsystems. The EPR projector is taken to be
\[
P_{[DD']([C'N'])} = |\text{EPR}\rangle_{DD'} \langle \text{EPR}|_{DD'} \otimes \mathbb{1}_{C'N'} ,
\]
(41)
In the above definition we have used, in the subscript of \( P \), the bracket \([\ldots]\) to denote the pair projected onto EPR and \((\ldots)\) for the subsystems on which no operation is performed. The projection onto the EPR pair succeeds with a probability given in terms of an averaged OTOC, which is \( \geq 1/d_A^2 \). From this, it can be shown that when the projection of DD' onto EPR succeeds, N and N' form an EPR pair, and thus from N' Bob can read off the initial input state.
**Deterministic decoding:** The success probability of the probabilistic decoder falls off as \( 1/d_A^2 \) as the size of the subsystem A grows. It can be boosted with a Grover variant. The idea parallels Grover's algorithm: the probability of measuring a target solution is amplified by repeated applications of the Grover oracle and Grover diffusion. Instead of performing EPR projective measurements after the initial evolutions with \( U \) and \( U^* \), we run Grover iterations, see Fig. 7(c). One iteration consists of evolving DD' with \( G_D \) (defined below), followed by evolving D'C' with \( U^T \), A'N' with \( G_A \), and A'B' with \( U^* \), where,
\[
G_D = 1 - 2P_{[DD']([C'N'])} \quad \text{and} \quad G_A = 2P_{[A'N']([B'D])} - 1
\]
(42)
The first operation \( G_D \) is the Grover oracle and the combination \( U^T G_A U^* \) is the Grover diffusion operator [157]. With this decoder, the probability of successfully decoding the initial input after \( m \) Grover steps is \( \sin^2 ((m + 1/2) \theta) \), where \( \theta = 2 \arcsin(1/d_A) \). The probability approaches 1 when \( m \sim \pi d_A/4 \) for \( d_A \gg 1 \).\(^{14}\)
\(^{14}\)For \( d_A = 2 \), the probability of decoding is 1 at \( m = 1 \).
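The amplification schedule follows directly from this closed form. A minimal sketch (the helper name `grover_success` is ours):

```python
import numpy as np

def grover_success(m, d_A):
    # success probability of the deterministic decoder after m Grover iterations
    theta = 2 * np.arcsin(1 / d_A)
    return np.sin((m + 0.5) * theta) ** 2

# footnote check: for d_A = 2 one iteration already gives unit probability,
# since theta = 2 arcsin(1/2) = pi/3 and sin^2(1.5 * pi/3) = sin^2(pi/2) = 1
assert abs(grover_success(1, 2) - 1.0) < 1e-12
```

For larger subsystems the optimal iteration count grows linearly in \( d_A \); e.g., for \( d_A = 16 \) the probability exceeds 0.99 near \( m \approx \pi d_A / 4 \approx 12 \).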
Both the probabilistic and deterministic protocols have been demonstrated in a trapped-ion experiment; we present the setup in Section 5 and the results in Section 7. In the next section we discuss recent developments connecting many-body quantum teleportation to wormhole teleportation [74–76, 158]. For late times and a single-qubit input, the Yoshida-Kitaev circuit turns out to be the same as the single-qubit teleportation circuit.
4 Wormhole teleportation and many-body quantum teleportation
Motivated by the gravity calculation in Section 2.2 describing the negative-energy shock wave in the eternal black hole, Fig. 4, we can devise a quantum circuit designed for a many-body system on a lattice, shown in Fig. 8. We provide a description of this protocol in the next subsection. We then present the mechanism behind the teleportation in terms of the growth of the initially inserted operator, and identify the criteria for successful many-body teleportation using this wormhole-inspired circuit. In subsection 4.2 we illustrate the teleportation of a single qubit. We provide a summary of results from the existing literature in Section 4.3, and end in 4.4 by discussing a different origin of the teleportation, known as size-winding.
4.1 Description of the protocol
As a first step, we map the wormhole teleportation of Section 2.2 one-to-one onto a circuit for many-body dynamics, as considered in [74, 75]. The circuit consists of the following steps. The description is easier to follow if we divide the left and right systems into message and carrier subsystems, labeled by the subscripts $M$ and $C$ respectively. For an $N$-qubit system, the message to be teleported is inserted in the message subsystem $L_M$, composed of $m$ qubits on the left, and received at the message subsystem $R_M$ on the right, also of size $m$. The carrier subsystem thus contains $K = N - m$ qubits.
The circuit begins with a TFD state in the product Hilbert space $\mathcal{H}_L \otimes \mathcal{H}_R$ at time $t = 0$. This corresponds to a non-traversable eternal black hole (as discussed in Section 2). We consider scrambling and thermalizing unitary dynamics on the two sides of the TFD, where forward time evolution on the left is governed by $U_L = U = \exp(-iHt)$ and on the right by $U_R = U^T = \exp(-iH^T t)$. The left side of the TFD is evolved with the adjoint unitary $U^\dagger$ to reach a time $-t$, at which point the message to be teleported is inserted as a state $|\psi\rangle$. This can be done by performing a swap between $|\psi\rangle$ and the state of the message subsystem. Next, the left system is evolved forward with $U$ to time $t = 0$, which scrambles the input information. At this stage, a momentary coupling $\exp(i g V)$ is applied between the left carrier ($L_C$) and right carrier ($R_C$) subsystems. This is similar to the Gao-Jafferis-Wall coupling introduced for wormhole teleportation in Eq. (13), but now adapted to a lattice model [74]. The right system is then evolved forward, after which, if the teleportation is successful, the initial state should appear at $R_M$ [74, 75]. The coupling at $t = 0$ is,
$$G = e^{igV}$$
where, $$V = \frac{1}{K} \sum_{j=1}^{K} O_{j,L}(0)O_{j,R}(0),$$
(43)
where $g$ denotes the coupling strength, and $K = N - m$ is the number of qubits in the carrier subsystems. The above operation can be seen either as quantum gates between the two sides, or simply as communicating the results $o_{j,L}$ of measurements of the operators $O_{j,L}$ on the left, followed by a conditioned operation on the right carrier [84],
Operator on the $R$ system = $e^{ig \sum_j o_{j,L} O_{j,R}}$,
(44)
similar to the wormhole discussion in Fig. 4. Note that the above teleportation differs from conventional quantum teleportation, where a measurement of the initial quantum information is classically sent to a decoder. Here, the information is first scrambled, and the results of the classical measurements are then used to perform quantum operations on the right carrier subsystem.
In recent works, the above circuit, though inspired by wormholes, has been found to teleport the initial information not only in gravity models but also in models far from them: high-temperature SYK [158], spin models, and random unitary channels [75, 76]. There thus seems to be a unified underlying mechanism assisting the teleportation. As we explain below, this mechanism is based on the growth of operators under scrambling dynamics; see Ref. [159] for the generic notion of operator growth.
4.1.1 Mechanism of teleportation: Operator size
Let us use the Pauli basis to expand operators. The Pauli basis for $N$ qubits is formed by taking $N$-fold tensor products of single-qubit Pauli operators,
$$\mathcal{P} = \{\mathbb{1}, \sigma^x, \sigma^y, \sigma^z\}^{\otimes N}.$$
The circuit shows a state $|\psi\rangle$ inserted at time $-t$ by removing the qubits in the message subsystem $L_M$. This should be viewed as an operator $Q_L$ acting on the qubits in $L_M$ such that
$$Q_L = |\psi\rangle\langle\phi| ,$$
(45)
where $|\phi\rangle$ denotes the state of the subsystem $L_M$ at $-t$. The coupling in the teleportation circuit acts at time $t = 0$ on the state $Q_L(-t)\rho^{1/2}$. Since an operator applied at $-t$ can be related to a state insertion in this fashion, from here on we use the words operator and state synonymously when talking about the inserted message. We also drop the minus sign in front of the time for brevity, keeping in mind that an operator with a subscript $L$ is inserted at $-t$, and restoring the sign explicitly whenever it is not obvious. We begin by expanding this operator in the Pauli basis,
$$Q_L(t)\rho^{1/2} = \frac{1}{\sqrt{d}} \sum_{P \in \mathcal{P}} c_P(t) P ,$$
(46)
where the coefficients $c_P$ satisfy $\sum_P |c_P|^2 = 1$. For a Pauli string $P$, the size $|P|$ is defined as the number of non-identity operators in the string. As is evident from the basis set $\mathcal{P}$, many Pauli strings share the same size, each entering the operator in Eq. (46) with some coefficient $c_P$. There is thus a distribution of sizes, defined for a size $l$ as,
$$q(l) = \sum_{|P|=l} |c_P(t)|^2 .$$
(47)
Summing over all sizes, the distribution obeys $\sum_l q(l) = 1$, which is simply the sum of the probabilities of finding the operator of Eq. (46) in one of the strings $P$.
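The size distribution can be computed by brute force for a few qubits by projecting an operator onto every Pauli string. A minimal sketch (our own construction; `size_distribution` is a hypothetical helper name), using $\sigma^z_1$ before and after conjugation by a random unitary:

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I2, X, Y, Z]

def size_distribution(Q, N):
    """q(l) for an operator Q on N qubits normalized so that tr(Q^dag Q) = 2^N."""
    d = 2 ** N
    q = np.zeros(N + 1)
    for idx in product(range(4), repeat=N):
        P = np.array([[1.0 + 0j]])
        for i in idx:
            P = np.kron(P, paulis[i])
        c = np.trace(P.conj().T @ Q) / d          # c_P = tr(P Q)/d (Paulis are Hermitian)
        q[sum(i != 0 for i in idx)] += abs(c) ** 2
    return q

N = 3
Z1 = np.kron(Z, np.eye(4))                        # sigma^z on qubit 1
q0 = size_distribution(Z1, N)                     # unscrambled: all weight at size 1
rng = np.random.default_rng(1)
z = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
U, _ = np.linalg.qr(z)                            # random unitary
qt = size_distribution(U.conj().T @ Z1 @ U, N)    # scrambled: weight spreads over sizes
```

Both distributions are normalized to 1, while the scrambled operator has weight on larger sizes.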
At this point, we take a slight detour to learn a trick for obtaining the size of a Pauli string. We discuss it here for bosonic operators, closely following Ref. [76]. The size of an operator can be found by considering EPR projectors in the doubled Hilbert space. To see this, first consider an EPR projector for a single qubit in the doubled Hilbert space of $N$ qubits,
$$P_{\text{EPR},i} = \mathbb{1} \otimes \cdots \otimes (|\text{EPR}\rangle\langle\text{EPR}|)_i \otimes \cdots \otimes \mathbb{1}$$
$$= \mathbb{1} \otimes \cdots \otimes \frac{1}{4} \sum_{P_i} P_{i,L} P_{i,R}^* \otimes \cdots \otimes \mathbb{1} ,$$
(48)
where $P_i \in \{\mathbb{1}, \sigma^x, \sigma^y, \sigma^z\}$. Next, note that the expectation value of a single-qubit Pauli $P_i$ in the single-qubit EPR state gives its normalized trace, $\langle \text{EPR} | P_i | \text{EPR} \rangle = \text{tr}(P_i)/2 = \delta_{P_i,\mathbb{1}}$. Thus, the above projector acts on $P | \text{EPR} \rangle$ as,
$$P_{\text{EPR},i}(P | \text{EPR} \rangle) = \delta_{P_i,1}(P | \text{EPR} \rangle).$$
(49)
Thus, the eigenvalue of the single-qubit EPR projector at the $i$th qubit is non-zero only when the string $P$ has an identity at the $i$th site. This property can be used to count the number of identities, and hence the size of the string, by summing all such single-qubit EPR projectors, i.e., we can define a counting operator,
$$\tilde{V} = \frac{1}{N} \sum_{i=1}^{N} P_{\text{EPR},i}$$
(50)
which follows [directly from Eq. (49)],
$$\tilde{V}(P | \text{EPR} \rangle) = \frac{N - |P|}{N}(P | \text{EPR} \rangle),$$
(51)
by counting the identities in the Pauli string, and thereby giving the size $|P|$ of the string $P$. For states of the form (46), which are linear in $P$, we note that,
$$\tilde{V}(Q_L(t) | \text{TFD} \rangle) = \tilde{V}\left(\sum_{P} c_P(t) P\right) | \text{EPR} \rangle$$
$$= \sum_{P} \frac{N - |P|}{N} c_P(t)(P | \text{EPR} \rangle).$$
(52)
which immediately leads to,
$$\langle \text{TFD} |\, Q_L^\dagger(t)\, \tilde{V}\, Q_L(t)\, | \text{TFD} \rangle = \sum_{P} \left(1 - \frac{|P|}{N}\right) |c_P(t)|^2$$
(53)
Thus, the expectation value of $\tilde{V}$ in the state just before the coupling is applied in Fig. 8 gives the average operator size.
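The eigenvalue relation (51) can be verified explicitly for $N = 2$ pairs. A minimal sketch (our own construction), using the interleaved qubit ordering $[1_L, 1_R, 2_L, 2_R]$ so that each EPR pair occupies adjacent slots; `left_string` is a hypothetical helper name:

```python
import numpy as np

# single-pair EPR state and projector
phi = np.array([1, 0, 0, 1], complex) / np.sqrt(2)
E = np.outer(phi, phi.conj())
epr = np.kron(phi, phi)                             # |EPR> for N = 2 pairs
I2, I4 = np.eye(2), np.eye(4)
X = np.array([[0, 1], [1, 0]], complex)
Z = np.diag([1.0, -1.0]).astype(complex)

N = 2
Vt = (np.kron(E, I4) + np.kron(I4, E)) / N          # counting operator of Eq. (50)

def left_string(p1, p2):
    # Pauli string acting on the two *left* qubits only
    return np.kron(np.kron(p1, I2), np.kron(p2, I2))

# eigenvalue (N - |P|)/N of Eq. (51) for strings of size 1, 2 and 0
for P, size in [(left_string(X, I2), 1), (left_string(X, Z), 2), (left_string(I2, I2), 0)]:
    v = P @ epr
    assert np.allclose(Vt @ v, (N - size) / N * v)
```

Each EPR projector returns 1 on an identity site and 0 on a Pauli site, so the sum reads off $(N - |P|)/N$ exactly as in Eq. (51).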
We now return to the effect of the coupling $G$ in (43). From the form of the operator $\tilde{V}$ it should by now be clear where this discussion is headed: the coupling (43), central to the teleportation protocol, has the same form as the operator $\tilde{V}$ and thus measures the average size of the operators applied before $t = 0$. The effect of this coupling can be simplified further.
First, note that the counting operator in Eq. (50) is generic. For a non-trivial coupling we should remove the trivial identity operation. This amounts to restricting the sum over $P_i$ in the single-qubit EPR projector, Eq. (48), to $P_i \neq \mathbb{1}$, such that,
$$\tilde{V}_{P_i \neq \mathbb{1}} = \frac{1}{N} \sum_{i=1}^{N} \left( \frac{1}{3} \sum_{P_i \neq \mathbb{1}} P_{i,L} P_{i,R}^* \right) = \frac{4}{3} \tilde{V} - \frac{1}{3} ,$$
(54)
thus, the eigenvalue of $\tilde{V}_{P_i \neq \mathbb{1}}$ on the state $P|\text{EPR}\rangle$ is $1 - \frac{4|P|}{3N}$. Next, recall that we have assumed the dynamics to be scrambling and thermalizing; in this case, after inserting $Q_L$ and letting the system scramble for a time $t$, it is sufficient to keep just 1 of the 3 non-trivial $P_i$. This is justified when $t \geq t_{\text{scr}}$, since by then the initial information has spread equally to all sites and all 3 non-trivial Pauli operators probe the operator size in the same way. Thus the coupling $\tilde{V}$ at $t = 0$ can, without loss of generality, be taken in the much simpler form [74],
$$V = \frac{1}{K} \sum_{i=1}^{K} \sigma_{i,L}^z \sigma_{i,R}^z ,$$
(55)
The coupling contains the operator $V$ between the $K$ carrier qubits only. In this work we focus on $m \ll N$, specifically $m = 1$. In this case the average size $\sum_P |c_P|^2 |P|/N$ in (53), which uses all $N$ qubits, can be regarded as the same as the average size $\sum_{P_c} |c_{P_c}|^2 |P_c|/K$ over the $K = N - m$ carrier qubits, where $P_c$ is a Pauli string on the carrier qubits only.
Continuing the same calculation as above for the generic $\tilde{V}$, we find the expectation value of $V$ in the state prepared just before $t = 0$ to be,
$$\langle V \rangle_Q = \langle \text{TFD} | Q_L^\dagger(t) V Q_L(t) | \text{TFD} \rangle = 1 - \frac{4}{3} |\varrho(\epsilon_-)| ,$$
(56)
where $|\varrho(\epsilon_-)| = \sum_{P_c} |c_{P_c}|^2 |P_c|/K \approx \sum_{P} |c_{P}|^2 |P|/N$ is the average size, over the $K$ carrier qubits, of the state that existed just before $t = 0$ (hence the argument $\epsilon_-$), i.e., of $\varrho(\epsilon_-) = Q_L(t)\rho^{1/2}$.
We can now ask: what is the effect of the coupling $G = \exp(i g V)$? For a large number of carrier qubits $K$ we can use the factorization property that any expectation value of the form $\langle B | G | B \rangle \approx \exp(i g \langle B | V | B \rangle)$. Thus the coupling $G$ acts on the state prepared at $t = 0$ as,
$$e^{igV} Q_L(t) | \text{TFD} \rangle = e^{ig \langle V \rangle_Q} Q_L(t) | \text{TFD} \rangle .$$
(57)
To conclude, we see from Eqs. (56) and (57) that the effect of the non-trivial coupling is to apply an operator-size-dependent phase to the state $Q_L(t) | \text{TFD} \rangle$.
### 4.1.2 Criterion for a successful teleportation
We now ask when the teleportation according to circuit 8 is successful. As presented in the circuit, having applied the coupling $G$ at $t = 0$, we evolve the right circuit with $U^T$ for a time $t$. After this, as shown below and as presented in Ref. [74], we obtain the operator $Q^T$ at the right message subsystem. A further decoding operation $D$ can then be applied to obtain $Q$; we explain this shortly. First, we redraw the circuit of Fig. 8 with this decoding operation included, giving the circuit (58).
It can be noted (as explained below) that for the teleportation to be successful, the following must hold [76],
$$e^{igV} Q_L(t) |\text{TFD}\rangle = e^{i\phi}\, Q_R(t) |\text{TFD}\rangle , \qquad Q_R(t) = U^* D^{\dagger} Q_R D U^T ,$$
(59)
where the phase $\phi = g\langle V \rangle_Q$ depends on the operator $Q$ and measures its average size. In the large-$K$ limit, this overall phase is justified by Eq. (57). Away from the large-$K$ limit, for multi-qubit teleportation, such an overall phase is possible only when the effect of the coupling $\exp(i g V)$ on $P|\text{TFD}\rangle$ is the same for all $P$, i.e., $\exp(i g V)\, P | \text{TFD} \rangle \sim \exp(i \phi)\, P | \text{TFD} \rangle$. Since $G$ acts on $Q_L(t)|\text{TFD}\rangle$, the phase $\phi$ measures the size of $Q_L(t)$. Thus, an overall phase as in (59) is possible when the size distribution (47) is tightly peaked around the average size $\sum_P |c_P|^2 |P| / N$ of the operator, dubbed peaked-size teleportation. This situation occurs in a wide range of many-body dynamics (see subsection 4.3 below).
Assuming peaked-size teleportation, we analyze the right side of the above equation (59). To begin, let us momentarily set $D = \mathbb{1}$; then on the right side we get an operator $Q_R(t) = U^* Q_R U^T = (U^T)^{\dagger} Q_R U^T$. Note that when the left side evolves with $U$, the right evolves with $U^T$ in Fig. 8. Thus the above transfers an operator $Q$ on the left at time $-t$ to the transpose of the operator, $Q^T$, on the right at time $t$. This is exactly the teleportation circuit presented in Refs. [74, 75], which obtained $Q^T$ on the right when $Q$ was inserted on the left, as is the case with the circuits in Fig. 8.
The role of the decoder is now clear: to obtain the operator $Q$ teleported to the right, we need a decoding operation $D$ such that $D^{\dagger} Q^T D \propto Q$. The success of the teleportation protocol for generic $U$, as in many-body dynamics, then boils down to determining when the identity (59) holds. We begin by taking the inner product of the left and right sides of (59),
$$C_Q = \langle \text{TFD}| Q_R(t)^{\dagger} e^{i g V} Q_L(t) |\text{TFD} \rangle,$$
(60)
where $Q_R(t) = U^* D^{\dagger} Q_R D U^T$, as on the right side of (59). Following Eq. (59), the first condition for successful teleportation is that [76]
(i) the magnitude of $C_Q$ is maximal for any operator $Q$,
which ensures that the teleportation succeeds for an arbitrary initial state, or equivalently an arbitrary sum of operators $Q$. The second condition is that
(ii) the coupling applies the same phase $e^{i \phi}$ to all input states, which is the case when the size distribution is tightly peaked around the average size of the operator.
We summarize in 4.3 that these two conditions are generically satisfied in many-body models, while holographic models follow the wormhole teleportation mechanism. In the next subsection we illustrate this form of teleportation in a many-body model described by the Hamiltonian (37).
4.2 Illustration in many-body dynamics
To illustrate the teleportation protocol in a many-body system, we consider the Hamiltonian (37) and numerically run the left circuit in Fig. 8 for a spin-1/2 system with $N = 7$ qubits. We present results for single-qubit teleportation in the infinite-temperature TFD, i.e., the EPR state. Preparing an EPR state at $t = 0$, we evolve backward to $-t$ and then swap the first qubit with a state that has expectation value $\langle \sigma_1^z \rangle = 1$. This is done by inserting the up state, denoted in the computational basis $\{0,1\}$ by $|0\rangle = (1,0)^T$. We then evolve forward, apply the coupling, and evolve the right side with $U^T$.

**Fig. 9** Illustration of the teleportation protocol for the Hamiltonian (37): (a) For an input state at qubit 1 on the left, such that $\langle \sigma_1^z \rangle = 1$, and following Fig. 8 with $g = \pi$, the expectation value on the right at qubit 1 is presented. (b) For a time $J t = 4$, we present $\langle \sigma_1^z \rangle_R$ measured on each qubit on the right. The teleportation succeeds at qubit 1 (shown in black), while at all other qubits $\langle \sigma_{j \neq 1}^z \rangle_R \approx 0$.
In Fig. 9(a) we observe the expectation value $\langle \sigma_1^z \rangle_R$ at qubit 1 on the right for coupling strength $g = \pi$. With time, the magnitude of $\langle \sigma_1^z \rangle_R$ increases and saturates once the information has reached all qubits, i.e., on the scale of $t_{\text{scr}}$; this corresponds to $|t_L| = t_R \geq t_{\text{scr}}$. In Fig. 9(b), we fix the evolution time at $J t = 4$ and observe the expectation value $\langle \sigma_1^z \rangle_R$ on all qubits. The teleportation is successful (black curve) only at the message qubit, labeled 1, while at all other qubits $\langle \sigma_{j \neq 1}^z \rangle_R \approx 0$. The teleported signal is maximal in magnitude only for certain values of $g$, and the teleportation has a finite infidelity. For more details on the $g$-dependence and fidelities in spin models we refer to Ref. [75].
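The full protocol can also be run as a small density-matrix simulation. The sketch below is our own minimal construction, not the code behind Fig. 9: it uses a random dense Hermitian matrix as a stand-in for the Hamiltonian (37) (which is not reproduced here), $N = 4$ qubits per side at infinite temperature, and measures $\langle \sigma^z \rangle$ on the right message qubit. With the coupling switched off ($g = 0$) the right marginal stays maximally mixed and the signal is exactly zero; $g = \pi$ produces a nonzero teleported signal whose sign and magnitude depend on the model and conventions.

```python
import numpy as np

def op_at(op, pos, n):
    # embed a single-qubit operator at position pos among n qubits (pos 0 = most significant)
    out = np.array([[1.0 + 0j]])
    for i in range(n):
        out = np.kron(out, op if i == pos else np.eye(2))
    return out

N = 4                                       # qubits per side; left qubit 0 carries the message
d, D = 2 ** N, 4 ** N
Z = np.diag([1.0, -1.0]).astype(complex)

rng = np.random.default_rng(3)
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H = (A + A.conj().T) / 2                    # random Hermitian stand-in, NOT the model (37)
evals, evecs = np.linalg.eigh(H)

def U_of(t):                                # U(t) = exp(-iHt) via the eigendecomposition
    return (evecs * np.exp(-1j * evals * t)) @ evecs.conj().T

t = 10.0                                    # well past the scrambling time of this dense H
Id = np.eye(d)
Ub = np.kron(U_of(-t), Id)                  # left side backward to -t
Uf = np.kron(U_of(t), Id)                   # left side forward to t = 0
Ur = np.kron(Id, U_of(t).T)                 # right side evolves with U^T
P0 = np.diag([1.0, 0.0]).astype(complex)    # |0><0| for the inserted message
Zr = op_at(Z, N, 2 * N)                     # sigma^z on the right message qubit
# coupling V = (1/K) sum_j Z_{j,L} Z_{j,R} over the K = N - 1 carrier qubits, Eq. (43)
V = sum(op_at(Z, j, 2 * N) @ op_at(Z, N + j, 2 * N) for j in range(1, N)) / (N - 1)

# infinite-temperature TFD = product of EPR pairs between the two sides
epr = np.zeros(D, complex)
for a in range(d):
    epr[a * d + a] = 1.0
epr /= np.sqrt(d)

def run_protocol(g):
    rho = np.outer(epr, epr.conj())
    rho = Ub @ rho @ Ub.conj().T
    # insert the message: replace left qubit 0 by |0><0| (trace out and re-tensor)
    r4 = rho.reshape(2, D // 2, 2, D // 2)
    rho = np.kron(P0, r4[0, :, 0, :] + r4[1, :, 1, :])
    rho = Uf @ rho @ Uf.conj().T
    G = np.diag(np.exp(1j * g * np.diag(V)))   # V is diagonal, so exp(igV) is too
    rho = G @ rho @ G.conj().T
    rho = Ur @ rho @ Ur.conj().T
    return np.real(np.trace(rho @ Zr))

signal_off = run_protocol(0.0)              # no coupling: the right side learns nothing
signal_on = run_protocol(np.pi)             # g = pi: nonzero teleported signal
```

Since the left operations are local to the left side, the $g = 0$ baseline vanishing is exact, which makes it a useful sanity check of the implementation.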
4.3 Summary and remarks
We derived above the requirements for successful teleportation. It has been shown by analytical calculations in high-temperature SYK [158], spin models [75], and random unitary circuits [76], and by several numerical studies, that this success criterion holds for a large class of models and parameters. This is summarized in Fig. 10, taken from [76]. In this subsection we summarize their results and add some remarks.

**Fig. 10** *Summary of the teleportation for different unitary dynamics.* These plots are taken from Ref. [76] with the authors' consent. (a) The fidelity of teleportation decreases with decreasing temperature. The characteristic time, which marks the teleported qubits, decreases at long times. (b) The fidelity features distinct peaks for holographic versus other scramblers for $t < t^*$. For low-temperature SYK, a model of black holes, the fidelity peaks at $t = t_{\text{sc}}$ and is zero otherwise, whereas for other scramblers it shows ripple-like behavior. After $t > t^*$ the SYK fidelity revives, saturating at $\propto G_\beta$. Thus, after $t^*$ all scramblers have fidelity $\propto G_\beta$ and follow the peaked-size mechanism of teleportation.
- **Holographic and peaked-size teleportation:** Peaked-size teleportation means that the coupling applies the same (operator-size-dependent) phase to all Pauli strings making up the operator on the left. The two-sided correlator (60) is therefore $C_Q = G_\beta e^{i \phi}$, where $\phi = g\langle V \rangle_Q \sim \sum_P |c_P|^2|P|/N$ (see Eq. (56)) measures approximately the average size of the operator $Q_L(-t)\sqrt{\rho}$, and
$$G_\beta = \langle \text{TFD} | \tilde{Q}_R(t)^\dagger Q_L(-t) | \text{TFD} \rangle$$
$$= \text{Tr}(\tilde{Q}_L\rho^{1/2}Q_L\rho^{1/2}) ,$$
(61)
is the two-point function between the right and left operators. Using the properties of the TFD state, we have rewritten it in the second line as a thermal two-point function on one side of the TFD, with the notation $\rho = \rho_L$. This thermal two-point function decreases as the temperature decreases, and satisfies
$$G_\beta \leq 1 ,$$
(62)
with the bound saturated at $\beta = 0$. Since $C_Q$ measures the overlap of the right side at $t > 0$ with the left side at $t < 0$, the two-point function $G_\beta$ governs the fidelity of the teleportation, which therefore decreases with decreasing temperature in the peaked-size mechanism (shown in red-pink in the summary Fig. 10(a)). When the size distribution has a finite width, i.e., an imperfectly peaked size distribution, the fidelity decreases further [76].
In Fig. 10(b), the peaked-size teleportation mechanism (in red) is contrasted with holographic wormhole teleportation [84], as realized in low-temperature SYK (in blue). Low-temperature SYK teleports with perfect fidelity at times around the scrambling time $t_{\text{sc}}$ and with zero fidelity otherwise. In contrast, in peaked-size teleportation the fidelity is of order $G_\beta$ and shows certain features with time. At long times (beyond $t^*$), however, the low-temperature SYK fidelity revives with a reduced value $\propto G_\beta$, as in peaked-size teleportation; above $t^*$, all scramblers teleport via the peaked-size mechanism. The distinct time dependence of the fidelity is therefore a strong signature distinguishing holographic from peaked-size teleportation.
- **Connection with the thermal OTOC:** Recall from Eqs. (56) and (57) that the action of the coupling $G = \exp(i g V)$ is to apply a size-dependent phase $\exp(ig\langle V \rangle_Q)$, where $\langle V \rangle_Q$ from (56) can be expanded as,
$$\langle V \rangle_Q = \frac{1}{K} \sum_{i=1}^{K} \langle \text{TFD} | (Q_L^\dagger(t) \otimes \mathbb{1})(O_{i,L} \otimes O_{i,R}^\dagger)(Q_L(t) \otimes \mathbb{1}) | \text{TFD} \rangle$$
$$= \frac{1}{K} \sum_{i=1}^{K} \text{Tr}[\rho^{1/2}Q_L^\dagger(t)O_{i,L}Q_L(t)\rho^{1/2}O_{i,L}^\dagger]$$
$$= \frac{1}{K} \sum_{i=1}^{K} O^{(i)}_{\text{th}}(\beta,t) .$$
(63)
This is the average of the thermal OTOC defined in the previous Section 3, Eq. (36), for the operators $Q$ and $O_i$.
• **Connection with the HPR protocol**: In the late-time, high-temperature limit, the teleportation protocol can be shown to reduce to Hayden-Preskill recovery (denoted by the red diamond HPR in Fig. 10(a)) for single-qubit teleportation. For long times \(|t_L| = t_R = t > t_{\text{scr}}\) and in the infinite-temperature limit, the coupling acts at \(t = 0\) as,
\[
e^{igV} Q_L(t) |EPR\rangle = e^{ig\langle V \rangle} Q_L(t) |EPR\rangle \\
= Q_L(t) |EPR\rangle = [Q_R(t)]^T |EPR\rangle .
\]
(64)
The absence of the phase factor follows directly from the relation of \(\langle V \rangle\) to the averaged OTOC, Eq. (63). At times \(t > t_{\text{scr}}\), the OTOC for infinite-temperature states decays to zero, so the overall phase above is 1 whenever a non-trivial \(Q_L\) is applied. When \(Q_L = \mathbb{1}\), Eq. (63) instead gives \(\langle V \rangle = 1\), and thus Eq. (64) holds for generic \(Q_L\) whenever \(g = n\pi\), with \(n \in \mathbb{Z}\), up to a global phase.
Including also the identity operations, the coupling \(V\) reads,
\[
V = \frac{1}{K} \sum_{i=1}^{K} P_{\text{EPR}, i} \\
= \frac{1}{K} \sum_{i=1}^{K} \frac{1}{4} \left( \sum_{P_j} P_{j,L} P_{j,R}^* \right)_i
\]
(65)
where the outer sum runs over the carrier qubits and the inner sum builds the EPR projector on the \(i\)th carrier pair across the two sides. We have used the notation of Eq. (48), and recall that \(P_j \in \{\mathbb{1}, \sigma^x, \sigma^y, \sigma^z\}\). At this point we use the late-time property, \(t \geq t_{\text{scr}}\), that the time-evolved operator \(Q_L(t)\) has spread to all available sites. At such times the effect of the above coupling is unchanged if we replace the sum over local EPR projectors by an EPR projector on the full carrier subsystem. That is, at late times we can equally take,
\[
V = P_{\text{EPR}} = \frac{1}{d_D^2} \sum_{P_D} P_{D,L} P_{D,R}^*
\]
(66)
where we have replaced the earlier notation \(C\) for the carrier subsystem by the letter \(D\), for comparison with Fig. 7(c). The sum now runs over the Pauli operators on the full subsystem \(D\). With this, for \(g = \pi\) we have,
\[
e^{igV} = e^{i\pi P_{\text{EPR}}} = 1 - 2P_{\text{EPR}}
\]
(67)
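Equation (67) is the general projector identity \(e^{i\pi P} = 1 - 2P\), which holds because an orthogonal projector has only eigenvalues 0 and 1. A quick numerical check on a random projector (our own sketch):

```python
import numpy as np

rng = np.random.default_rng(5)
d, r = 8, 3
z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
q, _ = np.linalg.qr(z)
P = q[:, :r] @ q[:, :r].conj().T                 # random rank-r orthogonal projector
vals, vecs = np.linalg.eigh(P)                   # eigenvalues are (numerically) 0 or 1
expP = (vecs * np.exp(1j * np.pi * vals)) @ vecs.conj().T   # e^{i pi P}
assert np.allclose(expP, np.eye(d) - 2 * P)      # Eq. (67)
```

The result is a reflection: applying it twice returns the identity, which is exactly the structure the Grover oracle exploits.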
We wish to show the equivalence between the Yoshida-Kitaev circuit, Fig. 7(c), and the many-body teleportation circuit (58). For this purpose we identify \(G_D = 1 - 2(P_{\text{EPR}})_{DD'}\) and \(G_A = 1 - 2(P_{\text{EPR}})_{A'N'}\). Note that for a single qubit, \(d_A = 2\), at the subsystem \(A\), the latter becomes,
\[
G_A = 1 - 2(P_{\text{EPR}})_{A'N'} = \sigma_A^y (\text{SWAP}) \sigma_N^y ,
\]
(68)
where the SWAP is the swap operator,
\[
\text{SWAP} = \frac{1}{d_A} \sum_{P_A} P_{A,A'} P_{A,N'}
\]
(69)
Thus, comparing with the teleportation figure, the decoder is \(D = \sigma^y\). With these operations, the output of the circuit (58) for an input operator \(Q_L = O\) is
\[
Q_R = \sigma^y O^T \sigma^y = O , \quad \forall \ O \ \text{of the form Eq. (46)}.
\]
\]
Hence, single-qubit teleportation with \(D\) in (58) replaced by the single-qubit Grover oracle succeeds with fidelity 1. Using \(G_D\) and \(G_A\) in (58), for an infinite-temperature initial state, and sliding the left \(U^\dagger\) to the right to make \(U^*\), we see that the teleportation circuit for a single qubit in the \(A\) subsystem is the same as Fig. 7(c).
### 4.4 The size-winding mechanism
In contrast to the size distribution, Eq. (47), of the operator \(Q_L \rho^{1/2}\), Eq. (46), Refs. [74, 75] define the winding size distribution as,
\[
\tilde{q}(l) = \sum_{|P|=l} c_P(t)^2
\]
(70)
The important difference is that \(\tilde{q}\) can be complex, so the distribution lives in the complex plane. At infinite temperature, \(\beta = 0\), the distribution \(\tilde{q} = q\), since then, due to the properties of the EPR state, the operator \(Q_L(-t)\) as in Eq. (45) is Hermitian and the coefficients \(c_P\) in Eq. (46) are real. By using the properties of the TFD state, as in Eq. (29), we rewrite the action of the operator on the left at time \(-t\) as,
\[
Q_L(-t) |TFD\rangle = \sqrt{d} \ Q_L(-t) \sqrt{\rho_L} |EPR\rangle \\
= \sum_P c_P(t) P |EPR\rangle
\]
(71)
and on the right operator \(Q_R^T\) at time \(t\) as,
\[
Q_R^T(t) |TFD\rangle = \sqrt{d}\, \sqrt{\rho_L}\, \left( U^* Q^T U^T \right)^T_L |EPR\rangle \\
= \sqrt{d}\, \sqrt{\rho_L}\, Q_L(-t) |EPR\rangle \\
= \sum_P c_P^*(t) P |EPR\rangle
\]
(72)
The success criterion of Section 4.1.2, following Ref. [76], is developed by analyzing the overlap of \(Q_R^T(t) |TFD\rangle\) with \(\exp(igV) Q_L(-t) |TFD\rangle\) (here we take the decoder \(D = \mathbb{1}\), meaning we are interested in the operator \(Q^T\) on the right). The teleportation succeeds whenever the LR coupling acts similarly on all Pauli strings, generating a single phase $\exp(i\phi)$, and the overlap, i.e., the two-point function, is maximal for any input operator.
For holographic systems, it is shown that perfect size-winding occurs such that the coefficients in the operator expansion take the form [74, 75],
$$c_P(t) = e^{i\alpha|P|}r_P(t) \quad ; \quad r_P(t) \in \mathbb{R}$$
(73)
and thus the expansions in Eqs. (71) and (72) differ only by a phase linear in the operator size: the action of the operator on the left carries the opposite phase winding to that of the operator on the right. As derived in the previous sections, the operator $V$ acts as,
$$V(P|\text{EPR}) = \left(1 - \frac{4}{3}\frac{|P|}{N}\right)(P|\text{EPR})$$
(74)
leading to,
$$e^{igV}Q_L(-t)|\text{TFD}\rangle = \sum_P e^{i\left(\alpha - \frac{4g}{3N}\right)|P|} r_P(t)\, P\, |\text{EPR}\rangle$$
(75)
up to a constant phase, which we have dropped. For coupling strength $g = (\alpha \pm n\pi)3N/2$, the action of the coupling at $t = 0$ is the same as that of the operator $Q^T$ on the right at $t$: the phase on the left unwinds under the LR coupling to give the phase an operator on the right would carry. This perfect operator size winding occurs in models with a holographic dual, and the teleportation can be viewed as the unwinding of the left phase in the complex plane to produce the right phase. Away from the holographic limit, imperfect size winding is expected, in which case the phases in the operator expansion need not be linear in the size. We refer the reader to Refs. [74, 75] for proofs and a detailed discussion of size winding in holographic and non-holographic teleportation.
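The unwinding step can be checked with synthetic, perfectly wound coefficients. By the eigenvalue in Eq. (74), the coupling multiplies each string by $e^{-i(4g/3N)|P|}$ up to a global phase; choosing $g = 3N\alpha/2$ (the $n = 0$ choice) then maps $c_P \to c_P^*$, i.e., the left winding onto the right winding. A small self-contained sketch with made-up sizes and amplitudes (our own construction):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
sizes = rng.integers(0, N + 1, size=50)       # |P| for a synthetic set of Pauli strings
r = rng.random(50)
r /= np.linalg.norm(r)                        # real amplitudes r_P, normalized
alpha = 0.3
c_left = np.exp(1j * alpha * sizes) * r       # perfectly size-wound coefficients, Eq. (73)

g = 3 * N * alpha / 2                         # unwinding coupling strength (n = 0)
phase = np.exp(-1j * (4 * g / (3 * N)) * sizes)   # phase from exp(igV), up to a global phase
c_after = phase * c_left

# the coupling reverses the winding: left coefficients become the right-side c_P^*
assert np.allclose(c_after, np.conj(c_left))
```

Here $e^{-i \cdot 2\alpha |P|}$ flips $e^{i\alpha|P|}$ to $e^{-i\alpha|P|}$, which is exactly the conjugate winding appearing in Eq. (72).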
As noted in the previous subsection, the thermal OTOC and the two-point function encode crucial information about the mechanism of teleportation and the success fidelity. These are measurable in present-day quantum simulators. Successful measurements of the infinite-temperature OTOC [98, 109–115, 160], the finite-temperature OTOC [116], and a teleportation protocol [98] have been achieved in recent years. Thus, we now turn towards quantum simulations to summarize the current state of the art. We first describe the platforms available and then discuss some important results that take the initial steps towards conducting holographic studies in the lab.
### 5 Quantum simulation platforms
Realizing the ideas described in this review requires quantum simulators that can controllably prepare desired quantum states and realize suitable Hamiltonians. In this regard, many-body quantum simulation platforms based on ultracold gases [161, 162], trapped ions [88, 89], Rydberg atoms [90, 163–165], superconducting circuits [166, 167], nuclear magnetic resonators [168–170], and photonic systems [171, 172] have demonstrated tremendous potential to simulate useful physical models and phenomena from various fields of physics and beyond. These capabilities are enabled by relatively clean systems, a significant degree of control over experimental parameters, strong tunable interactions between the particles, and single-particle addressability in some cases. In this section, we will describe two of these experimental platforms – based on Rydberg atoms and trapped ions – that show promise for exploring the physics described in this review.
#### 5.1 Rydberg atoms
A Rydberg atom is an atom with a highly excited electron, i.e. one in an orbital with a large principal quantum number $n$. One of the main advantages of using Rydberg atoms for quantum simulation is their strong dipole moment, which leads to strong inter-atomic interactions [90, 165, 173]. Because the electron is in a highly excited state, the radius of its orbit in a Rydberg atom is on the order of a few microns, which is thousands of times larger than that of typical ground state atoms. Therefore, the atom is easily polarizable and easily acquires a strong dipole moment (relative to other energy scales in experiments). The resulting strong dipole interactions allow researchers to simulate, e.g., quantum many-body Hamiltonians, or to realize universal quantum gates that can be used to build a quantum computing architecture. Popular candidate species for Rydberg atoms have been $^{87}$Rb, $^{88}$Sr, and $^{171}$Yb.
An electron is typically excited to one of the atom’s Rydberg states via a two-photon transition. Once excited, the atom in the Rydberg state interacts with other atoms in that Rydberg state via a van-der-Waals interaction, $V_{ij} \propto n^{11}/r_{ij}^6$. At typical inter-particle separations in these experiments, $\sim 500 – 1000$ nm, ground state atoms are nearly non-interacting, and the only interactions occur between Rydberg atoms. Advances in trapping and laser cooling [174–176], and more recent ideas involving atom-by-atom assemblies with trap rearrangements [177–182], have led to successful efforts in near-deterministic creation, trapping, and loading of
large numbers of Rydberg atoms in a periodic array in space [183–185].
A qubit can be encoded in these atoms using two internal atomic states, e.g., two long-lived hyperfine ground states (i.e. states with a small principal quantum number). The qubit states can be coupled to the Rydberg state via laser pulses.
The essential ingredient offered by Rydberg atoms is that they have strong interactions. Using this, experimentalists can implement a controlled-Z gate, $\exp(-i\pi |e_i e_j\rangle \langle e_i e_j|)$, between spatially nearby qubits $i$ and $j$. A naive way to implement this gate involves directly accruing a phase proportional to the Rydberg-Rydberg interaction strength. This naive scheme, however, is sensitive to the distance between the atoms, and could therefore lead to large errors due to atomic motion. An alternative scheme to realize fast high-fidelity entangling gates between Rydberg atoms, proposed in Ref. [186, 187], uses the phenomenon called Rydberg blockade.
Rydberg blockade arises when the van-der-Waals interactions are so strong, e.g. due to large $n$, that having two Rydberg atoms near each other is energetically too expensive \footnote{Technically, when one atom is in a Rydberg state, the energy of an adjacent atom’s Rydberg state is shifted by an amount equal to the van-der-Waals interaction. The latter atom will not be excited to the Rydberg state when the two-photon Rabi coupling is much smaller than the laser detuning plus the van-der-Waals interaction.}. This so-called Rydberg blockaded regime has been experimentally observed [188–192]. Recently, Rydberg blockade has led to the first experimental evidence for quantum scar states [183] and topological quantum spin liquids [193].
Essential for simulating the protocols discussed in this review, the Rydberg blockade underpins the implementation of the entangling gate between qubits. The entangling scheme involves coupling ground state qubits to the Rydberg state via laser pulses, and is described in detail in Appendix B. Entanglement using Rydberg blockade has been widely realized experimentally [188, 191, 194–198]. Experiments have demonstrated gate fidelities exceeding 90% for entangling gates, and up to 99.6% for single-qubit gates [194]. To realize universal quantum computing, it is sufficient to have the controlled-Z gate together with arbitrary single-qubit rotations, which can be implemented via magnetic fields or stimulated Raman transitions.
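A minimal numerical sketch (ideal, noiseless gates; our illustration, not from the cited experiments) of how the controlled-Z gate, combined with single-qubit rotations, generates entanglement:

```python
import numpy as np

# Minimal sketch: the blockade-mediated controlled-Z gate plus single-qubit
# rotations create a Bell pair.  Matrices are the ideal (error-free)
# unitaries; experimental fidelities are quoted in the text.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard
I2 = np.eye(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])              # exp(-i*pi*|ee><ee|) on the qubit subspace

psi = np.kron(H, H) @ np.array([1.0, 0, 0, 0])   # |++>
psi = CZ @ psi
psi = np.kron(I2, H) @ psi                       # -> (|00> + |11>)/sqrt(2)
print(np.round(psi, 6))
```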
#### 5.2 Trapped ions
Trapped ion chains are one of the most promising platforms for analog and digital quantum simulation. Currently offering the best fidelities for digital quantum gates, they form, along with superconducting circuits, one of the pillars of today’s NISQ devices. Popular candidate species for trapped ions have been $^{171}$Yb$^+$ and $^{40}$Ca$^+$. Qubit states are encoded in two long-lived electronic states of the ions, which are coherently manipulated either by narrow-linewidth laser fields (optical qubits) or by microwave fields (hyperfine qubits). State-dependent interactions are mediated by laser fields that couple to the ions’ electronic and motional degrees of freedom, eventually providing spin-spin interactions for analog quantum simulators and a universal gate set for digital quantum computers. For simplicity, in this review, we will focus on optical qubit systems.
A 1D chain of ions is trapped in a Paul trap, which consists of an oscillating quadrupole field that provides, on average, a confining force on the ions\footnote{It is known from Earnshaw’s theorem that charged particles cannot be trapped with a static electric field; the oscillating quadrupole field is the simplest geometry which can trap charged particles.}. Due to being electrically charged, the ions experience Coulomb repulsion from each other. This repulsion, together with the confining force provided by the trap, results in a nearly periodic array of trapped ions in space, and has yielded long one-dimensional ion chains for quantum simulation and computing [199, 200].
Although the ions interact via Coulomb interactions, these interactions are independent of the ions’ internal state, and therefore do not provide qubit interactions. That is, unlike the Rydberg atoms where van-der-Waals interactions give qubit interactions, the ions do not directly have qubit interactions. Instead, effective qubit interactions are obtained by coupling the ions to motional degrees of freedom, which are the normal modes of the chain, by shining bichromatic laser fields over the ion chain. The normal mode excitations can be found classically by solving the normal mode equations in the limit of large transverse trapping frequency [201, 202].
There are two main schemes for realizing qubit interactions using these normal modes. The first was developed by Cirac and Zoller [203], which relies on having zero phonons in the ion chain during normal operations, and exciting one phonon during the entangling operation. This scheme therefore requires the system to be cooled to the motional ground state, i.e. the state with zero phonons [204]. The second scheme, which is more commonly adopted nowadays, was developed by Mølmer and Sørensen [205]. This scheme does not require cooling the ion chain to its motional ground state. Understanding how both the schemes work requires some understanding of the physics of the ion-laser coupling, which is described in detail in Appendix C.
Essential for simulating the protocols in this review, the Molmer-Sorensen scheme implements the entangling operation $\exp(-i\theta \sigma_i^z \sigma_j^z)$ between two qubits $i$ and $j$. This scheme can, in principle (up to caveats about ion spacing and normal mode frequency spacing), implement this gate between any two qubits $i$ and $j$ in a finite time, and thus achieves all-to-all connectivity between the qubits. Two-qubit Molmer-Sorensen gates have been widely realized in experiments [206–211], with the highest current gate fidelity in the range of 99.9% [212].
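A minimal sketch of the ideal Mølmer-Sørensen-type operation $\exp(-i\theta\sigma_i^z\sigma_j^z)$ (noiseless; our illustration): at $\theta = \pi/4$ it maps the product state $|++\rangle$ to a maximally entangled state, as the Schmidt coefficients confirm:

```python
import numpy as np

# Sketch (ideal, noiseless) of the Molmer-Sorensen entangling operation
# exp(-i*theta*sigma_z sigma_z): at theta = pi/4 it takes the product state
# |++> to a maximally entangled state.
Z = np.diag([1.0, -1.0])
ZZ = np.kron(Z, Z)
theta = np.pi / 4
U = np.diag(np.exp(-1j * theta * np.diag(ZZ)))   # exp(-i*theta*ZZ) is diagonal

plus = np.ones(2) / np.sqrt(2)
psi = U @ np.kron(plus, plus)

# Schmidt coefficients of the 2-qubit state: both 1/sqrt(2) => maximally entangled
s = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
print(np.round(s, 6))
```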
### 6 Quantum simulation of many-body models
The quantum simulation platforms discussed above can realize a universal quantum gate set, and can therefore in principle realize any unitary quantum evolution. A powerful application of quantum simulation, from the perspective of holography, would be to realize holographic models. The SYK model is a particularly simple 0+1 dimensional model which at large $N$ and low energy is dual to nearly AdS$_2$ gravity [62]. This model can potentially be realized in quantum simulators, although it requires a number of quantum gates and ancillary qubits that is likely only within reach of future devices. In this section, we will briefly review how to simulate the SYK model with a quantum circuit. Later, in the next section, we discuss how to prepare the TFD state. These two ideas, preparing the TFD and realizing the model, can be seamlessly incorporated into the quantum protocols for measuring the OTOC and implementing the HPR and wormhole teleportation protocols. More details follow in Section 7.
The SYK model is a model of interacting Majorana particles [59–64],
$$\hat{H} = \frac{1}{4 \times 4!} \sum_{p,q,r,s=0}^{N-1} J_{pqrs} \gamma_p \gamma_q \gamma_r \gamma_s,$$
(76)
where $\gamma_i$ are Majorana operators, and $J_{pqrs}$ are real-valued scalars drawn randomly from a normal distribution with variance $\sigma^2 = 3J^2/N^3$. For simulating on a quantum circuit, one first writes the SYK model in terms of complex fermions, and then maps it to a spin Hamiltonian via e.g. the Jordan-Wigner transformation. Due to the Jordan-Wigner transformation, a typical term in the Hamiltonian consists of a four-qubit exchange interaction, multiplied by long Jordan-Wigner strings, for example,
$$\hat{H}_{pqrs} \propto \left( \prod_{m=s}^{p-1} \sigma_m^z \right) \left( \prod_{m=q}^{r-1} \sigma_m^z \right) \sigma_p^{\alpha_p} \sigma_q^{\alpha_q} \sigma_r^{\alpha_r} \sigma_s^{\alpha_s},$$
(77)
where $\sigma_i^{\alpha_i}$ is a spin raising or lowering operator on qubit $i$. This term, and similarly for all the other terms in the Hamiltonian, can be realized utilizing local and collective Molmer-Sorensen gates. Time evolution with the Hamiltonian can be implemented in a Trotterized fashion [213, 214].
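The Jordan-Wigner construction above can be sketched with a few lines of exact numerics (our illustration; for brevity the sum is restricted to ordered indices $p<q<r<s$, which differs from the unrestricted sum in Eq. (76) only by combinatorial factors absorbed into the couplings):

```python
import numpy as np
from functools import reduce

# Sketch: build a small SYK Hamiltonian (Eq. (76)) on qubits via the
# Jordan-Wigner transformation.  N and J are illustrative.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2)

def kron_all(ops):
    return reduce(np.kron, ops)

def majoranas(n_qubits):
    """Jordan-Wigner Majoranas: gamma_{2k} = Z..Z X_k, gamma_{2k+1} = Z..Z Y_k."""
    gammas = []
    for k in range(n_qubits):
        left, right = [Z] * k, [I2] * (n_qubits - k - 1)
        gammas.append(kron_all(left + [X] + right))
        gammas.append(kron_all(left + [Y] + right))
    return gammas

rng = np.random.default_rng(0)
N, J = 8, 1.0                        # 8 Majoranas on 4 qubits
gam = majoranas(N // 2)
sigma = np.sqrt(3 * J ** 2 / N ** 3)   # variance convention from the text
dim = 2 ** (N // 2)
H = np.zeros((dim, dim), dtype=complex)
for p in range(N):
    for q in range(p + 1, N):
        for r in range(q + 1, N):
            for s in range(r + 1, N):
                H += rng.normal(0, sigma) * gam[p] @ gam[q] @ gam[r] @ gam[s]
```

One can verify that the Majoranas satisfy $\{\gamma_p, \gamma_q\} = 2\delta_{pq}$ and that the resulting Hamiltonian is Hermitian, as required for Trotterized time evolution.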
Apart from the SYK model, there are some recent works on Hamiltonian simulation of certain gauge theories, which are also based on Trotterization of the Hamiltonian [215–224]. It is possible to construct the ground state for these systems, and measure some observables, via a mapping to a qubit system [215]. A detailed discussion of these is beyond the scope of this review. We suggest interested readers refer to these references for further details.
In Appendix B and Appendix C, we also review the simulation of other quantum many-body spin models that naturally arise in quantum simulation platforms based on trapped ions or Rydberg atoms.
### 7 Measurement protocols
We can devise an implementation of the quantum protocols described in this review to measure OTOCs, and realize a simulation of teleportation across wormholes, using the quantum simulators described in Section 5. In this section, we will describe concrete quantum circuits to realize these protocols, and highlight a few pioneering experiments that have already accomplished these feats.
#### 7.1 Protocols for OTOC
First, we describe two protocols to measure OTOCs. The first protocol measures the thermal OTOC defined in Eq. (36) using a TFD state. The second protocol obtains the infinite-temperature OTOC from correlating measurements on two sets of qubits that were prepared as a product of correlated qubits in randomized bases. The essential idea of the latter protocol is that an ensemble of correlated qubits initialized in randomized bases realizes a state closely related to EPR pairs, which is the infinite-temperature TFD.
##### 7.1.1 Thermal OTOC from TFD
We recall the definition of the regularized thermal OTOC, $O_{th}(\beta, t)$, shown in Eq. (36). This OTOC can be naturally interpreted as
$$O_{th}(\beta, t) = \langle \psi | V^\dagger(t) \otimes [V(t)]^T | \psi \rangle,$$
(78)
where \(|\psi\rangle = (W \otimes 1)|\text{TFD}\rangle\). Note that
\[
[V(t)]^T = [\exp(iHt)V(0)\exp(-iHt)]^T \\
= [\exp(-iH^*t)V^T(0)\exp(iH^*t)]
\]
is the Heisenberg operator for \(V^T\) at time \(t\) when evolved with \(-H^*\). The interpretation in Eq. (78) suggests an implementation as follows:
- Prepare \(|\text{TFD}\rangle\) on \(\mathcal{H}_L \otimes \mathcal{H}_R\).
- Apply \(W\) on, say, the left system. This is possible for unitary \(W\).
- Evolve the left and right systems with \(H_L\) and \(-H_R^*\). The right system should be evolved with \(-H_R^*\) for the reason explained above.
- Measure \(\langle V_L^\dagger \otimes V_R^T \rangle\).
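The four steps can be verified end-to-end with exact numerics on a small system. The sketch below (our construction; the random Hamiltonian and the choices of $V$, $W$, $\beta$, $t$ are illustrative) builds $|\text{TFD}\rangle \propto (e^{-\beta H/2} \otimes 1)|\text{EPR}\rangle$, runs the protocol of Eq. (78), and checks it against the regularized thermal OTOC written as a trace, $\mathrm{Tr}[\sqrt{\rho}\, W^\dagger V^\dagger(t) W \sqrt{\rho}\, V(t)]$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d = 4                                    # two qubits
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Hm = (A + A.conj().T) / 2                # illustrative Hamiltonian
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
V = np.kron(Z, np.eye(2))
W = np.kron(np.eye(2), X)
beta, t = 1.0, 0.7

# TFD = (e^{-beta H/2} x 1)|EPR>, normalized
epr = np.eye(d).reshape(d * d) / np.sqrt(d)
tfd = np.kron(expm(-beta * Hm / 2), np.eye(d)) @ epr
tfd /= np.linalg.norm(tfd)

# Protocol: apply W on the left, evolve left with H and right with -H*,
# measure V^dagger (left) x V^T (right)
psi = np.kron(W, np.eye(d)) @ tfd
Vt = expm(1j * Hm * t) @ V @ expm(-1j * Hm * t)
Vt_T = expm(-1j * Hm.conj() * t) @ V.T @ expm(1j * Hm.conj() * t)
protocol = psi.conj() @ (np.kron(Vt.conj().T, Vt_T) @ psi)

# Direct regularized thermal OTOC: Tr[sqrt(rho) W^dag V^dag(t) W sqrt(rho) V(t)]
rho_half = expm(-beta * Hm / 2)
rho_half /= np.sqrt(np.trace(rho_half @ rho_half).real)
direct = np.trace(rho_half @ W.conj().T @ Vt.conj().T @ W @ rho_half @ Vt)
print(np.allclose(protocol, direct))     # -> True
```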
Next, we describe one method to prepare \(|\text{TFD}\rangle\). Realizing the remaining steps, for example on a digital quantum computer, is straightforward.
Thermofield double states have been prepared for particular models and small system sizes on a trapped-ion based digital quantum computer. The technique used to prepare the TFD is a quantum-classical hybrid technique called the Quantum Approximate Optimization Algorithm (QAOA) [225], which has more recently been called the Quantum Alternating Operator Ansatz [226] (and denoted QAOA as well).

**Fig. 11** Schematic of the QAOA algorithm to prepare the TFD for the transverse Ising model. The algorithm consists of a parameterized quantum circuit, \(U(\vec{\theta}) = \cdots U_2(\theta_2)U_1(\theta_1)\), implemented on a quantum computer, where the angles \(\theta\) are found by a classical computer. Usually, one finds these angles in a classical-quantum feedback loop, where the classical computer updates \(\vec{\theta}\) based on the output of the quantum computer. Details of the gates used in the quantum circuit are in Fig. 12(a).
QAOA is a variational algorithm [227] originally proposed to minimize Hamiltonians. The algorithm is schematically drawn in Fig. 11. It produces a parameterized ansatz wavefunction,
\[
|\psi(\vec{\theta})\rangle = U(\vec{\theta})|\psi(0)\rangle,
\]
where the unitary \(U\) is composed of a set of quantum gates \(\{U_i\}\), and these quantum gates are parameterized by gate angles \(\vec{\theta} \equiv \{\theta_i\}\), i.e., \(U(\vec{\theta}) = \cdots U_2(\theta_2)U_1(\theta_1)\). The parameters \(\vec{\theta}\) are chosen such that \(|\psi(\vec{\theta})\rangle\) minimizes the Hamiltonian. In the most common setting, the parameters \(\vec{\theta}\) are chosen in a quantum-classical feedback loop. A classical computer feeds in \(\vec{\theta}\), the quantum computer returns \(\langle \psi(\vec{\theta})|H|\psi(\vec{\theta})\rangle\), and the loop continues until \(\langle \psi(\vec{\theta})|H|\psi(\vec{\theta})\rangle\) is minimized over the space of all \(\vec{\theta}\). The classical computer can use any classical optimization algorithm, e.g. gradient descent, to find the \(\vec{\theta}\) that minimizes the Hamiltonian. For small system sizes, one can compute the optimal parameters \(\vec{\theta}\) classically, without requiring a classical-quantum feedback loop.
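The feedback loop can be mimicked entirely classically on a tiny example. The sketch below (our toy illustration; the two-qubit transverse Ising Hamiltonian and the two-layer ansatz are arbitrary choices) lets scipy's Nelder-Mead optimizer play the role of the classical computer:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# Toy variational loop: a 2-layer QAOA-style ansatz for the ground state of
# a 2-qubit transverse Ising Hamiltonian.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2)
H = -np.kron(Z, Z) - (np.kron(X, I2) + np.kron(I2, X))   # illustrative TFIM

Hc = -np.kron(Z, Z)                      # "cost" layer generator
Hmix = np.kron(X, I2) + np.kron(I2, X)   # "mixer" layer generator
psi0 = np.ones(4) / 2.0                  # |++>, ground state of the mixer

def energy(theta):
    psi = psi0
    for gamma, beta in zip(theta[0::2], theta[1::2]):
        psi = expm(-1j * beta * Hmix) @ (expm(-1j * gamma * Hc) @ psi)
    return (psi.conj() @ H @ psi).real

res = minimize(energy, x0=[0.1, 0.1, 0.1, 0.1], method="Nelder-Mead")
E0 = np.linalg.eigvalsh(H).min()
print(res.fun, E0)   # variational energy approaches the true ground energy
```

By the variational principle the optimized energy stays above the exact ground energy, and the optimizer should improve on the bare mixer ground state.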
QAOA, and related variational algorithms such as the Variational Quantum Eigensolver, have been used in several applications to minimize target Hamiltonians [228–236]. Finding new applications of variational algorithms is an active area of research [94–97, 237–239].
Recently, QAOA has been used [94] for preparing the TFD for the transverse Ising model. One possibility for the basic building block of the quantum circuit for preparing the TFD for this model is shown in Fig. 12(a). Different models require different building blocks for the variational circuit. The variational angles \(\vec{\theta}\) can be chosen such that the fidelity
\[
F(\vec{\theta}) = \left|\langle \psi(\vec{\theta})|\text{TFD}\rangle\right|^2
\]
is maximized, i.e.
\[
\vec{\theta}_{\text{opt}} = \arg\max_{\vec{\theta}} F(\vec{\theta}).
\]
Maximizing the fidelity, however, is restricted to small systems. This is because classically calculating \(F(\vec{\theta})\) or measuring \(F(\vec{\theta})\) from the quantum computer are both exponentially difficult tasks.
To mitigate the above challenge, there are alternative proposals to prepare the TFD by maximizing the thermal entropy, or as the ground state of a local parent Hamiltonian for cases where the target Hamiltonian satisfies the eigenstate thermalization hypothesis [92, 148]. Specifically for the transverse Ising model (\(h = 0\) in the Hamiltonian (37)), the parent Hamiltonian may take the form
\[
H_{\text{parent}}(\lambda) = H_A + H_B + H_{AB}(\lambda),
\]
where \(H_A\) and \(H_B\) are the transverse Ising Hamiltonian in the A and B chains respectively, and
\[
H_{AB} = \lambda J \sum_{i=1}^{N} (\sigma^y_{iA}\sigma^y_{iB} - \sigma^x_{iA}\sigma^x_{iB}).
\]
Here, \(\lambda\) must be appropriately chosen for each temperature \(T\) labeling the TFD. The above parent Hamiltonian exactly produces the TFD at \(T = 0\) and \(T = \infty\).
Choosing $\lambda = 0$ produces the TFD at $T = 0$, which is the product of the transverse Ising model’s ground states on the A and B chains. Choosing $\lambda = \infty$ produces the TFD at $T = \infty$, which is a product of EPR pairs. At intermediate temperatures, the ground state of $H_{\text{parent}}(\lambda)$ produces the TFD (at $\lambda$-dependent temperature) to a good approximation [93], where the approximation may be improved by adding more terms to $H_{AB}$. We note that this realization of the TFD model as the ground state of a parent Hamiltonian is fairly general for chaotic Hamiltonians [92], see [148] for SYK model. The variational ideas to prepare the TFD state are also generic, and can in principle be applied to prepare the TFD state for other models, e.g. the SYK model [96].
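A quick numerical check (our sketch, for a single A-B pair of spins with $J = 1$) confirms the $\lambda \to \infty$ statement: the ground state of the interchain coupling $\sigma^y_A\sigma^y_B - \sigma^x_A\sigma^x_B$ alone is the EPR pair:

```python
import numpy as np

# lambda -> infinity limit of the parent Hamiltonian, for one A-B spin pair:
# the ground state of YY - XX should be the EPR pair (|00> + |11>)/sqrt(2).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Hab = np.kron(Y, Y) - np.kron(X, X)              # J set to 1
evals, evecs = np.linalg.eigh(Hab)
ground = evecs[:, 0]
ground = ground * np.exp(-1j * np.angle(ground[0]))   # fix the global phase
print(np.round(ground, 6), evals[0])
```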
After preparing the TFD, measuring the thermal OTOC requires evolving one of the halves forward in time, i.e. with $+H$, and the other backwards in time, i.e. with $-H^*$ (see Fig. 12(b)). Hamiltonian evolution in a digital quantum computer is possible as Trotterized evolution, with sufficiently small Trotter step $dt$. Evolution with $-H^*$ can easily be achieved due to the availability of a universal gate set. The time up to which the system can be evolved is currently limited by gate errors, which restricts high-fidelity quantum simulation to a few Trotter steps.
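Trotterization can be illustrated with a small exact-numerics sketch (ours; the two-qubit Hamiltonian split is an arbitrary choice), comparing $n$ first-order steps $\left(e^{-iH_x\, dt}\, e^{-iH_{zz}\, dt}\right)^n$ with the exact $e^{-iHt}$:

```python
import numpy as np
from scipy.linalg import expm

# Sketch of Trotterized evolution: split an illustrative 2-qubit Hamiltonian
# H = Hzz + Hx and compare n first-order Trotter steps with exact evolution.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2)
Hzz = np.kron(Z, Z)
Hx = np.kron(X, I2) + np.kron(I2, X)
H = Hzz + Hx
t = 1.0

def trotter_error(n):
    dt = t / n
    step = expm(-1j * Hx * dt) @ expm(-1j * Hzz * dt)
    U = np.linalg.matrix_power(step, n)
    return np.linalg.norm(U - expm(-1j * H * t), 2)

errs = [trotter_error(n) for n in (4, 8, 16, 32)]
print(errs)   # error shrinks roughly linearly in dt for first-order Trotter
```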

**Fig. 12** (a) Building block of the QAOA circuit that prepares the thermofield double state for the transverse Ising model. The XX gate is the local Molmer-Sørensen gate discussed in the main text, the ZZ gate is the analog of the Molmer-Sørensen gate for interactions along the $z$ direction, and $R_z$ is a single-qubit rotation around $z$. The gate angles are found classically. (b) Measurement protocol for the thermal OTOC. The block $U(\vec{\theta})$ is shown in (a). Depth $p$ QAOA repeats this block $p$ times, with different angles $\vec{\theta}_1, \vec{\theta}_2, \ldots, \vec{\theta}_p$.
##### 7.1.2 Infinite-temperature OTOC from randomized initial states
The protocol described above can be readily applied to measure the OTOC at $T = \infty$. In particular, $|TFD\rangle$ at $T = \infty$ is equal to $|EPR\rangle$, which can be readily prepared in the lab.
However, there are also other protocols to measure OTOCs at $T = \infty$. One of them, proposed in Ref. [101] and implemented in Ref. [112] for measuring the OTOC at $T = \infty$, obtains the OTOC from randomized measurements of qubits. This protocol can be extended to large finite $T$ as well, by perturbatively expanding the thermal factor $\exp(-\beta H)$ in powers of $\beta$. Other experiments have used similar ideas with randomized measurements to measure OTOCs [160, 240].
The infinite-temperature protocol in Ref. [101, 112] works as follows:
- Prepare two sets of $N$ qubits, one in $|0^{\otimes N}\rangle$ and the other in $|x\rangle$, where $|x\rangle$ is an $N$-qubit product state in the computational basis. We will label the two sets of $N$ qubits as $1 \leq i \leq N$ and $N + 1 \leq i \leq 2N$. None of the operations performed will involve any entanglement between the first $N$ and the second $N$ qubits, therefore we can perform experiments on these in separate experimental runs. Then, each experimental run needs to be performed only on $N$ qubits at a time, which is a significant technical advantage over having $2N$ qubits at one time.
- Apply $N$ independent single-qubit Haar-random unitaries $u_i$ on qubits $1 \leq i \leq N$, and the same $u_i$ on the qubits $N + 1 \leq i \leq 2N$.
- Apply $W$ (assumed to be unitary) on the first $N$ qubits.
- Evolve both sets of $N$ qubits with the Hamiltonian $+H$. Note that evolution with $-H^*$ is not required.
- Measure $\langle V^T \rangle$ on qubits $1 \leq i \leq N$ and $\langle V^T \rangle$ on qubits $N + 1 \leq i \leq 2N$. Denote the product of these two measurements as $f_x$.
- For each $x$, average over the single-qubit Haar-random unitaries $u_i$. Denote the average as $\overline{f}_x$.
- The weighted sum, $\frac{1}{2^N} \sum_{x=0}^{2^N-1} (-2)^{-|x|} \overline{f}_x$, gives the OTOC, $O_\infty(t) \equiv \hat{O}_{\text{th}}(\beta = 0, t)$. Here, $|x|$ is the Hamming weight of $x$.
The crucial step in understanding why this protocol works comes from the realization that the initial state’s density matrix, averaged over the Haar-random unitaries \(\{u_i\}\) and summed over bit strings \(x\) with the weight \((-2)^{-|x|}\), is proportional to the SWAP operator:
\[
\frac{1}{N_u 2^N} \sum_{u,x} (-2)^{-|x|}\, (u \otimes u) \left( |0^{\otimes N}\rangle\langle 0^{\otimes N}| \otimes |x\rangle\langle x| \right) (u^\dagger \otimes u^\dagger) \propto \text{SWAP},
\]
(84)
where \( \text{SWAP} = \sum_{x,y} |x\rangle \langle y| \otimes |y\rangle \langle x| \) swaps the state of the two systems and \( u = \bigotimes_{i=1}^{N} u_i \). For brevity, we will ignore normalization factors for the state that realizes SWAP. The sum in Eq. (84) should be understood as averaging over the unitaries first, and then summing over the bit strings \( x \), as described in the protocol above.
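This identity can be checked by direct sampling. The sketch below (ours; $N = 1$ qubit per copy, Haar samples drawn via QR decomposition of a Ginibre matrix) accumulates the weighted average and compares it with $\text{SWAP}/8$, the proportionality constant one can work out analytically for a single qubit:

```python
import numpy as np

# Numerical check (N = 1 qubit per copy) that the weighted, Haar-averaged
# initial density matrix is proportional to SWAP.
rng = np.random.default_rng(0)

def haar_u2():
    # single-qubit Haar-random unitary via QR of a Ginibre matrix
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    Q, R = np.linalg.qr(A)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

ket0 = np.array([[1.0, 0.0]]).T
ket1 = np.array([[0.0, 1.0]]).T
P0, P1 = ket0 @ ket0.T, ket1 @ ket1.T

avg = np.zeros((4, 4), dtype=complex)
n_samples = 20000
for _ in range(n_samples):
    u = haar_u2()
    uu = np.kron(u, u)
    for weight, Px in ((1.0, P0), (-0.5, P1)):   # (-2)^{-|x|} for x = 0, 1
        avg += weight * uu @ np.kron(P0, Px) @ uu.conj().T
avg /= n_samples * 2                              # 1/(N_u 2^N)

SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
print(np.round(avg.real, 3))   # approaches SWAP / 8
```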
The SWAP state has the property that expectation values of operators acting on the two copies collapse to a single operator trace,
\[
\mathrm{Tr}\left[\text{SWAP}\,(O_L \otimes O_R)\right] = \mathrm{Tr}\left[O_L\, O_R\right],
\]
(85)
and as a corollary,
\[
\mathrm{Tr}\left[\text{SWAP}\left(W^\dagger(0)\, V^\dagger(t)\, W(0) \otimes (V(t))^T\right)\right] = \mathrm{Tr}\left[W^\dagger(0)\, V^\dagger(t)\, W(0)\, (V(t))^T\right].
\]
(86)
Viewing the right-hand side of Eq. (86) as a circuit, the protocol described above is a direct implementation of this circuit, and was implemented in Ref. [112] for the long-ranged Ising model with a transverse field. It is worthwhile to reemphasize the key innovations of this method: using randomized measurements halved the number of qubits compared to that required by other measurement protocols, and eliminated the need for evolution with \(-H^*\). Ref. [101] also proposed a related alternative protocol with global Haar-random unitaries, instead of local Haar-random unitaries, which proceeds similarly to the local protocol, except that the initial states for both sets of \( N \) qubits are \( |0^{\otimes N}\rangle \), and there are no weighting factors \((-2)^{-|x|}\).
#### 7.2 Simulating teleportation across a wormhole in a trapped ion quantum computer
There are also experiments that have implemented protocols to simulate the teleportation of one qubit across the analog of a wormhole [98, 241]. In one such experiment, Ref. [98] implemented two protocols that simulate the teleportation of one qubit across the analog of an infinite-temperature wormhole, i.e. on EPR states. One of those protocols probabilistically teleports one qubit, and the other deterministically teleports the qubit. Here, we describe the deterministic protocol, and refer the reader to Ref. [98] for the probabilistic protocol. The protocol is based on the Yoshida-Kitaev version [73] of the Hayden-Preskill protocol [72] [see Section 3.3].
The experiment in Ref. [98] implemented a particular instance of Fig. 7 with seven qubits, as follows:
– The experiment begins by initializing qubit 1 in \( |\psi\rangle \), which is the state to be teleported, and qubits 2-7 as EPR pairs, with qubits 2 and 5 forming one pair, qubits 3 and 4 forming one pair, and qubits 6 and 7 forming one pair. Qubits 2-5 are analogous to the black hole and the past radiation, interpreted as \( B \) and $B'$ in Fig. 7, and qubits 6-7 are the ancillary pair for decoding, interpreted as $A'$ and $R'$.
– Then, they evolved qubits 1-3 with a maximally scrambling unitary $U$, i.e., a unitary which evolves all single-qubit Pauli operators into three-qubit Pauli strings, and evolved qubits 4-6 with $U^*$. The probabilistic protocol measured qubits 3 and 4 and terminated here [see Ref. [98]]; here we move on to the deterministic protocol.
– They then applied a Grover oracle $G = 1 - 2|\text{EPR}\rangle\langle\text{EPR}|$ on qubits 3 and 4. A circuit compilation trick allows $G$ to be implemented with a SWAP gate (performed classically by relabeling the qubits) followed by single-qubit $Y$ gates [see Fig. 13].
– Then, according to Fig. 13, one should evolve qubits 4-6 (which are now interpreted as $D'$ and $C'$) with $U^T$, apply $G = 1 - 2|\text{EPR}\rangle\langle\text{EPR}|$ on qubits 6 and 7, and evolve qubits 4-6 with $U^*$. As stated above, $G$ can be implemented with classical relabeling and $Y$ gates. At the end of this, $|\psi\rangle$ will have been successfully teleported to qubit 7 (which is interpreted as $R'$). This concludes the experiment in Ref. [98]. The experiment did not need to implement the final $Y$ and $U^T$ on qubits 4-6, because they do not affect the state of qubit 7.
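The compilation trick for $G$ can be verified in a few lines of numerics (our check; for two qubits the oracle equals $Y \otimes Y$ applied after a SWAP, with no extra global phase):

```python
import numpy as np

# Check of the compilation trick stated above: the two-qubit Grover oracle
# G = 1 - 2|EPR><EPR| equals a SWAP followed by a Y gate on each qubit.
Y = np.array([[0, -1j], [1j, 0]])
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
epr = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
G = np.eye(4) - 2 * np.outer(epr, epr.conj())
print(np.allclose(G, np.kron(Y, Y) @ SWAP))   # -> True
```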
Ref. [98] successfully demonstrated teleportation for various choices of $|\psi\rangle$, specifically $|\psi\rangle = |0\rangle$, $|\psi\rangle = |1\rangle$, $|\psi\rangle = (|0\rangle \pm |1\rangle)/\sqrt{2}$, and $|\psi\rangle = (|0\rangle \pm i |1\rangle)/\sqrt{2}$, with an average teleportation fidelity of 78%. The teleportation fidelity is below 100% due to gate errors in the experiment. The same protocol can teleport any single-qubit state $|\psi\rangle$.
Future experiments may use the above protocol to teleport multiple qubits with more Grover iterations, as well as simulate teleportation across the analog of a finite-temperature wormhole by appropriately generalizing the circuit, e.g., replacing $|\text{EPR}\rangle$ with $|\text{TFD}\rangle$ in the initial state, where $|\text{TFD}\rangle$ could for example be prepared by techniques outlined in Section 7.1.
### 8 Conclusions and Discussions
In this work, we have presented recent advances in realizing analogous models of gravity in the lab, in the sense of holography. In particular, our focus has been on wormhole-teleportation-inspired protocols for teleportation in many-body systems. The mechanism of the teleportation is governed by the size of the operator that is inserted just before the coupling is applied. We described the experimental protocols and observations of OTOCs and small-scale teleportation in state-of-the-art quantum simulators.
It should be noted that an exact model of gravity in the lab, where one can not only verify the holographic principle but also learn about gravity from the lab, is still not available. Our summary of the recent advances should be seen as progress in the theoretical translation between the tools and observables of gravity and those of many-body models available on a lattice, as well as progress in experimental technology. With proof-of-principle demonstrations done for holographic models that have a semi-classical dual, in the long run one can hope to study more complex bulk duals, involving stringy corrections, in a quantum lab.
It is always crucial to study the effects of experimental decoherence and noise sources when implementing protocols. We have not discussed them in this review, but one should keep in mind the limitations they pose and the possible remedies; see, for example, [93, 98, 112, 242, 243] for possible error sources and corrections. We discussed here that the behavior of the teleportation fidelity with time is a strong signature of the nature of the dynamics, namely generic scramblers versus holographic scramblers. Even better, the teleportation fidelity distinguishes genuine scrambling dynamics from decay due to decoherence [99]. Furthermore, it would be interesting to work out the validity of, and corrections to, the Hayden-Preskill protocol as well as the many-body teleportation protocol in the presence of errors [244].
We also note that the operator size distribution is a more refined description of time-evolved operators than the averaged OTOC presented here. It remains an open question how and when the size distribution discussed here relates to the usual notion of operator size [159], and to those accessible in experiments [245]. It is argued in [246–249] that the rate of change of momentum of a particle falling into the bulk spacetime is dual to the complexity of the dual operator at the boundary. This complexity essentially measures the growth of the size of the operator under time evolution. Some recent progress has been made towards understanding complexity for the dual field theory [250–263]; see [264] for a recent review. However, this program is still at an early stage of development. An interesting direction will be to develop this idea of operator growth using complexity as a possible diagnostic. This will not only enable us to make contact with certain predictions coming from holography but will also help us compare with other diagnostics that are measurable in experiments. Another important theoretical direction is to explore finite-temperature generalizations of many-body teleportation in a spirit similar to [158, 265, 266]. In recent times, several toy models of holography based on tensor network constructions have been proposed [46–52, 57]. In this context, it will be interesting to realize the thermofield double state and the teleportation protocol; perhaps [267] will provide a good starting point. This would pave the way towards testing some of the predictions coming from holography using interesting quantum many-body systems.
Finally, regarding the experimental prospects of connecting theoretical high-energy physics with experiments, we conclude by outlining directions other than wormhole teleportation. For example, some open directions are the realization of the SYK model as a simple model of holography [213], simple models of wormholes [268–270], time-shifted wormholes and the teleportation therein [271], and possibilities to use time-shifted wormhole teleportation to distinguish states with similar entanglement [272], among many others.
Acknowledgements We thank Hannes Fichtler, Benoit Vermersch, Norman Y. Yao and Peter Zoller for fruitful discussions, and Andreas Elben, Tarunet V. Zacha for collaboration on related projects. We thank Ana Maria Rey and Murray Holland for a careful reading of the manuscript. We thank Manoj K. Joshi for comments on the section on quantum simulation platforms, Benoit Vermersch for comments on the randomized measurement protocol for OTOCs, and Andreas Elben for various useful comments. A.B. would like to thank Wissam Chemissany and Nariman Chiraki and the speakers of the workshop “Quantum Information in CFT and AdS/CFT” (https://events.iitgn.ac.in/2022-icft/) for various useful discussions which made him interested in this particular topic. A.B. is supported by the Start-Up Research Grant (SRG/2020/001380) and the Mathematical Research Impact Centric Support Grant (MTR/2021/000490) of the Department of Science and Technology, Science and Engineering Research Board (India), and the Relevant Research Project grant (58/14.12/2021-BRNS) of the Board of Research in Nuclear Sciences (BRNS), Department of Atomic Energy, India. A.B. acknowledges the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 731473 (QuantERA via QT-FLAG) and the Austrian Science Foundation (FWF, P 32597 N). LJK also acknowledges the virtual hospitality of the Indian Institute of Technology Madras, Chennai, where parts of this work were presented in May 2020.
**Appendix A: Some details about the AdS/CFT dictionary**
Here we briefly sketch some details of the AdS/CFT dictionary. It has two aspects, kinematical and dynamical, which we review in turn below. For more details, interested readers are referred to [4, 117, 123, 273] and the citations therein.
**Kinematical Aspect:** First, we discuss the generators of the conformal group in $d$ dimensions. For simplicity, we assume that the underlying CFT is defined on a flat Minkowski background. The conformal group is generated by the following four types of transformations, whose generators we quote below [274].
\begin{align}
\text{Translation}(P_i) & \rightarrow i \partial_i, \\
\text{Rotation}(J_{ij}) & \rightarrow -i(x_i \partial_j - x_j \partial_i), \\
\text{Dilatation}(D) & \rightarrow -i x^i \partial_i, \\
\text{Special Conformal Transformation} (SCT)(K_i) & \rightarrow i(2x_i x^j \partial_j - x^2 \partial_i),
\end{align}
where $i,j$ take values from 0 to $d-1$, with 0 denoting the time coordinate. $J_{ij}$ includes both spatial rotations and boosts and is completely anti-symmetric in the $i,j$ indices. It is thus evident that the Poincaré group (consisting of translations and rotations) is a subgroup of the conformal group. The dilatation generator scales the coordinates by a constant factor, and a special conformal transformation (SCT) can be thought of as a translation preceded and followed by an inversion. It can be shown that, after suitable identifications, these generators satisfy an $SO(d,2)$ algebra.
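As a small consistency check (not part of the original text), representative commutators of these generators can be verified symbolically. The sketch below works in $d = 2$ with Euclidean index contractions for simplicity; the two commutators checked do not depend on the signature.

```python
import sympy as sp

# Symbolic check of part of the conformal algebra quoted above (d = 2,
# Euclidean index contractions; the commutators below are signature-independent).
x = sp.symbols('x0 x1', real=True)
f = sp.Function('f')(*x)
I, d = sp.I, len(x)

P = lambda i: (lambda g: I * sp.diff(g, x[i]))                      # translation
D = lambda g: -I * sum(x[j] * sp.diff(g, x[j]) for j in range(d))   # dilatation

def K(i):                                                           # SCT
    x2 = sum(xj**2 for xj in x)
    return lambda g: I * (2 * x[i] * sum(x[j] * sp.diff(g, x[j]) for j in range(d))
                          - x2 * sp.diff(g, x[i]))

comm = lambda A, B: sp.expand(A(B(f)) - B(A(f)))

# Translations carry scaling dimension +1, SCTs dimension -1:
assert sp.simplify(comm(D, P(0)) - I * P(0)(f)) == 0   # [D, P_i] =  i P_i
assert sp.simplify(comm(D, K(0)) + I * K(0)(f)) == 0   # [D, K_i] = -i K_i
```

The same pattern extends to the full algebra (e.g. $[K_i, P_j]$ closing into $D$ and $J_{ij}$), which is how one assembles the $SO(d,2)$ structure.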
The isometry generators of $AdS_{d+1}$ satisfy exactly this algebra, and they are in one-to-one correspondence with the global generators of the conformal group in one lower dimension [274].
Let us take the concrete example of $AdS_3/CFT_2$. For a $CFT_2$ we have 6 generators corresponding to the global conformal transformations. Let us first write $AdS_3$ in Poincare coordinates\footnote{We can use other coordinates also. For a detailed review please refer to [4, 117].}:
\begin{equation}
ds^2 = \frac{L^2(dz^2 - dt^2 + dx^2)}{z^2},
\end{equation}
whose boundary is located at $z = 0$; $t$ is the Lorentzian time. The isometry generators, obtained by solving the Killing equations, are [78, 123, 275, 276]:
\begin{align}
J_{01} &= i \left[ \left( \frac{L^2 + z^2 + t^2 + x^2}{2L} \right) \partial_t + \frac{tx}{L} \partial_x + \frac{tz}{L} \partial_z \right], \\
J_{02} &= i \left[ \left( \frac{-L^2 + z^2 + t^2 + x^2}{2L} \right) \partial_t + \frac{tx}{L} \partial_x + \frac{tz}{L} \partial_z \right], \\
J_{03} &= i \left[ -x \partial_t + t \partial_x \right], \\
J_{12} &= i \left[ -z \partial_z - t \partial_t - x \partial_x \right], \\
J_{13} &= i \left[ \left( \frac{L^2 + z^2 - t^2 - x^2}{2L} \right) \partial_x - \frac{tx}{L} \partial_t - \frac{xz}{L} \partial_z \right], \\
J_{23} &= i \left[ \left( \frac{-L^2 + z^2 - t^2 - x^2}{2L} \right) \partial_x - \frac{tx}{L} \partial_t - \frac{xz}{L} \partial_z \right].
\end{align}
One can show that these satisfy the $SO(2,2)$ algebra
\begin{equation}
[J_{ab}, J_{cd}] = i [\eta_{ac} J_{bd} - \eta_{ad} J_{bc} + \eta_{bd} J_{ac} - \eta_{bc} J_{ad}],
\end{equation}
with $a, b, c, d \in \{0, 1, 2, 3\}$ and $\eta_{ab} = \text{Diag}(-1, -1, 1, 1)$, a diagonal metric with two time directions. This precisely matches the algebra of the global conformal generators in 2-dimensional Minkowski space [123, 274].
Also, the conformal boundary of AdS in these coordinates is located at $z = 0$. Taking the boundary limit of (A.3), one can easily see the one-to-one correspondence between these generators and those of the global conformal group in one lower dimension, i.e. $d=2$ for this specific case. For example, in the boundary limit $J_{12}$ in (A.3) corresponds to the dilatation generator ($D$) of the boundary CFT. One can easily generalize this result to arbitrary dimensions.
**Dynamical Aspect:** Now we discuss the dynamical aspect of the duality. It states the equivalence of the CFT partition function and the gravitational path integral,
$$Z_{CFT}\{\{J_i\}\} = Z_{Gravity}, \quad (A.5)$$
where
$$Z_{Gravity} \sim \int D(\phi, G_{\mu\nu}, A_\mu)|_{\{J_i\}_{z=0}} e^{-S_{gravity} + \cdots}. $$
For $Z_{Gravity}$, we have to evaluate the action $S_{gravity}$, which consists of the bulk fields (metric, scalar field, gauge field, etc.), on-shell, i.e. on the solutions of the equations of motion of all these bulk fields. The sources $J_i$ of the CFT side can be identified with the boundary values of the bulk fields after imposing suitable boundary conditions. On the left hand side of (A.5) we have a functional $Z_{CFT}\{\{J_i\}\}$ which depends on arbitrary (off-shell) sources $J_i$ in $d$ dimensions; on the right hand side we have the (on-shell) functional $Z_{Gravity}$ in $d+1$ dimensions, the gravitational action evaluated on the solutions of the equations of motion, with the fields reducing to the corresponding $J_i$'s at the boundary of AdS. One has to be careful in taking the boundary limit (for the Poincare AdS of (A.2) it is the $z \to 0$ limit), as the fields typically diverge near the AdS boundary [4, 117, 273]. Utilizing relation (A.5), one can translate all the field theory correlation functions into correlation functions of fields in the bulk spacetime.
Before we end, we make a few more comments. We need to know how the CFT operators map to fields in the bulk. In principle, this depends on the details of the two theories (the CFT and the gravity theory); string theory provides this map. Roughly, the consistent coupling of a given bulk field to a given boundary operator can often be argued from the underlying symmetries: the source $J_i$ and the dual operator share the same quantum numbers under the conformal group. This gives some obvious couplings,
$$W_{CFT} = S_{CFT} + \int d^dx \left[ g_{ij} T^{ij} + A_i J^i + \phi F^2_{ij} + \cdots \right]. \quad (A.6)$$
So the metric couples to the stress tensor, a gauge field ($A_\mu$) in the bulk to a current ($J^\mu$) in the dual CFT, a scalar field in the bulk to a scalar operator at the boundary, and so on. Given the effective action $W_{CFT}$, we can construct $Z_{CFT}\{\{J_i\}\}$ in the usual way. One important point is that the mass of a field in the bulk is related to the conformal dimension ($\Delta$) of the primary operator of the dual field theory. For example, for a massive scalar field in the bulk with mass $m$, we have
$$m^2 L^2 = \Delta(\Delta - d), \quad (A.7)$$
where $\Delta$ is the conformal dimension of the dual primary operator of the $d$-dimensional CFT, and $L$ is the AdS radius. Similar relations hold for spinning fields. For more details, interested readers are referred to [4, 117, 273].
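As a minimal illustration (a sketch, not from the text), relation (A.7) can be inverted for the standard branch $\Delta = d/2 + \sqrt{d^2/4 + m^2L^2}$; the helper below also flags masses below the Breitenlohner-Freedman bound $m^2L^2 \ge -d^2/4$, where $\Delta$ would become complex.

```python
import math

def conformal_dimension(m2L2: float, d: int) -> float:
    """Larger root of Delta(Delta - d) = m^2 L^2, the standard branch
    for the dimension of the operator dual to a bulk scalar."""
    disc = d * d / 4.0 + m2L2
    if disc < 0:
        raise ValueError("mass below the Breitenlohner-Freedman bound")
    return d / 2.0 + math.sqrt(disc)

# A massless scalar in AdS_5 (d = 4) is dual to a Delta = 4 operator,
# e.g. the dilaton coupling to Tr F^2 in the canonical example.
print(conformal_dimension(0.0, 4))   # -> 4.0
```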
**Appendix B: Rydberg atoms**

Fig. 14: (a) Ground and Rydberg state as qubit states. Atoms are excited from the ground to the Rydberg state via a two-photon transition. Rydberg atoms interact with a van-der-Waals interaction. (b) Two ground states as qubit states. Interactions are induced by dressing $|e\rangle$ with a Rydberg state $|c\rangle$. (c) Two Rydberg states as qubit states. They undergo flip-flop interactions due to their dipole moments.
In the main text, we stated that Rydberg atoms are one of the platforms for analog and digital quantum simulation. Here, we discuss the physics that can be explored with Rydberg atoms, in more detail.
There are several possible ways to encode a qubit in these atoms using any two internal atomic states. The two states could be a long-lived hyperfine ground state (i.e. a state with a small principal quantum number) and a Rydberg state. Alternatively, the two qubit states could be two hyperfine ground states, with the Rydberg state acting as an auxiliary state into which atoms are transferred when strong interactions are needed, or which is admixed to one of the ground states via Rydberg dressing. Or they could even be two Rydberg states. Each of these choices of qubit states allows different capabilities and has been used to realize digital quantum computation or analog quantum simulation. Let us now understand the physics that can be explored with each of these qubit encodings.
Let us first consider the case that the qubit states are a ground state and a Rydberg state, as illustrated in Fig. 14(a). Any two atoms in the Rydberg state and separated by a distance $r$ interact with each other with strength $V/r^6$, where $V \propto n^{11}$. Additionally, one could drive the atoms from the ground state to the Rydberg state via external lasers and effectively realize, for example, the long-ranged quantum Ising model,
$$H = \sum_i \Omega \sigma_i^x - \Delta \sigma_i^z + \frac{1}{2} \sum_{ij} \frac{V}{r_{ij}^6} (1 - \sigma_i^z)(1 - \sigma_j^z). \quad (B.8)$$
Here, $\sigma^\alpha$ are Pauli operators acting on the two qubit states, $\Omega$ is the amplitude of the two-photon transition that excites the atom from the ground to the Rydberg state, and $\Delta$ is the detuning of the two-photon drive from the atomic transition. This is a paradigmatic model in quantum many-body physics, and has been realized with Rydberg atoms by various groups [277–283]. Further, the ability to arrange the atoms in arbitrary geometries with optical tweezers, and to quench various parameters in the above Hamiltonian, provides a rich playground of physics that is open for exploration.
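For concreteness, the Hamiltonian of Eq. (B.8) can be built as a dense matrix for a few atoms. The sketch below assumes a 1D chain with unit lattice spacing, and the values of $\Omega$, $\Delta$ and $V$ are illustrative placeholders, not taken from any experiment.

```python
import numpy as np

# Dense-matrix construction of Eq. (B.8) for a small 1D chain.
N = 4                                  # number of atoms
Omega, Delta, V = 1.0, 0.5, 10.0       # illustrative parameter values
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def op(single, site):
    """Embed a single-site operator at `site` into the N-atom space."""
    out = np.ones((1, 1))
    for k in range(N):
        out = np.kron(out, single if k == site else np.eye(2))
    return out

# Single-atom drive and detuning terms.
H = sum(Omega * op(sx, i) - Delta * op(sz, i) for i in range(N))

# Interaction term: (1/2) sum_{i != j} V / r_ij^6 (1 - sz_i)(1 - sz_j),
# with r_ij = |i - j| in units of the lattice spacing.
one = np.eye(2 ** N)
for i in range(N):
    for j in range(N):
        if i != j:
            H = H + 0.5 * (V / abs(i - j) ** 6) * (one - op(sz, i)) @ (one - op(sz, j))

energies = np.linalg.eigvalsh(H)
print(energies[0])   # ground-state energy, illustrative units
```

Exact diagonalization of this kind is limited to small $N$, but it is a useful reference point for benchmarking the analog simulators discussed above.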
In the case that the qubit states are two hyperfine ground states, as illustrated in Fig. 14(b), it is possible to make one of the ground states interacting by dressing it with a Rydberg state. Interactions are induced by the admixture with the Rydberg state, and one obtains a model similar to Eq. (B.8). The advantage of this method is that the atomic lifetimes are longer, not being limited by spontaneous decay from the Rydberg state. This case has also been experimentally realized by various groups [284–288].
**Appendix B.1: Entangling gates on Rydberg atoms**
The scheme to implement an entangling gate is shown in Fig. 15. It uses two hyperfine ground states $|g\rangle$ and $|e\rangle$ to encode the qubit, with one of the ground states, $|g\rangle$, coupled to a Rydberg state $|r\rangle$ via laser fields. The whole scheme involves three individually addressed laser pulses. First, a laser pulse of duration $t_\pi = \pi/\Omega$ is shone on one atom, then a pulse of duration $2t_\pi$ is shone on the second atom, and finally another laser pulse of duration $t_\pi$ is shone on the first atom. The effect of this sequence can be understood by considering the four initial states, $|ee\rangle$, $|ge\rangle$, $|eg\rangle$, and $|gg\rangle$, of the two qubits. Since only $|g\rangle$ is coupled to the Rydberg state, the whole sequence has no effect on $|ee\rangle$. For the initial states $|ge\rangle$ and $|eg\rangle$, the three-pulse sequence is equivalent to applying a single pulse of duration $2t_\pi$ on $|g\rangle$, which multiplies the state by $-1$. Non-trivial physics happens for the initial state $|gg\rangle$: the first and third laser pulses together multiply the state by $-1$, while the second laser pulse, which is effectively off-resonant because $|rr\rangle$ is blockaded, gives only an additional small phase $\phi \ll \pi$. In total, the only one of the four states that does not acquire a sign is $|ee\rangle$, which is equivalent to applying a controlled-phase gate. The controlled-phase gate, together with arbitrary single-qubit rotations, which can be implemented via magnetic fields or stimulated Raman transitions, suffices for universal quantum computing. Gate fidelities exceeding 99% for entangling gates, and up to 99.6% for single-qubit gates, have been demonstrated [194].
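The phase bookkeeping of the three-pulse sequence can be checked in a small simulation. The sketch below models each atom as a three-level system $|g\rangle, |e\rangle, |r\rangle$ with an illustrative blockade shift $V \gg \Omega$ on $|rr\rangle$; the level labels and parameter values are assumptions for illustration, not taken from a specific experiment.

```python
import numpy as np
from scipy.linalg import expm

# Three-pulse blockade gate: levels |g>=0, |e>=1, |r>=2 per atom,
# drive g <-> r with Rabi frequency Omega, blockade shift V on |rr>.
Omega, V = 1.0, 200.0                      # illustrative values, V >> Omega
g, e, r = 0, 1, 2
drive = np.zeros((3, 3))
drive[g, r] = drive[r, g] = Omega / 2
id3 = np.eye(3)

def pulse(atom, duration):
    H = np.kron(drive, id3) if atom == 0 else np.kron(id3, drive)
    rr = np.zeros(9); rr[3 * r + r] = 1.0   # projector onto |rr>
    return expm(-1j * (H + V * np.outer(rr, rr)) * duration)

t_pi = np.pi / Omega
# Pulse 1 (atom 1, t_pi), pulse 2 (atom 2, 2 t_pi), pulse 3 (atom 1, t_pi).
U = pulse(0, t_pi) @ pulse(1, 2 * t_pi) @ pulse(0, t_pi)

def ket(a, b):
    v = np.zeros(9); v[3 * a + b] = 1.0
    return v

for a, b in [(e, e), (g, e), (e, g), (g, g)]:
    print((a, b), np.round(ket(a, b) @ U @ ket(a, b), 3))
# |ee> is untouched, while |ge>, |eg> and |gg> each acquire (approximately)
# a factor -1: a controlled-phase gate up to single-qubit Z rotations.
```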
**Appendix C: Trapped ions**
In the main text, we stated that there are two schemes to implement entanglement between trapped ion qubits. Here, we describe these two schemes, as well as other physics that can be explored.
**Appendix C.1: The Cirac-Zoller scheme**
The Cirac-Zoller scheme [203], illustrated in Fig. 16, is a three-step process that realizes a controlled-Z gate and requires individual qubit addressability. In the first step, one shines a laser at frequency $\omega = \omega_0 + \omega_i$ on a specific ion, where $\omega_0$ is the energy spacing between the qubit states $|g\rangle$ and $|e\rangle$, and $\omega_i$ is the frequency of the center-of-mass mode. The Hamiltonian for a single ion coupled to the laser is
$$H = -\frac{\hbar \omega_0}{2} \sigma^z + \hbar \omega_i (a^\dagger a + \frac{1}{2}) + \hbar \Omega \cos(\omega t) (a^\dagger + a)(\sigma^+ + \sigma^-),$$
where $a(a^\dagger)$ annihilates (creates) a mode excitation, and $\Omega$ is the ion-laser coupling strength. For $\omega_0, \omega_i \gg \Omega$, which is typically the case, we apply the rotating wave approximation, i.e. go to a rotating frame and neglect terms rotating at frequency $O(\omega_0)$ or $O(\omega_i)$ in this frame, and obtain
$$H = \frac{\hbar \Omega}{2} (a^\dagger \sigma^+ + \text{h.c.}),$$
where $a(a^\dagger)$ destroys (creates) a phonon excitation. Assuming the initial state has no phonons, the effect of the above Hamiltonian is to transfer the qubit's state to the phonons. Concretely, when one shines a laser pulse of duration $t = \pi/\Omega$ on an ion labeled $i$ in the state $(\alpha |g\rangle + \beta |e\rangle)_i |0\rangle$, where $|0\rangle$ refers to having zero phonons, the system's state after the pulse is $|e\rangle_i (\alpha |1\rangle + \beta |0\rangle)$, up to phases. The second step involves shining a second laser pulse, of a form similar to Eq. (C.10) but coupling $|g\rangle_j$ on ion $j$ to a different excited state $|e'\rangle_j$, for a duration $2\pi/\Omega$. The effect of this second step is to selectively give a $(-1)$ sign to $|g\rangle_j |1\rangle$, i.e. it accomplishes a controlled-Z gate between the phonon and the qubit. The third step is identical to the first step, and decouples the phonon from qubit $i$. The net effect of the sequence is to selectively give a $(-1)$ sign to $|g\rangle_i |g\rangle_j$ (with the phonon returned to $|0\rangle$), and do nothing to all other states, which is a controlled-Z gate.
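The first step can be verified numerically: a $\pi$ pulse under $H = (\hbar\Omega/2)(a^\dagger\sigma^+ + \text{h.c.})$ maps the qubit amplitudes onto the phonon, with a $-i$ phase on the transferred branch. The sketch below works in the rotating frame (so the free-evolution terms are dropped) and uses a small Fock-space cutoff, both assumptions for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Sideband pi pulse: H = (Omega/2)(a^dag sigma^+ + a sigma^-) for t = pi/Omega.
Omega, nmax = 1.0, 3                                  # Rabi freq, Fock cutoff
a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)         # phonon annihilation op
sp_ = np.array([[0., 0.], [1., 0.]])                  # sigma^+ = |e><g|, basis (g, e)

H = 0.5 * Omega * (np.kron(sp_, a.conj().T) + np.kron(sp_.T, a))
U = expm(-1j * H * np.pi / Omega)                     # pi pulse

alpha, beta = 0.6, 0.8
fock0 = np.zeros(nmax); fock0[0] = 1.0
psi0 = np.kron(np.array([alpha, beta]), fock0)        # (alpha|g> + beta|e>)|0>
psi = U @ psi0

# |e,0> is dark (a sigma^- annihilates it), while |g,0> Rabi-flops to |e,1>,
# so the final state is |e> (x) (-i alpha |1> + beta |0>).
print(np.round(psi, 6))
```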
The Cirac-Zoller gate was first experimentally realized to demonstrate entanglement between an ion and a phonon in Ref. [292], and later between two ions in Refs. [293–295].
**Appendix C.2: The Mølmer-Sørensen scheme**
In the Mølmer-Sørensen scheme, a laser beam containing two frequency components $\omega_1$ and $\omega_2$, as shown in Fig. 17(a), is shone on the ions. The two frequencies are chosen close to the upper and lower motional sidebands, $\omega_{1,2} = \omega_0 \pm (\omega_i + \Delta_t) + \delta$, where $\omega_0$ is the energy spacing between the qubit states $|g\rangle$ and $|e\rangle$, $\omega_i$ is the normal mode frequency of the ions (typically the frequency of the center-of-mass mode), $\Delta_t$ is the detuning from the sidebands, and $\delta$ is a small common offset. Depending on the laser frequencies and strengths, this scheme can realize a variety of qubit interactions, including a controlled-phase gate between a given pair of qubits, or various long-range Hamiltonians with pairwise interactions between the qubits.
The Hamiltonian for a single ion interacting with the two-frequency laser field is given by
$$H = -\frac{\hbar \omega_0}{2} \sigma^z + \hbar \omega_i (a^\dagger a + \frac{1}{2}) + \hbar \Omega (\cos \omega_1 t + \cos \omega_2 t) (a^\dagger + a)(\sigma^+ + \sigma^-),$$
where $a(a^\dagger)$ annihilates (creates) a mode excitation, and $\Omega$ is the ion-laser coupling strength. For $\omega_0, \omega_i \gg \Omega$, which is typically the case, we apply the rotating-wave approximation as before and obtain the resonant spin-motion coupling [Eq. (C.12)].
Let us begin by considering the weak coupling case, $\Omega \ll \Delta_t$. In this case, the normal modes are only virtually excited, and can be eliminated in second order perturbation theory, giving
$$H = J \sum_{ij} (\sigma_i^+ \sigma_j^+ e^{-2i\delta t} + \sigma_i^+ \sigma_j^- + \text{h.c.}),$$ \hspace{1cm} (C.13)
where $J \propto \Omega^2 / \Delta_t$. The physical picture that explains this emergent interaction is as follows [see also Fig. 17(b)]. The laser drives flip the internal state of an ion labeled $i$, and the ion absorbs (or emits) a virtual normal mode phonon in this process. Another ion labeled $j$ emits (or absorbs) the phonon and flips its internal state due to the laser drive. This virtual exchange of phonons is responsible for mediating long-ranged qubit interactions between the ions. It should be noted that there are similarities between the coupling of an ion’s qubit states to the phonon modes and the case of cavity quantum electrodynamics where an atom’s internal states are coupled to the electromagnetic modes in the cavity [296].
In the limit $\delta = 0$, $H$ reduces to
$$H = J \sum_{ij} (\sigma_i^+ \sigma_j^+ + \sigma_i^+ \sigma_j^- + \text{h.c.}),$$ \hspace{1cm} (C.14)
which is the global Mølmer-Sørensen interaction. In the limit $\delta \gg J$, the $\sigma_i^+ \sigma_j^+$ term is also rapidly rotating and averages to zero, so $H$ reduces to
$$H = J \sum_{ij} (\sigma_i^+ \sigma_j^- + \text{h.c.}).$$
Finally, when $\delta$ is comparable to $J$, moving to the frame rotating at $\delta$ turns the time-dependent Hamiltonian (C.13) into a transverse-field Ising model,
$$H = J \sum_{ij} \sigma_i^x \sigma_j^x + B \sum_i \sigma_i^z,$$
with $B \propto \delta$.
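The operator identities behind the first two limits are easy to verify directly: the $\delta = 0$ summand of Eq. (C.14) equals $\sigma^x \otimes \sigma^x$, and the flip-flop summand of the $\delta \gg J$ limit equals $(\sigma^x\sigma^x + \sigma^y\sigma^y)/2$.

```python
import numpy as np

# Pauli ladder operators; the basis convention is irrelevant for these identities.
sp_ = np.array([[0., 1.], [0., 0.]])   # sigma^+
sm = sp_.T                              # sigma^-
sx = sp_ + sm
sy = -1j * (sp_ - sm)

# Eq. (C.14) summand: sigma^+sigma^+ + sigma^+sigma^- + h.c.
ms = (np.kron(sp_, sp_) + np.kron(sp_, sm)
      + np.kron(sm, sm) + np.kron(sm, sp_))
# Flip-flop summand: sigma^+sigma^- + h.c.
xy = np.kron(sp_, sm) + np.kron(sm, sp_)

assert np.allclose(ms, np.kron(sx, sx))                              # Ising (XX)
assert np.allclose(xy, 0.5 * (np.kron(sx, sx) + np.kron(sy, sy)))    # XY exchange
print("identities hold")
```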
This scheme is widely realized in experiments for quantum simulation [see, e.g., [206, 208, 297, 298]].
The disadvantage of the weak coupling case above is that the dynamics are slow, since $\Omega \ll \Delta_t$. The dynamics can be made faster by making $\Delta_t$ comparable to $\Omega$ and setting $\delta = 0$. In this case, Eq. (C.12) is exactly integrable. The exact time evolution operator under $H$ [Eq. (C.12)] has the form
$$U = D(\alpha(t) \sigma_{\text{tot}}^x) \exp(i \Phi(t)(\sigma_{\text{tot}}^x)^2), \quad (C.15)$$
where $D(\alpha) = \exp(\alpha a^\dagger - \alpha^* a)$ is a displacement operator and $\sigma_{\text{tot}}^x = \sum_j \sigma_j^x$. In this case, the spin and motional degrees of freedom are not decoupled in general, except at special times when $\alpha(t) = 0$, which occur at multiples of $t = 2\pi / \Delta_t$. At these special times, the time evolution operator, $U' = \exp(i \Phi(t)(\sigma_{\text{tot}}^x)^2)$, is the same as the one generated by a Mølmer-Sørensen interaction. One can obtain a physical picture of this spin-motion decoupling by visualizing the center-of-mass mode as a quantum harmonic oscillator: the Hamiltonian [Eq. (C.12)] displaces the oscillator by a spin-dependent amount $D(\alpha(t) \sigma_{\text{tot}}^x)$, and the oscillator returns to its initial state at $t = 2\pi/\Delta_t$ (i.e. when $\alpha(t) = 0$), having picked up a spin-dependent geometric phase during each cycle. This spin-dependent phase is exactly the phase given by the Mølmer-Sørensen gate. Figure 17(b) shows another intuitive explanation for the Mølmer-Sørensen interaction: the laser pulses scatter two ions from $|gg,n\rangle$ to $|ee,n\rangle$ (and from $|ge,n\rangle$ to $|eg,n\rangle$) via four paths, and the total amplitude of this process is the constructive interference of the four paths.
Local Mølmer-Sørensen interactions, e.g. between exactly two qubits, can be obtained by shining the lasers on only two ions, so that the sum in Eq. (C.14) is restricted to those two ions. The gates can be made fast by making $\Delta_t$ comparable to $\Omega$ and applying the laser pulses for a duration that is a multiple of $2\pi/\Delta_t$, as explained above. Two-qubit Mølmer-Sørensen gates have also been widely realized in experiments [206–211], with the highest current gate fidelities in the range of 99.9% [212]. Together with single-qubit rotations, they form a universal gate set for digital quantum computation. A major advantage of trapped ions over superconducting qubits as a quantum computing platform is the global connectivity of the interactions: all ions couple to the center-of-mass mode, which mediates the qubit interactions, so one can implement a Mølmer-Sørensen interaction between any pair of ions in a finite time regardless of how far apart they are (up to caveats about ion spacing and mode spacing).
Finally, we consider the case that there are other normal modes nearby in frequency to the lasers. This case arises when $\Delta_t$ is comparable to the mode spacing, which can for example be accomplished by parking the lasers close to the radial modes instead of the axial modes. Then Eq. (C.12) should be modified to include the other modes, $a_m$, as well. After adiabatically eliminating the normal modes for $\Omega \ll \Delta_t$ as above, one again obtains a long-ranged qubit interaction, but the interaction is no longer uniform between all the qubits. Instead, one obtains an approximately power-law decaying interaction, $J_{ij} \sim J/|r_i - r_j|^{\alpha}$ (with an exponential correction). In the limit of coupling only to the center-of-mass mode, $\alpha = 0$ and we recover the infinite-ranged interaction in Eq. (C.14). In the limit that $\Delta_t$ is so large that all the normal modes are nearly at the same frequency relative to the lasers, $\alpha \approx 3$. For intermediate $\Delta_t$, we have $0 < \alpha < 3$. This case was first realized in [298–301].
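A toy numerical sketch of such mode-mediated couplings: for a stand-in stiffness matrix $K$ with $1/r^3$ couplings between equally spaced ions, the mode sum $\sum_m b_{im} b_{jm}/(\mu^2 - \omega_m^2)$ is just the resolvent $[(\mu^2 I - K)^{-1}]_{ij}$, whose entries decay with ion separation. All prefactors and the true Coulomb mode structure of a real trap are omitted here; this is an illustration of the mechanism, not the actual trapped-ion calculation.

```python
import numpy as np

# Toy stiffness matrix: 1/r^3 couplings between N equally spaced "ions",
# with a uniform pinning term so that K is positive definite.
N = 20
K = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i != j:
            K[i, j] = -1.0 / abs(i - j) ** 3
    K[i, i] = -sum(K[i, j] for j in range(N) if j != i) + 1.0

w2 = np.linalg.eigvalsh(K)            # squared mode frequencies (toy units)
mu2 = w2.max() * 1.2                  # laser parked above the phonon band

# J_ij = sum_m b_im b_jm / (mu^2 - w_m^2) = [(mu^2 I - K)^{-1}]_ij
J = np.linalg.inv(mu2 * np.eye(N) - K)
print([round(float(J[0, j]), 6) for j in (1, 2, 4, 8, 16)])  # decays with distance
```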
References
1. Juan Maldacena. The large-N limit of superconformal field theories and supergravity. Int. J. Theor. Phys., 38(4):1113–1133, 1999.
2. Edward Witten. Anti-de Sitter space and holography. Advances in Theoretical and Mathematical Physics, 2:253–291, January 1998.
3. S. S. Gubser, Igor R. Klebanov, and Alexander M. Polyakov. Gauge theory correlators from noncritical string theory. Phys. Lett. B, 428:105–114, 1998.
4. Ofer Aharony, Steven S. Gubser, Juan Maldacena, Hirosi Ooguri, and Yaron Oz. Large N field theories, string theory and gravity. Phys. Rep., 323(3):183–386, 2000.
5. G. Policastro, D. T. Son, and A. O. Starinets. Shear viscosity of strongly coupled $\mathcal{N} = 4$ supersymmetric Yang-Mills plasma. Phys. Rev. Lett., 87:081601, Aug 2001.
6. P. K. Kovtun, D. T. Son, and A. O. Starinets. Viscosity in strongly interacting quantum field theories from black hole physics. Phys. Rev. Lett., 94:111601, Mar 2005.
7. Thomas Schäfer and Derek Teaney. Nearly perfect fluidity: from cold atomic gases to hot quark gluon plasmas. Rep. Prog. Phys., 72(12):126001, 2009.
8. Dam T. Son and Piotr Surowka. Hydrodynamics with Triangle Anomalies. Phys. Rev. Lett., 103:191601, 2009.
9. Karl Landsteiner, Eugenio Megias, and Francisco Pena-Benitez. Gravitational Anomaly and Transport. Phys. Rev. Lett., 107:021601, 2011.
10. Alex Buchel, Robert C. Myers, and Aninda Sinha. Beyond $\eta/s = 1/4\pi$. JHEP, 03:084, 2009.
11. Mauro Brigante, Hong Liu, Robert C. Myers, Stephen Shenker, and Sho Yaida. The Viscosity Bound and Causality Violation. Phys. Rev. Lett., 100:191601, 2008.
12. Sayantani Bhattacharyya, Veronika E Hubeny, Shiraz Minwalla, and Mukund Rangamani. Nonlinear Fluid Dynamics from Gravity. JHEP, 02:045, 2008.
13. Mukund Rangamani. Gravity and Hydrodynamics: Lectures on the fluid-gravity correspondence. Class. Quant. Grav., 26:224003, 2009.
14. Nabamita Banerjee and Suvankar Dutta. Holographic hydrodynamics: models and methods. arXiv preprint arXiv:1112.5345, 2011.
15. A. B. Zamolodchikov. Irreversibility of the Flux of the Renormalization Group in a 2D Field Theory. JETP Lett., 43:730–732, 1986.
16. Sebastian de Haro, Sergey N. Solodukhin, and Kostas Skenderis. Holographic reconstruction of space-time and renormalization in the AdS / CFT correspondence. Commun. Math. Phys., 217:595–622, 2001.
17. D. Z. Freedman, S. S. Gubser, K. Pilch, and N. P. Warner. Renormalization group flows from holography supersymmetry and a c theorem. Adv. Theor. Math. Phys., 3:363–417, 1999.
18. Edwin Barnes, Kenneth A. Intriligator, Brian Wecht, and Jason Wright. Evidence for the strongest version of the 4d a-theorem, via a-maximization along RG flows. Nucl. Phys. B, 702:131–162, 2004.
19. Kenneth A. Intriligator and Brian Wecht. The Exact superconformal R symmetry maximizes a. Nucl. Phys. B, 667:183–200, 2003.
20. Robert C Myers and Aninda Sinha. Holographic c-theorems in arbitrary dimensions. Journal of High Energy Physics, 2011(1):125, 2011.
21. Robert C. Myers and Aninda Sinha. Seeing a c-theorem with holography. Phys. Rev. D, 82:046006, 2010.
22. John L. Cardy. Is There a c Theorem in Four-Dimensions? Phys. Lett. B, 215:749–752, 1988.
23. Zohar Komargodski and Adam Schwimmer. On Renormalization Group Flows in Four Dimensions. JHEP, 12:099, 2011.
24. Markus A. Luty, Joseph Polchinski, and Riccardo Rattazzi. The a-theorem and the Asymptotics of 4D Quantum Field Theory. JHEP, 01:152, 2013.
25. Henriette Elvang, Daniel Z. Freedman, Ling-Yan Hung, Michael Kiermaier, Robert C. Myers, and Stefan Theisen. On renormalization group flows and the a-theorem in 6d. JHEP, 10:011, 2012.
26. Arpan Bhattacharyya, Ling-Yan Hung, Kallol Sen, and Aninda Sinha. On c-theorems in arbitrary dimensions. Phys. Rev. D, 86:106006, 2012.
27. Henriette Elvang and Timothy M. Olson. RG flows in d dimensions, the dilaton effective action, and the a-theorem. JHEP, 03:034, 2013.
28. H. Casini and Marina Huerta. On the RG running of the entanglement entropy of a circle. Phys. Rev. D, 85:125016, 2012.
29. Shinsei Ryu and Tadashi Takayanagi. Holographic derivation of entanglement entropy from the anti-de sitter space/conformal field theory correspondence. Phys. Rev. Lett., 96:181602, May 2006.
30. Veronika E. Hubeny, Mukund Rangamani, and Tadashi Takayanagi. A Covariant holographic entanglement entropy proposal. JHEP, 07:062, 2007.
31. Aitor Lewkowycz and Juan Maldacena. Generalized gravitational entropy. JHEP, 08:090, 2013.
32. Dmitri V. Fursaev, Alexander Patrushev, and Sergey N. Solodukhin. Distributional Geometry of Squashed Cones. Phys. Rev. D, 88(4):044054, 2013.
33. Joan Camps. Generalized entropy and higher derivative Gravity. JHEP, 03:070, 2014.
34. Xi Dong. Holographic Entanglement Entropy for General Higher Derivative Gravity. JHEP, 01:044, 2014.
35. Arpan Bhattacharyya, Apratim Kaviraj, and Aninda Sinha. Entanglement entropy in higher derivative holography. JHEP, 08:012, 2013.
36. Arpan Bhattacharyya, Menika Sharma, and Aninda Sinha. On generalized gravitational entropy, squashed cones and holography. JHEP, 01:021, 2014.
37. Arpan Bhattacharyya and Menika Sharma. On entanglement entropy functionals in higher derivative gravity theories. JHEP, 10:130, 2014.
38. Rong-Xin Miao and Wu-zhong Guo. Holographic Entanglement Entropy for the Most General Higher Derivative Gravity. JHEP, 08:031, 2015.
39. Arpan Bhattacharyya and Aninda Sinha. Entanglement entropy from the holographic stress tensor. Class. Quant. Grav., 30:235032, 2013.
40. Mukund Rangamani and Tadashi Takayanagi. Holographic Entanglement Entropy, volume 931. Springer, 2017.
41. Juan Maldacena and Leonard Susskind. Cool horizons for entangled black holes. Fortsch. Phys., 61:781–811, 2013.
42. Leonard Susskind. ER–EPR, GHZ, and the consistency of quantum measurements. Fortsch. Phys., 64:72–83, 2016.
43. Brian Swingle. Entanglement Renormalization and Holography. Phys. Rev. D, 86:065007, 2012.
44. Masahiro Nozaki, Shinsei Ryu, and Tadashi Takayanagi. Holographic Geometry of Entanglement Renormalization in Quantum Field Theories. JHEP, 10:193, 2012.
45. Jan de Boer, Felix M. Haehl, Michal P. Heller, and Robert C. Myers. Entanglement, holography and causal diamonds. JHEP, 08:162, 2016.
46. Fernando Pastawski, Beni Yoshida, Daniel Harlow, and John Preskill. Holographic quantum error-correcting codes: Toy models for the bulk/boundary correspondence. JHEP, 06:149, 2015.
47. Patrick Hayden, Sepehr Nezami, Xiao-Liang Qi, Nathaniel Thomas, Michael Walter, and Zhao Yang. Holographic duality from random tensor networks. JHEP, 11:009, 2016.
48. Bartłomiej Czech, Lampros Lamprou, Samuel McCandlish, and James Sully. Tensor Networks from Kinematic Space. JHEP, 07:100, 2016.
49. Arpan Bhattacharyya, Zhe-Shen Gao, Ling-Yan Hung, and Si-Nong Liu. Exploring the Tensor Networks /AdS Correspondence. JHEP, 08:086, 2016.
50. Arpan Bhattacharyya, Ling-Yan Hung, Yang Lei, and Wei Li. Tensor network and (p-adic) AdS/CFT. JHEP, 01:139, 2018.
51. Zhi Yang, Long Cheng, Ling-Yan Hung, Sirui Ning, and Arpan Bhattacharyya. Emergent Lorentz symmetry and the Unruh effect in a Lorentzian fermionic tensor network. Phys. Rev. D, 99(8):086007, 2019.
52. Lin Chen, Xirong Liu, and Ling-Yan Hung. Emergent einstein equation in p-adic conformal field theory tensor networks. Phys. Rev. Lett., 127:221602, Nov 2021.
53. Pawel Caputa, Nilay Kundu, Masamichi Miyaji, Tadashi Takayanagi, and Kent Watanabe. Anti-de Sitter Space from Optimization of Path Integrals in Conformal Field Theories. Phys. Rev. Lett., 119(7):071602, 2017.
54. Johanna Erdmenger, Kevin T. Grosvenor, and Ro Jefferson. Information geometry in quantum field theory: lessons from simple examples. SciPost Phys., 8(5):073, 2020.
55. Adam R. Brown and Leonard Susskind. Complexity geometry of a single qubit. Phys. Rev. D, 100(4):046020, 2019.
56. Bowen Chen, Bartlomiej Czech, and Zi-zhi Wang. Quantum information in holographic duality. arXiv preprint arXiv:2108.09188, 2021.
57. Alexander Jahn and Jens Eisert. Holographic tensor network models and quantum error correction: a topical review. Quantum Sci. Technol., 6(3):033002, jun 2021.
58. Daniel Harlow. Jerusalem Lectures on Black Holes and Quantum Information. Rev. Mod. Phys., 88:015002, 2016.
59. Subir Sachdev and Jin-wu Ye. Gapless spin fluid ground state in a random, quantum Heisenberg magnet. Phys. Rev. Lett., 70:3339, 1993.
60. Subir Sachdev. Holographic metals and the fractionalized Fermi liquid. Phys. Rev. Lett., 105:151602, 2010.
61. A. Kitaev. ‘A simple model of quantum holography’. Talks at KITP, April 7 and May 27, 2015.
62. Juan Maldacena, Douglas Stanford, and Zhenbin Yang. Conformal symmetry and its breaking in two-dimensional nearly anti-de sitter space. Progress of Theoretical and Experimental Physics, 2016(12), 2016.
63. Gautam Mandal, Pranjal Nayak, and Spenta R Wadia. Coadjoint orbit action of Virasoro group and two-dimensional quantum gravity dual to SYK/tensor models. Journal of High Energy Physics, 2017(11):46, 2017.
64. Adwait Gaikwad, Lata Kh Joshi, Gautam Mandal, and Spenta R Wadia. Holographic dual to charged SYK from 3D gravity and Chern-Simons. Journal of High Energy Physics, 2020(2):33, 2020.
65. Stephen H Shenker and Douglas Stanford. Black holes and the butterfly effect. Journal of High Energy Physics, 2014(3):67, 2014.
66. Stephen H. Shenker and Douglas Stanford. Stringy effects in scrambling. Journal of High Energy Physics, 2015(5), 2015.
67. Juan Maldacena, Stephen H. Shenker, and Douglas Stanford. A bound on chaos. J. High Energy Phys., 2016(8):106, 2016.
68. Jordan S. Cotler, Guy Gur-Ari, Masanori Hanada, Joseph Polchinski, Phil Saad, Stephen H. Shenker, Douglas Stanford, Alexandre Streicher, and Masaki Tezuka. Black holes and random matrices. J. High Energy Phys., 2017(5), 2017.
69. Phil Saad, Stephen H Shenker, and Douglas Stanford. A semiclassical ramp in syk and in gravity. arXiv preprint arXiv:1806.06840, 2018.
70. Joseph Polchinski. The Black Hole Information Problem, chapter 6, pages 353–397. World Scientific, 2017.
71. Ahmed Almheiri, Thomas Hartman, Juan Maldacena, Edgar Shaghoulian, and Amirhossein Tajdini. The entropy of hawking radiation. Rev. Mod. Phys., 93:035002, Jul 2021.
72. Patrick Hayden and John Preskill. Black holes as mirrors: Quantum information in random subsystems. Journal of High Energy Physics, 2007(9), 2007.
73. Beni Yoshida and Alexei Kitaev. Efficient decoding for the hayden-preskill protocol. arXiv preprint arXiv:1710.03363, 2017.
74. Adam R Brown, Hrant Gharibyan, Stefan Leichenauer, Henry W Lin, Sepehr Nezami, Grant Salton, Leonard Susskind, Brian Swingle, and Michael Walter. Quantum gravity in the lab: teleportation by size and traversable wormholes. arXiv preprint arXiv:1911.06314, 2019.
75. Sepehr Nezami, Henry W Lin, Adam R Brown, Hrant Gharibyan, Stefan Leichenauer, Grant Salton, Leonard Susskind, Brian Swingle, and Michael Walter. Quantum gravity in the lab: teleportation by size and traversable wormholes, part ii. arXiv preprint arXiv:2102.01064, 2021.
76. Thomas Schuster, Bryce Kobrin, Ping Gao, Iris Cong, Emil T Khabiboulline, Norbert M Linke, Mikhail D Lukin, Christopher Monroe, Beni Yoshida, and Norman Y Yao. Many-body quantum teleportation via operator spreading in the traversable wormhole protocol. Phys. Rev. X, 12:031013, 2022.
77. Juan Martin Maldacena. Eternal black holes in anti-de Sitter. JHEP, 04:021, 2003.
78. Arnab Kundu. Wormholes & holography: An introduction. arXiv preprint arXiv:2110.14958, 2021.
79. Michael S. Morris and Kip S. Thorne. Wormholes in spacetime and their use for interstellar travel: A tool for teaching general relativity. American Journal of Physics, 56(5):395–412, 1988.
80. Michael S. Morris, Kip S. Thorne, and Ulvi Yurtsever. Wormholes, time machines, and the weak energy condition. Phys. Rev. Lett., 61:1446–1449, Sep 1988.
81. David Hochberg and Matt Visser. Null energy condition in dynamic wormholes. Phys. Rev. Lett., 81:746–749, Jul 1998.
82. Matt Visser, Sayan Kar, and Naresh Dadhich. Traversable wormholes with arbitrarily small energy condition violations. Phys. Rev. Lett., 90:201102, May 2003.
83. Ping Gao, Daniel Louis Jafferis, and Aron C. Wall. Traversable Wormholes via a Double Trace Deformation. JHEP, 12:151, 2017.
84. Juan Maldacena, Douglas Stanford, and Zhenbin Yang. Diving into traversable wormholes. Fortsch. Phys., 65(5), may 2017.
85. John Preskill. Quantum Computing in the NISQ era and beyond. Quantum, 2:79, August 2018.
86. Kishor Bharti, Alba Cervera-Lierta, Thi Ha Kyaw, Tobias Haug, Sumner Alperin-Lea, Abhinav Anand, Matthias Degroote, Hermanni Heinonen, Jakob S. Kottmann, Tim Menke, Wai-Keong Mok, Sukin Sim, Leong-Chuan Kwek, and Alan Aspuru-Guzik. Noisy intermediate-scale quantum algorithms. Rev. Mod. Phys., 94:015004, Feb 2022.
87. Hong Liu and Julian Sonner. Quantum many-body physics from a gravitational lens. Nat. Rev. Phys., 2(11):615–633, 2020.
88. Rainer Blatt and Christian F Roos. Quantum simulations with trapped ions. Nat. Phys., 8(4):277–284, 2012.
89. Christopher Monroe, Wes C Campbell, Lu-Ming Duan, Z-X Gong, Alexey V Gorshkov, P W Hess, R Islam, K Kim, Norbert M Linke, Guido Pagano, Phil Richerme, Crystal Senko, and Norman Y. Yao. Programmable quantum simulations of spin systems with trapped ions. Rev. Mod. Phys., 93(2):025001, 2021.
90. Antoine Browaeys and Thierry Lahaye. Many-body physics with individually controlled rydberg atoms. Nat. Phys., 16(2):132–142, 2020.
91. Jingxiang Wu and Timothy H. Hsieh. Variational thermal quantum simulation via thermofield double states. Phys. Rev. Lett., 123:220502, Nov 2019.
92. William Cottrell, Ben Freivogel, Diego M. Hofman, and Sagar F. Lokhande. How to build the thermofield double state. J. High Energy Phys., 2019(2):58, 2019.
93. Bhuvanesh Sundar, Andreas Elben, Lata Kh Joshi, and Torsten V Zache. Proposal for measuring out-of-time-ordered correlators at finite temperature with coupled spin chains. New Journal of Physics, 24(2):023037, feb 2022.
94. Daiwei Zhu, Sonika Johri, Norbert M Linke, K A Landsman, C Huerta Alderete, Nhung H Nguyen, A Y Matsuura, T H Hsieh, and Christopher Monroe. Generation of thermofield double states and critical ground states with a quantum computer. Proc. Natl. Acad. Sci., 117(41):25402–25406, 2020.
95. Jingxiang Wu and Timothy H. Hsieh. Variational thermal quantum simulation via thermofield double states. Phys. Rev. Lett., 123(22):220502, Nov 2019.
96. Vincent Paul Su. Variational preparation of the thermofield double state of the Sachdev-Ye-Kitaev model. Phys. Rev. A, 104(1):012427, Jul 2021.
97. John Martyn and Brian Swingle. Product spectrum ansatz and the simplicity of thermal states. Phys. Rev. A, 100(3):032107, Sep 2019.
98. K. A. Landsman, C. Figgatt, T. Schuster, N. M. Linke, B. Yoshida, N. Y. Yao, and C. Monroe. Verified quantum information scrambling. Nature, 567(7746):61–65, 2019.
99. Beni Yoshida and Norman Y. Yao. Disentangling scrambling and decoherence via quantum teleportation. Phys. Rev. X, 9:011006, Jan 2019.
100. Ceren B Dag and L-M Duan. Detection of out-of-time-order correlators and information scrambling in cold atoms: Ladder-xx model. Phys. Rev. A, 99(5):052322, 2019.
101. Benoit Vermersch, Andreas Elben, Lukas M Sieberer, Norman Y Yao, and Peter Zoller. Probing scrambling using statistical correlations between randomized measurements. Phys. Rev. X, 9(2):021061, 2019.
102. Brian Swingle, Gregory Bentsen, Monika Schleier-Smith, and Patrick Hayden. Measuring the scrambling of quantum information. Phys. Rev. A, 94:040302, Oct 2016.
103. Guanyu Zhu, Mohammad Hafezi, and Tarun Grover. Measurement of many-body chaos using a quantum clock. Phys. Rev. A, 94:062329, 2016.
104. Nicole Yunger Halpern. Jarzynski-like equality for the out-of-time-ordered correlator. Phys. Rev. A, 95:012120, Jan 2017.
105. Nicole Yunger Halpern, Brian Swingle, and Justin Dressel. Quasiprobability behind the out-of-time-ordered correlator. Phys. Rev. A, 97:042105, Apr 2018.
106. Justin Dressel, José Raúl González Alonso, Mordecai Waegell, and Nicole Yunger Halpern. Strengthening weak measurements of qubit out-of-time-order correlators. Phys. Rev. A, 98:012132, Jul 2018.
107. Naoto Tsuji, Philipp Werner, and Masahito Ueda. Exact out-of-time-ordered correlation functions for an interacting lattice fermion model. Phys. Rev. A, 95:011601, Jan 2017.
108. A. Bohrdt, C. B. Mendl, M. Endres, and M. Knap. Scrambling and thermalization in a diffusive quantum many-body system. New J. Phys., 19(6):063001, 2017.
109. Jun Li, Ruihua Fan, Hengyan Wang, Bingtian Ye, Bei Zeng, Hui Zhai, Xinhua Peng, and Jiangfeng Du. Measuring out-of-time-order correlators on a nuclear magnetic resonance quantum simulator. Phys. Rev. X, 7(3):031011, 2017.
110. Ken Xuan Wei, Chandrasekhar Ramanathan, and Paola Cappellaro. Exploring localization in nuclear spin chains. Phys. Rev. Lett., 120(7):070501, 2018.
111. Xinfang Nie, Ze Zhang, Xinzhu Zhao, Tao Xin, Dawei Lu, and Jun Li. Detecting scrambling via statistical correlations between randomized measurements on an nmr quantum simulator. arXiv preprint arXiv:1903.12237, 2019.
112. Manoj K Joshi, Andreas Elben, Benoit Vermersch, Tiff Brydges, Christine Maier, Peter Zoller, Rainer Blatt, and Christian F Roos. Quantum information scrambling in a trapped-ion quantum simulator with tunable range interactions. Phys. Rev. Lett., 124(24):240505, 2020.
113. S Pegahian, I Arakelyan, and JE Thomas. Energy-resolved information scrambling in energy-space lattices. Phys. Rev. Lett., 126(7):070601, 2021.
114. Martin Gärttner, Justin G Bohnet, Arghavan Safavi-Naini, Michael L Wall, John J Bollinger, and Ana Maria Rey. Measuring out-of-time-order correlations and multiple quantum spectra in a trapped-ion quantum magnet. Nat. Phys., 13(8):781–786, 2017.
115. Jochen Braumüller, Amir H Karamlou, Yariv Yanay, Bharath Kannan, David Kim, Morten Kjaergaard, Alexander Melville, Bethany M Niedzielski, Youngkyu Sung, Antti Vepsäläinen, et al. Probing quantum information propagation with out-of-time-ordered correlators. arXiv preprint arXiv:2102.11751, 2021.
116. Alaina M Green, A Elben, C Huerta Alderete, Lata Kh Joshi, Nhung H Nguyen, Torsten V Zache, Yingyue Zhu, Bhuvanesh Sundar, and Norbert M Linke. Experimental measurement of out-of-time-ordered correlators at finite temperature. arXiv preprint arXiv:2112.02068 (accepted in PRL), 2021.
117. Martin Ammon and Johanna Erdmenger. Gauge/gravity duality: Foundations and applications. Cambridge University Press, Cambridge, 4 2015.
118. Joao Penedones. TASI lectures on AdS/CFT. In New Frontiers in Fields and Strings: TASI 2015 Proceedings of the 2015 Theoretical Advanced Study Institute in Elementary Particle Physics, pages 75–136. World Scientific, 2017.
119. Eric D'Hoker and Daniel Z Freedman. Supersymmetric gauge theories and the AdS/CFT correspondence. In Strings, Branes and Extra Dimensions: TASI 2001, pages 3–159. World Scientific, 2004.
120. Viktor Jahnke. Recent developments in the holographic description of quantum chaos. Adv. High Energy Phys., 2019:9632708, 2019.
121. Subir Sachdev. Condensed Matter and AdS/CFT. Lect. Notes Phys., 828:273–311, 2011.
122. Tatsuma Nishioka. Entanglement entropy: holography and renormalization group. Rev. Mod. Phys., 90(3):035007, 2018.
123. Arpan Bhattacharyya. Lessons for Gravity from Entanglement. PhD thesis, 2015.
124. Makoto Natsuume. AdS/CFT Duality User Guide, volume 903. Springer, 2015.
125. Johanna Erdmenger, Nick Evans, Ingo Kirsch, and Ed Threlfall. Mesons in Gauge / Gravity Duals - A Review. Eur. Phys. J. A, 35:81–133, 2008.
126. Chandan Jana, R. Loganayagam, and Mukund Rangamani. Open quantum systems and Schwinger-Keldysh holograms. JHEP, 07:242, 2020.
127. Hong Liu and Julian Sonner. Holographic systems far from equilibrium: a review. Rep. Prog. Phys., 83(1):016001, 2019.
128. Niklas Beisert et al. Review of AdS/CFT Integrability: An Overview. Lett. Math. Phys., 99:3–32, 2012.
129. Tanay Kibe, Prabha Mandayam, and Ayan Mukhopadhyay. Holographic spacetime, black holes and quantum error correcting codes: A review.
130. Albert Einstein, Boris Podolsky, and Nathan Rosen. Can quantum mechanical description of physical reality be considered complete? Phys. Rev., 47:777–780, 1935.
131. Charles W. Misner and John A. Wheeler. Classical physics as geometry: Gravitation, electromagnetism, unquantized charge, and mass as properties of curved empty space. Ann. Phys., 2:525–603, 1957.
132. Albert Einstein and N. Rosen. The Particle Problem in the General Theory of Relativity. Phys. Rev., 48:73–77, 1935.
133. Mark Van Raamsdonk. Building up spacetime with quantum entanglement. Gen. Rel. Grav., 42:2323–2329, 2010.
134. Maximo Banados, Claudio Teitelboim, and Jorge Zanelli. The Black hole in three-dimensional space-time. Phys. Rev. Lett., 69:1849–1851, 1992.
135. Esko Keski-Vakkuri. Bulk and boundary dynamics in BTZ black holes. Phys. Rev. D, 59:104001, 1999.
136. Leonard Susskind and Ying Zhao. Teleportation through the wormhole. Phys. Rev. D, 98(4):046016, 2018.
137. M. S. Morris and K. S. Thorne. Wormholes in space-time and their use for interstellar travel: A tool for teaching general relativity. Am. J. Phys., 56:395–412, 1988.
138. Matt Visser. Lorentzian Wormholes: From Einstein to Hawking. AIP Press, Woodbury, 1995.
139. Juan Maldacena and Alexey Milekhin. Humanly traversable wormholes. Phys. Rev. D, 103(6):066007, 2021.
140. Byoungsoo Ahn, Yongjun Ahn, Sang-Eon Bak, Viktor Jahnke, and Keun-Young Kim. Holographic teleportation in higher dimensions. JHEP, 07:219, 2021.
141. Nariman Charlkie. Tabletop Quantum Gravity: Roadmap. Master’s thesis, Lebanese University, Hadath, Beirut, 2020.
142. Douglas Stanford. New roles for wormholes. https://www.youtube.com/watch?v=-hfcApA9e8Q, May 2020.
143. Xian O. Camanho, Jose D. Edelstein, Juan Maldacena, and Alexander Zhiboedov. Causality Constraints on Corrections to the Graviton Three-Point Coupling. JHEP, 02:020, 2016.
144. Irwin I. Shapiro. Fourth Test of General Relativity. Phys. Rev. Lett., 13:789–791, 1964.
145. Pavan Hosur, Xiao-Liang Qi, Daniel A Roberts, and Beni Yoshida. Chaos in quantum channels. J. High Energy Phys., 2016(2):4, 2016.
146. W. Israel. Thermo-field dynamics of black holes. Physics Letters A, 57(2):107–110, 1976.
147. Yasushi Takahashi and Hiroomi Umezawa. Thermo field dynamics. International Journal of Modern Physics B, 10(13-14):1755–1805, January 1996.
148. Juan Maldacena and Xiao-Liang Qi. Eternal traversable wormhole. arXiv preprint arXiv:1804.00491, 2018.
149. Daniel A Roberts, Douglas Stanford, and Leonard Susskind. Localized shocks. Journal of High Energy Physics, 2015(3):51, 2015.
150. Koji Hashimoto, Keiju Murata, and Ryosuke Yoshii. Out-of-time-order correlators in quantum mechanics. J. High Energy Phys., 2017(10):138, 2017.
151. C. W. von Keyserlingk, Tibor Rakovszky, Frank Pollmann, and S. L. Sondhi. Operator hydrodynamics, otocs, and entanglement growth in systems without conservation laws. Phys. Rev. X, 8:021013, Apr 2018.
152. Yunxiang Liao and Victor Galitski. Nonlinear sigma model approach to many-body quantum chaos: Regularized and unregularized out-of-time-ordered correlators. Phys. Rev. B, 98:205124, Nov 2018.
153. Gregory Bentsen, Tomohiro Hashizume, Anton S. Buyskikh, Emily J. Davis, Andrew J. Daley, Steven S. Gubser, and Monika Schleier-Smith. Treelike interactions and fast scrambling with cold atoms. Phys. Rev. Lett., 123:130601, Sep 2019.
154. Etienne Lantagne-Hurtubise, Stephan Plugge, Oguzhan Can, and Marcel Franz. Diagnosing quantum chaos in many-body systems using entanglement as a resource. Phys. Rev. Res., 2(1):013254, 2020.
155. Don N. Page. Average entropy of a subsystem. Phys. Rev. Lett., 71:1291–1294, Aug 1993.
156. Don N. Page. Information in black hole radiation. Phys. Rev. Lett., 71:3743–3746, Dec 1993.
157. Lov K Grover. Quantum mechanics helps in searching for a needle in a haystack. Phys. Rev. Lett., 79(2):325, 1997.
158. Ping Gao and Daniel Louis Jafferis. A traversable wormhole teleportation protocol in the syk model. JHEP, 2021(7):1–44, 2021.
159. Xiao Liang Qi and Alexandre Streicher. Quantum epidemiology: operator growth, thermal effects, and SYK. Journal of High Energy Physics, 2019(8), 2019.
160. Xiao Mi, Pedram Roushan, Chris Quintana, Salvatore Mandrà, Jeffrey Marshall, Charles Neill, Frank Arute, Kunal Arya, Juan Atalaya, Ryan Babbush, et al. Information scrambling in quantum circuits. *Science*, 374(6574):1479–1483, 2021.
161. Immanuel Bloch, Jean Dalibard, and Sylvain Nascimbene. Quantum simulations with ultracold quantum gases. *Nat. Phys.*, 8(4):267–276, 2012.
162. Christian Gross and Immanuel Bloch. Quantum simulations with ultracold atoms in optical lattices. *Science*, 357(6355):995–1001, 2017.
163. Thomas F Gallagher. *Rydberg atoms*. Number 3. Cambridge University Press, 2005.
164. Mark Saffman, Thad G Walker, and Klaus Mølmer. Quantum information with rydberg atoms. *Rev. Mod. Phys.*, 82(3):2313, 2010.
165. Xiaolong Wu, Xinhui Liang, Yaoli Tian, Fan Yang, Cheng Chen, Yong-Chun Liu, Meng Khoon Tey, and Li You. A concise review of rydberg atom based quantum computation and quantum simulation. *Chin. Phys. B*, 2020.
166. Andrew A Houck, Hakan E Türeci, and Jens Koch. On-chip quantum simulation with superconducting circuits. *Nat. Phys.*, 8(4):292–299, 2012.
167. Morten Kjaergaard, Mollie E Schwartz, Jochen Braumüller, Philip Krantz, Joel I-J Wang, Simon Gustavsson, and William D Oliver. Superconducting qubits: Current state of play. *Annu. Rev. Condens. Matter Phys.*, 11:369–395, 2020.
168. Lieven M K Vandersypen and Isaac L Chuang. Nmr techniques for quantum control and computation. *Rev. Mod. Phys.*, 76(4):1037, 2005.
169. Jonathan A Jones. Quantum computing with nmr. *Prog. NMR Spectrosc.*, 59:91–120, 2011.
170. Ivan Oliveira, Roberto Sarthour Jr, Tito Bonagamba, Eduardo Azevedo, and Jair C C Freitas. NMR quantum information processing. Elsevier, 2011.
171. Alán Aspuru-Guzik and Philip Walther. Photonic quantum simulators. *Nat. Phys.*, 8(4):285–291, 2012.
172. Darrick E Chang, Vladan Vuletić, and Mikhail D Lukin. Quantum nonlinear optics—photon by photon. *Nat. Photonics*, 8(9):685–694, 2014.
173. Antoine Browaeys, Daniel Barredo, and Thierry Lahaye. Experimental investigations of dipole–dipole interactions between a few rydberg atoms. *J. Phys. B: At. Mol. Opt. Phys.*, 49(15):152001, 2016.
174. I Mourachko, D Comparat, F De Tomasi, A Fioretti, P Nosbaum, V M Akulin, and P Pillet. Many-body effects in a frozen rydberg gas. *Phys. Rev. Lett.*, 80(2):253, 1998.
175. W R Anderson, J R Veale, and Thomas F Gallagher. Resonant dipole-dipole energy transfer in a nearly frozen rydberg gas. *Phys. Rev. Lett.*, 80(2):249, 1998.
176. Nicolas Schlosser, Georges Reymond, Igor Protsenko, and Philippe Grangier. Sub-poissonian loading of single atoms in a microscopic dipole trap. *Nature*, 411(6841):1024–1027, 2001.
177. Manuel Endres, Hannes Bernien, Alexander Keesling, Harry Levine, Eric R Anschuetz, Alexandre Krajenbrink, Crystal Senko, Vladan Vuletić, Markus Greiner, and Mikhail D Lukin. Atom-by-atom assembly of defect-free one-dimensional cold atom arrays. *Science*, 354(6315):1024–1027, 2016.
178. Daniel Barredo, Sylvain De Léséleuc, Vincent Lienhard, Thierry Lahaye, and Antoine Browaeys. An atom-by-atom assembler of defect-free arbitrary two-dimensional atomic arrays. *Science*, 354(6315):1021–1023, 2016.
179. Hyosub Kim, Woojun Lee, Han-gyeol Lee, Hanlae Jo, Yunheung Song, and Jaewook Ahn. In situ single-atom array synthesis using dynamic holographic optical tweezers. *Nat. Commun.*, 7(1):1–8, 2016.
180. Woojun Lee, Hyosub Kim, and Jaewook Ahn. Three-dimensional rearrangement of single atoms using actively controlled optical microtraps. *Opt. Express*, 24(9):9816–9825, 2016.
181. Daniel Ohl de Mello, Dominik Schäffner, Jan Werkmann, Tilman Preuschoff, Lars Kohfahl, Malte Schlosser, and Gerhard Birkl. Defect-free assembly of 2d clusters of more than 100 single-atom quantum systems. *Phys. Rev. Lett.*, 122(20):203601, 2019.
182. Daniel Barredo, Vincent Lienhard, Sylvain De Leseleuc, Thierry Lahaye, and Antoine Browaeys. Synthetic three-dimensional atomic structures assembled atom by atom. *Nature*, 561(7721):79–82, 2018.
183. Hannes Bernien, Sylvain Schwartz, Alexander Keesling, Harry Levine, Ahmed Omran, Hannes Pichler, Soonwon Choi, Alexander S Zibrov, Manuel Endres, Markus Greiner, Vladan Vuletić, and Mikhail D. Lukin. Probing many-body dynamics on a 51-atom quantum simulator. *Nature*, 551(7682):579–584, 2017.
184. Pascal Scholl, Michael Schuler, Hannah J Williams, Alexander A Eberharter, Daniel Barredo, Kai-Niklas Schymik, Vincent Lienhard, Louis-Paul Henry, Thomas C Lang, Thierry Lahaye, Andreas M Läuchli, and Antoine Browaeys. Quantum simulation of 2d antiferromagnets with hundreds of rydberg atoms. *Nature*, 595(7866):233–238, 2021.
185. Sepehr Ebadi, Tout T Wang, Harry Levine, Alexander Keesling, Giulia Semeghini, Ahmed Omran, Dolev Bluvstein, Rhine Samajdar, Hannes Pichler, Wen Wei Ho, Soonwon Choi, Subir Sachdev, Markus Greiner, Vladan Vuletić, and Mikhail D Lukin. Quantum phases of matter on a 256-atom programmable quantum simulator. *Nature*, 595(7866):227–232, 2021.
186. Dieter Jaksch, Juan Ignacio Cirac, Peter Zoller, Steve L Rolston, Robin Côté, and Mikhail D Lukin. Fast quantum gates for neutral atoms. *Phys. Rev. Lett.*, 85(10):2208, 2000.
187. Mikhail D Lukin, Michael Fleischhauer, Robin Côté, LuMing Duan, Dieter Jaksch, Juan Ignacio Cirac, and Peter Zoller. Dipole blockade and quantum information processing in mesoscopic atomic ensembles. *Phys. Rev. Lett.*, 87(3):037901, 2001.
188. Thad G Walker and Mark Saffman. Entanglement of two atoms using rydberg blockade. In *Advances in Atomic, Molecular, and Optical Physics*, volume 61, pages 81–115. Elsevier, 2012.
189. E Urban, Todd A Johnson, T Henage, L Isenhower, D D Yavuz, T G Walker, and M Saffman. Observation of rydberg blockade between two atoms. *Nat. Phys.*, 5(2):110–114, 2009.
190. Alpha Gaëtan, Yevhen Mirosnychenko, Tatjana Wilk, Amodsen Chotia, Matthieu Viteau, Daniel Comparat, Pierre Pillet, Antoine Browaeys, and Philippe Grangier. Observation of collective excitation of two individual atoms in the rydberg blockade regime. *Nat. Phys.*, 5(2):115–118, 2009.
191. Tatjana Wilk, A Gaëtan, C Evellin, J Wolters, Y Mirosnychenko, P Grangier, and Antoine Browaeys. Entanglement of two individual neutral atoms using rydberg blockade. *Phys. Rev. Lett.*, 104(1):010502, 2010.
192. Daniel Comparat and Pierre Pillet. Dipole blockade in a cold rydberg atomic sample. *J. Opt. Soc. Am. B*, 27(6):A208–A232, 2010.
193. G. Semeghini, H. Levine, A. Keesling, S. Ebadi, T. T. Wang, D. Bluvstein, R. Verresen, H. Pichler, M. Kalinowski, R. Samajdar, A. Omran, S. Sachdev, A. Vishwanath, M. Greiner, V. Vuletić, and M. D. Lukin. Probing topological spin liquids on a programmable quantum simulator. *Science*, 374(6572):1242–1247, 2021.
194. Ivaylo S Madjarov, Jacob P Covey, Adam L Shaw, Joonhee Choi, Anant Kale, Alexandre Cooper, Hannes Pichler, Vladimir Schkolnik, Jason R Williams, and Manuel Endres. High-fidelity entanglement and detection of alkaline-earth rydberg atoms. *Nat. Phys.*, 16(8):857–861, 2020.
195. L Isenhower, E Urban, X L Zhang, A T Gill, T Henage, Todd A Johnson, T G Walker, and M Saffman. Demonstration of a neutral atom controlled-not quantum gate. *Phys. Rev. Lett.*, 104(1):010503, 2010.
196. Harry Levine, Alexander Keesling, Giulia Semeghini, Ahmed Omran, Tout T Wang, Sepehr Ebadi, Hannes Bernien, Markus Greiner, Vladan Vuletić, Hannes Pichler, and Mikhail D. Lukin. Parallel implementation of high-fidelity multiqubit gates with neutral atoms. *Phys. Rev. Lett.*, 123(17):170503, 2019.
197. X L Zhang, L Isenhower, AT Gill, T G Walker, and M Saffman. Deterministic entanglement of two neutral atoms via rydberg blockade. *Phys. Rev. A*, 82(3):030306, 2010.
198. K M Maller, M T Lichtman, T Xia, Y Sun, M J Piotrowicz, A W Carr, L Isenhower, and M Saffman. Rydberg-blockade controlled-not gate and entanglement in a two-dimensional array of neutral-atom qubits. *Phys. Rev. A*, 92(2):022336, 2015.
199. Manoj K Joshi, F Kranzl, A Schuckert, I Lovas, Christine Maier, Rainer Blatt, Michael Knap, and Christian F Roos. Observing emergent hydrodynamics in a long-range quantum magnet. *arXiv preprint arXiv:2107.00033*, 2021.
200. Jiehang Zhang, Guido Pagano, Paul W Hess, Antonis Kyprianidis, Patrick Becker, Harvey Kaplan, Alexey V Gorshkov, Z-X Gong, and Christopher Monroe. Observation of a many-body dynamical phase transition with a 53-qubit quantum simulator. *Nature*, 551(7682):601–604, 2017.
201. D Kielpinski, B E King, C J Myatt, Cass A Sackett, Q A Turchette, Wayne M Itano, Christopher Monroe, David J Wineland, and Wojciech H Zurek. Sympathetic cooling of trapped ions for quantum logic. *Phys. Rev. A*, 61(3):032310, 2000.
202. Jonathan P Home, David Hanneke, John D Jost, Dietrich Leibfried, and David J Wineland. Normal modes of trapped ions in the presence of anharmonic trap potentials. *New J. Phys.*, 13(7):073026, 2011.
203. Juan Ignacio Cirac and Peter Zoller. Quantum computations with cold trapped ions. *Phys. Rev. Lett.*, 74(20):4091, 1995.
204. L Feng, W L Tan, A De, A Menon, A Chu, Guido Pagano, and Christopher Monroe. Efficient ground-state cooling of large trapped-ion chains with an electromagnetically-induced-transparency tripod scheme. *Phys. Rev. Lett.*, 125(5):053001, 2020.
205. Anders Sørensen and Klaus Mølmer. Entanglement and quantum computation with ions in thermal motion. *Phys. Rev. A*, 62(2):022311, 2000.
206. G Kirchmair, J Benhelm, F Zähringer, R Gerritsma, Christian F Roos, and Rainer Blatt. Deterministic entanglement of ions in thermal states of motion. *New J. Phys.*, 11(2):023002, 2009.
207. G Kirchmair, J Benhelm, F Zähringer, R Gerritsma, Christian F Roos, and R Blatt. High-fidelity entanglement of ca+ 43 hyperfine clock states. *Phys. Rev. A*, 79(2):020304, 2009.
208. E E Edwards, S Korenblit, K Kim, R Islam, M-S Chang, J K Freericks, G-D Lin, Lu-Ming Duan, and Christopher Monroe. Quantum simulation and phase diagram of the transverse-field ising model with three atomic spins. *Phys. Rev. B*, 82(6):060412, 2010.
209. John P Gaebler, Ting Rei Tan, Y Lin, Y Wan, R Bowler, Adam C Keith, S Glancy, K Coakley, E Knill, D Leibfried, and David J Wineland. High-fidelity universal gate set for be+ ion qubits. *Phys. Rev. Lett.*, 117(6):060505, 2016.
210. Ting Rei Tan, John P Gaebler, Yiheng Lin, Yong Wan, R Bowler, D Leibfried, and David J Wineland. Multi-element logic gates for trapped-ion qubits. *Nature*, 528(7582):380–383, 2015.
211. Laird Egan, Dripto M Debroy, Crystal Noel, Andrew Risinger, Daiwei Zhu, Debopriyo Biswas, Michael Newman, Muyuan Li, Kenneth R Brown, Marko Cetina, et al. Fault-tolerant operation of a quantum error-correction code. *arXiv preprint arXiv:2009.11482*, 2020.
212. Craig R. Clark, Holly N. Tinkey, Brian C. Sawyer, Adam M. Meier, Karl A. Burkhardt, Christopher M. Seck, Christopher M. Shappert, Nicholas D. Guise, Curtis E. Volin, Spencer D. Fallek, Harley T. Hayden, Wade G. Rellergert, and Kenton R. Brown. High-fidelity bell-state preparation with $^{40}$Ca$^+$ optical qubits. *Phys. Rev. Lett.*, 127:130505, Sep 2021.
213. L. García-Álvarez, I. L. Egusquiza, L. Lamata, A. del Campo, J. Sonner, and E. Solano. Digital quantum simulation of minimal AdS/CFT. *Phys. Rev. Lett.*, 119:040501, Jul 2017.
214. Ryan Babbush, Dominic W Berry, and Hartmut Neven. Quantum simulation of the sachdev-ye-kitaev model by asymmetric qubitization. *Phys. Rev. A*, 99(4):040301, 2019.
215. Alexander J. Buser, Hrant Gharibyan, Masanori Hanada, Masazumi Honda, and Junyu Liu. Quantum simulation of gauge theory via orbifold lattice. *JHEP*, 09:034, 2021.
216. Hrant Gharibyan, Masanori Hanada, Masazumi Honda, and Junyu Liu. Toward simulating superstring/M-theory on a quantum computer. *JHEP*, 07:140, 2021.
217. Torin F. Stetina, Anthony Ciavarella, Xiaosong Li, and Nathan Wiebe. Simulating Effective QED on Quantum Computers. *Quantum*, 6:622, 2022.
218. Anthony Ciavarella, Natalie Klco, and Martin J. Savage. Trailhead for quantum simulation of SU(3) Yang-Mills lattice gauge theory in the local multiplet basis. *Phys. Rev. D*, 103(9):094501, 2021.
219. Zohreh Davoudi, Norbert M. Linke, and Guido Pagano. Toward simulating quantum field theories with controlled phonon-ion dynamics: A hybrid analog-digital approach. *Phys. Rev. Res.*, 3(4):043072, 2021.
220. Christopher Culver and David Schaich. Quantum computing for lattice supersymmetry. In *38th International Symposium on Lattice Field Theory*, 12 2021.
221. Anthony N. Ciavarella and Ivan A. Chernyshev. Preparation of the SU(3) Lattice Yang-Mills Vacuum with Variational Quantum Methods. 12 2021.
222. Junyu Liu, Jinzhao Sun, and Xiao Yuan. Towards a variational Jordan-Lee-Preskill quantum algorithm. 9 2021.
223. Natalie Klco, Alessandro Roggero, and Martin J. Savage. Standard Model Physics and the Digital Quantum Revolution: Thoughts about the Interface. 7 2021.
224. Masazumi Honda, Etsuko Itou, Yuta Kikuchi, Lento Nagano, and Takuya Okuda. Classically emulated digital quantum simulation for screening and confinement in the Schwinger model with a topological term. *Phys. Rev. D*, 105(1):014504, 2022.
225. Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. A quantum approximate optimization algorithm. *arXiv preprint arXiv:1411.4028*, 2014.
226. Stuart Hadfield, Zhihui Wang, Bryan O’Gorman, Eleanor G Rieffel, Davide Venturelli, and Rupak Biswas. From the quantum approximate optimization algorithm to a quantum alternating operator ansatz. *Algorithms*, 12(2):34, 2019.
227. Marco Cerezo, Andrew Arrasmith, Ryan Babbush, Simon C Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, and Patrick J. Coles. Variational quantum algorithms. *Nat. Rev. Phys.*, pages 1–20, 2021.
228. Christian Kokail, Christine Maier, Rick van Bijnen, Tiff Brydges, Manoj K Joshi, Petar Jurcevic, Christine A Muschik, Pietro Silvi, Rainer Blatt, Christian F Roos, and Peter Zoller. Self-verifying variational quantum simulation of lattice models. *Nature*, 569(7756):355–360, 2019.
229. Matthew P Harrigan, Kevin J Sung, Matthew Neeley, Kevin J Satzinger, Frank Arute, Kunal Arya, Juan Atalaya, Joseph C Bardin, Rami Barends, Sergio Boixo, et al. Quantum approximate optimization of non-planar graph problems on a planar superconducting processor. *Nat. Phys.*, 17(3):332–336, 2021.
230. Guido Pagano, Aniruddha Bapat, Patrick Becker, Katherine S Collins, Arinjoy De, Paul W Hess, Harvey B Kaplan, Antonis Kyprianidis, Wen Lin Tan, Christopher Baldwin, Lucas T Brady, Abhinav Deshpande, Fangli Liu, Stephen Jordan, Alexey V Gorshkov, and Christopher Monroe. Quantum approximate optimization of the long-range ising model with a trapped-ion quantum simulator. *Proc. Natl. Acad. Sci.*, 117(41):25396–25401, 2020.
231. Peter J J O’Malley, Ryan Babbush, Ian D Kivlichan, Jonathan Romero, Jarrod R McClean, Rami Barends, Julian Kelly, Pedram Roushan, Andrew Tranter, Nan Ding, et al. Scalable quantum simulation of molecular energies. *Phys. Rev. X*, 6(3):031007, 2016.
232. Cornelius Hempel, Christine Maier, Jonathan Romero, Jarrod McClean, Thomas Monz, Heng Shen, Petar Jurcevic, Ben P Lanyon, Peter Love, Ryan Babbush, et al. Quantum chemistry calculations on a trapped-ion quantum simulator. *Phys. Rev. X*, 8(3):031022, 2018.
233. Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J Love, Alán Aspuru-Guzik, and Jeremy L O’Brien. A variational eigenvalue solver on a photonic quantum processor. *Nat. Commun.*, 5(1):1–7, 2014.
234. Eugene F Dumitrescu, Alex J McCaskey, Gaute Hagen, Gustav R Jansen, Titus D Morris, T Papenbrock, Raphael C Pooser, David Jarvis Dean, and Pavel Lougovski. Cloud quantum computing of an atomic nucleus. *Phys. Rev. Lett.*, 120(21):210501, 2018.
235. Abhinav Kandala, Antonio Mezzacapo, Kristan Temme, Maika Takita, Markus Brink, Jerry M Chow, and Jay M Gambetta. Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets. *Nature*, 549(7671):242–246, 2017.
236. Natalie Klco, Eugene F Dumitrescu, Alex J McCaskey, Titus D Morris, Raphael C Pooser, Mikel Sanz, Enrique Solano, Pavel Lougovski, and Martin J Savage. Quantum-classical computation of schwinger model dynamics using quantum computers. *Phys. Rev. A*, 98(3):032331, 2018.
237. Bhuvanesh Sundar, Roger Paredes, David T Damanik, Leonardo Duenas-Osorio, and Kaden R A Hazzard. A quantum algorithm to count weighted ground states of classical spin hamiltonians. arXiv preprint arXiv:1908.01745, 2019.
238. Christian Kokail, Bhuvanesh Sundar, Torsten V. Zache, Andreas Elben, Benoît Vermersch, Marcello Dalmonte, Rick van Bijnen, and Peter Zoller. Quantum variational learning of the entanglement hamiltonian. *Phys. Rev. Lett.*, 127:170501, Oct 2021.
239. Dave Wecker, Matthew B Hastings, and Matthias Troyer. Progress towards practical quantum variational algorithms. *Phys. Rev. A*, 92(4):042303, 2015.
240. Roy J. Garcia, You Zhou, and Arthur Jaffe. Quantum scrambling with classical shadows. *Phys. Rev. Research*, 3:033155, Aug 2021.
241. Machiel S Blok, V V Ramasesh, Thomas Schuster, K O’Brien, J M Kreikebaum, D Dahlem, A Morvan, Beni Yoshida, Norman Y Yao, and Irfan Siddiqi. Quantum information scrambling on a superconducting qutrit processor. *Phys. Rev. X*, 11(2):021010, 2021.
242. B. Vermersch, A. Elben, L. M. Sieberer, N. Y. Yao, and P. Zoller. Probing scrambling using statistical correlations between randomized measurements. *Phys. Rev. X*, 9:021061, Jun 2019.
243. Andrew J. Daley. Quantum trajectories and open many-body quantum systems. *Advances in Physics*, 63(2):77–149, 2014.
244. Ning Bao and Yuta Kikuchi. Hayden-Preskill decoding from noisy Hawking radiation. *Journal of High Energy Physics*, 2021(2):17, 2021.
245. Xiao-Liang Qi, Emily J Davis, Avikar Periwal, and Monika Schleier-Smith. Measuring operator size growth in quantum quench experiments. arXiv preprint arXiv:1906.00524, 2019.
246. Leonard Susskind. Complexity and Newton’s Laws. *Front. Phys.*, 8:262, 2020.
247. Leonard Susskind and Ying Zhao. Complexity and Momentum. *JHEP*, 21:239, 2020.
248. J. L. F. Barbon, J. Martin-Garcia, and M. Sasesta. Proof of a Momentum/Complexity Correspondence. *Phys. Rev. D*, 102(10):101901, 2020.
249. Henry W. Lin, Juan Maldacena, and Ying Zhao. Symmetries Near the Horizon. *JHEP*, 08:049, 2019.
250. Ro Jefferson and Robert C. Myers. Circuit complexity in quantum field theory. *JHEP*, 10:107, 2017.
251. Lucas Hackl and Robert C. Myers. Circuit complexity for free fermions. JHEP, 07:139, 2018.
252. Rifath Khan, Chethan Krishnan, and Sanchita Sharma. Circuit Complexity in Fermionic Field Theory. Phys. Rev. D, 98(12):126001, 2018.
253. Shira Chapman, Michal P. Heller, Hugo Marrochio, and Fernando Pastawski. Toward a Definition of Complexity for Quantum Field Theory States. Phys. Rev. Lett., 120(12):121602, 2018.
254. Arpan Bhattacharyya, Arvind Shekar, and Aninda Sinha. Circuit complexity in interacting QFTs and RG flows. JHEP, 10:140, 2018.
255. Tibra Ali, Arpan Bhattacharyya, S. Shahjul Haque, Eugene H. Kim, and Nathan Moynihan. Time Evolution of Complexity: A Critique of Three Methods. JHEP, 04:087, 2019.
256. Shira Chapman, Jens Eisert, Lucas Hackl, Michal P. Heller, Ro Jefferson, Hugo Marrochio, and Robert C. Myers. Complexity and entanglement for thermofield double states. SciPost Phys., 6(3):034, 2019.
257. Arpan Bhattacharyya, Pratik Nandy, and Aninda Sinha. Renormalized Circuit Complexity. Phys. Rev. Lett., 124(10):101602, 2020.
258. Arpan Bhattacharyya. Circuit complexity and (some of) its applications. Int. J. Mod. Phys. E, 30(07):2130005, 2021.
259. Pawel Caputa and Javier M. Magan. Quantum Computation as Gravity. Phys. Rev. Lett., 122(23):231302, 2019.
260. Mario Flory and Michal P. Heller. Conformal field theory complexity from Euler-Arnold equations. JHEP, 12:091, 2020.
261. Johanna Erdmenger, Marius Gerbershagen, and Anna-Lena Weigel. Complexity measures from geometric actions on Virasoro and Kac-Moody orbits. JHEP, 11:003, 2020.
262. Nicolas Chagnet, Shira Chapman, Jan de Boer, and Claire Zukowski. Complexity for conformal field theories in general dimensions. arXiv preprint arXiv:2103.06920, 2021.
263. E. Rabinovici, A. Sánchez-Garrido, R. Shir, and J. Sonner. Operator complexity: a journey to the edge of Krylov space. JHEP, 06:062, 2021.
264. Shira Chapman and Giuseppe Policastro. Quantum computational complexity from quantum information to black holes and back. The European Physical Journal C, 82(2):128, 2022.
265. Ahmed Almheiri, Raghu Mahajan, and Juan Maldacena. Islands outside the horizon. arXiv preprint arXiv:1910.11077, 2019.
266. Geoffrey Penington. Entanglement wedge reconstruction and the information paradox. JHEP, 2020(9):1–84, 2020.
267. Alex Peach and Simon F. Ross. Tensor Network Models of Multiboundary Wormholes. Class. Quant. Grav., 34(10):105011, 2017.
268. J. Wheeler. Geons. Phys. Rev., 97:511–536, Jan 1955.
269. David Garfinkle and Andrew Strominger. Semiclassical wheeler wormhole production. Phys. Lett. B, 256(2):146–149, 1991.
270. Herman Verlinde. Wormholes in quantum mechanics. arXiv preprint arXiv:2105.02129, 2021.
271. Rik van Breukelen and Kyriakos Papadodimas. Quantum teleportation through time-shifted AdS wormholes. Journal of High Energy Physics, 2018(8):142, 2018.
272. Flavio S Nogueira, Souvik Banerjee, Moritz Dorband, René Meyer, Jeroen van den Brink, and Johanna Erdmenger. Geometric phases distinguish entangled states in wormhole quantum mechanics. arXiv preprint arXiv:2109.06190, 2021.
273. A. Zaffaroni. Introduction to the AdS-CFT correspondence. Class. Quant. Grav., 17:3571–3597, 2000.
274. P. Di Francesco, P. Mathieu, and D. Senechal. Conformal Field Theory. Graduate Texts in Contemporary Physics. Springer-Verlag, New York, 1997.
275. Arjun Bagchi and Reza Fareghbal. BMS GCA Redux: Towards Flatspace Holography from Non-Relativistic Symmetries. JHEP, 10:092, 2012.
276. Elena Caceres, Arnab Kundu, Ayan K. Patra, and Sanjit Shashi. A Killing Vector Treatment of Multiboundary Wormholes. JHEP, 02:149, 2020.
277. Vincent Lienhard, Sylvain de Léséleuc, Daniel Barredo, Thierry Lahery, Antoine Browaeys, Michael Schuler, Louis-Paul Henry, and Andreas M Läuchli. Observing the space-and time-dependent growth of correlations in dynamically tuned synthetic ising models with antiferromagnetic interactions. Phys. Rev. X, 8(2):021070, 2018.
278. Elmer Guardado-Sanchez, Peter T Brown, Debayan Mitra, Trithip Devakul, David A Huse, Peter Schauß, and Waseem S Bakr. Probing the quench dynamics of antiferromagnetic correlations in a 2d quantum ising spin system. Phys. Rev. X, 8(2):021069, 2018.
279. Peter Schauß, Johannes Zeiher, Takeshi Fukuhara, Sebastian Hild, Marc Cheneau, Tommaso Macrì, Thomas Pohl, Immanuel Bloch, and Christian Groß. Crystallization in ising quantum magnets. Science, 347(6229):1455–1458, 2015.
280. Woojun Lee, Minhyuk Kim, Hanlae Jo, Yunheung Song, and Jaewook Ahn. Coherent and dissipative dynamics of entangled few-body systems of rydberg atoms. *Phys. Rev. A.*, 99(4):043404, 2019.
281. Sylvain De Léséleuc, Sébastien Weber, Vincent Liehnard, Daniel Barredo, Hans Peter Büchler, Thierry Lahaye, and Antoine Browaeys. Accurate mapping of multilevel rydberg atoms on interacting spin-1/2 particles for the quantum simulation of ising models. *Phys. Rev. Lett.*, 120(11):113602, 2018.
282. Peter Schaaf, Marc Cheneau, Manuel Endres, Takeshi Fukuhara, Sebastian Hild, Ahmed Omran, Thomas Pohl, Christian Gross, Stefan Kuhr, and Immanuel Bloch. Observation of spatially ordered structures in a two-dimensional rydberg gas. *Nature*, 491(7422):87–91, 2012.
283. Henning Labuhn, Daniel Barredo, Sylvain Ravets, Sylvain De Léséleuc, Tommaso Mancini, Thierry Lahaye, and Antoine Browaeys. Tunable two-dimensional arrays of single rydberg atoms for realizing quantum ising models. *Nature*, 534(7609):667–670, 2016.
284. Johannes Zeiher, Rick Van Bijnen, Peter Schaaf, Sebastian Hild, Jae-yoon Choi, Thomas Pohl, Immanuel Bloch, and Christian Gross. Many-body interferometry of a rydberg-dressed spin lattice. *Nat. Phys.*, 12(12):1095–1099, 2016.
285. Johannes Zeiher, Jae-yoon Choi, Antonio Rubio-Abadal, Thomas Pohl, Rick Van Bijnen, Immanuel Bloch, and Christian Gross. Coherent many-body spin dynamics in a long-range interacting ising chain. *Phys. Rev. X*, 7(4):041063, 2017.
286. Elmer Guardado-Sanchez, Benjamin M Spar, Peter Schauss, Ron Belyansky, Jeremy T Young, Przemyslaw Bienias, Alexey V Gorshkov, Thomas Iadecola, and Waseem S Bakr. Quench dynamics of a fermi gas with strong nonlocal interactions. *Phys. Rev. X*, 11(2):021036, 2021.
287. Victoria Borish, Ognjen Marković, Jacob A Hines, Shankari V Rajagopal, and Monika Schleier-Smith. Transverse-field ising dynamics in a rydberg-dressed atomic gas. *Phys. Rev. Lett.*, 124(6):063601, 2020.
288. Y-Y Jan, A M Hankin, Tyler Keating, Ivan H Deutsch, and G W Biedermann. Entangling atomic spins with a rydberg-dressed spin-flip blockade. *Nat. Phys.*, 12(1):71–74, 2016.
289. Sylvain de Léséleuc, Vincent Liehnard, Pascal Scholl, Daniel Barredo, Sébastien Weber, Nicolai Lang, Hans Peter Büchler, Thierry Lahaye, and Antoine Browaeys. Observation of a symmetry-protected topological phase of interacting bosons with rydberg atoms. *Science*, 365(6455):775–780, 2019.
290. W P Su, J R Schrieffer, and Ao J Heeger. Solitons in polyacetylene. *Phys. Rev. Lett.*, 42(25):1698, 1979.
291. Ao J Heeger, Steven Kivelson, J R Schrieffer, and W P Su. Solitons in conducting polymers. *Rev. Mod. Phys.*, 60(3):781, 1988.
292. Christopher Monroe, David M Meekhof, Barry E King, Wayne M Itano, and David J Wineland. Demonstration of a fundamental quantum logic gate. *Phys. Rev. Lett.*, 75(25):4714, 1995.
293. Ferdinand Schmidt-Kaler, Hartmut Häffner, Mark Riebe, Stephan Gulde, Gavin PT Lancaster, Thomas Deuschle, Christoph Becher, Christian F Roos, Jürgen Eschner, and Rainer Blatt. Realization of the cirac-zoller controlled-not quantum gate. *Nature*, 422(6930):408–411, 2003.
294. F Schmidt-Kaler, Hartmut Häffner, S Gulde, M Riebe, G P T Lancaster, T Deuschle, C Becher, W Hänsel, J Eschner, Christian F Roos, and Rainer Blatt. How to realize a universal quantum gate with trapped ions. *Appl. Phys. B*, 77(8):789–796, 2003.
295. M Riebe, K Kim, P Schindler, Thomas Monz, P O Schmidt, T K Körber, W Hänsel, Hartmut Häffner, Christian F Roos, and Rainer Blatt. Process tomography of ion trap quantum gates. *Phys. Rev. Lett.*, 97(22):220407, 2006.
296. Farokh Mivehvar, Francesco Piazza, Tobias Donner, and Helmut Ritsch. Cavity qed with quantum gases: new paradigms in many-body physics. *Advances in Physics*, 70(1):1–153, 2021.
297. Ben P Lanyon, Cornelius Hempel, Daniel Nigg, Markus Müller, Rene Gerritsma, F Zähringer, Philipp Schindler, Julio T Barreiro, Markus Rambach, Gerhard Kirchmair, M Henrich, Peter Zoller, Rainer Blatt, and Christian F Roos. Universal digital quantum simulation with trapped ions. *Science*, 334(6052):57–61, 2011.
298. Kiwhan Kim, M-S Chang, Rajibul Islam, Sincha Korenblit, Lu-Ming Duan, and Christopher Monroe. Entanglement and tunable spin-spin couplings between trapped ions using multiple transverse modes. *Phys. Rev. Lett.*, 103(12):120502, 2009.
299. Sincha Korenblit, Dvir Kafri, Wess C Campbell, Rajibul Islam, Emily E Edwards, Zhe-Xuan Gong, Guin-Dar Lin, Lu-Ming Duan, Jungsang Kim, Ki-hwan Kim, and Christopher Monroe. Quantum simulation of spin models on an arbitrary lattice with trapped ions. *New J. Phys.*, 14(9):095024,
300. Philip Richerme, Crystal Senko, Simcha Korenblit, Jacob Smith, Aaron Lee, Rajibul Islam, Wesley C Campbell, and Christopher Monroe. Quantum catalysis of magnetic phase transitions in a quantum simulator. Phys. Rev. Lett., 111(10):100506, 2013.
301. R Islam, Crystal Senko, Wes C Campbell, S Korenblit, J Smith, A Lee, EE Edwards, C-C J Wang, J K Freericks, and C Monroe. Emergence and frustration of magnetism with variable-range interactions in a quantum simulator. Science, 340(6132):583–587, 2013. |
A method to implement the reservoir-wave hypothesis using phase-contrast magnetic resonance imaging
Robert D.M. Gray\textsuperscript{a,*}, Kim H. Parker\textsuperscript{b}, Michael A. Quail\textsuperscript{c}, Andrew M. Taylor\textsuperscript{c}, Giovanni Biglino\textsuperscript{c,d}
\textsuperscript{a} CoMPLEX, University College London, London, United Kingdom
\textsuperscript{b} Bioengineering Department, Imperial College London, London, United Kingdom
\textsuperscript{c} Centre for Cardiovascular Imaging, Institute of Cardiovascular Science, University College London & Great Ormond Street Hospital for Children, NHS Foundation Trust, London, United Kingdom
\textsuperscript{d} Bristol Heart Institute, University of Bristol, Bristol, United Kingdom
\textbf{Graphical Abstract}
PC-CMR $\rightarrow$ Area \& velocity $\rightarrow$ Fitting $\rightarrow$ $\tau$, $C$, $\ln A_\infty$
\textbf{Abstract}
The reservoir-wave hypothesis states that the blood pressure waveform can be usefully divided into a “reservoir pressure” related to the global compliance and resistance of the arterial system, and an “excess pressure” that depends on local conditions. The formulation of the reservoir-wave hypothesis applied to the area waveform is shown, and the analysis is applied to area and velocity data from high-resolution phase-contrast cardiovascular magnetic resonance (CMR) imaging. A validation study shows the success of the principle, with the method producing largely robust and physically reasonable parameters, and the linear relationship between flow and wave pressure seen in the traditional pressure formulation is retained. The method was successfully tested on a cohort of 20 subjects (age range: 20–74 years; 17 males).
* Corresponding author.
E-mail address: firstname.lastname@example.org (R. Gray).
http://dx.doi.org/10.1016/j.mex.2016.08.004
2215-0161/© 2016 Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
This paper:
- Demonstrates the feasibility of deriving reservoir data non-invasively from CMR.
- Includes a validation cohort (CMR data).
- Suggests clinical applications of the method.
**ARTICLE INFO**
*Method name:* Implementation of the reservoir-wave hypothesis using phase-contrast magnetic resonance imaging
*Keywords:* Hemodynamics, Wave intensity analysis, Windkessel, Magnetic resonance imaging, Reservoir, Ventriculo-arterial coupling
*Article history:* Received 30 January 2016; Accepted 22 August 2016; Available online 25 August 2016
**Methods**
**Rationale**
The reservoir (Windkessel) model of arterial mechanics [1] represents the arteries as a single compliant compartment with a single outflow resistance. This model predicts the exponentially falling pressure in diastole, but not the sharp rise in pressure seen in systole. The more modern wave theory separates the pressure waveform into a combination of forward and backward travelling waves [2]. This analysis provides a good description during systole, where the separation produces an initial forward compression wave followed by backward reflected waves. However, in diastole this approach predicts large cancelling forward and backward waves to explain a falling pressure and a zero velocity [3]. The reservoir-wave hypothesis is an attempt to combine these two methods, benefiting from the good predictions of the wave theory in systole and the reservoir theory in diastole.
In the reservoir-wave hypothesis [4], the pressure waveform in diastole is fitted with a single exponential model from which the reservoir parameters are extracted. We can then calculate the reservoir pressure and the excess pressure, which refers to the remaining part of the pressure waveform when the reservoir is subtracted. The latter is found to have the interesting property of being proportional to the flow into the arterial system. This indicates that the nearly identical excess pressure ($P_{ex}$) and inflow ($Q_{in}$) waveforms result only from forward-traveling compression and decompression waves generated by the left ventricle [5].
The reservoir-wave parameters describing the reservoir and excess pressure have been shown to have physiological and pharmacological significance. Their importance has been discussed in various areas including as a measure of left ventricular relaxation [6], as being related to hypertension [7], as a possible therapeutic target [8] and as a significant predictor of cardiovascular events carrying information for selection of pharmacological therapies [9].
Considering the clinical value of the reservoir-wave model, an implementation of the method for medical imaging, and cardiovascular magnetic resonance (CMR) imaging in particular, is desirable.
**Implementation**
The analysis is incorporated in a Python script. CMR data provides area (A) and velocity (U) as functions of time, sampled with a set temporal resolution (approximately 10 ms), and hence the flow $Q_{in} = UA$.
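As a minimal illustration of this first step, the sketch below combines sampled area and velocity signals into the inflow $Q_{in} = UA$ and locates the first zero of $Q_{in}$, which the text later uses as the diastolic cut-off time $T$. The function name, the list-based signal representation, and the zero-crossing rule are illustrative assumptions, not the paper's actual script.

```python
def inflow_and_cutoff(area, velocity, dt):
    """Compute Q_in(t) = U(t) * A(t) from sampled CMR signals.

    `area` and `velocity` are equal-length lists sampled every `dt`
    seconds. Also returns T, the first time (after the start of
    systole at index 0) at which Q_in falls to or below zero.
    Returns (q, T); T is None if the inflow never reaches zero.
    """
    q = [u * a for u, a in zip(velocity, area)]
    T = None
    for i in range(1, len(q)):  # skip index 0: systole starts there
        if q[i] <= 0.0:
            T = i * dt
            break
    return q, T
```

In practice the segmented $A$ and $U$ signals would come from the registration pipeline described in the Validation section; here any pair of equal-length sequences will do.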
In complete analogy to the derivation of the reservoir pressure, we derive the reservoir area as follows. We have an equation for the reservoir pressure [3]:
$$\frac{dP_{res}}{dt} + \frac{(P_{res} - P_{\infty})}{RC} = \frac{Q_{in}(t)}{C}$$ \hspace{1cm} (1)
where R and C are the resistance and compliance of the arterial system respectively, $Q_{in}$ is the flow of blood into the arterial system and $P_{\infty}$ is the pressure at which flow through the circulation ceases. We
formulate a relationship between pressure and area using distensibility [10]
\[ D = \frac{1}{A} \frac{dA}{dP} = \frac{d\ln A}{dP} \]
(2)
and we can then rewrite an expression for the reservoir pressure in terms of \( \ln A \)
\[ \frac{d\ln A_{res}}{dt} + \frac{(\ln A_{res} - \ln A_\infty)}{RC} = \frac{D}{C} Q_{in}(t) \]
(3)
and solve to
\[ \ln A_{res} = \ln A_\infty + (\ln A_0 - \ln A_\infty)e^{-\frac{t}{\tau}} + \frac{D}{C} e^{-\frac{t}{\tau}} \int_0^t Q_{in}(t')e^{\frac{t'}{\tau}} dt' \]
(4)
where \( \tau = RC \). \( t=0 \) is defined to be the start of systole and \( \ln A_0 \) is the value of \( \ln A(t) \) at that time.
During diastole, which starts at time \( t = T \), we assume that the inflow \( Q_{in} = 0 \), and Eq. (4) becomes
\[ \ln A_{res} = \ln A_\infty + (\ln A_0 - \ln A_\infty + \ln A_T)e^{-\frac{t}{\tau}} \]
(5)
where
\[ \ln A_T = \frac{D}{C} \int_0^T Q_{in}(t')e^{\frac{t'}{\tau}} dt' \]
(6)
Area is thus a function of the three reservoir parameters
\[ \ln A_{res}(\ln A_\infty, \tau, C) = \ln A_\infty + (\ln A_0 - \ln A_\infty + \ln A_T(\tau, C))e^{-\frac{t}{\tau}} \]
(7)
The cut-off time \( T \) is set as the time when the inflow \( Q_{in} \) first goes to 0. The three parameters are then found by fitting Eq. (5) to the data after this point with the Levenberg-Marquardt nonlinear fitting algorithm using the Lmfit package in Python [11].
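The fit of Eq. (5) is a three-parameter nonlinear problem, but for a fixed $\tau$ the model $\ln A_{res} = \ln A_\infty + B\,e^{-t/\tau}$ (with $B = \ln A_0 - \ln A_\infty + \ln A_T$) is linear in $(\ln A_\infty, B)$. The sketch below exploits this: it scans a grid of candidate $\tau$ values, solves the two-parameter linear least-squares problem in closed form for each, and keeps the best. This is a simple, dependency-free stand-in for the Levenberg-Marquardt fit the paper performs with Lmfit; the grid search and function name are assumptions.

```python
import math

def fit_reservoir_area(t, lnA, tau_grid):
    """Fit lnA(t) ~ lnA_inf + B * exp(-t/tau) to diastolic data.

    For each candidate tau the model is linear in (lnA_inf, B), so
    we solve the normal equations in closed form and keep the tau
    with the smallest sum of squared residuals.
    Returns (lnA_inf, B, tau).
    """
    best = None
    n = len(t)
    for tau in tau_grid:
        x = [math.exp(-ti / tau) for ti in t]
        sx, sy = sum(x), sum(lnA)
        sxx = sum(xi * xi for xi in x)
        sxy = sum(xi * yi for xi, yi in zip(x, lnA))
        denom = n * sxx - sx * sx
        if abs(denom) < 1e-12:
            continue  # degenerate basis (e.g. tau far too large)
        B = (n * sxy - sx * sy) / denom
        lnA_inf = (sy - B * sx) / n
        resid = sum((lnA_inf + B * xi - yi) ** 2
                    for xi, yi in zip(x, lnA))
        if best is None or resid < best[0]:
            best = (resid, lnA_inf, B, tau)
    _, lnA_inf, B, tau = best
    return lnA_inf, B, tau
```

A Levenberg-Marquardt routine (as used in the paper) would refine $\tau$ continuously rather than over a grid, but on clean diastolic data both approaches land on the same exponential.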
This allows derivation of reservoir-wave information non-invasively based on CMR data. The derived parameters \( \tau \) and \( C \) are the same as those produced from invasive pressure measurements. The third parameter \( \ln A_\infty \) is not, but would be expected to carry important information analogously to \( P_\infty \) from the pressure formulation of the analysis.
An important difference from the pressure formulation is that the analysis requires knowledge of \( Q_{in}(t) \) in order to calculate \( \ln A_T \). This means the analysis is limited to the ascending aorta where we can obtain a measurement of the volume flow rate into the whole arterial system.
**Validation**
In order to assess the feasibility of the analysis, area and velocity data were acquired by CMR from the ascending aorta position in a small cohort of subjects (\( n = 20 \); age range: 20–74 years; 17 males). Ethical approval for analysis of CMR data for research purposes was in place. Data were acquired with a 1.5 T scanner (Avanto, Siemens, Erlangen, Germany) using two spine coils and one body-matrix coil. The imaging plane was planned just above the sinotubular junction, using orthogonal long axis cine images of the ascending aorta. The sequence was a prospectively triggered, spiral, velocity encoded spoiled gradient echo acquisition accelerated with SENSE [12]. The time resolution was 9.56 ms and the spatial resolution was 2.1 \( \times \) 2.1 mm. The breathhold was approximately 11 s and VENC was set at 180 cm/s in all cases. A and U signals were extracted by automatic segmentation propagation of the aorta using nonrigid registration, as described in detail elsewhere [13]. \( \ln A \) and \( Q_{in} \) time series were calculated from these data, and then the reservoir analysis was run in Python as described. The resulting parameters are displayed in Fig. 1. In 19 of the 20 cases, the fitting algorithm worked robustly and converged on parameters that are physically realistic and with a reasonably tight distribution. The range of fitted \( \tau \) values was 121.05–2069.01 ms, while the values of \( \ln A_\infty \) ranged from 0.08 to 1.90. A representative case is shown in Fig. 2, exemplifying the reservoir signal fitted to the raw log-area data,
Fig. 1. The reservoir area analysis gives physically reasonable and well-distributed $\tau$ and $\ln A_\infty$ parameters. Note that the fitting was not completed successfully in one patient, likely due to the noisiness of the area signal in that specific patient. Horizontal bars represent the median.
Fig. 2. Representative example data and reservoir-wave results. (A) Normalised log aortic area waveform (blue) and reservoir log area (green), which is found by fitting to the data in diastole. (B) Aortic flow (red) and excess log area (black) which is found by subtracting the reservoir from the raw data. The approximately linear relationship between flow and excess, which demonstrates the validity of this approach, is seen.
and the flow and excess log-area, in analogy to prior observations based on the pressure-formulation [3].
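The validity check described above can be made quantitative: subtract the fitted reservoir from the raw log-area to obtain the excess, then compute its Pearson correlation with the inflow. A correlation near 1 reflects the near-linear excess–flow relationship seen in Fig. 2. This is a pure-Python sketch; the function name and the use of Pearson correlation as the linearity metric are assumptions, not the paper's stated procedure.

```python
import math

def excess_and_correlation(lnA, lnA_res, q):
    """Excess log-area = raw log-area minus fitted reservoir.

    Returns the excess signal and its Pearson correlation with the
    inflow q; a value near 1 indicates the expected proportionality.
    """
    ex = [a - r for a, r in zip(lnA, lnA_res)]
    n = len(ex)
    mx, my = sum(ex) / n, sum(q) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(ex, q))
    vx = sum((x - mx) ** 2 for x in ex)
    vy = sum((y - my) ** 2 for y in q)
    return ex, cov / math.sqrt(vx * vy)
```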
In the majority of cases, the fitting was completed without issue and was robust. In most cases the Nelder-Mead simplex algorithm was also able to perform robust fitting. However, in some cases, bounding the fit to physically reasonable parameters, such as setting $\tau < 10,000$, was required. In one case, even bounding could not achieve a successful fit. We discuss this in the next section.
A second important consideration is diastolic flow. This is assumed to be negligible, but in many cases is not. This will affect the analysis and the fitted parameters and so should be taken into consideration, particularly if the analysis were to be performed on data from the descending aorta. A potential extension to this work would be fitting a general form of Eq. (4) to the data, without assuming negligible diastolic flow.
Clinically relevant applications in the future may include studying hypertensive patients, but also refining analysis of ventriculo-arterial coupling in patients with congenital heart disease, offering additional insight into arterial mechanics in scenarios where arterial wall properties (and hence the reservoir effect) can be compromised.
Limitations
We found that our method converged to seemingly reasonable parameters in 19 of 20 cases. Despite this, we have found the method has certain limitations. In particular, noisiness in the data can fairly easily confound the fitting algorithm. If noise or some other defect causes the diastolic log-area waveform to deviate too strongly from a simple decaying exponential then the parameter space
becomes extremely flat and the fitting fails. Signal-to-noise ratio and clean data are thus of paramount importance for this method, and smoothing of the data could also be used to enable the fitting algorithm to converge.
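The smoothing suggested above can be as simple as a centered moving average applied to the log-area signal before fitting. The sketch below is one such pre-processing step; the function name and the default window size are illustrative assumptions (the appropriate window depends on the 9.56 ms sampling and the noise level of the specific acquisition).

```python
def moving_average(signal, window=5):
    """Centered moving-average smoothing of a sampled signal.

    Near the ends the window is truncated to the available samples,
    so the output has the same length as the input and a constant
    signal passes through unchanged.
    """
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out
```

Heavier smoothing flattens genuine features of the diastolic decay, so the window should be kept as small as convergence allows.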
Whilst the effect of non-perpendicular positioning of the plane with respect to aortic flow was not systematically tested, it is well known that care should be taken in planning the slice for flow quantification perpendicular to the direction of flow to avoid errors in velocity quantification [14], which in turn could affect flow quantification ($Q_{in}$). In our case, the aortic plane was always carefully planned perpendicular to the direction of flow; this, together with the knowledge that velocity quantification is relatively insensitive to small deviations from true perpendicular within an error of 20° [14], would suggest that $Q_{in}$ was not affected. Also, the effect of inaccuracies in U estimation due to setting VENC too high or too low was not systematically tested and could be explored further.
References
[1] O. Frank, The basic shape of the arterial pulse. First treatise: mathematical analysis, J. Mol. Cell. Cardiol. 22 (1990) 255–277.
[2] F. Pythoud, Separation of arterial pressure waves into their forward and backward running components, J. Biomech. Eng. 118 (2007) 295.
[3] K.H. Parker, An introduction to wave intensity analysis, Med. Biol. Eng. Comput. 47 (2009) 175–188.
[4] J.-J. Wang, A.B. O'Brien, N.G. Shrive, K.H. Parker, J.V. Tyberg, Time-domain representation of ventricular-arterial coupling as a windkessel and wave system, Am. J. Physiol. Heart Circ. Physiol. 284 (2003) H1358–68.
[5] J.V. Tyberg, et al., Wave intensity analysis and the development of the reservoir-wave approach, Med. Biol. Eng. Comput. 47 (2009) 221–232.
[6] S. Nagueh, et al., Recommendations for the evaluation of left ventricular diastolic function by echocardiography, J. Am. Soc. Echocardiogr. 22 (2009) 107–133.
[7] J.V. Tyberg, N.G. Shrive, J.C. Bouwmeester, K.H. Parker, J.-J. Wang, The reservoir-wave paradigm: potential implications for hypertension, Curr. Hypertens. Rev. 4 (2008) 203–213.
[8] J.E. Davies, et al., The arterial reservoir pressure increases with aging and is the major determinant of the aortic augmentation index, Am. J. Physiol. Heart Circ. Physiol. 298 (2010) H580–H586.
[9] J.E. Davies, et al., Excess pressure integral predicts cardiovascular events independent of other risk factors in the conduit artery functional evaluation substudy of Anglo-Scandinavian Cardiac Outcomes Trial, Hypertension 64 (2014) 60–68.
[10] G. Biglino, et al., A non-invasive clinical application of wave intensity analysis based on ultrahigh temporal resolution phase-contrast cardiovascular magnetic resonance, J. Cardiovasc. Magn. Reson. 14 (2012) 57.
[11] M. Newville, A. Ingargiola, T. Stensitzki, D. Allen, LMFIT: non-linear least-square minimization and curve-fitting for python, Zenodo (2014), doi:http://dx.doi.org/10.5281/zenodo.11813.
[12] J. Steeden, D. Atkinson, M. Hansen, A. Taylor, V. Muthurangu, Rapid flow assessment of congenital heart disease with high-spatiotemporal-resolution gated spiral phase-contrast MR imaging, Radiology 260 (2011) 79–87.
[13] F. Odille, J. Steeden, V. Muthurangu, D. Atkinson, Automatic segmentation propagation of the aorta in real-time phase contrast MRI using nonrigid registration, J. Magn. Reson. Imaging 33 (2011) 232–238.
[14] K.S. Nayak, et al., Cardiovascular magnetic resonance phase contrast imaging, J. Cardiovasc. Magn. Reson. 17 (2015) 71. |
July 1: NO MEETING THIS WEEK!
* Now meeting in the Academies of Creative Education building, 1160 Dahlonega Hwy. (use entrance off Pilgrim Road, third left from Hwy. 9, then take first right, 1160 is the second building, which has two sets of double doors on the front. Use the doors to the right.)
Please email Mike Smith at email@example.com if you are willing and able to help with our set up and serving at our in-person meeting.
This week in history: 1776
Continental Congress adopts the Declaration of Independence
In Philadelphia, Pennsylvania, the Continental Congress adopts the Declaration of Independence, which proclaims the independence of the United States of America from Great Britain and its king. The declaration came 442 days after the first volleys of the American Revolution were fired at Lexington and Concord in Massachusetts and marked an ideological expansion of the conflict that would eventually encourage France’s intervention on behalf of the Patriots.
The first major American opposition to British policy came in 1765 after Parliament passed the Stamp Act, a taxation measure to raise revenues for a standing British army in America. Under the banner of “no taxation without representation,” colonists convened the Stamp Act Congress in October 1765 to vocalize their opposition to the tax.
With its enactment in November, most colonists called for a boycott of British goods, and some organized attacks on the customhouses and homes of tax collectors. After months of protest in the colonies, Parliament voted to repeal the Stamp Act in March 1766.
Why did the American Colonies declare independence?
Most colonists continued to quietly accept British rule until Parliament’s enactment of the Tea Act in 1773, a bill designed to save the faltering East India Company by greatly lowering its tea tax and granting it a monopoly on the American tea trade. The low tax allowed the East India Company to undercut even tea smuggled into America by Dutch traders, and many colonists viewed the act as another example of taxation tyranny. In response, militant Patriots in Massachusetts organized the Boston Tea Party, which saw British tea valued at some 18,000 pounds dumped into Boston Harbor.
The British Parliament, outraged by the Boston Tea Party and other blatant acts of destruction of British property, enacted the Coercive Acts, also known as the Intolerable Acts, in 1774. The Coercive Acts closed Boston to merchant shipping, established formal British military rule in Massachusetts, made British officials immune to criminal prosecution in America, and required colonists to quarter British troops.
The colonists subsequently called the first Continental Congress to consider a united American resistance to the British. With the other colonies watching intently, Massachusetts led the resistance to the British, forming a shadow revolutionary government and establishing militias to resist the increasing British military presence across the colony.
In April 1775, Thomas Gage, the British governor of Massachusetts, ordered British troops to march to Concord, Massachusetts, where a Patriot arsenal was known to be located. On April 19, 1775, the British regulars encountered a group of American militiamen at Lexington, and the first shots of the American Revolution were fired. Initially, both the Americans and the British saw the conflict as a kind of civil war within the British Empire: To King George III it was a colonial rebellion, and to the Americans it was a struggle for their rights as British citizens.
However, Parliament remained unwilling to negotiate with the American rebels and instead hired German mercenaries to help the British army crush the rebellion. In response to Britain’s continued opposition to reform, the Continental Congress began to pass measures abolishing British authority in the colonies.
**How did the American Colonies declare independence?**
In January 1776, Thomas Paine published Common Sense, an influential political pamphlet that convincingly argued for American independence and sold more than 500,000 copies in a few months. In the spring of 1776, support for independence swept the colonies, the Continental Congress called for states to form their own governments, and a five-man committee was assigned to draft a declaration.
The Declaration of Independence was largely the work of Virginian Thomas Jefferson. In justifying American independence, Jefferson drew generously from the political philosophy of John Locke, an advocate of natural rights, and from the work of other English theorists.
The first section features the famous lines, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” The second part presents a long list of grievances that provided the rationale for rebellion.
**When did American colonies declare independence?**
On July 2, 1776, the Continental Congress voted to approve a Virginia motion calling for separation from Britain. The dramatic words of this resolution were added to the closing of the Declaration of Independence. Two days later, on July 4, the declaration was formally adopted by 12 colonies after minor revision. New York approved it on July 19. On August 2, the declaration was signed. The Revolutionary War would last for five more years. Yet to come were the Patriot triumphs at Saratoga, the bitter winter at Valley Forge, the intervention of the French, and the final victory at Yorktown in 1781. In 1783, with the signing of the Treaty of Paris with Britain, the United States formally became a free and independent nation.
https://www.history.com/this-day-in-history/american-colonies-declare-independence
**Rotary Club of Forsyth County**
http://www.rotarydistrict6910.org
PO BOX 57
Cumming, GA 30028
firstname.lastname@example.org
**Social Media Links**
For more information about our club click on one of the links below:
Website
Twitter
Facebook
**Past Presidents**
| Year | President |
|------|-----------------|
| 1975 | Tommy Bagwell |
| 1976 | Terry Smith |
| 1977 | Jackie Welch |
| 1978 | Larry Boling |
| 1979 | Zack Rice |
| 1980 | Roger Williams |
| 1981 | Tom Miller |
| 1982 | Gabe Dukas |
| 1983 | Eddie Stowe |
| 1984 | Dana Miles |
| 1985 | Mike Gravitt |
| 1986 | Steve Jackson |
| 1987 | Tim LeBlanc |
| 1988 | Denton Ashway |
| 1989 | Bobby Thomas |
| 1990 | Jim Wheeler |
| 1991 | Rich Brown |
| 1992 | Tim Perry |
| 1993 | Bob McGuinn |
| 1994 | Robert Thuss |
| 1995 | Brian Carpenter |
| 1996 | Charles Ammons |
| 1997 | Bill Kehres |
| 1998 | Jeff Stephens |
| 1999 | John Weaver |
| 2000 | Jim Whitney |
| 2001 | Jon McDaniel |
| 2002 | Keith Argo |
| 2003 | Rich Neville |
| 2004 | Melissa Durand |
| 2005 | Brandon Barron |
| 2006 | Chuck Welch |
| 2007 | Gabe Arango |
| 2008 | Mike Palmer |
| 2009 | Burton Blackmar |
| 2010 | Shan Mize |
| 2011 | Taylor Rice |
| 2012 | George Pirkle |
| 2013 | George Pirkle |
| 2014 | Mike Smith |
| 2015 | Rusty Smith |
| 2016 | Donna Wade |
| 2017 | Eric Duncan |
| 2018 | Ken Terry |
**Paul Harris Fellows**
In 2016 our club became 100% Paul Harris Fellows!
| Year | President | Fellow |
|------|-----------------|----------------|
| 1977 | Tommy Bagwell | Penny McGuinn |
| 1978 | Jim French | Rafe Banks |
| 1986 | Bob McGuinn | Rich Brown |
| 1987 | Gary Allen | Stan Gault |
| 1988 | Jack Manton | Zack Rice |
| 1989 | Bill Carter | Shannon Mize |
| 1990 | Charles Welch | Brenda Thomas |
| 1991 | Larry Boling | Malvelene Vaughan |
| 1992 | Mike Gravitt | Mike Gravitt |
| 1993 | Bill Carter | Shannon Mize |
| 1994 | Charles Welch | George Pirkle |
| 1995 | Brian Carpenter | George Pirkle |
| 1996 | Charles Ammons | Burton Blackmar|
| 1997 | Jim Whitney | Seth Thomas |
| 1998 | Rex Abbott | Donna Wade |
| 1999 | Nancy Abbott | Lucy Thuss |
| 2000 | Melissa Durand | Matt Richmond |
| 2001 | Mike Smith | Dana Miles |
| 2002 | Burton Blackmar | John Heath |
| 2003 | Tim Perry | Chuck Welch |
| 2004 | Melissa Durand | Dana Miles |
| 2005 | Brandon Barron | Eric Duncan |
| 2006 | Chuck Welch | Eric Duncan |
| 2007 | Gabe Arango | Mike Palmer |
| 2008 | Burton Blackmar | John Heath |
**Will Watt Fellows**
| Year | Fellow |
|------|-----------------------|
| 1986 | Tommy Bagwell |
| 1990 | Dick Neville |
| | Bobby Thomas |
| | Lou Douglas |
| 1995 | Brian Carpenter |
| | Bob McGuinn |
| 1998 | Jim Whitney |
| 1991 | Robert Thuss |
| | Vic Shirley |
| | Lou Douglas |
| 1992 | Rafe Banks |
| | Jack Heard |
| | Jim Wheeler |
| 1996 | Leslie McGuinn |
| 1997 | Denton Ashway |
| 1999 | Rex Abbott |
| 2001 | Jon McDaniel |
| | Rich Brown |
| | Bill Kehres |
| 2002 | Malvelene Vaughan |
| 2003 | Chantal Bagwell |
| | Shaun McGuinn |
| | Brian Carpenter |
| | Michael O’Bryan |
| | Mike Montgomery |
| 2003 | Lorne Twiner |
| | Bruce Hearn |
| | Jack Godwin |
| | Jon McDaniel |
| | Mike Montgomery |
| | Joni Owens |
| 2007 | Donna Wade |
| | Eric Duncan |
| | Dana Miles |
| | John Hall |
| 2008 | George Pirkle |
| | Dennis Gravitt |
| 2009 | Dana Miles |
| 2010 | John Hall |
| 2012 | George Pirkle |
| 2013 | Jim Wheeler |
| 2014 | Linda Duncan |
| | D’Arcy Duncan Andrews |
| | Erin Duncan Topel |
| 2015 | Sam Siemon |
| | Gabe Arango |
| | Murray Rice |
If you have a question about the bulletin/programs, or have a program of interest to the club, please contact Stephanie Woody at email@example.com or mobile 678-878-0516.
UNIT 4 DEVELOPMENT-SUPPORT COMMUNICATION (DSC)
Structure
4.0 Objectives
4.1 Introduction
4.2 Population Control and Family Welfare
4.2.1 DSC activities in Population Control
4.2.2 New Challenges
4.2.3 Solutions
4.3 Health and DSC
4.3.1 Health Communication
4.3.2 DSC and Health Behaviour
4.4 Education and Society
4.4.1 Types of Education
4.4.2 Literacy Programme
4.4.3 Education DSC
4.4.4 Uses of Communication for Education
4.5 Environment and Development
4.5.1 Economic Growth and the Environment
4.5.2 DSC and Environment
4.6 Let Us Sum Up
4.7 Further Reading
4.8 Check Your Progress: Model Answers
4.0 OBJECTIVES
After studying this unit, you should be able to:
- describe the role of development-support communication (DSC) in population, health, education, and environment;
- describe in what way DSC could benefit the formulation of communication strategies in the areas of health and family welfare, education in its different aspects, the eradication of adult illiteracy, non-formal education, and environmental issues; and
- identify the scope of DSC in making effective programmes in these areas.
4.1 INTRODUCTION
In Unit 3 of this Block, you came to know about DSC, i.e., Development-support Communication, with agriculture covered as a specific subject. In this unit, we shall cover some socially and economically relevant areas, like Population, Health, Education, and Environment. Similar areas were touched upon in the earlier unit because of the importance these disciplines have attained in relation to the overall development of the country.
In this unit, we shall cover the use of DSC (Development-support Communication) for proper implementation and impact of programmes related to the population control and family welfare measures, Health, Hygiene and Nutrition, Education, and Environment.
4.2 POPULATION CONTROL AND FAMILY WELFARE
Activity 1
I am sure you must have heard people saying that one of the most urgent problems we face today is the population problem. You, yourself, must have been baffled and perturbed by the magnitude of this problem. Before you proceed further, let us involve ourselves in an exercise.
You must have come across many advertisements regarding family planning, use of contraceptives, etc. List all these advertisements from the newspapers, magazines, posters and hoardings. Compare their messages. Find out which message(s) you like most, and why.
The task before India and other developing countries is not merely to get better results within the existing framework of economic and social institutions, but to mould and refashion them in such a way that they effectively contribute to the realization of the wider and deeper social values. Adoption of such an approach would imply that instead of making mechanistic and deterministic messages, the focus would be on human beings. It would, thus, be necessary that development communication is only a support to development. This process has to be continuous and consistent in aiming at raising aspirations of the people so as to help them develop. Basically, development-support communication has to raise the level of social consciousness among people that can help them towards transformation.
We shall now attempt to highlight some of the issues in relation to DSC in population control and family welfare programmes in India. It hardly needs reiteration that the health and family welfare programme is an integral part of the overall development programmes of the country. Fortunately, communication has played a vital role throughout the various developmental phases of the programme, extensively using different communication media and methods. In spite of large efforts at educating and motivating people to accept small family norms, change their health practices, and adopt spacing methods, the achievements of the programme have always fallen short of the expected results. A moot question often asked is why communication efforts have been largely incidental to actual adoption. Some studies have shown that factors such as paucity of resources, the traditional ethos, a low education and literacy profile, diversity of languages and dialects, lack of coordination between communication and policy planners, and an overall resistance to change are responsible for the failure of the various efforts put into this scheme. A deeper analysis, however, shows that in any development programme where formal resources have been deployed to promote communication, socio-structural factors play a vital role in the reception of new ideas in a community. In other words, factors within the socio-cultural set-up have largely been responsible for preventing innovative ideas from becoming functional and operational at the community level.
These contradictions are visible at the grassroots level, where, on the one hand, the multipurpose workers are expected to promote extension education and, on the other, they are handed targets to fulfil. Failure to achieve these targets invites punitive measures. Most administrators view motivational and educational techniques as a kind of magic which, when applied anywhere on anybody, can yield results in the form of acceptance and adoption of family planning measures. It needs to be recognised that implementing educational and motivational efforts in the community requires not only patience, but also capable workers. For effective communication, such contradictions between philosophy and implementation need to be resolved.
4.2.1 DSC Activities in Population Control
In their eagerness to reach a wider audience, many DSC activities in population control and family welfare are gradually losing their informational components and becoming propaganda. An example would be the IUD drive or the mass sterilisation camps, where slogans such as "Nasbandi Karao, Rupia Kamao" ("Get sterilised, earn money") were pursued with a lot of enthusiasm. Such propaganda naturally has a short life, and does more harm than good to the programme.
Yet another issue that needs to be mentioned is that the present DSC efforts in the family welfare programmes seldom use the knowledge and talent available at the local levels. It is becoming evident that expensive mass media technology is receiving higher emphasis at the cost of the traditional media, even though the traditional media are not only acceptable and creative but also affordable. The urgency of containing the population did not allow the planners of the communication policy and programmes to pay due attention to the needs of different socio-economic and tribal groups spread across different regions of the country. Thus, proper attention might not have been given to communication planning. It is assumed that putting messages into the mass media channels would ensure positive results, while the psyche, ethos, and socio-economic milieu of the audience have been given less importance. In addition, the media habits of the target audience, their access to the media, and the appropriateness of the channel have not been researched properly and adequately.
4.2.2 New Challenges
The challenges at the forefront of family planning communication policy can be briefly subsumed under message content, media mix and organization of media. Decentralization of media, a wider choice of suitable media, and a development-integrated approach giving due recognition to the socio-economic environment of the country would also help. The focus and emphasis of the future DSC strategy in the family welfare programme has to be rural-based, addressed to the rural and urban poor as its target audience. For a country like India, where the rates of literacy and the level of purchasing power are low, the choice of the media assumes importance for any meaningful strategy.
The purpose of communication in family welfare should be more information-oriented, and it needs to be supplemented by developing messages through which functional and purposeful relationships among people could be developed. This is important because family planning involves subjective motivation, and the questions relating to it are very personal and delicate in nature. The handling of such questions at the communication strategy level, in terms of message content, has to be very carefully considered. The present message content has been greatly influenced by the "urban alienating culture", which has infused among people a sense of social isolation, powerlessness and frustration. Unless the message content provides information regarding the economic advantages of the family welfare programme, the rural and urban masses will not be attracted towards adoption.
4.2.3 Solutions
The most important issue, which merits attention, is to provide linkage between the DSC strategies and the different welfare programmes as part and parcel of a larger whole. In philosophy and in principle, this approach has been accepted, but it has always lagged behind in implementation for various reasons. The isolated communication approach has brought in contradictions within the developmental programmes. The family welfare programme has come to a stage where rural people appear to be aware of the benefits of family welfare, but the socio-economic pressures operate against it. In this case, the communication strategy should attack the socio-economic pressures that confront the rural and urban poor rather than plan strategies vertically for programme acceptance.
Any DSC strategy, be it for family welfare, health or education, should be such that the communicators treat the rural people as intelligent, conscious, and capable of learning new things. This, naturally, implies that unless the media originate in and relate themselves to the world of values and environment of the community, they are bound to be irrelevant and ineffective. In the broader context, the DSC has to be a dependent rather than an independent variable. In brief, the DSC strategy in the family welfare programme has to be such that it not only empowers but also enables the rural and urban poor to take informed decisions on their personal and delicate questions of population control and family welfare.
Check Your Progress 1
Note: i) Write your answers in the space provided.
ii) Check your answers with the ones given at the end of this unit.
1) List three reasons for the failure of our population control programme.
2) How could communication help in finding a solution to these problems?
4.3 HEALTH AND DEVELOPMENT-SUPPORT COMMUNICATION
Development implies progressive improvement in the living conditions and quality of life of individuals, community and society. Development in one sphere of life leads to development in other spheres. Thus, no sharp distinction can be drawn between economic, social and health development. Economic development is an instrument to achieve social development which, in turn, is necessary to achieve economic development. The purpose of development is to prepare people to lead economically productive and socially satisfying lives. However, social satisfaction and economic productivity are perceived in different ways in different societies. Everywhere, people strive to increase their earnings, leading to an increase in purchasing power, which enables them to get for themselves and their children better and sufficient food, housing, better education, better opportunities for leisure and, most important of all, better health. Unless people have healthful living, they cannot enjoy the other benefits of life. Therefore, health development is essential for social and economic development. This is the reason why activities attempting to improve health and the socio-economic situation should be regarded as mutually complementary rather than competitive. It is an academic debate whether health development only consumes resources, or whether it is an economically productive factor contributing to overall development. For instance, the control of certain communicable diseases often helps to promote development in general. Proper nutrition and reduction of sickness increase the productivity of work. Breaking the vicious circle of malnutrition and infection leads to improvement in the physical and mental development of the child. Vaccinating an entire child population against diseases brings a reduction in child mortality, which can induce a feeling to have a small family.
Further, by drawing on untapped resources — men, finance and material for health development — one can contribute to the awakening of social interest, which is very important for mobilizing people's efforts for development in social and economic fields.
No sector, be it social, economic, or health, can function independently of the others in the process of development. Activities in one field impinge on the goals of another. Therefore, there is a need for cooperation between the social and economic sectors to bring about development, and to promote health development as part of it.
4.3.1 Health Communication
Realizing the complexity of health behaviour, which, in the case of Indian society, is largely guided by informal but deep-rooted socio-cultural values, the country has adopted measures which help the people to keep themselves healthy. Thus, the motivation of the people is attempted through the mass media and interpersonal communication, based on a development-support strategy. The mass media and other communication channels have a tremendous effect on every sphere of human life, but we have to accept that their impact is neither uniform in all fields nor universally predictable. Those who hold the view that the mass media are uniformly effective in economic, agricultural, political or health fields base their reasoning on the theory of the effect of opinion leaders. The concept of the two-step flow of communication simply implies that the opinion leaders take the basic message, translate it into personal terms, and feed it to their own influence network in ways that are acceptable and understandable to the target audience. It has been found, however, that opinion leaders are comparatively more effective in changing the non-health behaviour of an individual; their influence in changing health behaviour is limited. We have evidence that the big landlords were the first to accept changes in agricultural processes and production, but not in health, including family planning. Not all opinion leaders can influence everyone; their influence operates within relatively restricted spheres. Opinion leaders generally specialise in some fields. For example, a progressive agriculturist who is influential in agriculture is not likely to find that this single trait predisposes him to opinion leadership in all rural fields. Most studies suggest that the mass media tend to serve as reinforcing agents rather than as producers of massive changes in attitude.
4.3.2 DSC and Health Behaviour
Given the limited role of mass communication in changing health behaviour, one can think of interpersonal communication, which, in the context of the complexity of health behaviour, assumes greater significance. Here, word-of-mouth and personal communication from a trusted source are significantly more effective than mass communication from a remote source, however prestigious that source might be. Innumerable studies have established that interpersonal communication carries more credibility than mass communication. Health and development-support communication are closely interlinked and mutually interdependent. In a country like India, a DSC strategy needs to be developed in a manner that can cater to the needs of diverse groups based on social and cultural background. Merely transferring health information to the people through mass communication alone will not bring health development. The goal of achieving health behaviour change should be a central
education, and several others. The broad classification of education could be: (1) Formal Education, (2) Nonformal Education, and (3) Extension Education, which will be discussed in succeeding part of this unit.
**Formal education**
Formal education is basically an institutional activity, uniform and subject-oriented, full-time, sequential, hierarchically structured, leading to the award of certificates, degrees and diplomas. The schools, colleges and universities fall under this category.
**Non-formal**
Non-formal education, as its name suggests, is not formal, which means it is:
- flexible,
- life, environment and learner-oriented,
- diversified in content and method,
- non-authoritarian,
- built on learner-participation,
- helpful in mobilizing local resources, and
- an instrument which enriches human and environmental potential.
Non-formal education processes and programmes should, in the long run, lead to:
- creating an awareness in individuals and society, of the prevailing environmental situations and the need for and direction of change,
- cultivating a rational, objective and scientific temper,
- enriching human potential and, thereby, increasing community resources, and promoting individual and group creativity,
- increasing the functional relevance of learning, both to the learners and to the community,
- achieving a greater degree of individual, social, cultural and economic development through democratic action and active participation,
- building a learning environment in which every individual shall have equal opportunity for continuing self-learning, and
- a better sharing of opportunities and social wealth and, particularly, a more equitable and just distribution of knowledge among various sections of society.
Extension, or organized face-to-face communication, is kept within the scope of the DSC. Extension provides a form of DSC which might be more effective than the mass media. Extension education proved very effective in agriculture, and has since been widely practised all over the world, especially in the third world countries.
It was thought that simple communication tools could be used to educate the farmers about various new innovations. Once motivated, they would use the new hybrid seeds, fertilizers, machines etc. This happened and, as a result, the food production increased manifold in the last two or three decades.
4.4.2 Literacy Programme
The Farmers’ Functional Literacy Programme is the biggest on-going country-wide programme of adult education. It is in reality a complex non-formal education system at its initial stage. Its implementation is the responsibility of the Central Government, and the scheme is classified as a Central Sector Programme.
There are many development schemes and projects in the country, the efficient implementation of which is hampered by the low level of educational attainments. This is particularly true of the enormous scheme of the High Yield Crop Varieties, since the modernization of agricultural practices has to be accompanied and supported by a programme of manpower development.
The Farmers' Training and Functional Literacy Programme, an inter-ministerial project implemented jointly by the Ministries of Agriculture, Education, and Information and Broadcasting, is an attempt to get a qualified answer to this fundamental challenge. The basic idea behind the project is that there is a direct correlation between the physical and human resources. In other words, this is an integrated approach to a comprehensive rural development programme, to the "Green Revolution". The main goal of the scheme is to support and strengthen one of the basic national objectives: self-sufficiency in food, increase in crop production and growth of agricultural productivity.
The functional literacy component was not only viewed in correlation with other developmental objectives but, from the very beginning, was conceived as a method of training the farmers for development purposes, a comprehensive non-formal educational programme and an opening to continuing education.
4.4.3 Education and DSC
Communication for development purposes should be distinguished from communication for the sake of entertainment (such as Chitrahaar), commercial advertising of soap and toothpaste, or news dissemination (news bulletins, The World This Week, etc.). Entertainment-oriented
communication also has a powerful social influence and possible effect on the attempt to meet the national objectives.
It was agreed that development-support communication should be taken to embrace the following: (a) the infrastructure (economic, technological, organisational/administrative); (b) the information processing and transfer systems; (c) the media, the personnel, the communicators; (d) the recipients; (e) the supporting communication services, i.e., organized interpersonal communication and extension services; (f) the contents; and (g) the purposes or objectives (recognizing that these may vary, but that national development goals are usually central). It was recognized that the central focus was on mass communication, and that a multi-media approach with inter-media comparisons was necessary.
4.4.4 Uses of Communication for Education
The case for uses of communication for education has been convincingly argued on the following grounds:
- communication helps to enlarge mental horizons;
- it can be used to raise levels of aspirations;
- through communication, attention can be focussed on problems having a bearing on the contemporary developmental and educational context;
- it can be effectively employed to build consensus on the new economic and cultural goals;
- through communication, experiments can be encouraged and knowledge relating to their success and/or failure can be widely disseminated; and
- it can also be utilized to teach specific skills and techniques.
To sum up, the DSC for education can play a powerful role in nation-building and development, and can contribute significantly to bringing about social change in the desired direction.
Check Your Progress 3
Note: i) Write your answers in the space provided.
ii) Check your answers with the ones given at the end of this unit.
1) What are the various types of Education?
2) Provide two arguments on how literacy would help in development.
3) Mention ways by which the DSC can help in the development of education.
4.5 ENVIRONMENT AND DEVELOPMENT
In 1987, the World Commission on Environment and Development, a United Nations body, published its findings in 'Our Common Future', known as the Brundtland Report. It says that problems of environment and development are interlinked, and that economic interdependence among nations is increasing. Areas focussed on included:
- population and food security;
- the loss of species and genetic resources;
- energy; and
- industry.
It calls for economic growth based upon 'sustainable development' — meeting the needs of the present without compromising the ability of future generations to meet their own needs. The Brundtland Report called for continued economic growth, while emphasising the need to integrate environment and development.
Activities such as the burning of coal and other fossil fuels, and the use of CFCs as aerosol propellants are leading to a build-up of greenhouse gases in the atmosphere, resulting in damage to earth's climate. The Inter-governmental Panel on Climate Change (IPCC), looking at the climate processes, concludes that, if there is no change in emissions of greenhouse gases, global mean temperature would rise by about 0.3°C a decade in the 21st century, faster than at any time in the past 10,000 years. An increase in global temperature could, in turn, lead to major problems for mankind.
However, as the problem is global in scope, it is important that the different countries of the world take measures to control greenhouse gases together. India is an active member of the IPCC, which was set up in 1988, and is jointly sponsored by the UNEP and the World Meteorological Organisation. In December, 1990, the UN established an intergovernmental negotiating committee to prepare a framework convention on climate change. India is playing a prominent part in these negotiations, and also supports the IPCC which provides help to the negotiations by scientific advice on climatic change.
Deforestation can also contribute to the build up of greenhouse gas in the air, both by the loss of the trees' ability to absorb CO₂ and by releasing the carbon stored in them as CO₂ and methane. Apart from encouraging afforestation, the government should support many projects to promote good forest management.
Ozone Layer: In recent years, scientists have become very concerned about damage to the ozone layer, which protects the earth from the harmful ultra-violet rays of the sun. Much scientific evidence has been gathered which clearly implicates CFCs in stratospheric ozone depletion. In 1985, scientists on the Antarctic Expedition discovered a 'hole' — an area of major depletion — in the ozone layer over the Antarctic, and evidence now shows that this significant depletion of the ozone layer threatens mankind with increased diseases such as skin cancer, as well as the possibility of reduced crop productivity.
4.5.1 Economic Growth and the Environment
The environmental problems that countries face vary with their respective stages of development, the structure of their economies, and their environmental policies. Some problems are associated with the lack of economic development: inadequate sanitation, lack of potable water, and indoor air pollution from bio-mass burning. Many types of land degradation are a root cause of poverty in the developing countries. Here, the challenge is to accelerate equitable income growth and promote access to the necessary resources and technologies. But many other problems are made worse by the growth of economic activity. Industrial and energy-related pollution (local and global), deforestation caused by commercial logging, and overuse of water are the result of economic expansion that fails to take account of the value of the environment. Here, the challenge is to build the recognition of environmental scarcity into decision-making. With or without development, rapid population growth may make it more difficult to address many environmental problems.
Rapid population growth can worsen the mutually reinforcing effects of poverty and environmental damage. The poor are both victims and agents of environmental damage. Since they lack resources and technology, poor farmers resort to cultivating hillsides and move into tropical forest areas, where crop yields on cleared fields usually drop sharply after a few years. What pressures will economic growth place on the natural environment in the coming years? If environmental pollution and degradation were to rise in step with a rise in output, the result would be appalling environmental pollution and damage. Tens of millions of people would become sick or die each year from environmentally caused diseases or disasters. Water shortages would be intolerable, and tropical forests and other natural habitats would decline to a fraction of their current sizes. Fortunately, such an outcome need not occur if sound policies and strong institutional arrangements are put in place.
4.5.2 DSC and Environment
Ignorance is an important cause of environmental damage and a serious impediment to finding solutions. This principle holds true for international negotiations and poor households alike, as is illustrated by the global damage done to the ozone layer by CFCs and the serious implications of indoor air pollution, like smoke, for family health. First, it is necessary to know the facts; second, to determine values and analyze the benefits and costs of alternative measures; and third, to ensure that information is available on public and private choices.
The DSC regarding environmental issues increases access to information. Many governments encourage involvement of local population in tackling environmental issues. But if such involvement has to be effective, the local people need to be well-informed. Some ways to achieve this are:
a) to share/supply information to the local communities at the early stage in identifying a project;
b) to discuss local environmental problems with the affected communities;
c) to allow public comments on the DSC-inputs; and
d) to encourage public comments and discussion on the proposed environmental solutions.
Check Your Progress 4
Note: i) Use the space given below for your answers.
ii) Compare your answers with the ones given at the end of this unit.
1) What are the two important steps that we should take to ensure sustainable development?
__________________________________________________________________________
__________________________________________________________________________
2) Mention two ways by which the DSC can help in greater environmental awareness.
__________________________________________________________________________
__________________________________________________________________________
4.6 LET US SUM UP
Population control, health and hygiene, education and environment are socially and economically very important areas as far as development is concerned. Due to various reasons, largely political, the directions given to the development processes in India have been marred by confused thinking and working at cross-purposes. It is for this reason that the UNDP and other UN institutions have emphasized "the linking of all agencies involved in the planned development work, which constitute the final delivery points, and the audience-recipients". It was felt that this vital link was absent in our developmental efforts.
'The final delivery points' are those that come into direct contact with the "beneficiaries" of any development programme.
In this unit, we have covered the use of the DSC in significant socio-economic areas like Population, Health, Education and Environment. It is well known that in the past four decades, communications have failed to enthuse the poor and the under-privileged, with the result that these sections of society have developed apathy, indifference and submissiveness, and have become totally resigned to forces beyond their control. In short, the communication contents and messages are not yet clear about the goals of development. It was due to these factors that the DSC was sought to be used to raise the level of social consciousness among people so as to help them in their transformation.
Population control measures, despite large-scale efforts, have always fallen short of the resources put into the process. Studies have shown that factors within the socio-cultural set-up have largely been responsible for preventing innovative ideas from becoming functional and operational at the community level.
Health development is essential for socio-economic development. It has been found that mass media have a limited role in actually changing the health behaviour. In this context, the role of the interpersonal channels of communication assumes significance. The communicator cannot ignore the forces which either integrate or disintegrate the community, especially, as far as health behaviour patterns are concerned.
Education and environment are two vital areas in which the DSC can play a decisive role, and here the role of interpersonal communication becomes imperative. Extension provides a forum which might be more effective than the mass media. In the case of environment, the areas that have been focused upon are population and food security, the loss of species and genetic resources, and energy and industry. While several environmental problems have emerged concurrently with increased industrialization, the DSC can play an effective role in providing information and creating awareness among the people. This would help in changing their attitudes and behaviour. The classic case is the need for afforestation in the face of the devastating destruction of our forest wealth.
4.7 FURTHER READING
Shukla, K.S. 1987: The Other Side of Development: Socio-psychological Implications, New Delhi: Sage Publications.
Sharma, S.L. 1990: Social Development: Reflections on the Concept and the Indian Experience, Amritsar.
Todaro, M. 1983: The Struggle for Economic Development, New York and London: Longman.
World Bank, 1992: World Development Report 1992: Development and the Environment, New York: Oxford University Press.
4.8 CHECK YOUR PROGRESS: MODEL ANSWERS
Check Your Progress 1
1) a) Low level of literacy
b) Poor knowledge of the reproduction cycle, especially among the women
c) Low knowledge of the basic health care among the adult population in general
2) First of all, a political will to solve the problem of illiteracy is required. Then, the focus should be on how communication technology can help solve the problem. No single medium can be expected to do this by itself; media selection should therefore be done as per the target audience.
A massive drive to educate the people on the reproduction cycle should be launched. Popular and effective media should be selected to realize this objective.
Check Your Progress 2
1) Need-assessment: assess the level of basic health-care education already existing among the target audience
- Find out about the media habit and access to the media
- Find out about the spread of the media
- Find out the level of literacy
- Prioritise the needs and requirements
- Choose the media to adequately answer the needs
- Address the needs.
Check Your Progress 3
1) ● Formal Education
● Non-formal Education
● Extension Education
2) ● Literacy could provide the population with self-confidence, which would contribute to the development of a person
● Literacy would remove ignorance and, hence, many social problems, like superstition, could be eradicated
3) ● Can give additional information
● Can bring in more clarity to the subject matter
● Can open more channels for learning
Check Your Progress 4
1) ● The development should have a practical rationale.
● The development process must not disturb the eco-systems, which support the development.
2) ● Use local communication channels to inform, educate and communicate to the people.
● Use the local talents to speak to the people belonging to a community. |
SUBJECT:
Approval of Sub Grant Agreement with the Southern Regional Children's Advocacy Center for Closed-Circuit Television and/or Recording Equipment for Use in Criminal Child Abuse Cases and Training.
STAFF RECOMMENDATIONS:
1. Approve the sub grant agreement between the Southern Regional Children's Advocacy Center and the District Attorney's Office.
2. Authorize the District Attorney to sign and approve the sub grant award agreement including any extensions, or amendments.
3. Direct the Auditor-Controller to increase revenue and appropriations as detailed on the attached Budget Journal form.
FISCAL IMPACT:
Appropriations and estimated revenue of $30,000 for the sub grant with the Southern Regional Children's Advocacy Center will be reflected in the District Attorney's Office Fiscal Year 2010-11 budget, since the District Attorney's Office is the administrator for the agreement. The award amount will be used for purchasing iRecord Professional equipment and training needed for the Child Abuse Interviews Referrals and Evaluation Center (CAIRE). The expenditure will be offset with revenue received from the sub grantor by fiscal year end. A required 25% in-kind match of $10,500, included in the agreement for salary and benefit costs of a Deputy District Attorney, is funded in existing appropriations in the District (cont. page 2)
BOARD ACTION AS FOLLOWS: No. 2011-031
On motion of Supervisor O'Brien, Seconded by Supervisor Withrow, and approved by the following vote,
Ayes: Supervisors: O'Brien, Chiesa, Withrow, DeMartini, and Chairman Monteith.
Noes: Supervisors: None
Excused or Absent: Supervisors: None
Abstaining: Supervisor: None
1) X Approved as recommended
2) Denied
3) Approved as amended
4) Other:
MOTION:
ATTEST: CHRISTINE FERRARO TALLMAN, Clerk
FISCAL IMPACT CONTINUED:
Attorney's General Fund Criminal budget. There is no exposure to the County General Fund.
DISCUSSION:
The National Children's Advocacy Center (NCAC) was awarded a grant from the US Department of Justice, Office of Juvenile Justice and Delinquency Prevention, to provide funds for the purchase and/or upgrade of closed-circuit television and/or recording equipment for use in criminal child abuse cases, and for the development and delivery of related personnel training. Through a competitive process, the Southern Regional Children's Advocacy Center (SRCAC), a program of the NCAC, has chosen to fund and enter into a sub grant agreement in the amount of $30,000 with the Stanislaus County District Attorney's Office to purchase recording equipment for the CAIRE Center. The sub grant agreement is effective for a period of one year, from August 1, 2010 through July 31, 2011.
The CAIRE Center provides forensic interviews, medical examinations and therapeutic services to young victims of crime in a single, child-friendly location. The center is jointly operated by the Stanislaus County District Attorney and Stanislaus County Community Services Agency. The CAIRE Center interview is the final interview of the child prior to court, and the recorded interview is then available to use in support of the child's testimony in court. However, the current recording equipment is outdated and does not always record accurately. The outdated system does not indicate how many hours of interview data it has stored on the hard drive. When the hard drive reaches its maximum capacity, the interview with the child has to stop so the Information Technology (IT) department can be contacted to delete prior interviews to free space on the hard drive. The staff and child have to wait until the IT department arrives, which can take anywhere from 60 to 90 minutes. Because the interview has to be stopped, the flow of the interview is disrupted and the child's stress increases while waiting to continue. The new recording equipment to be purchased is iRecord. The new iRecord equipment will provide continuity and follow-through that does not currently exist during the child victim interview process. This will decrease stress and anxiety for the child, and also reduce the IT and staff time spent backing up and freeing data space.
POLICY ISSUES:
Acceptance of this contract will assist the District Attorney's office in meeting the Board's priority of A Safe Community by providing assistance to victims of crime.
STAFFING IMPACTS:
There are no staffing impacts at this time.
CONTACT INFORMATION:
Carol Shipley, Assistant District Attorney (209) 525-5550
SUB GRANT AGREEMENT
BETWEEN
THE SOUTHERN REGIONAL CHILDREN’S ADVOCACY CENTER, A PROJECT OF
THE NATIONAL CHILDREN’S ADVOCACY CENTER
AND
County of Stanislaus, District Attorney
FOR
THE CLOSED-CIRCUIT TELEVISION AND RECORDING TECHNOLOGY
FOR USE IN CHILD ABUSE CASES PROGRAM
THIS Sub Grant Agreement is entered into by and between the Southern Regional Children’s Advocacy Center (SRCAC), a project of the National Children’s Advocacy Center (NCAC), and County of Stanislaus, District Attorney (“Sub Grantee”). NCAC has been awarded a grant from the US Department of Justice, Office of Juvenile Justice and Delinquency Prevention (OJJDP) through grant number 2009-MU-MU-K005 to provide funds for the purchase and/or upgrade of closed-circuit television (CCTV) and/or recording equipment for use in criminal child abuse cases, and for the development and delivery of related personnel training. SRCAC desires to fund Sub Grantee’s Project, as described in Attachment A of this Sub Grant, to further the objectives of the Grant.
In consideration of the payments, terms and conditions set forth in this Sub Grant Agreement, Sub Grantee agrees to perform the Project in accordance therewith. This Sub Grant Agreement and its attachments establish the entire agreement between SRCAC and Sub Grantee and may only be changed by prior written approval of both parties, as set out in Section 29 of this Sub Grant Agreement.
TERMS AND CONDITIONS
Article I: General Provisions
1. SUBGRANT PERIOD. This Sub Grant Agreement is effective August 1, 2010 - July 31, 2011.
2. SUBGRANT AMOUNT. SRCAC agrees to grant the Sub Grantee an amount not to exceed $30,000 in allowable costs for the implementation of the Project described in Attachment A (Concept Narrative and Project Timeline) in accordance with the budget set forth in Attachment B (Budget Summary), both of which are incorporated by reference herein. Sub Grantee agrees to provide and document, to the reasonable satisfaction of the SRCAC, a cash or in-kind match of $10,500.
3. SUB GRANT REPRESENTATIVES. Unless otherwise specified under this Sub Grant for particular activities, SRCAC designates Cym Doggett, Project Director, as its representative in carrying out the terms of this Sub Grant. The Sub Grantee designates Carol Shipley as its representative in carrying out the terms of this Sub Grant. Either designee may change upon written notice to the other Party.
4. TERMINATION. Either Party may terminate this Sub Grant Agreement at any time and for any reason, including but not limited to Sub Grantee’s failure to implement the Project to SRCAC’s satisfaction, by giving written notice of termination to the other Party’s representative, and by specifying therein the effective date of the termination. Notice of termination must be received at least thirty (30) days prior to the effective date. SRCAC shall have the right to terminate this Sub Grant Agreement immediately if it has good-faith reason to believe that Sub Grantee has engaged in financial mismanagement or misfeasance of Sub Grant funds. The Sub Grant Agreement may be terminated on behalf of the SRCAC only by its Executive Director, Project Director, or Chief Financial Officer. In the event of termination for cause, SRCAC will provide notice in writing as soon as reasonably possible under the circumstances, but in any case within not more than seven days. Notwithstanding the foregoing, if funds to finance this Sub Grant Agreement become unavailable, SRCAC may terminate the Sub Grant Agreement upon no less than twenty-four (24) hours notice by telephone, e-mail or other writing to Sub Grantee’s representative.
After notice of termination, Sub Grantee shall not incur any new obligations with respect to the terminated portion of the Sub Grant Agreement, and shall terminate any consulting agreements or contracts to the extent that they relate to the terminated work. SRCAC will pay for authorized costs incurred through the date of termination. Sub Grantee will furnish all necessary reports of work completed, or in progress, through the date of termination.
5. COMPLIANCE REQUIREMENTS FOR FEDERALLY FUNDED GRANTS. If any portion of this Agreement will be paid with any federal funds, Sub Grantee understands and agrees that compliance is required with the provisions of 2 CFR 215.48 and Appendix A to 2 CFR 215, which are hereby adopted by reference and made a part of this Agreement. It is Sub Grantee’s responsibility to review, understand, and comply with these requirements, a copy of which may be obtained by accessing the Code of Federal Regulations at www.access.gpo.gov.
6. AUDITS. Sub Grantee agrees to cooperate in any audit of its organization commenced by SRCAC, the funding agency or their authorized representatives. Audit activities shall include the examination and/or copying of all contracts, invoices, materials, payrolls, records of personnel, conditions of employment, and other data, including other sources of funding for Sub Grantee’s activities. Deficiencies noted in any audit report must be fully cleared by Sub Grantee within thirty (30) days after Sub Grantee’s receipt of notice of such deficiencies.
7. RETENTION of RECORDS. Sub Grantee shall retain program reports, financial records, supporting documents, statistical records and all other records pertinent to the Project for a period of three years from the date SRCAC submits the final financial report to OJJDP for the Grant. Records that are the subject of audit findings shall be retained for three years after such findings have been resolved or from the date SRCAC submits the final financial report to OJJDP for the Grant, whichever last occurs. Records for nonexpendable property acquired with funds under this Sub Grant shall be retained for three years after final disposition of such property.
8. LIABILITY. Sub Grantee shall indemnify and hold SRCAC/NCAC and their officers, agents and employees harmless against any and all liability imposed or claimed, including reasonable attorney’s fees and other legal expenses, arising, directly or indirectly from any act or failure of Sub Grantee or Sub Grantee’s assistants, employees or agents, including all claims relating to the injury or death of any person or damage to any property or any cause of action of whatever nature, that may arise out of the performance of the Sub Grant.
9. ASSIGNMENTS. No part of this Sub Grant Agreement shall be contracted, assigned or delegated without the express written approval of SRCAC.
10. CONFIDENTIALITY. Reports, information and data given to or prepared or assembled by Sub Grantee under this Sub Grant shall be kept confidential and shall not be made available to any individual or organization without prior written approval by SRCAC.
11. RELATIONSHIP. The relationship created under this Sub Grant Agreement between SRCAC, including NCAC, and Sub Grantee is that of grantor and grantee, respectively, and in no way creates an employer/employee relationship between them, and any of Sub Grantee’s employees or agents.
12. WORKERS COMPENSATION. Sub Grantee agrees to provide workers’ compensation insurance, where applicable, for Sub Grantee’s employees and agents and agrees to hold harmless and indemnify NCAC/SRCAC, their officers, agents and employees from any and all claims arising out of any injury, disability, or death of any of Sub Grantee’s employees or agents.
13. STATE AND FEDERAL TAXES. As Sub Grantee is not NCAC’s or SRCAC’s employee, Sub Grantee is responsible for paying all applicable required taxes, including but not limited to United States state or federal taxes. In particular:
SRCAC will not withhold FICA (Social Security) from Sub Grantee’s payments;
SRCAC will not make state or federal unemployment insurance contributions on Sub Grantee’s behalf;
SRCAC will not withhold state or federal income tax from payment to Sub Grantee;
SRCAC will not make disability insurance contributions on behalf of Sub Grantee;
SRCAC will not obtain worker’s compensation insurance on behalf of Sub Grantee.
14. PROHIBITION OF TERRORISM FINANCING. Sub Grantee warrants and agrees that it shall comply with all United States laws and Executive Orders prohibiting transactions with and/or the provision of support and resources to individuals and organizations associated with terrorism. The Sub Grantee further warrants that Sub Grantee, its employees and agents shall abstain from any such activities.
15. GOVERNING LAW. This Sub Grant shall be governed by and construed in accordance with the laws of the State of Illinois, United States of America.
16. DISPUTES. The Parties shall use good faith efforts to cooperatively resolve disputes and problems that arise in connection with this Sub Grant. If the Parties are unable to resolve a dispute, and at the written request of a Party, each Party shall appoint, within seven (7) days of receipt of the written request, a knowledgeable representative with decision-making authority for the matter in dispute. The representatives shall meet within fourteen (14) days of receipt of the written request and shall negotiate in good faith to resolve the dispute. Neither Party nor their representatives may choose OJJDP or any OJJDP personnel or contracted staff to be a representative in any dispute related to this Agreement.
If the representatives are unable to resolve the dispute within thirty (30) days of receipt of the written request, either Party may demand that the matter be resolved by arbitration under the rules of the American Arbitration Association, before a panel of three arbiters, with each Party to select one
arbiter and the two arbiters to mutually select the third. Any such arbitration shall take place in Huntsville, Alabama, USA. Neither Party nor their arbiters may choose OJJDP or any OJJDP personnel or contracted staff to be an arbiter in any dispute related to this Agreement.
The panel's decision or award shall be final and binding on both Parties, and enforceable and subject to entry of judgment by a court of competent jurisdiction. Each Party hereby agrees to subject itself to the personal jurisdiction, and specifically waives any objection it may have to such personal jurisdiction, of the federal or state courts sitting in Huntsville, Alabama, USA, as the venue in which the arbitration is conducted, for the enforcement of any arbitration award. The Party losing the arbitration shall reimburse the Party who prevailed for expenses and reasonable attorneys' fees, in the amount determined by the panel.
17. PARTIAL INVALIDITY. If any provision in this Sub Grant is held by a court of competent jurisdiction to be invalid, void or unenforceable, the remaining provisions will nevertheless continue in full force without being impaired or invalidated in any way.
Article II: Programmatic Provisions
18. PROJECT COMPLIANCE. The Sub Grantee shall comply with the Concept Narrative contained in Attachment A. The Sub Grantee shall track the use of equipment using a data collection form (provided by, or adapted from, the SRCAC) and monitor the performance of the Project. The Sub Grantee shall allow SRCAC or its agents access to case records (in accordance with laws and policies regarding confidentiality) and ensure that practitioners will be available for interviews with SRCAC evaluators. The Sub Grantee shall meet with SRCAC and other appropriate persons upon SRCAC’s request to discuss Project activities.
19. TRAINING FOR PROJECT STAFF. Training shall include provisions for at least one Project Staff to attend the National Symposium on Child Abuse in Huntsville, Alabama, March 28-31, 2011. Sub Grantee will receive a registration scholarship, and be reimbursed for up to $500 in travel and per diem, for one project staff person. Sub Grantee agrees to comply with federal per diem requirements and rates for Huntsville, Alabama.
20. PROGRAMMATIC REPORTING.
A. Quarterly Program Reports. The Sub Grantee shall submit a Progress Report to the Program Manager within thirty (30) days of the end of each calendar quarter following the execution of this Sub Grant Agreement (see Attachment C for reporting schedule). If the state is the unit of government and will be distributing to small jurisdictions, the state office maintains responsibility for submitting all reports. Failure to submit the report may result in loss of access to grant funds. More frequent submission may be required of the Sub Grantee as a condition of continued participation in the program.
B. Final Report. Within thirty (30) days following the end of the project period a final report, to include activities for the month of July, 2011, and final financial and Program progress information, must be submitted by the Sub Grantee. If the state is the unit of government and will be distributing to small jurisdictions, the state office maintains responsibility for submitting all reports. A standardized final report form will be provided to Sub Grantee prior to the submission deadline.
Article III: Financial Provisions
21. BUDGET. Budget variances over ten percent (10%) per budget category must be pre-approved in writing by the SRCAC.
22. DISBURSEMENT OF FUNDS (Budgets). SRCAC shall disburse payments to the Sub Grantee as soon as possible, but no later than thirty (30) days after the Program Manager has received a proper invoice from the Sub Grantee. The Sub Grantee understands that no funds will be provided without approval by the SRCAC.
The total of all funds disbursed to the Sub Grantee to reimburse allowable costs under this Sub Grant Agreement shall not exceed the amount specified in Article I, Section 2 of this Sub Grant Agreement and described in Attachment B.
Any unspent funds at the end of the Sub Grant period shall be retained by SRCAC.
23. FINANCIAL REPORTING. The Sub Grantee shall submit a Quarterly Financial Report to the Program Manager within thirty (30) days following the end of three months from the award date and every three months thereafter. If the state is the unit of government and will be distributing to small jurisdictions, the state office maintains responsibility for submitting all reports. Failure to submit the report may result in loss of access to grant funds. More frequent submission may be required of the Sub Grantee as a condition of continued participation in the program.
24. FINAL REPORT. Compliance with the Final Report requirement specified in Section 20(B) of this Sub Grant shall constitute compliance with submittal of a Final Financial Report.
25. FAILURE TO MEET MATCH REQUIREMENT. The failure by Sub Grantee to meet the cash or in-kind match requirement of $10,500 will require Sub Grantee to refund a pro-rated portion of the sub award costs that have been paid.
26. FINANCIAL RECORDS. All costs shall be supported by properly executed payrolls, time records, invoices, contracts, vouchers, or other official documentation evidencing in proper detail the nature and propriety of charges. All checks shall be signed by a legally authorized agent(s) of Sub Grantee. All accounting records including supporting documents pertaining in whole or in part to this Sub Grant shall be readily accessible and shall be maintained in accordance with Section 7 of this Sub Grant (Retention of Records).
27. PROCUREMENT GUIDELINES. Only items identical to or similar to those listed in the Project or Budget shall be purchased. Any deviation from those items must be pre-approved in writing by SRCAC.
28. ALLOWABILITY OF COSTS. If payments by SRCAC to Sub Grantee include payment for costs subsequently disallowed by SRCAC or by an authorized agent of OJJDP, Sub Grantee shall repay on demand the amount of any such disallowed costs, subject to Sub Grantee’s right to defend orally or in writing the allowability of any such costs to SRCAC or OJJDP.
If any portion of this Agreement is paid with any federal funds Sub Grantee shall comply with the cost principles set out in OMB Circular A-122, under which costs for entertainment and alcohol are specifically not allowed.
29. ENTIRE AGREEMENT. This instrument and attachments contain the entire agreement and understanding of the Parties hereto. They may not be changed orally but only by agreement in writing with the mutual consent of both Parties. Consent to any change in the Agreement may be given on behalf of the SRCAC only by its Executive Director, Project Director, or Chief Financial Officer. There is no other contemporaneous understanding or agreement, oral or written, between the Parties on said subject matter, and neither Party shall be bound by any statement or representation not contained or incorporated herein.
IN WITNESS WHEREOF, SRCAC and the Sub Grantee, by their representatives duly authorized, have executed this Sub Grant Agreement.
County of Stanislaus, District Attorney
[Signature]
By: Birgit Fladager
Title: District Attorney
Date: 1-14-11
NATIONAL CHILDREN'S ADVOCACY CENTER
[Signature]
By: Chris Newlin
Title: Executive Director
Date: November 23, 2010
## County of Stanislaus: Auditor-Controller
### Legal Budget Journal
**Database**
FMS11DB.CO.STANISLAUS.CA.US.PROD
**Set of Books**
County of Stanislaus
| Balance Type | Budget |
|--------------|--------|
| Category | Budget - Upload |
| Source | |
| Currency | USD |
| Budget Name | LEGAL BUDGET |
| Batch Name | |
| Journal Name | |
| Journal Description | Increase appropriations and revenue for CAIRE Ctr sub grant |
| Journal Reference | |
| Organization | Stanislaus Budget Org |
| Upf | Fund | Org | Acc't | GL Proj | Loc | Misc | Other | Debit | Credit | Period | Line Description |
|-----|------|-----|-------|---------|-----|------|-------|-------|--------|--------|------------------|
| R2 | 0100 | 0023113 | 62400 | 0000000 | 000000 | 000000 | 000000 | 29,148 | | JAN-11 | Increase appropriations |
| R2 | 0100 | 0023113 | 67040 | 0000000 | 000000 | 000000 | 000000 | 852 | | JAN-11 | Increase appropriations |
| R2 | 0100 | 0023113 | 25000 | 0000000 | 000000 | 000000 | 000000 | | 30,000 | JAN-11 | Increase revenue |
**Totals:** Debits 30,000 | Credits 30,000
**Explanation:** Increase appropriations and revenue for CAIRE Ctr sub grant
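A budget journal of this kind must balance: the two appropriation lines (debits) are fully offset by the estimated-revenue line (credit), as the matching totals of 30,000 indicate. As an illustrative cross-check only, here is a minimal Python sketch of that arithmetic; the tuple layout is an assumption for the example and is not part of the county's financial management system:

```python
# Hypothetical sketch: verify that the budget journal above balances,
# i.e. total debits equal total credits. Account numbers and amounts
# are taken from the journal rows; the revenue line is read as the
# balancing credit.
journal_lines = [
    # (account, debit, credit)
    ("62400", 29_148, 0),   # increase appropriations
    ("67040", 852, 0),      # increase appropriations
    ("25000", 0, 30_000),   # increase estimated revenue
]

total_debits = sum(debit for _, debit, _ in journal_lines)
total_credits = sum(credit for _, _, credit in journal_lines)

print(total_debits, total_credits)   # 30000 30000
assert total_debits == total_credits == 30_000
```

The same check is what the "Totals" row on the form expresses: increased spending authority of $30,000 is exactly offset by $30,000 of increased estimated revenue, so there is no net impact on the General Fund.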
---
**Requesting Department**
Lori Acree
Signature
1/6/2011
Date
---
**CEO**
Signature
1/6/11
Date
---
**Data Entry**
Keyed by
Prepared By
Approved By
---
**Auditors Office Only**
---
SJ Budget JV JAN-11.xls |
Proceedings of the
SUFFOLK INSTITUTE OF ARCHAEOLOGY
For 1970
VOLUME XXXII, PART 1
(published 1971)
PRINTED FOR THE SOCIETY BY
THE ANCIENT HOUSE PRESS, IPSWICH, SUFFOLK, ENGLAND
EXCAVATIONS AT THE
OLD MINSTER, SOUTH ELMHAM
by Norman Smedley, M.A., F.S.A., F.M.A.
and Elizabeth Owles, B.A., F.S.A.
The date of the Minster at South Elmham (Plate I and Fig. 1), and its possible claim to be the Cathedral Church of the See of Elmham, created in the third quarter of the 7th century, have been a subject of controversy for over 100 years. The writers had for some time been urged by local historians, notably Mr. Derek Charman, then the County Archivist, and Mr. Norman Scarfe, to endeavour to throw some light on the problem by carrying out excavations. The Minster is scheduled as an Ancient Monument, and the Ministry of Public Building and Works readily gave permission subject to the agreement of the owner of the land, Mr. George Sanderson. This was obtained, and work began in July, 1963, under the auspices of the Ipswich Museum, of the staff of which both writers were then members, and with some financial help, mainly from the Suffolk Institute of Archaeology.
It had been intended to produce the results of this investigation as part of a symposium to include work on other aspects of the controversy, by those working on these problems, but this has so far not been found possible, and it is felt that publication of the results of the excavation should not be further delayed. Another reason for holding up publication was the desirability of carrying out further excavation in the enclosure in which the Minster stands, but this, too, has so far proved impracticable, although no final conclusions as to the nature of the site can be reached until this has been done.
It is not intended here to recapitulate all the arguments which have been adduced to prove or disprove the theory that South Elmham, and not North Elmham, was the episcopal see of the second of the two bishops chosen first as coadjutors to Bisi when sickness hindered him 'from administering his episcopal functions',¹ and then as his successors, nor are we concerned with the question as to whether the original see was at Dunwich or, as suggested by Mr. S. E. Rigold,² at Felixstowe. From either of these places, it would be more logical to establish a second see at North rather
¹ Bede, Ecclesiastical History of the English Nation, trans. J. Stevens, Everyman ed., Chap. v., p. 174.
² S. E. Rigold, 'The Supposed See of Dunwich', Jour. Brit. Arch. Assoc., xxiv (1961), p. 55, n. 6.
than at South Elmham, but logic did not then, as now, always play a conspicuous part in such decisions. The questions to be answered so far as possible are, Was this an ecclesiastical building, and if so of what kind? What is the probable date when it was built? Are there any indications, such as other buildings, graves, etc., that here was a settlement of the kind which would be expected in the precincts of a cathedral?
As has been indicated, the answer to the last question cannot be given until the surrounding enclosure can be excavated. The excavations already carried out do throw some light on the nature and date of the building.
THE EXCAVATIONS
Owing to other commitments, the work had to be carried out in stages, spread over the months of July, September and October, 1963, and continuing in August, 1964. Mr. Sanderson was most co-operative, and greatly lightened the initial stages by cutting the luxuriant growth of nettles which occupied the whole of the interior of the building. The first step was to establish the general accuracy of the plan given by Peers,\(^3\) and to confirm the existence of the apse, which was completely covered by earth, a fact which had led to a belief in some quarters that it was not in fact present at all, but had been included in Peers' plan as a feature to be expected in 'Saxon Churches of the St. Pancras type'. It has been suggested that the building was merely a rectangular barn. Peers' plan was in fact found to be substantially correct, except in minor though not unimportant details such as the relation of the two windows immediately to the east of the wall separating the narthex and the nave (Fig. 2).
In 1846, Suckling wrote, 'Mr. George Dumont, the present occupant of South Elmham Hall, informs me, that he caused the whole interior to be dug over, five feet deep, about four years since, but discovered nothing besides a few bones, and a small piece of old iron, with one or two ancient keys. It then appeared that the foundations of the walls are full five feet thick at the base, rising with two sets-off to the surface of the soil . . . the adjoining site is entirely free from any foundations but those of the "Minster" itself, while the frequent discovery by the plough, of urns filled with burnt bones and ashes, seems to confirm the voice of a tradition very current in the village, that the "Minster" occupies the site of a pagan temple'.\(^4\)
The excavations did not confirm Mr. Dumont's statement that the whole of the interior had been dug over to five feet, and when Micklethwaite\(^5\) visited the site with Raven in 1897, he too was of the opinion that 'what we found did not confirm the story, mentioned by Mr. B. B. Woodward in the fourth volume of "Suffolk Archaeology", of the whole surface being dug over'.
It was unfortunately impossible to extend the 1963–4 excavations to examine the enclosure in which the Minster stands, and there is now no record of what became of the 'urns filled with burnt bones and ashes' referred to by Suckling, but there is no reason to doubt the accuracy of the observation. Mr. James Campbell, of Worcester College, Oxford, one of our helpers, heard a story that more urns had been found some 40 or 50 years
---
\(^3\) C. R. Peers, *Arch. J.*, lviii (1901), pp. 433–4.
\(^4\) A. I. Suckling, *The History and Antiquities of the County of Suffolk*, i (1846), p. 209.
\(^5\) J. T. Micklethwaite, *Proc. Suff. Inst. Arch.*, xvi (1916–18), pp. 29–35.
Fig. 2.—Plan of the Minster, showing extent of excavation.
ago, and were incorporated in the foundation of chicken huts. A future excavation of the area is of great importance, as the regular use of the Minster would surely imply the existence of other buildings to house the Bishop's retinue.
The extent of the enclosure, and its relation to the Minster building, may be seen in the plan of the site by Cleer S. Alger, published with Woodward's account of the Minster in 1874. Woodward did not believe that the church had ever been finished, and suggests that the absence of foundations in the enclosure is accounted for by the fact that 'the conventual buildings would have been of wood, and have therefore disappeared'.
So far as the Minster itself is concerned, there is still nothing to controvert or confirm the view of Harrod that the present building, which he considers to be early Norman, replaced an earlier wooden structure which he attributed to Felix (one of the South Elmham group of villages is named Flixton). From this he proceeds to argue that when the see was divided, one of the new bishops would naturally establish himself at South Elmham.
Others who have entered into the controversy, either as to the relationship of the Elmhams, North and South, to the bishopric, or as to the dating of the building, have largely stated their views categorically without adducing any concrete evidence to support them. Redstone saw an indication of the importance of the 'Old Minster' in the fact that in 1326 Edward II, on his way from Hoxne to Norwich, broke his journey at South Elmham. St. John Hope, writing to a correspondent in Suffolk, simply gave it as his opinion that 'the remains of "The Minster" are those of an undoubted 7th century church'. Rigold (loc. cit.) refers to it as 'the so-called "Minster"', and seemingly doubts whether 'it ever functioned as a church'.
As regards the enclosure, it has been generally accepted that this is of Roman date, although at one stage during the excavation the suggestion was put forward that the bank and ditch had been constructed as a defence against the Danes. It would not seem, in the opinion of the writers, that this need be taken seriously, particularly as there is no evidence that there was anything to defend at that time. Raven\textsuperscript{11} regarded it as a small camp ‘to be occupied on occasion’. Trial trenches put down across the north and west ditches produced only a few small sherds of Roman pottery and this, bearing in mind the close proximity of the kiln of the late Roman period excavated by the writers at Homersfield in 1959,\textsuperscript{12} should leave little doubt as to the nature of the enclosure. Nevertheless, further excavation both of the ditch and of the interior of the enclosure would be profitable.
---
6 B. B. Woodward, *Proc. Suff. Inst. Arch.*, iv (1874), pp. 1–7.
7 Henry Harrod, *Proc. Suff. Inst. Arch.*, iv (1874), pp. 7–13.
8 Norman Scarfe, in a paper read before the Society of Antiquaries of London in October 1963, before the date of the building had been established with any certainty, has suggested that the transfer of rights from King to Church may have taken place in the time of Sigebert and Felix, and the Minster erected by Bertgils Boniface, second in succession to Felix.
9 V. B. Redstone, 'South Elmham Deanery', *Proc. Suff. Inst. Arch.*, xiv (1912), pp. 323–331.
10 Sir William St. John Hope in litt., quoted by Francis Seymour Stevenson in 'The Present State of the Elmham Controversy', *Proc. Suff. Inst. Arch.*, xix (1926), p. 111.
Before proceeding to a detailed description of the various features of the building, some general observations may be of interest. The foundation of the whole structure might be described as a hollow ‘raft’ (Fig. 3), considerably wider than the walls which it supported. Suckling (\textit{loc. cit.}) writes ‘the walls are full five feet thick, rising with two sets-off to the surface of the soil’, and this is a fairly accurate general description. Micklethwaite noted that ‘the salient angles both inside and out, have been of wrought stone, all of which has been taken away’, and our conclusion on this point tallied (Plate II). Raven, on the other hand, found ‘no worked stone’. He observed that ‘the land falls slightly eastward, following the set of the ground, so that the apse is somewhat lower than the west end’. This was confirmed when levels were taken, the fall from the narthex-nave crossing to the apex of the apse being about three feet.\textsuperscript{13}
Micklethwaite commented on the unusual put-log holes, triangular in form, but seems to have missed the fact that not all were apex upwards, some being the reverse (Plate II). Another uncommon feature was that the holes on the inside of the walls were still visible. This has been taken in some quarters as evidence of the fact that the building was never completed, but it is probable that they were merely filled with loose rubble before the walls were plastered, and this filling has naturally fallen out as the plaster deteriorated. For the most part the walls, standing in places to a height of fourteen feet, have been robbed of the facing to the height of a man’s reach, even of the flints.
**THE APSE**
Until the excavations took place, the apse was completely concealed below ground level, a fact which may have given rise to the suggestion mentioned earlier that, in spite of Peers’ remarks and Alger’s plan, it did not exist.
\textsuperscript{11} Rev. John James Raven, \textit{Proc. Suff. Inst. Arch.}, ix (1900), pp. 1–6.
\textsuperscript{12} ‘A Romano-British Pottery Kiln at Homersfield’, \textit{Proc. Suff. Inst. Arch.}, xxviii (1959), pp. 168–184.
\textsuperscript{13} If the hand-sketch which accompanies Alger’s plan in Peers’ account of the Minster is accurately to scale, this seems quite possible.
A trench was cut round the outer face of the wall, and inside the north face. In view of the suggestion that the Minster was a cathedral church, an extension was cut, 4 feet wide and extending for over 12 feet down the centre of the apse, to establish whether or not there had been a foundation which would indicate an episcopal throne and a stone altar. No evidence was found.
Fig. 3.—Section at south-west corner looking south, showing raft-construction of foundation (see key on p. 13).
The apse extended for 20 feet beyond the cross-wall with the nave (internal measurement). The wall is 3 feet in thickness, with an offset externally of 4 to 6 inches at about 8 inches below present ground level, and another of 1 foot, at 1 foot 6 inches below the upper. Inside the wall the upper offset is not present, the lower being 9 inches to 1 foot wide.
**THE CROSS-WALL BETWEEN APSE AND NAVE**
The cross-wall between the apse and the nave has a thickness of 4 feet. On the east side, facing the apse, it has an upper offset at 1 foot 6 inches of 4 inches in width, and at 1 foot 4 inches below this, a lower offset of 8 inches.
On the west side, opening on to the nave, the upper offset is 6 inches wide, with a lower offset of 8 inches at 1 foot 4 inches below the upper—2 feet 6 inches below the present ground level.
Raven and Micklethwaite, visiting the Minster together in 1897, had looked for evidence of a tripartite arch, and Peers also regarded this as probable, and indicated it on his plan. There is in fact no evidence to support this except the considerable width of the opening, and the fact that on the south side the springing of the arch occurs at 7 feet from ground level. Mr. A. B. Whittingham, whose co-operation throughout has been so valuable, has made the suggestion that this may indicate a triple arcade, with the two outer arches somewhat lower than that in the centre. There is a butt joint between the return of the nave wall and the cross-wall proper, so that the latter could be a later addition or a re-building, probably when some alteration was made, as for instance the replacement of the triple arcade by a single wide arch.
There is a slight indication at the north end of the wall of a plinth to the supporting pillar, but this is less obvious on the south.
**THE NAVE**
The nave provided the most valuable evidence for the dating of the building, and a great deal of credit is due to the experienced observation of Mr. Whittingham. He it was who first noted the error in Peers’ plan, in which the most westerly pair of nave windows are placed directly opposite one another, and equidistant from the nave-narthex crossing, and it was in looking for a reason for placing the north window at a less distance from the wall than its fellow that he was led to seek, and find, the north door, which will be described later.
The nave is 38 feet in length and 27 feet wide, with walls of a thickness of 4 feet, where they have not been reduced by robbing to about 3 feet 8 inches. It would appear that Micklethwaite was right in his surmise that the angles, both inside and out, had been
of worked stone, but this has been robbed except where it was underground.
The windows, some 7 feet 6 inches above ground level, are 4 feet wide at the inner splay surface, and about 2 feet at the outer face, or possibly less if the missing stonework is taken into account. The springing of the arch begins at 5 feet above the sill level.
The north wall of the nave (Plate I) still stands to a level of some 14 feet up to the splay of the first nave window, 4 feet 4 inches from the junction of the nave-narthex crossing. It is then reduced practically to ground level until the third window, and here a large block of masonry from the north-east corner has been dislodged and lies reversed in the corner.
The south wall stands to a height of 14 feet up to the position of the first window, here 6 feet 6 inches from the cross-wall (interior measurement), where it drops to 10 feet 6 inches. It is then reduced to 5 feet 9 inches until at the third window it resumes its full height, and at the upper level continues at a height of 14 feet for 4 feet over the apse. Here it has been robbed in its lower portion in such a way as to give almost the impression of a doorway and in view of this the question arose as to the possibility of a *porticus* such as those at Bradwell.\(^{14}\) This led to the excavation of the south-east corner of the nave which was to produce such incontrovertible evidence of the relatively late date of the Minster, at least in its present form.
Just below ground level it was revealed that the corner was supported by a stone block 1 foot 5 inches in length, 7 inches wide on its eastern face and 6 inches at its other end, and 4 inches deep (Plate IIIa, Fig. 4). The inner (north) face was somewhat irregularly
\(^{14}\) S. E. Rigold, *North Elmham Saxon Cathedral* (H.M.S.O., 1960), p. 9, and plan on pp. 6–7.
broken. On the upper surface of this stone was a carved design consisting of a heavy diagonal bar running upwards from the west corner, a more lightly carved vertical centrally, and a pattern of interlace in the east corner. It was in fact quite obviously a fragment from a tomb slab similar to that at Milton Abbas.\textsuperscript{15} It could hardly be of earlier date than the 9th century, and showed signs of considerable weathering before being broken and put to its present use, suggesting a date for that use as perhaps the 10th but more probably the 11th century. The possibility was not overlooked that it might have been a repair, but it was bonded into the wall in such a way as to preclude this interpretation. Underneath the carved stone was another slab of similar dimensions, but unmarked.
The date of the Minster, then, can with some certainty be established as most probably post-Conquest, but conceivably only a little earlier.
As has been said, Mr. Whittingham had suggested that the obvious reason for the more westerly placing of the north-west window as compared with its pair in the south wall was the insertion of a north door, and this he proceeded to explore (Figs. 2 and 5).
\textsuperscript{15} T. D. Kendrick, \textit{Late Anglo-Saxon and Viking Art} (1949), p. 82 and Pl. LIV.
Although the state of the masonry rendered interpretation at this point by no means easy, there seems little doubt that this door did exist, and was of the type associated with Norman rather than Saxon buildings. It began some 6 feet 9 inches from the splay of the window, and was 5 feet in width. On the outer face the wear of the corners seemed sufficiently uniform to suggest the presence of an inset or 'nib', which would support the later date, although this was not obvious, it must be confessed, without the guidance of the expert.
THE CROSS-WALL BETWEEN NAVE AND NARTHEX
The cross-wall, opening by two doorways into the nave, still stands to a height of 9 feet. The abacus of the arches can be clearly seen at 7 feet 6 inches above present ground level, and the springing of the south arch, measured from the exposed foundation of the doorway, was at 10 feet 8½ inches.
The wall has a thickness of 4 feet 7 inches, agreeing with that of the narthex itself. This fact is significant as it would indicate that the additional strength as compared with the nave was intended to support a tower higher than the nave, and probably with a four-sided gable.
The centre block of the wall is 7 feet long, and the width of each of the two doorways 7 feet, subject to allowance for stone-robbing. The northern doorway is entirely blocked by a large ash tree. The offsets on either side have a width of 6 inches at the level of the lower offset of the nave walls.
THE NARTHEX
The narthex is 26 feet square (interior measurement) with two windows on either side similar in dimensions to those of the nave. In assessing the proportions of the windows it must be remembered that much robbing of material has taken place. Not only has all the worked stone been removed, but the flint walls themselves have been denuded of their facing up to the height of convenient reach of an average man. Certainly no worked stone would be allowed to escape; good building material is scarce in East Anglia. The original size of the window apertures would therefore almost certainly have been appreciably smaller than is now the case.
The west door is 5 feet wide, and the springing of the arch 10 feet 3 inches above the main step, which is 1 foot 3 inches above the general floor level, with indications of two steps down, though the stone-work is so worn that the width of these cannot easily be defined, and they are therefore not indicated in the plan.
Figure 1. Plan view of the site showing the location of the trenches.
Fig. 6.—Plan and sections of platform outside south wall of narthex.
Key to the sections (Figs. 3 and 6): top soil; grey soil with rubble; sand with soil-rubble; hard reddish sand; flint masonry; cement; sand and clay mixture; chalky boulder clay; black soil with stones and cement.
The cavity left by the removal of a substantial dressed stone was clearly visible outside the northwest corner; that of the southwest corner was still in position.
Outside the south wall of the narthex there was revealed one of the most puzzling features of the building (Plate III, b and Fig. 6). It had been decided to put down a trial trench outside the south wall, in case this might reveal any further clues to the date in the form of pottery or other small finds. The only pottery found was extremely fragmentary, and in fact of very little help in arriving at any conclusions as to date. A note is appended below (p. 15).
What did emerge, however, was the presence of an apsidal platform, centred on the second, or easterly window in the south wall of the narthex. It had a base of 15 feet, and projected for 11 feet from the wall. The floor was solidly constructed of flint pebbles, and for 6 feet or so from the wall, was cemented over; the level was 1 foot above the main offset of the narthex. The immediate comparison which springs to mind is the stair-turret which led to the upper chamber of the west tower at North Elmham, which it resembles in form and size; the central cemented area was probably the floor of the turret, and this would naturally not be represented in the surround, which would be the base of the wall. Its presence could well be accounted for if Whittingham's tentative explanation is accepted, that the Bishop of the time (?Aylmer) who had built the stair at North Elmham was having it copied at South Elmham, for this stair is evidently an afterthought; there is no provision for the turret to be entered from the interior of the narthex as at North Elmham, and if it had an external door the entrance to the upper room seems never to have been completed. Rigold points out that 'such a chamber was a normal late Saxon feature', and adds, 'it may have acted as the Bishop's pew'. The lower chamber might have served as court and audience chamber on the occasion of the Bishop's visits to South Elmham.
This feature does therefore seem to point to a relationship with the North Elmham building, and points again to a later date.
THE ENCLOSURE
Little need be said about the general character of the enclosure, except to confirm its Roman origin. Not only were sherds of Roman pottery found during the excavation of both the north and the west ditches, but Mr. Peter Wade-Martins, whilst examining the Minster site for a comparison with that at North Elmham, on which he had been working, found a number of sherds in the field opposite the south entrance.
Any suggestion that the enclosure is a moated site is discounted by the relative levels of the ditches as shown in the traverses prepared by Alger. It is clearly demonstrated, particularly in the case of the west-east traverse, that the west ditch would remain dry, or at the most might retain stagnant pools of rain-water in places, unless the eastern ditch was not only flooded but the bank submerged. This is confirmed by a survey carried out by Mr. G. Mathieson in 1965, using theodolite, plane-table, and C.T. & S. tilting level.
THE POTTERY
Two small fragments of a wheel-made cooking pot were found by the footings of the apsidal platform on the south side of the narthex (Fig. 7). The ware is dark brown and gritty. The rim is almost upright, slightly out-turned, with a hollow on the top which is considered by Mr. J. G. Hurst as being typical of Middle Saxon pottery. In his opinion, if not Middle Saxon, it might be early medieval.
SUMMARY
Excavations carried out in 1963-4 at the Old Minster, South Elmham, were aimed principally at establishing the true nature of the building, its date and probable use.
All indications point to its having been a church, possibly functioning as a Minster serving the area of the group of South Elmham parishes, but unlikely to have been the site of the See of Elmham.
The date of the existing building cannot have been much earlier than the 11th century, and although this does not preclude the possibility of an earlier timber building, no evidence of this was found.
Further light might be thrown on the exact nature of the site by the excavation of the interior of the enclosure, which is undoubtedly of Roman date.
ACKNOWLEDGEMENTS
Thanks are due to the Ministry of Public Building and Works for readily granting permission for the excavation of this scheduled ancient monument to take place, to Mr. George Sanderson of South Elmham Hall, the owner of the land, for agreeing to allow the work to proceed, and for material help in the early stages. The writers are also indebted to Mr. Derek Charman who provided much of the historical background necessary in deciding on the excavation, to Mr. Norman Scarfe, who made the results of his own studies relating to the Minster available, and on occasion gave physical help in the work, and also to a large number of voluntary helpers who carried out the digging of the site under the guidance of the writers.
The Minster from the North.
Interior of north wall, showing putlog holes, window and stone robbing.
a. South-east corner of nave, showing built-in grave slab and dressed stone quoin.
b. Apsidal platform on south side of narthex.
A LOWER BOUND ON THE AVERAGE GENUS OF A 2-BRIDGE KNOT
MOSHE COHEN
Abstract. Experimental data from Dunfield et al using random grid diagrams suggests that the genus of a knot grows linearly with respect to the crossing number. Using billiard table diagrams of Chebyshev knots developed by Koseleff and Pecker and a random model of 2-bridge knots via these diagrams developed by the author with Krishnan and then with Even-Zohar and Krishnan, we introduce a further-truncated model of all 2-bridge knots of a given crossing number, almost all counted twice. We present a convenient way to count Seifert circles in this model and use this to compute a lower bound for the average Seifert genus of a 2-bridge knot of a given crossing number.
1. Introduction
On randomness. In recent years, many topologists and geometers have utilized randomness to study the behaviors of their favorite mathematical objects. However, any random model has inherent biases. Erlandsson, Souto, and Tao [EST20] show that the set of pseudo-Anosov elements of the mapping class group of a surface is generic with respect to several metrics, confirming a result by Maher [Mah11] using a metric given by random walks on the Cayley graph of the mapping class group. On the other hand, Malyutin [Mal19] and with Belousov [BM19] (and see also [Mal20]) show that hyperbolic links and knots are not generic: the proportion of satellite knots among all prime non-split links does not converge to zero. This contradicts results by Ma [Ma14] and Ito [Ito15] showing that the closure of a random braid is a hyperbolic link and also a result by Ichihara and Ma [IM17] showing that a random link via bridge position is hyperbolic.
A solution is to study these objects from many different random models with hopes of determining whether these properties are inherent or intrinsic to the model alone. Dunfield and Thurston [DT06] list several models for random three-manifolds that could be developed further. Work by Petri with Baik, Bauer, Gekhtman, Hamenstädt, Hensel, Kastenholz, and Valenzuela [BBG+18], Thaule [PT18], Mirzakhani [MP19], and Raimbault [PR20] showcases the benefits of exploring various models.
Another important avenue is to study further properties of the mathematical objects appearing in previous random models. Marin [Man20] finds bounds for the areas of minimal Seifert surfaces of knots obtained from random embeddings of a polygon studied by Millett [Mil00] via Monte Carlo exploration.
The genus of an object is often a particularly accessible invariant for calculations on randomness. Linial and Nowik [LN11] draw a correspondence between generic curves in oriented surfaces and oriented chord diagrams and then show that a randomly chosen oriented chord diagram of order $n$ has expected genus $\frac{n}{2} - \Theta(\ln n)$. Chmutov and Pittel [CP13] draw a correspondence between surfaces obtained by gluing the sides of an $n$-gon and chord diagrams and then show that the distribution of the genus of a random chord diagram is asymptotically Gaussian.
Brooks and Makover [BM04] obtain a random Riemann surface by randomly gluing together an even number $2n$ of triangles and then show that the expected genus is $\left(1 + \frac{n}{2}\right) - \Theta(\log n)$. Chmutov and Pittel [CP16] randomly glue together the sides of $n$ oriented polygonal discs and
then show that with high probability the surface consists of a single component and its genus is asymptotic to a Gaussian random variable. Even-Zohar and Farber [EZF20] randomly glue together some of the sides from the two models above and then show that the genus and number of boundary components asymptotically follow a bivariate normal distribution.
Shrestha [Shr20] uses the symmetric group to study random square-tiled surfaces and shows that the genus satisfies a local central limit theorem.
**On random knot diagrams.** How does a given numerical knot invariant grow with respect to the crossing number of the knot? Random knots can be used to answer this question, provided the model for random knots is conducive to calculating the invariant. Since several knot invariants are calculated diagrammatically, it will be useful to find models for random knot diagrams, particularly diagrams that have been well-studied.
Grid diagrams, introduced (and re-introduced) in [Cro95, Bru98, Dyn06], were popularized through connections to knot Floer homology in [MOST07, MOS09]. Dunfield et al [Dun14] developed a random model for these grid diagrams, and Doig [Doi20] studied the number of components. These grid diagrams can be re-envisioned as petal (or Petaluma) knot diagrams with “übercrossings” introduced by Adams et al [ACD+15, ACSF+15, Ada17] and studied by Colton et al [CGHS19]. Even-Zohar, Hass, Linial, and Nowik [EZHLN16, EZHLN18] developed a random model for these petal knot diagrams. It is more difficult to sample all knot diagrams, as in work by Chapman [Cha17] and with Cantarella and Mastin [CCM16]; using a particular diagrammatic model, for example straight knots studied by Owad [Owa18], may lead to easier computations.
Chebyshev diagrams (of long knots) were introduced by Fischer [Fis01] and studied by Koseleff and Pecker [KP11b, KP11a] together with others [KPR10, KPRT18, BKP20] and by this author [CT14, Coh14]. These are related to Lissajous knot diagrams studied by the late Vaughan Jones (1952–2020) with Bogle, Hearst, and Stoilov [BHS94] and Przytycki [JP98].
A Chebyshev diagram can be obtained from the projection of a curve parametrized in three dimensions $x = T_a(t)$, $y = T_b(t)$, and $z = T_c(t + \phi)$ by Chebyshev polynomials $T_n(t)$ of degree $n$, for some $a, b, c \in \mathbb{N}$ and a phase shift $\phi \in \mathbb{R}$. Projections of these Chebyshev diagrams can be re-envisioned as trajectories on $a \times b$ billiard tables, with $a$ and $b$ relatively prime for knots, as follows: a billiard ball fired at 45° from the lower left corner bounces through every $1 \times 1$ square with slope either 1 or $-1$ and leaves the table at one of the corners on the right as in Figure 1; including crossing information gives billiard table diagrams $\tilde{D}$.

**Figure 1.** The projection of a $3 \times 8$ billiard table diagram and a (reduced) billiard table diagram $\tilde{D}$ obtained from the (reduced) billiard table word $w = +--++-+$.
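The parametrization described above is easy to sample numerically. The following is a minimal sketch (the function names are ours, not from the paper): Chebyshev polynomials evaluated by the standard three-term recurrence, and a point on the space curve $x = T_a(t)$, $y = T_b(t)$, $z = T_c(t + \phi)$.

```python
def chebyshev(n, t):
    """Evaluate the degree-n Chebyshev polynomial T_n at t using the
    recurrence T_0 = 1, T_1 = t, T_n = 2*t*T_{n-1} - T_{n-2}."""
    prev, curr = 1.0, t
    if n == 0:
        return prev
    for _ in range(n - 1):
        prev, curr = curr, 2 * t * curr - prev
    return curr

def chebyshev_point(a, b, c, phi, t):
    """One point on the space curve x = T_a(t), y = T_b(t), z = T_c(t + phi)
    whose projection gives a Chebyshev diagram."""
    return (chebyshev(a, t), chebyshev(b, t), chebyshev(c, t + phi))
```

Sampling `chebyshev_point` over a range of $t$ in $[-1, 1]$ and projecting to the $xy$-plane traces out the billiard-table trajectory.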
In particular, Koseleff and Pecker show that all knots can be realized by Chebyshev diagrams. Knots with bridge number $br$ can be realized for any $a \geq 2br - 1$ and $a < b$.
The author with Krishnan [CK15] and with Even-Zohar and Krishnan [CEZK18] developed a random model for these $a = 3$ Chebyshev diagrams by taking a random string of $\{+, -\}^n$, where $n = b - 1$ is the number of crossings of the diagram and where $+$ and $-$ indicate whether the overstrand in the crossing has slope $1$ or $-1$, respectively. Because $a = 3$, the only knots appearing in this model are 2-bridge knots and the unknot. The sequel paper computed an exact formula for
the probability of such a knot with crossing number $c$ appearing in a random binary string of length $n$.
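The sampling step of this random model is simply a uniform binary string. A minimal sketch (not the authors' code; the function name is ours):

```python
import random

def random_billiard_word(n, rng=None):
    """A uniformly random billiard table word in {+, -}^n, as in the
    a = 3 random model: one symbol per crossing of the 3 x (n+1) table."""
    rng = rng or random.Random()
    return ''.join(rng.choice('+-') for _ in range(n))
```

Each sampled word determines a diagram of a 2-bridge knot or the unknot, so repeated sampling estimates the probabilities computed exactly in the sequel paper.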
One reason to restrict to the setting of 2-bridge knots (or rational knots) is that they can easily be described either by (finite) continued fractions (giving rational numbers), which we use below, or by closures of rational tangles, as developed by the late John Conway (1937–2020) [Con70].
**On the Seifert genus of a knot.** Kanenobu [Kan92, Assertion 2] gives a formula for the Seifert genus of a 2-bridge knot based on the notation for Conway’s normal form. Murasugi [Mur58] shows that for any alternating knot with a constant incidence number, the genus is exactly equal to one half of the degree of its Alexander polynomial. Crowell [Cro59, Theorem 3.5] shows that for an alternating link type with multiplicity, the degree of its reduced Alexander polynomial is one less than twice the genus plus the multiplicity. Suzuki and Tran [ST18] relate genera of 2-bridge knots to epimorphisms of the respective knot groups.
Baader, Kjuchukova, Lewark, Misev, and Ray [BKLMR19] show that the expected value of the ratio of the 4-genus $g_4$ and the Seifert genus $g$ over all 2-bridge knots whose 4-plat presentation, in the sense of Conway, has $2n$ crossings is $\lim_{n \to \infty} \left\langle \frac{g_4}{g} \right\rangle_n = 0$.
Dunfield et al [Dun14] use rejection sampling to compile a list of one knot per crossing number for each crossing number up to 1000 and compute upper and lower bounds on the genus of each of these knots. Their experimental data suggests that the genus of any knot grows linearly with respect to its crossing number. This result especially motivated the work in the present paper.
**On the present paper.** Here only 2-bridge knots will be considered, and so $c \geq 3$. We consider all 2-bridge knots of crossing number $c$, counted exactly once or twice according to Assumption 2.2 and Remark 2.3, by considering every case of $\varepsilon_i \in \{1, 2\}$ for $2 \leq i \leq c - 1$ and $\varepsilon_1 = 1 = \varepsilon_c$ in Equations (2.1) and (2.2):
$$ (+)^{\varepsilon_1}(-)^{\varepsilon_2}(+)^{\varepsilon_3}(-)^{\varepsilon_4} \ldots (-)^{\varepsilon_{c-1}}(+)^{\varepsilon_c} \text{ for } c \text{ odd and } $$
$$ (+)^{\varepsilon_1}(-)^{\varepsilon_2}(+)^{\varepsilon_3}(-)^{\varepsilon_4} \ldots (+)^{\varepsilon_{c-1}}(-)^{\varepsilon_c} \text{ for } c \text{ even, } $$
with reduced length $\ell = \sum_{i=1}^{c} \varepsilon_i \equiv 1 \mod 3$. Each of these reduced billiard table words $w$ produces an alternating diagram $D$ by Theorem 2.4. The total number of these words or diagrams produced is given by Theorem 2.6. See Section 2 for details.
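The words just described can be enumerated directly from the displayed constraints. The sketch below (our own illustration; the official count is the subject of Theorem 2.6, not restated here) builds every word with $\varepsilon_1 = \varepsilon_c = 1$, $\varepsilon_i \in \{1, 2\}$ otherwise, alternating sign blocks starting with $+$, and reduced length $\ell \equiv 1 \bmod 3$.

```python
from itertools import product

def reduced_words(c):
    """Enumerate reduced billiard table words of crossing number c per
    Equations (2.1)-(2.2): eps_1 = eps_c = 1, eps_i in {1, 2} for
    2 <= i <= c-1, with total length congruent to 1 mod 3."""
    words = []
    for mid in product((1, 2), repeat=c - 2):
        eps = (1,) + mid + (1,)
        if sum(eps) % 3 != 1:
            continue  # only reduced lengths = 1 mod 3 occur in the model
        words.append(''.join(('+' if i % 2 == 0 else '-') * e
                             for i, e in enumerate(eps)))
    return words
```

For example, $c = 3$ forces $\varepsilon_2 = 2$, giving the single word $+--+$ of length $4$; for $c = 6$ and $c = 7$ the constraints leave $5$ and $11$ words respectively.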
From this alternating diagram $D$, the Seifert genus $g(K) = 1 - \frac{1 + s - c}{2} = \frac{1 - s + c}{2}$ of the knot $K$ can be computed via Seifert’s algorithm [Sei35] counting the number of Seifert circles $s$ and using the crossing number $c = c(K)$. We count this number via vertically-smoothed crossings in Theorem 3.5 with bounds given in Corollary 3.6. Examples 3.10 and 3.11 are worked out for $c = 6$ and $c = 7$, respectively, with figures inside Tables 3 and 4, respectively. See Section 3 for details.
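The genus formula itself is immediate once $s$ is known (counting $s$ is the content of Theorem 3.5, which is not reproduced here). A minimal sketch, using the standard values of $s$ for the usual alternating diagrams of the trefoil and figure-eight knots as checks:

```python
def seifert_genus(c, s):
    """Seifert genus g(K) = (1 - s + c) / 2 from a diagram with
    c crossings and s Seifert circles (Euler characteristic s - c)."""
    assert (1 - s + c) % 2 == 0, "a knot diagram has s + c odd"
    return (1 - s + c) // 2
```

The standard trefoil diagram ($c = 3$, $s = 2$) and figure-eight diagram ($c = 4$, $s = 3$) both give genus 1, matching the known Seifert genera of these knots.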
From this we arrive at the Main Theorem 4.3, presented in a condensed form here.
**Theorem.** A lower bound on the average genus of a 2-bridge knot with given crossing number $c$ is roughly
$$ \frac{c - 1}{2} - \left( \frac{3}{2(2^{c-2})} \right) \left( \sum_{i=2}^{c-1} \sum_{d_1=0}^{i-2} \sum_{d_2=0}^{c-i-1} \binom{i-2}{d_1} \delta(i) \binom{c-i-1}{d_2} \right). $$
This average is taken over all 2-bridge knots appearing twice except for those with palindromic type only appearing once.
Corollary 4.4 shortens this summation due to symmetry. Examples 4.5 and 4.6 apply this Main Theorem to our $c = 6$ and $c = 7$ cases, respectively, showing the calculations explicitly.
Acknowledgements. The author would like to thank Nathan Dunfield for his experimental work on genus, John McCleary for his pointing out an important result, Marina Ville for her hospitality during which a portion of this work was completed, Christopher Panna and Lumina Resnick for their hospitality during which another portion of this work was completed, and Adam Lowrance for his reading of an earlier draft.
2. Alternating diagrams of 2-bridge knots
Both [CK15] and [CEZK18] discuss moves on billiard table diagrams that are similar to Reidemeister moves: the internal reduction move that deletes a run $+++$ or $---$ of three in a row and the external reduction move that deletes $++-$ or $--+$ only from the start of the word or $-++$ or $+--$ only from the end of the word.
For crossing number $c \geq 3$, a word in $\{+, -\}$ can thus be reduced by these moves to one of the form
\begin{equation}
(+)^{\varepsilon_1} (-)^{\varepsilon_2} (+)^{\varepsilon_3} (-)^{\varepsilon_4} \ldots (-)^{\varepsilon_{c-1}} (+)^{\varepsilon_c} \text{ for } c \text{ odd}, \tag{2.1}
\end{equation}
\begin{equation}
(+)^{\varepsilon_1} (-)^{\varepsilon_2} (+)^{\varepsilon_3} (-)^{\varepsilon_4} \ldots (+)^{\varepsilon_{c-1}} (-)^{\varepsilon_c} \text{ for } c \text{ even}, \tag{2.2}
\end{equation}
\begin{equation}
(-)^{\varepsilon_1} (+)^{\varepsilon_2} (-)^{\varepsilon_3} (+)^{\varepsilon_4} \ldots (+)^{\varepsilon_{c-1}} (-)^{\varepsilon_c} \text{ for } c \text{ odd, or}
\end{equation}
\begin{equation}
(-)^{\varepsilon_1} (+)^{\varepsilon_2} (-)^{\varepsilon_3} (+)^{\varepsilon_4} \ldots (-)^{\varepsilon_{c-1}} (+)^{\varepsilon_c} \text{ for } c \text{ even},
\end{equation}
where $\varepsilon_1 = 1 = \varepsilon_c$ and all other $\varepsilon_i \in \{1, 2\}$. These shall be referred to as reduced billiard table words.
We refer to a run of three in a row as a triple, a run of two as a double, and a run of one as a single.
By the following lemma, the number of ways of writing a 2-bridge knot in a reduced word is either 4 or 8:
Lemma 2.1. (adapted from Cohen-Krishnan [CK15, Lemma 2.20] based on work by Schubert [Sch56] and Koseleff-Pecker [KP11a]) Let $K$ be a 2-bridge knot. Then there are exactly two reduced lengths $\ell$ for $K$. Furthermore, these lengths are $\ell_0 \equiv 0 \mod 3$ and $\ell_1 \equiv 1 \mod 3$, and the number of ways $r(\ell)$ to write $K$ in reduced length $\ell$ is either 2 or 4.
From a reduced word $w$, another word yielding the same knot can be obtained by three techniques: reversing the word; taking the mirror image of the word, replacing each $+$ with $-$ and vice versa; and replacing all interior $\varepsilon_i = 1$ with $\varepsilon_i = 2$ and vice versa for every $i = 2, \ldots, c - 1$.
Assumption 2.2. To reduce the number of ways of writing a 2-bridge knot in a reduced word further to 1 or 2, only words beginning with $+$ and with length $\ell_1 \equiv 1 \mod 3$ will be considered in the current work. That is, we only consider reduced billiard table words of the forms in Equations (2.1) and (2.2).
Remark 2.3. All 2-bridge knots appear twice in this model except for palindromic words with $c$ odd and words that are equal to the reverse of the mirror image of $w$ with $c$ even. We will collectively refer to all of these words as having palindromic type; these appear exactly once in the model.
Notationally it is convenient to view these knot diagrams $D$ as braids: starting from left to right, with the labeling of strands going upwards, but with a plat closure. They are of the form
\begin{equation}
\sigma_1 \sigma_2^{-1} \sigma_1^{\pm 1} \sigma_2^{\pm 1} \ldots \sigma_1 \text{ for } c \text{ odd or}
\end{equation}
\begin{equation}
\sigma_1 \sigma_2^{\pm 1} \sigma_1^{\pm 1} \sigma_2^{\pm 1} \ldots \sigma_2^{-1} \text{ for } c \text{ even}.
\end{equation}
We take the convention that the $i$th strand in $\sigma_i$ passes over the $(i+1)$st so that $\sigma_i$ corresponds with a $+$ and $\sigma_i^{-1}$ corresponds with a $-$ in the reduced billiard table word $w$.
Reduced billiard table diagrams $\tilde{D}$ from reduced billiard table words $w$ can be simplified to alternating diagrams $D$:
**Theorem 2.4.** (Mentioned without proof in [CEZK18]) A reduced billiard table diagram $\tilde{D}$ from a reduced billiard table word $w$ corresponds to a unique alternating diagram $D$.
**Proof.** Figure 2 ([CEZK18, similar to Figure 3]) demonstrates that every run of length two in $w$ corresponds to a single crossing in a new diagram $D$.

**Figure 2.** Fix the left-hand side of the knot. Grasp the right-hand side of the knot and rotate forward or backward (depending on the crossing information) by $180^\circ$ to replace two red crossings in the reduced billiard table diagram $\tilde{D}$ obtained from a run of two in $w$ with a single crossing in $D$, as in [CEZK18, Figure 3].
Figure 2 shows only two of four possible pictures: with a double $++$ as $\sigma_1 \sigma_2$ and with a double $--$ as $\sigma_2^{-1} \sigma_1^{-1}$. Omitted are the cases with a double $++$ as $\sigma_2 \sigma_1$ and with a double $--$ as $\sigma_1^{-1} \sigma_2^{-1}$. These cases cannot occur as long as these moves are performed from left to right. That is, the first run of two from the left will have its first letter in the correct place by sign: $+$ yielding $\sigma_1$ and $-$ yielding $\sigma_2^{-1}$.
Finally we conclude with the facts that the first crossing is always positive here, so that the “long” strand to the left comes from an overcrossing, and that the last crossing is positive if the crossing number $c$ is odd and negative if $c$ is even, so that in both cases the “long” strand to the right comes from an undercrossing.
This guarantees that the crossings shown in this long knot are alternating with respect to each other. □
It is important to note that these alternating diagrams $D$ are no longer of the billiard table form. Table 1 explains this correspondence between $w$ and $D$.
| Run in reduced billiard table word $w$ | $(+)^1$ | $(+)^2$ | $(-)^1$ | $(-)^2$ |
|--------------------------------------|---------|---------|---------|---------|
| Crossing in alternating diagram $D$ | $\sigma_1$ | $\sigma_2^{-1}$ | $\sigma_2^{-1}$ | $\sigma_1$ |
**Table 1.** Following Figure 2, each run in the reduced billiard table word $w$ is converted to a single crossing in the alternating diagram $D$.
To see many examples, the reader is encouraged to skip ahead to Table 3 from Example 3.10 and Table 4 from Example 3.11 to see how the billiard table word $w$ becomes an alternating word with an alternating diagram in the first three columns of each table.
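The Table 1 correspondence is mechanical enough to state as code. The following Python sketch (ours, with hypothetical names; `s1` abbreviates $\sigma_1$ and `s2i` abbreviates $\sigma_2^{-1}$) converts an exponent tuple into the crossing list of $D$.

```python
def alternating_word(eps):
    """Convert the exponent tuple of a reduced billiard table word into
    the list of crossings of the alternating diagram D, one per run,
    following Table 1:
        (+)^1 -> sigma_1,    (+)^2 -> sigma_2^{-1},
        (-)^1 -> sigma_2^{-1},    (-)^2 -> sigma_1.
    """
    crossings = []
    for i, e in enumerate(eps):
        plus = (i % 2 == 0)                  # runs alternate +, -, +, ...
        crossings.append('s1' if plus == (e == 1) else 's2i')
    return crossings
```

For the word $+--+-+-$ of Example 3.10, `alternating_word((1, 2, 1, 1, 1, 1))` produces $\sigma_1^3\sigma_2^{-1}\sigma_1\sigma_2^{-1}$, the first row of Table 3.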
To conclude this section, we count the number of knots appearing on our list of knots for a given crossing number $c$.
The following result is discussed in Guichard [Gui95] and more recently McCleary [McC17, “Indicators” p.206]:
**Proposition 2.5.** (Netto [Net58, p.20].)
\begin{align*}
\binom{k}{0} + \binom{k}{3} + \binom{k}{6} + \ldots &= \frac{1}{3} \left( 2^k + 2 \cos \frac{k\pi}{3} \right) \tag{2.3} \\
\binom{k}{1} + \binom{k}{4} + \binom{k}{7} + \ldots &= \frac{1}{3} \left( 2^k + 2 \cos \frac{(k-2)\pi}{3} \right) \tag{2.4} \\
\binom{k}{2} + \binom{k}{5} + \binom{k}{8} + \ldots &= \frac{1}{3} \left( 2^k + 2 \cos \frac{(k-4)\pi}{3} \right) \tag{2.5}
\end{align*}
**Theorem 2.6.** Consider the collection of all 2-bridge knots of given crossing number $c$, counted exactly once or twice according to Assumption 2.2 and Remark 2.3, by considering every case of $\varepsilon_i \in \{1, 2\}$ for $2 \leq i \leq c - 1$ and $\varepsilon_1 = 1 = \varepsilon_c$ in Equations (2.1) and (2.2):
\[ (+)^{\varepsilon_1}(-)^{\varepsilon_2}(+)^{\varepsilon_3}(-)^{\varepsilon_4} \ldots (-)^{\varepsilon_{c-1}}(+)^{\varepsilon_c} \text{ for } c \text{ odd and } (+)^{\varepsilon_1}(-)^{\varepsilon_2}(+)^{\varepsilon_3}(-)^{\varepsilon_4} \ldots (+)^{\varepsilon_{c-1}}(-)^{\varepsilon_c} \text{ for } c \text{ even,} \]
with reduced length $\ell = \sum_{i=1}^{c} \varepsilon_i \equiv 1 \mod 3$.
Then there are exactly $\frac{2^{c-2}+*}{3}$ (or approximately $\frac{2^{c-2}}{3}$) elements in this collection, where
\[
* = \begin{cases}
2 \cos \frac{(c-2)\pi}{3} & \text{if } c \equiv 1 \mod 3, \\
2 \cos \frac{(c-4)\pi}{3} & \text{if } c \equiv 0 \mod 3, \text{ and} \\
2 \cos \frac{(c-6)\pi}{3} & \text{if } c \equiv 2 \mod 3.
\end{cases}
\]
**Proof.** The total length of the word $\ell = \sum_{i=1}^{c} \varepsilon_i$ is equal to the crossing number $c$ plus the number of doubles (where $\varepsilon_i = 2$). Since the length must be congruent to 1 modulo 3: if $c$ is congruent to 0, then the number of doubles must be congruent to 1; if $c$ is congruent to 1, then the number of doubles must be congruent to 0; and if $c$ is congruent to 2, then the number of doubles must be congruent to 2.
Consider first the case where $c$ is congruent to 1. One element from this collection is when 0 doubles are added. We may also add 3 doubles, and these doubles must be selected from $2 \leq i \leq c - 1$, so there are $\binom{c-2}{3}$ elements. We may also add 6 doubles for an additional $\binom{c-2}{6}$ elements, and so on. Apply Proposition 2.5. The other cases are similar. \qed
Note that this counting of doubles will reappear in the Main Theorem 4.3.
### 3. Counting the number of Seifert circles
Seifert’s algorithm first requires us to smooth the crossings of a diagram following their orientation.
**Definition 3.1.** We say a crossing is *vertically-smoothed* if its strands are both oriented upwards or both oriented downwards. We denote such a crossing by $V$.
We say a crossing is *horizontally-smoothed* if its strands are both oriented to the left or both oriented to the right. We denote such a crossing by $H$.
We adapt for this setting a more general result from the discussion of the Jones polynomials of 2-bridge knots.
**Proposition 3.2.** (Cohen [Coh14, follows from Property 7.1]) In a billiard table diagram with $a = 3$ and $n = 3m + 1$ crossings numbered from left to right, the orientations of the crossings are $H(VVH)^m$.
In a billiard table diagram with $a = 3$ and $n = 3m$ crossings numbered from left to right, the orientations of the crossings are $(VHV)^m$.
We will only be concerned with the first case following Assumption 2.2. One can see this result immediately in Figure 3.

**Figure 3.** The pattern for horizontal and vertical orientations for $n \equiv 1 \mod 3$ is not dependent on the billiard table word $w$.
We next translate this result to our new setting of the alternating diagram obtained from the reduced billiard table word as in Theorem 2.4.
**Lemma 3.3.** If either a single $+$ or a single $-$ appears in any position in the reduced billiard table word $w$ that is congruent to 1 modulo 3, then it corresponds to a single crossing in the alternating diagram $D$ that is horizontally-smoothed.
If either a double $++$ or a double $--$ appears in any two positions in the reduced billiard table word $w$ that are congruent to 2 and 3 modulo 3, then they correspond to a single crossing in the alternating diagram $D$ that is horizontally-smoothed.
The remaining cases correspond to crossings in $D$ that are vertically-smoothed.
**Proof.** The first statement on a single $+$ or a single $-$ is easy to see based on Proposition 3.2.
To show the second statement, we consider all of the possible cases for a double $++$ and a double $--$. First we remind the reader that a double $++$ must occur as $\sigma_1\sigma_2$ and a double $--$ must occur as $\sigma_2^{-1}\sigma_1^{-1}$ as in the proof of Theorem 2.4, reducing the number of cases. Note that both of these cases involve an “overstrand” passing over the two consecutive crossings.
Recall that our long knot starts on the left, moves to the right, backtracks to the left, and then returns to the right. So if this overstrand is oriented to the left, then both of the other arcs must be oriented to the right. This occurs in the third row of Figure 4. If this overstrand is oriented to the right, then one of the other arcs must be oriented to the right and the other to the left. These two cases occur in the first and second rows of Figure 4.
In the first row a $VH$ becomes a $V$ for both the $++$ and $--$ cases. In the second row an $HV$ becomes a $V$ for both cases. In the third row a $VV$ becomes an $H$ for both cases. These cases are summarized in Table 2. Lastly we note that $VV$ only occurs in positions congruent to 2 and 3 modulo 3 by Proposition 3.2. $\square$
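Lemma 3.3 reduces the smoothing data to positional arithmetic in $w$. A small Python sketch of the classification (ours; the function name is hypothetical):

```python
def smoothings(eps):
    """Classify each crossing of D as horizontally ('H') or vertically
    ('V') smoothed, per Lemma 3.3: a single in a word position = 1
    (mod 3), or a double starting in a position = 2 (mod 3), is
    horizontally-smoothed; every other run is vertically-smoothed."""
    labels, pos = [], 1        # pos = word position of the run's first letter
    for e in eps:
        horizontal = (pos % 3 == 1) if e == 1 else (pos % 3 == 2)
        labels.append('H' if horizontal else 'V')
        pos += e
    return labels
```

For the first row of Table 3, `smoothings((1, 2, 1, 1, 1, 1))` gives `['H', 'H', 'H', 'V', 'V', 'H']`.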
We now use these vertically-smoothed crossings to count the number of Seifert circles obtained from the alternating diagram $D$.
**Definition 3.4.** Consider the alternating diagram $D$ obtained from a reduced billiard table word $w$ as described in Section 2.
Figure 4. The only possible configurations of pairs of oriented crossings in a reduced billiard table diagram $\tilde{D}$ obtained from doubles $++$ or $--$ in the associated reduced billiard table word $w$, together with the corresponding single crossing in the alternating diagram $D$, as in the proof of Lemma 3.3.
Table 2. A summary of the cases associated with Figure 4 in the proof of Lemma 3.3.
| $++ \rightarrow \sigma_2^{-1}$ | $-- \rightarrow \sigma_1$ |
|-------------------------------|--------------------------|
| VH $\rightarrow$ V | VH $\rightarrow$ V |
| HV $\rightarrow$ V | HV $\rightarrow$ V |
| VV $\rightarrow$ H | VV $\rightarrow$ H |
We say a vertically-smoothed crossing is viable if the next vertically-smoothed crossing appears at the same height or if it is the last vertically-smoothed crossing.
We say a vertically-smoothed crossing is sequential if it is not the last vertically-smoothed crossing and if the immediate next crossing is also vertically-smoothed and appears at the same height.
Theorem 3.5. The number of Seifert circles in the alternating diagram $D$ obtained from a reduced billiard table word $w$ is two more than the number of viable vertically-smoothed crossings.
From this Theorem we can obtain an upper bound and a lower bound on the number of Seifert circles:
Corollary 3.6. The number of Seifert circles in the alternating diagram $D$ obtained from a reduced billiard table word $w$ is at most two more than the number of all vertically-smoothed crossings.
Corollary 3.7. The number of Seifert circles in the alternating diagram $D$ obtained from a reduced billiard table word $w$ is at least two more than the number of sequential vertically-smoothed crossings.
Remark 3.8. Observe that for every non-viable vertically-smoothed crossing, there is a viable one immediately following, and so the number of non-viable ones must be less than or equal to the number of viable ones. Thus the total number of all vertically-smoothed crossings, which is the sum of the number of non-viable ones and the number of viable ones, must be less than or equal to twice the number of viable vertically-smoothed crossings. This total number is also greater than or equal to the number of viable vertically-smoothed crossings.
This gives an idea of how good the upper bound of Corollary 3.6 is.
Before we get to the proof of Theorem 3.5, we address the following small technical lemma that will make the main proof easier to follow.
Lemma 3.9. Suppose that one of our alternating diagrams $D$ has no vertically-smoothed crossings. Then the number of crossings $c$ must be odd.
Furthermore, the first vertically-smoothed crossing, if it exists, must be a $\sigma_2^{-1}$. If $c$ is odd, then the last vertically-smoothed crossing must be, as well; if $c$ is even, then the last must be a $\sigma_1$.
Proof. The first crossing is always $\sigma_1$ arising from a single + appearing in position 1 (and hence 1 modulo 3) so that it is horizontally-smoothed.
If the second crossing were a $\sigma_2^{-1}$ arising from a single $-$ appearing in position 2 (and hence 2 modulo 3), it would be vertically-smoothed by Lemma 3.3, so instead it must be a $\sigma_1$ arising from a double $--$ appearing in positions 2 and 3, and it is horizontally-smoothed.
If the third crossing were a $\sigma_2^{-1}$ arising from a double $++$ appearing in positions 4 and 5, it would be vertically-smoothed by Lemma 3.3, so instead it must be a $\sigma_1$ arising from a single $+$ appearing in position 4, and it is horizontally-smoothed.
For position 5, this brings us back to the argument for the second crossing, and so we must have $+$ and then $--$ repeating. By Assumption 2.2, the reduced word must end with a single, so the word must be $+(--+)^m$ for some positive integer $m$. Then the alternating knot must be $\sigma_1^{2m+1}$, and so it must have an odd number of crossings.
These exact arguments also show the first vertically-smoothed crossing must be a $\sigma_2^{-1}$.
To see the final claims, consider the reverse of the reduced word. If $c$ is odd, then the reverse also begins with +, and its alternating diagram must have a $\sigma_2^{-1}$ as its first vertically-smoothed crossing, so the original reduced word must have a $\sigma_2^{-1}$ as its last vertically-smoothed crossing. If $c$ is even, then the reverse begins with a – and so the mirror must be taken, and its alternating diagram must have a $\sigma_2^{-1}$ as its first vertically-smoothed crossing, so the original reduced word must have a $\sigma_1$ as its last vertically-smoothed crossing. \qed
Proof of Theorem 3.5. First observe that a horizontal smoothing acts like the identity tangle in a braid. Let us then ignore these crossings from our discussion below so that we have only vertically-smoothed crossings.
Suppose there are no vertically-smoothed crossings. Then by Lemma 3.9, the crossing number of the alternating diagram $D$ must be odd, and so this gives two circles: one from the plat closure between the strands labeled two and three, and one from the “long” component of the long knot.
Suppose now there are vertically-smoothed crossings, and let’s consider the base cases determined by whether $c$ is odd or even. By Lemma 3.9, if $c$ is odd, the first and the last vertically-smoothed crossings are a $\sigma_2^{-1}$. If these are the same crossing, this gives the case shown in the first row of Figure 5, where a smoothing yields three circles, satisfying the conclusion of the Theorem. If these are not the same crossing, this gives the case shown in the second row of Figure 5, where a smoothing yields four circles, satisfying the conclusion of the Theorem. By Lemma 3.9, if $c$ is even,
the first vertically-smoothed crossing is a $\sigma_2^{-1}$ and the last is a $\sigma_1$. That gives the case shown in the third row of Figure 5, where a smoothing yields three circles, satisfying the conclusion of the Theorem since the first crossing is not viable.
**Figure 5.** Base cases for the proof of Theorem 3.5, where the bold indicates the start of Seifert circles.
Next we consider the two cases of adding additional vertically-smoothed crossings: whether the next crossing is at the same height as the previous one or whether it differs.
Suppose we have a vertically-smoothed crossing at a given height and that the next vertically-smoothed crossing is at the same height. Then as in the first row of Figure 6, we do indeed have an additional circle.
**Figure 6.** Inductive steps for the proof of Theorem 3.5, where the bold black indicates the start of Seifert circles and the bold gray indicates a vertically-smoothed crossing that is not viable.
Suppose we have a vertically-smoothed crossing at a given height and that the next vertically-smoothed crossing is at the opposite height. Then as in the second row of Figure 6, we do \textit{not} have an additional circle, but as the first crossing is no longer viable, we have not added any additional viable vertically-smoothed crossings. \qed
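Theorem 3.5 translates directly into a count. The self-contained Python sketch below (ours; it inlines the Table 1 and Lemma 3.3 rules) computes the number of Seifert circles from an exponent tuple.

```python
def seifert_circles(eps):
    """Number of Seifert circles of the alternating diagram D built from
    the exponent tuple eps = (eps_1, ..., eps_c) of a reduced billiard
    table word: two plus the number of viable vertically-smoothed
    crossings (Theorem 3.5)."""
    vertical, pos = [], 1      # pos = word position of the run's first letter
    for i, e in enumerate(eps):
        plus = (i % 2 == 0)    # runs alternate +, -, +, ...
        # Table 1: (+)^1 and (-)^2 give sigma_1 (height 1);
        #          (+)^2 and (-)^1 give sigma_2^{-1} (height 2)
        height = 1 if plus == (e == 1) else 2
        # Lemma 3.3: a single at position 1 (mod 3) or a double starting
        # at position 2 (mod 3) is horizontally-smoothed; all else vertical
        if not ((pos % 3 == 1) if e == 1 else (pos % 3 == 2)):
            vertical.append(height)
        pos += e
    # Definition 3.4: a vertical crossing is viable if the next vertical
    # crossing is at the same height or if it is the last one
    viable = sum(1 for j in range(len(vertical))
                 if j == len(vertical) - 1 or vertical[j] == vertical[j + 1])
    return 2 + viable
```

Summing over the five words of Example 3.10 gives 19 Seifert circles in total, i.e. the average $\frac{19}{5}$; the word of $7_1$, which has no vertically-smoothed crossings, gives the minimum of two circles.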
\textbf{Example 3.10.} Let us consider the set of all knots obtained in this model for crossing number $c = 6$. Since $6 \equiv 0$ modulo 3, we only consider adding $d \equiv 1$ modulo 3 doubles for $0 \leq d \leq 6 - 2 = 4$, yielding $d = 1$ or $d = 4$. The $\binom{4}{1} = 4$ cases for $d = 1$ and then the $\binom{4}{4} = 1$ case for $d = 4$ are listed in the first column of Table 3. The knots are named in the final column.
In the second column, the reduced words are transformed into sigma notation (with a plat closure) for an alternating diagram following Theorem 2.4, and these alternating diagrams are presented in the third column. The fourth column gives the oriented smoothing following Seifert’s algorithm.
Marked in black in these columns are the letters or crossings or smoothings corresponding to viable vertical crossings. Marked in gray are those corresponding to vertical crossings that are not
| Billiard Table Word $w$ | Alternating Word | Diagram $D$ | Seifert circles | Knot $K$ |
|----------------|-----------------|-------------|-----------------|---------|
| $+--+-+-$ | $\sigma_1^3\sigma_2^{-1}\sigma_1\sigma_2^{-1}$ |  |  | 6$_2$ |
| $+-++-+-$ | $\sigma_1\sigma_2^{-3}\sigma_1\sigma_2^{-1}$ |  |  | 6$_1$ |
| $+-+--+-$ | $\sigma_1\sigma_2^{-1}\sigma_1^3\sigma_2^{-1}$ |  |  | 6$_1$ |
| $+-+-++-$ | $\sigma_1\sigma_2^{-1}\sigma_1\sigma_2^{-3}$ |  |  | 6$_2$ |
| $+--++--++-$ | $\sigma_1^2\sigma_2^{-1}\sigma_1\sigma_2^{-2}$ |  |  | 6$_3$ |
**Table 3.** All reduced billiard table words for $c = 6$ crossings with their associated alternating words, alternating diagrams, and Seifert circles. The viable vertically-smoothed crossings of the alternating diagram are marked in black; these correspond to new Seifert circles. The non-viable ones are marked in gray.
viable. One can see how each viable vertical crossing contributes a new circle and how each vertical non-viable crossing does not.
The actual average number $s$ of Seifert circles for $c = 6$ is $2 + \frac{9}{5} = \frac{19}{5} = 3\frac{4}{5}$.
The upper bound on the average number of Seifert circles given by Corollary 3.6 is $2 + \frac{14}{5} = \frac{24}{5} = 4\frac{4}{5}$.
By the genus formula for alternating knots, the actual average genus is $g(K) = 1 - \frac{1+s-c}{2} = \frac{1-s+c}{2} = \frac{7}{2} - \frac{19}{10} = \frac{16}{10} = \frac{8}{5} = 1\frac{3}{5}$.
The lower bound on the average genus is $g(K) \geq \frac{7}{2} - \frac{24}{10} = \frac{11}{10} = 1\frac{1}{10}$.
**Example 3.11.** Let us consider the set of all knots obtained in this model for crossing number $c = 7$. Since $7 \equiv 1$ modulo 3, we only consider adding $d \equiv 0$ modulo 3 doubles, giving us the data in Table 4.
The actual average number $s$ of Seifert circles for $c = 7$ is $2 + \frac{26}{11} = \frac{48}{11} = 4\frac{4}{11}$.
The upper bound on the average number of Seifert circles given by Corollary 3.6 is $2 + \frac{32}{11} = \frac{54}{11} = 4\frac{10}{11}$.
By the genus formula for alternating knots, the actual average genus is $g(K) = 1 - \frac{1+s-c}{2} = \frac{1-s+c}{2} = 4 - \frac{24}{11} = \frac{20}{11} = 1\frac{9}{11}$.
The lower bound on the average genus is $g(K) = 4 - \frac{27}{11} = \frac{17}{11} = 1\frac{6}{11}$.
| Billiard Table Word $w$ | Alternating Word | Diagram $D$ | Seifert circles | Knot $K$ |
|-------------------------|-----------------|-------------|-----------------|---------|
| $+-+-+-+$ | $\sigma_1 \sigma_2^{-1} \sigma_1 \sigma_2^{-1} \sigma_1 \sigma_2^{-1} \sigma_1$ |  |  | 7$_7$ |
| $+-+--++--+$ | $\sigma_1 \sigma_2^{-1} \sigma_1^2 \sigma_2^{-1} \sigma_1^2$ |  |  | 7$_6$ |
| $+-++-++--+$ | $\sigma_1 \sigma_2^{-4} \sigma_1^2$ |  |  | 7$_2$ |
| $+-++--+--+$ | $\sigma_1 \sigma_2^{-2} \sigma_1^4$ |  |  | 7$_3$ |
| $+-++--++-+$ | $\sigma_1 \sigma_2^{-2} \sigma_1 \sigma_2^{-2} \sigma_1$ |  |  | 7$_4$ |
| $+--+-++--+$ | $\sigma_1^3 \sigma_2^{-2} \sigma_1^2$ |  |  | 7$_5$ |
| $+--+--+--+$ | $\sigma_1^7$ |  |  | 7$_1$ |
| $+--+--++-+$ | $\sigma_1^4 \sigma_2^{-2} \sigma_1$ |  |  | 7$_3$ |
| $+--++-+--+$ | $\sigma_1^2 \sigma_2^{-2} \sigma_1^3$ |  |  | 7$_5$ |
| $+--++-++-+$ | $\sigma_1^2 \sigma_2^{-4} \sigma_1$ |  |  | 7$_2$ |
| $+--++--+-+$ | $\sigma_1^2 \sigma_2^{-1} \sigma_1^2 \sigma_2^{-1} \sigma_1$ |  |  | 7$_6$ |
**Table 4.** All reduced billiard table words for $c = 7$ crossings with their associated alternating words, alternating diagrams, and Seifert circles. The viable vertically-smoothed crossings of the alternating diagram are marked in black; these correspond to new Seifert circles. The non-viable ones are marked in gray.
4. Counting contributions to genus via crossing index
We now use the genus formula $g(K) = 1 - \frac{1+s-c}{2}$ for alternating knots and Theorem 3.5 to calculate a lower bound for the genus of an average 2-bridge knot of given crossing number. However, the following theorem performs the necessary summation not by calculating the genus of each knot but by calculating the contribution to genus of each indexed crossing.
**Definition 4.1.** Recall from the proof of Theorem 2.6 that the reduced billiard table word $w$ has length $\ell = c + d$ where $c$ is the number of crossings in the alternating diagram $D$ and where $d$ is the number of doubles.
The crossing index $1 \leq i \leq c$ is the index of the crossing from left to right of the alternating diagram $D$.
Let $d_1(i)$ be the number of doubles appearing in the associated reduced billiard table word $w$ prior to the crossing at index $i$ and let $d_2(i)$ be the number of doubles appearing after the crossing at index $i$.
Then the sum $d_1(i) + d_2(i)$ is equal to $d$ if the crossing at index $i$ in $D$ comes from a single in $w$ and is equal to $d - 1$ if it comes from a double.
We use $d_1(i)$ and $d_2(i)$ below and also in the Main Theorem 4.3.
**Definition 4.2.** For crossing index $1 \leq i \leq c$ and number of doubles $d_1(i)$ prior to the crossing at index $i$, define $\delta(\{i\})$ and $\delta(\{i, i+1\})$ to be vertical indicator functions that are equal to 1 or 0 depending on whether the crossing in $D$ at index $i$ is smoothed vertically or horizontally, respectively, as follows:
(4.1)
$$\delta(\{i\}) = \begin{cases}
1 & \text{if } i + d_1(i) \not\equiv 1 \mod 3 \\
0 & \text{if } i + d_1(i) \equiv 1 \mod 3
\end{cases}$$
when the crossing at index $i$ arises from a single and
(4.2)
$$\delta(\{i, i+1\}) = \begin{cases}
1 & \text{if } i + d_1(i) \not\equiv 2 \mod 3 \\
0 & \text{if } i + d_1(i) \equiv 2 \mod 3
\end{cases}$$
when the crossing at index $i$ arises from a double.
Note that this comes from Lemma 3.3.
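In code, the indicator functions of Definition 4.2 are one-liners (a sketch, ours; the function names are hypothetical). Note that a crossing at index $i$ with $d_1$ doubles before it begins in word position $i + d_1$.

```python
def delta_single(i, d1):
    """Vertical indicator (4.1): a crossing at index i arising from a
    single sits in word position i + d1 and is vertically-smoothed
    unless that position is 1 (mod 3)."""
    return 0 if (i + d1) % 3 == 1 else 1

def delta_double(i, d1):
    """Vertical indicator (4.2): a crossing at index i arising from a
    double starting in word position i + d1 is vertically-smoothed
    unless that position is 2 (mod 3)."""
    return 0 if (i + d1) % 3 == 2 else 1
```

For example, `delta_single(2, 0) == 1` (a single at crossing index 2 is vertically-smoothed), while `delta_double(2, 0) == 0`, matching Example 4.5.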
Now we arrive at the Main Theorem 4.3.
**Theorem 4.3.** A lower bound on the average genus of a 2-bridge knot with given crossing number $c$ is
(4.3)
$$\frac{c-1}{2} - \left( \frac{3}{2(2^{c-2} + *)} \right) \left[ \sum_{i=2}^{c-1} \sum_{d_1=0}^{i-2} \sum_{d_2=0}^{c-i-1} \binom{i-2}{d_1} \delta(\{i\}) \binom{c-i-1}{d_2} \right.$$ $$\left. + \sum_{i=2}^{c-1} \sum_{d_1=0}^{i-2} \sum_{d_2=0}^{c-i-1} \binom{i-2}{d_1} \delta(\{i, i+1\}) \binom{c-i-1}{d_2} \right],$$
where * is as in Theorem 2.6.
This average is taken over all 2-bridge knots appearing twice except for those with palindromic type only appearing once as in Remark 2.3.
**Proof.** For convenience we refer to the sum of the summations inside the brackets as “same as above.”
By the genus formula \( g(K) = 1 - \frac{1+s-c}{2} \) for alternating knots, we have left to show that the average number \( s \) of Seifert circles must be at most \( 2 + \left( \frac{3}{2^{c-2}+*} \right) \) [same as above].
By Theorem 2.6 on the number of knots on our list, we have left to show that the sum of all Seifert circles is at most \( 2 \left( \frac{2^{c-2}+*}{3} \right) + \) [same as above].
By Corollary 3.6 to Theorem 3.5 on an upper bound for the number of Seifert circles, we have left to show that the total number of vertically-smoothed crossings over all our 2-bridge knots of crossing number \( c \) following Remark 2.3 is [same as above].
First observe that the first crossing \( \sigma_1 \) is always horizontally smoothed by Lemma 3.3 because it appears in position 1, and so we begin our sum with crossing index \( i = 2 \).
Note next that no horizontally-smoothed crossings contribute to this sum based on the vertical indicator functions \( \delta(\{i\}) \) and \( \delta(\{i, i+1\}) \). We have two sums: the first corresponding to counting contributions from singles and the second to those from doubles.
Consider some vertically-smoothed crossing at crossing index \( i \). Its vertical indicator function \( \delta \) must be 1. We now count how many reduced billiard table words it appears in.
Suppose there are \( d_1 \) doubles prior to this crossing. Then since the first run is a single +, there are only \( i - 2 \) choices for where these doubles may occur, giving us \( \binom{i-2}{d_1} \) different ways to write a reduced billiard table word \( w \) up to the \( i \)th crossing.
If the \( i \)th crossing is a single, we perform the first summation; if it is a double, we perform the second. In the first case, the sum \( c + d_1 + d_2 \) gives the reduced length \( \ell \) of the word, which must be congruent to 1 modulo 3; in the second case, the reduced length is \( c + d_1 + 1 + d_2 \), accounting for the double at crossing index \( i \).
There are now some number \( d_2 \) of doubles remaining in the \( c - i - 1 \) positions (where the final crossing cannot have come from a double), giving us \( \binom{c-i-1}{d_2} \) different ways to write a reduced billiard table word \( w \) after the \( i \)th crossing. \( \square \)
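The bracketed double sum can be evaluated mechanically. The Python sketch below (ours; the function name and structure are our own) folds in the mod-3 length constraint from the proof and reproduces the totals of Examples 4.5 and 4.6.

```python
from math import comb

def vertical_total(c):
    """Total number of vertically-smoothed crossings over all words in
    the model, i.e. the bracketed sum of Theorem 4.3. Singles require
    c + d1 + d2 = 1 (mod 3) and doubles require c + d1 + 1 + d2 = 1
    (mod 3); the delta conditions are those of Definition 4.2."""
    total = 0
    for i in range(2, c):             # crossing index; i = 1 and i = c never contribute
        for d1 in range(i - 1):       # doubles before crossing i
            for d2 in range(c - i):   # doubles after crossing i
                ways = comb(i - 2, d1) * comb(c - i - 1, d2)
                if (c + d1 + d2) % 3 == 1 and (i + d1) % 3 != 1:
                    total += ways     # crossing i from a single
                if (c + d1 + 1 + d2) % 3 == 1 and (i + d1) % 3 != 2:
                    total += ways     # crossing i from a double
    return total
```

Here `vertical_total(6)` returns 14 and `vertical_total(7)` returns 32, the totals that enter the upper bounds $2 + \frac{14}{5}$ and $2 + \frac{32}{11}$ on the average number of Seifert circles in Examples 3.10 and 3.11.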
We note that there is a symmetry where index \( i \) contributes the same as index \( c + 1 - i \).
**Corollary 4.4.** For even \( c \), the summations need only go to \( \frac{c}{2} \) with the total doubled. For odd \( c \), one may count doubly for \( i \) up to \( \lfloor \frac{c}{2} \rfloor \) and count singly at \( i = \lceil \frac{c}{2} \rceil \).
**Proof.** This is because the reduced billiard table word is of length 1 modulo 3 and because of symmetry of positions 2 and 3 in Proposition 3.2 and Lemma 3.3. \( \square \)
**Example 4.5.** We continue Example 3.10 for the set of all knots obtained in this model for crossing number \( c = 6 \) and demonstrate this computation via Main Theorem 4.3.
Note that \( c + d = 6 + d \) must be congruent to 1 modulo 3, so the number of doubles must be congruent to 1 modulo 3.
Because index \( i = 1 \) never contributes to the number of Seifert circles, we begin with index \( i = 2 \). Only a single \( - \) will contribute. There are three more opportunities for doubles, and since \( d \equiv 1 \mod 3 \), we have \( \binom{3}{1} = 3 \) different knots with a vertically-smoothed crossing at index \( i = 2 \).
At index \( i = 3 \), we have one opportunity for a double prior and two opportunities for doubles afterward. We may have a single \( + \) here; if there was a single \( - \) immediately prior, this gives \( \binom{1}{0} \binom{2}{1} = 2 \) different knots with a vertically-smoothed \( \sigma_1 \) crossing at index \( i = 3 \); if there was a double \( -- \) immediately prior, the vertical indicator function is 0. We may also have a double \( ++ \), giving us \( \binom{1}{0} \binom{2}{0} + \binom{1}{1} \binom{2}{2} = 2 \) knots with a vertically-smoothed \( \sigma_2^{-1} \) crossing at index \( i = 3 \).
By Corollary 4.4, this gives a total of \( 2(0 + 3 + 4) = 14 \) contributions to the number of Seifert circles, with 5 knots on the list, matching the numbers found in Example 3.10.
**Example 4.6.** We continue Example 3.11 for the set of all knots obtained in this model for crossing number \( c = 7 \).
Note that \( c + d = 7 + d \) must be congruent to 1 modulo 3, so the number of doubles must be congruent to 0 modulo 3.
Because index \( i = 1 \) never contributes to the number of Seifert circles, we begin with index \( i = 2 \). Only a single − will contribute. There are four more opportunities for doubles, and since \( d \equiv 0 \mod 3 \), we have \( \binom{4}{0} + \binom{4}{3} = 5 \) different knots with a vertically-smoothed crossing at index \( i = 2 \).
At index \( i = 3 \), we have one opportunity for a double prior and three opportunities for doubles afterward. We may have a single + here: if there was a single − immediately prior, this gives \( \binom{1}{0} \binom{3}{0} + \binom{1}{0} \binom{3}{3} = 2 \) different knots with a vertically-smoothed \( \sigma_1 \) crossing at index \( i = 3 \); if there was a double −− immediately prior, the vertical indicator function is 0. We may also have a double ++, giving us \( \binom{1}{0} \binom{3}{2} + \binom{1}{1} \binom{3}{1} = 6 \) knots with a vertically-smoothed \( \sigma_2^{-1} \) crossing here.
At index \( i = 4 \), we have two opportunities for doubles prior and two opportunities for doubles afterward. We may have a single − here, but then in order for the indicator function to be 1, the number of doubles prior must be \( d_1 \not\equiv 0 \mod 3 \). This gives \( \binom{2}{1} \binom{2}{2} + \binom{2}{2} \binom{2}{1} = 4 \) different knots with a vertically-smoothed \( \sigma_2^{-1} \) crossing at index \( i = 4 \). We may have a double −− here, but then \( d_1 \not\equiv 1 \mod 3 \). This gives \( \binom{2}{0} \binom{2}{2} + \binom{2}{2} \binom{2}{0} = 2 \) different knots with a vertically-smoothed \( \sigma_1 \) crossing here.
By Corollary 4.4, this gives a total of \( 2(0+5+8)+6 = 32 \) contributions to the number of Seifert circles, with 11 knots on the list, matching the numbers found in Example 3.11.
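The binomial-coefficient bookkeeping in Examples 4.5 and 4.6 can be checked mechanically. The short script below is our addition (not part of the paper): it re-adds the per-index contributions under the congruence constraints on the number of doubles and confirms the totals 14 and 32.

```python
from math import comb

# Example 4.5 (c = 6): index i = 2 contributes 3; index i = 3 contributes
# 2 (single +) + 2 (double ++); Corollary 4.4 doubles the sum over i <= c/2.
i2 = comb(3, 1)                                            # single '-' at i = 2
i3 = comb(1, 0) * comb(2, 1) \
     + comb(1, 0) * comb(2, 0) + comb(1, 1) * comb(2, 2)   # single '+' and double '++'
total_c6 = 2 * (0 + i2 + i3)
assert total_c6 == 14

# Example 4.6 (c = 7): the middle index i = 4 is counted singly.
i2 = comb(4, 0) + comb(4, 3)
i3 = (comb(1, 0) * comb(3, 0) + comb(1, 0) * comb(3, 3)) \
     + (comb(1, 0) * comb(3, 2) + comb(1, 1) * comb(3, 1))
i4 = (comb(2, 1) * comb(2, 2) + comb(2, 2) * comb(2, 1)) \
     + (comb(2, 0) * comb(2, 2) + comb(2, 2) * comb(2, 0))
total_c7 = 2 * (0 + i2 + i3) + i4
assert total_c7 == 32

print(total_c6, total_c7)
```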
Mathematics Department, State University of New York at New Paltz, New Paltz, NY 12561
Email address: firstname.lastname@example.org
UNIT 12 CONTROL STRUCTURES
Structure
12.1 Introduction
Objectives
12.2 Types of Control Structures
12.2.1 Canal Falls that Nearly Maintain a Definite Depth-Discharge Relationship
12.2.2 Canal Falls which Nearly Maintain a Constant Water Surface Level in the Canal Upstream of the Fall
12.2.3 Canal Falls which Allow Variation of Water Level Upstream of the Fall
12.2.4 Miscellaneous Types
12.3 Types of Cistern Elements
12.3.1 Vertical Impact Cistern
12.3.2 Horizontal Impact Cistern
12.3.3 Inclined Impact Cistern
12.3.4 No-Impact Cistern
12.4 Roughening Devices for Energy Dissipation
12.4.1 Friction Blocks
12.4.2 Ribbed Pitching
12.4.3 Deflector
12.4.4 Biff Wall
12.4.5 Dentated Sill
12.5 Types of Canal Falls
12.5.1 Stepped or Cascade Type Fall
12.5.2 Open Drop Structure
12.5.3 Piped Drop
12.5.4 Pipe Fall
12.5.5 Pipe Drop Structure
12.5.6 Well Drop Structure
12.5.7 Vertical Drop Structure
12.5.8 Common Chute
12.5.9 Glacis Fall
12.5.10 Trapezoidal Notch Fall
12.6 Design Parameters
12.7 Summary
12.8 Key Words
12.9 Answers to SAQs
12.1 INTRODUCTION
It is often necessary to build irrigation channels on land slopes so steep that the water will attain erosive velocities. Severe erosion will occur in earth channels if structures to control this situation are not provided. Various types of control structures are available to control objectionable erosion. These have been developed by various designers based on extensive model tests. Such structures are, in general, known as falls. In this unit various types of structures commonly used to deal with the situation are discussed.
Objectives
After studying this unit, you should be able to describe,
• the various types of control structures,
• types of cistern elements,
• roughening devices for energy dissipation,
• falls, and
• various basic design parameters.
12.2 TYPES OF CONTROL STRUCTURES
Control structures and chute drops are used to prevent erosion in field channels.
Canal drops which are basically control structures, may also be utilized for hydropower development, using bulb or propeller-type turbines. Large numbers of small and medium-sized drops are desirable, especially where the existing power grids are far away from the farms. Such a network of micro-installations for power generation is extremely helpful in providing energy for pumping ground water, and for the operation of agricultural equipment, village industries, etc. However, the relative economics of providing a large number of small falls versus a small number of large falls must be considered. A small number of large falls may result in unbalanced earthwork but, on the other hand, some savings in the overall cost of the drop structures can be achieved.
A drop (or fall) structure is a regulating structure in the sense that it lowers the water level along the course of a given canal. The slope of a canal is usually milder than the terrain slope; as a result, a canal that begins in cutting at its headworks will soon rise above the ground surface. In order to avoid excessive filling, the bed level of the downstream canal is lowered, the two reaches being connected by a suitable drop structure. Moreover, falls are also introduced in the general run of the canal when the terrain is steeper than the appropriate bed slope of the canal (Figure 12.1).

A drop is located so that the fillings and cuttings of the given canal are equalized to the maximum extent possible. Wherever possible, the drop structure may also be combined with a regulator or a bridge to reduce construction costs. The location of an offtake from the canal also influences the choice of a fall site; generally, offtakes are located upstream of the fall structure. Canal drops fall into the following categories.
(i) Canal falls that nearly maintain a definite depth-discharge relationship,
(ii) Canal falls which nearly maintain a constant water surface level in the canal upstream of the fall,
(iii) Canal falls which allow variation of water level upstream of the fall, and
(iv) Miscellaneous types.
12.2.1 Canal Falls that Nearly Maintain a Definite Depth Discharge Relationship
A **notch fall** maintains a definite relationship between the depth of flow and the discharge passing over it, the notch being rectangular or trapezoidal in shape. The rectangular notch (or low weir), though economical and better suited to the measurement of discharge, cannot accurately maintain the normal depth-discharge relationship. The trapezoidal notch fall (an improvement over the rectangular notch) is formed of a number of trapezoidal notches set in a high breast wall across the channel. The sill of the notch is set at the canal bed level and, therefore, prevents silting of the canal upstream of the fall. The shape of the notch is determined from considerations of full supply and half supply discharges.
12.2.2 Canal Falls which Nearly Maintain a Constant Water Surface Level in the Canal Upstream of the Fall
The water surface level obviously needs to be maintained constant upstream of the fall, on the main canal, when a branch canal takes off or there is a hydroelectric plant combined with the fall. **High crested falls** can maintain the desired water surface levels. They are provided across the full width of the main canal so that the discharge intensity, $q$, is comparatively small. The smaller the discharge intensity, the smaller the head required to pass $q$, and so the water level upstream of the fall can be maintained at a constant level. Moreover, a smaller $q$ means that less energy dissipation is required; hence such falls are also economical.
12.2.3 Canal Falls which Allow Variation of Water Level Upstream of the Fall
Such falls are required when the subsidiary canal, upstream of the fall, has to be fed while the main canal carries discharge at its minimum supply level. Such falls consist of rectangular notches combined with any one of the following types of regulators:
(i) **Sluice gates** – The upstream level is controlled by the raising or lowering of the gate,
(ii) **Horizontal stoplogs inserted into grooves** – The water level may be raised or lowered by inserting or removing a few stoplogs, and
(iii) **Vertical strips or needles** – This device allows the effective width of the canal to be changed and prevents silting.
12.2.4 Miscellaneous Types
These falls are brought into service for meeting specific requirements. However, in general, canal falls may be divided into two broad categories, such as, *meter* and *non-meter falls* depending on their capability to measure discharge.
**SAQ 1**
(a) What is the purpose of control structures? Give some actual field examples with necessary neat, labelled sketches.
(b) Give other classifications of canal falls.
---
### 12.3 TYPES OF CISTERN ELEMENTS
A cistern element is necessary on the downstream side of a fall to dissipate the kinetic energy and prevent damage to the bed and sides due to undesirable scour. After the energy has been dissipated, the water flowing over the crest of the fall is allowed to flow into the canal. The cistern element comprises the glacis, devices for the formation of the hydraulic jump and for diverting the high velocity jets, roughening devices, and the pool of water that serves as a cushion to withstand the hydraulic impact. Cisterns are of four types depending on the type of impact. They are described in the following sub-sections.
#### 12.3.1 Vertical Impact Cistern
In these cisterns (Figure 12.2) water falls freely down the drop, vertically. The falling stream follows a parabolic path. This type of cistern is very efficient in dissipating surplus energy of water. The dimensions of the cistern are such as to still the water and suppress the residual eddies and disturbances.

Empirical expressions for the cistern length, $L_c$, and cistern depth, $d_c$, are given as follows:
Dyas' expression: \[ d_c = 0.82 \times \sqrt{H_L} \times h_3^{1/3} \] ... (12.1)
Glass' expression: \[ d_c + h_3 = 1.85 \sqrt{E_1} \times H_L^{1/3} \] ... (12.2)
and, \[ L_c = 5 (d_c + h_3) \] ... (12.3)
where, $E_1$ is the specific energy of water upstream of the fall.
Etcheverry formula: \[ d_c = L_c / 6 \] ... (12.4)
and, \[ L_c = 3 \sqrt{(E_1 \times H_L)} \] ... (12.5)
UPIRI (U.P. Irrigation Research Institute) formula: \[ d_c = 0.25 (E_1 \times H_L)^{2/3} \] ... (12.6)
\[ L_c = 5 \times \sqrt{(E_1 \times H_L)} \] ... (12.7)
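As a quick illustration of the empirical formulas (12.1)–(12.7), the sketch below evaluates each expression for one assumed set of inputs. The values of \(H_L\), \(h_3\) and \(E_1\) are hypothetical examples, not from the text; all lengths are in metres.

```python
import math

# Assumed example values (hypothetical), all in metres.
H_L = 1.5   # height of fall
h3 = 1.0    # depth of flow in the downstream canal
E1 = 1.4    # specific energy of water upstream of the fall

# Dyas' expression (12.1)
dc_dyas = 0.82 * math.sqrt(H_L) * h3 ** (1 / 3)

# Glass' expression (12.2) and (12.3)
dc_glass = 1.85 * math.sqrt(E1) * H_L ** (1 / 3) - h3
Lc_glass = 5 * (dc_glass + h3)

# Etcheverry formula (12.4) and (12.5)
Lc_etch = 3 * math.sqrt(E1 * H_L)
dc_etch = Lc_etch / 6

# UPIRI formula (12.6) and (12.7)
dc_upiri = 0.25 * (E1 * H_L) ** (2 / 3)
Lc_upiri = 5 * math.sqrt(E1 * H_L)

print(f"Dyas:       d_c = {dc_dyas:.2f} m")
print(f"Glass:      d_c = {dc_glass:.2f} m, L_c = {Lc_glass:.2f} m")
print(f"Etcheverry: d_c = {dc_etch:.2f} m, L_c = {Lc_etch:.2f} m")
print(f"UPIRI:      d_c = {dc_upiri:.2f} m, L_c = {Lc_upiri:.2f} m")
```

Note that the four formulas give appreciably different answers for the same inputs; they are alternative empirical fits, not a single design rule.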
In a vertical impact cistern, no roughening device is provided, on the assumption that energy dissipation takes place through impact. Aeration of the underside of the falling nappe is essential to prevent the nappe from adhering to the crest wall. This aeration is ensured by providing pipes embedded in the wing walls, with one end emerging below the nappe near the downstream face of the crest and the other end above the FSL of the canal. Smooth, streamlined flow should be established between the end of the cistern and the start of the downstream canal. Because the cistern is below the bed level of the downstream portion of the canal, a gently rising slope (say, 1 in 5) is generally provided to join it with the downstream bed of the channel.
If a bridge is to be combined with the fall, the canal width may have to be reduced at the site of the fall. This reduction increases the discharge intensity, and likewise increases the surplus energy that needs to be dissipated. This may lead to providing an expensive cistern in which case some other kind of cistern should be adopted.
### 12.3.2 Horizontal Impact Cistern
In this type of cistern (Figure 12.3) the water coming over the crest flows down a glacis with a reverse curve at its end, which converts the inclined supercritical jet into a horizontal supercritical flow; this strikes the horizontal subcritical flow of the downstream channel, giving rise to the formation of a hydraulic jump. Since a hydraulic jump on a horizontal floor is never stable (the relevant parameters often being subject to change), the position of the jump will shift either upstream or downstream with the slightest change in the discharge, depth or velocity of, especially, the downstream flow. The jump formed is never perfect, and the energy dissipation is not efficient. Thus, for simplicity and as a first approximation, the usual practice is to depress the cistern downstream of the location of the jump so that the depth of the cistern is about 1/4 of the tail water depth.

**Figure 12.3 : Horizontal Impact Cistern**
Downstream specific energy ($E_2$) is calculated knowing discharge intensity, $q$, and the drop, $H_L$. The bottom of the cistern is provided at an elevation that is 1.25 times $E_2$ below the downstream TEL. This way the cistern bed level is independent of the bed of the downstream channel. A cistern length of 5 to 6 times $E_2$ ensures that the jump is confined to within the cistern. Roughening arrangements (for more efficient energy dissipation) are provided from a section that is at a distance of half the height of the jump from the toe of the glacis.
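The sizing rule above can be sketched in a few lines. The downstream TEL level and the value of \(E_2\) below are assumed for illustration only.

```python
# Horizontal-impact cistern sizing rule from the text: the cistern bed is
# set 1.25 * E2 below the downstream TEL, and the cistern length is taken
# as 5 to 6 times E2. Input values are hypothetical examples.
TEL_downstream = 100.0   # downstream total energy line level, m (assumed datum)
E2 = 1.2                 # downstream specific energy, m (assumed)

cistern_bed_level = TEL_downstream - 1.25 * E2
cistern_length = (5 * E2, 6 * E2)   # recommended range, m

print(cistern_bed_level, cistern_length)
```

Because the bed level is fixed from the TEL rather than from the downstream channel bed, the same rule applies regardless of the downstream bed elevation, as the text notes.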
So long as the cistern is of constant width, the design is simple. However, if the supercritical jet is divergent, there will be a bowed shape of hydraulic jump, with the discharge intensity being higher in the central portion of the cistern and lesser on the flanks. This complicates the design of the cistern. Thus, the expansion should be gradual.
The horizontal impact cistern is an efficient energy dissipator; however, it requires costly devices to arrest the jump, and therefore is not commonly used on canal falls.
12.3.3 Inclined Impact Cistern
In such cisterns, the glacis is continued straight into the cistern without introducing any reverse curvature at its end. However, the energy dissipation is not very efficient because the vertical component of the supercritical jet remains unaffected by the impact. Therefore, roughening devices have to be provided.
The cistern dimensions are determined as in the case of the horizontal impact cistern. Since the cistern must be provided with roughening devices, as mentioned above, its length increases well beyond the limits relevant to the two cisterns discussed above.
12.3.4 No-Impact Cistern
In low submerged falls, neither hydraulic impact nor the formation of a hydraulic jump is possible for energy dissipation. In such cases baffle walls with some suitable roughening device are adopted to dissipate the energy. The depth of no-impact cisterns cannot be determined theoretically. It is, therefore, appropriate to lower the bed of the cistern below the bed of the downstream channel as much as is economically possible. This provides a cistern that holds a larger volume of water, which helps dissipate the energy. By providing a slope rising from the bottom of the cistern to the normal downstream bed level, large scale turbulence is controlled.
SAQ 2
(a) Why is a cistern element needed, and where is it located on a canal fall?
(b) What are the various types of cisterns? Which is the most efficient type for energy dissipation? What is the basis on which various dimensions have been defined?
12.4 ROUGHENING DEVICES FOR ENERGY DISSIPATION
In most hydraulic structures, including canal falls, the best means of energy dissipation is hydraulic impact. Of the three impact type cisterns, the most efficient is the *vertical impact cistern* and the least effective is the *inclined impact cistern*. Even with the highly efficient vertical impact cistern, the residual energy causes high turbulence beyond the cistern, and some measures to dissipate this surplus energy are necessary. In cisterns without hydraulic impact, roughening devices (installed on the cistern floor) are the only way to dissipate this excess kinetic energy. Boundary friction is, obviously, increased by introducing artificial roughness elements. This roughness increases the internal friction between the layers of the stream and the boundary, helping the dissipation of energy. Roughening devices may be of any one type, or a combination of the various types available; these types are discussed below.
12.4.1 Friction Blocks
Surplus kinetic energy below hydraulic structures can be easily dissipated by anchoring rectangular concrete blocks on the cistern floor. These blocks project up to 1/4 of the full supply depth, and are very cheap, simple and effective. The blocks in a row are spaced at twice the height of the blocks; depending on the need, two or more rows of friction blocks, staggered in plan, may be provided (Figure 12.4).

Specially shaped friction blocks (Figure 12.5) known as arrows are equilateral triangles in plan with the corners rounded off. The face of the arrow on the downstream side is vertical. In order to give an upward deflection to the stream filaments, the top of the arrow is sloped from the front rounded corner to the back edge. The cistern is to be roughened to a length corresponding to,
\[
L = c \, \frac{h_3^{3/2} \, H_L^{1/2}}{D}
\]
...(12.8)
where, \(c\) = coefficient,
- \(= 1\) for vertical impact,
- \(= 3\) for horizontal impact,
- \(= 4\) for inclined impact with baffle,
- \(= 6\) for inclined impact without baffle,
- \(= 8 - 10\) for no impact,
\(D\) = depth of water in the cistern,
\(H_L\) = height of fall, and
\(h_3\) = depth of flow in the downstream canal.
For desired results the roughening blocks should be placed beginning at a distance of half the height of the hydraulic jump, measured downstream from the jump. Beyond the roughened length, a smooth cistern (that is, one without roughening devices) should be provided for half as long as the roughened cistern.
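A minimal sketch of equation (12.8), reading it as \( L = c \, h_3^{3/2} H_L^{1/2} / D \) (the dimensionally consistent reading), with the coefficient \(c\) taken from the list above. The input values, and the midpoint \(c = 9\) for the no-impact case, are assumptions for illustration.

```python
import math

# Coefficient c per impact type, as tabulated in the text.
C_VALUES = {
    "vertical impact": 1,
    "horizontal impact": 3,
    "inclined impact with baffle": 4,
    "inclined impact without baffle": 6,
    "no impact": 9,   # text gives a range of 8-10; midpoint assumed here
}

def roughened_length(impact_type, h3, H_L, D):
    """Length of cistern to be roughened (m), per eq. (12.8):
    L = c * h3^(3/2) * sqrt(H_L) / D, all lengths in metres."""
    c = C_VALUES[impact_type]
    return c * h3 ** 1.5 * math.sqrt(H_L) / D

# Assumed example: horizontal impact, h3 = 1.0 m, H_L = 1.5 m, D = 1.2 m.
L = roughened_length("horizontal impact", h3=1.0, H_L=1.5, D=1.2)
# Beyond this, a smooth cistern half as long again is recommended by the text.
print(round(L, 2), round(L / 2, 2))
```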

**Figure 12.5: Arrows for Energy Dissipation**
### 12.4.2 Ribbed Pitching
The bed and/or sides of the channel may be provided with bricks alternately laid flat and on edge (Figure 12.6) to dissipate surplus energy of flow. The bricks projecting into the flow cross-section increase the boundary friction.
12.4.3 Deflector
A deflector (or a baffle wall) is provided if the high velocity flow continues up to the end of the cistern (Figure 12.7). The baffle wall provides a deep pool of water in the cistern, which helps in dissipating the residual energy.

**Figure 12.7: Deflector**
12.4.4 Biff Wall
The biff wall at the downstream edge of the cistern produces a reverse roller which causes a controlled scour away from the wall and piles up the scoured material against the toe of the structure, thus preventing subsequent damage (Figure 12.8). It is provided where the high velocity flow continues unabated up to the end of the cistern.

**Figure 12.8: Biff Wall**
12.4.5 Dentated Sill
This device is also provided where the high velocity flow continues till the end of the cistern. The dentated sill breaks up the stream jet into smaller jets. It causes reverse rollers similar to the action of a biff wall (Figure 12.9).

**Figure 12.9: Dentated Sill**
SAQ 3
(a) Why do we need roughening devices downstream of a canal fall, and what locations are suitable for positioning them?
(b) Describe in detail all the types of roughness devices that are available for use. Give advantages and disadvantages of each.
12.5 TYPES OF CANAL FALLS
Various investigations have led to the development of a number of canal falls, such as:
(1) Stepped or cascade type fall,
(2) Open drop structure,
(3) Piped drop,
(4) Pipe fall,
(5) Pipe drop structure,
(6) Well drop structure,
(7) Vertical drop structures –
(a) Common (straight) drop
(b) Rectangular weir drop with raised crest (France)
(c) Sarda type fall
(d) YMGT-type drop (Japan),
(8) Common chute,
(9) Glacis fall,
(10) Trapezoidal notch fall.
The following sub-sections briefly describe these types of canal fall.
12.5.1 Stepped or Cascade Type Fall
This consists of stone-pitched floors between weir blocks set in a series, which act as check dams; it is used in canals of small discharge, e.g. in the tail reach of a main canal escape (Figure 12.10).

12.5.2 Open Drop Structure
Figure 12.11 shows the details of an open drop structure. Such a structure can be made of timber, concrete, or brick or stone masonry. Timber is usually not preferred due to its short life, particularly under alternating wet and dry conditions. Open drops for a vertical fall of 50 to 60 cm are especially suitable for areas which are bench-terraced. When installed, the structure serves two purposes, namely:
(1) to convey water from a higher to a lower elevation without excessive erosion, and
(2) to serve as a check for controlling the elevation of the water surface in the channel section upstream of the structure. A check gate provided at the inlet of the drop structure is used to control the water surface height in the upstream stretch of the channel.
When earth channels are to be built on steep slopes (or on bench-terraced lands, as mentioned above) it is necessary to construct a series of drop structures to flatten the channel grade between drops. The channel section between succeeding drop structures is thus made nearly flat. The crest of one structure and the apron floor of the next structure uphill are set at nearly the same level.
The minimum width of the inlet of a drop structure is equal to the bottom width of the irrigation channel. Water enters the structure through the inlet, which is in the form of a weir or notch in a wall. Vertical walls extend down into the soil under the inlet in order to prevent/minimise water seepage (and seepage force) under the structure, and are known as cut-off walls. Similar walls extending from the inlet to prevent/minimise seepage around the ends of the structure are called headwall extensions. Water falls into a stilling basin, which is an essential part of an erosion control structure. The stilling basin reduces the erosive force of the falling water. The length of the stilling basin is nearly twice the height of the drop. When the depth of overpour does not exceed 30 cm, the water depth in this basin should be about 45 cm for drops up to 60 cm, and 60 cm for drops of 60 to 90 cm. A small cross wall, called the end sill, about 10 to 12 cm high, placed at the end of the basin, increases the efficiency of energy dissipation.
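The rules of thumb above can be collected into a small helper. This is an illustrative sketch of the figures quoted in the text (valid only when the depth of overpour does not exceed 30 cm), not a design procedure.

```python
def stilling_basin(drop_cm):
    """Rule-of-thumb stilling basin dimensions for a small open drop.

    Returns (basin length, basin water depth, end sill height range),
    all in centimetres, per the figures quoted in the text.
    """
    if not 0 < drop_cm <= 90:
        raise ValueError("rules quoted only cover drops up to 90 cm")
    length_cm = 2 * drop_cm                  # basin length ~ twice the drop
    depth_cm = 45 if drop_cm <= 60 else 60   # basin water depth
    end_sill_cm = (10, 12)                   # small end cross-wall height
    return length_cm, depth_cm, end_sill_cm

print(stilling_basin(50))
print(stilling_basin(75))
```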
Drop structures often set up eddy currents in the irrigation stream and these currents tend to cause erosion of the channel section immediately downstream of the structure. Stones or brick bats placed over a length of 1 to 2 m from the structure help to prevent channel erosion near erosion control structures.
12.5.3 Piped Drop
A piped drop is more economical than an inclined drop when small discharges of up to 50 l/s are to be handled. It is usually equipped with a check gate at its upstream end, and a screen (debris barrier) is installed to prevent fouling of the entrance.
12.5.4 Pipe Fall
This is an economical structure generally used in small channels. It consists of a pipeline (precast concrete) which may sometimes be inclined sharply downwards (USBR and USSR practice) to cope with large drops. However, an appropriate energy dissipator (e.g., a stilling basin with an end sill) must be provided at the downstream end of the pipeline.
12.5.5 Pipe Drop Structure
Sometimes construction of an open drop structure is not possible without disturbing an existing bund or dam. In such cases water can be safely discharged from a higher level to a lower one by providing a pipe drop as shown in Figure 12.12. This type of structure allows discharge of water through a pipe line leaving the bund or dam undisturbed. Vitrified sewer pipes or concrete pipes made with bell joints, or corrugated metal pipes are used as the conduit. A water tight lid at the inlet will function as a check gate. The corner in the pipe line is made of a large-radius bend. Pipe joints are made with cement mortar. A stilling basin made of brick or stone masonry, or concrete is provided at the outlet of the pipe conduit to dissipate the energy of the incoming stream. A masonry or concrete apron is provided around the inlet end of the pipe to prevent seepage around it.

The velocity of flow of water in pipe drop spillways (i.e., fall) using different size pipes may be calculated from the following relationship obtained by applying the Bernoulli’s theorem:
\[
\text{Available head} = \text{frictional loss in the pipe line} + \text{velocity head at the entrance of the pipe} + \text{head loss at the bend} + \text{head loss at the exit}
\]
i.e. \[ H = \frac{4fLv^2}{(2gd)} + \frac{v^2}{2g} + K_1 \frac{v^2}{2g} + K_2 \frac{v^2}{2g} \]
...(12.9)
where, \( H = \) difference in elevation between the water level at the upstream and downstream ends of the structures, metres
\( v = \) velocity of flow in the pipe, (m/s)
\( f = \) coefficient of friction for the pipe (usually assumed to be about 0.01)
\( L = \) length of pipe, (m)
\( g = \) acceleration due to gravity, \((9.81 \text{ m/s}^2)\)
\( d = \) diameter of pipe, (m)
\( K_1, K_2 = \) co-efficients (for pipe drop that account for entry and exit losses \((K_1 = 0.5 \text{ and } K_2 = 0.25)\))
The discharge capacity of the pipe drop structure may be determined by the following relationship,
\[ Q = av \]
where, \( Q = \) discharge (m\(^3\)/s)
\( a = \) area of cross-section of the pipe (m\(^2\))
**Example 12.1**
Given, \( H = 1 \) m, \( d = 10 \) cm, \( f = 0.012 \), \( L = 3 \) m
Determine the discharging capacity of the pipe drop spillway.
**Solution**
\[ H = 4fLv^2/2gd + v^2/2g + 0.5v^2/2g + 0.25v^2/2g \]
Substituting the values of the variables, we have:
\[ 1 = 4 \times 0.012 \times 3 \times v^2/(2 \times 9.81 \times 10/100) + v^2/(2 \times 9.81) \]
\[ + 0.5v^2/(2 \times 9.81) + 0.25v^2/(2 \times 9.81) \]
\[ 1 = 0.1626\, v^2 \quad \Rightarrow \quad v^2 = 6.15 \]
\[ \therefore v = \sqrt{6.15} = 2.5 \text{ m/s} \]
\[ a = (\pi/4) \times d^2 = (\pi/4) \times (10/100)^2 = 0.0078 \text{ m}^2 \]
\[ Q = av = 0.0078 \times 2.5 = 0.0195 \text{ m}^3/\text{s} = 19.5 l/s \]
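Equation (12.9) can be solved for \( v \) in closed form, since every loss term scales with \( v^2/2g \). A minimal sketch in Python (function name illustrative) that reproduces Example 12.1:

```python
import math

def pipe_drop_flow(H, d, L, f=0.012, K1=0.5, K2=0.25, g=9.81):
    """Velocity (m/s) and discharge (m^3/s) of a pipe drop spillway, Eq. (12.9).

    H: available head (m), d: pipe diameter (m), L: pipe length (m),
    f: friction coefficient, K1/K2: entry and exit loss coefficients.
    """
    # H = [4fL/d + 1 + K1 + K2] * v^2 / (2g)  ->  solve for v
    coeff = (4 * f * L / d + 1 + K1 + K2) / (2 * g)
    v = math.sqrt(H / coeff)
    a = math.pi / 4 * d ** 2          # cross-sectional area of the pipe (m^2)
    return v, a * v

# Example 12.1: H = 1 m, d = 10 cm, f = 0.012, L = 3 m
v, Q = pipe_drop_flow(H=1.0, d=0.10, L=3.0)
print(round(v, 2), round(Q * 1000, 1))   # ~2.48 m/s, ~19.5 l/s
```

The same function, called with different diameters and heads, generates the entries of Table 12.1.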
Table 12.1 presents the velocities and discharge capacities of concrete pipes of different sizes used in pipe drop spillways. A longer apron is required in the stilling basin to still high-velocity flows.
**Table 12.1: Discharge Capacities (l/s) of Pipe Drop Spillways Using Concrete Pipes of Different Diameters (\( L = 3 \) m, \( f = 0.012 \); column headings give pipe diameter in cm)**
| Head (cm) | 7.5 | 10.0 | 12.5 | 15.0 | 17.5 | 20.0 | 22.5 | 25.0 | 27.5 | 30.0 |
|-----------|-----|------|------|------|------|------|------|------|------|------|
| 30.0 | 5.6 | 10.7 | 17.0 | 26.0 | 36.4 | 48.5 | 62.4 | 78.0 | 95.5 | 114.8 |
| 40.0 | 6.5 | 12.3 | 20.2 | 30.1 | 42.0 | 56.0 | 72.0 | 90.1 | 110.3 | 132.5 |
| 50.0 | 7.2 | 13.8 | 22.6 | 33.6 | 46.9 | 62.6 | 80.5 | 100.8 | 123.3 | 148.2 |
| 60.0 | 8.0 | 15.1 | 24.7 | 36.8 | 51.4 | 68.5 | 88.2 | 110.4 | 135.1 | 162.3 |
| 70.0 | 8.5 | 16.3 | 26.7 | 39.8 | 55.5 | 74.0 | 95.3 | 119.2 | 146.0 | 175.3 |
| 80.0 | 9.1 | 17.4 | 28.5 | 42.5 | 59.4 | 79.2 | 101.8 | 127.4 | 156.0 | 187.4 |
| 90.0 | 9.7 | 18.5 | 30.3 | 45.1 | 63.0 | 84.0 | 108.0 | 135.2 | 165.4 | 198.8 |
| 100.0 | 10.2 | 19.5 | 31.9 | 47.5 | 66.4 | 88.5 | 113.9 | 142.5 | 174.4 | 209.6 |
**SAQ 4**
Given \( H = 2 \) m, \( d = 15 \) cm, \( f = 0.012 \), \( L = 4 \) m
Determine the discharge capacity of the pipe drop spillway.
### 12.5.6 Well Drop Structure
The well drop (Figure 12.13) consists of a rectangular well and a pipeline (almost horizontal) followed by a downstream apron. Most of the energy is dissipated in the well.

This type of drop is suitable for low discharges (up to 50 l/s) and high drops (2 to 3 m), and is used in tail escapes of small channels. It is also known as a cylindrical fall.
12.5.7 Vertical Drop Structure
The following types of drop structures under this category are commonly in use.
Common (Straight) Drop
The common drop structure, in which the aerated free-falling nappe (modular flow) hits the downstream basin floor, and with turbulent circulation in the pool beneath the nappe contributing to energy dissipation, is shown in Figure 12.14.

The following equations fix the geometry of the structure in a suitable form for steep slopes:
Drop number: \( D_r = \frac{q^2}{gd^3} \) \hspace{1cm} \ldots (12.10)
where, \( q \) = the discharge per metre width,
Basin length: \( L_B / d = 4.3 D_r^{0.27} + L_j / d \) \hspace{1cm} \ldots (12.11)
where, \( L_j \) = length of the jump.
Pool depth under nappe: \( y_p / d = D_r^{0.22} \) \hspace{1cm} \ldots (12.12)
Sequent depths: \( y_1 / d = 0.54 D_r^{0.425} \) \hspace{1cm} \ldots (12.13)
\[ y_2 / d = 1.66 D_r^{0.27} \] \hspace{1cm} \ldots (12.14)
where, \( d \) = height of drop crest above the basin floor.
A small upward step, \( h \) (such that \( 0.5 < h/y_1 < 4 \)) at the end of the basin floor is desirable in order to localize the hydraulic jump formation.
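The geometry relations (12.10)–(12.14) are all power laws in the drop number, so they chain naturally into a small helper. A sketch in Python (function name illustrative); the jump length \( L_j \) in Eq. (12.11) must still be supplied separately:

```python
def straight_drop_geometry(q, d, g=9.81):
    """Common (straight) drop basin geometry, Eqs. (12.10)-(12.14).

    q: discharge per metre width (m^2/s); d: height of the drop crest
    above the basin floor (m). Returns basin dimensions in metres; the
    jump length L_j must be added to get the full basin length (Eq. 12.11).
    """
    Dr = q ** 2 / (g * d ** 3)                      # drop number, Eq. (12.10)
    return {
        "drop_number": Dr,
        "pool_depth": d * Dr ** 0.22,               # y_p, Eq. (12.12)
        "y1": d * 0.54 * Dr ** 0.425,               # pre-jump depth, Eq. (12.13)
        "y2": d * 1.66 * Dr ** 0.27,                # sequent depth, Eq. (12.14)
        "basin_length_excl_jump": d * 4.3 * Dr ** 0.27,  # Eq. (12.11) minus L_j
    }

# assumed sample values: q = 1 m^2/s over a 1 m drop
geom = straight_drop_geometry(q=1.0, d=1.0)
```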
The USBR impact block type basin also provides energy dissipation under low heads, and is suitable where the tail water level is more than the sequent depth, \( y_2 \). The dimensions of such a structure (Figure 12.15) are recommended to be fixed by the following relations:
Basin length, \( L_B = L_d + 2.55 y_c \) \hspace{1cm} \ldots (12.15)
Location of impact block \( = L_d + 0.8 y_c \) \hspace{1cm} \ldots (12.16)

Minimum tail water depth, \( y_2 \geq 2.15 y_c \) \hspace{1cm} \ldots (12.17)
Impact block height = 0.8 \( y_c \) \hspace{1cm} \ldots (12.18)
Width and spacing of impact block = 0.4 \( y_c \) \hspace{1cm} \ldots (12.19)
End sill height = 0.4 \( y_c \)
... (12.20)
Minimum side wall height = \( y_2 + 0.85 y_c \)
... (12.21)
where, \( y_c \) = critical depth of flow.
The values of \( L_d \) can be read from Figure 12.16.
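With \( L_d \) read from Figure 12.16, relations (12.15)–(12.21) reduce to simple multiples of the critical depth. An illustrative sketch (function name and sample values assumed):

```python
def impact_block_basin(y_c, L_d, y2):
    """USBR impact-block basin proportions, Eqs. (12.15)-(12.21).

    y_c: critical depth (m); L_d: distance read from Figure 12.16 (m);
    y2: sequent depth (m). All returned dimensions are in metres.
    """
    return {
        "basin_length": L_d + 2.55 * y_c,        # Eq. (12.15)
        "block_location": L_d + 0.8 * y_c,       # Eq. (12.16)
        "min_tailwater": 2.15 * y_c,             # Eq. (12.17): tail water >= this
        "block_height": 0.8 * y_c,               # Eq. (12.18)
        "block_width_spacing": 0.4 * y_c,        # Eq. (12.19)
        "end_sill_height": 0.4 * y_c,            # Eq. (12.20)
        "min_side_wall": y2 + 0.85 * y_c,        # Eq. (12.21)
    }

# assumed sample values: y_c = 0.5 m, L_d = 2.0 m, y2 = 1.2 m
dims = impact_block_basin(y_c=0.5, L_d=2.0, y2=1.2)
```

Note that the basin is only suitable where the actual tail-water depth exceeds `dims["min_tailwater"]`, per Eq. (12.17).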

**Rectangular Weir Drop with Raised Crest (France)**
Vertical falls up to 7 m in channels of bed widths ranging from 0.2 to 1 m, having flow depths of 0.1 to 0.7 m at full supply, may be provided as illustrated in Figure 12.17.

Height of crest, \( P = D_1 - H \)
... (12.22)
Discharge, \( Q = C L H^{3/2} \sqrt{2g} \)
... (12.23)
**Design of Cistern:**
Volume of basin, \( V = Q H_{dr}/150, \text{ m}^3 \)
... (12.24)
Width of basin, \( W_B = V/[L_B (D_2 + d_c)] \)
... (12.25)
where,
\( B_1 \) = bed width of the rectangular channel (m),
\( C = 0.36 \) for the vertical upstream face of the crest wall,
= 0.40 for the rounded upstream face (5 – 10 cm radius),
\( d_c \) = depth of basin = 0.1 to 0.3 m,
\( H_{dr} \) = drop = upstream FSL (m) – downstream FSL (m), and
\( L \) = crest length = \( L_B - 0.1 \) (m).
**Sarda Type Fall**
This fall is provided with a raised crest wall, upstream and downstream wing walls, an impervious floor and a cistern, and downstream bank and bed protection works. The vertical impact of the falling water causes the energy dissipation (Figure 12.18).
Two types of crests are used (Figure 12.19) in the design of a Sarda fall. The rectangular one is for discharges less than 14 cumec while the trapezoidal crest is meant for higher discharges.
The notations used in this Figure are explained as under:
\( B \) = width of crest (m),
\( C_d \) = coefficient of discharge = 0.65,
\( D \) = drop in bed level on upstream and downstream sides (m),
$H = \text{head over crest (m)},$
$H_L = \text{difference between the upstream and downstream water levels (m)},$
$h_a = \text{head due to velocity of approach (m)},$
$h_d = \text{submergence head (m)},$
$h_1 = \text{upstream depth of flow (m), and}$
$h_3 = \text{downstream depth of flow (m)}.$

The crest length ($L$) of the fall is usually the same as the bed width of the canal. However, to take into account any anticipated (proposed) increase in the discharge, in future, the length may be increased by an amount equal to the depth of flow (i.e., $L = \text{Bed width} + \text{Depth of water}$). To reduce the cost of construction it is customary to go in for the fluming of the fall. No choking upstream of the fall occurs in the case of a flumed fall with a fluming ratio of $2F_1$, where $F_1$ is the Froude number of the approaching flow. A fall is not flumed beyond 50 %. In case the canal is flumed, the upstream or contracting and downstream or expanding transitions have to be provided (Figure 12.20).
The crest level is so fixed that it does not create appreciable changes in upstream water levels (due to backwater or drawdown effects).
Top width of rectangular crest, \( B = 0.55 \sqrt{d} \)
Discharge of rectangular crest for free flow conditions,
\[
Q = 0.415 \sqrt{(2g)} \ L H^{3/2} (H/B)^{1/6}
\]
Figure 12.20: Plan of Transitions: Cylindrical Inlet and Linear Outlet
(\( L_c \) and \( L_e \) refer to contraction and expansion lengths)
Top width of trapezoidal crest, \( B = 0.55 \sqrt{(H + d)} \)
\[
= 0.55 \sqrt{(h_1 + D)}
\]
Base width of the trapezoidal crest can be arrived at knowing its other geometrical elements. Upstream and downstream faces are usually given slopes of 1:3 and 1:8, respectively.
Discharge (for free flow conditions) of trapezoidal crest,
\[
Q = 0.45 \sqrt{(2g)} \ L H^{3/2} (H/B)^{1/6}
\]
However, for drowned conditions (submergence > 33%)
\[
Q = C_d \sqrt{(2g)} \ L \left[ \frac{2}{3} \ H_L^{3/2} + h_d \sqrt{H_L} \right]
\]
For submerged flow conditions, and considering the velocity of approach (where, \( h_a \) is head due to approach velocity)
\( Q \) for a trapezoidal crest is:
\[
Q = C_d \sqrt{(2g)} \ L \left[ \frac{2}{3} \left( (H_L + h_a)^{3/2} - h_a^{3/2} \right) + h_d \left( H_L + h_a \right)^{1/2} \right]
\]
The base width \( B \), of a rectangular crest wall is taken as equal to \( (H + d) / G \), where \( G \) is the specific gravity of its material. For falls with drops greater than 1.5 m, the stability of all crest walls should be checked following usual procedure.
**Design of Other Elements of a Sarda Type Fall**
As mentioned already (Equation 12.7):
Length of cistern, \( L_C \) (providing a depressed cistern) = 5 \( \sqrt{(E H_L)} \)
Depth of cistern, \( d_C \), below the d/s bed = 0.25 \( (E H_L)^{2/3} \)
Depth (minimum) of upstream cutoff below the surface of u/s floor
\[
= (h_1 / 3 + 0.6) \text{ m}
\]
with a minimum of 0.8 m.
Depth (minimum) of downstream cutoff below the surface of d/s floor
\[
= (0.5 \ h_3 + 0.6) \text{ m}
\]
with a minimum of 1.0 m.
Thickness of cutoff = 0.4 m
Sometimes thickness is kept only 30 cm (1 $\frac{1}{2}$ brick thick).
Minimum length of d/s floor,
$$l_d = 2h_3 + H_L + 2.4 \text{ m}$$
(12.36)
It is measured from the toe of the crest wall. While applying Bligh's creep or Khosla's theory, the balance length of the impervious floor is provided under the crest wall, and as the u/s floor.
Thickness of upstream floor = 0.3 m to 0.4 m.
It is usual minimum thickness with reference to practical consideration.
Thickness of downstream floor is determined from considerations of uplift pressure subject to a minimum of 0.45 to 0.6 m for large falls and 0.3 to 0.45 m for small falls.
Brick pitching is provided on the channel bed just upstream of the crest. It is laid at a slope of 1 : 10 for a distance = $h_1$.
The crest is provided with a few drain holes of 15 to 30 cm diameter to drain the upstream bed when the canal may be closed for maintenance purposes.
Radius of curvature of upstream wing walls = 5 to 6 H.
Angle subtended by upstream wing walls at the centre is generally = 60°, and thereafter the walls are extended 1 m into the earthen banks. The walls are taken as far upstream as the impervious floor.
Wing walls downstream of the fall are kept vertical for a distance = 5 to 8 times $\sqrt{(E \cdot H_L)}$ and lowered through a series of steps. They are warped so that their slope changes from vertical to the side slope of the canal. The wings are designed as retaining walls with earth pressures on the rear side and no water in the canal (i.e., for the worst condition).
Boulder or stone pitching is provided on the bed and sides of the canal downstream of the warped wing walls. The pitching is laid dry and without the use of mortar. Length of pitching = $3 \times h_3$, or using Table 12.2 and adopting the greater of the two values. Bed pitching is laid level upto the end of wing walls beyond which it is laid sloping in 1 : 10.
Side pitching extends from the end of the downstream bed pitching and towards upstream at a slope of 45° (Figure 12.21). A toe wall supports the side pitching.

**Table 12.2: Length of Bed Pitching**
| Head over Crest, $H$ (m) | Total Length of d/s Pitching (m) |
|--------------------------|---------------------------------|
| Less than 0.3 | 3.0 |
| 0.3 - 0.45 | $3.0 + 2H_L$ |
| 0.45 - 0.60 | $4.5 + 2H_L$ |
| 0.60 - 0.75 | $6.0 + 2H_L$ |
| 0.75 - 0.90 | $9.0 + 2H_L$ |
| 0.90 - 1.05 | $13.5 + 2H_L$ |
| 1.05 - 1.20 | $18.0 + 2H_L$ |
| 1.20 - 1.50 | $22.5 + 2H_L$ |
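Table 12.2 is a straightforward lookup on the head over crest; a sketch of that lookup (band boundaries taken from the table, function name illustrative):

```python
def bed_pitching_length(H, H_L):
    """Total length of d/s bed pitching (m) per Table 12.2.

    H: head over crest (m); H_L: drop in water levels (m).
    Returns None if H exceeds the tabulated range (> 1.50 m).
    """
    # (upper bound of the H band in metres, base pitching length in metres)
    bands = [(0.30, 3.0), (0.45, 3.0), (0.60, 4.5), (0.75, 6.0),
             (0.90, 9.0), (1.05, 13.5), (1.20, 18.0), (1.50, 22.5)]
    if H < bands[0][0]:
        return bands[0][1]            # H below 0.3 m: 3.0 m flat, no H_L term
    for upper, base in bands[1:]:
        if H <= upper:
            return base + 2 * H_L
    return None

# e.g. H = 0.70 m, H_L = 1.3 m falls in the 0.60-0.75 band: 6.0 + 2(1.3) = 8.6 m
```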
The total length of the impervious floor, as mentioned earlier, is decided on the basis of Bligh's theory or Khosla's theory.
Example 12.2
Design a 1.3 m Sarda fall for a channel conveying 20 cumec of discharge at a depth of flow equal to 1.5 m. The bed width of the canal is 18 m.
Solution
The discharge exceeds 14 cumec, hence the crest adopted will be of trapezoidal section with an upstream batter of $1:3$ (H : V) and a downstream slope of $1:8$ (H : V).
Crest Dimensions
Width of crest, $B = 0.55 \sqrt{(H + d)} = 0.55 \sqrt{(h_1 + D)}$
\[= 0.55 \sqrt{(1.5 + 1.3)}\]
\[= 0.920 \text{ m}\]
Length of crest = bed width of canal = 18 m
Head over crest = $H = \left[ \frac{Q B^{1/6}}{0.45 \sqrt{(2g)} \times L} \right]^{0.6}$
\[= \left[ \frac{20 \times 0.92^{1/6}}{0.45 \sqrt{2g} \times 18} \right]^{0.6}\]
\[= 0.70 \text{ m}\]
Height of crest above upstream bed = $h_1 - H$
\[= 1.5 - 0.70\]
\[= 0.80 \text{ m}\]
Height of crest above downstream bed, $d = h_1 + D - H$
\[= 1.5 + 1.3 - 0.70\]
\[= 2.1 \text{ m}\]
The base of the fall should be at least 0.5 m below the downstream bed level. Accordingly,
Base width of fall (i.e., wall section) = $(1/3) (d + 0.5) + B + (1/8) (d + 0.5)$
\[= (11/24) (d + 0.5) + B\]
\[= (11/24) (2.1 + 0.5) + 0.92\]
\[= 2.11 \text{ m}\]
Depth of cistern, $d_c = (1/4) (E \cdot H_L)^{2/3}$
\[= (1/4) \times (0.70 \times 1.3)^{2/3}\]
(taking $E = H$ i.e., neglecting velocity of approach, and assuming $H_L = D$)
\[= 0.234 \text{ m}\]
\[= 0.2 \text{ m (say)}\]
Length of cistern, $L_c = 5 \sqrt{(E \cdot H_L)}$
Assuming, $E = H$ (as mentioned above), $L_c = 5 \sqrt{(0.70 \times 1.3)}$
\[= 4.769 \text{ m}\]
\[= 4.77 \text{ m (say)}\]
Upstream and Downstream Cutoffs
Depth of upstream cutoff = $h_1 / 3 + 0.6 \text{ m}$
\[= 1.5/3 + 0.6 = 1.1 \text{ m (> 0.8 m)}\]
Depth of downstream cutoff = $h_1 / 2 + 0.6 \text{ m}$
\[= 1.5/2 + 0.6 = 1.35 \text{ m (> 1.0 m)}\]
Thickness of these cutoffs may be kept equal to 0.4 m.
**Length of Impervious Floor**
We know that exit gradient (of seeping water), $G_E$ is given by the relation:
$$G_E = \frac{1}{\pi \sqrt{\lambda}} \times \frac{H}{d_1}$$
where, $H =$ head for no flow condition,
= height of crest above the downstream bed
= 2.1 m (for the given case)
$d_1 =$ the depth of downstream cut off
= 1.35 m (as provided above)
$\lambda$ is a parameter enunciated in Khosla’s theory.
Assuming a safe exit gradient of 1/5, we can write:
$$1 / (\pi \sqrt{\lambda}) = G_E (d_1 / H) = (1/5) \times (1.35/2.1) = 0.13$$
From Khosla’s curves value of $b/d_1 = 13$ (refer standard reference books on Irrigation Engineering)
(where, $b$ is the total length of impervious floor)
Therefore, $b = 13 \times 1.35 = 17.55$ m
Hence, total floor length = 17.55 m
Minimum floor length required downstream = $2h_3 + H_L + 2.4$ (here $h_3 = h_1 = 1.5$ m)
$$= 2 \times 1.5 + 1.3 + 2.4$$
$$= 6.7 \text{ m}$$
So provide the downstream floor length equal to 6.7 m and the balance 10.85 m long impervious floor on the upstream side. Thickness of the concrete floor at various sections is decided on the basis of unbalanced uplift pressures.
**Upstream Protection**
Radius of curvature of the upstream wing walls = 5 to 6 $\times H$
$$= 5 \times 0.7 \text{ to } 6 \times 0.7$$
$$= 3.5 \text{ m to } 4.2 \text{ m}$$
$$= 4.0 \text{ m (say)}$$
Brick pitching on the upstream bed is provided at a slope of 1 : 10 for a distance equal to the upstream depth of flow. Drain holes of 20 cm dia are provided at an interval of 4 m.
**Downstream Protection**
The downstream wing walls are to be kept vertical for a distance of about 5 to 8 $\sqrt{(E \times H_L)}$
$$= 5 \sqrt{(0.70 \times 1.3)} \text{ to } 8 \sqrt{(0.7 \times 1.3)}$$
$$= 4.77 \text{ to } 7.63 \text{ m}$$
$$= 5.5 \text{ m (say)}$$
Beyond that the wing walls are warped from the vertical to the side slopes of the canal. The wing walls are given an average splay of about 1 : 2.5 to 1 : 4 at the top. The difference in the surface width of flow between the rectangular and trapezoidal sections is 1.5 (i.e., the side slope of the canal) $\times$ 1.5 (i.e., the depth of flow) $\times$ 2 (i.e., on both sides) = 4.5 m. For a splay of 1 : 2.5 in the warped wings, the length of the warped wings along the channel axis is equal to $(4.5/2) \times 2.5 = 5.625$ m.
According to Table 12.2, the total length of bed pitching on the downstream side equals $6.0 + 2H_L = 6.0 + 2.6 = 8.6$ m.
The bed pitching should be laid level up to the end of the downstream wing walls beyond which it should be laid at a slope of 1 in 10. A toe wall of thickness equal to 0.4 m and a depth equal to 1 m should be provided at the end of the bed pitching.
Side pitching is provided downstream of the wing walls up to the end of bed pitching. The side pitching may be suitably curtailed. A longitudinal section of the Sarda fall based on the above design is shown in Figure 12.21.
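The arithmetic of Example 12.2 can be chained into a short script for checking hand computations. This is a sketch, not a complete design: the ratio \( b/d_1 \) must still be read manually from Khosla's curves, the velocity of approach is neglected (\( E = H \)), and \( H_L \) is taken equal to the drop \( D \):

```python
import math

def sarda_trapezoidal_design(Q, h1, D, bed_width, b_over_d1=13,
                             safe_exit_gradient=1 / 5, g=9.81):
    """Key dimensions of a trapezoidal-crested Sarda fall (Example 12.2 procedure).

    Q: discharge (m^3/s); h1: depth of flow (m); D: drop (m);
    b_over_d1: floor-length ratio read manually from Khosla's curves.
    """
    B = 0.55 * math.sqrt(h1 + D)                # crest top width
    L = bed_width                               # crest length = bed width
    # invert Q = 0.45*sqrt(2g)*L*H^(3/2)*(H/B)^(1/6)  =>  H = [...]^(3/5)
    H = (Q * B ** (1 / 6) / (0.45 * math.sqrt(2 * g) * L)) ** 0.6
    d = h1 + D - H                              # crest height above d/s bed
    base_width = (11 / 24) * (d + 0.5) + B      # 1:3 u/s and 1:8 d/s batters
    d_cistern = 0.25 * (H * D) ** (2 / 3)       # with E = H and H_L = D
    L_cistern = 5 * math.sqrt(H * D)
    d1 = h1 / 2 + 0.6                           # d/s cutoff depth (h3 = h1 here)
    exit_param = safe_exit_gradient * d1 / d    # = 1/(pi*sqrt(lambda)), for curves
    b = b_over_d1 * d1                          # total impervious floor length
    return {"B": B, "H": H, "d": d, "base_width": base_width,
            "d_cistern": d_cistern, "L_cistern": L_cistern,
            "exit_param": exit_param, "floor_length": b}

# data of Example 12.2
res = sarda_trapezoidal_design(Q=20, h1=1.5, D=1.3, bed_width=18)
```

Running this reproduces the hand-computed values of the example (crest width 0.92 m, head over crest 0.70 m, base width 2.11 m, floor length 17.55 m).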
**SAQ 5**
Design a 1.4 m Sarda fall for a channel conveying 24 cumec of discharge at a 1.2 m depth of flow. The bed width of the canal is 24 m.
**YMGT-type Drop (Japan)**
Figure 12.22 shows a drop suitable for small canals with discharges less than 1 cumec. The fall is used in flumed sections.
Following design criteria are relevant for this type of fall:
Sill height, $P$, varies from 0.06 m to 0.14 m for discharge intensity,
$q = 0.2$ to 1.0 cumec/m
Depth of cistern, $d_c = 0.25 \left( E_C H_{dr} \right)^{1/2}$
Length of cistern, $L_c = 2.5 L_d$
where, $E_c = 1.5 y_c$,
$L_d = L_{d1} + L_{d2}$,
$L_{d1} = 1.155 \sqrt{\left[ \left( P' / E_c \right) + 0.33 \right]}$,
$L_{d2} = (D_2 + d_c) \cot \alpha$,
$\cot \alpha = y_c / L_{d1}$,
$q = \text{discharge intensity}$,
$y_c = (q^2 / g)^{1/3}$
**12.5.8 Common Chute**
This drop is characterised by a sloping (straight) downstream face (slope between 1 : 4 and 1 : 6), called a *glacis*, followed by any type of low head stilling basin. Such a drop is recommended for a wide range of discharges and drop heights.
**12.5.9 Glacis Fall**
In this type of fall the energy dissipation is achieved through a hydraulic jump (Figure 12.23).
It comprises two main types.
**Glacis without baffle wall**
The straight glacis is replaced by a parabolic glacis (convex towards water) as suggested by Montague. The jump forms on the glacis itself.
**Straight glacis with baffle platform and baffle wall**
The baffle wall is located at a calculated distance from the toe of the glacis and is of a calculated height, to ensure the formation of the jump on the baffle platform. This was developed by Inglis.
However, the falls mentioned above are now obsolete.
### 12.5.10 Trapezoidal Notch Fall
This type of fall has a number of trapezoidal notches set in a high breast wall laid across the channel. The entrance to the fall is smooth and a flat lip projecting downstream allows the falling jet to spread out (Figure 12.24(a), (b) and (c)). The design of a trapezoidal fall involves the use of following equations:
\[
Q = 0.667 C_d \sqrt{2g} \left[ LH^{3/2} + (4/5) H^{5/2} \tan \alpha \right]
\]
...(12.45)
for free flow conditions. And, for submerged flow conditions, we have:

(a) Trapezoidal Notch Fall
(b) Dimensions of Trapezoidal Notch Fall
(c) Depth-Discharge Relation
**Figure 12.24**
\[ Q = 0.667 \ C_d \ \sqrt{(2g)} \ (H - h_d)^{3/2} \ [(L + 2h_d \ \tan \alpha) + 0.8 \ (H - h_d)] \ \tan \alpha \]
\[ + \ C_d \ \sqrt{(2g)} \ (H - h_d)^{1/2} \ (L + h_d \ \tan \alpha) \ h_d \]
... (12.46)
where, \( \tan \alpha = \frac{15}{8} \left[ \frac{Q_2 \ H_1^{3/2} - Q_1 \ H_2^{3/2}}{C_d \ \sqrt{(2g)} \ H_1^{3/2} \ H_2^{3/2} \ (H_2 - H_1)} \right] \)
... (12.47)
Length of crest at sill level, \( L = \frac{Q}{(2/3) \ C_d \ \sqrt{(2g)} \ H_1^{3/2}} - \frac{4}{5} \ H_1 \ \tan \alpha \)
... (12.48)
where,
\( C_d = \) coefficient of discharge,
= 0.78 for canal notches,
= 0.70 for distributary notches,
\( H = \) depth of water above the notch sill upto the normal water surface measured some distance away (u/s) from the crest to eliminate drawdown effect (m),
\( H_1 \) and \( H_2 = \) heads at full supply discharge, \( Q_1 \), and half of full supply discharge, \( Q_2 \),
\( h_d = \) submerged head (m),
\( L = \) length of crest (m), and
\( \alpha = \) inclination of the sloping sides of the notch from the vertical (in degrees).
The number of notches is adjusted such that the top width of flow in the notch lies between 0.75 to 1 times the full water depth above the sill of the notch; it is done to fairly distribute the discharge through the length of the notch wall. The piers have a thickness of half the depth of flow as a minimum, and a larger thickness for piers that are required to carry a heavy superstructure.
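Equations (12.47) and (12.48) invert the free-flow rating (12.45) using two observed discharge–head pairs. A round-trip sketch (all numbers assumed for illustration) that generates \( Q_1, Q_2 \) from a known notch and then recovers its dimensions:

```python
import math

def notch_free_flow_Q(Cd, L, H, tan_a, g=9.81):
    """Free-flow discharge of one trapezoidal notch, Eq. (12.45)."""
    return (2 / 3) * Cd * math.sqrt(2 * g) * (L * H ** 1.5 + 0.8 * H ** 2.5 * tan_a)

def design_notch(Q1, H1, Q2, H2, Cd, g=9.81):
    """Side slope and sill length of a trapezoidal notch, Eqs. (12.47)-(12.48).

    (Q1, H1): full-supply discharge and head; (Q2, H2): half-supply values.
    """
    k = Cd * math.sqrt(2 * g)
    tan_a = (15 / 8) * (Q2 * H1 ** 1.5 - Q1 * H2 ** 1.5) / (
        k * H1 ** 1.5 * H2 ** 1.5 * (H2 - H1))              # Eq. (12.47)
    L = Q1 / ((2 / 3) * k * H1 ** 1.5) - 0.8 * H1 * tan_a   # Eq. (12.48)
    return tan_a, L

# round-trip check with an assumed notch: L = 1.0 m, tan(alpha) = 0.5, Cd = 0.70
Q1 = notch_free_flow_Q(0.70, 1.0, 1.0, 0.5)   # full supply, H1 = 1.0 m
Q2 = notch_free_flow_Q(0.70, 1.0, 0.6, 0.5)   # lower supply, H2 = 0.6 m
tan_a, L = design_notch(Q1, 1.0, Q2, 0.6, 0.70)
```

The recovered `tan_a` and `L` match the assumed notch, confirming that (12.47)–(12.48) are consistent with (12.45).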
### 12.6 DESIGN PARAMETERS
The design parameters include the hydraulic and structural features of a fall; and, in particular cover the shape, size and configuration of the following components of a fall:
(a) Upstream approach,
(b) Throat,
(c) Crest,
(d) Downstream glacis,
(e) Cistern element,
(f) Length and thickness of floor,
(g) Downstream expansion, and
(h) Bed and side protection,
A fall may have some or all of the above elements in its design configuration. Each type of drop (or fall) has a different set of criteria to determine these elements as discussed above.
The selection of a suitable type of fall for a particular set of site conditions mainly depends on factors such as the height of drop and the discharge to be passed. Thus, in a nutshell, it depends on the amount of energy to be dissipated; the fall that dissipates this energy most efficiently at minimum cost is preferred.
12.7 SUMMARY
In this unit we have learnt about the various types of structural elements that are commonly provided on canals for controlling erosion of the bed and sides at the location of a canal drop (or fall). The criteria followed to design these falls have been discussed. A detailed design of a Sarda fall is presented for a better insight into the design procedure.
12.8 KEY WORDS
**Biff Wall**: A device provided at the downstream edge of the cistern to produce a reverse roller which causes a controlled scour away from the wall and piles up the scoured material against the toe of the structure to prevent subsequent damage.
**Deflector**: A device provided in a cistern if the high velocity flow continues up to the end of the cistern. It deflects the high-velocity stream in order to reduce its destructive force.
**Dentated Sill**: A device provided in a cistern if the high velocity flow continues up to the end of the cistern. It bears the impact of this high flow.
**Falls**: Devices provided on a canal to negotiate steep slopes in the bed to prevent excessive erosion due to the generated high velocity of flow.
**Friction Blocks**: Rectangular concrete blocks anchored on the cistern floor to dissipate surplus kinetic energy below hydraulic structures.
**Horizontal Impact Cistern**: A device in which the water flowing over the crest flows over a glacis at the bottom of which the reverse curve converts the inclined supercritical flow into a horizontal supercritical flow that strikes the horizontal subcritical flow of the downstream channel causing a hydraulic jump to occur.
**Inclined Impact Cistern**: A device in which the glacis is continued as such into the cistern.
**Vertical Impact Cistern**: A very efficient device in which water falls freely to strike the floor (in fact the standing water pool) vertically, and thus dissipate the surplus energy.
**Ribbed Pitching**: Bricks laid alternately flat and on edge to dissipate surplus energy of flow by increasing the boundary friction.
**Roughening Devices for Energy Dissipation**: Devices introduced in a hydraulic structure to increase the friction and thus dissipate surplus energy of flowing water.
12.9 ANSWERS TO SAQs
Read through the various sections and solved examples, as well as reference books. |
CpG and transfer factor assembled on nanoparticles reduce tumor burden in mice glioma model
Yi-Feng Miao, Tao Lv, Ran Wang, Hui Wu, Shao-Feng Yang, Jiong Dai and Xiao-Hua Zhang*
This work describes the use of a transfer factor, a low-molecular-weight protein that can transfer cell mediated immunity from donor to recipient, and CpG, a clinically relevant toll-like receptor agonist, for treating glioma. Transfer factor and CpG were assembled onto gold nanoparticles via layer-by-layer assembly. The modified nanoparticles (i.e. particles assembled with transfer factor and CpG) were characterized by size, zeta potential, and loading. An *in vivo* tumor study revealed that the nanoparticles can inhibit tumor progression more effectively than either TF or CpG alone or an equivalent dose of CpG and TF in a soluble mixture. To investigate the anti-tumor mechanism, the modified nanoparticles were incubated with dendritic cells and macrophages. Viability tests showed that the modified nanoparticles did not affect cell viability, and neither did the use of soluble TF or CpG alone. Cell activation assessments showed that the modified nanoparticles can upregulate DC surface markers (CD80, CD86, and CD40), and promote the production of cytokines (GM-CSF, TNF, and IL-6) from macrophages. In addition, the *in vivo* study revealed that the modified nanoparticles promoted the production of both inflammatory and effector cytokines in mice serum. Finally, the study also revealed that the production of inflammatory cytokines came primarily from the CpG component, not TF. This study may provide a new immune therapeutic strategy for treating glioma.
Introduction
Glioma is one of the most aggressive and lethal tumors that thrive in the human central nervous system – specifically, the brain.\(^1,^2\) Despite the fact that multimodal treatments (*e.g.* surgical resection, radiotherapy, and cytotoxic chemotherapy, or a combination of these therapies) have been employed to eradicate this disease, the 5 year survival rate of glioma patients is still extremely low (<5%).\(^3\) While glioma immunotherapies are a rapidly expanding frontier,\(^4,^5\) developing effective immunotherapeutic strategies is hindered by multiple challenges, such as the strong immune suppressive environment from the tumor or the lack of glioma specific antigens.\(^6,^7\) Thus it is necessary to search for new immune therapeutic strategies for this lethal disease.
Transfer factors (TFs) are small molecular peptides that are usually dialyzed from immune cells. These peptides possess the capability to transmit cell mediated immunity from sensitized donors to naive recipients.\(^8\) Current studies identified that TFs possess a high ratio of tyrosine and glycine that is similar to the N-terminal of some neuropeptides (*i.e.* encephalin peptide), but more detailed information on TF is still missing.\(^9–^12\) Although current studies are still unclear as to the molecular mechanism of TFs, multiple clinical trials demonstrated the effectiveness of TFs for treating a wide variety of diseases, such as infectious diseases (*i.e.* leishmaniasis; toxoplasmosis), immunodeficiencies (chronic granulomatosis) and even cancers.\(^11,^13–^15\)
For example, in one clinical study, TF from lymphocytes of blank donors was employed to treat 356 patients with non-small cell lung cancer (NSCLC); the patients that received TF showed a remarkably improved survival rate, indicating that the administration of TF directly improved lung cancer survival.\(^16\) In another clinical trial, 5 patients at the advanced stage of breast cancer were treated with TF donated from healthy subjects for 21 to 310 days. While no detectable inflammation or hypersensitivity was detected, regression of the tumor that lasted for 6 months was observed in the recipients, demonstrating the anti-tumor functions of TF.\(^17\) CpG is an adjuvant that binds and activates toll-like receptor 9 in immune cells. These receptors survey the body for pathogens such as viral RNA and bacteria that are not common in humans, and they promote the activation of innate and adaptive immunity once they encounter such danger signals. CpG is being employed in multiple clinical trials for the treatment of various cancers.\(^18–^20\) While these trials are promising, striking discoveries in recent
years found that a combination of TLR agonists and tumor antigens can generate synergistic effects in treatment.\textsuperscript{21–23} Thus in this study we used CpG and TF together for the treatment of glioma.
In this work, TF was extracted from donor mice implanted with an experimental glioma tumor. CpG and TF were assembled onto gold nanoparticles (NP) \textit{via} layer-by-layer assembly – a technique that uses electrostatic interactions to assemble oppositely charged material onto substrates.\textsuperscript{24–26} The assembly of multilayers of materials onto nanoparticles has been widely employed for a variety of applications including radiation therapy, immune therapy and drug delivery,\textsuperscript{27–30} and showed unique advantages in delivering multiple functional cargos at the nanoscale. We therefore employed this strategy to deliver TF and CpG together for treating glioma.
**Materials and methods**
**Materials**
Gold(III) chloride tri-hydrate salt (99.9%) was from VWR. 1× phosphate buffered saline (PBS) was from Sigma. (4,6-Diamidino-2-phenylindole) (DAPI) was from Invitrogen. The positive isolation beads for isolating dendritic cells were from Miltenyi Biotec. Chitosan (MW = 20 000) was from Sigma. Fluorescently labeled antibodies for CD80 (PE), CD86 (PE-Cy7), CD40 (APC) were purchased from BioLegend. RPMI cell culture medium was obtained from VWR.
**Cells and animals**
All experiments that involve mice were approved by the Animal Research Committee Board of Shanghai Jiaotong University, and animal experiments were performed following the protocols approved by the Institutional Animal Care and Use Committee (IACUC). Mice were sacrificed by exposure to CO\textsubscript{2}, with the CO\textsubscript{2} concentration increased gradually; cervical dislocation was employed to ensure a successful sacrifice. C6 glioma cells were obtained from the School of Medicine, Shanghai Jiaotong University. The cells were cultured in Dulbecco’s modified Eagle’s medium plus 10% fetal calf serum. The method for generating C6 glioma tumors was as follows. Briefly, $1 \times 10^6$ glioma cells were implanted into mice (10–12 weeks) intraperitoneally. Tumors were established to sizes ranging from 1 cm\textsuperscript{2} to 1.5 cm\textsuperscript{2} in 10 to 15 days. The tumor was then dissected from the mice and minced into pieces smaller than 1 mm\textsuperscript{2}. The minced tumor tissues were then processed through a cell strainer (40 μm pore size) by mechanical force to obtain a cell suspension. After washing with PBS twice, the cells were collected by centrifuging at 500 rpm for 5 min.
**Glioma antigen preparation**
The freshly dissected tumor tissues were processed into cell suspensions \textit{via} mechanical force. After washing with PBS twice, the cells were counted with a cell counter. The cells were frozen to −80 °C and thawed to room temperature. This process was repeated at least four to five times to ensure complete lysis of the cells. The cellular lysate was centrifuged at 1500g for 5 min to remove large debris, and the supernatant was passed through a filter (0.22 μm). The collected proteins were then quantified with a Nanodrop 2000 (Thermo Scientific) and stored at −80 °C before use.
**TF generation**
Mice ($N = 12$) were treated with the glioma antigen (75 μg) plus CpG (25 μg) \textit{via} intradermal injection on day 0, followed by a boost on day 14. The mice were sacrificed on day 21, and whole blood was collected and treated with EDTA (2 mg mL\textsuperscript{−1} in blood). The blood was centrifuged at 500g for 25 min to collect the buffy coat. After washing the leukocytes with PBS twice, the cells were frozen to −80 °C and thawed to room temperature. This process was repeated for 10 cycles to ensure complete lysis of the cells. The lysate was collected and dialyzed in a dialysis membrane (MW cut-off = 12 000). The TF that passed through the membrane was freeze-dried and stored at −80 °C before use. This process also ensures that no pathogens are carried over into the TF, since all pathogens have a diameter larger than the dialysis membrane pore.
**Nanoparticle synthesis and characterization**
The nanoparticles were synthesized according to the literature.\textsuperscript{31} Briefly, 50 mL of chitosan was dissolved in acetic acid (1%) at a concentration of 0.5% w/v (pH = 5.1). The chitosan solution was heated to 100 °C and stirred at 650 rpm. Aqueous chloroauric acid (HAuCl\textsubscript{4}, 1 mM, 75 μL) was added to the chitosan solution. After 30 min, the solution turned a wine-red color. The solution was then moved to a 4 °C fridge to stop the reaction, and the nanoparticles in solution were stored at 4 °C before use. Surface modification of the nanoparticles was performed through conventional layer-by-layer assembly following the literature.\textsuperscript{24–26} Briefly, the nanoparticles were collected from the solution by centrifugation (15 000 rpm, 25 min). The nanoparticles (1.5 mg in 100 μL water, pH = 7.1) were first deposited with a TF layer (1.5 mL, 200 μg mL\textsuperscript{−1}, pH = 5.1) for 10 min. The particles were washed with water (pH = 5.1) to remove free TF. Following the deposition of TF, the nanoparticles were coated with a chitosan layer (1.5 mL, 0.1% in water, pH = 5.3) to generate a positively charged surface on the particles. After washing the particles with water to remove free chitosan, CpG (1.5 mL, 200 μg mL\textsuperscript{−1}, pH = 5.7) was coated onto the particles as the third layer. This deposition process was repeated until the desired number of layers was coated onto the particles. Each deposition took 10 min, followed by a washing step with water. The unmodified nanoparticles were imaged using a transmission electron microscope (TEM, JEOL JEM-3100F). A Zetasizer Nano ZS (Malvern) was employed to measure the size and zeta potential of the particles. The measurements were performed in water (pH = 7.1) and repeated 3 times. Loading of TF and CpG was determined by an indirect method. Briefly, a defined amount of CpG and TF (i.e. 200 μg of each) was added to the particles.
After the assembly, the supernatant was collected by centrifugation (15 000 rpm, 25 min). The amounts of CpG and TF in the supernatant were characterized by UV-Vis spectroscopy; the absorbance at 262 nm and 278 nm was used to quantify CpG and TF, respectively. The amounts of CpG and TF loaded onto the particles were calculated by subtracting the amount of each remaining in the supernatant from the amount originally added.
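The indirect loading calculation above can be sketched in a few lines. This is a minimal illustration, not part of the reported protocol: the conversion factor from absorbance to mass and the example numbers below are hypothetical placeholders standing in for a real standard curve at 262 nm (CpG) or 278 nm (TF).

```python
# Indirect cargo-loading calculation: mass loaded on the particles equals the
# mass added minus the mass left in the supernatant after assembly.
# The absorbance-to-mass conversion factor is a hypothetical placeholder; in
# practice it comes from a standard curve measured on the same spectrometer.

def loaded_mass(mass_added_ug: float,
                supernatant_absorbance: float,
                ug_per_absorbance_unit_per_ml: float,
                supernatant_volume_ml: float) -> float:
    """Mass of cargo (μg) retained on the particles."""
    mass_in_supernatant = (supernatant_absorbance
                           * ug_per_absorbance_unit_per_ml
                           * supernatant_volume_ml)
    return mass_added_ug - mass_in_supernatant

# Example with made-up numbers: 200 μg CpG added, supernatant A262 = 0.30,
# a standard curve of 33 μg per absorbance unit per mL, 7.5 mL supernatant.
cpg_loaded = loaded_mass(200.0, 0.30, 33.0, 7.5)
print(round(cpg_loaded, 1))  # prints 125.8
```

The same subtraction applies to TF, using the 278 nm absorbance instead.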
**In vitro test**
For the viability test, splenic dendritic cells were collected by sacrificing the mice and harvesting the spleens. The spleens were minced into pieces smaller than 1 mm$^2$, treated with dissociation medium (Miltenyi Biotec), and the DCs were collected by positive magnetic selection. DCs were then treated with different samples (\textit{i.e.} TF (25 μg mL$^{-1}$), CpG (5 μg mL$^{-1}$), NP (50 μg mL$^{-1}$) or NP modified with TF and CpG (50 μg mL$^{-1}$)) for 24 hours, followed by staining with DAPI and assessment with flow cytometry to test their viability. To test the impact of the samples on macrophage viability, the same procedure was followed. Activation of DCs was studied by staining the cells with fluorescently labeled antibodies (Invivogen) and assessed \textit{via} flow cytometry. Production of cytokines from macrophages was measured using ELISA kits following the manufacturer’s instructions. To test the level of cytokines in serum, C57/BL6 mice (female, 4–8 weeks old) were immunized with different samples (\textit{i.e.} TF, CpG, NP or NP modified with TF and CpG) on day 0, and peripheral blood was collected from the mice on day 3. The blood was centrifuged at 18 000g for 15 min to collect the serum. The serum was stored for ELISA testing following the manufacturer’s instructions.
**Tumor study**
C57/BL6 mice (female, 4–8 weeks old) were immunized with different samples (\textit{i.e.} TF, CpG, NP, TF + CpG, or NP modified with TF and CpG) on day 0, followed by a boost on day 15. The mice were implanted with $3 \times 10^5$ glioma tumor cells on the flank on day 16. Tumor size was measured every day and calculated as $W \times L$. Mice were sacrificed when the tumor size reached 1.5 cm$^3$.
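The daily size bookkeeping and humane-endpoint check described above can be sketched as follows. This is an illustrative sketch only: size is computed as $W \times L$ exactly as stated in the text, and the sacrifice threshold of 1.5 is taken directly from the protocol (units follow the text).

```python
# Minimal sketch of the daily tumor-size bookkeeping described in the protocol.
# Size is computed as W × L (as in the text); a mouse is flagged for sacrifice
# once the size reaches the stated endpoint.

ENDPOINT = 1.5  # sacrifice threshold from the protocol

def tumor_size(width: float, length: float) -> float:
    """Tumor size as the W × L product used in the text."""
    return width * length

def reached_endpoint(width: float, length: float) -> bool:
    """True if the mouse should be sacrificed at this measurement."""
    return tumor_size(width, length) >= ENDPOINT

print(tumor_size(1.0, 1.2))        # prints 1.2 (below endpoint)
print(reached_endpoint(1.0, 1.6))  # prints True
```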
**Results**
The first task was to synthesize positively charged gold nanoparticles to act as carriers, using the method reported in the literature.\textsuperscript{31} The particles had a spherical shape, as imaged under the transmission electron microscope, although other shapes were also observed (Fig. 1A). The particles were then modified with TF and CpG through layer-by-layer deposition of TF, chitosan, and CpG – the positively charged chitosan served as a bridge that connected the negatively charged TF and CpG onto the nanoparticles. The sizes of the surface-modified nanoparticles were studied \textit{via} dynamic light scattering; the particle sizes increased after the deposition of each layer of material. For particles with 0 to 2 trilayers, the average sizes were 18, 91, and 189 nm, respectively (Fig. 1B). The particle sizes from TEM were smaller than those measured by dynamic light scattering, probably because dynamic light scattering tends to collect information from larger particles or aggregates. Zeta potentials of the nanoparticles were also measured. The surface charge of the particles reversed after the deposition of each layer of material, indicating successful deposition. For particles with 0 to 7 single layers of material, the surface charges were +27.2, −17, +23.7, −15.3, +21.9, −11.9, +20.2, and −12.5 mV, respectively. Briefly, the as-synthesized nanoparticles were positively charged since they were reduced by positively charged chitosan. Once deposited with negatively charged TF, the surface charge reversed. Another positively charged chitosan layer was then deposited onto the particles to regenerate a positively charged surface, so that the negatively charged material, CpG, could be deposited onto the particles. Such deposition was repeated until the desired number of layers was assembled (Fig. 1C). The loading levels of the cargo materials (\textit{i.e.} TF and CpG) were studied by the indirect measurement as well.
The particles with 1 and 2 trilayers (NP/(TF/Chi/CpG)$_n$, $n = 1$ or 2) carried 78.6 μg TF and 126 μg CpG (for 1 trilayer), and 181.2 μg TF and 251.8 μg CpG (for 2 trilayers), respectively (Fig. 1D).
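The deposition-success criterion used above (the zeta potential flips sign after every layer) can be expressed as a one-line check. The eight values below are the measurements quoted in the text for 0 to 7 single layers, in mV; the helper function itself is illustrative.

```python
# Layer-by-layer deposition is judged successful when the zeta potential
# reverses sign after each deposited layer. The list holds the eight
# measurements reported in the text (0 to 7 single layers, in mV).

zeta_mV = [27.2, -17.0, 23.7, -15.3, 21.9, -11.9, 20.2, -12.5]

def charges_alternate(values):
    """True if every pair of consecutive zeta potentials has opposite signs."""
    return all(a * b < 0 for a, b in zip(values, values[1:]))

print(charges_alternate(zeta_mV))  # prints True: every layer reversed the charge
```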
The modified nanoparticles (\textit{i.e.} NP/(TF/Chi/CpG)) as well as the other samples (TF or CpG) were employed in an \textit{in vivo} anti-tumor study. Mice with no treatment were used as a negative control (CTRL). The treated mice were injected with the aforementioned samples on day 0 and boosted on day 14. They were then implanted with glioma tumor cells on day 15 and observed daily. NP/(TF/Chi/CpG) had a more dramatic anti-tumor effect compared to the other samples (Fig. 2A). The use of CpG or TF alone also generated an anti-tumor effect, although the tumor growth rate was faster than that in mice treated with the modified nanoparticles (Fig. 2A). These anti-tumor effects were also measured through tumor weight. On day 18, the mice were sacrificed and the tumor weights were measured. The mice treated with NP/(TF/Chi/CpG) had the lowest tumor weight among the groups, indicating that NP/(TF/Chi/CpG) was the most potent in inhibiting tumor progression (Fig. 2B). Consistent with the tumor volume study, TF or CpG alone also reduced the tumor weight compared to CTRL (Fig. 2B). In both measurements, we noticed that a mixture of CpG and TF (i.e. TF + CpG) could also inhibit tumor progression compared to CTRL or to mice treated with either CpG or TF alone. However, NP/(TF/Chi/CpG) had a stronger potency in tumor inhibition compared to the mixture (Fig. 2A and B).
The surface-modified nanoparticles (i.e. NP/TF/Chi/CpG) were then used to interact with immune cells (i.e. DCs, macrophages, and lymphocytes). The modified nanoparticles were designed for intradermal (i.d.) injection, where dendritic cells and macrophages would be two of the major cell types to take up the nanoparticles. Thus, these two cell types were first incubated with NP/TF/Chi/CpG to test uptake. Fluorescently labeled chitosan (FITC-Chi), together with TF and CpG, was used to compose the nanoparticles (i.e. NP/(TF/FITC-Chi/CpG)) so as to facilitate flow cytometry analysis. Flow cytometry showed that the uptake of nanoparticles by DCs and macrophages was dose dependent (Fig. 3A). When a high dose of nanoparticles was employed (i.e. 1× to 4× dilution), a high percentage (>70%) of cells took up the fluorescently labeled nanoparticles; in contrast, when the dose of nanoparticles was diluted 8 to 16 times (i.e. 8× to 16×), a lower percentage (~60%) of DCs and macrophages took up the nanoparticles (Fig. 3A). Under flow cytometry, the cells treated with nanoparticles had a stronger fluorescence signal than cells with no treatment (CTRL), again confirming the uptake of nanoparticles by DCs and macrophages (Fig. 3B). A viability assay was used to investigate the impact of the nanoparticles on the viability of DCs and macrophages (Fig. 3C). Compared to cells with no treatment, the surface-modified nanoparticles did not affect the viability of dendritic cells or macrophages (Fig. 3C). The other samples (e.g. CpG, NP, TF, or CpG + TF) were also tested for their impact on cell viability; none of these components affected cell viability (Fig. 3C).
The surface-modified NPs (i.e. NP/TF/Chi/CpG) were used to test their impact on primary DCs – the key antigen presenting cells that bridge adaptive and innate immunity. Briefly, primary DCs isolated from mouse spleens were incubated with NP/TF/Chi/CpG and assessed through flow cytometry to test the activation of major markers (i.e. CD86+, CD80+, CD40+, and CD11c+). Other samples (i.e. no treatment, NPs with no modification, TF, CpG, and TF + CpG in soluble form) were used as controls. Flow cytometry showed that NP/TF/Chi/CpG promoted CD86+ activation on DC surfaces (Fig. 4A). Similarly, other important DC activation markers (i.e. CD80+ and CD40+) were also activated by NP/TF/Chi/CpG (Fig. 4B and C). TF in soluble form also slightly activated these surface markers, but the unmodified NP did not have this function compared to CTRL (Fig. 4A–C). Furthermore, compared to TF and CpG in a soluble mixture (TF + CpG), NP/TF/Chi/CpG induced a more potent activation of these markers (Fig. 4A–C). While CD11c+ markers were at a relatively high level in all DCs, NP/TF/Chi/CpG increased the activation of this marker on DCs compared to TF + CpG in soluble form and CTRL (Fig. 4D). CpG, a toll-like receptor agonist that naturally activates all these markers, served as a control in this study and activated them at high efficiency (Fig. 4A–D). These
**Fig. 2** Anti-tumor study using NP/TF/Chi/CpG and other control samples. (A) Daily measurement of tumor size in mice treated with different samples. (B) Tumor weight on day 17 after tumor cell implantation. The mice were treated with NP/TF/Chi/CpG or other control samples (soluble TF or CpG, or a mixture of soluble TF and CpG). Mice with no treatment were used as the negative control.
**Fig. 3** *In vitro* interactions of NP/TF/Chi/CpG with immune cells and viability tests. (A) Dose-dependent uptake of NP/TF/Chi/CpG by macrophages and DCs. Chitosan (Chi) was conjugated with a fluorescent dye to facilitate flow cytometry detection. The uptake of nanoparticles by the cells was dose dependent. (B) Representative flow cytometry assessment showing the fluorescence shift of macrophages and DCs after nanoparticle uptake. For cells with no nanoparticle treatment, almost no fluorescence signal was detected; after feeding the cells with nanoparticles, a strong fluorescence signal was detected, indicating that the nanoparticles were taken up by the cells. (C) Viability of macrophages, DCs and lymphocytes after interacting with NP/TF/Chi/CpG. The nanoparticles did not affect the viability of the cells.
data showed that the co-delivery of TF and CpG can promote the activation of dendritic cells *in vitro*.
NP/TF/Chi/CpG was also employed to test its impact on macrophage activation. Briefly, these particles – as well as other controls (*i.e.* CpG, NP, TF or CpG + TF) – were co-cultured with the macrophage cell line J774.A1. ELISA was employed to test the effect of these samples on the secretion of three major cytokines: tumor necrosis factor-α (TNF-α), IL-6, and G-CSF, which are among the major cytokines involved in the macrophage-associated anti-cancer process. NP/TF/Chi/CpG enhanced TNF-α secretion more than TF + CpG in soluble form, showing the synergistic effect of delivering the two immune cargos in particulate form (Fig. 5A). The use of TF or CpG alone also promoted TNF-α production compared to the CTRL group, indicating that both cargos can stimulate the activation of macrophages. Compared to the CpG group, the capacity of TF to stimulate TNF-α production was relatively low (Fig. 5B). Similar trends were also observed in the secretion of the other two cytokines (G-CSF and IL-6). The use of CpG alone promoted G-CSF and IL-6 secretion, since CpG is a TLR9 agonist and stimulates macrophages naturally. TF alone also slightly stimulated the production of both cytokines, although the production levels were relatively low compared to the CpG group (Fig. 5C and D). The use of CpG + TF in soluble form showed a similar level of G-CSF and IL-6 production compared to the use of CpG alone, indicating that no synergistic effect on these cytokines was generated by the combined use of CpG and TF in soluble form.
The impact of NP/TF/Chi/CpG on cytokine production in serum was also assessed. To perform this study, mice were immunized with NP/TF/Chi/CpG as well as with other controls (*i.e.* no treatment, TF, CpG, TF + CpG, and NP) on day 0. Peripheral blood was drawn from the mice on day 3 and centrifuged at high speed (20 000 rpm, 5 min) to collect serum for ELISA. The IL-1β level was relatively high in mice treated with NP/
**Fig. 4** Activation of DCs by NP/TF/Chi/CpG and other control samples. Expression of (A) CD86+, (B) CD80+, (C) CD40+ and (D) CD11c+ on DCs after treatment with different samples. Control samples include nanoparticles with no modification (*i.e.* NP), TF or CpG in soluble form, and a mixture of TF and CpG (*i.e.* TF + CpG). Cells with no treatment were employed as the control.
**Fig. 5** Secretion of cytokines from macrophages treated with NP/TF/Chi/CpG and other control samples. Expression of (A) G-CSF, (B) TNF and (C) IL-6 by macrophages treated with NP/TF/Chi/CpG. Control samples include TF or CpG in soluble form, nanoparticles with no modification (*i.e.* NP), and TF and CpG in soluble mixture form (*i.e.* TF + CpG). Cells with no treatment were employed as the negative control. Macrophages were treated with the samples for 24 hours, and the supernatant was collected for ELISA assessment.
**Fig. 6** Expression of cytokines in the serum of mice treated with NP/TF/Chi/CpG and other control samples. Expression of (A) IL-1β, (B) IL-6, (C) IFN-γ and (D) TNF in mouse serum. The mice were treated with NP/TF/Chi/CpG; control samples include TF or CpG in soluble form, nanoparticles with no modification (*i.e.* NP), and TF and CpG in soluble mixture form (*i.e.* TF + CpG). Mice with no treatment were employed as the negative control. Mice were treated with the different samples on day 0, and peripheral blood was collected on day 3. Serum was separated from the blood by centrifugation and used for ELISA assessment.
TF/Chi/CpG, CpG, or TF + CpG – all of these groups involved the use of CpG. This suggests that CpG plays a significant role in the production of IL-1β. In contrast, IL-1β level in mice treated with TF or NP was relatively low, meaning that neither of these components regulates IL-1β expression (Fig. 6A). Similar trends were observed in the production of another inflammatory cytokine, IL-8. All the groups that involved the use of CpG (i.e. CpG, CpG + TF, and NP/TF/Chi/CpG) showed an enhanced production of IL-8, but the use of TF or NP alone did not show this enhancement (Fig. 6B). As for the effector cytokines (i.e. TNF and IFN-γ), NP/TF/Chi/CpG promoted both cytokines’ production in mice; a striking discovery was that TF alone increased TNF and IFN-γ expression compared to CTRL (Fig. 6C and D). In particular, compared to TF + CpG in soluble form, NP/TF/Chi/CpG promoted a higher level of IFN-γ, although this trend was not obvious in the production of TNF (Fig. 6C). It is worth mentioning that the use of NP alone did not promote any of the above-mentioned cytokines (Fig. 6A–D), indicating that the nanoparticles are immune inert.
**Discussion**
Immune suppression is one of the major issues associated with immune therapy of cancers, including glioma. Cancer progression is usually associated with impaired immune function; specifically, suppressed effector T cell function but enhanced regulatory T cell function. The search for effective immune cargos to combat these suppressions has therefore become one of the key goals in the therapeutic treatment of cancers. Broadly, toll-like receptor agonists – a group of “danger” signals (i.e. pathogen-associated molecules) that exist widely in viruses and bacteria but not in humans – have been employed in different ways to treat cancers. For example, CpG has been conjugated to nanoparticles to activate macrophages for the immune therapy of cancer; the use of CpG enhanced the secretion of cytokines associated with the anti-cancer process. In another study, polyIC – a TLR3 agonist – was assembled with a model peptide onto nanoparticles to expand antigen-specific T cells *in vivo*. Similarly, dramatic examples also include the combined use of TLR adjuvants in clinical trials of cancer treatment. As for glioma – one of the most challenging cancers in the world – immune therapies have also drawn extensive attention in recent years. For example, the delivery of genes encoding cytokines that can modulate the immune-suppressed tumor environment has shown promising results by promoting DC activation and effector T cell proliferation. In a phase I clinical trial, CpG was used to treat patients with recurrent glioblastoma; preliminary evidence from this study found responses in two of six patients, with a median survival period of 7.2 months.
Transfer factor is a low-molecular-weight peptide that can be obtained from humans and animals that have developed immunity against certain types of disease. Studies of transfer factors have drawn considerable attention over the past century. In particular, multiple clinical trials were carried out for various diseases, ranging from infectious diseases to cancers, with both failures and successes reported. For example, in one trial, 6 in 9 humans had a reduced hypersensitive immune response to *M. leprae* antigens after being treated with transfer factor. In another study, two antigens (i.e. tuberculin and keyhole limpet hemocyanin) were employed to test the immune specificity and the potency of the immune response transmitted from donors to recipients by transfer factor; this study demonstrated in a human trial that the recipients of transfer factor could generate the very same specific immune response as the donors. Cytosine–phosphate–guanine (CpG) is a potent toll-like receptor agonist that activates both innate and adaptive immunity. CpG can promote the synthesis of inflammatory cytokines as well as other co-stimulatory molecules; it also promotes effector CD8+ T cell production. CpG shows great potential in combating cancer and has already been employed in several clinical trials. We therefore combined these two immune cargos in our study to treat glioma. TF and CpG were assembled onto positively charged nanoparticles via layer-by-layer assembly; this technique uses electrostatic interactions to integrate oppositely charged materials into thin films. Through this technique, we noticed a non-linear increase in particle size after the deposition of each layer of material (Fig. 1A), probably because a certain degree of aggregation was induced during preparation. Nevertheless, the particles were prepared with well-defined properties.
The particles showed a reversal of surface charge after the deposition of each layer of material – a result expected in layer-by-layer assembly and proof of the successful deposition of each layer (Fig. 1B). The particles modified with TF and CpG (i.e. NP/TF/Chi/CpG), as well as the other control samples, were then employed to treat glioma tumors in mice. NP/TF/Chi/CpG had a more potent anti-tumor effect compared to using each component alone or a mixture of the two cargos (Fig. 2A). A similar result was confirmed by the tumor weight measurements (Fig. 2B).
Flow cytometry showed that the particles were taken up by two major immune cell types, macrophages and DCs, indicating that the aggregation did not affect the application of the particles (Fig. 3A and B). Viability tests also confirmed the safety of NP/TF/Chi/CpG – neither TF nor CpG impaired cell viability compared to cells with no treatment (Fig. 3C). In this study, TF and CpG were assembled together for treating glioma because previous studies found that the co-delivery of different immune cargos (i.e. different toll-like receptor agonists, or antigen plus adjuvant) could generate potent synergistic effects much greater than using each component alone. This was consistent with what was demonstrated in our study: the nanoparticles (i.e. NP/TF/Chi/CpG) induced a higher expression of DC surface markers compared to TF + CpG in soluble form (Fig. 4A–C). TF alone slightly activated DC surface markers (i.e. CD80+ and CD86+), although the activation level was lower than with CpG (Fig. 4A–C). These data indicated that TF is involved in adaptive immunity through interactions with DCs – an important antigen presenting cell that bridges antibody and cellular immunity. On the other hand, TF slightly promoted cytokine production from macrophages – an immune cell that plays an important role in both antibody and cellular immunity (Fig. 5A–C). These data confirmed that TF is involved in adaptive immunity and potentially joins with antibody immunity through macrophages. However, since this study did not investigate B cell functions, the role of TF in antibody immunity cannot yet be concluded.
To investigate the *in vivo* functionality of NP/TF/Chi/CpG, we treated mice with the different samples on day 0 and collected peripheral blood on day 3 to test the secretion of inflammatory and effector cytokines *via* ELISA (Fig. 6). NP/TF/Chi/CpG induced a higher level of IL-1β production than TF + CpG in soluble form (Fig. 6A and B). Similarly, NP/TF/Chi/CpG promoted the production of effector cytokines (*i.e.* IFN-γ and TNF-α) to a higher level compared to TF + CpG in soluble form (Fig. 6A–D). This is probably either because CpG in particulate form has an improved adjuvant effect, or because of the synergistic effect of delivering CpG and TF together. We also noticed that TF alone yielded a very low level of IL-1β and IL-8, indicating that TF was not involved in promoting the production of inflammatory cytokines. Instead, TF promoted the production of TNF (Fig. 6C) but not IFN-γ (Fig. 6D). Taken together, these data suggest that TF is involved in the effector but not the inflammatory immune response. While the *in vivo* tumor study illustrated a synergistic effect of NP/TF/Chi/CpG against glioma, the DC activation study and the *in vivo* and *in vitro* assays partially explain the mechanism underlying this synergistic performance. For a better understanding of using TF and CpG for cancer treatment, further study should investigate the role of B cells – the major immune cells for antibody immunity – in these processes. In addition, studies of the activation pathways of immune cells, as well as of the infiltration of immune cells into tumor tissue, will also help elucidate the mechanism at work in this study.
**Conclusion**
This study investigated the use of TF and CpG for the treatment of glioma. The two immune cargos were loaded onto nanoparticles to ensure their co-delivery. An *in vivo* study showed that particles carrying TF and CpG (i.e. NP/TF/Chi/CpG) had a more potent anti-tumor effect than TF or CpG alone or the TF + CpG soluble mixture. An *in vitro* activation study showed that NP/TF/Chi/CpG activated DC markers and promoted cytokine production from macrophages. *In vivo* serum cytokine analysis showed that NP/TF/Chi/CpG promoted the production of both inflammatory and effector cytokines; among these, our study indicated that TF contributed to the production of effector cytokines but not inflammatory cytokines. These *in vitro* assays partially explain the immune mechanism involved in the anti-tumor immunity of NP/TF/Chi/CpG. Further studies, such as of immune cell infiltration into the tumor and of B cell immunity, will be performed to better understand the mechanism of using TF and CpG together.
**Conflict of interest**
The authors declare no conflict of interest.
**Ethical standard**
All studies that involved animals were performed under the ethical standards of the Institutional and/or National Research Committee.
**Acknowledgements**
This project was supported by the National Natural Science Foundation of China (No. 81000498).
**References**
1 A. Omuro and L. M. DeAngelis, *JAMA, J. Am. Med. Assoc.*, 2013, **310**, 1842–1850.
2 T. F. Cloughesy, W. K. Cavenee and P. S. Mischel, *Annu. Rev. Pathol.: Mech. Dis.*, 2014, **9**, 1–25.
3 D. R. Johnson, H. E. Leeper and J. H. Uhm, *Cancer*, 2013, **119**, 3489–3495.
4 L. Yang, G. Guo, X. Y. Niu and J. Liu, *BioMed Res. Int.*, 2015, **2015**, 717530.
5 N. Ung and I. Yang, *J. Neuro-Oncol.*, 2015, **123**, 473–481.
6 T. D. Azad, S. M. Razavi, B. Jin, K. Lee and G. Li, *J. Neuro-Oncol.*, 2015, **123**, 347–358.
7 L. W. Xu, K. K. Chow, M. Lim and G. Li, *J. Immunol. Res.*, 2014, **2014**, 796856.
8 H. S. Lawrence, *J. Clin. Invest.*, 1955, **34**, 219–230.
9 M. S. Ascher, A. A. Gottlieb and C. H. Kirkpatrick, *Transfer factor: basic properties and clinical applications; proceedings of the Second International Workshop on Basic Properties and Clinical Applications of Transfer Factor, held at the United States Army Medical Research Institute of Infectious Diseases, October 5–8, 1975*, Academic Press, New York, 1976.
10 A. Khan, C. H. Kirkpatrick and N. O. Hill, *Immune regulators in transfer factor: [proceedings]*, Academic Press, New York, 1979.
11 C. H. Kirkpatrick, D. R. Burger and H. S. Lawrence, *Immunobiology of transfer factor*, Academic Press, New York, 1983.
12 S. J. Rozzo and C. H. Kirkpatrick, *Mol. Immunol.*, 1992, **29**, 167–182.
13 W. E. Bullock, M. Brandris and J. P. Fields, *N. Engl. J. Med.*, 1972, **287**, 1053–1059.
14 S. Estrada-Parra, A. Nagaya, E. Serrano, O. Rodriguez, V. Santamaría, R. Ondarza, R. Chavez, B. Correa, A. Monges, R. Cabezas, C. Calva and I. Estrada-Garcia, *Int. J. Immunopharmacol.*, 1998, **20**, 521–535.
15 G. B. Wilson, J. F. Metcalf and H. H. Fudenberg, *Clin. Immunol. Immunopathol.*, 1982, **23**, 478–491.
16 V. Pilotti, M. Mastrorilli, G. Pizza, C. De Vinci, L. Busutti, A. Palareti, G. Gozzetti and A. Cavallari, *Biotherapy*, 1996, **9**, 117–121.
17 H. F. Oettgen, L. J. Old, J. H. Farrow, F. T. Valentine, H. S. Lawrence and L. Thomas, *Proc. Natl. Acad. Sci. U. S. A.*, 1974, **71**, 2319–2323.
18 B. Badie and J. M. Berlin, *Immunotherapy*, 2013, **5**, 1–3.
19 E. Vacchelli, L. Galluzzi, A. Eggermont, W. H. Fridman, J. Galon, C. Sautes-Fridman, E. Tartour, L. Zitvogel and G. Kroemer, *OncoImmunology*, 2012, **1**, 894–907.
20 S. Adams, *Immunotherapy*, 2009, **1**, 949–964.
21 S. P. Kasturi, I. Skountzou, R. A. Albrecht, D. Koutsounas, T. Hua, H. I. Nakaya, R. Ravindran, S. Stewart, M. Alam, M. Kwissa, F. Villinger, N. Murthy, J. Steel, J. Jacob, R. J. Hogan, A. Garcia-Sastre, R. Compans and B. Pulendran, *Nature*, 2011, **470**, 543–547.
22 D. I. Gabrilovich, S. Ostrand-Rosenberg and V. Bronte, *Nat. Rev. Immunol.*, 2012, **12**, 253–268.
23 T. Querec, S. Bennouna, S. Alkan, Y. Laouar, K. Gorden, R. Flavell, S. Akira, R. Ahmed and B. Pulendran, *J. Exp. Med.*, 2006, **203**, 413–424.
24 P. Ott, K. Trenkenschuh, J. Gensel, A. Fery and A. Laschewsky, *Langmuir*, 2010, **26**, 18182–18188.
25 G. Decher, *Science*, 1997, **277**, 1232–1237.
26 H. Ai, S. A. Jones and Y. M. Lvov, *Cell Biochem. Biophys.*, 2003, **39**, 23–43.
27 P. Zhang, Y. Qiao, C. Wang, L. Ma and M. Su, *Nanoscale*, 2014, **6**, 10095–10099.
28 Y. H. Roh, J. B. Lee, K. E. Shopsowitz, E. C. Dreaden, S. W. Morton, Z. Poon, J. Hong, I. Yamin, D. K. Bonner and P. T. Hammond, *ACS Nano*, 2014, **8**, 9767–9780.
29 A. Elbakry, A. Zaky, R. Liebl, R. Rachel, A. Goepferich and M. Breunig, *Nano Lett.*, 2009, **9**, 2059–2064.
30 P. Zhang, Y. C. Chiu, L. H. Tostanoski and C. M. Jewell, *ACS Nano*, 2015, **9**, 6465–6477.
31 H. Huang and X. Yang, *Biomacromolecules*, 2004, **5**, 2340–2346.
32 J. Holldack, *Drug Discovery Today*, 2014, **19**, 379–382.
33 S. Rakoff-Nahoum and R. Medzhitov, *Nat. Rev. Cancer*, 2009, **9**, 57–63.
34 E. Y. So and T. Ouchi, *Int. J. Biol. Sci.*, 2010, **6**, 675–681.
35 A. Y. Lin, J. P. Almeida, A. Bear, N. Liu, L. Luo, A. E. Foster and R. A. Drezek, *PLoS One*, 2013, **8**, e63550.
36 S. Behboudi, D. Chao, P. Klenerman and J. Austyn, *Immunology*, 2000, **99**, 361–366.
37 A. Carpentier, F. Laigle-Donadey, S. Zohar, L. Capelle, A. Behin, A. Tibi, N. Martin-Duverneuil, M. Sanson, L. Lacomblez, S. Taillibert, L. Puybasset, R. Van Effenterre, J. Y. Delattre and A. F. Carpentier, *Neuro-Oncology*, 2006, **8**, 60–66.
38 K. S. Zuckerman, J. A. Neidhart, S. P. Balcerzak and A. F. LoBuglio, *J. Clin. Invest.*, 1974, **54**, 997–1000.
39 A. Uotila, *Transfer factor and other immunological activities of human leucocyte dialysate and other dialysates of mammalian tissues*, University of Tampere, Tampere, 1979.
40 P. Gröhn, PhD thesis, University of Tampere, Finland, 1976.
41 E. Karhumäki, *Modulation of infection resistance of mice with dialysates containing transfer factor-like activity derived from leukocytes of man and other mammalia*, University of Tampere, Tampere, 1988.
42 B. Jahrsdorfer and G. J. Weiner, *Update Cancer Ther.*, 2008, **3**, 27–32. |
The Camp French Mining Co.
Colorado
John H. Marks
Office Copy
The "Stuff"
from which blooms and blossoms
the everlasting dollar.
—Cecil Rhodes.
Errors Discovered Too Late to Amend in the Text
Page 7, line 9, should read:
The great reputation of this new prospect among veteran miners of the
Page 9, line 3, should read:
vein and will join the Baltimore shown by the white line towards No. 3. The moun-
Page 11, lines 6 and 7, should read:
Baltimore ........................................ Patent Survey 16750 Patented
Gold Leaf ........................................ Patent Survey 16755 by the late owner
Page 13, line (e), should read:
(C) points to the dump in front of the adit, which is hid by brush and timber.
Page 22, heading, should read:
IMPROVED METHODS OF TODAY
ARE HIGHLY STIMULATING
Page 24, re-arrange last heading:
MEN OF SMALL
MEANS WILL
FIND THIS THE
BEST AID TO
FORTUNE.
Page 30, line 3, should read:
3—Jackson Mill; 4—Hudson Mill; 5—Newton Mills; 6—Virginia Canon;
THE
Camp French Mining Company
Incorporated Under the Laws of the State of Colorado
CAPITAL STOCK, $500,000
Divided Into 500,000 Shares
PAR VALUE, $1.00
Fully Paid and Non-Assessable
Two Hundred Ninety-One Thousand Shares in the Treasury
Depository: Colorado National Bank
Denver
INCORPORATORS AND OFFICERS
Abram I. Fistell .................................................. President
Jacob Marcovsky ........................................ First Vice-President
Louis Doppelt ........................................ Second Vice-President
John L. Roberts ........................................ Secretary-Treasurer
John L. Roberts ........................................ Superintendent
ADDRESS
The Camp French Mining Co.
Post Office Box 2185, Denver, Colorado
Principal Office, 91 South Pearl Street, Denver, Colo.
Copyright by John L. Roberts, Denver
STANLEY MINE
In fact there is a cluster of veins and mines here, and for a block farther on, as shown in the Lincoln picture on the next page, the railway and river are between us and the dump.
LOOKING SOUTHWEST
This picture represents one of our great Millionaire Mines, which has been operating, more or less, since 1864. J. W. Mackay, New York, bought it for $200,000 cash in 1880, one year after the Freeland, and its ore, and that of the Freeland Mine, was smelted into matte on its ground in the eighties. The stack is there still. The picture is placed here as objective evidence of the rich territory that lies southwest of it, and therefore of its bearing on our Baltimore group in particular, which is about 16,000 feet S. 70° W. from this point. We pass it in the Clear Creek Valley, one mile west of Idaho Springs, before we make the climb of 1,300 feet to Freeland.
The then owner, Colonel Brownlee, an authority on Mining, claimed it to be the extension of the trunk, the true fissure vein, Lamartine; consequently, he entered and patented every foot of ground he could, up towards the Lamartine Mine, from 2 to 4 (and more) claims abreast all the way, for 3 miles, crossing through some of our unpatented locations which were honored by the noble Colonel and passing on alongside our Baltimore within 80 feet. But he failed to connect with the Crown Vein of the Lamartine; our Gold Leaf and other patents prevented him.
It is more than probable that the Freeland’s curvilinear vein, swings around into and joins the Stanley-Lamartine formation in the Baltimore hill, which can not help resulting in enormous enrichment! The nature of the ores of these mines is identical—galena, carrying considerable gold as well as rich silver-lead; at times, zinc and copper.
LINCOLN GROUP OF MINES
Here we have 4 mines, only 2 blocks west of Stanley Mine, which we pass just before we turn up to the Freeland or Trail Creek Gulch. 1—Elliott and Barber Mine. 2—Josephine Mine. 3—South Lincoln vein. 4—Lincoln Mine and Mill. The Colorado and Southern Railway passes under the trestle, and a block beyond is the old shipping point.
WHERE IS THE BEST FIELD FOR INVESTMENT?
COLORADO. The claims of Colorado are very numerous. It is rich in gold and silver, and most of the known metals of the world, both precious and base, are to be found here. It has a record of producing $80,930,571.00 in one year: gold, silver, copper, lead, zinc and iron, $51,622,383.00; coal, $8,308,188.00; miscellaneous, $21,000,000.00. Gold alone to date, over $500,000,000.00. We say, on the authority of experienced mining men, our ex-Governors McDonald and Shoup, that Colorado has produced over two billion dollars ($2,000,000,000.00) in precious and base metals.
It is not necessary for us to go into its agricultural wealth, nor its oil wells, nor its mountains of oil shale! Denver, the Queen City of the Plains, second in magnificence to only one city in the whole of the United States, is the greatest testimony to the wealth of Colorado's mountains, for their metalliferous opulence placed it on the map.
And it is only beginning. Some new metalliferous ore is discovered every once in a while. Among the latest are pitchblende and carnotite (radium ores), and tungsten. Whenever a call for a new source of metal arises, Colorado generally can supply it, if there is any pay in it.
Some forty years ago my friend Mr. Stap found what he thought was a most wonderful vein of ore near Boulder. He located and entered several claims on it. Then, having made sure that his claims were safe, he took a sample of the strange ore to an assayer, who laughed at his stupidity in bringing such stuff to him! It was worthless, he said! Mr. Stap gave up digging there and abandoned his claims. The great war came on, tungsten was in urgent demand, and abundance was found in these old abandoned claims! Others reaped fortunes where Mr. Stap missed it, for sheer lack of technical knowledge. Undoubtedly there are more surprises in store yet, for much of Colorado has not been explored; forty per cent (41,571 square miles) of our state is, as yet, unsurveyed. This is a greater area than the State of Indiana, or even Ohio, which is only 41,060 square miles!
Surely, Colorado is a most favored state. Not the least among its virtues are its rarefied air, impregnated with ozone in its glorious sunshine; where tuberculosis is unknown among its natives. We can not can its sunshine, nor bottle its balmy, sweet air, to send you; but, we herewith offer you a rare chance to pocket some of its gold and silver in exchange for material help to drag out of its vaults the hidden treasure of its hills.
John Hays Hammond, the world-famed expert, some years ago, in reporting to his syndicate in London, said of Colorado: "I regard the state as one of the best gold mining states I have ever seen."
CLEAR CREEK COUNTY THE BEST.
Having settled the claims of Colorado for first consideration, what section of it? We say, unhesitatingly, Clear Creek. Clear Creek County is the "Old Standby," considered by eminent practical men the most reliable. This was the opinion of the late John W. Mackay, of New York, when he paid $250,000 spot cash for the Freeland Mine in 1879 (and $200,000 for the Stanley Mine in 1880), where he took out $3,500,000 in a few years. Its hills have been worked for 66 years without making much impression on them—they are hardly scratched. The true fissure formation of its veins is a constant quantity, seldom at "fault," carrying their treasures to depths as yet unknown—there is no giving out to these true fissure lodes. Clear Creek is the oldest mining field of the state. Here at the door of Idaho Springs gold was first discovered, and later, farther to the west, silver-lead. And it appears now that real mining is only beginning here, although it has yielded untold fortunes.
Here are some of the most noted mines: Dives-Pelican, Stevens, Terrible, Colorado Central, Joe Reynolds, Lamartine, Red Elephant, Hukill (Stanley now), Little Mattie, Two Sisters, Freeland. These eleven mines are credited with a production of over $81,000,000.
Several in the above list are our close neighbors, two especially, our principal group being situated in a direct line between Freeland and the Lamartine, two of the most celebrated mines of the region.
Clear Creek is one of the smallest counties in area, but is an empire of mineral resources. The State Bureau of Mines Report, says:
Clear Creek, although one of the smallest in area, is one of the most important counties in the state. It was organized in 1861 and bears the distinction of having the first "pay placer beds" in the state.
From 1859, the year in which this discovery was made, at the mouth of Chicago Creek at Idaho Springs, up to the present day, mining has been continuously prosecuted and each year productive of important development.
Many mines, especially the Lamartine, have been noted for their fine steel (crystalline) galena, which is in great demand for radios. Even if the price of silver, lead, and zinc had not advanced to the present levels, the demand for this rare galena would have given a tremendous boost to our county. Old Mexico had been the chief source of supply; but now, on account of its superior quality, Clear Creek is in the lead and the Clearco Crystal Company, of Idaho Springs, is shipping to Old Mexico herself, and to all parts of the globe. It is able to turn out 10,000 crystals a day and more.
Silver Link road passes east of Baby Eddy dump. Like its neighbors, Baby Eddy has a promising outlook.
LOOKING UP WEST
This is the South Creek Gulch and road on the east side of the Freeland Hill. The white cross marks the spot where the Silver Link is. The black crosses are Freeland dumps Nos. 3 and 4.
CAMP FRENCH MINING COMPANY
THE CAMP FRENCH MINING PROPERTIES.
Silver Link—Baltimore.
are located in Trail Creek mining district, the Silver Link being about five miles S. W. of Idaho Springs and three miles from Colorado & Southern Railroad at Fall River. The Baltimore is one-half mile or more farther up the Alpine Range, south. Distance from Denver about 43 miles west; can be reached by auto in three or four hours, over a scenic road whose entrancing scenery can not be described.
These properties are near three of the greatest mines of lower Clear Creek County, namely, the famous Stanley, Freeland and the more famous Lamartine, and besides, are flanked by others that have not reached their zenith, and may become as famous. Some of these minor mines are the:
Oneida, with a production of ...........................................$700,000
New Era .................................................................150,000
Lone Tree ...............................................................125,000
Gum Tree ..............................................................300,000
Anchor .................................................................50,000
Freeland Extension ..................................................100,000
Toledo .................................................................160,000
Crazy Girl, on our east flank, is beginning to produce rich ore.
It will be seen that the company has two fields to work in, having two distinct mining properties, each of which is a rich entity in itself, for both have the greatest promise of becoming famous producers; it is only a question of adequate development, which means adequate funds.
Our eggs are in two baskets. The advantage of having two incipient mines of good promise should appeal to the investor, for one, if not both, is sure to prove a bonanza.
Carnegie said, "Put your eggs in one basket and keep your eye on it." "Divide your risks" is the slogan of the times. Insurance companies are conducted on this principle and spread their risks among several associates. We know this principle to be the best to follow in mining also. One of the greatest incentives to invest in our company is that it has two sources to draw from.
We have the utmost confidence in the success of the undertaking, for great successes have been the fashion of the district, and it can be "shown" that our ground is as rich as any one of them.
Now let us look at the properties separately:
THE SILVER LINK.
Consists of one patent, Survey No. 1678, and several claims whose titles are held, until patent is secured, by annual assessment work.
This mine has made considerable shipments from grass roots. Although an old patent, it is not fully developed; its poor miners were unable to put sufficient development on it to bring it to produce tonnage. Good ore was present all the way down for 130 feet, and in the drift thereat, but further depth in the shaft and longer drifts must be had to produce tonnage. No matter how rich a mine may be, it can not produce tonnage without considerable development. The mine is of proven worth, with excellent ore of the same grade as the Freeland and Freeland Extension mines, being a parallel vein, and in the writer's opinion the same as the Gum Tree Mine. Mr. John G. Roberts, owner of the Jackson Mill, a veteran mechanic and milling man, had this to say:
"In 1881 the Silver Link shaft was sunk by William Edwards & Co. At that time the undersigned was a foreman at one of the mills in Idaho Springs. Edwards & Co. brought some of the ore for treatment, and the results were highly satisfactory. I have no doubt that, by development, the mine will be a profitable producer in a short time."
(Signed) JOHN G. ROBERTS,
(Now, 3 South Newton St., Denver.)
Idaho Springs, Colo. ... June 22, 1907.
The retired veteran's words mean much in this connection, for he was absolutely disinterested. Mr. Roberts gave this encouraging information to the writer (his friend, John L. Roberts) as a buyer, and not for influencing a sale.
The great reputation of this new prospect among veteran miners of the Camp, which had become current talk, made the writer eager to acquire it, and as soon as he was able, he became possessed of it in 1909.
The interest in it became so intense, that the Veteran Cornish Miners of the camp "had to see it," and their verdict was, without exception, "that it needed only sufficient depth and development to make it a great mine."
The foregoing facts are deemed sufficient, without burdening the prospectus with any further testimonials.
The writer will state without hesitation, that the possibilities of this mine are very great.
The nature of the ore is galena, which carries gold, silver, lead, zinc, and copper in varying proportions. For instance, in our neighbor's mine, the Freeland, the ore ran in gold from .75 to 11.00 ounces per ton, and was freighted with silver-lead at the same time.
Here is an assay of a piece of mineral the writer (J. L. R.), picked from the dump, April 15, 1908:
.03 ozs. gold; 18.20 ozs. silver; 56% lead.
The lead itself is worth over $100 a ton. An English engineer, Mr. George Bennet, picked a piece from the dump in 1910 which gave 2.24 ozs. gold and 23.60 ozs. silver. This would give $60 without the lead contents.
Here is the result of assays of ore from the shaft at the depth of 110 feet:
| Sample | Assay |
|--------|-------|
| 1 | $37.56 |
| 2 | 20.32 |
| 3 | 82.12 |
| Average| 46.66 |
This was taken from Mr. Edwards' diary. No weights nor date were given, consequently, no comparison can be made with present prices; probably 50 per cent. can be added.
These three samples were from three different streaks in the vein at that point.
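For the modern reader, the arithmetic behind these dump and shaft figures can be checked in a few lines. The metal prices are assumptions, not stated with the assays: gold at the statutory $20.67 per fine ounce, silver at roughly $0.69 per ounce, and lead at the 8.90 cents per pound quoted later in this prospectus.

```python
# Rough checks on the assay figures, at assumed period prices.
GOLD = 20.67    # dollars per fine ounce (statutory)
SILVER = 0.69   # dollars per ounce (assumed mid-1920s price)
LEAD = 0.089    # dollars per pound (the 8.90-cent figure quoted later)

# Average of the three shaft samples taken at the 110-foot depth.
samples = [37.56, 20.32, 82.12]
print(round(sum(samples) / len(samples), 2))   # 46.67 (the table prints 46.66)

# The 56% lead in the first dump sample, per 2,000-pound ton.
print(round(0.56 * 2000 * LEAD, 2))            # 99.68 -- "worth over $100 a ton"

# Mr. Bennet's sample: 2.24 oz. gold and 23.60 oz. silver.
print(round(2.24 * GOLD + 23.60 * SILVER, 2))  # 62.58 -- close to the "$60" cited
```

At these assumed prices the printed figures hold together within rounding.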
One great advantage we have in the "Silver Link" is that no pumping of water will be necessary after the surface water, held back by silt, is bailed out, for the reason that the Freeland Mine will drain it to a great depth. It encountered no water when the shaft was sunk. The new tunnel of the Freeland cuts its vein about 1,300 feet deeper than the old tunnel, and under the summit about 2,200 feet. The depth to which ore in the Freeland can be followed is, as yet, not known, and the same condition will obtain in the Silver Link.
THE FREELAND MINE.
We shall introduce our neighbor, the great *Freeland Mine*, in this connection, very briefly though:
The U. S. Geologists, Josiah E. Spurr and G. H. Garrey, in their report, in Professional Paper No. 63, said: "The Freeland Mine produced $4,655,000 up to August, 1905." We can not give details here, but we shall quote a very significant item: "The mine waters of the Freeland contain the sulphates of copper and iron and carbonate of lime; this is shown by deposits, on the walls, of copper and iron sulphates and calcite. It is said that nails have changed to native copper by the mine waters, through the well-known action of metallic iron upon copper in solution."
This rich mine was idle for 20 years, but is producing again, on account of improvement in prices and methods, and will be producing for the next 50 years and more. Its extension also has produced largely.
John W. Mackay, the late banker, New York, and of Comstock fame, bought this mine in 1879, for $250,000.00 spot cash, as we have stated already, and took out $3,299,000 up to 1888.
The Silver Link, as we have already said, is a parallel vein, and is a strong wide lode, from four to six feet in width, and bears the same characteristics as the Freeland.
LOOKING WEST
This picture presents the Silver Link neighbors. The peak in the extreme west is pointed out in the smelter picture as 10,800 feet high. The black crosses mark the dumps of the Freeland Nos. 1 and 2. Freeland is worked 1300 feet lower under Ohio Mountain.
Toledo and Gum Tree (east of Toledo) are north of the Creek. All of these have had a glorious past, and are destined to have a brilliant future.
This is a remnant of the old shaft sunk 119 feet on the Baltimore, discarded for a lower location. This picture is placed here to show its possible connection with rich mines all the way northwest to Gilpin County. Topeka is 5 miles away, on top of the range.
The trees look small on account of the height and the distance from which the picture was taken. No. 1 looks higher than (3), which is 102 feet higher. No. 2—The Union Flag is a large vein and will join the Baltimore shown by the white line towards No. 3. The mountain somehow looks flat but it is hard to climb. No. 4—Harrisburg dump. No. 5—The Brighton Mine. The Freeland Vein passes up right of the Brighton into the higher hill.
THE LAMARTINE MINE
This has been the most wonderful mine of Clear Creek County. Its production is said to be somewhere between $7,000,000 and $8,000,000, and can produce for years to come. It is the only mine in the county that produced $1,000,000 in one year.
STANDING ON OHIO MOUNTAIN LOOKING SOUTH
This shows the Tunnel dump of the Lamartine Mine. Mr. John G. Roberts, the owner of the Jackson Mill, built a little mill under it to treat the dump. It was operated through Oneida mine.
THE BALTIMORE.
Consists of a group of four (4) patented mining claims, and one location, which are situated directly east of the Lamartine Mine, the greatest mine of the region. The mother vein of the Baltimore goes straight through the Gold Leaf and telescopes, as it were, into the Crown Lode—the richest vein in the Lamartine group. The names of the claims are:
- Baltimore ............... Patent Survey 16750
- Gold Leaf ............... Patent Survey 16755
- Hard to Beat ............ Patent Survey 16755
- Gold Belt ............... Patent Survey 16755

All patented by the late owner, J. L. Roberts, in 1904.
This hill is intensely mineralized. The pick will find mineral everywhere, but it took a great amount of labor and money in pitting and trenching to find the mother vein, the Lamartine, passing through it; when found, it was named the Baltimore. A shaft was sunk at the S. W. end 119 feet deep, and at the N. E. end a cut and tunnel about 110 feet, and between these points it was opened every few feet to test its intensity, showing everywhere an immense "iron hat," the vein blossom, which is the delight of the miner! There is a vast body of ore under the dome of this group, for it is one mass of mineral. Read the following, which speaks of one great vein that enters the Baltimore ground.
An eminent engineer, Mr. J. B. Caldon, with experience in Africa and South America, in his report on the Lone Tree, one of the veins that will join the Baltimore diagonally, wrote thus:
"Coming out of the tunnel and passing over the mountain to the southwest for a distance of 200 feet, where you have encountered the main lode on which the long tunnel was driven, I find at the breast of the cross-cut a drift N. 20° E., all on vein matter." "The oxidized ore from this point pans freely, samples running as high as $16 ounces in gold, and 158 ounces in silver to the ton."
Then in going farther up towards the "dome" of the Baltimore he said, "Here the surface could be shovelled and passed through a mill."
The Lone Tree is immediately below the Baltimore group. This mine produced $125,000 in a distance of 1,700 feet, but never deeper than 150 feet from the surface. It is about to start up again, at about 400 or 500 feet deeper, and no one can estimate the riches it will produce.
This is one of, at least, five lodes or veins that converge in the Baltimore, and any intelligent miner knows that such convergences—the meeting of the waters, as it were—mean great enrichments.
It is an absolute fact that veins are not isolated; like fish, they go in schools.
Then beyond us on the S. W. end, is the
LAMARTINE MINE.
As we have said already, this is the only mine in the county that produced $1,000,000 in one year, and with a small force of men and primitive methods! Practically, this was in its third year of actual mining—after reaching the sulphide zone.
As we have every reason to believe that the Baltimore will develop into a second Lamartine, the reader will be glad to know a little more about it. We can not go into its history which would be very interesting, but will put down a few facts which will help the reader to realize why we expect so much from the Baltimore group.
The Geologists who reported on the Freeland, August, 1905, have this to say in the same volume on the Lamartine:
In the first place, the Geologists assert that $616,000 was taken out in 16 months—(the latter part of a two years' lease)—and that the ore averaged $100 a ton, for years.
Total production to August, 1905, was $2,361,039.15 from 67,946,019 pounds of ore, yielding:

39,291.81 ounces of gold (worth $812,161.70 anytime!)
2,677,470.79 ounces of silver
3,232,020 pounds of lead
Zinc and copper contents were not given, although it produced considerable zinc and some copper, to the writer's knowledge. It produced much fine steel (crystallized) galena, which would bring a high price for radio crystals these days!
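The parenthetical valuation of the gold output can be reproduced directly; it appears to be simply the reported ounces priced at the statutory $20.67 per fine ounce (an assumption, since the prospectus does not name the price used).

```python
# The Lamartine gold output, priced at an assumed $20.67 per fine ounce.
gold_oz = 39291.81
print(round(gold_oz * 20.67, 2))   # 812161.71 -- within a cent or two of the printed $812,161.70
```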
It will be seen that although classed as a silver mine it produced much gold. The owner, Dr. F. E. Harrod, New York, left it idle for years on account of declining metal prices, and as yet, his heirs have not reopened it. We could write much more about this and other neighboring mines, but we shall close this section, by giving some specimens which can be seen in the Capitol Museum at Denver.
Sample 1. 900 ounces silver; 65 per cent. lead
Sample 2. 700 ounces silver; 60 per cent. lead
Sample 3. 290 ounces silver; 60 per cent. lead
Gold and other contents are not given; the reader can calculate what the silver and lead amount to on the day when he reads this, which will not be under $250, without the gold.
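Taking the text at its word, here is that calculation for the leanest of the three specimens, at assumed prices of $0.69 per ounce for silver and 8.90 cents per pound for lead (prices a reader of 1926 would have substituted for himself).

```python
# Sample 3: 290 oz. silver and 60% lead, per 2,000-pound ton,
# at assumed period prices ($0.69/oz silver, 8.90 cents/lb lead).
silver_value = 290 * 0.69          # dollars from silver
lead_value = 0.60 * 2000 * 0.089   # dollars from lead
print(round(silver_value + lead_value, 2))   # 306.9 -- well above the $250 floor
```

Even the poorest specimen clears the $250 figure; the two richer ones run several times higher.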
We have every reason to expect similar results in the Baltimore in a short time, if our subscribers will stand by us. We could say more in support of our claims. Our subscribers can rest assured that the Baltimore has a great future, and every effort will be made to bring it into the producing class early.
It would be easy to quote pages of the opinions of mining men about the richness of this hill, but we shall write a few words only. Dr. R. D. George, our State Geologist for many years and Professor of Geology in the State University, said of the Baltimore, in speaking of its possibilities:
"I have no doubt of the general attractiveness of that part of the camp."
November 21, 1925.
Courtney Ryley Cooper, the celebrated author, who has resided at Idaho Springs for some years, said: "Your property looks interesting indeed."
February 8, 1925.
ENDORSEMENT
To the Reader:
I wish to state that the statement herein regarding the Camp French mining properties is true. I lived in Freeland for many years, and have been intimately acquainted with every mine and hill in the district for 45 years. I was connected with the famous Freeland Mine for a long time, and later was a co-owner and superintendent of the Brighton Mine, which lies midway between the Freeland and the Baltimore.
The Camp French Mining Company's properties have high merit and are likely to prove bonanzas.
I have known the superintendent since the eighties, and the developments in his hands are sure to be carried on with economy and intelligence; for he has had experience, and his ability and integrity are beyond question.
October 4, 1926.
(Signed) DAVID ELLIS,
Denver, Colo.
This shows our power line, crossing from Georgetown to Idaho Springs.
(A) points to a black hole, the mouth of a fumarole, an old volcanic vapour vent, occurring in the heart of the Baltimore vein. This is most significant, but neither its significance nor its importance can be discussed here.
(B) points to the enormous shining black quartz taken out of the shaft.
(C) points to the dump in front of the adit, which is hid by brush and timber.
Especially Silver-Lead-Zinc. We are not concerned so much about gold; still, we are going to have considerable gold as a by-product. Mr. Robert J. Grant, the Director of the Mint, was in our city recently and is reported thus:
"The favorable attitude of European nations manifested so far toward the Dawes reparation plan is probably one important cause of the startling advance in the price of silver.
"Reports from the international conference relative to the Dawes plan, which is being held in London, indicate a readiness of the nations involved to put the plan in effect. * * * The influence of the Dawes plan would be permanent, and any increase in the price of silver caused by it will also be permanent."
Mr. James F. Callbreath, secretary of the American Mining Congress, is reported thus, in the Rocky Mountain News, October 24, 1925:
Mr. Callbreath especially is optimistic over the outlook in the silver mining regions of the state. "Silver," he continued, "now has reached a price whereby it can be mined, under the improved mining conditions, and be sold at a profit. * * * The main factor, as I see it, in the matter of keeping up the price of silver is the fact that many of the countries of Europe and South America once again are going on a silver basis for their financial structures."
"The action of Poland in restoring silver as a basis of its monetary system has been the greatest stabilizing influence of recent months. Peru, Guatemala and other South American countries are following suit and are purchasing silver in the United States. The government completed its purchase of a million and a half ounces of silver for metal coinage, and is now confronted with the prospects of being forced to purchase an additional quantity. All of these things make for the prosperity of the silver industry and the allied mining properties."
Mr. Callbreath further said: "The advance in silver, in connection with the liberal prices for lead and zinc, and the more liberal terms at the smelters, has a stimulating effect upon mining in Colorado, more especially in districts where silver, lead, and zinc are prominent factors in the ores. The increased saving of values at the modern flotation, cyanide and amalgamation plants is also an asset of encouragement. In many of the Colorado camps the mill saving is 20 per cent. higher than it was previous to the World War, at some decrease in the handling of the ores."
Further: "In the past miners were penalized upon shipping ores to smelters for the presence of silver in these ores. Today a process has been developed whereby the zinc can be extracted from the ores and the miner paid for it, rather than penalized for its presence."
The American Mining Congress Journal for March, 1925, said: "It is not a wild prediction that the Dawes settlement, the scarcity of gold and the adoption of silver for subsidiary coinage by the paper currency countries of the world, will make the dollar seem as cheap as 50-cent silver did a few years ago. The price of silver and the industrial progress of the world are going up together."
Captain Smith said: "As the price lifts, Colorado will regain her former prestige as a producer of the white metal preferred by the millions, while gold is coveted by the banks. While Great Britain coins silver at 14 to 1, the parity of 16 to 1, is conservative and safe."
The Denver Mining and Financial Record reports Mr. John T. Joyce, Commissioner of Mines, thus: "The advance in the price of silver, lead, zinc and copper, and the unwavering faith of mining men throughout the nation in the stability of the market, is awakening capital to the full realization of the opportunity for sound investment offered by the mining industry," etc.
BUT LEAD AND ZINC ARE MORE IMPORTANT THAN SILVER
It is very gratifying to have such a bright outlook for silver, but the outlook for lead and zinc is far more reassuring. The possible alloy to render silver untarnishable is very promising, no doubt. However, the uses found for lead and zinc will not only keep the price of lead from going down to 1½ cents (to say nothing of the abolished penalties); it will go up to a still higher level. Lead will not stay long at 8.90 cents a pound, and more than 7.45 cents will be got for zinc.
Some may be astonished at, and others may question, these figures; but the writer has a certificate from the Chamberlain-Dillingham Ore Company showing that the price they paid on April 12, 1905, was 30 cents per unit (a unit being one per cent. of a ton, or 20 pounds), which, in common terms, means 1½ cents per pound for lead. Does this not show the reason why our miners got discouraged and temporarily dropped mining? But let us look at the sunny side of today.
Gail Martin says in the Denver Mining & Financial Record of September 19, 1925: "Probably never in the history of the world has industry faced the same situation with regard to lead. The leading aspects of this unprecedented situation can be summarized as follows:
1. Demand for lead is increasing by leaps and bounds.
2. No new lead mines have been discovered lately.
3. Substitutes for this metal, so vital to industry and science, in a multitude of ways are lacking.
(Note—It is well known that the Mid-Continent lead fields are being exhausted.)
"Students of metal predict, if substitutes are not found, the gray metal will some day soon be selling on a parity with copper. For on all sides is the need of lead growing. Lead goes into paints, babbitt, solder, pipe, roofing, printers' metal, chemicals, glass, pottery, enamels and batteries."
"Moreover, it has scores of other uses too numerous to name. Take, for example, the one use for automobile batteries. There are over 20,000,000 automobiles in the United States. Allowing 30 pounds of metal to the battery, 600,000,000 pounds are utilized for this purpose alone. The life of a battery varies from eighteen to twenty months, after which only 60 per cent. of the precious gray metal can be reclaimed; therefore, 200,000,000 pounds are irrecoverably lost!"
"Most certainly a phase in the history of mining has been reached never before experienced. What will be the outcome, promises to be one of the most interesting speculations of recent times!"
We quote from an article by Captain Jas. T. Smith, Mining Editor of the Rocky Mountain News, as follows: "Mr. John H. Marks, M. E., the Veteran Hydraulic, Mining and Civil Engineer predicts a very general revival in mining for ores in Colorado, that furnish five per cent. (5%) or more in lead. The writer called at his office the other day, and found a dozen mining men from several sections where lead is predominant, all engaged in the examination of maps, which are kept up to date, with a view to the reopening of several well-known properties."
Another use for lead is forthcoming: Stringing of telephone and telegraph wires on poles is coming to an end in our cities. Now, in most large cities thousands upon thousands of miles of wires are put underground in lead conduits. You may ask, why is lead used? There are strong reasons for its use, for which no substitutes can be found: It is impervious to moisture, and it resists corrosion.
Note.
The circles give the centers where heavy production has been made, or where developments indicate intense mineralization.
To avoid crowding we have left much vacant ground, but all is intensely mineralized. Here we see that the mines are on top of the hills, looking down to Freeland—all are so located.
OLD SMELTER
This is the location of the Old Smelter, a block west of New Era Mill.
NEW ERA MINE & MILL
New Era is in the Freeland Camp or Village. The reader should trace this vein from the Creek up through the Long Tree Mine into the Baltimore. This is one of the many veins that converge in our ground.
THE IMPORTANCE OF MINING
Mining is an industry which lies at the base of national prosperity and greatness, and of all industries—except, perhaps, agriculture—stands first. The strength and enduring wealth of the nation lies in her mines of base as well as precious metals. Mining, however, is not appreciated by the public because of the limited and vague knowledge which obtains concerning it. To the initiated it has positive fascination, for gold is the one product of the earth that is unchangeable. True, precious metals are but mediums by which necessities and luxuries of life are exchanged, but they are the most desirable media the world has. The qualities of these metals for their use are the best obtainable, whether considered by their chemical or other qualifications. The influence of mining in general is widespread, and it has at all times been a pioneer of civilization and commerce. On hillsides, in gulch and cañon, cities have sprung up where before the silent tread of the wild beasts only was known. Plains and prairies not long ago almost unknown have become the habitation of agriculturists who followed in the miners’ trail. There has been distrust shown toward this class of investments, but it arises from a superficial knowledge as to what mining really is. We hope that the following pages will, in some little measure, help to dispel this mistrust. Remember that men of honor, men of the greatest wealth, who have become wealthy in its pursuit, and men of profound learning, are among its most ardent supporters.
The Mining Herald, New York, says:
“Few realize to what extent the mining interests of the United States enter into its business. To make clear the wonderful production of ‘things in the earth,’ this illustration may be given: If the wealth of the nation was wiped out, if its farms were destroyed, its manufactories annihilated, its railroads torn up and cast into the sea, its ships sunk; if every vestige of the nation’s wealth were to perish, and leave only its mines, the mining industry at the rate of last year’s production, would rebuild the entire structure in seventy years.”
THE ONE SURE THING.
“When a mining proposition is offered to an investor, he immediately remembers that he has heard of money having been lost in mining. He usually stops at this point and decides that he wants no part in an investment that is located in a field where failures have occurred. He makes the statement that he has no money to risk; that what little he possesses is all placed in a good, safe, conservative manner where there is no danger of loss, etc. If such an investment has been found we have neither seen nor heard of it. It is a common idea that an investment that pays a low rate of interest is safe, or let us say safer than one that pays a high rate. Under certain conditions, and with certain securities, this is a good rule to follow. But it is decidedly wrong to imagine that there is no possibility of failure or losing money in these investments simply because they are low interest payers and are considered conservative.”—From Boston Mining Bureau.
A banking institution is perhaps the best example of the low-interest conservative class that can be found. But statistics for one year show, that there were 1,000 failures throughout the country, about forty of them being national banks! What the cause of these failures was is immaterial. Whether it was through the dishonesty of some of the officers or whether bad management was responsible, the result to the stockholder is the same—they lost their money. Look at the railroads and you will find that a large number of them are paying not a cent of interest to their stockholders. Not only this, but the value of the stock itself has depreciated. Yet these are called good, legitimate investments, and the man who was afraid to risk his money in a good mining proposition will put every cent that he can rake and scrape together in one of these so-called safe and conservative securities, because he can not afford to run the risk of a loss! There was more money lost last year through the failures of banks than there has been lost through mining investments in a generation.
The failure of a mine is a very rare thing. No mining failure has been in our courts for years. But we have had seven (7) bank failures here recently—five (5) in 1925, and there is a host of bankers and directors under indictment now, having left want and distress in their trails—some accepting deposits a few minutes before stopping payments! Once the money has passed the window your control over it is gone.
And yet people will tell you that mining is risky, but most other concerns are conservative and absolutely safe—especially BANKS!
To a mining man these conditions are almost inexplicable. He knows what a good mining investment is worth, and when he is turned aside for other investments, he is dumb-founded.
"The same conditions govern the mining industry that govern any other business in the universe. Honesty and ability must be the predominating elements in every business, be it mining, railroading, or any other line; when these two factors do not go hand in hand the result is failure. But when they are united there is every reason for success, and when a mining company achieves success the returns to the stockholders are many times greater than those of any other business."—From Boston Mining Bureau.
MINING PROFITABLE BEYOND ANY OTHER INDUSTRY.
This topic is a fertile field, but it would be tedious for the reader to read all we would like to place before him. The instances where fabulous wealth has been made by the expenditure of a small sum of money are numerous and startling—clean wealth, too, from the bosom of the hills, obtained without robbing anybody. It is not saying too much to assert that it is the only honest way to acquire wealth on a small investment. The Mining Investor says:
"The investor who seeks a profitable employment for his idle funds will find the best means for rapid increase in mining investments. More dividends are distributed by the mines of the United States annually than any other industrial corporations and the railroads as well. The average result of mining investments is greater to the individual than even the rate of dividends paid would indicate. It is no uncommon thing for the stock of a mining company to advance 4,000 to 10,000 per cent. over night. Hundreds of incidents may be cited where $500.00 has brought a fortune to the investor.
"A New York man recently subscribed for shares in a certain mine. He inadvertently filled out the application for 5,000 instead of 500 shares. When the time came to make payment for them, as the price was low, he paid for 5,000 shares, but under protest. In less than one year's time he had sold half of his holdings for over $500,000.00 and refused a like amount for the remainder."
The Wall Street Journal said: "The greater part of the large fortunes in this country were started from investment in gold and silver mines. The dividends paid by gold and silver mines are greater than the dividends paid by all the banks of this country."
The possibilities of increased value are very great. For instance, in 1892 the Independence Mine, of our state, was offered for $100,000; six years later it was sold for $10,000,000! Its neighbor, the Gold Coin, was offered in 1898 at three cents per share but has sold since, all the way up to $8.00 per share.
It would be an easy matter, as well as a fascinating one, to recount the fortunes made in our own county, such as the Lamartine Mine, where many lessees have acquired independence in a few years. The Freeland, Donaldson, Stanley, Joe Reynolds, Two Sisters, Red Elephant, and others in the lower part of our county have produced many millions, in the days when 46% only of the actual value in the ores could be saved by the then, antiquated methods.
So, we assert that investment in a good mining property, which has a fair proportion of gold and silver in its ore, and especially those carrying heavy lead, zinc, and some copper besides, is as safe as anything can be—safer than ever now, when lead and silver are advancing, and will advance—especially lead, for many years to come for reasons we shall give later. The future possibilities of mining are very great; risk with present methods is reduced to a vanishing point. There is no competition to fear, as is the case with inventions. A good invention today may be scrapped by a better one tomorrow. To the contrary, every mechanical invention has a tendency to make the profits greater in mining.
WHAT ABOUT MARK TWAIN AND THOMAS EDISON?
Mark Twain once defined a mine as "a hole in the ground and a liar on top of it." He could have made a fabulous sum had he clung to his interest in the Comstock, when an undeveloped claim, and stayed with it. Instead, he sold it for $300, while his co-partners, J. W. MacKay (late owner of the Freeland & Stanley Mines), Fair, Flood, and their new partner, later made out of their interests over $300,000,000 (three hundred millions). In this case the laugh was on the cynic!
Another false move. Being a printer, he was sure he could not fool himself in that line. He decided that the Paige Typesetting Machine had a bright future, although in its embryonic stage then. He joined hands to perfect it. After 10 years of "watchful waiting" and the expenditure of $2,500,000, involving himself in financial distress to the tune of $190,000, his great expectations were scattered to the winds, vanished like the dew of the morning; for the machine, although meritorious, proved obsolescent—fit for the scrap heap only—when a superior invention, the Linotype, was placed on the market.
And, what about Edison's gigantic failure? On the strength of his great name he was able to raise a working capital of $10,000,000.00 to build an Electric Smelter and a new town, ready for occupancy for the workmen—Edison, New Jersey, which is today a "deserted village." Mr. Edison's new process proved a fiasco!
We have presented two investments as an object lesson and we could cite a legion, beside which generations of mining failures would pale into insignificance. No sneering reference is ever made about these, but let somebody drop a $10 bill into the till of something that wears the name of a mine, whose stock is foisted on the public, great is the howl should it go to the wall.
Besides being the safest of investments, mining is the most remunerative of all occupations. L. E. Aubrey, State Mineralogist of California, said: "The average annual product or earning per capita in California, of those engaged in farming is $300.00; in manufacturing, including its bounties, $1,000.00; in mining with all its ventures, $1,500.00; and yet, many people denounce the business as comparatively precarious and unprofitable."
Railroads are considered by the gullible public as safe. But what are the facts? "In one year, with a capital of $5,453,000,000.00, the dividends were $32,630,000.00, or an average of 1.51 per cent. Over 70 per cent. of the railroad stock paid no dividend that year." The reader can draw his own conclusions.
CAMP FRENCH MINING COMPANY
MINING VERSUS MANUFACTURING
"A gentleman with statistical bent, has found that in mining there is a profit of 300 per cent., with 35 per cent. failures, while manufacturing ranges from 10 to 25 per cent. profit with failures of 95 per cent.
"The statistics of 50 Colorado Mining companies, with a combined capital of $46,000,000.00, showed the following results: Dividends paid, $20,000,000.00; original investments by shareholders, $7,000,000.00; returned to shareholders on par value over 43 per cent.; returned to shareholders on original investments, 300 per cent.
"What other business, he asks, can make such a showing? That carefully selected mining investments are the safest, more permanent and more profitable than any other, there can be no question. Mining is not a romantic dream, but a serious, legitimate and profitable business."—From Pacific Magazine.
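The Colorado figures just quoted can be checked directly; a minimal Python sketch using only the dollar amounts given in the passage:

```python
# Verify the quoted statistics of 50 Colorado mining companies.
capital = 46_000_000             # combined capitalization (par value)
dividends_paid = 20_000_000      # dividends distributed
original_investment = 7_000_000  # actually paid in by shareholders

return_on_par = dividends_paid / capital * 100
return_on_investment = dividends_paid / original_investment * 100

print(round(return_on_par, 1))      # 43.5 -- "over 43 per cent." on par value
print(round(return_on_investment))  # 286 -- roughly the "300 per cent." quoted
```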
As to permanency, not one of the mines named in the previous chapter has been exhausted, and all are destined to produce for years to come, although they are old; some produced in the early sixties.
IMPROVED METHODS OF TODAY ARE HIGHLY STANDARDIZED
Improved methods of treating ores with modern appliances have of late increased the profits of mining; and further improvements are pending, both in milling and in metallurgy.
Not many years ago refractory—rebellious—ores could not be mined unless they carried values of at least $60.00 per ton; but now the rebellious element of zinc is deleted by modern selective flotation, and consequently, instead of the ore being penalized for its presence, zinc adds immensely to the value of such refractory ores.
To substantiate our claims, we quote a paragraph which appeared in Denver Mining and Financial Record, November 21, 1925: "Colorado has not less than 100,000,000 tons of silver, lead, and zinc ores exposed in her mines, or mine dumps, and the new selective flotation process of refining will turn those ores into cash in the next few years, according to Robert A. Wilson, mining engineer of New York City, who has been traveling over Colorado three months, visiting scores of mining camps at the behest of Eastern capital.
"For 25 years the lead, zinc ores of Colorado have been neglected," said Wilson, "not because we did not know they were here, but because we could not refine them at a profit. I was in Colorado 20 years ago and am well acquainted with the conditions at that time.
"A new era is opening, in fact, it has already opened, by the selective flotation process for refining ores that for years have been commercially worthless."
Now let us turn to metallurgy or rather chemistry and see what has been done this year, with "leaded gasoline."
In May of this year (1925) Captain Smith wrote thus, in the Rocky Mountain News:
"Tetraethyl lead is added to gasoline to slow down the explosion of the gasoline in motors and prevent 'knocking' and to increase the efficiency or kick in ordinary gasoline.
"Demand for the new product exists and science—invention—will produce the article in time, free from grave danger.
"The effect upon lead producing mines in Colorado can hardly be estimated, some experts predicting 15 cents a pound for lead when oil companies begin buying large quantities for immediate use. Chemists have been working on this problem for a long time, for in September of 1924 a $5,000,000 corporation was formed by the Standard Oil Company, with others, to produce the fluid which is used chiefly in the automobile business, but the new company found that too many deaths resulted from the manufacture of the fluid and had to stop all operations by the order of the Government. But, this state of things was of short duration.
DENVER CHEMISTS TRIUMPHANT.
Rocky Mountain News, August 29, 1925: "Denver chemists are now producing a tetraethyl of lead, which is non-poisonous, it became known yesterday. * * * * * The gas-saving fuel will be marketed through a corporation soon to be formed here!" The problem is solved, and its effect on lead mining can hardly be estimated.
We must close, without following the subject further, with this statement, that during John W. Mackay's operation in the Freeland Mine in the eighties, no more than 46 per cent. of the values were saved; now, from 90 to 95 per cent. is the average—a gain of 100 per cent.
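The recovery comparison above works out as claimed; a quick check in Python (taking the midpoint of the 90-95 per cent. range as an assumption):

```python
# Gain in mill recovery: Mackay-era Freeland practice vs. 1925 methods.
old_recovery = 0.46               # 46 per cent. of values saved in the eighties
new_recovery = (0.90 + 0.95) / 2  # midpoint of the modern 90-95 per cent. range

gain_per_cent = (new_recovery - old_recovery) / old_recovery * 100
print(round(gain_per_cent))       # 101 -- close to the "gain of 100 per cent."
```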
All of this goes to show the wonderful advancement in both Milling and Metallurgy, and thus can not help but prove a great boon to mining in our state.
The Colorado School of Mines at the foot of the hills, is the best in the world, and is ever on the alert in developing and inventing new processes (such as the selective flotation) which in the future, as in the past, will inure greatly to the mining wealth of our state.
WHAT THE SAGES SAY, AND DO—LINCOLN.
CECIL RHODES SAID:
"Mining has been the foundation of the world's wealth from the beginning of time. A country of great mines always means a country of great wealth, influence and power. Investigation shows that 29 per cent. more people proportionately lose money and fail in mercantile business than in mining; 41 per cent. more people lose money and fail in the manufacturing business than in mining; more people lose money and fail proportionately in any of the professions than in mining. It is not an uncommon thing for a good mine to pay the holders of stock thousands of dollars for every dollar they invest. Mining offers greater inducements than any other business in the world to make quick and great wealth."
LISTEN ONCE MORE.
"I speak advisedly, and say, what every man who has invested knows to be the truth, that less money is lost proportionately in mining and investments in mining stock than in any business on earth. A good mining stock will pay the investor more easily 20, 30, 40, 50 and 100 per cent. annually than municipal bonds, railroad bonds and stocks or government bonds can possibly pay 5 per cent. Money invested in good mining stocks is safer than in a bank, than in mortgages, railroad securities or government bonds.
"The security of a good mining stock is the raw material of money itself; it is what we call in Africa, the 'stuff'; it is the staff at whose feet governments, cities, banks, railroads and all forms of business kneel. I speak only of gold and silver mines, from the metals of which blooms and blossoms the everlasting dollar; the crude metal in our gold and silver mines is the first and best security in the world. This is what makes banks and banking a possibility. This is what gives legs to a municipality, spine to a government and creates the business of the world into a living, breathing, active creature of life.
"Buy a good mining stock, buy it low; when it has doubled or quadrupled sell it; buy another good mining stock; pursue this policy and before you dream of it you will find that your dollars have increased to thousands and during all this time your dividends have been 100 per cent. higher than they would have been in any other business." Again, he says:
MEN OF SMALL MEANS WILL FIND THIS THE BEST AID TO FORTUNE.
"In answer to your question as to what I think of men and women of small means investing in mining stocks or in mines, my answer is that these are the very people who should invest in mining stock above all others. They have too little money for 3 or 6 per cent. to do them much good, while on the other hand 12 or 40 per cent. on their small investments would bring the comforts of plenty. People who need most of all to invest their money in good mining stocks are those who can only invest from $20 to $100—the small investor—because if well invested, these stocks besides paying good interest while they are held, can soon be re-sold for 100 per cent. profit, and in this way often the poorest man or woman can quickly climb the ladder of wealth."
Men of wealth have been persistent investors in mines. More prominent men of national fame invest in mines than the public is aware of. They don’t blow their trumpets about it. The Whitneys were intensely interested in mining.
George Gould was known as a railroad magnate but was extensively interested in mining and his son took a mining engineer’s course.
W. R. Hearst, the rich newspaper man, acquired his wealth through mining. Schwab, the steel magnate, made immense profits in mining ventures.
The fact is, the richest men before the war, with a few exceptions got their money from the ground.
Mr. Henry Ford, now, has entered the mining business, since lead has advanced, and it is hard to know how far he will go, for he needs about 30 or more pounds for each car!
The late John W. Mackay, New York City, said: “Nothing pays better than an investment in a good mine.” And, bear this in mind, he selected Freeland as the best place in Colorado to put his money into, for which he, as already stated, paid $250,000 cash, and bought the Stanley Mine, a year later, for $200,000.
Baron Rothschild said: “To make money now, we must dig it out of the ground.”
David H. Moffat said: “Gold mining is the surest road to fortune.”
If he lived today, no doubt his song would be, “Lead mining is a sure road to fortune.” He was said to have made $18,000,000.00 in mining. However, it melted away like snow, when he tried banking and railroad building!
Jay Gould said: “Invest now. Don’t wait for it to pan out all right for there’s where you lose your opportunity. When you have waited to see if it pans out, you will have to pay premiums for the stock and you have lost the biggest advance.”
A PROPHECY FULFILLED
President Lincoln’s Message to Miners
Just before his assassination he said to Schuyler Colfax: “I want you to make a speech for me to the miners on your journey west. I have very large ideas of the mineral wealth of our nation. I believe it practically inexhaustible. It abounds in the Rocky Mountains—
“Tell the miners for me that I shall promote their interests to the utmost of my ability, because their prosperity is the prosperity of the nation. We shall prove in a few years, that we are indeed, the Treasury of the World.”
AN UNUSUAL OPPORTUNITY.
Here is an opportunity that you can hardly afford to ignore—don’t let it slip by. Act now; think of the possibilities of this offer. Procrastinators have to row against the tide. Don’t imitate them, but launch your boat now on the tide of prosperity. Buy as liberally as you can, if you want to receive liberally. Remember, delaying is expensive; your co-operation is desired and needed now. A year or two hence we will not need you, in all probability; and, if we do, it will cost you more. We have every reason to anticipate an advance of 50 per cent. and higher still, later.
THE COMPANY'S OBJECT.
Mutual help and benefit are the objects. The principle of buying this stock is much like making a loan, with this difference: that the lender in this case participates both in the interest and full earning power of his money. When he places his money in a savings bank he gets 3 or 4 per cent. interest, but he does not participate in its proper earning power. The banker will advise you that it is not safe to invest where large interest is offered. Does he believe it? Not he. Where would he get his profits if he did? He invests your money in securities where he thinks large profits are made, and we have shown already that mining heretofore has paid the larger profits. We have got a rich property ready for development; you have got some idle money, or you may have some out earning little interest. It will pay you to aid us, and thus aid yourself, by placing it in a rich prospect. It is in young mines where the most money is made.
You may be contemplating life insurance. Listen! This is better than life insurance; you will derive benefit yourself from it, and your family are more sure of being benefited by it than by an insurance policy. Should you get drowned or killed in some mysterious way, there will be no trial to ascertain whether you committed suicide or not. Or, if burned in a railroad wreck, your family will not be defrauded because of lack of identification. Or you may, after two or three years, on account of failing health, or some adversity, fail to pay your premium, and your hard-earned money—many hundreds—may have gone to return no more.
But two or three hundred dollars in a good mining property, when the stock is low, will bring you and your family good returns; the everlasting hills themselves are your surety.
CO-OPERATION.
It may be stated that great achievements are the result of co-operation. On this topic the New Herald has this to say:
"Under incorporation great achievements are possible through combining the limited sums of thousands; thus equaling the capital of the individual millionaire, they make possible gigantic undertakings that are the producers of enormous revenues. The dollar of the man of moderate means is equally as powerful as the dollar of the money king. Both serve their purpose; both are entitled to their proportionate profits."
Our millionaires of today were the poor working men of yesterday. How have they acquired competence? Not by putting their money in the savings banks to draw interest, but by joining others in some profitable business. Our J. D. Rockefeller's wealth is the result of co-operation. He did not wait until he had the means to start business himself; if he had, he might have waited until now. If a man waits until he has large means before investing he will wait a long time. If he has $50 idle, let it be put where it will grow. Idle money does nobody good. Let him join others—20 or 100 persons with equal amounts—and there is accumulated a force of capital that can, rightly managed, do wonders.
Then why should not workmen have the courage to co-operate and reap the profits as well as the interest themselves? We invite this class of investors and want the co-operation of those who can only spare $50 towards this enterprise; if they can spare more—$100, $200, $300, $1,000, or more—all the better for all concerned. Aim as high as possible on the start; self-denial now means much in later years. Make a resolution, have the courage, have the foresight, have the determination, if not to build a great fortune, to acquire independence and lay the foundation for it right here—
You can take our word for it, that magnificent possibilities are manifest in this enterprise where the co-operation of the small investor can reap rich reward. The average wage earner, no matter how economical he may be, can never, by labor alone, hope for independence in old age. If you want to acquire a competence, invest in a good mining proposition. The mining kings began at the bottom; they did not buy stock when the mines had been developed—they loaded up with securities based on prospects of promising value, such as ours is.
We want to remind the working man, that most of those enjoying material wealth from mining ventures are from their ranks, but did not succeed without co-operation. The late Stratton was a poor carpenter; John Harman, a coal miner; James F. Burns, a pipe fitter; James Doyle, a carpenter; our Ex-Governor Shoup, a teamster, etc.
SUCCESS.
We repeat that mining is the main foundation of wealth. Mining, therefore, should have more attention by the general public, especially those desiring to better their condition. How to succeed in securing competence for himself and family is the dream of every man worthy of the name. It is a laudable ambition. No man wants to depend on others in the declining years of his life. So don’t chase rainbows, look under your feet instead. Mother earth is ready to help you at all points, but you must help yourself. Good as her velvety crust is, go deeper still to her treasure vaults. If you can not go yourself, do it by proxy, the next best thing.
CAPITALIZATION, OWNERSHIP, ETC.
The capitalization of the Camp French Mining Company is much lower than the average, simply $500,000, divided into 500,000 shares, of the par value of one dollar each. The shares are common; there are no preferred (favored) shares, and 291,000 shares have been assigned to the treasury to provide “ways and means” for the company to conduct its business in accordance with the “Issuer’s Prospectus” presented to, and accepted by, the Secretary of State, as being in conformity with the recently enacted “Blue Sky Law” of the state, known as “The Securities Act,” Chapter 168, Session Laws, Colorado, 1923. It is superfluous to add that the company is proceeding to do business under the full authority of the state, by its charter.
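The capitalization figures in this section tally; a brief Python check (the text later describes the treasury allotment as "practically three-fifths" of the capital):

```python
# Capitalization of the Camp French Mining Company, as stated in the text.
shares = 500_000          # total shares authorized
par_value = 1             # one dollar per share
treasury_shares = 291_000 # assigned to the treasury

capitalization = shares * par_value
print(capitalization)     # 500000 -- the $500,000 capitalization stated

treasury_fraction = treasury_shares / shares
print(round(treasury_fraction, 3))  # 0.582 -- "practically three-fifths" (0.6)
```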
OWNERSHIP.
The Camp French Mining Company owns, in fee simple, five patented mining claims (see page 12) and possessory rights to as many claims, rich in timber and minerals, which when patented, will more than double its present patented acreage. Four of the patents have been transferred for the first time by the patentee, the secretary-treasurer of the company, by a “warranty deed.” Therefore, there can be no question regarding our titles to the properties. This matter of ownership is of
SUPREME IMPORTANCE.
This will be realized by the lay reader when he is told that most of the mining prospects whose stock is offered for public subscription are handled by concerns who do not own the property, but are working under an option. They have acquired a possessory right to the prospect, subject to rigid restrictions, maybe, under a lease and bond agreement for two or three years, stipulating to pay so much royalty on ore shipped, based on smelter returns during the life of the lease. If the restrictions have been honored and the royalties have proved adequate, a deed is issued and the subscribers can feel comparatively safe.
But, should the conditions not be fulfilled and the royalties prove insufficient to pay for the property, the mine reverts to the owner, with the equipments and improvements placed thereon. Where the subscribers stand in the case, the reader can imagine—probably between Migdol and the sea.
However, as we have already shown, The Camp French Mining Company has absolute titles to the five claims, or two mines, it is developing, and is reasonably sure to be able to add much thereto, with the proper expenditure of money; untold wealth is hid in the properties, but it cannot be taken out without money. Anybody possessing ordinary common sense can see by referring to the map in the center of the book that the mines are in the center—the very heart of the richest mineral zone of the country. In fact, it is in the middle of that great mineral belt that traverses from N.E. to S.W., from Boulder through Gilpin, Clear Creek, and Summit Counties without a break, to Leadville and beyond!
Our present need is ample funds to carry on the work with due energy. However, we are not obliged to imperil the existence of the company by contracting debts in the form of bonds, mortgages, etc., for the management has assigned practically three-fifths of its capital to the treasury, to be sold as needs arise to meet its obligations of every nature, equipments, wages, etc., until the properties are dividend-paying.
Every subscriber should feel special interest in the company of which he is a member, with a voice in its management, which he will be invited to exercise at least once a year, in person or by proxy.
As this may be read by persons who are not familiar with the details of incorporated companies it might be well to state that confusion or dissolution cannot be brought about by the withdrawal or death of any officer or stockholder. For that matter its constituents can change continually, like the water of the Mississippi or any navigable river, and still maintain its identity and usefulness.
WHY DO YOU SELL STOCK?
The direct answer would be: because it is the only thing we can properly sell! We have laid by a non-participating stock for this very purpose, until the time when we can market our ore in profitable quantities. Did you ever ask Uncle Sam such a question—why do you sell bonds? And, how much did you gain by buying his bonds? Bonds in their very nature cannot make you a fortune.
Not very many years ago such a wealthy corporation as the Guggenheims put 10,000,000 shares on the market, for working capital!
We have exactly the same reason to offer—simply, to carry on our business, we need working capital. We have abundant capital in the vaults of the mountain but we need working capital to bore into it, and under it, to drag it out!
The burden of such enterprise is not trifling; the initial cost is heavy. But when the cost of such enterprise is divided among many it is easy of accomplishment.
There is nothing unusual about this. It is easy to recall big payers that have been developed by small public subscriptions—they are a legion!
The most noted mines in our State were developed by the sale of treasury stock. Some of them that were sold at 1c and 2c a share are today selling for $5 to $8 a share!
Here is a question that almost every prospector has been asked, "If your property is such a rich one, why don't you keep it, and work it yourself and get all there is in it?" Many are obliged to work on this principle, because they cannot get help to equip their prospect with the proper machinery. You cannot unwater your prospect with a milk pail, nor produce tonnage with a wheelbarrow, any more than you can cultivate a cornfield with a spade.
So, in the absence of adequate funds, the miner remains poor in the midst of plenty! Anyway, a man is a social being, and when he tries to keep everything himself he has not very much to keep; it is the one that scatters who gathers.
Before anything can be accomplished in mining, a modern plant must be installed, men must be hired and paid for months—perhaps for two years or more, before results are in sight.
And, the more men that can be employed in the first two years, the greater earning capacity is acquired, and as a matter of course, profits are proportionately greater.
CONCLUSION.
Now, we urge you to buy mining stock as the safest investment and because it has the greatest possibility of increasing in value and, consequently, of yielding larger returns than other investments; for we have established beyond question the incontrovertible fact that mining has more dividend-paying concerns in proportion to the capital employed than any other industry or institution in the world.
We have made as clear a statement of the company's condition and prospects as possible. We mean to live up to all we have said, and will do our utmost to see that our prophecies are fulfilled to the letter. No sinecure jobs are provided for officers. No office rent has to be paid, for the incorporators are carrying on the preliminary work in the secretary-treasurer's home until the volume of business makes it absolutely necessary to make a change; every effort is made to keep down office expense, and the same care will be taken to make every cent count at the mines. Our words are not without serious meaning; we don't deal in dissolving views; we are not given to speculation and we shall countenance no gambling. We are proud of our properties. Their acquisition has been the work of long dreary years. We had a vision, but are by no means visionary. Some of our dear friends thought we were dreaming, for it appeared to them that our case was hopeless when silver remained for years from forty-seven cents to fifty-two cents an ounce, and lead cheaper than salt—two or three pounds for a nickel! Vision, courage, perseverance and determination to win have triumphed; and the same qualities will win dividends for our subscribers.
We invite you to come in with us to develop these properties for our mutual profit. We are confident—sure of success, in view of a prospect of a rising market for silver, lead, and zinc, for years to come; lead will be our mainstay—gold simply a by-product.
CAMP FRENCH MINING COMPANY.
P. O. Box 2185, Denver, Colorado.
On our left is the Soda Creek (x) coming down to Clear Creek and Chicago Creek joining Clear Creek. 1—Blue Ribbon tunnel; 2—Waltham Mine and Mill; 3—Jackson Mill; 4—Hudson Mill; 5—Newton Mills; 6—????; 7—Bertha Mill; 8—Alpine Mill; 9—Big Five power house; 10—Golden Circle shaft of Stanley Mine; 11—Cardigan Mine; 12—England Mine; 13—Bullion King Mine; 14 and 15—Bench gravel.
IDAHO SPRINGS
We cannot refrain from saying a word about this beautiful mountain town, for it is the gem of the mountains. In fact, it is the metropolis of the County, although Georgetown is the County seat.
It is the hub, from which radiates fine roads to the lakes and over mountain passes, and therefore, is fast becoming a summer resort. Its rarefied air is a tonic itself!
Metalliferous ore is not all we can offer. We have mineral water also, hot and cold, carrying radium, the nerve energizer, in its bosom!
Are you troubled with rheumatism? Come here and have it boiled out of your system, by the healing springs.
Many come here for this ailment, and if not permanently cured—which is generally the case—are greatly benefitted. A course in its baths, whether troubled with the "rheu," or not, is very exhilarating and beneficial.
Many come here on crutches, practically helpless, and in four or six weeks are able to throw their sticks away! Why? Analysis has shown the presence of radium in these Springs in appreciable and effective quantities.
TRAIL CREEK VALLEY
Looking West
This picture gives a good idea of this narrow valley beyond Freeland and New Era Mines.
The valley comes to an end near Brazil. The range turns around in a horseshoe fashion and joins the Lamartine range opposite.
The range begins at Fall River and separates Clear Creek from Trail Creek. Joe Reynolds Mine is on the north or Clear Creek side. |
Longitudinal Studies of Viral Sequence, Viral Phenotype, and Immunologic Parameters of Human Immunodeficiency Virus Type 1 Infection in Perinatally Infected Twins with Discordant Disease Courses
Cecelia Hutto
*University of Miami School of Medicine, Miami, Florida*
Yi Zhou
*University of Miami School of Medicine, Miami, Florida*
Jun He
*University of Miami School of Medicine, Miami, Florida*
Rebeca Geffin
*University of Miami School of Medicine, Miami, Florida*
Martin Hill
*University of Miami School of Medicine, Miami, Florida*
Hutto, Cecelia; Zhou, Yi; He, Jun; Geffin, Rebeca; Hill, Martin; Scott, Walter; and Wood, Charles, "Longitudinal Studies of Viral Sequence, Viral Phenotype, and Immunologic Parameters of Human Immunodeficiency Virus Type 1 Infection in Perinatally Infected Twins with Discordant Disease Courses" (1996). *Virology Papers*. Paper 160.
http://digitalcommons.unl.edu/virologypub/160
This Article is brought to you for free and open access by the Virology, Nebraska Center for at DigitalCommons@University of Nebraska - Lincoln. It has been accepted for inclusion in Virology Papers by an authorized administrator of DigitalCommons@University of Nebraska - Lincoln.
Authors
Cecelia Hutto, Yi Zhou, Jun He, Rebeca Geffin, Martin Hill, Walter Scott, and Charles Wood
Longitudinal Studies of Viral Sequence, Viral Phenotype, and Immunologic Parameters of Human Immunodeficiency Virus Type 1 Infection in Perinatally Infected Twins with Discordant Disease Courses
CECELIA HUTTO,1 YI ZHOU,2,3 JUN HE,2,3 REBECA GEFFIN,1 MARTIN HILL,1 WALTER SCOTT,4 AND CHARLES WOOD2,3,a
Departments of Pediatrics,1 Microbiology and Immunology,2 Neurology,3 and Biochemistry and Molecular Biology,4 University of Miami School of Medicine, Miami, Florida 33136
Received 17 November 1995/Accepted 26 February 1996
Perinatal human immunodeficiency virus type 1 (HIV-1) infections cause a broad spectrum of clinical disease and are variable in both the age of the patient at onset of serious disease and the progression of the clinical course. Heterozygotic perinatally infected twins with a marked difference in their clinical courses were monitored during the first 2 years of life. Twin B, the second-born twin, developed AIDS by 6 months of age and died at 22 months of age, while twin A remained minimally symptomatic through the first 2 years.
Sequential blood specimens were obtained from the twins in order to characterize the immunologic properties of the children and the phenotypes and genotypes of HIV-1 isolates at various time points. Twin A developed neutralizing antibodies and a high-level antibody-dependent cellular cytotoxicity (ADCC) response, while twin B had no neutralizing antibody and a much lower ADCC response. The virus isolates obtained from the two children at various time points proliferated equally well in peripheral blood mononuclear cells, were non-syncytium inducing, and could not infect established T-cell lines. They differed in their ability to infect primary macrophages. In parallel to the biological studies, the HIV-1 tat and part of the env gene sequences of the longitudinal isolates at four time points were determined. Sequences of virus from both twins at different time points were highly conserved; the viruses evolved at a similar rate until the last analyzed time point, at which there was a dramatic increase in sequence diversity for the sicker child, especially in the tat gene. Our results show that the viruses isolated at different times do not have significant changes in growth properties. The absence or low levels of neutralizing antibodies may correlate with disease progression in the twins.
Perinatal human immunodeficiency virus type 1 (HIV-1) infection presents a unique setting for investigating HIV-1 pathogenesis. The timing of infection can be defined within a specific period, and events can be characterized sequentially. The evolution of disease in perinatally infected children is characterized by two different patterns (2, 20, 40). In the first, infants develop severe clinical illness, including AIDS, within the first few weeks of life and their disease progresses rapidly. This pattern is usually associated with a concomitant steep decline in CD4 cell numbers within the first 12 to 24 months of age. The natural history of disease associated with the second presentation is more variable, but the course of disease is generally more indolent, with the onset of serious disease often delayed for several years, and the CD4 cell numbers decline more slowly. Some children in this group survive into early adolescence.
It is not clear what factors are responsible for this difference in the evolution of perinatally acquired HIV-1 infection. The timing of perinatal infection, in utero or at delivery, as determined by the initial detection of virus at delivery or later, has been proposed as a determinant of disease onset and progression (12). In a study from the Centers for Disease Control and Prevention, infants with positive virus cultures or positive PCR at delivery had a greater risk for early and severe disease than infants in whom virus was not detected until after the first week of age (34). The severity of the maternal disease stage during pregnancy has also been directly associated with risk of early disease in infants (3, 40). In a study by investigators in France, infants born to mothers with AIDS had a substantially greater risk of having AIDS by 1 year of age than infants born to asymptomatic mothers (2). Whether this risk is related to virus burden, virus phenotype, or other factors has not been determined.
In adults, the appearance of virus with the ability to form syncytia or replicate to high titers has been associated with disease progression (9, 39). The role of viral phenotype in the evolution of disease in children has not been described. Other viral factors, such as genetic variation, could also be important in disease progression (25). Studies evaluating the genetic diversity of virus in perinatally infected infants have found a homogeneous sequence population in infants at birth (30, 42); others have suggested that multiple maternal genotypes are present in infected children (26). Very little, however, is known about the correlation between genetic variability and disease progression over time in adults or children.
Multiple factors are likely to alter the course of disease in children. The occurrence of divergent courses of disease in perinatally infected children provides the opportunity for identifying viral or immunologic factors that influence the natural history of perinatally acquired HIV-1 infection and disease. Perinatally infected heterozygotic twins with divergent clinical courses were monitored until almost 24 months of age. Sequential specimens from the twins were studied retrospectively to characterize the immune response and viral phenotype and genotype for each twin over time. Our results suggest that the absence of neutralizing antibodies can distinguish infants with differing disease courses.
MATERIALS AND METHODS
Cell cultures. Healthy donor peripheral blood mononuclear cells (PBMC) were obtained from leukopacs (American Red Cross). The cells were purified on Lymphocyte Separation Medium (Organon Teknika, Durham, N.C.) and propagated in RPMI 1640 medium containing 10% heat-inactivated fetal bovine serum (FBS), 2 mM L-glutamine, 50 μg of penicillin per ml, and 50 μg of streptomycin per ml. These PBMC were used for coculture with patient PBMC or for infection with culture supernatant containing HIV-1. Primary monocytes were obtained from gradient-purified PBMC by the plastic adherence technique (7). The monocytes were cultured for 7 to 10 days in RPMI 1640 medium containing 10% FBS and 10 ng of granulocyte-macrophage colony-stimulating factor (GIBCO) per ml to allow differentiation into macrophages.
Virus culture, phenotype, and tropism determination. HIV-1 was isolated by standard coculture procedures (18). In brief, patient PBMC were cocultivated with an equal number of phytohemagglutinin-stimulated PBMC from an HIV-1-seronegative donor (final inoculum, 2 × 10^6 PBMC per ml). Half of the culture medium was replaced with fresh medium every week, and fresh uninfected phytohemagglutinin-stimulated PBMC were added to the culture once a week. Virus production was monitored by measuring HIV-1 p24 antigen levels with a commercial enzyme-linked immunosorbent assay (Coulter Corporation, Hialeah, Fla.). Virus stocks were prepared when p24 antigen exceeded 10 ng/ml, and titers were determined by limiting dilution to obtain the 50% tissue culture infectious doses (TCID_{50}) (see below).
The syncytium-inducing (SI) phenotype was determined by infecting MT-2 cells in duplicate microtiter wells of a 96-well flat-bottomed plastic tissue culture plate with 50 μl of fresh virus stock (21). Virus was scored as SI if three to five syncytia were observed per well within a 14-day period after infection.
To determine viral tropism for various host cell types, human PBMC, primary macrophages, and the T-cell lines CEM and H9 were infected with the virus stocks. Briefly, cells were incubated with a normalized virus inoculum (0.1 mg of HIV-1 p24 antigen per 5 × 10^6 host cells; incubation for 2 h at 37°C), washed, and suspended in fresh culture medium. Supernatants were removed at various times postinfection and assayed for HIV-1 p24 antigen.
Virus titration and TCID_{50} calculation. Stocks of virus obtained from the children at the different time points were thawed and serially diluted fivefold, starting with a dilution of 1:5. One hundred microliters of each dilution was incubated in duplicate with 0.5 ml of pooled PBMC containing 10^6 cells in eight-tube cluster strips (Costar, Cambridge, Mass.) for 1 h at 37°C in a 5% CO₂ incubator. The cells were washed twice, resuspended to 0.2 ml with complete medium, and transferred to a new tube containing 0.8 ml of complete medium.
Four days after infection, 0.5 ml of culture supernatant from each well was replaced with 0.5 ml of fresh medium. At days 7 and 10 postinfection, 0.5 ml of supernatant in each well was replaced with 0.5 ml of fresh PBMC suspension. Virus titers were determined at day 14 postinfection by measuring the HIV-1 p24 antigen level. One TCID_{50} was defined as the virus dilution required to infect half of the replicate cultures.
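As a numerical illustration of the endpoint titration described above, the sketch below estimates a TCID_{50} from fivefold serial dilutions by Reed-Muench interpolation. This is a hypothetical Python sketch: the fivefold dilution series matches the protocol, but the well counts and scores are invented, and the paper does not state which endpoint formula was used.

```python
import math

# Hypothetical Reed-Muench TCID50 estimate for fivefold serial dilutions
# scored p24-positive or -negative; all counts below are illustrative.
def tcid50_reed_muench(dilutions, positive, total):
    """dilutions: reciprocal dilution factors; positive/total: wells per dilution."""
    cum_pos, acc = [], 0
    for p in reversed(positive):       # infected wells, summed from the bottom up
        acc += p
        cum_pos.insert(0, acc)
    cum_neg, acc = [], 0
    for p, t in zip(positive, total):  # uninfected wells, summed from the top down
        acc += t - p
        cum_neg.append(acc)
    rates = [cp / (cp + cn) for cp, cn in zip(cum_pos, cum_neg)]
    for i in range(len(rates) - 1):    # find the dilutions bracketing 50% infection
        if rates[i] >= 0.5 > rates[i + 1]:
            pd = (rates[i] - 0.5) / (rates[i] - rates[i + 1])
            log_t = math.log10(dilutions[i]) + pd * (
                math.log10(dilutions[i + 1]) - math.log10(dilutions[i]))
            return 10 ** log_t         # reciprocal of the 50% endpoint dilution
    raise ValueError("50% endpoint not bracketed by the dilution series")

print(round(tcid50_reed_muench([5, 25, 125, 625], [4, 3, 1, 0], [4, 4, 4, 4])))  # → 56
```

With these invented counts the titer comes out as roughly a 1:56 endpoint dilution, i.e., one TCID_{50} per inoculum volume at that dilution.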
Neutralization of autologous virus. Plasma specimens were heat inactivated at 56°C for 30 min and serially diluted fivefold in complete medium, starting with a 1:5 dilution. Fifty microliters of each plasma dilution was incubated in duplicate with 0.2 ml of 100, 50, 25, and 1.25 TCID_{50} units of autologous virus cultured from the same time point, at 37°C for 1 h in a CO₂ incubator. Two hundred fifty microliters of pooled PBMC was added to each vial, and the mixture was incubated overnight. The cells were then washed twice, resuspended in 0.2 ml of complete medium, and transferred to a 48-well tissue culture plate containing 0.8 ml of medium per well. Four and seven days later, supernatant was removed for p24 antigen determination; medium was replaced on day 4, and on day 7 the culture was terminated. Samples with p24 levels above the range of the assay were diluted and reassayed. For each virus input per plasma dilution, the percent neutralization was calculated as follows: [1 − (p24 antigen produced in tubes containing serum/p24 antigen produced in tubes with no serum)] × 100.
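The percent-neutralization calculation amounts to the fractional reduction in p24 antigen output relative to a no-serum control. A minimal Python sketch, with invented p24 values for illustration:

```python
# Percent neutralization = fractional reduction in p24 antigen output
# relative to a no-serum control; the p24 values below are illustrative.
def percent_neutralization(p24_with_serum, p24_no_serum):
    return (1 - p24_with_serum / p24_no_serum) * 100

print(percent_neutralization(1.2, 9.6))  # → 87.5
```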
HIV-1 ADCC. Antibody-dependent cellular cytotoxicity (ADCC) antibodies specific for HIV-1 Env were detected by using a chromium release assay. D₂ cells, chemically transformed mouse fibroblasts (gift from Thomas Leist, Cornell Medical Center), infected with a vaccinia virus recombinant, VPE5 (AIDS Reference Reagent Repository), were used as target cells. D₂ cells infected with control vaccinia virus and uninfected D₂ cells were used as assay controls. To prepare the target cells, D₂ cells were grown to semiconfluence in RPMI 1640 medium with glutamine, 10% FBS, and 0.1% gentamicin for 24 h. The cells were harvested and washed, and 2 × 10^6 cells were labelled with 51Cr (Amersham) for 1 h at 37°C. The cells were then infected with the recombinant virus or control vaccinia virus at a multiplicity of infection of 10 PFU per cell in a volume of 1.0 ml of medium. A third population of uninfected control cells was adjusted to a total volume of 1.0 ml and thereafter treated similarly to the infected cells. After incubation at 37°C (5% CO₂) for 2.5 h, the cells were washed three times in RPMI 1640 medium with 10% FBS and resuspended to a final concentration of 10⁴ cells per 50 μl; 10⁴ target cells were added per assay well. PBMC from a healthy uninfected donor previously determined to have high ADCC activity were purified on Lymphocyte Separation Medium, washed, and adjusted to provide the effector-to-target cell ratios indicated below (see Fig. 1). Three different effector-to-target cell ratios were used in each assay to verify the specificity of the ADCC reaction.
Serum was heat inactivated, and 50 μl of serum was added to each well. Target cells were incubated with sera at 4°C for 45 min. A single concentration of serum (1:200, final dilution) was tested in each assay. In addition to the test sera, each assay also included vaccinia virus-positive sera and vaccinia virus-negative and HIV-1-seronegative control sera. Effector and target cells were tested in duplicate in 96-well U-bottom microtiter plates. Spontaneous release and maximum release were each tested in six wells, and a mean value was calculated for each. Serum was first added to the wells, and then 50 μl of target cells was added, leaving the remaining six wells empty. The plates were incubated for 4 h at 37°C in 5% CO₂ and centrifuged at 400 × g for 5 min. Fifty microliters of supernatant from each well was transferred to a counting tube for measurement of 51Cr release. For each serum sample tested, percent lysis was calculated as follows: [(average cpm of sample − average cpm of spontaneous release)/(average cpm of maximum release − average cpm of spontaneous release)] × 100.
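The chromium-release lysis calculation normalizes sample counts to the window between spontaneous and maximum release. A minimal Python sketch; the cpm values are illustrative, not data from the assay:

```python
# Percent specific lysis from a 51Cr-release assay: sample counts scaled to
# the window between spontaneous and maximum release. Values are illustrative.
def percent_lysis(sample_cpm, spontaneous_cpm, maximum_cpm):
    return (sample_cpm - spontaneous_cpm) / (maximum_cpm - spontaneous_cpm) * 100

print(round(percent_lysis(850.0, 400.0, 1900.0), 1))  # → 30.0
```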
To determine HIV-1-specific cytotoxicity, the percent lysis obtained with the vaccinia virus-seropositive or the vaccinia virus- and HIV-1-seronegative control sera (whichever was greater) was subtracted from the percent lysis obtained with the test sera.
DNA preparation, PCR analyses, and cloning. High-molecular-weight DNA was obtained from infected cells by a standard proteinase K digestion method (1). Briefly, 10^6 to 3 × 10^6 infected cells were suspended in 0.3 ml of digestion buffer (100 mM NaCl, 10 mM Tris-HCl [pH 8.0], 25 mM EDTA [pH 8.0], 0.5% sodium deoxycholate, and 10 μg of proteinase K per ml) and incubated at 50°C overnight. The digested samples were then extracted with phenol-chloroform-isoamyl alcohol (25:24:1) and dialyzed against TE buffer (10 mM Tris-HCl, 1 mM EDTA [pH 8.0]). The final DNA concentration was determined by UV absorbance. The HIV-1 copy number of each DNA preparation from the different time points was determined by semiquantitative PCR by methods described previously (43). DNA samples containing about 500 HIV-1 target molecules per sample were used for PCR.
All the primers for PCR were designed by using the HIV-1 conserved sequences published in the Human Retrovirus and AIDS Sequence Data Base (31). The positions of each primer correspond to the positions in the HIV-1 HXB2 variant. The primers flanking the V3 loop and CD4 binding region are as follows: HENV-1, 5'-GTATAGTATCCAACTGGTTGATAATAGTGCGCATG-3' (positions 6986 to 7015; an EcoRI site is shown underlined); and HENV-2, 5'-TATAAGTACACTTCTTCCAAATGTGTCCAT-3' (positions 7676 to 7705; a BamHI site is shown underlined). HENV-3, 5'-TATGATGTTTTTATTGCTGAGGAGGATGTGTA-3' (positions 7487 to 7563), was used as a primer for DNA sequence analyses. TAT-5 (5'-TTTGAAGATATGAGTACCAACA-3'; positions 5771 to 5793) and TAT-6 (5'-GGCCTGAGGATGATGATGAT-3'; positions 6132 to 6100) were used to amplify and clone the viral tat exon 1.
PCR mixtures consisted of 50 mM KCl, 10 mM Tris-HCl (pH 9.0), 0.1% Triton X-100, 1.5 mM MgCl₂, 200 μM dNTPs, 1 μg of high-molecular-weight DNA, 0.25 μM (each) phosphorylated primers, and 2.5 units of Taq polymerase. The reactions were carried out in a total volume of 100 μl. The first cycle was 94°C for 3 min, 50°C for 1 min 30 s, and 72°C for 2 min and was followed by 35 cycles of 94°C for 45 s, 55°C for 1 min, and 72°C for 2 min. The last cycle was 94°C for 45 s followed by a final extension at 72°C for 10 min. The PCR products were run through a 1% agarose gel and purified with a Microcon device. The PCR DNA fragments were ligated into the pGEM T vector according to the procedures recommended by the manufacturer (Promega, Madison, Wis.).
DNA sequencing and analysis. Sequencing data were obtained by using a commercial sequencing kit (U.S. Biochemicals, Columbus, Ohio) based on the dideoxynucleotide chain termination method (35). Sequence analyses were carried out with the Genetics Computer Group, Inc. (Madison, Wis.), sequence analysis package. Sequence differences were also identified manually. Sequence gaps were filled after amino acid correction to maintain translational integrity. Nucleotide gaps and insertions were commonly found in the V4 region, even in different sequences from the same individual. Nucleotide distances were determined by using the CLUSTAL W program (DNASTAR, Madison, Wis.). Alignment gaps were not counted as differences.
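A pairwise distance in which alignment gaps are not counted as differences can be sketched as a simple column-by-column comparison. The aligned sequences below are hypothetical; the study's distances came from the CLUSTAL alignment itself:

```python
# Pairwise nucleotide distance over an alignment, skipping any column in
# which either sequence has a gap ('-'); the sequences are invented examples.
def nucleotide_distance(a, b):
    pairs = [(x, y) for x, y in zip(a, b) if x != '-' and y != '-']
    diffs = sum(1 for x, y in pairs if x != y)
    return diffs / len(pairs)

s1 = "ATGCA-TGCA"
s2 = "ATGGACTGCA"
print(round(nucleotide_distance(s1, s2), 3))  # → 0.111
```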
Phylogenetic analyses were performed by using the program PHYLIP, version 3.5c (15). Five hundred bootstrap sequence sets were generated, parsimony analyses were performed with global branch swapping, and a consensus tree was constructed. A time scale was added to the trees by using the dates of sample collection (19).
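The bootstrap step generates each replicate data set by resampling alignment columns with replacement. A toy Python sketch of that resampling step, under the assumption stated in the comments (the two-clone alignment is invented; in the actual workflow PHYLIP performs this internally):

```python
import random

# Toy sketch of phylogenetic bootstrapping: each replicate alignment is built
# by sampling columns of the original alignment with replacement. The
# two-sequence alignment below is purely illustrative.
def bootstrap_alignment(alignment, rng):
    n_cols = len(next(iter(alignment.values())))
    cols = [rng.randrange(n_cols) for _ in range(n_cols)]  # sample with replacement
    return {name: "".join(seq[c] for c in cols) for name, seq in alignment.items()}

aln = {"cloneA": "ATGCATGC", "cloneB": "ATGGATGC"}
rng = random.Random(0)  # fixed seed for reproducibility
replicates = [bootstrap_alignment(aln, rng) for _ in range(500)]
print(len(replicates), len(replicates[0]["cloneA"]))  # → 500 8
```

Each replicate is then fed to the tree-building step (parsimony here), and the consensus over all replicates gives the bootstrap support values.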
Nucleotide sequence accession numbers. The GenBank accession numbers for the env sequences are U47562 to U47588, and the numbers for the tat sequences are U47589 to U47613.
### TABLE 1. Clinical histories of the twins

| Date (mo/yr) | Age (mo) | Finding(s) for twin A | Finding(s) for twin B | CD4 T cells/mm³, twin A | CD4 T cells/mm³, twin B |
|---|---|---|---|---|---|
| 04/92<sup>a</sup> | 1.5 | Normal exam | Normal exam | 1,882 | 699 |
| 06/92<sup>a</sup> | 3 | LAD | Hepatomegaly, LAD | ND<sup>b</sup> | ND |
| 10/92 | 6 | HSM | Hepatomegaly, LAD, esophagitis, AIDS | 1,774 | 94 |
| 01/93<sup>c</sup> | 10 | LAD, HSM | LAD, hepatomegaly, developmental delay | ND | ND |
| 04/93<sup>d</sup> | 13.5 | | | | |
| 08/93<sup>d</sup> | 16 | Recurrent otitis media, LAD, HSM | Pneumonia and sepsis, hepatomegaly, developmental delay | 1,347 | 64 |
| 12/93 | 20 | Not seen | Pneumonia and sepsis, esophagitis, herpes simplex stomatitis, developmental delay | ND | ND |
| 02/94 | 22 | Chronic otitis media, LIP, LAD, HSM | Acute respiratory failure, death | ND | ND |
| 05/94<sup>d</sup> | 25 | Chronic otitis media, LAD, HSM, LIP | | 1,556 | |

<sup>a</sup> Both twins are female.
<sup>b</sup> LAD, lymphadenopathy; HSM, hepatosplenomegaly; LIP, lymphoid interstitial pneumonitis.
<sup>c</sup> HIV-1 genetic analyses were performed.
<sup>d</sup> HIV-1 cell tropism analyses were performed.
<sup>e</sup> ND, not determined.
### RESULTS
**Clinical disease course.** Twins A and B were born to a 29-year-old multigravida mother who received no prenatal care and whose own infection was not diagnosed until after the birth of the twins. The mother had no history of drug use or previous blood transfusions. The twins have several siblings, but none were HIV seropositive. The infants were delivered by cesarean section because of a failure of labor to progress. Twin A, the healthier twin, was born first weighing 2.8 kg, and twin B weighed 3.0 kg. Because of maternal endometritis and a positive urine antigen test for group B streptococcus in both infants, both received antibiotics for 10 days. Their nursery course was not remarkable otherwise.
The infants were first tested for HIV-1 at 6 weeks of age. Virus cultures from both infants were positive. The infants, however, were asymptomatic at this time, with normal physical examinations (Table 1). At 6 weeks (1.5 months), the CD4 count was already significantly lower for twin B (699/mm³) than for twin A (1,882/mm³). Twin B suffered from severe developmental delay, esophagitis at 6 months of age, and encephalopathy. She had numerous infections and was hospitalized six times before her death at 22 months of age. In contrast, twin A had minimal clinical symptoms (lymphadenopathy, hepatomegaly, chronic otitis media, and radiographic evidence of lymphoid interstitial pneumonitis) during the first 24 months of life, and her CD4 counts remained >1,500/mm³.
**ADCC.** Lysis of cells expressing HIV-1 gp120 by ADCC was measured for both twins, with serum samples obtained at 1.5, 10, 13.5, and 20 months of age (Fig. 1). At 1.5 months, the degree of lysis was very similar for both infants (10% or less). At 10 months, the level of lysis was distinctly different between the twins and remained different in all sequential serum specimens tested. Sera from twin A, the healthier twin, had >40% lysis at 10 months, but lysis in twin B was still only about 10%. At 13.5 and 20 months, lysis of cells by serum from twin B was almost 20%, but this level was at least one-third less than that measured for twin A.
**Neutralizing antibodies.** Autologous neutralizing-antibody reactivity in the plasma of the twins was tested. For twin A, plasma samples were available at ages 1.5, 3, and 25 months. For twin B, autologous neutralization was tested for virus-plasma pairs at 1.5, 3, 10, and 13.5 months of age. As shown in Fig. 2, twin A had no neutralizing antibodies at 1.5 months, but by 3 months of age neutralization of the autologous isolate was already evident. The levels of neutralizing-antibody reactivity increased with time, and at 25 months of age, the level of neutralizing-antibody reactivity was higher than in the previous plasma samples. Twin B did not develop any measurable neutralizing antibodies in any of the sequential plasma samples tested, suggesting that the absence of neutralizing antibodies may correlate with disease progression.
**Viral growth properties and host cell tropism.** Studies were carried out to examine the growth properties of the different viral isolates from the twins at various times. Five isolates from each child were studied. For twin A, isolates taken at 1.5, 3, 10, 13.5, and 25 months were studied. For the sicker twin (B), isolates taken at 1.5, 3, 10, 13.5, and 16 months were studied. The viruses, isolated by coculture of patient PBMC, were further characterized for their growth characteristics. The viruses isolated from the two children at various times grew equally well in PBMC (Table 2). They were all of the non-syncytium-inducing (NSI) phenotype, and none of the isolates grew in the CEM or H9 T-cell lines. Even the last available isolate obtained from twin B, at 16 months, prior to death, displayed no SI phenotype and could not grow in H9 cells. All the isolates from the five time points from the healthier twin A grew in macrophages but less well than the positive-control macrophage-tropic HIV-1 isolate 128A. In contrast, for the sicker twin (B), viruses from the earlier time points grew very well in macrophages, similar to the positive control. However, their ability to grow in macrophages decreased with time, and by 16 months, they also displayed only minimal macrophage tropism. These differences in tropism are unlikely to be due to differences in input virus titer, because the p24 titers and TCID<sub>50</sub> of the viruses were equalized before infection. In addition, subsequent passages of virus from each time point for both children were tested, and their macrophage tropism was confirmed (data not shown).
**Molecular cloning and sequence analyses.** In parallel with our studies of the biological properties of the HIV-1 isolates, the genetic sequences of the longitudinal viral isolates were also determined. The env gene from each isolate at four time points for each twin was cloned and sequenced; the region analyzed included the V3, V4, and CD4 binding domains (Fig. 3). In addition, the tat sequences of the longitudinal isolates were also determined in order to obtain an estimate of the rate of evolution of the virus in the absence of the strong selection pressure reflected by changes in the env gene (Fig. 4). Five env clones and at least five tat clones from each isolate at each time point (a total of 40 env clones and 42 tat clones) were sequenced and aligned. Because of gaps and insertions, the nucleotide length of the env sequences ranged from 580 to 670 bases. There were 27 different env forms among the 40 clones. One env sequence had a mutation which resulted in a frameshift. The tat sequences were uniform in length (219 bases). There were 21 different tat forms among the 42 clones sequenced. The nucleotide sequences from different time points and from different twins were highly conserved. Phylogenetic analysis comparing the env V3, V4, and CD4 nucleotide sequences shows that even though the sequences from the twins are highly conserved, sequences from each twin seem to cluster more closely to each other and are distinct from the clade B consensus sequence (data not shown). The most divergent env sequences from the twins still had about 97% identity. The intraindividual differences in nucleotide sequence between isolates from different time points are shown in Table 3. The env gene from each of the twins had approximately the same rate of diversification from the first to the last time points ($1.9 \times 10^{-2}$ nonsilent changes per nucleotide per year for twin B and $1.7 \times 10^{-2}$ for twin A). The silent mutation rates from the first to the last time points are $7 \times 10^{-4}$ changes per nucleotide per year for twin B and $4 \times 10^{-3}$ for twin A. In contrast, the tat gene for the sicker twin (B) had 6 times more nonsilent changes than that for twin A ($1.8 \times 10^{-2}$ changes per nucleotide per year versus $3.2 \times 10^{-3}$) and 4.5 times more silent changes than that for twin A ($7.3 \times 10^{-3}$ changes per nucleotide per year versus $1.6 \times 10^{-3}$), with most of the differences occurring at the last time point.
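The per-year rates quoted above are simply the observed changes normalized by sequence length and elapsed time. A minimal Python sketch with invented change counts (the paper reports only the resulting rates, not the raw counts):

```python
# Substitution rate per nucleotide per year: observed changes divided by
# (number of sites x years elapsed). The counts below are illustrative only.
def rate_per_site_per_year(n_changes, n_sites, years):
    return n_changes / (n_sites * years)

# e.g. 25 hypothetical nonsilent changes over ~650 env sites in 2 years:
print(f"{rate_per_site_per_year(25, 650, 2.0):.1e}")  # → 1.9e-02
```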
When the V3 sequences were aligned, all the isolates displayed sequences that were found in typical macrophage-tropic HIV-1 isolates. There were four different forms of V3 sequences among the sequenced clones. They are represented by clones A1-E1, A3-E1, A10-E1, and B13-E5 (Figs. 3 and 5). No significant differences in the V3 region were observed between isolates that displayed macrophage tropism (twin B) and those that did not (twin A) (Fig. 3). Our sequences nevertheless are more similar to those of cloned HIV-1 strains that display macrophage tropism (SF128A, JRFL, BAL1, and SF162) than to those of strains that display T-cell tropism (SF2, IIIB, and NL43). Several amino acids (positions 13, 23, and 27) that were found to be conserved in these different macrophage-tropic isolates were also found in our clones (Fig. 5). It has been suggested that the overall charge of the V3 loop at pH 7 correlates with the ability of HIV-1 to induce syncytia (14). Strains with higher charges can induce syncytia, and those with lower charges cannot. Interestingly, the overall charge of the V3 loop of our clones from both twins was found to be around 3. These low net charges correlate well with the NSI phenotype of the isolates (Fig. 5).
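The overall V3 charge invoked here (the figure legends credit the GCG ISOELECTRIC program) can be approximated at pH 7 by counting basic minus acidic residues. A rough sketch, run on the clade B consensus V3 loop as an illustrative input rather than on one of the twins' clones:

```python
def v3_net_charge(seq: str) -> int:
    """Approximate net charge at pH 7: +1 per Lys/Arg, -1 per Asp/Glu.
    Histidine is mostly uncharged at pH 7 and is ignored in this rough count."""
    return sum(seq.count(aa) for aa in "KR") - sum(seq.count(aa) for aa in "DE")

# Clade B consensus V3 loop (illustrative input, not a twin clone):
v3 = "CTRPNNNTRKSIHIGPGRAFYTTGEIIGDIRQAHC"
print(v3_net_charge(v3))  # 3, in the low "NSI-range" described in the text
```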
To further determine the relationship between the HIV-1 clones from different time points, the sequences were also analyzed by neighbor-joining and parsimony analyses. An evolutionary tree was constructed from the parsimony analysis of these sequences (Fig. 6). This tree shows the heritage of progeny virus forms derived from earlier forms and the discontinuance of other branches. The divergence among viral forms in the tree branches is represented by their vertical spread. For the env gene sequences for twin A, the first time point began with two distinct viral forms. However, only one viral form was carried through to the last analyzed time point. Interestingly, for twin B, the sicker child, greater diversity of the viruses was observed at the fourth time point. Especially for the tat gene, there was a dramatic change between time points 3 and 4, and the viruses diversified and formed a distinct group. Separate phylogenetic analyses using the neighbor-joining method of linking maximum-likelihood distances confirm these findings for the env and tat genes (data not shown).
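The neighbor-joining analyses mentioned above hinge on a pairwise selection criterion: at each step the pair (i, j) minimizing Q(i, j) = (n − 2)·d(i, j) − Σk d(i, k) − Σk d(j, k) is joined. A toy sketch of that first selection step (the four-taxon distance matrix is invented for illustration, not derived from the twins' sequences):

```python
def nj_pair(d):
    """Return the pair of taxa joined first under the neighbor-joining
    Q criterion: Q(i, j) = (n - 2) * d[i][j] - row_sum(i) - row_sum(j)."""
    n = len(d)
    row = [sum(d[i]) for i in range(n)]
    best, pair = None, None
    for i in range(n):
        for j in range(i + 1, n):
            q = (n - 2) * d[i][j] - row[i] - row[j]
            if best is None or q < best:
                best, pair = q, (i, j)
    return pair

# Toy distances: taxa 0 and 1 are close, as are taxa 2 and 3.
d = [[0, 2, 9, 9],
     [2, 0, 9, 9],
     [9, 9, 0, 2],
     [9, 9, 2, 0]]
print(nj_pair(d))  # (0, 1)
```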
**DISCUSSION**
In these perinatally infected twins, clinical courses that were distinctly different were noted within a few months after birth. The different patterns of disease that occurred in these infants were characteristic of the bimodal patterns of clinical disease that have been described previously for children with perinatal HIV-1 infection (20). One of the children, who developed an AIDS-defining disease by 6 months of age that was associated with a rapid decline in CD4 cell number, had a pattern of early-onset disease with rapid progression. In comparison, her twin was relatively healthy during the first 24 months of life, with CD4 cell counts that were normal for her age. Her course was typical of children having the second pattern of disease, which is characterized by a more indolent disease process and a comparatively longer survival.
Various factors have been proposed to explain differences in the rate of disease progression between perinatally infected children, including the rate of disease progression in the mother at the time of infection (2, 40), the ability of the infant to mount an immune response following infection (5), and biological properties of the infecting virus (11, 39). In addition, it has been shown that the timing of HIV-1 transmission from mother to infant (in utero, at birth, or postpartum) correlates with the rate of disease progression (12). Whatever the underlying cause of the difference, the twins in this study provided the opportunity to compare specific virologic and immunologic factors that may play a role in the differing patterns of disease that occur in children. Our longitudinal and parallel characterization of the immunologic responses, viral genetic diversity, and biologic properties suggests that both immunologic and viral factors distinguish the two disease courses.
Since this was a retrospective study, not all relevant specimens were available. Specimens from the mother and infants at birth and from the twin with early disease just prior to death would have provided more-detailed information about the role of virus phenotypes and genetic diversity in evolution of the disease. In addition, the timing of perinatal transmission and the role of viral burden in the two infants at birth could not be examined, because birth specimens were not available. For most time points studied, only virus isolates from cells that had undergone 14 days in culture were available for study.
Our studies that compared the development of autologous neutralizing antibodies and ADCC found clear differences between the two infants. Using contemporaneous virus isolates and plasma specimens, we showed that neither infant had antibodies capable of neutralizing autologous virus at 6 weeks of age. By 3 months, however, the healthy twin had developed neutralizing antibodies, while these antibodies were not present in serum from the twin with early onset of disease at 3 months of age or in any subsequent sample studied. The lack of neutralizing antibodies at 6 weeks of age in both twins suggests that maternal neutralizing antibodies to the transmitted virus were absent at birth and therefore were absent in the mother. This is in agreement with recently published studies correlating the absence of maternal neutralizing antibodies and a higher risk of infection of the infant (23, 36). De novo production of antibodies by 3 months of age in the healthier twin is consistent with reports indicating that new antibodies to HIV-1 are detectable in some children between 3 and 6 months of age (17, 33). The presence of an autologous neutralizing antibody in infant specimens collected between 0 and 3 months has been investigated previously (23). In that study, no neutralizing antibodies were found in any of four children tested.

**TABLE 2. Host cell tropism of various viral isolates from the twins<sup>a</sup>**

| Isolate<sup>c</sup> | PBMC | Macrophages | CEM | H9 |
|---------------------|------|-------------|-----|----|
| 128A | 740,000 | 1,582 | —<sup>b</sup> | — |
| SF2 | 2,350 | 19 | — | 48,570 |
| A1 | 726,400 | 140 | 11 | 1 |
| A3 | 817,500 | 160 | 21 | 0 |
| A10 | 375,300 | 319 | 5 | 2 |
| A13 | 489,500 | 278 | 2 | 0 |
| A25 | — | 154 | — | 2 |
| B1 | 731,500 | 1,764 | 4 | 0 |
| B3 | 781,600 | 2,022 | 3 | 0 |
| B10 | 528,700 | 3,521 | 9 | 0 |
| B13 | 362,700 | 839 | 4 | 2 |
| B16 | — | 142 | — | 0 |

<sup>a</sup> p24 antigen production in the indicated host cells, given in picograms of p24 antigen per ml measured after 14 days in culture.
<sup>b</sup> —, not tested.
<sup>c</sup> A1, viral isolate from twin A at 1 month of age; other isolates are designated similarly.
FIG. 3. Alignment of the envelope gene amino acid sequences from the V3 to CD4 binding domains of the various HIV-1 isolates from the twins. Five clones from each twin at each time point were sequenced. A1-E1, envelope clone 1 from twin A isolate at 1 month of age; A3-E1, envelope clone 1 from twin A isolate at 3 months of age; other clones are designated similarly. The consensus sequence (Consen.) of the V3 to CD4 binding domain region is listed at the top. Amino acids identical to the consensus sequence (dashes), those that are different (lowercase), and gaps (dots) are indicated.
FIG. 4. Alignment of the tat gene product amino acid sequences from the various HIV-1 isolates from the twins. Five clones from each twin at each time point were sequenced. A1-T1, tat clone 1 from twin A isolate at 1 month; other clones are designated similarly. See the legend to Fig. 3 for details. Consen., consensus sequence; *, stop codon.
**TABLE 3. Nucleotide distances of twin A and B HIV-1 isolates at different times**

| Gene segment and isolate<sup>a</sup> | Internal variation (%)<sup>b</sup> | % Change<sup>c</sup> |
|--------------------------------------|------------------------------------|----------------------|
| V3-V4 | | |
| A1 | 0.64 | |
| A3 | 0.53 | 2.13 |
| A10 | 0.40 | 1.96 |
| A13 | 0 | 2.09 |
| B1 | 1.34 | |
| B3 | 0.50 | 2.51 |
| B10 | 1.51 | 2.22 |
| B13 | 1.21 | 2.69 |
| Tat | | |
| A1 | 0.55 | |
| A3 | 1.19 | 1.10 |
| A10 | 0.55 | 0.55 |
| A13 | 0.50 | 0.46 |
| B1 | 0 | |
| B3 | 0.18 | 0.09 |
| B10 | 0.96 | 0.55 |
| B13 | 1.74 | 2.65 |

<sup>a</sup> A1, viral isolate from twin A at 1 month of age; other isolates are designated similarly.
<sup>b</sup> Average percent nucleotide distance between clones from the same isolate at each time point.
<sup>c</sup> Percent nucleotide distance from the sequence obtained at the first time point for the same twin.
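The internal-variation column of Table 3 is an average pairwise nucleotide distance among clones from one isolate. A minimal sketch of that computation on toy 10-base "clones" (hypothetical sequences, not the twins' data):

```python
from itertools import combinations

def p_distance(a: str, b: str) -> float:
    """Proportion of sites that differ between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def internal_variation(clones):
    """Average pairwise p-distance among clones, as a percentage."""
    pairs = list(combinations(clones, 2))
    return 100 * sum(p_distance(a, b) for a, b in pairs) / len(pairs)

# Hypothetical clones differing at one or two of ten sites:
clones = ["ACGTACGTAC", "ACGTACGTAT", "ACGAACGTAC"]
print(round(internal_variation(clones), 1))  # 13.3
```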
Thus, this is the first report documenting the development over time of an autologous neutralizing-antibody response in children in correlation with disease progression. Even though no autologous neutralizing-antibody response was found in the sicker twin as determined by using contemporaneous plasma specimens, it is possible that a delayed response to the virus isolates occurred later. Because only a limited amount of plasma was available, this question could not be further explored. In addition, the lack of development of autologous neutralizing antibodies in the sicker twin is not a reflection of an absolute lack of de novo synthesis of antibodies, as indicated by the detection of ADCC in this twin by 10 months of age.
The ability to form syncytia has been shown to correlate with disease progression in adults (9, 39). Very little information is known about this factor in children. Recent studies have suggested that infants with viral isolates classified as rapid or high-level virus producers and having the SI phenotype are more likely to have rapid disease progression, whereas children with slow or low-level virus production and NSI viral isolates are likely to have a more indolent disease course (11, 38). With our progressor twin B, even at the last time point characterized, the virus was NSI, displayed a low level of cytopathology, and was unable to grow in T-cell lines; however, the last viral isolate from twin B (fifth time point) seemed to be losing its macrophage tropism. It cannot be ruled out that, prior to the death of twin B, her virus may have acquired T-cell tropism, SI, and a rapid- or high-level-producer phenotype.
Evolution of viral phenotypic properties can be attributed to changes in HIV-1 env. The variable domains of the gp120 env gene contain important determinants for viral host cell tropism, for cytopathology in culture, and for host immune
responses to HIV-1 infection (6, 16). In particular, the well-characterized V3 region of env has been found to determine macrophage tropism (8, 16, 27, 32, 37). All of the V3 sequences obtained from the twins have extensive identity with sequences of macrophage-tropic HIV-1 isolates from adults. However, some isolates from twins A and B that display differences in macrophage tropism have identical V3 sequences (for example, A10 vs. B1 [Fig. 3]), suggesting that other viral sequences besides V3 may contribute to macrophage tropism (4, 15, 41). The net charge of the V3 region has also been correlated with the SI or NSI phenotype (14). All our sequenced isolates display a relatively low overall V3 loop charge, consistent with the observation that a low overall charge in the V3 loop correlates with the NSI phenotype (14). However, it is also possible that a few SI virus particles existed at the different time points but were too rare to produce an SI phenotype in our assay, and the cloning of viral sequences may have missed these minor populations so that their sequences are not reflected in our analyses.
Studies of natural HIV-1 transmission have shown that the recipients initially have a relatively homogeneous HIV-1 population, while the transmitters have a mixture of different viruses with a spectrum of genotypes and phenotypes (29). At later times after infection, severalfold-greater diversity was noted than at earlier stages of infection (28), suggesting that sequence diversity may correlate with disease progression (22). Our results showed that diversity among viral forms increased to a greater extent at the final time point for the twin who progressed to AIDS than in the twin who did not. However, the degree of divergence as represented by nucleotide distances in our study should be viewed with some skepticism: it is based on a small sample of cloned viral isolates which may not represent what is significant to pathogenesis in vivo. Since the initial viral load cannot be determined, it is impossible to distinguish whether the greater viral sequence diversity occurred in the sicker child because immune suppression allowed the emergence of genetic variants or whether the increased heterogeneity was generated by a high initial viral load in the sicker child's PBMC, leading to detection of additional minor viral populations. In contrast, a study by Delwart et al. (10) found a high degree of quasispecies complexity in the env gene of virus from asymptomatic adults with a strong immune response. These differences could be due to the above-mentioned limitations of sampling or could be the result of dissimilar patient populations, since our study contrasted a fast disease progressor with a slower progressor.

**FIG. 5.** Alignment of the V3 sequences of the HIV-1 isolates from the twins. Amino acids identical to the consensus sequence (dashes), those that are different (lowercase), and gaps (dots) are indicated. Amino acids that were suggested to play important roles in determining HIV-1 macrophage tropism (8) are shaded. HIV-1 strains SF2, HTLV-IIIB, and NL43 are T-cell tropic (T). HIV-1 strains SF128A, BAL1, JRFL, and SF162 are macrophage tropic (M). The overall charge of each V3 domain was calculated at pH 7 by using the ISOELECTRIC program in the Genetics Computer Group sequence analysis package.

**FIG. 6.** Evolution over time of the viral env and tat gene sequences among the sequential isolates from the twins. Viral forms are designated by sequence number (Fig. 3). Evolutionary heritage (solid lines), with the number of nucleotide changes from the earlier form noted above the terminus, evolutionary heritage with no nucleotide changes (dotted lines), and additional viral forms identical to the one designated (numbers in parentheses) are shown. The vertical spread of forms at a given time point represents the internal diversity among forms according to the scale of nucleotide differences provided at the extreme right. Time points correspond to successive sampling times for genetic analysis, with time 0 being a hypothetical time before which no sequence divergence had occurred.
The possibility of PCR contamination in our sequence analyses cannot be eliminated, especially in a situation in which the patients have a shared epidemiology. PCR contamination among samples has indeed been of major concern recently (24). In our phylogenetic analysis, identical viral forms were not found in the two children but were commonly found within one twin. Viral forms obtained from the first time point for twins A and B differed from each other by an average of 2.7% (range, 2.2 to 3.7%) for the V3-V4 region and by 0.7% (range, 0.5 to 1.4%) for the V3 region. These differences disappeared at later time points. The clustering of sequences from twins A and B within each twin suggests that there was no cross contamination, at least not between the twins. The phylogenetic analysis including V3-V4 sequences from both patients along with the consensus form of HIV-1 clade B also showed no significant clustering between the twins' sequences and the clade B consensus sequence.
Despite the drawbacks due to sample availability and the limited number of viral isolates analyzed, our study of these dizygotic twins represents a nearly ideal opportunity to monitor parallel viral phenotype and genotype changes and their relationships to changes in the immune response and disease progression. It will still be important to examine the immune response as well as the viral tat and env gene sequences of twin A further to determine if sequence homogeneity is maintained as long as the child is healthy. Further studies using larger panels of longitudinal pediatric samples are also necessary in order to confirm our initial findings.
ACKNOWLEDGMENTS
C. Hutto and Y. Zhou contributed equally to the study. We thank Gwen Scott for helpful discussion and comments. We also thank Shenghan Lai for sequence analyses, Sandra Berens for preparation of the manuscript, and Maria Saenz, Fred Breakenridge, and Mercedes Arana for key technical support. The following materials were obtained through the NIH AIDS Research and Reference Reagent Program: MT2 cells (from D. Richman) and pVES (from P. Earl and B. Moss).
This work was supported, in part, by NIH grant HD26619 to C.H., grants AI03356 and CA62810 to C.W., and grant AI23524 to Gwen Scott. M.H. was supported in part by an AIDS postdoctoral training grant (AI07393).
REFERENCES
1. Ausubel, F. M., R. Brent, R. E. Kingston, D. D. Moore, J. G. Seidman, J. A. Smith, and K. Struhl (ed.). 1990. Current protocols in molecular biology. Greene Publishing Associates and Wiley Interscience, New York.
2. Blanche, S., M.-J. Mayaux, C. Rouzioux, J.-P. Teglas, G. Firtion, F. Monpoux, N. Ciraru-Vigneron, F. Meier, J. Tricoire, O. Courpotin, E. Vilmer, C. Griscelli, J.-F. Delfraissy, and The French Pediatric HIV Infection Study Group. 1994. Relation of the course of HIV infection in children to the severity of the disease in their mothers at delivery. N. Engl. J. Med. 330:308–315.
3. Blanche, S., M. Tardieu, A. Duliege, C. Rouzioux, F. Le Deist, K. Fukunaga, M. Caniglia, C. Jacomet, A. Messiah, and C. Griscelli. 1990. Longitudinal study of 94 symptomatic infants with perinatally acquired human immunodeficiency virus infection. Am. J. Dis. Child. 144:1208–1211.
4. Boyd, M. T., G. R. Simpson, A. J. Cann, M. A. Johnson, and R. A. Weiss. 1993. A single amino acid substitution in the V1 loop of human immunodeficiency virus type 1 gp120 alters cellular tropism. J. Virol. 67:3649–3652.
5. Broderick, J. W., S. Sperling, J. T. Too, P. Moscherosh, G. S. Brilliotti, P. A. Broli, C. Fundaro, and P. Ross. 1994. Antibody-dependent cell-mediated cytotoxicity and neutralizing activity in sera from HIV-1-infected mothers and their children. Clin. Exp. Immunol. 93:56–64.
6. Cheng-Mayer, C., T. Shioda, and J. A. Levy. 1991. Host range, replicative, and cytopathic properties of isolates of human immunodeficiency virus type 1 are determined by very few amino acid changes in tat and gp120. J. Virol. 65:6931–6941.
7. Cheng-Mayer, C., C. Weiss, D. Seto, and J. A. Levy. 1989. Isolates of human immunodeficiency virus type 1 from the brain may constitute a special group of the AIDS virus. Proc. Natl. Acad. Sci. USA 86:8575–8579.
8. Chesebro, B., K. Wehrly, J. Nishio, and S. Perryman. 1992. Macrophage-tropic human immunodeficiency virus isolates from different patients exhibit unusual V3 envelope sequence homogeneity in comparison with T-cell-tropic isolates: definition of critical amino acids involved in cell tropism. J. Virol. 66:6547–6554.
9. Connor, R. I., and D. D. Ho. 1994. Human immunodeficiency virus type 1 variants with increased replicative capacity develop during the asymptomatic stage before clinical progression. J. Virol. 68:1000–1007.
10. Delwart, E. L., H. W. Sheppard, B. D. Walker, J. Goudsmit, and J. I. Mullins. 1994. Human immunodeficiency virus type 1 evolution in vivo tracked by DNA heteroduplex mobility assays. J. Virol. 68:6672–6683.
11. De Rossi, A., A. Giampinto, A. Di Giacomo, F. Mazzotta, P. D'Antonio, D. Dunn, and A. Chessa. 1993. Replication and tropism of human immunodeficiency virus type 1 as predictors of disease outcome in infants with vertically acquired infection. J. Pediatr. 123:929–935.
12. Dickover, R. E., M. Dillon, S. G. Gillette, A. Deveikis, M. Keller, S. Plaeger-Marshall, I. Chen, A. Diagne, E. R. Stiehm, and Y. Bryson. 1994. Rapid increases in load of human immunodeficiency virus correlate with early disease progression and loss of CD4 cells in vertically infected infants. J. Infect. Dis. 170:1279–1284.
13. Felsenstein, J. 1993. PHYLIP (phylogeny inference package), version 3.5c. University of Washington, Seattle.
14. Fouchier, R. A. M., M. Groenink, N. A. Kootstra, M. Tersmette, H. G. Huisman, F. Miedema, and H. Schuitemaker. 1992. Phenotype-associated sequence variation in the third variable domain of the human immunodeficiency virus type 1 gp120 molecule. J. Virol. 66:3183–3187.
15. Freed, E. O., and M. A. Martin. 1994. Evidence for a functional interaction between the V1/V2 and C4 domains of human immunodeficiency virus type 1 envelope glycoprotein gp120. J. Virol. 68:2503–2512.
16. Grimaila, R. J., B. A. Fuller, P. D. Rennert, M. B. Nelson, M. L. Hammarskjold, R. P. Moore, M. Maldarelli, P. Petropoulos, and J. Gray. 1992. Mutations in the principal neutralizing determinant of human immunodeficiency virus type 1 affect syncytium formation, virus growth kinetics, and neutralization. J. Virol. 66:1875–1883.
17. Hammers, H. P., M. J. Samson, G. Delage, M. Boucher, C. Hankins, J. Stephens, and N. Lapointe. 1993. Ontogeny of the humoral immune response to human immunodeficiency virus type 1 in infants. J. Infect. Dis. 168:285–290.
18. Hollinger, F. B., J. W. Bremer, L. E. Myers, J. M. Gold, L. McQuay, and the NIH/NIAID/DAIDS/ACTG Virology Laboratories. 1992. Standardization of sensitive human immunodeficiency virus coculture procedures and establishment of a multicenter quality assurance program for the AIDS Clinical Trials Group. J. Clin. Microbiol. 30:1787–1791.
19. Holmes, E. C., L. Q. Zhang, P. Simmonds, C. A. Ludlam, and A. J. Leigh Brown. 1992. Convergent and divergent sequence evolution in the surface envelope glycoprotein of human immunodeficiency virus type 1 within a single infected patient. Proc. Natl. Acad. Sci. USA 89:4835–4839.
20. Italian Register for HIV Infection in Children. 1994. Features of children perinatally infected with HIV-1 surviving longer than 5 years. Lancet 343:193–194.
21. Japour, A. J., S. A. Fiscus, J. M. Arduino, D. L. Mayers, P. S. Reichelderfer, and D. R. Kuritzkes. 1994. Standardized microtiter assay for determination of syncytium-inducing phenotypes of clinical human immunodeficiency virus type 1 isolates. J. Clin. Microbiol. 32:2291–2294.
22. Kasper, P., R. Kaiser, J. P. Klein, J. Oldenburg, H. H. Brackmann, J. Rockstroh, and K. E. Schweines. 1993. Diversification of HIV-1 strains after infection from a unique source. AIDS Res. Hum. Retroviruses 9:155–157.
23. Kliks, S. C., D. W. Wara, D. V. Landers, and J. A. Levy. 1994. Features of HIV-1 that could influence maternal-child transmission. JAMA 272:467–474.
24. Korber, B. T. M., G. Learn, S. M. Bhattacharya, B. Hahn, and S. Wolinsky. 1995. Proteins of viral evolution. Nature (London) 374:242–243.
25. Lamers, S. L., J. W. Sleasman, J. X. She, K. A. Barrie, S. M. Pomeroy, D. J. Barrett, and M. M. Goodenow. 1993. Independent variation and positive selection in env V1 and V2 domains within multiple viral strains of human immunodeficiency virus type 1. J. Virol. 67:3951–3960.
26. Lamers, S. L., J. W. Sleasman, J. X. She, K. A. Barrie, S. M. Pomeroy, D. J. Barrett, and M. M. Goodenow. 1994. Persistence of multiple maternal genotypes of human immunodeficiency virus type 1 in infants infected by vertical transmission. J. Clin. Invest. 93:380–390.
27. Liu, Z.-Q., C. Wood, J. A. Levy, and C. Cheng-Mayer. 1990. The viral envelope gene is involved in the macrophage tropism of a human immunodeficiency virus type 1 strain isolated from brain tissue. J. Virol. 64:6148–6153.
28. McNearney, T., Z. Hornickova, B. Kloster, A. Birdwell, G. A. Storch, S. H. Polmar, M. Arens, and L. Ratner. 1995. Evolution of sequence divergence among human immunodeficiency virus type 1 isolates derived from a blood donor and a recipient. Pediatr. Res. 35:36–42.
29. McNearney, T., Z. Hornickova, R. Markham, A. Birdwell, M. Arens, A. Saah, and L. Ratner. 1992. Relationship of human immunodeficiency virus type 1 sequence heterogeneity to stage of disease. Proc. Natl. Acad. Sci. USA 89:10247–10251.
30. Mulder-Kampinga, G. A., C. Kuiken, J. Dekker, H. J. Scherpbier, K. Boer, and J. Goudsmit. 1993. Genomic human immunodeficiency virus type 1 RNA variation in mother and child following intra-uterine virus transmission. J. Gen. Virol. 74:1747–1756.
31. Myers, G., E. Rabkin, J. F. Murphy, T. F. Smith, J. A. Berzofsky, and F. Wong-Staal (ed.). 1994. Human retroviruses and AIDS. Los Alamos National Laboratory, Los Alamos, N.M.
32. O'Brien, W. A., Y. Koyanagi, A. Namazie, J.-Q. Zhao, A. Diagne, K. Idler, J. A. Zack, and I. S. Y. Chen. 1990. HIV-1 tropism for mononuclear phagocytes can be determined by regions of gp120 outside the CD4-binding domain. Nature (London) 348:69–73.
33. Pollack, H., M. X. Zhan, T. Ilmet-Moore, K. Ajuang-Simbiri, K. Krasinski, and W. Borkowsky. 1993. Ontogeny of anti-human immunodeficiency virus (HIV) antibody production in HIV-1-infected infants. Proc. Natl. Acad. Sci. USA 90:2340–2344.
34. Rogers, M. F., C.-Y. Ou, M. Rayfield, P. A. Thomas, E. E. Schoenbaum, E. Abrams, K. Krasinski, P. A. Selwyn, J. Moore, A. Kaul, K. T. Grimm, M. Bamji, G. Schochetman, and the New York City Collaborative Study of Maternal HIV Transmission and Montefiore Medical Center HIV Perinatal Transmission Study Groups. 1989. Use of the polymerase chain reaction for early detection of the proviral sequences of human immunodeficiency virus in infants born to seropositive mothers. N. Engl. J. Med. 320:1649–1654.
35. Sanger, F., S. Nicklen, and A. R. Coulson. 1977. DNA sequencing with chain-terminating inhibitors. Proc. Natl. Acad. Sci. USA 74:5463–5467.
36. Scarlatti, G., J. Albert, P. Rossi, V. Hodara, P. Biraghi, L. Muggiasca, and E. M. Fenyo. 1993. Mother-to-child transmission of human immunodeficiency virus type 1: correlation with neutralizing antibodies against primary isolates. J. Infect. Dis. 168:207–210.
37. Shioda, T., J. A. Levy, and C. Cheng-Mayer. 1991. Macrophage and T cell-line tropisms of HIV-1 are determined by specific regions of the envelope gp120 gene. Nature (London) 349:167–169.
38. Spencer, L. T., M. T. Ogino, W. M. Dankner, and S. A. Spector. 1994. Clinical significance of human immunodeficiency virus type 1 phenotypes in infected children. J. Infect. Dis. 169:491–495.
39. Tersmette, M., R. E. Y. de Goede, J. M. A. Lange, F. de Wolf, J. K. M. Eeftink Schattenkerk, P. T. A. Schellekens, R. A. Coutinho, H. G. Huisman, J. Goudsmit, and F. Miedema. 1989. Association between biological properties of human immunodeficiency virus variants and risk for AIDS and AIDS mortality. Lancet i:983–985.
40. Tovo, P. A., M. De Martino, C. Gabiano, N. Cappello, R. D'Elia, A. Loy, A. Plebani, G. Di Zucchero, M. Daniele, G. Ferrari, A. Casini, F. Fumadaro, P. D'Angelo, L. Gali, N. Pratapip, M. Zanetti, M. R. Rao, E. Palomba, and the Italian Register for HIV Infection in Children. 1992. Prognostic factors and survival in children with perinatal HIV-1 infection. Lancet 339:1249–1253.
41. Westervelt, P., D. B. Trowbridge, L. G. Epstein, B. M. Blumberg, Y. Li, B. H. Hahn, G. M. Shaw, R. W. Price, and L. Ratner. 1992. Macrophage tropism determinants of human immunodeficiency virus type 1 in vivo. J. Virol. 66:2577–2582.
42. Wolinsky, S. M., C. M. Wike, B. T. M. Korber, C. Hutto, W. P. Parks, L. L. Rosenblum, K. J. Kunstman, M. R. Furtado, and J. L. Munoz. 1992. Selective transmission of human immunodeficiency virus type 1 variants from mothers to infants. Science 255:1134–1137.
43. Zack, J. A., A. M. Haislip, P. Krogstad, and I. S. Y. Chen. 1992. Incompletely reverse-transcribed human immunodeficiency virus type 1 genomes in quiescent cells can function as intermediates in the retroviral life cycle. J. Virol. 66:1717–1725.
The Sling Specialist in Safe Patient Mobilization
Alpha Modalities
SLING CATALOG
Reusable, Wipeable & SPU Series
July 2018
800.273.5749
www.alphamodalities.com
email@example.com
3 YEAR WARRANTY
Guide to Color Coding for Mobility Assessment
**RED** Full Assist: Patient is unwilling or unable to cooperate
- Page 1.................................A-TSL Series
- Page 2.................................A-Seat Series
- Page 3.................................A-SitOnSling Series
- Page 4.................................A-TTurner Series
- Page 5.................................A-Pannus/A-PannusSW
- Page 6.................................A-Limb
**YELLOW** Partial Assist: Patient can follow basic commands but requires assistance
- Page 7.................................A-SitStandSW
- Page 8.................................A-WalkVSW
- Page 9.................................A-BariV/A-BariVSW
**GREEN** Stand-by Assist: Patient can follow basic commands and requires little to no assistance
- Page 10...............................A-WedgeSW
- Page 11...............................A-BedLadderSW
- Page 12...............................SPU Series
- Page 13...............................Customizations
- Page 14...............................Compatibility Statement
- Page 15...............................Cleaning and Warranty
Product Features
- Breathable—able to remain under patient
- Attaches to ceiling and floor lifts w/ 2-, 4-, 6-, or 8-point hanger bar systems
- Pressure Mapping documentation on low air loss support surfaces
- Available in multiple colors
- SWL of 454kg/1000lb w/ option for 500kg/1100lb
- Latex Free
Options
Materials
- Open Weave Mesh—bathing and general use
- Tight Weave Mesh—replaces linen on bed
- IT—wicking material w/woven antimicrobial
Sizes
- Width Options: 32”, 45”, 50”, 60”, or 66”
- Length Options: 64”, 86”, 100”, or 118”
Attachments
- Color Ladder (CL)—industry standard
- F8 Numerical—numerical identification aids caregivers w/color blindness
- F8 Recoil (RC)—designed to prevent tripping with low support surfaces
Product Features
- Available in tight-weave mesh, open-weave bathing mesh, and wipeable series
- Our Bari Line is made with our IT wicking material and supports patients of size up to 500kg/1100lb
- We are also introducing a seated sling without binding as a pressure point, so it can be left behind the patient for longer periods
Options
- Available with or without head support
- Loop or key clip attachment
- Color coded to easily identify size
Sizes/Recommended Weight Ranges:
- Pediatric (Dino Print): < 15kg
- Small (Red): 14-34kg
- Medium (Yellow): 34-66kg
- Large (Green): 66-100kg
- X-Large (Purple): 100-200kg
- XX Large (White): 200-363kg
- Bari-XX (Grey w/ purple trim): 363-500kg
A-SitOnSling Series
Intended use: Designed to remain underneath the patient and provide full body support without splitting the legs when transferred. Toileting option available.
Product Features
• Made with our Grey IT material designed to remain underneath the patient for longer periods.
• The edge of the sling is color coded to indicate size.
• The sling offers support from the knees to the shoulders.
• Placement is accomplished by turning the patient side to side and positioning the sling at the bend of the knee.
• Slings are washable.
• Latex Free.
Options
Sizes
• Medium (Yellow Trim) - Seat width 30in/72cm
• Large (Green Trim) - Seat width 32 in/81cm
• XL (Purple Trim) - Seat width 35in/89cm
• XXL (White Trim) - Seat width 39in/99cm
Materials
• Grey IT material
• Polyester Webbing and Trim
• Coated Polyester Thread
• Max SWL 454kg/1000lbs
3 YEAR WARRANTY
A-TTurner Series
Intended use: To turn the patient and to hold the patient on their side.
Product Features
• Fits into the lower back of patient
• Bariatric sizes available
• Available in washable & wipeable fabric
• Latex Free
Options
Sizes: Max Weight / Length
• Offered in washable (Velcro) or wipeable (coated polyester strap) versions
• Regular: 150kg / 33" length
• Bariatric: 300kg / 36.6" length
Material
• Polyester Tight Weave Mesh
• Spacer Mesh
• Coated Polyester Webbing - Wipeable
• Polyester Webbing & Velcro - Washable
• Urethane Coated Nylon
WARRANTY
3 YEAR
A-Pannus/A-PannusSW
Intended use: To assist in moving patient’s panniculus away from the body for cleaning skin, wound checks, or dressing changes.
Product Features
• Holds panniculus to allow inspection of skin fold areas
• Compatible with loop lift systems or used manually
• SWL 454kg / 1000lbs
• Latex Free
Options
Washable Material-
• Soft inner wicking liner
• Polyester, wicking agent, antimicrobial, polyester webbing
Wipeable Material-
• Ideal for ED and areas that do not have laundry services – wipe with hospital approved disinfectants or 10% bleach solution
• Coated Polyester Webbing, urethane coated nylon, 14oz polyvinyl
A-Limb Series
Intended use: To support limbs for dressing, wound, and/or back care.
Product Features
• Designed to lift and hold limbs eliminating the need to manually hold the limb for dressing changes, pericare, and placing pillows.
• Using two limb slings helps hold the limbs of patients of size while they are on their side.
• Available in washable, wipeable, and SPU disposable
• Custom sizes available by request
• SWL 150kg/330lbs
• Latex Free
Options
• A-LimbTWF8: reusable with F8 numerical strap
• A-LimbTWSC: reusable with single clip
• A-LimbSWWL: wipeable with loop or clip
• A-LimbSPU: disposable with loop or clip
Colors:
• Washable available in blue, grey or purple
• Wipeable is only available in blue
• SPU is white
A-SitStandSW
Intended use: Used with a powered mobile lift unit to assist a patient to a standing position.
Product Features
• Eliminates the need to purchase multiple slings per size to cover laundry turnaround.
• Inner safety belt that is adjustable and nonslip.
• Easy to wipe with hospital grade disinfectant and will be ready immediately after cleaning.
• Loop, clip or Arjo-Huntleigh Sara Plus attachment options
• Latex Free
Options
Sizes - color coded tab indicates size
• Small (Red) – Inner Safety Belt 29”-46”
• Medium (Yellow) – Inner Safety Belt 38”-55”
• Large (Green) – Inner Safety Belt 45”-64”
• X-Large (Purple) – Inner Safety Belt 55”-73”
Material
• Herculite Fusion III
• 18oz Polyvinyl
• Coated polyester webbing and thread
• Chemical seam seals
• Max Safe Working Load – 300kg
WARRANTY
3 YEAR
Product Features
- Provide safety to patients who are at risk of falling while ambulating
- Multiple sizes identified by color of the vest
- Design allows for toileting by releasing the clip on the groin straps
- Chemical and heat sealed seam
- Max SWL of all vests - 300kg/660lbs
- Latex Free
Options
Sizes-Based upon chest measurement
- Pediatric (Blue) 20-26”/50-66cm
- Small (Red) 34-38”/86-97cm
- Medium (Yellow) 39-43”/99-109cm
- Large (Green) 44-48”/112-122cm
- Xlarge (Purple) 49-53”/123-135cm
- Extender Panel adds 16”/40cm
Material
- Coated polyester webbing and thread
- Metal Cobra buckles
- Nylon coated Urethane
- Spacer mesh polyester
A-BariVest Series
Intended use: Ambulation and gait training
Product Features
• Provide safety to patients of size who are at risk of falling while ambulating
• Multiple sizes identified by color of the vest
• Comes in both washable and wipeable versions
• Cobra Buckles – either plastic or metal
• Scrotum Pouch for comfort
• Latex Free
Options
Sizes - by Max weight
• Small (Red) 100kg
• Medium (Yellow) 200kg
• Large (Green) 300kg
• Xlarge (Purple) 400kg
Material
• Polyester
• Coated polyester webbing
• Spacer Mesh
• Urethane Coated Nylon
• Herculite Fusion III
Warranty
3 YEAR
A-WedgeSW
Intended use: Positioning wedge to keep patient on their side or prevent sliding down in bed.
Product Features
• Designed with black non skid surface
• Place between the bed and patient’s linen to keep your patient on their side.
• The white wipeable handle allows you to move and position the wedge easily.
• The wedge is sealed to allow for easy cleaning by wiping down with hospital approved disinfectant
• Two size options
• Latex Free
Options
Sizes
• 16”/40.6cm Length
• 23”/58.4cm Length
Fill Options
• Medical grade foam - firm
• Spun Polyester - soft
Material
• 100% polyester
• Herculite Fusion III
• Waterproof
• Chemical seam sealers
Intended use: To help patients sit themselves up in bed, assist in moving legs in/out of bed or vehicle and general repositioning while on a flat surface.
Product Features
• MRI Compatible
• Soft, pliable grips
• Can take the place of a trapeze in some cases
• SWL - 272kg/600lbs
• Latex Free
Options
Size
Universal Size: 12”/37cm width, 63”/160cm length
Material
• Coated polyester webbing and thread
• Herculite Fusion III
• Closed cell foam
• Chemical seam sealer
SPU SERIES
Intended use: Single Patient Use slings that offer the same functions as our TSL, Seat, Limb, Hygiene, and BariVest.
Product Features
• An option for facilities who have challenges with a reusable sling management process
• Variety of sling styles available
• Synthetic breathable material that is strong, comfortable, and can be reprocessed
• Lightweight and individually packaged in clear sleeves
• Can be spot cleaned with hot water if soiled by patient
• Note: Slings are packaged 10 per case
Options/Sizes
Flat Repo Sling (TSL)
• 45”x86” / 114x218cm SWL 454kg
• 55”x86” / 140x218cm SWL 500kg
Seated Slings
• M/L Combo SWL 100kg
• X/XX Combo SWL 363kg
Limb Sling
• Universal Size SWL 150kg
Hygiene Sling
• M/L Combo SWL 363kg
• X/XX Combo SWL 363kg
BariKit-SPU
• All-in-one pack includes one of each of the following: Repositioning Sling, Limb Sling, and a BariVest - SWL 454kg
Customizations
Sling Fabric Colors: Blue, Grey, Red, Yellow, Green, Purple. Also available in white or other custom colors.
Sling Label Options
Multiple attachment options:
- Color Ladder Loop: industry standard
- F8: numerical loop identification
- Clip: aluminum clip; wipe or wash
- Recoil: minimizes trip hazard
- Mona: wipe or wash
3 YEAR WARRANTY
Compatibility Statement
All lift motors perform a basic function, which is to go up and down. The interface between the lift, hanger bar and sling is critical for safety and patient comfort.
We test our slings with various manufacturers’ hanger bars for compatibility to ensure that they are safe to use and are comfortable for the patient.
We guarantee compatibility with the following manufacturers:
Guldmann AS
Arjo Huntleigh
Prism Medical
Handicare
Tollos
EZ Way
Hill Rom Liko
Etac-Molift
Cleaning instructions for Washable products
- Commercial laundry systems with a maximum load of 850lbs; use of a tunnel washer voids the warranty
- Oxygen-based bleach solutions only; chlorine-based solutions void the warranty
- Fresh water rinse cycle of minimum eight (8) minutes
- Wash temp not to exceed 185F / 85 C
- Dry temp not to exceed 145F / 74C
Cleaning instructions for Wipeable products
- DO NOT LAUNDER - Wipe with hospital approved facility disinfectant for “Non Critical Devices”, as per FDA guidelines or 10% bleach solution
- DO NOT soak in solution of chlorine based bleach solution
- DO NOT autoclave or steam clean
- For blood stains: if the blood is fresh and has not dried, use cold water to clean. DO NOT USE HOT WATER, as heat sets the iron in the blood and fixes the stain. If the stain has dried, spot clean with a 50% hydrogen peroxide solution and cold water.
WARRANTY
Within three (3) years of the production date, we will repair or replace, at no charge, any sling found to be defective in manufacturing or damaged in use/cleaning. The sling must be returned to us clean, freight prepaid, with a Return Authorization Number attached.
ALL PRODUCTS ARE LATEX FREE
COMPUTING CENTRE NEWSLETTER
Using the IMSL & NAG Libraries
Commission of the European Communities
JOINT RESEARCH CENTRE
Ispra Establishment
SPECIAL ISSUE
Using the IMSL & NAG Libraries
A handbook describing how to use the IMSL* and NAG** Libraries of numerical mathematical and statistical subroutines as installed at the JRC Computing Centre, Ispra.
Author: Martyn D. Dowell
Version 1, September 1981
* IMSL is the trademark of the International Mathematical and Statistical Library Inc. (Houston, USA)
** NAG is the trademark of the Numerical Algorithms Group Ltd.(Oxford, UK)
CONTENTS
1. INTRODUCTION
1.1 General Introduction
1.2 The Pitfalls in Writing Numerical Software!
2. THE LIBRARIES
2.1 The IMSL Library
2.2 The NAG Library
3. COMPARISON OF THE LIBRARIES
3.1 Which Library Should I Use?
3.2 Comparative Summary of the Content of the IMSL and NAG Libraries
4. DOCUMENTATION
4.1 Overview
4.2 IMSL Documentation
4.3 NAG Documentation
5. IMPLEMENTATION SPECIFIC DETAILS
5.1 IMSL Implementation Details
5.2 NAG Implementation Details
6. USING THE LIBRARIES
6.1 Library Data Set Definition
6.2 Use of the Libraries in Batch
6.3 Use of the Libraries from a TSO Session
7. INCLUSION OF THE LIBRARIES IN OTHER LANGUAGE PROGRAMS
APPENDIX A
Examples of the Use of the Libraries in Batch Jobs
APPENDIX B
Detailed Comparison of Content of the Libraries
1. INTRODUCTION
1.1 General Introduction
In the design and implementation of computer programs there is always a requirement for the inclusion of modules (procedures, subroutines, functions) which perform specific well defined tasks. The most obvious examples of this are modules for performing transfers from peripherals and generally handling input/output devices. The program writer would almost never consider writing his own routine to read a card from the card reader or write a record to a lineprinter. Similarly, basic trigonometric and mathematical functions such as $\sin(x)$, $\log(x)$ and $e^x$ are always provided as standard. However, in the field of more advanced numerical mathematical and statistical calculations there has been a tradition of users writing their own subroutines to provide specific facilities. This has occurred for several reasons; the two most important are:
1) No good, comprehensive, well tested, well documented sets of routines have been available.
2) Users have always considered that they are capable of producing good routines suitable for their own needs.
In recent years the first of these reasons has become much less valid with the advent and subsequent development of two competing and yet complementary libraries of numerical mathematical and statistical subroutines (The International Mathematical and Statistical Library IMSL and the Numerical Algorithms Group NAG Library).
FORTRAN versions of both of these libraries are available for use on the JRC-Ispra Computing Centre Service. These libraries are rented from the organizations on an annual basis and are freely available for use to all of the local users of the JRC-Ispra Computing Centre Service. External and commercial users of the service should seek advice as to the conditions under which they may use these libraries from the Computer Manager (see the JRC Newsletter list of personnel for details).
Note. Users should note that single routines of IMSL and NAG may absolutely not be distributed outside the JRC, Ispra Establishment. However, complete programs or software systems which make use of the libraries may be distributed. For these cases users may request only object decks of the incorporated routines. The person who makes the request becomes responsible for any misuse of the requested deck.
1.2 The Pitfalls in Writing Numerical Software!
The second reason why users have habitually written their own numerical mathematical subroutines (as given in the previous section) is in almost all cases false! Perhaps a few program writers produce adequate numerical mathematical subroutines for their programs. However, very many more (by far the majority) produce subroutines which are inadequate and often produce results which are unnecessarily erroneous.
This may be displayed by the following example (first described in the Newsletter of the Computer Center of Purdue University (USA)).
The object of this example is to illustrate the quality which has been built into the IMSL & NAG Libraries. We do this by solving a problem, using the algorithm many people would use, and then by comparing the results we obtain with those of the corresponding IMSL routine.
The problem we choose is to find the roots of a quadratic equation: given real numbers \(a\), \(b\), and \(c\), find \(X\) such that \(ax^2 + bx + c = 0\). For simplicity, we assume that \(a\), \(b\), and \(c\) are such that the solution is also real. The two roots of a quadratic equation may be found by the well-known 'Quadratic Formula'
\[
X = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\]
where one root is obtained by using the "+" of the "±", and the other is obtained with the "-". The assumption that the roots are real means that \(b^2 - 4ac > 0\). We can solve this problem with the following straightforward subroutine:
```fortran
SUBROUTINE QUAD (A,B,C,X1,X2)
D = SQRT(B*B-4.0*A*C)
X1 = 0.5*(-B+D)/A
X2 = 0.5*(-B-D)/A
RETURN
END
```
When we use this subroutine to solve the rather difficult quadratic \(X^2 + (2^{13} + 2^{-13})X + 1 = 0\) we obtain
\[
X1 = 0.0 \\
X2 = -0.8192 \times 10^4
\]
This is not the correct solution, however. The corresponding IMSL routine ZQADR (single precision) does compute a much better approximation to the actual solution as follows:
\[ X1 = -0.12207 \times 10^{-3} \]
\[ X2 = -0.81920 \times 10^{4} \]
(These results are exact to 5 significant figures)
Where did QUAD go wrong? The second statement computes D. Then the third statement forms the difference between D and B, losing almost all significance in the process. ZQADR is more careful than QUAD and thus is able to retain full significance.
Now we touch briefly on several additional problems with QUAD. The first deals with problem scaling. If the quadratic equation above is multiplied by a constant, the solution is not changed mathematically, but it is changed computationally. For example, the quadratic
\[ 10^{40}x^2 + 3 \times 10^{40}x + 2 \times 10^{40} = 0 \]
results in an overflow error because \(10^{80}\) cannot be represented by the computer. Similarly, the quadratic
\[ 10^{-40}x^2 + 3 \times 10^{-40}x + 2 \times 10^{-40} = 0 \]
produces an underflow and then gives results
\[ X1 = X2 = -1.5 \]
because \(10^{-80}\) cannot be represented by the computer and is treated as zero. However, in both cases ZQADR still computes the same, correct solution.
Finally, consider QUAD's actions if the leading coefficient \(a\) (the coefficient of \(x^2\)) is zero. In this case QUAD returns an infinite value for one root and an indefinite value for the other. ZQADR returns the mathematically correct value \(-c/b\) for one root and infinity for the other.
The reader may ask: "What is the point of this if I never intend to solve difficult quadratic equations?". The answer is that this example shows the problems of trying to write a subroutine for solving a simple problem. It is much more difficult to write a good or even adequate subroutine to solve more complicated problems.
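The cancellation that defeats QUAD is easy to reproduce in any language. The sketch below is an illustration only, not the IMSL code: the function names `quad_naive` and `quad_stable` are hypothetical, and `quad_stable` uses the standard cancellation-free rewrite (compute the large-magnitude root first, then recover the other from the product of roots \(x_1 x_2 = c/a\)), which is one well-known remedy rather than necessarily what ZQADR does. Python floats are double precision, so a larger coefficient than the \(2^{13} + 2^{-13}\) of the single-precision example above is needed to expose the effect.

```python
import math

def quad_naive(a, b, c):
    """The textbook formula, as in subroutine QUAD above."""
    d = math.sqrt(b * b - 4.0 * a * c)
    return 0.5 * (-b + d) / a, 0.5 * (-b - d) / a

def quad_stable(a, b, c):
    """Cancellation-free variant: b and the signed discriminant are added
    only when they have the same sign; the small-magnitude root then
    comes from the product of roots, x1 * x2 = c / a."""
    d = math.sqrt(b * b - 4.0 * a * c)
    q = -0.5 * (b + math.copysign(d, b))
    return c / q, q / a  # small-magnitude root, large-magnitude root

# Roots are approximately -1e-8 and -1e8.
a, b, c = 1.0, 1.0e8, 1.0
print(quad_naive(a, b, c))   # the small root loses most of its accuracy
print(quad_stable(a, b, c))  # both roots accurate to full precision
```

On the document's own example, `quad_stable(1.0, 2.0**13 + 2.0**-13, 1.0)`, the stable variant returns the exact roots \(-2^{-13}\) and \(-2^{13}\).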
2. THE LIBRARIES
2.1 The IMSL Library
The International Mathematical and Statistical Library (IMSL) is produced by IMSL INC. of Houston, Texas (USA).
The library is a FORTRAN source library which contains over 400 user-callable subroutines. For IBM sites (such as the JRC-Ispra) two load module versions of the library are available, which contain sets of the subroutines with single and double precision real parameters.
The company was founded in the early 1970's and now has a very strong world-wide user base, especially in North America. In total, the number of installations subscribing to the IMSL Library exceeds 1700 (located in 36 different countries). IMSL has an estimated user base of 106,000 persons.
A board of technical advisors to IMSL, which consists of many famous experts in numerical mathematics, statistics and computer science, is responsible for ensuring that the library maintains a set of high quality subroutines which reflects the current state of the science.
The library is divided into a number of chapters, each of which covers an area of numerical mathematics or statistics. A brief list of the topics included follows:
Analysis of Experimental Design Data
Basic Statistics
Data Screening; Transgeneration
Elementary Classical Inference
Elementary Bayesian Inference
Categorized Data Analysis
Differential Equations; Quadrature; Differentiation
Eigenanalysis
Forecasting; Econometrics; Time Series
Generation and Testing of Random Numbers; Goodness of Fit
Interpolation; Approximation; Smoothing
Linear Algebraic Equations
Mathematical and Statistical Functions
Probability Distribution Functions
Special Functions of Mathematical Physics
Non-Parametric Statistics
    Analyses of Variance
    Binomial or Multi-nomial Bases
    Hyper (or Multi-hyper) Geometric Bases
    Kolmogorov-Smirnov Tests
    Other Bases
    Randomization Bases
Observation Structure
    Canonical Analysis
    Cluster Analysis
    Discriminant Analysis
    Factor Analysis
    Principal Components Analysis
Regression Analysis
    Linear Models
    Special Non-linear Models
Sampling
    Acceptance Sampling
    Preference Testing
    Survey Sampling
Utility Functions
    Error Detection
    Special I/O Routines
    Vector, Matrix Arithmetic
Zeros and Extrema; Linear Programming
Subroutines in the IMSL library have alphanumeric names of up to six characters in length. The first character of the name is always that of the chapter in which the subroutine is located.
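As a quick illustration of this convention, the chapter to which a routine belongs can be looked up from the first letter of its name. The helper below is a hypothetical sketch, not an IMSL utility; the dictionary simply restates the chapter letters and titles listed in section 3.2.

```python
# Hypothetical lookup table: IMSL chapter letter -> chapter title,
# taken from the chapter list in section 3.2 of this handbook.
IMSL_CHAPTERS = {
    "A": "Analysis of Experimental Design Data",
    "B": "Basic Statistics",
    "C": "Categorized Data Analysis",
    "D": "Differential Equations; Quadrature; Differentiation",
    "E": "Eigenanalysis",
    "F": "Forecasting; Econometrics; Time Series",
    "G": "Generation and Testing of Random Numbers",
    "I": "Interpolation; Approximation; Smoothing",
    "L": "Linear Algebraic Equations",
    "M": "Mathematical and Statistical Special Functions",
    "N": "Non-Parametric Statistics",
    "O": "Observation Structure",
    "R": "Regression Analysis",
    "S": "Sampling",
    "U": "Utility Functions",
    "V": "Vector Arithmetic & Sorting",
    "Z": "Zeros and Extrema; Linear Programming",
}

def imsl_chapter(routine: str) -> str:
    """Return the chapter title for an IMSL routine, keyed on its first letter."""
    return IMSL_CHAPTERS.get(routine[0].upper(), "unknown chapter")

print(imsl_chapter("ZQADR"))  # the quadratic solver used in section 1.2
```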
2.2 The NAG Library
The NAG Library is produced and distributed by the British company NAG (Numerical Algorithms Group) Ltd. The NAG project began in 1970 when six British computer centres decided to jointly develop a library of mathematical routines. Later, many other universities and research organizations became involved in the development and use of the library.
In 1975 the library distribution service to commercial subscribers began. The library now consists of approximately 400 user-callable FORTRAN subroutines (there are also Algol 60 and Algol 68 versions of the library available, but not at the JRC-Ispra). There are FORTRAN implementations of the library on approximately 50 machine ranges. The total number of installations using the library world-wide exceeds 300. The library is most popular in Europe although there is an increasing usage in the USA and Canada. The library is always distributed in compiled form (i.e. as a load module library for IBM users). Source copies of individual routines are available for inspection.
For IBM users (such as the JRC-Ispra Computing Centre Service) there are two load module versions of the library using single and double precision real parameters.
The aim of NAG has always been to produce a comprehensive library of subroutines and to have as main criteria for selection of these routines the concepts of:
i) usefulness ii) robustness iii) numerical stability iv) accuracy v) speed
Contributors to the NAG Library are expert numerical mathematicians and computer scientists in the UK and throughout the world. These are backed by the NAG Central Office staff who co-ordinate and control the work of the contributors.
The NAG Library is structured in chapters using the conventions adopted by the American A.C.M. (Association for Computing Machinery) modified SHARE Classification Index.
Summary of the Chapters of the NAG FORTRAN Library
A02 - COMPLEX ARITHMETIC
C02 - ZEROS OF POLYNOMIALS
C05 - ROOTS OF ONE OR MORE TRANSCENDENTAL EQUATIONS
C06 - SUMMATION OF SERIES
D01 - QUADRATURE
D02 - ORDINARY DIFFERENTIAL EQUATIONS
D04 - NUMERICAL DIFFERENTIATION
D05 - INTEGRAL EQUATIONS
E01 - INTERPOLATION
E02 - CURVE AND SURFACE FITTING
E04 - MINIMIZING OR MAXIMIZING A FUNCTION
F01 - MATRIX OPERATIONS INCLUDING INVERSION
F02 - EIGENVALUES AND EIGENVECTORS
F03 - DETERMINANTS
F04 - SIMULTANEOUS LINEAR EQUATIONS
F05 - ORTHOGONALISATION
G01 - SIMPLE CALCULATIONS ON STATISTICAL DATA
G02 - CORRELATION AND REGRESSION ANALYSIS
G04 - ANALYSIS OF VARIANCE
G05 - RANDOM NUMBER GENERATORS
H - OPERATIONS RESEARCH
M01 - SORTING
P01 - ERROR TRAPPING
S - APPROXIMATIONS OF SPECIAL FUNCTIONS
X01 - MATHEMATICAL CONSTANTS
X02 - MACHINE CONSTANTS
X03 - INNER PRODUCTS
Subroutines in the NAG Library have names which are defined in the following manner:
- The name is 6 alphanumeric characters.
- The first 3 characters are the name of the chapter in which the subroutine is found (see the chapter list above). For example, C05 for a routine which is concerned with a technique in the subject area of roots of transcendental equations.
- The 4th and 5th characters are alphabetic and serve to distinguish between different subroutines in the same chapter.
- The 6th character defines the language type of the subroutine (A for Algol, F for FORTRAN etc.); see section 5.2 for more details.
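The naming scheme above can be sketched as a small parser. This is an illustration of the convention only; `parse_nag_name` is a hypothetical helper, and `C05ADF` is used as an example name following the pattern described (C05 chapter, distinguishing letters AD, language code F for FORTRAN).

```python
# Language codes as described in the text: A for Algol, F for FORTRAN.
LANG_CODES = {"A": "Algol", "F": "FORTRAN"}

def parse_nag_name(name: str):
    """Split a 6-character NAG routine name into its three parts:
    chapter (chars 1-3), distinguishing letters (chars 4-5), and
    language code (char 6)."""
    if len(name) != 6:
        raise ValueError("NAG routine names are exactly 6 characters")
    chapter, distinguisher, lang = name[:3], name[3:5], name[5]
    return chapter, distinguisher, LANG_CODES.get(lang, lang)

print(parse_nag_name("C05ADF"))  # ('C05', 'AD', 'FORTRAN')
```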
3. COMPARISON OF THE LIBRARIES
3.1 Which Library Should I Use?
In some situations this question will be easy to answer. If a certain algorithm is only implemented in the NAG Library (for example) then the user must obviously make use of this subroutine. If, however, equivalent subroutines are available in both the NAG Library and the IMSL Library, then other factors are involved in the choice.
* For cases in which the finished program or package is to be transported to another computer site or another computing system, the decision may depend on which of the two libraries will be available at the other site or system.
* For packages which are to be made generally available to a large number of users, the location of the users may be important. As previously stated, the NAG Library is widely available at centres in Europe, whilst the IMSL Library is more predominant in the USA and Canada.
* For cases in which such constraints do not apply, for example a small program to be run on the JRC-Ispra Computing Centre Service, then personal preference will be the deciding factor. The user should compare the specification and description of the equivalent routines and decide which is the more appropriate.
In the following section a brief summary and comparison of the content of the two libraries is given.
3.2 Comparative Summary of the Content of the IMSL and NAG Libraries
The following comparison gives a very basic idea of the strengths and weaknesses of the two libraries. It is very much the author's opinion on the subject and does not in any way express the views of either IMSL or NAG. The list is ordered by the chapter of the IMSL Library. It gives details of each chapter with some idea of the coverage of the NAG Library in each subject area. At the end of this list details of NAG Library subject areas not covered by the IMSL chapters are given.
IMSL Chapter A - Analysis of Experimental Design Data
This subject area is well covered in the IMSL Library. In the NAG Library the G04 chapter contains only a subset of the material covered by IMSL.
IMSL Chapter B - Basic Statistics
IMSL Chapter C - Categorized Data Analysis
These two chapters are equivalent to the material covered in the NAG G01 and G02 chapters. In general, there is a more varied coverage in the IMSL Library, although the NAG Library implements some algorithms which are not included in the IMSL Library.
IMSL Chapter D - Differential Equations, Quadrature, Differentiation
This IMSL chapter is equivalent to the four NAG chapters:
D01 - Quadrature
D02 - Ordinary Differential Equations
D03 - Partial Differential Equations
D04 - Numerical Differentiation
Both libraries cover this area well. However, the NAG coverage is better.
IMSL Chapter E - Eigenanalysis
The IMSL chapter is equivalent to the NAG Chapter F02. Both of the libraries have good subroutines available for all the commonly required forms of the eigenvalue problem. The NAG chapter is of an especially high quality.
IMSL Chapter F - Forecasting, Time Series Analysis, Fourier Transforms
The Fourier transform section is equivalent to the NAG C06 chapter. The forecasting and time series analysis subroutines are not available in the NAG Library (but are planned for a future release in the G13 chapter).
IMSL Chapter G - Generation and Testing of Pseudo-random Numbers
The NAG G05 chapter contains the equivalent subroutines. Both libraries give an excellent coverage of this area.
IMSL Chapter I - Interpolation, Approximation, Smoothing
The NAG E01 chapter covers the subject area of interpolation. The NAG E02 chapter covers the subject area of approximation and smoothing.
IMSL Chapter L - Linear Algebraic Equations
The NAG F04 chapter contains the equivalent subroutines.
As with the chapter on eigenanalysis, this subject area is given excellent coverage by both libraries.
IMSL Chapter M - Mathematical and Statistical Special Functions
The NAG S chapter contains the equivalent subroutines. There is a wide diversity of the special functions which are covered. The user will generally need to consult both libraries to find an implementation of the subroutine required. IMSL has more statistical special functions and NAG has more mathematical special functions.
IMSL Chapter N - Non-parametric Statistics
Subroutines for this subject area are only available in the IMSL Library.
IMSL Chapter O - Observation Structure, Multivariate Statistics
The NAG G08 chapter covers this area. There is a much wider coverage in the IMSL Library.
IMSL Chapter R - Regression Analysis
The NAG G02 chapter contains some subroutines which cover part of this subject area. It does not include subroutines for stepwise regression analysis or curvilinear regression analysis. For curvilinear regression analysis the NAG documentation suggests choosing a model and then using a least squares fit subroutine from the E04 (minimization) chapter.
IMSL Chapter S - Sampling
Subroutines for this subject area are only available in the IMSL Library.
IMSL Chapter U - Utility Subroutines
This chapter has no equivalent chapter in the NAG Library. It consists of two separate subsets:
1) Subroutines for input/output in various special forms, such as matrix input/output and lineprinter histogram drawing.
2) HELP subroutines, to obtain information about various IMSL Library aspects
IMSL Chapter V - Vector Arithmetic & Sorting
The NAG F01 chapter contains the equivalent subroutines for vector arithmetic. The coverage of the subject area in both libraries is good. The NAG M01 chapter contains the equivalent sorting routines.
IMSL Chapter Z - Zeros and Extrema, Linear Programming
The NAG C02, C05 and E04 chapters contain the subroutines for zeros and extrema.
The NAG H Chapter contains the linear programming subroutines.
The NAG E04 Chapter gives a much wider coverage of the general problem of finding local maxima or minima of a function.
The NAG H Chapter (Operations Research) contains more than simply linear programming subroutines.
Chapters and Facilities in NAG which are not available in IMSL.
NAG has explicit chapters for mathematical and machine constants (X01 and X02).
Full chapters on determinants (F03) and orthonormalization (F05) are present in the NAG Library.
A chapter on integral equations (D05) is provided in the NAG Library.
In appendix B a full comparative list of the various subroutines in the NAG & IMSL libraries is given.
Both organizations actively encourage users to request inclusion of any algorithms which are not present. IMSL even provides a formal "RAI" request service (Request for Ability Inclusion) with a specific form for the users to complete.
4. DOCUMENTATION
4.1 Documentation Overview
Before using a subroutine from either of the libraries it is necessary to read the relevant documentation. This, and the following sections, give information which will assist the programmer to use the available documentation effectively.
The documentation may be considered in five separate parts.
1) General introductory information.
Information of a general introductory type may be found in this document and in the general introductions to the NAG Library Manual and the IMSL Library Manual. Both of these manuals are available for reference in the Computing Support Library (Room 1871 building A36).
2) Subject introduction and algorithm choice.
Documentation which gives background information about a subject area in numerical mathematics or statistics, together with advice on choice of subroutines for different problems within the subject area.
This information may be found in the introduction to the relevant chapter of the NAG Library Manual or the IMSL Library Manual. Also, in the case of the NAG Library only, there is a publication titled the NAG Mini-Manual, which is simply a collection of all the chapter introduction documents. Again this manual is available for reference in the Computing Support Library.
3) Individual Subroutines Documentation.
Documentation about individual subroutines which explains in detail how to use the subroutines in a standard FORTRAN manner (i.e. without details of how to use them on the JRC-Ispra Computing Centre Service). This information may be found in the individual routine documents in the IMSL Library Manual and the NAG Library Manual.
4) Implementation Specific Documentation.
Documentation which relates the documentation described in 2) and 3) to a particular implementation of the IMSL or NAG Library (e.g. documentation giving details of IBM specific features of certain aspects of the particular library).
This information is provided in the form of a short document by both IMSL and NAG. Copies of these documents have been included in the reference copies of both the IMSL Library Manual and the NAG Library Manual. Also, some of the more important information of this type is presented in the following sections of this document.
5) Use of the libraries at the JRC-Ispra
Documentation giving details of how to use the library on the JRC-Ispra Computing Centre Service. This information is available in section 6. of this document.
4.2 IMSL Documentation
The IMSL documentation is in the form of the IMSL Library Manual. This is at present in three volumes. The manual structure is as follows:
Introduction
A general introduction which also gives some information about specific features relating to different computer-compiler environments.
Contents
A brief description of each subroutine, ordered by chapter.
KWIC Index
A Keyword-in-Context (KWIC) index of the subroutines to help the user locate the chapter/subroutine required.
Chapters A-Z
Each chapter is split into two parts:
a) The general introduction which contains:
Chapter Name
Quick Reference Guide to Chapter Facilities
Featured Abilities
Name Conventions for this Chapter
Special Instructions on Usage (Optional)
Subtleties to Note (Optional)
Pitfalls to Avoid (Optional)
The chapter introduction is followed by individual subroutine documentation.
b) Individual subroutine documents
Subroutine documentation consists of two parts. The first is a copy of comment lines that appear at the beginning of each subroutine source deck.
The comment lines are as follows:
IMSL ROUTINE NAME - routine name
PURPOSE - a statement of the purpose of the routine
USAGE - the form of the subprogram CALL with arguments listed
ARGUMENTS - a description of the arguments in the order of their occurrence in USAGE
PRECISION/HARDWARE - environment specific information giving the precision of the routine - SINGLE, or DOUBLE
REQD. IMSL ROUTINES - a list of all IMSL routines called (directly and indirectly) by this routine
NOTATION - reference to manual introduction and IMSL routine UHELP
REMARKS (optional) - details pertaining to code usage
The second part of the document (which does not appear in the source code) includes the following sections:
ALGORITHM - a brief statement of the algorithm and references to detailed information
PROGRAMMING NOTES (optional) - programming details not covered elsewhere
ACCURACY (optional) - a statement about the accuracy of the routine
EXAMPLE - an example showing subroutine input, required dimension and type statements and output
4.3 NAG Documentation
The NAG documentation for use at the JRC-Ispra Computing Centre Service is in the form of three publications:
1) the NAG Library Manual (at present in 6 volumes)
2) the NAG Mini-Manual (the foreword, introduction and chapter introductions from the NAG Library Manual)
3) the NAG IBM FORTRAN Implementation Documents (single and double precision)
The NAG Library Manual
The manual structure is as follows:
Foreword
Written by Professor L. Fox (Oxford University) and Dr. J.H. Wilkinson (N.P.L.). This is interesting and educational reading.
Introduction
Contains a great deal of important information.
Chapters A02-X04
Each is split into two parts:
a) the chapter introduction which contains:
1. The scope of the chapter
2. Background to the problems
3. Recommendations on choice and use of routines
b) the individual routine documents which are structured as follows:
All routine documents have 13 numbered sections with the following headings:
1. Purpose
2. Specification
3. Description
4. References
5. Parameters
6. Error Indicators
7. Auxiliary Routines
8. Timing
9. Storage
10. Accuracy
11. Further Comments
12. Keywords
13. Example
The NAG Mini-Manual
This is formed from the foreword, introduction and all of the chapter introductions. This is a useful manual for an introduction to the NAG Library and for helping the user to find the routine which he requires. Actual routine specifications are, however, only found in the NAG Library Manual.
The NAG IBM FORTRAN Implementation Documents
These documents contain details of how the IBM FORTRAN implementation of the NAG Library should be used (in a general sense) and how the implementation differs from the standard implementation (as defined by the NAG Library Manual). It is important that all prospective library users read the appropriate single or double precision documents. Both documents are inserted at the beginning of each NAG Library Manual.
5. IMPLEMENTATION SPECIFIC DETAILS
5.1 IMSL Implementation Details
The subroutine naming convention in the IMSL libraries is the same for both single and double precision versions. This implies that in one program it is not normally possible to include one routine from the single precision library and another routine from the double precision library.
Although most subroutines are available in both single and double precision versions, there are some exceptions. In each routine specification document there is a section "PRECISION/HARDWARE". The user must check this to make sure that the routine is implemented in the required environment.
SINGLE/H32 means that the subroutine is available in the IBM FORTRAN single-precision library.
DOUBLE/H32 means that the subroutine is available in the IBM FORTRAN double-precision library.
Examples given in the routine specifications will normally require some modification before they are suitable for use with either the single or the double precision library. The examples are normally written for the single precision version, so the usual changes (REAL to DOUBLE PRECISION, etc.) are necessary to run them with the double precision version of the subroutines.
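As a sketch of such a conversion (ROUTIN is a placeholder, not an actual IMSL routine name; the IMSL routine name itself does not change between libraries), only the declarations and constants are affected:

```
C     SINGLE PRECISION VERSION (SYS1.LIBMASXS)
      REAL X,Y
      X=0.5E0
      CALL ROUTIN(X,Y)
C
C     DOUBLE PRECISION VERSION (SYS1.LIBMASXD) - SAME ROUTINE NAME
      DOUBLE PRECISION X,Y
      X=0.5D0
      CALL ROUTIN(X,Y)
```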
5.2 NAG Implementation Details
The naming convention for NAG subroutines is given in Section 2.2. The sixth character of the NAG subroutine name is a letter which defines the language type of the subroutine. This letter is also used to distinguish between the single and double precision IBM FORTRAN subroutines.
For the single precision library the sixth character of the routine name is always E.
For the double precision library the sixth character of the routine name is always F.
Consequently, every subroutine has a different name in the single and double precision libraries, so mixing single and double precision NAG subroutines in one program is possible.
For double precision:
CALL E04CGF(N,X,F,IW,LIW,W,LW,IFAIL)
For single precision:
CALL E04CGE(N,X,F,IW,LIW,W,LW,IFAIL)
In the NAG Library Manual there are certain terms which are italicized. The implication is that these terms are implementation dependent and should be replaced by the appropriate actual term for the implementation being used (i.e., for the single precision IBM FORTRAN implementation or the double precision IBM FORTRAN implementation).
| Italicized term | IBM FORTRAN single precision | IBM FORTRAN double precision |
|--------------------------|------------------------------|-----------------------------|
| real | REAL (REAL*4) | DOUBLE PRECISION (REAL*8) |
| complex | COMPLEX*8 | COMPLEX*16 |
| basic precision | single precision | double precision |
| additional precision | double precision (REAL*8) | quadruple precision (REAL*16)|
Example programs published in the Library Manual are in single precision and should therefore seldom require modification for use with the single precision library. Use of the example programs with the double precision library may require modifications in the following areas:
1) Inserting as appropriate REAL*8 and COMPLEX*16 statements
2) Changing any intrinsic functions to their double precision versions, e.g., SQRT to DSQRT
3) Specifying real constants in double precision (D) format.
4) Making any implicit integer-to-real conversions explicit by using the DFLOAT function
5) Changing any E formats to D formats
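For illustration, a fragment in its single precision form and its double precision rewrite, applying points 1, 2, 3 and 5 above (the variables and statement labels are illustrative only):

```
C     SINGLE PRECISION FORM
      REAL A,B
      A=2.0E0
      B=SQRT(A)
      WRITE(6,100)B
  100 FORMAT(1H ,E12.4)
C
C     DOUBLE PRECISION FORM
      DOUBLE PRECISION A,B
      A=2.0D0
      B=DSQRT(A)
      WRITE(6,200)B
  200 FORMAT(1H ,D12.4)
```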
6. USING THE LIBRARIES
The IMSL Library and NAG Library are available for use on the JRC-Ispra Computing Centre Service. The following sections describe their use in FORTRAN programs, both in batch and from a TSO foreground session. In section 7, a brief description is given of how to include NAG and IMSL subroutines in programs written in other languages.
6.1 Library data set definitions
There are four IMSL and NAG load module libraries available for use. These are the single and double precision versions of both libraries. The names of the data sets are given in the following table:
| Library | Data set name |
|--------------------------|-----------------|
| IMSL single precision | SYS1.LIBMASXS |
| IMSL double precision | SYS1.LIBMASXD |
| NAG single precision | SYS1.LIBNAGS |
| NAG double precision | SYS1.LIBNAGD |
ALL OF THE DATA SETS IN THE ABOVE TABLE ARE CATALOGED
6.2 Use of the Libraries in Batch
Users may easily access one of the libraries by using one of the standard FORTRAN G1 compiler procedures in the following manner:
```
// EXEC FTG1CG,PRN=abcd
```
where abcd is replaced as follows:
- **NAGS** - NAG single precision library
- **NAGD** - NAG double precision library
- **MASXS** - IMSL single precision library
- **MASXD** - IMSL double precision library
So, for example:
```
// EXEC FTG1CG,PRN=NAGS
```
implies a FORTRAN compilation, load and go, with subroutines taken from the NAG single precision library.
Note. The use of the FTG1CG procedure is only an example. The libraries may be used in this manner with all of the following FORTRAN G1 and H Extended procedures.
FTG1CL FTHECL
FTG1CLG FTHECLG
FTG1CG FTHECG
FTG1L FTHEL
FTG1LG FTHELG
FTG1G FTHEG
Appendix A gives examples (with explanations) of the use of the above-mentioned procedures.
Use of More than One Library in the Same Program
As stated in section 5.1, it is not normally possible to use subroutines from both the IMSL single and double precision libraries in the same program, because the naming conventions of the two libraries are identical. However, it is possible to mix subroutines from the two NAG libraries, and also to mix subroutines from the NAG libraries with one of the IMSL libraries. In these cases the job control statement given in the previous section cannot be used, because subroutines must be included from more than one library; instead, the libraries are added to the standard SYSLIB by concatenation.
e.g.
```
// EXEC FTG1CLG
//CMP.SYSIN DD *
   ...
   FORTRAN source program
   ...
/*
//LKED.SYSLIB DD
// DD
// DD
// DD DSN=SYS1.LIBNAGS,DISP=SHR
// DD DSN=SYS1.LIBNAGD,DISP=SHR
```
In general, any 2 or 3 of the 4 libraries (except combinations including the two IMSL libraries) may be included by concatenation in this manner (see Appendix A for an example).
6.3 Use of Library from a TSO Session
There are two different ways in which subroutines from the mathematical libraries may be used in TSO foreground FORTRAN compilations.
a) Using the FG1CLG command procedure to perform the compilation, link edit and execution. With this procedure it is possible to obtain the necessary subroutines from one of the libraries by including a specific parameter.
b) Using the FORT TSO command followed by either LOADGO or LINK it is possible to include subroutines from one or more of the libraries.
Note. It is also possible to make use of the mathematical libraries by using the CONCAT TSO command. For further details see the IBM Manual TSO Command Reference Manual (GC28-6732).
a) Using FG1CLG
The TSO command procedure FG1CLG has a parameter PRN(---). This parameter is equivalent to the PRN= parameter of the batch FORTRAN procedures and allows the inclusion of subroutines from one of the mathematical libraries. The operand of PRN(---) is the xxxx part of the library name SYS1.LIBxxxx (e.g. for SYS1.LIBNAGS the user should specify PRN(NAGS)). The use of this technique is shown in example 1 below.
b) Use of FORT followed by either LOADGO or LINK
Using this method the compilation of the FORTRAN program is split into two separate phases:
1) The compilation to produce an object module.
2) The use of either the link-editor or the loader to take the object module together with any appropriate subroutine from the various subroutine libraries and produce an executable program.
Note. If the link editor procedure is used then a load module version of the program is created. This must be executed using the TSO CALL command.
In the second stage of this operation it is possible to include subroutines from one or more of the mathematical libraries by using the LIB(---) parameter. Examples of the use of these techniques are given in examples 2, 3 and 4.
Note for NAG Users
Users are reminded that in the case of NAG library routines the routine names are different for single and double precision libraries.
E04CAF on the double precision library becomes:
E04CAE on the single precision library (i.e. the final character is changed from F to E)
However, for the IMSL library the routine names are the same for the single and double precision versions. Therefore, in general, it will not be possible to use subroutines from the single & double precision IMSL libraries in the same program.
Examples of TSO Usage of NAG & IMSL Subroutines
In the following examples lines typed by the user are shown in lower case. The carriage return/ENTER character at the end of each input line is marked by a CR.
The following examples are given:
Example 1:
This example shows the use of the FG1CLG TSO command procedure for a program which uses the IMSL single precision library.
Example 2:
This example shows the use of the NAG double precision library using FORT followed by LOADGO.
Example 3:
This example shows the compilation link edit and execution of a program which uses the NAG single precision library.
Example 4:
This example shows the compilation, load and execution of a program which uses both the NAG and IMSL single precision libraries.
Example 1
This example shows the use of the FG1CLG TSO command procedure for a program which uses the IMSL single precision library.
```
qed newcomp1 fortgi new CR
INPUT
00010c example of imsl single precision library CR
00020c analysis of two-way classification design data CR
00030 integer i,ndf(5),ier CR
00040 real y(6),em(11),gm,s(5) CR
00050 data y/73.,90.,98.,107.,94.,49./ CR
00060 call arcban(y,1,3,2,em,gm,s,ndf,ier) CR
00070 write(6,99999) (em(i),i=1,3) CR
00080 write(6,99998) (em(i),i=4,5) CR
00090 stop CR
00100 99999 format(18h block means are : ,11f7.2) CR
00110 99998 format(22h treatment means are : ,11f7.2) CR
00120 end CR
00130 CR
QED
scan CR
QED
end save CR
SAVED
READY
fg1clg newcomp1 prn(masxs) CR
DATA SET NEWCOMP1.LIST NOT IN CATALOG
DATA SET NEWCOMP1.OBJ NOT IN CATALOG
DATA SET NEWCOMP1.DECK NOT IN CATALOG
DATA SET NEWCOMP1.SYSUT1 NOT IN CATALOG
DATA SET NEWCOMP1.DUMP NOT IN CATALOG
DATA SET NEWCOMP1.LOAD NOT IN CATALOG
UTILITY DATA SET NOT FREED, IS NOT ALLOCATED
UTILITY DATA SET NOT FREED, IS NOT ALLOCATED
ENTER CONTROL STATEMENTS-
END OF CONTROL STATEMENTS
BLOCK MEANS ARE: 81.50 102.50 71.50
TREATMENT MEANS ARE: 88.33 82.00
READY
```
In stage A a FORTRAN program is typed by the user (using the QED editor). Note the use of the SCAN subcommand of QED to check the validity of the FORTRAN program.
In stage B the FG1CLG TSO command procedure is used to compile, link edit and execute the program. Note that link editor control statements may be input; typing a CR without any other information ends these control statements.
Example 2
This example shows the use of the NAG double precision library using FORT followed by LOADGO.
```
list newcomp2.fort CR
NEWCOMP2.FORT
00010 C EXAMPLE OF NAG DOUBLE PRECISION LIBRARY
00020 C USES F03AAF - MATRIX DETERMINANT CALCULATION
00030 DOUBLE PRECISION DETERM,A(4,4),WKSPCE(18)
00040 INTEGER I,N,J,IA,IFAIL
00050 READ(5,99999) (WKSPCE(I),I=1,7)
00060 N=3
00070 READ(5,99998) ((A(I,J),J=1,N),I=1,N)
00080 IA=4
00090 IFAIL=1
00100 WRITE(6,99997) (WKSPCE(I),I=1,6)
00110 CALL F03AAF(A,IA,N,DETERM,WKSPCE,IFAIL)
00120 IF(IFAIL.EQ.0) GOTO 20
00130 WRITE(6,99996)IFAIL
00140 STOP
00150 20 WRITE(6,99995)DETERM
00160 STOP
00170 99999 FORMAT(6A4,A3)
00180 99998 FORMAT(3F5.0)
00190 99997 FORMAT(4(1X/),1H ,5A4,A3,7HRESULTS/1X)
00200 99996 FORMAT(25HOERROR IN F03AAF IFAIL = ,I2)
00210 99995 FORMAT(24HOVALUE OF DETERMINANT = ,F4.1)
00220 END
READY
fort newcomp2 CR
G1 COMPILER ENTERED
SOURCE ANALYZED
PROGRAM NAME = MAIN
* NO DIAGNOSTICS GENERATED
READY
loadgo newcomp2.obj lib('sys1.libnagd') fortlib CR
f03aaf example program data CR
C1
33 16 72 CR
-24 -10 -57 CR
-8 -4 -17 CR
C
F03AAF EXAMPLE PROGRAM RESULTS
C2
VALUE OF DETERMINANT = 6.0
```
In stage A a previously created data set is listed.
In stage B the program stored in the data set is compiled.
In stage C the load and execution of the program is performed.
In C1 the data is input.
In C2 the output is produced.
Example 3
This example shows the compilation link edit and execution of a program which uses the NAG single precision library.
```
list newcomp3.fort
NEWCOMP3.FORT
00010 INTEGER MAXDIV,IFAIL,NOFUN
00020 REAL A,B,EPS,ACC,ANS,ERROR,FUN
00030 EXTERNAL FUN
00040 A=0.0
00050 B=1.0
00060 MAXDIV=20
00070 EPS=1.0E-8
00080 ACC=0.0
00090 IFAIL=1
00100 CALL D01AGE(A,B,FUN,MAXDIV,EPS,ACC,ANS,ERROR,NOFUN,
00110 * IFAIL)
A 00120 WRITE(6,99998)ANS,ERROR,NOFUN
00130 IF(IFAIL)20,40,20
00140 20 WRITE(6,99997)
00150 40 STOP
00160 99998 FORMAT(/12H INTEGRAL = ,F11.4,3X,9H ERROR = ,E11.4,3X,
00170 * 20H NUMBER OF POINTS = ,I3)
00180 99997 FORMAT(43H METHOD WAS UNABLE TO EVALUATE THE INTEGRAL)
00190 END
00200 REAL FUNCTION FUN(X)
00210 REAL X
00220 FUN=4.0/(1.0+X*X)
00230 RETURN
00240 END
READY
fort newcomp3
G1 COMPILER ENTERED
SOURCE ANALYZED
PROGRAM NAME = MAIN
B * NO DIAGNOSTICS GENERATED
SOURCE ANALYZED
PROGRAM NAME = FUN
* NO DIAGNOSTICS GENERATED
*STATISTICS* NO DIAGNOSTICS THIS STEP
READY
link newcomp3.obj lib('sys1.libnags') fortlib
C READY
call newcomp3
TEMPNAME ASSUMED AS A MEMBER NAME
D INTEGRAL = 3.1416   ERROR = 0.3052E-04   NUMBER OF POINTS = 9
```
In stage A a previously created data set is listed. In stage B the program stored in the data set is compiled. In stage C the link edit of the program is performed. The output load module of the program is stored in NEWCOMP3.LOAD(TEMPNAME). In stage D the library program is executed.
Example 4
This example shows the compilation, load and execution of a program which uses both the NAG and IMSL single precision libraries.
```
list newcomp4.fort CR
NEWCOMP4.FORT
00010 C EXAMPLE OF THE USE OF TWO LIBRARIES
00020 C NAG & IMSL (BOTH SINGLE PRECISION)
00030 C THE PROGRAM FINDS THE ROOT OF A FUNCTION
00040 INTEGER MAXFN,IER
00050 REAL A,B,FUN
00060 EXTERNAL FUN
00070 A=-10.
00080 B=10.
00090 MAXFN=100
00100 CALL ZBRENT(FUN,0.0,8,A,B,MAXFN,IER)
00110 IF(IER.EQ.0)GOTO 20
A 00120 WRITE(6,99998)IER
00130 STOP
00140 20 WRITE(6,99999)B
00150 STOP
00160 99999 FORMAT(20HOESTIMATE OF ROOT = ,E10.3)
00170 99998 FORMAT(25HOERROR IN ZBRENT IER = ,I2)
00180 END
00190 REAL FUNCTION FUN(X)
00200 REAL X
00210 IFAIL=0
00220 FUN=S15ACE(X,IFAIL)-0.75
00230 RETURN
00240 END
READY
fort newcomp4 CR
G1 COMPILER ENTERED
SOURCE ANALYZED
PROGRAM NAME = MAIN
B NO DIAGNOSTICS GENERATED
SOURCE ANALYZED
PROGRAM NAME = FUN
NO DIAGNOSTICS GENERATED
*STATISTICS* NO DIAGNOSTICS THIS STEP
READY
loadgo newcomp4.obj lib('sys1.libnags' 'sys1.libmasxs') fortlib CR
C ESTIMATE OF ROOT = -0.674E+00
READY
```
In stage A a previously created data set is listed.
In stage B the program stored in the data set is compiled.
In stage C the load and execution of the program is performed.
(Note, in particular, the use of the LIB parameter with two libraries.)
7. INCLUSION OF LIBRARIES IN OTHER LANGUAGE PROGRAMS
Information regarding the use of the IMSL and NAG libraries in programs written in other programming languages (i.e. non-FORTRAN programs) will be included in a later version of this document.
APPENDIX A
Examples of the Use of the Libraries in Batch Jobs
1. Example of Use of the IMSL Library
The example shows the use of the IMSL single precision library using the IMSL subroutine ZX3LP which is an "easy-to-use" linear programming subroutine which uses the revised Simplex algorithm (see Hadley, G. "Linear Programming", Addison-Wesley, Reading, Massachusetts, 1962).
The problem is to maximise \( x_1 + 3x_2 = S \)
Subject to the constraints:
\[
\begin{align*}
x_1 & \leq 1 \\
x_2 & \leq 1 \\
x_1 + x_2 & \leq 1.5 \\
x_1 + x_2 & \geq 0.5 \\
x_1 & \geq 0 \\
x_2 & \geq 0
\end{align*}
\]
Results of Example
ZX3LP EXAMPLE PROGRAM RESULTS
VALUE OF OBJECTIVE FUNCTION= 3.500
SOLUTION VECTOR= 0.500 1.000
See next page for listing of example
Listing of Example 1 Job
// JOB (YOUR JOB CARD)
$ CLASS 2
// EXEC FTG1CG,PRN=MASXS
//CMP.SYSIN DD *
C ZX3LP EXAMPLE PROGRAM
C
INTEGER IA,N,M1,M2,IW(16),IER
REAL A(6,2),B(6),C(2),RW(52),PSOL(4),DSOL(6),S
C NUMBER OF UNKNOWNS
N=2
C M1=NUMBER OF INEQUALITY CONSTRAINTS
M1=4
C M2=NUMBER OF EQUALITY CONSTRAINTS
M2=0
C IA=FIRST DIMENSION OF A
IA=6
C SET UP MATRIX OF CONSTRAINTS
A(1,1)=1.0
A(1,2)=0.0
A(2,1)=0.0
A(2,2)=1.0
A(3,1)=1.0
A(3,2)=1.0
A(4,1)=-1.0
A(4,2)=-1.0
C VECTOR OF RIGHT-HAND SIDES OF CONSTRAINT EQUATIONS
B(1)=1.0
B(2)=1.0
B(3)=1.5
B(4)=-0.5
C COEFFICIENTS OF OBJECTIVE FUNCTIONS
C(1)=1.0
C(2)=3.0
CALL ZX3LP (A,IA,B,C,N,M1,M2,S,PSOL,DSOL,RW,IW,IER)
C CHECK IF ERROR (IER NOT EQUAL TO ZERO)
IF(IER.NE.0)WRITE(6,1000)IER
IF(IER.NE.0)GOTO 20
C WRITE RESULTS
WRITE(6,1001)S,PSOL(1),PSOL(2)
20 STOP
1000 FORMAT(' ERROR IN ZX3LP IER= ',I5)
1001 FORMAT(' ZX3LP EXAMPLE PROGRAM RESULTS'/
1 ' VALUE OF OBJECTIVE FUNCTION=',F8.3/
2 ' SOLUTION VECTOR=',2F10.3)
END
/*
2. Example of the Use of the NAG Library
The example shows the use of the NAG double precision subroutine E04CGF, which implements an easy-to-use "quasi-Newton" algorithm (see Gill P.E. & Murray W., "Quasi-Newton methods for unconstrained optimization", Journal of the Institute of Mathematics and its Applications, 1972, Vol. 9, 91-108) for finding an unconstrained minimum of a function $F(X_1, X_2, \ldots, X_N)$ of the $N$ independent variables $X_1, X_2, \ldots, X_N$, using function values only.
In the example the function which is minimized is
$$F(X_1, X_2) = e^{X_1} \cdot (4X_1^2 + 2X_2^2 + 4X_1X_2 + 2X_2 + 1)$$
starting from an initial guess of $X_1 = -1$ and $X_2 = 1$.
Results of Example
E04CGF EXAMPLE PROGRAM RESULTS
FUNCTION VALUE ON EXIT IS 0.0000
AT THE POINT 0.5000 -1.0000
See next page for listing of example
Listing of Example 2 Job
// JOB(YOUR JOB CARD)
$ CLASS 2
// EXEC FTG1CG,PRN=NAGD
//CMP.SYSIN DD *
C E04CGF EXAMPLE PROGRAM TEXT
C ..LOCAL SCALARS..
DOUBLE PRECISION F
INTEGER I, IFAIL, LIW, LW, N, NOUT
C ..LOCAL ARRAYS..
DOUBLE PRECISION W(29), X(2)
INTEGER IW(4)
C ..SUBROUTINE REFERENCES..
C E04CGF
C ..
DATA NOUT /6/
WRITE(NOUT,99999)
N=2
X(1)=-1.0D+0
X(2)= 1.0D+0
LIW=4
LW=29
IFAIL=1
CALL E04CGF(N,X,F,IW,LIW,W,LW,IFAIL)
C SINCE IFAIL WAS SET TO 1 BEFORE ENTERING E04CGF, IT IS
C ESSENTIAL TO TEST WHETHER IFAIL IS NON-ZERO ON EXIT
IF(IFAIL.NE.0) WRITE(NOUT,99998) IFAIL
IF(IFAIL.EQ.1) GO TO 20
WRITE(NOUT,99997) F
WRITE(NOUT,99996) (X(I),I=1,N)
20 STOP
C END OF E04CGF EXAMPLE MAIN PROGRAM
99999 FORMAT (////31H E04CGF EXAMPLE PROGRAM RESULTS/)
99998 FORMAT (16H ERROR EXIT TYPE,I3, 23H - SEE ROUTINE DOCUMENT)
99997 FORMAT (27H FUNCTION VALUE ON EXIT IS ,F12.4)
99996 FORMAT (13H AT THE POINT, 2F12.4)
END
C
SUBROUTINE FUNCT1(N,XC,FC)
C FUNCTION EVALUATION ROUTINE FOR E04CGF EXAMPLE PROGRAM -
C THIS ROUTINE MUST BE CALLED FUNCT1
C ..SCALAR ARGUMENTS..
DOUBLE PRECISION FC
INTEGER N
C ..ARRAY ARGUMENTS..
DOUBLE PRECISION XC(N)
C ..
C ..LOCAL SCALARS..
DOUBLE PRECISION X1, X2
C ..FUNCTION REFERENCES..
DOUBLE PRECISION DEXP
C ..
X1=XC(1)
X2=XC(2)
FC=DEXP(X1)*(4.0D+0*X1*(X1+X2)+2.0D+0*X2*(X2+1.0D+0)+1.0D+0)
RETURN
C END OF FUNCTION EVALUATION ROUTINE
END
3. An Example of More than one Library in a Batch Job
The test program which is shown as example 4 in section 6.3 (for a TSO session) may be executed using the following batch job.
```
// JOB(YOUR JOB CARD)
$ CLASS 2
// EXEC FTG1CLG
//CMP.SYSIN DD *
C EXAMPLE OF THE USE OF TWO LIBRARIES
C NAG & IMSL (BOTH SINGLE PRECISION)
C THE PROGRAM FINDS THE ROOT OF A FUNCTION
INTEGER MAXFN,IER
REAL A,B,FUN
EXTERNAL FUN
A=-10.
B=10.
MAXFN=100
CALL ZBRENT(FUN,0.0,8,A,B,MAXFN,IER)
IF(IER.EQ.0)GOTO 20
WRITE(6,99998)IER
STOP
20 WRITE(6,99999)B
STOP
99999 FORMAT(20HOESTIMATE OF ROOT = ,E10.3)
99998 FORMAT(25HOERROR IN ZBRENT IER = ,I2)
END
REAL FUNCTION FUN(X)
REAL X
IFAIL=0
FUN=S15ACE(X,IFAIL)-0.75
RETURN
END
/*
//LKED.SYSLIB DD
// DD
// DD
// DD DSN=SYS1.LIBNAGS,DISP=SHR
// DD DSN=SYS1.LIBMASXS,DISP=SHR
END OF DATA
```
APPENDIX B
Detailed Comparison of Content of the Libraries
(based on a table produced by Dr. P. Kemp, University of Newcastle (U.K.))
| NAG | IMSL |
|---------|--------|
| A02 COMPLEX ARITHMETIC | |
| square root | A02AAF |
| modulus | A02ABF |
| quotient | A02ACF |
| C02 ZEROS OF POLYNOMIALS | |
| complex coefficients | C02ADF |
| real coefficients | C02AEF |
| quadratic, real coeff's | - |
| , complex coeff's | - |
| C05 ROOTS OF ONE OR MORE TRANSCENDENTAL EQUATIONS | |
| real function of one variable | C05AAF |
| | C05ABF |
| | C05ACF |
| | C05AZF |
| , Bus & Dekker alg. | C05ADF |
| , bin. search B & D | C05AGF |
| , continuation secant | C05AJF |
| , bin. search, reverse comm. | C05AVF |
| , as C05AJF, reverse comm. | C05AXF |
| complex analytic function | - |
| n equations, n variables, functions | C05NAF |
| | C05NBF |
| | C05NCF |
| C06 SUMMATION OF SERIES, FOURIER TRANSFORMS | |
| FFT, 2**m real data values | C06AAF |
| FFT, real data values | C06EAF |
| (uses extra workspace) | C06FAF |
| FFT, Hermitian sequence | C06EBF |
| (uses extra workspace) | C06FBF |
| FFT, 2**m complex data values | C06ABF |
| FFT, complex data values | C06ADF |
| (uses extra workspace) | C06ECF |
| FFT estimates. power, cross spectra | - |
| real circular convolution, period 2m | C06ACF |
| sin, cos transforms. real series | - |
sum of Chebyshev series C06DBF -
conjugate of Hermitian sequence C06CBF -
conjugate of complex sequence C06GCF -
inverse Laplace transform - FLINV
FFT of array (1,2 or 3 dim) - FFTT3D
D01 QUADRATURE
finite interval D01ACF DCADRE
(Patterson algorithm) D01AGF DCSQDU
(de Doncker algorithm) D01AHF -
(for oscillating fns.) D01AJF -
(user-specified singularities) D01AKF -
(log-type end point sing.) D01ALF -
(Cauchy princ. value) D01APF -
(non-adaptive) D01BDF -
Gauss. integral D01BAF -
weights and abscissae D01BBF -
D01BCF -
infinite interval D01ANF -
double integral D01DAF DBCQDU
DBLINT
multiple integral, Monte Carlo D01FAF -
, Gauss D01FBF -
, adaptive D01FCF -
trigonometric integral D01ANF -
tabular function D01GAF -
spline E02BDF -
D02 ORDINARY DIFFERENTIAL EQUATIONS
IV problem, range D02BAF DREBS
D02CAF DVERK
D02EAF DGEAR
(stiff)
D02BBF -
D02CBF
D02EBF
(stiff)
D02BDF -
D02BGF
D02CGF
(stiff)
D02EGF
D02BHF -
D02CHF
D02EHF -
D02PAF -
D02QAF
D02QBF
(stiff)
interpolation for D02PAF, all comps. D02XAF -
, one comp. D02XBF -
D02QA/BF, all comp. D02XGF -
, one comp. D02XHF -
one step Runge Kutta
BV problem, 2 point
, 2-point, linear
, 2-point, non-linear
, generalized
, linear
collocation method
eigenvalues St-L., reg., finite range
, general
eigenfns. St-L., general
D02YAF - D02ADF - D02CBF -
D02JAF D02JBF D02GAF DTPTB
D02HAF D02HBF D02RAF
D02SAF D02AGF - D02TGF -
D02AFF D02KAF D02KDF -
D02KEF -
D03 PARTIAL DIFFERENTIAL EQUATIONS
elliptic, Laplace 2-d D03EAF -
, soln f.d. eqs. 5pt 2-d mol. D03EBF -
, Stone's strongly imp. 5pt D03UAF -
, soln f.d. eqs. 7pt 3-d mol. D03ECF -
, Stone's strongly imp. 7pt D03UBF -
triangulation D03MAF -
parabolic, 2 point BV, non linear D03PBF -
, (single eq.) D03PAF -
, (general sys.) D03PGF -
D04 NUMERICAL DIFFERENTIATION
fn of single real variable D04AAF DCSEVU
partial differentiation - DBCEVU
D05 INTEGRAL EQUATIONS
Fredholm, 2nd kind, split kernel D05AAF -
, smooth kernel D05ABF -
E01 INTERPOLATION
1 variable, equal spacing
, unequal spacing
, cubic spline
, periodic cubic spline
, Chebyshev polynomial
2 variables
E01ABF - E01AAF ICSICU
E01ADF ICSCCU
E01BAF - ICSPLN
E01AEF - E01ACF IBCICU
IBCIEU IQHSCV
E02 CURVE AND SURFACE FITTING
1-s curve, cubic splines
, polynomial
, Chebyshev series
, user supplied basis
1-s surface fit
minimax curve fit
minimax soln. over-deter. lin. eq.
L1 approx. linear fn
constraints
rational approx.
Pade approximant
evaluate Chebyshev series
cubic spline
ditto derivs.
ditto, definite integral
poly in 2 vars, from E02CAF
bi-cubic spline
rational fn. from E01RAF
poly. from Cheby. series
Chebyshev coeff's. of derivative
ditto of integral
sort 2-d data into panels
generate points in n-dim space
data smoothing, outlier detection
, cubic spline (easy to use)
, quasi Hermite
E02BAF ICSFKU
ICSVKU
E02ADF -
E02AFF -
E02AGF -
- IFLSQ
E02DAF -
E02CAF -
E02ACF -
E02GCF RLLMV
E02GAF RLLAV
E02BFF -
- IRATCU
E02RAF -
E02AEF -
E02BBF ICSEVU
E02BCF DCSEVU
E02BDF -
E02CBF -
E02DBF IBCEVU
E02RBF -
E02AKF -
E02AHF -
E02AKF -
E02ZAF -
ZSRCH
ICSMOU
ICSSCU
ICSSCV
IQHSCU
E04 MINIMISING OR MAXIMISING A FUNCTION
1 variable, fn values
, fn, 1 deriv.
1 var, easy use, fn values
, fn, 1 deriv.
, fn, 1 2 deriv.
1 var, comprehensive, fn values
, fn, 1 deriv.
bounded vars, easy use, fn values
, fn, 1 deriv.
, fn, 1 2 deriv.
bounded vars, comprehensive, fn values
, fn, 1 deriv.
constrained, fn values
E04ABF ZXGSN
ZXGSP
E04BBF -
E04CGF -
E04DEF ZXMIN
E04DFF ZXCGR
E04EBF -
E04CCF -
E04DBF -
E04JAF -
E04KAF -
E04KCF -
E04LAF -
E04JBF -
E04KBF -
E04KDF -
E04LBF -
E04UAF -
, fn, 1 deriv.
, fn, 1 2 deriv.
, quadratic form
sum of squares, fn values
, fn, 1 deriv.
, fn, 1 2 deriv.
sum sqs., comprehensive, fn vals.
, fn, 1 deriv.
, fn, 1 2 deriv.
1st deriv estimation
check 1st deriv
check 2nd deriv
check Jacobian
check Hessian
check 1st deriv fn constraints
check 2nd deriv fn constraints
F01 MATRIX OPERATIONS, INCLUDING INVERSION
invert real matrix, approximate
, accurate
invert real, sym, pos def, approx
, accurate
, simplified
, sym def band, approx
, accurate
, pos def
pseudo inverse and rank
singular value decomposition
QR decomposition, real, rank = n
, rank = n
LU decomposition, real
LU decomposition, real, banded
LU decomposition, real, sparse
LL' decomp., real, sym, pos. def.
, band
, complex, herm., pos. def.
ULDL'U' decomp., real, sym, def, band
LDL' of A+E, A symm, E diag.
QU of m by n matrix
UQ of m by n matrix
reduction of real, sym Ax=kBx. B def
ABx=kx. B def
real, band, sym Ax=kBx B def
real, general Ax=kBx
complex, general Ax=kBx
balance complex matrix
E04VAF -
E04VBF -
E04WAF -
E04WAF -
E04PDF ZXSSQ
E04GCF -
E04GEF -
E04HFF -
E04FCF -
E04GBF -
E04GDF -
E04HEF -
E04HBF -
E04HCF -
E04HDF -
E04YAF -
E04YBF -
E04ZAF -
E04ZAF -
F01AAF LINV1F
LINV2F
F01ADF LINV1P,
LINV2P
F01ACF -
F01ABF -
LIN1PB
LIN2PB
F01BPF -
F01BLF LGINF
F01BHF LSVDF
F02SZF LSVDB
F01AXF -
F01BKF -
F01BTF LUDATF
F01BMF -
F01LBF -
F01BRF -
F01BSF -
F01BXF -
F01MCF -
F01BNF LUDECP
F01BUF -
F01BQF -
F01QAF -
F01QBF -
F01AEF -
F01BDF -
F01BVF -
EQZQF
ELZHC
F01AVF EBALAC
balance real matrix
reduction, complex- u. Hessenberg
complex Herm- real tridiag
real upp. Hessenberg
accumulate F01AKF products
reduction real sym tridiag
, accumulating product
, packed storage
, real sym band tridiag (alt. storage)
, upper tri. bidiagonal
backtransformation of eigenvectors
-complex, after balancing
-complex, after Hessenberg reduction
-real, after balancing
-real, after Hessenberg reductions
-real sym., after reduction
-real sym., after reduction, packed
-Ax=kBx or ABx=kx
-BAx=kx
Householder trans., real
, real, zero 1 el.
, real, zero 2 els.
, complex
construct Givens rotation
apply Givens rotation
construct modified Givens rotation
apply modified Givens rotation
F02 EIGENVALUES AND EIGENVECTORS
blackbox, complex, all e-vals
, all e-vals -vecs
, selected e-vals -vecs
, complex Herm, all e-vals
, all e-vals -vecs
, complex generalised Ax=kBx
A,B band, 1 e-vec
, real, all e-vals
, all e-vals -vecs
, selected e-vals -vecs
, real symm., all e-vals
, all e-vals -vecs
, selected e-vals -vecs
, band, e-vals e-vecs
, generalised Ax=kBx
, symmetric Ax=kBx, e-vals
,e-vals -vecs
complex Hessenberg, all e-vals
, all e-vals -vecs
, selected e-vecs
reduced complex, all e-vals -vecs
reduced complex Herm, e-vals -vecs
F01ATF EBALAF
F01AMF EHESSC
F01BCF EHOUSH
F01AKF EHESSF
F01APF -
F01AGF EHOUSS
F01AJF -
F01AYF -
F01BJF -
F01BWF -
F01LZF -
F01AWF EBBCKC
F01ANF -
F01AUF EBBCKF
F01ALF EBBCKF
F01AHF -
F01AZF -
F01AFF -
F01BEF -
VHS12
VHS2R
VHS3R
VHS2C
D/SROTG
D/SROT
D/SROTMG
D/SROTM
F02AJF EIGCC
F02AKF EIGCC
F02BDF -
F02AWF EIGCH
F02AXF -
F02GJF EIGZC
F02SDF -
F02AFF EIGRF
F02AGF EIGRF
F02BCF -
F02AAF EIGRS
F02ABF EIGRS
F02BBF -
EIGBS
F02BJF EIGZF
F02ADF -
F02AEF -
F02ANF ELRH1C
F02ARF ELRH2C
F02BLF -
F02ARF -
F02AYF EHBCKH
reduced real, all e-vals -vecs F02AQF -
reduced general, complex - ELZVC
, real - EQZTF
real Hessenberg, all e-vals F02APF EQRH3F
, selected e-vecs F02BKF EQRH1F
reduced real symm, all e-vals -vecs F02AMF EHOBKS
real symm. band, selected e-vals F02BMF -
real tri-diag, all e-vals F02AVF EQRT2S
, selected e-vals F02BFF EQRT1S
, selected e-vals -vecs F02BEF -
SVD, real upper bidiagonal F02SZF LSVDB
, sing.values rt-vecs. m by n(m =n) F02WAF -
(m= n) F02WBF -
, sing. values vectors m by n F02WCF -
QU-fact. part of SVD F02WDF -
F03 DETERMINANTS
black box, complex F03ADF -
, real F03AAF -
, real symm. pos. def. F03ABF -
, real symm. pos. def.band F03ACF -
lu det, complex F03AHF -
, real F03AFF -
ll' det, real, symm. pos. def. F03AEF -
, real symm. pos. def. band F03AGF LUDAPB
real banded F03ALF -
complex Hermitian pos. def. F03AMF -
F04 SIMULTANEOUS LINEAR EQUATIONS
black box, complex, approx. F04ADF LEQT1C
, accurate - LEQ2C
, real, accurate, 1 rhs F04ATF -
, 1 rhs F04AEF LEQT2F
, approx., 1 rhs F04ARF -
, 1 rhs F04AAF LEQT1F
, real sym. def,acc, 1rhs F04ASF -
, 1 rhs F04ABF LEQT2P
, approx. - LEQT1P
, band, approx. F04LDF LEQT1B
, accurate - LEQT2B
, real sym. def band, approx F04ACF LEQ1PB
(variable band) F04MCF -
, acc. - LEQ2PB
, real sym indef., approx. - LEQ1S
, accurate - LEQ2S
soln inverse, real - LINV3F
, real sym. def. - LINV3P
factorised, complex, approx F04AKF -
, complex Herm., approx F04AWF -
, real, accurate F04AHF LUREFF
, approx. F04AJF LUELMF
F04AYF
F04AVF -
F04AXF -
F04AFF LUREFP
F04AGF LUELMP
F04AQF
F04AZF
F04ALF LUELPB
- LUREPB
least squares, rank n, accurate F04AMF LLBQF
, approx. F04ANF LLSQF
, rank n F04AUF -
, m by n m =n F04JAF -
m =n F04JDF -
, automatic treatment of rank.def. F04JGF -
L1, rank n E02GAF -
F05 ORTHOGONALIZATION
Schmidt orthogonalization F05AAF -
2-norm of vector F05ABF -
**G01 SIMPLE STATISTICAL CALCULATIONS**
produce a letter-value summary - BDLTV
descriptive, 1 variable, from data G01AAF BEIUGR
, from freq. table G01ADF BEIGRP
BEGRPS
, 2 vars, from data G01ABF -
frequency table from raw data G01AEF BDCOU1
1-way analysis of variance G01ACF -
2-way conting.tab. reduction signif. G01AFF CTRBYC
NHEXT
formation - BDCOU2
median polish of two-way table - BEMDP
compute exact probs. for conting table - CTPR
transgeneration matrix cols., in core - BDTRGI
, out of core - BDTRGO
var, co-var of linear fn., in core - BECVL
, out of core - BECVL1
plot of 2 vars (scatter plot) G01AGF -
plot vector against normal scores G01AHF -
print a box plot - USBOX
print stem and leaf display - USSLF
minimum and maximum in vector - USMMX
calculation of normal scores G01DAF -
general cts. prob. dist. fn. - MDGC
ratio ordinate to normal upper tail - MSNRAT
distribution fn., Students t G01BAF MDTD
, f G01BBF MDTNF
, chi-square G01BCF MDCH
, beta 1st kind G01BDF MDBETA
, normal S15ABF MDNOR
, inverse Students t G01CAF MDSTI
, inverse f G01CBF MDFI
, inverse chi square G01CCF MDCHI
, inverse beta 1st kind G01CDF MDBETI
, inverse normal G01CEF MDNRIS
, binomial - MDBIN
, bivariate normal - MDBNOR
, non-central chi sq. - MDCHN
, f (real deg freedom) - MDFDRE
, gamma - MDGAM
, hypergeometric - MDHYP
, Kolmogorov-Smirnov asymp. - MDSMR
, non-central t - MDTN
, Poisson, terms cum. prob. - MDTPS
inverse of cont. pdf - MDGCI
generate order stats., normal dist - GGNO
, unif. dist - GGUO
G02 CORRELATION AND REGRESSION ANALYSIS
Pearson product-moment corr coeff
-all vars, no missing values G02BAF BECOVM
, casewise deletion G02BBF -
, pairwise deletion G02BCF -
-subset, no missing values G02BGF -
, casewise deletion G02BHF -
, pairwise deletion G02BJF -
"correlation-like" coeff
-all vars, no missing values G02BDF -
, casewise deletion G02BEF -
, pairwise deletion G02BFF -
-subset, no missing values G02BKF -
, casewise deletion G02BLF -
, pairwise deletion G02BMF -
Kendall Spearman coeff
-no missing vals, overwriting input G02BNF -
, preserving input G02BQF -
-casewise treatment, overwriting G02BPF -
, preserving G02BRF -
- pairwise treatment G02BSF -
linear regression, constant term,
, no missing values G02CAF RLONE
, missing vals G02CCF RLONE
linear regression, no constant term,
, no missing vals G02CBF -
, missing vals G02CDF -
mult.lin.reg, const term, from corrn. G02CGF RLMUL
no const., from corrn.
from raw data
select els. from vectors and matrices
re-order vectors and matrix elements
means, st devs, corrn (out of core)
tetrachoric correlations
means, st devs, simple l.r. coeffs, st err
(missing values, in core)
(out of core)
means, st devs, 3rd 4th moments
(biserial/point biserial corrn)
biserial correlations
bivariate normal corrn. est.
from cont.table(ml method)
generate orthog. central comp. design
decode quadratic reg model
var est for decoded orth poly coeffs
coded
coeff decoder for orth. poly.model
leaps and bounds - best subsets reg.
replication err d.f. s.s.(in core)
(out of core)
univ. Curvilinear fit, orth poly
(easy use)
with weights
mean correction corrected ssps, in core
(out of core)
response control, simple lin reg model
inverse prediction
generate orth polys
confidence int. for responses, in core
(out of core)
residual anal for lin reg model
forward stepwise regression
(easy to use)
fit y=a b*(c**x) by least squares
log-linear fit of conting. table
inverse pred., fitted lin. reg.model
G04 ANALYSIS OF VARIANCE
latin square design
one-way classification
two-way crossed classification
two-way hierarchical classification
balanced incomplete block/lattice
contrast estimate and sums of sqs.
full factorial plan
(easy to use)
balanced complete design (b.c.d.)
general linear model
interval est. variance component
expected ms. for b.c.d. - AGXPM
expected data by unweighted means - AMEANS
covariance anal. for 1-way classn. - ANCOV1
completely nested design, equal subcl. - ANESTE
unequal subcl. - ANESTU
reordering data from a b.c.d. - AORDR
Student-Newman-Keuls test - ASNKMC
G05 RANDOM NUMBER GENERATORS
uniform over (0,1) G05CAF GGUBFS
uniform over (a,b) G05DAF -
exponential G05DBF GGEXN
logistic G05DGF -
normal G05DDF GGNML
GGNFM
GGNQF
lognormal G05DEF GGNLG
Cauchy G05DFF GGCAY
gamma G05DGF GGAMR
chi-square G05DHF GGCHS
Student's t G05DJF GGAMR
Snedecor's f G05DKF GGAMR
beta, 1st kind G05DLF GGBTR
2nd kind G05DMF -
uniform integer G05DYF GGUD
pseudo-random logical G05DZF -
Weibull generator G05DPF GGWIB
unif devs. from sphere surf, 3- or 4-space - GGSPH
vector of uniform (0,1) devs. - GGUBS
uniform (0,1) with shuffling - GGUW
geometric deviate - GGEOT
Poisson gen.-frequent param changes - GGPOI
vector of normal deviates - GGNML
triangular distn generator - GGTRA
general continuous distn. - GGVCR
multinomial deviate generator - GGMTN
integer from reference vector G05EYF -
set up reference vector, uniform G05EBF -
, Poisson G05ECF GGPOS
, binomial G05EDF GGBN
, negative binomial G05EEF GGBNR
, hypergeometric G05EFF GGHPR
reference vector from pdf or cdf G05EXF -
m.v. normal gen. using ref. vec. - GGNSM
time series ref. vect. init. G05EGF -
time series gen. using ref. vect. G05EWF FTGEN
initialise generator, repeatable G05CBF -
non-repeatable G05CCF -
save state of generator G05CFF -
restore state of generator G05CGF -
D-squared tally - GTDDU
D-squared test - GTD2D
moments of uniform random numbers - GTMNT
test for normality - GTNOR
prob. dist. into 2 equi-prob. states - GTPKP
poker test tally - GTPL
poker test - GTPOK
tally of co-ords. of pairs - GTPR
pairs (Goods serial) test - GTPST
runs test - GTRN
tally of no. of runs up and down - GTRTN
tally for triplets test - GTTRT
triplets test - GTTT
set of binomial random deviates - GGBN
set of discrete random devs., alias , table lookup - GGDA
set of deviates, mixture 2 exponent'ls - GGEXT
set of stable dist. random deviates - GGSTA
non. hom. Poisson process gen. - GGNPP
random permutation of 1,...,k - GGPER
simple random sample from finite pop. - GGSRS
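Most of the uniform generators above are multiplicative congruential generators at heart. As an illustration (the exact constants and code of G05CAF/GGUBFS are an assumption here, not reproduced from the libraries), the classic Lehmer generator with prime modulus 2**31 - 1 and multiplier 16807, widely used in libraries of this era, looks like this in Python:

```python
M = 2**31 - 1   # prime modulus 2147483647
A = 16807       # multiplier 7**5 (the "minimal standard" Lehmer generator)

def lehmer(seed):
    """Repeatable stream of uniform (0,1) deviates from a multiplicative
    congruential generator; an illustrative stand-in for G05CAF/GGUBFS,
    not the libraries' actual code. The seed must not be 0 mod M."""
    s = seed % M
    while True:
        s = (A * s) % M
        yield s / M

gen = lehmer(123457)
sample = [next(gen) for _ in range(3)]   # same seed -> same sequence
```

Re-seeding with the same value reproduces the sequence, which is exactly the "initialise generator, repeatable" behaviour of G05CBF.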
G08 NONPARAMETRIC STATISTICS
sign test G08AAF NBSIGN
Wilcoxon test G08ABF NRWMD
median test G08ACF -
Mann-Whitney test G08ADF NRWRST
Friedman test G08AEF -
Kruskal-Wallis test G08AFF NAK1
Kolmogorov-Smirnov 1-sample test G08CAF NKS1
Kendall coeff of concordance G08DAF NMCC
Mood's and David's tests G08BAF -
Wilson's anova, 2,3 way, no reps - NAWNRP
, 1,2,3 way, equ.reps. - NAWRPE
, uneq.reps. - NAWRPU
Noether's test for cyclical trend - NBCYC
Cochran q test - NBQT
Cox Stuart sign test for trends - NBSDL
nonparametric pdf estimation - NDMPLE
includance test - NHINC
Kolmogorov-Smirnov 2-sample test - NKS2
Kendall's test for correlation - NMKEN
significance of Kendall correlation - NMKSF
k sample trends test - NMKTS
Bhapkar v test - NRBHA
ranking-a vector - NMRANK
computing tie statistics - NMTIE
evaluate p.d.f. - NDEST
nonparametric p.d.f. estimation - NDKER
G09 PARAMETER ESTIMATION
interval est. of p (binomial) - BELBIN
lambda (poisson) - BELPOS
mean inf., normal dist., known var. - BEMNON
mean and var. inferences, normal - BEMSON
var inf., normal sample, known mean - BENSON
mean and var inf., 2 norm., uneq.var - BEPAT
, equ.var - BEPET
ml est norm.params from censored data - OTMLNR
G12 HYPOTHESIS TESTING
Chi-squared goodness of fit - GFIT
sample size/no class interval, chi-sq - GTCN
G13 TIME SERIES ANALYSIS
Box-Jenkins univariate modelling
-mean, var, autocov, autocorr, par.cor. - FTAUTO
-AR params prelim. estimation - FTARPS
-MA params prelim. estimation - FTMPS
-transforms, diff, seasonal diff. - FTRDF
-AR MA parameter estimation - FTMXL
-model analysis - FTCMP
-forecasting - FTCAST
transfer functions
-cross correlation of 2 series - FTCRXY
-prelim est for transfer fn model - FTTRN
means, vars, cross-cov - cor: 2 n-ch ser. - FTCROS
FFT power and cross spectra - FTFRS
single/multi chan tsa, time freq dom - FTREQ
Kalman filtering - FTKALM
Wiener forecasting - FTWEIN
multichannel Wiener forecast - FTWENW
ML par est mult chan 1 o/p ts model - FTWENX
MULTIVARIATE TECHNIQUES
cluster analysis - OCLINK
discriminant anal, linear a la Fisher - ODFISH
, m.v. normal linear - ODNORM
factor/pca, score coeffs - OFCOEF
, unrot.factor loading - OFCOMM
, factor rot., oblique axes - OFHARR
, unrot.fact.load (image) - OFIMAG
(princ.comp.mod.) - OFPPI
, oblique trans fact loading - OFPROT
, communalities norm.fact.res.cor mx - OFRESI
, orthog fact rot.(qu-, var-, equ-max) - OFROTA
(target matrix) - OFSCHN
pairwise dist. between cols of matrix - OCDIS
fact scores from fact coeffs - OFSCOR
principal component calculation - OFPRINC
Wilks test for m.v. norm indep. - OIND
SAMPLING
simple random sampling, prop. data - SSPAND
, cont. data - SSSAND
, cont. data, ratio/reg - SSRAND
strat. random sampling, prop. data - SSPBLK
, cont. data - SSSBLK
, cont. data, ratio/reg - SSRBLK
1-stage clust. sampling, cont. data - SSSCAN
2-stage sampling, cont. data - SSSEST
H OPERATIONS RESEARCH
lin. prog., simplex, 1 iteration H01ABF -
, revised simplex H01ADF ZXOLP
, contracted simplex H01BAF ZX4LP
, find pt. given lin. constraints H01AEF ZX3LP
quadratic programming H02AAF -
integer linear programming H02BAF -
transportation problem H03ABF -
M01 SORTING
vector, real, ascending M01ANF VSRTA
, descending M01APF -
, absolute values - VSRTM
, integers, ascending M01AQF -
, integers, descending M01ARF -
, characters, alphanumeric M01BBF -
, reverse alphanumeric M01BAF -
vector index, real, ascending M01AJF VSRTP
, descending M01AKF -
, absolute values - VSRTR
, integer, ascending M01ALF -
, descending M01AMF -
index to sorted, real, ascending M01AAF -
, descending M01ABF -
, integers, ascending M01ACF -
, descending M01ADF -
matrix rows, real, ascending M01AEF -
, descending M01AFF -
, integers, ascending M01AGF -
, descending M01AHF -
matrix columns, character, alphanum M01BDF -
, reverse M01BCF -
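The M01 chapter distinguishes three kinds of sort: sorting the vector itself, producing a *vector index* (the permutation that sorts it), and producing an *index to sorted* (the rank each element receives). A short Python sketch of this reading of the table (my interpretation of the entries, not NAG's code):

```python
data = [3.1, -1.0, 2.5, 0.0]

# "vector ... ascending": sort the values themselves (cf. M01ANF)
sorted_vals = sorted(data)                               # [-1.0, 0.0, 2.5, 3.1]

# "vector index": the permutation that sorts the vector (cf. M01AJF)
index = sorted(range(len(data)), key=lambda i: data[i])  # [1, 3, 2, 0]

# "index to sorted": the position each element takes after sorting (cf. M01AAF)
rank = [0] * len(data)
for pos, i in enumerate(index):
    rank[i] = pos                                        # [3, 0, 2, 1]
```

The two index forms are inverse permutations of each other, which is why the table lists them as separate routine families.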
P01 ERROR TRAPPING
value of error indicator P01AAF -
S APPROXIMATIONS OF SPECIAL FUNCTIONS
tan S07AAF -
arcsin S09AAF -
arccos S09ABF -
tanh S10AAF -
sinh S10ABF -
cosh S10ACF -
arctanh S11AAF -
arcsinh S11ABF -
arccosh S11ACF -
exponential integral S13AAF MMDSI
sine integral S13ADF -
cosine integral S13ACF -
gamma S14AAF MGAMA
log gamma S14ABF MLGAMA
logarithmic deriv of gamma fn. - MMPSI
cumulative normal distribution S15ABF MDNOR
complement of cumulative normal dist S15ACF -
error function S15AEF MERF
complement of error fn S15ADF MERRC
inverse complement error fn. - MERFC1
inverse error function - MERFI
Dawson's integral S15AFF MMDAS
Bessel function j0 S17AEF MMBSJ0
j1 S17AFF MMBSJ1
y0 S17ACF -
y1 S17ADF -
y, general order - MMBSYN
Airy function ai S17AGF -
bi S17AHF -
deriv. of Airy function ai S17AJF -
bi S17AKF -
modified Bessel function i0 S18AEF MMBSI0
i1 S18AFF MMBSI1
k0 S18ACF MMBSK0
k1 S18ADF MMBSK1
Fresnel integrals s(x) S20ACF -
c(x) S20ADF -
complete elliptic integral, 1st kind S21BBF MMDELK
, 2nd kind S21BCF MMDELE
, 3rd kind S21BDF -
Kelvin fns., order zero - MMKEL0
, order one - MMKEL1
, derivs. - MMKELD
decompose integer into prime factors - VDCPS
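The last entry, decomposing an integer into prime factors (VDCPS), can be sketched by plain trial division; this is an illustrative stand-in, not the IMSL routine:

```python
def prime_factors(n):
    """Trial-division decomposition of n into (prime, exponent) pairs,
    the job VDCPS performs (illustrative, not IMSL's code)."""
    factors = []
    d = 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            factors.append((d, e))
        d += 1
    if n > 1:
        factors.append((n, 1))  # whatever remains is itself prime
    return factors

factors_360 = prime_factors(360)   # [(2, 3), (3, 2), (5, 1)]
```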
X01 MATHEMATICAL CONSTANTS
pi X01AAF -
Euler's const X01ABF -
X02 MACHINE CONSTANTS
smallreal X02AAF -
smallest positive real X02ABF -
maxreal X02ACF -
X02ABF/X02AAF X02ADF -
largest neg. argument for exp X02AEF -
smallest x such that x, -x, 1.0/x, -1.0/x representable X02AGF -
floating point base X02BAF -
maxint X02BBF -
max n for 2**n representable X02BCF -
min n for 2**n representable X02BDF -
max decimal digits X02BEF -
active set size in paged environment X02CAF -
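Several of the X02 machine constants have direct modern counterparts. A Python sketch of a plausible mapping (the pairing with the NAG names is my reading of the table above, not an official correspondence):

```python
import sys

# Illustrative mapping of X02-style machine constants to Python's float info.
eps       = sys.float_info.epsilon      # relative precision ("smallreal", X02AAF assumed)
tiny      = sys.float_info.min          # smallest positive normalised real (X02ABF assumed)
maxreal   = sys.float_info.max          # largest representable real (X02ACF)
base      = sys.float_info.radix        # floating-point base (X02BAF)
max_exp_n = sys.float_info.max_exp - 1  # max n with 2**n representable (X02BCF)
digits    = sys.float_info.dig          # max reliable decimal digits (X02BEF)
```

On an IEEE-754 double platform this gives base 2, 15 reliable decimal digits, and 2**1023 as the largest representable power of two.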
X03 VECTOR/MATRIX ARITHMETIC
null vector F01CQF -
null matrix F01CAF -
unit matrix F01CBF -
copy vector, real F01CPF D/SCOPY
, complex - CCOPY
copy vector - matrix row F01CNF -
copy matrix F01CMF -
, partial F01CFF -
interchange vectors, real - D/SSWAP
, complex - CSWAP
interchange matrix row/column - VSRTU
const*vector, real - D/SSCAL
, complex - CSCAL
, real*complex - CSSCAL
const*vector vector, real - D/SAXPY
, complex - CAXPY
find. el. of max. magnitude, real - ID/SAMAX
, complex - ICAMAX
max. absolute value, vector - VABMXF
, matrix row/col. - VABMXS
sum of absolute values, real - D/SASUM
VABSMF
, complex - SCASUM
, real matrix row/col - VABSMS
vector euclidean norm, real - D/SNRM2
, complex - SCNRM2
matrix 1-norm - VNRMF/S1
matrix euclidean norm - VNRMF/S2
matrix infinity norm -
matrix addition F01CDF VUA..
matrix subtraction F01CEF -
partial matrix addition F01CGF -
partial matrix subtraction F01CHF -
matrix transposition F01CRF -
matrix multiplication F01CKF -
matrix multiplication by transpose F01CLF -
mult vector by symm. matrix (packed) F01CSF -
matrix polynomial - VPOLYF
vector convolution - VCONVO
matrix storage mode conversion - VCVT
scalar product, real F01DEF -
scalar prod. const, real, basic prec. F01DAF -
scalar prod. const, real, basic prec. F01DBF -
extended precision add - VXADD
, multiply - VXMUL
, store - VXSTO
**X04 INPUT/OUTPUT UTILITIES**
error message unit no. X04AAF -
advisory message unit no. X04ABF -
manipulate I/O unit numbers - UGETIO
set message level - UERSET
print error message - UERTST
plot cluster (from OCHIER) - USCLX
input of matrix - USCRDM
input of vector - USRDV
print histogram - USHIST
print results of regression - USLEAP
print pdf information - USPC
plot 2 pdf's - USPDF
print binary tree - USTREE
"help" information - UHELP
printer plot of functions - USPLT
print matrix - USWBM
print vector - USWFM
MANUAL REGISTRATION
If you wish to receive updates for this green book please complete the attached form and send it to:
Computing Support Library
Ms. A. Cambon
Building 36
Div. 1 Dept. A
JRC Euratom
21020 Ispra
Italy
I wish to receive future updates for "Using the IMSL & NAG Libraries".
Name ...........................................
Address .........................................
....................................................
Signature .......................................
Sixth Sunday in Ordinary Time
Year A
12th February 2017
Pastor’s Note: Matthew 5:17-37. God’s Own Rule, Golden Rule
We tend to think of the contents of today’s Gospel as a series of little pieces: a rule on this, a rule on that, a bit of spiritual or theological matter on some other point. But such a chop-it-up-into-sentences approach to the gospels is not good exegesis; it is a type of fundamentalism, and it misses the whole point of Jesus’ sermon. The central point is that the message of the gospel is greater than the sum of its parts. It is not a new rule about this, a change in the rules about something else and so forth; rather, the message is that God’s love is greater than all, and we are called to respond to that love in a complete, loving way. Love must always go beyond box ticking or it is not love. The first and foremost point is this: love and faith are more than box ticking. We can keep all the rules, but if our own lives have not got that spark of love and laughter, then we are not following the God of love, but ‘the great policeman in the sky’. That sparkle of love is what makes the difference between the saint and the boor. That sparkle in the heart is the ability to see beyond the rules, to glimpse a mystery that is greater than the universe, to glimpse the love of God beckoning us. We can only discover how to love by helping those in need, whether they deserve help or not. We can only discover how to love by standing with those who are oppressed, even if it is dangerous to us. We can only discover how to love by asking the Holy Spirit to enlighten our minds with wisdom.
Today’s Gospel continues the Sermon on the Mount. Far from making life easier, as we might expect, the Sermon is even more demanding than the Old Law. This list of Jesus’ sayings gives an indication of what is important to him. He affirms the religious tradition and practice of the people when he says that he has not come to abolish the old religion, but religion must go much deeper and be an affair of the heart. He asks for forgiveness and reconciliation when he speaks about leaving a gift at the altar. He commends marriage, and faithfulness in marriage, in his views on divorce. He believes in the respect for sexuality that is shown in not using a person. The Sermon on the Mount, of which this is part, is the backdrop to much of Jesus’ life, a sort of vision statement for his mission. Much of this went against the religious practice of his people, which was centred mostly on externals. He saw the ritual and the law of religion as important if they came from the heart. Pope Francis says, “It is not enough to just respect the commandments and do nothing more. Christian life is not just an ethical life: it is an encounter with Jesus Christ” (May 9, 2016).
Fr Jacob Chacko, MCBS
Readings Sixth Sunday in Ordinary Time
First Reading: Sirach 15:15-20 He never commanded anyone to be godless.
Second Reading: 1 Corinthians 2:6-10 God in his wisdom predestined our glory before the ages began.
Gospel: Matthew 5:17-37 Such was said to your ancestors; but I am speaking to you.
Readings for next weekend: Seventh Sunday in Ordinary Time
Leviticus 19:1-2, 17-18 1 Corinthians 3:16-23 Matthew 5:38-48
**IN OUR PARISH THIS COMING WEEK:**
| MON 13th February | 8.00am Mass |
|-------------------|-------------|
| TUE 14th February | Sts Cyril & Methodius; 8.00am Mass with children |
| WED 15th February | 6.30am Mass |
| THU 16th February | 8.45am Play Group |
| FRI 17th February | 9.00am Rosary; 9.30am Mass |
| SAT 18th February | 9.30am Social Justice Group meeting; 5.00pm Reconciliation; 6.00pm Vigil Mass for Sunday, followed by Cuppa & Chat for Fr Jacob’s Farewell |
| SUN 19th February | Seventh Sunday in Ordinary Time; 9.00am Mass—Special Parish & School Mass for Fr Jacob’s Farewell followed by Cuppa & Chat & Sausage Sizzle (Note: No Children’s Liturgy); 6.00pm Mass, followed by Cuppa & Chat for Fr Jacob’s Farewell |
**Farewell to Fr Jacob:** We will be holding a special farewell celebration for Fr Jacob next Sunday 19th February at the 9am Mass, which will be a Family School Mass as well. There will be a Cuppa and Chat after each Mass next weekend, 18th and 19th February. Please bring a plate of food to share! Fr Jacob’s last Masses with us will be the following weekend, 25th & 26th February. If anyone would like to contribute to a gift for Fr Jacob, please place it in a clearly labelled envelope and put it in the collection plate, give it to a Parish Councillor, or drop it off to Julie in the Parish Office during the week.
**Fr Baiyi Gong,** our new parish priest, will be joining us in time for the Tuesday morning 8am Mass with the Children on 28th February.
Fr Baiyi (*pronounced* By-ye) is also known as Fr Peter though he would prefer Fr Baiyi. Fr Baiyi has written an article about his life story to introduce himself to us. Pick up a copy of the leaflet from the back of the church.
**Name Badges:** We will be placing an order for Parish name badges soon. Please write your name on the relevant form near the doors if you would like to order one. Please write clearly and indicate whether you would like a magnetic or pin attachment. Wearing of name badges will help our new parish priest, Fr Baiyi to get to know us. Magnetic badges cost $8, Pin $6. Please pay when ordering if possible.
**Play Group has a new leader and a new day!**
Play Group will now meet on Thursdays during school terms from 8.45am to 9.45am in the Parish Centre. Our new leader is Cassie Catterson who is a part-time Prep teacher.
**Communion Ministers and Readers** please advise any changes or days unavailable during the next roster which covers March, April and May. This roster period includes Easter when a special roster will be drawn up. Complete the white slips and place them in the box.
**Lenten Program:** Lent begins with Ash Wednesday on 1st March this year. Expression of interest slips are on the back table for Lenten Discussion Groups. If you are interested in joining a Discussion Group please write your name on the blue slips and indicate your preference of day or evening and which days you would be available. Please place your completed slips in the red box. Groups will be prepared from this data.
**Ash Wednesday 1st March:** As usual we will have two Masses: one at 6.30am then another at 9.30am which the school children will attend though all are welcome to attend either Mass.
**Stations of the Cross** will be held on the Fridays during Lent at 7.30pm. 2 volunteers needed each week to lead Stations. Please help out Fr Baiyi and volunteer. Sign up sheet available today.
**Volunteers needed for Church Cleaning and washing the Church Linen:** Please contact Julie in the Parish office if you could help out in these roles. Please consider volunteering as these are vital roles. Linen roster calls for the purifiers etc to be washed once per month. Church cleaning once every 8 weeks.
**Planned Giving:** another vital component of our functioning parish is the Planned Giving Program. This is our main source of revenue and we can’t pay the bills without it. Please consider if you can join the Planned Giving Program either through the envelope system or credit card or direct debit. Please see Julie.
**Break Open the Word:** we recommend all Readers buy a copy of Break Open the Word. It provides useful preparation on the Readings of the day. $18.
**Advertising in our Newsletter:** If you would like to advertise your business on the back of our newsletter in 2017 please contact austnews either by phone on 1800 245 077 or email: advertising.austnews.net.au. Applications close 20th February.
**World Day of Prayer** Locally, St Mark’s Anglican Church at Daisy Hill is hosting this annual Ecumenical Service on **Friday 3rd March** at 10am. This year the service has been prepared by the women of the Philippines with the theme “Am I Being Unfair to You?” All are most welcome and encouraged to attend.
Social Justice News: Thank you to everyone who attended our first Social Justice meeting for 2017. We were delighted to welcome a new member. At present we have 14 members, more than we have ever had in the past 20 years. If you would like to join us or have any issues you would like to discuss, please contact us. We had a special guest at our meeting, Lenore from the "Rescue Mission for Children." The Mission is in Chiang Rai, in Thailand. Lenore spoke with great passion about this project. If you are interested to learn more, there are brochures at the back of the church. Erika
CRI in State Schools: Do you have a heart for local missions – reaching those in literally our own backyard?! Why not consider going back to school this year? Christian Religious Instruction teams in your area need volunteers to help continue this amazing ministry. As a CRI teacher or helper, you can read the Bible with students and share the gospel message clearly. All that is needed is 30 minutes a week to teach, some time to prepare, blue card and a letter from your church minister. Teaching curriculum and training are provided. Please contact us if you would like some more information.
Sonia Kenny firstname.lastname@example.org 38413583 or Kerri-Ann Caswell email@example.com 3219 7435
Griffith University Chaplaincy Service:
Starting or continuing at Griffith University? Griffith University’s Chaplaincy Service is there to help you get off to a good start and reach your goals, within the context of your relationship with God. It provides a confidential and safe place for people of any faith to talk about things that trouble you, the meaning of life, questions of faith, and how to live in this world. Go to griffith.edu.au and search ‘chaplaincy’ to find a short video, subscribe to our email news, find out who and where we are, or make an appointment. Catch us at O Week and fill in the quiz to win a café voucher! Phone: 37357113.
AWAKEN: Together in Christ
St Francis’ Hall,
47 Dornoch Tce., West End.
Awaken your spiritual walk and discover how to connect and encounter God in your everyday life - Enliven your faith and allow God to fill you with His presence and purpose.
Special Healing Evening
Tuesday 14th February
All are welcome!
Hosted by St Mary’s and Dutton Park Parishes
Parishioners have been receiving amazing healings in their lives, come along and receive God’s Graces in your life and circumstances this week
PLEASE REMEMBER IN YOUR PRAYERS:
Those who have died: Kathryn Ann Woodhead, Raymond Naughton, Josehito Umali, Bella Aguas
Book of Life & Mass Requests: Alexander Irvine, Bernard Sullivan, Alicia Marco, Robina Collins, Michelle Matthews, Pearl Parker, Reg Rixon, Dean Anderson, Allan Hopkins, Mary Gredden, Edna Mildren, Cyril Lamberth, M Luszczek, Steve Hooiveld, Ken Chapman, Les Dawson, Josephine Cooney, George Hardy, Ricci Stein, Luke Timbs, Mohan Varghese, Tom (Joe) Doyle, Tom Shoring, Phil Ruthven, Vonnie Edwards
Please pray for those who are sick, especially: Ann Mendoza, Peter Holzknecht, Elmore Kotzur, Flex Dance, Michael Ah Koy, Monica Ah Koy, Kevin Kelly, Annette Sparks, Hamish Stuhmeke, Joe Wegener, Roger Byrne, Ingeborg McIntosh, Don Bechly, Brian Button, Ramona Cruz, Bonnie Dematillar, Terri Mills, Johnny Summers
Prayer requests should be recorded on the form on the back table of the church nearest to the lectern. Please ensure you have the permission of the person who is sick to make their name public.
Roster 18th & 19th February 2017
| | Communion | Readers | Altar Servers | Musicians |
|----------------|----------------------------|------------------|---------------------|-----------|
| Sat 6:00pm | Lyndal Bray-Claridge | Peta Grandin | Clive D'Silva | Liz |
| | Shelagh Ballment | Michael Claridge | Angelina D'Silva | |
| | Joyce Lyster | Neil Ballment | | |
| Sun 9:00am | Diane McKeone | Kath Seymour | Jada | Combined group |
| | Bev de Paola | School child | | |
| | Shane Seymour | School child | | |
| Sun 6.00pm | Deacon Mike Jones | Anne Fry | Chevonne Mone | Anne |
| | Kevin Fry | Vicki Routledge | | |
| | | John Blake | | |
Church Cleaning: 5 Mary Andrew, Mark Connolly, Narelle Garland, Shane D'Silva
Children’s Liturgy: none
This week: Linen: Celine Phillips
Counting: Bob’s team
Cuppa: all Masses next weekend: Helpers needed. Please bring a plate of food to share to make it special!
Are you new to our Parish? If you’ve recently joined the Parish, WELCOME!! Please make yourself known to us. We’d like to formally greet you, so please complete a ‘New Parishioners Card’ at the back of the Church and place it on the collection plate or hand it to Fr Jacob or to one of the Parish Councillors.
Parish Ministry Group Co-ordinators
In our parish we have many ways you can be involved. Please contact the Parish Office if you would like to speak to one of our Co-ordinators whose number is not listed.
Baptism Preparation: Julie Hondroudakis
Bereavement Group: Contact the Parish Office
Care & Concern: Transport: 0412267337
Co-ordinator Lynn White 0403 051103
Children’s Liturgy: Jodie Baigent
Family Groups: Alan & Kristy Scanlon
Facebook: St Peters Rochedale Family Groups
Memorial Wall & Gardens: Les O’Gorman 3341 3862
Mothers’ Prayers: Trish Bakker
Music Liturgy Group: Liz Fort
Parish Pastoral Council:
Chairperson: Malcolm Carroll
Play Group: Kristy Scanlon
RE Teachers in State Schools: Rita Bishop
Rosary Group: Fridays 9:00am
(1st Fridays 8.30am)
Social Justice Group: Erika Meerwald
Facebook: St Peter’s Rochedale Social Justice Group
St Vincent de Paul Rochedale Conference:
John Blake
Our Parish Safeguarding Children & Vulnerable Adults Representative is Malcolm Carroll. If you have any questions or concerns contact Malcolm on 0438 949 099 or firstname.lastname@example.org
These advertisers support us, please support them.
Gateway Dental Health
Shop 3, 66A Slobodian Ave Eight Mile Plains
Taking appointments now! call 3493 0028
Rochedale Family Practice
Your local AGPAL accredited family doctor
Bulk billing all eligible patients
Your health is our priority
Call us on 3341 2022 for an appointment or visit www.rochedalefp.com.au to book online at your convenience
Tailored storage solutions,
so you only pay for what you need.
National Storage Springwood
3421 Pacific Highway
07 3808 7558
nationalstorage.com.au
Adrian Schrinner
Deputy Mayor
Councillor for Chandler Ward
Shop 8, 14 Millennium Blvd, Carindale Q 4152
Tel: 3407 1400 • Fax: 3407 1891
Email: email@example.com
www.adrianschrinner.com
www.facebook.com/councillor.schrinner
Authorised by: Adrian Schrinner, 8/14 Millennium Blvd, Carindale Q 4152
K.M.Smith Funeral Directors
We think of everything
Prearrangement and prepayment options available. Ask us to send you a free Life Book on 3800 7800
3 Helen St, Hillcrest Qld
www.kmsmithfunerals.com.au
All your marketing needs, in one place.
Tel: 3841 2842 Open 6 days
firstname.lastname@example.org
22 Ferguson Street, Underwood
brisbaneeurospecialists
All makes and models specialising in the euro market, BMW and Mini specialists.
Safety certificates on site and at location. Log book servicing, Air-con, Diagnosis, Mechanical & Electrical. Servicing from $99. Everything you need under one roof!
Take your business to the next level...
Enquire about advertising on this parish’s newsletter.
1800 245 077
advertising.austnews.net.au
first national Rochedale
Buying or Selling?
Call Irene Wilson
0418 430 225
Take Advantage of the $500 Referral Offer!
(T&Cs apply)
Your Trusted Local Agent email: email@example.com
A Source of Extreme Ultraviolet Light Based on High Harmonic Generation in Noble Gases
Master thesis
ADVISER: assoc. prof. dr. Irena Drevenšek Olenik
COADVISER: prof. dr. Thomas Udem
Ljubljana, 2016
Peter Šušnjar
Izvor ekstremne UV svetlobe na osnovi generacije višjih harmoničnih frekvenc v žlahtnih plinih
Magistrsko delo
MENTORICA: izred. prof. dr. Irena Drevenšek Olenik
SOMENTOR: prof. dr. Thomas Udem
Ljubljana, 2016
Statement of authorship and consent to electronic publication
I declare:
— that I wrote the master thesis entitled *A Source of Extreme Ultraviolet Light Based on High Harmonic Generation in Noble Gases* independently under the supervision of assoc. prof. dr. Irena Drevenšek Olenik and prof. dr. Thomas Udem,
— that the electronic version of the thesis is identical to the printed version, and
— that I grant the Faculty of Mathematics and Physics of the University of Ljubljana permission to publish the electronic version of my work on the website of the Repository of the University of Ljubljana.
Ljubljana, 22 August 2016
Signature:
Acknowledgements
When I think about the time I spent in Bavaria while working on my master thesis project at MPQ, I can hardly believe how large an impact this year had on me. Very pleased with the final outcome - determined to continue my career as an experimental physicist, $N$-times more self-confident (where $N$ is a large number) and prepared for adult life’s struggles - I am aware that it was not the wind, the water and the sun that shaped me so, but the people I was lucky to walk alongside. Here, I would like to express my infinite gratitude to:
∞ Thomas, who, despite my almost complete lack of experience with lasers, gave me the opportunity to join his group and allowed me to play with these incredible “toys”.
∞ dr. Irena Drevenšek Olenik, who without hesitation accepted to be my supervisor at University of Ljubljana and carefully corrected this thesis.
∞ Andreas, who accepted the challenge to supervise me: from the first weeks, when you slowly introduced me to the field of ultrashort light pulses and HHG through our after-lunch conversations, to the final steps, when you (during your vacation) corrected this thesis.
∞ Arthur, with whom I had a great time sharing the office. None of my excuses for why I could not cycle to work convinced you, and you were right: There is no such thing as bad weather, only bad clothes. Our debates on politics were a calming shelter from blinding laser light, and I have learned a lesson from your realistic views, free of any ideologies.
∞ Akira, who always took time to find a solution for problems I had in the lab and thought me all sorts of tricks.
∞ Josue, whose lunch box was inspiration for my dinner and Hossein who made me aware of the option to speed up YouTube videos.
∞ Charlie, Wolfgang and Helmut, who can built a space ship for you, if you like. You did it so many times for me.
∞ Monika and Malte, our Patentante and Patenonkel in Freising. “For I was hungry and you gave Me something to eat, I was thirsty and you gave Me something to drink, I was a stranger and you took Me in, I was naked and you clothed Me, . . .” (Matthew 25:35-36). In your Lederhose, though slightly too large, I could live like a Bavarian for a while.
∞ Family, for all kinds of support. A special thanks goes to my father, who did not forget to remind me every single time that the thesis should be finished. Finally I can say: Voilà!
∞ Urša, who went to buy banana chips, missed the bus and took the one that I was on. Since then, we have ridden many buses together. The ride through Bavaria was much more fun with you by my side.
Abstract
The study described in my master thesis had two major purposes: (1) to build and characterize an EUV light source based on high-harmonic generation and then use it for (2) testing EUV beam delivery and focusing. The obtained results should contribute to the design of a planned high-precision spectroscopy experiment on He$^+$ at the Max Planck Institute of Quantum Optics in Garching, where the experiments presented in this work were conducted. To drive high-harmonic generation, energetic ultrashort light pulses from a CPA laser system were focused with a lens into a target filled with a noble gas. In a multi-parameter study, the effects of focusing geometry, target design, gas type and gas pressure on the harmonic yield were investigated to determine the optimal generation conditions. Furthermore, the generated EUV light was spectrally characterized and the absolute power of a single harmonic was extracted from a measurement with a calibrated detector. Finally, focusing of the harmonic beam with an off-axis parabolic mirror was tested and analyzed with an interferometrically controlled knife-edge scan apparatus. The results of the multi-parameter study agree with the trends reported in the scientific literature. However, the unexpectedly low power of the generated EUV light implies that further improvement of the experimental setup, especially of the gas target and the laser system, is required. Despite that, the EUV light source built here can be used for further studies of the focusing of the harmonic beam with suitable EUV optics.
**Keywords:** Nonlinear optics, ultrashort light pulses, high harmonic generation, EUV light source
**PACS:** 42.65.Ky, 42.79.Nv, 32.80.Rm
## Contents
1 Introduction
2 Background
2.1 Ultrashort light pulses
2.1.1 General description
2.1.2 Light pulse propagation - dispersion
2.1.3 Laser beam propagation
2.1.4 Nonlinear effects
2.2 High harmonic generation
2.2.1 Single-atom response
2.2.2 Macroscopic response and propagation
2.2.3 General characteristics of high harmonic radiation
2.3 Optics in EUV spectral range
2.3.1 Optical constants
2.3.2 Thin metal filters
2.3.3 EUV mirrors
3 Generation, characterization and manipulation of ultrashort light pulses
3.1 Generation of high-energy ultrashort light pulses with chirped-pulse amplification system
3.2 Characterization and simulation of ultrashort light pulses propagation
3.2.1 Beam characteristics
3.3 Focusing of the IR beam into gas target
3.3.1 Potential effects of the lens on the light pulse
3.3.2 Self-focusing
4 Characterization and optimization of HHG
4.1 Experimental setup
4.1.1 Gas target
4.1.2 EUV monochromator
4.1.3 Detection
4.2 Spectrum
4.3 Optimization of HHG
4.3.1 Gas target
4.3.2 Gas target position
4.3.3 Apertured beam
4.3.4 Intensity dependence
4.3.5 Gas pressure in the target
5 Absolute power measurement
5.1 Experimental setup
5.1.1 Calibration of thin aluminum filters
5.2 Experimental results
5.2.1 Detected EUV power
5.2.2 Estimation of generated power and conversion efficiency
6 Towards the focusing of high harmonics
6.1 Experimental setup and methods
6.1.1 Off-axis parabolic mirror and its alignment
6.1.2 Knife-edge scan
6.1.3 High harmonics related specifics in the setup
6.2 Simulations
6.3 Experimental results
6.3.1 IR beam
6.3.2 High harmonics
7 Conclusions and discussion
8 Outlook
Bibliography
Extended summary in Slovene
8.1 Introduction
8.2 Main results
8.2.1 Laser system
8.2.2 Optimization of the nonlinear high harmonic generation process
8.2.3 Determination of the absolute EUV power
8.2.4 Focusing of the high harmonic beam
Chapter 1
Introduction
Generation of light at new frequencies via nonlinear optical conversion in crystals is a well-known process in nonlinear optics. The doubling of a laser frequency from red into the ultraviolet was achieved for the first time by Franken et al. [1], only one year after the invention of the laser by T. H. Maiman. The mechanism behind it is the not purely sinusoidal motion of electrons in the crystal lattice in response to the strong light field. As a result of this motion, light at the fundamental frequency is re-radiated together with its harmonics.
At extreme light intensities of $10^{13} - 10^{15}$ W/cm$^2$ this concept reaches a new level. The electric field of such intense light has an amplitude comparable to the Coulomb field inside an atom. The binding potential is therefore significantly modified, and an electron can escape from its bound state. Once set free, it is accelerated by the light field and, after the field reverses its sign, recollides with the parent atom. At the recollision, energetic photons can be emitted at multiples (tens and hundreds) of the driving laser frequency, reaching the extreme ultraviolet (EUV) and soft X-ray spectral regions. This process is called high(-order) harmonic generation (HHG).
In general, HHG is realized by focusing ultrashort visible or infrared (IR) light pulses of ps or fs duration into a nonlinear medium, usually a noble gas. High harmonic (HH) radiation builds up in the nonlinear medium and is emitted collinearly with the fundamental beam. It contains several (odd) harmonic orders of the fundamental laser frequency, spanning from the visible deep into the soft X-ray region. Moreover, HH radiation shows unprecedented characteristics of high spatial coherence and ultrashort pulse duration. However, a low conversion efficiency on the order of $10^{-5} - 10^{-7}$ limits the available EUV power.
Despite that, several applications, e.g. diffractive imaging [2], could benefit from HHG as a table-top alternative to costly and hardly accessible EUV and soft X-ray light sources such as synchrotrons and free-electron lasers. HHG is also the key process behind the generation of attosecond EUV light pulses, the shortest bursts of radiation ever observed. The controlled generation and measurement of single attosecond pulses set a new milestone in ultrafast physics, with possibilities for tracking the atomic-scale motion of electrons [3]. Furthermore, HHG is being employed for the coherent upconversion of existing visible and IR frequency combs into the EUV. The remarkable accuracy of EUV frequency combs would enable precision spectroscopy of the simplest atoms and ions and therefore allow tests of the fundamental theory of bound-state quantum electrodynamics [4].
One such experiment - precision spectroscopy of the $1s - 2s$ two-photon-induced transition of singly ionized helium [5] - is being planned in the Laser Spectroscopy Division of the Max Planck Institute of Quantum Optics. To drive the transition, 60.8 nm light from an EUV frequency comb will be used. Due to the very limited power available from the EUV frequency comb, the light has to be tightly focused onto trapped He$^+$ ions. It is expected that $10-100\ \mu$W focused down to $< 10\ \mu$m would be sufficient for spectroscopy [5].
This thesis reports on our work to reach these numbers using an auxiliary EUV laser source. Its goal was to address some of the technical challenges that we expect to be present in the final experiment, as well as to identify possible new ones. The results are presented in the following way: in chapter 2, the background of the phenomena encountered in the experiments is explained, and the notation and definitions used within this thesis are given. It starts with the properties of ultrashort light pulses and their evolution in time and space, as the driving source for HHG. Next, the mechanism behind HHG on the single-atom level is explained using the semiclassical three-step model. The phase-matching condition for efficient build-up is given, and those characteristics of HH radiation that were analyzed in the experiments are presented. The EUV spectral range brought up several technical challenges, which are discussed in the last part of the chapter. The introductory “theoretical” chapter is followed by four chapters, each presenting a single experimental step on our way toward an “as powerful and as tightly focused as possible” EUV laser source. Our EUV laser source was based on the frequency conversion of intense ultrashort light pulses from a chirped pulse amplification (CPA) laser system via HHG. Therefore, it could mimic the beam characteristics of an EUV frequency comb in the best possible way. The laser part of our experiment is described in chapter 3, together with the temporal, spectral and spatial characterization of the driving IR laser pulses, including an analysis of the effects related to focusing into the gas target used for HHG. In chapter 4, the HHG process is investigated for different experimental conditions. Spectral measurements together with a systematic optimization of the 13$^{th}$ harmonic power around 61 nm are presented, which served to identify possible underlying processes limiting efficient HHG.
In chapter 5, the absolute power of our optimized EUV laser source is estimated from the measurement with the calibrated detector and from theoretical models of the transmission of the elements in the beam path. Our attempt to focus the HH beam is described in chapter 6 - it includes a test of the experimental apparatus for focusing, an analysis of micrometer-sized focal spots with the IR beam, and the first measurement of the EUV focus. Finally, all results are summarized in chapter 7, and possible improvements of the setup as well as suggestions for future measurements are given in chapter 8.
Chapter 2
Background
Understanding the properties of ultrashort light pulses and their interaction with matter is essential for the design of the experiments performed within this work as well as for the interpretation of their results. The nomenclature used within this thesis is defined in this chapter, and the relevant phenomena associated with the propagation of high-energy ultrashort light pulses are described. Linear, nonlinear (perturbative) and highly nonlinear interactions (namely, high harmonic generation) of such pulses with matter are explained. Special emphasis is given to HHG as the source of EUV light in our experiments. Finally, specific experimental challenges in the EUV spectral range, as well as some solutions, are discussed.
2.1 Ultrashort light pulses
2.1.1 General description
Ultrashort light pulses are electromagnetic wave packets propagating with the speed of light. To describe their temporal characteristics alone, their representation as a space- and time-dependent vector field $\mathbf{E}(\mathbf{r}, t)$ can be simplified to a time-dependent scalar quantity $E(t)$. Equivalently, light pulses can be represented by their Fourier transforms in the frequency domain.
Imagine a detector of the electric field placed at a fixed position and hit by a light pulse. The signal it detects can be written in mathematical form as $E(t) = A(t) \cos(\Phi(t))$ - a harmonic wave described by its phase $\Phi(t)$ multiplied by the amplitude or envelope of the pulse $A(t)$. As is quite often the case in physics, it is much more convenient to write the electric field in complex form
$$E(t) = A(t)e^{i\Phi(t)}, \tag{2.1}$$
while being aware that at the end only the real part of it represents a real physical quantity. In a similar way, we can write complex spectrum of ultrashort light pulse as
$$E(\omega) = S(\omega)e^{i\Psi(\omega)}, \tag{2.2}$$
with $S(\omega)$ being the spectral amplitude and $\Psi(\omega)$ the spectral phase. $E(t)$ and $E(\omega)$ are
related to each other via Fourier transformation
\begin{align}
E(t) &= \frac{1}{2\pi} \int_{-\infty}^{\infty} E(\omega) e^{i\omega t} d\omega, \\
E(\omega) &= \int_{-\infty}^{\infty} E(t) e^{-i\omega t} dt.
\end{align}
Usually, the basic information about the pulse, namely the pulse duration and the pulse bandwidth, is defined via the FWHM (full width at half maximum) values of the temporal intensity $I(t)$
$$I(t) = \frac{1}{2} \epsilon_0 c n |E(t)|^2$$
and spectral intensity $I(\omega)$
$$I(\omega) = \frac{1}{4\pi} \epsilon_0 c n |E(\omega)|^2.$$
Here $c$, $n$, and $\epsilon_0$ are the vacuum speed of light, the refractive index of the material in which the pulse propagates, and the permittivity of free space, respectively. FWHM values for the pulse duration and bandwidth can be extracted from measurements of the pulse intensity autocorrelation and of its power spectral density measured with a spectrometer, under some assumptions about the pulse shape. However, with novel measurement techniques (e.g. FROG, SPIDER) it has become possible to retrieve complete information about both the envelope and the phase of the pulse [6].
From our description so far, it is not obvious whether the temporal and spectral phases have any impact on the temporal and spectral envelopes of the pulse. A detailed evaluation of the Fourier transformation shows that they indeed have a significant effect, and we will see some mechanisms of phase modulation in the next section. So let us now discuss more carefully the physical meaning of the phase functions $\Phi(t)$ and $\Psi(\omega)$. The temporal phase $\Phi(t)$ can be expanded around $t = 0$
$$\Phi(t) = \Phi_0 + \omega_0 t + \Phi_a(t)$$
where $\Phi_0$ is the carrier-envelope phase, which determines the offset between the pulse envelope and the carrier oscillation, $\omega_0$ is the carrier frequency and $\Phi_a(t)$ is an additional time-dependent phase term of higher order. When $\Phi_a(t)$ is non-zero, an instantaneous frequency $\omega_{\text{inst}}(t)$ can be defined as
$$\omega_{\text{inst}}(t) = \frac{d\Phi}{dt} = \omega_0 + \frac{d\Phi_a}{dt}.$$
The variation of the instantaneous frequency with time is called chirp; it is positive when $\frac{d\omega_{\text{inst}}(t)}{dt} > 0$ (up-chirped pulse) and negative when $\frac{d\omega_{\text{inst}}(t)}{dt} < 0$ (down-chirped pulse). The effect of chirp on the pulse shape is illustrated in figure 2.1.
Similarly, the spectral phase contains information about the frequency-dependent temporal delay
$$T_g(\omega) = \frac{d\Psi(\omega)}{d\omega},$$
which gives the relative offset of each spectral component as the pulse evolves in time.
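The relation (2.7) between phase and instantaneous frequency can be illustrated numerically. In the sketch below all numbers (an 800 nm carrier and the linear chirp coefficient `a`) are assumed example values, not parameters from the experiments:

```python
import numpy as np

# Assumed example values: 800 nm carrier, hypothetical linear chirp rate.
omega0 = 2 * np.pi * 3e8 / 800e-9   # carrier angular frequency omega_0 (rad/s)
a = 1e28                            # assumed linear chirp coefficient (rad/s^2)

t = np.linspace(-50e-15, 50e-15, 1001)   # time axis (s)
phi = omega0 * t + a * t**2              # temporal phase Phi(t) with Phi_a(t) = a t^2

# Instantaneous frequency, eq. (2.7): omega_inst = dPhi/dt = omega0 + 2 a t
omega_inst = np.gradient(phi, t)

# For an up-chirped pulse (a > 0) omega_inst rises linearly with time
assert omega_inst[-1] > omega_inst[0]
```

For the quadratic phase term chosen here, the numerical derivative reproduces the analytic result $\omega_{\text{inst}}(t) = \omega_0 + 2at$.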
Figure 2.1: Effect of chirp on the pulse shape: a), b), and c) show the time-dependent electric field for a chirp-free, an up-chirped, and a down-chirped Gaussian pulse of the same duration. The dashed line is the envelope $A(t)$ and the colored solid lines are the product of the envelope with the oscillatory part, whose time-dependent argument is the phase $\Phi(t)$. The carrier-envelope phase $\Phi_0$ is chosen to be 0, so that envelope and carrier are in phase at time $t = 0$. d) Given the same bandwidth, the chirped pulse (blue) is longer than the chirp-free (red), or so-called bandwidth-limited, pulse, i.e. the shortest possible pulse at a given bandwidth. e) The instantaneous frequency defined in (2.7) increases linearly for the up-chirped pulse, decreases for the down-chirped pulse and is constant for the bandwidth-limited pulse. f) The two curves represent the spectra of all the pulses shown in figures a)-d): the blue one for the bandwidth-limited and the chirped (consequently longer) pulse in figures a) and d), and the green one for the chirped pulses in figures b) and c). To achieve the same pulse duration as a bandwidth-limited pulse, chirped pulses must have a larger bandwidth.
Figure 2.2: Normalized spectral (left) and temporal (right) intensities for a Gaussian (blue) and a sech$^2$ (red) pulse: both pulses have the same bandwidth, but due to the different pulse shape and consequently different time-bandwidth product, the Gaussian pulse is slightly longer.
Although light pulses in real experiments can have more complex shapes, they are often approximated by analytical functions. Two common pulse shapes - the Gaussian and the sech$^2$ pulse - which were also assumed for the pulses used in our experiments, are given by
\begin{align}
E_{\text{Gauss}}(t) &= E_0 e^{-(\frac{t}{\tau})^2} e^{i \omega_0 t}, \\
E_{\text{sech}}(t) &= E_0 \text{sech}\left(\frac{t}{\tau}\right) e^{i \omega_0 t}
\end{align}
and spectra are simply their Fourier transforms
\begin{align}
E_{\text{Gauss}}(\omega) &= E_0 \sqrt{\pi}\,\tau\, e^{-\left(\frac{\omega - \omega_0}{2/\tau}\right)^2}, \\
E_{\text{sech}}(\omega) &= E_0 \pi \tau \text{sech}\left(\frac{\pi (\omega - \omega_0)}{2/\tau}\right).
\end{align}
In figure 2.2 the spectral intensity and the pulse intensity for both pulse shapes are shown.
For simple pulse shapes, the FWHM values of the temporal and spectral intensity are the standard measures of the pulse duration $t_{\text{FWHM}}$ and the bandwidth $\Delta \nu = \frac{\Delta \omega}{2\pi}$. Expressions for the pulse duration and bandwidth can be calculated from (2.9) and (2.10) and are given in table 2.1 for both pulse shapes. The two are connected through the so-called time-bandwidth product, which takes a characteristic minimum value for each pulse shape, attained only by bandwidth-limited pulses. The time-bandwidth product is therefore a good indicator of whether the pulse is chirped and whether it could be compressed further.
### 2.1.2 Light pulse propagation - dispersion
One unavoidable mechanism of light-pulse modification during propagation through matter is dispersion. The frequency-dependent refractive index $n(\omega)$ gives rise to different travel times for the spectral components of the pulse, resulting in a change of
Table 2.1: FWHM equations for pulse duration and bandwidth for two pulse shapes used within this thesis, and related time-bandwidth product.
| Pulse shape | $t_{\text{FWHM}}$ | $\Delta \omega$ | $t_{\text{FWHM}} \Delta \nu$ |
|-------------|------------------|-----------------|-------------------------------|
| Gaussian (2.9a),(2.10a) | $\tau \sqrt{2 \log 2}$ | $\frac{\sqrt{8 \log 2}}{\tau}$ | $\frac{2 \log 2}{\pi} \approx 0.441$ |
| sech$^2$ (2.9b),(2.10b) | $2 \tau \text{arccosh}(\sqrt{2})$ | $\frac{4 \text{arccosh}(\sqrt{2})}{\pi \tau}$ | $\frac{4 (\text{arccosh}(\sqrt{2}))^2}{\pi^2} \approx 0.315$ |
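The Gaussian entries of table 2.1 can be verified numerically. A minimal sketch (with an assumed $\tau$ of 20 fs) that computes the FWHM duration and bandwidth of a bandwidth-limited Gaussian pulse via an FFT:

```python
import numpy as np

def fwhm(x, y):
    """FWHM of a sampled peak y(x), estimated from threshold crossings."""
    above = np.where(y >= y.max() / 2)[0]
    return x[above[-1]] - x[above[0]]

tau = 20e-15                            # assumed Gaussian parameter tau (s)
t = np.linspace(-2e-12, 2e-12, 2**18)   # time axis (s)
E = np.exp(-(t / tau)**2)               # envelope of eq. (2.9a), carrier removed
I_t = np.abs(E)**2                      # temporal intensity (arb. units)

# Spectral intensity via FFT, the discrete analog of eq. (2.4)
E_w = np.fft.fftshift(np.fft.fft(E))
w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(len(t), t[1] - t[0]))
I_w = np.abs(E_w)**2

t_fwhm = fwhm(t, I_t)             # expect tau * sqrt(2 log 2)
dnu = fwhm(w, I_w) / (2 * np.pi)  # bandwidth Delta nu in Hz

print(t_fwhm * dnu)               # ~0.441, the Gaussian time-bandwidth product
```

The product deviates from 0.441 only by the finite frequency-grid resolution of the FFT.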
the pulse’s temporal envelope. To describe this problem mathematically, it is most convenient to treat it in the frequency domain.
Let a light pulse with spectrum $E(\omega)$ travel through a non-absorbing linear medium with refractive index $n(\omega)$. After a distance $L$ its spectrum is modified into
$$E(\omega, L) = E(\omega, 0) e^{-i \Psi_{disp}(L)}. \quad (2.11)$$
The only change is an additional dispersive term $\Psi_{disp}$ in the spectral phase, given by
$$\Psi_{disp} = \beta(\omega)L = n(\omega) \frac{\omega}{c} L \quad (2.12)$$
where $\beta(\omega)$ is the propagation constant of the medium. To understand the effect of material dispersion on the pulse shape in a qualitative way, it is customary to expand $\beta(\omega)$ in a Taylor series about the carrier frequency
$$\beta(\omega) = \sum_{n=0}^{\infty} \left. \frac{\partial^n \beta(\omega)}{\partial \omega^n} \right|_{\omega=\omega_0} \frac{(\omega - \omega_0)^n}{n!} = \beta_0 + \beta_1 (\omega - \omega_0) + \frac{\beta_2}{2} (\omega - \omega_0)^2 + \cdots. \quad (2.13)$$
Now one can Fourier transform the spectrum at the output of the medium, $E(\omega, L)$, to obtain the pulse shape in the time domain. This has been done elsewhere [7], so I will just briefly summarize the results. The first two terms in equation (2.13) do not affect the pulse envelope: $\beta_0$ is inversely proportional to the phase velocity $v_p = \frac{\omega_0}{\beta_0}$ with which the carrier wave propagates, and $\beta_1$ is the reciprocal of the group velocity
$$v_g = \beta_1^{-1} = \left( \left. \frac{\partial \beta(\omega)}{\partial \omega} \right|_{\omega=\omega_0} \right)^{-1} \quad (2.14)$$
at which the envelope moves through the medium. The group velocity usually differs from the phase velocity, resulting in an evolution of the carrier-envelope phase (the offset between the peak of the pulse envelope and the carrier wave). Stabilization of the latter is necessary for certain applications, e.g. frequency combs [8] and single attosecond pulse generation [9], but is not important for the experiments in this work.
Directly related to the group velocity is the group delay time $\tau_g(\omega) = L/v_g$. Its frequency dependence is called group delay dispersion (GDD) and is the reason for modifications of the pulse envelope. The propagation time of a spectral component at $\omega$ relative to the one at the carrier frequency $\omega_0$,
$$\tau_g(\omega) - \tau_g(\omega_0) = \left. \frac{\partial \tau_g}{\partial \omega} \right|_{\omega=\omega_0} (\omega - \omega_0) = \frac{\partial v_g^{-1}}{\partial \omega} (\omega - \omega_0)L = \beta_2 (\omega - \omega_0)L \quad (2.15)$$
is proportional to the coefficient $\beta_2$, termed the group velocity dispersion (GVD). A non-zero $\beta_2$ leads to a linear variation of the delay time with frequency, so the pulse acquires a linear chirp and stretches. For near-IR light, the lower-frequency spectral components of the pulse (the red part) travel faster in common optical glasses (fused silica, BK7), resulting in an up-chirped pulse as in figure 2.1. The shorter the pulse, the broader its spectrum, and the more higher-order terms in (2.13) need to be taken into account to describe the material dispersion accurately. But as our pulses will be “long” enough, we will stop here.
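For a Gaussian pulse, the stretching caused by a given GDD can be estimated from the textbook relation $t_{\text{out}} = t_{\text{in}}\sqrt{1 + \left(4\ln 2\,\beta_2 L / t_{\text{in}}^2\right)^2}$. A short sketch with assumed example numbers (a 30 fs pulse and the literature GVD of fused silica near 800 nm, about 36 fs$^2$/mm):

```python
import numpy as np

def stretched_duration(t_in, gdd):
    """FWHM duration of an initially bandwidth-limited Gaussian pulse after
    acquiring a group delay dispersion gdd = beta2 * L (textbook relation)."""
    return t_in * np.sqrt(1 + (4 * np.log(2) * gdd / t_in**2)**2)

# Assumed example: a 30 fs pulse through 10 mm of fused silica.
# beta2 ~ 36 fs^2/mm near 800 nm is a literature value, not measured here.
gdd = 36.0 * 10.0 * 1e-30   # GDD in s^2 (36 fs^2/mm times 10 mm)
t_in = 30e-15               # input FWHM duration (s)

t_out = stretched_duration(t_in, gdd)
print(t_out * 1e15)         # ~45 fs: noticeable stretching already in thin glass
```

The same relation shows why CPA stretchers, with GDDs many orders of magnitude larger, can stretch fs pulses to hundreds of ps.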
Until now only material dispersion has been discussed, but dispersion can also be introduced into an optical system with special arrangements of diffractive optical elements (combinations of prisms [10], gratings and mirrors [11]) or with specially designed multi-layer chirped mirrors [12]. One application in which an extensive amount of dispersion is used to stretch pulses before amplification is chirped pulse amplification (CPA). It is the standard method for the generation of high-energy ultrashort pulses and was used in the experiments in this work. The working principles of the CPA laser system will be presented in chapter 3.1.
### 2.1.3 Laser beam propagation
So far, light pulses were treated as temporally modulated propagating plane waves. Such a picture is a mathematical idealization and differs significantly from the light beam that comes out of a laser, i.e. a source of ultrashort light pulses. To describe the spatial properties of a laser beam, we stay in our simplified view and describe the electric field $\mathbf{E}(\mathbf{r}, t)$, this time as a scalar, space-dependent quantity $E(\mathbf{r})$.
One of the main properties of a laser beam is that it is collimated - the light is confined around the propagation axis $z$. As long as the transverse beam dimensions remain sufficiently small compared to the propagation distance, the paraxial approximation of the wave equation is valid. An important solution of the latter is the Gaussian beam
$$E(\rho, z) = E_0 \frac{w_0}{w(z)} e^{-\frac{\rho^2}{w^2(z)}} e^{-i \frac{k \rho^2}{2R(z)}} e^{-i(kz + \eta(z))}, \quad (2.16)$$
where
$$w(z) = w_0 \sqrt{1 + z^2/z_R^2}, \quad (2.17)$$
$$R(z) = z(1 + z_R^2/z^2), \quad (2.18)$$
$$\eta(z) = \arctan(z/z_R), \quad (2.19)$$
$$z_R = \frac{\pi w_0^2}{\lambda} \quad (2.20)$$
are the beam radius (the distance from the beam center at which the electric field falls to $1/e$ of its maximum value), the radius of curvature of the wavefronts, the Gouy phase and the Rayleigh range, respectively (see figure 2.3). Here the Gaussian beam is written in its cylindrically symmetric form, also called the fundamental or TEM$_{00}$ mode, with $\rho = \sqrt{x^2 + y^2}$ the distance from the $z$ axis and $k$ the wave number. To get the complete form of the electric field of an ultrashort light pulse, one now has to multiply the spatial part (2.16) by the temporal one (2.1).
The Gaussian beam takes its name from its centrosymmetric Gaussian intensity profile in the transverse plane, with the peak value on the beam axis
\[
I(\rho, z) = \frac{2P}{\pi w^2(z)} e^{-\frac{2\rho^2}{w^2(z)}},
\]
where \( P \) is the average power of the laser. The beam radius \( w(z) \) is smallest at the beam waist, which we chose to be at the origin \( z = 0 \). It increases symmetrically on both sides of the waist and reaches \( \sqrt{2}w_0 \) at the Rayleigh range \( z_R \). At this point the beam area is twice that at the waist and the on-axis peak intensity has dropped to half. When focusing a Gaussian beam, another parameter is often used, namely the confocal parameter
\[
b = \frac{2\pi w_0^2}{\lambda} = 2z_R,
\]
which denotes the depth of focus. Within the confocal parameter the on-axis intensity is at least half of the peak intensity in the focus. For nonlinear processes that require high intensity, it is a good measure of the volume in which those processes can efficiently take place. From (2.22) one can see that the tighter we focus (the smaller \( w_0 \)), the shorter the confocal parameter gets.
Although the Gaussian beam is in general the most directional form of propagating light\(^1\), it is still divergent. The divergence angle
\[
\Theta = \frac{w(z)}{z} \approx \frac{w_0}{z_R} = \frac{\lambda}{\pi w_0} \text{ for } z \gg z_R
\]
is proportional to the wavelength \( \lambda \) and inversely proportional to the beam waist radius. To obtain a well-collimated beam one should therefore use shorter wavelengths and a larger beam waist.
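Equations (2.17)-(2.23) are straightforward to evaluate; a short sketch with assumed example numbers (an 800 nm beam focused to a 30 µm waist radius, not the parameters of our setup):

```python
import numpy as np

# Assumed example numbers: an 800 nm beam with a 30 um waist radius.
lam = 800e-9    # wavelength (m)
w0 = 30e-6      # waist radius (m)

z_R = np.pi * w0**2 / lam      # Rayleigh range, eq. (2.20)
b = 2 * z_R                    # confocal parameter, eq. (2.22)
theta = lam / (np.pi * w0)     # far-field divergence angle, eq. (2.23)

def w(z):
    """Beam radius at a distance z from the waist, eq. (2.17)."""
    return w0 * np.sqrt(1 + (z / z_R)**2)

print(z_R * 1e3, b * 1e3, theta * 1e3)   # z_R ~ 3.5 mm, b ~ 7.1 mm, theta ~ 8.5 mrad
```

As expected from (2.17), the beam radius at $z = z_R$ evaluates to $\sqrt{2}\,w_0$.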
Wavefronts of a Gaussian beam, shown as dotted curves in figure 2.3, can be well approximated by a rotational paraboloid [13] with a radius of curvature \( R(z) \) given
\(^1\)The Gaussian beam is the diffraction-limited solution of the paraxial wave equation. An analog in quantum mechanics is the Gaussian wave packet, the solution of the Schrödinger equation that spreads the slowest.
by (2.18). It is infinite at the beam waist, which means that the wavefronts are planar there. As the distance from the beam waist increases, the wavefronts curve and become almost spherical far away from the waist.
Equation (2.16) can be rewritten into more compact form
\[ E(\rho, z) = E_0 \frac{w_0}{w(z)} e^{-i\frac{k \rho^2}{2q(z)}} e^{-i(kz + \eta(z))} \]
(2.24)
introducing a complex beam parameter \( q(z) \)
\[ \frac{1}{q(z)} = \frac{1}{R(z)} - \frac{i\lambda}{\pi w^2(z)} = \frac{1}{q(0) + z}. \]
(2.25)
By comparing (2.16) with (2.24) one can see that a Gaussian beam is completely defined by its wavelength \( \lambda \), beam radius \( w(z) \) and radius of curvature \( R(z) \), or, equivalently, by \( \lambda \) and \( q(z) \) alone. The complex beam parameter is a convenient quantity for simulating the propagation of a Gaussian beam through an optical element that can be described by the ABCD matrix formalism known from geometrical optics. Instead of searching for an integral solution of the paraxial wave equation for given boundary conditions, it was shown [14], [15] that one can use the ABCD matrix of a given optical system to calculate the complex beam parameter \( q_2 \) at its exit
\[ q_2 = \frac{Aq_1 + B}{Cq_1 + D}, \]
(2.26)
where \( q_1 \) is a complex beam parameter at the input.
Two simple examples, which were used for designing the beam path in the experiments within this work, are the ABCD matrices for free space and for a thin lens,
\[ M_{\text{free space}} = \begin{bmatrix} 1 & z \\ 0 & 1 \end{bmatrix}, \quad M_{\text{thin lens}} = \begin{bmatrix} 1 & 0 \\ -1/f & 1 \end{bmatrix}, \]
(2.27)
where \( f \) is the focal length of the lens. Note that the ABCD matrix for a curved mirror with a radius of curvature \( R \) is the same as that of a lens, only with \( f = R/2 \). When several optical elements are used in a row, the ABCD matrix of the whole set is obtained by multiplying the matrices of the individual elements, with the matrix of the first element the beam encounters placed rightmost in the product.
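The q-parameter propagation of (2.25)-(2.27) can be sketched in a few lines. The numbers below (an 800 nm beam of 2 mm radius focused by an f = 500 mm lens) are assumed for illustration only:

```python
import numpy as np

def q_from(w, R, lam):
    """Complex beam parameter from radius w and wavefront curvature R, eq. (2.25)."""
    return 1 / (1 / R - 1j * lam / (np.pi * w**2))

def propagate(q, M):
    """Transform q with an ABCD matrix M, eq. (2.26)."""
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

def free_space(z):
    return np.array([[1.0, z], [0.0, 1.0]])        # eq. (2.27), free space

def thin_lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]]) # eq. (2.27), thin lens

# Assumed example: collimated 800 nm beam, w = 2 mm, planar wavefronts (R -> inf),
# focused by an f = 500 mm thin lens.
lam = 800e-9
q = q_from(2e-3, np.inf, lam)

for M in (thin_lens(0.5), free_space(0.5)):   # lens, then 500 mm to the focal plane
    q = propagate(q, M)

# Beam radius recovered from Im(1/q) = -lam / (pi w^2)
w_focus = np.sqrt(-lam / (np.pi * (1 / q).imag))
print(w_focus * 1e6)   # ~64 um, close to the estimate lam * f / (pi * w_in)
```

Since the input Rayleigh range here far exceeds the focal length, the result agrees well with the simple focal-spot estimate $\lambda f / (\pi w_{\text{in}})$.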
Very often a real laser beam differs from the ideal TEM\(_{00}\) mode. For example, the beam can be asymmetric, i.e. its profile is not round but elliptic. In that case, the propagation equations (2.17)-(2.20) can be written separately for the two orthogonal (major and minor) axes of the ellipse, with the corresponding waist radii \( w_{0,x} \) and \( w_{0,y} \). Another case is an astigmatic beam, where the position of the beam waist is not the same for both axes. Astigmatism is an undesirable effect of focusing caused by aberrations of the focusing element, but it can usually be minimized by proper alignment or precompensated with another element.
The discrepancy from an ideal Gaussian beam is often characterized by the beam quality factor \( M^2 \). It is defined as the ratio of the width-divergence product (\( w_0 \times \Theta \)) of a real beam to that of an ideal (diffraction-limited) Gaussian beam and is consequently always \( \geq 1 \). Even if the beam profile resembles a perfect Gaussian, the beam can actually be a superposition of higher, more divergent modes [16]. We can determine \( M^2 \) experimentally by measuring the waist radius and the divergence in the far field, or by measuring the beam radius at several locations along the beam axis and then fitting the spot-size variation of a real beam [16]
\[ w^2(z) = w_0^2 + M^4 \left( \frac{\lambda}{\pi w_0} \right)^2 z^2 \]
(2.28)
to the data.
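Since (2.28) is linear in $z^2$, $w_0$ and $M^2$ can be extracted with an ordinary linear fit of $w^2$ against $z^2$. A sketch on synthetic (noiseless) caustic data with assumed parameters:

```python
import numpy as np

lam = 800e-9

# Synthetic, noiseless caustic generated from eq. (2.28) with assumed parameters.
w0_true, M2_true = 100e-6, 1.3
z = np.linspace(0.0, 0.5, 20)     # measurement positions along the axis (m)
w_meas = np.sqrt(w0_true**2 + M2_true**2 * (lam / (np.pi * w0_true))**2 * z**2)

# Eq. (2.28) is linear in z^2: w^2 = w0^2 + M^4 (lam / (pi w0))^2 z^2,
# so a linear fit of w^2 versus z^2 yields both parameters at once.
slope, intercept = np.polyfit(z**2, w_meas**2, 1)

w0_fit = np.sqrt(intercept)                     # waist radius from the intercept
M2_fit = np.sqrt(slope) * np.pi * w0_fit / lam  # beam quality factor from the slope

print(w0_fit * 1e6, M2_fit)   # recovers the assumed 100 um and M^2 = 1.3
```

With real, noisy data the same fit applies; only the uncertainty of the extracted parameters grows.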
### 2.1.4 Nonlinear effects
When an ultrashort light pulse propagates through matter, the response of the medium is often nonlinear, as the pulse energy is concentrated within such a short time that the electric field becomes very strong. Different nonlinear processes can take place [17]. When utilized in a controlled manner, they can be exploited for several applications, e.g. the generation of light at new frequencies. However, besides high harmonic generation, the processes that we encountered in the experiments - *self-focusing* and *self-phase modulation* (SPM) - are in general undesirable, as they distort the initial pulse. The mechanism behind both of them is the Kerr effect, i.e. a third-order nonlinear process in which the refractive index becomes intensity-dependent
\[ n = n_0 + n_2 I(\mathbf{r}, t). \]
(2.29)
where \( n_0 \) is the ordinary and \( n_2 \) the nonlinear refractive index.
The spatial variation of intensity in the transverse plane translates into a spatially dependent index of refraction. Therefore, a different phase is acquired at different points of the beam and the wavefronts bend. Depending on the sign of \( n_2 \) and the transverse beam profile, the beam will usually either focus or defocus. This nonlinear process got its name, self-focusing, from this self-induced focusing. For a Gaussian beam given by (2.16), the phase term acquired after propagation through a thin piece of medium of thickness \( \Delta z \) is
\[ e^{-ik_z \Delta z} = e^{-i \frac{2\pi \Delta z}{\lambda} \left(n_0 + n_2 I_0 e^{-2\rho^2/w^2(z)}\right)} \approx e^{-i \frac{2\pi \Delta z}{\lambda} \left(n_0 + n_2 I_0 (1-2\rho^2/w^2(z))\right)}. \]
(2.30)
In the last term we can recognize a phase function of a lens with a focal length
\[ f_{NL} = \frac{w^2(z)}{4n_2 I_0 \Delta z}. \]
(2.31)
However, for a more accurate description of beam propagation, diffraction should also be taken into account, as well as the continuous change of intensity. An accurate analysis of propagation over a longer distance is only possible via numerical simulation. Nevertheless, an estimate of the critical power (so-called self-trapping power)
\[ P_{\text{crit}} = \frac{\lambda^2}{8\pi n_0 n_2}, \]
(2.32)
can be made, at which diffraction and self-focusing compensate each other and the beam gets “trapped”, i.e. doesn’t diverge at all [6]. For higher powers and a longer medium, catastrophic self-focusing can occur, where the beam focuses so tightly that it damages the medium.
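As a quick numerical illustration of (2.31) and (2.32), the following sketch evaluates both for fused silica at 800 nm; the values of \( n_0 \), \( n_2 \), the beam radius, intensity and slab thickness are typical literature/assumed numbers, not parameters of this work:

```python
import numpy as np

# Illustrative values for fused silica at 800 nm (typical literature numbers,
# not measured in this work):
lam = 800e-9        # wavelength [m]
n0  = 1.45          # linear refractive index
n2  = 2.7e-20       # nonlinear index [m^2/W] (~2.7e-16 cm^2/W)

# Critical (self-trapping) power, eq. (2.32)
P_crit = lam**2 / (8 * np.pi * n0 * n2)

# Kerr-lens focal length, eq. (2.31), for an assumed beam and thin slab
w    = 1e-3         # beam radius at the slab [m]
I0   = 1e15         # peak on-axis intensity [W/m^2] (= 1e11 W/cm^2)
dz   = 1e-3         # slab thickness [m]
f_NL = w**2 / (4 * n2 * I0 * dz)

print(f"P_crit = {P_crit*1e-6:.2f} MW, f_NL = {f_NL:.1f} m")
```

The sub-MW critical power shows why even moderately energetic femtosecond pulses easily self-focus in glass.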
Just as the spatial variation, the temporal variation of intensity also leads to a time-dependent phase. Therefore, the pulse acquires a chirp and its spectrum broadens, whereas its envelope doesn’t change. When SPM is very strong, the spectrum can broaden to such an extent that white light (a continuum) is generated. While SPM is the main mechanism behind this, other nonlinear effects like self-focusing contribute as well, making it a very complicated problem to treat theoretically. On the other hand, it is quite easy to observe in the lab, confirming that nonlinear processes are transforming the original pulse.
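A toy numerical sketch of pure SPM (arbitrary units, with an assumed peak nonlinear phase of 3 rad) shows the RMS spectral broadening while the temporal envelope stays untouched:

```python
import numpy as np

# Toy demonstration of SPM for a chirp-free Gaussian pulse. All numbers are
# arbitrary units; the peak nonlinear phase phi_max = 3 rad is an assumed value.
tau, phi_max = 1.0, 3.0
n = 2**14
t = np.linspace(-10 * tau, 10 * tau, n, endpoint=False)
dt = t[1] - t[0]

A_in  = np.exp(-t**2 / tau**2)                          # field envelope
A_out = A_in * np.exp(1j * phi_max * np.abs(A_in)**2)   # pure phase modulation

def rms_bandwidth(field):
    """RMS width of the power spectrum of a (complex) envelope."""
    spec = np.abs(np.fft.fft(field))**2
    f = np.fft.fftfreq(len(field), dt)
    mean = np.sum(f * spec) / np.sum(spec)
    return np.sqrt(np.sum((f - mean)**2 * spec) / np.sum(spec))

bw_in, bw_out = rms_bandwidth(A_in), rms_bandwidth(A_out)
print(f"bandwidth ratio: {bw_out / bw_in:.2f}")
```

Note that `A_out` has exactly the same intensity envelope as `A_in`; only its phase, and hence its spectrum, has changed.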
### 2.2 High harmonic generation
High harmonic generation is a highly nonlinear process that takes place when a short, linearly polarized, intense ($I > 10^{13} \text{ W/cm}^2$) laser pulse interacts with matter, usually a noble gas, resulting in the emission of light consisting of integer (odd) multiples of the fundamental laser frequency. High harmonic (HH) radiation has a characteristic spectrum with a quick fall-off of the low-order harmonics, an extended plateau of harmonics with approximately constant intensity and a sharp cut-off. In gases, it was first observed in the late 1980s by McPherson et al. [18] and Ferray et al. [19]. The existence of the plateau, where the harmonic intensity doesn’t follow the usual intensity scaling law $I_q \propto I^q$, led them to the conclusion that high harmonic generation (HHG) is another type of nonlinearity, which cannot be described by perturbative theory.
A very intuitive physical insight into HHG was given by the semi-classical “three-step” or “simple man’s” model introduced by Kulander et al. [20] and Corkum [21]. According to this model, the single-atom response in the HHG process consists of three steps: ionization, propagation of the free electron, and recombination with the parent ion, when the excess energy is released by emission of an HH photon. Although quite simple, the model turned out to be a real breakthrough in the understanding of HHG: on one hand it explained several properties of HH radiation, and on the other it served (and still serves) experimentalists as a very intuitive picture when tweaking laser parameters in order to control and optimize the process.
Since the discovery of HHG, a lot of work has been done to generate higher harmonic orders, increase the conversion efficiency and explore potential applications of HHG (see [22] for a review). Moreover, HHG opened the way to a new chapter in ultrafast physics, i.e. attosecond physics, via the generation of sub-fs bursts of light in a controlled manner.
However, only the basic theory required for understanding the experiments in this work will be explained within this section. First, the single-atom response and its relation to some macroscopic observables are derived. Next, phase matching as a necessary condition for efficient HHG and other propagation effects are discussed. Finally, we take a look at some general characteristics of HH radiation, which were the main subject of study in this work.
### 2.2.1 Single-atom response
To understand the mechanism behind frequency conversion in the HHG process, we should first take a look at how a single atom responds to very intense light. While other, more rigorous theories exist, we will stick here to the semi-classical three-step model of HHG, which explains most of the characteristics of HH radiation relevant for this work. It is based on the single-active-electron (SAE) approximation, which assumes that only one electron of the atom interacts with the laser field, whereas the others, together with the core, form an effective static potential. A schematic illustration of the three-step model is shown in figure 2.4.
When an atom is exposed to light with intensity $I \approx 10^{14} \text{ W/cm}^2$, whose electric field is on the order of the atomic electric field, the potential that the electron feels changes significantly. The modified potential has a finite barrier through which the electron can leave the atom with a certain probability via tunneling. Once freed, it is accelerated by the light field and moves away from the ion. However, when the driving field changes its sign, the free electron is first decelerated and then accelerated back towards its parent ion. While several scenarios are possible, the one relevant for this work is HHG. In that case, the electron recombines with its parent ion and the excess energy is released via emission of a single photon.
**Ionization**
To bridge the ionization potential $I_p$ that binds an electron to the nucleus, several low-energy photons have to be absorbed. This nonlinear process is called multiphoton ionization (MPI) and can be described perturbatively. The probability of ionizing an atom with $n$ photons is proportional to the $n$-th power of the light intensity and decreases with the number of photons required. Therefore, it’s more probable to ionize an atom with shorter-wavelength light. If the intensity is increased further, one enters the non-perturbative above-threshold ionization (ATI) regime, where even more photons than required for ionization are absorbed. The excess energy is transformed into kinetic energy of the released photoelectrons, which leads to a characteristic photoelectron spectrum with peaks separated by the driving photon energy.
Another interpretation of the ionization mechanism for a sufficiently strong low-frequency field is possible through a quasi-static model. Here the bound electron experiences an effective potential $V_{\text{eff}} = V_C + eE \cdot r$, which is a sum of the Coulomb potential of the core $V_C$ and a part due to the instantaneous laser field $E$. The effective potential has a barrier (see figure 2.4) through which the electron can tunnel. The instantaneous ionization rate is then given by the tunneling rate, which can be calculated for given laser parameters and atomic species using one of the existing models (e.g. ADK [23], PPT [24]).
To distinguish between multiphoton and tunnel ionization, the Keldysh parameter was introduced [25]
\[
\gamma_K = \sqrt{\frac{I_p}{2U_p}},
\]
(2.33)
comparing the frequency of the driving fundamental optical field with the tunneling frequency. Here \( U_p \) is the ponderomotive energy, i.e. the average quiver energy of a free electron in the oscillating electric field \( E(t) = E_0 \cos(\omega_0 t) \), given by
\[
U_p = \frac{e^2 E_0^2}{4m_e \omega_0^2} \propto I \lambda^2
\]
(2.34)
where \( m_e \) is the electron mass, and \( I \) and \( \lambda \) are the intensity and wavelength of the driving laser field, respectively. For \( \gamma_K \ll 1 \) tunneling dynamics prevails, while for \( \gamma_K \gg 1 \) MPI is the dominant ionization mechanism. By increasing the laser intensity further, the potential barrier becomes narrower until it vanishes and the electron can freely escape. This regime is known as over-barrier ionization. The critical intensity to ionize a xenon atom this way is \( I_c = 8.7 \times 10^{13} \text{ W/cm}^2 \).
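The following sketch evaluates (2.33) and (2.34) in SI units for assumed but typical parameters (800 nm, \(10^{14}\) W/cm², xenon), illustrating that such experiments sit right in the transition region \( \gamma_K \approx 1 \):

```python
import numpy as np

# Physical constants (SI)
e, m_e  = 1.602e-19, 9.109e-31
eps0, c = 8.854e-12, 2.998e8

# Assumed driving-laser parameters: 800 nm, 1e14 W/cm^2, xenon (Ip = 12.13 eV)
lam, I = 800e-9, 1e14 * 1e4            # wavelength [m], intensity [W/m^2]
Ip = 12.13                             # ionization potential of Xe [eV]

omega0 = 2 * np.pi * c / lam           # carrier angular frequency
E0 = np.sqrt(2 * I / (eps0 * c))       # peak electric field
Up = e**2 * E0**2 / (4 * m_e * omega0**2)   # ponderomotive energy, eq. (2.34)
Up_eV = Up / e

gamma_K = np.sqrt(Ip / (2 * Up_eV))    # Keldysh parameter, eq. (2.33)
print(f"Up = {Up_eV:.2f} eV, gamma_K = {gamma_K:.2f}")
```

With \( U_p \approx 6 \) eV and \( \gamma_K \approx 1 \), neither the pure tunneling nor the pure multiphoton picture strictly applies at these parameters.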
**Propagation**
Once the electron is set free, its evolution within the framework of the semi-classical three-step model follows the laws of classical mechanics. Neglecting the effect of the core potential and assuming that the light is linearly polarized in the \( x \) direction, the equation of motion is simply
\[
m_e \ddot{x} = -eE_0 \cos(\omega_0 t)
\]
(2.35)
As long as we don’t use few-cycle pulses, \( E_0 \) can be considered constant within one optical cycle. Next, we assume the initial position and velocity of the free electron at the time of ionization \( t_i \) to both be zero. Integration of (2.35) gives the solution for the velocity
\[
\dot{x} = -\frac{eE_0}{m_e \omega_0} [\sin(\omega_0 t) - \sin(\omega_0 t_i)]
\]
(2.36)
and further on for the trajectory
\[
x = \frac{eE_0}{m_e \omega_0^2} [\cos(\omega_0 t) - \cos(\omega_0 t_i) + \sin(\omega_0 t_i)(\omega_0 t - \omega_0 t_i)]
\]
(2.37)
The constant term \( \sin(\omega_0 t_i) \) in (2.36) can be recognized as a drift velocity that leads the electron away from the core. In figure 2.5, different possible trajectories are shown. Only electrons that come to the vicinity of their parent ion at least once more before they are pulled away by the laser field can contribute to HHG. Therefore, the relevant trajectories are the ones which cross the zero point \( x = 0 \) at a later time \( t_r > t_i \), when recombination can take place. The recombination time \( t_r \) doesn’t have an analytical expression, but its value can be found numerically by setting \( x = 0 \) in (2.37). The kinetic energy of the electron at the recollision \( W_{\text{kin}}(t_i) \) for a given \( t_i \) can then be extracted. Kinetic energies at the recollision for different ionization times are shown in figure 2.5.
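This numerical procedure can be sketched compactly: scanning the ionization phase over the first quarter cycle, locating the first zero crossing of (2.37) and evaluating the return energy from the velocity (2.36) reproduces the well-known maximum of \( \approx 3.17\,U_p \) at \( t_i \approx 0.05\,T_0 \). The sketch below works in dimensionless units:

```python
import numpy as np

def return_energy(phi_i, n=20000):
    """Kinetic energy at first return (in units of Up) for ionization phase phi_i.

    Evaluates the trajectory (2.37) in units of x0 = e*E0/(m_e*w0^2) as a
    function of phase phi = w0*t; the first zero crossing after phi_i is the
    recombination phase, and (2.36) gives W_kin = 2*Up*(sin(phi_r)-sin(phi_i))^2.
    """
    phi = np.linspace(phi_i + 1e-6, phi_i + 2 * np.pi, n)
    x = np.cos(phi) - np.cos(phi_i) + np.sin(phi_i) * (phi - phi_i)
    cross = np.where(np.diff(np.sign(x)) != 0)[0]
    if len(cross) == 0:
        return None                        # electron never returns
    i = cross[0]                           # refine by linear interpolation
    phi_r = phi[i] - x[i] * (phi[i + 1] - phi[i]) / (x[i + 1] - x[i])
    return 2.0 * (np.sin(phi_r) - np.sin(phi_i))**2

# Scan ionization phases over the first quarter cycle (all of these return)
phis = np.linspace(0.01, np.pi / 2 - 0.01, 500)
energies = np.array([return_energy(p) for p in phis])
k = np.argmax(energies)
print(f"max W_kin = {energies[k]:.2f} Up at t_i = {phis[k]/(2*np.pi):.3f} T0")
```

The computed maximum return energy directly gives the cut-off law \( \hbar\omega_{\text{cut-off}} = I_p + 3.17\,U_p \) discussed below.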
Figure 2.5: Electron trajectories and corresponding kinetic energies at the recollision with the parent ion for different ionization times: in the upper part, three different electron trajectories are shown together with the driving laser field (black). The normalization unit for the electron position $x$ is $x_0 = \frac{eE_0}{m_e\omega_0^2}$. The ‘long’ (red) and ‘short’ (blue) trajectories and the trajectory of an electron that doesn’t recollide (green) start at the respective ionization times and cross the zero axis at the recombination time. In the lower part, the kinetic energy in units of the ponderomotive energy is plotted versus ionization (full curve) and recombination (dashed curve) time, where red and blue again mark the ‘long’ and ‘short’ trajectories. Note that the energies for the second half of the optical cycle would look the same, only shifted to later times, but were omitted here for the sake of clarity.
**Recombination**
Whether an electron can potentially recombine with its parent ion, and how much energy the emitted photon will carry away, depends on the ionization time $t_i$. As can be seen in figure 2.5, only electrons freed in the first quarter of the optical cycle of duration $T_0$ will recollide with the parent ion, while the ones released in the second quarter are immediately pulled away and are lost for HHG. On the other hand, certain electron trajectories even pass the core several times. However, due to the spreading of the electron wave packet, recombination is most probable at the first return.
When a recombination takes place, a photon is emitted with an energy
$$\hbar \omega_{\text{HH}} = W_{\text{kin}}(t_i) + I_p.$$
(2.38)
It is the sum of the kinetic energy of the electron at the recollision $W_{\text{kin}}(t_i)$ and the ionization energy. Here $\hbar$ is the reduced Planck constant and $\omega_{\text{HH}}$ is the angular frequency of the emitted photon. The highest photon energy corresponds to the ionization time $t_i = 0.05 T_0$ and a kinetic energy $W_{\text{kin}} \approx 3.2 U_p$ at the recollision. It also determines the cut-off frequency in the HH spectrum. While the ionization rate is highest at the field maximum at $t_i = 0$, those electrons return with zero velocity. However, the laser field is still sufficiently strong around $t_i = 0.05 T_0$, when the electrons which eventually contribute to the highest-energy photons are set free. Therefore, the conversion efficiency remains high for high-order harmonics, resulting in the plateau in the HH spectrum.
Except for the maximal (cut-off) energy, there are two trajectories with different ionization and recombination times for each $W_{\text{kin}}$, as shown in figure 2.5. They are named ‘short’ and ‘long’, because the electrons remain free for different amounts of time. Those following the ‘long’ trajectory are released earlier and recombine later, and vice versa. While both contribute to the same harmonic order, the phase of the emitted radiation differs for each trajectory, which gives rise to some important macroscopic effects, as we will see later.
The same process of ionization and recombination with photoemission repeats every half-cycle. Radiation bursts occurring twice per laser period manifest in the frequency domain as peaks separated by twice the carrier frequency of the driving light pulse, which can indeed be seen in the HH spectrum.
Besides HHG, other processes are possible at the recollision: elastic scattering of the electron off the atom, which contributes to above-threshold ionization (ATI) with its characteristic photoelectron spectra [26], and inelastic scattering or non-sequential double ionization (NSDI), in which the electron collides with the parent ion and kicks out another electron [27].
To conclude, the simple man’s model is semi-classical in the sense that it explains ionization through the quantum tunneling effect, while treating the subsequent motion of the electron classically. In the end, recombination is again explained using quantum physics arguments. Despite its strong approximations, the simple man’s model gives physical insight into HHG and successfully predicts the position of the cut-off frequency in the HH spectrum. It also incorporates the contribution of different trajectories to the HHG signal, but fails to accurately predict their phases.
On the other hand, it has some severe limitations: it is restricted to harmonic orders with energies larger than the ionization potential, as photons with $\hbar \omega_{\text{HH}} < I_p$ would require a negative kinetic energy at recombination according to (2.38). Besides that, it doesn’t explain a very important feature of HH radiation, namely its coherence. Many quantum effects, such as quantum diffusion of wave packets and quantum interference, are also not included [28].
A fully quantum and exact theory of HHG can be formulated in terms of the solution of the time-dependent Schrödinger equation (TDSE). However, solving the TDSE is far too time-consuming for optimizing experimental parameters. An intermediate approach is the Lewenstein model [28], a fully quantum mechanical analog of the simple man’s model. It consists of the same three steps, but in addition explains further features of HH radiation, e.g. coherence and the intensity-dependent phase. Thanks to its partially analytic results, it allows rapid calculation of the single-atom response [29]. The emission from single dipoles can then be added coherently in order to obtain the experimentally observable macroscopic response.
### 2.2.2 Macroscopic response and propagation
The understanding of the single-atom response to an intense light pulse is necessary to explain the origins of HHG. However, several macroscopic effects within the nonlinear medium determine the efficiency of HHG and consequently the total harmonic yield. As a nonlinear medium we will consider here a gas target in the form of a pressurized gas cell or of a gas jet emitted from a pressurized nozzle, and a free-focusing geometry\footnote{Another common geometry consists of a hollow-core fiber \cite{30} or a capillary tube filled with gas.}. In that case, the most important and most limiting processes are \cite{31}:
- Reabsorption of the HH radiation within the gas target and along the beam path,
- Defocusing of the fundamental beam due to the self-generated plasma,
- Phase mismatch between harmonic and driving field.
In the following, we will take a detailed look at the most notable among them, i.e. phase matching, and only briefly comment on the other two.
**Phase matching**
The macroscopic response of the gas target to an intense laser field is a coherent sum of contributions from single emitting dipoles (atoms) together with propagation effects, e.g. absorption and diffraction. Constructive interference of the single emitters is required for the overall frequency conversion to be efficient. In other words, while the harmonic field is building up within the gas target, the part generated at the front must be in phase with the part generated at the back of the gas target, otherwise destructive interference occurs. This can be achieved by phase-matching the polarization wave induced by the driving (fundamental) field and the building-up harmonic wave \cite{32}, so that the phase mismatch
$$\Delta \Phi(r, z, t) = q \Phi_1(r, z, t) - \Phi_q(r, z, t)$$ \hspace{1cm} (2.39)
is kept constant. Here $q$ is the harmonic order, $\Phi_1$ is the phase of the fundamental field and $\Phi_q$ is the phase of the $q$-th harmonic. For now, we will neglect the time and radial dependence of the phase terms and consider phase matching on the beam axis.
Assuming that the driving laser beam is a Gaussian beam defined by (2.16), its on-axis phase can be written as
$$\Phi_1 = [1 + \delta n_p(\omega_1) + \delta n_g(\omega_1)] \frac{\omega_1}{c} z - \tan^{-1}\left(\frac{z}{z_{R_1}}\right).$$ \hspace{1cm} (2.40)
Here the effective refractive index of the target consists of the contributions of vacuum (1), the laser-generated plasma ($\delta n_p$) and the neutral gas ($\delta n_g$). The second term is the Gouy phase, which arises from focusing. A phase shift of $\pi/4$ is acquired from the focus to the Rayleigh range $z_{R_1}$. The phase of the harmonic field has a similar form
$$\Phi_q = [1 + \delta n_p(\omega_q) + \delta n_g(\omega_q)] \frac{\omega_q}{c} z - \tan^{-1}\left(\frac{z}{z_{R_q}}\right) + \frac{\alpha_{q_{l,s}} I_0}{1 + (z/z_{R_1})^2}$$ \hspace{1cm} (2.41)
with an additional intensity-dependent term originating from the single harmonic dipole \cite{33}. It is related to the time that the free electron spends in the laser field (2\textsuperscript{nd} step of the three-step model). Therefore, there are two different values of the proportionality constant $\alpha_{q_{l,s}}$, one for the long and one for the short trajectory. They differ by roughly an order of magnitude ($\alpha_{q_l} \approx 10^{-13} \text{cm}^2/\text{W}$ and $\alpha_{q_s} \approx 10^{-14} \text{cm}^2/\text{W}$), so that phase matching in general cannot be fulfilled for both trajectories simultaneously. The harmonic beam is assumed to be of Gaussian shape too. The Rayleigh range of the harmonic beam $z_{R_q}$ is usually smaller than, but comparable to, that of the fundamental. The phase mismatch can now finally be evaluated:
$$\Delta \Phi = [\Delta n_p(\omega_1, \omega_q) + \Delta n_g(\omega_1, \omega_q)] \frac{q\omega_1}{c} z - (q-1) \tan^{-1} \left( \frac{z}{z_{R_1}} \right) - \frac{\alpha_{q_{l,s}} I_0}{1 + (z/z_{R_1})^2}. \quad (2.42)$$
The first term is a dispersive term and contributes to the phase mismatch due to the different refractive index at the fundamental and harmonic frequencies, $\Delta n_{p,g}(\omega_1, \omega_q) = \delta n_{p,g}(\omega_1) - \delta n_{p,g}(\omega_q)$. While for low-order harmonic generation in crystals this term is tuned to zero by exploiting the birefringent properties of the crystal, in HHG it can help compensate the other two terms, i.e. the Gouy and dipole phases.
From (2.42) it is clear that the phase mismatch $\Delta \Phi$ varies significantly through the focus. A common way to quantify this change is by calculating the coherence length
$$L_{coh} = \frac{\pi}{\left| \frac{\partial \Delta \Phi(r=0,z)}{\partial z} \right|}. \quad (2.43)$$
It defines the length scale over which the phase mismatch is small enough for the harmonic emission from different positions to add up constructively.
In figure 2.6, both the phase mismatch $\Delta \Phi$ and the coherence length are shown for a realistic case with parameters as in our experiment. The dispersive term was not included, however, since the plasma and neutral-atom densities were not known. The Gouy phase monotonically increases along the axis, while the intensity-dependent dipole phase is symmetric about the focus and starts decreasing after it. Around $z \approx 6$ mm the changes of the two contributions cancel each other out, such that the phase mismatch becomes constant. However, phase matching is only fulfilled at this point and drastically worsens on both sides of this optimal position. A general rule of thumb is therefore that the length of the target should be kept roughly one order of magnitude shorter than $z_{R_1}$. The dipole phase belonging to the ‘long’ trajectory with the larger $\alpha_q$ was chosen for the results shown in figure 2.6.
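This on-axis balance can be sketched in a few lines. The parameters below are loosely modeled on figure 2.6 ($z_{R_1} = 22$ mm, $q = 13$, the ‘long’-trajectory $\alpha_q$); the peak intensity $I_0$ is an assumed value, chosen only so that the Gouy and dipole terms balance a few mm after the focus:

```python
import numpy as np

# Parameters loosely modeled on figure 2.6 (I0 is an assumed value):
q     = 13
z_R   = 22e-3    # Rayleigh range of the fundamental [m]
alpha = 1e-13    # dipole-phase coefficient, 'long' trajectory [cm^2/W]
I0    = 2.4e14   # assumed peak intensity [W/cm^2]

z = np.linspace(-2 * z_R, 2 * z_R, 4001)

# Eq. (2.42) without the dispersive term (Gouy + dipole phase only):
dPhi = -(q - 1) * np.arctan(z / z_R) - alpha * I0 / (1 + (z / z_R)**2)

# Coherence length, eq. (2.43), from the numerical derivative of dPhi
grad  = np.gradient(dPhi, z)
L_coh = np.pi / np.abs(grad)

# The two contributions balance where |d(dPhi)/dz| is minimal, i.e. L_coh peaks
z_opt = z[np.argmax(L_coh)]
print(f"optimal target position: z = {z_opt*1e3:.1f} mm")
```

With these assumed numbers the stationary point of $\Delta\Phi$ indeed lands a few millimeters after the focus, as in figure 2.6.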
**Other macroscopic effects**
Two other effects, namely absorption and ionization, also play a significant role in HHG. Almost all materials absorb radiation in the EUV spectral range, to which high-order harmonics usually belong. That’s why HHG has to be performed in vacuum. However, we need a nonlinear material for frequency conversion, which in our case is a noble gas, and it absorbs EUV light as well. Even for perfect phase matching, the harmonic yield would saturate because of the absorption in the gas target. For a moderate phase mismatch and loose focusing conditions, Constant et al. [34] derived ‘rules of thumb’ for the optimal medium length, $L_{med} > 3L_{abs}$ and $L_{coh} > 5L_{abs}$, which serve as a good starting point when optimizing the HHG process. Here, $L_{abs}$ is the absorption length, defined as the distance over which the transmitted harmonic power falls to $1/e$ of its initial value.
Ionization of the gas target alters the HHG conditions in several ways. The degree of ionization depends on the laser intensity and pulse duration. The on-axis plasma density is thus higher than in the off-axis region. As $\delta n_p < 0$, the plasma forms a diverging lens for the laser beam. This so-called plasma defocusing counteracts the focusing of the driving beam. Therefore, lower intensities are reached in the gas target compared to focusing in vacuum. Inhomogeneities in the gas target also translate into an inhomogeneous plasma distribution, which induces distortions in the laser as well as in the HH beam. Another important consequence of ionization is the depletion of the atomic population and thus a reduced number of potential emitters of HH radiation. On the other hand, by tuning the plasma contribution to the refractive index $\delta n_p$, phase matching along the beam axis can be achieved. As the negative effects of ionization prevail, one nevertheless tries in general to minimize the presence of plasma.

Figure 2.6: Phase mismatch (black, full) and corresponding coherence length (red) for experimental conditions as in our setup with an $f = 750$ mm lens used for focusing: only the Gouy (dot-dashed) and intensity-dependent dipole phase (dashed) are considered to contribute, while the dispersive terms of the plasma and neutral gas were omitted due to the unknown gas density in the target. The horizontal axis spans two Rayleigh ranges $z_{R1} \approx 22$ mm on both sides of the focus. Around $z \approx 6$ mm the phase mismatch has a local extremum and flattens out. The best position for the gas target with respect to phase matching is thus the interval from 0 to 10 mm, where the phase mismatch changes by less than $\pi/2$ in total.
### 2.2.3 General characteristics of high harmonic radiation
In the following section we briefly review some characteristics of HH radiation which have been studied in this work.
**Harmonic spectrum**
A schematic representation of an HH spectrum is shown in figure 2.7. Its specific shape caught scientists’ attention right at the discovery of HHG and “guided” them in the formulation of theoretical models of the new nonlinear process. The first important feature is that only odd harmonics of the fundamental frequency are present. This can be explained through the three-step model in the following way: every half-cycle, electrons tunnel out and recollide with the atom. At the recollision, a sub-femtosecond burst of EUV photons is emitted. Except for few-cycle pulses, the driving field consists of several cycles, leading to repetitive bursts, also called a train of attosecond pulses [35]. Assuming that the nonlinear medium is symmetric, it will interact with the first and the second half of each cycle in the same way. However, there will be a phase shift of $\pi$ between two consecutive bursts due to the change of direction of the electric field. In the frequency domain, a train of pulses with repetition time $T_0/2$ is represented as a comb with a separation $\frac{2\pi}{T_0/2} = 2\omega_1$. The phase shift leads to destructive interference of the even harmonics, so that only odd harmonics remain in the spectrum. Nevertheless, by breaking the symmetry, either by using asymmetric molecules as a gas target [36] or by adding the second harmonic of the driving laser pulse [37], even harmonics can be generated as well.
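This interference argument is easy to verify numerically: a train of Gaussian bursts spaced by \( T_0/2 \) with alternating sign has a spectrum containing only the odd harmonics of the fundamental. The burst width below is an arbitrary assumed value:

```python
import numpy as np

# Model of the attosecond pulse train: one Gaussian burst every half-cycle
# T0/2, with a pi phase shift (sign flip) between consecutive bursts.
# Units are arbitrary; the burst width tau is an assumed value.
T0, tau, ncyc = 1.0, 0.05, 64
t = np.linspace(0, ncyc * T0, ncyc * 256, endpoint=False)

s = np.zeros_like(t)
for k in range(2 * ncyc):
    s += (-1)**k * np.exp(-((t - (k + 0.25) * T0 / 2) / tau)**2)

# Power spectrum: harmonic q of the fundamental sits at FFT bin q*ncyc
P = np.abs(np.fft.rfft(s))**2
h = {q: P[q * ncyc] for q in range(1, 7)}
print({q: f"{p:.2e}" for q, p in h.items()})
```

Removing the `(-1)**k` sign flip (i.e. breaking the half-cycle antisymmetry) restores the even harmonics, in line with the symmetry-breaking schemes of [36], [37].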

Another characteristic is the existence of three different regions within the harmonic spectrum: the low-order harmonics, which can still be described within lowest-order perturbation theory, belong to the first part. Their intensity $I_q$ follows the usual power law $I_q \propto I_1^q$ and decreases quickly with increasing harmonic order $q$. Here $I_1$ is the intensity of the driving laser. Next comes the plateau region, with several harmonic orders of comparable intensity. Their intensity follows a different scaling law
$$I_q \propto I_1^p \quad \text{with} \quad p < q,$$
(2.44)
as they already belong to the non-perturbative regime. The parameter $p$ is roughly the same for all harmonics within the plateau, but can vary for different experimental conditions. In the last, cut-off region, around the cut-off frequency, the intensity of the harmonics drops off suddenly. Extending this cut-off to higher orders was one of the first major goals [29], and by now harmonic orders > 5000 have been reached [38].
**Beam quality**
The HH beam is emitted collinearly with the fundamental beam, but has a smaller diameter and divergence, because HHG takes place only within the region of sufficient driving-beam intensity. While the divergence was found to be constant for harmonics in the plateau region, it decreases with increasing order for the cut-off harmonics [39]. A quantitative estimate for the divergence of the $q$-th harmonic order, $\theta_q \approx \sqrt{p}\,\theta_1/q$, can be made. It is based on the assumption that the harmonic beam is simply the driving beam rescaled according to the effective power law (2.44) and the harmonic frequency (wavelength) [40]. While measured divergences agree reasonably with this approximation, it has been shown that aberrations of the fundamental beam are not simply imprinted onto the harmonic beam [41].
Systematic investigations of the dependence of the spatial properties on the generation conditions found that the beam shape varies significantly with the position of the gas target relative to the focus of the driving beam [39], [42]. The HH beam was round and almost Gaussian when the gas target was placed after the focus, but became annular and distorted when moved into the focus. This effect was attributed to the combination of ionization-induced distortions and the intensity-dependent dipole phase. Due to the higher plasma density on the beam axis, phase matching can under certain conditions be achieved only off-axis, on a ring around the beam axis. This results in a donut-shaped far-field beam pattern. When phase matching was simultaneously (or at least partially) fulfilled for the ‘short’ and ‘long’ trajectory contributions, a distorted beam profile was observed, with a wide pedestal belonging to the more divergent ‘long’ trajectory and a narrow central peak [43]. However, more recent experiments have shown that at generation conditions optimal in terms of conversion efficiency, the most regular beam shapes are simultaneously obtained [41].
**Optimization of conversion efficiency**
The conversion efficiency for harmonic order $q$ is defined as the ratio between the energy of the $q$-th harmonic $E_q$ and the energy of the driving pulse $E_{\text{pulse}}$:
$$\eta_q = \frac{E_q}{E_{\text{pulse}}}.$$
(2.45)
Increasing the very low conversion efficiency has been one of the main goals since the discovery of HHG. The optimization process includes finding optimal driving-laser parameters (pulse duration, wavelength) and focusing, selecting the design of the gas target (length, type of target, e.g. nozzle, cell, fiber) and adjusting the interacting medium (gas type, pressure). While there is no general formula to predetermine the optimal parameters, some general trends exist.
Table 2.2: Conversion efficiency of HHG in xenon with Ti:sapphire based systems for $13^{th}$ and $15^{th}$ harmonic order. For comparison, experimental parameters, i.e. pulse duration $\tau$ and energy $E$, focusing geometry (focal length $f$), focal spot radius $w_0$ and target type are listed where available.
| | $\tau$ [fs] | $E$ [mJ] | $f$ [m] | $w_0$ [$\mu$m] | target | $\eta$ [$\times 10^{-5}$] |
|---|---|---|---|---|---|---|
| Constant et al. [34] | 40 | 1.5 | 1 | 125 | jet from 800 $\mu$m nozzle | 0.5 (H13) |
| Takahashi et al. [45] | 35 | 14 | 5 | 210 | hollow-core fiber (4 cm) | 4 (H15) |
| Hergott et al. [46] | 60 | 25 | 5 | | nozzle (0.3-3) mm | 6 (H15) |
| | | 4.3 | 2 | | | 1 (H15) |
| Suda et al. [47] | 22 | 14 | 4 | 300 | gas cell | 44 (H13) |
One usually starts with the selection of the gas according to the harmonic order to be generated. Lighter noble gases have a higher ionization potential and are thus suitable for generating the highest orders. However, the recombination cross-section is larger for heavier gases, making xenon an optimal choice for the generation of harmonics around 60 nm. While the laser parameters are in general set by the laser available in the lab, both shorter pulses and shorter wavelengths are preferable. Yet the impact of the pulse duration on the conversion efficiency varies with the gas type and the regime (plateau or cut-off) to which the harmonic belongs. It is strongest for cut-off harmonics in heavier gases [44].
In order to increase the HH yield, more atoms should be available to interact with the light. One way is to increase the interaction volume by increasing the beam size. This requires a more powerful laser to keep the intensity in the focus unchanged. Another option is a higher pressure in the gas target and consequently an increased number density of the gas within the interaction volume. However, this enhances reabsorption and worsens phase matching. The same is true for longer gas targets.
In table 2.2, a few HHG experiments in xenon are listed with their generation parameters and conversion efficiencies for the $13^{th}$ or $15^{th}$ harmonic. They were all performed with Ti:sapphire-based laser systems with a central (carrier) wavelength around 800 nm and can thus be compared with our experiment. Thanks to the relatively high pulse energies and short pulses, loose focusing could be applied and the highest conversion efficiencies so far were achieved.
### 2.3 Optics in EUV spectral range
EUV radiation is the part of the electromagnetic spectrum between UV and soft X-rays, spanning wavelengths from 10 to 120 nm, corresponding to photon energies of 120 eV down to 10 eV respectively. A large number of atomic resonances lie within this energy range, leading to extremely high absorption of light in all materials, within nanometers to micrometers, as well as in air [48]. This brings great experimental and technological challenges, because all system components (source, optics, detectors, sample, etc.) have to be placed in vacuum. Also, most of the transmissive, refractive optics available in the visible and IR cannot be used. The following chapter is not meant to review the field, but to introduce the basic relations necessary for understanding this work as well as to address those challenges that we encountered.
### 2.3.1 Optical constants
The intensity of the EUV light generated via HHG in our experiments was always weak enough that the nonlinear response of the medium could be neglected. The absorptive and dispersive medium can thus be fully described via a wavelength-dependent refractive index
\[
\tilde{n}(\lambda) = n(\lambda) + i\kappa(\lambda),
\]
(2.46)
which is now a complex quantity\(^3\). The real part \(n(\lambda)\) contains information on the dispersion of the medium, while the imaginary part \(\kappa(\lambda)\) describes its absorption.
For photon energies larger than 30 eV, the refractive index is related to the scattering factors \(f_1(\lambda)\) and \(f_2(\lambda)\) of the individual atoms through the relation [49]
\[
\tilde{n}(\lambda) = 1 - \frac{1}{2\pi}r_e\lambda^2 \sum_j N_j(f_{1,j} + if_{2,j}),
\]
(2.47)
where \(N_j\) is the number of atoms of type \(j\) per unit volume and \(r_e\) is the classical electron radius. Atomic scattering factors are determined through photoabsorption measurements of elements in their elemental state. Molecules or condensed matter composed of several atomic species are usually modeled as a collection of non-interacting atoms [48].
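As a sketch, equation (2.47) can be evaluated numerically for a single atomic species. All numbers below ($f_1$, $f_2$, the number density $N$) are illustrative assumptions, not database entries; real values should be taken from [50]-[52].

```python
import math

# Illustrative evaluation of eq. (2.47) for one atomic species.
# f1, f2 and N are assumed values, NOT taken from a database.
r_e = 2.8179403262e-15      # classical electron radius [m]
lam = 60e-9                 # wavelength [m]
f1, f2 = 3.0, 1.5           # assumed atomic scattering factors
N = 2.5e25                  # assumed number density [atoms/m^3]

# Refractive-index decrement (real part) and absorption index (imaginary part)
prefactor = r_e * lam**2 * N / (2.0 * math.pi)
delta = prefactor * f1      # n = 1 - delta
beta = prefactor * f2       # magnitude of the imaginary part

n = 1.0 - delta
print(f"n = {n:.8f}, |kappa| = {beta:.3e}")
```

The decrement from 1 is tiny, which is exactly why refractive optics are ineffective in the EUV.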
When designing experiments, as well as for data analysis, I made use of different collections of optical constants of materials. A very useful source is the online database of the Center for X-Ray Optics (CXRO) at Lawrence Berkeley National Laboratory [50], with values of atomic scattering factors for photon energies higher than 30 eV for \(f_1\) and higher than 10 eV for \(f_2\). Additionally, there are also online calculators for the transmittance of solids or gases and the reflectivity of mirrors, which served as a double check. For wavelength ranges not available in [50], values for \(n\) and \(\kappa\) were taken from [51] or [52].
#### Transmittance
The transmittance of an absorptive element, \(T\), is defined as the ratio between the output and input intensity. In the case of a homogeneous medium with refractive index \(\tilde{n}\) and length \(L\), it equals
\[
T(\lambda) = \frac{I_{\text{out}}}{I_{\text{in}}} = e^{-\mu(\lambda)L}
\]
(2.48)
\(^3\)The refractive index is in general complex, but we neglected the imaginary part when discussing the propagation of IR light, because absorption in the elements in the beam path was negligibly weak.
where $\mu(\lambda)$ is the absorption coefficient of the medium, related to the imaginary part of the refractive index by
$$\mu(\lambda) = \frac{4\pi}{\lambda} \kappa(\lambda).$$ \hspace{1cm} (2.49)
Equation (2.48) is called the Beer-Lambert law. It can be further extended to non-homogeneous media,
$$T(\lambda) = e^{-\int_0^L \mu(z)dz} = e^{-\sigma \int_0^L N(z)dz},$$ \hspace{1cm} (2.50)
where, for example, the number density of absorbers $N$ varies along the propagation direction $z$. Here $\sigma(\lambda)$ is the photoabsorption cross section of a single absorber, e.g. an atom or molecule, proportional to the imaginary part of the atomic scattering factor $f_2(\lambda)$
$$\sigma(\lambda) = 2r_e \lambda f_2(\lambda).$$ \hspace{1cm} (2.51)
An example of a non-homogeneous absorbing medium is the residual noble gas in the vacuum chambers of HHG experiments, released from the gas target. The number density of absorbers, i.e. gas atoms, decreases with distance from the gas target.
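The extended Beer-Lambert law (2.50) together with the cross section (2.51) can be sketched numerically for such a decaying gas column. The density profile, its parameters and $f_2$ below are assumptions for illustration only:

```python
import math

# Beer-Lambert law (2.50)-(2.51) for a non-homogeneous absorber:
# residual gas whose density decays away from the target.
# All numerical values are assumed for illustration.
r_e = 2.8179403262e-15          # classical electron radius [m]
lam = 60e-9                     # wavelength [m]
f2 = 1.5                        # assumed imaginary scattering factor
sigma = 2.0 * r_e * lam * f2    # photoabsorption cross section, eq. (2.51) [m^2]

# Assumed exponential density profile N(z) = N0 * exp(-z/z0)
N0, z0, L = 1.0e22, 0.05, 0.5   # peak density [m^-3], decay length [m], path [m]

# Integrate the column density numerically (midpoint rule)
steps = 1000
dz = L / steps
column = sum(N0 * math.exp(-(i + 0.5) * dz / z0) * dz for i in range(steps))

T = math.exp(-sigma * column)   # transmittance, eq. (2.50)
print(f"column density = {column:.3e} m^-2, T = {T:.4f}")
```

The integral reduces to $N_0 z_0 (1 - e^{-L/z_0})$ for this profile, so the numerical result can be checked analytically.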
### 2.3.2 Thin metal filters
In many experiments which employ HHG as a source of EUV light, one has to separate the HH beam from the residual driving beam to avoid its interaction with the target. Several schemes have been proposed and demonstrated to solve this problem: a Si/SiC plate at Brewster angle [53], diffraction gratings [54], a dichroic beam-splitter [55], an annular fundamental beam [56], etc. However, the most common way is to use several-hundred-nanometer-thin metal foils (TMFs), which block the laser beam while transmitting a significant amount of EUV light [57].
One can choose among several materials offered by manufacturers (Lebow, Luxel) in order to fit the transmission window to the desired wavelength range. For our experiments, aluminum (Al) and tin (Sn) filters are appropriate, both of which can be used as a band-pass filter around 60 nm. The modeled transmittance of a 200-nm-thick Al and Sn foil is 0.63 and 0.22 for 60 nm light, and $3 \times 10^{-12}$ and $7 \times 10^{-10}$ for the 810 nm light of the driving laser, respectively. Aluminum TMFs are thus the most suitable candidate and were installed in our setup. However, aluminum oxidizes immediately when exposed to air, forming a few-nanometer-thick layer of aluminum oxide. Besides, TMFs already have some pinholes at the manufacturer, caused by dust or deposition during manufacturing, the number of which increases with decreasing foil thickness. Therefore, several TMFs usually have to be employed, reducing the available EUV flux.
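Since the transmittances of foils in series multiply, the trade-off of stacking several Al TMFs can be sketched with the modeled single-foil values quoted above (0.63 in the EUV, $3\times10^{-12}$ in the IR for 200 nm Al); pinhole leakage is neglected in this sketch:

```python
# Throughput of a stack of thin metal filters: transmittances of
# foils in series multiply. Single-foil values are the modeled
# numbers quoted in the text for 200-nm Al (pinholes neglected).
T_euv, T_ir = 0.63, 3e-12

for k in (1, 2, 3):
    print(f"{k} foil(s): EUV {T_euv**k:.3f}, IR leak {T_ir**k:.1e}")
```

Each added foil improves the IR suppression by twelve orders of magnitude but costs roughly a third of the EUV flux.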
### 2.3.3 EUV mirrors
The high absorption of EUV light in all materials prevents the usage of refractive optics for focusing. Therefore, either mirrors or Fresnel zone plates are usually employed. Focusing of HHs with zone plates was achieved in several experiments ([58] and references therein), but as zone plates are not part of our experiments, we will not discuss them here. More common focusing elements are mirrors of spherical and aspherical shape (paraboloid, ellipsoid, toroid) coated with a single layer or multiple layers of a precisely chosen material, optimal for the wavelength range of the EUV source.
However, the performance of mirrors in the EUV is much worse than in the IR and visible, mostly due to weak reflectivity and imperfections of the surface.
The reflectivities from a semi-infinite medium with a (complex) refractive index $\tilde{n}$ for s- or p-polarized light incident at an angle $\phi$ with respect to the normal of the surface are given by the Fresnel equations [48]:
\[
R_s = \left| \frac{\cos \phi - \sqrt{\tilde{n}^2 - \sin^2 \phi}}{\cos \phi + \sqrt{\tilde{n}^2 - \sin^2 \phi}} \right|^2, \quad R_p = \left| \frac{\tilde{n}^2 \cos \phi - \sqrt{\tilde{n}^2 - \sin^2 \phi}}{\tilde{n}^2 \cos \phi + \sqrt{\tilde{n}^2 - \sin^2 \phi}} \right|^2.
\]
(2.52)
It is assumed here that the light propagates in vacuum with a refractive index of 1. The above equations are the same as in the case of a lossless medium, except that the refractive index is now a complex quantity. This has some direct consequences: for arbitrary linearly polarized light (neither purely s nor p), a different phase change will be introduced upon reflection for the s and p components, such that the reflected light becomes elliptically polarized [59]. Also, at Brewster's angle there is a finite reflection of p-polarized light (see figure 2.8). However, none of these effects is the limiting factor for the poor mirror performance.
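Equation (2.52) is straightforward to evaluate with complex arithmetic. The complex index below is an assumed, illustrative value, not a tabulated constant for any particular metal:

```python
import cmath, math

def fresnel_R(n_complex, phi):
    """Intensity reflectivities R_s, R_p from eq. (2.52);
    phi is measured from the surface normal, incidence from vacuum (n = 1)."""
    c = math.cos(phi)
    s2 = math.sin(phi) ** 2
    root = cmath.sqrt(n_complex**2 - s2)
    Rs = abs((c - root) / (c + root)) ** 2
    Rp = abs((n_complex**2 * c - root) / (n_complex**2 * c + root)) ** 2
    return Rs, Rp

# Assumed complex refractive index of a metal in the EUV
# (illustrative; real values should come from [50]-[52]).
n_tilde = 0.9 + 0.4j
Rs, Rp = fresnel_R(n_tilde, math.radians(45))
print(f"45 deg:  R_s = {Rs:.3f}, R_p = {Rp:.3f}")
Rs_g, Rp_g = fresnel_R(n_tilde, math.radians(85))
print(f"85 deg:  R_s = {Rs_g:.3f}, R_p = {Rp_g:.3f}")
```

The grazing-incidence evaluation already hints at the trend discussed below: s-polarized reflectivity grows toward unity as $\phi \to 90^\circ$.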
If we evaluate equation (2.52) for normal incidence,
\[
R_s = R_p = \left| \frac{1 - \tilde{n}}{1 + \tilde{n}} \right|^2 = \frac{(1 - n)^2 + \kappa^2}{(1 + n)^2 + \kappa^2}
\]
(2.53)
we can see that high reflectivity from a single layer requires that either the real or the imaginary part of the refractive index of the medium (or both) is much larger than the refractive index of vacuum, i.e. 1. For EUV light this is not the case, so reflectivities are in general quite low (compared to the IR and visible). At 60 to 100 nm, the best candidates are SiC and B$_4$C with a reflectance at normal incidence of 30 – 35%, while below 65 nm, single layers of a few metals like Pt, Ir and Os reflect up to 20 – 35% [60]. The investigation of new coating materials is therefore an active area of research.
To increase the reflectivity of the mirror, one can reflect light at angles far from the surface normal, so-called grazing angles. As shown in figure 2.8, reflection above 80% is achieved for radiation of any polarization at angles > 85°. However, the price we pay is a reduced numerical aperture and potentially strong aberrations.
Large reflectivity can also be obtained from mirrors with multilayer interference coatings. In that case, alternating layers of two materials are stacked on top of each other, such that multiple weak reflections from each interface add up constructively. Such mirrors are carefully designed for a specific wavelength and angle of incidence such that Bragg's law is fulfilled [48]. For normal incidence, the thickness of one bilayer period is approximately $\lambda/2$. The highest reflectivity, up to 70%, was achieved with a Mo/Si multilayer targeted at $\lambda = 13.5$ nm, the operational wavelength of next-generation EUV lithography machines [61]. At 60 nm, no transmissive spacer materials are available and the layers have to be thicker; this limits the performance to an estimated reflectance of $\approx 40\%$ for Mg/SiC multilayer mirrors [62]. Another advantage of multilayer mirrors is their high reflectance in a narrow band, making them suitable as band-pass filters, e.g. to select a harmonic order.
For real mirrors, the reflectance gets even worse, as the mirror's surface is not ideally smooth, resulting in additional losses due to scattered light. A common model
Figure 2.8: Calculated wavelength- (left) and incident-angle- (right) dependence of the reflectivity from an ideally smooth gold mirror for s- (full) and p-polarized (dashed) light, and the Debye-Waller factor (DWF, red) for a real rough surface with $\sigma = 6$ nm according to (2.54). For the left figure an incident angle of $45^\circ$ and for the right $\lambda = 60$ nm were assumed. Data for the refractive index were taken from [52].
for reflection from rough surface adds a Debye-Waller correction factor to Fresnel reflectance from perfectly flat surface $R_{s,p}$[63]
$$R = R_{s,p} \times \exp \left[ -\left( \frac{4\pi\sigma_R \cos \phi}{\lambda} \right)^2 \right]. \quad (2.54)$$
Here $\sigma_R$ is called the surface roughness, i.e. the mean variation of the surface height. The Debye-Waller factor evaluated for different wavelengths and incident angles is shown in figure 2.8 (red line). One can see that the effect of a rough surface is stronger for shorter wavelengths and smaller incident angles. However, mirrors with a surface roughness well below 1 nm can be machined nowadays, reaching a precision of a few atomic layers [48].
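The roughness correction (2.54) can be sketched with the $\sigma_R = 6$ nm of figure 2.8 and an assumed flat-surface reflectance $R_0$ (the value 0.30 below is illustrative, not a tabulated one):

```python
import math

# Debye-Waller roughness correction, eq. (2.54), applied to an
# assumed flat-surface reflectance R0. sigma_R as in figure 2.8.
def rough_reflectance(R0, sigma_R, phi, lam):
    dwf = math.exp(-((4 * math.pi * sigma_R * math.cos(phi)) / lam) ** 2)
    return R0 * dwf

R0 = 0.30                  # assumed smooth-surface reflectance
sigma_R = 6e-9             # surface roughness [m]
lam = 60e-9                # wavelength [m]
for deg in (0, 45, 85):
    R = rough_reflectance(R0, sigma_R, math.radians(deg), lam)
    print(f"phi = {deg:2d} deg: R = {R:.4f}")
```

At grazing incidence the effective roughness seen by the wave shrinks by $\cos\phi$, so the penalty nearly vanishes, consistent with the red curve in figure 2.8.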
**Focusability of high harmonics**
The good beam quality and spatial coherence of HH radiation promise the possibility of focusing the harmonic beam to diffraction-limited spots. Suda et al. [47] investigated the focusability of high harmonics with different mirrors. They focused the 27th harmonic at $\approx 30$ nm to a 1 $\mu$m focal spot, only 1.7 times the diffraction-limited value, with a 50-mm off-axis parabolic mirror with a SiC/Mg multilayer coating and 0.7 nm surface roughness. A slightly larger focal spot of 2.3 $\mu$m was obtained when focusing multiple harmonics ($21^{st} - 33^{rd}$) at the same time using a grazing-incidence Pt-coated mirror with 5 nm surface roughness.
Chapter 3
Generation, characterization and manipulation of ultrashort light pulses
Generation of HHs requires light intensities on the order of $10^{13} - 10^{14}$ W/cm$^2$. To reach them, one has to confine light in time and space, or in practice, use very short light pulses and focus them. As the ultrashort pulses coming from laser oscillators usually carry too little energy, they have to be additionally amplified prior to focusing. The stronger the amplification, the less tightly we have to focus and the higher the conversion efficiency that can be achieved. In the following chapter, all three steps, i.e. generation, amplification and focusing of ultrashort light pulses as utilized in our experiments, will be presented.
3.1 Generation of high-energy ultrashort light pulses with chirped-pulse amplification system
In figure 3.1, a scheme of the laser-system part of our experimental setup is shown. A mode-locked laser oscillator seeds a chirped-pulse amplification (CPA) system with ultrashort light pulses, of which only every $100000^{th}$ pulse is selected and amplified. Both lasers use a titanium-sapphire crystal as a gain medium, whose emission band is centered in the near-infrared at 800 nm, while its absorption band maximum lies in the green (490 nm). Thus, frequency-doubled solid-state lasers with neodymium-doped crystals are used for optical pumping in both cases.
In a mode-locked laser oscillator, the phases of a great number of highly coherent longitudinal modes are locked such that when they sum up, constructive interference occurs for a very short time [31]. The larger the gain bandwidth of the gain material, the more modes can contribute, and thus the shorter the generated pulses. Several techniques exist to phase-lock the modes: *active mode-locking* requires an active loss or frequency modulation, while in *passively mode-locked* laser oscillators, the light in the resonator is modulated through its self-interaction with optical elements within the laser cavity. A state-of-the-art example of the latter is the Kerr-lens mode-locking (KLM) mechanism, on which the laser oscillator used in this work is based. When light passes through a Kerr material in the resonator (in our case, the titanium-sapphire crystal), it experiences a self-focusing effect. Due to this intensity-dependent phenomenon, more intense fluctuations are affected more and get focused tighter. By placing an aperture in the beam path, an intensity-dependent loss is introduced, favoring intense fluctuations over less confined, weak ones. After several round-trips, the initially most intense part is boosted further by passing through the gain medium, while other fluctuations are suppressed. This ultrafast self-amplitude modulation effect can initiate and sustain the formation of an ultrashort light pulse [31].
In detail, the laser oscillator in our setup is a FemtoSource 20, a linear-cavity titanium-sapphire laser. It is pumped by a Coherent V1 laser. Due to the KLM mechanism it is fairly easy to mode-lock the oscillator: by introducing a distortion, i.e. moving one of the end mirrors placed on a translation stage, we bring the oscillator into a mode-locked state. The spectrum and autocorrelation trace of the light pulses from the oscillator are shown in figure 3.2. A train of 25-fs pulses leaves the oscillator at a repetition rate of 100 MHz with 300 mW of average power. Each pulse thus carries only 3 nJ of energy, which is far too little to reach the high intensities needed to drive highly nonlinear processes like HHG, which requires pulses with energies of hundreds of microjoules or even millijoules. Therefore, the pulses have to be amplified first.
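The quoted oscillator numbers can be checked with simple arithmetic: the pulse energy is the average power divided by the repetition rate, and the sech$^2$ peak power follows with the same 0.88 conversion factor used later in equation (3.1):

```python
# Quick check of the oscillator parameters quoted above.
P_avg = 0.300        # average power [W]
f_rep = 100e6        # repetition rate [Hz]
t_fwhm = 25e-15      # pulse duration [s]

E_pulse = P_avg / f_rep               # energy per pulse [J]
P_peak = 0.88 * E_pulse / t_fwhm      # sech^2 peak power [W]
print(f"E_pulse = {E_pulse*1e9:.1f} nJ, P_peak = {P_peak*1e-3:.0f} kW")
```

This reproduces the 3 nJ per pulse stated in the text and shows that the unamplified peak power is only on the 100 kW scale.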
Figure 3.1: The laser setup in our experiment consists of 1.) a laser oscillator, which is used as a seed source for 2.) a chirped-pulse amplification system that selects some of the pulses and boosts their intensity by a factor of $\approx 10^6$. Both have a Ti:sapphire crystal as a gain medium, pumped with green light, in the first case by a cw and in the second by a Q-switched neodymium-based solid-state laser. After each stage, some of the light can be picked up by a glass wedge in order to measure pulse characteristics: an autocorrelation trace, a spectrum and a beam profile. In order to reach sufficient intensity for HHG, the light beam is focused using a plano-convex lens. Below the block diagram, the effect of each component of the CPA system on the light pulse is shown (note that the illustration is not to scale). Rainbow colors illustrate a chirped pulse, where a delay between different frequency components was introduced in order to stretch the pulse. Note that the spectrum of the light pulses in our experiment lies in the near-infrared, so that there were almost no visible-light components.
Figure 3.2: (a) An autocorrelation trace and (b) a spectrum (both blue) of the light pulses from the laser oscillator. Assuming a $sech^2$ shape, a pulse duration of 23 fs (FWHM value) can be extracted from the fitted curve (red). According to the time-bandwidth product 2.1, the spectral bandwidth would suffice to generate even shorter, 15.5-fs-long pulses, which means that the pulses from the oscillator are already slightly chirped.
Direct amplification of the pulses is not possible, because the gain material would not sustain such high intensities. A solution to this problem, *chirped pulse amplification* (CPA), was developed by Strickland and Mourou in 1985 [64], who first used longer, chirped pulses to avoid damaging the gain crystal.
In the following, the principles of CPA will be explained using the CPA unit in our setup, a Spectra Physics Spitfire. The first step in the CPA process is temporal stretching of the pulses by a dispersive optical element (or a combination of them) which introduces a large GVD; in our case, it is a combination of a grating and mirrors. When the light pulse with its 44-nm-wide spectrum hits the grating, its spectral components are spatially separated. The beam is guided such that the redder components of the dispersed spectrum travel over a longer path than the bluer ones. After several passes through the grating, the light pulse exits the stretcher spatially recombined but temporally broadened, as different frequencies are delayed relative to each other. The pulse is chirped and up to a factor of $10^4$ longer than the original pulse from the oscillator.
Such a stretched pulse with significantly reduced peak power now safely enters a regenerative amplifier. A gain crystal (titanium sapphire) is synchronously pumped by a Spectra Physics Empower, a 1 kHz Q-switched Nd:YLF laser. The pulse from the stretcher thus passes through it right after the population inversion has been established by a pump pulse. Because there is not enough energy stored in the gain crystal to amplify all pulses, only every 100000$^{th}$ pulse is selected, in order to be able to boost its energy to the millijoule level. Synchronized with the pump pulse, a Pockels cell is switched on and off, and in combination with a $\lambda/4$-waveplate and a polarization-selective element a single pulse is selected. It then circulates in the regenerative amplifier with the gain crystal. After a single pass, its energy increases by a factor of 3-4 [65]. Up to $\approx 20$ passes are possible before the population inversion in the gain crystal vanishes. At this point a second Pockels cell is triggered. The amplified pulse leaves the amplifier and enters the last stage - a compressor.
A compressor is in principle the same as a stretcher, with one crucial difference: the opposite sign of the GVD. By introducing a horizontal reflector, the red and blue parts of the pulse are forced to switch their “roles”, so that the red components now take the shorter path to catch up with the blue ones. To precisely adjust the delay between different spectral components, a retroreflector in the compressor is mounted on a motorized translation stage. By moving it, one can tune the chirp of the pulse, which would vanish in the optimal case. In our non-ideal case, the Spitfire emits 110-fs light pulses, four times longer than the original, at a 1 kHz repetition rate. However, they are almost a million times more energetic, carrying 2.2 mJ of energy each. The spectrum and autocorrelation trace of the amplified light pulses are shown in figure 3.3.
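The quoted single-pass gain of 3-4 makes the number of required round trips easy to estimate. As a rough sketch, assuming an effective seed energy of $\sim 1$ nJ entering the amplifier (an assumed value allowing for stretcher losses, not a measured one):

```python
import math

# Rough estimate of regenerative-amplifier round trips: starting from
# an ASSUMED ~1 nJ effective seed with a gain of 3-4 per pass [65],
# how many passes reach the 2.2 mJ output level?
E_seed, E_out = 1e-9, 2.2e-3
for gain in (3.0, 4.0):
    n_passes = math.log(E_out / E_seed) / math.log(gain)
    print(f"gain {gain:.0f} per pass: ~{n_passes:.0f} passes")
```

The estimate of roughly 10-13 passes is comfortably below the $\approx 20$ passes available before the population inversion is depleted.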
### 3.2 Characterization and simulation of ultrashort light pulses propagation
In the course of HHG optimization, many of the light-pulse parameters are varied. To find the optimal value of these parameters, as well as to be sure that they remained the same during the optimization of others, the power, beam size, pulse duration and spectrum were monitored. The power at the Spitfire's output varied from a maximum of 2.31 W at the beginning of operation down to 2.06 W after a few hours. It was not sensitive to the power of the seed laser oscillator, but varied slightly ($< 5\%$) with the spectral bandwidth of the seed pulses. The spectra of the seed laser and the Spitfire were measured with an OceanOptics USB 2000 spectrometer. Their typical
shapes are shown in figures 3.2b and 3.3b respectively. The pulse duration was determined indirectly through an intensity autocorrelation measurement assuming a $sech^2$ pulse. The APE Mini autocorrelator in general also allows measuring the interferometric autocorrelation, but here (see figures 3.2a and 3.3a) the interferometric part has been averaged out.
3.2.1 Beam characteristics
Knowing the spatial properties of the IR (fundamental, $\lambda \approx 800$ nm) light beam is of special importance, as those properties are to some extent transferred to the high-harmonics beam. While measuring the IR light is fairly simple, EUV light demands special detectors, vacuum, etc., and not all of that is always available. To be able to at least estimate the properties of the high-harmonics beam, the fundamental beam should be well characterized. Besides that, HHG requires high intensities and consequently focusing, which should be optimized for efficient HHG.
Beam profiles of the fundamental beam were measured with Dataray WinCamD (in focus) and TaperCamD (at the laser output) beam-profiling cameras. The image data were integrated along the horizontal ($x$) and vertical ($y$) direction in order to get 1D profiles in the $y$ and $x$ direction respectively. A Gaussian curve was fitted to the data to obtain the beam widths $w_x$ and $w_y$. Another way was to directly fit a Gaussian to the row and column defined by the pixel with the highest intensity. The difference between both evaluations was less than 5%.
The beam profile at the Spitfire's output is shown in figure 3.4. To estimate the divergence of the beam, the beam profile was also measured $\approx 1.5$ m away from the output. The relative change in beam width was below the measurement accuracy. An $M^2$ measurement of the beam quality was performed by focusing the beam with a 750 mm lens and measuring the beam profile along the focus. The beam widths in the $x$ and $y$ direction shown in figure 3.5 correspond to a Gaussian beam with $M^2_x = 1.19$ and $M^2_y = 1.14$.
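The caustic underlying such an $M^2$ measurement follows $w(z) = w_0\sqrt{1 + (z M^2 \lambda / \pi w_0^2)^2}$. A minimal sketch, using the measured $M_x^2 = 1.19$ but an assumed focal radius $w_0$ for illustration:

```python
import math

# Caustic of an M^2 != 1 beam around its waist. M2 is the measured
# value quoted in the text; w0 is an assumed waist radius.
lam = 800e-9
M2 = 1.19
w0 = 100e-6                 # assumed waist radius [m]

def w(z):
    """Beam radius at distance z from the waist."""
    zR_eff = math.pi * w0**2 / (M2 * lam)   # effective Rayleigh range
    return w0 * math.sqrt(1.0 + (z / zR_eff) ** 2)

zR = math.pi * w0**2 / (M2 * lam)
print(f"effective Rayleigh range = {zR*1e3:.1f} mm, w(zR) = {w(zR)*1e6:.1f} um")
```

Fitting this model to the measured $w(z)$ data points (e.g. those in figure 3.5) is how $M^2$ is extracted in practice.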

Figure 3.5: $M^2$ measurement of the IR beam focused with a $f = 750$ mm lens: the beam is very close to an ideal Gaussian beam ($M_x^2 = 1.19 \pm 0.04$, $M_y^2 = 1.14 \pm 0.04$), though slightly astigmatic and elliptic. This is in agreement with the value specified by the vendor ($M^2 < 1.3$). [66]
### 3.3 Focusing of the IR beam into gas target
In order to generate high harmonics, the light intensity of the fundamental laser beam should be on the order of $10^{13} - 10^{14}$ W/cm$^2$. For our laser parameters ($E_{\text{pulse}} = 2.2$ mJ, $t_{\text{FWHM}} = 118$ fs), the beam radius in the focus should be
$$w_{20} = \sqrt{\frac{P_{\text{peak}}}{\pi I}} \approx 70 - 200 \mu\text{m} \quad (3.1)$$
where $P_{\text{peak}} = 0.88 E_{\text{pulse}} / t_{\text{FWHM}}$; the factor of 0.88 is a pulse-shape-specific conversion factor for a sech$^2$ pulse. According to the measurement, our IR beam is well collimated, so we can assume that the beam waist $w_{10} = 3.2$ mm lies just in front of the lens. The Rayleigh range of such a beam, $z_R \approx 40$ m, is much larger than the focal length of the lens $f_{\text{lens}}$, so that the following approximation for the radius of the spot in the focus can be made
$$w_{20} = \frac{f_{\text{lens}} \lambda}{\pi M^2 w_{10}}. \quad (3.2)$$
From here we can calculate the required focal length of the lens to be $f_{\text{lens}} \approx 600 - 2000$ mm. In real experiments there were additional losses due to transmission through a rotating filter wheel used for continuously changing the intensity of the fundamental beam, Fresnel back-reflection at air-glass interfaces, and truncation of the beam with an iris used to improve the beam quality and optimize HHG. Consequently, we had to use shorter-focal-length lenses (300-1000 mm) and focus the IR beam to smaller spots (30-100 $\mu$m).
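Equations (3.1) and (3.2) can be chained directly; the sketch below uses the laser parameters from the text with $M^2 = 1$ (the exact numbers shift somewhat with rounding and the $M^2$ correction):

```python
import math

# Required focal spot size, eq. (3.1), and focal length, eq. (3.2),
# for reaching HHG intensities with the laser parameters quoted above.
E_pulse = 2.2e-3                   # pulse energy [J]
t_fwhm = 118e-15                   # pulse duration [s]
lam = 800e-9                       # wavelength [m]
w10 = 3.2e-3                       # beam radius at the lens [m]

P_peak = 0.88 * E_pulse / t_fwhm   # sech^2 peak power [W]
for I in (1e13, 1e14):             # target intensity [W/cm^2]
    I_SI = I * 1e4                 # convert to W/m^2
    w20 = math.sqrt(P_peak / (math.pi * I_SI))   # eq. (3.1)
    f_lens = math.pi * w10 * w20 / lam           # eq. (3.2), M^2 = 1
    print(f"I = {I:.0e} W/cm^2: w20 = {w20*1e6:.0f} um, f = {f_lens*1e3:.0f} mm")
```

Higher target intensity demands a tighter focus and hence a shorter focal length, which is why the real losses listed above pushed the experiment toward 300-1000 mm lenses.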
In figure 3.6, the radius of the focal spot is shown for the 5 lenses used in the experiment. The beam at the lens was slightly elliptic (see figure 3.4). Simply placing the lens in the beam path led to astigmatic focusing, i.e. the waists of the beam in the $x$ and $y$ direction did not coincide. This effect was minimized by tilting the lens around the $y$ axis by a specific angle of $\approx 5^\circ - 15^\circ$. On the other hand, tilting the lens also decentered
Figure 3.6: Beam width of the focused IR beam at the gas target: the measured values are compared with the ones predicted by Gaussian beam propagation (3.2) for an ideal Gaussian beam ($M^2 = 1$) and the real beam ($M_x^2 = 1.19$, $M_y^2 = 1.14$). In agreement with theory, the beam is slightly narrower in the $y$ direction, but the focus is actually tighter than expected for the input parameters ($M_{x,y}^2$). The reason is that the lens itself modifies the beam and, depending on the alignment, the $M^2$ value changes; the measured values (see figure 3.5) are relevant only for that particular case. However, they are a good upper approximation, as all the data points lie in the vicinity of, but still below, the predicted values. Data for the focus of the truncated beam (in the course of HHG optimization, the beam was truncated in front of the focusing lens to increase the HH yield) are shown as well. As expected, due to the smaller beam size at the lens, the beam is larger at the focus, but surprisingly it does not get rounder.
the beam, which therefore left the beam path defined by the reference points, i.e. the fixed remaining components of the experimental apparatus. Optimization of the focus thus required an iterative process of tilting and centering the lens. In this way, the IR beam was focused to round and almost diffraction-limited (< 20% larger) spots. The reasons for the not-quite-diffraction-limited focus could be a non-ideal Gaussian beam ($M^2 > 1$) and still imperfect alignment. Another possibility to improve the focus is to place a round aperture in front of the lens. However, this method has the disadvantage of losing part of the light and increasing the focal spot size.
3.3.1 Potential effects of the lens on the light pulse
As explained in chapter 2.1.2, when a light pulse travels through a dispersive medium with a non-zero GVD, different spectral components travel with different group velocities and consequently the pulse stretches according to equation (2.15). The effect of material dispersion has been evaluated for on-axis rays. The thickness of the lens is largest at the center, so this is an upper estimate for the pulse stretching.
However, there is also an angular dispersive effect of a singlet lens, called pulse front distortion (PFD), leading to pulse stretching [67]. It was first explained and evaluated by Bor [68] in the ray-optics picture. When a plane wave passes through a lens, it acquires a different phase at different radii. The phase difference comes from the different path lengths through the refractive material. Consequently, the phase fronts become bent and the beam focuses or defocuses according to the sign of the lens. Similarly, the pulse fronts (surfaces coinciding with the peak of the pulse) get bent too. Due to the difference between the phase and group velocities, the pulse front becomes delayed with respect to the phase front. The delay is radially dependent, and therefore different parts of the pulse front do not coincide in the focus. The difference in delay between the center and the edge of the pulse front is [68]
\[
\Delta t_{\text{PFD}} = T(0) - T(w) = \frac{w^2}{2cf(n-1)} \left(-\lambda \frac{dn}{d\lambda}\right).
\]
(3.3)
In figure 3.7, the GVD and PFD effects are schematically illustrated. Except for few-cycle pulses with extremely broad spectral bandwidth, PFD is the dominant mechanism for pulse stretching when focusing ultrashort light pulses with a singlet lens [7]. It can be completely eliminated by using an achromatic lens, which makes the delay between pulse and phase fronts constant. However, an achromatic lens requires additional pieces of refractive material, which increases the pulse stretching due to GVD.
All lenses used for focusing the IR beam were Thorlabs plano-convex anti-reflection-coated lenses made from N-BK7 glass ($n = 1.5106$, $\frac{dn}{d\lambda} = -0.019363\,\mu m^{-1}$, $\beta_2 = 43.714\,fs^2/mm$ - all at $\lambda = 810\,nm$ [69]). The thickness at the center varied from $2.2 - 2.5\,mm$ for $f = 300 - 1000\,mm$ respectively. Pulse stretching is estimated to be $\Delta t_{\text{GVD}} = 2.8\,fs$ due to GVD and $\Delta t_{\text{PFD}} = 1.8\,fs$ due to PFD. However, the initial pulse duration $t_{\text{FWHM}} = 118\,fs$ and the GVD stretch add in quadrature, while the PFD is added directly:
\[
t_{\text{new}} = \sqrt{t_{\text{FWHM}}^2 + \Delta t_{\text{GVD}}^2} + \Delta t_{\text{PFD}} = 119.8\,fs.
\]
(3.4)
As expected, GVD has practically no effect on the pulse duration, while the PFD mechanism stretches the pulse by less than 2%. A singlet lens can thus be used for focusing the driving beam into the gas target without significantly worsening the conditions for HHG.
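The combination rule above (quadrature for GVD, linear addition for PFD) is easy to verify numerically with the $\Delta t$ estimates quoted in the text:

```python
import math

# Pulse stretching by the focusing lens: GVD adds in quadrature, PFD
# adds linearly. The Delta-t values are the estimates from the text.
t_fwhm = 118.0       # initial pulse duration [fs]
dt_gvd = 2.8         # GVD stretching [fs]
dt_pfd = 1.8         # pulse-front-distortion stretching [fs]

t_new = math.sqrt(t_fwhm**2 + dt_gvd**2) + dt_pfd
print(f"t_new = {t_new:.1f} fs")
```

The quadrature term contributes only $\sim 0.03$ fs, confirming that essentially all of the sub-2% stretching comes from PFD.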
3.3.2 Self-focusing
Besides the lens, there was another piece of glass in the beam path, namely the window of the vacuum chamber in which HHG takes place. It was 3 mm thick, NIR anti-reflective coated and also made from BK-7 glass. Even though BK-7 has
a relatively low nonlinear refractive index, $n_2 = 4.0 \times 10^{-16} \text{cm}^2/\text{W}$ [70], our high-energy pulses modified the refractive index to some extent through the Kerr effect, as explained in chapter 2.1.4. The radially dependent intensity results in self-focusing, while the time-dependent intensity modulates the temporal phase and introduces a chirp (SPM). The effect of the window on the driving beam was first observed as a rainbow-ring pattern in the far field after the focus: due to SPM, new frequencies were generated, which refracted at different angles. The problem was solved by adding an extension tube with a window at its end to the chamber. SPM was thus reduced to a level where no new frequencies in the visible were generated. On the other hand, self-focusing could still be observed, as the position of the focus shifts for different laser powers. In the following, self-focusing in the lens and the window will be analyzed.
Both the lens and the window are thin enough that the beam size does not change significantly while propagating through them. Their effect on the beam can thus be approximated by a thin lens with a focal length $f_{NL}$ given by (2.31). The effective focal length of a lens is then
$$f_{\text{eff}} = \frac{f_{\text{lens}} f_{NL}}{f_{\text{lens}} + f_{NL}} \approx f_{\text{lens}} \left(1 - \frac{f_{\text{lens}}}{f_{NL}}\right)$$ \hspace{1cm} (3.5)
where the last step is justified because $\frac{f_{\text{lens}}}{f_{NL}} \ll 1$. This ratio has been evaluated for all 5 lenses and varies from 0.01 to 0.033. The relative change of 3.3% is largest for the 1000-mm lens, which means that the focus position moves by more than 3 cm.
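To first order, equation (3.5) says the focus moves by the fraction $f_{\text{lens}}/f_{NL}$ of the focal length. The sketch below evaluates this for the two extreme ratios quoted in the text:

```python
# Focus shift due to self-focusing in the lens, eq. (3.5):
# for f_lens/f_NL << 1 the effective focal length shortens
# by that fraction of f_lens. Ratios are the values from the text.
for f_lens_mm, ratio in ((300, 0.010), (1000, 0.033)):
    shift_mm = f_lens_mm * ratio
    print(f"f = {f_lens_mm} mm: focus moves by ~{shift_mm:.0f} mm")
```

This reproduces the >3 cm shift for the 1000-mm lens, while the 300-mm lens is affected by only a few millimeters.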
This shift is even more drastic due to the self-focusing in the window. After passing through the lens, the beam converges and is significantly smaller at the window than at the lens, resulting in a higher intensity and consequently stronger self-focusing. The
Figure 3.8: Simulation of the focusing of a Gaussian beam with different lenses from $f = 300$ mm (blue) to $f = 1000$ mm (purple), with (full line) and without (dashed line) a window in the beam path. The relative position of the window and lens has been taken into account for each lens configuration. While the window and the gas target are fixed, longer-focal-length lenses are further away from the gas target (and window) than shorter-focal-length lenses. Consequently, the beam at the window is significantly smaller, the intensity higher, and the self-focusing more intense.
The effect on the focus position and focal spot size can be seen in figure 3.8, where the propagation of a Gaussian beam has been simulated for the different lenses. The experimental configurations with different distances between lens and window were taken into account. Self-focusing is more intense for larger $f_{\text{lens}}$, because those lenses had to be placed further from the gas target and thus also from the window. Similar trends were observed experimentally, but unfortunately no data were recorded. To conclude, self-focusing decreases the focal spot in all cases, which is in general desirable. However, since we lack a quantitative evaluation of it, it merely introduces an additional uncertainty in the determination of the HHG conditions.
Chapter 4
Characterization and optimization of HHG
Once sufficiently short light pulses carrying enough energy are available, one simply has to focus them into a gas target in order to generate high harmonics. However, for the HHG process to be efficient, several experimental parameters have to be tuned. To evaluate the generation, and to find a balance among the different processes taking place at the gas target (described in chapter 2.2.2), the generated radiation has to be characterized. Because the harmonics lie in the EUV spectral range, special optical elements, detectors and vacuum equipment have to be employed. In the following chapter, an experimental setup for the generation and characterization of HH is presented. Next, typical spectra are shown and briefly analyzed. In the last part, the results of a multi-parameter optimization of the $13^{th}$ harmonic yield are summarized and general trends are discussed.
4.1 Experimental setup
In figure 4.1, the experimental setup for HHG and spectrum measurement is shown. It can be coarsely divided into three parts according to their function: HHG takes place in a vacuum chamber with a gas target; the generated harmonics are then spectrally dispersed using an EUV monochromator and detected with a channeltron, i.e. a single-channel electron multiplier.
4.1.1 Gas target
The laser beam enters the vacuum chamber through an anti-reflection-coated BK7 window and is focused into the gas target. The vacuum chamber had a transparent lid, so that realignment and observation of the plasma were possible while the system was under vacuum. Two different gas-target geometries, shown in figure 4.2, were tested. From a so-called end-fire nozzle, a gas jet is blown perpendicular to the beam axis, while in the case of a gas cell the beam enters and leaves the pressurized cell through small openings. There, gas is emitted through the openings along the beam axis, affecting both the driving and the harmonic beam.
The end-fire nozzle was made of glass with a circular aperture of diameter $D_{\text{e.f.}} = 600 \mu m$. Its position relative to the laser beam could be varied with a motorized translation stage moving the nozzle in all three directions. When the
Figure 4.1: Experimental setup for HHG and spectrum measurement: RWF - rotating wheel filter, AI - adjustable iris, XYZ - three-axis piezo-actuator-driven translation stage, PG - pressure gauge, TP - vacuum turbo pump, IS and OS - entrance (input) and exit (output) slit of the EUV monochromator respectively, RV - regulating valve, and TG - toroidal grating.
nozzle is moved too close to the beam, it can melt or break. The gas cell was a simple brass tube, sealed with Torr Seal on one end and with openings of irregular shape (due to laser ablation from previous experiments). The length of the cell, $L_{\text{cell}} = 3$ mm, is given by the diameter of the tube, while the diameter of the openings is approximately $D_{\text{g.c.}} = 1$ mm. It was aligned in the transverse plane by monitoring the power transmitted through the unpressurized cell with an attenuated laser beam.
Two noble gases, xenon and argon, were used as the non-linear medium in the gas target. The backing pressure at the gas bottle was kept constant at 2 bar. The gas pressure at the target could be varied by controlling the gas flow with a regulating valve (EVR 116 by Pfeiffer Vacuum). The pressure in the vacuum chamber was monitored with a full-range pressure gauge. Pressure equilibrium was usually reached within a few seconds after applying a gas load. The maximum gas load was limited by the vacuum turbopump (TMU 200M by Pfeiffer Vacuum), which shut
Figure 4.2: Two different gas target geometries used in this work: a) an end-fire nozzle and b) a gas cell.
down automatically when the gas flow was too high. At typical gas flows, the pressure levels in the chamber were $3.2 \times 10^{-3}$ mbar for xenon and $1.6 \times 10^{-2}$ mbar for argon.
4.1.2 EUV monochromator
To spectrally resolve the harmonics, the HH beam was sent into an EUV monochromator, a Jobin Yvon LHT 30. It is a grating-rotation monochromator with highly grazing incidence and fixed entrance and exit slits. The diffraction element in the monochromator is a toroidal holographic grating with groove density $\gamma = 550/\text{mm}$ and horizontal and vertical radii of curvature $R = 1000$ mm and $\rho = 103$ mm respectively. It is corrected for zero astigmatism and minimized coma\cite{71}. The curved grating serves both as imaging and diffracting element, thus reducing the reflection losses, which are considerable in this wavelength range. The monochromator has a fixed angle $2\theta = 142^\circ$ between the entrance and exit arms of equal length $f_{TG} = 320$ mm, where $f_{TG}$ is the focal length of the toroidal grating. Slits of different widths can be used to change the spectral resolution of the instrument. While sufficient spectral resolution is desirable when measuring spectra, at a fixed harmonic (the 13\textsuperscript{th}) it would be preferable to increase the bandwidth of the monochromator so that the harmonic is transmitted completely. However, because of the very intense IR driving laser beam, narrow monochromator slits were used to at least partially block the IR light, which might otherwise damage the grating.
Spectra were recorded by rotating the grating. The diffraction angles are given by the grating equation
$$\sin \alpha + \sin \beta = m\gamma \lambda$$
(4.1)
where $\alpha$ and $\beta$ are the angles of the incident and diffracted rays with respect to the grating normal, and $m$ is the diffraction order. As the geometry is fixed, $\alpha$ and $\beta$ are related by
$$\alpha - \beta = 2\theta.$$
(4.2)
The configuration is thus fully defined and the selected wavelength is given by \cite{72}
$$m\lambda = \frac{2}{\gamma} \cos \theta \sin (\theta + \beta)$$
(4.3)
where $\theta + \beta$ is exactly the angle by which the grating has been rotated from the zero order. For a spectrally broad source spanning at least from $\lambda_{\text{min}}$ to $2\lambda_{\text{min}}$, where $\lambda_{\text{min}}$ is the smallest wavelength present, the diffraction orders will overlap. As we will see, this is indeed the case for HH spectra.
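The tuning curve (4.3) can be sketched numerically with the grating parameters given above. In this sketch we assume an 800 nm driving laser, so that the 13\textsuperscript{th} harmonic lies near 61.5 nm; the function and variable names are illustrative.

```python
import math

# Hedged sketch of Eq. (4.3): m*lambda = (2/gamma)*cos(theta)*sin(theta+beta),
# with gamma = 550 grooves/mm and 2*theta = 142 deg taken from the text.
GAMMA = 550e3               # grooves per metre
THETA = math.radians(71.0)  # half the fixed angle between the arms

def selected_wavelength(rotation_deg, m=1):
    """Wavelength (nm) passed in order m at grating rotation theta+beta
    measured from the zero-order position."""
    phi = math.radians(rotation_deg)
    lam_m = (2.0 / GAMMA) * math.cos(THETA) * math.sin(phi) / m
    return lam_m * 1e9

# A rotation of ~3 deg selects ~61.5 nm (13th harmonic of 800 nm) in first
# order; at the same angle the second order passes ~30.8 nm, illustrating
# the order overlap discussed above.
print(selected_wavelength(2.98), selected_wavelength(2.98, m=2))
```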
4.1.3 Detection
High harmonics were detected with a single-channel electron multiplier (CEM 4839 by Photonis), channeltron or CEM for short, which was attached to the output of the EUV monochromator. Differential pumping with two turbo pumps was applied to reach the pressure level ($\approx 10^{-5}$ mbar) required for safe channeltron operation and suppression of the background signal. One pump (HiPace 300 by Pfeiffer) was attached to the EUV monochromator and the second (TMH 071 by Pfeiffer) to the small chamber between the exit slit and the channeltron. The pressure at the second turbo pump in front of the channeltron was measured by a cold-cathode pressure
gauge and remained constant at $1.3 \times 10^{-5}$ mbar even when gas was released into the gas target.
The principle of operation of a CEM is similar to that of a photomultiplier tube, except that it has a continuous channel instead of discrete dynodes. When an ion, electron or EUV photon hits the surface of the CEM, 2-3 secondary electrons are emitted. Due to the positive bias voltage applied to the CEM, the electrons are accelerated towards the collector at the end of the channel. On their way there, they hit the wall several times, producing an avalanche of electrons which can be read out as a macroscopically observable current. In our case, the photocurrent was measured indirectly as a voltage drop over a 10 MΩ resistor. The voltage drop was detected by a multimeter (Keithley 2001) connected to the PC over a GPIB interface. A LabView program was used to control the rotation of the grating and to read out the values from the multimeter in order to record the HH spectra automatically.
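The indirect current readout described above is a simple application of Ohm's law. A minimal sketch, with an illustrative voltage value; only the 10 MΩ load resistor is taken from the text.

```python
# Hedged sketch of the channeltron readout: the output current is inferred
# from the voltage drop over the 10 MOhm load resistor, I = U / R.
R_LOAD = 10e6  # ohm, from the text

def photocurrent_nA(voltage_drop_V):
    """Channeltron output current in nanoamperes for a measured voltage drop."""
    return voltage_drop_V / R_LOAD * 1e9

print(photocurrent_nA(0.5))  # → 50.0, i.e. a 0.5 V drop corresponds to 50 nA
```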
4.2 Spectrum
Besides being an important characteristic of the HH radiation, the HH spectrum can also reveal the underlying processes of HHG. We first took spectra as proof that the detected signal indeed belonged to EUV light produced via HHG. Later, the measured spectra supported the interpretation of the effect of a specific parameter on HHG when varying the generation conditions.
All spectra shown here (figures 4.3 - 4.5) were taken at conditions optimized for the generation of the $13^{th}$ harmonic in a gas cell. At the maximum gas load still allowing normal operation of the turbo pump, the following parameters were optimized: the gas-target position (relative to the focus) was found, the chirp of the driving light pulse was adjusted and (in some cases) the driving beam was apertured prior to focusing. Spectra were then recorded for different focusing, gases (argon and xenon) and both gas-target types.
In figure 4.3, five spectra of HH generated in argon are shown for the five different focusing lenses used. Note that a constant offset was added between consecutive spectra for better visibility. Several features described in 2.2.3 can be observed. For tighter focusing (shorter focal length), the laser intensity in the gas target increases and, according to the predictions of the three-step model (2.38), higher harmonic orders can be generated. The cut-off region indeed shifted from the $17^{th}$ harmonic for the $f = 1000$ mm lens to the $27^{th}$ harmonic for $f = 300$ mm. The plateau is recognizable at least in the tighter-focusing spectra, with more or less constant harmonic intensity between the $13^{th}$ and $23^{rd}$ harmonics. For lower-order harmonics, the detected power decreases with decreasing order, which is probably due to the lower diffraction efficiency of the grating, optimized for the spectral range of 10 to 50 nm [71].
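The cut-off scaling invoked here can be checked with a quick order-of-magnitude estimate. This sketch assumes an 800 nm driving wavelength (1.55 eV photon energy) and an illustrative intensity; the argon ionization potential of 15.76 eV and the standard ponderomotive-energy formula are textbook values, not data from this work.

```python
# Hedged sketch of the three-step cut-off law, E_cut = I_p + 3.17*U_p, with
# the ponderomotive energy U_p[eV] ≈ 9.33e-14 * I[W/cm^2] * (lambda[um])^2.
def cutoff_order(intensity_Wcm2, ip_eV=15.76, lam_um=0.8, photon_eV=1.55):
    """Highest harmonic order reachable at a given peak intensity (argon)."""
    up = 9.33e-14 * intensity_Wcm2 * lam_um**2
    return (ip_eV + 3.17 * up) / photon_eV

# At an illustrative 1e14 W/cm^2 the cut-off falls near the 21st-23rd harmonic.
print(round(cutoff_order(1e14)))  # → 22
```

Tightening the focus raises the intensity and pushes this estimate to higher orders, in line with the observed shift from the 17th to the 27th harmonic.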
Due to the spectrally broad HH radiation, the second diffraction order partially overlaps with the first. Additional peaks are therefore present; for example, the $11^{th}$ harmonic is surrounded by the second diffraction orders of the $21^{st}$ and $23^{rd}$ harmonics. Moreover, even the third diffraction orders of the $23^{rd}$ and $25^{th}$ harmonics can be recognized as bumps on top of the second-order $15^{th}$ and $17^{th}$ harmonic peaks, respectively.
The same set of measurements was performed with xenon; the results are shown in figure 4.4. As expected, the cut-off occurs at lower harmonic orders than in argon, because the ionization potential of xenon is smaller. For the same reason, the ionization rate of xenon atoms was higher. Increasing the intensity
Figure 4.3: HH spectra generated in a gas cell filled with argon with five different lenses. A constant offset was added between consecutive spectra for better visibility.
Figure 4.4: HH spectra generated in a gas cell filled with xenon with five different lenses. A constant offset was added between consecutive spectra for better visibility.
with tighter focusing was therefore inefficient for optimizing the 13\textsuperscript{th} harmonic yield, because of the negative effects of the generated plasma. In the course of optimization for the 13\textsuperscript{th} harmonic, the beam had to be apertured in all but the $f = 1000$ mm case. Consequently, no extension of the cut-off with tighter focusing was observed, with the 19\textsuperscript{th} harmonic being the highest generated for all five lenses.
In figure 4.5, spectra of HH generated in xenon and argon are compared directly. While higher-order harmonics can be generated in argon, HHG is twice as efficient in xenon for the 13\textsuperscript{th} harmonic, which we are most interested in. In addition, one can see a strong background signal in the xenon spectrum. It originates from stray light from the grating caused by the very intense low-order harmonics. It vanished almost completely when a thin aluminum foil was placed in front of the channeltron detector, as we will see later.
4.3 Optimization of HHG
Numerous experimental and theoretical studies of the optimization of HHG with respect to various experimental parameters have been performed in the past (see [29] for a review). However, for efficient HHG one still has to find the optimal conditions within the given experimental limits. The results presented in this chapter are not meant to systematically analyze the effects of different parameters, but rather to help identify the processes limiting HHG. As such they are not supported by simulations, but refer to similar trends reported in the literature to offer possible explanations of the observed effects.
While whole spectra were shown in the previous section, we will now limit ourselves to the power of the 13\textsuperscript{th} harmonic, as the generation of $\approx 60$ nm light is our main interest. In the experiments, the EUV monochromator was fixed at the position of the 13\textsuperscript{th} harmonic peak. We observed that the peak position might change when the position of the focusing lens was changed. In the course of the optimization process we therefore scanned across the peak and readjusted the position of the monochromator if necessary. During a single-parameter scan, all other parameters were kept at their previously determined optimal values.
Optimal parameters had to be found for each lens, gas and gas target separately. The $13^{th}$ harmonic power kept increasing with gas load, or at least saturated, up to the maximum possible gas load ($\Phi = 2 \times 10^{-1} \text{ mbar l/s}$ for xenon and $\Phi = 4 \times 10^{-1} \text{ mbar l/s}$ for argon). As expected, higher driving-laser power always increased the harmonic yield, so the rotating wheel filter was set to maximum transmittance ($P \approx 2.0 \text{ W}$). The optimal chirp of the driving light pulse was only slightly dependent on the lens position. In general, one therefore had to iteratively adjust the lens position and the size of the adjustable iris in order to optimize HHG.
4.3.1 Gas target
In figure 4.6, the $13^{th}$ harmonic yields for the two target geometries, i.e. the gas cell and the end-fire nozzle, are compared. In all cases the conversion was more efficient with the cell as the gas target. For the gas cell filled with xenon, the HHG conversion was most efficient with the $f = 750 \text{ mm}$ lens and decreased for tighter focusing, while it remained almost the same for all lenses when the end-fire nozzle was used. The opposite was true for HHG in argon: the combination of a loose focusing geometry (longer focal length) and a longer interaction length (gas cell) did not increase the harmonic yield for the gas cell and even reduced it for the end-fire nozzle.
The difference in the observed trends for the two gases can be attributed to the different degrees of ionization. For argon, ionization is not so critical, so tighter focusing still had a positive effect in terms of increased intensity. However, this was compensated by a negative effect due to less efficient phase matching, leading to a weak or almost negligible dependence of the $13^{th}$ harmonic power on the focusing geometry for HHG in the gas cell. On the other hand, the ionization degree of xenon increased with tighter focusing to such an extent that it prevented efficient HHG over longer interaction lengths (gas cell).
The effect of ionization can also be recognized in the change of the other optimal parameters for the two gases. While they were the same for $f = 1000$ mm, where the degree of ionization is negligible for both gases, they differed when shorter-focal-length lenses were used.
4.3.2 Gas target position
The phase mismatch varies along the beam axis, and thus the macroscopic response differs for different positions of the gas target relative to the beam focus. In our experiments we varied this position by moving the focusing lens on a linear translation stage. The position of the focus was not exactly known due to the self-focusing effect, but it could at least be estimated by observing the plasma pattern: when the plasma is brighter after the gas target, the beam is focused behind the target, and vice versa. In figure 4.7, the results of a scan through the focus are shown for three different cases: the blue curves mark HHG in xenon and the red one in argon. A lens with focal length $f = 300$ mm was used for all three measurements, focusing the beam to a $w_0 \approx 30 \mu m$ spot corresponding to a confocal parameter of 7 mm. From the observed plasma pattern we could determine that the gas target was in the beam focus at the lens position $22 \pm 1$ mm. By increasing the lens position, the focus was translated further along the beam axis; lens positions $> 22$ mm therefore correspond to the beam being focused behind the gas target.
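The quoted confocal parameter can be verified from the spot size with the Gaussian-beam relation $b = 2 z_R = 2\pi w_0^2/\lambda$. The 800 nm driving wavelength is our assumption (consistent with the harmonic orders discussed earlier); $w_0 = 30\,\mu$m is taken from the text.

```python
import math

# Hedged check of the quoted confocal parameter: b = 2*z_R = 2*pi*w0^2/lambda.
w0 = 30e-6    # m, beam waist from the text
lam = 800e-9  # m, assumed driving wavelength

b_mm = 2 * math.pi * w0**2 / lam * 1e3
print(round(b_mm, 1))  # → 7.1, matching the ~7 mm confocal parameter quoted
```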
When the full beam was applied (triangles), at least two maxima were observed. A similar result was obtained in numerical simulations by Salières et al. [29]. They attributed the first peak (in our case at lens positions 18 mm and 21 mm for xenon and argon, respectively) to on-axis phase matching. This agrees with the
phase mismatch calculated and shown in figure 2.6, which suggests that the optimal target position is slightly after the focus. However, an even higher harmonic yield was obtained in our measurements, as well as in the simulations, when the gas target was placed before the focus (lens positions 24 mm and 27 mm for argon and xenon, respectively). Salières et al. attributed this effect to off-axis phase matching (see [29] for an explanation). The different optimal lens positions for xenon and argon are again probably related to ionization effects.
In figure 4.7, the results of a scan with an apertured beam are also shown. While the effect of aperturing will be explained in the following section, we note here that no structure, but a very smooth curve with a single peak, was recorded. The position of the peak corresponds to the gas target being placed in the beam focus and differs from the optimal position for the full beam.
4.3.3 Apertured beam
Aperturing the driving beam prior to focusing is often applied to increase the HHG conversion efficiency [73]. Its main effects are that the transmitted energy is reduced and its distribution over the focal region changes. In addition, the Gouy phase typical of Gaussian beams is modified and varies less rapidly across the focus, affecting the phase matching. Due to the smaller beam size at the lens, both the focal spot and the Rayleigh range increase when the aperture is narrowed. This increases the focal volume and consequently the interaction volume; on the other hand, it decreases the ionization because of the reduced laser intensity in the focus. Altogether, aperturing has a complicated effect on HHG, which is an interplay between the change of focal geometry and the decrease of ionization (favoring small apertures), and the reduced harmonic dipole amplitude (favoring larger apertures) [73].
In figure 4.8, the dependence of the $13^{th}$ harmonic power on the aperture size is shown. The size of the aperture was measured indirectly by comparing the transmitted power with the power of the full beam. If the beam profile is Gaussian, the transmitted power equals
$$P_{\text{trans}}(a) = P_{\text{full}} \left(1 - e^{-\frac{2a^2}{w^2}}\right)$$
where $a$ is the aperture radius and $w$ is the radius of the beam.
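The transmitted-power relation above is easy to sketch numerically. The beam radius $w = 5$ mm below is illustrative only; the formula itself is the one given in the text.

```python
import math

# Hedged sketch of the Gaussian-beam aperture transmission,
# P_trans(a) = P_full * (1 - exp(-2 a^2 / w^2)).
def transmitted_power(a_mm, w_mm, p_full=1.0):
    """Fraction of a Gaussian beam (radius w) passing a circular aperture
    of radius a, centered on the beam axis."""
    return p_full * (1.0 - math.exp(-2.0 * a_mm**2 / w_mm**2))

# An aperture of radius equal to the beam radius passes about 86% of the power.
print(round(transmitted_power(5.0, 5.0), 3))  # → 0.865
```

Inverting this relation is how the aperture radius can be recovered from the measured transmitted power, as done in the experiment.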
The peak intensity in the focus of the apertured beam can be estimated from the transmitted power and from beam profiles measured in the focus with an attenuated beam for different aperture sizes. As the transmitted power decreases and the focal spot size increases with smaller apertures, the peak intensity scales down even faster. However, the behavior of the $13^{th}$ harmonic yield is not monotonic and shows a clear peak at $a \approx 2.3$ mm. At this aperture size the peak intensity was reduced by almost a factor of 3, while the $13^{th}$ harmonic yield improved by more than 50% compared to the full-beam values. Reducing the aperture size further, the harmonic power drops very quickly. The same behavior, with an even more significant improvement, was observed by Kazamias et al. [73].
However, aperturing did not improve HHG for all focusing geometries. In general, it was more effective for tighter focusing and for xenon as the target gas, most probably due to the reduced ionization and improved phase matching, which is harder to achieve in tight-focusing geometries.
4.3.4 Intensity dependence
The dependence of HHG on the intensity of the driving light was one of the first hints that HHG cannot be described by a perturbative theory. While the intensity of low-order harmonics (3\textsuperscript{rd} and 5\textsuperscript{th}) follows the perturbative power law $I^q$, a more pronounced and diverse behavior was observed for high-order harmonics [74]. A simple modified power law $I^p$ can be used to approximate it, neglecting the oscillations which are often present.
Several authors [74], [75], [76] performed systematic studies of the intensity dependence of HHG and observed a similar trend: the HH yield increases up to a certain intensity, at which ionization starts to significantly affect the HHG process because of the depletion of the neutral-atom population [74]. Beyond that point, the increase is slower, leading to a characteristic knee in the HH power vs. laser intensity curve. Based on the change of slope in a log-log plot, Wahlström et al. [76] identified three regions: cut-off, plateau and ionization. The HH yield varies much more rapidly with intensity in the cut-off than in the plateau regime, and saturates when ionization becomes significant. Similar behavior was observed for different noble gases, with the saturation intensity shifted to higher intensities for lighter gases due to their larger ionization potential.
In figure 4.9, the intensity dependence of the 13\textsuperscript{th} harmonic generated in xenon is shown for different focusing conditions. The intensity in the focus was varied by attenuating the beam with a continuously variable reflective filter (Thorlabs) placed in front of the adjustable aperture. The average intensity, which equals half the peak intensity for a Gaussian beam, was again estimated by measuring the transmitted power (after the filter and aperture) and the beam profile in the focus. However, during the scan the gas target was located at the optimal position for the full-power beam, which, as we have seen in chapter 4.3.2, did not always coincide with the focus.
A change of slope was observed in all cases, but it occurred at different intensities, as is clear from figure 4.9. This is probably due to a wrongly estimated average intensity, as the beam size in the gas target was not precisely known. In addition, a second knee-shaped transition was present in all cases except for the loosest focusing geometry ($f = 1000$ mm lens) at low gas flow, which suggests that it is a plasma-related feature.
Following the explanation of Wahlström et al., we can attribute the first knee to the cut-off-to-plateau transition and the second one to the increasing effect of ionization. In figure 4.10, the data sets in which the second knee is either absent ($f = 1000$ mm) or occurs at the last data points ($f = 750$ mm) are shown together with linear fits (in log-log scale) used to determine the power law. The obtained values of the coefficient $p$ in the plateau region are $3.8 \pm 0.2$, $3.3 \pm 0.9$ and $4.0 \pm 0.5$ for $f = 1000$ mm and for $f = 750$ mm with the full and the apertured beam, respectively. The harmonic yield varies much more rapidly in the cut-off region, but due to the small number of data points the fitted parameter has a large uncertainty ($p = 31 \pm 16$). For comparison, an intensity scan with the end-fire nozzle and the $f = 300$ mm lens is also shown; note that the gas density in the target could be significantly different due to the different target geometry. There, the second knee occurs right after the first one, after which the harmonic power increases much more slowly than in the other cases ($p = 0.6 \pm 0.2$ only), which might be a sign of the saturation regime.
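The power-law exponents quoted above come from linear fits of log(yield) against log(intensity). A minimal sketch of such a fit, with synthetic data generated from an exact $p = 3.8$ power law (mimicking the plateau slope); the real data and their scatter are not reproduced here.

```python
import math

# Hedged sketch of the power-law fit I_q ∝ I^p: a linear least-squares fit
# of log(yield) vs log(intensity), whose slope is the exponent p.
def fit_power_law(intensities, yields):
    xs = [math.log(i) for i in intensities]
    ys = [math.log(y) for y in yields]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope of the ordinary least-squares line in log-log space
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

I = [1.0, 1.5, 2.0, 3.0, 4.0]  # arbitrary intensity units
Y = [i ** 3.8 for i in I]      # synthetic yields following an exact power law
print(round(fit_power_law(I, Y), 2))  # → 3.8, the exponent is recovered
```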
4.3.5 Gas pressure in the target
Another parameter which can be tuned to optimize HHG is the pressure, and thus the number of atoms, in the gas target. When phase matching is fulfilled, the harmonic yield increases with the square of the pressure [34]. However, with varying pressure the phase-matching conditions also change.

Figure 4.10: Power-law determination of the intensity dependence of the $13^{th}$ harmonic yield for HHG in xenon for different focusing and target geometries.

Several authors reported pressure-induced phase matching, where at a certain atomic density in the target the atomic dispersion can cancel out the geometrical phase [34], [77], [78]. A distinct peak at this optimal atomic density, and a drop in the detected harmonic signal upon further increasing the pressure in the gas target, were observed. The pressure required to achieve phase matching is much higher for tight focusing than for loose focusing geometries [78], and often cannot be reached due to the limited pumping speed of the vacuum pumps [34]. In that case only a saturation can be observed, as a result of the interplay between the increasing number of interacting atoms (potential emitters as well as absorbers of harmonic radiation) and non-ideal phase matching.
In our experiment, we changed the pressure in the gas target by varying the gas flow $\Phi$ with the regulating valve in integer steps on a logarithmic scale; when a non-integer value was entered into the controller, the valve behaved irregularly. The highest possible gas flow was limited by the turbo pump. The gas cell and the end-fire nozzle had relatively large openings, so despite the high gas flow the pressure remained too low to achieve phase matching and only saturation was observed, as shown in figure 4.11. However, in the later measurements of the absolute power of the HH, the gas cell was replaced by a shorter one with almost five times smaller openings. There, the optimal pressure could be reached, followed by a decline of the HH signal with increasing gas flow.
Figure 4.11: Gas-flow dependence of $13^{th}$ harmonic yield for HHG in xenon for different focusing and target geometries.
Chapter 5
Absolute power measurement
The next step in the characterization of the HH radiation was the measurement of its absolute power. While power measurements are usually straightforward in the visible or IR, the EUV spectral range makes this experiment far more complicated, and it thus deserves its own chapter.
5.1 Experimental setup
The experimental setup for the absolute power measurement is shown schematically in figure 5.1. The generation part remained almost the same as in the previous chapter, with two small modifications: the regulating valve used to control the gas flow was removed, and three new gas cells were used as gas targets.
The pressure in the gas target could then be varied continuously by adjusting the backing pressure at the bottle. The gas flow could still be compared to previous measurements indirectly by measuring the pressure in the chamber. However, the conditions were not exactly the same, because a long bellows and a second chamber now replaced the EUV monochromator with its entrance slit and the large turbo pump attached to it.
The first measurement of the absolute power of the HHs at the optimal conditions for the (old) gas cell (from now on labeled no. 1) revealed that the HH power was much smaller than expected. We suspected that the rather long gas cell no. 1 with its large openings was the problem. Therefore, two shorter gas cells ($L = 1$ mm and $L = 2$ mm) with smaller openings ($D = 0.3$ mm and $D = 0.2$ mm) were made, labeled no. 2 and no. 3, respectively. Due to the smaller openings, a part of the beam was clipped. The openings became larger in the course of operation because of laser ablation, but remained below 0.4 mm. The third new gas cell had an opening of intermediate size, $D = 0.5$ mm, and an adjustable length, which was fixed at the shortest possible value, $L = 2.7$ mm, during this measurement.
To measure the HH power, the strong IR light first had to be attenuated by several orders of magnitude. Two thin aluminum foils by Lebow, 100 nm and 200 nm thick, were placed in the beam. They served as a low-pass filter transmitting wavelengths below 80 nm (see figure 5.2). Two filters are required instead of a single thicker one, because the thin foils used in our experiments often have some micro-pores; with two filters, the probability of two pores coinciding is significantly reduced. The first Al filter was placed at the entry of the second vacuum chamber, $\approx 1.2$ m away from the IR beam focus, to avoid damage to the filter by the intense IR beam. The second Al filter was attached to the mount holding a calibrated IRD AXUV-100 photodiode, to prevent scattered IR light from hitting the diode. However, the diode mount could not be completely closed, since otherwise the Al filter would break due to the pressure difference between the vacuum chamber and the diode mount. A background signal due to the IR light could be measured when no gas was applied to the gas cell.
The photodiode was connected with metal wires to the connection port on the vacuum chamber. The generated photocurrent was detected by a multimeter as a voltage drop over a 1 MΩ resistor. Another possible contribution to the background signal was from ionized xenon atoms hitting the diode connection wires. To determine it, a mechanical shutter blocking the EUV light was installed in front of the diode mount. A vacuum turbo pump was attached to the second chamber to lower the pressure and thus reduce the absorption in the residual gas.
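The conversion from the measured voltage drop to the EUV power on the diode is a one-line calculation. A minimal sketch, where the responsivity value is illustrative rather than the calibrated one:

```python
# Convert the multimeter reading (voltage drop over the 1 MOhm resistor)
# into photocurrent and then into EUV power on the diode.
# The responsivity is an assumed, illustrative value (the AXUV-100 lies
# in the 0.2-0.3 A/W range in the EUV).

R_LOAD = 1e6          # load resistor [Ohm]
RESPONSIVITY = 0.25   # photodiode responsivity [A/W], assumed value

def detected_power(voltage, background_voltage=0.0):
    """EUV power on the diode [W] from the measured voltage drop [V]."""
    current = (voltage - background_voltage) / R_LOAD   # photocurrent [A]
    return current / RESPONSIVITY

# Example: a reading of 0.5 mV above background corresponds to 0.5 nA of
# photocurrent, i.e. 2 nW of detected EUV power.
p = detected_power(0.5e-3)
```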
### 5.1.1 Calibration of thin aluminum filters
An absolute power measurement requires all optical elements in the beam path to be calibrated. While the IRD AXUV-100 photodiode was already precalibrated by the manufacturer, the transmission of the thin aluminum filters had to be determined. In order to do that, we added a movable filter mount in front of the channeltron in the setup for the spectrum measurement (4.1) and recorded spectra of HHs generated in xenon with and without the filter in the beam path.
In figure 5.2, the filtered and unfiltered spectra are shown together with the theoretically predicted and experimentally determined transmission. The aluminum filters act as a low-pass filter, completely blocking wavelengths above 80 nm, e.g. the 9\textsuperscript{th} harmonic at 90 nm, while only partially attenuating higher orders. Note that the two transmitted peaks on both sides of the 9\textsuperscript{th} harmonic correspond to the 2\textsuperscript{nd} diffractive order of the 17\textsuperscript{th} and 19\textsuperscript{th} harmonics. Another effect of the filter on the measured spectra was an almost complete removal of the baseline, or background, of the spectrum.
Figure 5.2: Determination of the transmission of a $0.2\,\mu m$ thin aluminum filter: spectra of HH light generated in xenon were recorded with (red) and without (blue) the filter. The ratio of single peak values is used as a measure of transmission and was determined for data with (squares) and without (triangles) the background signal. For comparison, the transmission of a $0.2\,\mu m$ aluminum filter (full, grey) and of a partially oxidized one, i.e. with two 4 nm layers of aluminum oxide (dashed, grey), are shown. Both curves were calculated using the CXRO data base [50].
A possible explanation is that a major part of the background signal originates from scattered low-order ($3^{rd}$ and $5^{th}$) harmonics, which were blocked by the aluminum filter.
The transmission of the aluminum filter was determined as the ratio of the filtered and unfiltered spectra at the peak values of each harmonic (from the $9^{th}$ to the $19^{th}$) and is shown in figure 5.2 for two cases. In the first case (squares), raw data were considered, while in the second case (triangles) the background in the unfiltered spectra was subtracted as the best fit to the spectrum’s baseline. However, for some harmonics this approach led to the physically incorrect result of a transmission larger than 1 and was thus omitted. A more detailed study of the spectrum’s background, namely of its impact on the shape of the spectrum and of its origin, would be required. The theoretically calculated transmission of a 200 nm thick aluminum filter, obtained using the CXRO online data base [50], is also plotted (black, full) for comparison. The shape of the measured transmission in general resembles the theoretical one, but has its knee shifted to shorter wavelengths. The data point at the highest detected harmonic order also deviates significantly from the expected value, probably because of the very weak signal compared to the background. Another theoretical curve, for a partially oxidized aluminum foil, is shown as well. Pure aluminum is known to oxidize immediately when exposed to air [79]: a thin ($\approx 4$ nm) layer of oxide is formed, preventing further oxidation. As our filters were stored under atmospheric conditions, it is reasonable to assume that they had oxidized. An 8 nm (two times 4 nm) layer of Al$_2$O$_3$ was added to 192 nm of aluminum to calculate the theoretical transmission of the oxidized foil. This curve matches the experimentally evaluated transmission better and was thus used for the determination of the absolute power, as will be explained in the following section.
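The transmission evaluation described above, including the unphysical-ratio check, can be sketched as follows; the peak values and baseline here are made-up numbers for illustration, not measured data:

```python
import numpy as np

# Per-harmonic peak values from the filtered and unfiltered spectra
# (illustrative numbers, not the measured data).
orders     = np.array([9, 11, 13, 15, 17, 19])
unfiltered = np.array([1.0, 4.0, 9.0, 6.0, 2.0, 0.3])   # a.u.
filtered   = np.array([0.0, 0.6, 2.3, 1.9, 0.7, 0.12])  # a.u.
baseline   = 0.2  # fitted background of the unfiltered spectrum, a.u.

def transmission(filtered, unfiltered, background=0.0):
    """Filter transmission per harmonic as the ratio of peak values."""
    t = filtered / (unfiltered - background)
    # Ratios above 1 are unphysical and flag a bad background estimate.
    return np.where(t <= 1.0, t, np.nan)

t_raw = transmission(filtered, unfiltered)            # squares in fig. 5.2
t_sub = transmission(filtered, unfiltered, baseline)  # triangles in fig. 5.2
```

With these numbers, the background-subtracted ratio of the weakest harmonic exceeds 1 and is discarded, reproducing the problem described in the text.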
To measure the transmittance at shorter wavelengths, argon was used as the target gas, enabling the generation of higher orders. While this measurement confirmed that the discrepancy between the theoretically predicted and measured transmission at the 19\textsuperscript{th} harmonic was due to the weak signal, it also revealed a larger problem. The transmission, or more precisely the ratio between the filtered and unfiltered harmonic signal, turned out to depend on the generation conditions. For example, the ratio at the 13\textsuperscript{th} harmonic was less than half as large in the case of argon. A further systematic investigation showed that the measured ratio varied both with the generation conditions (the intensity of the HHs was changed by altering the pressure in the gas target) and with the detection conditions (the bias voltage applied to the channeltron). The most plausible explanation of these effects is an irregular behavior of the detector: it could have saturated due to the dense bursts of radiation despite their low average power.
### 5.2 Experimental results
In the first part of this section, the slightly modified HHG conditions are presented together with the “raw” results of the absolute power measurement. In the second part, the same results are combined with the spectrum and filter transmission data in order to estimate the generated power of each harmonic and to derive the conversion efficiency into a single harmonic order. Even though the out-coupled EUV power is what really matters for potential applications, a careful identification and analysis of the limiting processes can be of great help when redesigning the experimental setup.
#### 5.2.1 Detected EUV power
For the first measurement of absolute power with the gas cell no.1 all experimental parameters were set to the values for optimal generation of 13\textsuperscript{th} harmonic as explained in chapter 4. A $f = 750$ mm lens was used and the geometry of the setup, i.e. position of filters and detector, was adjusted to corresponding beam divergence. The detected power of less than 2 nW was well below the expected value on the order of 100 nW.
As all parameters, e.g. the lens position, the gas flow, etc., were confirmed to be optimal also by the power measurement, the only way to increase the power of the HHs was to adjust the experimental setup. The easiest, but as it turned out also quite effective, solution was to modify the gas target. Two shorter gas cells with smaller openings were machined and tested. Because of the smaller openings, a higher pressure in the gas cell could be reached at the same gas flow compared to the previous gas cell. Combined with the shorter target length, a different interplay between phase matching, re-absorption and dipole density could be established\footnote{Unfortunately, due to the lack of systematic measurements of the dependence of HHG on the different experimental parameters with the new gas cells (no.2, no.3), it is hard to give a more detailed explanation of this behavior or of the large improvement in the efficiency of HHG.}.
A different dependence of the HH power on the gas flow was indeed observed. It had a clear maximum at the optimal flow (now varied by changing the backing pressure at the gas bottle) and dropped to almost zero when the backing pressure was increased further. Aperturing had barely any (positive) effect, while HHG turned out to be very sensitive to chirp adjustments of the driving pulse. The biggest improvement was achieved by tighter focusing. Three lenses with shorter focal lengths (300, 400 and 500 mm) were tested with both small-opening gas cells, and the results are shown in figure 5.3. Contrary to the measurements with gas cell no.1 presented in figure 4.6, where the $13^{th}$ harmonic was most efficiently generated with the $f = 750$ mm lens, the $f = 400$ mm lens was the optimal choice for both gas cell no.2 and no.3, with a factor of $\approx 20$ improvement.
#### 5.2.2 Estimation of generated power and conversion efficiency
We consider here the generated power as the power of the HHs immediately after the gas target. On the way to the detector, the HH beam was attenuated by absorption in residual xenon (in the vacuum chambers and the connecting tube) and in the two aluminum filters used to block the IR light. To extract the generated power $P_{g,q}$ of the $q$-th harmonic order, one also has to take into account the spectral dependence of the absorption, as well as the detector’s response, and finally combine all this with the previously measured HH spectrum.
Photocurrent generated in the photodiode by HH radiation can be written as a sum over single harmonic order contributions
$$I_{ph} = \sum_q \xi_q \gamma_q P_g$$
(5.1)
with $P_g$ the generated power of all harmonics, $\gamma_q$ the relative power of a single harmonic extracted from the measured HH spectrum, and $\xi_q = \eta_q T_q^{\text{Al}}T_q^{\text{Xe}}$ a harmonic-dependent factor including the responsivity of the photodiode $\eta_q$ and the transmission through the aluminum filters and the residual xenon.
The responsivity of the photodiode was specified by the manufacturer and was relatively constant (0.23-0.26 A/W) within the spectral range of interest. The measured values of the transmission of the aluminum filters seemed very unreliable, therefore the filters were modeled using the CXRO online data base instead. A 4 nm oxide layer was added on both sides of the two aluminum filters, resulting in a total of 300 nm of aluminum and 16 nm of $\text{Al}_2\text{O}_3$. For the xenon transmission, the same data base was used. As different pressure levels were measured in each of the vacuum chambers, a linear variation of the pressure between the gas cell and the detector was assumed. All calculated transmission curves are shown in figure 5.4.
An even larger compromise had to be made when determining the $\gamma_q$ coefficients. As we introduced the new gas cells only after the EUV monochromator had been removed from the experimental setup, we did not have any measured spectrum at the new generation conditions. Therefore, I decided to use the spectra recorded for HHG with the end-fire nozzle, because the latter was more similar to gas cells no.2 and no.3 in terms of length. Consequently, the re-absorption of HHs within the gas target should be comparable, while the effects of phase matching should not have a significant impact on the spectral shape. The $\gamma_q$ values were taken as single peak powers normalized to the sum of the peak powers of all contributing harmonic orders, i.e. the ones not completely blocked by the aluminum filters.
Furthermore, the recorded spectra do not show the real spectral distribution of the generated harmonics. The latter is transformed by the spectrally dependent grating efficiency and the xenon transmission on the way from the gas cell to the channeltron. The grating was designed for operation in the spectral range between 10 and 50 nm and its efficiency decreases with increasing wavelength. The HH beam also propagates through a region of increased pressure, such that absorption in xenon modifies the shape of the spectrum. Both effects were taken into account in the estimation of the generated power of a single harmonic by modifying the $\gamma_q$ factors.
When all parameters $\xi_q$ and $\gamma_q$ were “known”, the generated power of $q$-th harmonic order could be extracted from measured current as
$$P_{g,q} = \gamma_q P_g = \frac{\gamma_q I_{\text{ph}}}{\sum_{q'} \xi_{q'} \gamma_{q'}}.$$
(5.2)
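Equation (5.2) is straightforward to evaluate once the $\xi_q$ and $\gamma_q$ factors are tabulated. A sketch with hypothetical values (the real ones come from the CXRO model, the diode datasheet and the measured spectrum):

```python
import numpy as np

# Hypothetical per-harmonic factors for three harmonic orders (q = 11, 13, 15);
# they stand in for the modeled filter/xenon transmission, the diode
# responsivity and the measured spectral weights.
eta   = np.array([0.24, 0.25, 0.26])   # diode responsivity [A/W]
t_al  = np.array([0.02, 0.05, 0.08])   # two Al filters incl. oxide layers
t_xe  = np.array([0.5, 0.6, 0.7])      # residual-xenon transmission
gamma = np.array([0.2, 0.5, 0.3])      # relative spectral weights, sum to 1

xi = eta * t_al * t_xe                 # harmonic-dependent factor in eq. (5.1)

def generated_power(i_ph):
    """Generated power per harmonic P_gq from the photocurrent, eq. (5.2)."""
    p_total = i_ph / np.sum(xi * gamma)   # total generated power P_g
    return gamma * p_total

p_gq = generated_power(1e-9)   # e.g. a 1 nA photocurrent
```

Summing $\xi_q P_{g,q}$ over all harmonics reproduces the measured photocurrent, which is a useful consistency check on the implementation.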
The results for modified and unmodified spectrum are shown in figure 5.5 for most
Figure 5.5: Estimated generated power and conversion efficiency for single harmonic transmitted through aluminum filters for $f = 750$ mm (green) and $f = 400$ mm lens (evaluation using measured (red) and modified (blue) spectrum - see text for explanation).
efficient HHG using the $f = 400$ mm lens, and also with the $f = 750$ mm lens for comparison. The estimated generated power of the 13\textsuperscript{th} harmonic is $102 \pm 75$ nW, corresponding to a conversion efficiency of $(5.4 \pm 3.9) \times 10^{-8}$. While this is more than 20 times higher than the detected power, it is still very inefficient compared to the conversion efficiencies achieved by other groups (see table 2.2).
The relatively large error bars in figure 5.5 are mainly due to the many assumptions we had to make to estimate the generated power of a single harmonic order; the uncertainty of the absolute power measurement itself (figure 5.3) contributed only a small part. To derive the widths of the margins, upper bounds on the possible deviations of the thickness of the aluminum filters (10% or 30 nm) and of a single oxide layer (20% or 1 nm) were assumed. The pressure in the first $\approx 10$ centimeters after the gas cell could also be very different (higher), since the gas jet was emitted from the cell in the direction of the propagating beam, while the pressure was measured on the other side of the vacuum chamber. All these deviations were taken into account to calculate a new, extreme value of $P_{g,q}$; the difference between this and the original value was taken as the absolute error.
Following the analysis presented above, the available power of our EUV light source can be increased by reducing the losses due to the aluminum filters and the residual xenon. A single thicker aluminum filter with fewer micro-pores could suffice to block the fundamental beam; this way, the number of the more strongly absorbing oxide layers would be halved. A differential pumping scheme could also be introduced, shortening the beam path in the region of increased pressure where the gas is injected from the pressurized gas cell into the vacuum chamber. On the other hand, a probably even more severe limitation is set by our laser system, which delivers longer and less energetic pulses than those used in the experiments listed in table 2.2.
Chapter 6
Towards the focusing of high harmonics
The final experimental task of this work was to test the focusability of high harmonics. Our idea was to first try focusing the driving beam, fully characterize it, and thus confirm the capability of our setup for beam profile measurements of micrometer beam spots. After that, we would repeat the measurements with the HH beam, assuming that it propagates collinearly with the driving beam. However, the experiments with the driving beam already revealed limitations of our setup. Those were later confirmed by simulations, which on the other hand gave promising predictions for the HH beam. Yet, an attempt to focus the HHs was not successful. The setup, the simulations and the experimental results for both the driving and the HH beam are described in this chapter.
6.1 Experimental setup and methods
The experimental setups for focusing and characterizing the focus of the fundamental and the HH beam are shown in figures 6.1a and 6.1b, respectively. While two thirds of the setup, i.e. the off-axis parabolic mirror (OPM) as the focusing element and the knife-edge scan setup as the characterization tool, were the same in both cases, the experiments with EUV light required a different detector and vacuum. The HHG part of the setup is not shown in figure 6.1, as it remained the same as in the absolute power measurement (figure 5.1).
6.1.1 Off-axis parabolic mirror and its alignment
A bare-gold-coated OPM by Kugler with an effective focal length $f = 50.8$ mm and an off-axis angle of $90^\circ$ was used for focusing. The manufacturer stated the mirror’s surface accuracy and surface roughness to be below 1 $\mu$m and 6 nm, respectively. However, the mirror had been part of another setup with a powerful IR laser prior to our experiment. During that time it was slightly damaged, with some defects noticeable already by visual inspection. The OPM was screwed onto a custom-made mirror mount of square shape with a polished surface and precisely drilled holes to fit the symmetry defined by the mirror geometry.
An OPM is capable of diffraction-limited focusing of a collimated beam. At the same time, it folds the beam away from the incoming beam axis. However,
Figure 6.1: Experimental setup for focusing and characterizing the focus of the 6.1a driving IR and 6.1b high harmonics beam. Most of the setup is the same in both cases: an off-axis parabolic mirror (OPM) focuses the light, which can be blocked by a knife on a piezo-driven movable stage (XZ). To determine the position of the stage, a Michelson interferometer was built with a helium-neon laser (HeNe) and a mirror mounted on the same stage as the knife. To detect the transmitted light, a photodiode (PD) was used for the IR beam and a channeltron for the HHs, where the driving beam first had to be blocked by aluminum filters (AF).
it can be very difficult to align it in order to achieve ideal imaging. For the alignment, we used a strongly attenuated and collimated driving IR beam, prepared by employing a set of Thorlabs NIR reflective and absorptive filters and by removing the lens used for HHG. Next, we made use of a known trick for aligning an OPM: the light back-reflected from the mirror mount must be overlapped with the incoming beam in order to match the beam and mirror axes. To do that, an iris was placed 2.5 m before the OPM (mount), and the light back-reflected from the mirror mount had to pass through the same iris. The rotation angle around the mirror axis was adjusted with the help of the square-shaped mount, such that the beam would remain in the same horizontal plane after reflection.
Only then was the OPM actually attached to the mount. An almost perfectly round beam profile was recorded in the focus with a CCD camera, confirming that this was already the optimal alignment. When moving away from this point, astigmatism was introduced immediately.
After that, the $f = 750$ mm lens used for HHG was put back into the beam path. Due to the now divergent beam, we could not apply the same trick with the back-reflected beam, crucial for the alignment of the OPM. Instead, we had to make sure that we did not change the beam direction by adding the lens to the beam path. At the same time, we had to tilt the lens in order to optimize the focus in the gas target, as described in chapter 3.3. It required several iterations, but finally, with a pair of irises and a CCD camera, we were able to confirm that the setup remained aligned. Surprisingly, a distorted beam profile was recorded in the OPM focus. However, simulations (see section 6.2) revealed that it was due to the divergent beam and not to misalignment.
6.1.2 Knife-edge scan
The EUV focal spot size was expected to be on the order of 1 µm. The simplest suitable method available at the time to characterize such a tightly focused beam was a knife-edge scan. In this method, a “knife” is moved through the beam perpendicular to the beam axis. The transmitted power is measured at different knife positions and equals the beam profile integrated over one dimension. The scan can be repeated in different directions; however, it is usually done in two perpendicular directions only. Assuming a specific beam shape, an appropriate curve can be fitted to the data to obtain the beam parameters.
In our case, the knife was a medical razor mounted on a motorized translation stage. It consisted of two Newport triple-divide linear translation stages that could be moved in the \( x \) and \( z \) directions, perpendicular and parallel to the beam axis, respectively. Both had a free travel range of almost 13 mm and could be controlled via a picomotor controller and a joystick from outside the chamber. Additionally, there was a small Thorlabs single-axis translation stage with a 25 µm piezo-actuator on top. It could be moved in the \( x \) direction and was driven by directly applying a triangular voltage signal of \( \approx 10 - 60 \) V from the signal generator. The adapter holding the razor was attached to it. Unfortunately, the vacuum chamber was too low for another translation stage, which could have moved the knife in the \( y \) direction, to fit in. Consequently, only a one-dimensional knife-edge scan could be recorded.
The displacement curves of piezo actuators often show a strong hysteresis and a non-linear response to the applied voltage, so we opted for an on-line calibration of the 25 µm stage with a Michelson interferometer. A helium-neon laser and a photodiode detecting the interferometer signal were located outside, while the mirrors and the beam splitter were placed inside the chamber, with the end mirror of one interferometer arm mounted on the same adapter as the razor.
The driving voltage for the piezo actuator and the interferometric signal were brought to two channels of an oscilloscope, recorded, and analyzed in the stage calibration procedure presented in figure 6.2. First, low-frequency noise was removed from the interferometric trace via Fourier-transform filtering. Next, the sinusoidal signal was shifted by its average value to be centered around zero, and the zero crossings were determined. Each zero crossing was related to a stage position \( x_n \) at which the interference condition
\[
k_{\text{HeNe}} 2x_n = n\pi, \quad n = 1, 2, 3, \ldots
\]
(6.1)
is fulfilled, where \( k_{\text{HeNe}} = \frac{2\pi}{\lambda_{\text{HeNe}}} \) and \( \lambda_{\text{HeNe}} \) is the wavelength of the helium-neon laser (632.8 nm). Therefore, two consecutive zero crossings belong to stage positions a quarter of the wavelength apart. Finally, the corresponding voltage was assigned to each zero crossing to obtain the “displacement vs. voltage” calibration curve.
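The calibration procedure can be sketched as follows, here applied to a synthetic trace instead of the recorded oscilloscope data (and omitting the Fourier filtering step):

```python
import numpy as np

LAMBDA_HENE = 632.8e-9  # HeNe wavelength [m]

def calibrate(voltage, signal):
    """Displacement-vs-voltage curve from a synchronously recorded
    interferometric signal and piezo drive voltage."""
    s = signal - np.mean(signal)                  # center the trace around zero
    idx = np.where(np.diff(np.sign(s)) != 0)[0]   # samples where the sign flips
    # Consecutive zero crossings are lambda/4 apart in stage position.
    positions = np.arange(idx.size) * LAMBDA_HENE / 4
    return voltage[idx], positions

# Synthetic data: a linearly moving stage gives a sinusoidal interference
# signal I ~ cos(2 k x); 2 um of travel over a 10-60 V voltage ramp.
x = np.linspace(0.0, 2e-6, 2000)                  # stage position [m]
drive = np.linspace(10.0, 60.0, 2000)             # drive voltage [V]
signal = np.cos(2 * np.pi / LAMBDA_HENE * 2 * x)
v_cal, x_cal = calibrate(drive, signal)
```

For a real trace the voltage-position pairs would then be interpolated to convert any recorded drive voltage into a knife position.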
To test the knife-edge scan setup, a strongly attenuated fundamental beam was used. The part of the beam not blocked by the razor was detected with a photodiode. As sketched in figure 6.1a, the beam was very divergent after the focus. It was actually far too large to fit on the photodiode located outside the chamber, so it first had to be re-collimated and then focused by a pair of lenses.
Figure 6.2: Calibration of the movable stage with the knife using Michelson interferometer: interferometric signal (upper left) and voltage applied to piezo actuators (lower left) were recorded synchronously. Combining the zero-crossings of interferometric signal (upper right) and voltage data, a calibration curve - stage position vs. voltage (lower right) was obtained.
6.1.3 High harmonics related specifics in the setup
After the setup had been tested and aligned with the driving beam, a detector for the EUV was installed and the vacuum chambers were closed and evacuated. To generate HHs, the driving beam was focused with the $f = 750$ mm lens into gas cell no.2 filled with xenon. The optimization of the HHG process followed the standard procedure of varying the lens position and adjusting the chirp, the gas flow and the aperture.
At the entrance to the second chamber, the driving beam was blocked with a 200 nm thick aluminum filter so that the razor would not be burnt by the intense IR beam. The transmitted harmonics were focused with the OPM and detected with a Photonis 4751G channeltron. It had a 12 mm wide opening window in the $x$ direction, which should ensure that the whole EUV beam fits on the detector according to the beam propagation simulations. Bias voltages of $-1.7$ to $-2$ kV were applied to the channeltron. The generated photocurrent was detected as a voltage drop over a 1 M$\Omega$ resistor and read out by a Keysight multimeter.
The critical point regarding the vacuum was the channeltron. It had to be operated at pressures below $2 \times 10^{-4}$ mbar in order to avoid over-shooting. The pressure increased in both chambers during HHG because of the released gas. We first tried to separate the chambers with an aluminum filter mounted in the gate valve; however, we still could not reach the specified pressure. A second aluminum filter had to be installed in front of the channeltron to separate it from the chamber. A Pfeiffer TMH071 turbo pump (TP3 in fig. 6.1) was additionally attached between the filter and the channeltron, and a pressure down to $2 \times 10^{-5}$ mbar could be reached.
6.2 Simulations
For a first estimation of the OPM’s focal spot size, the driving and HH beams were propagated using the simple 1D Gaussian beam propagation formalism described in chapter 2.1.3. Such an approach completely neglects any aberrations of the focusing optics and cannot predict their effects on the beam shape. The OPM was represented by the ABCD matrix of a thin lens with a focal length $f = 50.8$ mm. The input for the fundamental beam was an ideal Gaussian beam ($M^2 = 1$) with a waist $w_{IR} = 63\ \mu m$ measured at the position of the gas cell, 1.4 m away from the OPM. The corresponding beam divergence (not measured) was $\theta_{IR} = 3.4$ mrad. For the HH beam, only the 13\textsuperscript{th} harmonic was considered. We assumed that HHG follows the power law (2.44) for the intensity dependence of HHG with a measured exponent of $p \approx 4$. Thus the beam waist of the 13\textsuperscript{th} harmonic $w_{13}$ should be half of $w_{IR}$, and its beam divergence $\theta_{13}$ 6.5 times smaller than $\theta_{IR}$. The results are summarized in table 6.1. The error is due to the uncertainty of the input parameters for the HH beam.
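The thin-lens ABCD step can be sketched with the complex q-parameter formalism. A minimal sketch, assuming an 800 nm driving wavelength (a Ti:sapphire-class driver; the exact wavelength is not restated here):

```python
import numpy as np

WL = 800e-9  # assumed driving wavelength [m]

def waist_after_lens(w0, d, f, wl=WL):
    """New focus waist for a Gaussian beam with waist w0 located a distance d
    before a thin lens of focal length f (complex q-parameter formalism).
    Returns (waist at new focus [m], distance of new focus behind lens [m])."""
    zr = np.pi * w0**2 / wl        # Rayleigh range at the input waist
    q = d + 1j * zr                # q-parameter at the lens plane
    q = 1 / (1 / q - 1 / f)        # thin-lens ABCD transformation
    zr_new = q.imag                # Rayleigh range of the outgoing beam
    return np.sqrt(wl * zr_new / np.pi), -q.real

# Non-collimated IR beam: waist of 63 um at the gas cell, 1.4 m before the OPM
# modeled as a thin lens with f = 50.8 mm.
w_foc, z_foc = waist_after_lens(63e-6, 1.4, 50.8e-3)
# w_foc comes out near 2.4 um, in line with the ABCD entry in table 6.1.
```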
The measurements of the beam profile around the OPM focus of the non-collimated\footnote{When we refer to the non-collimated beam, we have in mind the driving IR beam, which was first focused into the gas cell for HHG with the $f = 750$ mm lens and then refocused with the OPM. For the collimated beam, the lens was removed from the beam path.} driving beam revealed distorted beam shapes (see figure 6.3c). To find out whether this was due to misalignment or to the divergent beam, we performed ray-tracing and physical-optics-propagation (POP) simulations using a trial version of Zemax. In both cases the OPM was fully described by its parabolic surface, so that all potential effects of aberrations could be accounted for.
In ray-tracing simulations, a bundle of rays is propagated through the optical system, where it gets reflected or refracted at the optical surfaces in the beam path. The divergence and diameter of the bundle of rays can be varied by adjusting the diameter or the position of a circular aperture relative to the point source. These values were chosen to mimic the non-collimated IR beam. The simulated images around the OPM focus in figure 6.3a clearly resemble the measured beam profiles. On the other hand, when we set the point source far away from the aperture in order to mimic a collimated beam, the rays met in a single point, as expected. This was evidence that the setup was indeed aligned and that the distortions in the beam profiles originated from focusing a divergent beam with an OPM.
However, ray-tracing simulations do not include the effects of diffraction and are consequently limited in the quantitative evaluation of the beam profile. In contrast, the POP simulation uses diffraction calculations, i.e. the Fresnel diffraction equation, to propagate a wavefront through an optical system surface by surface. While Zemax allows an arbitrary input beam, we selected a TEM$_{0,0}$ mode with different parameters for each situation, i.e. the collimated and uncollimated IR beam, and a 13\textsuperscript{th} harmonic beam with the same parameters as before. The values of the beam radius in the focus are listed in table 6.1. The distortions due to the divergent beam could again be reproduced and are shown in figure 6.3b. They were also present, but to a much smaller extent, in the less divergent 13\textsuperscript{th} harmonic beam ($\theta_{13} = \theta_{IR}/6.5$).
Table 6.1: Results of simulations (Gaussian beam propagation and Zemax’s Physical optics propagation) and measurements (beam profile and knife-edge scan) of beam radius in OPM focus. All values are in $\mu$m. Collimated IR beam represents the situation without and non-collimated with the $f = 750$ mm lens in the beam path.
| | collimated IR | non-collimated IR | $13^{th}$ harmonic |
|----------------------|---------------|-------------------|--------------------|
| ABCD | 3.8 | 2.4 | 1.2 ± 0.3 |
| POP (Zemax) | 6.7 ± 0.7 | * | 1.3 ± 0.4 |
| BP measurement | 10 ± 4 | ** | / |
| KE measurement | 6.1 ± 0.1 | / | / |
* distorted beam (fig. 6.3b), estimation for $w_x = 8 \pm 2 \mu$m
** distorted beam (fig. 6.3c), estimation for $w_x = 7 \pm 4 \mu$m
Figure 6.3: Three characteristic distorted beam profiles at different positions along beam path after divergent IR beam was focused by OPM: pictures were obtained by 6.3a ray tracing and 6.3b beam propagation simulation (both using Zemax) and 6.3c measured with a CCD camera.
6.3 Experimental results
6.3.1 IR beam
The problem of focusing a non-collimated IR beam with an OPM was already discussed above. The cause of the beam distortions at the OPM focus shown in figure 6.3c was the beam divergence, as confirmed by simulations. In addition, we did a quick experimental test of the effect of beam divergence on the beam shape. An adjustable iris was placed in front of the OPM so that the beam diameter, and with it the beam divergence, could be continuously varied. As the aperture size was decreased, the distortions disappeared and the beam became round already at an aperture diameter of 3 mm, i.e. twice the estimated harmonic beam diameter at this position. Consequently, one could expect the divergence of the HH beam to be sufficiently small for focusing with the OPM.
To test the knife-edge scan setup, we performed a set of measurements with the collimated IR beam. The beam profile at the focus was first inspected with a CCD camera with a pixel size of 4.65 µm. According to the POP simulation, a diameter of 13 µm was expected, meaning that the beam would spread over only about 4 pixels in one dimension. As can be seen in figure 6.4, this was indeed the case. The recorded beam was slightly elliptic and narrower in the x direction, in which the knife-edge scan was later performed.
Nevertheless, we extracted the waist size by fitting Gaussian curves to the data, aware of the problem of the small number of data points. In addition, the signal from the pixels with the highest intensity could have been saturated. The whole situation was therefore simulated, including the dependence on the relative position of the beam with respect to the pixel grid. Based on this simulation, an error on the order of the pixel size was estimated. Finally, a beam waist at the focus of $10 \pm 4$ µm was determined from the beam profile measurement.
However, the beam size could have been determined more accurately by magnifying the focus with an objective before imaging it on the camera. Unfortunately, such a configuration could not fit into the setup.
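A simulation of the pixel-sampling problem along the lines described above might look like the following sketch; the sub-sampling scheme and all numbers are illustrative, not the original simulation:

```python
import numpy as np
from scipy.optimize import curve_fit

PIXEL = 4.65e-6  # camera pixel size [m]

def gauss(x, a, x0, w):
    """1D Gaussian beam profile with 1/e^2 radius w."""
    return a * np.exp(-2 * (x - x0) ** 2 / w ** 2)

def fitted_waist(w_true, offset):
    """Fit a Gaussian to a coarsely pixel-sampled profile; return the waist."""
    edges = np.arange(-10, 11) * PIXEL + offset    # pixel boundaries [m]
    centers = 0.5 * (edges[:-1] + edges[1:])       # pixel centers [m]
    # Average the true profile over each pixel (simple sub-sampling).
    sub = np.linspace(-0.5, 0.5, 11) * PIXEL
    counts = gauss(centers[:, None] + sub[None, :], 1.0, 0.0, w_true).mean(axis=1)
    popt, _ = curve_fit(gauss, centers, counts, p0=(1.0, 0.0, 2 * PIXEL))
    return abs(popt[2])

# Spread of the fitted waist as the beam moves across the pixel grid.
waists = [fitted_waist(6e-6, off) for off in np.linspace(0.0, PIXEL, 8)]
```

The spread of `waists` over the grid offsets, together with the pixel-averaging bias, gives an error estimate on the order of the pixel size, as stated in the text.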
Next, we performed a knife-edge (KE) scan measurement in the x direction. The voltage driving the piezo actuator with the razor and the power of the transmitted light (in the following referred to as the KE signal) were recorded at the same time. Together with

previously obtained calibration curve (figure 6.2), the voltage signal was used to determine position of the knife.
For analysis of KE signal, we assumed the beam to be Gaussian
\[ I \propto e^{-2\left(\frac{x^2}{w_x^2} + \frac{y^2}{w_y^2}\right)}, \]
(6.2)
where \( w_x \) and \( w_y \) are beam waist in \( x \) and \( y \) direction respectively. In that case, the normalized transmitted power equals
\[ P_{KE} = \frac{1}{2} \left[ 1 + \text{erf} \left( \frac{\sqrt{2}x}{w_x} \right) \right], \]
(6.3)
where \( \text{erf}(x) \) is the error function. The latter is not an elementary function and is therefore inconvenient for fitting experimental data. Instead, an analytical approximation of the error function of the form
\[ 1/(1 + \exp(p(s))), \]
(6.4)
where \( s = x/w_x \) and \( p(s) \) is a 4\(^{th}\) degree polynomial, was applied, as suggested by Khosrofian and Garetz [80].
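For illustration, the fit underlying eq. (6.3) can be reproduced on synthetic data; with `scipy.special.erf` available, the erf model can in fact be fitted directly, without the polynomial-logistic approximation (a sketch, not the thesis analysis; the waist value is taken from the text, noise level and scan range are assumed):

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

# Synthesize an erf-shaped knife-edge trace, eq. (6.3), for a known waist
# and fit it back; noise amplitude is an assumed, illustrative value.
rng = np.random.default_rng(0)
w_true = 6.1                                       # um, waist reported in the text
x = np.linspace(-15.0, 15.0, 200)                  # knife position, um
P = 0.5 * (1.0 + erf(np.sqrt(2.0) * x / w_true)) \
    + rng.normal(0.0, 0.005, x.size)

def ke_model(x, w, x0):
    # normalized transmitted power behind the knife edge
    return 0.5 * (1.0 + erf(np.sqrt(2.0) * (x - x0) / w))

popt, pcov = curve_fit(ke_model, x, P, p0=[5.0, 0.0])
print(f"fitted waist: {popt[0]:.2f} um")
```

The approximation of [80] remains useful in environments where the error function itself is awkward to evaluate inside a fit loop.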
In figure 6.5, the results of the knife-edge scan measurement are shown. The fitting function nicely overlaps with the experimental data, confirming our assumption of a Gaussian beam profile, with a beam waist of \( w_x = 6.1 \pm 0.2\,\mu m \). This value differs noticeably from the one obtained with the CCD camera, but still lies within its error bars. On the other hand, it is much closer to the value obtained by the POP simulation.
### 6.3.2 High harmonics
As the knife-edge scan gave reliable results with the collimated IR beam, in agreement with the POP simulations, we were ready to perform the measurement with high harmonics. According to the ABCD simulation, the focus of the HH beam should be roughly at the same position as that of the non-collimated IR beam. We determined this position by inspecting the knife-edge scan on the IR card. After that, the chambers were closed and evacuated.
The razor was first moved far away from the beam path so that the whole beam was transmitted. Xenon was released into the gas cell, and high harmonics were generated and detected. The HHG process was quickly optimized. The razor was then moved into the beam path and a triangular voltage signal was applied to the piezo actuator, with an amplitude corresponding to a travel range of $\approx 20 \mu m$. No change in the detected signal could be observed. Therefore, the beam was inspected by moving the translation stage with a free travel range of 12 mm. It turned out that the razor had to be moved by more than 2 mm to completely block the beam. Our first guess was that the knife was not located at the focus, so we scanned a region of 12 mm along the beam propagation axis (6 mm to each side of the initial position). We could not evaluate those scans quantitatively, because the translation stage was not calibrated. Moreover, no clear trend was observed indicating whether we were moving closer to or further away from the focus. The roughly estimated 2 mm distance required to fully block the beam with the razor could already suffice to also block the whole field of view of the channeltron. For a 2 mm wide HH beam at the mirror, the estimated distance is 4 mm. This raises the suspicion that the detected signal originates from EUV light scattered from the relatively rough mirror surface.
According to the literature reviewed in chapter 2.2.3, the position of the gas target relative to the focus in some cases has an influence on the beam profile. In figure 6.6, knife-edge scans performed for two different gas target positions are shown. The blue curve belongs to the position of optimal generation in terms of power, while the red curve corresponds to a gas target located after the focus, where the overall generated power fell to roughly half the maximum. The blue curve has a knee in the middle and a long, slowly decaying tail. The knee could be a sign of an annular beam shape. The red curve, on the other hand, lacks a knee but shows a very sharp transition, with a sudden decay of the signal immediately after the knife starts blocking the beam. This irregularity is again a clear sign of a non-Gaussian profile of the HH beam.
Chapter 6. Towards the focusing of high harmonics
Chapter 7
Conclusions and discussion
This thesis reports on the setting up of an EUV laser source based on frequency upconversion of IR light into the EUV via high-harmonic generation, on its characterization and optimization, and on our attempt to focus the generated EUV light. The driving IR laser is based on chirped pulse amplification and delivers 118 fs long light pulses with an energy of 2 mJ, centered around 810 nm. The repetition rate is 1 kHz and the beam profile is a slightly elliptic Gaussian. The pulses are not bandwidth-limited, and a residual chirp is expected which cannot be removed by the compressor in the last stage of the CPA unit. The driving beam is focused into the gas target with plano-convex lenses of different focal lengths. A weak self-focusing effect takes place in the entrance window of the vacuum chamber, raising the concern of pulse distortion due to self-phase modulation.
High harmonics were successfully generated in different gas cells filled with xenon and argon, as well as in a gas jet emitted from an end-fire nozzle. As expected, HHG is 2-3 times more efficient in xenon, while higher harmonics, up to the $27^{th}$, are generated in argon due to its larger ionization potential (compared to the $19^{th}$ for xenon). The extension of the cut-off region by increasing the intensity of the driving pulses with tighter focusing is possible for HHG in argon, but is prevented by ionization when xenon is used. Optimal generation conditions depend significantly on the gas target design. Looser focusing with the $f = 750$ mm lens is beneficial for HHG in the longer gas cell filled with xenon, while for all other target designs, i.e. the gas jet and the short gas cell, tighter focusing is preferable, without a big difference between the $f = 300$ mm, $f = 400$ mm and $f = 500$ mm lenses. For tighter focusing, the harmonic yield in xenon can be increased by up to 50% by aperturing the beam prior to focusing, thereby reducing the degree of ionization and improving phase-matching and beam shape. Because the pressure in the longer gas cell and in the gas jet is limited by the pumping speed of the vacuum turbopumps, phase-matched generation of the $13^{th}$ harmonic cannot be achieved there by adjusting the pressure in the target. On the other hand, an optimal pressure is found for the shorter gas cell and for the all-harmonic yield. Intensity scaling of the $13^{th}$ harmonic power reveals three regimes: the power increases very fast in the cut-off regime, then scales with the exponent $p = 3.7 \pm 0.7$ in the plateau regime, and after that slowly saturates. As HHG is not in the saturation regime for the loosest focusing with the $f = 1000$ mm lens, we could still benefit there from more available power and therefore higher intensities, while no significant improvement is expected in the other cases.
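The plateau-regime exponent quoted above is the slope of the yield-vs-intensity curve in log-log coordinates; a minimal sketch on synthetic, noise-free data (the exponent is the value from the text, the intensity range is arbitrary):

```python
import numpy as np

# Extracting a power-law scaling exponent: a yield Y ~ I^p appears as a
# straight line of slope p in log-log coordinates.
p_true = 3.7
I = np.linspace(1.0, 3.0, 12)              # driving intensity, arb. units
Y = I ** p_true                            # plateau-regime yield
p_fit = np.polyfit(np.log(I), np.log(Y), 1)[0]
print(f"fitted exponent p = {p_fit:.1f}")  # 3.7
```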
However, despite a systematic multi-parameter optimization of HHG in the longer gas cell and the gas jet, the detected total harmonic power of 1.2 nW under optimal generation conditions is 3-4 orders of magnitude below our goal of a few microwatts. The shorter gas cell improves the generation by a factor of 20, with an estimated $5 \pm 1$ nW of available 60.8 nm light of the 13\textsuperscript{th} harmonic. If the losses due to absorption in the (oxidized) aluminum filters and in the residual xenon, which the harmonic beam encounters on the way from the gas target to the detector, are taken into account, we can extrapolate the generated power of the 13\textsuperscript{th} harmonic to be $100 \pm 75$ nW, with a corresponding conversion efficiency of $(5.3 \pm 4.0) \times 10^{-8}$. Compared to other groups with relatively similar generation conditions, this is still 200 times less than obtained by Hergott [46] and 100 times less than reported by Constant [34], where 60 fs, 4.3 mJ and 40 fs, 1.5 mJ driving light pulses were used, respectively. In both cases, however, the driving pulses were shorter and/or more energetic, such that a loose focusing geometry could be employed, increasing the interaction volume, improving phase-matching and reducing ionization. As the macroscopic response in HHG is affected by many related phenomena at the same time, it is hard to compare different experiments quantitatively.
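The quoted conversion efficiency follows from a simple power budget, using only numbers stated in the text:

```python
# Back-of-the-envelope check of the conversion efficiency: 2 mJ pulses at
# 1 kHz repetition rate give 2 W average driving power.
pulse_energy = 2e-3                 # J, driving pulse energy
rep_rate = 1e3                      # Hz, repetition rate
P_drive = pulse_energy * rep_rate   # 2 W average driving power
P_13 = 100e-9                       # W, extrapolated 13th-harmonic power
eta = P_13 / P_drive
print(f"conversion efficiency = {eta:.1e}")  # 5.0e-08, consistent with (5.3 +/- 4.0)e-8
```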
Finally, we prepared the setup for focusing the high-harmonic beam with a bare-gold-coated 45° off-axis parabolic mirror with a focal length of $f = 50.8$ mm. An alignment test with the IR laser revealed distorted beam profiles in the focus. Ray-tracing simulations confirmed that this effect is not due to misalignment but due to the divergent beam, and that the distortions disappear when the divergence is reduced. As the harmonic beam is also slightly divergent, special care should be taken when using parabolic mirrors for focusing. Suda et al. did manage to focus a HH beam using an off-axis parabolic mirror, but used extremely loose focusing for HHG ($f = 4$ m) and therefore a less divergent harmonic beam [47]. Also for our setup with $f = 750$ mm, the simulations do not predict any beam distortions.
Our experimental apparatus for beam profiling is based on a knife-edge scan in a single direction, due to limited space. The movement of the knife was calibrated with an interferometer, so that its position is known with a precision of <100 nm. The test with the focused IR beam shows reasonable agreement between the knife-edge scan and the beam profile recorded with the camera, whose precision is limited by the pixel size.
However, in our first attempt to focus the HH beam, we could not observe any caustic while scanning around the expected position of the focus, and the “spot” size was much larger than the predicted 2-3 micrometers. As the measurement was performed on the last day of my stay at MPQ, there is a lack of experimental evidence as to what could cause this poor performance. There are two most probable explanations. First, the OPM is not designed for EUV wavelengths and has a relatively rough surface (6 nm, compared to <1 nm for standard EUV mirrors), in addition to some scratches from previous experiments. This might cause the HH beam to scatter off-specular and therefore not get focused. Another possibility is an irregular, distorted beam profile, which is often observed for non-optimal generation conditions. Both assumptions require further experimental investigation.
Chapter 8
Outlook
The whole experiment of setting up an EUV laser source was performed within the constraints of the experimental equipment available at the time, which turned out to be limiting to some extent. However, the most severe limitation was the limited time that the author of this report had available to repeat certain measurements and to further investigate the unexpected phenomena observed. As a newcomer to the field of ultrafast optics and high-harmonic generation, I also overlooked or underestimated some undesirable effects when designing the experimental setup. In this final chapter, improvements of the experimental setup, from simple ones to more advanced and costly ones, are suggested, and additional measurements and tests required to improve the accuracy of the presented results are proposed.
There are a few things by which we could improve our laser setup: a more stable regime of the laser oscillator should be found, and the chirped mirrors should be realigned to improve the beam profile at its output. Pulses from the oscillator are spectrally too broad to fit on the gratings of the stretcher and compressor in the CPA amplifier; therefore, some spectral components are cut away. For the current configuration, the spectrum shifts toward red wavelengths by $\approx 10$ nm after amplification, probably because the blue wing of the spectrum is cut to a larger extent than the red one. By realignment of the stretcher, the selected spectral components could be centered at 800 nm, i.e. at the peak of the oscillator's spectrum, eventually leading to an increase in pulse amplification. Also, pulses from the amplifier are not bandwidth-limited, with a chirp present that cannot be removed by the compressor. To characterize the remaining chirp, a more advanced pulse characterization method than autocorrelation should be utilized, such as FROG or SPIDER [7]. The remaining chirp could also be pre-compensated by installing an ultrafast pulse shaper such as a Dazzler™ between oscillator and amplifier. Moreover, Bartels et al. [81] showed that by carefully tailoring the driving light pulse, the interaction of intense light with the gas atoms can be controlled, improving the conversion efficiency of HHG by an order of magnitude.
The focusing geometry for HHG crucially determines the layout of the experimental setup. Once it is fixed, several improvements can be implemented. For example, the observed self-focusing of the driving laser beam in the vacuum chamber's window indicates the possibility of pulse modification due to self-phase modulation. This effect could be minimized by placing the lens in vacuum, so that the beam is larger and therefore less intense when it passes through the window. To select the optimal focusing geometry, one can rely on the experimental results presented in this work. However, for different generation conditions, namely a different gas target design, a different regime can turn out to be beneficial, as was the case with the different gas cells in our experiment. Any additional changes in the gas target design would therefore require new systematic tests. In my opinion, simulations of HHG on the macroscopic level would be a good alternative for finding the optimal conditions.
HHG simulations would also be required to support the interpretation of the results of our multi-parameter study presented in chapter 4. To do that, the generation conditions, especially the pressure in the gas target, should be determined. Once the predictive power of the simulations is confirmed, they could contribute significantly to re-designing the experimental setup, saving a lot of time otherwise spent on multi-parameter optimization.
The whole setup for the spectral characterization of the HH radiation was found to be rather suboptimal. Scanning the EUV monochromator for a measurement of a single spectrum takes roughly 3 minutes. The acquisition time at a single harmonic is rather short, but the generation conditions can change slightly during the scan. An even more severe problem in our setup for spectral characterization is the detector. After discovering the irregular non-linear behavior of the channeltron during the calibration of the aluminum filters, our study of the HHG dependence on various parameters at least partially lost credibility. While we can still confirm the observed trends without doubt, we cannot evaluate absolute changes. One possible alternative setup would be a series of thin metal filters to block the driving laser, a transmission-grating EUV spectrometer and an EUV camera [73]. Such a configuration, with continuous monitoring and recording of the whole spectrum, would make a multi-parameter study at different harmonic orders possible at the same time, and would therefore greatly help in the determination of the underlying processes in HHG.
The twenty-fold improvement of the HHG conversion efficiency with the shorter gas cell no. 2 compared to the longer gas cell no. 1 suggests that further investigations of the gas target design should be considered. For the current laser parameters, more advanced target designs, i.e. a capillary or a hollow-core fiber, are probably not an option, because the driving beam has to be focused relatively tightly. One possible improvement would be to install a gas catcher to collect the gas injected from the end-fire nozzle. The gas load on the turbopumps would thereby be reduced, and a higher flow and pressure in the gas jet would become possible. The multi-parameter study and the spectrum measurement should also be repeated for HHG with gas cell no. 2, to identify the limiting processes and to obtain the spectral information required for the extraction of the single-harmonic power from the absolute power measurement.
To improve the accuracy of the absolute power measurement, the aluminum filters should be calibrated again, this time with a reliable detector. A simple, but not wavelength-sensitive, solution would be to measure the transmittance of all harmonics with a photodiode, with 2 and then 3 filters in the beam path. From the ratio of the detected powers and the theoretical transmission curve, the transmittance for a single harmonic order can then be retrieved.
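The idea behind the two-versus-three-filter measurement can be stated in two lines: with $n$ identical filters in the beam, the detected power is $P_n = P_0\,T^n$, so the ratio of the two readings gives the per-filter transmittance $T$ directly. A sketch with made-up photodiode readings (the numbers below are hypothetical, not measured):

```python
# Per-filter transmittance from the ratio of detected powers:
# P_n = P_0 * T**n  ->  T = P_3 / P_2.
P_2 = 8.0e-9    # W, detected with 2 filters (hypothetical reading)
P_3 = 1.6e-9    # W, detected with 3 filters (hypothetical reading)
T = P_3 / P_2
print(f"per-filter transmittance T = {T:.2f}")  # 0.20
```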
In the end, the available power from our EUV laser source should be maximized. Besides increasing the conversion efficiency of HHG, the harmonic beam should also be separated from the driving laser beam with as little loss as possible. A single thicker (300 nm) aluminum filter would be preferable to two thinner ones (2 x 150 nm), because the number of more strongly absorbing oxide layers is reduced. Thicker filters are also expected to have fewer micro-pores, so a single filter would be enough to completely block the IR light. Another source of loss is absorption in residual gas released from the gas target during HHG. A differential pumping scheme could be set up close to the target to separate the generation chamber from the rest of the setup. The small apertures employed for that purpose should be big enough to transmit the EUV beam, but could eventually also be used to partially block the driving beam, provided they are not damaged.
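The thick-versus-thin filter argument above can be made quantitative with a Beer-Lambert sketch: every filter carries two oxide surface layers that absorb more strongly than the bulk aluminum, so splitting the same total thickness over two filters doubles the oxide contribution. The attenuation lengths and oxide thickness below are placeholder values for illustration, not measured data.

```python
import math

# Beer-Lambert comparison of one 300 nm filter vs two 150 nm filters at the
# same total thickness. l_al, l_ox, d_ox are ASSUMED, illustrative values.
l_al = 500.0   # nm, assumed bulk-aluminum attenuation length at ~61 nm
l_ox = 20.0    # nm, assumed oxide attenuation length (much shorter)
d_ox = 5.0     # nm, assumed oxide thickness per surface

def transmittance(d_total, n_filters):
    d_oxide = 2 * d_ox * n_filters     # two oxidized surfaces per filter
    d_bulk = d_total - d_oxide         # remaining metallic aluminum
    return math.exp(-d_bulk / l_al) * math.exp(-d_oxide / l_ox)

T_single = transmittance(300.0, 1)     # one 300 nm filter
T_double = transmittance(300.0, 2)     # two 150 nm filters
print(f"1 x 300 nm: {T_single:.3f},  2 x 150 nm: {T_double:.3f}")
```

With any oxide absorbing more strongly than the bulk, the single thick filter always transmits more; the exact gain depends on the real attenuation lengths, e.g. from the CXRO tables [50].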
Finally, we realized that we are just at the beginning of our way towards the desired goal of focusing the HH beam to a micrometer spot, which clearly will not be as straightforward as we thought when designing the experiment. Here, I will briefly suggest how to proceed. First, the HH beam has to be spatially characterized prior to focusing, using an EUV CCD camera. Several studies report on the beam profile changing with varying HHG conditions. While in some cases the optimization of HHG in terms of power led at the same time to the least distorted beam profiles [41], the highest harmonic power is no guarantee of a regular beam shape. Therefore, a direct inspection of the beam is necessary. Furthermore, the divergence of the HH beam could be determined by measuring the beam width at different positions. If the observed beam is sufficiently regular and the divergence small enough, we should proceed with focusing with an off-axis parabolic mirror. The far-field beam profile should be monitored to check what kind of distortions, if any, are introduced by the mirror. The polarization of the HH light should be adjusted by rotating the polarization of the driving laser such that the reflectivity of the mirror is highest. If intense scattering or severe distortions are observed, the mirror should be exchanged for one designed for the EUV spectral range.
Our apparatus for the characterization of the focal spot should be upgraded with a scanning capability in the $y$ direction. Then, a two-dimensional knife-edge scan could be performed to retrieve a possible ellipticity of the beam or astigmatism. Using a laser-drilled pinhole of sufficient diameter, the knife-edge scan could also be recorded at different angles, after which the beam profile could be reconstructed by a tomographic algorithm [82]. However, the knife-edge method is extremely time-consuming and provides only partial information on the two-dimensional focal spot distribution. Optimization of the HH focusing would therefore be very demanding. An alternative approach of direct visualization of the focal spot was presented by Valentin et al. [83]. A cerium-doped YAG crystal was placed in the focus of the EUV beam, inducing fluorescence which was then imaged on a visible-light CCD camera. The resolution of such a system is limited by the numerical aperture of the imaging optics to $\approx 0.5\,\mu m$.
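The link between knife-edge scans and tomography is that the derivative of a knife-edge trace is the beam profile integrated along the edge direction, i.e. exactly the one-dimensional projection a tomographic algorithm takes as input. A minimal numerical sketch with a synthetic Gaussian beam (the waist value is the one from chapter 6; the grid is arbitrary):

```python
import numpy as np
from scipy.special import erf

# Differentiate a synthetic erf-shaped knife-edge trace, eq. (6.3), to
# recover the 1-D projection of the beam profile. For a Gaussian beam the
# projection peaks at sqrt(2/pi)/w, which gives back the waist.
w = 6.1                                        # um, waist from chapter 6
x = np.linspace(-15.0, 15.0, 601)              # knife position, um
P = 0.5 * (1.0 + erf(np.sqrt(2.0) * x / w))    # knife-edge trace
proj = np.gradient(P, x)                       # projection of the profile
w_est = np.sqrt(2.0 / np.pi) / proj.max()
print(f"waist recovered from projection: {w_est:.2f} um")
```

With scans at several angles, such projections form the input of a filtered back-projection, as used in [82].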
Following the above suggestions, the desired goal of a few microwatts of 61 nm light focused to a micrometer spot still seems within reach, although the upgrades of the experimental setup explained above would be required.
Chapter 8. Outlook
[1] P. Franken, A. Hill, C. W. Peters and G. Weinreich, "Generation of optical harmonics", Physical Review Letters 7, 118 (1961).
[2] R. L. Sandberg, A. Paul, D. A. Raymondson, S. Hädrich, D. M. Gaudiosi, J. Holtsnider, R. I. Tobey, O. Cohen, M. M. Murnane and H. C. Kapteyn, "Lensless diffractive imaging using tabletop coherent high-harmonic soft-x-ray beams", Physical Review Letters 99, 098103 (2007).
[3] F. Krausz and M. Ivanov, "Attosecond physics", Reviews of Modern Physics 81, 163 (2009).
[4] A. Vernaleken, Advances in generating high repetition rate EUV frequency combs, Ph.D. thesis, LMU (2013).
[5] M. Herrmann, M. Haas, U. Jentschura, F. Kottmann, D. Leibfried, G. Saathoff, C. Gohle, A. Ozawa, V. Batteiger, S. Knünz et al., "Feasibility of coherent xuv spectroscopy on the 1 S- 2 S transition in singly ionized helium", Physical Review A 79, 052505 (2009).
[6] J.-C. Diels and W. Rudolph, Ultrashort laser pulse phenomena (Academic press, 2006).
[7] A. Weiner, Ultrafast optics, Vol. 72 (John Wiley & Sons, 2011).
[8] R. Holzwarth, T. Udem, T. W. Hänsch, J. Knight, W. Wadsworth and P. S. J. Russell, "Optical frequency synthesizer for precision spectroscopy", Physical Review Letters 85, 2264 (2000).
[9] A. Baltuška, T. Udem, M. Üiberacker, M. Hentschel, E. Goulielmakis, C. Gohle, R. Holzwarth, V. Yakovlev, A. Scrinzi, T. Hänsch et al., "Attosecond control of electronic processes by intense light fields", Nature 421, 611 (2003).
[10] R. Fork, O. Martinez and J. Gordon, "Negative dispersion using pairs of prisms", Optics Letters 9, 150 (1984).
[11] O. E. Martinez, "3000 times grating compressor with positive group velocity dispersion: Application to fiber compensation in 1.3-1.6 μm region", IEEE Journal of Quantum Electronics 23, 59 (1987).
[12] R. Szipöcs, C. Spielmann, F. Krausz and K. Ferencz, "Chirped multilayer coatings for broadband dispersion control in femtosecond lasers", Optics Letters 19, 201 (1994).
[13] B. E. Saleh and M. C. Teich, *Fundamentals of photonics*, Vol. 22 (Wiley New York, 1991).
[14] A. E. Siegman, *Lasers*, Vol. 37 (University Science Books, 1986).
[15] O. Svelto, *Principles of Lasers* (Springer Science & Business Media, 2010).
[16] A. E. Siegman, "Defining, measuring, and optimizing laser beam quality", in *OE/LASE'93: Optics, Electro-Optics, & Laser Applications in Science & Engineering* (International Society for Optics and Photonics, 1993) pp. 2–12.
[17] R. W. Boyd, *Nonlinear optics* (Academic press, 2003).
[18] A. McPherson, G. Gibson, H. Jara, U. Johann, T. S. Luk, I. McIntyre, K. Boyer and C. K. Rhodes, "Studies of multiphoton production of vacuum-ultraviolet radiation in the rare gases", Journal of the Optical Society of America B 4, 595 (1987).
[19] M. Ferray, A. l’Huillier, X. Li, L. Lompre, G. Mainfray and C. Manus, "Multiple-harmonic conversion of 1064 nm radiation in rare gases", Journal of Physics B: Atomic, Molecular and Optical Physics 21, L31 (1988).
[20] K. Kulander, K. Schafer and J. Krause, "Dynamics of Short-Pulse Excitation, Ionization and Harmonic Conversion", in *Super-Intense Laser-Atom Physics*, edited by B. Piraux, A. L’Huillier and K. Rzazewski (Springer US, 1993) pp. 95–110.
[21] P. B. Corkum, "Plasma perspective on strong field multiphoton ionization", Physical Review Letters 71, 1994 (1993).
[22] J. G. Eden, "High-order harmonic generation and other intense optical field-matter interactions: review of recent experimental and theoretical advances", Progress in Quantum Electronics 28, 197 (2004).
[23] M. Ammosov, N. Delone and V. Krainov, "Tunnel ionization of complex atoms and of atomic ions in an alternating electromagnetic field", Sov. Phys. JETP 64, 1191 (1986).
[24] A. Perelomov, V. Popov and M. Terentev, "Ionization of atoms in an alternating electric field", Sov. Phys. JETP 23, 924 (1966).
[25] L. Keldysh, "Ionization in the field of a strong electromagnetic wave", Sov. Phys. JETP 20, 1307 (1965).
[26] G. Paulus, W. Nicklich, H. Xu, P. Lambropoulos and H. Walther, "Plateau in above threshold ionization spectra", Physical Review Letters 72, 2851 (1994).
[27] D. N. Fittinghoff, P. R. Bolton, B. Chang and K. C. Kulander, "Observation of nonsequential double ionization of helium with optical tunneling", Physical Review Letters 69, 2642 (1992).
[28] M. Lewenstein, P. Balcou, M. Y. Ivanov, A. L’huillier and P. B. Corkum, "Theory of high-harmonic generation by low-frequency laser fields", Physical Review A 49, 2117 (1994).
[29] P. Salieres, A. L’huillier, P. Antoine and M. Lewenstein, “Study of the spatial and temporal coherence of high order harmonics”, arXiv: quant-ph/9710060 (1997).
[30] R. A. Bartels, A. Paul, H. Green, H. C. Kapteyn, M. M. Murnane, S. Backus, I. P. Christov, Y. Liu, D. Attwood and C. Jacobsen, “Generation of spatially coherent light at extreme ultraviolet wavelengths”, Science 297, 376 (2002).
[31] T. Brabec and F. Krausz, “Intense few-cycle laser fields: Frontiers of nonlinear optics”, Reviews of Modern Physics 72, 545 (2000).
[32] Z. Chang, Fundamentals of attosecond optics (CRC Press, 2011).
[33] M. Lewenstein, P. Salieres and A. L’huillier, “Phase of the atomic polarization in high-order harmonic generation”, Physical Review A 52, 4747 (1995).
[34] E. Constant, D. Garzella, P. Breger, E. Mével, C. Dorrer, C. Le Blanc, F. Salin and P. Agostini, “Optimizing high harmonic generation in absorbing gases: Model and experiment”, Physical Review Letters 82, 1668 (1999).
[35] P. M. Paul, E. Toma, P. Breger, G. Mullot, F. Augé, P. Balcou, H. Muller and P. Agostini, “Observation of a train of attosecond pulses from high harmonic generation”, Science 292, 1689 (2001).
[36] T. Kreibich, M. Lein, V. Engel and E. Gross, “Even-harmonic generation due to beyond-born-oppenheimer dynamics”, Physical Review Letters 87, 103901 (2001).
[37] J. Mauritsson, P. Johnsson, E. Gustafsson, A. L’Huillier, K. Schafer and M. Gaarde, “Attosecond pulse trains generated using two color laser fields”, Physical Review Letters 97, 013001 (2006).
[38] T. Popmintchev, M.-C. Chen, D. Popmintchev, P. Arpin, S. Brown, S. Ališauskas, G. Andriukaitis, T. Balčiūnas, O. D. Mücke, A. Pugžlys et al., “Bright coherent ultrahigh harmonics in the keV X-ray regime from mid-infrared femtosecond lasers”, Science 336, 1287 (2012).
[39] P. Salieres, T. Ditmire, M. Perry, A. L’Huillier and M. Lewenstein, “Angular distributions of high-order harmonics generated by a femtosecond laser”, Journal of Physics B 29, 4771 (1996).
[40] M. Protopapas, C. H. Keitel and P. L. Knight, “Atomic physics with super-high intensity lasers”, Reports on Progress in Physics 60, 389 (1997).
[41] J. Lohbreier, S. Eyring, R. Spitzenpfeil, C. Kern, M. Weger and C. Spielmann, “Maximizing the brilliance of high-order harmonics in a gas jet”, New Journal of Physics 11, 023016 (2009).
[42] Y. Tamaki, J. Itatani, M. Obara and K. Midorikawa, “Optimization of conversion efficiency and spatial quality of high-order harmonic generation”, Physical Review A 62, 063802 (2000).
[43] J. Peatross and D. Meyerhofer, "Angular distribution of high-order harmonics emitted from rare gases at low density", Physical Review A 51, R906 (1995).
[44] G. Tempea and T. Brabec, "Optimization of high-harmonic generation", Applied Physics B 70, S197 (2000).
[45] E. Takahashi, Y. Nabekawa and K. Midorikawa, "Generation of 10-µJ coherent extreme-ultraviolet light by use of high-order harmonics", Optics Letters 27, 1920 (2002).
[46] J.-F. Hergott, M. Kovacev, H. Merdji, C. Hubert, Y. Mairesse, E. Jean, P. Breger, P. Agostini, B. Carré and P. Salières, "Extreme-ultraviolet high-order harmonic pulses in the microjoule range", Physical Review A 66, 021801 (2002).
[47] A. Suda, H. Mashiko and K. Midorikawa, "Focusing Intense High-Order Harmonics to a Micron Spot Size", in Progress in Ultrafast Intense Laser Science II (Springer, 2007) pp. 183–198.
[48] D. Attwood, Soft x-rays and extreme ultraviolet radiation: principles and applications (Cambridge university press, 2007).
[49] A. Thompson and D. Vaughan, eds., X-ray Data Booklet (Center for X-Ray Optics and Advanced Light Source, Lawrence Berkeley Laboratory, 2001).
[50] CXRO, http://henke.lbl.gov/optical_constants/, accessed 10-December-2015.
[51] E. D. Palik, Handbook of optical constants of solids, Vol. 3 (Academic press, 1998).
[52] S. Adachi, The Handbook on Optical Constants of Metals (World Scientific, 2012).
[53] E. J. Takahashi, H. Hasegawa, Y. Nabekawa and K. Midorikawa, "High-throughput, high-damage-threshold broadband beam splitter for high-order harmonics in the extreme-ultraviolet region", Optics Letters 29, 507 (2004).
[54] F. Frassetto, P. Villoresi and L. Poletto, "Beam separator for high-order harmonic radiation in the 3-10 nm spectral region", Journal of the Optical Society of America A 25, 1104 (2008).
[55] R. Falcone and J. Bokor, "Dichroic beam splitter for extreme-ultraviolet and visible radiation", Optics Letters 8, 21 (1983).
[56] J. Peatross, J. Chaloupka and D. Meyerhofer, "High-order harmonic generation with an annular laser beam", Optics Letters 19, 942 (1994).
[57] Q. Zhang, K. Zhao, J. Li, M. Chini, Y. Cheng, Y. Wu, E. Cunningham and Z. Chang, "Suppression of driving laser in high harmonic generation with a microchannel plate", Optics Letters 39, 3670 (2014).
[58] J. Metje, M. Borgwardt, A. Moguilevski, A. Kothe, N. Engel, M. Wilke, R. Al-Obaidi, D. Tolksdorf, A. Firsov, M. Brzhezinskaya et al., "Monochromatization of femtosecond XUV light pulses with the use of reflection zone plates", Optics Express 22, 10747 (2014).
[59] M. Born and E. Wolf, Principles of optics: electromagnetic theory of propagation, interference and diffraction of light, 7th edition (Cambridge University Press, 2003).
[60] J. I. Larruquert, L. Rodríguez-de Marcos, J. A. Méndez, P. Martin and A. Bendavid, "High reflectance ta-C coatings in the extreme ultraviolet", Optics Express 21, 27537 (2013).
[61] C. Montcalm, S. Bajt, P. B. Mirkarimi, E. A. Spiller, F. J. Weber and J. A. Foltz, "Multilayer reflective coatings for extreme-ultraviolet lithography", in 23rd Annual International Symposium on Microlithography (International Society for Optics and Photonics, 1998) pp. 42–51.
[62] M. Fernández-Perea, R. Souffi, J. C. Robinson, L. R. De Marcos, J. A. Méndez, J. I. Larruquert and E. M. Gullikson, "Triple-wavelength, narrowband Mg/SiC multilayers with corrosion barriers and high peak reflectance in the 25-80 nm wavelength region", Optics Express 20, 24018 (2012).
[63] G. V. Marr, ed., "Handbook on Synchrotron Radiation" (Elsevier Science Publishers, 1987).
[64] D. Strickland and G. Mourou, "Compression of amplified chirped optical pulses", Optics Communications 55, 447 (1985).
[65] Spectra-Physics, Spitfire: User's Manual (2004).
[66] http://www.spectra-physics.com/products/ultrafast-lasers/spitfire-ace, accessed 10-November-2015.
[67] S. Ameer-Beg, A. Langley, I. Ross, W. Shaikh and P. Taday, "An achromatic lens for focusing femtosecond pulses: direct measurement of femtosecond pulse front distortion using a second-order autocorrelation technique", Optics Communications 122, 99 (1996).
[68] Z. Bor, "Distortion of femtosecond laser pulses in lenses and lens systems", Journal of Modern Optics 35, 1907 (1988).
[69] http://refractiveindex.info/, accessed 11-November-2015.
[70] W. Koechner, Solid-state laser engineering, Vol. 1 (Springer, 2006).
[71] Jobin-Yvon, LHT 30 XUV Monochromator Instruction Manual.
[72] M. R. Howells, "Vacuum ultra violet monochromators", Nuclear Instruments and Methods 172, 123 (1980).
[73] S. Kazamias, F. Weihe, D. Douillet, C. Valentin, T. Planchon, S. Sebban, G. Grillon, F. Augé, D. Hulin and P. Balcou, "High order harmonic generation optimization with an apertured laser beam", The European Physical Journal D 21, 353 (2002).
[74] A. L’Huillier, P. Balcou, S. Candel, K. J. Schafer and K. C. Kulander, "Calculations of high-order harmonic-generation processes in xenon at 1064 nm", Physical Review A 46, 2778 (1992).
[75] L. Lompré, A. L’Huillier, M. Ferray, P. Monot, G. Mainfray and C. Manus, "High-order harmonic generation in xenon: intensity and propagation effects", Journal of the Optical Society of America B 7, 754 (1990).
[76] C.-G. Wahlström, J. Larsson, A. Persson, T. Starczewski, S. Svanberg, P. Salieres, P. Balcou and A. L’Huillier, "High-order harmonic generation in rare gases with an intense short-pulse laser", Physical Review A 48, 4709 (1993).
[77] S. Kazamias, S. Daboussi, O. Guilbaud, K. Cassou, D. Ros, B. Cros and G. Maynard, "Pressure-induced phase matching in high-order harmonic generation", Physical Review A 83, 063405 (2011).
[78] J. Rothhardt, M. Krebs, S. Härdicke, S. Demmler, J. Limpert and A. Tünnermann, "Absorption-limited and phase-matched high harmonic generation in the tight focusing regime", New Journal of Physics 16, 033022 (2014).
[79] C. Vargel, Corrosion of aluminium (Elsevier, 2004).
[80] J. M. Khosrofian and B. A. Garetz, "Measurement of a Gaussian laser beam diameter through the direct inversion of knife-edge data", Applied Optics 22, 3406 (1983).
[81] R. Bartels, S. Backus, E. Zeek, L. Misoguti, G. Vdovin, I. Christov, M. Murnane and H. Kapteyn, "Shaped-pulse optimization of coherent emission of high-harmonic soft X-rays", Nature 406, 164 (2000).
[82] H. Hertz and R. L. Byer, "Tomographic imaging of micrometer-sized optical and soft-x-ray beams", Optics Letters 15, 396 (1990).
[83] C. Valentin, D. Douillet, S. Kazamias, T. Lefrou, G. Grillon, F. Augé, G. Mullot, P. Balcou, P. Mercère and P. Zeitoun, "Imaging and quality assessment of high-harmonic focal spots", Optics Letters 28, 1049 (2003).
Extended Summary in Slovene
### 8.1 Introduction
The generation of light at new frequencies, i.e. wavelengths, via nonlinear optical conversion in crystals is a well-known and widespread phenomenon in nonlinear optics. Frequency doubling of laser light from the red into the ultraviolet (UV) was first demonstrated by P. Franken [1] only one year after the invention of the laser. The physical mechanism behind the process is the not perfectly sinusoidal motion of electrons in the crystal lattice driven by the strong electric field of the laser light. As a result of this motion, light is emitted at the fundamental laser frequency as well as at its harmonics.
Under extreme conditions, when the intensity of the incident light reaches values between $10^{13}$ and $10^{15} \text{ W/cm}^2$, matter responds differently. The electric field of such intense light is comparable to the Coulomb potential of the atom, whose response is sketched in Figure 8.1. The potential felt by a bound electron is modified to such an extent that the electron can escape (tunnel) out. The free electron is then accelerated by the electric field of the light and, when the oscillating field reverses direction, eventually returns to the vicinity of the ion. If it recombines with the ion, the excess energy, i.e. the sum of the kinetic and binding energies, is released in the form of a high-energy photon. The frequency of the light emitted in this way corresponds to tens or even hundreds of multiples of the fundamental laser frequency and reaches into the spectral region of extreme ultraviolet (EUV) light and all the way to soft X-rays. This phenomenon is called *high harmonic generation* (HHG).
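The energetics of the recombination step can be written compactly. The cut-off law below is the standard textbook result (not a derivation specific to this thesis), using the same symbols $I_p$ and $W_{\text{kin}}$ as in Figure 8.1:

```latex
% Photon energy released on recombination:
\hbar\omega = I_p + W_{\text{kin}}
% The classical maximum of the return kinetic energy gives the cut-off law:
\hbar\omega_{\max} = I_p + 3.17\,U_p,
\qquad
U_p = \frac{e^2 E_0^2}{4 m_e \omega_0^2} \propto I\,\lambda^2
```

Here $U_p$ is the ponderomotive energy, $E_0$ the field amplitude and $\omega_0$ the laser angular frequency; the $I\lambda^2$ scaling is why near-infrared pump lasers reach high harmonic orders at the intensities quoted above.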
In a typical HHG experiment, ultrashort pulses of visible or near-infrared light of sufficient energy (pulse duration of a few tens of femtoseconds, energy of at least a few hundred $\mu$J) are focused into a target containing a nonlinear medium, usually a noble gas. The intensity of the high-harmonic light generated in this way grows along the noble-gas target, and the light is emitted collinearly with the pump laser beam. Its spectrum consists of numerous odd harmonics of the fundamental laser frequency and can extend from UV light all the way to soft X-rays. Moreover, the light produced by HHG is coherent and emitted in very short bursts, which is a comparative advantage over other EUV light sources. Unfortunately, the photon flux of this light is limited by the very low conversion efficiency (between $10^{-5}$ and $10^{-7}$) of the pump laser light into high harmonics.
Nevertheless, light sources based on HHG serve as a "table-top" alternative to large facilities such as synchrotrons and free-electron lasers. In addition, HHG is the key process for generating the shortest, attosecond pulses of light (1 as = $10^{-18}$ s). These represent a new frontier for studies of ultrafast processes
Figure 8.1: The three-step model of HHG: the blue circle with a solid outline represents the final state, and the one with a dashed outline the initial state, of the electron in each step of the process. The Coulomb electrostatic potential is deformed under the influence of the strong light field so that a potential barrier forms, through which the electron can tunnel with a certain probability (1). Once free, the electron is accelerated by the electric field (2) until it returns to the vicinity of the parent ion and eventually recombines (3). In that case a photon is emitted with an energy equal to the sum of the ionization potential of the atom $I_p$ and the kinetic energy of the electron $W_{\text{kin}}$.
such as the motion of electrons on the atomic scale. Owing to its coherent nature, HHG is also suitable for transferring existing frequency combs from the visible or near-infrared part of the spectrum into the EUV region. The outstanding spectral resolution of EUV frequency combs enables high-resolution spectroscopy of simple atoms and ions and thereby tests of the fundamental theory of bound-state quantum electrodynamics.
An example is high-resolution spectroscopy of the two-photon transition between the 1s and 2s energy levels of the singly ionized helium atom, an experiment planned in the group of Dr. Thomas Udem at the Max Planck Institute of Quantum Optics, where the experimental part of this Master's thesis was carried out. Exciting the transition requires light at a wavelength of 60.8 nm from an EUV frequency comb. Since its power is limited, the EUV light has to be tightly focused onto helium ions confined in a trap. Theoretical calculations suggest that 10-100 $\mu$W focused into a spot smaller than 10 $\mu$m would suffice for the spectroscopy.
Within this Master's thesis we attempted to approach these values with the help of an auxiliary EUV light source, likewise based on HHG. The aim of the thesis was to identify various technical limitations and to resolve them, with emphasis on optimizing the HHG conversion efficiency, determining the absolute power of the generated high-harmonic light, and testing the focusing of the high-harmonic beam with a parabolic mirror. The consolidated results and findings are presented in the next section.
### 8.2 Main results
#### 8.2.1 Laser system
The EUV light source in the presented work is based on frequency conversion of an infrared (IR) laser into the EUV via the nonlinear HHG process. The IR pump laser system is based on the principle of chirped pulse amplification (the idea of the CPA principle is illustrated in Figure 8.2). The amplified pulses leave the system at a repetition rate of 1
Figure 8.2: The laser system in our experiment consists of: 1) an oscillator laser, which serves as the source of "seed" ultrashort pulses for 2) a CPA amplifier system, where only every hundred-thousandth pulse is selected and amplified by a factor of $\approx 10^6$. In both stages the gain medium is a titanium-sapphire crystal, optically pumped by a green laser, continuous-wave in the first case and pulsed in the second. A small fraction of the light is picked off with a glass plate for pulse characterization: determination of the pulse duration via an autocorrelation measurement, of the spectrum, and of the transverse beam profile. To reach the electric field strength required for HHG, the remainder is focused with a plano-convex lens into the gas target. Below the block diagram of our laser system, the effect of each CPA stage on the pulse is shown. The rainbow gradient represents a chirped pulse in which the red spectral components lag behind the blue ones. In reality, the pulse spectrum in our experiment lies in the near-infrared, with no visible components.
kHz and with the following parameters: duration 118 fs, energy 2.2 mJ and spectral peak (center) at 810 nm; the transverse beam profile is slightly elliptical, nearly Gaussian. The beam of amplified ultrashort pulses is then focused with plano-convex lenses of various focal lengths into a target filled with a noble gas (xenon or argon). Owing to the strong absorption of EUV light in practically all materials, the gas target and the entire diagnostic system are placed in vacuum.
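As a rough check that these parameters reach the $10^{13}$-$10^{15} \text{ W/cm}^2$ regime quoted in the introduction, one can estimate the ideal focused peak intensity of a Gaussian beam. The input beam radius below is an assumed illustrative value (it is not given in this summary), so the result is a diffraction-limited upper bound, not a measured number:

```python
import math

def peak_intensity(energy_j, duration_s, wavelength_m, focal_m, beam_radius_m):
    """Ideal Gaussian-beam peak intensity at the focus, in W/cm^2."""
    peak_power = 0.94 * energy_j / duration_s                 # Gaussian temporal profile
    w0 = wavelength_m * focal_m / (math.pi * beam_radius_m)   # diffraction-limited waist
    intensity_w_m2 = 2.0 * peak_power / (math.pi * w0 ** 2)   # on-axis peak intensity
    return intensity_w_m2 * 1e-4                              # W/m^2 -> W/cm^2

# Parameters from the text: 2.2 mJ, 118 fs, 810 nm, f = 400 mm lens;
# the 5 mm input beam radius is an assumption for illustration only.
I = peak_intensity(2.2e-3, 118e-15, 810e-9, 0.400, 5e-3)
```

Aberrations, a beam quality factor $M^2 > 1$ and the aperture used to improve phase matching reduce the real intensity well below this ideal estimate.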
#### 8.2.2 Optimization of the nonlinear high-harmonic generation process
High harmonic generation was demonstrated in different gas targets - cells filled with xenon or argon, and a supersonic gas jet ejected from a nozzle. The experimental scheme is shown in Figure 8.3 with all the components needed for the detection and optimization of the conversion of infrared light into EUV light around 60 nm. As can be seen from Figure 8.4, the conversion is 2-3 times more efficient in xenon, while the higher ionization potential of argon enables the generation of higher harmonics
Figure 8.3: Scheme of the HHG experiment in a gas target and of the setup for spectral analysis and conversion optimization: RWF - filter with continuously variable transmission, AI - adjustable iris, XYZ - piezo-driven translation stage movable in three directions, PG - pressure gauge, TP - vacuum turbo pump, IS and OS - entrance and exit slits of the EUV monochromator, RV - regulating valve for the gas supply and TG - toroidal reflective diffraction grating. During the optimization of the conversion we varied the size of the adjustable iris, the gas flow into the target and its position, and the focusing (the focal length of the lens).
in argon (the 27th order, compared with the 19th in xenon). Generation of even higher harmonic orders is possible by increasing the intensity of the IR pump pulses in the case of argon, whereas ionization prevents this when xenon is used as the nonlinear medium. The optimal conversion conditions depend on the geometry of the gas target. A lens with a focal length of 750 mm is thus best suited to the longer gas cell filled with xenon, while the conversion is more efficient with lenses of shorter focal lengths (300, 400 and 500 mm) in all other cases. In some cases the conversion can be further improved by partially blocking the pump laser with an iris in front of the lens, which improves the phase-matching conditions and reduces ionization. For xenon and the 300-mm lens the relative improvement can reach 50%. The number, i.e. density, of atoms of the nonlinear medium is limited by the pumping speed of the vacuum turbo pumps as well as by the target geometry. In the case of the longer gas cell it was therefore not possible to reach the optimal cell pressure at which the phase-matching conditions between the field of the 13th harmonic and the pump laser field would be fulfilled; this was achieved with the shorter cell. A measurement of the 13th-harmonic intensity as a function of the pump laser intensity revealed three regimes: in the cut-off regime the intensity rises very quickly, after which the growth slows down and scales with the pump intensity to the power $p = 3.7 \pm 0.7$ in the so-called plateau region. Finally, a saturation phase sets in, with no appreciable change in the conversion efficiency. Only for the weaker focusing with the 1000-mm lens had the saturation phase not yet been reached, which means that a more powerful pump laser, and hence a higher pump intensity, would improve the conversion efficiency there; we do not expect this in the other cases.
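The quoted exponent $p$ in the plateau region is obtained from the slope of a log-log plot of harmonic yield versus pump intensity. A minimal sketch of that extraction, using synthetic data rather than the thesis measurements:

```python
import math

# Synthetic plateau-region data: yield ~ I^p with p = 3.7, as reported in the text.
p_true = 3.7
intensities = [1.0, 1.5, 2.0, 3.0, 4.0]        # pump intensities, arbitrary units
yields = [i ** p_true for i in intensities]    # 13th-harmonic yields

# Least-squares slope of log(yield) vs log(intensity) recovers the exponent.
xs = [math.log(i) for i in intensities]
ys = [math.log(y) for y in yields]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
```

With real data the scatter of the points about the fitted line gives the quoted uncertainty of $\pm 0.7$.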
#### 8.2.3 Determination of the absolute power of the EUV light
Despite systematic optimization of the experimental parameters for HHG in the longer gas cell and in the nozzle jet, the measured power of all harmonics combined is only 1.2 nW, which is 3-4 orders of magnitude less than the desired few microwatts. The conversion is considerably more efficient (by a factor of 20) in the shorter cell: $5 \pm 1$ nW of 13th-harmonic light around 61 nm. Taking into account the absorption in the aluminium filters, which block the strong IR light, and in the residual xenon in the vacuum chamber, we can infer from the measurements that the conversion efficiency into the 13th harmonic is $5 \times 10^{-8}$ and that the power of the generated light at 61 nm is $100 \pm 75$ nW. The values for the other harmonic orders are shown in Figure 8.5. Compared with the results of other groups with comparable experimental parameters, our values are 200 times lower than those of Hergott et al. [46] and 100 times lower than those of Constant et al. [34]. It should nevertheless be emphasized that in both cases the pump pulses were shorter (60 and 40 fs) at higher or equal energy, which allows weaker focusing and thus a larger number of interacting atoms, as well as improved phase-matching conditions. Moreover, direct quantitative comparisons are nearly impossible because of the number of interrelated phenomena that influence the macroscopic response of the gas in HHG.
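The step from the 5 nW measured behind the filters to the roughly 100 nW generated in the cell is a division by the transmission of everything between source and detector. The transmission values below are illustrative assumptions chosen to reproduce the quoted numbers; the actual values depend on filter thickness, oxide layers and the residual xenon pressure:

```python
# Measured 13th-harmonic power behind the aluminium filters (from the text).
measured_w = 5e-9

# Assumed illustrative transmissions (NOT the calibrated thesis values):
t_al_filters = 0.10    # aluminium filters blocking the IR pump
t_residual_xe = 0.50   # absorption in residual xenon in the chamber

generated_w = measured_w / (t_al_filters * t_residual_xe)   # back-propagated source power

# Conversion efficiency relative to the average pump power (2.2 mJ at 1 kHz).
pump_avg_w = 2.2e-3 * 1e3
efficiency = generated_w / pump_avg_w
```

With these assumed transmissions the back-propagated power is $10^{-7}$ W = 100 nW and the efficiency about $5 \times 10^{-8}$, consistent with the values quoted above; the large $\pm 75$ nW uncertainty reflects how sensitive the result is to the transmission estimates.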
#### 8.2.4 Focusing of the high-harmonic beam
In the last part we tested the focusing of the high-harmonic beam with a 90-degree off-axis parabolic mirror with a focal length of $f = 50.8$ mm and a gold coating without an additional protective layer. When testing the setup with the IR beam, we observed a distortion of the transverse profile at the focus, as can be seen in Figure 8.6b. Ray-tracing simulations (Figure 8.6a) confirmed that the distortion does not originate from misalignment but from the aberrations of the optical system when a divergent beam is focused with a parabolic mirror. These are considerably reduced, or even vanish, if the beam is less divergent, i.e. collimated. Since the high-harmonic beam is slightly divergent,
Figure 8.5: Estimate of the high-harmonic power and of the conversion efficiency into each harmonic order for different focusing geometries: shown are the results of measurements with the $f = 750$ mm lens (green) and the $f = 400$ mm lens, where in the latter case once the raw data from the measurement of the spectral distribution of the high-harmonic light were used (red) and once the data corrected for the estimated spectral sensitivity of the detection (blue).
this has to be taken into account when choosing the mirror and the experimental setup. Sudi et al. managed to focus a high-harmonic beam to a micrometre-sized spot [47], but they used a lens with a long focal length (4 m) to focus the pump laser. In our case ($f = 750$ mm) the simulations likewise confirmed the absence of distortion due to aberrations. The measurement system for analysing the transverse profile at the focus is based on the *knife-edge scan* method in one dimension. The motion of the edge is calibrated with a Michelson interferometer, so that its position is determined with an accuracy better than 100 nm. A test with the focused IR beam showed good agreement between the *knife-edge scan* measurement and a measurement with a CCD camera, whose resolution is limited by the pixel size to $\approx 4\ \mu$m.
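For a Gaussian beam the knife-edge signal is an error function, and the $1/e^2$ radius $w$ follows directly from the 10%-90% transmission distance, $d_{10\text{-}90} \approx 1.28\,w$. A self-contained sketch of that extraction on synthetic data (the 20 µm radius is a hypothetical value, not a thesis measurement):

```python
import math

def knife_edge_signal(x, w, x0=0.0):
    """Transmitted fraction as the edge at position x uncovers a Gaussian beam of 1/e^2 radius w."""
    return 0.5 * (1.0 + math.erf(math.sqrt(2.0) * (x - x0) / w))

w_true = 20e-6                                     # assumed 1/e^2 beam radius, 20 um
xs = [i * 1e-6 for i in range(-60, 61)]            # scan positions, 1 um steps
sig = [knife_edge_signal(x, w_true) for x in xs]

def crossing(level):
    """Linearly interpolate the edge position where the signal crosses `level`."""
    for (x1, s1), (x2, s2) in zip(zip(xs, sig), zip(xs[1:], sig[1:])):
        if s1 <= level <= s2:
            return x1 + (level - s1) * (x2 - x1) / (s2 - s1)

d_10_90 = crossing(0.9) - crossing(0.1)
w_est = d_10_90 / 1.2815   # the 10-90 distance of an erf profile equals 1.2815 w
```

In practice the measured curve is fitted over its full range rather than at two points, which is less sensitive to noise; the 10-90 rule is the quick-look version of the same erf model.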
In our first (and only) attempt at focusing the high-harmonic beam we did not observe the expected trend of the beam width increasing with distance from the focus, whose position should have coincided with the previously determined position of the IR focus. The width of the high-harmonic beam at this point greatly exceeded the expected width of 2-3 micrometres (Figure 8.7). Owing to the lack of measurements, we can only speculate about the causes of the inefficient focusing. The two most likely explanations are the following: the parabolic mirror used was intended for use with IR light and thus has a considerably "rougher" surface than dedicated EUV mirrors, as well as some damage visible to the naked eye; this could make non-specular reflection, i.e. scattering at the mirror surface, dominant. The other possibility is a highly irregular transverse profile of the high-harmonic beam, which has often been observed in other studies under non-optimal conversion conditions (phase mismatch). Both explanations certainly require further experimental confirmation.
Figure 8.6: Three characteristic shapes of the distorted transverse profile of the IR beam focused with the parabolic mirror near the focus: the images are the result of 8.6a the ray-tracing simulation and 8.6b the measurement with a CCD camera.
Figure 8.7: Measurement of the transverse profile of the high-harmonic beam with the knife-edge method for optimal HHG conditions (blue) and for the case where the gas target is located well behind the pump-light focus (red). The span of the horizontal axis is approximately 2-3 mm.
Spacecraft technology
V. R. Katti\textsuperscript{1*}, K. Thyagarajan\textsuperscript{1}, K. N. Shankara\textsuperscript{1} and A. S. Kiran Kumar\textsuperscript{2}
\textsuperscript{1}ISRO Satellite Centre, P.B. No. 1795, Vimanapura Post, Bangalore 560 017, India
\textsuperscript{2}Space Applications Centre, Ambawadi Vistar PO, Jodhpur Tekra, Ahmedabad 380 015, India
This paper presents a summary of the efforts at ISRO in the development of spacecraft technology over the past three decades and also outlines technological trends in the evolution of communication space platforms as well as scientific and remote sensing satellite systems. Tracing the developmental efforts since Aryabhata, the first Indian satellite, through the experimental phase of application-oriented remote sensing and communication satellites, namely Bhaskara-I, Bhaskara-II and APPLE, the state-of-the-art operational INSAT and IRS series of satellites are presented. The communication satellite platforms, namely I-1K, I-2K, I-3K and I-4K, with lift-off masses in the range of one to four tonnes and power generation capabilities of 1 kW to 10 kW and more, meeting multifarious requirements of telecommunication, telecasting, DTH (Direct-to-Home TV), multimedia applications and meteorological services, have been developed. The improvements derived from the use of multijunction solar cells, lithium-ion batteries and other mass optimization techniques are passed on as a larger payload complement and longer life. Technological trends in the design of communication payloads and the advancements in key elements such as antennas, microwave power amplifiers and receive systems, as well as the spacecraft mainframe, are discussed. Commercialization of developmental efforts in the form of design, building and commissioning of communication satellites for international service providers is highlighted in brief.
Keywords: Astrosat-I, Chandrayaan-I, bent-pipe, multispectral, panchromatic, spacecraft technology, transponder
The launch of the first man-made satellite Sputnik by the then Soviet Union, five decades ago, heralded a new era of space exploration. Spectacular advances have been made since then, leading to utilization of outer space for enhancing the quality of life on earth, in addition to scientific investigation and planetary exploration. Space-based services such as telecommunication, television broadcasting, weather forecasting, navigational assistance, management of natural resources and natural disaster situations have now become essential elements in our life. Personalized multimedia services are also in the offing. Central to these developments and also to the exploration of outer space is the development of spacecraft, which can withstand the rigors of space environment and provide cost effective, flawless services for years. The spacecrafts have to be essentially optimized for minimizing mass, volume and power, but with high performance, reliability and longevity. Considerable progress is achieved over the last few decades in the design of space platforms through the use of new lightweight composite materials and alloys, miniaturized electronics and computer-aided design and analysis techniques and also, better understanding of outer space environment. This coupled with cost effective and increased space transportation capability has brought benefits of space to the common man.
Spacecraft development efforts in ISRO were initiated as early as four decades ago. Initial learning efforts culminated in the successful realization of Aryabhata\textsuperscript{1}, a scientific satellite carrying celestial X-ray, solar neutron, gamma and ionosphere experiments, with a modest power generation capability of 40 W and weighing about 360 kg. The Aryabhata Project was a crucial learning tool that provided lessons on the design of a lightweight platform structure and its flight qualification, passive thermal control, spin stabilization with cold-gas control, solar power generation, storage and conditioning, attitude control, telemetry, telecommand, payload data handling/storage and RF links, as well as ground reception and data dissemination. It also provided the opportunity to develop expertise in the fields of project management, system engineering, system integration and space system quality and reliability assurance, in addition to expertise in subsystem design. With this backdrop, ISRO entered what is now recognized as an experimental phase of application-oriented spacecraft development in the early eighties of the 20th century, with remote sensing and communication as thrust areas. Bhaskara-I, the very first experimental remote sensing satellite, as well as its follow-on satellite Bhaskara-II\textsuperscript{2}, carried slow-scan video cameras with 1 km resolution and a passive microwave radiometer in K-band. A payload opportunity available on an experimental Ariane launch vehicle flight of the European Space Agency (ESA) was utilized to build India's first communication satellite, the Ariane Passenger Payload Experiment (APPLE)\textsuperscript{3}.
APPLE provided the opportunity to introduce state-of-the-art technologies of the day, such as momentum-biased three-axis stabilization techniques, a motor-driven deployed solar array, earth sensing for attitude control, C-band transponder design inclusive of a composite reflector, orbit raising, station acquisition, station-keeping and a host of mission management and flight dynamics techniques.
The multipurpose Indian National Satellite INSAT-1, procured as per ISRO design, and the indigenous Indian Remote Sensing satellite IRS-1A ushered in the operational era. INSAT-1, with 12 transponders in C-band for Fixed Satellite Services (FSS), 2 S-band transponders for TV broadcasting and a Very High Resolution Radiometer (VHRR) providing visible and infrared imageries for meteorological applications, established operational capabilities for communication and broadcasting, weather data collection and dissemination, and satellite control. IRS-1A\textsuperscript{4}, on the other hand, established a whole gamut of natural resources management through optical remote sensing, which required image data collection, processing, categorization, product generation, dissemination, user-oriented application development and user agency co-ordination. The second-generation Indian National Satellites (INSATs) were developed entirely indigenously, retaining the multipurpose capabilities of INSAT-1 but enhancing the overall throughput and capacity in terms of number of transponders and spectrum, and also the resolution of the visible and infrared imageries of the VHRR. It also introduced a new upper extended C-band for Very Small Aperture Terminal (VSAT) applications.
Over the last two decades ISRO has been providing, through the second and third generations of INSAT and follow-on IRS satellites, operational services meeting communication, broadcasting, meteorological, natural resource management, agricultural produce estimation, forest mapping, arid area delineation, flood mapping and disaster management requirements.
Scientific satellite development since Aryabhata has grown in sophistication, complexity and capability through the intermediate phase of the Rohini and Stretched Rohini programs, and has now embarked on a mission to the Moon and a dedicated state-of-the-art satellite for astronomical observations. ISRO-aided development of small satellites at the universities (e.g. Anusat at Anna University) is also on the anvil. A number of international agencies, such as NASA and ESA, have shown active interest by utilizing the opportunities available in Chandrayaan-1 (the Moon mission) and Astrosat for their scientific payloads.
ISRO’s communication satellite buses, namely I-2K and I-3K, have attracted a number of satellite manufacturers and operators all over the world, resulting in fruitful commercial ventures as well.
**The space environment**
The spacecraft configured and designed for any specific purpose at Low Earth Orbits (LEO) or at Geostationary Equatorial Orbits (GEO) has to operate in an environment characterized by\textsuperscript{5}
- Absence of atmosphere.
- Weaker gravitational and magnetic fields.
- Hostile charged-particle/radiation effects.
Further, the satellites operating at GEO have to weather geomagnetic storms, caused by intense solar activity and solar flares, and also micrometeorites. During geomagnetic storms, the increased electron flux impinging on the satellite can cause electrostatic charging of exposed surfaces to levels as high as 20 to 30 kV, leading to surface degradation, charge blow-off and scintillations, resulting in severe degradation and malfunctioning of the onboard equipment. Similarly, LEO polar satellites, while passing over the auroral region, may encounter the deleterious effects of particle and electromagnetic radiation.
The satellites have to work in vacuum, which means the advantage of thermal convection is not available. Moreover, the surfaces exposed directly to the Sun get heated, while those in shadow or facing away from the Sun radiate to cold space and become very cold.
During launch the spacecraft have to endure
- Depressurization.
- Acceleration.
- Vibration and acoustic loads.
The launch and space environment have considerable influence on configuration and design of satellites and their subsystems\textsuperscript{6}.
**Communication space platforms**
Communication and access to information play an important role in our everyday life. Traditionally, these requirements are met by terrestrial radio, television and telephony services, which concentrate on providing these services to the metropolitan and urban parts of the country only. The communication requirements of India, with its vast land mass and diverse cultural traditions, cannot be met with terrestrial networks alone. The Indian space programme realized this way back in the 1970s and initiated the development of a space-based communication system to meet the national requirements. This called for the development of communication space platforms and adequate infrastructure on the ground to control and utilize the capabilities of these platforms. Communication applications have been discussed in detail in Bhaskaranarayan \textit{et al.} (this issue).
**Space-based communication system**
Figure 1 shows a typical space-based communication system. The overall satellite network architecture consists of space and ground segments, in which space segment is one or more satellites, while the ground segment consists of User Terminals (VSAT or USAT), Gateway Terminals or Hub, Network Operations Center (NOC) and Satellite Management Center (SMC). Three key elements that define network architecture are the network topology, data
rates supplied and the multi-user access scheme. Over the years, various network topologies and architectures for satellite-based communication systems have been developed. The most commonly used architectures in vogue are wide area bent pipe, spot beam bent pipe and onboard processing architectures.
**Wide area bent pipe architecture:** This is the architecture traditionally used in C and Ku-band networks and employs a fixed-assigned star topology. In the star topology, all user-terminal communications go through a hub, and user-to-user transmission involves two satellite hops. This architecture is shown in Figure 2. A bent-pipe payload simply receives and retransmits signals within the same beam’s coverage area. Because of the wide-area coverage, user terminals, the hub and the operations centre can communicate with each other via satellite.
**Spot beam bent pipe architecture:** This takes advantage of possible frequency reuse and employs a pre-assigned star topology. Ground elements within the same beam can communicate with each other via satellite. However, ground elements that do not reside in the same beam cannot communicate with each other via satellite, and significant ground infrastructure is required to interconnect them. A gateway or hub is required in each spot beam to provide user access. For inter-beam user services, all the hubs are interconnected via a terrestrial network.
**Onboard processing architecture:** Onboard processing provides onboard inter-beam connectivity, allowing hubless user-to-user communication via satellite, and is advantageous for demand-assigned mesh topology networks. The capacity of the resulting network is larger than that of the bent pipe as a result of frequency reuse without a terrestrial infrastructure. Onboard processing eliminates the inherent disadvantages of bent pipe transponders. The overall C/N0 is independent of the uplink C/N0 in a regenerative transponder. This reduces the uplink Effective Isotropic Radiated Power (EIRP) requirement by 5–6 dB, resulting in a reduction in the size and cost of the ground terminal compared to a bent pipe transponder. Onboard processing can efficiently utilize resources by assigning fractions of transponder bandwidth to different coverage areas and has more operational flexibility.
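The bent-pipe dependence on uplink quality comes from the way the carrier-to-noise-density ratios of the two legs cascade: noise added on the uplink is retransmitted on the downlink. A short sketch of the standard cascade formula, with illustrative numbers rather than figures from the paper:

```python
import math

def db(x):
    return 10.0 * math.log10(x)

def undb(x):
    return 10.0 ** (x / 10.0)

def bent_pipe_cn0(up_dbhz, down_dbhz):
    """Overall C/N0 of a transparent (bent-pipe) link: the reciprocals of the
    linear ratios add, so the total always falls below the weaker leg."""
    total_linear = 1.0 / (1.0 / undb(up_dbhz) + 1.0 / undb(down_dbhz))
    return db(total_linear)

# Illustrative values: with two equal 80 dBHz legs the bent-pipe total drops ~3 dB,
# whereas a regenerative (onboard-processing) link is limited by each leg separately.
total = bent_pipe_cn0(80.0, 80.0)
```

This is why regenerating the signal onboard relaxes the uplink EIRP requirement: the uplink no longer has to be strong enough to dominate the cascade.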
Processed and demodulated uplink signals are routed to the destination beam in a preset and reconfigurable fashion by circuit switching if traffic is fairly stable and concentrated. When traffic is bursty and unpredictable, switching of the packets of fixed or variable sizes based on asynchronous transfer mode (ATM) or statistical multiplexing uses system resources effectively. Circuit switching and packet switching architectures are pictorially shown in Figure 3 and Figure 4 respectively.
**Space segment development**
Communication spacecraft architecture consists of a spacecraft bus and a payload system. The payload segment provides the required service, whereas the spacecraft bus provides the necessary support for the payload to be operational in orbit. Presently, communication satellites are employed to provide TV broadcasting, radio networking, trunk telephony and
networking of VSAT systems. The most preferred spectrum resources are the C and Ku-bands. There is a shift in global telecommunications from voice-driven to video-driven and data-driven services, and from analogue to digital broadcast services. Figure 5 shows the projected global growth of satellite usage over the next few years and the significant increase in transponder usage for the Internet.
**ISRO space segment development:** Over a period of time, satellite communication development in India went through experimental to developmental to operational phases and has presently reached the commercial phase.
Indian communication space segment development started with the experimental APPLE spacecraft (Figure 6), which was launched in 1981. The payload consisted of two C-band transponders with TWTAs in the high-power stage, providing 31.5 dBW effective isotropic radiated power (EIRP) over India. The realization of the APPLE satellite gave first-hand experience in the development of geosynchronous satellite technology. The satellite weighed 670 kg, had a power generation capability of 210 W and provided service for a period of two years.
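To put APPLE's 31.5 dBW EIRP in perspective, the power flux density reaching the ground follows from spreading the EIRP over the sphere at geostationary range. This is standard link-budget arithmetic, not a figure taken from the paper:

```python
import math

GEO_RANGE_M = 35_786_000.0   # geostationary altitude at the sub-satellite point

def flux_density_dbw_m2(eirp_dbw, range_m):
    """Power flux density at distance range_m from an isotropic radiator of the given EIRP."""
    spreading_db = 10.0 * math.log10(4.0 * math.pi * range_m ** 2)  # spherical spreading loss
    return eirp_dbw - spreading_db

pfd = flux_density_dbw_m2(31.5, GEO_RANGE_M)   # on the order of -130 dBW/m^2
```

Such small flux densities are why early C-band reception needed large ground antennas, and why the higher-EIRP payloads of the later INSAT-4 DTH satellites enable small home dishes.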
The Indian National Satellite System (INSAT), one of the largest domestic/national satellite systems in the world\(^9\), was conceived in 1970s. Ford Aerospace Communication Corporation (FACC) built the INSAT-1 series satellites. INSAT-1B, which became operational in mid-1983, was the first operational satellite in this series and it provided telecommunication, TV broadcasting, radio networking, weather observation and forecasting services.
The second and subsequent generation satellites were designed and built indigenously. The INSAT-2 series\(^{10}\) was conceived with five spacecraft, INSAT-2A to 2E, with the first two, INSAT-2A and 2B, planned as multipurpose satellites on the lines of the INSAT-1 system. To meet user community demands, the next two satellites\(^{11}\), INSAT-2C and 2D, were reconfigured as exclusive communication satellites. Figures 7 and 8 depict the INSAT-2A/2B and INSAT-2C/2D satellite configurations respectively. INSAT-2E, the last of the second-generation INSAT-2 series, was evolved as the forerunner of the future INSAT-3 bus. INSAT-2E offered 17 communication transponders with near-hemispherical and zonal coverage, in addition to a 3-channel VHRR and a CCD camera payload providing improved resolution in both visible and infrared bands. Figure 9 gives the footprint of the INSAT-2E dual-gridded wide-beam antenna coverage.
The INSAT-3 series, comprising five satellites INSAT-3A to 3E, was configured with a mix of multipurpose as well as dedicated communication and advanced meteorological satellites employing C, Ext-C and Ku-band transponders. INSAT-3D, carrying a state-of-the-art 6-channel imaging and 19-channel sounding system, is scheduled for launch in 2008. Figure 10 depicts the INSAT-3A satellite.
---
**Figure 4.** Onboard processing architecture (packet switching).
**Figure 5.** Utility of equivalent 36 MHz transponders.
**Figure 6.** APPLE spacecraft configuration.
INSAT-4A/INSAT-4B are identical satellites configured with high power Ku-band payload for DTH services and C-band payload for conventional applications.
Over the last two decades the capabilities of ISRO satellites have grown considerably in terms of power generation, service variety, service content and outreach, as well as service quality, through the INSAT-1 to INSAT-4 satellite series. Spacecraft power has grown from 1 kW to 6 kW (six times), the number of 36 MHz transponders has increased from 12 to 36 (three times) per satellite and satellite design life has increased from 7 to 15 years (two times). The evolution of spacecraft power and mass trends over the years for ISRO satellites is depicted in Figures 11 and 12.
**Payload technology development:** *(i) Communication payloads* – A typical payload system consists of antenna and repeaters. The antenna system is responsible for providing adequate coverage over the service area and meeting RF power requirement at the ground-receiving terminal. The repeater ensures reception, frequency translation, channelization and amplification of all the signals. The payload operating frequency band is finalized based on orbital slot after co-ordinating with other users as per international norms. Typical frequency plan and corresponding block schematics for INSAT-4C communication payload with twelve channel Ku-band transponders are furnished in Figures 13 and 14 respectively.
Table 1 gives evolution of ISRO payloads over the years. The improvement seen in major payload performance parameters is depicted below:
• EIRP has grown in C-band from 32 dBW over India coverage to 37 dBW over expanded coverage and in Ku-band from 42 dBW to 52 dBW over India coverage and 55–57 dBW over regional spot beam coverage.
• The figure of merit (G/T) has improved from $-5$ dB/degK to $+1$ dB/degK in C-band and from $-2$ dB/degK to $+7$ dB/degK in Ku-band.
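These EIRP and G/T figures feed directly into the downlink budget. The sketch below computes a carrier-to-noise density (C/N0) for a hypothetical C-band GEO link using the improved figures above; the 4 GHz frequency, the sub-satellite slant range and the ideal free-space assumption are illustrative, not mission values.

```python
import math

BOLTZMANN_DBW = -228.6  # 10*log10(1.38e-23 J/K), dBW/K/Hz

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / 3e8)

def cn0_dbhz(eirp_dbw: float, fspl: float, g_over_t_dbk: float) -> float:
    """Downlink carrier-to-noise density: C/N0 = EIRP - FSPL + G/T - k."""
    return eirp_dbw + g_over_t_dbk - fspl - BOLTZMANN_DBW

# Hypothetical C-band GEO downlink at the improved figures quoted above
loss = fspl_db(35_786_000, 4e9)            # ~195.6 dB at 4 GHz from GEO
print(round(cn0_dbhz(37.0, loss, 1.0), 1))  # C/N0 in dB-Hz
```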
(ii) Antenna technology – The evolution of antenna technologies at ISRO started with the development of a 0.9 m body-mounted parabolic dish employing Carbon Fibre Reinforced Plastic (CFRP) technology for APPLE. Subsequently, deployable CFRP reflectors operating over S, C and Ext-C band frequencies were developed for the second-generation INSAT series satellites. Shaped reflectors targeting the desired land-mass coverage, avoiding energy spill-over to adjacent regions or the sea, were developed for third-generation INSAT systems. For EDUSAT-type applications, and to meet the higher EIRP requirements of DTH and broadband services, the coverage area is split into a number of smaller zones served by spot beams. The system capacity is increased by reusing the available spectrum, through optimum allocation of frequencies and polarization.
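The capacity gain from spot-beam frequency reuse described above can be sketched as follows; the beam count and four-colour reuse plan are hypothetical, used only to show the arithmetic.

```python
def system_capacity_mhz(allocated_mhz: float, n_beams: int,
                        reuse_factor: int, n_polarizations: int = 1) -> float:
    """Usable spectrum when each of n_beams spot beams draws on an
    allocation shared among reuse_factor adjacent-beam 'colours',
    optionally doubled by dual polarization."""
    return allocated_mhz * n_polarizations * n_beams / reuse_factor

# Hypothetical: 500 MHz in a 4-colour reuse plan over 8 spot beams
# doubles the effective spectrum relative to a single national beam.
print(system_capacity_mhz(500, 8, 4))  # 1000.0
```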
Figure 9. INSAT-2E C-band wide beam coverage.
Figure 10. INSAT-3A satellite.
Figure 11. Satellite EOL power trend.
Figure 12. Spacecraft mass trend.
Figure 13. INSAT-4C frequency channelization plan schematic.
Table 1. GEOSAT payload evolution
| Satellite | Payload band | Coverage | EIRP |
|---------------|-------------------------------|---------------------------------|--------------------|
| INSAT-2A/2B | 12 NOR-C | India | 32 dBW |
| | 6 EXT-C | India | 32 dBW |
| | 2 S-BSS | India | |
| INSAT-2C/2D   | 12 NOR-C                      | 10 India, 2 Expanded            | 7 CH 36 dBW, 3 CH 32 dBW |
|               | 6 EXT-C                       | India                           | 2 ECC 35 dBW, 4 ICC 32 dBW |
| | 3 KU | India | 41 dBW |
| | S-BSS | Metro | 42 dBW |
| | S-MSS | India | F-35 dBW R-30 dBW |
| INSAT-2E | 12 NOR-C | Wide and Zonal | 36 dBW |
| | 5 EXT-C | | 36 dBW |
| INSAT-3A | 12 NOR-C | Expanded and India | 9 ECC 38 dBW, 3ICC 36 dBW |
| | 6 EXT-C | India | 36 dBW |
| | 6 KU | India | 46 dBW |
| INSAT-3B | 12 EXT-C (Dual Pol) | India | 37 dBW |
| | 3 KU | India | 45 dBW |
| | S-MSS | India | F-35 dBW R-30 dBW |
| INSAT-3C | 24 NOR-C (Dual Pol) | India | 36 dBW |
| | 6 EXT-C | India | 36 dBW |
| | S-BSS | India | 42 dBW |
| | S-MSS | India | F-37 dBW R-35 dBW |
| INSAT-3E | 24 NOR-C (Dual Pol) | India | 36 dBW |
| | 12 EXT-C (Dual Pol) | India | 36 dBW |
| GSAT-2 | 4 NOR-C | India | 36 dBW |
| | 2 KU | India | 42 dBW |
| | S-MSS | India | F-37 dBW R-35 dBW |
| GSAT-3 | 6 EXT-C | India | 37 dBW |
| | 6 KU | Spot and National | 5 Spot 55 dBW |
| | | | 1 National 50 dBW |
| INSAT-4A/4B | 12 NOR-C | Expanded | 39 dBW |
| | 12 KU (Dual Pol) | India | 53 dBW |
| INSAT-4CR | 12 KU (Dual Pol) | India | 53 dBW |
for the spot beams. Figure 15 depicts GSAT-3 (EDUSAT) Ku-band multi-beam antenna coverage. GSAT-4 Ka-band India coverage using eight conjugate spot beams with frequency reuse is being realized through a novel design of $2.2 \text{ m} \times 1.6 \text{ m}$ shaped segmented multi-beam trans-receive reflector.
Planar Array Antenna (PAA) gives higher gain with lower mass and overall lower volume. A $550 \times 550 \text{ mm}$ PAA with $16 \times 16$ radiating elements was flown in Kalpana-1, a dedicated meteorological satellite.
(iii) Transmit receive technology – As depicted in Figure 14, transmit-receive chain consists of input filters, receivers, low noise amplifiers, demultiplexers, switches, driver amplifiers, power amplifiers, multiplexers and output filters.
(iv) High power amplifiers – APPLE and the INSAT-1 series satellites employed 4 W C-band Travelling Wave Tube Amplifiers (TWTAs), whereas the second-generation INSAT-2 series of satellites used 4 W to 10 W C-band Solid State Power Amplifiers (SSPAs) for similar applications. To provide improved Effective Isotropic Radiated Power (EIRP) for the extended coverage required in INSAT-2E and the INSAT-3 series satellites, higher power SSPAs (15 W to 19 W) and TWTAs (32 W/63 W) were used in C-band. To meet DTH service requirements, 140 W Ku-band TWTAs are used in the high-power stage in the INSAT-4 series. TWTAs can provide high power levels at higher frequencies with much better efficiency and reliability. Indigenous development efforts in this vital area, initiated at the Central Electronics Engineering Research Institute (CEERI), Pilani and Bharat Electronics Ltd (BEL), Bangalore, have culminated in the successful realization of 60 W C-band and 140 W Ku-band TWTs, which will be flown on forthcoming satellites. The 140 W Ku-band TWT indigenously developed by ISRO, CEERI and BEL is shown in Figure 16.
(v) Receivers, filters and switches – Indigenously developed contiguous input and output cavity filters in the demultiplexer (DEMUX) and multiplexer (MUX) designs have been extensively used in all INSAT/GSAT series satellites for channelization. Keeping pace with state-of-the-art development in this crucial area, dielectric-loaded resonating filters for the DEMUX have been successfully flown in the GSAT-3 and INSAT-4 series. Monolithic Microwave Integrated Circuit (MMIC) technology is being introduced in a phased manner in the forthcoming GSATs for the realization of miniaturized receivers and channel amplifiers.
(vi) Meteorological payloads – Use of 3-axis stabilized platform from GEO enabled realization of compact VHRR capable of providing images in visible and Thermal Infra Red (TIR) regions of electromagnetic spectrum. The VHRR payload development involved realization of a whisk broom imager using eight inch reflective optical telescope, a passive cooler to cool down the mercury cadmium telluride detector to 105 K and a scan mechanism to provide bi-directional scanning. The INSAT-1 series imagers provided 2.75 km visible and 11 km TIR images. The second generation imagers were realized indigenously with improved spatial resolution of 2 km for visible and 8 km for TIR. This capability was further enhanced in INSAT-2E, Kalpana and INSAT-3A to imaging in three channels, viz. visible, TIR and water vapour channels.
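The quoted imager resolutions map directly to an instantaneous field of view (IFOV) seen from geostationary altitude; the small-angle sketch below, with the nominal 35,786 km GEO altitude, shows that the 2 km visible resolution corresponds to roughly 56 microradians.

```python
GEO_ALTITUDE_M = 35_786_000  # nominal geostationary altitude, m

def ground_resolution_m(ifov_rad: float, altitude_m: float = GEO_ALTITUDE_M) -> float:
    """Sub-satellite ground sample ~= IFOV (rad) * range, small-angle approximation."""
    return ifov_rad * altitude_m

def required_ifov_urad(resolution_m: float, altitude_m: float = GEO_ALTITUDE_M) -> float:
    """IFOV in microradians needed for a given sub-satellite resolution."""
    return resolution_m / altitude_m * 1e6

# The 2 km visible resolution of the second-generation VHRR corresponds
# to an IFOV of roughly 56 microradians.
print(round(required_ifov_urad(2000)))  # 56
```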
**Platform system technologies:** Platform systems support the payload operation, protect and insulate the payload from the launch environment, guarantee long mission life (>15 years) and are optimized to provide the maximum possible resources for the payload. The payload demands significantly better performance from the spacecraft bus in terms of higher DC power, higher heat-rejection capacity, larger payload mounting area, increased antenna accommodation, stringent platform stability and longer life. The bus system also ensures that the pointing requirements of the payload are met.
(i) ISRO bus architecture – Keeping in view various national communication requirements, ISRO developed an array of communication spacecraft bus configurations, which cater to both the in-house and commercial needs of ISRO. The standard bus configurations presently operational or under development are the I-1K, I-2K, I-3K and I-4K buses.
(a) I-1K Bus – The I-1K bus is developed mainly to carry out geostationary imaging or communication missions with a lift-off mass of around 1100 kg and is compatible with the Indian PSLV launch vehicle. It provides a very cost-effective way of launching a reasonably useful 100 kg payload to GEO. The KALPANA-1 spacecraft carrying the VHRR payload is an example of this operational bus. The Indian Regional Navigational Satellite System (IRNSS) series of satellites and the Chandrayaan-1 mission are configured on the I-1K bus.
(b) I-2K Bus – The I-2K bus is the workhorse for the 2-tonne class of INSAT/GSAT satellites and is primarily configured taking into account indigenous Geosynchronous Satellite Launch Vehicle (GSLV) capabilities; it is also compatible with most other commercial launch vehicles. With a spacecraft lift-off mass between 2 and 2.3 tonnes and a maximum payload mass of 300 kg, the I-2K bus can support 12 years of mission life. It is ideally suited for configuring satellites with payload power up to 2.5 kW and antennas up to 2.2 m in diameter. The GSAT-2 and GSAT-3 (EDUSAT) satellites employing this bus are already providing operational services. INSAT-4CR, GSAT-4, 5, 6, 7, 9 and INSAT-3D are under realization with the I-2K bus configuration.
(c) I-3K Bus – The I-3K bus is the operational workhorse for the realization of communication satellites. With a lift-off mass in the range of 2.5–3.5 tonnes, a payload mass of 300–400 kg and a maximum payload power of 4.8 kW, the I-3K bus provides 15 years of mission life. Satellite operators worldwide have shown considerable commercial interest in inducting I-3K-based satellites into their fleets. Six spacecraft in this category, namely INSAT-2E, 3A, 3C, 3E, 4A and 4B, are operational. INSAT-4B was launched in March 2007.
(d) I-4K Bus – I-4K bus is presently under development and it is compatible with GSLV Mk-III launch vehicle, which is also under development by ISRO. With lift-off mass in the range of 4–4.5 tonnes and payload mass in the range of 450–550 kg and maximum payload power of 7 kW, I-4K bus will provide 15 years of mission life. The bus is being evolved incorporating the advanced technologies such as deployable thermal radiator panels, complex deployment mechanisms, electric propulsion and use of advanced materials.
A comparison of the capabilities of ISRO buses is given in Table 2 and a disassembled view of the spacecraft subsystems in Figure 17.
(ii) Bus subsystem technologies
(a) Structure – The primary functions of the spacecraft structure are to provide mechanical support within the framework of the spacecraft configuration; to satisfy subsystem requirements such as alignment of sensors, actuators and antennas; to meet system requirements for launch vehicle interfaces, integration and tests; and to survive all direct and cumulative static and dynamic load combinations occurring during fabrication, testing, ground handling, transportation, launch and orbit manoeuvres without permanent deformation.
Lightweight composite structures made of high-modulus carbon fibre achieve a high strength-to-mass ratio. Carbon fibre reinforced plastic (CFRP) structures have already flown on the METSAT and GSAT-3 satellites and are planned for all future INSAT/GSAT missions. Use of advanced materials to fabricate structures with better thermal and electrical conductivity, in addition to superior strength, stiffness and hygroscopic properties, is planned for the future. The spacecraft bus configuration is evolving towards a modular bus concept, where the equipment panels which house the payload, housekeeping and propulsion elements are modular in design and can be tested independently and integrated at a later stage. It is implemented for the first
Table 2. ISRO communication payload bus capabilities
| Parameters | I-1K | I-2K | I-3K | I-4K |
|----------------------------------|----------|----------|----------|----------|
| Mission | MET | COM/MET | COM/MET | COM |
| Lift off mass (kg) | 1100 | 2300 | 3100 | 4500 |
| Max. propellant loading (kg) | 850 | 1400 | 1700 | 2400 |
| S/C dry mass (kg) | 500 | 900 | 1400 | 2100 |
| Payload mass (kg) | 100 | 250 | 300–400 | 450–550 |
| Max solar power gen. (W) | 1000 | 2800 | 6000 | 10000 |
| Solar cell technology | Si/GaAs | Si/GaAs/MJ | Si/GaAs/MJ | Si/GaAs/MJ |
| Battery capacity/technology      | 24 AH NiCd | 70 AH NiH$_2$ | 100 AH Li–Ion (3) | 160 AH Li–Ion (3) |
| Payload power (W) | 550 | 2400 | 4800 | 7000 |
| Date of 1st launch | 12/9/2002 | 8/5/2003 | 3/4/1999 | |
| No. of S/C in orbit | One | Two | Six | |
Figure 17. Disassembled view of spacecraft sub-systems.
time in INSAT-4B and will be followed from GSAT-5 onwards.
(b) Mechanism – Due to constraints of the launch vehicle dynamic envelope, satellite appendages such as antenna reflectors and solar arrays have to be in the stowed condition during launch and need to be deployed once the satellite is injected into orbit, as per the sequence of initial-phase mission operations. Deployment mechanisms facilitate stowing of these appendages during launch through appropriate hold-downs, and deployment in orbit through ground command.
Simple deployment mechanism concepts for the solar array, with one panel on each wing in APPLE, have been considerably innovated and matured to deploy lightweight, rigid solar arrays with three to four panels in each wing, in what is now popularly known as the accordion deployment scheme. The scheme has been successfully used for all the INSAT/GSAT series satellites. Multipurpose satellites like INSAT-2A/2B/2E and 3A use a remarkable indigenous development, a solar sail and boom, to compensate for the radiation pressure on the south-wing solar panels without causing thermal heat input into the VHRR payload passive cooler. Deployment schemes for reflectors, including shaped and dual-gridded configurations, are used in the INSAT-2/3/4 series satellites, handling increased reflector mass, complexity and finer pointing/coverage requirements. To meet geo-mobile communication requirements, a large antenna of 12 m or bigger is required to reduce the ground terminal to a hand-held system. Such a large antenna has to be unfurlable, with complicated deployment mechanisms. A 5.5 m unfurlable antenna with a complex multi-stage deployment mechanism (Figure 18) is already under development and will be used on the GSAT-6 spacecraft to provide S-band mobile multimedia services.
(c) Thermal control system – The primary function of the spacecraft thermal control system is to maintain the temperatures of bus electronics packages, batteries, propulsion components, IR detectors and payload elements within stipulated limits, under varying diurnal and seasonal sun-satellite geometry and internal heat dissipation. The passive thermal control system for the INSAT/GSAT series has been evolved using optical surface reflectors, multilayer insulation blankets and thermal heaters. In the low-power second-generation satellites, heat conduction to the radiators was achieved using techniques such as heat sinks and conductive modular blocks.
Use of a heat pipe network has paved the way for efficient conduction of heat to the radiators in the high-power INSAT-3/4 series satellites. Qualification of indigenously developed heat pipes has enabled accommodation of higher payload power dissipation in the I-2K series of satellites. Large unified dual-core criss-cross heat-pipe-embedded panels to handle higher thermal loads were used first in INSAT-4A. High-power radiatively cooled TWTAs employing fin or cone-type collectors are used in the GSAT-3 and INSAT-4A Ku-band transponders and in future spacecraft. Deployable Thermal Radiator (DTR) panels to increase the heat-rejection area, and a Reservoir Embedded Loop Heat Pipe (RELHP) system consisting of multiple evaporators within the payload panel, a condenser within the deployable radiator panel, and vapour and liquid tubes connecting these elements, capable of high heat-transport capacity and of dissipating a few kilowatts of heat, will be used in future broadband payload satellites.

Table 3. Electric propulsion system applications

| Orbit                  | Delta-v (m/s) | Satellite mass (kg) | Thrust (mN) | Time       | Power (kW) |
|------------------------|---------------|---------------------|-------------|------------|------------|
| GEO station keeping    | 50/year       | 2000–4000           | 20–200      | 2–6 h/day  | 1.2–2.5    |
| HEO orbit control      | 100/year      | 2000–3000           | 10–250      | 3–6 h/day  | 2–2.5      |
| GTO                    | 5 k–15 k      | 5000–8000           | 100–2000    | 3–12 months| 3–5        |
| LEO-LEO orbit raising  | 20            | 300–1500            | 10–100      | Weeks      | 0.5–1      |
(d) Propulsion system – Orbit raising, orbit maintenance and attitude control require a specifically designed propulsion system that functions as per the control regime defined by the onboard attitude and orbit control system. The experimental communication satellite APPLE employed a solid booster for orbit raising from GTO to GSO and monopropellant hydrazine-based attitude control thrusters. From the INSAT-2 series onwards, an indigenous unified bipropellant system has been used, with a 440 N liquid apogee motor (LAM) and 22 N/10 N bipropellant thrusters for attitude control and station keeping. R&D efforts for the realization of the unified bipropellant system involved development of the bipropellant LAM, 22 N/10 N thrusters, propellant tanks, pressurant tanks to hold helium at 250 bar, and a host of latch valves, regulators, filters and transducers. A specific impulse (Isp) above 315 s for the LAM and above 290 s for the thrusters has been achieved. A typical bipropellant system scheme is depicted in Figure 19.
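The quoted Isp figures determine the propellant budget through the Tsiolkovsky rocket equation. The sketch below uses an assumed 1400 kg dry mass and a ~1500 m/s GTO-to-GSO apogee manoeuvre purely for illustration; neither number is an INSAT mission value.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass_kg(dry_mass_kg: float, delta_v_ms: float, isp_s: float) -> float:
    """Tsiolkovsky rocket equation: m_prop = m_dry * (exp(dv / (Isp * g0)) - 1)."""
    return dry_mass_kg * (math.exp(delta_v_ms / (isp_s * G0)) - 1)

# Assumed 1400 kg dry spacecraft, ~1500 m/s apogee burn, LAM Isp of 315 s:
# roughly 875 kg of propellant is consumed.
print(round(propellant_mass_kg(1400, 1500, 315)))
```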
Electric propulsion offers a cost-effective and sound engineering solution for space applications. Use of a high-performance electric propulsion system (EPS) results in reduced chemical propellant and tankage requirements, in exchange for significant usage of power. Electric propulsion thrusters are classified, according to the mechanism of transfer of electric energy to kinetic energy, into three broad categories: electrostatic, electromagnetic and electrothermal propulsion. Table 3 lists the applications of various EPS systems\(^{15}\).
EPS is generally used for north–south station keeping (NSSK) operations. Since less than 5 m/s per year of delta velocity needs to be imparted for east–west station keeping (EWSK), EPS usage is not advantageous for EWSK, as the overheads would negate the benefits. Similarly, use of EPS for orbit raising involves months of continuous operation and a very long wait to reach GSO, nullifying the advantage; however, it could serve as a backup option for conventional chemical propulsion. Two indigenously developed and two imported SPTs will be flown on board GSAT-4 to cater for north–south station-keeping operations. Figure 20 depicts the GSAT-4 ion thruster configuration.
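The case for EPS in NSSK can be seen by comparing propellant masses via the rocket equation at the ~50 m/s per year delta-v from Table 3; the 3000 kg end-of-life mass and the 1600 s SPT-class Isp below are assumed illustrative values, not GSAT-4 figures.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def nssk_propellant_kg(final_mass_kg: float, mission_years: float,
                       isp_s: float, dv_per_year: float = 50.0) -> float:
    """Propellant for north-south station keeping at ~50 m/s per year
    (Table 3), via the Tsiolkovsky rocket equation."""
    dv = dv_per_year * mission_years
    return final_mass_kg * (math.exp(dv / (isp_s * G0)) - 1)

# Assumed 3000 kg end-of-life spacecraft over a 15-year mission:
chem = nssk_propellant_kg(3000, 15, isp_s=290)   # bipropellant thruster
ept  = nssk_propellant_kg(3000, 15, isp_s=1600)  # assumed SPT-class Isp
print(round(chem), round(ept))  # electric needs roughly one-sixth the mass
```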
(e) Electrical power system\(^{15}\) – Electrical power system provides regulated power to housekeeping subsystems and payload elements. It involves power generation, conditioning and management and storage for eclipse support.
Normally, power generation is achieved by converting solar radiation into electrical power using photovoltaic cell technology. As the spacecraft power requirement increases, higher power generation is desirable without increasing the solar array size. Traditional single-bandgap Si solar arrays are relatively inefficient in converting solar radiation into electrical power. Development of dual-junction cells, with a technique to epitaxially dope layers of GaAs on a germanium substrate, increases solar cell efficiency to 25%. Extensions of this process led to the design and fabrication of multi-junction or cascaded cells. Cells that stack three to four junctions using III–V compound materials such as GaAs, GaInP, GaInAsP and GaSb have shown efficiencies of the order of 28–35%. Multi-junction/cascaded cells with efficiency of the order of 27% are already in use in the I-2K and I-3K buses of ISRO.
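The practical effect of higher cell efficiency is a smaller array for the same power. A rough sizing sketch, assuming normal incidence, beginning-of-life conditions and a 0.9 packing factor (all illustrative assumptions):

```python
SOLAR_CONSTANT = 1366.0  # W/m^2 at 1 AU

def array_area_m2(power_w: float, cell_efficiency: float,
                  packing_factor: float = 0.9) -> float:
    """Array area needed at normal incidence, beginning of life."""
    return power_w / (SOLAR_CONSTANT * cell_efficiency * packing_factor)

# Generating 6 kW with assumed 14% Si cells vs 27% multi-junction cells:
# the higher-efficiency cells roughly halve the required area.
print(round(array_area_m2(6000, 0.14), 1), round(array_area_m2(6000, 0.27), 1))
```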
For full eclipse support, NiH$_2$ batteries are presently used in the high-power satellites. Li–Ion is the fastest-growing battery technology and is being pursued to meet the larger eclipse power demand. It is mass- and volume-efficient and has a wider temperature range of operation. The energy density of Li–Ion technology is 130 Wh/kg, compared to 30–40 Wh/kg and 50–60 Wh/kg for NiCd and NiH$_2$ technologies respectively. INSAT-4B uses a 100 AH Li–Ion battery and GSAT-4 is designed with a 36 AH Li–Ion battery for eclipse support.
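The energy-density figures translate into battery mass roughly as follows; the 3 kW eclipse load, the ~1.2 h maximum GEO eclipse duration and the 70% depth of discharge are assumed values for illustration only.

```python
def battery_mass_kg(eclipse_load_w: float, eclipse_hours: float,
                    energy_density_wh_per_kg: float,
                    depth_of_discharge: float = 0.7) -> float:
    """Battery mass to carry the eclipse load, limited by allowed DoD."""
    required_wh = eclipse_load_w * eclipse_hours / depth_of_discharge
    return required_wh / energy_density_wh_per_kg

# Assumed 3 kW eclipse load over the ~1.2 h maximum GEO eclipse:
ni_h2  = battery_mass_kg(3000, 1.2, 55)   # 50-60 Wh/kg NiH2 per the text
li_ion = battery_mass_kg(3000, 1.2, 130)  # 130 Wh/kg Li-Ion per the text
print(round(ni_h2), round(li_ion))  # Li-Ion saves well over half the mass
```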
In the power electronics area, use of a modular and scalable power generation architecture to suit power demands from 3 to 30 kW is being planned. Use of planar magnetic components and state-of-the-art radiation-hardened MOSFETs achieves 96% efficiency for direct energy transfer. A higher bus voltage (70 V or 100 V, as against 42 V) results in reduced bus current and power harness mass. All the INSAT/GSAT series satellites use a 42 V regulated bus; GSAT-4 will, for the first time, use a 70 V bus for the Ka-band payload.
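The harness benefit of a higher bus voltage follows from I = P/V and the ohmic loss I²R; the 6 kW load and 10 mΩ harness resistance below are assumed values chosen only to show the scaling.

```python
def harness_loss_w(load_w: float, bus_voltage_v: float,
                   harness_resistance_ohm: float) -> float:
    """Ohmic loss in the power harness: I^2 * R with I = P / V."""
    current_a = load_w / bus_voltage_v
    return current_a ** 2 * harness_resistance_ohm

# Assumed 6 kW load through a 10 milliohm harness: moving from a 42 V
# to a 70 V bus cuts the dissipation by the square of the voltage ratio.
print(round(harness_loss_w(6000, 42, 0.01), 1),
      round(harness_loss_w(6000, 70, 0.01), 1))
```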
(f) Attitude and orbit control system – The attitude and orbit control system (AOCS) consists of attitude sensors (sun, earth and/or star sensors, and an inertial reference unit or gyros), control electronics and actuators (momentum and reaction wheels, magnetic torquers, bipropellant thrusters, the liquid apogee motor and solar flaps). The AOCS design is based on a body-stabilized momentum bias system, employing momentum wheels in a V configuration and reaction wheels in an L configuration as a backup. The overall platform stability is of the order of ±0.2 degrees in the pitch and roll axes and ±0.4 degrees in the yaw axis. Additional features such as antenna beam pointing from inclined-orbit operation up to 2.7 degrees, to enhance spacecraft life by three years, as well as satellite manoeuvrability for payload in-orbit tests, are also included in the AOCS.
Use of miniaturized gyros, hemispherical resonator gyros (HRGs), micro-electromechanical sensors and multifunctional sensors is planned in the near future. Fast-response sensors and actuators will be used to provide a stable platform. Micro-machined optics for sensors, improving system performance many-fold, and control of the satellite bus for improved jitter performance will be the order of the day. Future systems will use intelligent designs with the capability of on-orbit reconfiguration and modification of the soft-core architecture to support dynamic mission requirements.
(g) Telemetry, tracking and command system – The Telemetry, Tracking and Command (TT&C) system is configured to serve the requirements of spacecraft health monitoring through telemetry, commanding for control, and tracking and ranging for orbit determination, during all phases of the mission. Satellite parameters are monitored at a 1 kb/s data rate in normal and/or dwell format. The dwell format facilitates super-commutated sampling of specific parameters as required by ground operations. The normal and dwell formats are transmitted using 32 kHz and 128 kHz sub-carriers respectively. The modulation scheme employed for telemetry transmission is PCM/PSK/PM. The telecommand system provides the link between ground control and the different subsystems on board the spacecraft, decoding the command information and data for their operation. The telecommand system employs a shortened BCH code for error-free command message reception. The command message consists of four command frames with a 10-bit gap between the frames and a 5-bit satellite address. The command bit rate is 100 bit/s and the modulation scheme employed for commanding is PCM/FSK/FM. The ASIC-based TT&C baseband system is modular in design, and the same basic system can, by enhancement or pruning, be used to meet the requirements of the I-1K to I-4K buses.
The TT&C RF system consists of telecommand receiver and telemetry transmitter operating in C-band. TT&C transponder is also used for tone ranging. The system is designed with omni directional coverage for telecommand and omni/global coverage for telemetry. TT&C system omni coverage is used during the initial phase mission operations in transfer orbit and under the loss of lock conditions in on orbit.
(h) Bus electronics technologies$^{13,15}$ – The AOCS, power and TT&C systems are designed with the extensive use of microprocessors, Applications Specific Integrated Circuits (ASICs), Field Programmable Arrays (FPGAs), Hybrid Micro Circuits (HMCs) and Micro Integrated Circuits (MICs). The basic designs undergo morphological changes periodically with availability of space qualified cutting edge device technologies. The technologies likely to be considered in the near future are:
- Low power CMOS devices
- Deep sub-micron ASICs
- High performance complex radiation hardened FPGAs
- Multi Chip Modules (MCMs)
- System On Chip (SOC)
- Micro Electromechanical Systems (MEMS)
Use of advanced technologies facilitates design of versatile, miniaturized bus management unit (BMU), employing 1553 bus for data transfer that combines functions of TT&C base band, AOCE and sensor electronics. Such a BMU is currently planned in GSAT-4 and INSAT-3D.
**Remote sensing spacecraft**
ISRO established an operational Remote Sensing (RS) satellite system by launching its first satellite, IRS-1A (ref. 4), in March 1988, followed by a series of IRS spacecraft. The IRS-1C/1D satellites, with their unique combination of payloads, took a lead position in the global remote sensing scenario. Realizing the growing user demands for a ‘multi’-level approach in terms of spatial, spectral, temporal and radiometric resolutions, ISRO launched the Resourcesat-1 satellite in 2003 as a continuity mission with enhanced capability. This series of spacecraft provided valuable data for various applications covering land features. The Oceansat-1 mission, launched in 1999 with an ocean colour monitor and a microwave radiometer, catered to ocean and meteorology-related applications. The Technology Experiment Satellite, with the novel concept of step-and-stare imaging, and Cartosat-1, with 2.5 m resolution stereo imaging capability, enhanced the cartographic and strategic application potential. Cartosat-2, launched on 10 January 2007, is a highly agile satellite with a PAN camera providing images with better than 1 m resolution. This mission provides spot imaging using ‘paint-brush’ coverage for any given area of interest.
Figure 21 shows the evolution of ground resolution since 1988. A dedicated satellite for climate/atmosphere research and applications called MEGHA-TROPIQUES (scheduled for launch in 2009) will study the oceanic winds, humidity profile, liquid water, clouds, ice-clouds and radiation budget over the tropics. Commensurate with the ever increasing demands from the user community for higher spatial, radiometric, spectral and temporal resolutions, new technologies are being evolved for the remote sensing platforms – both in the areas of payload instruments and mainframe systems. Though the requirements of small satellite bus for specific scientific missions are growing all over the world, operational RS missions prefer medium/large spacecraft platforms to accommodate multiple payloads to meet their objectives. Hence, it is very essential to evolve technologies to minimize weight, volume and cost for the realization of such missions.
**Payload instruments**
As the science of remote sensing encompasses a wide range of the EM spectrum, from visible wavelength to millimeter waves, it calls for technology developments in diversified fields like optics, sensors/detector arrays, thermal control elements like coolers, antenna arrays, high power and low noise microwave devices, high speed analogue and digital electronics, etc. A brief review of these technologies is given in the following paragraphs:
*Electro-optical payloads:* The Earth Resources Technology Satellite (later named LANDSAT), launched in 1972, was the first RS satellite, and it set in motion the development of RS technology all over the world. It carried a multi-spectral scanner (MSS), providing a resolution of 83 m in 4 bands in the VNIR. This was followed by a series of seven spacecraft (up to Landsat-7). From Landsat-5, the Thematic Mapper replaced the MSS, with 7 spectral bands covering the VNIR–SWIR–TIR channels. All these instruments adopted electro-mechanical ‘whiskbroom’ scanning with a 410 mm aperture Ritchey–Chrétien (RC) telescope and a scan mirror, in conjunction with discrete detector elements (photo-diodes) in the split focal plane. In spite of the less reliable electro-mechanical components, the Landsat series provided voluminous data of the entire globe during the last three decades.
With the advent of solid-state Charge Coupled Device (CCD) imaging technology in the late seventies for commercial applications, the RS community quickly adopted these devices and evolved ‘push broom’ scanning systems. This resulted in a dramatic reduction in the weight of the camera system, by a factor of ~2–3, in addition to volume and power reductions. Most of the subsequent international RS missions, including the Indian Remote Sensing (IRS) and French SPOT satellites, used CCDs in their payloads. With advances in photo-lithographic techniques, the pixel dimensions could be reduced, thereby increasing the array length and improving spatial resolution, still with a wider swath. For example, CCDs with 13-micron pixels and 2 K elements were used in the first IRS-1A/1B missions, as against the 7-micron pixels with 24 K elements available now. Presently, CCDs are developed as linear and area array devices, including Time Delay Integration (TDI) devices, covering the VNIR and SWIR range. Extending this technology to cover thermal IR channels with an integrated focal plane cooler is under development. In the area of
imaging optics, the basic trade-off between reflective and refractive (or a combination of both) optical systems is done, based on the parameters like orbit, ground resolution, type of detector, focal length, aperture, SNR, etc.
The optical systems realized for the IRS series of spacecraft make use of refractive, reflective and combined designs. The use of individual refractive optics for each multi-spectral band enabled optimization of payload radiometric and geometric performance, as the optics of each band could be designed without imposing on it the requirements of the other spectral bands. Individual refractive optics per spectral band were used in payloads such as LISS-1, LISS-2, LISS-3, WiFS, AWiFS and OCM. The use of refractive optics in OCM with a tele-centric design enabled the realization of a very large FOV (86 degrees) by using an aspheric first element. While refractive optics enabled medium-resolution multispectral imaging, reflective optics was used for high resolution imaging systems. Realization of high resolution images requires long to very long focal lengths, which, along with the requirement of sufficient photon energy collection, demands large collecting apertures and larger spectral bandwidths. By its very nature, the performance of refractive optics is sensitive to wavelength, because the refractive index of the material is a function of wavelength; this property is exploited in the optical design to compensate for chromatic aberrations. As refractive optical systems require light to travel through the bulk of the material, it becomes essential for the bulk transmission characteristics of the material to meet stringent performance demands over the full aperture, and this imposes a severe constraint on the use of large aperture refractive optics. Reflective optics overcomes the problem of wavelength-dependent performance, as it does not make use of the bulk transmission properties of the base material. As the optical aperture grows beyond about 150 mm, one usually switches over to reflective systems. Reflective optics, however, usually has a limited FOV, restricting the realizable swath from a given satellite altitude.
The design of a compact telescope using unobscured three-mirror aspheric optics to overcome this limitation was one of the major technological developments which enabled ISRO's remote sensing data from IRS-1C and Cartosat-1 to make their mark as a source of international data. As the focal length increases further, refractive corrector elements are needed to increase the field of view of the telescope; such systems have been developed for the TES and Cartosat-2 series.
Development of optics with excellent image quality over a wide field-of-view under the severe space environment is a real challenge. Recent developments in optical glass materials, precision mechanical fabrication equipment, space qualified adhesives and alloy metals with matching thermal coefficients of expansion have made it possible to realize precision imaging lenses. With the latest optical design optimization software, it is possible to miniaturize refractive systems (a 12-band ocean colour monitor can be realized with a mass of <20 kg). For large reflective optics using Zerodur material, light-weighting of up to 60% has been achieved by scooping out the glass. Future systems will use silicon carbide and other materials with high stiffness and low weight. Cartosat-3, the next generation IRS satellite with 0.25 m resolution, will use 1.2 m optics with 60% of the weight removed. The use of adaptive optics, acousto-optical devices, in-orbit focusing using MEMS, and large-area lightweight mirrors is envisaged in future missions.
The availability of high-speed devices has significantly compacted the signal processing electronics, so that data from large CCD/TDI arrays can be handled efficiently. Using devices like FPGAs, ASICs and single-chip analogue and digital signal processors, the camera electronics have shrunk in size and weight and now form part of the electro-optics module. Further developments to integrate the processing electronics as part of the Focal Plane Array (FPA) are under way. The design of the payload camera structure has evolved with new materials and an optimized weight/stiffness ratio: from the aluminum-welded structures used in earlier IRS missions, technology developments have enabled the use of lightweight composite materials for Cartosat and future missions. Figure 22 shows the improved efficiency of the IRS payloads (ratio of mass per unit ground resolution) over the last two decades.
*Figure 22.* Improvements in payload efficiency (mass/resolution).

**Microwave and millimeter wave payloads:** Owing to their inherent advantages of all-weather operability and usefulness for the study of global meteorological/atmospheric parameters, the demand for microwave RS instruments is steadily increasing. Though NASA demonstrated the capabilities of microwave instruments for various applications through its SEASAT mission as early as 1978, it took considerable time for other countries to embark upon this technology, until ESA launched ERS-1 in 1991. Microwave instruments are classified as active (synthetic aperture radar, altimeter, scatterometer, etc.) and passive (radiometers, sounders). ISRO flew a 3-channel (19, 22 and 31 GHz) radiometer called 'SAMIR' on-board the Bhaskara-I and II satellites in 1979 and 1981 for the study of atmospheric water vapour/liquid water content over the oceans. This was followed by the Multi-frequency Scanning Microwave Radiometer (MSMR) flown onboard the Oceansat-1 satellite in 1999. MSMR was a four-channel (6.6, 10.6, 18 and 21 GHz) dual polarization radiometer, useful for estimation of sea surface temperature, cloud water content and ocean surface wind magnitude. Presently, a Ku-band (13.515 GHz) pencil beam scatterometer is under development, to be flown on-board the Oceansat-2 satellite. This will provide data on global scale wind vector fields over the oceans.
Also, a C-band synthetic aperture radar with <3 m resolution is being developed, to be flown on the Radar Imaging Satellite (RISAT). Development of advanced millimeter wave payloads, such as a synthetic aperture radiometer and atmospheric temperature and humidity sounders, is being taken up. The major technology developments for these instruments include large-area planar antenna arrays, high power pulsed transceivers, low noise amplifiers and receivers, down converters, corrugated horns, long-life and stable electro-mechanical scanning motors, heat-pipe-embedded substrates, ground calibration systems, and software tools for geo-physical model development, data validation, etc. Remote sensing applications are discussed in detail in Navalgund et al. (this issue).
**Remote sensing satellite bus elements**
While the design of spacecraft for the IRS series has many commonalities with that for the INSAT series, there are many crucial differences. Both platforms are required to be 3-axis stabilized and earth-oriented, with stringent attitude control to enable imaging from the optical sensors onboard. Global remote sensing data collection demands a polar orbit, as against an equatorial orbit for INSAT. The geometrical relation between the viewing axis of the observing instrument, the ground feature, and the sun (the source of illumination) plays a major role in remote sensing observations. This requires a sun synchronous orbit with a suitable choice of equator crossing time. The spacecraft subsystems thus have to be designed for entirely different sets of constraints in the equatorial geostationary orbit and the polar sun synchronous low earth orbit. While the communication link for the geostationary INSAT enables $24 \times 7$ access to the spacecraft from the ground station, the IRS series spacecraft subsystems have to cater to limited durations of communication between ground station and spacecraft. And while the INSAT AOCS has to grapple with the demands of moving the satellite from geostationary transfer orbit to the specified geostationary orbit, and with station keeping manoeuvres to keep the satellite from drifting away from its designated longitude and inclination, the LEO IRS AOCS has to meet the demands of a frozen perigee and tight ground track control to enable the orbit repeatability demanded by operational remote sensing.
Commensurate with the improvements in payload instrument technology, the spacecraft mainframe systems also need to evolve to support the payload functions. The major thrust areas are lightweight structures, high efficiency power systems, high speed data handling and storage systems, intelligent on-board processing architectures to achieve autonomy, highly stable and agile platforms, thermal control for high power dissipation elements, etc.
**Structure:** Continuing efforts in developing lightweight composite structural materials for the satellite bus and the payload platforms resulted in significant weight reduction in realizing the 600 kg agile satellite Cartosat-2 (Figure 23). The structural mass has been reduced from 16% of total mass in IRS-1A/1B to about 10% in Cartosat-2.
(i) **Electrical power system** – Power generation on IRS satellites has migrated from single-junction silicon solar cells to multi-junction gallium arsenide cells with higher efficiencies, thus improving the power/weight ratio of the solar panels. For example, in IRS-1A the power/weight ratio was 18.4 W/kg using silicon cells, compared to 41 W/kg in RISAT using triple-junction cells. Development of thin film solar cells/panels is in progress. In the area of power storage, nickel–cadmium batteries of a wide capacity range (18–40 AH) were used in IRS spacecraft. To meet the high power requirements, lithium–ion and nickel–hydrogen batteries are planned to be used in future missions like Chandrayaan-1, RISAT and Megha-Tropiques.
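The improvement in specific power translates directly into panel mass savings. A back-of-the-envelope sketch, using only the 18.4 W/kg and 41 W/kg figures quoted above (the 2 kW bus demand is a hypothetical example, not a figure from the text):

```python
# Rough panel-mass comparison from the specific-power figures quoted in
# the text: 18.4 W/kg (IRS-1A, silicon cells) vs 41 W/kg (RISAT,
# triple-junction cells). The 2 kW demand is a hypothetical example.

def panel_mass(power_w, specific_power_w_per_kg):
    """Solar panel mass (kg) needed to generate a given power (W)."""
    return power_w / specific_power_w_per_kg

demand_w = 2000.0                       # hypothetical bus power demand
m_si = panel_mass(demand_w, 18.4)       # silicon-cell panels
m_mj = panel_mass(demand_w, 41.0)       # triple-junction panels

print(f"Silicon panels:         {m_si:.1f} kg")
print(f"Triple-junction panels: {m_mj:.1f} kg")
print(f"Mass saving:            {m_si - m_mj:.1f} kg")
```

For the same 2 kW demand the panel mass drops by more than half, which is the point Figure 22's efficiency trend makes at the payload level.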
(ii) **Thermal control system** – Significant developments have taken place in the area of thermal control. Precise thermal control of large payloads within narrow limits calls for detailed analysis of a mathematical model with a large number of nodes\(^{16}\) (about 450 in IRS-1A as against ~2500 in Resourcesat). The use of thermo-electric coolers to cool the star sensor CCD detector and the short wave infrared detector of the imaging payload, and the development of ammonia-filled heat pipes to take heat away from the focal plane arrays, are a few examples of this development. Development of a two-stage Stirling cooler to achieve low temperatures for thermal IR detectors and photon detectors is under way.

*Table 4.* Types of attitude sensors flown in IRS missions

| Sensor type | Attitude measurement accuracy (degrees) | Sensors/spacecraft | Heritage |
|------------------------------------|----------------------------------------|--------------------|---------------------------|
| $4\pi$ sun sensor | 1.0 | 4 | All IRS S/C |
| Fine sun sensor | 0.5 | 1 | IRS-1A/1B |
| Precision yaw sensor | 0.2 | 2 | IRS-1A/1B |
| Digital sun sensor | 0.125 | 2 | IRS-1C/1D |
| Conical scanning earth sensor | 0.1 | 2 | All IRS S/C (except Carto-2) |
| Linear array CCD star sensor | 0.8 | 2 | IRS-1A/1B |
| Area array CCD star sensor | 0.01 | 2 | IRS-P5/P6/Carto-1/Carto-2 |

*Figure 24.* Data rate requirements.
(iii) **Data handling and transmission** – As the requirements on the spatial and radiometric resolution of the images keep increasing, the data volume, and hence the data rate to be transmitted from satellite to ground stations, also perpetually increases. Figure 24 shows the data rates for various spatial resolutions with a 70 km swath at an altitude of 817 km and with 10-bit quantization. Data transmission, which started with an S-band RF system, has shifted to X-band to get more bandwidth and hence handle higher bit rates. Cartosat-1 used a QPSK dual channel X-band system to handle a 210 Mbit/s data rate. The microwave imaging satellite RISAT will transmit data at 640 Mbit/s using a dual polarization technique. In order to meet future demands, data compression, on-board data processing, high-efficiency coding such as trellis coding, 8-PSK modulation and Ka-band transmission are planned. A significant development in the RF data transmission area is the Phased Array Antenna (PAA), which uses solid-state power amplifiers in place of the more cumbersome Travelling Wave Tube Amplifiers (TWTAs). Starting from the Technology Experiment Satellite (TES), the PAA has been successfully used in all IRS missions. An equally important development is the lightweight Dual Gimbal Antenna (DGA) used on the Cartosat-2 satellite.
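The scaling behind Figure 24 follows from first principles: for a pushbroom imager, the raw rate is the pixels per line (swath/resolution) times the lines per second (ground-trace speed/resolution) times the quantization bits, so it grows as the inverse square of the resolution. A sketch using the text's 70 km swath and 10-bit quantization; the ~6.6 km/s ground-trace speed for an 817 km orbit is an assumed value, and compression and overheads are ignored:

```python
def raw_data_rate_bps(resolution_m, swath_m=70e3, ground_speed_mps=6.6e3,
                      bits_per_pixel=10, bands=1):
    """Uncompressed single-band pushbroom data rate in bits per second."""
    pixels_per_line = swath_m / resolution_m        # detector elements read out
    lines_per_second = ground_speed_mps / resolution_m  # line rate to avoid gaps
    return pixels_per_line * lines_per_second * bits_per_pixel * bands

for res in (20.0, 5.0, 2.5, 1.0):
    print(f"{res:5.1f} m -> {raw_data_rate_bps(res)/1e6:8.1f} Mbit/s")
```

The inverse-square growth is the reason the text lists compression, higher-order modulation and Ka-band links as necessities rather than options for sub-metre imagers.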
(iv) **Onboard data storage** – The on-board data storage systems have migrated from the electro-mechanical tape recorders flown in IRS-1C/1D to solid state recorders (SSRs) in later missions. With increased memory capacity requirements, new packaging techniques like vertically integrated 3D modules have been developed. A 64 Gb SSR was used in TES and Cartosat-2, and a 240 Gb SSR is being developed for RISAT.
(v) **Attitude and orbit control system** – In the area of platform pointing and stability, a linear array CCD-based star sensor was flown on IRS-1A/1B for measuring the attitude of the satellite as a supplement to the earth sensor. Subsequently, a CCD-based star sensor was developed and flown on IRS-1C/1D as a full-fledged three-axis attitude sensor. Since then, the star sensor has been the main attitude sensor onboard all IRS satellites. The spacecraft pointing accuracy, which is a measure of the geometrical accuracy of the imagery, has improved from 0.4 deg in IRS-1A to 0.05 deg in Cartosat. These star sensors, in conjunction with highly stable mechanical gyroscopes, have resulted in a platform stability of about $5 \times 10^{-5}$ deg/s. Table 4 lists the different types of attitude sensors and their evolution. Development of a fibre optic gyro, which will give better performance in terms of drift stability, is under way. All IRS satellites are three-axis stabilized and use reaction wheels as the main actuators. Earlier IRS satellites used a three-wheel configuration with a skewed wheel for redundancy; the present generation uses a tetrahedral configuration with four wheels. The demand for higher torque capability and angular momentum for the Cartosat-2 and RISAT satellites resulted in the development of 15 N m s and 50 N m s wheels respectively, with a torque capability of 0.3 N m.
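The agility such wheels buy can be estimated with a simple rest-to-rest bang-bang slew: accelerate at the wheel torque limit for half the manoeuvre and decelerate for the other half, checking that the wheel momentum limit is not hit at mid-manoeuvre. The 0.3 N m torque and 15 N m s momentum are the figures quoted above; the 100 kg m² spacecraft inertia is a hypothetical value for illustration:

```python
import math

def slew_time_s(angle_rad, inertia, torque, wheel_momentum):
    """Rest-to-rest bang-bang slew: returns (time, peak wheel momentum).
    Peak body rate (and wheel momentum) occurs at mid-manoeuvre."""
    t = 2.0 * math.sqrt(angle_rad * inertia / torque)
    h_peak = torque * t / 2.0          # momentum absorbed by the wheel
    if h_peak > wheel_momentum:
        raise ValueError("wheel saturates before mid-manoeuvre")
    return t, h_peak

# 30 deg slew; torque/momentum from the text, inertia is hypothetical.
t, h = slew_time_s(math.radians(30), inertia=100.0, torque=0.3,
                   wheel_momentum=15.0)
print(f"30 deg slew: {t:.1f} s, peak wheel momentum {h:.2f} N m s")
```

Under these assumptions a 30 deg slew takes under half a minute with ample momentum margin, which is the kind of agility paintbrush imaging on Cartosat-2 relies on.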
(vi) **Satellite positioning system** – In order to improve orbit determination and position accuracy, GPS-based satellite positioning systems (SPS) have been developed and are used on all present IRS platforms. The combination of star sensor attitude knowledge, SPS position knowledge and improved control algorithms, which include onboard IRU calibration, has resulted in a ground image location accuracy of 50 m for Cartosat-1, a significant achievement in satellite remote sensing.
(vii) **Integrated bus management unit** – A miniaturized bus management unit was developed for Cartosat-2. Making use of high speed microprocessors, FPGAs, ASICs, HMCs and surface-mount devices, this unit performs the functions of telemetry, telecommand, attitude and orbit control, and thermal control. This, along with the use of the MIL-STD-1553B bus for electrical interfaces between subsystems, has significantly contributed to weight reduction, particularly in the satellite-level electrical interconnections. In conjunction with the hardware development, software algorithms have been developed, particularly in the areas of control systems and orbit determination. The step-and-stare imaging in TES and the paintbrush type of imaging in Cartosat-2 are the result of complex algorithms realized by means of embedded software. The implementation of the Kalman filter resulted in low drift and jitter on IRS platforms.
**Space science, planetary missions and small satellites**
Even though the application of space technology for national development has been the main thrust for ISRO, space science has received special attention and has been yet another important element of the Indian space programme. The scientific instruments carried on-board the ARYABHATA, SROSS and IRS-P3 spacecraft, among others, have given impetus to space science research and have provided exciting and unique scientific results. This endeavour is being continued through the development of a long-term space science plan, which includes pursuing planetary exploration and astronomy as major directions of space activity in India. Presently ISRO has on hand three major scientific missions covering planetary science, astronomy and environmental science.
**Chandrayaan-1**
As a first step towards planetary exploration, the Indian mission to the Moon, called Chandrayaan-1, has been conceived (Figure 25). The main scientific objective of the mission is high resolution remote sensing of the Moon in the visible, near infrared, low energy X-ray and high energy X-ray regions, with the purpose of preparing a 3-dimensional atlas (with a high spatial and altitude resolution of 5 m) of regions of scientific interest on the Moon, and chemical mapping of the entire lunar surface for elements such as Mg, Al, Si, Ca, Fe and Ti with a spatial resolution of about 20 km. Simultaneous photo-geological, mineralogical and chemical mapping will enable us to identify different geological units and to test the early evolutionary history of the Moon. Chandrayaan-1 also includes AO (Announcement of Opportunity) payloads from international space agencies that complement the scientific investigations, including the possible detection of water ice at the poles.
The above scientific objectives will be achieved with: a Terrain Mapping Camera (TMC) in the panchromatic band, having 5 m spatial resolution and a 20 km swath; a hyper-spectral wedge filter camera operating in the 400–900 nm band, with a spectral resolution of 15 nm and a spatial resolution of 80 m over a 20 km swath; a laser ranging instrument with a height resolution of about 5 m; a collimated low energy (1–10 keV) X-ray spectrometer using a silicon-based swept charge X-ray detector for measuring fluorescent X-rays emerging from the lunar surface; and a high energy X-ray (10–200 keV) mapper, employing a CZT solid-state detector, to understand the transport of volatiles on the Moon. The instruments and the science expected from Chandrayaan-1 are discussed in detail in Agarwal et al. (this issue). The Chandrayaan-1 spacecraft will orbit the Moon in a polar orbit at an altitude of 100 km for a mission duration of two years. Since the launcher has to place the spacecraft at a distance of approximately 4 lakh km, the payload capability of the launcher is limited. Hence, the spacecraft design for Moon and other planetary missions differs from that for earth observation and communication missions, in the sense that emphasis must be placed on miniaturization: smaller weight and size, and low power requirements. For this reason the optical imaging payload uses an Active Pixel Sensor (APS), wherein the conversion from light energy to a voltage level takes place at the pixel level, minimizing the hardware and power requirements. For its spectral selection, in order to generate a large number of spectral bands, the hyper-spectral imager incorporates a new technology called a wedge filter (an interference filter with varying coating thickness along one dimension, so that the spectral content transmitted through it varies in that direction). Besides, for weight reduction, the telescope assembly uses a CFRP structure.
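The TMC figures quoted above fix the camera geometry. A small sketch, using only the 5 m resolution, 20 km swath and 100 km orbital altitude from the text, recovers the implied per-pixel angle, the full field of view and the detector line length:

```python
import math

# Geometry implied by the TMC figures in the text: 5 m resolution and
# a 20 km swath from the planned 100 km lunar orbit.
altitude_m = 100e3
resolution_m = 5.0
swath_m = 20e3

ifov_urad = resolution_m / altitude_m * 1e6              # angle per pixel
fov_deg = math.degrees(2 * math.atan(swath_m / 2 / altitude_m))
pixels = swath_m / resolution_m                          # elements per line

print(f"IFOV: {ifov_urad:.0f} microradians")
print(f"Full field of view: {fov_deg:.1f} degrees")
print(f"Pixels across swath: {pixels:.0f}")
```

The 4000-element line and 50-microradian IFOV are modest by earth-observation standards; the design pressure here is mass and power, which is why the APS detector and CFRP structure are chosen.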
The laser ranging instrument incorporates both a laser source and a receiver. The X-ray instrument uses an SCD detector for low energies and a CZT detector for high energies, with the associated complexity of thermal management for optimal performance with low background noise.
Eleven science payloads are being flown on the Chandrayaan-1 mission, covering photo-selenological features, mineralogical mapping, chemical mapping, magnetic field anomalies, and the possible detection of water ice at the poles of the Moon. The payloads require different orientations for their observations: the optical instruments look at the Moon, the X-ray sky monitor looks at the Sun, and the payloads for water-ice detection look at the poles. This poses a great challenge in accommodating the different payloads at specific locations/orientations on a small spacecraft. The data rates of the different payloads are also different, ranging from a few kilobits to several megabits per second; these have to be properly formatted and stored for onward transmission to ground. The optical imaging payloads operate only during the illuminated portion of the orbit, whereas the X-ray and other payloads operate throughout the mission life. This requires a proper operational scenario for the different payloads, data storage, and transmission to ground.
The development of the spacecraft bus itself has thrown up several technological challenges, due to the limited weight, volume and power available. These include combining all the electronic functions of the satellite in a bus management unit, utilization of high efficiency solar cells for power generation, use of lithium-ion batteries for reduced weight and volume, special coding techniques to provide additional link gain (since the data has to be received from a distance of 4 lakh km), low power data transmitters using MMICs, use of a dual gimbal antenna for transmission, and the like. The spacecraft will operate autonomously in lunar orbit during the periods when it is not visible from ground. Another important technological challenge is the development of a Deep Space Network (DSN), involving a 32 m antenna, for receiving data from the Moon and beyond. This development is being realized within the country.
Very detailed and accurate mission planning is required to put Chandrayaan-1 into a 100 km orbit around the Moon. The flight-proven PSLV launcher will inject Chandrayaan-1 into an elliptic transfer orbit (ETO). The spacecraft then uses its 440 Newton liquid apogee motor to take it onto the lunar transfer trajectory and from there to lunar orbit insertion (LOI). Lunar capture is planned into a polar orbit of about 200 km × 1000 km altitude, which is then circularized through a series of in-plane manoeuvres to a 200 km near-circular orbit. The spacecraft will finally be moved to a 100 km circular polar orbit around the Moon for photo-selenological and chemical mapping. Thus, in lunar orbit, the configuration and operation of the spacecraft will be similar to the earth imaging missions. However, significant differences exist due to the peculiar nature of the lunar orbit. As the polar orbit around the Moon is inertially fixed, and the Earth/Moon go around the Sun, the orbit experiences sun-ray incidence from 0 to 360 deg over the year. Thus, if the solar arrays are not canted, 100% power would be generated in a noon/night orbit, whereas in a dawn/dusk orbit no solar power generation would be possible. Suitable canting of the solar array is therefore required to generate adequate power in noon/night as well as dawn/dusk orbits. This necessitates a 180 deg yaw rotation of the spacecraft once in six months. The mission planning of Chandrayaan-1 is discussed in more detail in V. Adimurthy et al. (this issue).
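The canting trade-off can be seen with a simplified model: a single-axis array drive tracks the in-plane sun motion, so the residual incidence loss depends only on the out-of-plane sun angle (beta) minus the fixed cant angle. This is a sketch under those assumptions, ignoring eclipses and lunar albedo; the 45 deg cant is an illustrative value, not a figure from the text:

```python
import math

def power_fraction(beta_deg, cant_deg):
    """Fraction of full solar power for a single-axis-driven array.
    The drive tracks the in-plane sun motion; the out-of-plane sun
    angle (beta) minus the fixed cant angle sets the cosine loss.
    Simplified model: no eclipses, no albedo."""
    return max(0.0, math.cos(math.radians(beta_deg - cant_deg)))

for cant in (0.0, 45.0):
    noon = power_fraction(0.0, cant)    # noon/night orbit (beta = 0)
    dawn = power_fraction(90.0, cant)   # dawn/dusk orbit (beta = 90)
    print(f"cant {cant:4.1f} deg: noon/night {noon:.2f}, dawn/dusk {dawn:.2f}")
```

With zero cant the model reproduces the extremes described in the text (full power in a noon/night orbit, none in a dawn/dusk orbit); an intermediate cant trades some noon/night power for usable dawn/dusk power, which is why a canted array plus a half-yearly 180 deg yaw flip keeps the mission powered year-round.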
While the development of Chandrayaan-1 is progressing, efforts are under way to define Chandrayaan-2. Besides enhancing the scientific objectives of Chandrayaan-1, the Chandrayaan-2 mission is likely to include a soft landing, a rover, sample return, etc. Hence the technologies related to these are to be realized. As a precursor, Chandrayaan-1 will carry a Moon impactor probe as a technology demonstrator.
**ASTROSAT**
The astronomy mission ASTROSAT is meant for multi-wavelength studies of different types of cosmic sources over a wide spectral band extending over the ultraviolet (130–300 nm), visible, low energy X-ray (0.3–8 keV) and high energy X-ray (2–100 keV) bands through simultaneous observations. These simultaneous observations are made with different instruments: the Large Area Xenon Proportional Counter (LAXPC), consisting of three units with a combined geometric area of 6000 sq. cm; the Soft X-ray Telescope (SXT), with 3 arcmin angular resolution; the Cadmium Zinc Telluride Imager (CZT), with a two-dimensional four-quadrant coded mask for hard X-ray imaging (10–100 keV); the Scanning Sky Monitor (SSM), rotating continuously to scan the sky, covering more than a hemisphere in one revolution; the Charged Particle Monitor (CPM); and the Ultra Violet Imaging Telescope (UVIT), incorporating two co-aligned 400 mm aperture telescopes, one for the far UV and the other for the near UV and visible (with a beam splitter), with an exceptional angular resolution of less than 2 arcsec\(^{17}\).
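The 2 arcsec UVIT figure constrains the telescope's plate scale. As a rough lower-bound sketch (not a statement of the actual UVIT design), one can ask what focal length is needed so that a single ~25 micron detector element, the resolution element quoted for the UV detector, subtends no more than 2 arcsec; real designs sample the resolution with more than one element and must also budget for aberrations:

```python
import math

# Lower bound on focal length if one 25-micron detector element is to
# subtend no more than the 2 arcsec angular resolution quoted in the
# text. This is a plate-scale sketch, not the actual UVIT prescription.
pixel_m = 25e-6
res_rad = math.radians(2.0 / 3600.0)   # 2 arcsec in radians
f_min_m = pixel_m / math.tan(res_rad)

print(f"Minimum focal length: {f_min_m:.2f} m")
```

Even this lower bound is several times the 400 mm aperture, showing why the UVIT optical train is long and why detector resolution and telescope focal length had to be developed together.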
The simultaneous measurements from all these instruments will help in broad band spectral and timing studies of X-ray binaries, pulsars, SNRs, AGNs and galaxy clusters; high resolution studies of galaxy morphology in the UV; sky surveys in the hard X-ray and UV bands; detection of new X-ray transients; and UV background studies. Astrosat (Figure 26) will be a National Space Observatory, available for astronomical observations to any researcher in India besides the payload scientists and their international collaborators.
Several new technologies were involved in the development of the payloads. ISRO had experience in developing high quality optics for remote sensing payloads operating in the visible and near IR spectral bands; the UV telescopes need to operate in the near UV and far UV bands down to 120 nm. New technologies have been developed for coating and testing these mirrors. Another important development is the UV detector, with a 25 micron resolution over a diameter of 45 mm. This is the first time anywhere that such a detector is flying on a UV telescope, and it is by this means that a resolution of 2 arcsec from the payload is made possible. The detector is fibre-coupled to match the format of the APS detector used to read out the optical signal. The SXT payload uses a lightweight Wolter-1 type grazing incidence optics\(^{18}\) realized by concentric conical thin aluminum foils coated with gold. This foil technology was developed and qualified at TIFR, Mumbai. The detector used for this payload is a specially designed open-well-structure CCD responding to soft X-rays. The detector needs to be cooled to \(-80^\circ\)C in order to reduce its background thermal noise.

*Table 5.* Current and future technologies for representative spacecraft bus elements

| Sub-systems | Current technology | Future |
|-----------------------------|------------------------------------------------------------------------------------|---------------------------------------------|
| Structure/mechanisms | Graphite composite, Bonded structures, Explosive release devices | High modulus fibers, Inflatable structural elements, Payload isolations systems, Shaped memory alloy actuated release devices |
| Thermal control | Panel embedded heat pipes, Onboard processor controlled heaters | Thin, flexible heat pipes, Deployable radiators |
| Attitude control | Star tracker, Light weight reaction wheels, GPS-based orbit determination | Multi-function sensors, Micro-electro-mechanical devices |
| Electrical power | Thin multi-function cells on graphite face sheets, Panel deployment, Lithium ion battery, Standardized harness | Thin film membrane solar array, Lithium ion battery, Sodium sulphide battery, Harness imbedded in structure |
| Command and data handling | Centralized processing 31750 (1–20 MIPs), 1553 data bus, Mother board back plane, Solid state recorder | > 35 MIPs processors, 1773 optical data bus, MCM, 3D packaging, High capacity solid state recorder |
| Communication | MIC, Solid state power amplifier | MMIC, High electron mobility transistor |
| Propulsion | Mono-propellant, Blow down systems, Titanium and welded lines | Gel propellants, Elastomeric tanks, Flexible lines, Electric propulsion |
This cooling is achieved in two stages: a ΔT of 40°C is provided by a 3-stage thermo-electric cooler and the remainder by a heat pipe and radiator. The Large Area X-ray Proportional Counter (LAXPC) consists of three units, each with an effective geometric area of 2000 sq. cm. These are very large proportional counters, filled with xenon and methane gases at 2.5 atmospheres pressure to get an energy resolution of 10% at 22 keV. A gas purification system has been developed to remove impurities that can build up in the gas during the five-year lifetime of the payload. The two-dimensional coded mask design and analysis for the Cadmium Zinc Telluride (CZT) payload and the one-dimensional coded mask for the SSM payload were also new developments, carried out at the Raman Research Institute (RRI), Bangalore.
The spacecraft configuration, with the severe requirements of accommodating many large co-aligned payloads and meeting the optical field of view requirements within the envelope constraints of the PSLV launcher and its adaptor, was a challenge which was squarely met. The SXT and CZT detectors are required to be cooled to very low temperatures; these requirements were analysed and two large radiators, with areas of 6000 sq. cm and 2000 sq. cm, were designed. To mount these radiators on the structure while reducing the parasitic thermal loads, special flexures and brackets were designed. Astrosat being an inertially pointed 3-axis stabilized satellite that can take any orientation with respect to the thermal loads from the Sun and Earth albedo, thermal design and management is a real challenge; added to this, the SXT and UVIT payloads need stringent thermal control within 2 degrees. New paraffin-based actuators are used for the cover and shutter openings of the UVIT and SXT payloads. A 120 Gb solid state recorder with a simultaneous record and playback facility, providing continuity of science data recording even while previously recorded data is being played back, is another new development; unlike the IRS data handling system, this one has to operate on a continuous basis. A flexi-cable consisting of 100 wires is designed to carry the electrical signals from the rotating SSM payload, which is free to rotate by ±180 degrees, to the fixed spacecraft. The instruments and the science expected from Astrosat are discussed in Agrawal et al. (this issue).
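The radiator areas quoted above can be sanity-checked with the Stefan-Boltzmann law: an ideal radiator rejects P = εσAT⁴ to deep space. A sketch using the 6000 sq. cm area from the text and the \(-80^\circ\)C detector-side temperature; the 0.85 emissivity is an assumed value, and parasitic backloads (which the flexures and brackets are there to minimize) are ignored:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power_w(area_m2, temp_k, emissivity=0.85):
    """Heat rejected by an ideal radiator viewing deep space
    (no solar, albedo or conductive backloads)."""
    return emissivity * SIGMA * area_m2 * temp_k**4

# 6000 sq. cm radiator (from the text) at -80 C; emissivity assumed.
p = radiated_power_w(6000e-4, 273.15 - 80.0)
print(f"Ideal rejection capacity at -80 C: {p:.1f} W")
```

Only a few tens of watts can be rejected at such a low temperature even from a large panel, which is why the parasitic-load-isolating mounts and the two-stage TEC-plus-radiator scheme described above are needed at all.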
In future, Astrosat-2 will carry higher resolution imagers, spectroscopes and a polarimeter to enhance multi-wavelength observations. Future planned missions also include a Mars orbiter, the Small Satellite for Earth's Near-space Environment (SENSE), an asteroid orbiter, a comet fly-by, a fly-by to the outer solar system, and a manned mission to the Moon. ADITYA is a planned mission to study the Sun with an advanced solar spectroscope. For the near future, an X-ray polarimeter, an infrared spectroscope and a hard X-ray imaging system are planned on a 100 kg class small satellite, with a payload weight of up to 40 kg.
**Small satellite programme**
Space exploration commenced with the launch of Sputnik, a small satellite. India's first satellite, Aryabhata, built to acquire know-how and technology in space systems, was also in the small satellite class. In the early years, the Rohini and SROSS series of satellites, developed to support launch vehicle development, were likewise small satellites; besides supporting launch vehicle development, they served technology demonstration and carried small scientific experiments. From these modest missions, complex and operational communication and earth observation missions evolved steadily over the years.
With rapid advances in lightweight, miniaturized technologies, especially the exponential growth in electronic component performance, it is possible to build high performance satellites in the small satellite class to achieve dedicated, stand-alone missions. The key technologies for small satellite development are autonomy, microelectronics, telecommunications, instrument technologies and architectures, Micro Electro-Mechanical Systems (MEMS) and modular multi-functional systems. In tune with the global trend, a micro satellite in the 100 kg class and a mini satellite in the 400 kg class are currently being realized at ISRO. While the micro satellite is being developed for technology demonstration besides conducting small scientific experiments, the mini satellite will be used to realize stand-alone earth observation missions as well as compact scientific missions. The key to the realization of these missions is the development of miniaturized systems like wheels, gyros and star sensors, and the adaptation of electronic miniaturization such as system-in-package (SiP), 3D modules and the large scale use of MMICs. The use of low voltage devices is central to this development. Table 5 shows current and future technologies for representative spacecraft bus elements of earth observation, science missions and small satellite platforms\textsuperscript{16}.
**Conclusion**
Spacecraft technology in India has made tremendous progress over the last twenty-five years and has nearly reached the state of the art. Today India has her own state-of-the-art constellations of satellites, INSAT and IRS, providing services not only to the user community in the country but also to many countries worldwide. With the experience gained from a number of general-purpose satellites, ISRO has started developing and launching a series of unique thematic satellites, such as METSAT, EDUSAT, Resourcesat, Cartosat and Oceansat, in both the communication and remote sensing satellite series, specifically addressing the core development issues facing the country. Indian spacecraft technology has been well recognized worldwide, resulting in the award of two communication satellite contracts to ISRO by the European consortium ASTRIUM. With continuous improvements in performance at both subsystem and system level, and the push towards higher reliability and longer lifetimes, satellites built in India today are considered among the best operational systems in the context of their application and scientific objectives. An additional feature of the evolution of satellite technology has been the introduction of several innovations in a number of areas, including the optical design of remote sensing camera systems.
---
1. Rao, U. R. \textit{et al.}, Aryabhata. \textit{Proc. Indian Acad. Sci. (Eng. Sci.)}, November 1978.
2. Rao, U. R., Kasturirangan, K. and Katti, V. R., Bhaskara-I & II – The experimental earth observation satellites. \textit{Proc. of Indo-Soviet symposium on space research}, Bangalore, February 1983.
3. Vasagam, R. M. \textit{et al.}, Apple – India’s first experimental communication satellite. \textit{J. Aeronaut. Soc. India (Special Issue)}.
4. Kasturirangan, K. \textit{et al.}, Special issue: Remote sensing for national development. \textit{Curr. Sci.}, 1991, \textbf{61(3&4)}, 136–152.
5. Patki, A. V., \textit{Space Environment (Near Earth)}, PPRU, ISRO HQ, Bangalore, 1999.
6. Katti, V. R. \textit{et al.}, Satellite configuration. Lecture notes on satellite communication, vol. II, SAC, Ahmedabad, 1999.
7. Bever, M. and Freitag, J., Fast packet vs circuit switch and bent-pipe satellite network architectures. TRW, 4 Ka-Band Utilization Conference, 1998.
8. Blineau, J., Cheval, P. and Verhulst, D., Satellite contribution to the internet. \textit{Alcatel Telecommun. Rev.}, 4th Quarter 2001, pp. 243–248.
9. Von Braun, W., Singer, I. and Robinson, D., INSAT: A communication satellite for India, Fairchild Industries Report, May 1973.
10. Goel, P. S. and Ramachandran, P., New Features in INSAT-II Configuration, 39th IAF Conference, Bangalore, October 1988.
11. Katti, V. R. and Neelakantan, N., Second generation Indian national satellite system overview, Conference on Communication Technologies, IISc, Bangalore, December 1996, pp. 26–35.
12. Shankara, K. N. and Murlidhar, V., Communication Payloads for INSAT-II, 39th IAF Conference, Bangalore, October 1988.
13. Satellite Communication Technology Database, NASA/CR-2001-210563/PART2, Science Applications International Corporation (SAIC), Schaumburg, Illinois, March 2001.
14. Katti, V. R. and Pankaj Killedar, Trends and approaches for development of new communication satellite systems. National Seminar on Aerospace Opportunities and Challenges, ASI AOC 2004, VSSC, Thiruvananthapuram.
15. Global Satellite Communication Technology and Systems, WTEC Panel Report, International Technology Research Institute, Loyola College, Baltimore, Maryland, December 1998.
16. Kasturirangan, K., \textit{J. Spacecraft Technol.}, 1991, \textbf{1}(1), 1–29.
17. ASTROSAT Project Report No.ISRO-ISAC-ASTROSAT-PR-0429, December 2003.
18. Wolter, H., Mirror systems with grazing incidence as image-forming optics for X-rays. \textit{Ann. Phys. (Leipzig)}, 1952, \textbf{10}(ser. 6.), 94–114. |
Profitable Use of Fertilizer on Native Meadows
MICHAEL NELSON AND EMERY N. CASTLE
Department of Agricultural Economics, Oregon State College, Corvallis, Oregon
In an earlier article in this journal (8:20-22, 1955) C. S. Cooper and W. A. Sawyer of the Squaw Butte-Harney Range and Livestock Experiment Station, Burns, Oregon, presented results of experiments carried out in 1951 and 1952 on fertilization of mountain meadows in the Harney basin, Oregon. The subject of this paper is an economic interpretation of their most recent experiments with nitrogen, carried out in the same area in 1954 and 1955.
Three separate trials were conducted, all showing essentially the same degree of yield response to nitrogen. The pooled results of these trials are given in Table 1.
If the price of nitrogen is assumed to vary from 10 cents to 15 cents per pound, then the cost of additional hay in terms of the fertilizer requirement may be calculated from Table 1 (see Table 2).
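The arithmetic behind these cost figures is a single division: the per-acre fertilizer bill over the extra tons of hay gained relative to the unfertilized check. The short illustrative sketch below (ours, not part of the original article) reproduces the Table 2 entries from the pooled Table 1 yields, to within the rounding of the published figures:

```python
# Pooled hay yields from Table 1: lb of nitrogen per acre -> lb of hay per acre.
yields = {0: 3664, 50: 5243, 100: 6102, 150: 6681, 200: 7316}

def cost_per_extra_ton(rate, n_price_per_lb):
    """Fertilizer bill per acre divided by tons of hay gained over the check plot."""
    extra_tons = (yields[rate] - yields[0]) / 2000.0   # 2000 lb per ton
    return rate * n_price_per_lb / extra_tons

for rate in (50, 100, 150, 200):
    print(rate, round(cost_per_extra_ton(rate, 0.10), 2),
          round(cost_per_extra_ton(rate, 0.15), 2))
```

At 50 pounds of N and 10-cent nitrogen, for instance, 50 × $0.10 = $5.00 buys about 0.79 extra tons, or roughly $6.33 per ton, matching the published $6.32 apart from rounding.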
Ranchers must bear in mind that this additional hay is still in the field; to these figures one must add the cost of harvesting and stacking. The additional hay has value, however, only if it can be used in the production of beef, and the extent to which it can be utilized depends upon the balance between the rangeland available and the meadow acreage. The main purpose of the study is to investigate some aspects of this range-hay-livestock balance. The problem can be broken down into the following questions:
(1) What is the most profitable rate of fertilizer application as determined by its contribution in the production of beef?
(2) How is this rate affected by different ranch situations?
(3) How is the rate affected by changes in the price of beef and nitrogen fertilizer?
(4) What are the range policy implications of increased forage production from meadow land?
Study Procedure
Before it was possible to make an economic analysis of the experiments, it was necessary to consider the factors that influence a rancher's decision on whether or not to use fertilizer. This information was obtained from a survey of ranchers and from statements of federal and
Table 1. Pooled results of fertilizer-hay response data from three trials.
| Rate of Nitrogen Application (pounds per acre) | Hay Yield per Acre (pounds) | Hay Yield per Acre (tons) | Pounds of Hay per Pound of N |
|-----------------------------------------------|----------------------------|---------------------------|------------------------------|
| 0 | 3664 | 1.83 | — |
| 50 | 5243 | 2.62 | 31.6 |
| 100 | 6102 | 3.05 | 24.4 |
| 150 | 6681 | 3.34 | 20.0 |
| 200 | 7316 | 3.66 | 18.3 |
state agencies operating in the area. There are approximately 60 ranches in the Harney Basin, Silver Creek, and Diamond areas of Eastern Oregon. Because of the nature of the study it was decided that a selected sample of 20 ranchers would be sufficient to provide information on the various conditions and problems found in the area.
From the survey of ranchers the factors involved in a decision to use fertilizer were determined. These factors were as follows: The resource situation in terms of land, labor, and capital; the price of nitrogen and beef; the cost and requirements of stacked hay, bunched hay, and pasture.
The next step was the economic interpretation of the fertilizer experiments. To do this it was necessary to estimate hay yields for any given level of nitrogen (not just at the five levels of nitrogen used in the trials). This was done by formulating an estimating equation from the experimental data.
An exponential equation seemed to best fit actual yield conditions. The curves in Figure 1 were determined from this equation. The total product function is the total hay yield that can be expected with different applications of fertilizer. The average product curve represents the average yield per pound of nitrogen. The marginal product curve gives the additional hay yield associated with each additional or marginal pound of nitrogen.
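The estimating equation itself is not reproduced in the article. Purely as an illustration of the procedure, the sketch below assumes a Mitscherlich-type exponential, y(N) = M − (M − y₀)e^(−kN), fits it to the pooled Table 1 data with a crude grid search, and derives the average and marginal product curves just described; the functional form, parameter ranges and fitting method are our assumptions, not the authors':

```python
import math

# Pooled trial data from Table 1: (lb N per acre, lb hay per acre).
data = [(0, 3664), (50, 5243), (100, 6102), (150, 6681), (200, 7316)]
y0 = 3664.0  # yield of the unfertilized check

# Assumed functional form (Mitscherlich-type), NOT the article's own equation:
#   y(N) = M - (M - y0) * exp(-k * N)
def yield_at(N, M, k):
    return M - (M - y0) * math.exp(-k * N)

# Crude grid search for the yield ceiling M and the rate constant k
# by least squares over the five trial points.
M, k = min(((m, j / 10000) for m in range(7000, 12001, 50)
                           for j in range(20, 301)),
           key=lambda p: sum((yield_at(N, *p) - y) ** 2 for N, y in data))

def average_product(N):   # lb of hay gained per lb of N applied
    return (yield_at(N, M, k) - y0) / N

def marginal_product(N):  # dy/dN: extra lb of hay from one more lb of N
    return k * (M - y0) * math.exp(-k * N)
```

With such a curve the marginal product declines smoothly as N increases, which is what makes an interior profit-maximizing rate of application possible.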
**Profitable Fertilizer Rate**
Characteristics of ranching in the native meadow area make the determination of the most profitable rate of fertilizer application difficult. A ready market does not exist for wild hay. Therefore, it must be valued in terms of its use in producing beef. Some method was needed that would provide an analysis of the entire ranch business. There are a number of techniques available by which such an analysis could be made, notably budgeting, regression techniques, and linear programming.
Linear programming is a mathematical procedure that allows a system of equations, subject to certain limiting factors, to be solved in such a way that returns to the limiting factors are maximized. Applying this technique to ranch management, the limiting factors become the land, labor, and capital that the rancher has available for production. The technique was used in this study because it permits the simultaneous selection of the level of beef production; areas of meadow to be fertilized for stacked hay, bunched hay and pasture; and the rate at which these should be fertilized in order to maximize profit. Such a simultaneous selection is not possible with budgeting, and experience has shown that regression analysis is often unsuitable for problems of this type.
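The study’s actual programming matrix is not given in the article. The toy sketch below only illustrates the idea of simultaneous selection under a limiting factor: it enumerates fertilizer rates for two forage uses jointly, subject to a capital limit, and keeps the most profitable combination. Every coefficient (acreages, prices, response figures) is an illustrative assumption, not the study’s data, and brute-force enumeration stands in for the simplex computation:

```python
from itertools import product

# Toy illustration of "simultaneous selection" under a limiting factor.
# All coefficients below are illustrative assumptions, not the study's data.
ACRES_STACKED, ACRES_BUNCHED = 300, 150
HAY_VALUE = 12.0         # assumed value of hay in beef production, $/ton
N_PRICE = 0.12           # assumed price of nitrogen, $/lb
CAPITAL_LIMIT = 5000.0   # assumed capital available for fertilizer, $

# Assumed extra tons of hay per acre at each nitrogen rate (lb N per acre).
response = {0: 0.0, 50: 0.79, 100: 1.22, 150: 1.51}

def plan_profit(rate_stacked, rate_bunched):
    """Return (profit, fertilizer cost) for a pair of application rates."""
    extra_tons = (ACRES_STACKED * response[rate_stacked]
                  + ACRES_BUNCHED * response[rate_bunched])
    cost = N_PRICE * (ACRES_STACKED * rate_stacked
                      + ACRES_BUNCHED * rate_bunched)
    return extra_tons * HAY_VALUE - cost, cost

# Enumerate every joint choice of rates that fits the capital limit and
# keep the most profitable one -- the "simultaneous selection".
best = max((plan for plan in product(response, repeat=2)
            if plan_profit(*plan)[1] <= CAPITAL_LIMIT),
           key=lambda plan: plan_profit(*plan)[0])
```

A real linear program would treat acreages and herd size as continuous variables and solve by the simplex method; the enumeration above merely shows why choosing the two rates jointly, under a shared limit, can give a different answer than optimizing each enterprise in isolation.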
The data used in the programming was obtained from the ranch survey, experiment station results, U.S. Department of Agriculture reports, and 1955 Ontario, Oregon, market reports.
In the use of programming it is necessary to establish a ranching situation. When this hypothetical ranch set-up has been established, it is possible to determine the economic use of fertilizer.
The first ranch organization studied was a two-man unit producing 167,900 pounds of beef and running 300 cows, with six limitational resources. The range permit was 3,025 A.U.M.'s, the base property was 750 acres of flood meadow, of which 260 acres (Meadow II) gave unsatisfactory response to fertilizer because of deep swales or excess alkalinity of the soil. This area gave a yield of one ton of wild hay per acre. For the purposes of the analysis this 260 acres is assumed to be unfertilized, with 66 percent cut for stacked hay and 34 percent for bunched hay, yielding one ton per acre. The remaining 490 acres (Meadow I) gave a yield of 1.2 tons of hay per acre without fertilizer. It was assumed that the meadow would only be fertilized to produce stacked hay,
Table 2. Cost of additional hay at various rates of nitrogen application.

| Rate of Nitrogen Application (pounds per acre) | Cost per Ton of Additional Hay, N at 10 cents/pound ($) | Cost per Ton of Additional Hay, N at 15 cents/pound ($) |
|-----------------------------------------------|--------------------------------------------------------|--------------------------------------------------------|
| 0 | — | — |
| 50 | 6.32 | 9.49 |
| 100 | 8.19 | 12.29 |
| 150 | 9.92 | 14.90 |
| 200 | 10.92 | 16.39 |
Table 3. Fertilization rates, land use and beef production with limited and unlimited range.

| | Solution I\* | Solution II\*\* |
|---|---|---|
| Stacked hay | 282 acres at 50 lbs. N | 313 acres at 100 lbs. N |
| Bunched hay | 118 acres at 40 lbs. N | 177 acres at 90 lbs. N |
| Meadow pasture | 90 acres at 50 lbs. N | — |
| Increase in beef production due to fertilization | 26% | 66% |
| Increase in net return | $2058 | (See 1 below) |

\* Range limited to 3025 A.U.M.'s.
\*\* Range unlimited.
1 Although net income was determined for this situation, it is not presented since it has little economic meaning.
bunched hay and pasture. It was further assumed that all additional capital necessary for the operation of the ranch using nitrogen fertilizer and running additional cattle, would be available at 7 percent interest. As pointed out earlier, 1955 prices were used.
The solution shows that the optimum nitrogen application was 50 pounds per acre on 282 acres for stacked hay, 40 pounds on 118 acres for bunched hay and 50 pounds on 90 acres for pasture (Table 3). The 260 acres of meadow which do not respond to nitrogen were assumed to produce 170 tons of stacked hay and 90 tons of bunched. The level of beef production which this forage output would support is 212,000 pounds from a herd of 360 cows, selling yearlings. This operating system would involve pasturing 110 yearling steers on the meadow through the summer. The increase in beef production due to fertilization is 26 percent, and the additional operating expenses amount to $4900 with a net increase in return to fixed factors, land, labor, and management, of $2058.
A second ranch organization was set up to take account of any possible expansion in range grazing through development or purchase. In this case there were four limitational resources, Meadows I and II, stacked hay and bunched hay, and four levels of nitrogen on each of the two forage production methods. The results of this analysis showed that the optimum production level would be 280,000 pounds of beef given by an operation running 500 cows and selling yearlings. The range requirement for this system is 5053 A.U.M.'s, or 67 percent more than the requirement without fertilization of meadow. This points up the need for additional range production if additional hay production is to be utilized. The nitrogen application required to support this level of production would be 100 pounds on 313 acres for stacked hay and 90 pounds on 177 acres for bunched (Table 3). Production from Meadow II would be as it was in the first situation. If range rental is charged at current federal rates, the capital requirement of this system is $9900 more than an operation using no fertilizer.
Table 4 shows the manner in which the optimum rate of fertilization is related to changes in the price of beef and nitrogen. The table was developed by using the hay-nitrogen relationship shown in Figure 1, on the assumption that the value of stacked hay is directly related to the price of beef. This may not be a realistic assumption for heavy rates of nitrogen, say above 50 pounds, though it would be realistic for lower rates of application. It is doubtful, however, whether an operator should put on less than 30 pounds of N, since too little is known about hay response to small applications.
From Table 4 it can be seen that under the currently feasible

Table 4. Relationship between price changes in beef and nitrogen and the optimum rate of fertilization. Column headings give the price of nitrogen in cents per pound; body entries are the optimum rate of application in pounds of N per acre.

| Price of Beef ($ per cwt.) | 13.5 | 14 | 14.5 | 15 | 15.5 | 16 | 16.5 | 17 | 17.5 | 18 | 18.5 |
|---------------|------|-----|------|------|------|------|------|------|------|------|------|
| $10 | 30 | 30 | 30 | 20 | 20 | 20 | 10 | 10 | 10 | 10 | 0 |
| $11 | 40 | 40 | 40 | 30 | 30 | 30 | 20 | 20 | 20 | 20 | 10 |
| $12 | 50 | 50 | 50 | 40 | 40 | 40 | 30 | 30 | 30 | 20 | 20 |
| $13 | 60 | 60 | 50 | 50 | 50 | 40 | 40 | 40 | 30 | 30 | 30 |
| $14 | 70 | 60 | 60 | 60 | 50 | 50 | 50 | 50 | 40 | 40 | 40 |
| $15 | 80 | 70 | 70 | 60 | 60 | 60 | 50 | 50 | 50 | 50 | 40 |
| $16 | 80 | 80 | 70 | 70 | 70 | 60 | 60 | 60 | 60 | 50 | 50 |
| $17 | 90 | 80 | 80 | 80 | 70 | 70 | 70 | 60 | 60 | 60 | 60 |
| $18 | 90 | 90 | 90 | 80 | 80 | 80 | 70 | 70 | 70 | 60 | 60 |
| $19 | 100 | 100 | 90 | 90 | 90 | 80 | 80 | 80 | 70 | 70 | 70 |
| $20 | 100 | 100 | 100 | 90 | 90 | 90 | 80 | 80 | 80 | 80 | 70 |
| $30 | 150 | 140 | 140 | 130 | 130 | 130 | 120 | 120 | 120 | 120 | 110 |
price range for beef, up to $20 per 100 pounds, the highest optimum rate of fertilization is 100 pounds per acre at the lowest nitrogen price. At the nitrogen prices above 16 cents per pound, beef must be worth $9 or more per 100 pounds before any fertilization is profitable.
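A relationship of the kind shown in Table 4 follows from the usual marginal condition: nitrogen pays up to the rate at which the value of the marginal pound of hay just equals the price of a pound of nitrogen. With an assumed exponential response (the parameters below are illustrative, not the article's fitted values), that rate has a closed form:

```python
import math

# Assumed exponential response (illustrative parameters, not the article's fit):
#   y(N) = M - (M - y0) * exp(-k*N);  marginal product = k * (M - y0) * exp(-k*N).
# Setting hay_value * marginal product = nitrogen price and solving for N gives
#   N* = (1/k) * ln(hay_value * k * (M - y0) / n_price), floored at zero.
M, y0, k = 8000.0, 3664.0, 0.009

def optimum_rate(hay_value_per_lb, n_price_per_lb):
    """Rate N* where the value of the marginal pound of hay equals the N price."""
    ratio = hay_value_per_lb * k * (M - y0) / n_price_per_lb
    return max(0.0, math.log(ratio) / k)

# Optimum rises with the value of hay (hence of beef) and falls with the
# price of nitrogen, the same qualitative pattern as Table 4.
for hay_value in (0.005, 0.010, 0.015):        # assumed $/lb of hay
    print([round(optimum_rate(hay_value, p)) for p in (0.135, 0.155, 0.185)])
```

Raising the hay (beef) value raises the optimum rate and raising the nitrogen price lowers it, reproducing the qualitative pattern of Table 4; at a low enough hay value the optimum falls to zero, as in the table's $10-beef row at the highest nitrogen price.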
Conclusions
It is apparent from this study that any likely increase in range capacity can readily and profitably be matched by meadow output under a system of fertilization. However, Solution I indicates that without some development of range, the expansion through fertilization of meadow alone is limited to around 25 percent.
The prices of beef and nitrogen also affect the profitable limit of expansion with fertilizer. For instance, if the price of beef increases, relative to other prices paid by ranchers, then expansion of 30-35 percent may be profitable, using heavier applications of nitrogen (see Table 4).
The policy implications of meadow improvement are only indirectly related to fertilizer, but are nevertheless of importance. Fertilizer provides a relatively flexible method of increasing hay production and reserves. In this way it acts as a form of insurance and reduces the uncertainty in the operation. Where this is true the rancher can increase production, but summer range is still the most limiting factor. The administrators of public lands are therefore faced with the problem of obtaining the best utilization of range, and at the same time allowing the best use to be made of the meadows. There are two courses of action available to them. One is to develop rangeland, either themselves, or by financial assistance to ranchers; the second is to change the management of rangeland in light of meadow potential. In some cases it is impossible for the rancher to hold cattle on meadows in April and May due to pasture damage or because the meadows are covered by water. However, he may well be able to pasture them from July onwards or to bring them in from rangeland earlier in the fall. Other ranchers may be able to hold some cattle on pasture throughout the spring and summer. It is not the purpose of this article to go into range administration. The important point is that there exists a relationship between meadow improvement and range management.
Acknowledgement
The authors wish to acknowledge the assistance given by the staff of the Squaw Butte-Harney Experiment Station, particularly by Mr. C. S. Cooper and Mr. W. A. Sawyer whose field experiments formed the basis of this study.
Special recognition is also due to all the ranchers visited in the survey for their splendid cooperation.
CALL FOR PAPERS FOR THE 1959 ANNUAL MEETING
Members who wish to present papers at the next annual meeting of the Society, to be held in Tulsa, Oklahoma, in January 1959, are requested to submit titles and short abstracts to the Program Committee. Final date for titles to reach the Committee is July 15, 1958.—E. H. McILVAIN, Chairman Program Committee, U. S. Southern Great Plains Field Station, Woodward, Oklahoma. |
FORMATION AND ACTIVITIES OF FACTORY INSPECTION IN UKRAINIAN LANDS (1882-1918)
The article deals with the establishment and the main stages of the activities of the Factory Inspection in Ukrainian lands during 1882-1918, a period which coincides with the development of the inspectorate of the Russian Empire. The place of the Factory Inspection within the system of public authorities has been identified, and its major activities in the different chronological periods have been highlighted. The role of the Inspection in solving the “employment issue,” in drafting labor and social legislation, and in organizing control over its observance at the factories and plants has been shown. The changes which took place in the structure, staff, powers and territory under the control of the Factory Inspection after its reorganization have been analyzed. Attention has been drawn to the shortcomings of the Factory Inspection’s activities, which resulted in its inactivity and inability to solve important social problems after the revolution in the Russian Empire in the early 20th century.
**Key words:** the Factory Inspection, factory inspector, factory legislation, labor law, economic history of Ukraine, factory industry, factories and plants, employers, employees, “employment issue”, strikes, revolution
In the context of the modernization of the second half of the 19th and early 20th centuries, the serfdom economy in Ukraine was transformed into an industrial one based on commodity production. Scientific and technological progress, the introduction of new inventions and the mechanization of labor initiated the industrial revolution in Ukrainian lands. The establishment of factories and plants, which replaced manufactories, drove industrial development and economic progress. Due to social stratification, two new social groups emerged in the 19th century: employers and employees, whose interests intersected at the factories and plants. Undue exploitation of workers and violation of their labor rights by the owners of enterprises created the conditions for strikes, which increased annually in the Russian Empire. This situation threatened the social and political stability of the state, and therefore had to be settled. In 1835 the Derzhavna Rada of the Russian Empire (its legislative body) adopted the *Regulations on Relations between the Owners of Factories and Employees*, a prototype of the first factory laws. These documents were the first to regulate wage labor and restrict child labor\(^1\). But in spite of providing terms and conditions for an employment agreement, the document did not specify any mechanism of control over compliance with labor legislation at the factories and plants. As H. Balytskyi noted: “No matter how good the laws are and how well they protect the employees’ rights, they would remain perfect and good only on paper if their enforcement were not guaranteed and violations were not punished\(^2\).” Therefore, one of the pressing tasks for the government of the Russian Empire was to draft and implement labor legislation in industry.
Consequently, in 1859 a Special Temporary Commission was established in Saint Petersburg to consider and study the causes of labor disputes between employers and employees in the courts; in 1860 a similar Commission was formed in Moscow. The activities of the Saint Petersburg Commission resulted in the publication of the *Draft Rules for the Factories and Plants in Saint Petersburg and District*\(^3\). They recommended restricting the labor of juveniles up to the age of 12, reducing hours of work, introducing social insurance in the event of maiming or death at the enterprise, and creating the position of factory inspector; the last provision was raised for the first time. The Draft Rules were forwarded for evaluation and approval to the Governors General and to employers, who rejected them, and they were returned for further improvement. The Emancipation Reform of 1861 and the Zemstvo Reform of 1864 pushed this issue aside, but the strikes of the 1860s-1870s brought the problem back.
---
\(^1\) Литвинов-Фалинский, В.П. (1904). *Фабричное законодательство и фабричная инспекция в России*. 2-е издание, исправленное и дополненное. Санкт-Петербург: Типография А. С. Суворина, 1-2.
\(^2\) Балицкий, И.В. (1907). *Какая должна быть фабричная инспекция*. Москва: Моховая, д. Бенкендорф, книжный магазин Д.П. Ефимова, 3.
\(^3\) Объяснительная записка (1860). *Проект правил для фабрик и заводов в Санкт-Петербурге и уезде*.
In the 1870s the Special Commissions of M. P. Ihnatev (1870-1872) and P. O. Valueev (1870-1872), which worked within the Ministry of Internal Affairs, drafted factory labor legislation. They mainly put the emphasis on laws whose enforcement was to be controlled by the police and whose violation was to be punished by the courts\(^1\).
Thus, when the crisis of overproduction burst out in December 1880, some factories and plants reduced production, which resulted in the dismissal of workers. The “employment issue” was on the agenda again, and this time discussions on restricting child and women’s labor began. The earlier drafts were therefore retrieved, and with the participation of the Ministry of Finance, the Ministry of Justice and the Ministry of Internal Affairs a new Law of June 1, 1882, *On Minors Working at the Plants, Factories and Manufactories*, was drafted and signed by Alexander III\(^2\). It forbade the labor of children up to the age of 12 and restricted the labor of juveniles between the ages of 12 and 15 to an 8-hour working day. The adoption and implementation of this Law required the establishment of a special public authority to supervise its observance at the enterprises. In this regard, it was decided to form the Factory Inspection, whose aim was to control compliance with factory legislation in the production process at the factories and plants. According to M.I. Tuhan-Baranovskyi, “The factory inspector had to become the major actor who would create a new type of relations between the employers and employees at the factories”\(^3\).
The establishment of the Factory Inspection in the Russian Empire had been preceded by a similar course of development of labor legislation in Western Europe. Factory inspections had been formed earlier than in the Russian Empire in the following countries: England (1833), Germany (1853), Denmark (1873), France (1874), Spain (1876) and Switzerland (1877), and afterwards in Austro-Hungary (1883), Italy (1886), Belgium and Holland (1889), Portugal (1891) and Norway (1892)\(^4\). This suggests that the European economic process reached Ukrainian lands within the Russian Empire only with delay, and that the improvement of factory legislation, particularly labor legislation, in Ukrainian lands within the Russian Empire remained necessary.
Given the high social importance of the Factory Inspection in the industry of the second half of the 19\(^{th}\) and early 20\(^{th}\) centuries, leading economists, lawyers and scholars focused their attention on it and held differing views of its activities. Some regarded the Factory Inspection as an important public body aimed at settling labor disputes and solving the “employment issue”; others criticized it, pointing to the factory inspectors’ inability to counteract the undue exploitation of workers by employers at the factories and plants, which became a cause of later strikes. Interest in the Factory Inspection arose from the first days of its existence. During 1885-1886 the first factory inspectors, such as I.I. Yanzhul, P.O. Peskov, Ya.T. Mykhailovskyi and I.O. Novytskyi, provided recommendations on the improvement of this public body in their annual reports; today their works serve as sources in this field. Attention should also be drawn to the works of V.P. Lytvynov-Falynskyi, O.O. Mykulin, M.M. Tuhan-Baranovskyi, Ye.M. Dementev, I.V. Balytskyi, V.P. Bezobrazov, M.H. Lunts and L.D. Trotskyi, who likewise analyzed the organization of the Factory Inspection’s activities. They noted its inability to satisfy the workers’ interests by legislative means, which contributed to the spread of strikes at the factories and plants in the early 20\(^{th}\) century. Soviet scholars such as A.M. Pankratova, A.F. Vovchyk and S.I. Kaplun evaluated the activities of the Factory Inspection negatively. The Ukrainian researchers of the Soviet period V.V. Krutykov and V. Zykova were the first to analyze and systematize unpublished archival sources in this field. Among modern scientists, the Russian scholars A.Yu. Volodin, V.P. Bogdanov, T.Ya. Veltov and S.R. Glazunov, who launched the collective project “The Institute of Factory Inspection in Russia (1882-1914),” are at the top of the list.
All aspects of the establishment and activities of the Factory Inspection in the entire Russian Empire were analyzed within that project, with the main attention given to the guberniyas whose territory coincides with the borders of the modern Russian state. The national historiography is poorer, represented by the works of modern scholars who only partly address the activities of the Factory
---
\(^1\) Глазунов, С.Р. (2011). К вопросу о создании института фабричной инспекции в России в конце XIX века. *Вестник Тюменского государственного университета*, 2, 97.
\(^2\) О малолетних, работающих на заводах, фабриках и мануфактурах (1882). *Полное собрание законов Российской империи III*. Т. 2, № 931.
\(^3\) Туган-Барановский, М.Н. (1997). Избранное. *Русская фабрика в прошлом и настоящем*. Историческое развитие русской фабрики в XIX веке. Москва: РОССПЭН, 390-391.
\(^4\) Энциклопедический словарь Ф. А. Брокгауза, И. А. Ефрона (1902). Т. 35 (69). Санкт-Петербург: Типография Акц. Общ. Брокгауз-Ефрон, 181-194.
Inspection in Ukrainian lands. Among them it is essential to note O.M. Sudakova, T. Lazanska, S.O. Bila, T.S. Vodotyka, Yu. Kholod and others.
In this regard, the establishment and further operation of the Factory Inspection in Ukrainian lands during the second half of the 19\textsuperscript{th} – early 20\textsuperscript{th} centuries has attracted little attention from national and foreign scholars. Yet the research into its activities is directly connected with the study of employees as a social class, of the “employment issue” in general, and of the conditions that gave rise to the revolutionary movement which shook the Russian Empire. The Factory Inspection had to become a mechanism for decreasing social tension in the leading cities and industrial centers. Therefore, a historical review of the establishment and operation of the Factory Inspection in Ukrainian lands is important for a general study of this issue and for the presentation of a complete historical process, particularly the unknown aspects of the economic and social history of Ukraine in the second half of the 19\textsuperscript{th} – early 20\textsuperscript{th} centuries. In addition, a substantial body of sources kept in the funds of the Central State Historical Archives of Kyiv (fund 575, “Office of the district factory inspector of Kyiv district”; fund 574, “Office of the senior factory inspector of Kyiv Guberniya”; fund 2090, “Office of the district factory inspector of Kharkiv district”) provides an opportunity to revisit some acute social and economic issues in the history of the Ukrainian population during the second half of the 19\textsuperscript{th} – early 20\textsuperscript{th} centuries.
The core of the research is a chronological approach, which makes it possible to structure the main stages of the Factory Inspection’s development and to determine its tasks in different periods against the background of the general historical processes in the state. In particular, the Russian scholar A.Yu. Volodin distinguishes four stages of the Factory Inspection’s development within the Russian Empire: i) 1882-1893, formation of the Factory Inspection; ii) 1894-1904, reform of the inspectorate, driven by the activities of the Minister of Finance S.Yu. Witte; iii) 1905-1913, the test of the Factory Inspection’s efficiency during the revolution of 1905-1907 and the pre-war period; iv) 1914-1918, the operation of the Factory Inspection during the First World War, its participation in the mobilization of industry, and its decline after the Russian Revolution\textsuperscript{1}. The Ukrainian researcher T.S. Vodotyka distinguishes three stages, which coincide with the first three of A.Yu. Volodin’s but omit the last one, leaving the chronology of the inspectorate’s activities incomplete\textsuperscript{2}. We therefore follow the first approach.
During the first stage (1882-1893) the Factory Inspection was established as a distinct public institution to supervise mainly child and women’s labor and to help draft labor legislation. Its activities were initiated by the Law \textit{On Minors Working at the Factories, Plants and Manufactures}, which was adopted on June 1, 1882 and was to come into force by May 1, 1884\textsuperscript{3}. The Law raised the following questions: to which authority should the newly formed body report? who should head it? and what territory and which industrial enterprises should fall under the inspection’s control? Since M.Kh. Bunge, Minister of Finance from 1881 to 1886, had initiated the drafting of the Law of June 1, 1882, the Factory Inspection was placed under the jurisdiction of the Ministry of Finance and made accountable to the Department of Trade and Manufacture\textsuperscript{4}. But when I.O. Vyshnehradskyi headed the Ministry of Finance, from 1887 to 1892, he supported the idea of Deputy Minister K.V. Pleve to transfer the Factory Inspection to the Ministry of Internal Affairs\textsuperscript{5}.
After the adoption of the Law \textit{On Minors Working at the Factories, Plants and Manufactures} and the approval of the establishment of the Factory Inspection, there was a lack of professional staff. According to Ya.T. Mykhailovskyi, a factory inspector had to possess economic, legal and technical knowledge, to have a higher education, and to understand factory issues\textsuperscript{6}. At the meeting in Moscow on June 27, 1882, Ye.M. Andreev was elected the first major factory inspector. He held this office until his resignation on April 28, 1883. Ya.T. Mykhailovskyi was the next to occupy this position, from 1883 to 1894. As the
\begin{footnotesize}
\begin{enumerate}
\item Володин, А.Ю. (2007). Фабричная инспекция в России (1882-1904 гг.). \textit{Отечественная история}, 1, 24.
\item Водотика, Т.С. (2013). Документи фабричної інспекції в ЦДІАК України як джерело до вивчення історії підприємництва в другій половині XIX – на початку ХХ ст. \textit{Архіви України}, 1 (283), 169.
\item О малолетних, работающих на заводах, фабриках и мануфактурах (1882). \textit{Полное собрание законов Российской империи III}, Т.2, № 931.
\item О малолетних, работающих на заводах, фабриках и мануфактурах (1882). \textit{Полное собрание законов Российской империи III}, Т.2, № 931.
\item Володин, А.Ю. (2009). \textit{История фабричной инспекции в России 1882-1914 гг.} Москва: Российская политическая энциклопедия (РОССПЭН), 37.
\item Михайловский, Я.Т. (1882). О деятельности фабричной инспекции. \textit{Отчет за 1885 г. главного фабричного инспектора Михайловского}. Санкт-Петербург, 2.
\end{enumerate}
\end{footnotesize}
Factory Inspection operated in only two guberniyas during its first year, I.I. Yanzhul and P.O. Peskov headed the factory inspectorates in Moscow and Vladimir guberniyas respectively. Within a year they drafted reports with recommendations on the further operation of the Factory Inspection and drew public attention to this institution\(^1\).
In 1884 the Factory Inspection extended its control over new territories and increased its staff. Under the Law of June 12, 1884 *On School Education of Minor Employees at the Factories, Plants and Manufactures and on the Factory Inspection*, this institution supervised the labor and teaching of children in the other guberniyas of the Russian Empire besides the Petersburg, Moscow and Vladimir ones. Upon the instructions of M.Kh. Bunge, to simplify control and the organization of work, industrial regions consisting of several guberniyas were united into factory districts on the model of the British district factory system. In this regard, the Moscow, Vladimir, Petersburg, Kazan, Voronezh, Kharkiv, Kyiv, Vilno and Warsaw factory districts were formed. The further study will focus on the Ukrainian lands, which were represented by two districts. In particular, Kyiv factory district included Kyiv, Volyn, Podillia and Kherson guberniyas, and in 1891 Bessarabia and Tavria guberniyas were added to it. Kharkiv factory district consisted of Kharkiv, Katerynoslav, Chernihiv and Poltava guberniyas and the region of Viisko Donske. The district engineer of the South-Western mining district controlled compliance with and execution of the legislation on the labor and teaching of minor employees at the factories and plants\(^2\).
In addition to the territory the Factory Inspection was in charge of, the Law of June 12, 1884 determined the number of employees of the inspectorate. Ya.T. Mykhailovskyi remained the major factory inspector. At the local level, a district inspector and his assistant were appointed to every factory district. In 1884 I.O. Novytskyi became Kyiv district inspector after returning from Latvia\(^3\). V.V. Sviatoslavskyi was elected Kharkiv factory inspector and held this position from 1884 to 1886\(^4\). In 1884 the Factory Inspection comprised 9 district factory inspectors, 9 assistants and one major factory inspector. From 1884 they were financed from the state budget through submission of an annual estimate in the amount of 78,500 rubles to the Department of Trade and Manufactures of the Ministry of Finance\(^5\). The *Guidelines for the Officials of the Factory Inspection Regarding Control over the Compliance with Regulations Related to Minors Working at the Plants, Factories and Manufactures* and the *Rules for Employers*, published on February 26, 1885, stipulated the duties of factory inspectors and their assistants\(^6\).
The Factory Inspection was reorganized in 1886, when I.O. Vyshnehradskyi held the office of the Minister of Finance. On June 3, 1886, after revision of two drafts, *On Increase of the Employees of the Factory Inspection* and *Rules of Control over the Factory Industry Facilities and Mutual Relations between Employers and Employees*, introduced by the Minister of Finance and the Minister of Internal Affairs respectively, a Law was adopted which increased the number of factory assistants to 10 (upon the request of the Minister of Finance in the first part of the draft law). But the second part of the Law is more interesting: it deals with the establishment of a new public collegial body, the Guberniya Prysutstviia on Factory Issues. They were subordinated to the local administration headed by the Governor and were accountable to the Ministry of Internal Affairs. The members were the Vice-Governor, the prosecutor of the district court, the head of the gendarme department, the district factory inspector and his assistant. If needed, guberniya doctors and the guberniya engineer, architect and mechanic could be invited to the meetings of the Prysutstviia. The district factory inspector and his assistant administered the clerical office of the Guberniya Prysutstviia. The duties of the Guberniya Prysutstviia on Factory Issues were the following: to issue obligatory regulations on such issues as health protection, the provision of first aid, the employees' observance of the
---
\(^1\) Володин, А.Ю. (2009). *История фабричной инспекции в России 1882-1914 гг.* Москва: Российская политическая энциклопедия (РОССПЭН), 40.
\(^2\) О школьном обучении малолетних, работающих на заводах, фабриках и мануфактурах, о продолжительности их работы и о фабричной инспекции (1884). *Полное собрание законов Российской империи*. III. Т. 4, № 2316.
\(^3\) Новицкий, И.О. (1886). *Отчет за 1885 г. Фабричного инспектора Киевского округа.* Санкт-Петербург: Типография В. Киршбаума.
\(^4\) Святловский, В.В. (1886). *Харьковский фабричный округ.* Отчет за 1885 г. фабричного инспектора Харьковского округа В. В. Святловского. Санкт-Петербург.
\(^5\) О школьном обучении малолетних, работающих на заводах, фабриках и мануфактурах, о продолжительности их работы и о фабричной инспекции (1884). *Полное собрание законов Российской империи* III. Т. 4, № 2316.
\(^6\) Володин, А.Ю. (2009). *История фабричной инспекции в России 1882-1914 гг.* Москва: Российская политическая энциклопедия (РОССПЭН), 47.
moral values at the factories and plants, and control over the modernization of industrial enterprises; to consider reports from factory inspectors and claims against their orders, and to cancel the orders if necessary; to inform senior inspectors of offences committed by junior inspectors\(^1\). From 1886 all activities of the Factory Inspection were accountable to the Guberniya Prysutstviia on Factory Issues, irrespective of the fact that these governmental institutions belonged to different ministries. Thus, we can observe an attempt to pass the Factory Inspection from the Ministry of Finance to the Ministry of Internal Affairs and to subordinate it to the Governor's power.
However, compared to previous years, the powers of the Factory Inspection were significantly broadened in accordance with the *Rules of Control over the Factory Industry Facilities* of June 3, 1886. Besides control over the labor of children and their basic education, it had to: i) control compliance with the *Rules on Mutual Relations*, which were based on the labor agreement and the free labor of employees recorded in labor books; ii) disseminate the regulations and decisions made by the Guberniya Prysutstviia on Factory Issues and supervise their execution; iii) consider and approve the rates, time sheets, schedules and rules of conduct which were approved by the administration of factories and plants and then disseminated among employees to be observed; iv) mediate in the settlement of labor disputes between employers and employees; v) draw up reports against offenders and forward them to the Guberniya Prysutstviia, magistrates and examining magistrates, and witness at trials if required\(^2\).
Therefore, only in 1886 was the Factory Inspection transformed into a public authority with specific tasks and personnel. In spite of the fact that, according to the law, its jurisdiction extended over the entire territory of the Russian Empire, this body did not start operating in the Ukrainian lands because of the lack of professionals, and its functioning was delayed. The major functions of the inspectors were mediation in the settlement of labor disputes, investigation of the reasons for strikes at factories and plants on the basis of claims and applications, and supervision of the development of industry in general. The inability to use compulsion and to impose liability for violations of legislation made the factory inspector a mediator in labor disputes. At the first stage the factory inspectors gained experience, improved their mechanism of work, formed clerical offices and increased their staff.
The second stage, lasting from 1894 to 1904, is also interesting and intense. It is related to the reform of the Factory Inspection. This change in the structure and tasks of the inspectorate is connected with changes in the central and local governmental bodies of the Russian Empire. In 1894 Nikolai II came to power, which caused personnel transformations and the next change in domestic policy. Two years earlier, in 1892, S.Yu. Witte had become the Minister of Finance; unlike I.O. Vyshnehradskyi, he considered the Factory Inspection an efficient tool for rebuilding factory life in the Empire. Therefore, his appointment as Minister of Finance was a key date marking the beginning of the second stage of the Factory Inspection's activities.
The changes started when, on June 8, 1893, the Derzhavna Rada instructed the Minister of Finance and the Minister of Internal Affairs to extend the Law of June 3, 1886 to all the guberniyas of the Russian Empire where it had not been implemented. Afterwards, on March 14, 1894, the Law *On Reorganization of the Factory Inspection and Positions of Guberniya Mechanics* was adopted. It changed the territory under the inspection's supervision, as well as the structure and powers of the inspection. Besides Saint Petersburg, Moscow, Vladimir, Warsaw and Petrakivsk guberniyas, the *Rules of Control over the Factory Industry Facilities and Mutual Relations between Employers and Employees* were extended to the following 13 guberniyas, where the factory inspectors had to carry out their activities: Volyn, Hrodna, Kyiv, Kostroma, Livonia, Nizhny Novgorod, Podillia, Riazan, Tver, Kharkiv, Kherson, Esthonia and Yaroslavl. Thus, the district system of control over industry of 1884 was abolished and replaced by a guberniya-based one. In view of the territorial changes, the structure of the inspection was realigned. The position of the major factory inspector was abolished. District factory inspectors were renamed senior factory inspectors, whose number increased to 18, according to the number of guberniyas to be supervised. They were assisted by 10 filing clerks, who headed the clerical offices. District inspectors' assistants became local factory inspectors, whose number increased to 125. In addition, a new position, a
---
\(^1\) По проекту Правил о надзоре за заведениями фабричной промышленности и о взаимных отношениях фабрикантов и рабочих и об увеличении числа чинов фабричной инспекции (1888). Полное собрание законов Российской империи III. Т. 6, № 3769.
\(^2\) По проекту Правил о надзоре за заведениями фабричной промышленности и о взаимных отношениях фабрикантов и рабочих и об увеличении числа чинов фабричной инспекции (1888). Полное собрание законов Российской империи III. Т. 6, № 3769.
candidate for the factory inspector, was introduced; 10 of them were to be trained\(^1\). Senior and local inspectors were accountable to the Guberniya Prysutstvia on Factory and Mining Issues, which meant that they were subordinated to the guberniya administrations. During the creation of the new personnel of the Factory Inspection there was a lack of professionals.
Another novelty of the Law of March 14, 1894 was the abolition of the office of guberniya mechanics and the transfer of their powers to the Factory Inspection, which thereby expanded its duties. As a result, besides control over the legality of labor conditions for children and women, the factory inspectors had to supervise the technical condition of steam boilers at the enterprises, to record their number and to collect a boiler tax, which was introduced for 3 years but existed until 1917. The officials of the inspection were paid from this tax. In addition, the factory inspectors were obligated to collect, check and summarize statistical data on the development of industry\(^2\).
On June 11, 1894 the Minister of Finance S.Yu. Witte issued an *Order to the Officials of the Factory Inspection*, which superseded the Instruction published in 1885. The order set out the above-mentioned duties of the factory inspectors and duplicated other sections of the Law of March 14, 1894\(^3\).
To exercise control over the activities of the factory inspectors, on June 7, 1899 the Main Prysutstvie on Factory and Mining Issues was established as a collegial body under the Department of Trade and Manufacture of the Ministry of Finance. Its task was to issue general rules and instructions for the factory officials in order to unify the normative regulation of the Factory Inspection's activities, to control those activities, and to check compliance with labor legislation\(^4\).
In 1899 a new district system was established to combine and structure the officials of the Factory Inspection. This time 6 factory districts were designated: the Petersburg, Moscow, Povolzhye, Kharkiv, Kyiv and Warsaw ones. They were headed by district factory inspectors, whose activities did not depend on the factory administration or the police\(^5\). They were subordinated to the District Prysutstvie on Factory and Mining Issues, whose activities were regulated by the decisions and regulations of the Main Prysutstvie on Factory and Mining Issues. Kyiv factory district consisted of Kyiv, Podillia, Volyn, Kherson, Bessarabia, Tavriia, Chernihiv, Poltava, Mohyliv, Voronezh, Kaluha, Kursk, Kutaisi, Orel, Penza, Tambov, Tifliandia, Kharkiv, Katerynoslav, Chornomorsk and Don guberniyas and the Sukhumi district\(^6\).
On May 30, 1903, the subordination of senior and local factory inspectors to the Governors was legalized. The Governor obtained the right to appoint and allocate inspectors to the districts, to require reports from them, and to contest their orders if they contradicted current legislation. The District Inspection was under the jurisdiction of the Department of Industry of the Ministry of Finance and executed the regulations of the Main Prysutstvie on Factory and Mining Issues. Its task was to audit the local officials of the Inspection and to summarize statistical data in annual reports\(^7\).
Therefore, during the second stage of the Factory Inspection's activities the territory under its control was changed, its structure was reorganized, the number of its employees significantly increased, and its powers
---
\(^1\) О преобразовании фабричной инспекции и должностей губернских механиков и о распространении действия правил о надзоре за заведениями фабрично-заводской промышленности и о взаимных отношениях фабрикантов и рабочих (1898). Полное собрание законов Российской империи III, Т. 14, № 10421.
\(^2\) Высочайше утвержденное мнение Государственного совета о преобразовании фабричной инспекции и должностей губернских механиков и о распространении действия правил о надзоре за заведениями фабрично-заводской промышленности и о взаимных отношениях фабрикантов и рабочих (1898). Полное собрание законов Российской империи III, Т. 14, № 10421.
\(^3\) Наказ чинам фабричной инспекции Департамента торговли и мануфактур Министерства финансов о правах и обязанностях фабричной инспекции от 11 июня 1894 г. (1894). Центральний державний історичний архів м. Києва України. Ф. 574, оп. 1, спр. 2, арк. 6-12.
\(^4\) Раскин, Д.И. (2001). Высшие и центральные государственные учреждения России. 1801-1917. Т.2. Санкт-Петербург: Наука, 175.
\(^5\) Володин, А.Ю. (2009). История фабричной инспекции в России 1882-1914 гг. Москва: Российская политическая энциклопедия (РОССПЭН), 68.
\(^6\) Зикова, В. (1960). Документальні матеріали архівних фондів: «Канцелярія окружного фабричного округу» і «Канцелярія старшого фабричного інспектора Київської губернії. Науково-інформаційний бюлетень архівного управління УРСР, 6 (44), 61-72.
\(^7\) Высочайше утвержденный всеподданнейший доклад Министров внутренних дел и Финансов о порядке и пределах подчинения чинов фабричной инспекции начальникам губерний и о некоторых изменениях во внутренней организации ее от 4 июня 1903 г. (1905). Полное собрание законов Российской империи III, Т. 23, № 23041.
were broadened. At that time its activities extended to all the Ukrainian guberniyas within the Russian Empire. A two-level (guberniya and district) system of factory control was created, which transformed the Factory Inspection into a real public mechanism that regulated factory life and controlled compliance with labor legislation.
The third stage of the Factory Inspection's activities (1905-1913) coincided with the revolution of 1905-1907 and the pre-war period. Regular changes in legislation, structural reorganization, the lack of professional staff and the low number of employees prevented the Factory Inspection from working in normal conditions, which negatively affected the performance of its duties. Dissatisfaction with working conditions and undue exploitation increased the number of striking workers annually. The strike at the Kyiv machine factory “Hreter and Kryvanek” in December 1903 was significant, and the Government was worried by the strikes which took place at most enterprises of Kyiv in July 1903\(^1\). In January 1905 alone, almost 400 thousand workers across the entire Russian Empire went on strike in protest.
The unsolved “employment issue” and the lack of effective labor legislation resulted in the revolution of 1905-1907 in the Russian Empire, which started with the “Bloody Sunday” of January 9, 1905 in Petersburg. On that day, a peaceful assembly of thousands of workers marching to the tsar with a petition to improve their living conditions was fired upon. More than 200 people were killed and hundreds were wounded, which outraged workers at factories and plants everywhere. Strikes spread to all the industrial cities of the Russian Empire. Responsibility for the outbreak of the revolution was partly laid on the Factory Inspection for its failure to act.
In light of these events, on April 4, 1905, a meeting of the factory inspectors took place in Saint Petersburg under the guidance of M.P. Lanhovyi, who headed the Department of Industry of the Ministry of Finance, and the Minister of Finance V.M. Kokovtsov\(^2\). A decision was taken to cancel the punishment of workers of factories and plants for protests, strikes and nonattendance, as well as for early termination of the employment agreement\(^3\). At the meeting the factory inspectors argued against repression of the employees and proposed to act as mediators and conciliators in labor disputes without resorting to the administrative compulsion of the police.
After the establishment of the Ministry of Trade and Industry on October 27, 1905, the Factory Inspection and the Main Prysutstvie on Factory and Mining Issues were transferred to its jurisdiction\(^4\). Thereafter the structure of the Factory Inspection did not change. In addition to the duties mentioned above, it drafted labor and social legislation. But it failed the test of efficiency in the revolutionary years.
The last stage of the Factory Inspection (1914-1918) coincided with the period of the First World War and the decline of the Russian Empire. It involved the participation of the factory inspectors in the technical support of mobilization, the evacuation of industrial enterprises, and control over the transition to the production of military equipment.
In the early years of the war, because of the mobilization, the *Order on Suspension of Some Articles* of the *Charter of Industrial Labor* until the end of the war was issued on October 19, 1915. The Order increased working hours at factories and plants and allowed child labor and night shifts. It is essential to mention that the factory inspectors, along with others, were also drafted into the army, and their families received an allowance for their military service\(^5\).
One of the main tasks of the Factory Inspection during the war was the adaptation of factory enterprises to the fulfillment of military contracts\(^6\). The inspectors had to evacuate factories and plants from the Warsaw and Kyiv factory districts, where fighting took place. Establishments which could not be evacuated were closed\(^7\).
---
\(^1\) О забастовке на заводе «Гретер и Криванек» в декабре 1903 г. (1903). Центральний державний історичний архів м. Києва України. Ф. 575, оп. 1, спр. 370, 25 арк.
\(^2\) Володин, А.Ю. (2009). *История фабричной инспекции в России 1882-1914 гг.* Москва: Российская политическая энциклопедия (РОССПЭН), 118.
\(^3\) Об отмене карательных статей закона, касающихся стачек и досрочного расторжения договоров о найме (1905). Центральний державний історичний архів м. Києва України. Ф. 575, оп. 1, спр. 413, 9.
\(^4\) Именной Высочайший указ, данный Сенату об учреждении Министерства торговли и промышленности (1905). Полное собрание законов Российской империи III. Т. 25. № 26851.
\(^5\) О выдаче пособий семьям мобилизованных служащих фабричной инспекции Киевского округа (1915). Центральний державний історичний архів м. Києва України. Ф. 575, оп. 1, спр. 775, 144 арк.
\(^6\) Переписка с Министром торговли и промышленности о переводе промышленных предприятий на выполнение военных заказов (1905). Центральний державний історичний архів м. Києва України. Ф. 575, оп. 1, спр. 766, 70 арк.
\(^7\) Копии журналов заседаний исполнительной комиссии Волынского губернского эвакуационного комитета.
Later, supervision over the Warsaw district was suspended. In August 1915 Kyiv district inspector V.F. Svyrskyi was appointed by the Minister of Trade and Industry to be in charge of the evacuation of enterprises from Kyiv factory district\(^1\). Upon the instruction of the All-Russian Zemstvo Union he also had to equip a military hospital in Kyiv\(^2\).
The situation became even more complicated towards the end of the war because of the economic crisis. Terrible living conditions, exploitation and low wages resulted in mass dissatisfaction of workers, who went on strike. The revolutions of February and then October 1917 initiated the decline of the Russian Empire and, as a consequence, of all its central, superior and local authorities. In March 1918 the positions of factory inspectors were liquidated under the Regulation of the Rada Narodnykh Komisariv (the Council of People's Commissars, the government of Soviet Russia)\(^3\). According to the Decree of the Rada Narodnykh Komisariv of the Ukrainian SSR dated March 19, 1919, the Factory Inspection was replaced by the Inspection of Workers, which was later renamed the Labor Inspection\(^4\).
To sum up, it is essential to mention that the Factory Inspection went through a long process of formation and development. In the beginning, entrepreneurs and owners of factories and plants, including the factory administration, were against its establishment. Thanks to M.Kh. Bunge the Factory Inspection was introduced by the Law of June 1, 1882 under the Ministry of Finance, but in fact it started operating only in 1884. At the initial stage it controlled only 3 guberniyas of the Russian Empire: the Saint Petersburg, Moscow and Vladimir ones. The Ukrainian lands came under its control only in 1894, when the Minister of Finance S.Yu. Witte reorganized it by increasing the staff of the inspectorate and changing its competence. The Factory Inspection supervised compliance with labor and social legislation within Kyiv and Kharkiv factory districts, which comprised all the Ukrainian lands under the control of the Russian Empire.
Regular changes in legislation, the lack of qualified personnel and structural reorganization hampered the activities of the Factory Inspection. It could not fulfill its tasks because of the low number of employees. In addition, the struggle between the Ministry of Finance and the Ministry of Internal Affairs over jurisdiction had a negative impact on its operation. The lack of the right to impose liability for violations of labor legislation turned factory inspectors into technical supervisors and mediators of labor disputes. The resulting failure to resolve the “employment issue” contributed to the outbreak of the revolutions of 1905-1907 and 1917-1920, which caused the decline of the Russian Empire.
But in spite of the mentioned problems, the Factory Inspection had a relatively high status in the industry of the second half of the 19\(^{th}\) to the early 20\(^{th}\) centuries. Its activities attracted the attention of the reputable economists, lawyers and scholars of that time. For a long time it was the major lever for resolving the “employment issue” and a mechanism for preventing social revolt in the state. Moreover, the documents of the factory inspectors have not yet been fully evaluated, although they are the major source for studying the development of the factory industry and the social and economic issues of the second half of the 19\(^{th}\) to the early 20\(^{th}\) century. Therefore, we need to pay attention to the existence and activities of such a public authority as the Factory Inspection.
**References**
1. Balytskyi, Ye.V. (1907). *Kakaia dolzhna byt fabrichnaia inspektsiia*. Moskva: Mokhovaia, d. Benkendorf, knizhnii mahazin D.P. Efymova.
2. *Entsiklopedicheskii slovar F. A. Brokhausa, Ye. A. Efrona* (1902). T. 35 (69). Sankt-Peterburh: Tipohrafiia Aktionernoho Obshechestva Brokhaus-Efron.
3. Hlazunov, S.R. (2011). *K voprosu o sozdanii instituta fabrichnoi inspektsii v Rossii v kontse XIX veka*. *Vestnik*
Планы эвакуации. Отчеты о деятельности Эвакуационной комиссии по Ровенскому уезду (1914). Центральный державный исторический архив м. Киева Украины. Ф. 575, оп. 1, спр. 751, 71 арк.
\(^1\) О назначении окружного инспектора Киевского фабричного округа главноуполномоченным эвакуацией промышленных предприятий округа. Об эвакуации предприятий в связи с войной (1914). Центральный державный исторический архив м. Киева Украины. Ф. 575, оп. 1, спр. 752, 122 арк.
\(^2\) Сведения о деятельности окружного фабричного инспектора Киевского округа В.Ф. Свирского в качестве уполномоченного Всероссийского земского союза по устройству военного госпиталя (1914). Центральный державный исторический архив м. Киева Украины. Ф. 575, оп. 1, спр. 741, 78 арк.
\(^3\) Володин, А.Ю. (2009). *История фабричной инспекции в России 1882-1914 гг.* Москва: Российская политическая энциклопедия (РОССПЭН), 125.
\(^4\) *Постановление об учреждении инспекции труда 1918* (СНК РСФСР). *Собрание узаконений и распоряжений Рабочего и Крестьянского правительства*. № 36, отд. 1, 474.
4. Kopii zhurnalov zasedanii ispolnytelnoi komissii Volinskoho hubernskoho evakuatsionnoho komiteta. Plani evakuatsii. Otchoty o deiatelnosti Evakuatsionnoi komissii po Rovenkomu uezdu (1914). *TsDIAK Ukrainy*. F. 575, op. 1, spr. 751, 71 ark.
5. Litvinov-Falinskii, V.P. (1904). *Fabrichnoe zakonodatelstvo i fabrichnaia inspektsiia v Rossii*. 2-e izdanie, ispravlennoe i dopolnenoee. Sankt-Peterburh: Tipohrafiia A. S. Suvorina.
6. Mykhailovskii, Ya.T. (1882). O deiatelnosti fabrichnoi inspektsii. *Otchet za 1885 h. hlavnoho fabrichnoho inspektora Mykhailovskoho*. Sankt-Peterburh.
7. Nakaz chinam fabrichnoi inspektsii Departamenta torhovli i manufaktur Ministerstva finansov o pravakh i obiazannostiakh fabrichnoi inspektsii (1894). *Tsentralnyi derzhavnyi istorychnyi arkhiv m. Kiieva Ukrainy* (*TsDIAK Ukrainy*). F. 574, op. 1, spr. 2, ark. 6-12.
8. Novitskii, I.O. (1886). *Otchet za 1885 h. Fabrychnoho inspektora Kievskoho okruha*. Sankt-Peterburh: Tipohrafiia V. Kirshbauma.
9. O maloletnikh, rabotaiuschykh na zavodakh, fabrikakh i manufakturakh (1882). *Polnoe sobranie zakonov Rossiiskoi imperii. III. 2, № 931*.
10. O naznacheni okruzhnoho inspektora Kievskoho fabrichnoho okruha hlavnopolnomocheninn evakuatsiei promyshlennykh predpriiatii okruha. Ob evakuatyi predpriatiit v sviazy s voinoi (1914). *TsDIAK Ukrainy*. F. 575, op. 1, spr. 752, 122 ark.
11. O shkolnom obuchenii maloletnikh, rabotaiuschykh na zavodakh, fabrikakh i manufakturakh, o prodolzhytelnosti ikh raboty i o fabrichnoi inspektsii (1884). *Polnoe sobranie zakonov Rossiiskoi imperii III, 4, № 2316*.
12. O vydache posobii semiam mobilizovannikh sluzhashchikh fabrychnoi inspektsii Kievskoho okruha (1915). *TsDIAK Ukrainy*. F. 575, op. 1, spr. 775, 144 ark.
13. O zabastovke na zavode «Hreter i Krivanek» v dekabre 1903 (1903). *TsDIAK Ukrainy*. F. 575, op. 1, spr. 370, 25 ark.
14. Ob otmene karatelnykh statei zakona, kasaiushchykhsia stachek i dosrochnoho rastorzheniya dohovorov o naime (1905). *TsDIAK Ukrainy*. F. 575, op. 1, spr. 413, 9 ark.
15. Obiasnitelnaia zapiska (1860). *Proekt pravil dlia fabrik i zavodov v Sankt-Peterburhe i uezde*.
16. Perepiska s Ministrom torhovi i promishlennosti o perevode promishlennikh predpriatii na vipolnenie voennikh zakazok (1905). *TsDIAK Ukrainy*. F. 575, op. 1, spr. 766, 70 ark.
17. Po proektu Pravil o nadzore za zavedeniami fabrychnoi promishlennosti i o vzaimnikh otnosheniiakh fabrykantov i rabochykh i ob uvelychenii chysla chynov fabrychnoi inspektsii (1888). *Polnoe sobranie zakonov Rossiiskoi imperii. III. 6, № 3769*.
18. Postanovlenie ob uchrezhdenii inspektsii truda 1918 (SNK RSFSR). *Sobranie uzakonenii i raspioriazhenii Rabocheho i Krestianskoho pravitelstva. № 36, otd. 1, 474*.
19. Raskin, D.Ya. (2001). Visshye i tsentralnye hosudarstvennye uchrezhdenyia Rossii. 1801-1917. T.2. Sankt-Peterburh: Nauka.
20. Svedenia o deiatelnosti okruzhnoho fabrichnoho inspektora Kievskoho okruha V. F. Svirskoho v kachestve upolnomochennoho Vserossiiskoho zemskoho soiuza po ustroistvu voennoho hospytalia (1914). *TsDIAK Ukrainy*. F. 575, op. 1, spr. 741, 78 ark.
21. Sviatlovskii, V.V. (1886). *Kharkovskii fabrichnii okruh*. Otchet za 1885 h. fabrichnoho inspektora Kharkovskoho okruha V. V. Sviatlovskoho. Sankt-Peterburh.
22. Tuhan-Baranovskii, M.N. (1997). Izbrannoe. *Russkaia fabrika v proshlom i nastoiashchem*. Istoricheskoe razvitie russkoii fabriki v XIX veke. Moskva: ROSSPEN.
23. Visochayshe utverzhdennoe mnenie Hosudarstvennoho sovetia o preobrazovanii fabrichnoi inspektsii i dolzhnostei hubernskyykh mekhanykov i o rasprostraneni deistviya pravyl o nadzore za zavedenyiamy fabrychno-zavodskoi promishlennosti i o vzaimnikh otnosheniiakh fabrykantov i rabochykh (1898). *Polnoe sobranie zakonov Rossiyskoi imperii III, 14, № 10421*.
24. Vodotyka, T.S. (2013). Dokumenty fabrychnoi inspektsii v TsDIAK Ukrainy yak dzherelo do vyvchennia istorii pidpriemnytstva v druhii polovyni XIX – na pochatku XX st. *Arkhivy Ukrainy*, 1 (283), 167-175.
25. Volodin, A.Yu. (2007). Fabrichnaia inspeksiia v Rossii (1882-1904). *Otechestvennaia istoriia*, 1, 23-40.
26. Volodin, A.Yu. (2009). *Istoriia fabrichnoi inspektsii v Rossii 1882-1914 hh*. Moskva: Rossiiskaia politicheskaia entsyklopedia (ROSSPEN).
27. Vysochaishse utverzhdennyi doklad Ministrov vnutrennikh del i Finansov o poriadke i predelakh podchynenyia chynov fabrichnoi inspektsiis nachalnym hubernii i o nekotorikh izmeneniakh vo vnutrennei orhanizatsii (1903). *Polnoe sobranie zakonov Rossiiskoi imperii III, 23, № 23041*.
28. Ymennoi Vysochaishyi ukaz, dannyi Senatu ob uchrezhdennii Ministerstva torhovly i promishlennosty (1905). *Polnoe sobranie zakonov Rossiiskoi imperii. III. 25, № 26851*.
29. Zykova, V. (1960). Dokumentalni materialy arkhivnykh fondiv: «Kantseliariia okruzhnoho fabrychnoho okruhu» i «Kantseliariia starshoho fabrychnoho inspektora Kyivskoi hubernii. Naukovo-informatsiinyi biuleten arkhivnoho upravlinnia URSR. № 6 (44), 61-72. |
Regular Council Meeting For Public Hearings
Monday, October 17, 2011
Place: Council Chambers
Richmond City Hall
6911 No. 3 Road
Present: Mayor Malcolm D. Brodie
Councillor Linda Barnes
Councillor Derek Dang
Councillor Evelina Halsey-Brandt
Councillor Greg Halsey-Brandt
Councillor Sue Halsey-Brandt
Councillor Ken Johnston
Councillor Bill McNulty
Councillor Harold Steves
Gail Johnson, Acting Corporate Officer
Call to Order: Mayor Brodie opened the proceedings at 7:00 p.m.
1. Zoning Amendment Bylaw 8795 (RZ 11-577573)
(3680/3700 Blundell Road; Applicant: Navjeven Grewal)
Applicant’s Comments:
The applicant was available to respond to questions.
Written Submissions:
None.
Submissions from the floor:
None.
PH11/10-1 It was moved and seconded
That Zoning Amendment Bylaw 8795 be given first and second readings.
CARRIED
2. Zoning Amendment Bylaw 8796 (RZ 11-572975)
(9640/9660 Seacote Road; Applicant: Gurjit Bapla)
Applicant’s Comments:
The applicant was available to respond to questions.
Written Submissions:
None.
Submissions from the floor:
None.
PH11/10-2 It was moved and seconded
That Zoning Amendment Bylaw 8796 be given second and third readings.
CARRIED
3. Official Community Plan Amendment Bylaw 8803 and Zoning Amendment Bylaw 8804 (RZ 11-563568)
(7691, 7711 and 7731 Bridge Street; Applicant: Am-Pri Construction Ltd.)
Applicant’s Comments:
The applicant was available to respond to questions.
Written Submissions:
None.
Submissions from the floor:
Ms. Jarmana, accompanied by her father, Ken Jarmana, 7671 Bridge Street, noted that their property line abuts the subject site’s north property line and asked whether the applicant would install a fence along the entire length of the shared property line. She noted that a public walkway runs along the shared property line and that, at present, the hedge along the line does not run its entire length.
Brian J. Jackson, Director of Development, provided advice regarding the applicant’s plan to have a fence installed along the entire length of the shared property line.
PH11/10-3 It was moved and seconded
That OCP Amendment and Zoning Amendment Bylaws 8803 and 8804 each be given second and third readings.
CARRIED
PH11/10-4 It was moved and seconded
That OCP Amendment Bylaw 8803 be adopted.
CARRIED
4. Zoning Amendment Bylaw 8806 (RZ 11-585249)
(11531 Williams Road; Applicant: Ajit Thaliwal)
Applicant’s Comments:
The applicant was available to respond to questions.
Written Submissions:
None.
Submissions from the floor:
None.
PH11/10-5 It was moved and seconded
That Zoning Amendment Bylaw 8806 be given second and third readings.
CARRIED
5. Official Community Plan Amendment Bylaw 8807 and Zoning Amendment Bylaw 8808 (RZ 11-561611)
(10600, 10700 Cambie Road and Parcel C (PID 026-669-404); Applicant: Abbarch Architecture Inc.)
Applicant’s Comments:
The applicant was available to respond to questions.
Written Submissions:
None.
Submissions from the floor:
None.
PH11/10-6 It was moved and seconded
That OCP Amendment Bylaw 8807 and Zoning Amendment Bylaw 8808 each be given second and third readings.
CARRIED
PH11/10-7 It was moved and seconded
That OCP Amendment Bylaw 8807 be adopted.
CARRIED
6. Zoning Text Amendment Bylaw 8811 (ZT 11-565675)
(14000 and 14088 Riverport Way; Applicant: Patrick Cotter Architect Inc.)
Applicant’s Comments:
The applicant was available to respond to questions.
Written Submissions:
(a) Memorandum dated October 6, 2011, from Brian J. Jackson, Director of Development (Schedule 1)
(b) Robert A. Gillis, General Manager, Holiday Inn Express & Suites, 10688 No. 6 Road (Schedule 2)
(c) Avtar Bains, President, No. 176 Sail View Ventures Ltd., 14200 Entertainment Blvd. (Schedule 3)
(d) Chris & Kenneth Lau, #303-14100 Riverport Way (Schedule 4)
(e) Mark Westcott, #208-14100 Riverport Way (Schedule 5)
(f) Tanya Deutsch, #201-14100 Riverport Way (Schedule 6)
(g) Darshan Rangi, #310-14200 Riverport Way (Schedule 7)
Submissions from the floor:
Janice Ruby, 14200 Riverport Way, spoke in support of low-rise development in the neighbourhood, but was opposed to the seven-storey height of the proposed development. She expressed concern regarding increased risk for residents, present and future, in her neighbourhood due to: (i) the proximity of the proposed marine terminal for jet fuel; and (ii) the sewage facility.
Ms. Ruby also noted that, at present, there is parking congestion in the neighbourhood, and new residents would bring more cars into an area already crowded with vehicles.
Ms. Terri Havil, 14300 Riverport Way, spoke in support of the proposed development. She acknowledged there was a parking problem in the area, but she stated she looked forward to more development in the area, where she lives an enjoyable lifestyle.
She also referenced the traffic congestion that takes place on a regular basis at the Highway 99 Overpass and Steveston Highway, which needs to be addressed.
Patrick Cotter, applicant, provided clarification on the parking ratios for the proposed and existing developments, and responded to questions about on-street parking, affordable housing, and the indoor and outdoor amenity spaces.
A discussion ensued among Council, Mr. Cotter, and staff, regarding whether the public would be able to use the indoor amenity space.
PH11/10-8 It was moved and seconded
That Zoning Text Amendment Bylaw 8811 be given second and third readings.
The question on Resolution PH11/10-8 was not called, as the following amendment was introduced:
PH11/10-9 It was moved and seconded
That item #3 of the Zoning Text Amendment considerations be amended to add the words “as well as City-affiliated groups seeking meeting space”.
PH11/10-10 The question on Resolution PH11/10-8 was then called and it was CARRIED, with Cllrs. E. Halsey-Brandt and G. Halsey-Brandt opposed.
ADJOURNMENT
PH11/10-11 It was moved and seconded That the meeting adjourn (8:11 p.m.).
CARRIED
Certified a true and correct copy of the Minutes of the Regular Meeting for Public Hearings of the City of Richmond held on Monday, October 17, 2011.
Mayor (Malcolm D. Brodie) Acting Corporate Officer
City Clerk’s Office (Gail Johnson)
Purpose of Memo
On September 26, 2011, Council gave First Reading to the Patrick Cotter Architect Inc. zoning text amendment proposal regarding 14000 & 14088 Riverport Way. The issue of affordable housing was discussed, and Council made the following referral requesting further information:
“staff was directed to provide information for the Public Hearing, on the strategy used in determining the density for this application.”
The purpose of this memorandum is to respond to this request.
Proposed Density
Staff carefully reviewed the applicant’s request to change land uses and increase density from 1.0 FAR to 1.91 FAR to accommodate a new mixed-use purpose built rental apartment building on the development site at 14000 Riverport Way. Staff considered the following in determining an appropriate density for the site:
- ability of site to maximize amount of market rental residential housing;
- ability of site to accommodate building massing;
- ability of site to accommodate adequate parking for commercial and residential uses;
- opportunity to provide a taller landmark building at the East end of Steveston Highway on the River’s edge;
- fit with neighbouring 1.5 FAR density market rental residential housing development (see Table 1 below);
- need for higher density to offset more expensive higher quality concrete construction; and
- requirements for neighbourhood meeting room, and indoor and outdoor amenities for residents.
The proposed increased density of 1.91 FAR allows the project to shift from wood construction to more expensive concrete construction, which provides the following benefits:
- Longer building life (approximately 100 years);
- Lower maintenance costs with reduced materials shrinkage; and
- Improved resident privacy through reduced low-pitch vibration and noise transmission from floor to floor.
Table 1: 14000 & 14088 Riverport Way: Comparison of Density and Land Uses
| | Permitted FAR | Proposed FAR | Permitted Uses | Proposed Uses |
|---------------------|---------------|--------------------|----------------|---------------|
| 14000 Riverport Way | 1.0 | 1.91 + 0.1 amenity | Child care; Dormitory; Hotel; Office; Parking, non accessory; Private club; Restaurant; Retail, General; Outdoor storage | 68.3 sq.m. CRU; Housing, apartment (60)* (Outdoor storage deleted) |
| 14088 Riverport Way | 1.5 | 1.5 | Child care; Dormitory; Hotel; Office; Parking, non accessory; Private club; Restaurant; Retail, General; Housing, apartment* | Housing, apartment (80)* |
Apartment Housing* may include the following permitted secondary uses:
- residential security/operator unit
- community care facility, minor
- home business
**Market Rental Support to Affordable Housing**
Canada Mortgage and Housing Corporation (CMHC) reports that the Richmond rental housing vacancy rate was 1.5% in October 2010 and is anticipated to decline modestly in 2011. Moreover, CMHC indicates that a strong rental demand will remain due to a number of factors, including:
- The region’s diverse economy and role as the gateway to Asia-Pacific immigrants;
- The anticipation that the region will receive 40,000 new residents annually; and
- Anticipated employment growth (e.g. Attracting and keeping knowledge based workers is integral to supporting a strong economy in coming years. Technical Industries employ knowledge workers who are highly mobile and often depend on rental housing located near employment).
The Urban Futures report entitled: “Community-level Projections of Population, Housing and Employment” prepared for the City’s 2041 OCP Update, suggests that Richmond’s share of new apartments in the Region will decline from 11 percent in 2009 to 6 percent in 2041. Some of the reasons cited are:
- Increased competition throughout the region for this housing form;
- Regional availability of land in other areas; and
- Region-wide densification patterns.
The report also reveals that 77 percent of Richmond’s condo apartment development is anticipated to be located in the City Centre. With these considerations in mind, the Riverport application provides a unique opportunity to develop much needed rental housing in an area outside of the City Centre, which will:
- Meet growing rental demand;
- Relieve pressure on vacancy rates; and
- Serve as dedicated rental housing stock in perpetuity.
Staff recognize that it is financially challenging to develop purpose-built rental housing in the absence of senior government funding or incentives. Without such programs or other incentives (e.g. Vancouver’s Short Term Incentives for Rental development), rental revenue will be required to offset the project’s debt-servicing costs, whereas a private condominium development would rely on unit sales revenue. Further, independent studies for Metro Vancouver and Vancouver confirm that both concrete and wood-frame purpose-built market rental developments are at a capital cost disadvantage relative to condo apartment developments. Thus, challenges exist in achieving viable project economics to support both the development and delivery of market rental housing.
Decreased rental housing starts and forecasted future rental demand impose on-going pressure on existing rental stock. For example, the CMHC report entitled: “Rental Market Report- Vancouver and Abbotsford CMAs” released in the Fall of 2010 reflected that 20 of the 1,088 one-bedroom units in Richmond were vacant and 15 of the 1,065 two-bedroom units were vacant. The report also reveals that average Richmond market rents range from $724 for bachelor units to $1,096 for two-bedroom units.
Securing additional purpose-built rental stock is considered important, both for households that are unable to purchase housing and for those that choose not to. At this time, the applicant is not able to set rental rates, as a full accounting of the construction and financing costs is not yet available. However, based on preliminary rental rate estimates, it is estimated that at least 40 percent of Richmond renters could afford the expected market rents at 14000 and 14088 Riverport Way.
The Regional Growth Strategy indicates that Richmond’s 10 year estimated rental demand is 5,600 units or 560 units, annually. The Riverport project will deliver 140 rental units or approximately 25 percent of Richmond’s annual estimated need for rental housing. The units will be affordable to individuals with incomes between $35,800 and $84,400, thus, relieving pressure on available private rental stock for Richmond’s low to moderate income households with incomes between $31,500 and $51,000, as stipulated in the City’s Affordable Housing Strategy.
For the reasons listed above, Staff recommend waiving the affordable housing contribution of $213,823.00 with respect to the project’s delivery of rental housing that will:
- Be secured through legal agreements in perpetuity;
- Attract and support current and future employment growth in Richmond;
- Potentially serve 40 percent of Richmond’s renters; and
- Add to the market rental stock, relieving pressure on local rental housing demand.
In summary, the proposed Riverport development will increase the variety of available rental options in the City, thereby relieving pressure on other forms of rental options that may be more affordable (e.g. secondary suites, low end market rent units, co-op housing, and affordable rental housing).
**Challenges of Dormitory Development and Market Rental Development**
The original Riverport rezoning included the development of dormitory space within the overall project. Given the close proximity to the Riverport Athletics and Entertainment Complex, dormitory space was then seen as a need and an economically viable use. As noted in the staff report, since the original rezoning a hotel has been developed in the immediate area, satisfying much, if not all, of the need for short-term stay accommodation. With no remaining need for a dormitory in the area, staff agree with the applicant’s contention that a dormitory in this location would not be financially self-sufficient and would most likely result in operating losses. Staff therefore considered it appropriate to pursue another, more viable use on this site.
There is a shortage of purpose built market rental residential accommodation in Richmond and very little interest in developing new purpose built market rental residential accommodation. The primary reasons for the lack of new purpose built rental development are as follows:
- the demand for residential land in the region is extremely intense, leading to high levels of competition resulting in very significant land value increases;
- people are willing, and able, to pay more to purchase units as compared to the capitalized value of such units based on their achievable market rental rates; and
- based on the above, the result is that there is significantly more profit potential, and actual profits derived, from the development of units to sell in the open market, thus “out-competing” the market rental building developer for the land.
The likelihood of Richmond seeing any sort of significant development of market rental units in the near future is very limited. Unless lands are specifically set aside for market rental development only (which lowers land price expectations thus providing developers with similar profit expectations) or there are very significant relaxations of other rezoning and building related provisions (such as parking requirement relaxation), projects oriented toward the ownership market will continue as the predominant, if only, form of residential development for the foreseeable future.
This issue has long been a problem in the Lower Mainland, and has been identified as such since the late 1980s. There has been very limited growth in the supply of market rental product over the past 20 or so years, in marked contrast to the extreme levels of development oriented toward the ownership market.
Conclusion
Staff supports the proposal to develop the last remaining development parcel in the Riverport waterfront community with a new mixed-use building including 60 units of purpose built market rental apartment housing. The applicant has demonstrated the feasibility of accommodating the proposed density within a building that responds to its context and a site specific rental residential parking rate.
The proposal addresses the need for market rental residential accommodation in Richmond. The proposed Riverport development will increase the variety of available rental options in the City, thereby relieving pressure on other forms of rental options that may be more affordable (e.g. secondary suites, low end market rent units, co-op housing, and affordable rental housing).
Brian J. Jackson, MCIP
Director of Development
Dena Kae Beno
Affordable Housing Coordinator
604-247-4946
October 11, 2011
firstname.lastname@example.org
Mayor and Councillors
City of Richmond
6911 No. 3 Road
Richmond, BC V6Y 2C1
Dear Mayor Brodie and Councillors:
Re: Application for a Zoning Text Amendment – Riverport to Permit A Mixed-use Development with Rental Apartment Housing at 14000 and 14088 Riverport Way (File Ref. No. 12-8060-20-8811)
I am writing to you as the General Manager of the Holiday Inn Express Hotel and Suites, located at 10688 Number 6 Road at Riverport. We wish to register our support for the above-captioned application to permit a change in use that will result in much needed rental apartment housing to be built on the Riverport Way site.
This change in use, from the original plan that permitted dormitory facilities to be built on the site, is one that we enthusiastically welcome.
Since that original plan for the Riverport Way site was approved some years ago, we have made a substantial investment in our 105 - suite hotel. Since 2008, we have been successfully serving not only the needs of athletes visiting the facilities at Riverport but also business and leisure visitors to Richmond. Our competitive rates and our flexible accommodation arrangements make it economical for teams visiting Riverport, with athletes sharing spacious suites at our hotel, eliminating any demand for a dormitory facility at Riverport.
Moreover, with 35 people employed at our hotel, we welcome additional residential development at Riverport. The addition of rental housing will now offer our staff the opportunity to consider living in very close proximity to their place of employment.
We respectfully encourage Council to approve this application.
Yours truly,
Robert A. Gillis
General Manager
October 11, 2011
email@example.com
Mayor and Councillors
City of Richmond
6911 No. 3 Road
Richmond, BC V6Y 2C1
Dear Mayor Brodie and Councillors;
Re: Application for a Zoning Text Amendment – Riverport to Permit A Mixed-use Development with Rental Apartment Housing at 14000 and 14088 Riverport Way (File Ref. No. 12-8060-20-8811)
This letter is submitted in support of the application detailed above that will result in a zoning text amendment to permit the development of market rental housing on the site adjacent to the commercial property I own at 14200 Entertainment Boulevard.
You will know that my property is occupied by the Zone Bowling Centre, the Big River Brew Pub and the Old Spaghetti Factory restaurant.
We welcome additional residential development at Riverport. By providing more housing diversity at Riverport, you will be strengthening the mixed-use nature of this unique district, making it more vibrant and also making the area more viable for those commercial uses that serve not only visitors to Riverport, but also those who live there.
We all know that rental housing is desperately needed in Metro Vancouver and this purpose-built rental project is one of very few such projects that are being developed today. Moreover, this addition of rental housing at Riverport will offer more of the employees who work for my tenants an opportunity to consider living next door to where they work.
I urge you to support this application.
Respectfully submitted,
No. 176 Sail View Ventures Ltd.
Avtar Bains
President
Send a Submission Online (response #602)
Survey Information
| Site: | City Website |
|----------------|--------------|
| Page Title: | Send a Submission Online |
| URL: | http://cms.richmond.ca/Page1793.aspx |
| Submission Time/Date: | 10/13/2011 10:34:28 AM |
Survey Response
| Your Name: | Chris & Kenneth Lau |
|------------------|---------------------|
| Your Address: | 303-14100 Riverport Way, Richmond, B.C. |
| Subject Property Address OR Bylaw Number: | Zoning Text Amendment Bylaw 8811 (ZT 11-565675) |
| Comments: | I object to the amendment; the low density is the reason we bought at this location and not west of Steveston. The increase in density would cause more traffic and parking problems. We have traffic congestion in the morning along Steveston to No. 5 Road and also after the end of movies at the neighbourhood cinema. Kindly draw your attention to these issues. Thanks. |
To Public Hearing
Date: October 17, 2011
Item # 6
Re: By-law 8811
Send a Submission Online (response #601)
Survey Information
| Site: | City Website |
|-------------|--------------|
| Page Title: | Send a Submission Online |
| URL: | http://cms.richmond.ca/Page1793.aspx |
| Submission Time/Date: | 10/13/2011 9:23:46 AM |
Survey Response
| Your Name: | Mark Westcott |
|---------------------|---------------|
| Your Address: | #208 - 14100 Riverport Way, Richmond, V6W 1M3 |
| Subject Property Address OR Bylaw Number: | 14000 / 14088 Riverport Way, Richmond |
| Comments: | Zoning Text Amendment Bylaw 8811 (ZT 11-565675) Richmond Council, I have a concern I would like you to consider when determining whether you will grant the amendment to build a 7-storey building at 14000 and 14088 Riverport Way in Richmond. I am an owner at 14100 Riverport Way and am very aware of the problem the current residents and our visitors have trying to park on Riverport Way today. I understand that the initial proposal for the buildings at 14000/14088 Riverport Way had originally allocated 1.25 parking spots per unit. It is probable that many of the renters in the new building will have multiple cars that they will have to park on Riverport Way. Should Council allow a 7-storey building to be built instead of a 4-storey building, there will be no parking available for visitors to any of the existing buildings on Riverport Way. Please consider not allowing the by-law amendment, to avoid making an existing parking problem on Riverport Way much worse. Thank you for your consideration, Mark Westcott |
Comments:
I have received notification that there has been a request to amend "Low Rise Apartment (ZLR14) - Riverport" to permit a mid-rise mixed use development with market rental apartment housing, limited commercial and community amenity space. I would like to present my comments to the City and the Public Hearing on Monday October 17. I am against this proposed amendment and do not want these changes to be approved. There has already been a significant disruption to my neighbourhood with the current construction of Riverport Flats and with the approval of this amendment I see only further disruption and destruction of the environment. My concerns are: Location - the current neighbourhood at Riverport Way is small and secluded. It has a nice quiet atmosphere and I would really like it to stay that way; it was one of the reasons I chose to live in this particular area of Richmond. There is not enough space to accommodate more people, more buildings, more parking stalls, and more cars. There will already be enough of a challenge with increased car traffic and parking challenges when Riverport Flats are completed. Having another taller building with businesses below will only make it worse. Having a mid-rise building and commercial space will only attract more people, ultimately crowd the area, and destroy the quiet, peaceful atmosphere of the neighbourhood.
Amenities and Commercial Space - there is plenty of commercial space and businesses in several locations, all a very short distance from Riverport Way. First, there are businesses, restaurants, and amenities located at Silvercity and Watermania, located a few minutes walk from Riverport Way. There is also the ample commercial space, businesses, restaurants, and shopping located at Ironwood and Coppersmith centres. These are also located very nearby Riverport Way and are accessible by foot, by transit, and by vehicle. I do not want to see my neighbourhood and this very beautiful area crowded with more buildings, taller buildings, people and cars. There is a calm peaceful environment here and I do not agree with continuing to destroy the natural habitat and environment here on the waterfront. It's a beautiful place to live and I would love for it to stay that way. I also do not see the need for these changes in the area as I do not believe that these changes will make the neighbourhood better but rather will be a detriment. I also do not believe that there is enough demand or traffic to sustain viable commercial businesses. Thank-you for taking my concerns into consideration and I hope that the City will vote not to amend this zoning permit.
To Public Hearing
Date: October 17, 2011
Item #:
Re: Bylaw 88-11
October 17, 2011
By FAX
604-278-5139
Dear Sir/Madam,
Re: Zoning Text Amendment Bylaw 8811 (ZT 11-565675)
I am opposed to the above amendment to allow another apartment building in the area because of traffic problems.
Developers must construct 4 lanes on Silverton Hwy up to Hwy #99 before the city entertains any application for multi-family apartment construction in the area.
Thanks,
Alvin!
Darshan Rangi
310 - 14200 Riverportway
Richmond
Cell: 778-838-7900 |
Formal Semantics of Meta-Level Architectures:
Dynamic Control of Reasoning
Jan Treur
Vrije Universiteit Amsterdam, Department of Artificial Intelligence
De Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands
Email: email@example.com
URL: http://www.cs.vu.nl/~treur
Abstract
Meta-level architectures for dynamic control of reasoning processes are quite powerful. In the literature many applications in reasoning systems modelling complex tasks are described, usually in a procedural manner. In this paper we present a semantic framework based on temporal partial logic to describe the dynamics of reasoning behaviour. Using these models the semantics of the behaviour of the whole (meta-level) reasoning system can be described by a set of (intended) temporal models.
1 Introduction
In the literature on meta-level architectures and reflection (e.g., [28]) two separate streams can be distinguished: a logical stream (e.g., [4], [19], [37]) and a procedural stream (e.g., [10], [11]). Unfortunately there is a serious gap between the two streams. In the logical stream one often restricts oneself to static reflections, i.e., reflections of facts whose truth does not change during the reasoning: e.g., provable(A) (with A an object-level formula). In the procedural stream the reflected facts usually change truth value during the reasoning; e.g., control statements like current_goal(A), with A an object-level formula, that are sometimes true and sometimes false during the reasoning. Where applications to dynamic control of complex reasoning tasks are concerned, these dynamic reflections are much more powerful (for applications see, e.g.: [11], [10], or [6], [7], [8], [9], [18], [30], [31], [33], [35]). However, a logical basis has not been investigated in depth. The current paper provides such a logical foundation (based on temporal logic) of meta-level architectures for dynamic control. The semantic framework allows for the analysis of these dynamic meta-level architectures by logical means. It can be viewed as a contribution to bridging the gap between the logical stream and the procedural stream.
A meta-level architecture consists of two interacting components that reason at different levels: the object-level component and the meta-level component. The interactions between the components are: upward reflection (information transfer from the object-level component to the meta-level component) and downward reflection (information transfer from the meta-level component to the object-level component).
In a meta-level architecture each of the reasoning processes in one of the components can be assigned its own *local semantics* (local in view of the whole system) that can be formally described according to well-known approaches to static (declarative) and dynamic (procedural) semantics (for instance as known from logic programming). In this local semantics a *static view* (the contents; declarative) and a *dynamic view* (the control; procedural) can be distinguished, and these views can be treated (to a certain extent) as orthogonal (e.g., see [27]).
The crucial point of a meta-level architecture for dynamic control is that the semantics of the meta-level component relates in some manner to (the control of) the reasoning process of the object-level component. To obtain an *overall semantics* for the whole system, the crux is to formally describe the precise *semantic connection* between the two components. The semantic connection is used in a bidirectional manner. Firstly, meta-level reasoning *is about* (or refers to) process aspects of the reasoning of the object-level component and uses information about that (via upward reflection). Secondly, the results of its reasoning may *affect* the control of this object-level reasoning process by changing its control settings (downward reflection). Therefore meta-level architectures enable one to represent control knowledge in the system in an explicit declarative manner.
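The bidirectional connection described above can be made concrete in a small sketch. The Python below is an illustrative assumption, not the paper's formalism: the names `ObjectLevel`, `MetaLevel`, `upward_reflect` and `downward_reflect` are invented for exposition, and object-level "reasoning" is reduced to forward chaining over ground facts.

```python
# Illustrative sketch of a meta-level architecture for dynamic control.
# All names here are assumptions for exposition; the paper defines
# semantics, not an implementation.

class ObjectLevel:
    """Object-level component: forward chaining over ground facts."""
    def __init__(self, rules):
        self.rules = rules            # list of (premise set, conclusion) pairs
        self.facts = set()
        self.current_goal = None      # control setting, writable by the meta-level

    def step(self):
        """One inference step: apply the first applicable rule."""
        for premise, conclusion in self.rules:
            if premise <= self.facts and conclusion not in self.facts:
                self.facts.add(conclusion)
                return True
        return False                  # no rule applicable

class MetaLevel:
    """Meta-level component: reasons about the object-level process."""
    def upward_reflect(self, obj):
        # Upward reflection: extract information about the object-level process.
        return {"goal_reached": obj.current_goal in obj.facts}

    def downward_reflect(self, obj, meta_facts, goals):
        # Downward reflection: change the object-level control settings.
        if meta_facts["goal_reached"] and goals:
            obj.current_goal = goals.pop(0)

obj = ObjectLevel([({"a"}, "b"), ({"b"}, "c")])
meta = MetaLevel()
obj.facts = {"a"}
goals = ["b", "c"]
obj.current_goal = goals.pop(0)
trace = []
while True:
    progressed = obj.step()
    meta_facts = meta.upward_reflect(obj)          # upward reflection
    meta.downward_reflect(obj, meta_facts, goals)  # downward reflection
    trace.append(sorted(obj.facts))
    if not progressed and not goals:
        break
print(trace[-1])  # ['a', 'b', 'c']
```

Each pass through the loop is one transition between information states; the `trace` variable loosely corresponds to the kind of reasoning trace that the paper later formalizes as a partial temporal model.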
A formal semantic connection between the two components should relate some (formal) description of procedural (inference process control) aspects of the object-level component to the formal declarative description of the meta-level component. Therefore for the overall semantics of this type of reasoning system a global distinction between a static view and a dynamic view essentially cannot be made (as an extension of the distinction that can be made within each of the components). The views are not orthogonal in this case: to a certain extent they are defined in terms of each other. In particular, it is impossible to provide independent declarative semantics for such systems without taking into account the dynamics (of the object-level component). For the overall architecture a formal semantical description is needed that systematically integrates both views. The lack of such overall semantics for complex reasoning systems (with meta-level reasoning capabilities) was one of the major open problems that were identified in [36].
In this paper we develop a formal framework where partial models are used to explicitly represent (current) information states (see also [24]). This enables us to represent inference processes within each of the components as transitions between partial models, and a trace of a reasoning process as a partial temporal model. First, in Section 2 some basic notions from Temporal Partial Logic are introduced. Next, in Section 3 we give a formalization of a static view and a dynamic view on the object-level reasoning component. In Section 4 a temporal interpretation of meta-level reasoning is introduced and in Section 5 formal semantics for reasoning patterns of meta-level architectures for dynamic control are presented based on partial temporal models that formalize the overall reasoning traces. Finally, in Section 6 an example is presented.
2 Basic notions of Temporal Partial Logic
In this section we will introduce the formal notions of partial logic and temporal partial logic that are needed later on.
**Definition 2.1 (signature and propositional formula)**
a) A *signature* $\Sigma$ is an ordered sequence of (propositional) atom names, or a sequence of sort, constant, function and predicate symbols in (many-sorted) predicate logic. By $\Sigma_1 \subseteq \Sigma_2$ we denote that $\Sigma_1$ is a *subsignature* of $\Sigma_2$. The *disjoint union* of two signatures $\Sigma_1$ and $\Sigma_2$ is denoted by $\Sigma_1 \oplus \Sigma_2$. A *mapping of signatures* $\alpha : \Sigma_1 \rightarrow \Sigma_2$ is a mapping from the set of symbols of $\Sigma_1$ into the set of symbols of $\Sigma_2$ such that sorts are mapped to sorts, constants to constants, predicates to predicates, functions to functions, and the arities and argument-sort relations are respected.
b) The set $\text{At}(\Sigma)$ is the *set of (ground) atoms* based on $\Sigma$. By a (ground) *formula* of signature $\Sigma$ we mean a proposition built from (ground) atoms using the connectives $\land, \rightarrow, \neg$. We will call these formulae *propositional formulae*, in contrast to the temporal formulae introduced later on. Below we will assume all formulas ground (closed and without quantifiers). For a finite set $F$ of formulae, $\text{con}(F)$ denotes the conjunction of the elements of $F$; in case $F$ is the empty set then by definition $\text{con}(F)$ is true.
By $\text{Lit}(\Sigma)$ we denote the *set of ground literals* of signature $\Sigma$.
As discussed in [24], partial models can be used to represent information states in a reasoning system; therefore we define:
**Definition 2.2 (partial models as information states)**
a) An *information state* or *partial model* $M$ for the signature $\Sigma$ is an assignment of a truth value from $\{0, 1, u\}$ to each of the atoms of $\Sigma$, i.e. $M : \text{At}(\Sigma) \rightarrow \{0, 1, u\}$. An atom $a$ is *true* in $M$ if $1$ is assigned to it, and *false* if $0$ is assigned; else it is called *undefined* or *unknown*. A literal $L$ is called *true* in $M$, denoted by $M \models^+ L$ (resp. *false* in $M$, denoted by $M \models^- L$), if $M(L) = 1$ (resp. $M(L) = 0$) when $L$ is an atom, and if $M(a) = 0$ (resp. $M(a) = 1$) when $L = \neg a$ with $a \in \text{At}(\Sigma)$. By $\text{Lit}(M)$ we denote the *set of literals* which are true in $M$.
We call a partial model $M$ *complete* if no $M(a)$ equals $u$ for any $a \in \text{At}(\Sigma)$.
b) The *set of all information states* for $\Sigma$ is denoted by $\text{IS}(\Sigma)$. If $\Sigma_1 \subseteq \Sigma_2$ then this induces an embedding of $\text{IS}(\Sigma_1)$ into $\text{IS}(\Sigma_2)$; we will identify $\text{IS}(\Sigma_1)$ with its image under this embedding: $\text{IS}(\Sigma_1) \subseteq \text{IS}(\Sigma_2)$. Furthermore, $\text{IS}(\Sigma_1 \oplus \Sigma_2)$ can (and will) be identified with the Cartesian product $\text{IS}(\Sigma_1) \times \text{IS}(\Sigma_2)$.
c) We call $N$ a *refinement* of $M$, denoted by $M \leq N$, if for all atoms $a \in \text{At}(\Sigma)$ it holds: $M(a) \leq N(a)$ where the partial order on truth values is defined by
$$u \leq 0, \ u \leq 1, \ u \leq u, \ 0 \leq 0, \ 1 \leq 1.$$
d) If $K$ is a set of formulae of signature $\Sigma$, a complete model $M$ of signature $\Sigma$ is called a *model of $K$* if all formulae of $K$ are true in $M$. An information state is
consistent with \( K \) if it can be refined to a complete model that is a model of \( K \). By \( IS_K(\Sigma) \) we denote the set of all information states for \( \Sigma \) that are consistent with \( K \).
e) If \( M \) is a partial model for the signature \( \Sigma \) and \( S \subseteq At(\Sigma) \), then by \( M|S \) we denote the restriction or reduct of \( M \) to \( S \), defined by
\[
M|S(a) = \begin{cases}
M(a) & \text{if } a \in S \\
u & \text{otherwise (i.e., if } a \in At(\Sigma)\setminus S)
\end{cases}
\]
If \( S = At(\Sigma') \) for some subsignature \( \Sigma' \) of \( \Sigma \), then we denote \( M|S \) by \( M|\Sigma' \).
Notice that for partial models \( M, N \) for \( \Sigma \) it holds \( M \leq N \) if and only if \( M \models^+ L \Rightarrow N \models^+ L \) for all literals \( L \in Lit(\Sigma) \). We base the interpretation of propositional formulae on the Strong Kleene truth tables for the logical connectives (see also Definition 2.6 below); more details and possibilities of partial semantics can be found in [3], [25].
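The partiality machinery of Definition 2.2 can be made concrete in a few lines of Python. This is a minimal sketch under our own encoding (atoms as strings, truth values `1`, `0` and `"u"`); the names `refines` and `lit` are illustrative, not the paper's.

```python
U = "u"  # the third truth value: unknown

def refines(M, N):
    """M <= N (Definition 2.2c): N agrees with M wherever M is defined."""
    return all(v == U or N[a] == v for a, v in M.items())

def lit(M):
    """Lit(M): the literals true in M; negative literals written ('not', a)."""
    return ({a for a, v in M.items() if v == 1}
            | {("not", a) for a, v in M.items() if v == 0})

def is_complete(M):
    """A partial model is complete if no atom is assigned u."""
    return all(v != U for v in M.values())

# Example over the signature {p, q}: M is a partial description refined by N.
M = {"p": 1, "q": U}
N = {"p": 1, "q": 0}
assert refines(M, N) and not refines(N, M)
assert lit(N) == {"p", ("not", "q")}
assert is_complete(N) and not is_complete(M)
```

The refinement order on models is exactly the pointwise lifting of the order $u \leq 0$, $u \leq 1$ on truth values.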
**Definition 2.3 (labeled flow of time)**
Let \( L \) be a set of labels:
a) A (discrete) labeled flow of time, labeled by \( L \), is a pair \( T = (T, (\leq_i)_{i \in L}) \) consisting of a nonempty set \( T \) of time points and a collection of binary relations \( \leq_i \) on \( T \). Here for \( s, t \) in \( T \) and \( i \in L \) the expression \( s \leq_i t \) denotes that \( t \) is an (immediate) successor of \( s \) with respect to an arc labeled by \( i \). Sometimes we use just the binary relation \( s < t \), denoting that \( s \leq_i t \) for some label \( i \). Thus \( < \) is defined as \( \lor_i \leq_i \). We will assume that this relation \( < \) is irreflexive, antisymmetric and antitransitive.
We also use the (irreflexive) transitive closure of this binary relation, denoted by \( <^* \).
b) We call \( T \) linear if \( < \) is a linear ordering and rooted with root \( r \) if \( r \) is a (unique) least element: for all \( t \) it holds \( r = t \) or \( r < t \). We say \( T \) satisfies successor existence if every time point has at least one successor: for all \( s \in T \) there exists a \( t \in T \) such that \( s < t \).
**Definition 2.4 (partial temporal model)**
Let \( \Sigma \) be a signature.
a) A labeled (linear time) partial temporal model of signature \( \Sigma \) with labeled flow of time \( T \) is a mapping
\[
M : T \rightarrow IS(\Sigma)
\]
For any fixed time point \( t \) the partial model \( M(t) \) is also denoted by \( M_t \); the model \( M \) can also be denoted by \( (M_t)_{t \in T} \). If \( a \) is an atom, and \( t \) is a time point in \( T \), and \( M_t(a) = 1 \), then we say in this model \( M \) at time point \( t \) the atom \( a \) is true. Similarly we say that at time point \( t \) the atom \( a \) is false, respectively unknown, if \( M_t(a) = 0 \), respectively \( M_t(a) = u \).
b) The refinement relation \( \leq \) between partial temporal models is defined as: \( M \leq N \) if \( M \) and \( N \) have the same flow of time and for all time points \( t \) and atoms \( a \) it holds \( M_t(a) \leq N_t(a) \).
c) \( M \) is called conservative if for all \( s, t \in T \) with \( s < t \) it holds \( M_s \leq M_t \).
From now on we will assume that all used labeled flows of time are linear, rooted and satisfy successor existence. This is equivalent to \( T \) being order-isomorphic to the natural numbers \( N \). Therefore in the rest of the paper we will use \( N \) as our flow of time.
We introduce three temporal operators, \( X, P \) and \( C \), referring to the next information state, past information states and the current information state, respectively. Intuitively, that the temporal formula \( X\alpha \) is true at time \( t \) means that, viewed from time point \( t \), the formula \( \alpha \) is true in the next information state. We use labeled next operators to be able to distinguish different types of steps. That the temporal formula \( P\alpha \) is true at time \( t \) means that \( \alpha \) is true in some past information state. Furthermore we will need an operator expressing that currently \( \alpha \) is true (in the current information state); this will be the operator \( C \). Definition 2.5 makes this formal. Notice that sometimes we will write the application of a temporal operator \( O \) as \( O(\alpha) \); if no confusion is expected, for shortness we write \( O\alpha \). We will not need nested operators in this paper, although it would certainly be possible to use them.
**Definition 2.5 (semantics of the temporal operators)**
Let a propositional formula \( \alpha \), a labeled partial temporal model \( M \), a label \( i \in L \) and a time point \( t \in N \) be given. Then:
a) \((M, t) \models^+ X_i \alpha \iff \exists s \in N \ [t \leq_i s \land (M, s) \models^+ \alpha]\)
\((M, t) \models^- X_i \alpha \iff (M, t) \not\models^+ X_i \alpha\)
b) \((M, t) \models^+ C \alpha \iff (M, t) \models^+ \alpha\)
\((M, t) \models^- C \alpha \iff (M, t) \not\models^+ C \alpha\)
c) \((M, t) \models^+ P \alpha \iff \exists s \in N \ [s < t \land (M, s) \models^+ \alpha]\)
\((M, t) \models^- P \alpha \iff (M, t) \not\models^+ P \alpha\)
Now we can make new formulae using conjunctions, negations and implications of these temporal formulae. From now on the word (temporal) formula will be used to denote a formula possibly containing any of the new operators, unless stated otherwise.
As we do not need nesting of temporal operators, for convenience we will only consider non-nested formulae.
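The operator semantics of Definition 2.5 can be sketched in Python over a finite prefix of a trace (a finite list approximates the flow of time $\mathbb{N}$; the negative side $\models^-$ is simply the negation of the positive side, as in the definition). All names here are illustrative assumptions.

```python
U = "u"  # the third truth value: unknown

def holds_pos(models, labels, t, op, alpha, label=None):
    """(M, t) |=+ O(alpha) for O in {X_i, C, P} (Definition 2.5).

    models[t] is the information state at time t (atom -> 1/0/U);
    labels[t] is the label of the step from t to t+1.
    """
    if op == "C":   # alpha is true in the current information state
        return models[t].get(alpha, U) == 1
    if op == "X":   # alpha is true in the next state, reached by an i-labeled step
        return (t + 1 < len(models) and labels[t] == label
                and models[t + 1].get(alpha, U) == 1)
    if op == "P":   # alpha was true in some strictly earlier state
        return any(models[s].get(alpha, U) == 1 for s in range(t))
    raise ValueError(op)

# Example: atom a becomes true after an "exec"-labeled step.
models = [{"a": U}, {"a": 1}]
labels = ["exec"]
assert holds_pos(models, labels, 0, "X", "a", label="exec")
assert not holds_pos(models, labels, 0, "C", "a")
assert not holds_pos(models, labels, 1, "P", "a")  # a was unknown at time 0
```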
**Definition 2.6 (temporal formulae and their interpretation)**
Let $\Sigma$ be a signature, let $M$ be a labeled partial temporal model for $\Sigma$, and $t \in \mathbb{N}$ a time point.
a) A *temporal atom* of signature $\Sigma$ is a formula $O\alpha$ where $O$ is one of the temporal operators in Definition 2.5 and $\alpha$ a propositional formula of signature $\Sigma$.
A *temporal formula* of signature $\Sigma$ is a formula built from temporal atoms of signature $\Sigma$, using the logical connectives $\neg, \land, \rightarrow$.
b) Any propositional atom $p \in At(\Sigma)$ is interpreted according to:
$$
(M, t) \models^+ p \iff M(t)(p) = 1 \\
(M, t) \models^- p \iff M(t)(p) = 0
$$
For the interpretation of a temporal atom, see Definition 2.5.
c) For any two temporal or propositional formulae $\varphi$ and $\psi$:
(i) $(M, t) \models^+ \varphi \land \psi \iff (M, t) \models^+ \varphi$ and $(M, t) \models^+ \psi$
$(M, t) \models^- \varphi \land \psi \iff (M, t) \models^- \varphi$ or $(M, t) \models^- \psi$
(ii) $(M, t) \models^+ \varphi \rightarrow \psi \iff (M, t) \models^- \varphi$ or $(M, t) \models^+ \psi$
$(M, t) \models^- \varphi \rightarrow \psi \iff (M, t) \models^+ \varphi$ and $(M, t) \models^- \psi$
(iii) $(M, t) \models^+ \neg \varphi \iff (M, t) \models^- \varphi$
$(M, t) \models^- \neg \varphi \iff (M, t) \models^+ \varphi$
d) For any temporal or propositional formula $\varphi$:
$(M, t) \not\models^+ \varphi \iff (M, t) \models^+ \varphi$ does not hold
$(M, t) \not\models^- \varphi \iff (M, t) \models^- \varphi$ does not hold
$(M, t) \models^u \varphi \iff (M, t) \not\models^+ \varphi$ and $(M, t) \not\models^- \varphi$ (i.e., $\varphi$ is undefined at $t$)
e) For a partial temporal model $M$ and a temporal or propositional formula $\varphi$, by $M \models^+ \varphi$ we mean $(M, t) \models^+ \varphi$ for all $t \in \mathbb{N}$. We call $M$ a *temporal model of $K$*, denoted $M \models^+ K$, if $M \models^+ \varphi$ for all $\varphi \in K$, where $K$ is a set of temporal or propositional formulae. A model $M$ of $K$ is called a *minimal model* of $K$ if for any model $M'$ of $K$ with $M' \leq M$ it holds $M' = M$.
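The Strong Kleene clauses of Definition 2.6c can be sketched as a small evaluator on a single information state, returning `1`, `0` or `"u"`. The encoding of formulas as nested tuples is our own illustrative choice.

```python
U = "u"  # the third truth value: unknown

def ev(M, f):
    """Strong Kleene value of formula f in partial model M (Definition 2.6).

    Formulas: an atom is a string; compound formulas are
    ("not", f), ("and", f, g), ("imp", f, g).
    """
    if isinstance(f, str):                       # propositional atom
        return M.get(f, U)
    op = f[0]
    if op == "not":                              # |=+ ~f iff |=- f, and vice versa
        return {1: 0, 0: 1, U: U}[ev(M, f[1])]
    a, b = ev(M, f[1]), ev(M, f[2])
    if op == "and":                              # false if a conjunct is false
        return 0 if 0 in (a, b) else (1 if a == b == 1 else U)
    if op == "imp":                              # true iff |=- f or |=+ g
        return 1 if a == 0 or b == 1 else (0 if a == 1 and b == 0 else U)
    raise ValueError(op)

M = {"p": 1, "q": U}
assert ev(M, ("and", "p", "q")) == U   # unknown conjunct: value undefined
assert ev(M, ("imp", "q", "p")) == 1   # true consequent makes it true
```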
The temporal approach provides declarative semantics for systems that behave dynamically, essentially since time has been put into the domain of consideration in an explicit manner: one reasons both on world states and on the time points at which they occur. This means that non-conservative changes in truth values of a statement $b$ referring to a changing world state are accounted for by considering the statement in fact as two (or more) statements; one $(t, b)$ referring to one time point $t$, and another one $(s, b)$ referring to another time point $s$. The truth values of these two statements do not change; e.g., it will always remain true that at time point $t$ the statement $b$ holds. Thus a dynamic system is described in a declarative manner. Its set of intended models can be constructed in the temporal sense described above. One specific behaviour of the system corresponds to one of these temporal models. We will work out this general idea for the case of a meta-level architecture. More details on temporal logic can be found in [2], [20].
3 Static and dynamic view on the object-level reasoning
In this section we use the notion of a partial model to formalize the information state of the object-level reasoning component at a certain moment. A transition of one information state to another one can be formally described by a mapping between the partial models specifying the information states. In a reasoning process such a transition is induced by a reasoning step where a knowledge unit $K$ (e.g., a set of rules or a single rule) is used to derive some additional conclusions. The dynamic interpretation of such a knowledge unit $K$ can be defined as the mapping in the set of all relevant partial models induced by $K$.
Note that information states are defined in terms of literals. This implies that in principle only literal conclusions count in inferences. Therefore we can take advantage of the fact that inference relations, restricted to literal conclusions, that are sound with respect to the classical Tarski semantics are also sound with respect to the partial Strong Kleene semantics (and vice versa), as has been established in [29] (cf. Theorem 2.3, p. 464). In the sequel by $\vdash$ we will denote any sound inference relation that is not necessarily complete (e.g. one of: natural deduction, chaining, full resolution, SLD-resolution, unit resolution, etc.).
3.1 The static view on the object-level reasoning
In this subsection we define the underlying language, logical theory and inference relation of the object-level component. Moreover we define the notions of deductive and semantic closure.
**Definition 3.1 (static view on the object-level component)**
The static view on the object-level reasoning component is a tuple
$$\langle \Sigma_o, OT, \vdash \rangle$$
with
- $\Sigma_o$ a signature, called the object-signature
- $OT$ a set of propositional ground formulae expressed in terms of the object-signature
- $\vdash$ a classical inference relation (assumed sound but not necessarily complete)
Notice that a literal formula is true in a partial model $M$ if and only if according to the classical semantics the formula is true in every complete refinement of $M$.
**Definition 3.2 (deductive and semantic closure)**
Let $K$ be a set of propositional formulae of signature $\Sigma$ and $\vdash$ a (sound) inference relation or (semantic) entailment relation.
a) For $M \in IS_K(\Sigma)$ we define the partial model $\text{cl}_K^{\vdash}(M)$ by
\[ \text{cl}_K^{\vdash}(M) \models^+ L \iff K \cup \text{Lit}(M) \vdash L \]
for any literal \( L \). This model is called the *closure* of \( M \) under \( K \) (with respect to \( \vdash \)). We call \( M \) *closed* under \( K \) (with respect to \( \vdash \)) if \( M = \text{cl}_K^{\vdash}(M) \), or, equivalently, if
\[ M \models^+ L \iff K \cup \text{Lit}(M) \vdash L. \]
b) If \( \vdash \) is an inference relation, we denote \( \text{cl}_K^{\vdash}(M) \) by \( \text{dc}_K^{\vdash}(M) \) and call it the *deductive closure of* \( M \) under \( K \) (with respect to \( \vdash \)). We call \( M \) *deductively closed* under \( K \) if
\[ K \cup \text{Lit}(M) \vdash L \iff M \models^+ L \]
i.e., if it is its own deductive closure under \( K \).
c) For the classical semantic consequence relation \( \models \) (based on complete models) we denote \( \text{cl}_K^{\models}(M) \) by \( \text{sc}_K(M) \) and call it the *semantic closure of* \( M \). We call \( M \) *semantically closed* under \( K \) if
\[ K \cup \text{Lit}(M) \models L \iff M \models^+ L \]
i.e., if it is its own semantic closure under \( K \).
**Definition 3.3 (conservation, monotonicity, idempotency)**
Let \( K \) be a set of propositional formulae of signature \( \Sigma \).
We call the mapping \( \alpha : IS_K(\Sigma) \rightarrow IS_K(\Sigma) \):
(i) *conservative* if \( M \leq \alpha(M) \) for all \( M \in IS_K(\Sigma) \)
(ii) *monotonic* if \( \alpha(M) \leq \alpha(N) \) for all \( M, N \in IS_K(\Sigma) \) with \( M \leq N \)
(iii) *idempotent* if \( \alpha(\alpha(M)) = \alpha(M) \) for all \( M \in IS_K(\Sigma) \)
For more properties of this type of functionality mapping, see [34].
**Proposition 3.4**
Let \( K \) be a set of propositional formulae of signature \( \Sigma \) and \( \vdash \) a (sound) inference relation or the semantic consequence relation.
Then the mapping \( \text{cl}_K^{\vdash} : IS_K(\Sigma) \rightarrow IS_K(\Sigma) \) is conservative, monotonic and idempotent.
Moreover, for any \( M \in IS_K(\Sigma) \) and any model \( N \) of \( K \) that is a complete refinement of \( M \) it holds \( \text{cl}_K^{\vdash}(M) \leq N \). In particular this holds for the semantic closure mapping.
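For a toy inference relation the closure of Definition 3.2 and the properties of Proposition 3.4 can be sketched directly. Here $\vdash$ is exhaustive forward chaining over rules whose premises and conclusions are literals; the encoding and the name `closure` are our own illustrative assumptions, not the paper's.

```python
U = "u"  # the third truth value: unknown

def closure(K, M):
    """cl_K(M): refine M with every literal derivable from K and Lit(M).

    K is a list of rules (premises, conclusion); a premise or conclusion
    is a pair (atom, value) with value 1 (positive) or 0 (negative literal).
    """
    M = dict(M)
    changed = True
    while changed:
        changed = False
        for premises, (atom, value) in K:
            if all(M.get(a) == v for a, v in premises) and M.get(atom) == U:
                M[atom] = value
                changed = True
    return M

# K encodes the rules p -> q and q -> r.
K = [([("p", 1)], ("q", 1)), ([("q", 1)], ("r", 1))]
M = {"p": 1, "q": U, "r": U}
C1 = closure(K, M)
assert C1 == {"p": 1, "q": 1, "r": 1}                    # exhaustive closure
assert closure(K, C1) == C1                              # idempotent
assert all(C1[a] == v or v == U for a, v in M.items())   # conservative: M <= cl(M)
```

Monotonicity holds as well: refining the input `M` can only enable more rule applications, never disable one.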
### 3.2 Object level reasoning traces and controlled inference functions
In Subsection 3.1 we have assumed that the deduction is exhaustive with respect to the specific set \( K \); this is not a realistic assumption. In practice often only some of the inferences that are possible are applied, depending on additional control information.
However, the full deductive closure always gives an upper bound: if control is involved leading to non-exhaustive reasoning, the actual outcome is a model $M'$ with
$$M \leq M' \leq \text{dc}_K^{\vdash}(M) \leq \text{sc}_K(M)$$
In this paper we will assume that controlled inference is deterministic, depending on an assignment of values to some set of control parameters. In that case controlled inference can be described as follows.
**Definition 3.5 (controlled inference function)**
Suppose $K$ is a set of formulae of signature $\Sigma$ and $\vdash$ is a (sound) inference relation or the (semantic) entailment relation. The mapping $\alpha : IS_K(\Sigma) \rightarrow IS_K(\Sigma)$ is called a *controlled inference function* for $K$ based on $\vdash$ if it is conservative and monotonic and for all $M \in IS_K(\Sigma)$ it holds
$$\alpha(M) \leq \text{cl}_K^{\vdash}(M).$$
Notice that we do not require that a controlled inference function is idempotent. If reasoning is not exhaustive idempotency is often lost. Controlled inference functions can be viewed as functions $\alpha_{K,N}$ where instead of a general entailment relation $\vdash$ a variant is used that is parameterized by certain control information $N$. Two examples of control parameters and the corresponding inference functions are:
- information about which atoms are the *goals* for the reasoning
In this case the control information $N$ expresses that the conclusions should be restricted to what is already available and to the set of goal atoms $G$; i.e.,
$$\alpha_{K,N}(M)(a) = \begin{cases}
M(a) & \text{if } M(a) \neq u \\
(\text{cl}_K^{\vdash}(M)|G)(a) & \text{otherwise}
\end{cases}$$
- information about the *selection of elements of the knowledge base* to be used
Here the control information $N$ expresses that only formulae of a subset $K'$ of the theory $K$ can be used in the reasoning, i.e.,
$$\alpha_{K,N}(M) = \text{cl}_{K'}^{\vdash}(M) \leq \text{cl}_K^{\vdash}(M)$$
Notice that these examples of control apply not only to the case of an inference relation but also to the semantic consequence relation. In this sense control can be defined in a semantic (inference relation independent) manner.
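The first example of a controlled inference function (goal restriction) can be sketched as follows; conclusions are kept only for atoms that are already known or belong to the goal set $G$. The forward-chaining `closure` used as a stand-in for $\text{cl}_K^{\vdash}$ and all other names are illustrative assumptions.

```python
U = "u"  # the third truth value: unknown

def closure(K, M):
    """Toy cl_K(M): exhaustive forward chaining over literal rules."""
    M = dict(M)
    changed = True
    while changed:
        changed = False
        for premises, (atom, value) in K:
            if all(M.get(a) == v for a, v in premises) and M.get(atom) == U:
                M[atom] = value
                changed = True
    return M

def goal_restricted(K, G, M):
    """alpha_{K,N}(M): keep known values; fill in only goal atoms from cl_K(M)."""
    full = closure(K, M)
    return {a: (v if v != U else (full[a] if a in G else U))
            for a, v in M.items()}

# K encodes p -> q and p -> r; only q is a goal.
K = [([("p", 1)], ("q", 1)), ([("p", 1)], ("r", 1))]
M = {"p": 1, "q": U, "r": U}
out = goal_restricted(K, {"q"}, M)
assert out == {"p": 1, "q": 1, "r": U}  # r is derivable but not a goal
```

The function is conservative and monotonic, and its result stays below the full closure, as Definition 3.5 requires.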
In a meta-level architecture the control information $N$ is determined by the meta-level reasoning. What is needed to formalize a meta-level architecture is a formalization of this control information on the right level of abstraction; i.e., in such a manner that it can be subject of a (meta-level) inference process. We will come back to this point in Section 4.
**Definition 3.6 (object-reasoning trace)**
Let \( \langle \Sigma_o, OT, \vdash \rangle \) be a static view on the object-level reasoning component, where \( \vdash \) is a sound inference relation. A partial temporal model \( (M_t)_{t \in N} \) is called an *(object-reasoning) trace* for \( \langle \Sigma_o, OT, \vdash \rangle \) if for all \( s, t \in N \) with \( s < t \) it holds
\[
M_s \leq M_t \leq \text{dc}_{OT}^{\vdash}(M_s).
\]
**Theorem 3.7 (approximation of an intended model: soundness)**
Let \( \langle \Sigma_o, OT, \vdash \rangle \) be a static view on the object-level reasoning component and \( N \) a model of \( OT \) (the intended model).
a) If \( (M_t)_{t \in N} \) is a trace for \( \langle \Sigma_o, OT, \vdash \rangle \) with root \( r \) and \( M_r \leq N \) then for all \( t \in N \) it holds \( M_t \leq N \), i.e.:
\[
M_r \leq ... \leq M_t \leq ... \leq N
\]
b) Let for any \( t \in N \) a controlled inference function \( \alpha_t : IS_{OT}(\Sigma_o) \rightarrow IS_{OT}(\Sigma_o) \) for \( OT \) be given.
Then for any starting point \( M_0 \in IS_{OT}(\Sigma_o) \) a trace \( (M_t)_{t \in N} \) for \( \langle \Sigma_o, OT, \vdash \rangle \) can be defined by:
\[
M_{t+1} = \alpha_t(M_t) \quad \text{for all } t \in N
\]
Given the formal framework as set up here, the proof of this theorem is not difficult. The above results show a direct connection between the semantics on the basis of partial models (as used here) and the classical Tarski semantics. In our terms this connection can be stated as follows. Reasoning of the object-level component is always on one specific, intended (complete) model that is a (Tarski) model of the knowledge base. An information state is a partial description of this intended model: a partial model with the intended model as one of its complete refinements. During reasoning this partial description is (step-wise) refined, but (in sound reasoning processes) always remains within the intended model. In our approach reasoning can be viewed as constructing a partial model, approximating the intended complete model better and better by refinement steps. This even holds if additional observations are allowed, based on the intended model (this point is left out of the current paper). Since at any moment in time the intended complete model is not known, in principle we have to take into account all complete refinements of the current information state that are models of the knowledge base. Thus the approach discussed here relates *static semantics* and *dynamic semantics* to each other in one formal framework.
### 3.3 Control information and dynamic view on object-level reasoning
In this section we will introduce a formalization of control aspects of the object-level reasoning. The intended model of the object-level component is (a formal representation of) a specific world situation. As the meta-level component reasons about the reasoning process of the object-level component, the intended model of this is a formal description of (relevant aspects of) the inference process of the object-level component. Considered from the viewpoint of the meta-level component, the object-level component can be considered as some exotic world situation with as a crucial
characteristic that it is dynamic: each time the meta-level component starts a new reasoning session, its associated world situation may have changed. Note that we assume that object-level and meta-level reasoning processes are alternating: during the meta-level reasoning the object-level component is not reasoning, so changes of the object-level state occur only between the reasoning sessions of the meta-level component.
This observation leads us to introduce a control signature that defines at an abstract level a number of descriptors that can be used to characterize the control and process states of the object-level reasoning: a lexicon in terms of which all relevant control information can be expressed. A truth assignment to the ground atoms of such a meta-signature is called a control-information state. Such a control-information state can serve as a (partial) model for the meta-level component. The question of what are the semantics of the meta-level component is equivalent to the question of what is described by the control-information state related to an object-level component. We illustrate this idea by some examples (for a more specific example, see Section 6):
- the fact that the object-level statement \( h \) is (currently) considered a goal for the reasoning process; e.g., expressed by the (ground) control-atom \( \text{goal}(h) \) where \( h \) is the name of an atom in the object-level language;
- a selection or priority of object-level knowledge elements to be used; e.g., expressed by the (ground) control-atom \( \text{rule\_priority}(r) \), where \( r \) is the name of a rule in the object-level knowledge base, or \( \text{goal\_priority}(h, 0.9) \), with \( h \) as above;
- the degree of exhaustiveness of the reasoning; e.g., expressed by \( \text{exhaustiveness(any)} \), meaning that it is enough to determine only one of the current goals (the one with highest possible priority).
A control-information state formalizes at a high level of abstraction the parameter \( N \) in a controlled inference function as introduced earlier in Section 3. We assume that the control-information state specifies all information relevant to the control of the (future) reasoning behaviour; i.e., the object-information state and the control-information state together determine in a deterministic manner the behaviour of the object-level reasoning component during its next activation. Of course it depends on the specific inference procedure that is used which control aspects can be influenced and which aspects cannot.
In principle for execution we would expect that all atoms of the control signature have a truth value assigned to them (i.e. the control-information state is a complete model). However, we allow partial control-information states as well.
**Definition 3.8 (dynamic view on the object-level component)**
A dynamic view related to the static view \( \langle \Sigma_o, OT, \vdash \rangle \) on an object-level component is a tuple \( \langle \Sigma_c, \mu_{OT}^{\vdash}, \nu_{OT}^{\vdash} \rangle \) with \( \Sigma_c \) a signature called the *control signature* and \( \mu_{OT}^{\vdash}, \nu_{OT}^{\vdash} \) mappings
\[
\mu_{OT}^{\vdash} : IS(\Sigma_o) \times IS(\Sigma_c) \rightarrow IS(\Sigma_o)
\]
\[
\nu_{OT}^{\vdash} : IS(\Sigma_o) \times IS(\Sigma_c) \rightarrow IS(\Sigma_c)
\]
We call $\mu_{OT}^{\vdash}$ the (controlled) *inference function* for the object-level, and $\nu_{OT}^{\vdash}$ the *process state update function*. For any $N \in IS(\Sigma_c)$ the mapping
$$\mu_{OT}^N : IS(\Sigma_o) \rightarrow IS(\Sigma_o)$$
is defined by
$$\mu_{OT}^N : M \mapsto \mu_{OT}^{\vdash}(M, N)$$
We assume that for any $N \in IS(\Sigma_c)$ this $\mu_{OT}^N$ is a controlled inference function, i.e., it is conservative and monotonic and satisfies
$$\mu_{OT}^N(M) \leq \text{dc}_{OT}^{\vdash}(M) \text{ for all } M \in IS(\Sigma_o).$$
When no confusion is expected, we will leave out the subscript and superscript of $\mu_{OT}^{\vdash}$ and $\nu_{OT}^{\vdash}$ and write simply $\mu$ and $\nu$.
In a control signature sometimes reference will be made to (names of) elements of the language based on the object-level signature. On the other hand, also control-atoms are possible that do not refer to specific object-level language elements (e.g., exhaustiveness). We do not prescribe in a generic manner if and how reference is made to object-level language elements. In examples this will always be determined in a more specific manner.
The process state update function expresses what the process brings about with respect to the process state descriptors. Examples: an object-atom was unknown, but becomes known during the reasoning; an object-atom that was a goal has failed to be found.
The functions $\mu$ and $\nu$ for partial control-information states can be defined from the values of the functions for complete control-information states as follows:
$$\mu(M, N) = \text{gci} \{\mu(M, N') \mid N \leq N' \text{ and } N' \text{ complete} \}$$
$$\nu(M, N) = \text{gci} \{\nu(M, N') \mid N \leq N' \text{ and } N' \text{ complete} \}$$
where the greatest common information state gci(S) of a set S of information states is defined by
$$\text{gci}(S)(a) = \begin{cases} 1 & \text{if for all } M \in S \text{ it holds } M(a) = 1 \\ 0 & \text{if for all } M \in S \text{ it holds } M(a) = 0 \\ u & \text{otherwise} \end{cases}$$
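The greatest common information state can be sketched directly from the formula above; the encoding of information states as dicts is our own illustrative assumption.

```python
U = "u"  # the third truth value: unknown

def gci(states):
    """gci(S): keep an atom's value only if all states in S agree on it."""
    atoms = set().union(*(s.keys() for s in states))
    out = {}
    for a in atoms:
        vals = {s.get(a, U) for s in states}
        out[a] = vals.pop() if len(vals) == 1 and U not in vals else U
    return out

# The states agree on a but disagree on b, so b becomes unknown.
assert gci([{"a": 1, "b": 0}, {"a": 1, "b": 1}]) == {"a": 1, "b": U}
```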
A (combined) *information state* is a pair $(M, N)$ where $M$ is an object-information state and $N$ a control-information state. A (combined) *trace* is a sequence of (combined) information states. An object-level execution step on the basis of a combined information state $(M_t, N_t)$ provides the successor information state defined by:
$$(M_{t+1}, N_{t+1}) = (\mu(M_t, N_t), \nu(M_t, N_t))$$
A combined reasoning trace can be obtained by alternating object-level execution steps and interaction steps between the two levels to obtain new control-information states $N$. We will work this out in more detail in Section 5.
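The execution step above can be iterated into a combined trace. In this sketch `mu` and `nu` are toy stand-ins for the controlled inference function and the process state update function; all names and the toy goal-driven behaviour are illustrative assumptions.

```python
U = "u"  # the third truth value: unknown

def run(mu, nu, M0, N0, steps):
    """Produce the combined trace ((M_t, N_t))_t by iterating the execution step."""
    trace = [(M0, N0)]
    for _ in range(steps):
        M, N = trace[-1]
        trace.append((mu(M, N), nu(M, N)))
    return trace

# Toy example: the control state names a goal atom; mu derives exactly that goal,
# and nu records that the goal has become known.
def mu(M, N):
    goal = N.get("goal")
    return {**M, goal: 1} if goal in M and M[goal] == U else M

def nu(M, N):
    return {**N, "known": N.get("goal")} if N.get("goal") else N

trace = run(mu, nu, {"a": U, "b": U}, {"goal": "a"}, 2)
assert trace[1][0] == {"a": 1, "b": U}
assert trace[1][1] == {"goal": "a", "known": "a"}
```

Interaction steps between the levels, which replace $N$ by a new control-information state, are added in Section 5.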
4 Temporal interpretation of the meta-level reasoning
Locally, at each of the two reasoning levels, the system behaves conservatively and monotonically, but the whole cycle implies non-conservative changes of information states: the actions induced by the upward and downward reflections are not conservative. To describe this we can label the information states with an explicit time parameter (e.g., expressed by natural numbers). The non-conservatism can be covered by our declarative formal model, assuming each new (object-meta) cycle is labeled with the next (successor) time label. This approach implies that the meta-level reasoning component has semantics that relate states of the object-level reasoning component at time $t$ to states of this component at time $t+1$. In this manner, statements like
“if the atom $a$ is unknown, then the atom $b$ is proposed as a goal”
after downward reflection can be interpreted in a temporal manner:
“If at time $t$ the atom $a$ is unknown (in the object-level reasoning component)
then at time $t+1$ the atom $b$ is a goal” (for the object-level reasoning process)
Assuming that the meta-level’s proposals are always accepted (this assumption is sometimes called causal connection), downward reflection is just a shift in time, replacing the goals at the object-level by new goals (the ones proposed by the meta-level). This can be expressed as follows
$\neg \text{known}(a) \rightarrow \text{proposed\_goal}(b)$ (meta-knowledge)
$C(\text{proposed\_goal}(b)) \rightarrow X(\text{goal}(b))$ (downward reflection)
where $C$ means “holds in the current state” and $X$ “holds in the next state”.
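Under the causal connection assumption, the downward reflection pattern $C(\text{proposed\_goal}(b)) \rightarrow X(\text{goal}(b))$ can be checked mechanically over a trace of control states. This is a minimal sketch; the dict encoding and the function name are illustrative assumptions.

```python
def downward_reflection_holds(trace):
    """True iff every goal proposed at time t is a goal at time t+1."""
    return all(
        trace[t + 1].get("goal") == trace[t]["proposed_goal"]
        for t in range(len(trace) - 1)
        if "proposed_goal" in trace[t]
    )

# The proposal at time 0 is accepted as the goal at time 1.
trace = [{"proposed_goal": "b"}, {"goal": "b"}]
assert downward_reflection_holds(trace)
assert not downward_reflection_holds([{"proposed_goal": "b"}, {"goal": "c"}])
```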
Within the meta-information involved in the meta-reasoning we distinguish two special types: a) information on relevant aspects of the current (control-)state of the object-level reasoning process (possibly also including facts inherited from the past), and b) information on proposals for control parameters that are meant to guide the object-level reasoning process in the near future. Therefore we assume that in the meta-signature a copy of the control signature of the object-level component is included as a subsignature that refers to the current state. Moreover, we assume that a second copy of this control signature is included referring to the proposed truth values for the next state of the object-level reasoning process. For example, if $\text{goal}(h)$ is an atom of the control signature, then there are copies $\text{current\_goal}(h)$ and
$\text{proposed\_goal}(h)$ in the set of atoms of the meta-signature. A syntactic function transforming a control-atom into its current variant and its proposed variant can simply be defined by two (injective) mappings $c$ and $p$ of predicates, leaving the arguments unchanged, e.g.,
$c(\text{goal}) = \text{current\_goal}, \quad p(\text{goal}) = \text{proposed\_goal}.$
We assume that the reasoning of the meta-level itself has no sophisticated control: for simplicity we assume that it concerns taking deductive closures with respect to the inference relation used at the meta-level. Under this assumption a dynamic view on the meta-level component is completely determined by a static view.
**Definition 4.1 (static and dynamic view on the meta-level component)**
a) The signature \( \Sigma_m \) is called a *meta-signature* related to \( \Sigma_c \) if there are two injective mappings \( c : \Sigma_c \rightarrow \Sigma_m \) and \( p : \Sigma_c \rightarrow \Sigma_m \). In this case the subsignatures \( c(\Sigma_c) \) and \( p(\Sigma_c) \) are denoted by \( \Sigma_m^c \) and \( \Sigma_m^p \); they are referring to the *current state control-information* of the object-level and the *proposed state control-information* for the object-level.
b) The *static view on the meta-level component* is a tuple
\[
(\Sigma_m, MT, \vdash_m)
\]
with
\( \Sigma_m \) a signature, called the meta-signature related to \( \Sigma_c \)
MT a set of propositional ground formulae expressed in terms of the meta-signature
\( \vdash_m \) a classical inference relation (assumed sound but not necessarily complete)
c) The *inference function of the meta-level* \( \mu_{MT,m} \) (or shortly \( \mu^* \))
\[
\mu_{MT,m} : IS_{MT}(\Sigma_m^c) \rightarrow IS_{MT}(\Sigma_m)
\]
is defined by the exhaustive inference function based on \( \vdash_m \); i.e., by the transition function
\[
\mu_{MT,m} : N \mapsto dc_{MT,\vdash_m}(N)
\]
This function \( \mu^* \) defines the *dynamic view on the meta-level component*, related to the static view
\[
(\Sigma_m, MT, \vdash_m).
\]
Note that we essentially use propositional logic to describe the meta-language; if needed a propositional signature can be defined based on the set of all ground atoms expressible in a given (many-sorted) predicate logic signature. In fact it does not matter how language elements at the meta-level are denoted, but how their semantics is defined (in terms of the controlled inference function).
5 Temporal models of overall reasoning patterns
After having introduced the required concepts in the previous sections, in this section it will turn out to be easy to compose them to semantics for the dynamics of a meta-level architecture. The information states of the meta-level component of the reasoning system will have a direct impact on the control-information state of the object-component, and vice versa. These connections will be defined formally in this section. Notice that in our approach the object-level component and the meta-level component do not reason at the same time, but are alternating.
In fact the following four types of actions take place (see Fig. 2). For a formal description, see Definitions 5.1 and 5.2 below.
- **object-level reasoning**
The reasoning of the object-level component can be described by the functions $\mu$ and $\nu$ as defined in Definition 3.8.
- **upward reflection**
Information from the control-information state of the object-level component is transformed (by a transformation function $\alpha_{\text{up}}$ defined in Definition 5.1 below) to the next information state of the meta-level component. This will provide input information for the subsequent reasoning of the meta-level component (see Definition 4.1).
- **meta-level reasoning**
The reasoning of the meta-level component can be described by the inference function $\mu^*$ as defined in Definition 4.1.
- **downward reflection**
Information from the meta-level component is transformed (by the mapping $\alpha_{\text{down}}$; see Definition 5.1 below) to the next control-information state to be used in the control of the object-level component. This will affect the reasoning behaviour during the subsequent object-level reasoning.
**Definition 5.1 (meta-level architecture for dynamic control)**
a) A *meta-level architecture for dynamic control* is described by a tuple
$$\text{MLC} = \langle \langle \Sigma_o, OT, \vdash_o \rangle; \langle \Sigma_c, \mu, \nu \rangle; \langle \Sigma_m, MT, \vdash_m \rangle; \langle c, p \rangle \rangle$$
where
$$\langle \Sigma_o, OT, \vdash_o \rangle$$
$$\langle \Sigma_c, \mu, \nu \rangle$$
are a static and a dynamic view on the object-level component,
$$\langle \Sigma_m, MT, \vdash_m \rangle$$
is a static view on the meta-level component, where $\Sigma_m$ is related to the control signature $\Sigma_c$ by the injective functions $c : \Sigma_c \rightarrow \Sigma_m$ and $p : \Sigma_c \rightarrow \Sigma_m$ and $\vdash_m$ is an inference relation. Moreover, $MT$ is a meta-knowledge base satisfying
$$IS_{MT}(\Sigma_m^c) = IS(\Sigma_m^c)$$
i.e., no information state in $IS(\Sigma_m^c)$ is inconsistent with $MT$.
Based on MLC we can define the function $\mu^*$ according to Definition 4.1.
b) Let MLC be as in a). The *upward reflection function* is the mapping
$$\alpha_{\text{up}} : IS(\Sigma_c) \rightarrow IS(\Sigma_m)$$
defined for $N \in IS(\Sigma_c)$ and $b \in At(\Sigma_m)$ by
$$\alpha_{\text{up}}(N)(b) = \begin{cases} N(a) & \text{if } b = c(a) \text{ for some } a \in At(\Sigma_c) \\ u & \text{otherwise} \end{cases}$$
The *(left) inverse upward reflection function* $\beta$ is the mapping
$$\beta : IS(\Sigma_m) \rightarrow IS(\Sigma_c)$$
defined for $N \in IS(\Sigma_m)$ and $a \in At(\Sigma_c)$ by
$$\beta(N)(a) = N(c(a))$$
The *downward reflection function* is the mapping
$$\alpha_{\text{down}} : IS(\Sigma_m) \rightarrow IS(\Sigma_c)$$
defined for $N \in IS(\Sigma_m)$ and $a \in At(\Sigma_c)$ by
$$\alpha_{\text{down}}(N)(a) = \begin{cases} 1 & \text{if } N(p(a)) = 1 \\ 0 & \text{otherwise} \end{cases}$$
The *time shift function* is the mapping
$$\sigma : IS(\Sigma_m) \rightarrow IS(\Sigma_m)$$
defined by
\[
\sigma(N)(b) =
\begin{cases}
N(p(a)) & \text{if } b = c(a) \text{ for some } a \in At(\Sigma_c) \text{ and } N(p(a)) \neq u \\
0 & \text{if } b = c(a) \text{ for some } a \in At(\Sigma_c) \text{ and } N(p(a)) = u \\
u & \text{otherwise}
\end{cases}
\]
Reasoning activities are modifying object-information states in a conservative manner (making refinements). Notice, however, that execution of upward and downward reflection may induce non-conservative changes. Notice that we force the new control state resulting from downward reflection to be two-valued. This is to avoid nondeterministic phenomena and to allow that the meta-level only provides the relevant (partial) information on control. In the rest of the paper \( MLC \) will denote a tuple as defined in Definition 5.1.
The following relations hold for the functions defined above:
\[
\beta \circ \alpha_{\text{up}} = \text{id}, \quad \alpha_{\text{down}} = \beta \circ \sigma, \quad \sigma = \alpha_{\text{up}} \circ \alpha_{\text{down}}.
\]
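These reflection functions and the relations between them can be sketched concretely. The following is a Python sketch under the assumption that information states are dicts from atoms to truth values 0/1 (an absent atom means the undefined value $u$) and that meta-atoms are tagged pairs; all names are illustrative:

```python
U = "u"  # the undefined truth value of the three-valued (partial) semantics

def alpha_up(N):
    """Upward reflection: copy the control state into current (c-tagged) meta-atoms."""
    return {("c", a): v for a, v in N.items() if v != U}

def beta(N):
    """(Left) inverse upward reflection: read back the c-tagged copies."""
    return {a: v for (tag, a), v in N.items() if tag == "c"}

def alpha_down(N):
    """Downward reflection: proposed atoms with value 1 become true, all else false."""
    atoms = {a for (_tag, a) in N}
    return {a: 1 if N.get(("p", a), U) == 1 else 0 for a in atoms}

def sigma(N):
    """Time shift: proposed values become current values (u is mapped to 0)."""
    atoms = {a for (_tag, a) in N}
    return {("c", a): (N[("p", a)] if N.get(("p", a), U) != U else 0)
            for a in atoms}

# Check the relations: beta o alpha_up = id and alpha_down = beta o sigma.
N_c = {"goal_h1": 1, "known_h2": 0}
assert beta(alpha_up(N_c)) == N_c
N_m = {("c", "goal_h1"): 1, ("p", "goal_h1"): 0, ("p", "goal_h2"): 1}
assert alpha_down(N_m) == beta(sigma(N_m))
```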
**Definition 5.2 (overall semantics based on traces)**
a) An overall trace for the meta-level architecture \( MLC \) is a labeled linear partial temporal model
\[
(M_t \oplus N_t)_{t \in N}
\]
(also denoted by \( M \oplus N \)) over \( IS(\Sigma_o \oplus \Sigma_m) \) with set of labels \( L = \{re, sh\} \) (\( re \) for a reasoning step, \( sh \) for a time shift step) satisfying the following conditions for all \( s, t \in N \):
(i) If \( s \leq_{re} t \)
\[
\begin{align*}
M_t &= \mu(M_s, \beta(N_s)) \\
N_t &= \mu^{\ast}(\alpha_{\text{up}}(\nu(M_s, \beta(N_s))))
\end{align*}
\]
(ii) If \( s \leq_{sh} t \)
\[
\begin{align*}
M_t &= M_s \\
N_t &= \mu^{\ast}(\sigma(N_s))
\end{align*}
\]
b) The (intended) semantics of the meta-level architecture \( MLC \) is the set of traces as defined in a), denoted by \( \text{Traces}(MLC) \).
c) A trace is called alternating if for all \( r, s, t \in N \) and \( i, j \in L \) with \( r \leq_i s \leq_j t \) it holds \( i \neq j \).
Although usually we are most interested in alternating traces, there may be cases where we are interested in other traces as well: e.g. if we allow multiple activations of the object-level without intervention of the meta-level.
Temporal models can be defined using our framework by traces of information states. These traces are constructed during reasoning. At each moment of time only a partial (in time) fragment of such a trace-model has been constructed. The set of completed traces can be viewed as the set of intended overall models of the meta-level architecture. The meta-level architecture as a whole approximates an intended model in a conservative manner by subsequently adding elements to the trace according to time steps. This view will be made more precise in the following theorem.
**Theorem 5.3 (approximation of a trace)**
Let \( \text{MLC} \) be a meta-level architecture for dynamic control.
a) The set of alternating traces of \( \text{MLC} \) is parameterized by the initial states, together with the label (from \( \{\text{re}, \text{sh}\} \)) of the initial transition.
b) Let \( M \oplus N \) be a trace for \( \text{MLC} \). Define for any time point \( t \)
\[
M^{(t)}_s(a) =
\begin{cases}
M_s(a) & \text{if } s \leq t \\
u & \text{otherwise}
\end{cases}
\qquad
N^{(t)}_s(a) =
\begin{cases}
N_s(a) & \text{if } s \leq t \\
u & \text{otherwise}
\end{cases}
\]
Then for all time points \( t \) it holds
\[
M^{(0)} \oplus N^{(0)} \leq \ldots \leq M^{(t)} \oplus N^{(t)} \leq M^{(t+1)} \oplus N^{(t+1)} \leq \ldots \leq M \oplus N
\]
Notice that in this section we do not (yet) add temporal elements to the languages of the reasoning components themselves, but we attribute temporal semantics to the whole system by interpreting the object reasoning process and the downward reflection in a temporal manner. This means that within each of the components (locally) we retain our original (non-temporal) semantics. The temporal semantics only serves as a foundation for the composition principle to define an overall semantics composed from the local semantics of each of the components at the two levels.
### 6 An example reasoning pattern
To illustrate the concepts introduced here we will give a trace of a meta-level architecture for reasoning with dynamic hypotheses that are used as goals. The meta-level reasoning performs hypothesis selection whereas the object-level reasoning performs testing of hypotheses by trying to derive them from observation information (in a goal-directed manner). Control is needed to direct the object-level reasoning to the goal that is to be posed: the selected hypothesis. The meta-level contains declarative knowledge on which hypothesis to select under which circumstance (state of the object-level reasoning process). The downward reflection transforms this information about the selected hypothesis to control-information in the form of a goal set for the object-level reasoning; this enables the system to effectuate control. The upward reflection provides the information for the meta-level component on the current state of what is already known and what is not yet known in the object-level reasoning process. The knowledge in this example system is not realistic, but it enables one to get an impression of the reasoning pattern.
A. Static view on the object-level reasoning component
Object-signature (propositional):
\[ \Sigma_o = \langle s_1, s_2, s_3, h_1, h_2 \rangle \]
Object theory (knowledge base of the object-level component) OT:
\[
\begin{align*}
s_2 \land s_3 & \rightarrow h_1 \\
\neg s_3 \land s_1 & \rightarrow h_2 \\
\neg s_3 & \rightarrow \neg h_1
\end{align*}
\]
Inference relation: \( \vdash_{ch} \) (chaining)
B. Dynamic view on the object-level reasoning component
This reasoning component is used in a goal-directed fashion with chaining as inference relation. We will not involve the possibility to acquire additional information from the outside of the system.
Control-signature
\[ \Sigma_c = \langle \text{true\_s1}, \text{false\_s1}, \text{true\_s2}, \text{false\_s2}, \text{true\_s3}, \text{false\_s3}, \text{known\_h1}, \text{known\_h2}, \text{goal\_h1}, \text{goal\_h2} \rangle \]
Inference function
The dependency of the inference function \( \mu \) on the control-information state is concentrated in information expressed by goal-statements \( \text{goal\_h}_i \) (meaning that \( h_i \) is a goal for the object-level reasoning process). As a formal definition we can take
\[
\mu(M, N)(a) =
\begin{cases}
dc_{OT,\vdash_{ch}}(M)(a) & \text{if } a = h_i \text{ and } N(\text{goal\_h}_i) = 1 \\
M(a) & \text{otherwise}
\end{cases}
\]
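A sketch of such a controlled inference function in Python (the closure function $dc_{OT,\vdash_{ch}}$ is passed in as a parameter, and all names are illustrative):

```python
def mu(M, N, closure, hypotheses=("h1", "h2")):
    """Controlled inference: refine M only at atoms h_i that are current goals.

    M, N are dicts mapping atoms to 0/1 (an absent atom is undefined, u);
    closure(M) returns the deductive closure of M under OT with chaining."""
    closed = closure(M)
    M2 = dict(M)
    for h in hypotheses:
        if N.get("goal_" + h) == 1 and h in closed:
            M2[h] = closed[h]     # adopt the derived truth value for the goal
    return M2

# Usage with a stand-in closure that pretends OT derives not-h1:
closure = lambda M: {**M, "h1": 0}
M = {"s1": 1, "s2": 1, "s3": 0}
assert mu(M, {"goal_h1": 1}, closure)["h1"] == 0   # goal h1: value derived
assert "h1" not in mu(M, {}, closure)              # no goal: M unchanged
```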
Process state update function
The process state update function \( \nu \) is defined as follows:
- The statement \( \text{known\_h}_i \) gets truth value 1 in the control-information state if in the object-information state \( h_i \) has truth value 1 or 0; it gets truth value 0 otherwise (i.e., if \( h_i \) has truth value u in the object-information state).
- The statement \( \text{true\_s}_i \) has truth value 1 in the control-information state if in the object-information state \( s_i \) has truth value 1; it gets truth value 0 otherwise (i.e., if \( s_i \) has truth value u or 0 in the object-information state).
- The statement \( \text{false\_s}_i \) has truth value 1 in the control-information state if in the object-information state \( s_i \) has truth value 0; it gets truth value 0 otherwise (i.e., if \( s_i \) has truth value u or 1 in the object-information state).
- The other truth values remain unchanged.
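The clauses above can be sketched as follows (Python; states are dicts from atom names to 0/1, with absent atoms undefined; names are illustrative):

```python
def nu(M, N):
    """Process state update: recompute known_/true_/false_ control atoms from M.
    All other control atoms (e.g. goal_h_i) remain unchanged."""
    N2 = dict(N)
    for h in ("h1", "h2"):
        # known_h_i is 1 iff h_i has a determined truth value (1 or 0) in M
        N2["known_" + h] = 1 if M.get(h) in (0, 1) else 0
    for s in ("s1", "s2", "s3"):
        N2["true_" + s] = 1 if M.get(s) == 1 else 0
        N2["false_" + s] = 1 if M.get(s) == 0 else 0
    return N2

# For the initial object state of the example session:
N = nu({"s1": 1, "s2": 1, "s3": 0}, {})
assert N["true_s2"] == 1 and N["false_s3"] == 1 and N["known_h1"] == 0
```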
C. Meta-level reasoning component
The meta-signature is taken as the disjoint union of two copies of the control signature above: `c_at` (currently at), and `p_at` (proposed at), where `at` is an atom of the control signature.
Knowledge base of the meta-level component (MT):
\[
c_{true\_s2} \land \neg c_{known\_h1} \rightarrow p_{goal\_h1}
\]
\[
c_{false\_s3} \land \neg c_{known\_h2} \rightarrow p_{goal\_h2}
\]
The meta-level will use chaining as its inference relation.
The inference function \( \mu^* \) is the deductive closure function under MT.
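Since the meta-level simply takes a deductive closure under MT, $\mu^*$ can be sketched as forward chaining over the two rules above (Python; atom names are illustrative encodings of the meta-atoms):

```python
def mu_star(N):
    """Deductive closure of a meta-information state under MT by chaining.
    N maps meta-atoms to 0/1; absent atoms are undefined (u)."""
    rules = [  # (list of (atom, required truth value), consequent atom)
        ([("c_true_s2", 1), ("c_known_h1", 0)], "p_goal_h1"),
        ([("c_false_s3", 1), ("c_known_h2", 0)], "p_goal_h2"),
    ]
    N2 = dict(N)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if all(N2.get(a) == v for a, v in body) and N2.get(head) != 1:
                N2[head] = 1
                changed = True
    return N2

# State t_0 of the example session yields t_1 with p_goal_h1 derived:
t0 = {"c_true_s2": 1, "c_known_h1": 0, "c_known_h2": 0}
assert mu_star(t0)["p_goal_h1"] == 1
assert "p_goal_h2" not in mu_star(t0)   # c_false_s3 is still undefined
```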
Trace of an example session
In Fig. 3 a session with initial state \( \langle s1, s2, s3 \rangle : \langle 1, 1, 0 \rangle \) is described. Here for convenience partial models are denoted by the list of atomic statements and negations of atomic statements that are true. For the (combined) information states of the object-level component (named \( p_i \)) both the object-information states and the control-information states are depicted (separated by a semicolon). For the meta-level component the meta-information states are depicted (named \( t_i \)). For brevity only some relevant (literal) facts are written in the information states.
```
object-level component (states p_i) / meta-level component (states t_i)
p_0 : [s1,s2,\neg s3];[\neg known_h1,\neg known_h2]
p_1 : [s1,s2,\neg s3];[true_s2,\neg known_h1,\neg known_h2]
t_0 : [c_true_s2,\neg c_known_h1,\neg c_known_h2]
t_1 : [c_true_s2,\neg c_known_h1,\neg c_known_h2,p_goal_h1]
p_2 : [s1,s2,\neg s3];[true_s2,\neg known_h1,\neg known_h2,goal_h1]
p_3 : [s1,s2,\neg s3,\neg h1];[true_s2,false_s3,known_h1,\neg known_h2]
t_2 : [c_true_s2,c_false_s3,c_known_h1,\neg c_known_h2]
t_3 : [c_true_s2,c_false_s3,c_known_h1,\neg c_known_h2,p_goal_h2]
p_4 : [s1,s2,\neg s3,\neg h1];[true_s2,false_s3,known_h1,\neg known_h2,goal_h2]
p_5 : [s1,s2,\neg s3,\neg h1,h2];[true_s1,true_s2,false_s3,known_h1,known_h2]
```
Fig. 3. Trace of an example session
This example shows that it does not matter how language elements at the meta-level are denoted, but how their semantics is defined. Using a propositional language at the meta-level is possible, but the more concise syntactical notation of predicate logic has practical advantages. Therefore, often a predicate logic language is used at the meta-level. For the semantics of the whole reasoning pattern this makes no essential difference.
7 Conclusions
The semantic framework as discussed provides integration of static and dynamic aspects in two different forms. On the one hand we connect the partial semantics as used to describe information states to the standard Tarski semantics: a partial model corresponds to the set of all of its complete refinements. On the other hand the integration between static and dynamic aspects takes place by introducing the notion of an explicit (declarative) control-information state in the object-level component.
Our logical framework has been partly inspired by Weyhrauch's view on the role of partial models (or simulation structures) in meta-level architectures ([37], [19]); see also [32]. What is different in our case is that the partial models may be dynamic. Furthermore, similarities can be found to the approach called dynamic interpretation of natural language (e.g., see [17], [20], [23]). In this approach the dynamic interpretation of a sentence in natural language is defined as an operator that transforms the current information state into a new one where the content of the sentence is included.
With respect to dynamics the type of meta-level architecture covered here is less restricted than sometimes studied in logical approaches, where meta-level predicates are meant to express only static properties of the object-level, e.g., provability, et cetera. We believe that the semantic model as discussed here can help to bridge the gap between the (more restricted) logic-based approaches and (less restricted) procedural approaches to meta-level architectures.
It is not difficult to use our framework to model meta-level reasoning that looks ahead more than one step. One can transfer a part of the information at the meta-level over the time shift and thus connect the reasoning in different activations of the meta-level. The results presented here can also be extended easily to the case of higher meta-levels where also the control of the meta-level is guided in a dynamic manner (in this case a refinement of the time scale can be made).
The type of meta-level architecture addressed by the semantic framework has been implemented and applied in a number of practical applications, often in projects in cooperation with companies (e.g., [6], [7], [8], [9], [18]). The type of meta-level architecture as discussed can be designed and formally specified using our compositional design method DESIRE (DEsign and Specification of Interacting REasoning components; see [5], [26]). By means of DESIRE complex reasoning systems or agents can be designed and specified according to what we call a
compositional architecture: an architecture composed from a number of formally specified reasoning components using formal composition principles (see [5]). In DESIRE various types of reasoning components are covered; e.g., goals can be used to guide the reasoning, and various measures of exhaustiveness can be used for the control of reasoning (for more details, see [5]).
In our current practical applications (using DESIRE) of the framework as described the control-information states are two-valued, i.e., no $u$'s occur as truth values. In other words: at each moment all state descriptors defined by the control signature have a determined truth value. This corresponds to the intuition that, since the meta-level reasoning is about the states of the object-level reasoning process, input information about this can be acquired from the system itself; there is essentially no incompleteness of incoming information. On the other hand the two-valuedness of the control-information states is related to the fact that we require that the control of the object-level reasoning is completely determined by the truth values of the control-atoms, and vice versa. Therefore, if we require complete deterministic specification of the behaviour of the reasoning system, all control-atoms should have determined truth values (and vice versa). If we would allow non-deterministic control (e.g., by only specifying some, but not all aspects of the control), the control-information states may be viewed as essentially three-valued.
Since the notion of a compositional architecture and the design method DESIRE make essential use of the notion of a meta-level architecture, the formal static and dynamic semantics of DESIRE depend on the semantics of meta-level architectures. In the literature not much work is reported on such foundations. As this paper contributes formal semantics of meta-level architectures, it can be used as a basis for a semantics of DESIRE.
Meta-level architectures have been exploited to model nonmonotonic reasoning (see [1], [30], [31]). The type of architecture there is based on temporal epistemic reflection (i.e., dynamic addition and retraction of assumptions by explicit meta-reasoning), and not on meta-level control of the object-level reasoning (which takes place as a non-controlled deductive closure determination); therefore it differs from the type of meta-level architecture considered here. Formal analysis of the semantics of this type of architecture was addressed in [21]. An interesting combination would be if both meta-reasoning on assumptions and meta-reasoning on control of the object reasoning were covered in one (combined) architecture; this has not been addressed yet.
Disjoint from the area of meta-level architectures, a temporal perspective on the semantics of reasoning processes has been very fruitful to obtain semantics, temporal specification languages, and simulation environments for nonmonotonic reasoning processes; cf. [13], [14], [16]. A specific type of nonmonotonic reasoning is based on default logic. In [12], [15], [22] different aspects of semantics and specification of default reasoning processes have been analysed in more depth.
Acknowledgements
This work was partially supported by ESPRIT III Basic Research Action 6156 (DRUMS II). Preliminary work on this paper has been presented at DRUMS II workshops, and at the workshop META'94. The paper has benefited from discussions in these workshops. Guszti Eiben, Joeri Engelfriet and Pieter van Langen have read and commented upon earlier drafts of this paper. This has led to a number of improvements in the text.
References
1. Allis V.E., Tan Y.H., Treur J., Meta-level Selection Techniques for the Control of Default Reasoning. *Future Generation Computer Systems*, vol. 12, Special double issue (2-3): Reflection and Meta-level AI Architectures, (R. Lopez de Mantaras, ed.), 1996, pp. 189-201.
2. J.F.A.K. van Benthem, The logic of time: a model-theoretic investigation into the varieties of temporal ontology and temporal discourse. Reidel, Dordrecht, 1983.
3. S. Blamey, Partial Logic, in: D. Gabbay and F. Guenthner (eds.), *Handbook of Philosophical Logic*, Vol. III, 1-70, Reidel, Dordrecht, 1986.
4. K. Bowen and R. Kowalski, Amalgamating language and meta-language in logic programming. In: K. Clark, S. Tarnlund (eds.), Logic programming. Academic Press, 1982.
5. Brazier, F.M.T., Jonker, C.M., and Treur, J., Principles of Compositional Multi-agent System Development. In: J. Cuena (ed.), *Proceedings of the 15th IFIP World Computer Congress, WCC'98: Conference on Information Technology and Knowledge Systems, IT&KNOWS'98*, 1998, pp. 347-360. To be published by IOS Press.
6. Brazier, F.M.T., Jonker, C.M., Treur, J., and Wijngaards, N.J.E., On the Use of Shared Task Models in Knowledge Acquisition, Strategic User Interaction and Clarification Agents. *International Journal of Human-Computer Studies*, vol. 52, 2000, pp. 77-110.
7. Brazier, F.M.T., Langen, P.H.G. van, and Treur, J., Strategic Knowledge in Design: a Compositional Approach. *Knowledge-based Systems*, vol. 11, 1998 (Special Issue on Strategic Knowledge and Concept Formation, K. Hori, ed.), pp. 405-416.
8. Brazier, F.M.T., and Treur J., Compositional Modelling of Reflective Agents. *International Journal of Human-Computer Studies*, vol. 50, 1999, pp. 407-431.
9. H.A. Brumsen, J.H.M. Pannekeet and J. Treur, A compositional knowledge-based architecture modelling process aspects of design tasks, Proc. 12th Int. Conf. on AI, Expert systems and Natural Language, Avignon'92 (Vol. 1), 1992, pp. 283-294.
10. W.J. Clancey and C. Bock, Representing control knowledge as abstract tasks and metarules, in: Bolc, Coombs (eds.), Expert System Applications, 1988.
11. R. Davis, Metarules: reasoning about control, Artificial Intelligence 15 (1980), pp. 179-222.
12. Engelfriet, J., Marek, V.W., Treur, J., and Truszczynski, M., Default Logic and Specification of Nonmonotonic Reasoning. Journal of Experimental and Theoretical AI. To appear.
13. Engelfriet, J., and Treur, J., Temporal Theories of Reasoning. Journal of Applied Non-Classical Logics, 5, 1995, pp. 239-261.
14. Engelfriet J., Treur J. Executable Temporal Logic for Nonmonotonic Reasoning. Journal of Symbolic Computation, vol. 22, 1996, pp. 615-625.
15. Engelfriet J. and Treur, J., An Interpretation of Default Logic in Minimal Temporal Epistemic Logic. Journal of Logic, Language and Information, vol. 7, 1998, pp. 369-388.
16. Engelfriet J., and Treur J. Specification of Nonmonotonic Reasoning. Journal of Applied Non-Classical Logics, vol. 10, 2000, pp. 7-27
17. T. Fernando, Transition systems and dynamic semantics, Proc. JELIA'92 Workshop on Logic and AI, Berlin, 1992.
18. P.A. Geelen and W. Kowalczyk, A knowledge-based system for the routing of international blank payment orders, Proc. 12th Int. Conf. on AI, Expert systems and Natural Language, Avignon-92 (Vol. 2), 1992, pp. 669-677.
19. E. Giunchiglia, P. Traverso and F. Giunchiglia, Multi-context Systems as a Specification framework for Complex Reasoning Systems, In: [36], 1993, pp. 45-72.
20. R. Goldblatt, Logics of Time and Computation, CSLI Lecture Notes, Vol. 7, 1987, Center for the Study of Language and Information.
21. Hoek, W. van der, Meyer, J.-J.Ch., and Treur, J., Formal semantics of temporal epistemic reflection. In: L. Fribourg and F. Turini (ed.), Logic Program Synthesis and Transformation-Meta-Programming in Logic, Proc. Fourth Int. Workshop on Meta-programming in Logic, META'94, Lecture Notes in Computer Science, vol. 883, Springer Verlag, 1994, pp. 332-352.
22. Hoek, W. van der, Meyer, J.J. Ch., Treur, J., Temporal Epistemic Default Logic. Journal of Logic, Language and Information, vol. 7, 1998, pp. 341-367.
23. J.A.W. Kamp, A theory of truth and semantic representation, In: Formal methods in the study of language. Mathematical Centre Tracts 135, Amsterdam, 1981.
24. P.H.G. van Langen and J. Treur, Representing world situations and information states by many-sorted partial models, Report PEB904, University of Amsterdam, Department of Mathematics and Computer Science, 1989.
25. T. Langholm, Partiality, Truth and Persistence, CSLI Lecture Notes No. 15, Stanford University, Stanford, 1988.
26. I.A. van Langevelde, A.W. Philipsen, J. Treur, Formal specification of compositional architectures, In: B. Neumann (ed.), Proc. 10th European Conference on Artificial Intelligence, ECAI'92, Wiley and Sons, 1992, pp. 272-276.
27. J.W. Lloyd, Foundations of logic programming, Springer Verlag, 1984.
28. P. Maes, D. Nardi (eds), Meta-level architectures and reflection, Elsevier Science Publishers, 1988.
29. Y.H. Tan and J. Treur, A bi-modular approach to nonmonotonic reasoning, In: De Gias, M., Gabbay, D. (eds.), Proc. World Congress on Fundamentals of Artificial Intelligence, WOCFAI'91, 1991, pp. 461-476.
30. Y.H. Tan and J. Treur, Constructive default logic and the control of defeasible reasoning, In: B. Neumann (ed.), Proc. 10th European Conference on Artificial Intelligence, ECAI'92, Wiley and Sons, 1992, pp. 299-303.
31. Y.H. Tan and J. Treur, Constructive default logic in a meta-level architecture, in: A Yonezawa, B.C. Smith (eds.), Proc. International Workshop on new Models in Software Architecture (IMSA) 1992, Reflection and Meta-level Architectures, 1992, pp. 184-189.
32. J. Treur, Completeness and definability in diagnostic expert systems, Proc. European Conference on Artificial Intelligence; ECAI'88, München, 1988, pp. 619-624.
33. J. Treur, On the use of reflection principles in modelling complex reasoning, International Journal of Intelligent Systems 6 (1991), pp. 277-294.
34. J. Treur, Declarative functionality descriptions of interactive reasoning modules, In: H. Boley, M.M. Richter (eds.); Processing Declarative Knowledge, Proc. of the International Workshop PDK'91, Lecture Notes in Artificial Intelligence, vol. 567, Springer Verlag, 1991, pp. 221-236.
35. J. Treur, P. Veerkamp, Explicit representation of design process knowledge, in: J.S. Gero (ed.), Artificial Intelligence in Design '92, Proc. AID'92, Kluwer Academic Publishers, 1992, pp. 677-696.
36. J. Treur and Th. Wetter (eds.), Formal Specification of Complex Reasoning Systems, Ellis Horwood, 1993, pp 282.
37. R.W. Weyhrauch, Prolegomena to a theory of mechanized formal reasoning, Artificial Intelligence 13 (1980), pp. 133-170. |
Voice Traffic Performance Measurement in Packet Networks
V. Matić, A. Bažant and M. Kos
University of Zagreb
Faculty of Electrical Engineering and Computing
Department of Telecommunications
email@example.com
Abstract. The quality of voice transport over packet networks depends on three critical parameters: packet loss, absolute packet delay and packet delay variation (jitter). A common practical problem is how to estimate voice quality for a different number of voice sources with different source characteristics in target networks. Important characteristics of a voice source are: rate (packets per second), packet length (depending on the type of compression), type of source (VBR or CBR) and statistical description. In this paper we propose a methodology and a practical approach for the systematic measurement of voice traffic parameters with respect to varying voice source characteristics. Measurement results obtained with the presented methodology on an experimental DiffServ network are given as well.
Keywords. VoIP, packet loss, jitter, delay, DiffServ, voice quality.
1. Introduction
Recently, mechanisms that offer guaranteed QoS have been penetrating existing packet networks. Network providers, no matter what kind of technology they use (MPLS, DiffServ, IntServ, ATM or other), have to measure traffic performance and estimate the service quality they can offer to customers [2]. Any customer who wants to verify the quality of service agreed with the network provider can conduct similar measurements. But the methodology and approach to measurements differ between these two cases. The customer is aware of the services he is subscribed to. He also knows the traffic characteristics that are necessary for the achievement of the agreed QoS. Hence, measurements can be done for a reduced set of traffic parameters. On the other hand, the network service provider has to measure a wider range of traffic parameters because of the variety of offered services.
It is important to correctly map traffic parameters to a desired service. When one defines service quality, one actually puts constraints on packet rate, jitter, packet loss, packet delay, etc. The network then applies the constrained parameters to a specific traffic type. It is hard to determine voice quality on the basis of measured traffic parameters because of the subjective nature of speech quality estimation. Common practice allows us to use empirical values as shown in Table 1.
Table 1: Voice quality estimation
| Quality | Packet loss [%] | Maximum delay variation [ms] |
|---------|-----------------|-------------------------------|
| excellent | 0 | 0 |
| good | 3 | 75 |
| average | 10 | 125 |
| bad | 25 | 225 |
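One possible reading of Table 1 as a classification procedure is sketched below (the assumption that the worse of the two metrics dominates, and the fallback label for measurements beyond the "bad" band, are ours, not stated in the table):

```python
def voice_quality(loss_pct: float, jitter_ms: float) -> str:
    """Classify voice quality from measured packet loss and delay variation
    using the empirical thresholds of Table 1 (worse metric dominates)."""
    bands = [("excellent", 0.0, 0.0), ("good", 3.0, 75.0),
             ("average", 10.0, 125.0), ("bad", 25.0, 225.0)]
    for name, max_loss, max_jitter in bands:
        if loss_pct <= max_loss and jitter_ms <= max_jitter:
            return name
    return "unacceptable"  # beyond all bands of Table 1

# e.g. 2 % loss with 60 ms delay variation falls into the "good" band
assert voice_quality(2.0, 60.0) == "good"
```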
The next step is to describe services by means of corresponding traffic parameters and then to design sources that can generate packet streams with characteristics as close as possible to real sources. After that, it is necessary to design voice traffic sinks. Sinks are no less important than sources because they actually measure traffic performance. What is left to be done is to compare the measured traffic characteristics with the service requirements and, based on that comparison, to decide which services the network can support.
2. Voice Source Emulation
Emulated sources and sinks are necessary for measurements of voice traffic parameters in a real network. Source emulators should be configurable through the following parameters: number of sources, packet length, packet rate (number of packets per second) and type of service (VBR or CBR). If a source is VBR, its statistical parameters are needed, too.
Packet length corresponds to the voice compression algorithm used to code the hypothetical voice stream. Packets sent into a network by the emulator are not actually empty, even though they do not carry voice information. Real voice transport is based on a real-time protocol such as RTP [4], which carries not only voice but also information like the packet sequence number, time stamp (time of packet creation) and other data important for real-time delivery. For emulation purposes it is necessary to embed that kind of information in each packet. The sequence number and time stamp are especially important because these parameters are used on the receiver side as inputs to the packet loss and jitter calculations.
While it is easy to describe a CBR source by means of packet length and a fixed inter-packet time, a VBR source is much more complicated. VBR traffic is generated during a voice communication due to the implementation of silence suppression algorithms in terminal equipment: while the source is silent it does not generate packets. This is the so-called On/Off source, and it can be described with a Markov chain model. The full Markov model of a voice conversation comprises six states [3]. This model is too complex for implementation and practical use, but a two-state model can be derived from the full model by a series of simplifications. The simplified model is good enough for measurement purposes.
3. Two States Markov Chain Model of a Voice Source
As shown in Figure 1, this model consists of two stable states.

This model is a simplification of the full conversation model; it cannot describe states such as both sides silent or both sides talking. After entering state 1 the emulation application starts to send packets into the network and, as long as it stays in that state, it continues to send equally spaced packets. Because of the discrete nature of this process, the next state is calculated after each packet is sent.
This process can be described with a transition probability matrix,
\[
P = \begin{bmatrix} 1 - q & q \\ r & 1 - r \end{bmatrix}.
\]
(1)
If the process is stationary [6], we can write:
\[
pP = p
\]
(2)
where \( p = [\hat{P}_1 \; \hat{P}_2] \). From the matrix equation (2) we can derive the stationary probabilities as shown in (3):
\[
\hat{P}_1 = \frac{r}{r + q}, \quad \hat{P}_2 = \frac{q}{r + q}.
\]
(3)
A problem is how to choose the transition probabilities to satisfy a desired mean time spent in each of the states (\( T_1, T_2 \)) while still retaining fixed stationary probabilities. In other words, there is an infinite number of $r$ and $q$ combinations satisfying equation (3). It is not only important to fix the stationary probabilities for a voice conversation; it is also important to determine how long the process continuously remains in each of the states. That is why constraints are put on the mean times (talking or listening). Exact values of these time intervals are obtained from statistical measurements on real conversations [5].
Once the first packet of a talking interval has been generated, we can ask: "What is the probability that exactly $i$ packets will be generated in a sequence?" This can be written as
$$p(i) = (1 - q)^{i-1} q.$$ \hspace{1cm} (4)
Now we can evaluate the mean value of this stochastic process, as shown in equation (5). Notice that the number of packets generated in a sequence conforms to the geometric distribution:
$$E[X] = \sum_{i=1}^{\infty} i p(i) = q \sum_{i=1}^{\infty} i (1 - q)^{i-1}. \hspace{1cm} (5)$$
Equation (5) can be written in another form:
$$E[X] = q \frac{\partial}{\partial(1-q)} \sum_{i=0}^{\infty} (1 - q)^i. \hspace{1cm} (6)$$
It can easily be seen that the sum in expression (6) is a geometric series; hence, the result of the summation is $1/[1 - (1 - q)]$. Finally, the result is
$$E[X] = q \frac{\partial}{\partial(1-q)} \left[ \frac{1}{[1 - (1 - q)]} \right] = \frac{q}{[1 - (1 - q)]^2} = \frac{1}{q}. \hspace{1cm} (7)$$
Now that we know the expected number of generated packets in a talk interval, we can also evaluate the mean time spent in state 1. Since packets in the talk interval are sent equally spaced by $\Delta$ ms, we can write
$$T_1 = \Delta E[X] = \Delta \frac{1}{q} \hspace{1cm} (8)$$
$$T_2 = \Delta E[Y] = \Delta \frac{1}{r} \hspace{1cm} (9)$$
where $T_1$ and $T_2$ are mean times spent in state 1 and state 2, respectively.
If we express the transition probabilities $q$ and $r$ through the expected times $T_1$ and $T_2$, we can write
$$q = \frac{\Delta}{T_1}, \quad r = \frac{\Delta}{T_2} \Rightarrow$$
$$\hat{P}_1 = \frac{\Delta}{\left(\frac{\Delta}{T_1} + \frac{\Delta}{T_2}\right) T_2}, \quad \hat{P}_2 = \frac{\Delta}{\left(\frac{\Delta}{T_1} + \frac{\Delta}{T_2}\right) T_1} \Rightarrow$$
$$\hat{P}_1 = \frac{T_1}{T_1 + T_2}, \quad \hat{P}_2 = \frac{T_2}{T_1 + T_2} \hspace{1cm} (10)$$
From (10) it can be seen that both requirements are met: the stationary probabilities are in the desired ratio (e.g. 40%-60%), and the expected times $T_1$ and $T_2$ are equal to the predefined values.
3.1 Model Generalization for $N$ Sources
In the case of multiple voice sources it is necessary to introduce a general model derived from the basic two-state Markov model. It can be observed that the steady-state arrival process is a binomial process; this is clearly due to the fact that each speaker's behavior is an independent Bernoulli trial, with the probability of supplying a packet equal to $r/(r + q)$ [3]. The question is: "How do we express the probability that in the $k$-th time interval (interval duration is fixed to $\Delta$ ms) $n$ of $m$ sources are active?" For the steady state, or in other words when $k \to \infty$, one can write
$$P_n = \binom{m}{n} \left( \frac{r}{r+q} \right)^n \left( \frac{q}{r+q} \right)^{(m-n)}. \hspace{1cm} (11)$$
In practical applications of this generalization there is the problem of how to calculate the binomial coefficient
$$\binom{m}{n} = \frac{m!}{n!(m-n)!},$$
$$m \in N, \ n \in N, \ m \geq n \hspace{1cm} (12)$$
in a standard C implementation. The largest factorial that can be represented with the standard C library's double-precision type is 170!.
Hence, that number of channels is insufficient for extensive voice source emulation. On the other hand, network interface capacity and computer processing power should be the only practical limitations; for example, a personal computer with a Pentium processor and a 10 Mbit/s Ethernet card can generate up to 500 voice channels.
The solution is to approximate the binomial distribution with the normal distribution (a formal proof that this is valid is given in [6]):
\[
f(x) = \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{(x-a)^2}{2\sigma^2}}.
\]
(13)
The use of the normal distribution is preceded by the parameter mapping shown in
\[
a = m \hat{P}_1 \\
\sigma^2 = m \hat{P}_1 \hat{P}_2
\]
(14)
where \(a\) is the mean and \(\sigma\) is the standard deviation. The expectation of the normal distribution is the product of the number of users and the stationary probability \(\hat{P}_1\); this product represents the expected number of active voice channels in the \(k\)-th time interval. Say the talk-to-silence ratio is 60%-40% and the maximum number of users is 1000; then the expected number of active users in the \(k\)-th time interval is 600, which is a logical result.
We can write distribution function expressed through error function as shown in
\[
F(x) = \frac{1}{\sigma \sqrt{2\pi}} \int_{-\infty}^{x} e^{-\frac{(t-a)^2}{2\sigma^2}} dt =
\]
\[
= \frac{1}{2} \left[ 1 - erf \left( \frac{a-x}{\sqrt{2}\,\sigma} \right) \right].
\]
(15)
Using expression
\[
erfc(z) + erf(z) = 1
\]
(16)
distribution function could be expressed through complementary error function
\[
F(x) = \frac{1}{2} erfc \left( \frac{a-x}{\sqrt{2}\,\sigma} \right).
\]
(17)
Finally, the inverse function of (17) is given by
\[
x = a - \sqrt{2}\sigma [erfc^{-1}(2v)]
\]
(18)
where \(F(x) \equiv v\). If we use \(v\) as a randomly generated number conforming to the uniform distribution, then the variable \(x\) conforms to the normal distribution. Almost all operating systems and programming languages provide uniform random numbers through a `rand`-like function; hence, a practical implementation of expression (18) is straightforward.
An exact measure of the error introduced by this approximation does not exist [6], but an approximate estimate can still be made by observing Figure 2, which shows both distributions (binomial and normal) on the same graph. Each graph has two parameters: \(m\), the maximum number of users, and \(p\), the stationary probability of silence.

As can be seen, the approximation works well for numbers of users greater than 50.
4. Implementation
We have implemented the voice source emulator in the C programming language. The idea behind the practical implementation is to use two independent emulators in the same application. The first emulator works on one UDP port and emulates \(N\) voice sources in accordance with equation (18). The second emulator works on another UDP port and emulates only one voice source, but with the same statistical parameters as the first emulator. The channel generated by the second emulator has a special purpose, and we will refer to it in the rest of the article as the measurement channel. The measurement channel carries information for real-time delivery and measurements, such as sequence number, time stamp, number of emulated sources, etc.
The sink, which is in fact the same application as the source but with different command line switches, performs measurements only on the measurement channel. The measurement results are generalized to the rest of the generated channels. This approach is possible because the measurement channel can be regarded as randomly selected from the aggregated voice traffic flow: it has the same statistical parameters as the rest of the generated channels and, accordingly, can represent all of them. One could ask why to bother with the rest of the channels at all. The answer is simple: because we need to load the network with traffic similar to real voice transport to achieve authentic measurements.
Further, the emulator application has a feature to automatically vary the maximum number of users and the packet length. For example, the application can be configured to generate 100 packets for each combination of packet length and number of users, where both parameters are varied within ranges defined at application startup. Thanks to this approach, measurement results can be presented in three-dimensional space, where $X$ is the packet length, $Y$ is the number of active channels and $Z$ is the measured value (for example, packet loss or jitter). This graphical representation gives better insight into the phenomena and makes it easier to identify the level of service that can be offered to a customer. An example of such a representation, for measurements on a DiffServ node, is shown in Figure 3.
5. Further Work and Conclusion
Currently, the same packet length is used for all emulated channels. The future development plan is to implement support for varying packet lengths. In practice we can expect that customers use different compression algorithms; as a consequence, traffic sources generate packets of varying lengths. This influences the queuing disciplines in network nodes, which is directly connected to service quality. Hence, the emulation would be more precise if we could define a mapping between packet length and each specific emulated channel.
References
[1] Željko Pauše. *Vjerojatnost, informacija stohastički procesi*. Školska knjiga Zagreb, 1974.
[2] Nobuhiko Kitawaki and Kenzo Itoh. Pure Delay Effects on Speech Quality in Telecommunications. *IEEE Journal on Selected Areas in Communications*, 9(4):586–593, May 1991.
[3] Daniel Minoli and Emma Minoli. *Delivering voice over IP networks*. John Wiley and Sons, Inc., 1998.
[4] H. Schulzrinne. RTP: A Transport Protocol for Real-Time Applications, Jan 1996. Status: STANDARD.
[5] Paul T. Brady. A Statistical Analysis of On-Off Patterns in 16 Conversations. *The Bell System Technical Journal*, 1968.
[6] Dimitrije Ugrin-Šparac. *Primjenjena teorija vjerovatnosti*. Sveučilišna naklada, 1990. |
EVALUATION OF THE TENSILE BOND STRENGTH OF ORTHODONTIC BRACKET BASES USING GLASS IONOMER CEMENT AS AN ADHESIVE
by
Richard D. Burns, Jr.
Submitted to the Graduate Faculty of the School of Dentistry in partial fulfillment of the requirements for the degree of Master of Science in Dentistry, Indiana University School of Dentistry, 1992.
Thesis accepted by the faculty of the Department of Orthodontics, Indiana University School of Dentistry, in partial fulfillment of the requirements for the degree of Master of Science in Dentistry.
Lawrence P. Garetto
B. Keith Moore
James R. Miller
James C. Shanks, Jr.
David K. Hennon
W. Eugene Roberts, Jr.
Chairman of the Committee
Date 5/18/92
ACKNOWLEDGMENTS
The completion of this thesis required the contributions of numerous individuals.
Special thanks go to Dr. James Miller who presented many thought-provoking ideas pertaining to the use of glass ionomer cement in orthodontics.
I am grateful to the Department of Dental Materials for taking me under its wing and helping me with the everyday problems of research. Dr. B. Keith Moore, Ms. Barbara Rhoades and Ms. Hazel Clark deserve a special thank you for their advice and expertise.
I am grateful to Ms. Jean Schotnect for her skill and cooperation in helping me with the electron micrographs. I am grateful to Mike Halloran and Mark Dirlam in Dental Arts and Illustrations for photographs, tables and graphs.
Dr. Lawrence Garetto deserves special thanks for his commitment to seeing my project to fruition.
To the members of my graduate committee, thanks for reviewing and directing this research: Drs. B. Keith Moore, James Miller, James Shanks, Lawrence Garetto, David Hennon and W. Eugene Roberts.
To my parents, Richard and Jane, I am grateful for your support of my pursuit of higher ideals through education.
TABLE OF CONTENTS
| Section | Page |
|-------------------------------|------|
| Introduction | 1 |
| Review of Literature | 3 |
| Methods and Materials | 35 |
| Results | 44 |
| Figures and Tables | 47 |
| Discussion | 70 |
| Summary and Conclusions | 81 |
| References | 84 |
| Curriculum Vitae | |
| Abstract | |
LIST OF ILLUSTRATIONS
| FIGURE | Description | Page |
|--------|------------------------------------------------------------------------------|------|
| 1 | Mean tensile bond strength for each group | 47 |
| 2 | Mean percent GIC adhesive attached to the base for each group | 48 |
| 3 | Electron micrograph (X20) of group I Dyna-Lock™ bracket base | 49 |
| 4 | Electron micrograph (X100) of group I Dyna-Lock™ bracket base | 49 |
| 5 | Electron micrograph (X20) of group II perforated metal bracket base | 50 |
| 6 | Electron micrograph (X100) of group II perforated metal bracket base | 50 |
| 7 | Electron micrograph (X20) of group III 100 gauge mesh bracket base | 51 |
| 8 | Electron micrograph (X100) of group III 100 gauge mesh bracket base | 51 |
| 9 | Electron micrograph (X20) of group IV 80 gauge mesh bracket base | 52 |
| 10 | Electron micrograph (X100) of group IV 80 gauge mesh bracket base | 52 |
| 11 | Electron micrograph (X20) of group V 60 gauge mesh bracket base | 53 |
| 12 | Electron micrograph (X100) of group V 60 gauge mesh bracket base | 53 |
| 13 | Electron micrograph (X20) of group VI 100 gauge mesh sandblasted base | 54 |
| 14 | Electron micrograph (X90) of group VI 100 gauge mesh sandblasted base | 54 |
| 15 | Electron micrograph (X20) of group VII Micro-Loc™ photo-etched bracket base | 55 |
| 16 | Electron micrograph (X100) of group VII Micro-Loc™ photo-etched bracket base | 55 |
| 17 | Electron micrograph (X20) of group VIII ceramic bracket base | 56 |
| 18 | Materials for mixing and placement of GIC | 56 |
| 19 | Stress-relieving apparatus suspended in Instron™ testing machine | 57 |
| 20 | Stress-relieving apparatus with specimen suspended downward and debonding harness attached to the wings of a bracket | 57 |
| 21 | Electron micrograph of debonded group I Dyna-Lock™ bracket with minimal amount of residual GIC adhesive attached | 58 |
| 22 | Electron micrograph of debonded group I Dyna-Lock™ bracket with abundant residual GIC adhesive remaining attached | 58 |
| 23 | Electron micrograph of debonded group II perforated base with minimal amount of residual GIC adhesive attached | 59 |
| 24 | Electron micrograph of debonded group II perforated bracket base with abundant residual GIC adhesive attached | 59 |
| 25 | Electron micrograph of debonded group III 100 gauge mesh base with minimal residual GIC adhesive remaining attached | 60 |
| 26 | Electron micrograph of debonded group III 100 gauge mesh base with abundant residual GIC adhesive remaining attached | 60 |
| 27 | Electron micrograph of debonded group IV 80 gauge mesh base with minimal residual GIC adhesive remaining attached | 61 |
| 28 | Electron micrograph of debonded group IV 80 gauge mesh base with abundant residual GIC adhesive remaining attached | 61 |
| 29 | Electron micrograph of debonded group V 60 gauge mesh base with minimal residual GIC adhesive remaining attached | 62 |
| 30 | Electron micrograph of debonded group V 60 gauge mesh base with abundant residual GIC adhesive remaining attached | 62 |
| 31 | Electron micrograph of debonded group VI 100 gauge mesh sandblasted base with minimal GIC adhesive remaining attached | |
| 32 | Electron micrograph of debonded group VI 100 gauge mesh sandblasted base with abundant residual GIC remaining attached | |
| 33 | Electron micrograph of debonded group VII Micro-Loc™ base with minimal residual GIC adhesive remaining attached | |
| 34 | Electron micrograph of debonded group VII Micro-Loc™ base with abundant residual GIC adhesive remaining attached | |

| TABLE | Description | Page |
|-------|-------------|------|
| IA | Groups I, II, III and IV manufacturers and product numbers | |
| IB | Groups V, VI, VII and VIII manufacturers and product numbers | |
| II | Bracket base surface area (cm²) | |
| III | Mean tensile bond strength for each group | |
| IV | Mean percent GIC adhesive attached to bracket base for each group after debonding | |
INTRODUCTION
The direct bonding of orthodontic brackets to teeth is a relatively new procedure. In 1967 Mitchell\textsuperscript{1} unsuccessfully tried using epoxy resin as an orthodontic adhesive. In 1968 Mizrahi and Smith\textsuperscript{2} described the use of zinc polycarboxylate cement for the bonding of orthodontic attachments to teeth. In the early 1970s the introduction of dimethacrylate monomers shifted the search for a bonding agent to these materials.\textsuperscript{3} The dimethacrylate resins proved to possess the properties of a clinically successful direct bonding adhesive. They had a high bond strength to etched enamel, good dimensional stability, and a shorter setting time than the conventional resins.\textsuperscript{3} The dimethacrylate resins have enjoyed widespread use for direct bonding orthodontic attachments since their introduction.
The search for an orthodontic adhesive that has chemical adhesion to enamel and releases fluoride into the oral environment has led to experimentation with glass ionomer cements (GIC) as bonding agents. In vitro tests comparing GIC to resin adhesives have shown that GIC are significantly weaker than resin adhesives in both tensile and shear bond strengths.\textsuperscript{4-7} In 1986 White\textsuperscript{8} reported that he successfully used GIC in vivo to bond orthodontic attachments to teeth. In 1988 Miller\textsuperscript{9} conducted a three-month in vivo study to
determine if glass ionomer cement used as a bonding adhesive could withstand the forces placed upon it in the oral environment during orthodontic treatment. The results showed no difference in failure rate between the GIC adhesive and conventional resin adhesive. The clinical use of GIC as orthodontic adhesives has suggested that they may possess the minimum bond strength needed for an adhesive in orthodontics.
The typical site of bond failure when using GIC as adhesives involves the bracket/adhesive interface.\textsuperscript{8-10} Since the site of adhesive failure almost always involves the bracket/base interface, it is felt that the bracket base design could have an effect on the attainable bond strength of a GIC adhesive.
The purpose of the present investigation was to examine the effect of orthodontic bracket base design on the tensile bond strength of brackets bonded with GIC adhesive and to assess the amount of adhesive remaining attached to the bracket pad.
REVIEW OF LITERATURE
Development of dental luting cements with adhesive properties to enamel and dentin has occurred over the past 25 years. Zinc polycarboxylate and glass ionomer cements (GIC) share the polyacrylic acid polymer as an essential ingredient. In 1968 Smith\textsuperscript{11} described a new dental cement which was formulated by mixing a concentrated aqueous solution of polyacrylic acid with zinc oxide. He noted that this new cement, zinc polycarboxylate, had strength characteristics very similar to the old standard, zinc phosphate. In addition, the cement showed considerable adhesion to enamel and dentin surfaces and caused little irritation to the oral soft tissues. Wilson and Kent\textsuperscript{12,13} in 1971 described a new translucent cement (GIC) for dentistry which resulted from the hardening reactions between certain ion leachable glass powders and aqueous solutions of polyacrylic acid. GIC was initially developed for a variety of dental applications. These included the restoration of anterior teeth, filling of erosion cavities, and general cementation and cavity lining purposes. Smith,\textsuperscript{11} Wilson and Kent,\textsuperscript{12,13} and Mizrahi\textsuperscript{2} suggested glass ionomer and zinc polycarboxylate cements may have orthodontic applications since they both showed long-term adhesion to enamel, dentin and certain metals.
The chemistry of the setting reaction of GIC has been extensively studied by Crisp\textsuperscript{14-19} et al. GIC are the product of a hardening reaction that takes place between aqueous solutions of polyalkenecarboxylic acids, in particular, polyacrylic acids and special glasses. The glasses are unusual because they are shock-cooled melts of silica and alumina mixtures fused in a fluoride flux.\textsuperscript{14} The glasses are made by fusing together mixtures of silica, alumina, cryolite, fluorite, aluminum fluoride and aluminum phosphate at 1050°C to 1350°C. These constituents are fused for 45 minutes to 120 minutes, and then the melts are rapidly cooled to form stressed opal glasses. A fluoride flux is used in order to lower the temperature of fusion, and it also provides the source of leachable fluoride ions.\textsuperscript{13} The cementing liquids are a 45 percent to 55 percent aqueous solution of homopolymers and copolymers of acrylic acid.
Because of the decrease in pH and increased concentration of soluble ions, there is an initial acid/base reaction.\textsuperscript{14} The powder acts as a proton acceptor and the liquid as a proton donor. Hydrogen ions in the liquid are replaced by metallic ions.\textsuperscript{14} The powder and liquid, when mixed together, form putty-like pastes which initially set to hard translucent substances within two minutes to 10 minutes.\textsuperscript{13} Crisp\textsuperscript{15} demonstrated through infrared spectroscopic studies that initially the aluminosilicate network of the powder is
degraded to a siliceous gel, and both calcium and aluminum salts are formed. In the early stages of the reaction, the calcium salt alone appears to be formed causing gelation and initial set. The aluminum salt is formed later and results in the final hardening, with calcium and aluminum salts being present in approximately equal amounts in the cured cement. Calcium ions react within the first few minutes, while the aluminum ions react at room temperature after approximately one hour.\textsuperscript{18} The slower reacting aluminum ion is attributed to the more stringent steric requirements imposed by a trivalent ion on polyanion chain configuration.
The two-phase setting reaction of GIC conveys interesting rheological properties to the material. The initial reaction, where the calcium ion exchange is taking place, results in a set that allows the material to be carved like a dental amalgam, while the slower aluminum exchange reaction results in a hard cured cement.\textsuperscript{20} The trivalent aluminum ions ensure a much stronger cross-linking than is possible with divalent ions alone.\textsuperscript{20} Not all of the carboxylic acid groups of the chain react. This may possibly be due to steric requirements or because the remaining polyacrylate chain is largely ionized causing the remaining hydrogen ions to be firmly bound by electrostatic forces.\textsuperscript{15}
Fluoride plays an important role in the setting reaction. Crisp\textsuperscript{15} felt that the formation of cationic fluoride complexes such as AlF$_2$+, AlF++, and CaF+ play an important
role in the transfer of ionic species and their interaction with polyacrylic acid.\textsuperscript{14} Fluoride complexes mediate the transfer of positively charged ions in the setting reaction.
Crisp estimates that between 20 percent and 30 percent of the glass decomposes by proton attack to combine with the polymer in forming the matrix.\textsuperscript{16} The cured cement consists of partly reacted particles bonded by an ionomer matrix.\textsuperscript{13} The nature of the setting reaction causes the cement to appear as a composite material at the microstructural level. There is a graded structure between filler and matrix, with the leachable glass filler playing a positive role in the setting reaction. The silica gel matrix contains particles consisting of a core of unreacted glass.\textsuperscript{19} The active involvement of filler particles in the setting reaction means there is no thermal mismatch or special surface treatment required for binding of filler and matrix.\textsuperscript{13} This is in contrast with dental composite resin that requires a chemical coating on the filler for adhesive bonding to the resin.
There have been several modifications of GIC formulations in order to improve the properties of the material. Crisp and Wilson\textsuperscript{19} incorporated tartaric acid into the polyacrylic acid solutions and found that it acted as an accelerator in the setting reaction. The tartaric acid facilitates the extraction of ions from the glass powder so that there is a higher concentration of cations available to react with the polyanion (polyacrylate). This accelerates
the rate of reaction. The increased rate of reaction is manifested in the overall hardening rate, without having an effect on working time.
The introduction of certain copolymers of acrylic acids helps to reduce the viscosity of the polyacrylic acid solution. The 50 percent polyacrylic acid solution tends to thicken and gel. The addition of itaconic and acrylic acids proved to be best in producing the lowest viscosity polyacrylic acid without affecting the properties of the GIC.\textsuperscript{17}
McLean et al.\textsuperscript{21} were responsible for developing water-hardening GIC. They felt that the main deficiencies of GIC could be attributed to the clinical problem of judging the correct powder/liquid ratio due to the high viscosity of the polyacrylic acid. The water-hardening system freeze dries the polyacid and blends it with the glass powder. Cement formation is initiated by mixing the blend with water. McLean and his associates felt that the working time was markedly prolonged in this system due to the time needed for a solution to form between the water and freeze-dried polymer. Water-hardening GIC possesses a lower viscosity in the early mixing stages which could help facilitate the wetting of surfaces to which it is applied. Prosser et al.\textsuperscript{22} found no clear difference in physical properties between the conventional and water-hardening GIC.
Working time, setting time, manipulative properties, opacity, and strength all can be modified by using different
glass and polyacid formulations.\textsuperscript{20} Other modifications can be made by varying the size and molecular weight of the powder particles and the concentration and viscosity of the liquid. The setting time is controlled by adjusting the ratio of alumina to silica in the glass fusion mixture and the fineness of the powder.\textsuperscript{13}
The property of glass ionomer and polycarboxylate cements that sets them apart from other dental materials is the physico-chemical adhesion obtainable to enamel and dentin.\textsuperscript{20} The term 'physico-chemical' adhesion is more accurate than 'chemical' adhesion since the bond that forms is almost certainly the result of secondary forces of molecular attraction rather than primary chemical bonds. These secondary forces are Van der Waals forces and dipole-dipole interactions.\textsuperscript{23} Primary chemical bonds are of a covalent or ionic nature. The physico-chemical adhesion is distinguished from mechanical adhesion which relies on mechanical interlocking. The exact mechanism of adhesion to calcium in tooth structure is not entirely known.\textsuperscript{24} GIC adheres to a number of substrates due to the reaction of many of the carboxylic acid groups available for hydrogen bonding. The formation of hydrogen bonding promotes wetting of the substrate which is the first step in obtaining adhesion.\textsuperscript{20} It is postulated that as the GIC sets and hardens, hydrogen bonds are replaced by more rigid metal ionic bonds linking the cement firmly to the substrate.\textsuperscript{20} Adhesion between GIC
and substrates would be expected to result from dipole and ionic interactions because glass ionomer cements and the substrates have a polar nature.\textsuperscript{25} The enamel in tooth structure has positively charged calcium in the hydroxyapatite crystal structure that acts as substrate for bonding to the GIC.
Adhesion of restorative materials to tooth structure is a problem which has always faced dentistry. Adhesion is defined as:
...the molecular attraction exerted between the surfaces of bodies in contact or the attraction between molecules at an interface...\textsuperscript{26}
The force is adhesion when unlike molecules are attracted and cohesion when the molecules attracted are of the same kind.\textsuperscript{26} Adhesive forces depend on the forces of molecular attraction and thus exist at only very short distances of separation. Little or no adhesion can be recognized with separations greater than one or two angstroms. In practice, it is impossible to obtain atomically smooth surfaces; therefore, two surfaces that are brought together will actually have a much smaller area of contact than what is apparent. Because atomically smooth surfaces are not obtainable, fluid adhesives which flow into irregularities of the surface are used to gain adhesion.\textsuperscript{27}
A physical condition for obtaining good adhesion is adequate wetting of the substrate by the adhesive, and this
is controlled by certain physical properties of these substances.\textsuperscript{28} When the attractive forces between the adhesive and adherend are strong, wetting will occur. The general principle followed by adhesive systems is that the greater the affinity between the adhesive and substrate, the greater the wetting.\textsuperscript{28} The existence of a 'contact' angle between the adhesive and adherend at their interface is a measure of the wetting ability of an adhesive.\textsuperscript{27} The smaller the contact angle, the better the ability of the adhesive to fill in irregularities in the surface of the adherend.\textsuperscript{3} The viscosity and surface tension of the adhesive influence the extent to which these voids or irregularities are filled. The setting time of the adhesive must be balanced with the viscosity for different applications of the adhesive.
Adhesion in dentistry is especially difficult since there is an aqueous environment. Buonocore\textsuperscript{27} stated:
...a dental adhesive must be capable of bonding to tooth surfaces which may not tolerate adequate drying and, more important, the adhesive must maintain its adhesion in continuous contact with water...
Adhesion can be weakened or destroyed by water penetrating between adhering surfaces. A film of water only one molecule thick on the surface of the solid may lower the surface energy of the adherend and prevent any wetting by the adhesive.\textsuperscript{3} There is a thin layer of water which always covers the surface of the enamel and dentin, even when the
cavity or tooth structure appears clinically dry.\textsuperscript{29} This lowers the surface energy of enamel and dentin even further.
Enamel and dentin have low energy surfaces for adhesive interaction. Teeth in vivo have a thin organic surface film which decreases the surface energy.\textsuperscript{30} In 1955 Buonocore\textsuperscript{27} performed experiments in an attempt to improve the adhesion of acrylic and other materials to tooth structure. He treated the enamel with phosphoric acid. The premise of his experiment was that decalcifying the superficial enamel would result in a surface more receptive to adhesion. Essentially, acid etching changes the enamel surface from a low-energy hydrophobic to a high-energy hydrophilic surface, showing increased surface tension and wettability.\textsuperscript{31} The acid treatment of the enamel surface resulted in mechanical bonding between the acrylic adhesive and tooth.\textsuperscript{32} GIC requires that the pellicle layer be removed with polyacrylic acid prior to application.\textsuperscript{33} It has been shown that a 25 percent polyacrylic acid solution can significantly improve the bond strength to enamel and dentin.\textsuperscript{34} Aboush and Jenkins\textsuperscript{33} also found that exposure of freshly cut dentin to saliva, even after a 20 second wash, could jeopardize adhesion. They attributed the decreased adhesion to surface contamination of the dentin interfering with chemical bond formation.
GIC are known to bond to materials such as stainless steel, tin and tin oxide plated platinum or gold. They will
not adhere to porcelain, pure platinum or gold.\textsuperscript{24} Hotz et al.\textsuperscript{23} found the adhesive bond to enamel, dentin, platinum-tin, and gold-tin alloys to be in the region of 4 Newton/mm\textsuperscript{2} (1 Newton = 102 g). They also reported that the bond of GIC to dentin is significantly weaker than the bond to enamel. It appears that GIC bond only to surfaces with which they can react chemically and which provide cations for bonding.
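Using the conversion the authors give (1 Newton = 102 g-force), the reported bond strength can be restated in the other units used later in this review. The arithmetic below is a restatement for comparison, not a value from the source:

```latex
% Converting the reported GIC bond strength of 4 N/mm^2,
% using 1 N = 102 g-force and 1 cm^2 = 100 mm^2:
\[
  4\ \mathrm{N/mm^2}
  \;=\; 4\ \mathrm{MPa}
  \;\approx\; 4 \times 102\ \mathrm{g/mm^2}
  \;=\; 408\ \mathrm{g/mm^2}
  \;=\; 40.8\ \mathrm{kg/cm^2}
\]
```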
McLean and Sced\textsuperscript{35} devised a technique for bonding porcelain to platinum and gold surfaces using a 2 micrometer thick layer of tin electro-deposited on the metal surface. They felt that the layer of tin on the metal surface would provide a suitable oxide film for wetting by the GIC.\textsuperscript{35}
The ideal properties of a dental luting cement, as listed by Mitchell\textsuperscript{36} and Wilson et al.,\textsuperscript{37} are the following:
1) Very low viscosity and film thickness
2) Long working time with rapid set at mouth temperature
3) High compression and tensile strength
4) Resistance to creep
5) Adhesion to tooth structure and the restoration
6) Not be locally or systemically toxic
7) Good resistance to aqueous or acid attack
8) Some cariostatic properties
9) Translucency
10) Adhesive bond stability in a moist environment
11) Resistance to thermocycling in oral cavity
12) No harm to or discoloration of tooth structure
GIC fulfill many of these requirements. They possess the following properties:\textsuperscript{8}
1) Physico-chemical bonding to enamel, cementum, dentin, non-precious metals, and plastics
2) High compressive and tensile bond strength
3) Low film thickness
4) Biologic compatibility
5) Better resistance to acid attack than phosphate bonded cements
6) Translucency
7) Easier removal of material from enamel compared to composite resins
8) GIC acts as a reservoir for fluoride ions providing some means for cariostatic action
9) No harm to or discoloration of tooth structure
The problem most commonly associated with the use of GIC as luting agents is the tendency of the cement to wash out prior to the initial set.\textsuperscript{8} This problem has been remedied by using a waterproof varnish to coat the exposed material or by using a thicker mix. McLean\textsuperscript{21} stated that a principal deficiency is the clinical problem of judging the correct powder/liquid ratio because of the high viscosity of the polyacid liquids. This has been partially remedied by the introduction of the premixed water-hardening
GIC. Most brands rely on the experience of the mixer to assure the correct powder/liquid ratio has been achieved.
The main advantage that GIC has over other restorative materials and luting agents is the ability to leach fluoride and to bond tooth structure.\textsuperscript{38} The anti-cariogenic properties make GIC the material of choice for treating early carious lesions or patients with a high caries incidence.\textsuperscript{38} Zachrisson\textsuperscript{39} stated that "the increased caries risk underneath and adjacent to orthodontic bands and the potential decay-promoting interproximal environment in direct bonding techniques call for maximum use of caries preventive procedures during orthodontic treatment."
The idea that orthodontic appliances present in a patient's mouth can contribute to caries formation is not new. In 1936 Harold Noyes\textsuperscript{40} discussed the potential for caries formation and etching of enamel in the orthodontic patient. Since that time, much research has occurred concerning the caries incidence and its relationship to oral hygiene in the orthodontic patient. In 1971 Zachrisson and Zachrisson\textsuperscript{41} found a definite correlation between oral health and caries incidence in the orthodontic patient. They found that the most important factor in the development of caries during orthodontic therapy is the oral hygiene of the patient. When toothbrushing was imperfect or largely neglected, the number of new enamel defects increased considerably even with regular use of topical fluorides. The
motivation of a patient toward oral hygiene can give the best prediction of possible white spot formation and is very hard for the orthodontist to influence.
The white spot lesion is considered to be the precursor of frank enamel caries. In orthodontics it has been attributed to the prolonged accumulation and retention of bacterial plaque on the enamel surface adjacent to the appliances.\textsuperscript{41} The microorganisms within the plaque produce acids from the metabolism of carbohydrates the patient has ingested. It is now generally agreed that the accumulation of dental plaque, even on a clean tooth surface, can result in dental caries in an individual who is susceptible to the disease and consumes a diet conducive to the disease.\textsuperscript{42} Dental plaque localizes the acids next to the tooth surface.
The most likely sites for the accumulation of plaque are around the cervical margins of the teeth and under the bands in areas where the cementing medium has washed out.\textsuperscript{43} Gwinnett\textsuperscript{44} examined the plaque distribution on bonded brackets and found that the plaque was concentrated on the resin surface adjacent to bonded attachments and at the junction of the bonding-resin and the etched enamel surfaces. In addition to fostering increased accumulation of dental plaque, studies have shown that orthodontic appliances result in an alteration of the microbiologic populations.\textsuperscript{45} This study concluded that creation of new retentive areas favored the local growth of \textit{Streptococcus mutans}, which increased the
general infection level of this organism.\textsuperscript{45} If GIC were used as an orthodontic bonding adhesive, the fluoride release would be concentrated at the areas of highest plaque concentration of most pathogenic organisms.
In 1988 Mizrahi\textsuperscript{46} reported on the surface distribution of enamel opacities following multi-banded orthodontic treatment. It was found that following orthodontic care there was a significant increase in the prevalence of enamel opacities on the vestibular and lingual surfaces of the dentition. This increase was significantly greater on the maxillary and mandibular first molars, the maxillary lateral incisors, and the mandibular lateral incisors and canines. Gorelick et al.\textsuperscript{47} reported that the incidence of white spot formation after bonding and banding was greatest in the maxillary anterior and mandibular posterior segments. It was found that 50 percent of the orthodontic patients in the study experienced an increase in white spots. This suggested that the potential influence of individual differences in enamel structure, composition of saliva, tooth brushing and other variables play an important role in white spot formation.\textsuperscript{47}
The early carious lesions observed as white spots appear white due to an optical phenomenon caused by the subsurface hydroxyapatite loss.\textsuperscript{48} The outermost layer of enamel remains relatively intact initially during white spot formation, and it is known as the zone of remineralization. A subsurface lesion develops deeper into the enamel and has a low mineral content. The relatively intact mineral-rich and porous surface layer is formed most likely by kinetic events.\textsuperscript{50} Evidence from epidemiologic studies\textsuperscript{51} and in vivo experiments\textsuperscript{52} indicates that lesions may remineralize and disappear. Topical application of fluoride has been shown to remineralize enamel lesions and subsurface layers of dentin which are demineralized but not infected.\textsuperscript{53} The process of remineralization means that early carious lesions which clinically manifest as white spots can be reversed.
Remineralization is the process of redepositing of minerals after loss during caries attacks. Fluoride gels have the capacity to initiate remineralization of white spots after removal of acidogenic plaque. Saliva also does this, but to a lesser degree.\textsuperscript{54} The clinical management of visible white spots which develop during orthodontic therapy is largely still obscure. Ogaard et al.\textsuperscript{55} found, in vivo, that visible white spot lesions on the facial surfaces remineralized relatively fast in the absence of concentrated fluoride agents. It was found that fluoride applied to carious lesions beneath orthodontic appliances could arrest the lesions, inhibiting the complete repair by remineralization. This was thought to be due to calcium fluoride being transformed into fluorapatite in the surface area.\textsuperscript{55} This has clinical implications on the application of topical
fluoride gels at the time of debonding an orthodontic case. This could lead to the permanent "scarring" of teeth with white spot lesions that have not had the time to remineralize naturally from the mineral in saliva.
The development of the early carious lesion was studied in vivo by Ogaard et al.\textsuperscript{56} using premolars that were extracted for orthodontic treatment. It was found that the demineralization associated with fixed orthodontic appliances is an extremely rapid process with visible white spots being seen in four weeks in the absence of any fluoride supplementation.\textsuperscript{56} Other studies\textsuperscript{57} have supported the significant amount of demineralization that can occur over one month's time. Gorelick et al.\textsuperscript{47} found that teeth banded or bonded for a relatively short treatment interval showed the same incidence of white spots as those involved in longer treatment. A patient who neglected his oral hygiene regimen for as little as one month could suffer irreparable damage to the enamel of his teeth.
There is evidence to support that the fluoride leached from GIC functions by several mechanisms. The leached fluoride bathes the adjacent enamel and increases its fluoride concentration while decreasing its susceptibility to decalcification.\textsuperscript{58} A tooth which has recently erupted into the oral cavity has a relatively immature and permeable enamel surface.\textsuperscript{59} Only with exposure to the oral environment and time does the enamel surface become less porous and
more mature. Clinical evidence shows decreased permeability and increased resistance to caries as the tooth ages.\textsuperscript{60} The incorporation of fluoride ion helps to increase the stabilization and accelerates the process of surface glazing without interfering with deeper hydroxyapatite lattice.\textsuperscript{54} Fluoride may help with the remineralization of the tooth since it alters the metabolic activity of plaque formed at the margins of a restoration resulting in a decreased potential for secondary caries.\textsuperscript{61} It is known that fluoride inhibits the activity of \textit{Streptococcus mutans}, the principal bacteria responsible for smooth surface caries formation.\textsuperscript{62}
The initial reaction of the enamel to a topical fluoride application is the formation of calcium fluoride and fluoride-containing complexes on the surface of the enamel.\textsuperscript{63} The incorporation of fluorides into the enamel as fluorapatite is a much slower process.\textsuperscript{64} Following a topical fluoride application, many of the superficial fluoride compounds formed are washed away by the saliva prior to incorporation into the enamel lattice as fluorapatite. The more a tooth is exposed to fluorides, the greater the concentration of fluoride in the external surface layer of the enamel.\textsuperscript{60} GIC would provide a constant low-level source of fluoride ions to help fortify the outer enamel surface with fluorapatite, thereby providing greater caries protection.
The "proper" fluoride regimen for orthodontic patients is not yet agreed upon. Zachrisson\textsuperscript{65} summarized the various
fluoride regimens which orthodontists could institute in the care of their patients. He recommended the application of topical acidulated phosphate fluoride (APF) gel before insertion of appliances and at regular recementations. The topical applications should be combined with daily rinsing with dilute sodium fluoride or APF solutions. Daily use of a fluoride dentifrice should also be routine procedure for all orthodontic patients. In 1988 Geiger et al.\textsuperscript{66} found that with orthodontic patients it is important to have frequent and consistent application of fluoride for adequate prevention against white spots. Buonocore\textsuperscript{54} felt that fluoride gels and varnishes provide an increase in contact of the fluoride with the enamel surface, increasing the likelihood of saturating the superficial lattice of the enamel with fluoride. Trask\textsuperscript{67} recommended the use of positioners to provide daily home fluoride applications. The frequency of fluoride applications appears to be much more important than the form in which it is provided, as successful results have been shown with all the accepted forms of fluoride approved for intraoral use.
The problem associated with preventive programs is that they require patient compliance to be successful. Geiger et al.\textsuperscript{66} also found that written and oral instruction to the patients and parents were insufficient to achieve greater than 12 percent excellent compliance, with more than 50 percent of the patients complying very little or not at all.
This would indicate that the patient who is most likely to have poor oral hygiene and develop white spot carious lesions is also least likely to follow a preventive protocol. GIC used as a bonding adhesive would provide continuous fluoride release without requiring patient cooperation.
In 1977 Forsten's\textsuperscript{68} investigation into the fluoride release from a GIC showed that significant amounts of fluoride were released. This fluoride could be incorporated into hydroxyapatite structure. In 1978 Maldonado et al.\textsuperscript{69} compared a silicate cement and GIC fluoride release. They found that GIC released a much greater amount of fluoride in distilled water during the 21-day test period than silicate cement, and this difference was highly significant. A total of 1800 \(\mu g\) of fluoride was released from the GIC during the test period as opposed to 710 \(\mu g\) from the silicate cement. Tveit and Gjerdet\textsuperscript{70} investigated the fluoride released from a GIC and silicate cement into artificial saliva. Under this condition, they found that silicate cement released five times more fluoride than the GIC. Thus, they felt that while significant amounts of fluoride were released from GIC, the solvent used to examine the fluoride release could profoundly affect the results.
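The totals reported by Maldonado et al. can be expressed as average daily release rates over the 21-day test period. The figures below are a simple restatement of the source data, not additional measurements:

```latex
% Average daily fluoride release in distilled water over 21 days:
\[
  \text{GIC: } \frac{1800\ \mu\mathrm{g}}{21\ \text{days}} \approx 86\ \mu\mathrm{g/day},
  \qquad
  \text{silicate: } \frac{710\ \mu\mathrm{g}}{21\ \text{days}} \approx 34\ \mu\mathrm{g/day}
\]
```

On average, then, the GIC released roughly 2.5 times as much fluoride per day in distilled water, consistent with the highly significant difference the authors reported.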
Maldonado et al.\textsuperscript{69} found that initially there was a greater release of fluoride for the first few days which diminished to a nearly constant level for the duration of the experiment. Muzynski et al.\textsuperscript{71} showed that the greatest
release of fluoride occurred during the first few hours when the GIC had its highest level of solubility. A constant rate was subsequently reached at 24 hours. The solubility of enamel in apposition to a GIC was reduced by 52 percent over that of untreated teeth, compared with a reduction of approximately 39 percent for silicate cement.\textsuperscript{69} The fluoride present in GIC has a local effect on reducing enamel solubility.
In 1980 Swartz et al.\textsuperscript{72} studied the fluoride distribution in teeth using the silicate model. It was determined in vitro from biopsies of enamel taken both adjacent to and at least 3.0 mm distant to the restorations that the silicate restoration may well provide protection to the whole tooth.\textsuperscript{72} Teeth restored with GIC gave similar results suggesting that they would offer comparable protection. Retief et al.\textsuperscript{73} investigated the enamel and cementum fluoride uptake from GIC at various distances from the incisal and apical margins of the restoration at different time intervals. They found that fluoride content at one-, three-, and six-month time intervals was not significantly different. However, there was a tendency for the acquired enamel fluoride to decrease with increasing distance from the GIC. It was concluded that the enamel and cementum can take up a substantial amount of fluoride and that this was probably due to a topical effect from the fluoride released into the synthetic saliva.\textsuperscript{73}
In 1984 Swartz et al.\textsuperscript{74} investigated the long-term fluoride release from GIC. They found that fluoride release was similar in quantity and pattern to that released by the silicate cement. The GIC continued to release significant amounts of fluoride at the end of the 12-month test period.
Purton and Rodda\textsuperscript{75} in 1988 used an artificial caries technique to produce lesions in the cavity walls adjacent to resin and GIC restorations in tooth roots. The lesions that developed at the margins of the GIC restoration were significantly shallower than those around resin restorations. They believed that fluoride released from the GIC promoted the precipitation of minerals in the lesion. The prospect of GIC inhibiting the development of decay in the areas adjacent to the restoration stimulated the idea of using it as an adhesive for attaching orthodontic brackets since plaque accumulation around and under bands or brackets can readily lead to demineralization of the adjacent enamel.
In 1987 Valk and Davidson\textsuperscript{76} investigated the inhibitory effect of fluoride-releasing adhesives on the in vitro demineralization of uncovered enamel adjacent to a bonded orthodontic bracket. The results of the experiment demonstrated that an area 1.0 mm wide surrounding the bracket or button was left untouched by the demineralization, while the non-fluoride-releasing resin could not even protect the area under the adhesive at the bracket margin. They suggested that fluoride released from the GIC acts as a
'passive prophylaxis.' The study also indicated that the continuous release of fluoride from the GIC is much more effective as a caries inhibitor than single local APF applications. In 1988 Maijer and Smith reported the results of a two-year clinical study that examined the frequency of recementation and decalcification present underneath bands cemented with either zinc phosphate or GIC. The results showed a significant difference in recementation rate with only 16 percent of the bands cemented with GIC needing recementation as opposed to 34 percent of those cemented with zinc phosphate. There was no decalcification noted under any of the bands cemented with GIC. These studies encourage further research into the development of a fluoride-releasing orthodontic material for use as a bonding adhesive in orthodontics.
There are many advantages to direct bonding. Reynolds\textsuperscript{31} felt they include the following:
1) Improved esthetics
2) Ease of manipulation
3) Separation of adjacent teeth eliminated
4) Improved oral hygiene at the gingival margin
5) Decreased soft tissue irritation
6) Reduced risk of decalcification which may occur under bands
7) Caries more easily detected and treated
8) No need to close post-treatment band space
9) More precise bracket positioning possible
10) Partially erupted teeth brought under immediate control
Reynolds\textsuperscript{31} listed the following as disadvantages:
1) Satisfactory adhesives difficult to remove
2) Surface area of attachment available for retention greatly reduced
3) Lack of approximal protection of teeth during treatment
In 1955 Buonocore\textsuperscript{32} discovered the technique of etching enamel with a phosphoric acid solution in order to make a more reactive surface that is more favorable to adhesion. It is now known that the acid actually cleanses the organic pellicle film from the tooth. More importantly, the acid selectively dissolves the enamel.\textsuperscript{3} The acid removes calcium salts from the centers of the enamel rods and creates surface irregularities which allow mechanical interlocking of the resin to the tooth. The etching markedly increases the total surface area of the enamel which is desirable for maximum bonding.\textsuperscript{3} Buonocore provided the predecessor of the orthodontic direct bonding technique.
In 1965 Newman\textsuperscript{78} experimented with epoxy resins as bonding agents in orthodontics. The physical properties appeared excellent; however, the setting time would not enable them to be of practical use. Bernstein\textsuperscript{79} tested a cyanoacrylate which Buonocore had found rated best out of 100 resins for orthodontic bonding. He found that the bond only allowed for short periods of use before it was broken. Mitchell\textsuperscript{1} described a black copper cement for use as an orthodontic bonding agent in 1967. In 1968 Mizrahi and Smith\textsuperscript{80,81} investigated the use of zinc polycarboxylate cement as an orthodontic bonding agent. They found that the tensile bond strength of zinc polycarboxylate to enamel was much stronger than the other materials tested and that the site of failure was cohesive within the material instead of at the enamel.
In the early 1970s the introduction of dimethacrylate monomers (BIS-GMA), which were derived from the reaction of bisphenol A with glycidyl methacrylate, shifted the search for a bonding agent to these materials.\textsuperscript{3} It was found that the dimethacrylate resin had a superior bond strength, increased dimensional stability, and shorter setting time than the conventional resins. Currently, these BIS-GMA adhesives are the material of choice for orthodontic direct bonding.
The search for a material that has chemical adhesion to enamel and releases fluoride into the oral environment has led to experimentation with GICs as bonding agents. GICs
have lower bond strength to enamel as compared with the traditional resin adhesives.\textsuperscript{4-7}
Jenkins\textsuperscript{4} compared the tensile bond strengths of glass ionomer cements to enamel with acid etch resins. He found that the GIC had approximately half the tensile bond strength of the resin. In addition, the GIC failed in cohesion, while the resin system experienced adhesive failure. In 1984 Murray and Yates\textsuperscript{5} compared three composite resins utilizing the acid etch technique with three GIC filling materials for shear bond strength. They found that the acid etch resin systems were two to three times as strong as the glass ionomer cements in the shear test. In 1988 Oilo\textsuperscript{82} found a marked variation in the compressive and flexural strengths between five different GICs and a silicate cement. This showed that there were differences in GIC between manufacturers.
In 1988 Fryar\textsuperscript{6} compared the tensile bond strength of four GICs to a composite resin designed specifically for orthodontics. He bonded Ormesh\textsuperscript{TM} 100 gauge mesh brackets to extracted human premolars. Fryar found that the tensile bond strength of the GICs was approximately one-third of the composite resin. In 1988 Cook and Youngson\textsuperscript{7} investigated the relationship between the in vitro bond strength of a GIC (Ketac-Cem\textsuperscript{TM}) under dry tooth, wet tooth, etched tooth, and polyacrylic acid treated tooth conditions. They compared the bond strength to a resin adhesive. It was found that
the GIC was significantly weaker than the resin. Acid etching and a wet tooth at bonding decreased the bond strength. In 1989 Klockowski et al.\textsuperscript{10} reported on the bond strength and durability of glass ionomer cements used as bonding agents in orthodontics. Shear tests were performed on the samples, and the resin had significantly greater shear strength than the GICs, although the difference was much less following thermocycling.
Direct bonded orthodontic brackets are subjected to forces from many directions, but all are resolvable into force components acting at right angles (tensile) and parallel (shear) to the tooth/bracket interface.\textsuperscript{83} An orthodontic adhesive should have sufficient bond strength to withstand the orthodontic force systems and forces of mastication. Reynolds\textsuperscript{84} stated that the maximum orthodontic forces are unlikely to exceed 1.5 kg with these forces being extraoral orthopedic forces exerted through headgear on molar teeth. It is difficult to determine the mean tensile bond strength required of an adhesive since occlusal loads vary enormously as well as the amount of load transmitted to the attachment. The range of forces exerted by mastication is 10 kg-100 kg with the mean force being 12 kg. Again, the actual force transmitted to the bracket attachment would not be equivalent to masticatory forces. Reynolds\textsuperscript{84} stated:
...a maximum value of 60-80 kg/cm$^2$ would appear reasonable, although successful clinical bonding has been recorded with adhesives giving an in vitro tensile bond strength of approximately 50 kg/cm$^2$...
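Reynolds' recommended values can be compared directly with the GIC bond strength of about 4 N/mm\textsuperscript{2} reported earlier by Hotz et al. The conversions below (using 1 kgf/cm\textsuperscript{2} \(\approx\) 0.098 MPa) are a restatement for comparison, not figures from the source:

```latex
% Reynolds' recommended tensile bond strengths in SI units:
\[
  60\text{--}80\ \mathrm{kg/cm^2} \;\approx\; 5.9\text{--}7.8\ \mathrm{MPa},
  \qquad
  50\ \mathrm{kg/cm^2} \;\approx\; 4.9\ \mathrm{MPa}
\]
% The GIC bond strength reported by Hotz et al. for comparison:
\[
  4\ \mathrm{N/mm^2} \;=\; 4\ \mathrm{MPa} \;\approx\; 41\ \mathrm{kg/cm^2}
\]
```

On these figures the GIC value sits somewhat below even the lower clinically recorded threshold of approximately 50 kg/cm\textsuperscript{2}, which is consistent with the reduced bond strengths reported for GIC throughout this review.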
Beech\textsuperscript{83} stated that an ideal adhesive for orthodontics would have the weakest bond strength to enamel compatible with adequate bracket retention. An adhesive with this bond strength would be sufficient to prevent unnecessary attachment loss and at the same time provide ease of appliance removal when treatment is complete.
In 1986 White\textsuperscript{8} reported that he used GIC for bonding and banding in orthodontics. He stated that light force arch wires must be used initially because full bond strength is not reached for 24 hours. In 1988 Miller\textsuperscript{9} conducted a three-month in vivo study to determine if glass ionomer could withstand the forces placed upon it in the oral environment during orthodontic treatment. The brackets were bonded during active orthodontic treatment. The results of this study showed no significant difference in the failure rate of brackets bonded with GIC adhesive versus those bonded with a conventional resin adhesive. Melson\textsuperscript{85} stated that she used GIC routinely in the placement of fixed orthodontic appliances. The clinical use of GIC as an orthodontic adhesive has suggested that it may possess the minimum bond strength needed for an adhesive in orthodontics.
Reynolds\textsuperscript{84} stated that the three factors involved in the successful direct bonding of orthodontic brackets to
teeth are the tooth surface, adhesive, and bracket backing. There have been many studies focused on the tooth surface and its relationship to adhesive bond strength. Adhesives have been tested in vitro and in vivo in order to determine tensile and/or peel bond strengths and compare different adhesive systems with each other. Similarly, there have been several studies investigating the bracket base and its effect on bond strength using resin adhesives, although there are no known studies which have examined bracket backings with GIC.
Reynolds and von Fraunhofer\textsuperscript{86} investigated the relationship between the gauze mesh size and adhesive bond strength of filled dimethacrylate resins. They showed that the coarser mesh gauzes gave superior tensile bond strengths for attachment to teeth. Dickinson and Powers\textsuperscript{87} tested tensile bond strengths of 14 different brackets with varied design characteristics bonded with an acrylic resin. The characteristics examined included area of bonding, mesh size, and type. They found that the tensile bond strength of bases tested was independent of the nominal area and mesh size. Studies by Reynolds and von Fraunhofer,\textsuperscript{86} Dickinson and Powers,\textsuperscript{87} and Maijer and Smith\textsuperscript{88} found that spot-welding of the mesh pad to the base had the greatest effect in reducing the bond strength. The spot-welding actually resulted in a decreased mesh area due to the melting of the mesh at the weld sites. Most manufacturers of brackets now
braze the mesh to the pad backing in order not to have weld spots on the pad.
In 1980 Lopez\textsuperscript{89} examined 16 different commercially available bracket bases in order to determine if any had significantly better retentive properties. He found that the solid bases with perforations had a lower mean shear strength than the foil mesh bases. Thanos et al.\textsuperscript{90} measured the tensile, shear and torsional bond strengths of mesh brackets and perforated metal brackets using different resin adhesives. It was found that an adhesive system could not be selected on the basis of one type of test. The various systems performed differently depending on the type of test. They also found that the mesh base brackets were more retentive than the perforated base brackets in tension, while the perforated base brackets were more retentive in shear. They also found that the torsion values were not useful due to the high number of bracket wing failures.
Lopez\textsuperscript{89} examined brackets from the same manufacturer with different surface areas. He concluded that smaller bases could be used without any significant difference in bond failure when using a resin adhesive. This concurred with the findings of Dickinson and Powers.\textsuperscript{87}
Photo-etched bases gain retention from small circular indentations in the base that have been microscopically roughened by an etching process. Lopez\textsuperscript{89} found that these bases exhibit bond strengths in the intermediate range compared with other designs. Maijer and Smith\textsuperscript{88} reported that the photo-etched steel brackets do not allow air to escape easily from the retentive depressions when using a lightly filled resin. Ferguson\textsuperscript{91} compared two integral bracket/base combinations: the photo-etched base and the Dyna-Lock™ base (Unitek/3M, Monrovia, CA). The Dyna-Lock™ bracket has the bracket and base cast as an integral unit to eliminate the possibility of mesh separation. The retention is provided by horizontal undercut channels open at the mesial and distal extremities, with V-grooved patterns running vertically on the surface of the base adjacent to the enamel. In theory, this design should eliminate air entrapment since excess material can escape.\textsuperscript{91} Ferguson found that the Dyna-Lock™ base had different bond strengths with the different base/adhesive combinations. The bond strength was greater when a more viscous dimethacrylate resin was used. This indicated that the attainable bond strength can be affected by the type of adhesive used, in addition to the type of bracket.
Ceramic brackets used in orthodontics rely on a chemical bonding agent to unite the bracket and resin adhesive. The filler glasses present in composite resins require coating with a suitable coupling or keying agent in order to form a stable adhesive bond to the resin.\textsuperscript{3} Lack of an adequate bond will permit dislodgement of the filler from the surface or ready penetration of water along the filler-matrix interface. Ceramic bracket backings rely on silane coupling agents to establish a chemical bond to the adhesive. Mechanical retention is difficult to machine into ceramic brackets, and manufacturers that place grooves in the ceramic backing do so to weaken the bond and facilitate easier debonding.
No completely satisfactory explanation exists for the mechanism of action of coupling agents. Chen and Brauer\textsuperscript{92} experimented with the bonding of organo-silane to silica surfaces. They used gamma-methacryloxypropyltrimethoxysilane and theorized that the coupling agent forms a chemical bond with the surface of silica involving the methoxyl and silanol groups. The methacryl functional groups bond to the resin material chemically. The adhesion of the resin to glass is strengthened through a bridge of chemical bonds connecting the two phases. The chemistry of the GIC provides elements of silica and methacrylate that could significantly increase the bond strength of glass ionomer to ceramic, and possibly even silicoated metal bases.
The typical site of bond failure when using a GIC as an adhesive involves the bracket base and adhesive interface. The GIC adhesive remains in the mesh of the base, and the cleavage plane for bond failure is at the mesh. This involves a cohesive failure of the GIC adhesive protruding through the mesh and an adhesive failure of the GIC to the mesh. White\textsuperscript{8} stated that "almost every bond failure I have
had with GIC has occurred at the interface of the cement and the mesh." Fryar\textsuperscript{6} found that the majority of bond failures were of a cohesive nature involving the bracket/adhesive interface. Klockowski et al.\textsuperscript{10} found with shear testing that the majority of failures were of a cohesive nature, except for Ketac-Fil, which had an equal number of failures occurring cohesively and adhesively between the GIC and enamel. Cook and Youngson\textsuperscript{7} found with the shear/peel test an adhesive failure involving the tooth/cement interface. They stated that 93 percent of the cement adhered to the bracket and 57 percent was retained by the tooth. It was not clear how these quantifications were made or what impact the bracket mesh could have had on the site of failure.
Thus, the bracket base design could have an effect on the attainable bond strength of a GIC adhesive. Cook and Youngson\textsuperscript{7} suggested that the bracket base selected could have affected their results and that the results might well have been different had another type of bracket been used. White\textsuperscript{8} felt that the orthodontic profession should develop a bracket base to maximize the effectiveness of GIC as an orthodontic adhesive.
The purpose of the present investigation was to examine the effect of orthodontic bracket base design on the tensile bond strength of brackets bonded with GIC adhesive and to assess the amount of GIC remaining attached to the pad following debonding.
METHODS AND MATERIALS
Overview
The study examined eight different orthodontic bracket base designs, each having a specific characteristic that warranted its inclusion in the study. The characteristics examined included mesh size and the type of base. Tables IA and IB list the bracket manufacturers and catalog numbers.
The bracket bases examined were designed for maxillary central incisors, since this is a broad tooth with only a slight mesial-distal curvature. Twin brackets were attached to the bases, and the bracket wings provided the means of applying the tensile force to the base for debonding. Each bracket had a 0.018 inch archwire slot in the wings.
Group I consisted of Dyna-Lock™ brackets manufactured by Unitek/3M (Monrovia, CA). The bracket and base are cast as an integral unit. Retention is provided by horizontal undercut channels open at the mesial and distal extremities, with a V-groove pattern running vertically on the surface adjacent to the enamel.
Group II contained perforated bracket bases. The perforated base has large diameter pores around the periphery of the solid metal base. It is through these pores that the adhesive is expressed and that the mechanical lock of the bracket to the adhesive is formed.
The gauze mesh used in the construction of orthodontic bracket bases is available in different sizes, which vary with the manufacturer. The mesh sizes typically available are 60, 80, or 100 gauge. The gauge number reflects the number of mesh openings per linear inch. The bracket bases used in this study all had mesh brazed to the underlying stainless steel pad. Brazing involves fusing the mesh to the underlying pad with solder.
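Since the gauge number equals the number of mesh openings per linear inch, the wire spacing follows directly from the definition. A minimal sketch (the function name is ours, not from the study):

```python
# Approximate mesh geometry from the gauge number. Assumption: gauge is the
# number of mesh openings per linear inch, as defined in the text above.
def mesh_spacing_mm(gauge: int) -> float:
    """Center-to-center wire spacing in mm for a given mesh gauge."""
    return 25.4 / gauge  # 25.4 mm per inch

for g in (60, 80, 100):
    print(f"{g} gauge: {mesh_spacing_mm(g):.3f} mm between wire centers")
```

A coarser (lower-gauge) mesh therefore has wider openings, consistent with the coarser gauzes examined by Reynolds and von Fraunhofer.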
Group III contained bracket bases with 100 gauge mesh brazed to the pad and were manufactured by Ormco, Inc. (Glendora, CA).
Group IV contained bracket bases with 80 gauge mesh brazed to the pad and were manufactured by "A" Company/Johnson and Johnson (San Diego, CA).
Group V contained bracket bases with 60 gauge mesh brazed to the pad and were manufactured by American Orthodontics, Inc. (Sheboygan, WI).
Group VI consisted of 100 gauge mesh brackets manufactured by Ormco, Inc. (Glendora, CA) and sandblasted to remove the polished surface. GIC is known to bond to stainless steel.\textsuperscript{23} The polished surface of the pad was removed in an attempt to enhance chemical bonding of the GIC. The bracket mesh was sandblasted for approximately 15 seconds with 30 micron SiO\textsubscript{2} particles at 30 psi.
Group VII contained Micro-Loc™ brackets. This photo-etched base design was exclusively engineered and patented
by GAC International, Inc. (Central Islip, NY). This bracket base has circular indentations distributed over the pad. The metal is photo-etched to create a microscopically rough surface. This procedure creates much more surface area due to the etched metal and theoretically a stronger mechanical interlock.\textsuperscript{93}
Group VIII consisted of ceramic brackets designed to chemically bond with a methacrylate adhesive. A silane coupling agent applied by the manufacturer to the backing creates a chemical bond between resin and ceramic. The carboxyl groups that are present in resin are also present in GIC, and it was hypothesized that these would chemically bond with the silane in the same manner.
\textbf{Test Groups}
Tables IA and IB list the group numbers, manufacturers and product numbers of the brackets used in the study. The bracket bases examined were divided into eight groups:
Group I : Dyna-Lock™ bracket
Group II : Perforated base
Group III : 100 gauge mesh
Group IV : 80 gauge mesh
Group V : 60 gauge mesh
Group VI : 100 gauge mesh sandblasted
Group VII : Micro-Loc™ base
Group VIII: Ceramic bracket
Each experimental group consisted of 22 specimens.
Figures 3 through 17 are photomicrographs of the bracket bases in groups I through VIII at approximately X20 and X100 magnification. Group VIII is represented only by a lower-power photomicrograph, since no additional features of interest were noted at X100 magnification.
The experimental adhesive chosen for this study was Ketac-Fil™ (Espe-Premier, Norristown, PA). In previous laboratory studies this product has shown high compressive, tensile, and flexural strengths compared with other glass ionomer cements.\textsuperscript{94} Ketac-Fil™ is available in capsules with premeasured components that can be mixed with a dental amalgamator for a specified time period, providing a consistent mix of the GIC. In 1988 Miller\textsuperscript{9} used Ketac-Fil™ as the experimental adhesive in an in vivo study.
**Specimen Selection and Preparation**
The brackets were bonded to 176 bovine incisor teeth since these teeth are broad and flat. The bovine incisors were easier to obtain than the number of human incisor teeth needed. The teeth were obtained through the Oral Health Research Institute at Indiana University School of Dentistry, Indianapolis, IN. The teeth were examined to make sure that they were free from gross irregularities, hypoplastic areas, or other enamel anomalies that could affect the surface.
The specimens were prepared according to the following process. The labial surface of the crown was polished with sandpaper discs under running water to provide a consistently smooth enamel surface. The labial surface was further smoothed with pumice applied by a rag wheel on a dental lathe. The roots of the teeth were removed and the crown embedded in a self-curing block of resin capable of holding the specimen in the testing machine. To keep the buccal enamel free of the self-curing resin, the crown was positioned against modeling dough, which prevented the acrylic from flowing onto the bonding area. A metal ring was placed around the crown and acrylic was flowed over the tooth into the metal ring. After the acrylic cured, the specimen was removed from the lubricated metal ring, numbered for identification, and placed back in water for storage.
**Bonding Procedure**
Prior to bonding, each specimen was cleaned with a prophylaxis cup and fine pumice slurry. After they were cleaned, the following steps were used to attach the brackets to the specimen:
1) Ten percent polyacrylic acid applied to each tooth for 10 seconds
2) Teeth rinsed and dried with a cotton roll
3) Ketac-Fil™ mixed according to manufacturer’s instructions
4) Ketac-Fil™ applied to the bracket
5) Bracket placed and adjusted to its final position
6) Excess adhesive removed
7) Bracket margins covered with varnish
8) Specimens returned to storage water 15 minutes later
Figure 18 illustrates the equipment needed for mixing and placement of the pre-encapsulated GIC. The specimens of each group were then thermocycled in an automatic thermocycling apparatus. Each group was carried in a wire basket alternately between water baths of 15°C and 55°C. The specimens spent 30 seconds in each bath for a total of 2,500 cycles. Following thermocycling, each group was stored in water at 37°C in a humidor until testing 14 days later.
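The thermocycling schedule above implies a substantial total immersion time. A small sketch, assuming one cycle means one 30-second dwell in each of the two baths and ignoring transfer time:

```python
# Total immersion time for the thermocycling protocol described above.
# Assumption: one cycle = one 30 s dwell in each bath (15 °C, then 55 °C).
CYCLES = 2500
BATHS = 2
DWELL_S = 30

total_s = CYCLES * BATHS * DWELL_S
print(f"total immersion time: {total_s} s = {total_s / 3600:.1f} h")
```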
**Tensile Bond Strength Determination**
The specimens of each group were tested in the Instron testing machine model 1123 (Instron Corporation, Canton, MA). The load was applied at a cross-head speed of 0.5 mm per minute until the bond failure occurred. The ultimate bond strength as recorded by the Instron testing machine was expressed in kilograms.
The bracket base surface area was determined through planimetry. Planimetry involved photographing the bracket at X20 magnification. The enlarged photograph was then
traced with a planimeter to determine the surface area. Any curvature of the bracket base was flattened prior to photographing the base. Determination of the bracket base surface area allowed for conversion of the tensile force required for debonding into force per unit area for each bracket base tested. The surface area (cm$^2$) obtained through planimetry for each bracket is listed in Table II.
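The conversion described above divides the recorded debonding force by the planimetered base area. A minimal sketch using the Table II areas (the 4.7 kg force is illustrative, not a measured value from the study):

```python
# Convert a debonding force (kg) into nominal tensile stress (kg/cm^2)
# using the planimetered base areas from Table II.
AREA_CM2 = {"I": 0.153, "II": 0.148, "III": 0.171, "IV": 0.166,
            "V": 0.126, "VI": 0.171, "VII": 0.136}

def bond_strength(force_kg: float, group: str) -> float:
    """Nominal tensile bond strength in kg/cm^2."""
    return force_kg / AREA_CM2[group]

# Illustrative: a 4.7 kg debonding force on a group VII base (0.136 cm^2)
print(round(bond_strength(4.7, "VII"), 1))  # 34.6 kg/cm^2
```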
The testing apparatus was designed to minimize the introduction of stresses other than tensile as the load was applied. The upper member of the Instron supported the specimen, which was suspended with the bracket facing downward from a platform that was freely moveable in all planes of space (except vertically). The lower member held a wire harness designed and constructed from two lengths of orthodontic wire. The wire harness held the bracket equally under both wings. Figure 19 illustrates the stress-relieving apparatus suspended in the Instron with a sample in place. Figure 20 illustrates the stress-relieving apparatus with the lower harness hooked to the wings of a bracket ready for debonding.
**Percent Adhesive Remaining on Bonding Pad**
Following debonding, a determination of the percentage of adhesive left attached to the bracket base was made. Any adhesive left protruding beyond the plane of the bracket base was considered remaining attached to the pad. The
bracket was viewed under a light microscope at X25 magnification with a grid placed in one eyepiece. The magnification was constant. The number of grid intersections present over the unbonded bracket base was counted; the bracket was then turned 90° and the intersections recounted. These values were averaged to give the average number of grid intersections overlying the bracket base. This average per unbonded bracket base was necessary for the calculation of percent adhesive remaining attached to the bonded brackets.
Each debonded bracket was then observed at X25 magnification under the microscope, and the number of grid intersections over the remaining adhesive was counted. The bracket was rotated 90° and the intersections recounted. These two values were averaged to give the mean number of grid intersections overlying adhesive remaining on each bracket base. This value was divided by the total number of grid intersections per base (previously determined) and multiplied by 100 to give the percent GIC remaining on the bracket pad.
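The grid-counting procedure above can be sketched as follows; the example counts are hypothetical, and the function simply encodes the averaging and normalization described:

```python
# Percent GIC remaining, estimated by grid-intersection counting as described
# above: counts at 0° and 90° are averaged, then normalized by the average
# count over an unbonded base of the same design. Counts are hypothetical.
def percent_gic_remaining(adh_0, adh_90, base_0, base_90):
    """Percent of the bracket pad still covered by GIC after debonding."""
    adhesive_avg = (adh_0 + adh_90) / 2   # intersections over residual GIC
    base_avg = (base_0 + base_90) / 2     # intersections over the whole pad
    return 100.0 * adhesive_avg / base_avg

# e.g. 18 and 22 intersections over adhesive; 98 and 102 over a bare pad
print(percent_gic_remaining(18, 22, 98, 102))  # 20.0
```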
**Scanning Electron Microscope**
Each bracket base used was photographed when viewed in an Amray model 1000A scanning electron microscope (Amray, Inc., Bedford, MA). The brackets were viewed at approximately X20 and X100 magnification. The purpose of
these photographs was to illustrate what each bracket base looked like at low magnification (X20) and to highlight individual base differences at higher magnification (X100). Debonded brackets were photographed to illustrate how bases appeared with small and large amounts of GIC adhesive left adhering. Figures 21 through 34 illustrate each group with small and large amounts of GIC adhesive remaining attached following debonding. Group VIII is not shown due to the premature bond failure.
**Statistical Analysis and Data Considerations**
The data obtained from the tensile bond strength of the brackets were expressed as force per unit area. The mean value and standard deviation for each bracket base tested were determined. Bartlett's test for homogeneity of variance showed that the variances were not homogeneous.\textsuperscript{95} The Welch test was then performed instead of the analysis of variance because of the lack of homogeneity of variance. Multiple comparisons between groups were then performed using the Newman-Keuls test to locate the differences.\textsuperscript{95}
The data obtained for the adhesive remaining on the bracket pad were expressed as a percentage of the base covered. The mean and standard deviation for each group were determined. A one-factor analysis of variance was performed.\textsuperscript{96} The Scheffe F-test was then performed to locate the between-group differences.\textsuperscript{95}
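For reference, the Welch test substituted for the standard ANOVA when variances are heterogeneous can be sketched in pure Python. This is the textbook Welch one-way ANOVA statistic, not the software actually used in the study:

```python
# Welch's heteroscedastic one-way ANOVA statistic (textbook formula).
from statistics import mean, variance

def welch_anova_F(groups):
    """Return (F, df1, df2) for Welch's one-way ANOVA over a list of samples."""
    k = len(groups)
    n = [len(g) for g in groups]
    m = [mean(g) for g in groups]
    v = [variance(g) for g in groups]          # sample variances
    w = [ni / vi for ni, vi in zip(n, v)]      # precision weights n_i / s_i^2
    W = sum(w)
    grand = sum(wi * mi for wi, mi in zip(w, m)) / W
    A = sum(wi * (mi - grand) ** 2 for wi, mi in zip(w, m)) / (k - 1)
    hc = sum((1 - wi / W) ** 2 / (ni - 1) for wi, ni in zip(w, n))
    B = 1 + 2 * (k - 2) * hc / (k ** 2 - 1)
    return A / B, k - 1, (k ** 2 - 1) / (3 * hc)
```

When the group means are identical the numerator vanishes and F = 0; a larger F indicates between-group differences relative to the within-group scatter, without assuming equal variances.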
RESULTS
Tensile Bond Strength Determination
Mean ultimate tensile bond strengths are given in Table III for groups I through VII. Figure 1 gives a graphic presentation of the tensile bond strength data. The sample size in group II was 20 and in group VI was 21 because the debonding apparatus slipped from the bracket during debonding. From these data, it is apparent that group VII had a mean bond strength approximately 65 percent higher than the next highest mean bond strength of the brackets tested.
Group VIII contained the ceramic brackets. All of these brackets debonded when a light force was applied to remove the bonding jig. The jig was lightly secured to the occlusal and gingival wings of the bracket by the manufacturer. This force was so small that it was deemed unnecessary to prepare the brackets for testing in the Instron. This group was then dropped from the statistical calculations.
Bartlett's test for homogeneity of variance was performed, and it was found that variances were not homogeneous at the 0.10 level of significance. Due to the lack of homogeneity of variance, the Welch test was substituted for the one-factor analysis of variance to determine if there
was a significant difference between groups at the 0.05 level of significance. The Newman-Keuls test was then performed to make multiple comparisons between groups.
The Newman-Keuls test showed a significant difference between groups I and V and between group VII and all of the other groups at the 0.05 level of significance. At the 0.01 level of significance, the difference between groups I and V was no longer significant, but group VII was still significantly different from all of the other groups.
**Percent GIC Adhesive Adhering to Bracket Pad**
The mean percent of GIC adhesive remaining on the bracket pad is given in Table IV. Figure 2 gives a graphic presentation of the data.
A one-factor analysis of variance was performed to see if significant differences were present between groups. A significant difference was present (p < 0.0001); therefore the Scheffe F-test was used to locate the between-group differences. Significant differences were found between group I and all groups except group IV at the 0.05 level of significance. Group IV was significantly different from groups III, V, VI, and VII at the 0.05 level of significance.
The site of bond failure involved the base/adhesive interface in the majority of specimens in all groups. In groups I and IV the mean value of adhesive left
remaining on the bracket pad was between 40 percent and 45 percent. The rest of the groups had a mean value of 20 percent or less GIC adhesive left remaining on the bracket pad.
All of the ceramic brackets failed in an adhesive manner at the ceramic/GIC junction with no adhesive in any case left adhering to the ceramic pad.
FIGURES AND TABLES
FIGURE 1. Mean tensile bond strength for each group of bases.
[Bar chart: GIC mean tensile bond strength (kg/cm²) plotted by group number. * p < 0.05 from group I; ** p < 0.01 from groups I, II, III, IV, V, VI.]
FIGURE 2. Mean percent GIC adhesive attached to the base for each group.
[Bar chart: mean percent GIC adhesive attached to the bracket base plotted by group number. * p < 0.05 from groups II, III, V, VI, VII; ** p < 0.05 from groups III, V, VI, VII.]
FIGURE 3. Electron micrograph of group I Dyna-Lock™ bracket base (X20).
FIGURE 4. Electron micrograph of group I Dyna-Lock™ bracket base (X100).
FIGURE 5. Electron micrograph of group II perforated bracket base (X20).
FIGURE 6. Electron micrograph of group II perforated bracket base (X100).
FIGURE 7. Electron micrograph of group III 100 gauge mesh bracket base (X20).
FIGURE 8. Electron micrograph of group III 100 gauge mesh bracket base (X100).
FIGURE 9. Electron micrograph of group IV 80 gauge mesh bracket base (X20).
FIGURE 10. Electron micrograph of group IV 80 gauge mesh bracket base (X100).
FIGURE 11. Electron micrograph of group V 60 gauge mesh bracket base (X20).
FIGURE 12. Electron micrograph of group V 60 gauge mesh bracket base (X100).
FIGURE 13. Electron micrograph of group VI 100 gauge mesh sandblasted base (X20).
FIGURE 14. Electron micrograph of group VI 100 gauge mesh sandblasted base (X90).
FIGURE 15. Electron micrograph of group VII Micro-Loc™ photo-etched bracket base (X20).
FIGURE 16. Electron micrograph of group VII Micro-Loc™ photo-etched bracket base (X100).
FIGURE 17. Electron micrograph of group VIII ceramic bracket base (X20).
FIGURE 18. Materials needed for mixing and placement of GIC; dental amalgamator, capsule activator, GIC delivery unit and capsules.
FIGURE 19. Stress-relieving apparatus suspended in the Instron™ testing machine.
FIGURE 20. Stress-relieving apparatus with specimen suspended downward and debonding harness attached to the wings of a bracket.
FIGURE 21. Electron micrograph of a debonded group I Dyna-Lock™ bracket with a minimal amount of residual GIC attached to the base (X20).
FIGURE 22. Electron micrograph of a debonded group I Dyna-Lock™ bracket with abundant residual GIC adhesive remaining attached (X20).
JULIEN MELNICK
1995
FIGURE 23. Electron micrograph of a debonded group II perforated base with minimal amount of residual GIC adhesive attached (X20).
FIGURE 24. Electron micrograph of a debonded group II perforated bracket base with abundant residual GIC adhesive attached (X20).
FIGURE 25. Electron micrograph of a debonded group III 100 gauge mesh base with minimal residual GIC adhesive remaining attached (X20).
FIGURE 26. Electron micrograph of a debonded group III 100 gauge mesh base with abundant residual GIC adhesive remaining attached (X20).
FIGURE 27. Electron micrograph of a debonded group IV 80 gauge mesh base with minimal residual GIC adhesive remaining attached (X20).
FIGURE 28. Electron micrograph of a debonded group IV 80 gauge mesh base with abundant residual GIC remaining attached (X20).
FIGURE 29. Electron micrograph of a debonded group V 60 gauge mesh base with minimal residual GIC adhesive remaining attached (X20).
FIGURE 30. Electron micrograph of a debonded group V 60 gauge mesh base with abundant residual GIC adhesive remaining attached (X20).
FIGURE 31. Electron micrograph of a debonded group VI 100 gauge mesh sandblasted base with minimal GIC adhesive remaining attached (X20).
FIGURE 32. Electron micrograph of a debonded group VI 100 gauge mesh sandblasted base with abundant amount of residual GIC adhesive remaining attached (X20).
FIGURE 33. Electron micrograph of a debonded group VII Micro-Loc™ base with minimal residual GIC adhesive remaining attached (X20).
FIGURE 34. Electron micrograph of a debonded group VII Micro-Loc™ base with abundant residual GIC adhesive remaining attached (X20).
### TABLE IA
Bracket manufacturers and product numbers, groups I through IV

| Group | Manufacturer | Base | Product # |
|-------|--------------|------|-----------|
| I | Unitek/3M Corporation, 2724 South Peck Road, Monrovia, CA 91016 | Dyna-Lock™ bracket | 018-401 |
| II | American Orthodontics, 1714 Cambridge Avenue, Sheboygan, WI 53081 | Perforated steel base | 544-000 and 002-007 |
| III | Ormco Corporation, 1332 South Lone Hill Avenue, Glendora, CA 91740 | 100 gauge mesh base | 081-000 |
| IV | "A"-Company, Inc., 11436 Sorrento Valley Road, San Diego, CA 92138 | 80 gauge mesh base | 081-000 |
### TABLE IB
Bracket manufacturers and product numbers, groups V through VIII

| Group | Manufacturer | Base | Product # |
|-------|--------------|------|-----------|
| V | American Orthodontics, 1714 Cambridge Avenue, Sheboygan, WI 53081 | 60 gauge mesh base | 564-000 and 002-007 |
| VI | Ormco Corporation, 1332 South Lone Hill Avenue, Glendora, CA 91740 | 100 gauge mesh base, sandblasted | 340-0401 |
| VII | GAC International, Inc., 185 Oval Drive, Central Islip, NY 11722-1402 | Micro-Loc™ brackets | K232CN18 |
| VIII | Unitek/3M Corporation, 2724 South Peck Road, Monrovia, CA 91016 | Transcend™ ceramic brackets | 2001-602 |
### TABLE II
Bracket base surface area as determined through planimetry
| Group | Surface Area (cm²) |
|-------|--------------------|
| I | 0.153 |
| II | 0.148 |
| III | 0.171 |
| IV | 0.166 |
| V | 0.126 |
| VI | 0.171 |
| VII | 0.136 |
| VIII | N.C.* |
* Not Calculated
### TABLE III
GIC mean tensile bond strength
| Group | Mean Tensile Bond Strength |
|-------|----------------------------|
| I | 15.6 ± 6.15 (22) |
| II | 16.4 ± 7.50 (20) |
| III | 19.0 ± 6.70 (22) |
| IV | 19.3 ± 4.58 (22) |
| V | 20.5* ± 4.38 (22) |
| VI | 21.0 ± 7.38 (21) |
| VII | 34.6** ± 8.17 (22) |
Values are expressed as mean ± S.D. (n) in kg/cm²
* p < 0.05 from group I
** p < 0.01 from groups I, II, III, IV, V, and VI
Group I - Dyna-lock™ base
Group II - Perforated base
Group III - 100 gauge mesh base
Group IV - 80 gauge mesh base
Group V - 60 gauge mesh base
Group VI - 100 gauge mesh base - sandblasted
Group VII - Micro-loc™ base
### TABLE IV
Mean percent GIC adhesive remaining on the bracket pad
| Group | Mean Percent GIC Adhesive |
|-------|---------------------------|
| I | 44.9* ± 27.6 (22) |
| II | 20.7 ± 23.5 (20) |
| III | 15.1 ± 20.9 (22) |
| IV | 41.9** ± 24.7 (22) |
| V | 7.9 ± 14.0 (22) |
| VI | 5.4 ± 10.3 (21) |
| VII | 13.9 ± 20.0 (22) |
Values are expressed as mean ± S.D. (n) in percent.
* p < 0.05 from groups II, III, V, VI, VII
** p < 0.01 from groups III, V, VI, VII
Group I - Dyna-lock™ base
Group II - Perforated base
Group III - 100 gauge mesh base
Group IV - 80 gauge mesh base
Group V - 60 gauge mesh base
Group VI - 100 gauge mesh base - sandblasted
Group VII - Micro-loc™ base
DISCUSSION
Glass ionomer cement has been used in orthodontics primarily as a cement for banding. Maijer and Smith\textsuperscript{77} and Copenhaver\textsuperscript{97} have shown that teeth with bands cemented with GIC are afforded protection from decalcification that is unavailable from other cements. The benefits obtained from a fluoride-releasing adhesive used for bonding orthodontic brackets could result in a similar reduction in early "white spot" caries associated with fixed orthodontic therapy.
Cook and Youngson,\textsuperscript{7} Fryar,\textsuperscript{6} and Klockowski et al.\textsuperscript{10} performed in vitro experiments comparing the bond strength of GIC to conventional composite resins. Fryar determined the tensile bond strength, while Cook and Youngson and Klockowski et al. determined shear/peel strengths. Regardless of the mode of in vitro testing, it was found that GIC was significantly weaker in bond strength than the conventional methacrylate resins tested.
A limitation of in vitro studies is that they cannot show exactly how an adhesive will perform in vivo; they give only an indication of how an adhesive might perform in the oral cavity. Cook and Youngson\textsuperscript{7} state:
If good adherence to enamel alone is the main reason for using a composite resin for direct bonding then the findings of the present study would support its
continued use. However, the glass ionomer cements have a number of advantageous properties and these may together outweigh the inferior bond strength.
It is not known what minimum in vitro bond strength is necessary for clinically successful adhesion throughout the duration of orthodontic treatment. There should be sufficient bond strength to withstand the orthodontic and masticatory forces.\textsuperscript{83} Although orthodontic force levels can be easily controlled, forces of mastication vary enormously between individuals. The level of masticatory force transmitted to attachments could be related to the diet of the individual patient. Cook and Youngson\textsuperscript{7} and Fryar\textsuperscript{6} suggested that a clinical trial was needed to determine the advantages and disadvantages of GIC use in vivo.
In 1986 White\textsuperscript{8} reported the successful use of glass ionomer cement in his orthodontic practice and described the bonding procedures for its use. In 1988 Miller\textsuperscript{9} conducted a three-month in vivo pilot study comparing the failure rate of GIC and a methacrylate resin. No significant difference was found between the GIC and resin. Currently, Miller\textsuperscript{9,8} is conducting a clinical trial using GIC as the bonding adhesive from initiation to completion of treatment.
The typical site of bond failure when GIC is used as an adhesive involves the bracket/adhesive interface. Reynolds\textsuperscript{84} stated that the bracket backing was one of three factors involved in the successful direct bonding of orthodontic brackets to teeth. The other factors were the tooth surface and adhesive. White\textsuperscript{8} and Fryar\textsuperscript{6} found that the majority of failures were of a cohesive nature involving the bracket/adhesive interface. This would not be considered true cohesive failure since the mesh provided mechanical locking of the GIC to the bracket. A true cohesive failure would occur entirely within the adhesive, without involving either the tooth surface or bracket base. A true type of cohesive failure of GIC is rarely observed. Usually, some area of the bracket base will be involved in the fracture. There is also the potential that GIC could chemically bond to the stainless steel bracket base.\textsuperscript{22} The influence that bracket base designs could have on the bond strength when GIC is used as the bonding adhesive was a question addressed by the present in vitro study.
Studies examining the effect that a particular base design has on bond strength are numerous for methacrylate adhesives. No such study had been performed using GIC as an adhesive. If a particular design feature could be found that improved the bond strength of the GIC, it might increase the feasibility of its use in vivo. Conversely, if a design feature of a bracket could be found to decrease the bond strength then that bracket might not be selected for use with GIC as an adhesive.
Group VII (Micro-Loc\textsuperscript{TM} photo-etched base) was the only bracket with a significantly ($p<0.05$) higher tensile bond
strength to bovine enamel. The Micro-Loc™ bracket has circular indentations on the pad that are not undercut. The whole base is photo-etched to give it a microscopically rough texture. The photo-etched base does not have the wire grid characteristic of the mesh bases or the sharp angles present on the Dyna-Lock™ bases in group I.
It is possible that the stress concentration within the adhesive at the mesh (groups III, IV, V, VI) and at the sharp line angles of the Dyna-Lock™ bases (group I) may predispose these brackets to the initiation of failure at these sites. Maijer and Smith\textsuperscript{77} noticed this problem with methacrylate adhesives and certain foil mesh bases in which the mesh had been flattened when welded to the base. This flattening created sharp edges on the individual wires of the mesh where stress could be concentrated when a force is applied.
Areas of stress concentration in the bracket base design could be an even greater problem when GIC is the adhesive since it is much more brittle than methacrylate resins. GIC is known to have a poor fracture toughness.\textsuperscript{89} A brittle material will not bend appreciably without breaking. The stress concentration at specific points within the bracket/adhesive interface might have led to earlier failure with mesh bases and Dyna-Lock™ brackets.
Group II (perforated base) bracket pads did not have many perforations per pad available for the GIC adhesive to penetrate. The pad appears to be punched out of a sheet of
stainless steel with perforations. Every pad had some "half-moon"-shaped areas at the edge where a partial perforation was included in the punch. These brackets might have performed better had the perforations been smaller in diameter and more uniformly dispersed on the pad. The mean tensile bond strength of 60 gauge mesh base (group V) was significantly higher \((p<0.05)\) than the Dyna-Lock™ base (group I). The significant difference was due to the low standard deviation of the 60 gauge mesh group as compared with the other groups. The other groups (III, IV, VI) had bond strengths close to or greater than the 60 gauge mesh base, but did not have bond strengths significantly greater than group I.
The ceramic bases (group VIII) with a silane coupling agent applied to the base were included in the study to see if adhesion to the GIC could be obtained. When it is used with methacrylate resins, a chemical bond is formed between the ceramic and resin through a silane coupling agent. It was not known if a chemical bond could have formed to GIC and how strong it might have been. Based on the chemistry of methacrylate resin and GIC, it was thought that carboxyl groups would be available from the GIC for chemical bonding to the silane.
The ceramic brackets (group VIII) had so weak a bond to the GIC that when the jigs used to align the brackets were removed, all the brackets debonded except one. The force
required to remove the jig was very small, so the brackets were not rebonded. The use of Ketac-Fil™ GIC with ceramic brackets that rely on chemical retention does not appear feasible for orthodontics. It is possible that few carboxyl groups were available for chemical interaction with the silane because calcium and sodium ions in the GIC occupied the charged groups. Recently, the ceramic bracket used in this study was reintroduced with mechanical retention to lower its bond strength when used with methacrylate resins.
The tensile bond strength found for the 100 gauge mesh pad in this study was approximately a third less than what Fryar\textsuperscript{6} found in vitro. There are some differences between this study and Fryar’s that should be addressed.
The greatest difference is that Fryar used human enamel obtained from extracted premolars, while bovine enamel was used in this study. Bovine enamel was used because it is easier to obtain and prepare than human enamel.\textsuperscript{75} Nakamichi et al.\textsuperscript{100} have shown that bovine enamel may be substituted for human enamel in adhesion tests, and numerous caries studies have substituted bovine for human enamel.\textsuperscript{101,102} No differences in bond strength between bovine and human enamel have been reported. The large number of intact incisor enamel specimens required for this study also necessitated the use of bovine enamel.
Another significant difference in the studies was the type of brackets used. Fryar used premolar brackets that are much more curved than incisor brackets used in this study.
The Ketac-Fil™ used in the two studies came from different batches. Changes by the manufacturer in chemical composition of the GIC in the time between studies could have affected its performance in the study.
The difference in mean tensile bond strength between this and Fryar's study could be a reflection of the effects of one or more of these variables.
There was great variability of individual bracket bond strengths, and this is reflected in the high standard deviations for all of the groups. The standard deviations ranged from 4.38 kg/cm² for the 60 gauge mesh bracket to 8.17 kg/cm² for the photo-etched bracket. The tendency was for the standard deviations to be approximately 30 percent of the mean bond strength. Cook and Youngson\textsuperscript{7} and Klockowski et al.\textsuperscript{10} also reported standard deviations of approximately 30 percent of the mean bond strength when using GIC with orthodontic attachments.
The high standard deviations found when using GIC in bonding studies could reflect the poor fracture toughness of the material. Any area of the bracket pad or tooth surface that resulted in a point of stress concentration could initiate fracture of the material. The irregular contour of a
bracket pad would be ideal for providing areas of stress concentration, so erratic bond strengths resulting in high standard deviations might not be unusual for the GIC used in this study.
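The relative variability discussed above (standard deviation expressed as a percentage of the mean, i.e., the coefficient of variation) can be verified with a short calculation. The sketch below is illustrative only; it uses the photo-etched bracket values reported in this discussion (mean 34.6 kg/cm², standard deviation 8.17 kg/cm²).

```python
def coefficient_of_variation(mean, sd):
    """Return the standard deviation as a percentage of the mean."""
    return sd / mean * 100.0

# Values reported in this discussion for the photo-etched bracket:
# mean tensile bond strength 34.6 kg/cm2, standard deviation 8.17 kg/cm2.
cv = coefficient_of_variation(34.6, 8.17)
print(f"CV = {cv:.1f} percent")
```

For this group the computed value is somewhat under 30 percent; across all groups the tendency reported above was approximately 30 percent.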
The percentage of GIC adhesive remaining attached to the bracket following debonding was calculated with a grid in the ocular of a light microscope, using the point-hit technique described in the methods and widely used in histomorphometric studies. The amount of adhesive remaining on the pad, rather than on the tooth, was examined for several reasons. Most importantly, the main purpose of the study was to assess the tensile bond strength of various bracket pads and identify design features that might increase or decrease bond strength; examining the pad for remaining adhesive might therefore give insight into the adhesion between pad and GIC. Previous studies have shown that failure of GIC is mainly cohesive, with few adhesive failures, so efforts to assess the tooth for adhesive failures would be misdirected. The bovine enamel surfaces were examined at X25 under the stereomicroscope to verify that pure adhesive failures were not occurring. It is usually difficult to establish whether an adhesive failure has occurred, since GIC fragments can still be seen in areas suspected of adhesive failure. All failures were observed to involve some type of cohesive failure. For the purposes of this study, only the bracket base was examined.
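The point-hit technique described above reduces to a simple ratio: a grid of points is superimposed on the pad, each point is scored as falling on retained adhesive or bare metal, and the percentage is hits divided by total points. The sketch below illustrates the arithmetic; the grid values are hypothetical, not data from the study.

```python
def percent_adhesive(grid):
    """Point-hit estimate of adhesive coverage. `grid` is a list of rows
    of booleans, True where an ocular grid point falls on retained GIC."""
    hits = sum(point for row in grid for point in row)
    total = sum(len(row) for row in grid)
    return 100.0 * hits / total

# Hypothetical 4 x 5 ocular grid over one debonded pad:
grid = [
    [True,  True,  False, False, False],
    [True,  True,  True,  False, False],
    [False, True,  True,  False, False],
    [True,  False, False, False, True],
]
print(f"{percent_adhesive(grid):.1f} percent of the pad covered by GIC")  # 45.0
```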
All of the groups had a mean percent adhesive remaining on the base less than 50 percent. Group I (Dyna-Lock™ bracket) had a significantly higher \((p<0.05)\) mean percentage of adhesive attached to the pad than groups II (perforated), III (100 gauge mesh), V (60 gauge mesh), VI (100 gauge mesh sandblasted), and VII (photo-etched base). Groups I (Dyna-Lock™) and IV (80 gauge mesh) did not have significantly different amounts of adhesive attached to the base.
The amount of adhesive left remaining on the pads was highly variable. Within a group some pads could be found with no adhesive remaining attached to the pad and others with 100 percent of the pad covered by GIC.
Group I (Dyna-Lock™) had the highest mean percent adhesive remaining \((44.9\% \pm 27.6)\) and was different from all groups except group IV (80 gauge mesh). The Dyna-Lock™ bracket has many grooves and sharp angles where stresses could be concentrated. This could explain the irregular fracture patterns seen with this bracket and high relative percentage of GIC left adhering to the pad.
Group IV (80 gauge mesh) had the second highest mean percentage of adhesive remaining on the pad \((41.9\% \pm 24.7)\). This is not easy to explain since it did not have a significantly stronger bond strength as might be expected from a bracket that had good adhesive properties to GIC. One
possible explanation is that the material was somehow mishandled during mixing or placement. The bovine incisor specimens could have been contaminated with a substance not easily removed by the polyacrylic acid, such as petroleum jelly. This would have decreased the tensile bond strength of the bracket and caused more failures at the tooth surface.
The photo-etched bracket had the highest mean tensile bond strength (34.6 kg/cm²) with a rather low mean percentage of adhesive attached (13.9 percent). Every indentation on the debonded brackets contained GIC, and the fracture plane was usually at the plane of the pad; not once did GIC pull out of the indentations. The photo-etching provides microscopic retention for the material within each indentation, which could have increased the bond strength by providing many areas of cement firmly held by the bracket. A force great enough to cohesively fracture these numerous areas is required for debonding. The surface of the pad adjacent to the tooth was relatively flat, without sharp angles at the macroscopic level, which could result in lower stress within the adhesive as a debonding force is applied.
Careful examination of the brackets at X100 magnification revealed many individual differences between manufacturers' brackets. Mesh gauge size is expressed as the number of openings per linear inch. This specifies neither the diameter of the mesh wire nor the size and shape of the openings in the gauze, so there is much more variability in bracket structure than is apparent from the gauge numbers alone.
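The point made above can be made concrete: for a plain-weave mesh, the pitch is one inch divided by the gauge count, and the clear opening is the pitch minus the wire diameter, so two meshes with the same gauge number can have quite different openings. The wire diameters below are hypothetical examples, not measurements from the brackets studied.

```python
def mesh_opening(gauge, wire_diameter):
    """Clear opening (inches) of a plain-weave mesh:
    pitch (1/gauge inch) minus the wire diameter."""
    pitch = 1.0 / gauge
    return pitch - wire_diameter

# Two hypothetical 100 gauge meshes with different wire diameters:
for wire in (0.003, 0.004):
    print(f"100 gauge, {wire} in wire -> opening {mesh_opening(100, wire):.3f} in")
```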
Future research should be aimed at developing a GIC specifically for orthodontic use with greater fracture toughness, which would help raise bond strengths to more clinically acceptable levels. The convenience of pre-encapsulated GIC for orthodontic use cannot be overemphasized: these systems save time and eliminate the handling inconsistencies that can arise with hand mixing.
SUMMARY AND CONCLUSIONS
This study examined how different bracket base designs might affect the mean tensile bond strength when using GIC as an orthodontic bonding adhesive. The percentage of GIC adhering to the base following debonding was examined to see if a correlation existed between it and the bond strength measurements.
The photo-etched base had a significantly stronger mean tensile bond strength than any of the other brackets examined. This was attributed to design features that favor its use with GIC: the base lacks the stress-producing sharp angles and mesh gridwork that can initiate fracture in the brittle GIC, and its circular indentations retain the GIC very well, encouraging cohesive failure of the material within them.
The ceramic brackets failed to achieve a testable bond strength. It appeared that the Ketac-Fil™ GIC was unable to chemically bond to the silane coupler with sufficient strength for clinical use, or even testing.
Future research in materials science to develop a glass ionomer cement with increased fracture toughness could greatly aid the orthodontic profession. Orthodontic patients are at high risk for development of "white spot" decalcified areas that can progress to frank decay without
adequate care. The site specific protection afforded by a fluoride releasing adhesive would be of great benefit to the orthodontic profession.
REFERENCES
1. Mitchell DL. Bandless orthodontic bracket. J Am Dent Assoc 1967;74:103-10.
2. Mizrahi E, Smith DC. Studies of the adhesion of orthodontic brackets to enamel using a polyacrylate cement. IADR Program and Abstracts 1968;no. 101.
3. Phillips RW. Skinner's science of dental materials. 8th ed. Philadelphia: WB Saunders, 1973:24.
4. Jenkins CBG. A comparison of bond strength of glass ionomer cements and an acid etch resin system. IADR Program and Abstracts 1986;no. 477.
5. Murray GA, Yates JL. A comparison of the bond strengths of composite resins and glass ionomer cements. J Pedod 1984;8:172-7.
6. Fryar BC. An evaluation of the bond strength and failure site of composite resin and glass ionomer in identical orthodontic direct bonding systems [Thesis]. Indianapolis: Indiana University School of Dentistry, 1989.
7. Cook PA, Youngson CC. An in vitro study of the bond strength of glass ionomer cement in direct bonding of orthodontic brackets. Br J Orthod 1988;15:253-74.
8. White L. Glass ionomer cement. J Clin Orthod 1986;20:387-91.
9. Miller JR. Clinical evaluation of glass ionomer cement as an adhesive for the bonding of orthodontic brackets [Thesis]. Indianapolis: Indiana University School of Dentistry, 1988.
10. Klockowski R, Davis EL, Joynt RB, Wieczkowski G, MacDonald A. Bond strength and durability of glass ionomer cement used as bonding agents in the placement of orthodontic brackets. Am J Orthod 1989;96:60-4.
11. Smith DC. A new dental cement. Br Dent J 1968;125:381-4.
12. Wilson AD, Kent BE. A new translucent cement for dentistry. Br Dent J 1972;132:133-5.
13. Wilson AD, Kent BE. The glass ionomer cement, a new translucent dental filling material. Appl Chem Biotech 1971;21:313.
14. Crisp S, Wilson AD. Reactions of glass ionomer cements: (Pt 1). Decomposition of the powder. J Dent Res 1974;53:1408-13.
15. Crisp S, Pringuer MA, Wardleworth D, Wilson AD. Reactions of glass ionomer cements. (Pt 2). An infrared spectroscopic study. J Dent Res 1974;53:1414-9.
16. Crisp S, Wilson AD. Reactions in glass ionomer cements. (Pt 3). The precipitation reaction. J Dent Res 1974;53:1420-4.
17. Wilson AD, Crisp S, Ferner AJ. Reactions of glass ionomer cements. (Pt 4). Effect of chelating comonomers on setting behavior. J Dent Res 1974;59:489-95.
18. Crisp S, Kent BE, Lewis BG, Ferner AS, Wilson AD. Glass ionomer cement formulations. (Pt 2). The synthesis of novel polycarboxylic acids. J Dent Res 1980;57:1055-63.
19. Crisp S, Wilson AD. Reactions of glass ionomer cements. (Pt 5). Effect of incorporating tartaric acid in the cement liquid. J Dent Res 1976;55:1023-31.
20. McLean JW, Wilson AD. The clinical development of the glass ionomer cements. (Pt 1). Formulation and properties. Aust Dent J 1977;22:31-6.
21. McLean JW, Wilson AD, Prosser HJ. Development and use of water-hardening glass ionomer luting cements. J Prosth Dent 1984;52:175-81.
22. Prosser HJ, Powis DR, Brant P, Wilson AD. Characterization of glass ionomer cements. (Pt 7). The physical properties of current materials. J Dent 1980;12:231-40.
23. Hotz P, McLean JW, Sced I, Wilson AD. The bonding of glass ionomer cement to metal and tooth substrates. Br Dent J 1977;142:41-7.
24. Crisp S, Wilson AD. Unpublished report. Laboratory of the government chemist, 1974.
25. Baier RE. Adhesion in biological systems. New York: Academic Press, 1970:15-48.
26. Retief DH. The principles of adhesion. J Dent Assoc S Afr 1970;25:285-95.
27. Buonocore MG. Principles of adhesive retention and adhesive restorative materials. J Am Dent Assoc 1963;67:382-91.
28. Driessens FCM. Chemical adhesion in dentistry. Int Dent J 1977;27:317-23.
29. Kibby CL, Hall WK. The chemistry of biosurfaces. New York: Dekker, 1972:663-729.
30. Glantz PO. Adhesion to teeth. Int Dent J 1977;27:324-32.
31. Reynolds IR. A review of direct orthodontic bonding. Br J Orthod 1975;2:171-8.
32. Buonocore MG. A simple method of increasing the adhesion of acrylic filling materials to enamel surfaces. J Dent Res 1955;34:849-53.
33. Aboush YEY, Jenkins CBG. The effect of polyacrylic acid cleanser on the adhesion of a glass polyalkenoate cement to enamel and dentine. J Dent 1986;15:147-52.
34. Powis DR, Folleras T, Merson SA, Wilson AD. Improved adhesion of a glass ionomer cement to dentin and enamel. J Dent Res 1982;61:1416-22.
35. McLean JW, Sced I. The bonded alumina crown. (Pt 1). The bonding of platinum to aluminous dental porcelain using tin oxide coatings. Aust Dent J 1976;21:119-27.
36. Mitchell DL. Bandless orthodontic bracket. J Am Dent Assoc 1967;74:103-10.
37. Wilson AD, Crisp S, Lewis BG, McLean JW. Experimental luting agents based on the glass ionomer cement. Br Dent J 1977;142:117-21.
38. McLean JW. Glass ionomer cements. Br Dent J 1988;164:293-9.
39. Zachrisson B. Fluoride application procedures in orthodontic practice, current concepts. Angle Orthod 1975;45:72-81.
40. Noyes H. Dental caries and the orthodontic patient. J Am Dent Assoc 1937;24:1243-54.
41. Zachrisson B, Zachrisson S. Caries incidence and oral hygiene during orthodontic treatment. Scand J Dent Res 1971;79:394-401.
42. Shafer WG, Hine MK, Levy BM. A textbook of oral pathology. 4th ed. Philadelphia: WB Saunders, 1983:415.
43. Mizrahi E. Enamel demineralization following orthodontic treatment. Am J Orthod 1982;82:62-7.
44. Gwinnett JA, Ceen RF. Plaque distribution on bonded brackets: A scanning microscope study. Am J Orthod 1979;75:667-77.
45. Scheie AA, Arnberg P, Krogstad O. Effect of orthodontic treatment on prevalence of streptococcus mutans in plaque and saliva. Scand J Dent Res 1984;92:211-17.
46. Mizrahi E. Surface distribution of enamel opacities following orthodontic treatment. Am J Orthod 1988;84:323-32.
47. Gorelick L, Geiger AM, Gwinnett AJ. Incidence of white spot formation after bonding and banding. Am J Orthod 1982;81:93-8.
48. Darling AI. Studies of the early lesion of enamel caries. Br Dent J 1958;105:119-35.
49. Silverstone LM. Remineralization phenomena. Caries Res 1977;11:59-84.
50. Arends J, Bosch JJ. Factors relating to demineralization and remineralization of the teeth. Oxford: IRL Press, 1986:1-11.
51. Backer DO. Post-eruptive changes in dental enamel. J Dent Res 1966;45:503-11.
52. Von Der Fehr FR, Loe H, Theilade E. Experimental caries in man. Caries Res 1970;4:131-48.
53. Wei SHY, Kawueler JC, Massler M. Remineralization of carious dentin. J Dent Res 1968;47:381-91.
54. Buonoure GM, Vezin JC. Orthodontic fluoride protection. J Clin Orthod 1980;14:321-35.
55. Ogaard B, Rolla G, Arends J, ten Cate JM. Orthodontic appliances and enamel demineralization. (Pt 2). Prevention and treatment of lesions. Am J Orthod 1988;94:123-8.
56. Ogaard B, Rolla G, Arends J. Orthodontic appliances and enamel demineralization. (Pt 1). Lesion development. Am J Orthod 1988;94:68-73.
57. O'Reilly MM, Featherstone JDB. Demineralization and remineralization around orthodontic appliances: an in vivo study. Am J Orthod 1987;92:33-40.
58. Norman RD, Platt JR, Phillips RW, Swartz ML. Additional studies on fluoride uptake by enamel from certain dental materials. J Dent Res 1961;40:529-37.
59. Massler M. Changing concepts in prevention and treatment of dental caries. J Tenn Dent Assoc 1968;48:1-7.
60. Forrester DJ, Auger MF. A review of currently available topical fluoride agents. J Dent Child 1971;38:272-8.
61. Norman RD, Mehra RV, Swartz ML, Phillips RW. Effects of restorative materials on plaque composition. J Dent Res 1972;51:1596-1601.
62. Menaker L. The biologic basis of dental caries. Cambridge: Harper and Row, 1980:386-418.
63. Stearns RI. Incorporation of fluoride by human enamel. (Pt 3). In vivo effects of non-fluoride and fluoride prophylactic pastes and APF gels. J Dent Res 1973;52:30-5.
64. Horowitz HS, Kau MCW. Retained anticaries protection from topically applied acidulated phosphate fluoride: 30 and 36 month post-treatment effects. J Prevent Dent 1974;1:21-6.
65. Zachrisson B. Fluoride application procedures in orthodontic practice. Angle Orthod 1975;45:72-81.
66. Geiger A, Gorelick L, Gwinnett J, Griswold PG. The effect of a fluoride program on white spot formation during orthodontic treatment. Am J Orthod 1988;93:29-37.
67. Trask PA. Orthodontic positioner used for home fluoride treatments. Am J Orthod 1975;67:677-82.
68. Forsten L. Fluoride release from a glass ionomer cement. Scand J Dent Res 1977;85:503-4.
69. Maldonado A, Swartz M, Phillips RW. An in vitro study of certain properties of a glass ionomer cement. J Am Dent Assoc 1978;96:785-91.
70. Tveit AB, Gjerdet NR. Fluoride release from a fluoride-containing amalgam, a glass ionomer cement and a silicate cement in artificial saliva. J Oral Rehab 1981;8:237-41.
71. Muzynski BL, Greener E, Jameson L, Malone WFP. Fluoride release from glass ionomers used as luting agents. J Prosthet Dent 1988;60:41-9.
72. Swartz ML, Phillips RW, Clark HE, Norman RD, Potter R. Fluoride distribution in teeth using a silicate model. J Dent Res 1980;59:1596-603.
73. Retief DH, Bradley EL, Denton JC, Switzer P. Enamel and cementum uptake from a glass ionomer cement. Caries Res 1984;18:250-7.
74. Swartz ML, Phillips RW, Clark HE. Long-term F release from glass ionomer cements. J Dent Res 1983;63:158-60.
75. Purton DG, Rodda JC. Artificial caries around restorations in roots. J Dent Res 1988;67:817-21.
76. Valk JWP, Davidson CL. The relevance of controlled fluoride release with bonded orthodontic appliances. J Dent 1987;15:257-60.
77. Maijer R, Smith DC. Variables influencing the bond strength of metal orthodontic bracket bases. Am J Orthod 1981;79:20-34.
78. Newman GV, Facq JM. The effects of adhesive systems on tooth surfaces. Am J Orthod 1971;59:67-75.
79. Bernstein L. Methods and factors involved in bonding orthodontic attachments to enamel. J Nihon Univ Sch Dent 1965;7:96-102.
80. Mizrahi E, Smith DC. British Division of IADR 1968; no. 101.
81. Mizrahi E, Smith DC. Direct cementation of orthodontic brackets to dental enamel. Br Dent J 1969;127:371-5.
82. Øilo G. Characterization of glass ionomer filling materials. Dent Mater 1988;4:129-33.
83. Beech DR, Jalaly T. Clinical and laboratory evaluation of some orthodontic direct bonding systems. J Dent Res 1981;50:972-8.
84. Reynolds IR. A review of direct orthodontic bonding. Br J Orthod 1975;2:171-8.
85. Melson B. Personal communication, 1989.
86. Reynolds IR, Von Fraunhofer JA. Direct bonding of orthodontic attachments to teeth. The relationship of adhesive bond strength to gauze mesh size. Br J Orthod 1976;3:91-5.
87. Dickinson PT, Powers JM. Evaluation of fourteen direct-bonding orthodontic bases. Am J Orthod 1980;78:630-9.
88. Maijer R, Smith DC. A comparison between zinc phosphate and glass ionomer cements in orthodontics. Am J Orthod 1988;59:273-9.
89. Lopez JI. Retentive shear strengths of various bonding attachment bases. Am J Orthod 1980;77:669-78.
90. Thanos CE, Munholland T, Caputo AA. Adhesion of mesh-base direct-bonding brackets. Am J Orthod 1979;75:421-30.
91. Ferguson JW, Read MJF, Watts DC. Bond strengths of an integral bracket-base combination: an in vitro study. Eur J Orthod 1984;6:267-76.
92. Chen TM, Brauer GM. Solvent effects on bonding organo-silane to silica surfaces. J Dent Res 1982;61:1439-43.
93. Dohn LA. Personal communication, 1990.
94. Prosser HJ, Powis DR, Brant P, Wilson AD. Characterization of glass ionomer cements. (Pt 7). The physical properties of current materials. J Dent 1984;12:231-40.
95. Winer BJ. Statistical principles in experimental design. 2nd ed. New York: McGraw-Hill, 1987:208-20.
96. Rohlf FJ, Sokal RR. Biometry. 2nd ed. New York: WH Freeman and Co., 1985:179.
97. Copenhaver DJ. In vitro comparison of glass ionomer cements' ability to inhibit decalcification under orthodontic bands. Am J Orthod 1986;89:528.
98. Miller J. Espe-Miller research grant number 45-785-01. Indiana University School of Dentistry Department of Orthodontics 1990.
99. Croll TP. Glass ionomers for infants, children and adolescents. J Am Dent Assoc 1990;120:65-8.
100. Nakamichi I, Iwaku M, Fusayama T. Bovine teeth as possible substitutes for adhesion tests. J Dent Res 1983;62:1076-81.
101. Featherstone JD, Mellberg JR. Relative rates of progress of artificial caries lesions in bovine, ovine and human enamel. Caries Res 1981;11:109-14.
102. Davidson CL, Hoekstra IS, Arends J. Microhardness of sound, decalcified and etched tooth enamel related to the calcium content. Caries Res 1974;8:135-44.
CURRICULUM VITAE
Richard Don Burns, Jr.
July 26, 1963 Born in Seattle, Washington
May 1984 Completion of pre-dental curriculum at Indiana University, Bloomington, Indiana
May 1988 DDS, Indiana University School of Dentistry, Indianapolis, Indiana
August 1988 MSD Program, Orthodontics, Indiana University School of Dentistry, Indianapolis, Indiana
July 1990 Certificate, Orthodontics, Indiana University School of Dentistry, Indianapolis, Indiana
Professional Organizations
American Dental Association
American Association of Orthodontists
Indiana University Orthodontic Alumni Association
Indiana Dental Association
Elkhart County Dental Society
North Central Dental Society
ABSTRACT
EVALUATION OF THE TENSILE BOND STRENGTH OF ORTHODONTIC BRACKET BASES USING GLASS IONOMER CEMENT AS AN ADHESIVE
by
Richard Don Burns, Jr.
Indiana University School of Dentistry
Indianapolis, Indiana
The search for an orthodontic bonding adhesive that has chemical adhesion to enamel and releases fluoride into the oral environment has led to experimentation with glass ionomer cements. This study compared the tensile bond strength of eight different orthodontic bracket base designs in vitro and assessed the amount of adhesive remaining on the bracket pad after debonding.
Each bracket base design included in this study had unique characteristics warranting their inclusion. The groups contained brackets with 60, 80, and 100 gauge mesh pads; 100 gauge mesh sandblasted pads; perforated metal bases; Micro-Loc™ photo-etched bases; Dyna-Lock™ integral bracket/bases; and ceramic silane-coated bracket pads.
Groups contained 20 to 22 specimens that were bonded to bovine incisor teeth embedded in a self-curing acrylic
block that could be held in the testing machine. Pre-encapsulated glass ionomer cement (Ketac-Fil™) was the experimental adhesive. The adhesive was mixed according to the manufacturer's instructions in a dental amalgamator. The specimens were thermocycled between water baths of 15°C and 55°C. The specimens spent 30 seconds in each bath for a total of 2,500 cycles and were stored in a humidor until debonding. After 14 days, the specimens were subjected to a tensile force using an Instron mechanical testing machine until failure occurred.
The Micro-Loc™ photo-etched base had a significantly higher mean tensile bond strength ($p<0.05$) than all other brackets tested. The ceramic brackets could not be tested because their extremely weak bond strength did not allow preparation of the samples for debonding.
Following debonding, the percentage of adhesive remaining attached to the bracket base was determined using a grid in the ocular of a light microscope. In general, the site of bond failure involved the base/adhesive interface. The Dyna-Lock™ integral bracket/base and 80 gauge mesh base had a greater mean percent of adhesive remaining attached to the base. (Dyna-Lock™ 45 percent and 80 gauge mesh 43 percent vs. all other $\leq 20$ percent.)
The results indicate that the bracket base design can influence the bond strength when GIC is used as an orthodontic adhesive and suggests that development of GIC with increased fracture toughness might increase bond strength. |
Indiana Architect
March, 1962
Add COLOR with Spectra-Glaze
Rich, vibrant color... subtle and subdued color... massive murals in color... light-hearted, random color accents: all of these effects can be accomplished with Spectra-Glaze glazed structural masonry units. Sixteen popular Standard Colors are quickly available... and at no extra cost.
Send for your free copy of folder, A New Dimension, illustrated in full color. It shows the striking color effects attainable with Spectra-Glaze.
See the 16-page Spectra-Glaze™ unit in Sweet's Catalog (4b/BU).
Distributed by—
The C. T. CORBIN CO., Inc.
7998 Meadowbrook Drive • Indianapolis • Phone: CL 1-3941
This attractive, flush, acoustical ceiling in the narthex of the Lawrence Methodist Church is part of the TEE-JOIST Floor and Roof System used in construction of the church.
CONCRETE
TEE-JOIST
FLOOR AND ROOF SYSTEM
No Forming Required . . . Rigid . . . Firesafe
Attractive . . . Acoustical . . . Maintenance-Free
ARCHITECTS and building engineers with discriminating taste and a knowledge of construction values have found the TEE-JOIST SYSTEM to be the very finest. In addition, the system makes a major contribution to efficiency and all-around economy in construction, combining the advantages of precast and cast-in-place methods.
Ceilings may be flush or handsomely recessed with precision-ground, light-weight filler block between load-bearing, factory cast concrete TEE-JOIST . . . all covered with a steel reinforced concrete slab to form a floor or roof. It's light and strong . . . cuts deadweight load without cutting strength. Saves materials, manpower and money, too. Compare and see for yourself the many advantages of this superior system. Call today for the facts.
AMERICAN BLOCK COMPANY INC.
Formerly Cinder Block & Material Company • Miller Products for Over Fifty Years
2200 N. MONTCALM ST. • PHONE MElrose 2-1432 • INDIANAPOLIS 7
Electrical Contractor
For
HOOSIER MOTOR CLUB
and
THE AMERICAN STATES INSURANCE COMPANY BUILDINGS
Indianapolis
H. M. Stradling Electric Company
353 Massachusetts Ave.
Indianapolis, Indiana
ME 4-8844
Masonry Contractor for
PILGRIM LAUNDRY AND AMERICAN STATES INSURANCE BUILDINGS
W. E. BROADY & SON, INC.
2115 Martindale Ave.
Indianapolis, Ind.
Phone WA 5-4261
KEMMER CONSTRUCTION COMPANY
LAFAYETTE, INDIANA
PHONE SH2-3220
General Contractor for the FIRST FEDERAL SAVINGS AND LOAN ASSOCIATION BUILDING
General Contractor For
RIVERSIDE PHARMACY NOBLESVILLE, IND.
KENLEY’S SUPER MARKET NOBLESVILLE, IND.
Morehouse Construction Company
3440 Hovey St.
Indianapolis
PHONE WA 4-4256
Grace Methodist Church—Valley Stream, N. Y.
Architect: Frederic P. Wiedersum Associates,
Architects-Engineers. Valley Stream, N. Y.
General Contractor: Willart Associates, Inc., East Rockaway, N. Y.
Masonry Contractor: Sorrentina Contractors, Inc., Inwood, N. Y.
Tebco Face Brick Supplied by:
Andrew Miles Stone Co., Lynbrook, N. Y.
Tebco Face Brick
NOW...37 Color Combinations! 4 Textures!
The outstanding jobs are going Tebco! And for good reason. No matter what type of building—municipal, commercial, industrial, residential—Tebco Face Brick offers limitless design possibilities. Evans' big million-brick-a-week production assures fast, dependable delivery of the colors, sizes, and styles you need. For lasting beauty that never loses its appeal, design and build with Tebco. It meets all ASTM and FS standards. Write for new full-color Tebco Catalog.
Tangerine Blend, Standard, 45 K.
THE EVANS BRICK COMPANY
General Offices: Uhrichsville, Ohio • Telephone: WAlnut 2-4210
Sales Offices: Cleveland, Ohio • Columbus, Ohio • Pittsburgh, Pa. • Detroit, Mich.
Bay City, Mich. • Fairmont, W. Va. • Toledo, Ohio • Philadelphia, Pa.
One of the nation's largest producers of Clay Pipe, Clay Flue Lining, Wall Coping, Plastic Pipe and related construction materials, with over 50 years of faster, friendlier service.
Dwyer...the kitchen that went to college
Universities demand durability; maintenance-free construction; fast, economical installation; a compact kitchen that can take it. Married students want dependability; beauty; an easy-to-care-for kitchen that won't bite into living, study and relaxing areas.
Dwyer Kitchens are being installed in Indiana University's new Married Students Apartment Housing. In all, 380 units will be used. Indiana University is one of many schools to use Dwyer Compact Kitchens.
Series 69 Compact Kitchen (shown above). Full-size gas or electric range, oven and broiler; refrigerator-freezer; deep sink with seamless worktop; upper and lower storage; lifetime porcelain finish; for standard or recess installation.
Dwyer Compact Kitchens—the perfect solution for schools, apartments, offices and many other small space installations. From 39" to 69".
For complete data file of Dwyer Kitchens
WILSON-PARTENHEIMER
1107 East 54th Street
Indianapolis, Indiana
Phone: CL 1-4541
ALL-GAS HOMES...
TOPS in Comfort and Convenience
LOWEST in Operating Cost
GAS HEATS MORE HOMES THAN ANY OTHER FUEL!
Practically All new homes in our territory are GAS heated.
Natural Gas Is Best — for the Big household jobs.
Specify Gas for . . .
HEATING
COOLING
COOKING
REFRIGERATION
CLOTHES DRYING
WATER HEATING
TRASH, GARBAGE DISPOSING
OUTDOOR GAS LIGHTING
INDIANA GAS & WATER COMPANY, INC.
LIVE MODERN, FOR LESS, WITH GAS
Indiana Architect
Official Journal, Indiana Society of Architects,
A Chapter of The American Institute of Architects
Vol. V MARCH, 1962 No. 11
Edited and published monthly in Indianapolis by Don E. Gibson & Associates, 3637 N. Meridian St., P.O. Box 55594, Indianapolis 5, Indiana. Editorial and advertising policy governed by the Code of Business Conduct, Indiana Society of Architects.
Current average monthly circulation: 5,400, including all resident Indiana architects, school officials, churches and hospitals, libraries, selected public officials, and members of the Indiana Society of Architects. Further information available on request.
Member, Publishers Architectural Components
16 Affiliated Official Publications of Components of The American Institute of Architects, in 16 key states. Advertising and Listing in Standard Rate and Data Services.
Editor and Publisher Don E. Gibson
Director of Advertising William E. Stineburg
Officers and Directors, Indiana Society of Architects
Wayne M. Weber, AIA, Terre Haute, president; Walter Scholer, Jr., AIA, Lafayette, vice-president; Frank W. Haines, AIA, Indianapolis, secretary; John P. Guyer, AIA, New Castle, treasurer. District Directors: Edwin C. Berendes, AIA, Evansville; Alfred J. Porteous, AIA, Indianapolis; Walter Scholer, Jr., AIA, Lafayette; James L. Walker, AIA, New Albany; Carl L. Bradley, AIA, New Castle and Fort Wayne; Wayne M. Weber, AIA, Terre Haute. Directors-at-Large: Daniel C. Clark, AIA, Indianapolis; Harry E. Hunter, AIA, Indianapolis; Edward D. Pierre, FAIA, Indianapolis.
Now Celebrating Our 50th Anniversary of Service to Business and Industry
For half a century HOME elevators have paced the industry in engineering excellence, on-the-job performance, and over-all economy. HOME Oil-Hydraulic elevators are ideal for office buildings, hospitals, schools, parking garages, sidewalk lifts and loading docks... with these important benefits: Simplified installation and low installation cost; low maintenance; greater safety, and dependable, trouble-free service.
The Home Elevator Company assumes full responsibility... offers a complete service that includes drafting, engineering, fabrication, installation and factory representation for final adjustments.
Home Elevator's modern manufacturing facilities include such up-to-the-minute machine shop equipment as this 16-speed Monarch Model M heavy duty engine lathe equipped with 2 carriages, each with power traverse and taper attachment.
Manufacturers of Passenger and Freight Elevators... HOME GUARD Maintenance and repairs on All Types and Makes
For Complete Information, Write or Call:
THE HOME ELEVATOR COMPANY, INC.
1142 SOUTHEASTERN AVE. • INDIANAPOLIS 7, INDIANA • PHONE ME 6-3511
Compact, Single Package Pumping Unit. This "all-in-one" package includes electrical controls, motor, oil reservoir, pumps and valves... and can be located in any convenient place, such as basement, space under stairway. Pumps and valves are inside reservoir, eliminating leaks. Details available upon request.
Electrical Installation...for
Pilgrim Laundry & Cleaners
Miracle Lanes Bowling Alleys
was performed by
WATSON-FLAGG
Watson-Flagg's 60-year record of time-and-cost saving completions...the company's reputation for thinking and planning in terms of client benefits...are recognized by building contractors and owners throughout Indiana. Watson-Flagg maintains ample resources in manpower and material to assume electrical contracts of any size or complexity.
WATSON-FLAGG
ELECTRIC COMPANY, INC.
2916 North Harding Street • Indianapolis • WAlnut 3-5481
Electrical Construction...Installation...Service
OUR 61ST YEAR OF SERVICE TO AMERICAN BUSINESS AND INDUSTRY
when you need the elements for the right design expression - look to Marietta.
Architect: Walter Scholer & Assoc.
Contractor: Kemmer Contr. Co.
prestressed Floor and Roof Slabs
For the FIRST FEDERAL SAVINGS
AND LOAN CO.
MARTIN MARIETTA CORPORATION
AEROSPACE CHEMICALS — CONSTRUCTION MATERIALS
P.O. BOX 537, LAFAYETTE, INDIANA
New factories are springing up across the nation today in forms, locations, and numbers which mark a whole new era in American industry. The new plants—clean, functional, often handsome and relatively small in size—differ as sharply from the nineteenth-century concept of industry as does the jet plane from the ancient jenny.
This revolution in industrial planning and plant design has taken place largely in the years since World War II. Part of the massive decentralization which has taken place since that time can be attributed to the lessons of warfare. A nation with its industrial eggs in one basket can have them shattered by a single bombing attack.
Industrial dispersal for safety, however, is a small part of the story. The development of a highly-organized system of roadway transportation, the dwindling urban land supply, and the movement of distribution facilities to growing southern and western markets have all played a part in creating our new industry.
However, it would be hard to imagine the integration of industry with the residential and business community today if it were not for the new industrial architecture which has brought them together. Only a few years ago, counties, towns and suburban areas spoke wistfully of the additional tax resources which factories would provide. But the very home owners whose tax loads would have been lightened by industrial location fought such plans tooth and nail for the simple reason that the word "industry" was synonymous with dirt, ugliness, and community blight.
Today, the situation is reversed. Industry can pick and choose from among thousands of community offers of free sites, tax benefits, even partial subsidy of the buildings themselves. This remarkable change in attitude can be attributed largely to a new architecture which, at its best, makes the modern factory a handsome addition to the community, at its least, renders it so unobtrusive that passersby hardly know that it is there.
Partly because the factory has moved away from congestion to areas where there is more room which costs less, today's site is almost invariably far larger than that of the old factory. With the freedom of design which additional space permits, the modern plant, except in cases where the manufacturing process decrees otherwise, is almost always found as a sleek, one-story structure. Too, sufficient elbow room permits the various departments and processes of the plant to be located and interlocked in the manner most productive to the owner. Thus the individual, physical nature of the site markedly influences the design of the plant structure itself. The local climate—prevailing winds, sun load, humidity, temperature variation and similar factors—also creates design differences.
For these reasons, industrial design is a painstaking, individual process involving close collaboration of owner, architect, and mechanical and other engineers. It also explains why architectural counsel is extremely helpful to the factory owner in the site selection process prior to the planning of the building program.
There must be room enough to accommodate both present square-foot requirements for the structure and projected expansion needs over a period of 10 to 15 years. In some cases, the design must be such that the plant can be expanded on any or all sides of the building. A large amount of additional space may be needed for other purposes. It takes, for example, approximately three and one-half acres just for a railroad siding with a right-angle turn. Parking space for 100 automobiles takes another acre. Landscaping, which, together with straightforward, clean design, makes the modern plant a good community neighbor, also consumes site space.
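The rules of thumb above (roughly three and one-half acres for a railroad siding with a right-angle turn, one acre of parking per 100 automobiles) can be combined into a rough site-sizing estimate. A minimal sketch in Python; the function name, the 1.5× expansion factor, and the landscaping allowance are illustrative assumptions, not figures from the article:

```python
ACRE_SQFT = 43_560  # square feet per acre

def estimate_site_acres(plant_sqft, expansion_factor=1.5,
                        cars=100, rail_siding=True,
                        landscaping_share=0.15):
    """Rough site-acreage estimate using the article's rules of thumb:
    ~3.5 acres for a railroad siding with a right-angle turn,
    ~1 acre of parking per 100 automobiles. The expansion factor and
    landscaping share are assumed, illustrative values."""
    building = plant_sqft * expansion_factor / ACRE_SQFT  # present need plus growth room
    parking = cars / 100.0                                # 1 acre per 100 cars
    siding = 3.5 if rail_siding else 0.0
    subtotal = building + parking + siding
    return subtotal * (1 + landscaping_share)             # add a landscaping allowance

# A hypothetical 100,000 sq ft plant with parking for 200 and a rail siding:
print(round(estimate_site_acres(100_000, cars=200), 1))
```

The point of the sketch is only that the non-building items dominate: parking, siding, and landscaping can easily double the raw footprint requirement, which is why architectural counsel during site selection pays off.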
Good factory design starts with the basic manufacturing or processing unit. It may be a single conveyor around which the supporting spaces and equipment are planned. Raw materials must be received and finished materials taken away. Both may have to be stored. Access to water, power, and transportation must be taken into account. In some largely-automated industries, the factory building may serve merely as an attractive shell placed around mechanical equipment. But in those with a number of employees, plant design also recognizes the needs for good labor relations.
Public relations also constitute an important element in modern factory design. A factory breeds good will if it is in harmony with the community; it helps no one if it is ugly. What is esthetically suitable to a given community depends, of course, upon local tastes, customs, and existing architecture. Again, it is an individual problem which cannot be solved by use of stock plans, unprofessional building "package" suppliers, or pre-selected sizes and types of materials.
There is no excuse today for poor design. The length of wall and amount of roof area of a well-designed plant and a badly-designed plant may be identical. The cost of each may be the same. The difference is architectural design—design for a specific purpose, an individual site, for flexibility and expansion, community harmony, a specific manufacturing process, and for people.
Landscaping for
The AAA-Hoosier Motor
Club Building By
POTTENGER'S
NURSERY
AND LANDSCAPE CO., INC.
We Specialize —
IN LANDSCAPING, DESIGNING,
PLANTING
3401 LAFAYETTE RD.
INDIANAPOLIS, INDIANA
PHONE AX 1-4470
General Contractor
for
Group V — Married Student
Apartment — Indiana University
Miracle Lane & Starlight Lounge
Air Route Traffic Control Center
Weir Cook Airport — Indianapolis
F. A. WILHELM
CONSTRUCTION CO., INC.
PROSPECT & SOUTHEASTERN
INDIANAPOLIS
FL 9-5411
Conceived in the architectural
thinking of today,
TEMPLATE
by Leopold
awaits you and/or your client
in our showroom . . .
Burford's
OFFICE INTERIORS
603 E. Washington St. MEIrose 5-7301
Split-Face
Ashlar-Limestone
for the
MIRACLE LANES
and STARLIGHT LOUNGE
Indianapolis
BY
EMPIRE
STONE COMPANY
P. O. Box 788
Bloomington, Indiana
In the beginning, the project seemed to be rather simple... To have a remote drive-up banking unit that linked customer and bank via closed-circuit television and pneumatic conveyance, employment of these components in proper staging, apparently, was all that was necessary... This was basically true and still is, but the complexity of arrangements that followed the initial concept resulted in a series of problems of design and fabrication, of the efficient service projected by a bank, and of ultimate acceptability by the public.
The system is comprised of many parts, but basically two terminals... an exterior Auto Station, which is the customer point of transaction and an interior Control Console where the teller performs the various banking services... Stainless steel was chosen as the best material for the Auto Station because of its strength and durability of finish. The Auto Station houses not only the television system and the pneumatic conveyor terminal chamber, but a heating and air conditioning unit, which maintains proper operating temperature for the sensitive camera and monitor and a louvred steel roll-down protective screen that is controlled electrically from the teller's console. The screen allows the Auto Station to be closed down at any time during normal banking hours and, of course, at night. There is also a small forms rack on the face of the unit... convenient and easy to reach from an automobile. The Auto Station is of three sections... base, center and hood. All three are joined to form an independent unit free from any other structure. The center section only is used when it is desirable to key the design of the island to the architectural concept of the main building or where conversion is possible on an existing drive-up island.
When the customer arrives at the Auto Station, he sees his own image on the television receiver and brakes the vehicle. By centering his image on the TV screen, the customer automatically positions himself for the necessary operations. This feature eliminates the need for an instruction sign or positioning guides. He then pushes a button marked "press for service," which lights up, indicating he has signalled the bank... An audible and an illuminated signal alert the teller inside the bank at the Control Console, indicating the customer's presence at the Auto Station... The teller presses the lighted button, activating the system, and transfers the customer image to the inside receiver and the teller image to the outside receiver. This action also triggers the audio system, permitting voice communication between the two stations... At this moment the teller dispatches a carrier to the Auto Station through the pneumatic tube line. The carrier, traveling at a rate of 25 ft. a second (speed can be increased for longer runs), arrives at the Auto Station and trips a micro switch in the tube line, which opens the door of the chamber, exposing the carrier to the customer. The customer may now take the carrier into the confines of his car and securely load it with his transaction... He then deposits the carrier in the pneumatic chamber, sending it in to the teller... A second switch in the line closes the door of the chamber. The teller receives the carrier at the console and removes its contents... Because of the positioning of the wide-angle-lensed camera at the teller point, the customer has a full view of the entire transaction on the Auto Station receiver... When the receipt or money is ready for transfer, the outlined sequence is repeated.
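The quoted carrier speed implies a simple timing calculation. A minimal sketch; the 25 ft/sec figure is from the article, while the 120-foot run length and the function name are assumed for illustration:

```python
CARRIER_SPEED_FPS = 25.0  # ft/sec, per the article (can be increased for longer runs)

def one_way_seconds(run_ft, speed_fps=CARRIER_SPEED_FPS):
    """Time for the pneumatic carrier to traverse the tube line one way."""
    return run_ft / speed_fps

# A hypothetical 120 ft island-to-console run takes under 5 seconds each way,
# so a complete send-and-return cycle spends under 10 seconds in the tube.
print(one_way_seconds(120))
```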
The teller's Control Console is designed for a dual Auto Station operation. The teller, by merely pushing a button, may lock-in on an additional channel for a second Auto Station... This means that the teller's activating operation is reduced to a two button sequence... from neutral position to Auto
(Continued on next Page)
The emphasis is on flexibility at the Control Console... by designating a working surface height, the unit may be operated from either a standing or seated position. An additional "Omni Teller" console can be used in the following manner:
Four Auto Stations (A, B, C, D) are connected, via pneumatic and electrical systems, to three Control Consoles (1, 2, 3)... Console 1 operates A & B, console 3 operates C & D... and by means of an inter-connecting network, Omni Teller console 2 may lock in on A, B, C or D.
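The console-to-station network just described can be modeled as a small routing table with a lock-in rule. A minimal sketch; the fixed pairings and the Omni Teller's reach are from the article, while the function and variable names are hypothetical:

```python
# Per the article: console 1 serves Auto Stations A & B, console 3 serves C & D;
# Omni Teller console 2 may lock in on any of the four.
REACHABLE = {1: {"A", "B"}, 2: {"A", "B", "C", "D"}, 3: {"C", "D"}}

def lock_in(console, station, busy):
    """Connect a console to a station if it is wired to it and the station
    is free. `busy` maps each station to the console currently serving it."""
    if station not in REACHABLE[console]:
        raise ValueError(f"console {console} is not wired to station {station}")
    if station in busy:
        return False          # another teller already holds the channel
    busy[station] = console   # the two-button sequence: neutral -> station
    return True

busy = {}
assert lock_in(1, "A", busy)      # console 1 takes station A
assert not lock_in(2, "A", busy)  # the Omni Teller finds A occupied
assert lock_in(2, "C", busy)      # but can lock in on C
```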
Drive-In banking has seated itself strongly over the past few years and the TV BANK system is the logical answer to the isolation problem of a remote island. The teller who was previously stationed in this island, limited by location to just a few operations, is brought back into the bank where both the number & types of transactions are definitely increased because of the accessibility of bank records, etc. Greater efficiency and closer supervision of bank personnel are maintained. TV BANK saves time and trouble for both customers and bank by permitting a more complete drive-up operation... It is a revolutionary new concept in customer service.
INCLUDED IN THE COMPANY...
"we're mighty proud to keep"
Purdue—Wabash—Ball State Teachers... are just three of the colleges and universities, elementary and secondary schools for which Poston-Herron Face Brick are being used.
At P-H, "more than 50 years of craftsmanship in clay" is reflected in exclusive Postonians and distinctive Orientals, not to mention the many, many architectural reds and blends of quality face brick.
Architects find P-H Face Brick today's most imaginative material for interpreting their design ingenuity. Contractors and masons prefer P-H Brick because they are accurately-dimensioned for easy laying. Owners like P-H Face Brick because of their distinguished appearance and maintenance-free wearability.
Your nearest P-H Dealer has samples of the complete line.
POSTON-HERRON BRICK CO., INC.
ATTICA, INDIANA
"Home of Exclusive Postonians/Orientals"
THE TREND IS TO ELECTRIC HEATING FOR Schools Motels Factories Offices Stores Apartments
Because of its flexibility that permits a wide range of installations, Electric Heating offers a new, but proved advance in structure design.
Costs on equipment, installation and operation are available on actual installations in the Indianapolis area.
FOR FULL INFORMATION CALL ARCHITECT and ENGINEERING REPRESENTATIVE MEIrose 1-1411—Extension 264
INDIANAPOLIS Power & Light COMPANY
Plans for the Women's Architectural League scholarship fund benefit at the Civic Theater, Wednesday, April 11, received a final polishing at the organization's March 5 meeting as co-chairmen Mrs. Alfred J. Porteous and Mrs. Richard C. Lennox announced the following committees:
Tickets, Mrs. Leroy M. Russel; Mrs. C. C. Shropshire (left, seated); Mrs. Donald B. Fisher; Mrs. Robert C. Smith; Mrs. Charles C. Lowe; Mrs. David L. Richardson, and Mrs. Howard L. White.
Hospitality, Mrs. Arthur L. Burns (right, standing), and Mrs. Donald B. Compton (right, seated), co-chairmen; Mrs. Richard K. Zimmerly; Mrs. Louis E. Penniston; Mrs. Horace Boggy; Mrs. Donald Clark; Mrs. George C. Wright; and Mrs. Gilbert T. Richey; and Publicity, Mrs. Raymond W. Ogle (left, standing), chairman; Mrs. Theodore L. Steele; Mrs. Marion A. Williams; Mrs. Joseph J. McGuire; and Mrs. Robert N. Kennedy.
W.A.L. President, Mrs. Edward E. Simmons, announced that officers of the women's group and their husbands will greet theatergoers in the lobby preceding the 8:30 p.m. performance of "The Pursuit of Happiness." Early ticket sales indicate the ISA Scholarship Fund will receive a big boost from the league's project.
The Home Elevator Company of Indianapolis has completed its first 50 years of service to architects, builders and industrial designers. Founded in 1912 by John W. Hobbs, the company is now one of the largest independent producers of electric and oil-hydraulic passenger elevators, freight elevators, dumb waiters and home lifts.
As an interesting sidelight to the company's normal line of business, Home is also producing ejection spacer tubes for the aero-space field, and is active in other areas of defense projects and industrial services. This broadened scope reflects Home's possession of much unique precision equipment, including a 30½" x 336", 16-speed lathe, one of two in the country.
The Cinder Block & Material Company has completed the change in its name to American Block Company, Inc., announced last month as effective March 1st. The company was founded in 1906 by the late O. L. Miller, father of its current president, Mr. Allan C. Miller. Other officials of the firm include Stewart D. Tompkins, executive vice-president; Richard D. Light, vice-president; Charles E. Boswell, secretary; and Mrs. George V. Falkenberg, treasurer. Mrs. Falkenberg, incidentally, is a daughter of the founder.
The newly-named firm will continue to market a wide variety of concrete products, and its old stand-by and namesake, cinder block, will now be known as "Amerlite."
Mr. Hal E. Peters, president of the Indianapolis Chapter of Producers' Council, Inc., has called the I.A. staff to task for an inadvertent slight to P. C. contained in a recent article. As Mr. Peters pointed out, the article seemed to indicate there were but 37 national members of Producers' Council, all represented locally in the Indianapolis Chapter.
Actually, there are many more nationally prominent manufacturers of building materials who are members of P. C. nationally, but 37 of these are represented by membership locally. Our apologies.
Miss Susan Jane Byfield, 18-year-old daughter of ISA Associate Member and Mrs. Charles Byfield II, represented the Indianapolis District, ISA, in the annual Indianapolis Home Show Queen Contest.
Susan is a graduate of North Central High School and is employed by the Hanover Insurance Company in Indianapolis.
A Competition for Awards in Indiana Architecture 1959-1962
ELIGIBILITY: All entries shall be buildings constructed in Indiana, designed by architects registered in and residents of the State of Indiana. To be eligible, a building must be completed within the three-year period of June 1, 1959, and the date of entry in this competition.
GROUP I Residential (single family dwelling)
A. Cost $25,000 or under
B. Cost over $25,000
GROUP II Public Buildings
A. Schools
B. Churches
C. Community Buildings (Firehouses, country clubs, courthouses, jails, motion picture houses, hospitals, etc.)
GROUP III Commercial Buildings (Stores, office buildings, hotels, shopping centers, etc.)
GROUP IV Apartments and group housing, including homes for the aged.
GROUP V Industrial (warehouses, manufacturing plants, research centers, etc.)
PRESENTATION:
1. Mounts: All entries shall be on 40" x 40" rigid board, with eyelets secured in the top to facilitate hanging. One building only per mount.
2. Plans: Site plan and major or typical floor plan drawn to scale and with numerical or graphic indication of scale. Medium (ink, photo technique, pencil, water color, etc.) at discretion of entrant.
3. Photographs: Shall be glossy black and white or color, a minimum of 8" x 10" in size. Two exterior and one interior view minimum will be required.
4. Descriptive Data: The following information shall be included on a card attached to the back of each entry:
A. Group classification by name and division number (e.g., Public School, II-A)
B. Name of Architect (concealed by appropriate means; failure to conceal name will result in entry being banned from competition)
C. Name and location of building
D. Name and address of owner
E. Name and address of general contractor
F. Date of completion
G. Any statement of requirements, program, etc., deemed appropriate
JURY: The jury will be composed of three individuals, at least two of whom will be corporate members of the AIA, all resident outside the State of Indiana. Names of jurors to be announced.
AWARDS: First, second and third awards may be made in each category. The jury may also award honorable mentions at their discretion.
ANNOUNCEMENT OF AWARDS: The announcement of the award winners, and presentation of certificates, shall be made at the dinner meeting of the I.S.A. Annual Convention to be held on May 25, 1962.
EXHIBITION: The entries shall form an exhibit at the I.S.A. Annual Convention, and afterwards shall be displayed, in whole or in part, wherever deemed feasible and desirable by the Board of Directors of the Indiana Society of Architects. One such display already established will be at the John Herron Art Museum during the month of September, 1962.
CLOSING DATE: A letter or post card indicating intention to submit must be mailed to the Committee on Honor Awards, Indiana Society of Architects, 3637 N. Meridian Street, Indianapolis, Indiana, no later than May 10, 1962, and must be accompanied by a check in the amount of $10.00 per mount, made payable to the Indiana Society of Architects. Entries must be submitted to the same address no later than May 15, 1962.
CLEARANCE: Each entrant must assume responsibility for obtaining all necessary clearances and permissions to submit his project in this competition, and for permission to have all or any portion of his submission reproduced in any publication or news media. Photographs requiring credit lines must be so marked, along with the appropriate credit line.
DISPOSITION OF SUBMISSIONS: The Indiana Society of Architects reserves the right to make such use of the submissions in promoting the aims and objectives of the profession as is ethically proper. Submissions will be returned to entrants at the completion of such usage provided a $2.00 return fee has been paid in advance. Unless further disposition is requested by entrants, all submissions not covered by the $2.00 return fee will be held at the offices of the Indiana Society of Architects, 3637 N. Meridian Street, Indianapolis, for a period of one year and then destroyed. Entrants desiring to pick up their submissions may do so after notification of availability and within the one year.
Eliminate the Architectural Barriers
As an architect who has had many years of experience designing buildings used by the public—government buildings, churches, schools, libraries, places of recreation, commercial and industrial buildings and other places for people to live, work and play—it would seem that I might have thought about problems of access to these buildings by physically handicapped persons. It never seemed to occur to any of us that there were people who couldn't use these buildings because of physical handicaps. I don't think I was a bit different from most of my fellow architects, judging from the results of their endeavors since the days of the early Greeks and Romans. Fortunately, architectural designs of the last decade follow more contemporary lines, and we no longer see the great flight of marble steps leading to a monumental front entrance door made of heavy bronze; unfortunately, we do often have that single step that is such a trap right at the front entrance. I imagine every one of you in the audience has at one time or another stumbled or fallen over that booby-trap step.
I was astonished to learn that nearly one person in six in our nation has a permanent physical handicap. Each disability has its own particular associated problems.
Unfortunately, an unnecessarily large portion of our permanently physically handicapped have been institutionalized or are confined to their homes, protected and pampered by solicitous parents, relatives and friends, or hidden from view by ashamed families. Many of the disabled are afraid to venture forth because of the architectural barriers they encounter—barriers that have unwittingly been built into the very buildings that should be most accessible. Some of these handicapped people have convinced themselves that it is better to stay home because they feel they are a burden to others in normal social settings. They may truly be a burden, but frequently it is not their fault. It is the lack of awareness of the general public which has created this difficulty.
Although there are other problems, the one that is most often heard and the one that looms the largest is the inaccessibility of buildings. The finest programs of rehabilitation, education or recreation are unavailable to the disabled if they cannot have access to the very buildings they need to enter in order to use these services. Therefore, the defects inherent in the design of buildings and the facilities within them quickly become the great deterrent. Corrected, these buildings will make it possible to convert constructive rehabilitation to social and economic gains.
The correction of these problems is not within the realm of the professional rehabilitation worker, but is rather the responsibility of the architect, engineer, designer, builder, manufacturer, and also legislators, municipal leaders, and community planners. The professional engaged in rehabilitation is eager to give his encouragement, assistance and guidance for the correction of these evils.
Basically speaking, if we can correct these failings in building design we can make it possible to use the talents and resources of millions of physically handicapped individuals for the betterment of all mankind.
In an endeavor to solve this problem, consultations were held between the Chairman of the President's Committee on the Employment of the Physically Handicapped and executives of the American Standards Association, and it was decided to invite individuals who were vitally interested in, and ably qualified to assist in attacking the problems of architectural barriers, to meet with key personnel of the American Standards Association.
After a little over two years of meetings, consultations and further research the proposed Standards have been drafted in final form, and approved by the Steering Committee, the Sectional Committee by letter ballot and the American Standards Association.
These Standards will be used by architects, designers, engineers, builders and those who want to make their buildings accessible to the physically handicapped. They will be used by building officials, legislators and government officials to amend their building regulations (1) to make these specifications mandatory or (2) to make their own public buildings accessible to the handicapped.
These Standards or specifications cover the essential elements concerned with the use of buildings and facilities by the handicapped. They include:
**The grading of ground**, even contrary to existing topography, so that it attains a level with a normal entrance that will make a building or facility accessible.
**Public walks** have been specified to be at least 48 inches wide with a grade no greater than 5 percent. Walks are to have a continuous surface not interrupted by steps or abrupt changes in level, and, where
HOOSIER MOTOR CLUB, Indianapolis
Architect: McGuire & Shook, Compton, Richey & Associates
Contractor: Carl M. Geupel
MARRIED STUDENTS' Housing Group D, Indiana University
Architect: Edward D. James
Contractor: F. A. Wilhelm Co., Inc.
PARKWAY TERRACE APARTMENTS, Indianapolis
Architect: Garns and Moore, Architects
Contractor: Winston Hawkins
STECK'S MEN'S STORE, Muncie
Architects: Hamilton & Graham
MORLEY'S RESTAURANT, Indianapolis
Architect: Edward D. James & Assoc.
Contractor: Charles Brandt
GEORGE BAHRE CO. OFFICE, Indianapolis
Architect: Richard K. Kimmerly
Contractor: George Bahre Company
FAYETTE BANK & TRUST CO.
BRANCH OFFICE, Connersville
Architect: W. Erb Hanson & Assoc.
Contractor: B & M Lumber Company
AYR-WAY STORE, Indianapolis
Architects: Lennox, Matthews, Simmons & Ford
Contractor: Carl M. Geupel
FIRST FEDERAL SAVINGS & LOAN
Association, Lafayette
Architects: Walter Scholer & Associates
Contractor: Kemmer Construction Co.
RIVERVIEW PHARMACY, Noblesville
Architect: John G. Pecsok
Contractor: Morehouse Const. Co.
DOLPHIN CLUB, Indianapolis
Architect: Lennox, Matthews, Simmons & Ford
Contractor: George MacDougal
LONN MANUFACTURING, Indianapolis
Architect: James Associates
AFNB BRANCH, Indianapolis
Architect: Evans Woollen & Assoc.
Contractor: Marion Bugher Const. Co.
KENLEY'S SUPERMARKET, Noblesville
Architect: John G. Pecsok
Contractor: Morehouse Const. Co.
INDIANA NATIONAL BANK BRANCH,
Indianapolis
Architect: Bohlen & Burns, Architects
Contractor: Krebay Construction Co.
MERCHANTS BANK BRANCH,
Terre Haute
Architect: Evans Woollen & Assoc.
Contractor: Glenn North
PILGRIM LAUNDRY, Indianapolis
Architects: Martin & Jelliffe
Contractor: Glenroy Construction Co.
AIR ROUTE TRAFFIC CONTROL CENTER, Indpls.
Cons. Architect: James Associates
Contractor: F. A. Wilhelm Construction Co.
AMERICAN NATIONAL BANK, Muncie
Architects: Hamilton and Graham
Contractor: C. Kirkland
LAKE CENTRAL AIRLINES OFFICE,
HANGAR & FIRE STATION, Indpls.
Architects: James Associates
Contractor: Glenroy Const. Co.
WEIR COOK AIRPORT TERMINAL, Indpls.
Architects: James Associates
Contractor: J. L. Simmons
FRANKLIN FINANCE OFFICE, New Castle
Architects: Lennox, Matthews, Simmons & Ford
Contractor: Ralph Hunnicut & Son
MIRACLE LANES BOWLING ALLEY, Indpls.
Architect: Donald Dick, Architect
Contractor: F. A. Wilhelm Construction Co.
Lower Maintenance
Greater Durability
Positive Decorative
Specify
PLEXTONE® PAINT
COLOR-FLECKED ENAMEL
Distributed By
PERFECTION PAINT & COLOR
715 E. MARYLAND
INDIANAPOLIS, IND.
ME. 2-4312
Asphalt Paving and Parking Area for the AAA Hoosier Motor Club
By
ASPHALT SURFACING INC.
4250 E. 82nd St.
R.R. 19, Box 573
Indianapolis
Indiana
"A-J" IS PROUD TO BE A PARTNER IN THE PROGRESSIVE COMMERCIAL ACTIVITY FEATURED IN THIS ISSUE FOR:
(1) Ayr-way #1 & #2
(2) Miracle Lanes
(3) Pilgrim Laundry
(4) Indiana Natl. Bank
(5) Hoosier Mtr. Club
(6) American States Insurance
Ceilings — Roof Decks
Wall Treatments — Partitions
Proven Installations By
ANNING-JOHNSON COMPANY
1720 ALVORD STREET
INDIANAPOLIS 2, INDIANA
Branch Office: 1272 Maxwell Ave., Evansville HA 3-4469
Roofing Contractor for the ANDERSON LOAN ASSOCIATION DRIVE-IN BUILDING
By
Standard Roofing Company
107 SHEPHERD DR.
CHESTERFIELD
INDIANA
PHONE
644-6547
378-3550
RESOLUTION
WHEREAS: Contractor members of the Indiana General Contractors Association, Inc., a Building Chapter of the Associated General Contractors of America, Inc., have on at least two projects — 1) Purdue University Calumet Center Addition #1, Hammond, Indiana, and 2) Residence Hall H-4, Purdue University, Lafayette, Indiana — refrained from bidding after requesting an opportunity to bid and being invited or given an opportunity to bid.
AND WHEREAS: Contractor members of the Indiana General Contractors Association or the officers of the association reportedly asked other bidders to refrain from bidding on these projects — thus resulting in 4 out-of-state bids only on the Purdue Calumet Center project out of 15 expected bids, and 1 bid (an Indiana firm not a member of the IGCA) on the Residence Hall project out of 10 expected bids.
AND WHEREAS: The reported reason for refraining from bidding on these projects was the requirement of the Owners that each bidder submit with his bid a list of his principal subcontractors and material suppliers—a requirement that was clearly brought to the bidder's attention in the invitation or announcement for bids prior to his requesting the bidding documents (a requirement that had been followed for many years by Purdue University and others).
AND WHEREAS: The submission of the list of principal subcontractors and material suppliers with the bid encourages the best price quotations from these subcontractors and suppliers at the time of bidding, in that they have to be low to be listed — thus resulting in the lowest overall cost to the owner; whereas, if not listed with the bid, "shopping" and "bid cutting" occurs in many cases which generally means that the subcontractor and supplier make higher original quotations — thus resulting in higher bids to the owner, and any "shopping" or "bid cutting" advantages going to the successful contractor.
AND WHEREAS: Any and all schemes, designs, undertakings, plans, arrangements, contracts, agreements or combinations to limit, restrain, retard, impede or restrict bidding for the letting of any contract for private or public work, directly or indirectly, or to in any manner combine or conspire to stifle or restrict free competition for the letting of any contract for private or public work are declared illegal under the provisions of the Indiana 1907 anti-trust statute.
NOW THEREFORE BE IT RESOLVED: That the Board of Directors of the Indiana Society of Architects, a Chapter of the American Institute of Architects, meeting this 9th day of March 1962, in view of the facts set out in the premise hereof, does hereby go on record condemning the actions of the Indiana General Contractors Association and the contractor members concerned on the above named projects, in what appears to be a flagrant violation of the right to obtain bids on a free and open competition basis.
FURTHER, We affirm the right of any owner requesting bids on a project to establish whatever bidding requirements he deems desirable. We further affirm the right of any contractor to decline to bid under conditions not satisfactory to him. However, his interference or the interference of an association or others with those desiring to submit bids is illegal, and unethical, and is deplored.
FURTHER, that a copy of this resolution be forwarded to the Indiana General Contractors Association.
ADOPTED BY THE BOARD OF DIRECTORS, INDIANA SOCIETY OF ARCHITECTS CHAPTER, AMERICAN INSTITUTE OF ARCHITECTS
MARCH 9, 1962
Eliminate the Barriers
(from Page 15) they cross other walks, driveways or parking lots, they should blend to a common level. This does not mean the entire elimination of curbs but rather blending walk and driveway to one surface at their juncture.
Parking lots should have spaces which are accessible and identified for use by the physically handicapped. If the space is not open on one side for an individual in a wheelchair or on braces or crutches to get in or out, then some parking spaces 12 feet wide should be provided. Care should be exercised in planning so that individuals are not compelled to wheel or walk behind parked cars.
At least one entrance to every building should be usable by individuals in wheelchairs, and this entrance should have access to the elevator in a multi-story building.
Ramps when necessary should have a gradient of not over one foot in twelve feet or 8.33 per cent. It is interesting to note that practically every building code now permits ramps with a rise of 10 per cent. The American Standards Association committee decided this was excessive and dangerous unless special precautions were taken. The Standards require ramps to have non-slip surfaces, at least one hand rail and a level platform at the top and at least six feet of straight clearance at the bottom.
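As a quick arithmetic check (added here for clarity; the figures are those quoted above), the two gradients compare as follows:

```latex
% Gradient as rise over run, expressed as a percentage:
\text{gradient} = \frac{\text{rise}}{\text{run}}
  = \frac{1\,\text{ft}}{12\,\text{ft}} \approx 0.0833 = 8.33\%
% The 10 per cent figure still permitted by most building codes
% corresponds to the same 1-foot rise in only 10 feet of run:
\frac{1\,\text{ft}}{10\,\text{ft}} = 0.10 = 10\%
```

In other words, the Standards' 1-in-12 limit is roughly one-sixth flatter than the 10 per cent maximum most codes allow.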
Stairs are, of course, the Number One enemy of the wheelchair user, the crutch-walker and the cardiac. Where they must be used it is recommended that the height of the riser be not more than seven inches and that the commonly known nosing be discarded for a type of riser and tread without any abrupt change of surface. At least one hand rail should be extended 18 inches beyond both the top and bottom steps.
Doors generally should be no less than 32 inches in width. Revolving doors cannot be used by those in wheelchairs or on crutches. Double doors are not permitted unless they can operate in unison by one single effort or unless each leaf is at least 32 inches in width. Of course, automatic doors solve the problem excellently. Doors should not be too heavy to be operated by children or the aged. Thresholds should be as nearly level with the floor as possible.
Floors are required to have a non-slip surface and be of a common level throughout.
Toilet rooms are required to have at least one stall that is wide enough for a person in a wheelchair. Mirrors, shelves, towel racks and other dispensers should be placed so as to be within reach of those in wheelchairs. Drain pipes and hot water pipes should be covered or protected so that a wheelchair individual without sensation will not burn himself.
Water fountains should have spouts and controls accessible to the physically handicapped. Fortunately the new designs of wall mounted drinking fountains when placed at the proper height meet the requirements for use by the handicapped.
Public telephones should be installed so that they are accessible to those in wheelchairs. An appropriate number should also be equipped for the hard-of-hearing. It is recommended that architects and builders work closely with the local telephone company in such planning.
Elevators should be accessible and usable by the physically handicapped. Elevator cabs should be large enough to enable a wheelchair to turn.
Controls and switches for lights, heat, ventilation, windows, venetian blinds, fire alarms, etc., should be placed so as to be usable by, and within reach of, the handicapped.
Identification of rooms and offices should be done by raised letters to help the blind. Likewise, any door not intended for normal use and which might prove dangerous if a blind person were to exit or enter should be quickly identifiable by the use of knurling or ridged surfaces on the handle or knob portion of door hardware.
Warning signals should include both flashing lights and audible sound for both the deaf and the blind.
The Standards also call attention to hazards that should be avoided, such as access doors or manhole covers in floors, low hanging door closers, low hanging signs, ceiling lights or similar objects which protrude into regular trafficways. Openings in pavements or floors should be protected by both audible and visual warning signals.
I have only covered the highlights of the new Standards as now approved. In their entirety they are much more inclusive and specific with respect to dimensions, use of materials, and methods of construction and design. There are many more areas that need attention, but these Standards will give the designer or builder all the facts and data he needs to make buildings and facilities accessible to the physically handicapped.
STADIUM
of prestressed concrete
HIGH QUALITY
LOW PRICE
SPACE FOR LOCKER ROOMS,
TICKET BOOTHS, & STORAGE
A section 90 ft. long seats 814
Prestressed concrete, the architect's choice, offers design flexibility, low maintenance costs, durability and early completion dates.
Write, or call 2-4086
SHUTE CONCRETE PRODUCTS, INC.
Richmond, Indiana
F. E. GATES
MARBLE & TILE CO.
& INDIANA DESCO INC.
Contractors In
Marble, Ceramic Tile,
Slate and Desco Vitro
Glaze Wall and Floor Finishes
LOUIS M. BARCH, General Manager
CLifford 5-2434
5345 Winthrop Ave.
Indianapolis 20, Indiana
Plumbing, Heating Contractor for
• AIR ROUTE TRAFFIC CONTROL CENTER
WEIR COOK AIRPORT
• AYR-WAY STORE
• PILGRIM LAUNDRY
FREYN BROS.
INC.
1028 N. Illinois
Indianapolis
ME 5-9386
ROOFING &
SHEET METAL CONTRACTOR
for
American States Insurance Co.
and
Hoosier Motor Club Buildings
RALPH A. REEDER AND SON, INC.
ESTABLISHED 1897
Twenty-Fourth & Winthrop
Indianapolis
WA 3-2421
PRESCOLITE
ASTRALUME
LIGHTING FIXTURES
ROUND SQUARE
• Beautiful shallow silhouette.
Round or square hand blown
Thermal glass 100-300 Watts.
• No framing, hidden or visible
means of glass support
(U.S. Pat. No. 2,826,684).
NEW hinging principle.
• Prewired recessed housings.
• Vertical lamping.
Write for further information.
PRESCOLITE MFG. CORP.
2229 - 4th St., Berkeley, California
FACTORIES:
Berkeley, Cal. • Warrington, Pa. • El Dorado, Ark.
Furnishing Hardware For The AIR ROUTE TRAFFIC CONTROL BUILDING Weir Cook Airport Indianapolis
FERRELL HARDWARE COMPANY, INC.
1055 E. 52 ST. INDIANAPOLIS ATwater 3-1336
Electrical Contractor For Kenley's Super Market
ANDERSON ELECTRIC SERVICE INC.
Industrial and Commercial
140 S. COLLEGE AVE. INDIANAPOLIS Phone — ME 2-5393
GENERAL CONTRACTOR FOR THE ANDERSON SAVINGS AND LOAN BUILDING Anderson, Ind.
PITCOCK CONSTRUCTION CO.
M. T. PITCOCK
CHESTERFIELD INDIANA PHONE 378-7101
Finishing Hardware For—
AMERICAN STATES INSURANCE COMPANY BUILDING INDIANAPOLIS
Barrison & Clarke INC.
Building Specialties 2926 E. Washington Indianapolis ME 2-9388
Roofing & Sheet Metal Contractor For The— FIRST FEDERAL SAVINGS & LOAN ASSOCIATION LAFAYETTE
GENERAL ROOFING & SIDING CO.
1058 ROSSVILLE AVE. FRANKFORT, IND.
ELECTRICAL CONTRACTOR FOR First Federal Savings & Loan LAFAYETTE
McKINNIS ELECTRIC CO., INC.
3031 UNION STREET LAFAYETTE INDIANA
Sometime between dawn and dark today, thousands of American women will go into retail stores to buy a dress. They will emerge with the dress, and they will also be burdened down with gloves, shoes, perfume and jewelry. Several weeks from now, thousands of American husbands will look at the bills and shake their heads in mingled resignation, woe, and, perhaps, consideration of the unpredictable nature of womanhood.
Some, however, will recall in all honesty that the last time they went to a men's store to buy a suit, they, too, emerged with a number of accompanying items they did not plan to buy when they entered. This, in the language of the business world, is the result of modern merchandising. Without unplanned buying, American retail business would go bankrupt.
Modern merchandising is a vital factor in the health of the American economy. It is also, in large part, dependent upon the contribution which modern architecture is making for the welfare and profit of the nation's retail businessmen.
The modern retail store is a significant example of the manner in which architecture influences, as well as accommodates, human activities. The marketplace is, of course, as old as man himself. In many parts of the world, goods are still sold from open bazaars and tents. There are still merchants who go door to door and carry their goods with them. Others sell and have sold from push-carts and trading posts.
But the modern retail store provides a way to get and keep customers and sell merchandise on a scale never before paralleled in history. The store is, essentially, a simple structure. It comprises a front, a selling space, and a service space that supplies and moves goods and keeps the books.
An architect can tell you that the front, which serves to advertise and display the goods inside, must be designed to pull the customer inside—and fast. The average pedestrian walks by the average store in seven seconds. He drives by in three. This is the brief timespan in which the magnetism of good design must act. In crowded urban areas, today's trend is to set back or recess the store front so that the passerby can look at the merchandise without being jostled or crowded down the street.
Once inside, the prospective customer is exposed to the selling plan of the store. The sales space, the second of the three elements mentioned above, is the heart of the establishment. It is here that buyer and seller meet. It must be interlocked with the front to provide easy access and movement and with the back or service space, which supplies it.
The well-organized sales space resembles a street. Merchandise departments are located on traffic aisles as shops are located on a pedestrian thoroughfare. One of the first steps in modern store planning is to plan this street system with three divisions of merchandise in mind. These divisions are labeled "impulse," "convenience," and "demand" in keeping with the motivations and reactions of the customer.
Impulse goods are luxuries or suddenly-desired forms of merchandise, whose sales depend upon good display and accessibility. Generally speaking, such items include perfume, jewelry, gifts, and similar items. Food and drugs are typical convenience items and stores which carry them are generally referred to as convenience stores. In almost every store, however, some types of convenience items are stocked for the convenience of the shopper. Demand merchandise includes furniture, clothing, and household equipment. Stores which concentrate on these commodities are referred to as demand stores.
However, in nearly every store you will find examples of impulse, convenience, and demand merchandise of one kind or another. Naturally, there are situations in which certain items are difficult to classify or shift from one division to another, depending upon the type of store involved and local customer habits. But, in general, these principles hold good and guide modern store planning.
The profits of the average store may depend upon how well it stimulates impulse buying. Few merchants would survive by selling only convenience and demand items. Buying surveys have shown, for instance, that more than half of all drugstore sales are based on impulse.
A great deal of attention is paid to impulse merchandise planning and selling today. (Continued on Page 29)
ROOFMATE
THE NEW MOISTURE-PROOF ROOF INSULATION WITH LIFETIME EFFICIENCY
- Withstands Hot Bitumens
- Durable and Permanent
- Strong, Rigid
- Easily Installed
- Superior Insulating Performance
- Lightweight, Easy to Handle
- Water and Water Vapor Resistant
- Forms own Moisture Barrier
DISTRIBUTOR
SEWARD SALES CORPORATION
ELKHART, INDIANA
740 South Main Street
INDIANAPOLIS, INDIANA
1101 East 40th Street
CINCINNATI, OHIO
3660 Michigan Street
the economies of LATH and plaster accrue only to its users.
investigate - specify - demand the economy of LATH AND PLASTER
Lathing & Plastering Bureau of Indianapolis, Inc.
SECRETARY-TREASURER: WILLIAM F. BOYCE, P.O. BOX 572, INDIANAPOLIS 6, INDIANA
Again, Architects Specify Flameless ELECTRIC HEATING
Here is another example. Architects are specifying safer, cleaner, more dependable electric heating in modern schools, churches, hospitals and commercial buildings. Flameless electric heating is practical for home use, too. See us today for complete details!
Church of the Brethren, Kokomo, Indiana. Architect & Engineer: Kenneth Williams, Kokomo.
Initial savings in construction costs, lower maintenance, better zone control of temperature and other advantages make electric heating ideal. Results here have been most satisfactory.
PUBLIC SERVICE COMPANY OF INDIANA, INC.
Architecture That SELLS
(from Page 27)
Only scientific store planning and point-of-sale analysis and design by competent architects, using every square foot of floor space to best effect, can accomplish this.
In creating a well-planned sales space, each of the three groups of merchandise should be organized into its own separate, well-defined department with its own job to do, its own equipment and displays, and its own place in the store plan. The location of each department is of vital importance to the owner. Customers will always find their way to demand merchandise if the way is clear. Therefore, demand departments can be placed at the far end of the inside shopping street. Convenience departments are located midway, and impulse departments are placed near the front of the store so that customers have to pass them. Repair, credit and lounge facilities are essentially demand departments, and can be placed at the far end of the street.
Take a men's furnishings store, for example. Sportswear, suits and formal clothes are the demand items here. So they are placed at the far end, adjacent to a service area for dressing and fitting. Convenience items are pajamas, underwear, and some shirts. They're located midway. Ties, jewelry, and toiletries are classed as impulse items, and they are placed near the entrance where they can't be missed.
In a larger store which handles a wider range of merchandise, sales divisions will be organized along the same lines, but, in this case, men's items will be treated as a single element—often with an impulse unit—and placed near the front of the store. There must be variations, of course, as required by related selling. For instance, ties and shirts are related from the customer's viewpoint, so the departments must be near one another to promote the sales of both.
The size of the sales space will determine whether the indoor shopping street will be a single aisle or a maze of interlocking traffic routes. In a small shop, a straight dead-end may be enough. In larger stores with constant, heavy traffic, effective selling may not be possible on main aisles, so it must be planned along the side streets or in departmental alcoves divided by stock space. Gridiron planning produces an efficient flow of traffic and, in some cases, may be best. However, it tends to be monotonous; sometimes, a carefully designed free-flow pattern provides variety, interest, and better display.
There are limits to one-floor planning. Spreading over too great a lateral distance leads to fatigue, customer confusion, and "museum feet." The solution here is vertical selling. Visualize turning the dead-end store on end. The customer service departments, along with dining facilities and lounges, are placed on top. Next on the way down are the demand departments. Convenience merchandise is located midway. Impulse departments take their natural place at the entrance level. Elevators and moving stairs—their location in relation to traffic/merchandising factors is highly technical—need intensive study for the best possible selling pattern.
In addition to salesmanship through architectural planning, there also is a need for salesmanship through architectural display. A store, really, is a stage in which the merchant is the stage manager, his merchandise the performers, and the customer the motivated audience. An effective backdrop is needed, lighting must provide soft contrast between the merchandise and its surroundings to create maximum visibility, and the set must allow for changes of theme. Fresh appeal is vital to retail display; the human eye tires rapidly and when it becomes accustomed to something, even something interesting, the brain will often cease to recognize that it exists.
Lighting, display, the effective use of color (a science in itself), a traffic pattern based on advanced selling principles; a floor plan which converts every possible square foot into sales space; a facade which acts as a magnet to customers; a flexible, economical, yet durable structure which encloses the space—these are the elements of modern store planning which the architect's ability must coordinate.
Designing a store, or remodeling one successfully, depends upon a partnership of skills supplied by two persons—the merchant and his architect. The merchant knows what he wants to sell and to whom. He supplies the facts about his operation, the objectives, and the financial means. The architect provides artistic, professional, and business experience. He translates the owner's information and needs into plans and structure, prepares specifications for the builder to follow, and, finally, supervises the construction of the store and the installation of its equipment. The result is an architecture unlike any other in America—the modern retail store.
Blue Prints • White Prints • Photo Copies
Pick-Up and Delivery Service
FOR BETTER SERVICE . . .
2 —CONVENIENT LOCATIONS— 2
K&E DEALER
Slide Rules • Measuring Tapes
Surveying Instruments, for Sale or Rent
Level Rods and Poles • Complete Stocks
LeRoy Lettering Instruments & Supplies
MARBAUGH Engineering Supply Co.
140 E. Wabash St., Indianapolis
3228 N. Keystone Ave., Indianapolis
MEIrose 7-3468
CLifford 1-7070
Residential and Commercial
SWIMMING POOLS
Compare Quality and Price!
We will show you all sizes and shapes to fit your requirements.
Oldest and Largest Pool Builders in Indiana
Quality Pools by
JAMES Construction Co.
Charter Member of the National Swimming Pool Institute
LIberty 5-0990
2321 N. Emerson
QUALITY FLUORESCENT LIGHTING FIXTURES FOR SCHOOLS OFFICES STORES FACTORIES
LOUISVILLE LAMP CO., INC.
LOUISVILLE 3, KENTUCKY
FOR QUICK SERVICE CALL JU 7-6094
INDIANA REPRESENTATIVE
THE H. H. HOMAN CO.
JOHN G. LEWE
H. H. (SANDY) HOMAN
COLUMBIA WOOSTER BLDG.
ROOM 107
CINCINNATI 27, OHIO
it pays to specify SARGENT® PAINTS
See for yourself . . . how Sargent paint products more than fill the bill for consistent quality . . . for commercial or private application.
the Sargent-Gerke Co.
323 W. 15th ST., INDIANAPOLIS, INDIANA
60 years of paint making experience makes you sure
BASIS OF AWARDS
Awards will be made on the basis of:
1. Excellence of design and skill in planning.
2. Practicality and esthetic appearance.
3. Presentation, clarity of expression and neatness.
USE OF SUBMISSIONS AND OWNERSHIP OF DRAWINGS
The Northern Indiana Chapter reserves the right to publicize, display, or authorize the reproduction of any and all designs submitted in this competition, giving credit to the designer in each instance. The original drawings will be returned C.O.D. to those submitting them, and at the expense and request of the participant, after the sponsors have utilized them for the above purposes.
ANONYMITY
Drawings shall contain no identifying marks. To the back of each entry shall be firmly attached a plain, opaque sealed envelope containing the name and address of the competitor, which must coincide with the name and address shown on the application form.
The envelope will be opened by the Professional Advisor, in the presence of the Jury, only after, and not until, all selections have been determined.
JUDGMENT AND JURY
The Jury will consist of one member chosen from each of the following committees of The Northern Indiana Chapter, namely: Paul F. Jernegan, Civic Planning; Robert J. Schultz, Honor Awards and Exhibits; Roy A. Worden, Education and Registration; William G. Rammel, Preservation of Historic Buildings; Nathan Carras, Public Relations and Publicity.
Should, for any reason, a member of the Jury be unable to serve, the Architectural Advisor, with the consent of the Chairman of the Jury, will appoint an alternate member from the aforementioned committees.
The Jury will meet approximately one week following the receipt of drawings to determine the successful entries. The Jury will select its own Chairman.
The decision of the majority of the members of the Jury will be final and binding in respect to any matter involved in the judgment of the competition.
PRESENTATION OF AWARDS
Names of the successful competitors will be announced immediately after the judging of the submissions has been concluded, and those not present will be notified by mail.
CORRESPONDENCE
As this is an open competition, no other information other than that presented in the program will be furnished. Communications or inquiries will not be answered.
LEGAL CONSIDERATIONS
This competition is subject to all laws of the United States and of its various states, and of Canada and its provinces and territories.
ISSUANCE OF PROGRAMS
Programs will be issued from March 16, 1962, when the competition opens, until March 30, 1962, the last date for submitting the drawings.
To enroll in this competition, contestants must fill out and sign the following form and forward it to the Architectural Advisor, postmarked not later than March 17, 1962.
THE PROBLEM
The subject of this competition will consist of the design of a Bus Shelter and Adjoining Telephone Booth. The shelter is to be located at scheduled bus stops and should therefore add to the attractiveness of the thoroughfare. It must also be borne in mind that it will be possible to adapt the design of this structure for other areas when needed.
The area provided for the shelter and booth shall not exceed 200 square feet and be not more than ten feet in height.
The materials for construction and design shall be left entirely up to the participant.
It is also desired that a space on the shelter be provided for the display of the city map as a guide for commuters, with no other advertising space permitted. The shelter and booth shall be adequately illuminated.
1. Required Drawings: All to be shown on a single sheet of illustration board 20" x 30" and to be rendered in any medium best suited to express your ideas.
2. Plot Plan of Shelter and Booth, showing street, walk, and any landscaping necessary to enhance your concept, at 1/4" = 1'-0".
3. Elevation and Cross Section of Shelter at 1/4" = 1'-0".
4. A perspective at as large a scale as sheet composition will permit.
(Continued on next page)
APPLICATION
1962 Civic Planning Competition for Community Appearance
Name ........................................... Age .... Sex ....
Address ........................................... City ........ State ....
Permanent Address ...........................................
Present Position .................. Firm or School ........
Signature ........................................... Date ....
"Who is Responsible for Ugliness?" is one of three questions which will be explored at the First Conference on Aesthetic Responsibility on Tuesday, April 3, at the Hotel Plaza, New York City.
The Conference is sponsored by the Design Committee of the New York Chapter, The American Institute of Architects, with the cooperation of the national AIA.
Committee Chairman Richard W. Snibbe, AIA, stated these reasons for the one-day conference: "The country's dynamic growth has not been matched by a similar dynamism in the design of its cities. Throughout the country there have been editorials and other expressions of citizen concern about the characterless buildings, disruptive highway routing, jumbles of signs and overhead wires, and general lack of attention to the social and aesthetic needs of people in our communities.
"Probably every sensitive person in the country has expressed himself in private on this matter. The conference will not only serve as a means for such expression, but will attempt to place responsibility on the shoulders of those persons who can do something to save the face of America from further ugliness."
Among speakers to participate in the Grand Ballroom of the Plaza will be Russell Lynes of Harper's Magazine; Dr. Leonard Carmichael, Secretary of the Smithsonian Institution; Dr. David W. Barry of the New York Board of Missions; Eric Larrabee of American Heritage Magazine; Herman Hillman, New York regional director of the Public Housing Administration, and Ernest Weissmann of the United Nations' Bureau of Social Affairs. The breadth of the cross-section being represented at the conference is further indicated by the inclusion of the noted psychiatrist Dr. John Schimel and artist Ad Reinhardt.
The conference will also be concerned with the questions "What are our Aesthetic Values?" and "What are the Aesthetic Responsibilities of Government, Business and Institutions?"
In an unusual departure for a meeting dealing with abstract concepts, the Conference on Aesthetic Responsibility is limiting each speaker to a maximum of 10 minutes. This will allow time for the presentation of many opinions, time for a question and answer period following each panel, and time for the audience, divided into small groups, to express opinions among themselves and determine how the prepared speeches and impromptu answers apply to them individually.
It is anticipated that the conference, which has drawn interest from architects as distant as Seattle, will lead to a plan of action that will place the responsibility for aesthetics in architectural design in the hands of capable groups and individuals. "We do not want the conference to conclude with a mere resolution," said Chairman Snibbe, "but hope to see it start a national movement toward a more beautiful country."
David H. Murdock, owner, Murdock Development Co., builder of Guaranty Bank Building, says: "With multiple forms and a systematic method of placing, stripping and reusing, we were able to cast one story in 5 days. Nothing can match the efficiency of modern concrete construction!"
Guaranty Bank Building. Architect: Charles G. Polacek, AIA, Phoenix, Arizona
Structural Engineer: W. T. Hamlyn, Phoenix. General Contractors: Henry C. Beck Co., Phoenix
For the tallest building in Phoenix they chose modern concrete!
The beautiful new 20-story Guaranty Bank Building rates two special distinctions. It is not only the tallest building in Phoenix, but it is also one of the tallest concrete buildings in the entire West.
Economy was the basic reason for choosing concrete. With concrete frames and light-weight panel joist floors, construction moved along with record speed—better than one floor per week. Further economies resulted from the multiple use of forms, and scheduling went along smoothly because concrete is always available on short order. It's there when you need it—another big saving in time and money.
Both front and back shear walls were faced with precast panels, with unusual exterior beauty achieved through the use of pure white portland cement. Even in the lobby of the bank, the functional beauty of concrete is seen in floors of gleaming terrazzo.
For impressive construction efficiency plus structural strength, beauty, and low maintenance costs, economy dictates the choice of concrete for structures of all dimensions and concepts.
PORTLAND CEMENT ASSOCIATION
612 Merchants Bank Bldg., Indianapolis 4, Indiana
A national organization to improve and extend the uses of concrete
FOR STRUCTURES... MODERN concrete

New ideas in INDIANA LIMESTONE SCREENS: machine fabricated, stylish, functional
Fresh, creative architectural interpretations can be economically achieved with Indiana Limestone. Striking new patterns and textures add a new dimension to America's prestige building material.
NOW AVAILABLE!
Architects are invited to write for the newly published Handbook on Limestone, a comprehensive text on limestone applications.
INDIANA LIMESTONE INSTITUTE
BOX 757, BLOOMINGTON, INDIANA
Staff Recommendation
That the City Centre District Energy Utility Bylaw No. 9895, Amendment Bylaw No. 9947 presented in the “City Centre District Energy Utility Bylaw No. 9895, Amendment Bylaw No. 9947” report dated December 20, 2018, from Director, Engineering be introduced and given first, second, and third readings.
John Irving, P.Eng. MPA
Director, Engineering
(604-276-4140)
REPORT CONCURRENCE
| ROUTED TO: | CONCURRENCE | CONCURRENCE OF GENERAL MANAGER |
|------------|-------------|-------------------------------|
| Development Applications Law | ☑️ | ☑️ |
| REVIEWED BY 1A/5B | INITIALS: | APPROVED BY CAO |
|-------------------|-----------|-----------------|
| | ☑️ | ☑️ |
Staff Report
Origin
In October 2015, Council endorsed the issuance of a Request for Expression of Interest (RFEOI) to identify a suitable utility partner to conduct a feasibility analysis to design, build, finance and operate a district energy utility (DEU) in the City Centre North area of Richmond, on the basis of the following guiding principles:
1. The DEU will provide end users with energy costs that are competitive with conventional energy costs based on the same level of service; and
2. Council will retain the authority of setting customer rates, fees and charges for DEU services.
LIEC staff issued a Request for Proposals (RFP) in September 2016, with an expanded City Centre scope, to the three proponents shortlisted under the RFEOI. As directed by the LIEC Board and endorsed by Council, LIEC executed a Memorandum of Understanding with the lead proponent, Corix Utilities Inc. (Corix), in February 2018.
As the City Centre DEU due diligence process has advanced, through rezoning applications and/or Official Community Plan (OCP) amendment application, six developments have committed to construct and transfer energy plants to the City or LIEC at no cost to the City or LIEC, so that LIEC can provide immediate service to these customers.
Council endorsed City Centre District Energy Utility Bylaw No. 9895 (CCDEU Bylaw) in September 2018, introducing a new district energy service area starting with five developments. In October 2018, Council amended the CCDEU Bylaw to include the Richmond Centre Mall development.
The purpose of this report is to recommend expansion of the service area to include the Polygon Fiorella development, located at 3551, 3571, 3591, 3611 and 3631 Sexsmith Road, associated with rezoning application RZ 17-778835.
This report supports Council’s 2014-2018 Term Goal #4 Leadership in Sustainability:
Continue advancement of the City’s sustainability framework and initiatives to improve the short and long term livability of our City, and that maintain Richmond’s position as a leader in sustainable programs, practices and innovations.
4.1. Continued implementation of the sustainability framework.
4.2. Innovative projects and initiatives to advance sustainability.
This report supports Council’s 2014-2018 Term Goal #6 Quality Infrastructure Networks:
*Continue diligence towards the development of infrastructure networks that are safe, sustainable, and address the challenges associated with aging systems, population growth, and environmental impact.*
6.1. Safe and sustainable infrastructure.
6.2. Infrastructure is reflective of and keeping pace with community need.
**Background**
**District Energy Utilities as Part of a Sustainable Community**
Richmond’s 2041 Official Community Plan (OCP) establishes a target to reduce community greenhouse gas (GHG) emissions 33 per cent below 2007 levels by 2020 and 80 per cent by 2050. The OCP also includes a target to reduce energy use 10 per cent below 2007 levels by 2020. Richmond’s Community Energy & Emissions Plan (CEEP) identifies that buildings account for about 64 per cent of energy consumption in Richmond, and 43 per cent of GHG emissions; residential developments especially are prime energy consumers in the community. Richmond is growing, with today’s population expected to increase by 35 per cent by 2041, and employment by 22 per cent. This growth will be accompanied by new building development, the majority of which will occur in Richmond’s City Centre.
Shifting to more sustainable energy systems for buildings will support the City’s climate and energy targets. Sustainable energy systems have the following characteristics:
- Use energy wisely – e.g. they are efficient, minimize consumption, minimize waste energy, and use low carbon sources of energy.
- Increase energy security by being reliable and resilient – e.g. they minimize price volatility, incorporate localized systems to avoid being completely dependent on external systems, and are adaptable to future technologies and energy sources.
- Have low-carbon intensity – e.g. they emit zero to low GHG emissions.
- Are cost-effective and do not result in unacceptable impacts (social, environmental or economic).
The City has identified district energy utilities (DEUs) as a key component of sustainable energy systems that can be implemented in neighbourhoods undergoing redevelopment. Some of the key benefits of a DEU are as follows:
- Reduced building capital and operations costs – DEUs replace the need for individual buildings to have their own boilers or furnaces, chillers or air conditioners, resulting in capital cost and maintenance cost savings.
- Efficiency – DEUs can operate more efficiently than typical stand-alone building mechanical systems, thereby reducing emissions and costs.
- Reduced emissions through renewable and waste energy sources – DEUs can use renewable sources such as sewer heat recovery, geothermal, biomass, combined heat and power generation, and other technologies with the potential for very low emissions. Moreover, DEUs can capture and use waste heat from industrial, commercial and institutional uses (e.g. ice surfaces and wastewater treatment plants).
- Reliability – DEUs use proven technology; most DEUs operate with a high reliability rate.
- Resiliency – District energy systems’ ability to make use of multiple different fuel sources allows DEUs to incorporate new energy source opportunities in the future, providing financial and environmental resiliency and mitigating the potential for volatility in thermal energy prices.
Many DEUs come to be identified by the energy source they use, such as geothermal, biomass, or solar. However, the most critical elements of a DEU are its customer base and its distribution network, and these should be the primary focus when establishing the partnerships and legal framework of a DEU. The specific system or technology used to generate the heat can be altered or switched out over the life of the DEU, depending on the best available technology at the time.
**District Energy in Richmond**
The City incorporated Lulu Island Energy Company Ltd. (LIEC) in 2013 to manage district energy utilities on the City’s behalf. LIEC currently owns and operates the Oval Village District Energy Utility (OVDEU) and the Alexandra District Energy Utility (ADEU), and advances new district energy opportunities. Attachment 1 indicates the current and planned future DEU areas throughout Richmond. LIEC has been recognized for excellence, leadership, innovation and sustainability, receiving thirteen awards since the company’s inception, ranging from the provincial to the international scale.
LIEC currently services eight buildings in the OVDEU service area, containing over 1,700 residential units. Energy is currently supplied from the two interim energy centres with natural gas boilers which combined provide 11 MW of heating capacity. When enough buildings are connected to the system, a permanent energy centre will be built which will produce low carbon energy. Currently the OVDEU is planned to harness energy from the Gilbert Trunk sanitary force main sewer through the implementation of the permanent energy centre in 2025. Over the next 30 years, the OVDEU system is anticipated to reduce GHG emissions by more than 52,000 tonnes of CO2 as compared to business as usual\(^1\). OVDEU is developed under a concession agreement with Corix. During the concession period (30 years), Corix will design, build, finance and operate the OVDEU and will supply energy services to LIEC; LIEC owns the assets and Council sets customer rates.
\(^1\) Assumed that all energy was provided for heating. The business-as-usual (BAU) assumed that 40% of the building heating load would be provided from electricity and the remaining 60% would be from gas make-up air units.
LIEC provides heating and cooling services to six residential buildings in the ADEU service area, the large commercial development at “Central at Garden City”, the Richmond Jamatkhana and Fire Hall #3, in total connecting over 1,450 residential units and over 1.6 million square feet of floor area. While some electricity is consumed for pumping and equipment operations, almost 100% of this energy is currently produced locally from the geo-exchange fields in the greenway corridor and West Cambie Park, and from highly efficient air source heat pumps. The backup and peaking natural gas boilers and cooling towers in the energy centre have operated for only a few days throughout the system’s operation to date. LIEC staff estimate that this has eliminated 2,340 tonnes of GHG emissions in the community.
The City has continued to secure commitments that new developments in potential DEU service areas will be “District Energy Ready” through rezoning, development and building permit processes. This means that new developments in appropriate potential service areas are built with in-building mechanical systems that are compatible with district energy connection for space heating and domestic water heating.
LIEC is continuing to work with Corix on the City Centre DEU due diligence process as per the executed MOU. This work includes the development and analysis of long term DEU servicing strategies for the City Centre area. Staff are expecting to report to Council on the outcomes of this due diligence process in early 2019.
As the City Centre DEU due diligence process has advanced, staff saw the opportunity to secure a customer base for the immediate implementation of greenhouse gas emissions reduction through the rezoning and/or OCP amendment application process. Six development applicants have committed to construct and transfer energy plants to the City or LIEC through either of these processes, so that LIEC can provide immediate service to these customers. The commitment for these developments to construct and transfer energy plants to the City or LIEC was subject to adoption of a DEU service area bylaw pertaining to these sites. LIEC and City staff subsequently developed the CCDEU Bylaw to secure commitments from the first five developments, which Council adopted in September 2018. Council amended the CCDEU Bylaw to include the Richmond Centre Mall development in October 2018.
The Polygon Fiorella rezoning application (RZ 17-778835) was granted third reading at the Public Hearing held on May 22, 2018. The applicant is actively working to fulfill the rezoning considerations and advance the associated Development Permit for the project to the City’s Development Permit Panel for consideration.
**Analysis**
The Polygon Fiorella development is estimated to consist of approximately 175,000 ft² of residential space.
Expanding the City Centre District Energy Utility service area to include a development of this type results in the following direct benefits:
- Immediate reduction of greenhouse gas (GHG) emissions compared to business as usual;
- Immediate connectivity opportunity with the future low carbon district energy system;
- Expansion of LIEC’s customer base under a positive stand-alone business case while the City Centre strategy develops;
- Increasing community’s energy resiliency; and
- Providing financial and environmental stability to customers, mitigating potential volatility in energy costs.
City and LIEC staff met with the developer’s representatives and, through the rezoning application, obtained their commitment to design and construct a low carbon energy plant, and transfer its ownership to the City or LIEC at no cost to the City or LIEC so that LIEC can provide immediate service to the customer and start immediate implementation of GHG emission reduction.
The LIEC Board of Directors has reviewed this opportunity and recommends expanding the City Centre District Energy Utility service area to include the development located at 3551, 3571, 3591, 3611 and 3631 Sexsmith Road.
In order to address this and other business development opportunities already approved by Council and the LIEC Board in the City Centre DEU service area, the ongoing growth and expansion of the ADEU and OVDEU, and the management of LIEC assets, the need for an additional staff member has been identified. LIEC staff have recommended to the Board that one regular full-time Assistant Project Manager be hired, with primary responsibility for supporting district energy approvals process coordination, grant funding application coordination, and the implementation and management of information and communication technology. This position will also allow existing staff to focus further on operational improvements for existing customers and infrastructure, and on further business expansion as directed by Council and the LIEC Board. The 2019 LIEC operating budget, approved by the Board and presented to Council at the January 9, 2019 Finance Committee, accounted for the cost of this new position. As part of the mandate given by Council to the LIEC Board to manage the business and affairs of LIEC, the Board will consider creation of this new staff position.
**Financial Impact**
None. The low carbon energy system will be designed and constructed by developers at their cost. Costs incurred by LIEC for engineering support and operations and maintenance will be funded from the existing and future LIEC capital and operating budgets. All LIEC costs will be recovered from customers’ fees.
Conclusion
Expanding the service area for the City Centre District Energy Utility Bylaw No. 9895 as proposed will allow for immediate provision of low carbon energy and in turn immediate avoidance of GHG emissions from a large development in Richmond’s City Centre area. It would also provide the new Polygon Fiorella development an immediate connectivity opportunity with the future district energy system which is currently in development. The project will increase the community’s energy resiliency by taking advantage of the district energy system’s ability to utilize different fuel sources and future fuel switching capability of the technology.
Peter Russell, MCIP RPP
Senior Manager, Sustainability and District Energy
(604-276-4130)
PRcd
Att. 1: Map of Current and Future District Energy Utility Areas in Richmond
Attachment 1 – Map of Current and Future District Energy Utility Areas in Richmond
District Energy Utility
Current and Future Developments
Legend
- OVAL VILLAGE DEU SERVICE AREA (OVDEU)
- ALEXANDRA DEU SERVICE AREA (ADEU)
- CITY CENTRE DEU SERVICE AREA (CCDEU)
- FUTURE DEU SERVICE AREAS
CNCL - 102
The Council of the City of Richmond enacts as follows:
1. The City Centre District Energy Utility Bylaw No. 9895 is further amended:
(a) by deleting Schedule A (Boundaries of Service Area) in its entirety and replacing with a new Schedule A attached as Schedule A to this Amendment Bylaw; and
(b) by deleting Schedule E (Energy Generation Plant Designated Properties) in its entirety and replacing with a new Schedule E attached as Schedule B to this Amendment Bylaw.
2. This Bylaw is cited as “City Centre District Energy Utility Bylaw No. 9895, Amendment Bylaw No. 9947”.
FIRST READING
SECOND READING
THIRD READING
ADOPTED
_____________________________ _______________________________
MAYOR CORPORATE OFFICER
CITY OF RICHMOND
APPROVED for content by originating dept.
APPROVED for legality by Solicitor
BRB
Schedule A to Amendment Bylaw No.9947
SCHEDULE A TO BYLAW NO. 9895
Boundaries of Service Area
Schedule B to Amendment Bylaw No. 9947
SCHEDULE E to BYLAW NO. 9895
Energy Generation Plant Designated Properties
**Table 1: Summary of the Results**

| Variable | Mean | SD | Median | Mode |
|----------|------|-----|--------|------|
| Age | 30.5 | 7.2 | 28 | 29 |
| Gender | 1.5 | 0.5 | 1 | 1 |
| Education | 14.5 | 2.5 | 14 | 14 |
| Income | 5000 | 1000 | 5000 | 5000 |

**Table 2: Descriptive Statistics for Continuous Variables**

| Variable | Mean | SD | Median | Mode |
|----------|------|-----|--------|------|
| Age | 30.5 | 7.2 | 28 | 29 |
| Income | 5000 | 1000 | 5000 | 5000 |

**Table 3: Chi-Square Test for Categorical Variables**

| Variable | Observed | Expected | Chi-Square | P-Value |
|----------|----------|----------|------------|---------|
| Gender | 15 | 15 | 0 | 1 |
| Education | 14 | 14 | 0 | 1 |

**Table 4: Correlation Matrix**

| Variable | Age | Income | Gender | Education |
|-----------|-----|--------|--------|-----------|
| Age | 1 | | | |
| Income | 0.6 | 1 | | |
| Gender | -0.2 | -0.1 | 1 | |
| Education | 0.4 | 0.3 | 0.2 | 1 |
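Statistics of the kind tabulated above can be reproduced with a short script. The sketch below uses Python's standard `statistics` module on a made-up sample — the survey data behind the tables are not given in the text, so the sample values here are purely illustrative.

```python
import statistics as st

# Hypothetical sample only -- the underlying survey data are not provided.
ages = [28, 29, 29, 24, 41, 35, 22, 28, 39, 30]
incomes = [4200, 5100, 4800, 3900, 6600, 5500, 4100, 4600, 6200, 5000]

mean_age = st.mean(ages)      # arithmetic mean
sd_age = st.stdev(ages)       # sample SD (n - 1 in the denominator)
median_age = st.median(ages)  # middle value (average of two for even n)
mode_age = st.mode(ages)      # first most-common value in the data

# Pearson correlation between age and income, straight from the definition
n = len(ages)
mean_inc = st.mean(incomes)
cov = sum((a - mean_age) * (i - mean_inc)
          for a, i in zip(ages, incomes)) / (n - 1)
r = cov / (sd_age * st.stdev(incomes))

print(mean_age, sd_age, median_age, mode_age, round(r, 2))
```

Note that `statistics.stdev` divides by n − 1 (the usual sample SD); `statistics.pstdev` would give the population version.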
BASIC ELECTRONICS AND DEVICES
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) Define Mobility
b) Explain the operation of Light Emitting Diode.
c) Define Load and Line Regulation.
d) What is the expression for the ripple factor when a capacitor filter is used with a half wave rectifier?
e) List out the advantages of negative feedback.
f) Show that $\mu = g_m \ r_o$ in a Field Effect Transistor.
g) Why are RC oscillators not used at high frequencies?
h) Define Q-Point.
i) Draw the simplified h-parameter model of a Bipolar Junction Transistor.
PART-B
2. a) A sample of germanium has an n type impurity concentration of $3 \times 10^{14}$ donors/cm$^3$ and p type impurity concentration of $4 \times 10^{14}$ acceptors/cm$^3$. Find the values of n and p at room temperature.
b) What are the diffusion and drift phenomena? Derive Einstein’s relationship.
c) In a Germanium semiconductor with step grading $N_D = 2000 N_A$ with $N_A$ corresponding to 1 part in $10^8$. Find the value of contact potential.
3. a) Derive an expression for Transition Capacitance of a diode.
b) Explain the operation of tunnel diode.
4. a) Derive the expressions for PIV, Conversion Efficiency and TUF of a Bridge rectifier.
b) Explain the operation of series and shunt voltage regulators and also mention their performance factors.
5. a) Explain the necessity of biasing a Transistor. Derive the Q-point of a self-bias circuit.
b) Explain the stabilization of Q-point using sensistor and thermistor.
6. a) Explain the construction and operation of depletion and enhancement mode MOSFET.
b) Draw and discuss the VI characteristics of a silicon controlled rectifier.
7. a) Derive an expression for frequency of oscillation of a RC Phase shift oscillator.
b) Quantitatively explain the effect of negative feedback on input and output resistances.
BASIC ELECTRONICS AND DEVICES
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) What is doping? Explain the necessity.
b) Differentiate between Avalanche and Zener breakdowns.
c) Explain the operation of series regulator.
d) Derive the PIV of a bridge rectifier.
e) What is the need for biasing? Explain.
f) Compare BJT and FET.
g) Draw the h parameter model of a common collector amplifier.
h) Compare CE, CB and CC amplifiers.
i) What is Barkhausen Criterion?
(2M+2M+2M+2M+3M+3M+2M+3M+3M)
PART-B
2. a) A sample of germanium has been added with $10^{14}$ donors/cm$^3$ and $7 \times 10^{13}$ acceptors/cm$^3$.
Find the values of n and p at room temperature if the resistivity is 60 $\Omega$-cm.
b) What is the electron gas theory description of metals? Derive an expression for current density in metals, and also derive an expression for current density in semiconductors.
(8M+8M)
3. a) Explain VI characteristics of a Zener diode.
b) Calculate the factor by which the reverse saturation current in Ge diode is multiplied when the temperature is increased from 25 to 70 degrees centigrade.
c) Explain the operation of photodiode.
(6M+5M+5M)
4. a) Derive the expressions for PIV, Ripple factor, Conversion Efficiency and TUF of a Full wave rectifier.
b) A sinusoidal voltage of amplitude 20V, 50Hz is applied to a half wave rectifier. If $R_L=1000\Omega$, $R_f=10\Omega$, $R_{\text{in}}=\infty$, Find the values of i) Conversion Efficiency ii) Ripple factor iii) Percent Regulation
(8M+8M)
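As a numerical cross-check for 4(b), here is a minimal sketch using the standard half-wave relations \(I_{dc} = I_m/\pi\) and \(I_{rms} = I_m/2\), treating the diode as ideal apart from its forward resistance \(R_f\) (a textbook simplification, not part of the question paper):

```python
import math

# Given in 4(b): Vm = 20 V, RL = 1000 ohm, Rf = 10 ohm (Rin treated as infinite).
Vm, RL, Rf = 20.0, 1000.0, 10.0

Im = Vm / (RL + Rf)       # peak load current
Idc = Im / math.pi        # average (DC) current of a half-wave rectifier
Irms = Im / 2.0           # RMS current of a half-wave rectifier

# ripple factor = sqrt((Irms/Idc)^2 - 1); for half-wave this is about 1.21
ripple = math.sqrt((Irms / Idc) ** 2 - 1)

# conversion efficiency = DC output power / AC input power
efficiency = (Idc ** 2 * RL) / (Irms ** 2 * (RL + Rf))

# percent regulation = Rf / RL * 100
regulation = Rf / RL * 100

print(round(ripple, 3), round(efficiency * 100, 1), round(regulation, 2))
```

With these values the ripple factor comes out near 1.21 and the conversion efficiency just above 40%, as expected for a half-wave circuit.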
5. a) Explain the input and output characteristics of a Common Emitter Configuration.
b) Draw the exact h parameter model of a Transistor suitable for any configuration. Derive expressions for voltage gain, current gain, input impedance and output impedance of an amplifier using exact h parameter model
(8M+8M)
6. a) Derive an expression for voltage gain of a Common Drain Amplifier.
b) Explain qualitatively the operation of field effect transistor.
(8M+8M)
7. a) Derive an expression for frequency of oscillation and condition for sustained oscillations of a Wien Bridge oscillator.
b) Enumerate the steps in the linear analysis of negative feedback amplifiers
(8M+8M)
BASIC ELECTRONICS AND DEVICES
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) Explain the different topologies in negative feedback amplifiers.
b) Define cut-in voltage of a diode.
c) Define ripple factor, rectification efficiency of a rectifier.
d) What is the purpose of using a filter in a power supply unit?
e) What are the various modes of operation of an SCR?
f) State the advantages of push-pull amplifiers.
g) Draw the low frequency model of a FET.
h) Show that gain reduces with negative feedback.
i) Differentiate between an oscillator and an amplifier.
(3M+2M+3M+2M+3M+2M+2M+3M+2M)
PART-B
2. a) What is the Hall effect? Derive an expression for the Hall coefficient.
b) Find the resistivity of intrinsic silicon and Germanium at room temperature. (8M+8M)
3. a) Explain the operation of i) PIN diode ii) Varactor diode
b) Explain the VI characteristics of pn junction diode. Discuss about the effect of temperature on diode characteristics. (8M+8M)
4. a) Derive the expressions for ripple factor of a full wave rectifier using capacitor filter.
b) Explain how Zener diode acts as a regulator. (8M+8M)
5. a) What are the various current gains in a transistor? Derive the relationship between them.
b) Derive the simplified h parameter model of a transistor and state its advantages. Derive an expression for the voltage gain of CE, CB and CC amplifiers using the simplified h parameter model.
c) Derive the necessary condition to avoid thermal runaway in a transistor. (6M+5M+5M)
6. a) Explain the two transistor analogy of an SCR.
b) Explain about specifications of a Thyristor.
c) Perform DC and AC analysis of a common source amplifier. (6M+5M+5M)
7. a) Quantitatively explain the effect of negative feedback on Band width and sensitivity.
b) Explain the operation of push-pull power amplifier. (8M+8M)
BASIC ELECTRONICS AND DEVICES
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) Draw the energy band diagram of an Insulator, Semiconductor and a metal.
b) What is depletion region?
c) Define peak inverse voltage of a rectifier.
d) State the advantages of a bridge rectifier.
e) What are various regions of operation of a BJT?
f) Explain early effect.
g) What is thermal run away?
h) Draw the electrical equivalent of a crystal.
i) What is pinch off voltage?
PART-B
2. a) What is the energy band theory description of elements? Draw the energy band diagrams of a metal, an insulator and a semiconductor.
b) Derive an expression for continuity equation.
c) Find the concentration of electrons and holes in a p type Ge semiconductor at 300K if the conductivity is 60 $(\Omega \text{-cm})^{-1}$.
3. a) Explain the operation of Tunnel diode
b) Explain various current components in a diode.
4. a) What are the various filter circuits used in rectifiers? Compare their performance.
b) Quantitatively explain the operation of half wave rectifier.
5. a) Explain how transistor acts as a switch.
b) Analyze CE with $R_C$ circuit using h-parameter model.
6. a) Explain the operation of a Field effect Transistor. Derive an expression for pinch-off voltage of a FET.
b) Explain the operation of IGBT.
7. a) Draw the different topologies in a negative feedback amplifier. Enumerate the steps in the analysis of negative feedback amplifiers.
b) What is an oscillator? Derive necessary condition for the oscillator to produce oscillations. Explain about amplitude and frequency stability of oscillators.
COMPLEX VARIABLES AND STATISTICAL METHODS
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) Write Cauchy Riemann equations in polar form.
b) Find ‘a’ and ‘b’ if \( f(z) = \left( x^3 - 2xy + ay^2 \right) + i(bx^2 - y^2 + 2xy) \) is analytic.
c) Write the test statistic for the differences of means of two large samples.
d) Expand \( f(z) = \frac{e^{2z}}{(z-1)^3} \) about \( z=1 \).
e) Determine the poles of \( \tan z \) and find the residue at the simple poles.
f) Find the bilinear transformation whose fixed points are 1 and 1.
g) Three masses are measured as 62.34, 20.84 and 35.97 kg with standard deviations 0.54, 0.21 and 0.46 kg respectively. Find the mean and standard deviation of the sum of the masses.
h) A sample of size 10 was taken from a population; the standard deviation of the sample is 0.3. Find the maximum error with 99% confidence.
\((2M+3M+2M+3M+3M+3M+3M+3M)\)
PART-B
2. a) Find the Analytic function whose real part is \( u(x, y) = \frac{\sin 2x}{\cosh 2y + \cos 2x} \).
b) Show that the function \( f(z) = z\overline{z} \) is differentiable at the origin but not analytic there.
3. a) Evaluate \( \int_{c} \frac{-ze^{2z}}{(z-\pi)^3} dz \), where \( c \) is a circle of radius 4 with centre at origin, by Cauchy integral formula.
b) Obtain Laurent’s expansion for \( f(z) = \frac{1}{(z+2)(z+1)} \) in \( |z| > 2 \).
4. a) Evaluate \[ \int_{0}^{2\pi} \frac{d\theta}{5 + 4\cos\theta} \]
b) Evaluate \[ \int_{0}^{\infty} \frac{\cos ax \, dx}{(x^2 + a^2)^2} \]
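One standard route for 4(a), sketched here as a worked check: the substitution \( z = e^{i\theta} \) turns the trigonometric integral into a contour integral over the unit circle, which the residue theorem then evaluates.

```latex
% Put z = e^{i\theta}, so \cos\theta = (z + z^{-1})/2 and d\theta = dz/(iz).
\[
\int_{0}^{2\pi}\frac{d\theta}{5+4\cos\theta}
  = \oint_{|z|=1}\frac{1}{5+2\left(z+\tfrac{1}{z}\right)}\,\frac{dz}{iz}
  = \frac{1}{i}\oint_{|z|=1}\frac{dz}{(2z+1)(z+2)}
\]
\[
\text{Only } z=-\tfrac{1}{2} \text{ lies inside } |z|=1,\qquad
\operatorname*{Res}_{z=-1/2}\frac{1}{(2z+1)(z+2)}
  = \frac{1}{2(z+2)}\bigg|_{z=-1/2} = \frac{1}{3},
\]
\[
\text{so }\int_{0}^{2\pi}\frac{d\theta}{5+4\cos\theta}
  = \frac{1}{i}\cdot 2\pi i\cdot\frac{1}{3} = \frac{2\pi}{3},
\]
% which agrees with the shortcut 2\pi/\sqrt{a^2-b^2} for a=5, b=4.
```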
5. a) Discuss the transformation \( w = \cos z \).
b) Find the Bilinear transformation which maps \( z = -1, 0, 1 \) onto \( w = 0, i, 3i \).
6. a) A random sample of size 64 is taken from a normal population with mean 51.4 and S.D. 6.8. What is the probability that the sample mean will (i) exceed 52.9, (ii) be less than 50.6, (iii) lie between 50.5 and 52.3?
b) Find the 95% confidence limits for the mean of the population from which the following sample was taken: 15, 17, 10, 18, 16, 9, 7, 11, 13, 14.
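A quick numerical sketch of 6(b) using Python's stdlib `statistics` module; the t critical value (2.262 for 9 degrees of freedom, two-sided 95%) is taken as given from tables rather than computed:

```python
import statistics as st

# Sample from 6(b)
sample = [15, 17, 10, 18, 16, 9, 7, 11, 13, 14]

n = len(sample)
mean = st.mean(sample)   # sample mean
s = st.stdev(sample)     # sample SD, n - 1 in the denominator

# Two-sided 95% t critical value for n - 1 = 9 degrees of freedom (from tables)
t_crit = 2.262

margin = t_crit * s / n ** 0.5
lower, upper = mean - margin, mean + margin
print(round(lower, 2), round(upper, 2))
```

The interval comes out at roughly (10.4, 15.6) around the sample mean of 13.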
7. a) A college management claims that 75% of all single women appointed to teaching jobs get married and quit the job within two years. Test this hypothesis at the 5% level of significance if, among 300 such teachers, 212 got married within 2 years and quit their jobs.
b) In a test given to two groups of students, the marks obtained are as follows:
| First Group | 18 | 20 | 36 | 50 | 49 | 36 | 34 | 49 | 41 |
|-------------|----|----|----|----|----|----|----|----|----|
| Second group| 29 | 28 | 26 | 35 | 30 | 44 | 46 | -- | -- |
Examine the significant difference between the means of the marks of the two group at 5% level.
COMPLEX VARIABLES AND STATISTICAL METHODS
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) Define harmonic function and give an example
b) If c is a simple closed curve then evaluate \( \int_c (\sin 3z + z^4 + e^z) dz \)
c) Write test statistic for the differences of means of two small samples
d) Find the residue of \( f(z) = \frac{e^{2z}}{(z-1)^3} \) at \( z=1 \)
e) Determine the poles of \( \tan z \) and find the residue at simple pole
f) Find the bilinear transformation whose fixed points are i and -i
g) Define two types of Errors in sampling.
h) A sample of size 10 was taken from a population; the S.D. of the sample is 0.3. Find the maximum error with 99% confidence.
PART-B
2. a) Find the Analytic function whose imaginary part is \( v(x, y) = \frac{2\sin x \sin y}{\cosh 2y + \cos 2x} \)
b) Show that the function \( f(z) = \sqrt{|xy|} \) is not analytic at origin although CR equations are satisfied at the point
3. a) Evaluate \( \int_c \frac{ze^{2z}}{(z-2)^2} dz \) where \( c \) is the circle with radius 3 by Cauchy integral formula
b) Obtain Laurent’s expansion for \( f(z) = \frac{1}{(z+2)(z+1)} \) in \( 1 < |z| < 2 \)
4. a) Evaluate \[ \int_{0}^{2\pi} \frac{d\theta}{5 - 4\sin \theta} \]
b) Evaluate \[ \int_{0}^{\infty} \frac{dx}{(x^6 + 1)} \]
5. a) Discuss the transformation \( w = \sin z \)
b) Find the Bilinear transformation which maps \( z = \infty, i, 0 \) onto \( w = -1, -i, 1 \)
6. a) Show that the sample mean is an unbiased estimator of the population mean.
b) A random sample of size 100 is taken from a normal population with mean 76 and S.D. 16. What is the probability that the sample mean will (i) exceed 78, (ii) be less than 60, (iii) lie between 75 and 78?
7. a) The mean production of rice in a sample of 100 fields is 200 lb per acre with S.D. 10 lb. Another sample of 150 fields gives mean 220 lb and S.D. 11 lb. Find whether the two results are consistent at the 1% level.
b) The nine items of a sample had the following values: 45, 47, 50, 52, 48, 47, 49, 53 and 51. Does the mean of the nine items differ significantly from the population mean of 45.57 at the 1% level?
COMPLEX VARIABLES AND STATISTICAL METHODS
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) Find the invariant points of \( w = \frac{1+z}{1-z} \).
b) Find the Harmonic conjugate of \( \log \sqrt{x^2 + y^2} \).
c) Evaluate \( \int_c \frac{dz}{z-3} \), where \( c : |z-2| = 5 \).
d) Find the residue of \( f(z) = \frac{e^{2z}}{(z-2)^2} \) at \( z=2 \).
e) Determine and classify the singular point of \( f(z) = z^2 \sin \left( \frac{1}{z} \right) \).
f) Write any three characteristics of Normal Distribution.
g) Define Hypothesis, Critical region and Standard error.
h) If we can assert with 95% confidence that the maximum error is 0.05 and \( p = 0.2 \), find the sample size.
(2M+3M+2M+3M+3M+3M+3M+3M)
PART-B
2. a) Find the Analytic function given that \( v + u = \frac{\sin 2x}{\cosh 2y - \cos 2x} \).
b) Show that the function \( f(z) = \frac{x^3 y(y-ix)}{x^6 + y^2} \) is not analytic at origin although CR equations are satisfied at the point.
3. a) Evaluate \( \int_C \frac{e^z}{(z^2 + 1)} dz \) where \( C \) is the unit circle by Cauchy integral formula.
b) Obtain Laurent’s expansion for \( f(z) = \frac{1}{(z+2)(z+1)^2} \) in \( |z| < 1 \).
4. a) Evaluate \( \int_0^{2\pi} \frac{d\theta}{3 - 2\sin\theta} \).
b) Evaluate \( \int_0^\infty \frac{dx}{x^4 + 1} \).
5. a) Discuss the transformation \( w = z^2 \).
b) Find the Bilinear transformation which maps \( z = \infty, i, 0 \) on to \( w = 0, i, \infty \).
6. a) Show that Sample variance is not the unbiased estimator of population variance.
b) A random sample of size 36 is taken from a normal population with mean 155 and S.D 15. What is the probability that the mean of the sample will (i) exceed 157, (ii) be less than 160, (iii) lie between 155 and 158?
7. a) A sample of 450 items is taken from a population with mean 30 and S.D 20. Test whether the sample has come from the population with mean 29. Also calculate 95% confidence limits of the population mean.
b) Two samples are drawn from two normal populations from the following data, test whether the two samples have the same variance at 5% level.
| Sample I | 60 | 65 | 71 | 74 | 76 | 82 | 85 | 87 | -- | -- |
| Sample II | 61 | 66 | 67 | 85 | 78 | 63 | 85 | 86 | 88 | 91 |
COMPLEX VARIABLES AND STATISTICAL METHODS
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
4. Probability tables Normal, t, F and chi square tables are required
PART-A
1. a) Find the invariant points of \( w = \frac{1}{z - 2i} \).
b) Find the Harmonic conjugate of \( x^2 - y^2 + xy \).
c) Evaluate \( \int_C \frac{3dz}{z+1} \), where \( C : |z| = 2 \).
d) Evaluate \( \int_C z e^z dz \) where \( C \) is the unit circle by residue theorem.
e) Determine and classify the singular point of \( f(z) = \sin\left(\frac{1}{z}\right) \).
f) Write any three characteristics of chi square Distribution.
g) Write the test statistic for testing the equality of two population means for small samples and large samples.
h) What is the maximum error one can expect to make with probability 0.90, when using the mean of a random sample of size 64 to estimate the population mean, given \( \sigma = 1.6 \)?
(2M+3M+2M+3M+3M+3M+3M+3M)
PART-B
2. a) Find the Analytic function given that \( v + u = \frac{2 \sin 2x}{e^{2y} + e^{-2y} - 2 \cos 2x} \).
b) Prove that an analytic function with constant real part is constant.
3. a) Evaluate \[ \int_C \frac{ze^z}{(z-a)^2} dz \] where the point ‘a’ lies within the closed curve \( C \) by Cauchy integral formula.
b) Obtain Laurent’s expansion for \( f(z) = \frac{1}{(z+2)(z+1)^2} \) in \( |z+1| > 1 \).
4. a) Evaluate \( \int_0^{2\pi} \frac{d\theta}{3 + 2\cos\theta} \).
b) Evaluate \( \int_0^\pi \frac{x^2 dx}{(x^2 + 1)^2} \).
5. a) Discuss the transformation \( w = e^z \).
b) Find the Bilinear transformation which maps \( z = 1, i, -1 \) onto \( w = i, 0, -i \).
6. a) Write a short note on properties of Estimators.
b) A random sample of size 50 is taken from a normal population with mean 55 and S.D 15. What is the probability that the mean of the sample will (i) exceed 57, (ii) be less than 60, (iii) lie between 53 and 58?
7. a) A college management claims that 80% of all single women appointed to teaching jobs get married and quit the job within two years. Test this hypothesis at the 5% level of significance if, among 200 such teachers, 112 got married and quit their jobs within two years.
b) Two investigators study the income of a group of persons by the method of sampling. The following results were obtained:
| Investigator | Poor | Middle | Well |
|--------------|------|--------|------|
| A | 160 | 30 | 10 |
| B | 140 | 120 | 40 |
Show that the sampling technique of at least one of the investigators is suspected at 5% level.
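A stdlib-only check of the contingency-table arithmetic in 7(b): expected counts come from the row and column totals, and the resulting statistic is compared against the 5% chi-square critical value for \( (2-1)(3-1) = 2 \) degrees of freedom (5.991):

```python
# 7(b): chi-square test of homogeneity on the 2 x 3 investigator table
obs = [[160, 30, 10],
       [140, 120, 40]]
row = [sum(r) for r in obs]            # 200, 300
col = [sum(c) for c in zip(*obs)]      # 300, 150, 50
total = sum(row)                       # 500

chi2 = sum((obs[i][j] - row[i] * col[j] / total) ** 2
           / (row[i] * col[j] / total)
           for i in range(2) for j in range(3))

print(round(chi2, 2))   # ≈ 55.56 >> 5.991, so reject at the 5% level
```

Since 55.56 far exceeds 5.991, the sampling technique of at least one investigator is indeed suspect.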
II B. Tech I Semester Regular Examinations, Dec - 2014
ELECTRICAL CIRCUIT ANALYSIS - II
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) Given that voltage $V_{lm} = 110 \angle 30^\circ$ V in a balanced 3-phase system. Find $V_{an}$ and $V_{cn}$ assuming a positive phase sequence (ABC).
b) State the two ways in which phases of a three phase supply can be interconnected to reduce the number of conductors used compared with three single-phase systems.
c) A circuit consists of a resistor connected in series with a 0.5 $\mu$F capacitor and has a time constant of 12 milli-sec. Determine the value of the resistor and capacitor voltage at 7 milli-sec after connecting the circuit to a 10 V supply.
d) Find the admittance parameters for the network shown below Figure 1.
e) Write the condition for symmetry and reciprocity with reference to h-parameters?
f) The voltage and current at the terminals of a circuit are $v(t) = 80 + 120 \cos 120\pi t + 60 \cos (360\pi t - 30^\circ)$ and $i(t) = 5 \cos (120\pi t - 10^\circ) + 2 \cos (360\pi t - 60^\circ)$. Find the average power absorbed by the circuit.
g) For the circuit shown below Figure 2, find $i_L(0^+)$, $v_C(0^+)$ and $v_R(0^+)$.
h) List any three properties of Fourier Transform?
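Item 1(c) above is a one-line transient computation: with \( \tau = RC \), the resistor follows directly from the given time constant, and the charging capacitor obeys \( v_C(t) = V(1 - e^{-t/\tau}) \). A numerical sketch with the values given in the question:

```python
import math

tau = 12e-3                  # given time constant, s
C = 0.5e-6                   # given capacitance, F
R = tau / C                  # series resistance: 24 kOhm

V = 10.0                     # supply voltage, V
v_c = V * (1 - math.exp(-7e-3 / tau))   # capacitor voltage at t = 7 ms

print(R, round(v_c, 2))      # 24 kOhm, ≈ 4.42 V
```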
 
PART-B
2. a) A three-phase, three-wire, ABC system, with an effective line voltage of 120 V, has three impedances of $5 \angle 45^\circ \Omega$ in a delta connection. Determine the line currents and draw the voltage and current phasor diagram.
b) Explain, with a neat sketch, how three-phase power is measured in a delta-connected load using two wattmeters.
3. a) A three-phase, three-wire, ABC system, with line voltage $V_{BC} = 311.1 \angle 0^\circ$ V has line currents $I_A = 61.5 \angle 116.6^\circ$ A, $I_B = 61.2 \angle -48^\circ$ A and $I_C = 16.1 \angle 218^\circ$ A. Find the readings of watt meters in lines i) A and B, ii) B and C, and iii) A and C.
b) A balanced three-phase star-connected generator with $V_p = 220$ V supplies an unbalanced star-connected load with $Z_{AN} = 60 + j80 \Omega$, $Z_{BN} = 100 - j120 \Omega$, and $Z_{CN} = 30 + j40 \Omega$. Find the total complex power absorbed by the load.
4. a) An un-charged 80 μF capacitor is connected in series with a 1 kΩ resistor and is switched across a 110V supply. Determine the time constant of the circuit and the initial value of current flowing. Also, determine the value of current flowing after i) 40 ms and ii) 80 ms.
b) Referring to the circuit shown in Figure 3, the switch is closed at \( t = 0 \). i) Determine equations for \( i_L \) and \( v_L \). ii) At \( t = 300 \text{ ms} \), open the switch and determine equations for \( i_L \) and \( v_L \) during the decay phase. iii) Determine voltage and current at \( t = 100 \text{ ms} \) and at \( t = 350 \text{ ms} \).
iv) Sketch \( i_L \) and \( v_L \).

5. a) Obtain the y-parameters for the network shown in Figure 4.
b) Derive the relationship between the hybrid and Z-parameters of a two-port network.

6. a) List the properties of positive real function and test whether the following function is positive real or not?
\[ F(s) = \frac{s^2 + 4}{s^3 + 3s^2 + 3s + 1} \]
b) Determine the Foster I form of realization of the RC impedance function.
\[ Z(s) = \frac{(s + 1)(s + 3)}{s(s + 2)(s + 4)} \]
7. a) Find the Fourier series of the square wave shown in Figure 5. Plot the amplitude and phase spectra.
b) Using the Fourier transform method in Figure 6, find \( i_0(t) \), when \( i_s(t) = 10 \sin 2t \text{ A} \).
 
II B. Tech I Semester Regular Examinations, Dec - 2014
ELECTRICAL CIRCUIT ANALYSIS - II
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) Write the relationships between line and phase currents and line and phase voltages for a star and delta connected system.
b) Write the differences between balanced and unbalanced 3-phase systems.
c) A capacitor of capacitance $C$ farads is connected in series with a resistor of $R$ ohms and is switched across a constant voltage DC supply of $V$ volts. After a time of $t$ seconds, the current flowing is $i$ amperes. Write the expression for voltage drop across the resistor at time $t$ seconds? What is the expression for final value of capacitor voltage?
d) Find the impedance parameters for the network shown below Figure 1.
e) Write the condition for symmetry and reciprocity with reference to $y$ and $h$-parameters?
f) The voltage and current at the terminals of a circuit are $v(t) = 80 + 120 \cos(120\pi t + 60^\circ)$ and $i(t) = 5 \cos(120\pi t - 10^\circ) + 2 \cos(360\pi t - 60^\circ)$. Find the average power absorbed by the circuit.
g) For the circuit shown above Figure 2, find $i_L(0^+)$, $v_C(0^+)$ and $v_R(0^+)$.
h) List any three properties of Fourier Transform?
PART-B
2. a) In a three phase balanced load, each arm consists of a resistor of 10 ohms, an inductance of 0.6 H and a capacitor of 130 $\mu$F connected in series. The supply is a balanced 3-phase 400 V, 50 Hz. Calculate the line current, total power consumed in the load when the three arms are connected in star and delta.
b) Three identical coils, each of resistance 10 $\Omega$ and inductance 42 mH are connected (i) in star and (ii) in delta to a 415V, 50 Hz, 3-phase supply. Determine the total power dissipated in each case.
3. a) A four-wire star-star circuit has $V_{an} = 120 \angle 120^\circ$, $V_{bn} = 120 \angle 0^\circ$, $V_{cn} = 120 \angle -120^\circ$ V. If the impedances are $Z_{an} = 20 \angle 60^\circ$, $Z_{bn} = 30 \angle 0^\circ$ and $Z_{cn} = 40 \angle 30^\circ$ $\Omega$, find the current in the neutral line.
b) For the circuit shown in figure 3, the line voltage is 240 V. Take $V_{ab}$ as reference and determine the following: i) phase currents, ii) line currents, iii) total power absorbed in the load. Also draw the phasor diagram.
\[ Z_{AB} = 25 \Omega \]
\[ Z_{BC} = 12 \angle 60^\circ \Omega \]
\[ Z_{CA} = 16 \angle -30^\circ \Omega \]
Figure 3
4. a) Derive an expression for the current in an RL circuit when it is excited by a unit step voltage.
b) In a series RLC circuit $L=0.5$ H, and $C=2$ F. A DC voltage of 20 V is applied at $t=0$. Obtain an expression for current $i(t)$ in the circuit, when (i) $R=3$ Ω, (ii) $R=4$ Ω, (iii) $R=6$ Ω.
5. a) Obtain the y-parameters for the network shown in Figure 4.
b) Find the hybrid parameters of the network shown in Figure 5.


6. a) Find the first Foster form of LC network for the impedance function
\[ Z(s) = \frac{s(s^2 + 2)}{(s^2 + 1)(s^2 + 3)} \]
b) Obtain the Cauer form I realization of
\[ F(s) = \frac{2(s+1)(s+3)}{s(s+2)} \]
7. a) A series RLC circuit has $R=10$ Ω, $L=2$ mH, and $C=40$ μF. Determine the effective current and average power absorbed when the applied voltage is $v(t) = 100 \cos 1000t + 50 \cos 2000t + 25 \cos 3000t$ V.
b) Using the Fourier transform method, find the current $i_0(t)$ in the circuit shown in Figure 6. Given that $i_s(t) = 20 \cos 4t$ A.

II B. Tech I Semester Regular Examinations, Dec - 2014
ELECTRICAL CIRCUIT ANALYSIS - II
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) Write the formulae for determining the active and reactive power dissipated in the load of a three-phase balanced system.
b) What are the reasons for unbalance of phases in a 3-phase system?
c) For the circuit shown below Figure 1, if \( v = 15e^{-3t} \) V and \( i = 0.5e^{-3t} \) A, \( t>0 \), find R & C.
d) Write the set of equations which describe the admittance parameters and explain each term?
e) Write the condition for symmetry and reciprocity with reference to transmission and Z-parameters?
f) List any three properties of Fourier Transform?
g) The voltage and current at the terminals of a circuit are \( v(t) = 80 + 120 \cos(120\pi t + 60^\circ) + 60 \cos(360\pi t - 30^\circ) \) and \( i(t) = 5 \cos(120\pi t - 10^\circ) + 2 \cos(360\pi t - 60^\circ) \). Find the rms value of the current and average power absorbed by the circuit.
h) For the circuit shown Figure 2, find \( i_L(0) \), \( v_C(0) \) and \( v_R(0) \).
 
PART-B
2. a) Show that the total power in a 3-phase, 3-wire system using the two-wattmeter method of measurement is given by the sum of the wattmeter readings. Draw a connection diagram and phasor diagram. Also derive the expression for power factor in terms of two wattmeter readings.
b) Each phase of a delta-connected load comprises a resistance of 30Ω and an 80 µF capacitor in series. The load is connected to a 400V, 50 Hz, 3-phase supply. Calculate (i) the phase current, (ii) the line current, (iii) the total power dissipated and (iv) the kVA rating of the load. Draw the complete phasor diagram for the load. (8M+8M)
3. A three-phase 400 V star-connected balanced supply is connected to a star-connected three-phase load of $15 \angle 0^\circ$ Ω, $12 \angle -20^\circ$ Ω, and $18 \angle 10^\circ$ Ω. Find the line currents, power and the current in the neutral for (i) a four-wire system (ii) a three-wire system. Assume zero neutral impedance.
(16M)
4. a) Derive an expression for voltage across ‘R’ in a series R-C circuit excited by a unit step voltage. Assume zero initial conditions.
b) i) If the switch in Figure 3, has been open for a long time and is closed at $t = 0$, find $V_o(t)$.
ii) In Figure 3, suppose that the switch has been closed for a long time and is opened at $t= 0$. Find $V_o(t)$.

5. a) Find the transmission parameters of the network shown Figure 4.
b) Determine h-parameters of a two-port network whose z parameters are $Z_{11} = Z_{22} = 6$ ohms and $Z_{12} = Z_{21} = 4$ ohms.

6. a) List the properties of positive real function and test whether the following function is positive real or not? $F(s) = \frac{s(s^2 + 6)}{(s^2 + 3)^2}$
b) Realize the driving point impedance function $Z(s) = \frac{(s + 2)(s + 5)}{(s + 1)(s + 3)}$ in Foster form – II.
(8M+8M)
7. a) The full-wave rectified sinusoidal voltage in Figure 5 is applied to the low-pass filter in Figure 6. Obtain the output voltage $V_o(t)$ of the filter.
b) Calculate the Fourier series for the function shown Figure 7.
  
II B. Tech I Semester Regular Examinations, Dec - 2014
ELECTRICAL CIRCUIT ANALYSIS - II
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) Name three advantages of three-phase systems over single-phase systems
b) Write the formula for the power factor in a balanced 3-phase system when the power is measured by the two-wattmeter method.
c) For the circuit shown below Figure 1, if \( v = 20e^{-4t} \) V and \( i = 0.5e^{-4t} \) A, \( t>0 \), find time constant of the circuit
d) What is meant by the time constant of an R-L circuit? What are its applications in power systems?
e) Write the conditions for symmetry and reciprocity with reference to h-parameters?
f) The voltage and current at the terminals of a circuit are \( v(t) = 80 + 120 \cos(120\pi t) + 60 \cos(360\pi t - 30^\circ) \) and \( i(t) = 5 \cos(120\pi t - 10^\circ) + 2 \cos(360\pi t - 60^\circ) \). Find the r.m.s value of the current and average power absorbed by the circuit.
g) For the circuit shown below Figure 2, find \( i(0^+) \), \( v_C(0^+) \) and \( v_R(0^+) \).
h) List any three properties of Fourier Transform?
 
PART-B
2. a) Explain the method of measuring reactive power in a 3-phase balanced system using a single watt meter method.
b) The two-wattmeter method gives \( P_1 = 1200 \) W and \( P_2 = -400 \) W for a three-phase motor running on a 240-V line. Assume that the motor load is star connected and that it draws a line current of 6 A. Calculate the pf of the motor and its phase impedance.
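The two-wattmeter relations used in 2(b) are \( \tan\theta = \sqrt{3}\,(P_1 - P_2)/(P_1 + P_2) \) for the load angle and, for a star-connected load, \( Z_p = (V_L/\sqrt{3})/I_L \). A numerical sketch:

```python
import math

P1, P2 = 1200.0, -400.0      # wattmeter readings, W
VL, IL = 240.0, 6.0          # line voltage (V) and line current (A)

theta = math.atan(math.sqrt(3) * (P1 - P2) / (P1 + P2))
pf = math.cos(theta)                  # power factor ≈ 0.277 lagging
Zp = (VL / math.sqrt(3)) / IL         # phase impedance ≈ 23.09 ohm

print(round(pf, 3), round(Zp, 2), round(math.degrees(theta), 1))
```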
3. a) For the circuit shown figure 3, $Z_a = 6 - j8 \Omega$, $Z_b = 12 + j9 \Omega$, and $Z_c = 15 \Omega$. Find the line currents $I_a$, $I_b$, and $I_c$.
b) A balanced three-phase star-connected generator with $V_p = 220 \text{ V}$ supplies an unbalanced star-connected load with $Z_{AN} = 60 + j80 \Omega$, $Z_{BN} = 100 - j120 \Omega$, and $Z_{CN} = 30 + j40 \Omega$. Find the total complex power absorbed by the load.

4. a) Find the voltage across the capacitance for $t > 0$ in the circuit shown in Figure 4.
b) For the circuit shown in figure 5, calculate (i) $i_L(0+)$, $v_C(0+)$, and $v_R(0+)$, (ii) $i_L(\infty)$, $v_C(\infty)$, and $v_R(\infty)$.


5. a) Determine the z-parameters for the circuit shown below Figure 6.
b) Determine the y-parameters for the circuit shown Figure 7.


6. a) Synthesize $F(s) = \frac{2(s+1)(s+4)}{(s+2)(s+6)}$ in two Cauer forms?
b) List the properties of positive real function and test whether the following function is positive real or not? $F(s) = \frac{s^2 + 4}{s^3 + 6s^2 + 6s + 2}$.
7. a) A series RL circuit in which $R = 5 \Omega$ and $L = 20 \text{ mH}$ has an applied voltage $v = 100 + 50 \sin \omega t + 25 \sin 3\omega t (\text{V})$, with $\omega = 500 \text{ rad/s}$. Find the current and the average power.
b) Obtain the Fourier transform of the “switched-on” exponential function shown Figure 8.
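In 7(a) above, each harmonic of the applied voltage sees its own impedance \( Z_n = \sqrt{R^2 + (n\omega L)^2} \); the mean-square current adds the dc term and half of each squared peak, and all the average power is dissipated in R. A numerical sketch:

```python
import math

R, L, w = 5.0, 20e-3, 500.0
terms = [(100.0, 0), (50.0, 1), (25.0, 3)]   # (peak voltage, harmonic order)

I_sq = 0.0
for Vm, n in terms:
    Im = Vm / math.hypot(R, n * w * L)       # peak current of this term
    I_sq += Im**2 if n == 0 else Im**2 / 2   # dc term contributes Im^2 directly

I_rms = math.sqrt(I_sq)
P = I_sq * R               # average power, all dissipated in R

print(round(I_rms, 2), round(P, 1))   # ≈ 20.26 A, ≈ 2051.7 W
```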

PART-A
1. Answer the following in 4 or 5 sentences each
a) In every electromechanical conversion system, both generator and motor action take place simultaneously. Justify your answer.
b) Explain how torque is produced in a rotating electrical machine.
c) Explain the function of commutator in a dc machine.
d) Why is a no-volt release coil provided in a dc motor starter?
e) Explain how the hysteresis loss can be reduced.
f) What are the advantages and disadvantages of specific electric and magnetic loadings.
g) An 8-pole lap-wound dc generator armature has 960 conductors, a flux of 40 mWb and a speed of 400 rpm. Calculate the e.m.f generated on open circuit.
h) What is the use of equalizer connections in lap wound dc machines?
i) What is ‘back e.m.f’ in dc machine? What is its significance?
j) What is the difference between 3-point and 4-point starters?
k) What are the different methods of braking? (2M×11=22M)
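Item 1(g) above uses the dc generator EMF equation \( E = \frac{\phi Z N}{60}\cdot\frac{P}{A} \), with \( A = P \) for a lap winding. A quick check:

```python
# 1(g): open-circuit EMF of an 8-pole lap-wound armature
phi = 40e-3    # flux per pole, Wb
Z = 960        # armature conductors
N = 400        # speed, rpm
P = 8          # poles
A = P          # lap winding: parallel paths = poles

E = (phi * Z * N / 60) * (P / A)
print(E)       # ≈ 256 V
```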
PART B
2. a) Explain the principle of energy conversion. Draw and explain general representation of electro-mechanical conversion device.
b) For a linear magnetic circuit derive the expressions for the stored energy and co-energy.
3. a) Explain the process of commutation in a dc machine and discuss the methods to improve it.
b) A 4-pole wave wound generator armature has 722 conductors, and it delivers 100 A on full load. If the brush lead is 8°, calculate the armature demagnetizing and cross-magnetizing ampere-turns per pole.
4. a) Draw and explain the no-load and load characteristics of shunt, series and compound generators. Give the applications of different types of dc generators with reasons and justification.
b) A 10 kW, 250V dc shunt generator has a total no-load rotational loss of 400W. The armature circuit (including brushes) and shunt field resistances are 0.5 and 250 ohms respectively. Calculate the shaft power input and the efficiency at rated load. Also calculate the maximum efficiency and the corresponding power output.
5. a) What are the drawbacks of three point starters? Describe a four-point starter with a neat sketch.
b) A 250V, 4-pole, shunt motor has two-circuit armature winding with 500 conductors. The armature circuit resistance is 0.25 ohms, field resistance is 100 ohms and the flux per pole is 0.03Wb. If the motor draws 14.5A from the mains, compute the speed and the internal (gross) torque developed. Neglect armature reaction.
6. a) What is meant by braking of dc motors? Briefly describe various methods of braking of dc shunt motors.
b) A 500V shunt motor runs at its normal speed of 250 rpm when the armature current is 200A and resistance of armature is 0.12 ohms. Calculate the speed when a resistance is inserted in the field, reducing the shunt field to 80% of normal value, and the armature current is 100A.
7. a) What factors need to be considered for choice of ampere conductors in dc machines?
b) List advantages and disadvantages of higher number of poles in dc machine.
1. Answer the following in 4 or 5 sentences each
a) Write the energy balance equation and explain each term.
b) What is meant by reactance voltage and how it will be neutralized?
c) Give the reasons for failure of self excited generator to build up.
d) What is the necessity of a starter for dc motor?
e) What is the disadvantage of Swinburne’s test?
f) What is resistance commutation?
g) A series generator delivers a current of 100A at 250V. Its armature and field resistances are 0.1 ohm and 0.55 ohm respectively. Find (i) armature current (ii) generated e.m.f
h) What are the causes of sparking in dc machines?
i) Differentiate between the generator action and motor action of a dc machine?
j) Explain the causes of hysteresis and eddy current losses in electric machines. On what factors do these losses depend?
k) What are commutating poles? Why are they used?
(2M×11=22M)
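In 1(g) above, a series generator carries one current through the armature, field and load alike, so \( I_a = I_L \) and \( E = V + I_a(R_a + R_{se}) \). A quick check:

```python
# 1(g): series generator - armature, field and load share the same current
I_load, V_t = 100.0, 250.0   # load current (A) and terminal voltage (V)
Ra, Rse = 0.1, 0.55          # armature and series-field resistance, ohm

Ia = I_load                  # (i) armature current
E = V_t + Ia * (Ra + Rse)    # (ii) generated EMF ≈ 315 V

print(Ia, round(E, 2))
```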
2. a) Explain the term ‘co-energy’ in electromechanical energy conversion and show that the co-energy is given by $W' = \frac{1}{2} PF^2$, where P = permeance of the magnetic circuit and F = mmf of the coil of the magnetic circuit.
b) Discuss singly and multiply excited magnetic field systems.
3. a) Explain the function of interpoles in the generators with neat diagrams.
b) A four pole, 23.75 kW, 250V lap wound dc shunt generator has 50 slots with 8 conductors per slot and shunt field resistance of 50Ω. The brushes are given a lead of $8^\circ$ (mech) when the generator delivers full load current. Calculate the number of turns on the compensating winding if the pole arc to pole pitch ratio is 0.8.
4. a) Explain various power stages in a DC generator and also derive the condition for maximum efficiency
b) A 10 kW, 250V DC, 6 pole shunt generator runs at 1000 rpm when delivering full load. The armature has 534 lap connected conductors. Full load copper loss is 0.64 kW. The total brush drop is 1V. Determine the flux per pole. Neglect shunt current.
5. a) Plot the speed-torque characteristics of different types of dc motors. Based on these characteristics specify the applications of dc motors.
b) A 230V dc shunt generator has armature and field resistances of 0.06 ohms and 100 ohms respectively. Determine the total armature power developed when working (i) as generator delivering 25 kW output and (ii) as a motor taking 25 kW input.
6. a) What are the different types of speed control methods for dc motors? Discuss merits and demerits of each method.
b) The Hopkinson’s test on two shunt machines gave for full load the following results: Line voltage = 250V, Line current excluding field currents = 50A, Motor armature current = 380A, Field currents = 5A and 4.2A. The armature resistance of each machine is 0.02 ohm. Calculate efficiency of each machine.
7. a) On what factors does the length of the air gap in dc machines depend? Explain.
b) Find an expression for the minimum number of coils required in an armature winding such that the maximum voltage between consecutive segments does not exceed 30V.
ELECTRICAL MACHINES-I
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. Answer the following in 4 or 5 sentences each
a) Give the examples of singly excited and doubly excited electro mechanical energy devices and also write the energy equations.
b) What is back e.m.f in dc machine? What is its significance?
c) Differentiate between Lap and Wave windings in dc machines and mention the relative merits and demerits.
d) An 8-pole wave-connected dc generator has 1000 armature conductors and a flux/pole of 0.035Wb. At what speed must it be driven to generate 500V on open circuit?
e) What is the need of testing dc machines, and what are the different tests to be conducted on different machines?
f) What are the advantages and disadvantages of specific electric and magnetic loadings?
g) The shunt field winding has high resistance while series field has a low resistance. Why?
h) In what type of dc machine wave winding is employed and why?
i) Explain the term commutation period?
j) Explain how torque is produced in a rotating electrical machine.
k) What are interpoles? Why are they used?
(2M×11=22M)
PART B
2. a) For a singly-excited magnetic system, derive the relation for the magnetic stored energy in terms of reluctance.
b) Determine the necessary expressions for determining the force and torque in multi excited magnetic field system.
3. a) What is meant by commutation? Explain how spark-less commutation is obtained in a dc generator, with neat diagrams.
b) A 4 pole 40 kW, 200V wave wound shunt generator has 420 conductors. Brushes are given a lead of 5 commutator segments. Calculate the demagnetizing amp-turns per pole if the shunt field resistance is 40 ohm. Also calculate the extra shunt field turns/pole needed to neutralize the demagnetization.
4. a) What is a compound generator? Differentiate between over, level and differential compounding? Draw external characteristics for these generators?
b) A DC shunt generator running at 1000 r.p.m gave the following O.C.C.
| Field current (Amps): | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|-----------------------|-----|-----|-----|-----|-----|-----|-----|-----|
| EMFs (Volts): | 52.5| 107.5| 155 | 196.5| 231 | 256.5| 275 | 287.5|
Calculate the voltage to which the machine will build up if the speed is 800 r.p.m and the field circuit resistance is 30 ohms.
5. a) Explain the working principle of a 3 point starter of a dc shunt motor with neat diagram.
b) A 440V shunt motor takes 105A (Armature current) from the supply and runs at 1000 r.p.m. Its armature resistance is 0.15 ohm. If the total torque developed is unchanged, calculate the speed and armature current if the magnetic field is reduced to 70% of the initial value.
6. a) Explain Retardation test with a neat diagram.
b) The armature and shunt field resistances of a 500V shunt motor are 0.2 ohm and 100 ohms respectively. Find the resistance of the shunt field regulator to increase the speed from 800 r.p.m to 1000 r.p.m, if the current taken by the motor is 450A. The magnetization characteristic may be assumed to be a straight line.
7. a) Derive the expression for output equation of a dc machine.
b) List the factors to be considered for selecting the number of armature slots
ELECTRICAL MACHINES-I
(Electrical and Electronics Engineering)
Time: 3 hours
Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. Answer the following in 4 or 5 sentences each
a) Define field energy and co-energy?
b) What is meant by reactance voltage?
c) Prove that the speed-control characteristic of a dc shunt motor under flux control is a hyperbola.
d) What are commutating poles? Why are they used?
e) What is the need of testing dc machines, and what are the different tests to be conducted on different machines?
f) What are the advantages and disadvantages of specific electric and magnetic loadings
g) What purpose is served by brushes in a dc machine?
h) Why are field coils provided in a dc generator?
i) How are interpoles excited?
j) What is the significance of back e.m.f in dc machine?
k) Explain how the eddy current loss can be reduced.
(2M×11=22M)
PART B
2. a) Prove that the energy and co-energy in a linear magnetic system are given by identical expressions.
b) Determine the necessary expressions for determining the force and torque in multi excited magnetic field system.
3. a) Explain the action of compensating windings used in dc machines. Show schematically how they are interconnected?
b) A 4 pole wave connected generator supplied 134 A. It has 492 armature conductors. When delivering full load the brushes are given an actual lead of $10^\circ$. Calculate the demagnetizing ampere turns per pole. The shunt field winding takes 10A. Find extra shunt field turns necessary to neutralize this demagnetization.
4. a) With the help of suitable diagrams explain different methods of excitation of dc generators?
b) The open circuit characteristic of a separately excited generator at 600 r.p.m is as under:
| Field current (Amps): | 1.6 | 3.2 | 4.8 | 6.4 | 8.0 | 9.6 | 11.2 |
|-----------------------|-----|-----|-----|-----|-----|-----|------|
| EMFs (Volts): | 148 | 285 | 390 | 460 | 520 | 560 | 590 |
Find (i) the voltage to which the machine will excite as a shunt generator with a field circuit resistance of 60 ohm (ii) the critical resistance at this speed.
5. a) Explain speed-current, torque-current and speed-torque characteristics of a dc series motor.
b) Why is starting current high in a dc motor? Explain the working of a four-point starter for a dc machine.
6. a) Explain “Hopkinson’s” test. Why it is called a regenerative test?
b) A 220V DC shunt motor draws a no-load current of 2.5A when running at 1400 r.p.m. Determine its speed when taking an armature current of 60A, if armature reaction weakens the flux by 3%.
7. a) On what factors does the length of the air gap in dc machines depend? Explain.
b) Find an expression for the minimum number of coils required in an armature winding such that the maximum voltage between consecutive segments does not exceed 30V.
ELECTROMAGNETIC FIELDS
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) Define electric field intensity and electric potential and write the relationship between them.
b) In a certain region, the potential is given by \( V = (x^2 + 3y^2 + 9z) \). Find the electric field intensity at point P(1, -2, 3) m.
c) What is the capacitance of a parallel plate capacitor when the stored energy is 5 μJ and the voltage across the plates is 5 V?
d) What is a dipole? Write the expression for electric potential due to a dipole.
e) State Biot-Savart’s law. Give its limitation.
f) Define magnetic dipole and magnetic dipole moment.
g) A solenoid with air core has 2000 turns and a length of 500 mm. Core radius is 40 mm. Find its inductance.
h) What is Poynting vector? Write its significance.
(2M+3M+3M+3M+2M+3M+3M+3M)
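Two of the Part-A items above are one-line computations: 1(c) inverts \( W = \frac{1}{2}CV^2 \), and 1(g) uses the long-solenoid formula \( L = \mu_0 N^2 A / l \) (edge effects ignored). A quick check:

```python
import math

# 1(c): C = 2W / V^2 with W = 5 uJ, V = 5 V
C = 2 * 5e-6 / 5**2                      # 0.4 uF

# 1(g): air-core solenoid, N = 2000 turns, l = 500 mm, r = 40 mm
mu0 = 4 * math.pi * 1e-7
N, l, r = 2000, 0.5, 0.04
L = mu0 * N**2 * (math.pi * r**2) / l    # ≈ 50.5 mH

print(C, round(L * 1e3, 1))
```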
PART-B
2. a) Determine the electric field intensity due to infinite line charge, at a point perpendicular to its plane and at a given distance from the line charge from first principles.
b) Find the electric field at distance ‘z’ above the center of a flat circular disc of radius ‘r’, which carries a uniform surface charge σ.
(8M+8M)
3. a) The space between two large parallel plates separated by a distance d=1 mm is filled with a dielectric of relative permittivity 20. Determine the polarization vector of the dielectric if the plates are connected to (i) a 10 V battery (ii) a 20 V battery (iii) a 100 V battery and (iv) a 50 V battery.
b) Show that the torque on a physical dipole \( \vec{P} \) in a uniform electric field \( \vec{E} \) is given by \( \vec{P} \times \vec{E} \). Extend this result to a pure dipole.
(8M+8M)
4. a) The free-space region enclosed by the planes \( z = 0 \) and \( z = 5 \text{ cm} \) and by the cylinders \( \rho = 3 \text{ cm} \) and \( \rho = 7 \text{ cm} \) forms a toroid with a rectangular cross-section. A surface current \( K = 100 \hat{z} \text{ A/m} \) flows on the inner surface. Find the total flux and the magnetic field intensity within the toroid.
b) State and explain Ampere's circuital law and derive it in point (differential) form.
(8M+8M)
5. a) State and explain Lorentz’s force equation?
b) A current filament carrying 10 A in z direction lies along the entire z axis in free space. A rectangular loop connecting A (0,2,0) to B(0,2,3) to C(0,7,3) to D(0,7,2) to A (0,2,0) lies in the x = 0 plane. The loop current is 5 mA and it flows in the z-direction in the AB segment. Find forces on side AB and on side DA.
6. a) Derive the mutual inductance between an infinitely long straight wire and a one-turn rectangular coil whose plane passes through the wire and two of whose sides are parallel to the wire. Take necessary assumptions.
b) A toroidal core is composed of a material with relative permeability 25. The boundary surfaces are \( z = 0, z = 0.05, \rho = 0.05 \text{ and } \rho = 0.08 \text{ m} \). The core is wound symmetrically with 10000 turns so that H is in \( \phi \) direction. If the current in the coil is 20 A, find the total stored energy.
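Question 6(b) can be checked by integrating the energy density ½μH² over the rectangular cross-section, with H = NI/(2πρ) inside the toroid; the closed form is W = μN²I²h·ln(b/a)/(4π). An editorial numerical sketch with the given values:

```python
import math

MU0 = 4 * math.pi * 1e-7

def toroid_energy(N, I, mu_r, a, b, h):
    """Stored energy (J) in a toroid of rectangular cross-section.

    Integrates w = 0.5*mu*H^2 with H = N*I/(2*pi*rho) over
    a <= rho <= b, height h:  W = mu*N^2*I^2*h*ln(b/a)/(4*pi).
    """
    mu = mu_r * MU0
    return mu * N**2 * I**2 * h * math.log(b / a) / (4 * math.pi)

# N = 10000 turns, I = 20 A, mu_r = 25, 0.05 <= rho <= 0.08 m, h = 0.05 m
W = toroid_energy(10000, 20, 25, 0.05, 0.08, 0.05)
print(f"W = {W:.0f} J")  # about 2350 J
```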
7. a) Show that power loss in a conductor is given as product of voltage and current using Poynting theorem.
b) State the Faraday’s laws of electromagnetic induction and derive the expressions for the transformer and motional e.m.f.s.
ELECTROMAGNETIC FIELDS
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) State the differences between Laplace’s and Poisson’s equations.
b) Why can Gauss's law not be applied to determine the electric field due to a finite line charge?
c) Distinguish between the conduction current and convection current.
d) What are the boundary conditions for perfect dielectric materials?
e) What are the limitations of Ampere’s circuital law?
f) What is the significance of Lorentz force equation?
g) Define statically and dynamically induced EMF.
h) Write the expressions for the self-inductance of a solenoid and of a toroid.
(3M+3M+3M+3M+3M+3M+2M+2M)
PART-B
2. a) Four 3 nC charges are at the corners of a 2-m square. The top corner charges are positive, whereas the bottom corner charges are negative. Find the electric field at the center of the square. Assume $\varepsilon_r = 1$
b) State and explain Coulomb’s law.
(9M+7M)
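In Question 2(a), symmetry makes the horizontal components cancel, leaving a field directed from the positive (top) row toward the negative (bottom) row. A brute-force superposition check (editorial sketch; the square is centered at the origin with corners at (±1, ±1) m):

```python
import math

K = 8.9875e9  # Coulomb constant, 1/(4*pi*eps0)

# (charge, position): top row +3 nC, bottom row -3 nC, 2-m square
charges = [(+3e-9, (-1, 1)), (+3e-9, (1, 1)),
           (-3e-9, (-1, -1)), (-3e-9, (1, -1))]

Ex = Ey = 0.0
for q, (x, y) in charges:
    dx, dy = 0 - x, 0 - y          # vector from charge to the field point
    r = math.hypot(dx, dy)
    Ex += K * q * dx / r**3
    Ey += K * q * dy / r**3

print(f"E = ({Ex:.2f}, {Ey:.2f}) V/m")  # x cancels; |Ey| about 38.1 V/m
```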
3. a) A parallel plate capacitor consists of two square metal plates of side 500 mm separated by 10 mm. A slab of Teflon with $\varepsilon_r = 2$ and 6 mm thickness is placed on the lower plate, leaving an air gap 4 mm thick between it and the upper plate. If 100 V is applied across the capacitor, find D, E, and V in the Teflon and in the air.
b) Derive continuity equation.
c) State and prove the conditions on the tangential and normal components of electric flux density and electric field intensity, at the boundary between the dielectrics.
(6M+5M+5M)
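In Question 3(a), the normal component of D is continuous across the Teflon-air boundary (no free surface charge), so V = E_air·d_air + (E_air/εᵣ)·d_teflon. A quick editorial check with the given dimensions:

```python
EPS0 = 8.854e-12
eps_r, d_teflon, d_air, V = 2.0, 6e-3, 4e-3, 100.0

# Series dielectrics: D is common, E differs by the permittivity ratio.
E_air = V / (d_air + d_teflon / eps_r)   # V/m
E_tef = E_air / eps_r
D = EPS0 * E_air                          # C/m^2, same in both layers
V_air, V_tef = E_air * d_air, E_tef * d_teflon

print(f"E_air = {E_air:.1f} V/m, E_teflon = {E_tef:.1f} V/m")
print(f"D = {D:.3e} C/m^2, V_air = {V_air:.1f} V, V_teflon = {V_tef:.1f} V")
```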
4. a) Show that $\nabla \times H = J$.
b) Derive expression for magnetic flux density at a point due to long current carrying filament.
(8M+8M)
5. a) A two wire line consists of two conductors of infinite length and circular cross section of radius 10 cm and the distance between them is 1 m. The two conductors are short circuited by a straight conducting bar. What is the force on the bar, if the current through the bar is (i) 10 A and (ii) 20 A?
b) A rectangular loop is carrying a current of 20 A in the anti-clockwise direction in the presence of a magnetic field \( \mathbf{B} = (2x\hat{x} + 6y\hat{y} + 9z\hat{z}) \text{T} \). If the loop lies in the z = 0 plane and is bounded by \( x = 2, x = 4, y = 1 \) and \( y = 3 \text{ m} \), find
i) The force at \( y = 1, x = 2 \) to \( x = 4 \)
ii) The force at \( y = 3, x = 2 \) to \( x = 4 \)
(8M+8M)
6. a) Derive an expression for mutual inductance between a straight long wire and a square loop wire in the same plane.
b) A solenoid of 10 cm length consists of 1000 turns having a cross-section radius of 1 cm. Find the inductance of the solenoid. What value of current is required to maintain a flux of 1 mWb in the solenoid? Take \( \mu_r = 1500 \).
(8M+8M)
7. a) Derive Maxwell's equations in point and integral form for time-varying fields.
b) Starting from Faraday’s law of electromagnetic induction, derive \( \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \).
(8M+8M)
ELECTROMAGNETIC FIELDS
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) Verify whether the given potential field satisfies Laplace's equation: \( V = x^2 + 3y^2 + 9z \).
b) Write the limitations of Gauss law.
c) Define polarization. Is polarization present in conductors?
d) Write the expression for torque developed on a dipole placed in an electric field.
e) State Ampere’s circuital law.
f) What is a magnetic dipole? How does it differ from an electric dipole?
g) State Faraday’s law of electromagnetic induction.
h) Write the expression for energy stored in a magnetic field.
(3M+3M+3M+3M+3M+3M+2M+2M)
PART-B
2. a) A point charge of 10 C is located at (1,1,2) in free space, while a charge of 1 C is at (4,1,3). Find the coordinates of the point at which a point charge experiences no force.
b) State and prove Gauss’s Law.
(8M+8M)
3. a) A conductor of circular cross-section is constructed of steel, whose conductivity is \( 6 \times 10^8 \text{ S/m} \), in the region \( 0 < r < 1 \text{ mm} \); copper, whose conductivity is \( 5.8 \times 10^7 \text{ S/m} \), in the region \( 1 < r < 2 \text{ mm} \); and nichrome, whose conductivity is \( 10^8 \text{ S/m} \), in the region \( 2 < r < 3 \text{ mm} \). The total current carried by the conductor is 100 A. Calculate the current density in the steel, copper and nichrome
b) A dipole with \( p = 3 \hat{z} \mu\text{Cm} \) is located at point (0,0,2) in free space, and the z = 0 plane is perfectly conducting. Find potential at (0,1,2), (0,2,3) and (0,3,4)
(8M+8M)
4. a) A conductor in the form of regular polygon of ‘n’ sides inscribed in a circle of radius ‘R’. Show that the expression for magnetic flux density \( B = \frac{\mu_0 n I}{2\pi R} \tan\left(\frac{\pi}{n}\right) \) at center, where I is the current. Show also when ‘n’ is infinitely increased, the expression is reduced to \( B = \frac{\mu_0 I}{2R} \).
b) Derive the expression for magnetic field intensity at the center of a circular wire. (8M+8M)
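The limit claimed in Question 4(a), namely that n·tan(π/n) → π as n grows, so that the polygon field tends to B = μ₀I/(2R), is easy to confirm numerically (I and R below are illustrative values, not given in the question):

```python
import math

MU0 = 4 * math.pi * 1e-7

def B_polygon(n, I=10.0, R=0.5):
    """Flux density at the center of an n-sided regular polygon loop."""
    return MU0 * n * I * math.tan(math.pi / n) / (2 * math.pi * R)

B_circle = MU0 * 10.0 / (2 * 0.5)  # circular-loop limit, mu0*I/(2R)

for n in (3, 6, 12, 100, 10000):
    print(f"n = {n:5d}:  B = {B_polygon(n):.6e} T")
print(f"circle:     B = {B_circle:.6e} T")
```

The polygon value approaches the circular-loop value from above, since the polygon's sides lie closer to the center than the circumscribed circle.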
5. a) Two parallel circular loops of radii 10 m and 2 m, are coaxially located and carry currents 20 A and 5 A respectively. Find the force between the loops if the axial distance between the centers of the loops is (i) 30 m (ii) 40 m
b) Three infinitely long parallel filaments each carry 5 A in the z-direction. If the filaments lie in the plane x = 0 with a 2 cm spacing between wires, find
i) The force per meter on left filament
ii) The force per meter on center filament (8M+8M)
6. a) A solenoid is wound on a long former, square in section and containing no magnetic material. It is bent round into a toroid of internal and external radii 3 cm and 21 cm respectively. A straight thin cable of infinite length passes along the axis of the toroid at right angles to its plane. Find the mutual inductance between the cable and the solenoid if there are 200 turns per meter on the solenoid
b) Obtain an expression for the self-inductance of a toroid of a circular cross-section, with N closely spaced turns. (8M+8M)
7. a) Explain the concept of displacement current and obtain an expression for the displacement current density.
b) A square loop of wire has corners at (0,0,0), (1,0,0), (1,1,0) and (0,1,0) at t = 0. The loop is perfectly conducting except for a small 100 Ω resistor in one side. It is moving through the field \( B = 10 \cos(5 \times 10^5 t - 2x) \) μT with a constant velocity of 30 j m/s. Calculate the induced EMF. (8M+8M)
ELECTROMAGNETIC FIELDS
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) Given the potential field \( V = 50x^3 yz + 20y^2 \) volts in free space. What is the electric field at the point P(1,2,-3)?
b) What is an equi-potential line? Give its properties.
c) What is the capacitance of a parallel plate capacitor when the plate area is 1 m\(^2\), distance between the plates is 1 mm, voltage gradient is \(10^5\) V/m and charge density on the plates is 2 \( \mu C/m^2 \)?
d) Write ohm’s law in point form and give its significance.
e) Write the relationship between magnetic flux and magnetic flux density.
f) What is the force per meter length between two long parallel wires separated by 10 cm in air and carrying a current of 100 A in the same direction.
g) Define self and mutual inductances
h) State Poynting theorem.
(3M+2M+3M+3M+2M+3M+3M+3M)
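Parts (c) and (f) of Question 1 are one-line computations. For (c), the data given (charge density and voltage gradient) determine C = Q/V directly, with Q = σA and V = E·d for a uniform field; for (f), the force per metre between parallel currents is μ₀I₁I₂/(2πd). An editorial check:

```python
import math

MU0 = 4 * math.pi * 1e-7

# (c): A = 1 m^2, d = 1 mm, E = 1e5 V/m, sigma = 2 uC/m^2
A, d, E, sigma = 1.0, 1e-3, 1e5, 2e-6
C = sigma * A / (E * d)   # C = Q/V = sigma*A/(E*d) = 20 nF

# (f): two wires 10 cm apart, each carrying 100 A in the same direction
F = MU0 * 100 * 100 / (2 * math.pi * 0.1)  # N/m, attractive

print(f"C = {C * 1e9:.1f} nF,  F = {F:.3f} N/m")
```

Note that in (c) the given gradient and charge density imply a dielectric between the plates (εᵣ = σ/(ε₀E) ≈ 2.26), so the vacuum formula ε₀A/d would not reproduce the stated data.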
PART-B
2. a) Two concentric coplanar rings of radii 1 cm and 4 cm carry charges -2 nC and 3 nC respectively. Find the distance of the equilibrium point from the center of the ring.
b) Find the work done in moving a 10 coulomb charge from infinity to the origin in the electric field
\[ \vec{E} = \frac{50r}{(r^2 + 1)} \hat{a}_r . \]
(8M+8M)
3. a) Derive an expression for Capacitance of a parallel plate capacitor with two different media.
b) A square parallel plate capacitor, 200 mm on a side with a plate spacing of 25 mm, is filled with a dielectric slab (\( \varepsilon_r = 240 \)) of the same dimensions. If 100 V is applied to the capacitor, find:
(i) the polarization P in the dielectric and (ii) the energy stored by the capacitor.
If the voltage source is now disconnected and the dielectric slab is then slid out from between the plates, find (iii) Polarization in the dielectric (iv) Energy stored in the dielectric (v) Energy stored in the capacitor.
(8M+8M)
4. a) A filamentary current of 15 A is directed in from infinity to the origin along the positive x axis, and then back out to infinity along the positive y axis. Use the Biot-Savart law to find $\vec{H}$ at P(0, 0, 1).
b) Find the magnetic field intensity at centre of a square of sides equal to 5 m and carrying a current equal to 10 A.
(8M+8M)
5. a) Two infinitely long parallel conductors are separated by a distance ‘d’. Find the force per unit length exerted by one of the conductor on the other if the currents in the two conductors are $I_1$ and $I_2$.
b) A straight solid wire segment carrying a current $4\sqrt{3} \text{ A}$ extends from A(0,2,5) to B(0,6,5) in free space. This wire is subjected to the magnetic field of an infinite current filament lying along the z-axis and carrying 30 A in the z-direction. Find the torque on the wire segment about an origin at (0,0,2) and (0,0,0).
(8M+8M)
6. a) A solenoid has dimensions L = 1 m, N = 1000 turns, diameter = 10 cm, and current I = 205 A. $\mu_r = 10$. Find the field and the energy density inside the solenoid.
b) Using basic laws, derive the expression for the self inductance (L) of a solenoid, if ‘N’ is the number of turns, ‘$\mu$’ is permeability, ‘A’ is the cross sectional area, ‘l’ length of the flux path.
(8M+8M)
7. a) From the Maxwell’s equations, derive the expression for Poynting vector. Also, explain the applications of the Poynting vector.
b) A conductor with circular cross-section has a radius ‘a’ and length ‘l’. It is carrying a current ‘I’ ampere. If the conductivity of conductor is ‘$\sigma$’, find the power loss in the conductor using Poynting theorem.
(8M+8M)
THERMAL AND HYDRO PRIME MOVERS
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) How are heat engines classified? Explain the working principle of a heat engine
b) Explain various operation of a Carnot cycle. Also represent it on T-s and P-V diagrams
c) What are the merits of Gas turbines over the IC engines
d) What are the applications of impulse-momentum equation?
e) Classify the hydraulic turbines
f) Explain about the load curve
(3M+4M+4M+4M+4M+3M)
PART-B
2. a) With a neat sketch explain the working principle of a simple carburetor
b) Briefly discuss the air-fuel ratio requirements of a petrol engine from no load to full load.
[8+8]
3. a) Explain about pressure compounding of impulse steam turbine with a neat sketch.
b) A simple Rankine cycle works between pressures 28 bar and 0.06 bar, the initial condition of steam being dry saturated. Calculate the cycle efficiency, work ratio and specific steam consumption.
[8+8]
4. a) Explain the intercooling method applied to the gas turbine plant for improvement of the performance of the plant with the help of P-V and H-S diagrams.
b) In an air standard gas turbine engine, air at a temperature of 15°C and a pressure of 1.01 bar enters the compressor, where it is compressed through a pressure ratio of 5. Air enters the turbine at a temperature of 815°C and expands to original pressure of 1.01 bar. Determine the ratio of turbine work to compressor work and the thermal efficiency when the engine operates on ideal Brayton cycle. Take $\gamma = 1.4$ and $C_p = 1.005 \text{ kJ/kgK}$.
[8+8]
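Question 4(b) is a standard ideal Brayton-cycle computation; with the given data it can be checked directly (editorial sketch, not part of the paper):

```python
gamma, cp = 1.4, 1.005            # kJ/kgK, from the question
T1, T3, rp = 288.0, 1088.0, 5.0   # 15 C and 815 C in kelvin; pressure ratio 5

k = rp ** ((gamma - 1) / gamma)   # isentropic temperature ratio
T2, T4 = T1 * k, T3 / k           # after compression / after expansion

w_comp = cp * (T2 - T1)           # compressor work per kg
w_turb = cp * (T3 - T4)           # turbine work per kg
ratio = w_turb / w_comp
eta = 1 - 1 / k                   # ideal Brayton thermal efficiency

print(f"work ratio = {ratio:.2f}, thermal efficiency = {eta:.1%}")
```

The turbine produces roughly 2.4 times the compressor work, and the ideal efficiency depends only on the pressure ratio, about 36.9 % here.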
5. a) Derive an expression for force exerted by a jet on a stationary flat plate held normal to the jet
b) Discuss the influence of exit blade angle on the performance and efficiency of a centrifugal pump. Assume radial flow at entrance.
[8+8]
6. A pelton wheel is to be designed to the following specifications:
Power 11948 kW, Head 381 m, Speed 750 rpm, overall efficiency 86% Jet diameter not to exceed 1/8 times the wheel diameter. Determine i) The wheel diameter ii) the number of jets required iii) The diameter of the jet.
[8+8]
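A conventional solution to Question 6 assumes a coefficient of velocity and a speed ratio; Cv ≈ 0.98 and u/V ≈ 0.46 are typical textbook values, not stated in the question. Under those assumptions, an editorial sketch:

```python
import math

P, H, N, eta = 11948e3, 381.0, 750.0, 0.86   # W, m, rpm, overall efficiency
Cv, Ku = 0.98, 0.46                          # assumed nozzle / speed-ratio coefficients
rho, g = 1000.0, 9.81

Q = P / (rho * g * H * eta)            # total discharge required, m^3/s
V = Cv * math.sqrt(2 * g * H)          # jet velocity
u = Ku * V                             # bucket speed
D = 60 * u / (math.pi * N)             # (i) wheel diameter
d_max = D / 8                          # jet diameter limit from the question
q_jet = (math.pi / 4) * d_max**2 * V   # discharge per jet at the limiting size
n_jets = math.ceil(Q / q_jet)          # (ii) number of jets
d = math.sqrt(4 * Q / (n_jets * math.pi * V))  # (iii) actual jet diameter

print(f"D = {D:.3f} m, jets = {n_jets}, d = {d:.4f} m")
```

With these assumptions the wheel diameter comes out just under 1 m, four jets are needed, and the jet diameter of about 0.118 m respects the D/8 limit.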
7. a) With a neat sketch explain the working of a simple hydro electric power plant identify all the components and explain their functionality
b) Explain the following: i) load factor ii) utilization factor iii) capacity factor
[8+8]
THERMAL AND HYDRO PRIME MOVERS
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) Compare external combustion engines and internal combustion engines
b) Explain the various operation of a Rankine cycle. Also represent it on T-s and P-V Diagrams
c) What are the merits of Gas turbines over the IC engines
d) Mention the parts of centrifugal pump. Explain the function of impeller
e) What are the parameters to be considered while designing the Pelton Wheel?
f) Explain the term diversity factor (4M+4M+4M+4M+3M+3M)
PART –B
2. a) What are the various components to be lubricated in an engine, and explain how lubrication is accomplished
b) Compare the wet sump and dry sump lubrication systems [8+8]
3. a) Explain about the Re-heat cycle with the neat sketch
b) In a steam turbine steam at 20 bar, 360°C is expanded to 0.08 bar. It then enters a condenser, where it is condensed to saturated liquid water. The pump feeds back the water into the boiler. Assume ideal processes; find per kg of steam the net work and the cycle efficiency. [8+8]
4. a) Explain the Re-heat method applied to the gas turbine plant for improvement of the performance of plant with the help of P-V diagram and H-S diagram
b) In an open cycle constant pressure gas turbine, air enters the compressor at 1 bar and 300 K. The pressure of the air after compression is 4 bar. The isentropic efficiencies of the compressor and turbine are 78% and 85% respectively. The air-fuel ratio is 80:1. Calculate the power developed and the thermal efficiency of the cycle if the flow rate of air is 2.5 kg/s. Take $C_p = 1.005$ kJ/kgK and $\gamma = 1.4$ for air, and $C_p = 1.147$ kJ/kgK and $\gamma = 1.33$ for gases. R = 0.287 kJ/kgK; calorific value of fuel = 42000 kJ/kg [8+8]
5. a) Derive an expression of the force exerted by a jet on a stationary flat plate held inclined to the jet
b) Explain briefly the effect of variation of discharge on the efficiency [8+8]
6. A Pelton wheel having a mean bucket diameter of 1m is running at 1000 rpm. The net head on the Pelton wheel is 700m. If the side clearance angle is 15° and discharge through the nozzle is 0.1 m³/s, determine power available at the nozzle and hydraulic efficiency of the turbine. [8+8]
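A standard solution to Question 6 takes Cv = 1 and frictionless buckets (assumptions not stated in the question), with the whirl components evaluated at the 15° exit angle; an editorial sketch:

```python
import math

D, Nrpm, H, Q = 1.0, 1000.0, 700.0, 0.1   # m, rpm, m, m^3/s
phi = math.radians(15)                    # bucket exit (side clearance) angle
rho, g = 1000.0, 9.81                     # Cv = 1 and no bucket friction assumed

u = math.pi * D * Nrpm / 60               # bucket speed
V = math.sqrt(2 * g * H)                  # jet velocity (Cv = 1)

P_nozzle = 0.5 * rho * Q * V**2           # power available at the nozzle
Vw1 = V                                   # inlet whirl component
Vw2 = (V - u) * math.cos(phi) - u         # exit whirl component
eta_h = 2 * u * (Vw1 + Vw2) / V**2        # hydraulic efficiency

print(f"P_nozzle = {P_nozzle / 1e3:.1f} kW, eta_h = {eta_h:.1%}")
```

Under these assumptions the nozzle power is about 687 kW and the hydraulic efficiency about 97 %.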
7. a) Explain pumped storage systems in detail
b) Distinguish between a base load power plant and a peak load power plant [8+8]
THERMAL AND HYDRO PRIME MOVERS
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) Give examples of External combustion engines and internal combustion engines
b) Draw the combined velocity triangle for the single stage impulse turbine. Explain the notations used in the velocity triangles
c) What are the merits of Gas turbines over the steam turbines?
d) Draw the operating characteristic curves of a centrifugal pump and explain them in brief
e) Draw the main characteristics of Pelton wheel and explain them in brief
f) Explain about the duration curve
(4M+4M+4M+3M+4M+3M)
PART-B
2. a) With a neat sketch explain Battery ignition system.
b) A four-stroke gas engine has a bore of 20cm and stroke of 30 cm and runs at 300 rpm firing every cycle. If air-fuel ratio is 4:1 by volume and volumetric efficiency on NTP basis is 80%, determine the volume of gas used per minute. If the calorific value of gas is 8MJ/m$^3$ at NTP and the brake thermal efficiency is 25% determine brake power of the engine.
[8+8]
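Question 2(b) works out as follows (editorial check; a four-stroke engine firing every cycle completes rpm/2 working cycles per minute):

```python
import math

bore, stroke, rpm = 0.20, 0.30, 300.0
vol_eff, air_fuel = 0.80, 4.0            # volumetric efficiency; air:gas by volume
cv, bte = 8e6, 0.25                      # calorific value J/m^3 at NTP; brake thermal eff.

vs = math.pi / 4 * bore**2 * stroke      # swept volume per cylinder per cycle
cycles_per_min = rpm / 2                 # four-stroke, firing every cycle
charge = vol_eff * vs                    # mixture inducted per cycle at NTP
gas = charge / (air_fuel + 1)            # gas is 1 part in 5 of the mixture
gas_per_min = gas * cycles_per_min       # (i) gas consumption, m^3/min

heat_rate = gas_per_min * cv / 60        # heat input, W
bp = bte * heat_rate                     # (ii) brake power, W

print(f"gas = {gas_per_min:.4f} m^3/min, brake power = {bp / 1e3:.2f} kW")
```

This gives roughly 0.226 m³ of gas per minute and a brake power of about 7.5 kW.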
3. Derive expression for maximum blade efficiency in a single-stage impulse turbine [16]
4. a) Explain the Regenerative method applied to the gas turbine plant for improvement of the performance of plant with the help of P-V diagram and H-s diagram
b) Describe with neat diagram a closed cycle gas turbine and also derive the expression of thermal efficiency of the closed cycle. State also its merits and demerits over open cycle gas turbine.
[8+8]
5. a) Derive an expression of force exerted on a stationary curved plate when jet strikes the curved plate at the centre.
b) Explain the working of volute casing of centrifugal pump with the help of neat sketch
[8+8]
6. The jet of water coming out of nozzle strikes the buckets of a Pelton wheel which when stationary would deflect the jet through $165^\circ$. The velocity of water at exit is 0.9 times at the inlet and the bucket speed is 0.45 times the jet speed. If the speed of the Pelton wheel is 300 rpm and the effective head is 150m, determine (i) Hydraulic efficiency (ii) Diameter of the Pelton wheel. Take coefficient of velocity $C_v=0.98$
[16]
7. a) Make a neat sketch of hydropower plant and explain working of each element in the plant.
b) Differentiate between firm power and secondary power
[8+8]
THERMAL AND HYDRO PRIME MOVERS
(Electrical and Electronics Engineering)
Time: 3 hours Max. Marks: 70
Note: 1. Question Paper consists of two parts (Part-A and Part-B)
2. Answer ALL the questions in Part-A
3. Answer any THREE Questions from Part-B
PART-A
1. a) Define the following: i) bore ii) stroke iii) clearance volume iv) cubic capacity
b) Give the differences of Rankine cycle and Carnot cycle
c) List out the applications of the gas turbines
d) Explain about the multistage centrifugal pumps
e) Draw the main characteristic curves of Francis turbine and explain them in brief
f) Explain about the Utilization factor
PART-B
2. a) With a neat sketch explain the magneto ignition system
b) A four-stroke, four-cylinder gasoline engine has a bore of 60 mm and a stroke of 100 mm. On test it develops a torque of 66.5 Nm when running at 3000 rpm. If the clearance volume in each cylinder is 60 cc, the relative efficiency with respect to brake thermal efficiency is 0.5 and the calorific value of the fuel is 42 MJ/kg, determine the fuel consumption in kg/h and the brake mean effective pressure.
3. A single stage steam turbine is supplied with steam at 5 bar, 200°C at the rate of 50 kg/min. It expands into a condenser at a pressure of 0.2 bar. The blade speed is 400 m/s. The nozzles are inclined at an angle of 20° to the plane of the wheel and the outlet blade angle is 30°. Neglecting friction losses, determine the power developed, blade efficiency and stage efficiency.
4. a) List out the differences between the open cycle gas turbine and the closed cycle gas turbine
b) Derive an expression for the air standard efficiency of the open cycle gas turbine with a neat sketch and indicate the operations on P-V and T-s diagrams
5. a) Derive an expression for the force exerted on a stationary curved plate when the jet strikes the plate tangentially at one end and the plate is unsymmetrical.
b) Explain the working of single stage centrifugal pump with a neat sketch
6. A Pelton wheel has a mean bucket speed of 12 m/s and is supplied with water at the rate of 0.7 m³/s under a head of 30 m. If the buckets deflect the jet through an angle of 160°, find the power and the efficiency of the turbine.
7. a) Show that capacity factor is equal to the product of the load factor and the utilization factor
b) Differentiate between storage and pondage. Support your answer with a neat sketch
The evolution of color naming reflects pressure for efficiency: Evidence from the recent past
Noga Zaslavsky,¹,* Karee Garvin,² † Charles Kemp,³ Naftali Tishby,⁴ and Terry Regier²,⁵
¹Department of Brain and Cognitive Sciences and Center for Brains Minds and Machines, Massachusetts Institute of Technology, MA 02139, USA, ²Department of Linguistics, University of California, Berkeley, CA 94720, USA, ³School of Psychological Sciences, University of Melbourne, Victoria 3010, Australia, ⁴Edmond and Lily Safra Center for Brain Sciences and Benin School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel and ⁵Cognitive Science Program, University of California, Berkeley, CA 94720, USA
*Corresponding author. email@example.com
†These authors contributed equally to this work.
Abstract
It has been proposed that semantic systems evolve under pressure for efficiency. This hypothesis has so far been supported largely indirectly, by synchronic cross-language comparison, rather than directly by diachronic data. Here, we directly test this hypothesis in the domain of color naming, by analyzing recent diachronic data from Nafaanra, a language of Ghana and Côte d’Ivoire, and comparing it with quantitative predictions derived from the mathematical theory of efficient data compression. We show that color naming in Nafaanra has changed over the past four decades while remaining near-optimally efficient, and that this outcome would be unlikely under a random drift process that maintains structured color categories without pressure for efficiency. To our knowledge, this finding provides the first direct evidence that color naming evolves under pressure for efficiency, supporting the hypothesis that efficiency shapes the evolution of the lexicon.
Keywords: language evolution; language change; color naming; semantic typology; information theory; efficient communication; compression
1. Introduction
A substantial body of research suggests that languages are shaped by efficient communication (see e.g. Gibson et al., 2019 for a recent review). On this view, language evolution is driven, at least in part, by a functional need for communication to be both accurate and simple. This general idea has been pursued with respect to a number of specific aspects of language, including semantic categories (e.g., Kemp et al., 2018), with color naming as a prominent example. Many empirical findings suggest that languages tend to acquire new color terms with time, resulting in increasingly fine-grained color naming systems (Berlin and Kay, 1969; Kay and Maffi, 1999; MacLaury, 1997; Levinson, 2000; but see also Haynie and Bowern, 2016). More recently, it has been claimed (e.g., Lindsey et al., 2015; Regier et al., 2015; Gibson et al., 2017; Kemp et al., 2018; Zaslavsky et al., 2018; Conway et al., 2020) that this historical evolutionary process, and color naming more generally, are shaped by a need for efficient communication.
However, most research concerning the evolution of color naming has been based indirectly on synchronic cross-language comparison, rather than directly on fine-grained diachronic data collected in the field. There are some approaches that have approximated this ideal: e.g. Biggam (2012) considered historical texts; Kay (1975) considered informant age as a proxy for change over time; and Haynie and Bowern (2016) used phylogenetic methods to infer the history of color naming in a particular language family. However, these remain approximations: historical texts, while providing genuinely diachronic data, do not support analyses at a fine-grained level close to color perception; informant age is a reasonable proxy for change over time, but still a proxy; and phylogenetic reconstruction provides an inferred historical record rather than a directly measured one. In a recent exception to this general trend, Huismann et al. (2021) explored the evolution of color naming in Japonic languages by directly comparing fine-grained data collected in the field at different points in time. Still, this approach remains unusual, and to our knowledge no prior study has used fine-grained diachronic data from the field with a view to examining questions of efficiency in the evolution of color naming.
Here, we do that. Specifically, we explore the role of efficiency in color naming evolution by considering fine-grained diachronic data from the field for a single language, Nafaanra (iso:mr, Senufo, Ghana). We do this in a theory-driven manner, by testing quantitative predictions for language change previously derived from the theoretical framework of Zaslavsky, Kemp, Regier, and Tishby (2018, henceforth ZKRT). This framework integrates the proposal that languages evolve under pressure for efficient communication together with the Information Bottleneck principle (Tishby et al., 1999), which can be formally derived from rate-distortion theory (Shannon, 1959; Berger, 1971), the branch of information theory that characterizes optimal data compression under limited communicative resources.
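The complexity term in the Information Bottleneck framework mentioned above is the mutual information I(M;W) between meanings and words. As a purely illustrative sketch (toy numbers, not the authors' model or data), this quantity can be computed directly from a prior over meanings and a probabilistic naming distribution:

```python
import math

def mutual_information(p_m, p_w_given_m):
    """I(M;W) in bits, given prior p_m and encoder rows p_w_given_m[i][w]."""
    n_w = len(p_w_given_m[0])
    # Marginal over words: p(w) = sum_m p(m) p(w|m)
    p_w = [sum(p_m[i] * p_w_given_m[i][w] for i in range(len(p_m)))
           for w in range(n_w)]
    I = 0.0
    for i, pm in enumerate(p_m):
        for w, pw in enumerate(p_w):
            p = pm * p_w_given_m[i][w]
            if p > 0:
                I += p * math.log2(p / (pm * pw))
    return I

# Toy example: 4 equiprobable meanings named deterministically by 2 terms.
p_m = [0.25] * 4
deterministic = [[1, 0], [1, 0], [0, 1], [0, 1]]
print(mutual_information(p_m, deterministic))  # 1.0 bit
```

For a deterministic encoder, I(M;W) reduces to the entropy of the word distribution, so a balanced two-way split of four equiprobable meanings costs exactly one bit; adding terms raises complexity, which the framework trades off against communicative accuracy.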
We find that: (1) color naming in Nafaanra has changed during the recent past by adding new color terms and becoming more semantically fine-grained; (2) this has happened in a way that is consistent with pressure for efficiency as predicted by ZKRT; and (3) this outcome would be unlikely under a process of random drift that maintains structured color categories without pressure for efficiency. To our knowledge, this is the first finding that directly supports the proposal that color naming evolves under pressure for efficiency. Xu et al. (2016) previously used a related theoretical framework to show that a specific mechanism of semantic change — semantic chaining — shows signs of pressure for efficiency in a different semantic domain, that of names for containers. Our present work shows direct pressure for efficiency in language change that is not restricted to chaining, using a different framework that suggests a continuous evolutionary process (Zaslavsky et al., 2018; Zaslavsky, 2020), and in a domain — color naming — for which questions of evolution and language change have long been theoretically central.
In what follows, we first discuss color naming in Nafaanra, comparing data from 1978 with data that one of us (K.G.) collected in 2018. We then review the theoretical framework of ZKRT and test its predictions in the case of semantic evolution in Nafaanra color naming. We conclude by discussing implications of our findings.
2. Color naming and its evolution: The case of Nafaanra
Nafaanra is a Senufo language spoken in Ghana and Côte d’Ivoire, with approximately 61,000 speakers across all dialects (Simons and Gordon, 2006). The Nafaanra data in this study were collected in the town of Banda Ahenkro, Ghana. Community members estimate that the greater Banda region currently has around 20,000 speakers of Nafaanra spread throughout the area, with around 6,000 speakers in Banda Ahenkro proper (Garvin, 2017).
In Banda Ahenkro, Nafaanra is the most commonly spoken language and is used across all domains. However, within the Banda Ahenkro community, there are no known monolingual speakers of Nafaanra except for small children, as many Nafaanra speakers also speak Twi (iso:twi, Kwa, Ghana), a member of the Kwa language family (Simons and Gordon, 2006), and English, to varying degrees of frequency and fluency. Twi serves as a lingua franca beyond Banda Ahenkro, and English is the national language, learned and used in education. Proficiency in Twi is generally higher than in English, and Twi is used more frequently and across more domains. However, media is often in English, and thus, while proficiency in English is lower, exposure to English is still high. Despite the influence of Twi and English, Nafaanra is dominant for Nafaanra speakers in the Banda Ahenkro region. Community members understand the current overall language usage profiles to be comparable between 1978 and 2018 (the two years of data collection), with Nafaanra as the dominant language, and some Twi and English usage in trade and education respectively; however, speakers also report an increase in usage of and exposure to Twi and especially English since 1978. One major factor in the increased exposure is a change in both technology and lifestyle in the community. First, more community members now have access to television, which in particular has increased exposure to English. In addition, it is more common among younger generations for children to leave the community in the later years of schooling to receive education, and subsequently to find a job, rather than pursuing agriculture, which was once the dominant occupation in the community. Outside of the community, people are exposed in particular to more Twi, and changes in education and occupational trends result in more exposure to English.
Color naming data for Nafaanra were initially collected in 1978 in Banda Ahenkro, as part of the
Figure 2. Color naming and its evolution in Nafaanra. The Nafaanra color naming system in 1978 (A) and in 2018 (B), plotted against the color naming grid of Figure 1. Each color term is shown in the color that corresponds to the center of mass of its color category. Mode maps (top) show the modal term for each color chip. Contour plots (bottom) show the proportion of color term use across participants. Dashed lines correspond to agreement levels of 40% – 45%, and solid lines correspond to agreement levels above 50%. (A) The 1978 system: ‘fìgge’ (light), ‘wɔɔ’ (dark), and ‘nyiɛ’ (warm or red-like). (B) The 2018 system: the three terms from 1978 have smaller extensions and new terms have emerged — ‘wrenyìnge’ (green), ‘lomru’ (orange), ‘ŋgonyina’ (yellow-orange), ‘mbruku’ (blue), ‘poto’ (purple), ‘wrewaa’ (brown), and ‘tɔɔnɔ’ (gray).
World Color Survey (WCS; Kay et al. 2009), following WCS protocol.\footnote{WCS data are available at \url{http://www.icsi.berkeley.edu/wcs/data.html}. WCS protocol is specified in the Instructions to Fieldworkers, available at \url{https://www.icsi.berkeley.edu/wcs/images/WCS_instructions-20041018/jpg/border/index.html}.} Participants in the WCS were shown each of the 330 color chips in the color naming grid shown in Figure 1, in a fixed random order, and asked to provide a name for each color. A total of 29 Nafaanra speakers participated in the 1978 survey, and the resulting data are shown in Figure 2A. The Nafaanra color naming system of 1978 is a 3-term system, with terms for light (‘fìgge’), dark (‘wɔɔ’), and warm or red-like (‘nyiɛ’).
Our initial data collection began with a pilot study in 2017. Data were collected for Nafaanra by one of us (K.G.), in the same town, Banda Ahenkro, and strictly following the same protocol, which discourages using terms that specify the source of the color, e.g., terms that could be translated into phrases like \textit{fresh leaf}. In the context of Nafaanra in 2017, this effectively meant that participants were restricted to using the original three color terms from the 1978 study: ‘fìgge’, ‘wɔɔ’, and ‘nyiɛ’. To our surprise, and in contrast to the 1978 data, we found that participants were unable to name a large proportion of the chips when restricted to these three terms, and they expressed frustration at being asked to do so. This suggests a qualitative change in Nafaanra color naming over the recent past. For this reason, subsequent data collection used a free response method, in which no constraints were placed on the color terms that could be supplied as responses. In 2018, 40 years after the original WCS data collection, Nafaanra color naming data were collected again by one of us (K.G.), in the same town, Banda Ahenkro, and following the same protocol, with the exception that participants responded freely in naming the color chips.\footnote{For example, following WCS protocol, the 2018 study was conducted on bright days in the shade to ensure chip visibility and data compatibility with the 1978 data. The chips used were the same as those used in 1978, and were presented in the same order.} Speakers were asked to provide a color term for each chip in the stimulus grid (“jìga wɔɔ yi him?”; ‘What is the color?’). A total of 15 Nafaanra speakers participated in the 2018 study, 6 female and 9 male, ranging in age from 18-77.\footnote{Free-response naming data were also collected in 2017 from 10 participants (6 male and 4 female, ranging in age from 20-68). Our results for the 2017 and 2018 free-response naming data are qualitatively similar.}
Based on these data, we estimated the 2018 Nafaanra color naming system (Figure 2B) by averaging the naming responses across participants (see Appendix A for individual color naming maps and age data). The 2018 system contains the same three color terms as the 1978 system: light (‘fìgge’), dark (‘wɔɔ’), and warm or red-like (‘nyiɛ’) — but these now have smaller extensions, and the system also includes seven new color terms: green (‘wrenyìnge’), orange (‘lomru’), yellow-orange (‘ŋgonyina’), blue (‘mbruku’), purple (‘poto’), brown (‘wrewaa’), and gray (‘tɔɔnɔ’). While these terms represent the most common responses, there was also some variability in term usage for a few categories; specifically, a small number of speakers used ‘nyanyìnge’\footnote{The term ‘nyanyìnge’ only occurs in the 2017 pilot data for a single speaker.} instead of ‘wrenyìnge’ for green, ‘ndemimi’ or ‘mimi’ instead of ‘ŋgonyina’ for yellow-orange, and ‘tra’ instead of ‘wrewaa’ for brown. One additional term, ‘grazaan’ for red-brown, was used by a single speaker and for a small portion of chips. A more detailed discussion of the terms themselves and how they relate to Twi and English is included in the discussion section.
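The averaging step can be sketched as follows; this is a toy illustration with hypothetical chip labels and participant responses (diacritics omitted), not the authors' analysis code:

```python
import numpy as np

def naming_distribution(responses, chips, terms):
    """Estimate p(term | chip) by averaging naming responses across participants.

    responses: one dict per participant, mapping a chip label to the term given.
    """
    counts = np.zeros((len(chips), len(terms)))
    for resp in responses:
        for chip, term in resp.items():
            counts[chips.index(chip), terms.index(term)] += 1.0
    # Normalize each chip's counts into a distribution over terms
    return counts / counts.sum(axis=1, keepdims=True)

# Three hypothetical speakers naming two chips
chips = ["A1", "B2"]
terms = ["figge", "wrenyinge"]
responses = [{"A1": "figge", "B2": "wrenyinge"},
             {"A1": "figge", "B2": "wrenyinge"},
             {"A1": "figge", "B2": "figge"}]
p = naming_distribution(responses, chips, terms)
```

Contour plots like Figure 2 then display, for each chip, how concentrated this distribution is on its modal term.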
As can be seen in Figure 2, the Nafaanra color naming system changed substantially between 1978 and 2018, becoming more semantically fine-grained through the addition of new color terms and adjustment in extension of previously existing terms. However, these qualitative observations alone do not determine whether the system has changed in a way that is consistent with pressure for efficiency. To address that question, we turn next to a formal theoretical framework that captures the idea of communicative efficiency and generates precise testable predictions for how color naming may change continuously over time.
3. Theoretical framework and predictions
It has been argued that systems of semantic categories are shaped by functional pressure for communicative efficiency (see Kemp et al., 2018, for a review). This general proposal has been explored in the case of color naming (Lindsey et al., 2015; Regier et al., 2015; Gibson et al., 2017; Zaslavsky et al., 2018; Conway et al., 2020), as well as in other semantic domains, such as kinship (Kemp and Regier, 2012), numeral systems (Xu et al., 2020), and indefinite pronouns (Denic et al., 2020). We are interested in testing whether color naming, and semantic systems more generally, change over time while maintaining communicative efficiency.
To this end, we consider the theoretical framework of Zaslavsky et al. (2018, ZKRT), who argued that languages achieve communicative efficiency by compressing meanings into words via the Information Bottleneck (IB) optimization principle (Tishby et al., 1999). This framework is particularly useful in our context for several reasons. First, it is comprehensively grounded in rate-distortion theory (Shannon, 1959; Berger, 1971), the subfield of information theory characterizing efficient data compression under limited resources, offering firm and independently motivated mathematical foundations. Second, it has previously been applied to color naming and was shown to account for much of the known variation across languages, including fine-grained details such as soft category boundaries and patterns of inconsistent naming (Zaslavsky et al., 2018). At the same time, this framework is not specific to color and has also been applied to other semantic domains (e.g., Zaslavsky et al., 2019c), suggesting it may characterize the lexicon more broadly.
Third, this framework provides quantitative predictions not only for the efficiency of attested semantic systems, but also for how they may evolve over time and extend beyond those stages already observed. Specifically, this framework suggests an idealized continuous trajectory of semantic evolution in which efficient systems evolve through gradual adjustments of a single complexity-accuracy tradeoff parameter. In the context of color naming, this theoretically-derived evolutionary trajectory was shown by ZKRT to synthesize key aspects of seemingly opposed accounts of color naming evolution (Berlin and Kay, 1969; MacLaury, 1997; Lyons, 1995; Levinson, 2000). This finding suggests that the ZKRT account may explain substantial aspects of language change. However, that possibility has not yet been tested against diachronic data.
Next, we review ZKRT’s theoretical framework and its predictions, focusing specifically on its instantiation for color naming which we refer to as the IB color naming model. In Section 4, we will test the predictions of this model on the diachronic Nafaanra color naming data described in the previous section, and assess whether efficiency can explain semantic change over time in Nafaanra.
3.1. Communication model
The theoretical framework we review here is based on a simple communication setting (Figure 3A), that can be derived from Shannon’s communication model (Shannon, 1948). Here, we focus on the case in which a speaker and a listener communicate about colors, and attention is restricted specifically to the colors shown in Figure 3B, each of which is represented as a point $U$ in a standard perceptual color space, CIELAB. The speaker has a mental representation $M$ of one of these colors $U$, drawn from a prior distribution $p(m)$. This mental representation $M$ is assumed to be a Gaussian distribution in CIELAB space, centered at $U$, capturing the speaker’s mental uncertainty about the color. The speaker communicates this representation by encoding it into a word $W$ according to a conditional distribution $q(w|m)$, which serves as a stochastic encoder. The listener receives $W$ and attempts to infer from it the speaker’s representation $M$ by constructing another representation, $\hat{M}$, that approximates $M$. The listener’s inferences are Bayesian with respect to the speaker.
\footnotetext[5]{The IB color naming model is publicly available at \url{https://github.com/nogazs/ib-color-naming}.}
\footnotetext[6]{We take $p(m)$ to be the prior originally used by ZKRT. See Zaslavsky et al. (2018, 2019b) for more details about this prior, and Zaslavsky et al. (2019a) for an evaluation of several alternative priors.}
\footnotetext[7]{This is not an assumption of the model, as it can be derived directly from the IB optimality principle (see Zaslavsky et al., 2018, SI Section 1.2).}
That is, given a word $w$, the listener’s inference is defined by
$$\hat{m}_w(u) = \sum_m q(m|w)m(u),$$ \hspace{1cm} (1)
where $q(m|w)$ is obtained by applying Bayes’ rule with respect to $q(w|m)$ and $p(m)$.
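Numerically, the listener's inference in Eq. (1) is a Bayes step followed by a mixture over meanings. A minimal sketch with made-up toy distributions (all numbers hypothetical, not the CIELAB-based model):

```python
import numpy as np

# Toy setup: 3 meanings over 4 color points, 2 words
p_m = np.array([0.5, 0.25, 0.25])              # prior p(m)
m_u = np.array([[0.7, 0.2, 0.1, 0.0],          # each row: a meaning distribution m(u)
                [0.1, 0.6, 0.2, 0.1],
                [0.0, 0.1, 0.2, 0.7]])
q_w_m = np.array([[1.0, 0.0],                  # encoder q(w|m)
                  [0.8, 0.2],
                  [0.0, 1.0]])

# Bayes' rule: q(m|w) = q(w|m) p(m) / q(w)
joint = q_w_m * p_m[:, None]                   # p(m, w)
q_w = joint.sum(axis=0)                        # word marginal q(w)
q_m_w = joint / q_w                            # columns indexed by w

# Eq. (1): m_hat_w(u) = sum_m q(m|w) m(u) — one reconstruction per word
m_hat = q_m_w.T @ m_u
```

Each row of `m_hat` is itself a distribution over colors, reflecting the listener's residual uncertainty given only the word.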
3.2. The theoretical limit of semantic efficiency
In this formulation, human semantic systems, such as the Nafaanra color naming systems shown in Figure 2, correspond to encoders $q(w|m)$. The IB principle characterizes the set of optimal systems in this setting, which are parametrized by a single parameter that controls the tradeoff between the complexity and accuracy of the system. As in rate-distortion theory, complexity is measured by the mutual information between the speaker’s mental representation $M$ and word $W$,
$$I_q(M; W) = \sum_{m,w} p(m)q(w|m)\log \frac{q(w|m)}{q(w)},$$ \hspace{1cm} (2)
which tightly approximates the number of bits required for communication (Shannon, 1959; Berger, 1971). Accuracy corresponds to the similarity between the speaker’s and listener’s representations, and is measured by $I_q(W; U)$. Maximizing this second informational term amounts to minimizing the expected Kullback–Leibler (KL) divergence between $M$ and $\hat{M}$ (Tishby et al., 1999; Gilad-Bachrach et al., 2003, and see Appendix B for a detailed derivation in our context). Thus, high accuracy implies that the listener’s inferred representation is similar to the speaker’s representation.
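The mutual-information quantities in Eq. (2) can be computed directly from a marginal and a conditional; a small self-contained sketch with toy numbers (not tied to the actual color data):

```python
import numpy as np

def mutual_information(p_x, p_y_x):
    """I(X;Y) in bits, given a marginal p(x) and a channel p(y|x), as in Eq. (2)."""
    joint = p_y_x * p_x[:, None]
    p_y = joint.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0, joint / (p_x[:, None] * p_y), 1.0)
    return float(np.sum(joint * np.log2(ratio)))

p_m = np.array([0.5, 0.5])
two_words = np.eye(2)         # each meaning gets its own word: complexity 1 bit
one_word = np.ones((2, 1))    # one word for everything: complexity 0 bits
```

The same function yields the accuracy term $I_q(W;U)$ when applied to the word marginal $q(w)$ and the listener's reconstructions $\hat{m}_w(u)$.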
Achieving high accuracy requires a complex lexicon, while reducing complexity may result in accuracy loss. According to the IB principle, optimal systems minimize complexity while maximizing accuracy for some tradeoff $\beta \geq 0$ between these two competing objectives. Formally, an optimal encoder $q(w|m)$ for a given value of $\beta$ is one that attains the minimum of the IB objective function,
$$F_\beta[q] = I_q(M; W) - \beta I_q(W; U),$$ \hspace{1cm} (3)
across all possible encoders. Let $F^*_\beta$ be the minimal value of this objective for a given value of $\beta$. The theoretical limit of efficiency, also known as the IB curve, is then determined by the set of encoders $q_\beta(w|m)$ that attain $F^*_\beta$ for different values of $\beta$. This limit in the case of color communication is shown by the black curve in Figure 3C, accompanied by a few examples of optimal encoders along the curve.
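Under stated assumptions (a toy prior and meaning space rather than the CIELAB-based model), the objective in Eq. (3) can be evaluated for any candidate encoder:

```python
import numpy as np

def ib_objective(p_m, m_u, q_w_m, beta):
    """Evaluate F_beta[q] = I(M;W) - beta * I(W;U) (Eq. 3), in bits.

    p_m: prior over meanings; m_u: rows are meaning distributions m(u) over
    colors; q_w_m: encoder q(w|m). Toy-sized arrays for illustration only.
    """
    joint_mw = q_w_m * p_m[:, None]
    q_w = joint_mw.sum(axis=0)
    # Complexity I(M;W)
    with np.errstate(divide="ignore", invalid="ignore"):
        r = np.where(joint_mw > 0, joint_mw / (p_m[:, None] * q_w), 1.0)
    i_mw = np.sum(joint_mw * np.log2(r))
    # Accuracy I(W;U): p(u|w) is the Bayesian reconstruction m_hat_w of Eq. (1)
    q_m_w = joint_mw / np.where(q_w > 0, q_w, 1.0)
    m_hat = q_m_w.T @ m_u
    p_u = p_m @ m_u
    joint_wu = m_hat * q_w[:, None]
    with np.errstate(divide="ignore", invalid="ignore"):
        r2 = np.where(joint_wu > 0, joint_wu / (q_w[:, None] * p_u), 1.0)
    i_wu = np.sum(joint_wu * np.log2(r2))
    return float(i_mw - beta * i_wu)

p_m = np.array([0.5, 0.3, 0.2])
m_u = np.array([[0.8, 0.1, 0.1],
                [0.1, 0.8, 0.1],
                [0.1, 0.1, 0.8]])
one_word = np.ones((3, 1))   # trivial encoder: zero complexity, zero accuracy
```

A single-word encoder scores $F_\beta = 0$ for every $\beta$, which is why it is optimal whenever $\beta \leq 1$.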
3.3. Evolution of the optimal systems
Intuitively, the tradeoff parameter $\beta$ controls the relative importance of maximizing accuracy over minimizing complexity, and thus how fine-grained a semantic system is. For $\beta \leq 1$, complexity is more important than accuracy, yielding at the optimum a minimally complex yet non-informative system that can be implemented with a single word. This system lies at the origin of the IB curve, as can be seen in Figure 3C. As $\beta$ gradually increases from 1 to $\infty$, the optimal systems evolve in an annealing process along the IB curve, becoming more complex and more accurate. In general, the optimal systems...
can also change via reverse-annealing, i.e., when $\beta$ gradually decreases, in which case they will travel down the curve and become less complex. Along this continuous trajectory, the optimal systems undergo a sequence of structural phase transitions at critical values of $\beta$, in which the number of categories effectively changes (Zaslavsky, 2020).
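The annealing process can be sketched with the standard IB self-consistent updates (Tishby et al., 1999): at each $\beta$, iterate $q(w|m) \propto q(w)\exp(-\beta\, D_{KL}[m \| \hat{m}_w])$ to convergence, sweeping $\beta$ up (or down, for reverse-annealing) and warm-starting from the previous solution. A toy implementation, not the authors' released model:

```python
import numpy as np

def ib_update(p_m, m_u, q_w_m, beta, steps=100):
    """Iterate the IB self-consistent equations at a fixed beta (toy arrays)."""
    for _ in range(steps):
        joint = q_w_m * p_m[:, None]
        q_w = joint.sum(axis=0)
        q_m_w = joint / np.where(q_w > 0, q_w, 1.0)
        m_hat = q_m_w.T @ m_u                     # decoder m_hat_w(u), Eq. (1)
        # KL[m(u) || m_hat_w(u)] for every (meaning, word) pair
        ratio = m_u[:, None, :] / np.maximum(m_hat[None], 1e-12)
        log_ratio = np.log(np.where(m_u[:, None, :] > 0, ratio, 1.0))
        kl = (m_u[:, None, :] * log_ratio).sum(axis=2)
        q_w_m = q_w[None, :] * np.exp(-beta * kl)
        q_w_m = q_w_m / q_w_m.sum(axis=1, keepdims=True)
    return q_w_m

p_m = np.array([0.5, 0.5])
m_u = np.array([[0.9, 0.1], [0.1, 0.9]])
init = np.array([[0.6, 0.4], [0.4, 0.6]])
high = ib_update(p_m, m_u, init, beta=10.0)  # two distinct words survive
low = ib_update(p_m, m_u, init, beta=0.5)    # collapses toward one effective word
```

The collapse at low $\beta$ and the split at high $\beta$ illustrate, in miniature, the structural phase transitions that occur along the curve.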
In the domain of color naming, this theoretical evolutionary trajectory was previously derived from the IB color naming model shown in Figure 3. By mapping the color naming systems of 111 languages (WCS+ dataset) — 110 from the WCS and American English from Lindsey and Brown (2014) — onto optimal systems along this trajectory, it was shown that all of these languages are near-optimal in the IB sense, and that much of the observed cross-language variation can be explained by varying $\beta$ alone. Furthermore, it was shown that the optimal trajectory synthesizes aspects of seemingly opposing accounts of color naming evolution. Berlin and Kay’s (1969) discrete evolutionary sequence is largely captured by the structural phase transitions that occur at critical points along the trajectory. However, this trajectory is continuous, categories change gradually with $\beta$, and new ones typically emerge in regions of color space that are inconsistently named. These phenomena resonate with other approaches to the evolution of color naming (MacLaury, 1997; Lyons, 1995; Levinson, 2000) that traditionally appeared to challenge Berlin and Kay’s (1969) proposal.
As noted by ZKRT, these findings suggest that semantic systems, and color naming in particular, evolve under pressure to remain near the IB theoretical limit and that the optimal evolutionary trajectory, while idealized, may capture substantial aspects of language change. From this perspective, the relative importance of accuracy versus complexity, captured by $\beta$, may change over time, driving a system up or down along the theoretical limit, but leaving it near-optimal. Thus, this model makes testable predictions for language change.
3.4. Quantitative predictions
We adopt the quantitative predictions and evaluation methods derived by ZKRT, and extend them by explicitly considering the dimension of time. If human semantic systems evolve under pressure to be efficient, i.e., to reach the optimum of (3), then the following two properties should hold over time.
**Near-optimality.** For each language $l$ with system $q^l_t(w|m)$ at time $t$, there should be a tradeoff $\beta_l(t)$ for which the system is near-optimal. Formally, this means that its deviation from optimality,
$$\varepsilon_l(t) = \frac{1}{\beta_l(t)} \left( F_{\beta_l(t)}[q^l_t] - F^*_{\beta_l(t)} \right),$$
should be small. Because we do not know the true tradeoff parameter, we consider the candidate that maps each system to the nearest point along the theoretical limit, i.e., we take $\beta_l(t) = \operatorname{argmin}_\beta \{F_\beta[q^l_t] - F^*_\beta\}$. The system $q^l_t$ is then taken to be efficient to the extent that $\varepsilon_l(t)$ is small, and this can be assessed with respect to counterfactual data, as described in Section 4. We do not expect $\varepsilon_l(t) = 0$, because the model does not incorporate every possible factor that may shape language and its evolution; for that reason, we expect actual systems to be only near-optimal, in the precise sense defined above, and transient deviations from optimality are also possible in theory. Our prediction is that large deviations from optimality would not be stable states, and thus would be unlikely to be observed, if languages are indeed attracted to the theoretical limit of efficiency.
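A sketch of this fitting procedure over a grid of $\beta$ values; the frontier and system scores below are made-up numbers, purely for illustration:

```python
import numpy as np

def deviation_from_optimality(complexity, accuracy, betas, f_star):
    """Fit beta = argmin_beta {F_beta[q] - F*_beta}, then return (beta, epsilon).

    For a fixed system q, F_beta[q] = complexity - beta * accuracy is linear
    in beta; f_star holds the frontier values F*_beta on the grid `betas`.
    """
    gaps = (complexity - betas * accuracy) - f_star
    i = int(np.argmin(gaps))
    return betas[i], gaps[i] / betas[i]

# Hypothetical frontier and system scores (bits), for illustration only
betas = np.linspace(1.0, 2.0, 101)
f_star = 1.0 - 0.9 * betas - 0.05 * (betas - 1.5) ** 2
beta_fit, eps = deviation_from_optimality(1.1, 0.9, betas, f_star)
```

Because the frontier is the minimum over all encoders, the gap is non-negative everywhere, and dividing by the fitted $\beta$ puts $\varepsilon$ in units of bits.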
**Structural similarity.** Considering $\varepsilon_l(t)$ alone reduces the system to only two features — its complexity and its accuracy. However, IB also generates predictions for the full probabilistic structure of $q^l_t(w|m)$. That is, we expect that the full structure of $q^l_t$ will be similar to that of an optimal system. For simplicity, we compare $q^l_t$ with $q_{\beta_l(t)}$, the optimal system at $\beta_l(t)$, but note that it is in principle possible that optimal systems at other values of $\beta$ could be more structurally similar to $q^l_t$. To measure the structural similarity between two probabilistic category systems, we use the generalized Normalized Information Distance (gNID; Zaslavsky et al., 2018), which was designed for this purpose. That is, $q^l_t$ and $q_{\beta_l(t)}$ are similar to each other to the extent that the gNID between them is small. In this case as well, we will assess the degree of similarity $(1 - \text{gNID})$ relative to counterfactual data.
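gNID can be implemented directly from its definition, via the joint word distributions induced by a shared prior over meanings; the encoders below are toy examples, not the actual Nafaanra data:

```python
import numpy as np

def _mi(joint):
    """Mutual information (bits) of a joint distribution given as a matrix."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        r = np.where(joint > 0, joint / (px * py), 1.0)
    return float(np.sum(joint * np.log2(r)))

def gnid(p_m, q1, q2):
    """gNID = 1 - I(W;V) / max(I(W;W'), I(V;V')) (Zaslavsky et al., 2018).

    W and V are the words of two speakers naming the same meaning M; W' is an
    independent redraw. Assumes at least one system is informative.
    """
    j12 = np.einsum("m,mw,mv->wv", p_m, q1, q2)
    j11 = np.einsum("m,mw,mv->wv", p_m, q1, q1)
    j22 = np.einsum("m,mw,mv->wv", p_m, q2, q2)
    return 1.0 - _mi(j12) / max(_mi(j11), _mi(j22))

p_m = np.array([0.5, 0.5])
same = gnid(p_m, np.eye(2), np.eye(2))              # identical systems
relabeled = gnid(p_m, np.eye(2), np.eye(2)[::-1])   # same partition, words swapped
```

Note that gNID is invariant to relabeling the words, which is what makes it suitable for comparing category systems across languages or model solutions.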
4. Efficiency and language change
The previous work reviewed in Section 3 moved from a synchronic efficiency analysis based on cross-language data to a diachronic hypothesis that language change is shaped by pressure for efficiency. That diachronic hypothesis has not yet been directly tested using fine-grained diachronic data, and the Nafaanra data reported above allow us to fill that gap.
4.1. Efficiency over time
First, we are interested in testing whether the efficiency of the Nafaanra color naming system has persisted over time. Because the 1978 Nafaanra data were part of the WCS, we already know from ZKRT’s analyses that the 1978 Nafaanra system lay near the IB limit of efficiency. We conducted an entirely analogous analysis on the 2018 Nafaanra data. Figure 4 shows that the complexity and accuracy of both the 1978 and the 2018 Nafaanra systems are near the theoretical bound, but at different places along the curve. Importantly, Figure 4 and Appendix C show that the 2018 Nafaanra system differs from the English color naming system (estimated from the data of Lindsey and Brown, 2014) and is more efficient than systems obtained by a mixture of the English and 1978 systems. This suggests that pressure for efficiency has shaped Nafaanra beyond its evident contact with English. Thus, these diachronic data from Nafaanra appear to be consistent with the near-optimality prediction.
Figure 4. Diachronic efficiency analysis. Color naming in Nafaanra has changed from 1978 to 2018 by climbing up the IB theoretical limit (black curve, same as in Figure 3C). Despite exposure to English, the 2018 Nafaanra system appears at a different tradeoff from the English color naming system, reflecting a qualitative difference between the two systems. The gray area below the curve shows the area covered by 50 hypothetical trajectories traced out by a model of random drift, all initialized near the 1978 system. The green trajectory corresponds to the example of Figure 11 in Appendix E, and the pentagon marks its location after 1,500 iterations.
Figure 5. (A-B) Empirical data for Nafaanra in 1978 and 2018 (same as Figure 2). (C-D) Optimal IB systems corresponding to the actual 1978 and 2018 systems. (E-F) Rotation analysis for the 1978 and 2018 Nafaanra systems, respectively. Δ efficiency/similarity loss corresponds to the difference between the score of the rotated and actual system (positive values correspond to higher losses for the rotated system).
Figures 5A–D compare these two natural systems with their corresponding optimal systems, which lie directly on the IB curve. It can be seen that the optimal systems capture substantial aspects of the empirical data, but also differ from those data in some respects. For example, the 1978 system lacks a yellow category that is found in the corresponding optimal system, and the 2018 system has purple and brown categories while the corresponding optimal system does not. While the early yellow category seems to represent a genuine discrepancy of the model (Zaslavsky et al., 2018), the absence of purple and brown need not be one: these categories emerge at a slightly higher value of $\beta$ (see, for example, Figure 3C), and therefore this mismatch between the model and the data may stem simply from noise in our estimation of $\beta$.
To quantitatively test the extent to which our predictions hold, we evaluated the efficiency loss ($\varepsilon_l$) and similarity loss (gNID) of the 1978 and 2018 systems, and assessed each system with respect to a set of hypothetical variants. These variants were obtained by rotation in the hue dimension (columns of the WCS stimulus grid; Regier et al., 2007), as illustrated in Appendix D, Figure 10. Following ZKRT, in this analysis $\beta$ was fitted to each system separately in order to consider the best scores these hypothetical systems can achieve. Consistent with ZKRT’s findings for the WCS+ languages, including the 1978 Nafaanra system (Figure 5E), the actual (unrotated) 2018 Nafaanra system scores better than any of its hypothetical variants on both measures (Figure 5F). This suggests that the 1978 and 2018 systems are locally optimal within their sets of hypothetical variants, and thus non-trivially efficient. In addition, it can be seen by looking ahead to Figure 6A that these two systems do not deviate much from optimality (less than 0.2 bits), comparable to the average deviation across the WCS+ languages. These results show that 1978 and 2018 Nafaanra are near-optimally efficient when assessed by the same standards that ZKRT applied to other color naming systems.
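The rotation construction amounts to a circular shift along the hue columns of the naming grid; a minimal sketch (the toy grid below is illustrative — the actual WCS grid has 40 hue columns):

```python
import numpy as np

def rotate_hue(naming, k):
    """Circularly shift a naming array by k columns along the hue dimension.

    naming: probabilities of shape (lightness_rows, hue_cols, n_terms);
    hue rotation follows Regier et al. (2007).
    """
    return np.roll(naming, shift=k, axis=1)

# Toy 2-row, 4-hue-column grid with two terms
toy = np.zeros((2, 4, 2))
toy[:, :2, 0] = 1.0   # term 0 covers the first two hue columns
toy[:, 2:, 1] = 1.0   # term 1 covers the last two
rot = rotate_hue(toy, 1)
```

Each rotated variant preserves the shapes and sizes of the categories while shifting where they sit in color space, which is what makes it a useful control.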
4.2. Random drift
So far, we have seen that over the past several decades, the Nafaanra color naming system has changed substantially, while remaining near the theoretical limit of efficiency. This outcome is consistent with our hypothesis that language change may be
shaped by functional pressure for efficiency. But before reaching that conclusion, we need to consider a natural alternative: that the same outcome could have been produced by a process of random drift, without any pressure for efficiency. The importance of considering a null model of random drift has recently been emphasized in the literature (e.g. Newberry et al., 2017; Bentz et al., 2018; Karjus et al., 2020), and so here we ask whether a process of random drift could have produced the 2018 Nafaanra system from the 1978 system.
We considered a process of random drift that is described in detail in Appendix E. To avoid random systems, which form a weak baseline, this process maintains some reasonable category structure by representing a color naming system in terms of a set of Gaussian distributions over CIELAB space. It then evolves in a stochastic process that allows existing categories to drift, new categories to emerge, and old categories to occasionally vanish. We generated a set of 50 random drift trajectories, in each case simulating this process for 1,500 iterations. The initial system was the same for all trajectories, and was obtained by fitting to 1978 Nafaanra, yielding a good approximation of the 1978 system.
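A highly simplified sketch of such a drift process over Gaussian category centers follows; the parameters and dynamics here are illustrative only, not the exact Appendix E model:

```python
import numpy as np

def drift(means, steps=1500, jitter=1.0, p_birth=0.02, p_death=0.02, seed=0):
    """Toy random drift over Gaussian category centers in CIELAB (L*, a*, b*).

    Existing category centers jitter, a new category occasionally buds off
    near an existing one, and a category occasionally vanishes.
    """
    rng = np.random.default_rng(seed)
    means = [np.asarray(m, dtype=float) for m in means]
    for _ in range(steps):
        means = [m + rng.normal(scale=jitter, size=3) for m in means]
        if rng.random() < p_birth:  # a new category emerges near an old one
            src = int(rng.integers(len(means)))
            means.append(means[src] + rng.normal(scale=10.0, size=3))
        if len(means) > 1 and rng.random() < p_death:  # a category vanishes
            means.pop(int(rng.integers(len(means))))
    return means

# Start from three centers loosely mimicking light, dark, and warm
start = [[90.0, 0.0, 0.0], [20.0, 0.0, 0.0], [55.0, 60.0, 45.0]]
end = drift(start, steps=200)
```

Converting the centers back into an encoder (soft assignment of each chip to its categories) yields the complexity/accuracy coordinates plotted in Figure 4.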
The green trajectory in Figure 4 corresponds to one such random drift trajectory, illustrated in Appendix E, Figure 11. The gray area below the IB curve in Figure 4 shows the area traced out by all 50 hypothetical random drift trajectories. It can be seen that these trajectories tend to diverge away from the IB curve, and none reaches the 2018 Nafaanra system. Figure 6A plots the inefficiency ($\varepsilon_l$) of the systems in these random drift trajectories over time, and confirms that they tend to become less efficient with time. Interestingly, the same plot also shows that the starting point for these trajectories — a Gaussian approximation to the 1978 Nafaanra system — is more efficient than the 1978 Nafaanra system itself. This demonstrates that the model at the heart of this random drift process can in principle represent highly efficient systems. At the same time, however, the process does not tend to remain at such systems. Figure 6B analogously plots the structural similarity between each system in these trajectories on the one hand, and the corresponding optimal system on the other. It can be seen that the random drift process tends to lead to systems that are dissimilar from those along the theoretical efficiency limit. Given these inefficiency and dissimilarity results, it seems unlikely that this process of random drift could have produced the 2018 Nafaanra system, starting from the 1978 system.
5. Discussion
The starting point for this study was the claim that systems of semantic categories evolve under functional pressure for efficiency. This claim is consistent with a substantial amount of synchronic data, but it had not previously been tested directly, by bringing it into contact with fine-grained diachronic data that documents language change over time. The present study has addressed that open issue, by considering the evolution of color naming in Nafaanra over the past several decades, through the lens of efficiency.
We have seen that color naming in Nafaanra has changed substantially while remaining near-optimally efficient, as predicted by the Information Bottleneck (IB) optimality principle and the theory
of compression more generally. We have also seen that this outcome would be unlikely under a process of random drift that maintains structured categories but does not incorporate pressure for efficiency. Thus, in at least one language, in at least one semantic domain, and over at least one stretch of time, it appears that a semantic system has evolved in a way that reflects functional pressure for efficiency. However, the information-theoretic framework we have employed in this work and its predictions for language change are not specific to these settings. In fact, this framework has recently gained substantial cross-linguistic support in several other domains, including container naming, animal taxonomies, personal pronouns, and grammatical number, tense and evidentiality (Zaslavsky et al., 2019c, 2021; Mollica et al., 2021). As in the case of color naming, however, these results have so far been based mainly on synchronic data. Therefore, an important direction for future research is to further test the diachronic predictions of this theory in more languages, domains, and periods of time. Interestingly, this framework can also be used to study the influence of communicative need on language change. In this evolutionary view of language, communicative need parameterizes the IB objective function (Zaslavsky et al., 2019a), which, in turn, serves as a fitness criterion guiding the ways in which systems of semantic categories change.
Our findings converge with those of a complementary line of work. In a comment on the finding that systems of semantic categories tend to be efficient, Levinson (2012) asked “where our categories come from” – i.e. what process gives rise to these efficient category systems. He suggested that some insight into this question might be obtained from studies of iterated learning that simulate language evolution in the lab (e.g. Kirby et al., 2008; Xu et al., 2013). This suggestion inspired Carstensen et al. (2015) to explore whether simulated language evolution in the lab in fact produces systems of increasing efficiency. They found that it does, and more recent work has probed these ideas more closely (Carr et al., 2020). Although these earlier studies were based on different formulations of the notion of efficiency, the present work resonates with their findings by showing that actual language change, not just simulated language change, tends toward communicatively efficient semantic systems. More recently, Chaabouni et al. (2021) showed that artificial neural agents playing a cooperative color-discrimination game develop color signaling systems that converge to the same IB theoretical limit of efficiency that was proposed by ZKRT and considered in this work. This suggests that the computational principles underlying language change in humans may be crucial for evolving human-like communication in artificial agents.
At the same time, the present findings leave a number of points open, some of which suggest additional directions for future research. We have considered a specific model of random drift for category systems, and while we believe this model to be a reasonable one, it is conceivable that other models of drift could yield different results. More fundamentally, although we have spoken of language evolving under pressure for efficiency, and although our findings are consistent with that idea, we do not know the shape of the trajectory that took Nafaanra from where it was in 1978 to where it was in 2018. The evolution we have seen could have come about in a series of small incremental changes, tracing the IB curve closely, or the system could have been pulled fairly far away from efficiency by some external force, such as language contact, and then gradually retreated to efficiency.
Language contact is an especially relevant consideration in the case of Nafaanra, given the exposure of Nafaanra speakers to English and Twi, as noted above (see Huisman et al. (2021) for a comparable situation). While it is not known to what extent the evolution in Nafaanra color naming is attributable to contact, it is possible that some of the new 2018 Nafaanra categories may have been borrowed or calqued. For example, the word ‘mbrukú’ (blue) may plausibly be a borrowing from English ‘blue’ or from Twi, in which ‘brúu’ is sometimes used for blue, though ‘bihíre’ is also used. Likewise, the Nafaanra term ‘poto’ (purple) may also be borrowed from English, whereas the Twi word ‘beredum’ has minimal phonetic similarity and is likely not a borrowing source. Some other Nafaanra terms, if influenced by another language, seem more likely to have been influenced by Twi than by English. For example, ‘ŋgonyina’ (yellow-orange) is Nafaanra for chicken fat and ‘wrenyìnge’ (green) means fresh leaf, reasonable descriptions of the colors involved. In Twi, the terms ‘akokɔ sradeɛ’ (yellow) and ‘ahabammono’ (green) likewise mean chicken fat and fresh leaf, respectively; thus the form of these color terms may be calques from Twi, as there is no phonetic similarity between the terms — or these terms may have developed independently because these referents are locally culturally salient examples of these colors.
Importantly, however, although the 2018 Nafaanra system shares some features with the English and Twi systems, it is not a simple copy of either: the category pink is missing from Nafaanra although it is present in both English and Twi (‘menem’), the category orange is minimal and only barely visible in the contour plot of Figure 2B, and the 2018 system has retained the three named categories of the 1978 Nafaanra system, with the same names but with adjusted extensions. Thus, even if substantial parts of the 2018 Nafaanra system were either borrowed from or motivated by English and/or Twi, some “naturalization” process appears to have occurred whereby the categories adjusted to form a coherent system in Nafaanra — and we have seen that the resulting system is an efficient one. Further work will be needed to more fully ascertain the role of language contact, and, to the extent possible, the details of the historical trajectory of Nafaanra language change relative to the theoretical limit. However, whatever the details of that trajectory, our current results based on the beginning and end points of that trajectory do suggest a process that is in some way constrained to either remain, or eventually return to, near the theoretical limit of efficiency.
The collection of the new Nafaanra color naming data grew out of an informal exchange between two of the authors, K.G. and T.R., in a classroom setting. T.R. was presenting color naming data from the World Color Survey, and K.G., who was taking the class, mentioned that she was very familiar with one of the WCS languages, Nafaanra, because it was a focus of her ongoing linguistic fieldwork. This led naturally to the idea of K.G. collecting new Nafaanra color naming data the next time she returned to the field. With this idea in hand, it actually came as a bit of a surprise to us to realize that the WCS data were now old enough to be of some historical interest. Although the data were collected in the 1970s, they were only digitized and web-posted in the early 2000s, and they continue to be a widely and regularly used data resource — that is, the data “got old” gradually and without anyone remarking on that fact — until the realization we have just mentioned. That realization, and the follow-up work on Nafaanra reported here, open the possibility of analogous follow-up studies for any or all of the 109 other languages in the WCS, to more comprehensively test the hypothesis we have explored: that color naming evolves under pressure for efficiency.
Acknowledgments
This paper is dedicated to the late Professor Naftali Tishby. Tali was the PhD advisor of Noga Zaslavsky, and the computational analysis in this paper was developed as part of her PhD thesis and was deeply inspired by Tali’s principled scientific approach. Tali was a rare scientist whose research and vision made a profound impact on the understanding of the computational principles that govern both natural and artificial intelligence. He is greatly missed.
We thank Paul Kay for helpful discussions, the Nafaanra community for their help in collecting the data, and Phoebe Killieck and Hsin-Yeh Tsai for their help in digitizing the raw data. We also thank Delwin Lindsey and Angela Brown for kindly sharing their English color naming data with us. This study was partially supported by an MIT Brain and Cognitive Sciences Fellowship in Computation (N.Z.), Robert L. Oswalt Graduate Student Support Endowment for Endangered Language Documentation (K.G.), ARC grant FT190100200 (C.K.), and DTRA grant HDTRA11710042 (T.R.).
References
J. T. Abbott, T. L. Griffiths, and T. Regier. Focal colors across languages are representative members of color categories. *Proceedings of the National Academy of Sciences*, 113(40):11178–11183, 2016. doi: 10.1073/pnas.1513298113.
C. Bentz, D. Dediu, A. Verkerk, and G. Jäger. The evolution of language families is shaped by the environment beyond neutral drift. *Nature Human Behaviour*, 2:816–821, 2018.
T. Berger. *Rate distortion theory; a mathematical basis for data compression*. Prentice-Hall, Englewood Cliffs, NJ, 1971.
B. Berlin and P. Kay. *Basic Color Terms: Their Universality and Evolution*. University of California Press, Berkeley and Los Angeles, 1969.
C. P. Biggam. *The semantics of colour: A historical approach*. Cambridge University Press, Cambridge, UK, 2012.
J. W. Carr, K. Smith, J. Culbertson, and S. Kirby. Simplicity and informativeness in semantic category systems. *Cognition*, 202:104289, 2020.
A. Carstensen, J. Xu, C. A. Smith, and T. Regier. Language evolution in the lab tends toward informative communication. In *Proceedings of the 37th Annual Meeting of the Cognitive Science Society*, Austin, TX, 2015. Cognitive Science Society.
R. Chaabouni, E. Kharitonov, E. Dupoux, and M. Baroni. Communicating artificial neural networks develop efficient color-naming systems. *Proceedings of the National Academy of Sciences*, 118(12), 2021.
B. R. Conway, S. Ratnasingam, J. Jara-Ettinger, R. Futrell, and E. Gibson. Communication efficiency of color naming across languages provides a new framework for the evolution of color terms. *Cognition*, 195:104086, 2020. doi: https://doi.org/10.1016/j.cognition.2019.104086.
M. Denić, S. Steinert-Threlkeld, and J. Szymanik. Complexity/informativeness trade-off in the domain of indefinite pronouns. In *Proceedings of the 30th Semantics and Linguistic Theory Conference*, 2020.
K. Garvin. Nafaanra documentation project. Number 2017-11. Survey of California and Other Indian Languages, University of California, Berkeley, 2017. doi: 10.7297/X2V98672. URL http://dx.doi.org/doi:10.7297/X2V98672. With consultants Job Kwabena Ababio, James Anane, Sampson Kwasi Attah, Charles Munuie.
E. Gibson, R. Futrell, J. Jara-Ettinger, K. Mahowald, L. Bergen, S. Ratnasingam, M. Gibson, S. T. Piantadosi, and B. R. Conway. Color naming across languages reflects color use. *Proceedings of the National Academy of Sciences*, 114(40):10785–10790, 2017.
E. Gibson, R. Futrell, S. P. Piantadosi, I. Dautriche, K. Mahowald, L. Bergen, and R. Levy. How efficiency shapes human language. *Trends in Cognitive Sciences*, 23(5):389–407, 2019. doi: https://doi.org/10.1016/j.tics.2019.02.003.
R. Gilad-Bachrach, A. Navot, and N. Tishby. An information theoretic tradeoff between complexity and accuracy. In *Proceedings of the 16th Annual Conference on Learning Theory (COLT)*, 2003.
H. J. Haynie and C. Bowern. Phylogenetic approach to the evolution of color term systems. *Proceedings of the National Academy of Sciences*, 113(48):13666–13671, 2016. doi: 10.1073/pnas.1613666113.
J. Huisman, R. van Hout, and A. Majid. Stability and change in the colour lexicon of the Japonic languages. *Studies in Language*, 2021.
A. Karjus, R. A. Blythe, S. Kirby, and K. Smith. Challenges in detecting evolutionary forces in language change using diachronic corpora. *Glossa: A Journal of General Linguistics*, 5:45, 2020.
P. Kay. Synchronic variability and diachronic change in basic color terms. *Language in Society*, 4(3):257–270, 1975.
P. Kay and L. Maffi. Color appearance and the emergence and evolution of basic color lexicons. *American Anthropologist*, 101(4):743–760, 1999.
P. Kay, B. Berlin, L. Maffi, W. R. Merrifield, and R. Cook. *The World Color Survey*. Stanford: Center for the Study of Language and Information, 2009.
C. Kemp and T. Regier. Kinship categories across languages reflect general communicative principles. *Science*, 336(6084):1049–1054, 2012. ISSN 0036-8075.
C. Kemp, Y. Xu, and T. Regier. Semantic typology and efficient communication. *Annual Review of Linguistics*, 4(1), 2018. doi: 10.1146/annurev-linguistics-011817-045406.
S. Kirby, H. Cornish, and K. Smith. Cumulative cultural evolution in the laboratory: An experimental approach to the origins of structure in human language. *Proceedings of the National Academy of Sciences*, 105(31):10681–10686, 2008.
S. C. Levinson. Yélî Dnye and the theory of basic color terms. *Journal of Linguistic Anthropology*, 10(1):3–55, 2000.
S. C. Levinson. Kinship and human thought. *Science*, 336(6084):988–989, 2012.
D. T. Lindsey and A. M. Brown. The color lexicon of American English. *Journal of Vision*, 14(2):17, 2014.
D. T. Lindsey, A. M. Brown, D. H. Brainard, and C. L. Apicella. Hunter-gatherer color naming provides new insight into the evolution of color terms. *Current Biology*, 25(18):2441–2446, 2015.
J. Lyons. Colour in language. In T. Lamb and J. Bourriau, editors, *Colour: Art and Science*, pages 194–224. Cambridge University Press, Cambridge, UK, 1995.
R. E. MacLaury. *Color and cognition in Mesoamerica: Constructing categories as advantages*. University of Texas Press, 1997.
F. Mollica, G. Bacon, N. Zaslavsky, Y. Xu, T. Regier, and C. Kemp. The forms and meanings of grammatical markers support efficient communication. *Proceedings of the National Academy of Sciences*, 118(49), 2021.
M. G. Newberry, C. A. Ahern, R. Clark, and J. B. Plotkin. Detecting evolutionary forces in language change. *Nature*, 551:223–226, 2017.
T. Regier, P. Kay, and N. Khetarpal. Color naming reflects optimal partitions of color space. *Proceedings of the National Academy of Sciences*, 104(4):1436–1441, 2007.
T. Regier, C. Kemp, and P. Kay. Word meanings across languages support efficient communication. In B. MacWhinney and W. O’Grady, editors, *The Handbook of Language Emergence*, pages 237–263. Wiley-Blackwell, Hoboken, NJ, 2015.
C. E. Shannon. A mathematical theory of communication. *Bell System Technical Journal*, 27, 1948.
C. E. Shannon. Coding theorems for a discrete source with a fidelity criterion. *IRE Nat. Conv. Rec.*, 4:142–163, 1959.
G. F. Simons and R. G. Gordon. Ethnologue. *Encyclopedia of Language and Linguistics*, pages 250–253, 2006.
N. Tishby, F. C. Pereira, and W. Bialek. The Information Bottleneck method. In *Proceedings of the 37th Annual Allerton Conference on Communication, Control and Computing*, 1999.
J. Xu, M. Dowman, and T. L. Griffiths. Cultural transmission results in convergence towards colour term universals. *Proceedings of the Royal Society of London B: Biological Sciences*, 280(1758), 2013. doi: 10.1098/rspb.2012.3074.
Y. Xu, T. Regier, and B. C. Malt. Historical semantic chaining and efficient communication: The case of container names. *Cognitive Science*, 40(8):2081–2094, 2016. ISSN 1551-6709. doi: 10.1111/cogs.12312.
Y. Xu, E. Liu, and T. Regier. Numeral systems across languages support efficient communication: From approximate numerosity to recursion. *Open Mind*, 4:57–70, 2020.
N. Zaslavsky. *Information-Theoretic Principles in the Evolution of Semantic Systems*. Ph.D. Thesis, The Hebrew University of Jerusalem, 2020.
N. Zaslavsky, C. Kemp, T. Regier, and N. Tishby. Efficient communication, color naming and its evolution. *Proceedings of the National Academy of Sciences*, 115(31):7937–7942, 2018.
N. Zaslavsky, C. Kemp, N. Tishby, and T. Regier. Communicative need in color naming. *Cognitive Neuropsychology*, 2019a. doi: 10.1080/02643294.2019.1604502.
N. Zaslavsky, C. Kemp, N. Tishby, and T. Regier. Color naming reflects both perceptual structure and communicative need. *Topics in Cognitive Science*, 11(1):207–219, 2019b. doi: 10.1111/tops.12395.
N. Zaslavsky, T. Regier, N. Tishby, and C. Kemp. Semantic categories of artifacts and animals reflect efficient coding. In *Proceedings of the 41st Annual Meeting of the Cognitive Science Society*, pages 1254–1260, 2019c.
N. Zaslavsky, M. Maldonado, and J. Culbertson. Let’s talk (efficiently) about us: Person systems achieve near-optimal compression. In *Proceedings of the 43rd Annual Meeting of the Cognitive Science Society*, 2021.
Appendix A. Nafaanra 2018 individual maps
To provide a complete view of the 2018 Nafaanra color naming data, we present here the color naming map for each participant in the data (Figure 7). It can be seen that even the participants who were born before 1978 (ages 48–77) exhibit naming patterns that are similar to the 2018 system (Figure 2B) and more refined than the 1978 system (Figure 2A). This supports our claim that color naming in Nafaanra has changed since 1978.

**Figure 7.** Color naming maps for each participant in the 2018 Nafaanra color naming data. Participants are sorted by age. Each chip in the stimulus grid (Figure 1) is colored according to its term. The color associated with each term is the color centroid of the term’s color category, evaluated per participant.
Appendix B. Accuracy and distortion in the Information Bottleneck
Section 3 refers to the fact that $I(W; U)$ corresponds to the similarity between the speaker’s and listener’s representations, and is therefore taken to be the accuracy term in the Information Bottleneck (IB) framework. This has previously been shown for IB in general (Tishby et al., 1999; Gilad-Bachrach et al., 2003); see Zaslavsky (2020) for a detailed discussion of this derivation for the special instantiation of IB for semantic systems. For completeness, we review below the derivation of $I(W; U)$ as the natural accuracy measure in our setting.
Recall that each speaker meaning is defined by a distribution, or belief, over the domain $U$. Thus, we denote by $m(u)$ the probability of $u$ (in our case, $u$ is a color) given that $m$ is the speaker’s mental representation. Similarly, $\hat{m}_w(u)$ denotes the probability that the listener assigns to $u$, given that the listener infers $\hat{m}_w$ as the mental representation in response to the speaker’s word $w$. The KL-divergence between the speaker’s and listener’s mental representations is defined as
$$D[m \parallel \hat{m}_w] = \sum_u m(u) \log \frac{m(u)}{\hat{m}_w(u)}, \quad (5)$$
and the total expected divergence (or distortion) is defined as
$$\mathbb{D}[q] = \mathbb{E}_{m \sim p(m), w \sim q(w|m)} \left[ D[m \parallel \hat{m}_w] \right]. \quad (6)$$
Now, let $m_0(u) = \sum_m p(m)m(u)$ be the a-priori mental representation. If the listener’s inferences obey equation (1), as is the case in IB, then the following holds:
$$\mathbb{D}[q] = \mathbb{E} \left[ \sum_u m(u) \log \frac{m(u)}{\hat{m}_w(u)} \right] \quad (7)$$
$$= \mathbb{E} \left[ \sum_u m(u) \log \left( \frac{m(u)}{m_0(u)} \cdot \frac{m_0(u)}{\hat{m}_w(u)} \right) \right] \quad (8)$$
$$= \mathbb{E} \left[ D[m \parallel m_0] \right] - \mathbb{E} \left[ D[\hat{m}_w \parallel m_0] \right], \quad (9)$$
where (9) follows from substituting equation (1) in the expectation of the second term. Note that the first term is constant: it does not depend on the speaker’s encoder or on the listener’s inferences. The second term is an equivalent definition of $I(W; U)$, and it measures the amount of information that the speaker’s words contain about the speaker’s intended colors. Therefore, minimizing the total divergence $\mathbb{D}[q]$ is equivalent to maximizing $I(W; U)$, and the latter is the natural measure of accuracy.
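As a concrete check, the identity in (7)–(9) can be verified numerically on a small toy setting. All distributions below are randomly generated stand-ins (not the Nafaanra data), and the listener is the Bayesian listener of equation (1):

```python
import numpy as np

rng = np.random.default_rng(0)
nM, nW, nU = 4, 3, 5   # toy numbers of meanings, words, colors

p_m = rng.dirichlet(np.ones(nM))                 # prior over speaker meanings
m_u = rng.dirichlet(np.ones(nU), size=nM)        # each row is a meaning m(u)
q_wm = rng.dirichlet(np.ones(nW), size=nM)       # encoder q(w|m)

m0 = p_m @ m_u                                   # a-priori representation m0(u)
p_w = p_m @ q_wm                                 # marginal over words
q_mw = (q_wm * p_m[:, None]).T / p_w[:, None]    # Bayes rule: q(m|w)
mhat = q_mw @ m_u                                # listener's m_hat_w(u), eq. (1)

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

# expected distortion, Equation (6)
D_q = sum(p_m[i] * q_wm[i, w] * kl(m_u[i], mhat[w])
          for i in range(nM) for w in range(nW))
# right-hand side of Equation (9)
rhs = sum(p_m[i] * kl(m_u[i], m0) for i in range(nM)) \
    - sum(p_w[w] * kl(mhat[w], m0) for w in range(nW))

# the accuracy term E[D(m_hat_w || m0)] equals I(W;U), computed here
# independently from the joint distribution p(w, u)
acc = sum(p_w[w] * kl(mhat[w], m0) for w in range(nW))
joint = np.einsum('m,mw,mu->wu', p_m, q_wm, m_u)
I_WU = float(np.sum(joint * np.log(joint / (p_w[:, None] * m0[None, :]))))
```

Both equalities hold exactly (up to floating-point error) whenever the listener obeys equation (1).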
Appendix C. Efficiency beyond contact
Our results in the main text suggest that although exposure to English may have inspired some of the changes in Nafaanra, the 2018 Nafaanra system reflects pressure for efficiency beyond the influence of language contact. Here we provide additional analysis in support of this claim. First, Figure 8 shows that the 2018 system differs from the English system (estimated from the data of Lindsey and Brown, 2014) not only in the number of color categories but also in their extensions. This suggests that the 2018 system cannot be explained by simply copying English color categories into Nafaanra.

**Figure 8.** Contour plots of the Nafaanra (2018, same as Figure 2B) and English systems. While the two systems share some resemblance, they differ both in the number of color categories (e.g., pink does not appear in Nafaanra) and in their extensions (e.g., the brown and blue categories are larger in English, while the yellow and black/dark categories are larger in Nafaanra).
Second, we compared our results with a simple baseline model of language contact that does not take into account any pressure for efficiency. Specifically, we considered a set of hypothetical systems that are obtained by linear mixtures of the 1978 system and the English system. Let $P_{78}(w|c)$ and $P_{\text{eng}}(w|c)$ be the empirical distributions of terms $w$ given colors $c$, as estimated from the 1978 Nafaanra and English data respectively. In order to combine these systems, we first need to align their categories. To this end, we mapped each term in the 1978 Nafaanra system to its corresponding English term, using the English terms “white,” “black,” “red,” and “gray.” In addition, to allow the 1978 system to potentially evolve to the full English system, we added hypothetical terms corresponding to the remaining English terms but with zero probability mass. In other words, we constructed $\tilde{P}_{78}(w|c)$ from the 1978 system such that $\tilde{P}_{78}(w|c) = P_{78}(w|c)$ if $w$ appeared in 1978 and $\tilde{P}_{78}(w|c) = 0$ otherwise. We then considered the following set of hypothetical systems:
$$\alpha \tilde{P}_{78}(w|c) + (1 - \alpha) P_{\text{eng}}(w|c), \quad \alpha \in [0, 1],$$
where the 1978 system is obtained at $\alpha = 1$, the English system is obtained at $\alpha = 0$, and in between we get linear mixtures of the two systems.
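A minimal sketch of this mixture construction, with toy naming distributions and hypothetical term labels (`t1`–`t4`) standing in for the actual 1978 Nafaanra and English data:

```python
import numpy as np

rng = np.random.default_rng(1)
eng_terms = ["white", "black", "red", "gray", "blue", "green"]  # toy English lexicon
# hypothetical 1978 labels, aligned to English terms as described in the text
align = {"t1": "white", "t2": "black", "t3": "red", "t4": "gray"}
naf_terms = list(align)

nC = 10                                                    # toy number of color chips
P_eng = rng.dirichlet(np.ones(len(eng_terms)), size=nC)    # P_eng(w|c), rows = chips
P_78 = rng.dirichlet(np.ones(len(naf_terms)), size=nC)     # P_78(w|c)

# embed the 1978 system in the English vocabulary; zero mass on the rest
P_78_tilde = np.zeros((nC, len(eng_terms)))
for j, w in enumerate(naf_terms):
    P_78_tilde[:, eng_terms.index(align[w])] = P_78[:, j]

def mixture(alpha):
    """Hypothetical mixture system alpha * P_78_tilde + (1 - alpha) * P_eng."""
    return alpha * P_78_tilde + (1 - alpha) * P_eng
```

At $\alpha = 1$ this recovers the (embedded) 1978 system, at $\alpha = 0$ the English system, and every intermediate $\alpha$ yields a valid naming distribution.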
Figure 9 compares the complexity and accuracy tradeoffs of these hypothetical mixture systems with those of Nafaanra and English and the optimal tradeoffs at the IB theoretical bound. It can be seen that the 2018 system is more efficient (i.e., lies closer to the theoretical bound) than the mixture systems, in addition to being distant from the English system. This further supports our conclusion that Nafaanra has changed under pressure for efficiency beyond the influence of language contact.
Appendix D. Rotation analysis
Our evaluation of the efficiency of the Nafaanra color naming system with respect to a set of hypothetical systems is based on Regier et al.’s (2007) rotation analysis. That is, for each color naming system, a set of hypothetical systems can be derived by rotations along the hue dimension of the WCS color naming grid (Figure 1). This is illustrated in Figure 10 for the 2018 Nafaanra system.
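The rotation operation itself is a circular shift of the naming data along the hue columns; a small sketch, assuming an 8×40 chromatic grid as in the WCS stimulus array:

```python
import numpy as np

def rotate_system(naming, r):
    """Rotate a color naming array by r columns along the hue axis.

    `naming` is assumed to be an (n_rows, 40) array over the grid's 40 hue
    columns; positive r shifts categories to the right, negative r to the left.
    """
    return np.roll(naming, r, axis=1)

grid = np.arange(8 * 40).reshape(8, 40)   # stand-in for a naming map
```

Rotations by $r$ and $-r$ cancel, and a full rotation of 40 columns is the identity, so the set of rotated variants is finite.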
Appendix E. Random drift model
Our random drift model simulates language change via a stochastic process that preserves structured categories without incorporating pressure for efficiency. To this end, we consider a class of artificial color naming systems, in which each category $w$ induces a Gaussian distribution, $q(c|w) = \mathcal{N}(c; \mu_w, \Sigma_w)$, over CIELAB space (Abbott et al., 2016). In practice, we discretized these Gaussians by restricting them to colors of the WCS grid (Figure 1). A system with $k$ categories is defined by $k$ Gaussians and a $k$-dimensional probability vector $q(w)$. Given these parameters, the naming distribution is taken to be $q(w|c) \propto q(c|w)q(w)$, where $c$ is a color. Our stochastic process takes an initial system from this class, and propagates it in time by allowing its parameters to change gradually.

**Figure 10.** Example of rotated variants of the 2018 Nafaanra system: $r = 0$ corresponds to the actual system, $r > 0$ to a shift of $r$ columns to the right with respect to the color grid of Figure 1, and $r < 0$ to a shift of $|r|$ columns to the left.
Before we define the dynamics of this process, our parameterization requires further elaboration. First, to ensure that each covariance matrix $\Sigma_w$ remains positive semi-definite, we parameterize it by another matrix, $L_w$, such that $\Sigma_w = L_w L_w^T$. Second, to allow categories to emerge or vanish, we assume a maximum of $K = 330$ potential categories, and keep a weight vector, $\pi_w$, for them. Only categories for which $\pi_w$ is higher than a given threshold $\eta$ are included in the lexicon. For those categories, we define $q(w) \propto \pi_w$. Therefore, $\eta$ is a hyper-parameter that controls the tendency to add new categories. At the $t$-th iteration of the process, the system is defined by $\theta(t) = \{\mu^{(t)}_w, L^{(t)}_w, \pi^{(t)}_w\}_{w=1}^K$.
Given an initial system, $\theta(0)$, the dynamics of the process are defined as follows. At each iteration $t$, a category $w_t$ is chosen at random. First, the weight vector is updated by randomly selecting whether to add or subtract $\eta$ from $\pi^{(t-1)}_{w_t}$, and keeping the vector non-negative and normalized. Next, if $w_t$ is already in the lexicon, i.e. $\pi^{(t-1)}_{w_t} > \eta$, then with probability 0.5 its parameters are updated as follows:
$$\mu^{(t)}_{w_t} = \frac{1}{2} \left( \mu^{(t-1)}_{w_t} + c_t \right), \quad c_t \sim q_{t-1}(c|w_t)$$
$$L^{(t)}_{w_t} = L^{(t-1)}_{w_t} + I + A^{(t)}, \quad A^{(t)}_{i,j} \sim \mathcal{N}(0, 1).$$
The update rule for $\mu^{(t)}_{w_t}$ shifts it in the direction of $c_t$, which on average would be a small shift because $c_t$ is sampled from $q_{t-1}(c|w_t)$. The update rule for $L^{(t)}_{w_t}$ adds to it a noise matrix, $A^{(t)}$, and the identity matrix, $I$, in order to encourage the category to grow over time.
Finally, it remains to set the initial set of parameters, $\theta(0)$, and threshold $\eta$. We set $\theta(0)$ such that the corresponding system will approximate the actual 1978 Nafaanra system. For each category $w$ in the 1978 system, we fit a Gaussian with a diagonal
covariance matrix to that category, and take $L_w^{(0)}$ to be its square root. For these categories, we take $\pi_w^{(0)}$ to be their proportion in the 1978 naming data. For the remaining potential categories, which are not in the lexicon, we set $\pi_w^{(0)} = 0$, initialize $\mu_w^{(0)}$ by randomly selecting a chip from the WCS grid (with replacement), and initialize $L_w^{(0)}$ by $\sigma_w^{(0)} I + A_w^{(0)}$, where $(A_w^{(0)})_{i,j} \sim \mathcal{N}(0, 1)$ and $\sigma_w^{(0)}$ is drawn uniformly from $[1, 5]$. We take $\eta = 0.01$, for which we observed a trend of gradual increase in the number of categories, reaching on average $k = 23.9$ after 1,500 iterations. An example of a hypothetical trajectory generated by this random drift process is shown in Figure 11.
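One iteration of this drift process can be sketched as follows. This is a toy re-implementation with stand-in chip coordinates and initial parameters (not the fitted 1978 values), intended only to make the update rules concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
K, dim, eta = 330, 3, 0.01                 # max categories, CIELAB dims, threshold

# stand-in for the WCS chips' CIELAB coordinates (the real grid has 330 chips)
chips = rng.uniform([0, -100, -100], [100, 100, 100], size=(330, dim))

mu = chips[rng.integers(0, len(chips), size=K)].astype(float)        # means
L = np.array([rng.uniform(1, 5) * np.eye(dim) + rng.normal(size=(dim, dim))
              for _ in range(K)])                                    # Sigma = L L^T
pi = np.zeros(K)
pi[:3] = [0.5, 0.3, 0.2]                   # a few categories start in the lexicon

def drift_step(mu, L, pi):
    """One iteration of the random drift process (toy sketch)."""
    w = rng.integers(K)
    pi = pi.copy()
    pi[w] = max(0.0, pi[w] + rng.choice([-eta, eta]))   # update weight vector
    pi /= pi.sum()                                      # keep it normalized
    if pi[w] > eta and rng.random() < 0.5:              # category is in the lexicon
        Sigma = L[w] @ L[w].T
        diff = chips - mu[w]
        logq = -0.5 * np.einsum('ij,jk,ik->i', diff, np.linalg.pinv(Sigma), diff)
        q = np.exp(logq - logq.max()); q /= q.sum()     # q(c|w) restricted to grid
        c = chips[rng.choice(len(chips), p=q)]
        mu, L = mu.copy(), L.copy()
        mu[w] = 0.5 * (mu[w] + c)                       # shift mean toward c
        L[w] = L[w] + np.eye(dim) + rng.normal(size=(dim, dim))
    return mu, L, pi

for _ in range(100):
    mu, L, pi = drift_step(mu, L, pi)
lexicon = np.flatnonzero(pi > eta)         # categories currently in the lexicon
```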
Analytic Solution and Theory for the Size and Shape of Skyrmions as a Function of Magnetic Material Properties
Ellen Lu, Karen Livesey
Supervised by Karen Livesey
School of Information and Physical Sciences, University of Newcastle
February 28, 2022
Abstract
The goal of this research project is to find an analytic solution for the size and shape of skyrmions as a function of magnetic material properties. Compared to the results developed by Büttner et al., our analytic solution and theory are simpler and easier to understand for readers outside mathematical fields. The expressions developed reveal the underlying physics of the problem. We use an analytic method which was developed for solving magnetic domain wall problems. We identify five major energy contributions that determine the size and shape of skyrmions: Dzyaloshinskii-Moriya interaction (DMI) energy, exchange interaction energy, anisotropy energy, Zeeman energy and demagnetization energy. Piecewise functions were used to approximate the magnetization angle function and so simplify the energy density expressions for each of the contributions. The energy per unit area was calculated for a thin film by integrating over cylindrical coordinates. To examine the solution accuracy, the results were plotted alongside plots using the function from Büttner et al. That function is more accurate than our results but is very complicated. Our energy contributions are generally close to those of Büttner et al. We produced an analytical result describing the size and width of skyrmions by minimizing the total energy of the skyrmion.
1 Introduction
Magnetic skyrmions are tiny swirls of local magnetization in magnetic materials. As depicted in Figure 1, the direction of magnetization of a typical skyrmion varies from the $-z$ direction in the centre region to the $+z$ direction at the edge of the skyrmion. This variation occurs along the radial axis $\rho$. [1] A skyrmion is described in some papers as a topologically protected quasiparticle, which has unique properties as a whole rather than as a collection of individual components. Its stability and dynamics depend strongly on its topological properties. [1] Skyrmions are promising magnetic structures for the transportation and storage of digital information. [2]
![Figure 1: The schematic of a typical skyrmion: arrows show the direction of magnetization, and the colour indicates the projection along the out-of-plane $z$ direction. The local magnetization gradually rotates from $-z$ direction (centre of the skyrmion) to $+z$ direction (edge of the skyrmion) along the axis $\rho$. (a) 2D view from above the skyrmion. (b) 3D view of the skyrmion.]
There are three features of skyrmions that make them one of the best candidates for next-generation information storage technology. Because skyrmions are topologically protected atomic spin configurations, they are relatively stable compared to other magnetic structures such as magnetic vortices or bubbles. [4] Skyrmions can also be extremely small, down to the “single-digit nanometre scale”, with the support of the interfacial Dzyaloshinskii-Moriya interaction (DMI). DMI arises from spin-orbit coupling and stabilizes smaller skyrmion structures at room temperature. [2] Moreover, skyrmions can be created, deleted and moved like
a single particle through a material using very small amounts of electric current. The current densities used to move skyrmions (through magnetic materials such as thin films) can be several orders of magnitude smaller than those used to move magnetic domain walls. [2] Hence, future digital data storage devices based on magnetic skyrmions would have better stability, higher information density and much lower energy costs, as well as ease of manipulation via electric current. [5] Although today’s hard-disk drives can achieve high densities of information storage, they have very complex and fragile mechanical parts. Skyrmions, on the other hand, do not require extra parts to be moved, and they can achieve even higher bit density compared to today’s magnetic data storage devices. [4]
Besides data storage devices, skyrmions are also good candidates for some logic gates. [2] However, both future information and communication technologies require individual skyrmions to be small and stable at room temperature in zero or very small applied fields. [1] In a sea of potential materials, a good mathematical model that can predict the size, shape and behaviour of a skyrmion as the magnetic properties of the material are varied would enable material scientists to search for the ‘holy grail’ of skyrmionic devices in bright daylight rather than in the dark.
Büttner et al. developed a very complicated analytical framework and numerical solutions to predict the properties of isolated skyrmions in any magnetic thin film. [5] The downside of the work by Büttner et al. is that the analytical model is complicated for scientists who do not have a mathematical background. Moreover, some parts of the framework rely on numerical data fitting to a function rather than being purely derived from physical theory. Hence, the goal of our work is to develop simpler analytic theories that directly represent the physical properties of skyrmions. The solution should be easier to use and understand for material scientists and engineers from different academic fields.
In this report, the aim is to present the analytic theories that we developed for describing the size and shape of skyrmions in thin films, as a function of magnetic material properties. In Section 2, our analytic energy density contributions for skyrmions are described in detail. In Section 3, energy minimization to find the skyrmion size is detailed. Finally, the conclusion and future work are discussed in Section 4.
Statement of Authorship: All work presented in this report is Ellen Lu’s calculation, except equations (17) and (18), which were derived by Karen Livesey. The results were confirmed and examined by Karen Livesey.
2 Skyrmion energy contributions
In the following subsections, the major energy density contributions are introduced and presented in cylindrical coordinates. The piecewise function for the magnetization inside a skyrmion is presented in detail, allowing analytic integration of the energy densities. Then, the final energy per unit area of each energy contribution is analysed graphically.
2.1 Dzyaloshinskii-Moriya interaction energy
DMI is the antisymmetric exchange interaction, which was initially found in weak ferromagnetic materials. It arises from spin-orbit coupling. [6, 7] It is the interaction that gives rise to the formation of skyrmions in magnets. $\vec{D}$ is the DMI vector with units of J/m$^2$, and it is assumed that $D$ is positive. The expression for the DMI energy density in Cartesian coordinates [9] is
$$w_{DMI} = -D \left( [\hat{y} \times \hat{z}] \cdot \left[ \hat{m} \times \frac{\partial \hat{m}}{\partial y} \right] + [\hat{x} \times \hat{z}] \cdot \left[ \hat{m} \times \frac{\partial \hat{m}}{\partial x} \right] \right),$$
where $\hat{m}$ is the magnetization unit vector, and $\hat{x}, \hat{y}, \hat{z}$ are unit vectors. Transforming the energy density equation to cylindrical coordinates is not only convenient for the calculation but also a better representation of the skyrmion structure, since it has cylindrical symmetry. By close inspection of Equation (1), we see that the cross product of $\hat{y}$ and $\hat{z}$ is the unit vector $\hat{x}$. Similarly, $\hat{x} \times \hat{z} = -\hat{y}$. This means each dot product picks out a single Cartesian component of the bracketed vector. Thus the equation can be written as:
$$w_{DMI} = -D \left[ \left( m_y \frac{\partial m_z}{\partial y} - m_z \frac{\partial m_y}{\partial y} \right) + \left( m_x \frac{\partial m_z}{\partial x} - m_z \frac{\partial m_x}{\partial x} \right) \right].$$
Using the relationship between Cartesian and cylindrical coordinates, we can obtain the magnetization unit vector $\hat{m}$ components
$$m_x = m_\rho \cos \phi - m_\phi \sin \phi$$
$$m_y = m_\rho \sin \phi + m_\phi \cos \phi$$
$$m_z = m_z.$$
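Since the cylindrical-to-Cartesian map is a rotation of the in-plane components, a quick numerical sanity check is that it preserves the magnetization norm (the sketch below uses the standard convention $m_y = m_\rho \sin\phi + m_\phi \cos\phi$; the component values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m_rho, m_phi, m_z, phi = rng.normal(size=4)   # arbitrary components and angle

# Cartesian components from the cylindrical ones (standard rotation convention)
m_x = m_rho * np.cos(phi) - m_phi * np.sin(phi)
m_y = m_rho * np.sin(phi) + m_phi * np.cos(phi)

# the map is a rotation of the in-plane components, so the norm is preserved
norm_cart = m_x**2 + m_y**2 + m_z**2
norm_cyl = m_rho**2 + m_phi**2 + m_z**2
```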
The chain rule can be used to obtain the coordinate transformation for the partial derivatives:
\[
\frac{\partial}{\partial x} = \frac{\partial \rho}{\partial x} \frac{\partial}{\partial \rho} + \frac{\partial \phi}{\partial x} \frac{\partial}{\partial \phi} + \frac{\partial z}{\partial x} \frac{\partial}{\partial z}
\]
\[
\frac{\partial}{\partial y} = \frac{\partial \rho}{\partial y} \frac{\partial}{\partial \rho} + \frac{\partial \phi}{\partial y} \frac{\partial}{\partial \phi} + \frac{\partial z}{\partial y} \frac{\partial}{\partial z}
\]
(4)
Substituting Equation (3) and Equation (4) into Equation (2), we obtain a function that has only cylindrical components. As shown in Figure 1, there is no change of magnetization along the azimuthal axis $\phi$. So, the energy density function can be further simplified to:
\[
w_{DMI} = -D \left( m_\rho \frac{\partial m_z}{\partial \rho} - m_z \frac{\partial m_\rho}{\partial \rho} + \frac{m_z m_\rho}{\rho} \right).
\]
(5)
### 2.2 Other energy contributions
The exchange interaction energy density in Cartesian coordinates is expressed as
\[
w_{ex} = A \left\{ \sum_{i=x,y,z} \left[ \left( \frac{\partial m_i}{\partial x} \right)^2 + \left( \frac{\partial m_i}{\partial y} \right)^2 + \left( \frac{\partial m_i}{\partial z} \right)^2 \right] \right\}
\]
(6)
Using the same technique as for the DMI energy calculation, we can transform the coordinates system of the equation above. Similarly, all the derivatives with respect to $\phi$ go to zero due to no change in the magnetization relative to azimuthal direction. The exchange energy density in cylindrical coordinates is given by:
\[
w_{ex} = A \left[ \left( \frac{\partial m_\rho}{\partial \rho} \right)^2 + \left( \frac{\partial m_\phi}{\partial \rho} \right)^2 + \left( \frac{\partial m_z}{\partial \rho} \right)^2 + \left( \frac{\partial m_\rho}{\partial z} \right)^2 + \left( \frac{\partial m_\phi}{\partial z} \right)^2 + \left( \frac{\partial m_z}{\partial z} \right)^2 + \frac{m_\phi^2 + m_\rho^2}{\rho^2} \right],
\]
(7)
where $A$ is the exchange stiffness constant for a material. Because the skyrmion exists in a thin film, there is no change of magnetization along the $z$ direction, so all the derivatives with respect to $z$ vanish in Equation (7). The $\phi$ component of magnetization does not change along the $\rho$ direction either, so the term $\frac{\partial m_\phi}{\partial \rho}$ goes to zero. Equation (7) can now be simplified as below:
\[
w_{ex} = A \left[ \left( \frac{\partial m_\rho}{\partial \rho} \right)^2 + \left( \frac{\partial m_z}{\partial \rho} \right)^2 + \frac{m_\phi^2 + m_\rho^2}{\rho^2} \right]
\]
(8)
Anisotropy energy describes the preference of the magnetization to point in or out of the plane, $\pm z$. Its energy density is given by:
\[
w_{anis} = K(1 - m_z^2),
\]
(9)
which is unchanged in cylindrical coordinates.
The Zeeman effect describes the lowering of the energy when the magnetization points along the applied field direction, taken to be \(+z\) here. Its energy density can be expressed as:
\[
w_{zee} = \mu_0 M_s B (1 - m_z)
\]
(10)
where \( \mu_0 \) is the permeability of free space with unit \([m \cdot kg \cdot s^{-2} \cdot A^{-2}] \), \( M_s \) is the saturation magnetization with unit \([A/m]\) and \( B \) is the magnetic induction with unit T.
### 2.3 Piecewise magnetization angle approximation
It would be too complicated to integrate over the original magnetization angle function $\theta(\rho) = \pm 2 \tan^{-1} \left[ e^{(\rho - R)/\Delta} \right]$, where $R$ is the location of the magnetization variation and $\Delta$ is the variation width. Thus, we need a simpler function to approximate the original $\theta$, which ultimately produces a reliable analytic result for the energy. Our skyrmion piecewise function is formulated as shown in Figure 2 below. We define $R$ as the size of the skyrmion and $\Delta$ as the width of the skyrmion. The magnetization angle $\theta$ can be approximated by a linear ansatz

function of $\rho$:
\[
\theta(\rho) = \begin{cases}
0, & 0 < \rho < (R - \Delta), \quad \text{region I} \\
\frac{\pi}{2\Delta}\rho - \frac{\pi}{2\Delta}(R - \Delta), & (R - \Delta) < \rho < (R + \Delta), \quad \text{region II} \\
\pi, & \rho > (R + \Delta), \quad \text{region III}
\end{cases}
\]
(11)
As shown in Figure 2, the function describes the magnetization angle $\theta$, measured from the $+z$ direction, from the centre of the skyrmion to the outside region. This allows the local magnetization to be expressed as $m_z = \cos[\theta(\rho)]$ and $m_\rho = \sin[\theta(\rho)]$.
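The ansatz of Equation (11) is simple to implement; a short sketch (the parameter values are illustrative only):

```python
import numpy as np

def theta(rho, R, Delta):
    """Piecewise-linear ansatz of Equation (11) for the magnetization angle."""
    ramp = (np.pi / (2 * Delta)) * (rho - (R - Delta))   # region II expression
    return np.clip(ramp, 0.0, np.pi)                     # regions I and III

R, Delta = 20e-9, 2e-9   # illustrative skyrmion size and width, in metres
# m_z = cos(theta) and m_rho = sin(theta) then follow directly
```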
Substituting Equation (11) into the cylindrical DMI energy density (Equation 5), one sees that regions I and III do not contribute to the total DMI energy. Hence, integrating the energy density over region II only, the DMI energy is obtained:
\[
\int_0^\infty d\rho \int_0^{2\pi} d\phi \int_0^{t} dz \ (\rho \cdot w_{DMI}) \equiv E_{DMI} = 2\pi t \cdot \pi RD
\]
(12)
where $t$ is the thickness of the magnetic film in metres, through the $z$ direction. The factor of $2\pi$ comes from integration over the azimuthal angle $\phi$. The equation shows that the DMI energy contribution scales linearly with the skyrmion size $R$ and the thickness $t$ of the material.
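Equation (12) can be checked by integrating the energy density of Equation (5) numerically over region II. The material parameters below are illustrative, not taken from the report:

```python
import numpy as np

D = 3e-3       # DMI constant (J/m^2), illustrative value
R = 20e-9      # skyrmion size (m)
Delta = 2e-9   # wall width (m)
t = 1e-9       # film thickness (m)

# region II of the ansatz, Equation (11): theta ramps linearly from 0 to pi
rho = np.linspace(R - Delta, R + Delta, 100001)
theta = (np.pi / (2 * Delta)) * (rho - (R - Delta))
dtheta = np.pi / (2 * Delta)                       # d(theta)/d(rho)

# Equation (5) with m_z = cos(theta), m_rho = sin(theta):
# m_rho dm_z/drho - m_z dm_rho/drho = -theta', so
# w_DMI = -D * (-theta' + sin(theta)cos(theta)/rho)
w = -D * (-dtheta + np.sin(theta) * np.cos(theta) / rho)

# E = 2*pi*t * integral of rho * w drho over region II (I and III give zero)
y = rho * w
E_num = 2 * np.pi * t * np.sum((y[1:] + y[:-1]) / 2 * np.diff(rho))
E_analytic = 2 * np.pi * t * np.pi * R * D         # Equation (12)
```

The sin-cos term integrates to zero by symmetry about $\rho = R$, leaving exactly the $2\pi t \cdot \pi R D$ of Equation (12).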
In order to examine the accuracy of our approximation functions, we plot our energies as a function of skyrmion size $R$ and width $\Delta$ on the same graph as curves generated using the results from Büttner et al. [5] The results from Büttner et al. are highly accurate and have been validated by comparison to experimental and simulation data. [5]
Figure 3 shows the DMI energy per unit area as a function of width $\Delta$ and size $R$. It shows that DMI has no impact on the width $\Delta$ of the skyrmions. However, the DMI energy decreases linearly as the size $R$ increases. This might indicate that a higher DMI energy does make smaller skyrmions more stable. Our analytic approximation (solid lines) gives a perfect fit with the result from the work of Büttner et al. (dashed lines). [10]
Substitution of Equation (11) into the exchange energy density given in Equation (8) leads to the exchange energy, obtained by integrating over region II; regions I and III do not contribute to the total energy. However, the integral cannot be evaluated easily at first. The exchange energy integral is given by
\[
E_{ex} = 2\pi t A \int_{R-\Delta}^{R+\Delta} d\rho \left\{ \rho \left( \frac{\pi}{2\Delta} \right)^2 + \frac{1}{\rho} \sin^2 \left[ \frac{\pi}{2\Delta} (\rho - R + \Delta) \right] \right\}.
\]
(13)
Figure 3: DMI energy per unit area as a function of width $\Delta$ and size $R$: (a) with fixed width $\Delta = 2\text{ nm}$, the DMI energy decreases linearly with increasing size $R$; our approximation of the DMI energy (blue line) is an excellent fit to the results from Büttner et al. [10] (orange line); (b) with fixed size $R = 20\text{ nm}$, DMI has no effect on the width of the skyrmions, which the result from Büttner's group also confirms.
A Taylor expansion to first order is used to approximate $\frac{1}{\rho}$ near $\rho \sim R$, where the integrand has its main contribution. One obtains $\frac{1}{\rho} \sim \left(\frac{2}{R} - \frac{\rho}{R^2}\right)$. This yields a much simpler expression for the exchange energy, namely
$$E_{ex} = 2\pi t A \left( \frac{\pi^2 R}{4\Delta} + \frac{2\Delta}{R} \right). \tag{14}$$
Higher-order Taylor approximations were also examined, but the first order gave the closest and simplest result. The result is plotted alongside the result from Büttner et al. in Figure 4 to examine its accuracy.
Figure 4 shows the exchange energy as a function of the size ($R$) and transition width ($\Delta$) of the skyrmion. The Taylor expansion to first order performs better than the second-order approximation. Although our function slightly underestimates the exchange contribution, it is sufficient for energy minimization (see the next section).
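The Taylor step above can be checked numerically: integrating the $1/\rho$ weight of Equation (13) exactly, and again with the first-order expansion $2/R - \rho/R^2$, gives nearly identical results when $\Delta \ll R$. The values $R = 20$ nm and $\Delta = 2$ nm match the figures; the plain trapezoidal integrator is a minimal stand-in for a quadrature library.

```python
import math

R, DELTA = 20e-9, 2e-9   # illustrative skyrmion size and transition width

def sin2_profile(rho):
    """sin^2 of the linear wall profile inside region II, as in Eq. (13)."""
    return math.sin(math.pi / (2 * DELTA) * (rho - R + DELTA)) ** 2

def integrate(f, a, b, n=10000):
    """Plain trapezoidal rule; stdlib only."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# Exact 1/rho weight versus its first-order Taylor expansion about rho = R.
exact  = integrate(lambda r: sin2_profile(r) / r, R - DELTA, R + DELTA)
taylor = integrate(lambda r: sin2_profile(r) * (2 / R - r / R**2),
                   R - DELTA, R + DELTA)

rel_err = abs(exact - taylor) / exact
```

For $\Delta/R = 0.1$ the relative error of the Taylor-expanded integral is well below one percent, which is why the first-order expansion suffices here.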
The anisotropy and Zeeman energy contributions are easier to obtain by substituting the piecewise function [Equation (11)] into the corresponding energy density expressions [Equations (9) and (10)]. The anisotropy energy is
$$E_{anis} = (2\pi t) \cdot 2\Delta RK, \tag{15}$$
Figure 4: The exchange energy per unit area as a function of the width and size of the skyrmion: (a) with fixed width $\Delta = 2$ nm, the exchange energy increases with size $R$; (b) with fixed size $R = 20$ nm, the exchange energy decays with increasing skyrmion width. Our approximation slightly underestimates the exchange contribution. Both (a) and (b) show that the first-order Taylor expansion gives a better result than the second order. [10]
where $K$ is the anisotropy energy density constant for a given material. The Zeeman energy is
$$E_{zee} = (2\pi t) \cdot BM_s \left[ -R^2 - 4 \left( 1 - \frac{8}{\pi^2} \right) \Delta^2 \right]. \quad (16)$$
Both Equations (15) and (16) agree with the results from Büttner's group (see Figures 5 and 6, respectively).
Figure 5 shows the anisotropy energy per unit area as a function of the size ($R$) and transition width ($\Delta$) of the skyrmion (solid lines). The anisotropy energy increases linearly with both the size and the width of the skyrmion. Our anisotropy energy is again an excellent fit to the result of Büttner et al. (dashed lines).
Figure 6 shows the Zeeman energy per unit area as a function of the size ($R$) and transition width ($\Delta$) of the skyrmion. The Zeeman energy decreases with both the width and the size of the skyrmion. This is because the applied field here is in the same direction as the skyrmion core, so the magnetic field favours the expansion of the skyrmion. Compared with the other energy contributions, the Zeeman contribution is small because of the small field $B = 0.001$ T used here.
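Equations (15) and (16) are simple enough to transcribe directly. A minimal sketch (the sample parameter values in the comments are assumptions, not values from the text):

```python
import math

def anisotropy_energy(K, R, delta, t):
    """Eq. (15): E_anis = (2*pi*t) * 2*delta*R*K.

    K : anisotropy constant (J/m^3); R, delta, t in metres.
    """
    return (2 * math.pi * t) * 2 * delta * R * K

def zeeman_energy(B, Ms, R, delta, t):
    """Eq. (16): E_zee = (2*pi*t) * B*Ms*[-R^2 - 4*(1 - 8/pi^2)*delta^2].

    B : applied field (T); Ms : saturation magnetisation (A/m).
    """
    return (2 * math.pi * t) * B * Ms * (
        -R**2 - 4 * (1 - 8 / math.pi**2) * delta**2)

# Illustrative call: K = 1e6 J/m^3, B = 1 mT, Ms = 1e6 A/m, R = 20 nm,
# delta = 2 nm, t = 1 nm.
e_anis = anisotropy_energy(1e6, 20e-9, 2e-9, 1e-9)
e_zee  = zeeman_energy(1e-3, 1e6, 20e-9, 2e-9, 1e-9)
```

Note that for a field aligned with the core the Zeeman term is negative and becomes more negative as $R$ grows, consistent with the field favouring expansion.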
Figure 5: The anisotropy energy per unit area as function of the width and size of the skyrmion:
(a) with fixed width $\Delta = 2$nm, anisotropy energy linearly increases with the skyrmion size $R$;
(b) with fixed size $R = 20$nm, anisotropy energy linearly increases with the skyrmion width $\Delta$.
Our predicted results match Büttner's result [10] exactly within the given intervals where skyrmions are usually found.
2.4 Stray-field energy
The stray-field energy is the hardest contribution to calculate. The magnetic field outside a magnet is called the stray field, and within the magnet it is known as the demagnetization field. A stray field arises wherever the magnetization has a component normal to an external or internal surface, or wherever the magnetization is nonuniform.
From the earlier derivations by Livesey and Davidson, the shape of the demagnetization field function is similar to a negative magnetization function, so it can be approximated by a piecewise function of the same form used for the other energy contributions. [12, 3, 13] The only differences are that the piecewise profile is slightly wider and that its maximum magnetization does not reach 1; the shortfall in the maximum is denoted $\delta$. The demagnetization field function and the value of $\delta$ calculated by Livesey were used to develop the expression for the demagnetization energy contribution. This produces simple functions for the out-of-plane ($z$) and in-plane ($\rho$) demagnetization energies per unit area.
Figure 6: The Zeeman energy per unit area as a function of the width and size of the skyrmion: (a) with fixed width $\Delta = 2$ nm, the energy decreases with the skyrmion size $R$; our result is identical to Büttner's result [10]; (b) with fixed size $R = 20$ nm, the energy decreases with the width $\Delta$; the energy from our function is almost the same as Büttner's result [10]. The Zeeman contribution is small compared to the other contributions because of the small magnetic field $B = 0.001$ T used here.
\begin{equation}
E_{demag}^z = (2\pi t) \cdot \frac{1}{2}\mu_0 M_s^2 \left[ -2R \left( \Delta + \frac{t}{2} \right) - \frac{t}{2} R + t \left( \Delta + \frac{t}{2} \right) + \frac{t \left( \Delta + \frac{t}{2} \right)^2}{R} \left( 1 - \frac{4}{\pi^2} \right) \right] \tag{17}
\end{equation}
\begin{equation}
E_{demag}^\rho = (2\pi t) \cdot \mu_0 M_s^2 R \Delta \left( \frac{t}{t + 4\Delta} \right) \tag{18}
\end{equation}
Equations (17) and (18) are significantly simpler than the functions given by Büttner's group, while the values our functions predict remain close to the results of Büttner et al. (see Figures 7 and 8).
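As one concrete example, the in-plane demagnetization energy of Equation (18) can be evaluated directly; the parameter values in the usage line are illustrative assumptions.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)

def demag_energy_rho(Ms, R, delta, t):
    """Eq. (18): in-plane demagnetization energy,
    E_demag_rho = (2*pi*t) * mu0 * Ms^2 * R * delta * t / (t + 4*delta).

    Ms : saturation magnetisation (A/m); R, delta, t in metres.
    """
    return (2 * math.pi * t) * MU0 * Ms**2 * R * delta * (t / (t + 4 * delta))

# Illustrative call: Ms = 1e6 A/m, R = 20 nm, delta = 2 nm, t = 1 nm.
e_rho = demag_energy_rho(1e6, 20e-9, 2e-9, 1e-9)
```

The factor $t/(t + 4\Delta)$ makes the in-plane contribution vanish in the zero-thickness limit and grow with film thickness, as expected for a surface-charge-driven term.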
3 Energy minimization
To carry out the minimization, the energies of the different contributions are added together. These are given in the summary Table 1.
The minimization involves solving for the skyrmion size $R$ and the transition width $\Delta$ by differentiating the total energy $E$ and setting the results to zero, namely
\begin{equation}
\frac{\partial E(R, \Delta)}{\partial R} = \frac{\partial E(R, \Delta)}{\partial \Delta} = 0.
\end{equation}
Figure 7: The demagnetization energy (z direction) per unit area as a function of width and size of the skyrmion: (a) with fixed width $\Delta = 2$nm, the energy decreases with the size of skyrmion $R$; (b) with fixed size $R = 20$nm, the energy decreases with the width $\Delta$, both results are very close to Büttner’s result [10].
Equation (19) was solved for zero external magnetic field, $B = 0$. This yields a useful analytic result for the width and size of a skyrmion on a thin film. The width of the skyrmion as a function of the magnetic material properties is
$$\Delta = \frac{-D\pi + \frac{3}{4}\mu_0 M_s^2 t}{4K + \frac{2\mu_0 M_s^2 t}{t - \frac{\pi D}{K}}} \quad (20)$$
The size of the skyrmion as a function of properties of magnetic material is
$$R = \left| D\pi - \frac{3}{4}\mu_0 M_s^2 t \right| \sqrt{\frac{A}{2\pi^2 A \left[ K + \frac{\mu_0 M_s^2 t}{2(t - \frac{\pi D}{K})} \right]^2 - \left( D\pi - \frac{3}{4}\mu_0 M_s^2 t \right)^2 \left[ K + \frac{\mu_0 M_s^2 t}{2(t - \frac{\pi D}{K})} \right]}} \quad (21)$$
When the thickness $t$ in Equations (20) and (21) goes to zero, i.e., when the film is treated as infinitely thin, the result agrees with the analytic result of Wang et al. [14].
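Equation (19) can also be solved numerically. The sketch below sums the simplified per-unit-area terms of Table 1 (with $B = 0$) and locates the minimum by a coarse grid search; the material parameters are illustrative assumptions, and a proper optimizer (e.g. a quasi-Newton routine) would replace the grid search in practice.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)

# Illustrative material parameters (assumptions, not taken from the text).
A  = 1.0e-11   # exchange stiffness (J/m)
D  = -3.0e-3   # DMI constant (J/m^2)
K  = 1.0e6     # anisotropy constant (J/m^3)
MS = 1.0e6     # saturation magnetisation (A/m)
T  = 1.0e-9    # film thickness (m)

def energy_per_area(R, delta):
    """Total energy per unit area, summing the Table 1 terms with B = 0."""
    e_dmi  = D * math.pi * R
    e_ex   = A * ((math.pi**2 / 4) * R / delta + 2 * delta / R)
    e_anis = 2 * R * delta * K
    e_dz   = 0.5 * MU0 * MS**2 * (-2 * R * delta - 1.5 * R * T + delta * T)
    e_drho = MU0 * MS**2 * R * delta * (T / (T + 4 * delta))
    return e_dmi + e_ex + e_anis + e_dz + e_drho

# Coarse grid search standing in for solving Eq. (19): R in 1..100 nm,
# delta in 0.1..10 nm.
grid_R     = [r * 1e-9 for r in range(1, 101)]
grid_delta = [d * 0.1e-9 for d in range(1, 101)]
R_opt, d_opt = min(((r, d) for r in grid_R for d in grid_delta),
                   key=lambda p: energy_per_area(*p))
```

This brute-force search only demonstrates the structure of the problem; the closed forms in Equations (20) and (21) replace it entirely for $B = 0$.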
4 Conclusion, Discussion and Future Work
We developed an analytic solution for the size and shape of skyrmions as a function of the magnetic material properties. Our results show that using a piecewise function to approximate the magnetization function is a viable way to simplify mathematical models while still accurately predicting experimental results. Compared to the analytical result given by Wang et al. [14], our results in Equations (20) and (21) contain the actual thickness of the film $t$. This is more realistic and practical than assuming the magnetic film is vanishingly thin. As shown in Figure 9, the radius of the skyrmion blows up when the film is too thick ($t > 3$ nm); this cannot be predicted by a theory that ignores the film's thickness.

Table 1: Simplified energy per unit area of each contribution used in the minimization

| Energy contribution | Energy per unit area |
|---------------------|----------------------|
| DMI | $\frac{E_{DMI}}{2\pi R} = D\pi R$ |
| Exchange interaction| $\frac{E_{ex}}{2\pi R} = A \left[ \left( \frac{\pi^2}{4} \right) \frac{R}{\Delta} + \frac{2\Delta}{R} \right]$ |
| Anisotropy | $\frac{E_{anis}}{2\pi R} = 2R\Delta K$ |
| Zeeman | $\frac{E_{zee}}{2\pi R} = BM_s \left[ -R^2 - 4 \left( 1 - \frac{8}{\pi^2} \right) \Delta^2 \right]$ |
| Stray field / demagnetization ($z$) | $\frac{E_{demag}^z}{2\pi R} = \frac{1}{2}\mu_0 M_s^2 \left( -2R\Delta - \frac{3}{2}Rt + \Delta t \right)$ |
| Stray field / demagnetization ($\rho$) | $\frac{E_{demag}^\rho}{2\pi R} = \mu_0 M_s^2 R\Delta \left( \frac{t}{t + 4\Delta} \right)$ |
Another interesting result comes from examining how the DMI energy affects the size and width of the skyrmion (see Figure 10). As theoretically predicted, the DMI strength is one of the most important energy contributions and plays a major role in stabilizing skyrmions in a thin film. However, when the DMI is too strong, skyrmions likewise cannot exist on the thin film.
Although the analytical result does not involve the Zeeman energy contribution, the Zeeman contribution (see Figure 6) is relatively small compared to the other energy contributions. Therefore, a small external magnetic field would not greatly affect our analytical result.
In future work, we would like to compare our results for $R$ and $\Delta$ with those obtained by minimizing the total energy given by Büttner et al. Their functions cannot be minimized analytically, only numerically, so minimizing them requires further work on the numerical analysis. Nevertheless, their result would be a good benchmark for the accuracy of our analytical result and theory.
Figure 8: The demagnetization energy ($\rho$ direction) per unit area as a function of width and size of the skyrmion: (a) with fixed width $\Delta = 2$nm, the energy decreases with the size of skyrmion $R$; (b) with fixed size $R = 20$nm, the energy decreases with the width $\Delta$, both results are very close to Büttner’s result [10].
Figure 9: Size of the skyrmion as a function of the magnetic film thickness: there is a maximum thickness beyond which skyrmions cannot exist in a thin film.
Figure 10: Size of the skyrmion as a function of the DMI strength: DMI stabilizes the skyrmion, but skyrmions cannot form when the DMI energy is too large.
Acknowledgement
This research is based on Karen Livesey's work and was carried out under her supervision. I would like to thank her for her enormous contribution and devotion to this research project, and for guiding me over knowledge gaps and through obstacles.
References
[1] Fert, A., Reyren, N., and Cros, V., 2017. Magnetic skyrmions: advances in physics and potential applications. *Nature Reviews Materials*, 2(7), pp.1-15.
[2] Wiesendanger, R., 2016. Nanoscale magnetic skyrmions in metallic films and multilayers: a new twist for spintronics. *Nature Reviews Materials*, 1(7), pp.1-11.
[3] Davidson, J., 2020, Skyrmion Size and Shape Calculations. *The University of Colorado at Colorado Springs - internal document*.
[4] Fert, A., Cros, V., and Sampaio, J., 2013. Skyrmions on the track. *Nature Nanotechnology*, 8(3), pp. 152-156
[5] Büttner, F., Lemesh, I., and Beach, G.S., 2018. Theory of isolated magnetic skyrmions: From fundamentals to room temperature applications. *Scientific Reports*, 8(1), pp.1-12
[6] Coey, J.M., 2010. *Magnetism and Magnetic Materials*. Cambridge University Press.
[7] Livesey, K.L., 2022. Consequences of the Dzyaloshinskii-Moriya Interaction. *Surface Science Reports*. in preparation.
[8] Aharoni, A., 2000. *Introduction to the Theory of Ferromagnetism* (Vol.109). Clarendon Press.
[9] Moon, J.H., Seo, S.M., Lee, K.J., Kim, K.W., Ryu, J., Lee, H.W., McMichael, R.D. and Stiles, M.D., 2013. Spin-Wave Propagation in the Presence of Interfacial Dzyaloshinskii-Moriya Interaction. *Physical Review B*, 88(18), p.184404.
[10] Büttner, F., Lemesh, I., and Beach, G.S., 2018. Theory of isolated magnetic skyrmions: From fundamentals to room temperature applications. *Scientific Reports*. Supplementary material.
[11] Flores, C.Q., Stuart, A.R., Buchanan, K.S. and Livesey, K.L., 2020. Analytic calculation for the stray field about Néel and Bloch magnetic domain walls in a rectangular nanoribbon. *Journal of Magnetism and Magnetic Materials*, 513, p.167164.
[12] Livesey, K.L., 2020. Skyrmion demagnetization energy derivation. *The University of Newcastle - internal document*.
[13] Livesey, K.L., 2021. Dipolar Z linearpiecewise. *The University of Newcastle - internal document*.
[14] Wang, X.S., Yuan, H.Y. and Wang, X.R., 2018. A theory on skyrmion size. *Communications Physics*, 1(1), pp.1-7.
Criminal Procedure - Guilty Pleas - Voluntariness Where Motivated by Desire to Escape Death Penalty under Unconstitutional Statutory Scheme - Brady v. United States, 397 U.S. 742 (1970); Parker v. North Carolina, 397 U.S. 790 (1970)
Denver Law Journal
Recommended Citation
Criminal Procedure - Guilty Pleas - Voluntariness Where Motivated by Desire to Escape Death Penalty under Unconstitutional Statutory Scheme - Brady v. United States, 397 U.S. 742 (1970); Parker v. North Carolina, 397 U.S. 790 (1970), 47 Denv. L.J. 540 (1970).
CRIMINAL PROCEDURE — GUILTY PLEAS — Voluntariness Where Motivated by Desire to Escape Death Penalty Under Unconstitutional Statutory Scheme. — Brady v. United States, 397 U.S. 742 (1970); Parker v. North Carolina, 397 U.S. 790 (1970).
EARLY in 1959 Robert Brady was indicted in federal court for kidnapping and for failing to release the victim unharmed in violation of the Federal Kidnapping Act.\(^1\) He faced a maximum penalty of death if the verdict of the jury should so recommend.\(^2\) After his codefendant had confessed and on advice of counsel, Brady entered a plea of guilty. The plea was accepted, but only after the trial judge had twice questioned Brady concerning its voluntariness.\(^3\) Brady was sentenced to 50 years (later reduced to 30 years) imprisonment.
Five years after Brady’s conviction, 15-year-old Charles Lee Parker was arrested in Roanoke Rapids, North Carolina, on suspicion of burglary. After being questioned, he was placed in a jail cell where he spent the night. The following morning, after a short period of questioning, Parker confessed to burglary and rape. He was subsequently indicted for first degree burglary, an offense which carries a mandatory death sentence\(^4\) unless the defendant pleads guilty or the jury recommends mercy.\(^5\) On advice of counsel Parker pled guilty; and, following a series of questions by the trial judge as to its voluntariness,\(^6\) the plea was accepted. Parker was thereupon sentenced to life imprisonment.\(^7\)
---
\(^1\) 18 U.S.C. § 1201 (a)(1) (1964).
\(^2\) Id. § 1201 (a)(1) to (2) (1964), provides that:
(a) Whoever knowingly transports in interstate or foreign commerce, any person who has been unlawfully seized, confined, inveigled, decoyed, kidnapped, abducted, or carried away and held for ransom or reward or otherwise, except, in the case of a minor, by a parent thereof, shall be punished (1) by death if the kidnapped person has not been liberated unharmed, and if the verdict of the jury shall so recommend, or (2) by imprisonment for any term of years or for life, if the death penalty is not imposed.
\(^3\) For a verbatim account of the exchange between Brady and the trial judge, see 397 U.S. at 743-44 n.2.
\(^4\) N.C. Gen. Stat. § 14-52 (1969).
\(^5\) N.C. Gen. Stat. § 15-162.1 (1965):
In the event [a guilty] plea is accepted, the tender and acceptance thereof shall have the effect of a jury verdict of guilty of the crime charged with recommendation by the jury in open court that the punishment shall be imprisonment for life in the State's prison; and thereupon, the court shall pronounce judgment that the defendant be imprisoned for life in the State's prison.
\(^6\) For a verbatim account of the exchange see Parker v. North Carolina, 397 U.S. at 793 n.3 (1970).
\(^7\) Id. at 793.
In 1968, doubt was cast on the validity of the guilty pleas of Brady and Parker by the decision in the case of *United States v. Jackson*,\(^8\) in which the Supreme Court invalidated the death penalty provision of the Federal Kidnapping Act on the ground that it imposed an impermissible burden upon the exercise of the Sixth Amendment right to trial by jury and the Fifth Amendment right not to plead guilty. The precise infirmity of the statutory provision considered in *Jackson* was that it immunized from the death sentence those willing to enter a guilty plea and, therefore, "needlessly" encouraged guilty pleas and jury waivers.\(^9\) Relying only on the implications of *Jackson*,\(^10\) Brady and Parker filed petitions in their respective forums seeking post-conviction relief on the ground that their guilty pleas were motivated by a desire to escape the death penalty, a motivation supplied by an impermissible, unconstitutional statutory scheme.\(^11\) Both petitions were denied, Brady's by the lower federal courts\(^12\) and Parker's by the North Carolina state courts.\(^13\) On review, the Supreme Court held that neither the record in *Brady* nor in *Parker* revealed any basis for disturbing the judgments of the respective courts below, i.e., that the petitioners' guilty pleas were tendered voluntarily and knowingly and were therefore valid.\(^14\)
I. Guilty Pleas and the Constitution: The State of the Law Prior to Brady and Parker.
It has long been established that a plea of guilty constitutes a waiver of several fundamental constitutional rights\(^15\) and is therefore subject to stringent safeguards.\(^16\) A constitutionally valid guilty plea
---
\(^8\) 390 U.S. 570 (1968).
\(^9\) Id. at 583.
\(^10\) It should be noted that the Court in *Jackson* was faced only with the question of the constitutionality of the death penalty provision of the Federal Kidnapping Act. The Court did not have before it a guilty plea tendered under that act because the defendant's motion to dismiss the indictment on the ground that the statute was unconstitutional had been granted by the district court; no plea was ever entered. Thus, the assumption by Brady and Parker that their guilty pleas were invalid for having been entered under constitutionally infirm statutory schemes was pure speculation and was not directly supported in *Jackson*.
\(^11\) While the North Carolina statute under which Parker had been convicted was not directly affected by the decision in *Jackson*, the effect of the North Carolina statute was the same; and Parker was safe in assuming that it would be invalidated under the principle announced in *Jackson*. Indeed, the statute was invalidated on the basis of *Jackson* in *Alford v. North Carolina*, 405 F.2d 340, 345 (4th Cir. 1968), rev'd on other grounds, 39 U.S.L.W. 4001 (1970).
\(^12\) Brady v. United States, 404 F.2d 601 (10th Cir. 1968).
\(^13\) Parker v. State, 2 N.C. App. 27, 162 S.E.2d 526 (1968).
\(^14\) Brady v. United States, 397 U.S. 742, 749 (1970); Parker v. North Carolina, 397 U.S. 790, 796 (1970).
\(^15\) See, e.g., Boykin v. Alabama, 395 U.S. 238 (1969): Expressly the right against self-incrimination, trial by jury, and confrontation of witnesses.
\(^16\) See Boykin v. Alabama, 395 U.S. 238 (1969); Machibroda v. United States, 368 U.S. 487 (1962); Von Moltke v. Gillies, 332 U.S. 708 (1948); Kercheval v. United States, 274 U.S. 220 (1927).
must be knowingly and voluntarily tendered.\textsuperscript{17} Although often discussed under the single generic heading of "voluntariness,"\textsuperscript{18} these two requirements are separate and distinct elements, and an infirmity in either serves to vitiate a particular guilty plea.\textsuperscript{19}
In order to constitute a "knowing" guilty plea, the defendant must be fully apprised "of the nature of the charges, the statutory offenses included within them, the range of allowable punishments thereunder, possible defenses to the charges and circumstances in mitigation thereof, and all other facts essential to a broad understanding of the whole matter . . . ."\textsuperscript{20} Implicit in such a test is the requirement that the information upon which the defendant relies must not be false or misleading.\textsuperscript{21}
The requirement that a guilty plea be entered voluntarily is a more elusive concept. From a philosophical point of view, the concept of voluntariness connotes the free exercise of a person's will; but what constitutes "free will" is open to alternative interpretations. On the one hand, it is possible to proceed from the premise that the mere existence of an extraneous inducement will be sufficient to deprive an act of its voluntariness. Under this view, a guilty plea would be involuntary unless motivated solely by the defendant's sense of guilt and remorse.\textsuperscript{22} On the other hand, "free will" can be defined in terms of a rational choice between genuine alternatives.\textsuperscript{23} With this interpretation, a guilty plea would be involuntary only when the impact of extraneous inducements is sufficient to cause the defendant to make an
\textsuperscript{17} Boykin v. Alabama, 395 U.S. 238, 244 (1969); Machibroda v. United States, 368 U.S. 487, 493 (1962); Johnson v. Zerbst, 304 U.S. 458, 465 (1938); Kercheval v. United States, 274 U.S. 220, 223 (1927).
\textsuperscript{18} Lassiter v. Turner, 423 F.2d 897, 900 (4th Cir. 1970).
\textsuperscript{19} McCarthy v. United States, 394 U.S. 459, 467 (1969).
\textsuperscript{20} Von Moltke v. Gillies, 332 U.S. 708, 724 (1948).
\textsuperscript{21} \textit{Id.} at 720. See generally Johnson v. Zerbst, 304 U.S. 458 (1938). Pursuant to such reasoning it has been held that a prosecutor's threat to bring charges not permitted by law or warranted by the evidence is tantamount to presenting a defendant with false and misleading information and a guilty plea tendered in reliance thereon is invalid. \textit{Lassiter v. Turner}, 423 F.2d 897, 900 (4th Cir. 1970). Likewise where a prosecutor fails to keep a promise of leniency upon which the defendant relied in tendering his plea, the plea will not be allowed to stand. \textit{Dillon v. United States}, 307 F.2d 445, 449 (9th Cir. 1962). This latter proposition probably has more to do with ethical due process than with the "knowing" requirement, but it is possible to argue that the element of deceit implicit in the broken promise is but another form of false and misleading information.
\textsuperscript{22} Fortunately for the administration of justice in the United States, the courts have not embraced this argument; for roughly 90 percent of all convictions in the United States result from guilty pleas (D. NEWMAN, \textit{CONVICTION: THE DETERMINATION OF GUILT OR INNOCENCE WITHOUT TRIAL} 8 (1966)), and most of these pleas are induced by permissible plea bargaining. Thus if the premise that "free will" is negated by the existence of any extraneous inducement were adopted by the courts, plea bargaining would be inherently coercive, and the administration of justice in the United States would be greatly impaired; but see Chalker, \textit{Judicial Myopia, Differential Sentencing and the Guilty Plea — A Constitutional Examination}, 6 AM. CRIM. L. Q. 187 (1968).
\textsuperscript{23} See Gilmore v. California, 364 F.2d 916 (9th Cir. 1966); Godlock v. Ross, 259 F. Supp. 659 (E.D.N.C. 1966); United States v. Tateo, 214 F. Supp. 560 (S.D.N.Y. 1963); Note, \textit{The Unconstitutionality of Plea Bargaining}, 83 HARV. L. REV. 1387 (1970).
irrational choice.\textsuperscript{24} This second view would require coercion-in-fact to render a guilty plea involuntary.
Judicial practice has drawn upon elements of both theories of voluntariness. For example, courts are often heard to say that it is necessary to look to all the relevant circumstances, \textit{i.e.}, the "totality of factors," to determine whether or not the defendant was in fact coerced.\textsuperscript{25} However, the inherent impropriety of a given inducement may compel the conclusion that, irrespective of its actual impact on the defendant's will, the mere presence of the inducement within his decision making milieu is sufficient to render a guilty plea invalid.\textsuperscript{26} In other words, such an inducement is deemed to be coercive \textit{per se}.
This dual approach to the problem of voluntariness is illustrated by the response of the lower courts to the decision in \textit{United States v. Jackson}.\textsuperscript{27} Prior to \textit{Jackson}, the mere fact that a defendant's decision to plead guilty was motivated by his fear of the death penalty was generally held to be insufficient to render his plea invalid.\textsuperscript{28} In the aftermath of \textit{Jackson}, however, the courts were faced with the problem of deciding what effect an \textit{unconstitutional} death penalty provision should have on the validity of a guilty plea made in fear thereof. Given the attitude of the courts toward improper inducements,\textsuperscript{29} it might have been expected that the courts would conclude that statutory schemes such as that condemned in \textit{Jackson} are inherently coercive and that all guilty pleas tendered thereunder should be declared invalid. However, inasmuch as express language in \textit{Jackson} forbade such an interpretation,\textsuperscript{30} the courts were forced to adopt alternative positions. From the quandary there emerged two distinct patterns.
The Fourth Circuit in the case of \textit{Alford v. North Carolina}\textsuperscript{31} opted for a "principal factor" test and held that a prisoner is entitled to relief if he can demonstrate that his plea was primarily the product of the
\textsuperscript{24} Note, \textit{The Unconstitutionality of Plea Bargaining}, 83 HARV. L. REV. 1387, 1398 (1970).
\textsuperscript{25} Haynes v. Washington, 373 U.S. 503, 513 (1963); Leyra v. Denno, 347 U.S. 556, 558 (1954); United States \textit{ex rel.} Brock v. La Vallee, 306 F. Supp. 159, 165 (S.D.N.Y. 1969); McFarland v. United States, 284 F. Supp. 969, 977 (D. Md. 1968); United States v. Colson, 230 F. Supp. 953, 955 (S.D.N.Y. 1964); United States v. Tateo, 214 F. Supp. 560, 565 (S.D.N.Y. 1963).
\textsuperscript{26} Euziere v. United States, 249 F.2d 293, 294-95 (10th Cir. 1957); United States v. Tateo, 214 F. Supp. 560, 567 (S.D.N.Y. 1963) [Promises of leniency or threats of harsher punishment by the trial judge held to be coercive \textit{per se}.]
\textsuperscript{27} 390 U.S. 570 (1968).
\textsuperscript{28} Gilmore v. California, 364 F.2d 916, 918 (9th Cir. 1966); Laboy v. New Jersey, 266 F. Supp. 581, 584 (D.N.J. 1967).
\textsuperscript{29} See cases cited notes 21 & 26 \textit{supra}.
\textsuperscript{30} According to the Court "the fact that the Federal Kidnapping Act tends to discourage defendants from insisting upon their innocence and demanding trial by jury hardly implies that every defendant who enters a guilty plea to a charge under the Act does so involuntarily." 390 U.S. at 583.
\textsuperscript{31} 405 F.2d 340 (4th Cir. 1968), \textit{rev'd}, 39 U.S.L.W. 4001 (1970).
burdens placed upon him by the unconstitutional statutory scheme.\textsuperscript{32} According to the court in \textit{Alford}, when fear of an unconstitutional death penalty provision was the principal motivating factor in the defendant's decision to plead guilty, there is no need for a subjective inquiry into the voluntariness of the plea — the plea is invalid irrespective of whether or not the defendant was capable of making a rational choice.\textsuperscript{33}
Other federal courts have refused to assign any special status to the constitutionally infirm death penalty and have continued to apply the subjective "totality of factors" test\textsuperscript{34} in an effort to determine whether the defendant's will was actually overborne.\textsuperscript{35} This position appears to be more in keeping with the underlying purpose of the \textit{Jackson} rationale which was not to identify inherently coercive inducements and render guilty pleas entered in response thereto invalid, but rather to remove from the defendant's decisionmaking process inducements which needlessly penalize the assertion of constitutional rights.\textsuperscript{36} It is this position which is endorsed by the Supreme Court in the instant cases.
II. Brady and Parker: A Clarification
The Supreme Court's decision in the instant cases\textsuperscript{37} essentially reaffirms the traditional "totality of factors" test and, at the same time, redefines in more precise terms what constitutes an involuntary guilty plea.\textsuperscript{38} In arriving at its decision, the Court begins by reiterating what it said in \textit{Jackson} concerning the effect of statutory schemes, such as those condemned, on a guilty plea made thereunder. According to the Court in \textit{Brady}: "\textit{Jackson} ruled neither that all pleas of guilty encouraged by the fear of a possible death sentence are involuntary pleas nor that such encouraged pleas are invalid whether involuntary or not."\textsuperscript{39} Thus the Court rejects out of hand the assertion that unconstitutional
\textsuperscript{32}Id. at 347. For a discussion of the Supreme Court's reaction to the test devised by the Court of Appeals for the Fourth Circuit, see text accompanying notes 47 & 48, infra.
\textsuperscript{33}Two district court cases which have applied the \textit{Alford} test are \textit{Quillien v. Leke}, 303 F. Supp. 698 (D.S.C. 1969); \textit{Shaw v. United States}, 299 F. Supp. 824 (S.D. Ga. 1969).
\textsuperscript{34}See text accompanying note 25 supra.
\textsuperscript{35}United States \textit{ex rel}. Brock v. LaVallee, 306 F. Supp. 159, 165 (S.D.N.Y. 1969); Pindell v. United States, 296 F. Supp. 751, 753 (D. Conn. 1969); Wilson v. United States, 303 F. Supp. 1139, 1143 (W.D. Va. 1969); McFarland v. United States, 284 F. Supp. 969, 977 (D. Md. 1968).
\textsuperscript{36}390 U.S. 570, 583 (1968).
\textsuperscript{37}Since the Court's views on the issue under consideration are more complete in \textit{Brady} than in \textit{Parker}, for purposes of analysis the \textit{Brady} opinion will be used more extensively.
\textsuperscript{38}A third case, \textit{McMann v. Richardson}, 397 U.S. 759 (1970), decided on the same day as \textit{Brady} and \textit{Parker} also sheds light on the question of when a guilty plea is valid and when it is not; but inasmuch as it deals with the effect of an allegedly coerced confession on the validity of a guilty plea, it is beyond the scope of this comment.
\textsuperscript{39}397 U.S. 742, 747 (1970).
death penalty provisions are inherently coercive.\textsuperscript{40} In so doing, the Court appears to be endorsing a concept of voluntariness which is based entirely on the impact of the inducement in question on the defendant's ability to make a rational choice.\textsuperscript{41} In other words, the nature of the inducement has no bearing on the question of voluntariness.\textsuperscript{42}
In many respects the Court's holding in these two cases was inevitable. In \textit{Jackson} the Court had invalidated a statutory scheme which was said to encourage, as opposed to coerce, guilty pleas. Because the infirmity was said to be a tendency to encourage, the \textit{Jackson} decision cast grave constitutional doubts on any and all inducements which are calculated to encourage guilty pleas, including the time-honored practice of plea bargaining.\textsuperscript{43} If an unconstitutional death penalty provision and the practice of plea bargaining can be said to suffer from the same infirmity, it is clear that if the Court had held that all guilty pleas made in response to the encouragement offered by the unconstitutional statutory scheme are invalid, logic would compel the conclusion that guilty pleas made in response to like encouragement offered by plea bargaining would be equally invalid.
Of course at first glance the Court could have avoided this undesirable result by holding that the statute condemned in \textit{Jackson} was infirm not only because it needlessly encouraged guilty pleas but also because the encouragement involved was the threat of death — a threat which, by its nature, is coercive. By emphasizing the gravity of the threat, the Court could have resolved most of the doubts concerning the constitutionality of plea bargaining without doing violence to its holding in \textit{Jackson}. It would then have been free to invalidate the guilty pleas of Brady and Parker without the fear that its holding would be cited as justification for invalidating guilty pleas made in response to less offensive methods of encouragement. However, the Court would
\textsuperscript{40} As support for this proposition, the Court in \textit{Brady} cites the case of \textit{Laboy v. New Jersey}, 266 F. Supp. 581 (D.N.J. 1967), where a plea of \textit{non vult} under a similar statute was held voluntary even though the defendant was obsessed by the fear of death to the extent of suffering a temporary breakdown. \textit{Id.} at 747.
\textsuperscript{41} That this indeed represents the Court's view is illustrated by a revealing passage in the text of the opinion. In rejecting Brady's contention that his plea was involuntary, the Court notes that there was no evidence "that Brady was so gripped by fear of the death penalty or hope of leniency that he did not or could not, with the help of counsel, rationally weigh the advantages of going to trial against the advantage of pleading guilty." 397 U.S. 742, 750 (1970).
\textsuperscript{42} Mr. Justice Brennan, in a separate opinion, attacks the Court's position in \textit{Brady} and \textit{Parker} on the ground that it is "totally without precedent." 397 U.S. 790, 800 n.2 (1970). However, as has been previously noted, courts have often considered the impact on the defendant's ability to make rational choices to be the controlling factor in the issue of voluntariness. \textit{See} cases cited in note 23 \textit{supra}. Where the Court's position does differ from that of other courts is in its reluctance to hold that an improper or illicit inducement is inherently coercive.
\textsuperscript{43} \textit{See} Note, \textit{supra} note 23, at 1387.
still have been faced with the difficult task of showing how a statutory threat of the death penalty differs in its coercive effect from the plea bargaining situation in which the charges are reduced from first to second degree murder in return for a plea of guilty. Both inducements threaten the death penalty if the defendant goes to trial, and both offer a promise of leniency if he does not. Again logic would compel that if the death penalty provision is inherently coercive, so must be the plea bargaining situation when the threat of the death penalty is involved.
Thus no matter which way the Court turned, a holding that a guilty plea is invalid if made within the context of the statutory scheme condemned in *Jackson* would have provided serious grounds for attacking other guilty pleas entered in response to a threat of greater punishment or an offer of leniency. The response of the Court to this dilemma was to revert to the "totality of factors" test\(^{44}\) and to determine the question of voluntariness on the record.
**III. THE IMPLICATIONS OF BRADY AND PARKER**
The Court's clear emphasis on the impact (as opposed to the nature) of inducements on the rationality of choice in determining voluntariness is not likely to produce any appreciable change in the prevailing judicial approach to the question of validity of guilty pleas. If the holding is given broad interpretation, it may be that the inherent coerciveness of threats or promises by judges\(^{45}\) will no longer be recognized, making it necessary to look to the impact of such inducements on the defendant's will to determine whether his ability to make a rational choice was actually overborne. On the other hand, because of the unequal bargaining power of the judge and defendant and because of the need to ensure impartiality, it may be that this apparent exception to the holding in the instant cases will be preserved.
As to unkept promises or threats by prosecutors, the requirement that the defendant be aware of all relevant circumstances, including the range of possible penalties, will serve to ensure that a guilty plea induced by deceit, whether intentional or unintentional, will not be sustained.\(^{46}\)
---
\(^{44}\) "The voluntariness of Brady's plea can be determined only by considering all of the relevant circumstances surrounding it." 397 U.S. 742, 749 (1970).
\(^{45}\) See cases cited note 26 *supra*.
\(^{46}\) Courts often hold that such promises or threats are coercive *per se*, but in fact the deception problem speaks to the knowledge requirement and not to the voluntariness requirement. Thus while deception will still have the effect of vitiating a guilty plea made in response thereto, courts will have to frame the infirmity in more precise terms.
CONCLUSION
In striking down the death penalty provision of the Federal Kidnapping Act in *United States v. Jackson*, the Supreme Court clearly manifested its disapproval of statutory schemes — and, by implication, of all official acts — which needlessly encourage the waiver of constitutional rights. What was not directly before the Court in *Jackson*, however, was the question of the validity of guilty pleas induced by such schemes. While it may have been logical to assume prior to *Brady* and *Parker* that had the *Jackson* Court been confronted with this question it would have opted for invalidity, the decisions in those cases expressly reject such a conclusion. Indeed, the decisions in *Brady* and *Parker* do not in any way affect the continued viability of *Jackson*. In *Brady* and *Parker* the Court merely answers the question left open in *Jackson* regarding the validity of guilty pleas tendered within the context of a constitutionally infirm statutory scheme.
By deciding the validity issue in *Brady* and *Parker* in terms of the "totality of factors" test, the Court has to a considerable extent clarified the concept of voluntariness: Only when an extraneous inducement, whether proper or improper, has the effect of rendering a defendant incapable of exercising rational choice will a guilty plea fail for involuntariness. This differs considerably from the "primary factor" test expounded by the Fourth Circuit in *Alford v. North Carolina* and endorsed by the concurring and dissenting justices in the instant cases. The primary difference between the two positions is one of emphasis. While both pay homage to some sort of "factors" test, the tack taken in *Alford* was to give conclusive weight to illicit inducements. Thus the emphasis there was on the nature of the inducement while the emphasis in the instant cases is on the impact of the inducement.
The Supreme Court recently had occasion to review the decision in *Alford*, and in so doing it expressly rejected the reasoning of the Court of Appeals for the Fourth Circuit. Relying on its decision in *Brady*, the Court held that the standard for determining the validity of guilty pleas "remains, [sic] whether the plea represents a voluntary and intelligent choice among the alternative courses of action open to the defendant. . . . That he would not have pleaded except for the opportunity to limit the possible penalty does not necessarily demonstrate that the plea of guilty was not the product of a free and rational choice . . . ."\(^{50}\)
---
47 405 F.2d 340, 347 (4th Cir. 1968), rev'd 39 U.S.L.W. 4001 (1970).
48 397 U.S. 790, 808 (1970) wherein Mr. Justice Brennan stated: "If a particular defendant can demonstrate that the death penalty scheme exercised a significant influence upon his decision to plead guilty, then, under *Jackson*, he is entitled to reversal of the conviction based upon his illicitly produced plea."
49 North Carolina v. Alford, 39 U.S.L.W. 4001, 4002 (1970).
Thus, the Supreme Court in *Alford* clearly reaffirmed the principles announced in *Brady* and *Parker* and left little doubt as to what constitutes the proper test for determining the validity of guilty pleas.
Despite its clarity of statement, the test endorsed by the Court in *Brady* and *Parker* is limited by the obvious difficulty of quantifying the impact of the various inducements so as to be able to ascertain whether or not a particular defendant was rendered incapable of rational choice. Perhaps as the lower courts begin to apply the test, the mechanics of application will come into more precise focus.\(^{51}\)
---
\(^{50}\) *Id.* at 4002. It should be noted that a factual variation in *Alford* raised an additional issue apart from the question of the voluntariness of Alford's plea. It seems that after his guilty plea had been tendered but before it had been accepted, Alford denied he had committed the murder for which he had been charged. Nevertheless he reaffirmed his desire to plead guilty in order to avoid a possible death sentence. In spite of Alford's protestations of innocence, the trial court, after considering the strength of the State's case, accepted his plea and sentenced him to 30 years imprisonment. Thus, on review the Supreme Court was faced with the issue of whether a guilty plea can be accepted when it is accompanied by protestations of innocence and hence contains only a waiver of trial but no admission of guilt. In deciding this issue, the Court referred to language in *Brady* which in that context unequivocally stated that admission of guilt by the defendant is "[c]entral to the plea and is the foundation for entering judgment . . ." 397 U.S. 742, 748. In an obvious attempt to get around what would otherwise be troublesome language, the Court in *Alford* qualifies the statement in *Brady* by stating that admission of guilt is normally central to the plea. 39 U.S.L.W. 4001, 4003 (1970). Having surmounted this obstacle, the Court then proceeds to hold that "while most pleas of guilty consist of both a waiver of trial and an express admission of guilt, the latter element is not a constitutional requisite to the imposition of criminal penalty." *Id.* at 4004. Furthermore, a trial judge who accepts a plea which is accompanied by protestations of innocence does not commit constitutional error so long as he has reason to believe that there is a factual basis for the plea. *Id.*
\(^{51}\) It should be noted that two recent cases decided by the Supreme Court ameliorate to some extent the magnitude of this problem. In *McCarthy v. United States*, 394 U.S. 459 (1969), the Court held that the trial court, before accepting a guilty plea, must comply with the provisions of Rule 11 of the Federal Rules of Criminal Procedure and satisfy itself as to the voluntariness, intelligence, and factual basis of the plea. *Id.* at 467. Where this requirement is met the appellate courts will not disturb the judgment of the trial court unless there is a clear abuse of discretion. *Id.* at 470. In *Boykin v. Alabama*, 395 U.S. 238 (1969), this requirement was extended to state courts. *Id.* at 243. Thus at least as to guilty pleas made after *McCarthy* and *Boykin* the appellate courts will seldom have to undertake the task of ascertaining from the record the impact of any particular inducement on the defendant's freedom of choice.
Guru Gobind Singh Indraprastha University
Igniting Minds, Nurturing Values
VISION 2030
There’s a way to do it better — find it.
Thomas Alva Edison
ENVISIONING 2030
Setting Out on a Journey of a Thousand Moons...
Founded just a little over twenty years ago, Guru Gobind Singh Indraprastha University has in its early years been a tour de force, and its transformation is no less than a Cinderella story. We stand tall on many counts, and the founding fathers would be proud of our successes. Partnering with 127 academic institutions in the National Capital Territory of Delhi, we offer 177 academic programmes across many disciplines and are presently alma mater to more than 70,000 students.
We are an academic haven to India’s biggest social capital — her young people. We must therefore equip ourselves to prove equal to their growing ambition and imagination. If this land is to once again become a major economic power, global thought leader, and excel in all walks of human enterprise — be it in science, medicine, literature, philosophy, engineering, information technology, robotics, artificial intelligence, or bionics, the Universities of this country must take a solemn vow to build and nourish a robust ecosystem where mediocrity has no place, where free thinking is encouraged, and where innovation, invention and enterprise are treasured more than ever before.
This would require us to make a complete about-turn. A raft of reforms would be vital, but more than that we would have to find true commitment and create an environment of truthfulness, transparency and accountability. An unfettered, hugely encouraging, constructive environment, which ignites young minds and inspires merit, while still safekeeping ethics, must become the order of the day.
This transformative journey of a thousand moons must be filled with true earnestness. To my mind, a University is like a place of worship, and we must think and act with devotion to achieve our goals. This strategic document will help us build the terra firma that would be conducive to transformation. We must develop the structure, content and modes of delivery of teaching programmes to ensure that they map the current and future needs of the market.
We must transform the centre of our main campus by offering a gamut of enhanced student support facilities and we must innovate to nurture young minds so that they can thrive in a dynamic and rapidly changing world.
To achieve our goals, we must be more efficient in our governance and management systems, become more ambitious, pragmatic and strategic in the generation and deployment of resources, and instill a sense of pride, intellectual integrity and dutifulness in our student community, researchers and our administrative and teaching staff.
Prof. (Dr.) Mahesh Verma
Vice Chancellor
CONTENTS
1. About the University
2. Distinguishing Features
3. SWOT Analysis
4. Strategic Plan
5. Short-term Goals
6. Long-term Goals
ABOUT THE UNIVERSITY
Guru Gobind Singh Indraprastha University was established by an Act of the Govt. of NCT of Delhi on July 28, 1998. It was conceived as a teaching and affiliating University with the explicit objective of facilitating and promoting studies, research and extension work in areas of professional and technical education.
It is included under Sections 2(f) and 12(B) of the University Grants Commission Act and received an 'A' Grade from the National Assessment and Accreditation Council (NAAC), Bangalore, for the periods 2007-2012 and 2013-2018. The first academic session of the University started in 1999-2000.
The University was ranked 82nd, 74th and 66th in 2017, 2018 and 2019 respectively in the National Institutional Ranking Framework (NIRF) surveys of the MHRD. The School of Engineering was ranked 74th, 85th and 73rd in 2017, 2018 and 2019 respectively, and the School of Management obtained rankings of 35, 51-75 and 62 over the same years. The University's capacity to deliver quality education is visible from its consistently high accreditation and ranking.
GGSIPU is committed to providing outcome-based, industry-focused education and research, and nurtures an inclusive, sustainable culture to serve the diverse needs of students, faculty and other stakeholders. The University is focused on systems and processes for continuous quality enhancement.
VITAL STATISTICS
BIRTH
July 28, 1998
Government of National Capital Territory of Delhi
Guiding Philosophy
Teaching and Affiliating University with principal focus on Professional and Technical Education
TWELVE SCHOOLS
University School of Basic and Applied Sciences
University School of Humanities and Social Sciences
University School of Education
University School of Law and Legal Studies
University School of Medicine and Para-Medical Health Sciences
University School of Management Studies
University School of Architecture and Planning
University School of Information, Communication and Technology
University School of Chemical Technology
University School of Biotechnology
University School of Environment Management
University School of Mass Communication
CAMPUSES, AFFILIATIONS & PROGRAMMES
60.46 acres South-West Dwarka Campus
18.75 acres Eastern Surajmal Vihar Campus
127 Affiliated Academic Institutions
174 Academic Programmes
RECOGNITION
Recognized by the University Grants Commission under Section 12B of the UGC Act
Ministry of Human Resource and Development
Govt. of India
VISION
“The University will stimulate both the hearts and minds of scholars, empower them to contribute to the welfare of society at large; train them to adapt to the changing needs of the economy; advocate them for cultural leadership to ensure peace, harmony & prosperity for all.”
MISSION
“GGS Indraprastha University shall strive hard to provide a market oriented professional education to the student community of India in general and of Delhi in particular, with a view to serving the cause of higher education as well as to meet the needs of the Indian Industries by promoting establishment of colleges & Schools of Studies as Centers of Excellence in emerging areas of education with focus on professional education in disciplines of engineering, technology, medicine, education, pharmacy, nursing, law, etc.”
QUALITY POLICY
“GGS Indraprastha University is committed to provide professional education with thrust on creativity, innovation, continuous change and motivating environment for knowledge creation and dissemination through its effective quality management systems.”
The University has 11 on-campus Schools of Studies wherein as many as 49 undergraduate and postgraduate academic programmes are conducted for more than 4,500 students. In addition, the University has a School of Medicine and Para-Medical Health Sciences for managing various UG and PG programmes in medical and allied areas — such as MBBS, BDS, MD, MS, Yoga, BHMS, Forensic Sciences, BAMS, B.Sc. (Nursing), etc. — run in Government and private hospitals and other health institutions.
The university since its inception has been striving to maintain excellence in teaching, research and extension activities. The University promotes a culture that fosters scientific temper, ethical values and quest for excellence.
On the affiliation front, the University has 127 affiliated institutes; of these, 91 are self-financed and 36 are owned and managed by the Govt. of NCT of Delhi/Govt. of India, together offering education to more than 74,000 students in more than 174 academic programmes (Figure-1 and Figure-2). Of these institutes, 37 have PG departments and 27 are NBA/NAAC-accredited institutions under the umbrella of GGSIP University, New Delhi.
Regular review of existing courses and the introduction of new courses of current national and international relevance (e.g., Artificial Intelligence, Machine Learning, Robotics, Data Analytics, etc.) have been a major activity of the University, aimed at producing human resources that are relevant, up-to-date, skilled and employable (Figure-4).
There is a blend of conventional programmes as well as inter-disciplinary programmes in emerging areas such as Bio-Diversity & Conservation, Natural Resource Management, Legal Framework, Environment, etc.
To maintain high quality, the teaching and learning processes have been made more rigorous and effective. Evaluation process has been made more transparent and credible. In fact, University has undertaken a decisive step to fully adopt ICT enabled interventions in most of its functional units namely teaching, learning, evaluation, research, administration & governance, which will ensure efficiency, accountability and transparency. The process has been taken up on urgent priority. An automated file monitoring system enables tracking of various files in the administrative system.
Universities have a larger role to play in creation of new knowledge through research.
The University has not only made a visible impact at national and international levels through quality research but has also competed well with other peer institutions (Figures 5, 6 and 7). Recognition of the University School of Biotechnology and the University School of Management Studies by the UGC under its 'Special Assistance Programme', and the support received by the University School of Basic and Applied Sciences from the DST under the FIST programme, are testimony to the research advances made by the faculty members.
The selection of the University School of Mass Communication by the UGC to originate courses in digital marketing, public relations and photojournalism for community learning is a feather in the cap.
The Choice Based Credit System (CBCS) has been implemented in programmes offered by the University to give students flexibility of learning and to enable them to pursue studies in courses of their choice, including "Swayam" online courses. A credit transfer scheme for online courses under MOOCs is under adoption by the University.
Curriculum development in all Schools is undertaken through various committees — such as the Board of Studies, the Sub-Committee of the Academic Council and the Academic Council — which also include external experts from academic institutions and industry, reflecting delegation of responsibilities and collective decision-making.
The University has an in-built mechanism to regularly review programmes based on feedback from students, parents, alumni and industry/employers, along with current needs and advances made in different subject areas. This helps ensure the relevance of the courses to industry, research and societal needs.
The university is endowed with highly qualified teaching faculty, mostly having Ph.D. degree and an excellent track record of professional progression. Keeping in view the global higher education scenario, the curricular design has contemporary features namely, semester system, modularity, choice-based-credit system, credit transfer, interdisciplinary programmes, elective options thereby offering the warranted flexibility to students. University encourages use of interactive teaching methodology aided by state-of-the-art tools.
The University has undertaken several outreach programmes for community development, such as adoption of villages, the Swachh Bharat Abhiyan, the Fit India movement, etc.
The University has initiated several schemes for promoting research among faculty and students, such as the Indraprastha Research Fellowship (IPRF), the STRF for Ph.D. students, travel grants for teachers and students, the FRGS, etc. (Figures 8, 9 and 10)
| YEAR | NATIONAL STUDENTS | INTERNATIONAL STUDENTS | TOTAL STUDENTS | SANCTIONED AMOUNT |
|-----------|-------------------|------------------------|----------------|-------------------|
| 2017-2018 | 27 | 08 | 35 | Rs. 5,74,148/- |
| 2018-2019 | 12 | 10 | 22 | Rs. 7,02,525/- |
| S. NO. | YEAR | NO. OF EDUCATIONAL TOUR | AMOUNT |
|--------|----------|-------------------------|-----------|
| 1 | 2017-18 | 06 | Rs. 13,10,703/- |
| 2 | 2018-19 | 06 | Rs. 17,20,064/- |
**Figure 8: Research & Grant Support Schemes-Students**
**Figure 9: Research & Grant Support Schemes-Students**
| | 2015-16 | 2016-17 | 2017-18 | 2018-19 |
|----------------|---------|---------|---------|---------|
| Travel Grant | 58,33,141 | 1,13,20,670 | 1,01,36,399 | 1,18,32,406 |
| FRGS | 0 | 96,96,551 | 87,67,171 | 1,24,95,132 |
| IPRF | 1,62,24,402 | 2,53,41,618 | 1,97,25,818 | 2,43,67,948 |
| STRF | 31,64,000 | 6,66,333 | 70,68,999 | 60,18,817 |
| Laptop & contingency grant | 3,40,900 | 9,06,852 | 52,42,692 | 24,77,513 |
| **Total** | 2,55,62,443 | 4,79,32,024 | 5,09,41,079 | 5,71,91,816 |
**Figure 10: Expenditure on Research schemes (Rs.)**
The University has figured among the top 100 Universities of the country since the beginning of the ranking framework (NIRF). With reforms in management and governance systems, innovative ideas in the academic and administrative spheres, and the right moral and ethical orientation, the University can attain the status of a University with a difference. The purpose, however, is not to continue merely as a large State University, but to become a leading University counted among the best institutions in the country and abroad.
SWOT ANALYSIS
S
• Strong present leadership
• Good basic infrastructure
• Corpus fund to support some basic initiatives
• Highly qualified, young faculty
• 170+ academic programmes
• Delhi - capital of India
• Annual student intake of 34,000+
W
• Limited main campus space
• Lack of diversity as 85% students from Delhi
• Lack of direct intervention with affiliated institutions
• Low consultancy services
• No funding from Delhi Govt.
O
• Huge scope for expansion
• High demand for professional and technical programmes
• Delhi-NCR emerging as hub of companies-opportunity for consultancy & training services
T
• Increasing competition
• Affiliated institutions gearing up to become autonomous
• Upgradation of infrastructure required for professional and technical education
• Depleting corpus due to lack of financial support from State Govt.
STRENGTHS
• University has a relatively young faculty and staff with median age in the range of 35-40.
• Good basic physical infrastructure in terms of buildings, residential facilities, sports grounds, a community centre and hostels, supporting the overall development of the current programmes.
• University at present has developed financial stability and is operating almost on self-financing mode.
• University is slowly developing strong networking with its alumni, industries, research organizations, & leading national & international companies for better management practices, exposure & learning.
• Presence of the University, in terms of its campuses and affiliated colleges, throughout the NCR. The University has two campuses: the West Campus at Dwarka and the East Campus at Surajmal Vihar, Delhi.
• Through its 12 (11+1) Schools of Studies and affiliated institutes, the University offers 170+ academic programmes at UG, PG and doctoral levels in knowledge- and skill-intensive areas with high job opportunities — engineering, management, medical and para-medical sciences, education, IT and computer applications, law and mass media, to name a few — to about 74,000 students, with an annual intake of more than 34,000.
WEAKNESSES
• University has limited space of 60.46 acres in West Campus at Dwarka and 18.75 acres in East Campus at Surajmal Vihar, Delhi. The space is just sufficient to support the existing programmes of the University and thus restricts the future expansion plans of the University in the main campus.
• As compared to premier institutions in Delhi, most students admitted to the University come from an average academic background, since the University caters largely to the Delhi population (85% of seats).
• More than 95 percent of the students admitted to the University receive their education in self-financed institutions affiliated to the University. The University has little direct intervention in the management of these institutions, leaving it a very limited direct role in their improvement.
• Transfer of knowledge created in the University to industry in the form of sponsored industry projects is very limited. This results in low consultancy services provided by the University to industries.
• 85% of the seats in PG programmes are reserved for Delhi students, as against no such reservation in other similar Govt. institutions in Delhi, thereby restricting the University's national character, the quality of its students, and diversity.
• There is no funding available to the University from Govt. of Delhi, for any of its expenses.
OPPORTUNITIES
• In Delhi, the institutions offering quality education are fewer than required. In their absence, students are forced to seek admission in institutions located in neighbouring areas that offer poor-quality education. This leaves huge scope for expansion to offer quality education.
• Most job opportunities these days arise from professional and technical programmes. The University has, over the years, created a strong base in these programmes and can expand further to meet rising demand in emerging disciplines such as Artificial Intelligence, Machine Learning, Data Analytics, etc.
• The University has advantage of being tagged as Govt. University thereby opening up opportunities to seek public funding from several sources.
• Through its highly educated and research-oriented workforce, the University can contribute to the creation of new knowledge and high-impact research on socially relevant problems.
• The number of companies operating from the NCT of Delhi/NCR is very large, which provides an opportunity for the University to offer consultancy and training services to these organizations as well as training placements for its students.
• Location of the University in Delhi offers opportunity for attracting experienced and talented faculty from diverse organizations.
THREATS
• Delhi/NCR has seen the emergence of several institutions/universities in both the private and public sectors, posing a challenge of student enrolment vis-à-vis seat intake; the University therefore has to compete constantly with these institutions.
• The institutions affiliated to the University are now gearing up to become autonomous which is likely to severely impact the financial stability of the University.
• The academic programmes offered by the University are professional and technical in nature, requiring continuous upgradation of infrastructure and faculty and regular interaction with industry.
• The majority of students are educated in the University's self-financed affiliated institutions. Their continuous improvement, upgradation and better management is a big challenge; any failure on their part can dent the University's reputation at any time.
• Regulatory framework for professional and technical education is complex in the country and any sudden change in it can create problems.
• Rising salaries require a constant increase in sources of revenue, which are limited owing to the lack of financial support from government sources.
• The rapidly changing regulatory and other policies may have a huge impact on the sustenance of the financial strength of the University such as New Education Policy, EQUIP Scheme, etc.
• Lack of financial support from the Government for creating new physical infrastructure is depleting the corpus funds.
STRATEGIC PLAN
In order to bring about the transformation, the University needs to work out an action plan to enhance strengths, minimize threats, transform weaknesses into strengths and exploit/tap opportunities by analyzing the Political, Economic, Social and Technological (PEST) scenario. The following domains have been identified in order to achieve the vision and mission of the University.
**FOCUS DOMAINS**
- Strengthening and enhancing physical and academic infrastructure to support quality education ensuring equitable access and digital learning and assessment.
- Sustaining the demand for various programmes in terms of quality of courses taught, their social relevance, covering cutting-edge technologies.
- Enhancing research both in terms of social problem solving & creation of new knowledge through inter-disciplinary research.
- Promoting Industry and Institutional Linkages for promoting training & placements, entrepreneurship, consultancy and research among faculty and students.
- Creating a conducive culture promoting human and social values and ethics to develop good human beings and responsible citizens.
- Improving processes and operations to bring efficiency, transparency and accountability in Administration and Governance through digitalisation and e-governance.
- Quality-focused growth and expansion of the University.
- Harnessing Human Resource for productive contribution to Institutional building activities and their welfare initiatives.
- Enhancing Global linkages with foreign institutions for attracting students, faculty and employers.
- Financial sustenance of University for supporting the above plans through alternative sources of revenue in light of the emerging threats in Higher Education landscape, such as Draft NEP and EQUIP (Education Quality Upgradation and Inclusion Programme) Plan set out by Govt. of India.
THE ACTION PLAN ON QUALITY PARAMETERS, IN LINE WITH THE MISSION AND VISION OF THE UNIVERSITY, IS:
1. ACADEMIC INFRASTRUCTURE
- Upgrade academic facilities such as workshops, ICT-enabled and smart classrooms, Internet, labs, studios, equipment, etc.
- Wi-Fi in the entire campus
- Enhancing the access to Library and other E-resources
- Upgradation of Class Room Furniture and its aesthetic
- Strengthening the Faculty cabins/rooms with decent infrastructure
- Improving Internet connectivity across campus to develop Intranet for paperless communication
- Setup digital infrastructure for enabling Online education and offering online courses
- Annual Maintenance Contracts (AMC) for all equipment
2. PHYSICAL INFRASTRUCTURE
- E-Governance adoption - for efficiency, transparency, and accountability
- Develop and adopt practices for Green campus, Waste management, rain water harvesting, re-cycling, etc.
- Enabling sports culture through development of additional facilities
- Improving Canteen and related facilities
- Tapping Govt. Funding for Infrastructure development, maintenance & augmentation
- Facility management system to be setup with online maintenance and support system
- Development of Auditorium, Placement Centre, Medical Centre, Fitness Centre, Gym facilities
- Entire facelift of campus in terms of Aesthetics, Layout, Artworks, University Museum, etc.
- Strengthening security through CCTV deployment, etc.
3. ENHANCING DEMAND FOR COURSES
- Improved Teacher-Student ratio
- Curriculum enrichment to suit industry needs
- Introducing programmes/courses in cutting-edge technologies and Setting up Centre of Excellence in Artificial Intelligence, Machine Learning, Robotics, IoT, etc.
- Introducing programmes and courses in socially relevant areas such as – Disaster preparedness, Cyber Security, Bio-diversity, Bio-Pharma, Health care, etc.
- Employment linked Skill based programmes
- To provide value added inputs through short term trainings, workshops, courses, etc.
- To leverage technology to help students learn any course at their own pace through extensive use of e-PG Pathshala and Swayam portal courses on the MOOCs platform, with credit transfer.
- Relevant and reputed admission process to increase Access, Equity and Excellence among students.
4. RESEARCH & SUPPORT
- Encouragement for research schemes such as STRIDE, IMPRESS, LEAP, etc.
- Strengthen quality research to enhance the H-index of the University through publication in high-impact indexed journals
- Improvement in the quality of journals published from USS
- MOUs with national and international research and other institutions for linkages and sharing of resources
- Tapping extramural funding for research and Govt. funding, through CSR, etc.
- Enhancing the submission of research project proposals to national and international agencies
- Enhancing patent awareness, IPR filing, etc.
- Enhancing institutional funding through fellowships for doctoral students
5. INDUSTRY - INSTITUTIONAL LINKAGES
- MOUs with Industry to promote training and placements, consultancy and joint collaborative research
- Setting up an Advisory Committee under the mentorship of statutory body members for industry linkages
- Conducting training programmes and MDPs for public and private enterprises
- MOUs for training and placement as per corporate requirements
- Setting up Endowment and Research Chairs
- Undertaking a Colloquium Lecture Series
- Instituting Corporate Awards
- Linkages with corporate and industry bodies such as FICCI, CII, ASSOCHAM, PHDCCI, AIMA, etc.
6. TRAINING & PLACEMENTS
- Career Guidance, Internships & placements to be undertaken proactively
- Appoint full time dedicated Training and Placement Officer for good results
- Soft Skills to be imparted to students to make them employable
- Using Alumni Network for Mentorship, Internships and placement.
- Creating employment opportunities through on-campus and pooled-campus placement drives
- Value added inputs through short term trainings, workshops, courses, etc. to enhance employment opportunities
- Strengthening the Placement Cell with the support of Industry Liaison Executives and setting up facilities for centralized placement
7. INCUBATION & ENTREPRENEURSHIP
- Encouraging Entrepreneurship among students through Start up culture and funding tie-ups
- Vocational and skill-oriented courses to be introduced
- Strengthening of Incubation Activities through Corporate Structure with seed money
8. CULTURE FOR HUMAN DEVELOPMENT
- Develop and adopt practices to make a green campus through LEDs, waste management, rainwater harvesting, recycling, etc.
- Popularize NSS activities for community development
- Offer exposure to community services, Swachh Bharat, etc.
- Enhancing sports culture through development of Indoor Sports facilities and their maintenance
- Sports coaching to be strengthened
- More Regional/Zonal/National level events to be organized
- All festivals & important days to be celebrated
9. HR DEVELOPMENT & WELFARE
- Improved welfare measures for Staff and Faculty
- Proper and timely implementation of career advancement schemes
- Inputs on issues such as Patriotism, Gender Sensitization, NSS, Human Values, Ethics to be continuously pursued
- Fitness to be given due importance with development of infrastructure
- HRD Centre to be setup for Conducting or Participating in Training Programs/FDPs on regular basis
- Introduction of Awards and Reward system for Motivation of Staff and faculty
- Enhancement of Residential Facilities in the campus for staff/faculty
10. ACCREDITATION & RANKING
- NAAC accreditation with a grade > 3.26, to be eligible for Institution of Eminence (IoE) status, online distance learning and a host of other benefits, including for the colleges
- NIRF Ranking to be in the top 50 in the overall ranking
- NBA ACCREDITATION for applicable programmes
- Global Rankings (QS/TIME/BRICS) to be attempted based on above.
- Academic Audit of USS and Affiliated institutions to be consolidated for effective quality improvement
11. GOVERNANCE & MANAGEMENT
- Participatory Management involving all stakeholders
- Enhancing Inputs on issues like Patriotism, Gender Sensitization, NSS, Human Values, Ethics to be continuously pursued
12. STUDENT SUPPORT & SERVICES
- Developing a strong Alumni Network through linkages and their engagement using an online portal
- Improved welfare measures for students such as Scholarships, Freeships, etc.
- Fitness to be given due importance with development of infrastructure
- To develop student support system for improving student experience through smartcard-based system services such as library, canteen, internet, issue of transcripts, employer verification and other activities
- Admission process to be made more transparent, timely and effective to attract talent and merit.
- Instituting scholarships/awards to meritorious students from community/memorial awards.
- Online attendance management
- ICT enabled examination system and services with efficient response to student grievances, Result Preparation, etc.
- Strengthening Mentor-Mentee System for improved management of Stress related and other personal issues
- Operationalizing Online multi-layer student grievance handling system
13. QUALITY FOCUSED GROWTH
- Consolidation of existing programmes and institutions with Quality Improvement
- New Institutions and programmes only if relevant in the current time or socially relevant i.e. new technologies, skills, etc.
- East Campus at Surajmal Vihar to be developed to start new relevant Programs
- Focus on quality programmes rather than quantity.
14. PERCEPTION AND BRAND BUILDING
- Improving perception among stakeholders through various initiatives using social media presence on LinkedIn, Facebook, Twitter, Instagram, etc.
- Community services such as participation in Unnat Bharat Scheme, Red Cross, Blood donation, Swachh bharat, and CSR activities.
- Website to be transformed into an effective portal for all information and services to stakeholders
15. INTERNATIONALIZATION
- MOUs for Student Exchange and Faculty Exchange on International Level.
- International Visiting/Adjunct Faculty to be retained for giving impetus to research and consultancy in emerging areas
- Attracting foreign students from developing countries
- Enhancing International Fellowships
- Enhancing Joint Research projects, Publications, Research Scholars.
16. FINANCIAL MANAGEMENT
- Automation of all accounting procedures using digital technologies
- All disbursements and receipts to be digitalized
- Compliance through Internal Audit and External Audit Systems
- Explore alternative sources of revenue
- Tap Govt funding for financing infrastructure projects
- Tapping Corporate financial support/grants for development of Infrastructure
LONG-TERM GOALS
• Setup a Staff Development Training Centre with Government funding
• To promote liberal broad-based education through flexible curriculum with multiple re-entry and exit options
• To attract global students from developed countries
• To develop global standard facilities for International Students and Faculty
• To improve global ranking of the University
• To undertake international placements and internships
• To align measure of success with the number of startups from campus
• To work towards improving the number of patents from the University
• To work towards knowledge creation through Online Courses and lifelong learning for wider reach
• To prepare the University to ensure financial sustenance by exploring alternative sources of funding
• To ensure quality assurance in all the endeavors
“The secret of getting ahead is getting started”
- Mark Twain
@GGSIPUofficial @ggsipuindia @ggsipuindia
www.ipu.ac.in
A Comparative Study of Common Nature-Inspired Algorithms for Continuous Function Optimization
Zhenwu Wang 1,*, Chao Qin 1,†, Benting Wan 2,3,‡ and William Wei Song 2,3,§,∥
1 Department of Computer Science and Technology, China University of Mining and Technology, Beijing 100083, China; firstname.lastname@example.org
2 School of Software and IoT Engineering, Jiangxi University of Finance & Economics, Nanchang 330013, China; email@example.com
3 Department of Information Systems, Dalarna University, S-791 88 Falun, Sweden
* Correspondence: firstname.lastname@example.org (Z.W.); email@example.com (W.W.S.)
Abstract: Over previous decades, many nature-inspired optimization algorithms (NIOAs) have been proposed and applied due to their importance and significance. Some survey studies have also been made to investigate NIOAs and their variants and applications. However, these comparative studies mainly focus on one single NIOA, and a comprehensive comparative and contrastive study of the existing NIOAs is lacking. To fill this gap, we have made a great effort to conduct this comprehensive survey. In this survey, more than 120 meta-heuristic algorithms have been collected and, among them, the 11 most popular and common NIOAs are selected. Their accuracy, stability, efficiency and parameter sensitivity are evaluated on the 30 black-box optimization benchmarking (BBOB) functions. Furthermore, we apply the Friedman test and the Nemenyi test to analyze the performance of the compared NIOAs. In this survey, we provide a unified formal description of the 11 NIOAs in order to compare their similarities and differences in depth, and a systematic summarization of the challenging problems and research directions for the whole NIOAs field. This comparative study attempts to provide a broader perspective and meaningful enlightenment to understand NIOAs.
Keywords: nature-inspired algorithm; meta-heuristic algorithm; swarm intelligence algorithm; bio-inspired algorithm; black-box optimization benchmarking; statistical test
1. Introduction
Nature-inspired optimization algorithms (NIOAs) are defined as a group of algorithms inspired by natural phenomena, including swarm intelligence, biological systems, and physical and chemical systems [1]. NIOAs include bio-inspired algorithms and physics- and chemistry-based algorithms; the bio-inspired algorithms further include swarm intelligence-based and evolutionary algorithms [1]. NIOAs are an important branch of artificial intelligence (AI) and have made significant progress in the last 30 years. Thus far, a large number of common NIOAs and their variants have been proposed, such as the genetic algorithm (GA) [2], particle swarm optimization (PSO) algorithm [3], differential evolution (DE) algorithm [4], artificial bee colony (ABC) algorithm [5], ant colony optimization (ACO) algorithm [6], cuckoo search (CS) algorithm [7], bat algorithm (BA) [8], firefly algorithm (FA) [9], immune algorithm (IA) [10], grey wolf optimization (GWO) [11], gravitational search algorithm (GSA) [12] and harmony search (HS) algorithm [13]. In addition to the theoretical studies of NIOAs, many previous works have made an in-depth investigation of how NIOAs are applied to various domains. Single NIOAs have been reviewed comprehensively [14–25], presenting the algorithms and their variants at a good breadth and depth. In the rest of this chapter, we summarize the current survey work on NIOAs, discuss our motivations for this survey, present the research methodology and scope of this work and, finally, describe our contributions to this field.
1.1. Summary of the Current Survey Work
From our observation, although reviews of specific NIOAs [14–25] are very common, there have not been many attempts to compare various NIOAs in terms of the general criteria. Only a few surveys [26–30] (named horizontal NIOAs reviews) adopted the narrative literature review approach to discuss a series of NIOAs, including their basic principles, variants and application domains. Specifically, Chakraborty [26] discussed eight bio-inspired optimization algorithms (BIOAs) that can be divided into insect-based algorithms, inspired by ants, bees, fireflies and glow-worms and animal-based algorithms, inspired by bats, monkeys, lions and wolves; Kar [28] detailed the principles, developments and applications of 12 BIOAs, including the neural networks, GA, PSO, ACO, ABC, bacterial foraging (BFO) algorithm, CS, FA, shuffled frog leaping algorithm (SFLA), BA, flower pollination (FP) algorithm and artificial plant optimization algorithm (APOA); Parpinelli [30] summarized the principles, application fields and meta-heuristics information of nine BIOAs, such as bees algorithm, ABC, marriage in honey-bees optimization (MBO) algorithm, BFO, glow-worm swarm optimization algorithm (GSOA), FA, slime mold optimization algorithm (SMOA), roach infestation optimization (RIO) algorithm and BA. In addition to the above reviews, some literature compared the performance of NIOAs via a series of benchmark functions. Through a number of statistical tests, Ab Wahab [27] compared seven BIOAs, including GA, ACO, PSO, DE, ABC, GSOA and CS; Chu [29] only analyzed three BIOAs, including PSO, ACO and ABC, on three benchmark functions.
In all, the above survey works provide good references for NIOAs, but these reviews are not comprehensive and in-depth. For example, the survey work in [26,28,30] merely introduces the principles, variants and applications of different BIOAs, without involving a performance comparison, which provides an important basis for the improvement and application of BIOAs. Some reviews [27,29] do discuss the performance comparison of BIOAs. However, the comparison in [27] of the seven BIOAs is inadequate because the chosen BIOAs are incomplete, most benchmark functions are low-dimensional and the experimental results are only represented by the mean error (comparison of convergence speed is not considered). The review in [29] only compares three BIOAs on three benchmark functions; the compared algorithms and experimental work are quite narrow. Besides the above shortcomings, other problems also exist: some selected NIOAs are not popular, and the criterion of selection is not clear. Furthermore, the common challenging problems of NIOAs have not been extracted and discussed in any of the above survey works [26–30]. These are, for instance, the common characteristics and differences of the NIOAs, the challenges and future directions for the NIOAs field and a systematic summary of improvement methods for all the chosen NIOAs, just to mention a few.
1.2. Motivations
As discussed in Section 1.1, the current horizontal NIOAs reviews still have some important issues that have not been discussed at a sufficient depth; we think it worthwhile to make a comprehensive comparison and analysis study of the common NIOAs for the following four reasons.
1. Thus far, questions such as how many NIOAs have been proposed and which NIOAs are the research hotspots should be addressed and discussed. It is necessary to distinguish the hotspots among NIOAs, but it is very difficult to collect all the NIOAs in an all-around way. With our best effort, we search for the NIOAs that we can reach and identify the hotspot ones from our observation. To our knowledge, no similar work has been completed, and for the existing horizontal NIOAs reviews, either the selection criteria for NIOAs [27–30] are not clear or the selected algorithms are not common [31–33].
2. The different proposals of NIOAs for different purposes have created great confusion as to which method fits what situation, and it is strongly required to understand what the common characteristics of these hotspot algorithms are and what the differences are. Studying and comparing the characteristics of the hotspot NIOAs can provide not only a broader perspective on the improvement of the current NIOAs but also a solid and feasible cornerstone for building up new problem-oriented NIOAs.
3. Hybridization is an important method to improve the performance of NIOAs. When considering a hybridization of different NIOAs, many proposers more often than not claim that the selected NIOAs have shortcomings to be improved, for example, being easy to fall into local optimum and having a slow convergence speed, on the one hand, and consider that some algorithms have advantages that could be utilized as necessary complements, such as rapid convergence speed and good ability of global exploration, on the other hand. Thus, in order to validate and compare the performance of these common NIOAs, a construction of comprehensive experiments of common NIOAs is indispensable.
4. To our knowledge, most survey work focuses on introducing NIOAs’ principles, variants and application domains and little work has so far been completed to summarize the general improvement methods for all the chosen NIOAs and to analyze their challenges and future directions for the whole field of NIOAs, and all these issues are very important and critical to the development of NIOAs.
It should be noted that some so-called “novel” NIOAs that have been proposed from time to time are actually “the same old stuff with a new label” and deteriorate the research atmosphere of NIOAs. However, this issue is not the scope of this paper. It is undeniable that there are many excellent works in the field of NIOAs, which have greatly promoted the development of NIOAs. The main purpose of this work is to objectively analyze the existing commonly used NIOAs and discuss their characteristics, performance comparison, challenges and future directions.
1.3. Research Methodology
The research is conducted in multiple stages. Firstly, the meta-heuristic algorithms (MHAs) to be scrutinized are identified, which include NIOAs and non-nature-inspired optimization algorithms (NNIOAs). There are numerous MHAs, and they are being developed continuously. Most of the MHAs are independently developed and are labeled under the terms swarm intelligence (SI), BIOA, NIOA and NNIOA. In this work, we search for as many MHAs as possible from the Web of Science, Google Scholar and Scopus databases by using specific keywords, such as swarm intelligence optimization, intelligent algorithm, heuristic, meta-heuristic, bio-inspired algorithm and nature-inspired algorithm. After identifying these algorithms, we use the Google search engine to confirm whether they are MHAs or not. Then, some "common" (or "hotspot") NIOAs are selected for the comparison task, and how to judge whether an algorithm is "common" becomes another problem. In order to identify the "most common" NIOAs, we compute the average number of articles per year and the total number of published articles for a candidate NIOA. The total article number of a certain NIOA is computed by searching for the algorithm name as TITLE in the Web of Science and Scopus databases. The advantage of this method is that it ensures that classic NIOAs, such as GA, PSO and DE, are included in the comparison work, while those that are relatively seldom used in the whole scientific community compared with contemporary algorithms are excluded: for example, the self-organizing migrating algorithm, the spiral millipede-inspired routing algorithm and the benchmarking-based optimization algorithm. According to this approach, more than 120 MHAs (see Table S1 in Supplementary Material A) are identified, and among them, 11 NIOAs are selected for scrutiny in this survey work.
Through the aforementioned calculation methods, we include an NIOA if it has been discussed in more than 100 published papers per year and in more than 1000 papers in total (described in Figures 1 and 2, respectively). The statistical performance comparison of these NIOAs is based on the BBOB functions; compared with functions described by explicit equations, they have uncertainty and noise, which ensures the fairness of the experimental results.
Figure 1. Number of papers published per year until 13 October 2020 (from Web of Science and Scopus databases).
Figure 2. Total number of papers published until 13 October 2020 (from Web of Science and Scopus databases).
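The inclusion criterion above can be expressed as a simple filter. A minimal sketch with hypothetical publication counts (the real figures come from the Web of Science and Scopus queries shown in Figures 1 and 2):

```python
# Hypothetical per-algorithm publication counts; the actual numbers are the
# Web of Science/Scopus query results reported in Figures 1 and 2.
candidates = {
    "PSO":  {"per_year": 3500, "total": 40000},
    "GA":   {"per_year": 4000, "total": 60000},
    "SMOA": {"per_year": 12,   "total": 150},
}

# An NIOA is "common" if it averages >100 papers/year and >1000 papers in total.
common = [name for name, c in candidates.items()
          if c["per_year"] > 100 and c["total"] > 1000]
print(common)
```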
1.4. Scope of Discussion
This survey work focuses on single-objective numerical optimization algorithms, which are the basis of more complex optimization algorithms, such as multi-objective optimization algorithms and constrained optimization algorithms. We exclude the ant colony optimization (ACO) algorithm, ranked fourth in Table S1 of Supplementary Material A, from this survey study because it is designed to solve combinatorial optimization problems (COPs) [6] and is not in the scope of this work. We also exclude the "Biogeography-Based Optimization" algorithm, ranked thirteenth in Table S1 of Supplementary Material A, since both its total number of published articles (950) and its average number of articles published annually (73) are lower than the thresholds set for the selection of NIOAs. Finally, we consider that the 11 selected algorithms are reasonable. In this paper, we do not cover the variants and applications of the 11 NIOAs, since they have been discussed sufficiently in many reviews [14–25]. The selection method can identify the NIOAs under continuous hotspot research; some less-studied algorithms are excluded, although they were proposed a long time ago (see Table S1 in Supplementary Material A).
1.5. Our Contributions
The contributions of this paper are as follows.
1. We present a comprehensive list of more than 120 MHAs and make a preliminary statistical analysis of the basic information for the chosen NIOAs, which provides a panoramic view for the study of NIOAs. To our knowledge, this is the first attempt to systematically study the existing NIOAs.
2. We analyze and summarize the common characteristics and differences of all the chosen NIOAs to provide a clear insight into the construction, design and application of NIOAs.
3. We compare and analyze the accuracy, the stability, the efficiency and the parameter sensitivity of the chosen NIOAs with different function features under high and low dimensional spaces, respectively, which can reflect the essential characteristics of each algorithm.
4. We discuss the challenges and future directions of the whole NIOAs field, which can provide a referencing framework for the future research of NIOAs.
1.6. Structure of the Paper
The rest of this paper is organized as follows. In Section 2, we extract a unified representation as a foundation of comparison for the 11 NIOAs, under which the principles of the 11 NIOAs are discussed. In Section 3, the common characteristics and differences of the 11 NIOAs are analyzed and summarized. Section 4 focuses on the comparative study of the accuracy, stability, efficiency and parameter sensitivity of these NIOAs on the 30 BBOB functions, and the Friedman test and Nemenyi test are conducted to analyze the performance of the compared NIOAs. In addition, the 11 NIOAs are applied to solve a constrained engineering optimization problem in Section 4. We discuss the challenges and future directions of NIOAs in Section 5 and conclude this paper in Section 6.
2. Common NIOAs
Actually, most of the NIOAs have a similar structure, although they are defined in various forms. In this section, first, the common process will be extracted to offer a unified description for the NIOAs, and then the principles of the 11 NIOAs will be outlined and discussed under this unified structure. The unified representation makes it convenient to analyze the similarity and dissimilarity of these algorithms.
2.1. The Common Process for the 11 NIOAs
The common process of most of NIOAs is described in Figure 3, which can be divided into four steps. In step S1, the population and related parameters are initialized. Usually, the initial population is generated by random methods, which ensure it covers as much solution space as possible; the population size is selected based on expert experience and specific requirements, and generally, it should be as large as possible. Most NIOAs use iterative methods, and the maximum iteration times and precision threshold are two common conditions of algorithm termination, which should also be initialized in step S1.

The fitness function is the unique indicator that reflects the performance of each individual solution, and it is designed by the target function (i.e., the BBOB functions will be described in Section 4.1), which usually has a maximum or minimum value. Generally, an individual has its own local optimal solution, and the whole population has a global optimum. In step S2, the fitness values of the population in each iteration are computed, and if the global best solution satisfies the termination conditions, NIOAs will output the results (in step S4). Otherwise, step S3 is implemented, which performs the key operations (defined by various components or operators) to exchange information among the whole population in order to evolve excellent individuals. Then, the population is updated, and the workflow jumps to step S2 to execute the next iteration. According to the above process, a set of commonly used symbols are given in Table 1 as a unified description for the 11 NIOAs, where $D$ represents the dimension number of objective functions, $M$ is the individual number of each NIOA and $N$ the total iterative times.
**Table 1.** The common symbols of NIOAs.
| Conceptions | Symbols | Description |
|------------------------------|-------------------------------------------------------------------------|----------------------------------------------------------------------------|
| Space dimension | $D$, $0 < d \leq D$ | The problem space description |
| Population size | $M$, $0 < i \leq M$ | Individual quantity |
| Iteration times | $N$, $0 < t \leq N$ | Algorithm termination condition |
| Individual position | $x_i(t) = (x_{i,1}(t), \ldots, x_{i,d}(t), \ldots, x_{i,D}(t))$ | The expression of the $i^{th}$ solution on the $t^{th}$ iteration, also used to represent the $i^{th}$ individual |
| Local best solution | $p_i(t) = (p_{i,1}(t), \ldots, p_{i,d}(t), \ldots, p_{i,D}(t))$ | Local best solution of the $i^{th}$ individual on the $t^{th}$ iteration |
| Global best solution | $p_g(t) = (p_{g,1}(t), \ldots, p_{g,d}(t), \ldots, p_{g,D}(t))$ | Global best solution of the whole population on the $t^{th}$ iteration |
| Fitness function | $f(\cdot)$ | Unique standard to evaluate solutions |
| Precision threshold | $\delta$ | Algorithm termination condition |
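Using the symbols of Table 1, the four-step process (S1–S4) can be sketched as a generic minimization loop; the `init` and `update` callbacks stand in for the algorithm-specific operators of step S3. This is only an illustrative skeleton, not code from the survey; the sphere function and random-search update are placeholder assumptions.

```python
import random

def nioa_skeleton(f, D, M, N, delta, init, update):
    """Generic NIOA loop: S1 initialize, S2 evaluate/check, S3 update, S4 output."""
    # S1: initialize M individuals and the iteration counter
    pop = [init(D) for _ in range(M)]
    best = min(pop, key=f)                      # global best p_g
    for t in range(N):                          # at most N iterations
        # S2: evaluate fitness and check the termination conditions
        best = min(pop + [best], key=f)
        if f(best) <= delta:
            break
        # S3: algorithm-specific operators generate the next population
        pop = update(pop, best, t)
    # S4: output the optimized result
    return best, f(best)

# Illustration: plain random search plugged in as the "update" operator
sphere = lambda x: sum(v * v for v in x)
rand_vec = lambda D: [random.uniform(-5, 5) for _ in range(D)]
best, val = nioa_skeleton(sphere, D=2, M=20, N=200, delta=1e-3,
                          init=rand_vec,
                          update=lambda pop, b, t: [rand_vec(2) for _ in pop])
```

Each concrete NIOA in Section 2.2 differs essentially only in what `update` does.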
### 2.2. The Principles of the 11 NIOAs
#### 2.2.1. Genetic Algorithm (GA)
Holland [2] proposed the GA, which is based on natural selection (the "selection operator $s_o$"), genetic recombination (the "crossover operator $c_o$") and mutation (the "mutation operator $m_o$") mechanisms. The encoding method of the GA is decided by the specific problem, and common encoding schemes include binary, natural number, real number, matrix, tree and quantum encodings. There are many types of selection, crossover and mutation operators: for example, roulette wheel selection, stochastic universal sampling, local selection and tournament selection for $s_o$; one-point crossover, two-point crossover, multi-point crossover and uniform crossover for $c_o$; and the basic mutation operator (which chooses one or more genes to change randomly) and the inversion operator (which randomly chooses two gene points and inverts the genes between them) for $m_o$. The types of the three operators are associated with the encoding schemes. Supposing $\theta_1$, $\theta_2$ and $\theta_3$ are the probabilities of selection, crossover and mutation, respectively, the steps of the GA are described in Algorithm 1.
**Algorithm 1** GA
Input: the parameters $M$, $N$, $\delta$, $\theta_1$, $\theta_2$ and $\theta_3$
Begin
S1: encode and initialize $M$ individuals $x_i(t)$ randomly, $0 < i \leq M$, iterative times $t = 1$;
S2: compute $f(i)$, $0 < i \leq M$, update $p_g(t)$, if it satisfies $(t > N$ or precision $\leq \delta)$, then go to step S4; otherwise, go to step S3;
S3: execute $s_o$, $c_o$ and $m_o$ operations to generate new solutions according to $\theta_1$, $\theta_2$ and $\theta_3$, iterative times $t = t + 1$; go to step S2;
S4: output the optimized results.
End
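As a concrete illustration of Algorithm 1, the following is a minimal real-coded GA sketch (not the authors' implementation): tournament selection stands in for the probabilistic $s_o$ of the pseudocode, and the search bounds and sphere test function are illustrative assumptions.

```python
import random

def ga(f, D, M, N, theta2=0.9, theta3=0.1, lo=-5.0, hi=5.0):
    """Minimal real-coded GA; theta2/theta3 are the crossover/mutation rates."""
    pop = [[random.uniform(lo, hi) for _ in range(D)] for _ in range(M)]
    best = min(pop, key=f)                              # track p_g
    for _ in range(N):
        def select():                                   # s_o: tournament selection
            a, b = random.sample(pop, 2)
            return a if f(a) < f(b) else b
        new_pop = []
        while len(new_pop) < M:
            p1, p2 = select()[:], select()[:]
            if random.random() < theta2 and D > 1:      # c_o: one-point crossover
                cut = random.randrange(1, D)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):                      # m_o: basic mutation
                for d in range(D):
                    if random.random() < theta3:
                        child[d] = random.uniform(lo, hi)
                new_pop.append(child)
        pop = new_pop[:M]
        best = min(pop + [best], key=f)                 # elitist bookkeeping
    return best, f(best)

# Example run on the sphere function (an illustrative test problem)
sphere = lambda x: sum(v * v for v in x)
sol, val = ga(sphere, D=5, M=30, N=100)
```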
2.2.2. Particle Swarm Optimization (PSO) Algorithm
Kennedy [3] put forward the PSO algorithm, which simulated the bird swarm behavior. The movement method, represented by the position and velocity of the $i^{th}$ individual in the $d^{th}$ dimension for the $(t + 1)^{th}$ iteration, is described in Equation (1).
$$v_{i,d}(t + 1) = v_{i,d}(t) + c_1 * rand_1 * (p_{i,d}(t) - x_{i,d}(t)) + c_2 * rand_2 * \left( p_{g,d}(t) - x_{i,d}(t) \right)$$
$$x_{i,d}(t + 1) = x_{i,d}(t) + v_{i,d}(t + 1)$$ \hspace{1cm} (1)
where $c_1$ and $c_2$ are the learning factors, $rand_1$ and $rand_2$ are random numbers uniformly distributed in the range $[0, 1]$ and the velocity $v_i(t)$ is defined as $v_i(t) = (v_{i,1}(t), \ldots, v_{i,d}(t), \ldots, v_{i,D}(t))$. The steps of the PSO algorithm are described as Algorithm 2.
**Algorithm 2** PSO
Input: the parameters $M$, $N$, $\delta$, $c_1$ and $c_2$
Begin
S1: initialize $x_i(t)$ and $v_i(t)$ randomly, $0 < i \leq M$, iterative times $t = 1$;
S2: compute $f(i)$, $0 < i \leq M$, update $p_i(t)$ and $p_g(t)$, if it satisfies ($t > N$ or precision $\leq \delta$), then go to step S4; otherwise, go to step S3;
S3: update $x_i(t)$ and $v_i(t)$ according to Equation (1), iterative times $t = t + 1$; go to step S2;
S4: output the optimized results.
End
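Algorithm 2 can be sketched directly from Equation (1). The bound clamping and the parameter defaults below are illustrative assumptions of this sketch, not part of Kennedy's specification.

```python
import random

def pso_minimize(f, dim, bounds, pop=20, iters=60, c1=2.0, c2=2.0, seed=0):
    """Minimal PSO sketch: Equation (1) with cognitive (p_i) and social (p_g)
    terms; rand_1 and rand_2 are drawn per dimension."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    pbest = [x[:] for x in X]                 # local best p_i(t)
    pbest_f = [f(x) for x in X]
    g = min(range(pop), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # global best p_g(t)
    for _ in range(iters):
        for i in range(pop):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] += (c1 * r1 * (pbest[i][d] - X[i][d])
                            + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = X[i][:], fx
    return gbest, gbest_f
```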
2.2.3. Artificial Bee Colony (ABC) Algorithm
Karaboga [5] presented the ABC algorithm, in which there are three kinds of bees: employed foragers, scouts and onlookers. An employed forager is associated with one food source and shares it with other bees with a certain probability; scouts are in charge of searching for new food sources; and onlookers find food sources by sharing information with employed foragers. The position of the $i^{th}$ individual on the $t^{th}$ iteration is $x_i(t)$, which is generated by Equation (2).
$$x_{i,d}(t) = L_d + rand(0, 1) * (U_d - L_d)$$ \hspace{1cm} (2)
Here, $L_d$ and $U_d$ are the lower and upper bounds in the $d^{th}$ dimensional space, respectively; $rand(0, 1)$ is the random number uniformly distributed in the range $(0, 1)$. Employed foragers search for new food sources according to Equation (3).
$$v_{i,d}(t + 1) = x_{i,d}(t) + \varphi \left( x_{i,d}(t) - x_{j,d}(t) \right)$$ \hspace{1cm} (3)
where $0 < i, j \leq M$, $i \neq j$ and $\varphi$ is a random number uniformly distributed in the range $[-1, 1]$. Onlookers choose food sources according to Equations (4) and (5),
$$p_i = \frac{fit_i}{\sum_{i=1}^{M} fit_i}$$ \hspace{1cm} (4)
$$fit_i = \begin{cases}
\frac{1}{1 + f(i)}, & f(i) \geq 0 \\
1 + abs(f(i)), & otherwise
\end{cases}$$ \hspace{1cm} (5)
If a food source cannot be improved after $Limit$ successive searches, the ABC algorithm abandons it and the corresponding employed forager becomes a scout. Supposing there are $F$ employed foragers initially, the steps of the ABC algorithm are described as Algorithm 3.
Algorithm 3 ABC
Input: the parameters $M$, $N$, $\delta$, $F$, $Limit$
Begin
S1: initialize $M$ individuals $x_i(t)$ randomly by Equation (2), and appoint $\frac{M}{2}$ bees as employed foragers, iterative times $t = 1$;
S2: compute $f(i)$, $0 < i \leq M$, update $p_g(t)$; if it satisfies ($t > N$ or precision $\leq \delta$), then go to step S4; otherwise, go to step S3;
S3: employed foragers search new food sources by Equation (3) and compute $f(i)$, updating a food source if the new one is better than the old one; onlookers choose food sources of employed foragers according to Equations (4) and (5) and generate new food sources by Equation (3), again updating a food source if the new one is better than the old one; if some food sources need to be given up (they cannot be improved after $Limit$ searches), the corresponding bees become scouts and generate new sources by Equation (2); iterative times $t = t + 1$, go to step S2;
S4: output the optimized results.
End
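The employed/onlooker/scout phases of Algorithm 3 can be sketched as follows. This is a hedged illustration: the single-dimension perturbation per move, $\varphi \in [-1, 1]$, the bound clamping and the parameter defaults are assumptions of this sketch.

```python
import random

def abc_minimize(f, dim, bounds, sources=10, iters=60, limit=20, seed=0):
    """ABC sketch: neighbour moves via Equation (3), onlooker roulette via
    Equations (4)-(5), scout reinitialization via Equation (2)."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(sources)]
    F = [f(x) for x in X]
    trials = [0] * sources
    g = min(range(sources), key=lambda i: F[i])
    best, best_f = X[g][:], F[g]

    def fitness(i):  # Equation (5), written for minimization with f >= 0
        return 1.0 / (1.0 + F[i]) if F[i] >= 0 else 1.0 + abs(F[i])

    def try_neighbour(i):
        nonlocal best, best_f
        j = rng.choice([k for k in range(sources) if k != i])
        d = rng.randrange(dim)
        v = X[i][:]
        v[d] = min(hi, max(lo, X[i][d] + rng.uniform(-1, 1) * (X[i][d] - X[j][d])))
        fv = f(v)
        if fv < F[i]:
            X[i], F[i], trials[i] = v, fv, 0
            if fv < best_f:
                best, best_f = v[:], fv
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(sources):            # employed forager phase
            try_neighbour(i)
        total = sum(fitness(i) for i in range(sources))
        for _ in range(sources):            # onlooker phase: roulette, Eq. (4)
            r, acc = rng.random() * total, 0.0
            for i in range(sources):
                acc += fitness(i)
                if acc >= r:
                    try_neighbour(i)
                    break
        for i in range(sources):            # scout phase
            if trials[i] > limit:
                X[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                F[i], trials[i] = f(X[i]), 0
    return best, best_f
```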
2.2.4. Bat Algorithm (BA)
Yang [8] presented the BA algorithm, which is based on the echolocation behavior of bats. Suppose the frequency of a sound wave is $Freq \in [Freq_{min}, Freq_{max}]$, where $Freq_{min}$ and $Freq_{max}$ are the lower and upper bounds, respectively. The sound intensity and pulse emissivity are defined as $A \in [A_{min}, A_{max}]$ and $r$, respectively; the individuals update their positions by Equation (6).
$$Freq_i = Freq_{min} + (Freq_{max} - Freq_{min}) * \beta_i$$
$$v_i(t + 1) = v_i(t) + (x_i(t) - p_g(t)) * Freq_i$$
$$x_i(t + 1) = x_i(t) + v_i(t + 1)$$ \hspace{1cm} (6)
where $\beta_i$ is a random number uniformly distributed in the range $[0, 1]$. A bat generates a new local solution by Equation (7),
$$x_{new} = x_{old} + \varepsilon * \overline{A}$$ \hspace{1cm} (7)
where $\varepsilon$ is the random number in the range $[-1, 1]$, $\overline{A}$ is the average sound intensity of all the bats. $A_i$ and $r_i$ are updated by Equations (8) and (9).
$$A_i(t + 1) = \alpha * A_i(t)$$ \hspace{1cm} (8)
$$r_i(t + 1) = r_i^0 * [1 - \exp(-\gamma * t)]$$ \hspace{1cm} (9)
where $\alpha$ and $\gamma$ are two constants and $r_i^0$ is the initial value of $r$. The steps of the BA algorithm are described as Algorithm 4.
Algorithm 4 BA
Input: the parameters $M$, $N$, $\delta$, $Freq$, $A$, $\alpha$ and $\gamma$
Begin
S1: initialize $M$ individuals $x_i(t)$ randomly, $0 < i \leq M$, iterative times $t = 1$;
S2: compute $f(i)$, $0 < i \leq M$, update $p_g(t)$; if it satisfies ($t > N$ or precision $\leq \delta$), then go to step S4; otherwise, go to step S3;
S3: update $Freq$, $x_i(t)$ and $v_i(t)$ by Equation (6); generate a random number $rand_1$, and if $rand_1 > r$, generate a new solution around the global optimum by Equation (7); generate a random number $rand_2$, and if $rand_2 < A$ and the new solution is better than the old one, accept the new position and update $A_i$ and $r_i$ by Equations (8) and (9); iterative times $t = t + 1$, go to step S2;
S4: output the optimized results.
End
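Algorithm 4 can be sketched as below. The initial loudness and pulse rate, the bound clamping and the parameter defaults are illustrative assumptions; the velocity update follows the sign convention of Equation (6) as written in the text.

```python
import math
import random

def bat_minimize(f, dim, bounds, pop=20, iters=60,
                 fmin=0.0, fmax=2.0, alpha=0.9, gamma=0.9, seed=0):
    """BA sketch: frequency/velocity/position via Equation (6), local walk
    around the best via Equation (7), loudness and pulse rate via (8)-(9)."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    Fit = [f(x) for x in X]
    A = [1.0] * pop            # loudness A_i (illustrative start value)
    r0, r = 0.5, [0.0] * pop   # pulse emissivity r_i and its base value
    g = min(range(pop), key=lambda i: Fit[i])
    best, best_f = X[g][:], Fit[g]
    for t in range(1, iters + 1):
        A_bar = sum(A) / pop
        for i in range(pop):
            freq = fmin + (fmax - fmin) * rng.random()
            cand = X[i][:]
            for d in range(dim):
                V[i][d] += (X[i][d] - best[d]) * freq
                cand[d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if rng.random() > r[i]:   # local walk around the best, Eq. (7)
                cand = [min(hi, max(lo, b + rng.uniform(-1, 1) * A_bar))
                        for b in best]
            fc = f(cand)
            if fc < Fit[i] and rng.random() < A[i]:
                X[i], Fit[i] = cand, fc
                A[i] *= alpha                            # Equation (8)
                r[i] = r0 * (1 - math.exp(-gamma * t))   # Equation (9)
            if fc < best_f:
                best, best_f = cand[:], fc
    return best, best_f
```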
2.2.5. Immune Algorithm (IA)
In 1958, Burnet [34] presented clonal selection theory, and Bersini [10] first used an artificial immune system to solve discrete problems. Generally speaking, the common immune algorithm (IA) adopts a learning strategy similar to GA, while IA uses affinity to guide the searching process [35]. Affinity is defined by information entropy; the average information entropy $H(i, j)$ of antibodies $i$ and $j$ is described as follows.
$$H(i, j) = \frac{1}{D} \sum_{l=1}^{D} H_l(i, j)$$ \hspace{1cm} (10)
where $H_l(i, j) = \sum_{s=1}^{S} - p_{s,l} \log p_{s,l}$ indicates the information entropy of the $l^{th}$ bit of genes for antibodies $i$ and $j$. $p_{s,l}$ is the probability of the $l^{th}$ bit of genes for antibodies $i$ and $j$ taking the gene letter $K_s$, $K_s \in \{K_1, K_2, \ldots, K_S\}$; $S$ is the number of gene letters. The affinity between antibodies $i$ and $j$ reflects the similarity of the two antibodies, which is defined as follows.
$$A_{i,j} = \frac{1}{1 + H(i, j)}$$ \hspace{1cm} (11)
The concentration of antibodies reflects the diversity of the whole population. The density of the $i^{th}$ antibody is described as follows.
$$Con_i = \frac{1}{M} \sum_{j=1}^{M} C_{ij}$$
$$C_{ij} = \begin{cases}
1 & A_{i,j} \geq h_1 \\
0 & A_{i,j} < h_1
\end{cases}, j = 1, 2, \ldots, M$$ \hspace{1cm} (12)
where $h_1$ is the threshold of the affinity. The activity degree refers to the comprehensive ability of the antibody to respond to antigen and be activated by other antibodies; generally, the antibody with large affinity and small concentration will have a large activity degree. The activity degree of the $i^{th}$ antibody is defined as follows.
$$Act_i = \begin{cases}
\frac{f(i)*(1-Con_i)}{\sum_{i=1}^{M} f(i)*Con_i}, & Con_i \geq h_2 \\
\frac{f(i)}{\sum_{i=1}^{M} f(i)*Con_i}, & Con_i < h_2
\end{cases}$$ \hspace{1cm} (13)
where $f(i)$ is the fitness value of the $i^{th}$ antibody, $Con_i$ is the concentration of the $i^{th}$ antibody, and $h_2$ is the threshold of antibody density. The steps of IA are described as Algorithm 5.
**Algorithm 5** IA
Input: the parameters $M$, $N$, $\delta$, $P_c$, $P_m$
Begin
S1: initialize $M$ antibodies $x_i(t)$ randomly, $0 < i \leq M$, iterative times $t = 1$;
S2: compute $f(i), 0 < i \leq M$, update $p_g(t)$; if it satisfies ($t > N$ or precision $\leq \delta$), then go to step S4; otherwise, go to step S3;
S3: compute the affinity, the concentration and activity degree according to Equations (11), (12) and (13); the selection operation is executed by the roulette method to choose antibodies with large activity degree, and then execute crossover and mutation operations according to the probabilities $P_c$ and $P_m$, respectively; iterative time $t = t + 1$, go to step S2;
S4: output the optimized results.
End
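Algorithm 5 can be sketched as below. Two simplifications are assumed for brevity: antibodies are real-valued rather than gene strings, so affinity is taken as $1/(1 + \text{distance})$ instead of the entropy-based Equation (11); the concentration and activity logic then mirrors Equations (12) and (13), with roulette selection on activity followed by GA-style crossover and mutation, as in step S3.

```python
import random

def ia_minimize(f, dim, bounds, pop=20, iters=60,
                pc=0.7, pm=0.1, h1=0.9, seed=0):
    """IA sketch: concentration from pairwise affinity, activity favouring
    good fitness and low concentration, then crossover/mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    P = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(P, key=f)

    def affinity(a, b):  # stand-in for Equation (11)
        d = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return 1.0 / (1.0 + d)

    for _ in range(iters):
        # concentration, analogous to Equation (12)
        con = [sum(1 for b in P if affinity(a, b) >= h1) / pop for a in P]
        # activity: good fitness (small f) and low concentration, cf. Eq. (13)
        act = [(1.0 / (1.0 + f(a))) * (1.0 - c + 1e-9) for a, c in zip(P, con)]
        total = sum(act)

        def roulette():
            r, acc = rng.random() * total, 0.0
            for a, w in zip(P, act):
                acc += w
                if acc >= r:
                    return a[:]
            return P[-1][:]

        Q = []
        while len(Q) < pop:
            child = roulette()
            if rng.random() < pc:                     # crossover
                mate, cut = roulette(), rng.randrange(1, dim)
                child = child[:cut] + mate[cut:]
            if rng.random() < pm:                     # mutation
                j = rng.randrange(dim)
                child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.5)))
            Q.append(child)
        P = Q
        cand = min(P, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best, f(best)
```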
2.2.6. Firefly Algorithm (FA)
The FA [9] algorithm was proposed by Yang Xin-She; its main operations are the update of firefly luminance $I$, the computation of firefly attraction degree $\beta$ and the update of firefly position. Suppose the attraction factor is $\gamma$, the maximum luminance is $I_0$, the maximum attraction degree is $\beta_0$ and the step factor is $step$. The luminance is defined as Equations (14) and (15).
\[
I = I_0 * \exp(-\gamma * r_{ij})
\]
(14)
\[
r_{ij} = \sqrt{\sum_{k=1}^{D} (x_{i,k} - x_{j,k})^2}
\]
(15)
The attraction degree is described as Equation (16).
\[
\beta(r_{ij}) = \beta_0 * \exp(-\gamma * r_{ij}^2)
\]
(16)
Equation (17) defines the position updating of the $i^{th}$ firefly when it moves toward the $j^{th}$ firefly. Here, $x_i(t)$ and $x_j(t)$ are the positions of the $i^{th}$ firefly and the $j^{th}$ firefly, respectively, at the iteration $t$. For convenience, we use $x_i(t)$ to represent the $i^{th}$ firefly.
\[
x_i(t + 1) = x_i(t) + \beta(r_{ij}) * (x_j(t) - x_i(t)) + \text{step} * \varepsilon_i
\]
(17)
where $\varepsilon_i$ is a random number that follows a Gaussian or uniform distribution. The steps of FA are described as Algorithm 6.
**Algorithm 6** FA
Input: the parameters $M$, $N$, $\delta$, $I_0$, $\beta_0$ and $\text{step}$
Begin
S1: initialize $M$ individuals $x_i(t)$ randomly, $0 < i \leq M$, iterative times $t = 1$;
S2: compute $f(i), 0 < i \leq M$; if $t > 1$ and the new position is better than the old one, update $x_i(t)$ and $p_g(t)$; if it satisfies ($t > N$ or precision $\leq \delta$), then go to step S4; otherwise, go to step S3;
S3: for the $i^{th}$ firefly $x_i(t)$, FA searches another firefly (suppose the $j^{th}$ firefly $x_j(t)$, and $i \neq j$) in the population that has the luminance calculated with Equation (14). If the luminance of $x_j(t)$ is larger than that of $x_i(t)$, $x_i(t)$ moves toward $x_j(t)$ by Equation (17), iterative times $t = t + 1$; go to step S2;
S4: output the optimized results.
End
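Algorithm 6 can be sketched as below. Luminance is taken as $-f(x)$ for minimization (so a smaller objective value means a brighter firefly), and the uniform random perturbation, bound clamping and parameter defaults are assumptions of this sketch.

```python
import math
import random

def fa_minimize(f, dim, bounds, pop=15, iters=40,
                beta0=1.0, gamma=1.0, step=0.2, seed=0):
    """FA sketch: each firefly moves toward every brighter firefly using the
    attraction degree of Equation (16) and the position update of (17)."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=f)
    for _ in range(iters):
        Fit = [f(x) for x in X]
        for i in range(pop):
            for j in range(pop):
                if Fit[j] < Fit[i]:   # x_j is brighter than x_i
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # Equation (16)
                    for d in range(dim):
                        X[i][d] += (beta * (X[j][d] - X[i][d])
                                    + step * rng.uniform(-0.5, 0.5))
                        X[i][d] = min(hi, max(lo, X[i][d]))
                    Fit[i] = f(X[i])
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best, f(best)
```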
### 2.2.7. Cuckoo Search (CS) Algorithm
The CS [7] algorithm was also put forward by Yang Xin-She, based on the brood parasitism of certain cuckoo species and the characteristics of Lévy flights. The CS algorithm follows three idealized rules: (1) each cuckoo lays one egg at a time and dumps its egg in a randomly selected nest; (2) the best nests with the highest-quality eggs will carry over to the next generations; (3) the number of available host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with a probability $p_a \in [0, 1]$; that is, a fraction $p_a$ of the $M$ nests will be replaced by new nests. The $i^{th}$ individual updates its host nest $x_i(t)$ by Equation (18),
\[
x_i(t + 1) = x_i(t) + \alpha \otimes \text{Levy}(\lambda)
\]
(18)
where $\alpha$ is the scaling factor of the step size and usually equals 1. The product $\otimes$ means entrywise multiplication, and $\text{Levy}(\lambda)$ indicates that the step length is drawn from the Lévy distribution. It is difficult to sample the true Lévy distribution, so Equation (19) is usually used to approximate a Lévy flight:
\[
s = \frac{u}{|v|^{\beta}}
\]
(19)
where $u$ and $v$ follow Gaussian distributions, $u \sim N(0, \sigma^2)$, $v \sim N(0, 1)$,
\[
\sigma = \left\{ \frac{\Gamma(1+\beta)\sin(\frac{\pi\beta}{2})}{\beta\Gamma\left(\frac{1+\beta}{2}\right)2^{\frac{\beta-1}{2}}} \right\}^{\frac{1}{\beta}}, \quad \beta = 1.5.
\]
Some cuckoo eggs may be found and discarded by the host, with abandonment probability $p_a$. When a cuckoo egg is abandoned, the cuckoo needs to find a new nest site, which is generated by Equation (20):
$$x_i(t + 1) = x_i(t) + \alpha s \otimes H(p_a - \epsilon) \otimes (x_j(t) - x_k(t))$$
(20)
where $x_j(t)$ and $x_k(t)$ are two different solutions selected by random permutation, $H(\cdot)$ is the Heaviside function, $\epsilon$ is a random number drawn from a uniform distribution, $s$ is the step size, defined as a random number in the range $(0, 1)$, and $\alpha$ is a scaling factor of the step size. The steps of the CS algorithm are described as Algorithm 7.
**Algorithm 7** CS
Input: the parameters $M$, $N$, $\delta$, $\alpha$ and $p_a$
Begin
S1: initialize $M$ host nests $x_i(t)$ randomly, $0 < i \leq M$, iterative times $t = 1$;
S2: compute $f(i)$, $0 < i \leq M$, update $p_g(t)$; if it satisfies ($t > N$ or precision $\leq \delta$), then go to step S4; otherwise, go to step S3;
S3: choose a cuckoo randomly to generate a new solution by Equation (18); choose a nest among $M$ individuals randomly; if the new solution is better than the chosen nest, replace it; a fraction ($p_a$) of the worst nests are replaced by the new nests by Equation (20), iterative times $t = t + 1$; go to step S2;
S4: output the optimized results.
End
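Algorithm 7 can be sketched as below, with the Lévy step drawn via Mantegna's approximation of Equation (19). The 0.01 step scaling, the bound clamping and the parameter defaults are illustrative assumptions of this sketch.

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Mantegna's approximation to a Levy flight step (Equation (19))."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (beta * math.gamma((1 + beta) / 2)
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = rng.gauss(0, sigma), rng.gauss(0, 1)
    return u / max(abs(v), 1e-12) ** (1 / beta)

def cs_minimize(f, dim, bounds, nests=15, iters=60, pa=0.25, alpha=1.0, seed=0):
    """CS sketch: Levy-flight move (Equation (18)) challenging a random nest,
    then per-nest abandonment with probability pa (cf. Equation (20))."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(nests)]
    F = [f(x) for x in X]
    g = min(range(nests), key=lambda i: F[i])
    best, best_f = X[g][:], F[g]
    for _ in range(iters):
        i = rng.randrange(nests)                    # a random cuckoo
        cand = [min(hi, max(lo, x + alpha * 0.01 * levy_step(rng)))
                for x in X[i]]
        fc = f(cand)
        k = rng.randrange(nests)                    # random nest to challenge
        if fc < F[k]:
            X[k], F[k] = cand, fc
        for i in range(nests):                      # abandonment phase
            if rng.random() < pa:
                j, k = rng.sample(range(nests), 2)
                X[i] = [min(hi, max(lo, x + rng.random() * (a - b)))
                        for x, a, b in zip(X[i], X[j], X[k])]
                F[i] = f(X[i])
        g = min(range(nests), key=lambda i: F[i])
        if F[g] < best_f:
            best, best_f = X[g][:], F[g]
    return best, best_f
```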
### 2.2.8. Differential Evolution (DE) Algorithm
The DE [4] algorithm, proposed by Storn, has three operations, mutation, crossover and selection, but it differs from the GA algorithm. The main differences are: (1) in GA, two sub-individuals are generated by crossing two parent individuals, whereas in DE, new individuals are generated by perturbing an individual with the difference vectors of several other individuals; (2) in GA, the progeny individual replaces the parent individual with a certain probability, while in DE, an individual is only updated when the new individual is better than the old one. More specifically, the basic strategy of DE can be described as follows.
1. **Mutation operation**
Each individual $x_i(t)$ can execute the mutation operation according to Equation (21),
$$v_{i,j}(t) = x_{r_1,j}(t) + F(x_{r_2,j}(t) - x_{r_3,j}(t))$$
(21)
where $x_{r_1}(t)$, $x_{r_2}(t)$ and $x_{r_3}(t)$ are three individuals selected randomly from the whole population, $r_1 \neq r_2 \neq r_3 \neq i$ and $F \in [0, 2]$ is the mutation factor.
2. **Crossover operation**
The crossover individual $u_i(t) = (u_{i,1}(t), u_{i,2}(t), \ldots, u_{i,D}(t))$ can be generated by the mutation individual $v_i(t)$ and its parent individual $x_i(t)$, as described in Equation (22),
$$u_{i,j}(t) = \begin{cases} v_{i,j}(t) & \text{if } rand \leq CR \text{ or } j = j\_rand \\ x_{i,j}(t) & \text{if } rand > CR \text{ and } j \neq j\_rand \end{cases}$$
(22)
where $rand$ is a random number in the range $[0, 1]$, $CR$ is the crossover factor and it is a constant in the range $[0, 1]$; $j\_rand$ is an integer selected randomly from the range $[1, D]$.
3. **Selection operation**
The DE algorithm adopts the “greedy” strategy; the next-generation individual is chosen between parent individual $x_i(t)$ and the crossover individual $u_i(t)$, which has the better fitness value, as described in Equation (23).
$$x_i(t) = \begin{cases} x_i(t) & \text{if } f(x_i(t)) \text{ is better than } f(u_i(t)) \\ u_i(t) & \text{otherwise} \end{cases}$$
(23)
The steps of the DE algorithm are described as Algorithm 8.
Algorithm 8 DE
Input: the parameters $M$, $N$, $\delta$, $F$ and $CR$
Begin
S1: initialize $M$ individuals $x_i(t)$ randomly, $0 < i \leq M$, iterative times $t = 1$;
S2: compute $f(i), 0 < i \leq M$, update $p_g(t)$; if it satisfies ($t > N$ or precision $\leq \delta$), then go to step S4; otherwise, go to step S3;
S3: execute mutation operation by Equation (21), execute crossover operation by Equation (22) and execute selection operation by Equation (23), iterative times $t = t + 1$; go to step S2;
S4: output the optimized results.
End
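Algorithm 8 corresponds closely to the classic DE/rand/1/bin scheme, sketched below. The bound clamping and the parameter defaults ($F = 0.5$, $CR = 0.9$) are illustrative assumptions.

```python
import random

def de_minimize(f, dim, bounds, pop=20, iters=60, F=0.5, CR=0.9, seed=0):
    """DE sketch: mutation via Equation (21), binomial crossover via
    Equation (22) and greedy selection via Equation (23)."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    Fit = [f(x) for x in X]
    for _ in range(iters):
        for i in range(pop):
            # three mutually distinct individuals, all different from i
            r1, r2, r3 = rng.sample([k for k in range(pop) if k != i], 3)
            j_rand = rng.randrange(dim)
            u = X[i][:]
            for j in range(dim):
                if rng.random() <= CR or j == j_rand:   # Equation (22)
                    u[j] = X[r1][j] + F * (X[r2][j] - X[r3][j])  # Eq. (21)
                    u[j] = min(hi, max(lo, u[j]))
            fu = f(u)
            if fu < Fit[i]:       # greedy selection, Equation (23)
                X[i], Fit[i] = u, fu
    g = min(range(pop), key=lambda i: Fit[i])
    return X[g], Fit[g]
```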
2.2.9. Gravitational Search Algorithm (GSA)
The GSA [12] algorithm was proposed by Esmat Rashedi; it is based on the law of gravity and mass interactions. On the $t^{th}$ iteration, the force acting on particle $x_i(t)$ from particle $x_j(t)$ is defined as Equation (24),
$$F_{i,j}^d(t) = G(t) * \frac{I_{pi}(t) * I_{qj}(t)}{R_{ij}(t) + \varepsilon} * \left( x_{j,d}^t - x_{i,d}^t \right)$$ \hspace{1cm} (24)
where $I_{pi}$ is the passive gravitational mass of particle $x_i(t)$, $I_{qj}$ is the active gravitational mass of particle $x_j(t)$, $\varepsilon$ is a small constant and $G(t)$ is the gravitational constant at iteration $t$, which is defined as Equation (25):
$$G(t) = G_0 * e^{-\alpha * t / N}$$ \hspace{1cm} (25)
where $G_0 = 100$ and $\alpha = 20$. $R_{ij}(t)$ is the Euclidean distance between two particles $x_i(t)$ and $x_j(t)$, described as follows.
$$R_{ij}(t) = \sqrt{\sum_{k=1}^{D} \left( x_{i,k}^t - x_{j,k}^t \right)^2}$$ \hspace{1cm} (26)
On the $t^{th}$ iteration, the total force acting on the particle $x_i(t)$ in the dimension $d$ is defined as Equation (27),
$$F_i^d(t) = \sum_{j=1, j \neq i}^{M} rand_j * F_{i,j}^d(t)$$ \hspace{1cm} (27)
where $rand_j$ is a random number in the interval $[0, 1]$, the acceleration of the particle $x_i(t)$ in the dimension $d$ is defined as Equation (28):
$$a_i^d(t) = \frac{F_i^d(t)}{I_i(t)}$$ \hspace{1cm} (28)
where $I_i(t)$ is the inertial mass of particle $x_i(t)$; one particle updates its velocity and position according to its acceleration, as described in Equation (29).
$$v_{i,d}(t + 1) = rand_i * v_{i,d}(t) + a_i^d(t)$$
$$x_{i,d}(t + 1) = x_{i,d}(t) + v_{i,d}(t + 1)$$ \hspace{1cm} (29)
The GSA algorithm updates the gravitational and inertial masses by Equation (30).
$$I_{ai} = I_{pi} = I_{qi} = I_i, \ i = 1, 2, \ldots, M$$
$$i_i(t) = \frac{f_i(t) - worst(t)}{best(t) - worst(t)}$$
$$I_i(t) = \frac{i_i(t)}{\sum_{j=1}^{M} i_j(t)}$$ \hspace{1cm} (30)
where \( f_i(t) \) is the fitness value of the particle \( x_i(t) \), \( best(t) \) and \( worst(t) \) represent the best and the worst fitness value among all the particles, respectively; they are defined as Equation (31).
\[
best(t) = \min_{j \in \{1, 2, \ldots, M\}} f_j(t)
\]
\[
worst(t) = \max_{j \in \{1, 2, \ldots, M\}} f_j(t)
\]
(31)
The steps of the GSA algorithm are described as Algorithm 9.
**Algorithm 9** GSA
Input: the parameters \( M, N, \delta, G_0, \alpha \)
Begin
S1: initialize \( M \) particles \( x_i(t) \) randomly, \( 0 < i \leq M \), iterative times \( t = 1 \);
S2: compute \( f(i), 0 < i \leq M \), update \( p_g(t) \); if it satisfies (\( t > N \) or precision \( \leq \delta \)), then go to step S4; otherwise, go to step S3;
S3: update \( G(t) \) by Equation (25), update \( best(t) \) and \( worst(t) \) by Equation (31), update \( I_i(t) \) by Equation (30), calculate \( F_i^d(t) \) by Equation (27), compute \( a_i^d(t) \) by Equation (28), update \( v_{i,d}(t + 1) \) and \( x_{i,d}(t + 1) \) by Equation (29), iterative times \( t = t + 1 \); go to step S2;
S4: output the optimized results.
End
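Algorithm 9 can be sketched as below. Because the passive mass of particle $i$ in Equation (24) cancels against the inertial mass in Equation (28), the sketch computes the acceleration directly from the other particles' masses; the bound clamping and the minimization convention for $best/worst$ are assumptions of this sketch.

```python
import math
import random

def gsa_minimize(f, dim, bounds, pop=15, iters=50,
                 G0=100.0, alpha=20.0, seed=0):
    """GSA sketch: masses via Equation (30), gravitational constant via (25),
    force/acceleration via (24), (27), (28), motion via (29)."""
    rng = random.Random(seed)
    lo, hi = bounds
    eps = 1e-9
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    best, best_f = None, float("inf")
    for t in range(1, iters + 1):
        Fit = [f(x) for x in X]
        g = min(range(pop), key=lambda i: Fit[i])
        if Fit[g] < best_f:
            best, best_f = X[g][:], Fit[g]
        G = G0 * math.exp(-alpha * t / iters)          # Equation (25)
        worst, bst = max(Fit), min(Fit)                # Equation (31)
        m = [(fi - worst) / (bst - worst + eps) for fi in Fit]
        s = sum(m) + eps
        M = [mi / s for mi in m]                       # Equation (30)
        for i in range(pop):
            acc = [0.0] * dim
            for j in range(pop):
                if j == i:
                    continue
                R = math.dist(X[i], X[j])              # Equation (26)
                for d in range(dim):
                    # Equations (24), (27), (28) combined
                    acc[d] += (rng.random() * G * M[j]
                               * (X[j][d] - X[i][d]) / (R + eps))
            for d in range(dim):
                V[i][d] = rng.random() * V[i][d] + acc[d]   # Equation (29)
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
    return best, best_f
```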
### 2.2.10. Grey Wolf Optimizer (GWO)
The grey wolf optimizer was proposed in 2014; it employs four types of grey wolves to simulate the leadership hierarchy: alpha (\( \alpha \)), beta (\( \beta \)), delta (\( \delta \)) and omega (\( \omega \)). The \( \alpha \) wolf is called the dominant wolf because its orders should be followed by the pack; the \( \beta \) wolf is probably the best candidate to become the \( \alpha \) in case the latter passes away or becomes very old; the \( \delta \) wolf respects the \( \alpha \) and \( \beta \) but commands the other lower-level wolves; the lowest-ranking grey wolf is the \( \omega \), which plays the role of scapegoat. Grey wolves encircle prey during a hunt. In GWO, the encircling behavior can be described by the following equations:
\[
D = |C * x_p(t) - x(t)|
\]
(32)
\[
x(t + 1) = x_p(t) - A * D
\]
(33)
where \( A \) and \( C \) are coefficient vectors, \( x_p(t) \) is the position vector of the prey (the solution) and \( x(t) \) is the position vector of a grey wolf. \( A \) and \( C \) are calculated as follows,
\[
A = 2 * a * r_1 - a
\]
(34)
\[
C = 2 * r_2
\]
(35)
where \( a \) is linearly decreased from 2 to 0 over the course of iterations, and \( r_1 \) and \( r_2 \) are random values in the range \([0, 1]\). In order to mathematically simulate the hunting behavior of grey wolves, GWO supposes that \( \alpha, \beta \) and \( \delta \) have better knowledge about the potential location of prey; it saves the three best solutions obtained so far and obliges the other wolves to update their positions according to the positions of these best search agents, as described below.
\[
D_\alpha = |C_1 * x_\alpha(t) - x(t)|
\]
(36)
\[
D_\beta = |C_2 * x_\beta(t) - x(t)|
\]
(37)
\[
D_\delta = |C_3 * x_\delta(t) - x(t)|
\]
(38)
\[
x_1 = x_\alpha - A_1 * D_\alpha
\]
(39)
\[
x_2 = x_\beta - A_2 * D_\beta
\]
(40)
\[
x_3 = x_\delta - A_3 * D_\delta
\]
(41)
\[ x(t + 1) = \frac{x_1 + x_2 + x_3}{3} \]
(42)
Here, \( A_1, A_2, A_3 \) and \( C_1, C_2, C_3 \) are the coefficients that are generated by different random values. The steps of the GWO algorithm are described as Algorithm 10.
**Algorithm 10 GWO**
Input: the parameters \( M, N, \delta \)
Begin
S1: initialize \( M \) individuals \( x_i(t) \) randomly, \( 0 < i \leq M \), iterative times \( t = 1 \);
S2: compute \( f(i), 0 < i \leq M \), rank the solutions and find the current top three best wolves: \( x_\alpha(t), x_\beta(t) \) and \( x_\delta(t) \); if it satisfies (\( t > N \) or precision \( \leq \delta \)), then go to step S4; otherwise, go to step S3;
S3: update the solution of each individual via Equation (42), update coefficients \( a, A \) and \( C \), iterative times \( t = t + 1 \); go to step S2;
S4: output the optimized results.
End
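Algorithm 10 can be sketched as below. Drawing fresh $A$ and $C$ coefficients per leader and per dimension, as well as the bound clamping and parameter defaults, are assumptions of this sketch.

```python
import random

def gwo_minimize(f, dim, bounds, pop=20, iters=60, seed=0):
    """GWO sketch: every wolf moves to the mean of the alpha, beta and delta
    estimates (Equations (32)-(42))."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for t in range(iters):
        ranked = sorted(X, key=f)
        alpha, beta, delta = ranked[0], ranked[1], ranked[2]
        a = 2.0 - 2.0 * t / iters        # linearly decreases from 2 to 0
        for i in range(pop):
            new = []
            for d in range(dim):
                xs = []
                for leader in (alpha, beta, delta):
                    A = 2 * a * rng.random() - a          # Equation (34)
                    C = 2 * rng.random()                  # Equation (35)
                    D = abs(C * leader[d] - X[i][d])      # Eqs. (36)-(38)
                    xs.append(leader[d] - A * D)          # Eqs. (39)-(41)
                new.append(min(hi, max(lo, sum(xs) / 3)))  # Equation (42)
            X[i] = new
    best = min(X, key=f)
    return best, f(best)
```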
### 2.2.11. Harmony Search (HS) Optimization
The harmony search (HS) optimization algorithm [13] is inspired by improvisation in music playing. In a band, all players adjust their pitch to achieve a wonderful harmony; in the process of global optimization, all decision variables constantly adjust their own values to drive the objective function toward the global optimum. A harmony memory ($HM$), whose size is given by the parameter $HMS$ (harmony memory size), stores the best harmony vectors during the optimization. A new harmony vector $x'(t) = (x'_1(t), x'_2(t), \ldots, x'_D(t))$ is generated from the $HM$ based on memory consideration, pitch adjustment and randomization. For each $x'_i(t)$, $i = 1, 2, \ldots, D$, a new value is chosen using the harmony memory considering rate ($HMCR$) parameter, which varies between 0 and 1, as follows.
\[
x'_i(t) = \begin{cases}
x_i^j(t), \ j \in \{1, 2, \ldots, HMS\} & r_1 \leq HMCR \\
x'_i(t) \in X_i & else
\end{cases}
\]
(43)
Equation (43) indicates that $x'_i(t)$ takes its value from the $HM$ if a randomly generated number $r_1$ is less than or equal to $HMCR$, and is otherwise chosen randomly as one feasible value from $X_i$, the definition space of the $i^{th}$ dimensional variable. Every component $x'_i(t)$ is then examined to determine whether it should be pitch-adjusted. This procedure uses the pitch adjusting rate ($PAR$) parameter, which sets the rate of adjustment for a pitch chosen from the $HM$, as follows.
\[
x'_i(t) = \begin{cases}
x'_i(t) + \alpha & r_2 \leq PAR \\
x'_i(t) & otherwise
\end{cases}
\]
(44)
Here, \( r_2 \) is a random-generated number and \( \alpha \) is defined in Equation (45):
\[
\alpha = BW * u(-1, 1)
\]
(45)
where \( BW \) is an arbitrary distance bandwidth for continuous design variables, and \( u(-1, 1) \) is a uniform distribution between \(-1\) and \(1\). The steps of the HS algorithm are described as Algorithm 11.
Algorithm 11 HS
Input: the parameters $M$ (or $HMS$), $N$, $\delta$, $HMCR$, $PAR$ and $BW$
Begin
S1: initialize the harmony memory filled with $M$ solutions that are randomly generated, iterative times $t = 1$;
S2: compute $f(i)$, $0 < i \leq M$, update $p_g(t)$; if it satisfies ($t > N$ or precision $\leq \delta$), then go to step S4; otherwise, go to step S3;
S3: generate new harmony according to Equations (43) and (44); if it is better than one harmony in $HM$, replace it, iterative times $t = t + 1$; go to step S2;
S4: output the optimized results.
End
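Algorithm 11 can be sketched as below, combining the memory consideration of Equation (43), the pitch adjustment of Equations (44) and (45) and replacement of the worst harmony. The bound clamping and the parameter defaults are illustrative assumptions.

```python
import random

def hs_minimize(f, dim, bounds, hms=10, iters=200,
                hmcr=0.9, par=0.3, bw=0.2, seed=0):
    """HS sketch: build a new harmony component-wise from memory (prob. HMCR),
    pitch-adjust it (prob. PAR), otherwise draw a random value."""
    rng = random.Random(seed)
    lo, hi = bounds
    HM = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    Fit = [f(x) for x in HM]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() <= hmcr:                 # memory consideration
                v = HM[rng.randrange(hms)][d]
                if rng.random() <= par:              # pitch adjustment
                    v += bw * rng.uniform(-1, 1)     # Equation (45)
            else:                                    # random selection
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        fn = f(new)
        worst = max(range(hms), key=lambda i: Fit[i])
        if fn < Fit[worst]:                          # replace worst harmony
            HM[worst], Fit[worst] = new, fn
    g = min(range(hms), key=lambda i: Fit[i])
    return HM[g], Fit[g]
```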
3. Theoretical Comparison and Analysis of the 11 NIOAs
3.1. Common Characteristics
As shown in Section 2, although the NIOAs simulate different population behaviors, all of them are iterative methods and share some common characteristics. These characteristics satisfy the Reynolds model [36], which describes the basic rules for the aggregation motion of a simulated flock created by a distributed behavioral model.
1. **Randomicity.** Randomicity is the uncertainty of an event occurring with a certain probability, and it can enhance the global search capability of individuals. All 11 NIOAs initialize individuals randomly, which covers the search space as widely as possible; other stochastic mechanisms further enhance their exploration and exploitation abilities, such as the mutation operators $m_o$ in GA, IA and DE, the random parameters $rand_1$ and $rand_2$ in PSO, $rand(0, 1)$ and $\varphi$ in ABC, $\beta_i$ and $\varepsilon$ in BA, $\varepsilon_i$ in FA, Lévy flight in CS, $rand$ and $j\_rand$ in DE, $rand_j$ and $rand_i$ in GSA, $r_1$, $r_2$, $A_1$, $A_2$, $A_3$ and $C_1$, $C_2$, $C_3$ in GWO, $u(-1, 1)$ in HS, etc.
2. **Information Interactivity.** The individuals in the NIOAs exchange information directly or indirectly, which increases the probability of obtaining the global optimum. For instance, GA, IA and DE adopt the crossover operator $c_o$ to exchange information; in PSO, each particle utilizes the global optimum $p_g(t)$ to update its position; employed foragers and onlookers in ABC update their velocities $v_{i,d}(t + 1)$ using another position $x_{j,d}(t)$; bats in BA use the global optimum $p_g(t)$ to update their positions; in FA, firefly $x_i(t)$ moves toward $x_j(t)$ by mixing the position information of $x_i(t)$ and $x_j(t)$; in GSA, the force $F_{ij}^d(t)$ is computed from the positions of particles $x_i(t)$ and $x_j(t)$ and is used to update the position of each particle; a wolf in GWO updates its position according to the positions of wolves $\alpha$, $\beta$ and $\delta$; and, last but not least, a new harmony in HS is generated from the $HM$.
3. **Optimality.** The individuals in the NIOAs move toward the globally best solution through different mechanisms of information exchange. For example, the good genes in GA and DE are passed to the next generation through the operators $s_o$, $c_o$ and $m_o$; particles in PSO update their positions under the influence of the local optimum $p_i(t)$ and the global optimum $p_g(t)$; onlookers in ABC choose food sources with better fitness than the old ones; a bat in BA generates a new solution and updates its position only if the new solution is better than the old one; the good antibodies in IA are saved in a memory database to participate in the next iteration; in FA, a firefly moves toward the fireflies with larger luminance; CS replaces a solution only when the new one is better than the chosen solution, and a fraction of the worst solutions is replaced by newly generated solutions; in GSA, the gravitational and inertial masses are calculated from the fitness evaluation, so better individuals have higher attraction and move more slowly; wolves in GWO update their positions according to the positions of wolves $\alpha$, $\beta$ and $\delta$, which hold the best solutions in the pack; and a new harmony vector in HS can replace an old harmony in the $HM$ only if the new harmony is better.
In addition to the aforementioned common characteristics of theoretical implementation, these common NIOAs have been extended into different versions to handle different problems, including combinatorial optimization problems (COPs) and multi-objective optimization problems (MOOPs). Similar variant methods are adopted to improve the optimization performance of NIOAs, for example, adaptive technology, fuzzy theory, chaos theory, quantum theory and hybridization technology. The classic articles on the above work are listed in Section 3.2, which provides a comprehensive summary of the 11 different NIOAs.
### 3.2. Variant Methods of Common NIOAs
The summaries of variant methods in other survey works [26–30] are fragmented; in this work, we systematically summarize the popular variants of the 11 common NIOAs, as described in Table 2. Because of the massive number of papers, the summarized literature consists of the state-of-the-art or representative papers, and the superscript is the citation count from the *Web of Science* and *Scopus* databases (as of 13 October 2020).
**Table 2.** The popular variants of the 11 original NIOAs. The Discrete and Continuous columns fall under the spatial property, and the last two columns under hybridization; bracketed numbers are references and superscripts are citation counts.

| NIOAs | Multiple Objectives | Adaptive | Discrete | Continuous | Fuzzy Theory | Chaos Theory | Combination among NIOAs | Others |
|-------|---------------------|----------|----------|------------|--------------|--------------|-------------------------|--------|
| GA | [37]^894, [38]^44334 | [39]^204 | [40]^142, [41]^173 | [42]^1405, [43]^831 | [44]^305 | [45]^195 | [46]^120, [47]^1373 | [43]^831, [48]^355 |
| PSO | [49]^800, [50]^67 | [51]^263, [52]^114 | [53]^32 | [54]^440 | [55]^296 | [52]^914 | [56]^252, [58]^11 | [57]^281, [50]^67 |
| ABC | [61]^334 | [62]^47 | [63]^235 | [64]^851 | [65]^42 | [66]^187 | [67]^122 | [68]^428 |
| BA | [69]^433 | [70]^204 | [71]^260 | [72]^285 | [73]^136 | [74]^27 | [75]^158 | [76]^64 |
| FA | [77]^81 | [78]^66 | [79]^43 | [80]^165 | [81]^142 | [82]^45 | [83]^140 | [84]^99 |
| IA | [86]^166 | [87]^97 | [88]^157 | [89]^141 | [90]^205 | [91]^17 | [92]^166 | [93]^230 |
| CS | [94]^192 | [95]^114 | [96]^142 | [97]^438 | [98]^41 | [99]^104 | [100]^77 | [101]^308 |
| DE | [102]^350 | [103]^198 | [104]^375 | [105]^219 | [1]^1925 | [106]^251 | [107]^86 | [108]^70 |
| GSA | [111]^135 | [112]^216 | [113]^133 | [114]^114 | [112]^5909 | [115]^154 | [116]^145 | [117]^253 |
| GWO | [119]^627 | [120]^60 | [121]^28 | [16]^6135 | [122]^225 | [123]^188 | [124]^29 | [125]^105 |
| HS | [126]^221 | [127]^186 | [128]^429 | [18]^6808 | [129]^38 | [130]^345 | [131]^194 | [132]^133 |
As described in Table 2, the following observations have been made:
(1) All of the 11 common NIOAs have versions that handle MOOPs and COPs, and all have been improved by adopting various adaptive strategies, for example, automatically tuning parameters.
(2) Some classic mathematical and physical theories have been used to enhance the performance of NIOAs, such as fuzzy theory [133], chaos theory [56] and quantum theory [48,132], which strengthen the exploration and exploitation abilities of NIOAs.
(3) Hybridization is another major method of improving the performance of NIOAs; it combines the advantages of multiple NIOAs, for instance, GA-IA [38], GA-PSO [47], PSO-ABC [134], PSO-BA [135], BA-DE [76], FA-DE [84], IA-DE [92], CS-GA [100], ABC-DE [108], DE-PSO [57,109], PSO-GSA [117] and PSO-EDAs [58]. In addition, some other methods are also hybridized to improve the performance of NIOAs, such as the Taguchi method [43,60], the Gradient algorithm [59] and the Nelder–Mead simplex method [68].
In general, the proposers maintain the view that a hybridization mechanism can make full use of the advantages of some NIOAs to overcome the disadvantages of other NIOAs. Some examples of this view are as follows: DE has strong global exploration and rapid convergence; PSO easily falls into local optima; ABC converges slowly and balances exploration and exploitation poorly; BA and FA perform well on low-dimensional problems but are not good at solving high-dimensional, nonlinear problems; GWO has poor population diversity and slow convergence; etc. Although many excellent hybrid NIOAs have been proposed (as mentioned above), we have to admit a confusing trend in the route of NIOA enhancement: in order to improve some NIOAs, the "proposers" always claim that the selected NIOAs have a series of shortcomings, such as easily falling into local optima, converging slowly and handling high-dimensional problems poorly, and that the proposed hybrid methods can remedy these shortcomings. It is very likely that, without being fully aware of the merits and demerits of the common NIOAs, a so-called "novel" algorithm is only a mixture of NIOAs. The performance of such hybrid NIOAs, including convergence speed, ability to solve high-dimensional problems and algorithm accuracy, needs to be verified with comprehensive experiments. Section 4 will give a systematic comparison and analysis of the performance of the 11 common NIOAs.
3.3. Differences
From our observation, three important aspects influence the performance of NIOAs: the method of parameter tuning, the learning strategy and the topological structure of the population. As the 11 original NIOAs adopt empirical parameters, we will not discuss the problem of parameter tuning in this paper; however, we will address the sensitivity of the algorithms to their parameter settings in Section 4.2. In this section, we discuss the differences in learning strategies, topological structures, time complexities and interactions of algorithmic components for the 11 NIOAs.
1. Learning strategies
Each NIOA has its own learning strategy. GA exchanges information among genes through selection, crossover and mutation operators, while IA updates antibodies according to the concept of activity degree. In GA, the selection operation is executed with the probability $\theta_1$, while in IA, selection is executed by the roulette method to choose antibodies with large activity degrees; the main differences between the GA and DE algorithms were described in Section 2.2.8. In PSO, $x_i(t)$ is updated by its local best optimum $p_i(t)$ and the global best optimum $p_g(t)$ of the whole population (see Equation (1)), and in BA, the updating of positions and velocities has some similarities to that of PSO. To some extent, BA can be regarded as a combination of the global optimum $p_g(t)$ in PSO with an extensive local search method, which is described by the sound intensity $A_i(t)$ and the pulse emissivity $r_i(t)$ (see Equations (7)–(9)). In FA, if the attraction factor $\gamma$ (see Equation (17)) tends to zero, it becomes a special case of PSO; according to Equations (1) and (29), GSA can also be regarded as a variant of PSO. The difference between them is that the velocity update of the former is influenced by the local optimum $p_i(t)$ and the global optimum $p_g(t)$, while that of the latter is affected by all the other individuals (through the total force). In CS, each cuckoo randomly updates its solution by Lévy flight (see Equation (18)), and new solutions are generated to replace the worst solutions through learning from other individuals.
The harmonies in HS interact with the best individuals (the solutions in the harmony memory) according to a certain probability, updating solutions based on memory consideration, pitch adjustment and randomization (see Equations (43) and (44)). Compared to the other nine NIOAs, GWO and ABC have hierarchical evolution mechanisms in which roles can change dynamically according to the quality of individuals’ solutions: wolves in GWO learn from the current top three best wolves $\alpha, \beta$ and $\delta$ (see Equation (42)), and onlookers in ABC learn from the selected employed foragers that found food sources (see Equation (4)).
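For concreteness, the PSO-style update that several of these strategies build on (Equation (1)) can be sketched in Python; the inertia weight `w` and the scalar (one-dimensional) positions are simplifying assumptions for illustration, not the paper's exact formulation:

```python
import random

def pso_update(x, v, p_i, p_g, c1=2.0, c2=2.0, w=0.7):
    """One PSO update for a single, one-dimensional particle.

    x, v : current position and velocity
    p_i  : the particle's own best position so far (local best optimum)
    p_g  : the best position of the whole population (global best optimum)
    w    : inertia weight (an assumption; some PSO formulations omit it)
    """
    r1, r2 = random.random(), random.random()
    v_new = w * v + c1 * r1 * (p_i - x) + c2 * r2 * (p_g - x)
    return x + v_new, v_new
```

Note that when `p_i == p_g == x`, the velocity simply decays by `w`; this is the stagnation behavior that makes a purely global-best-driven swarm prone to local optima.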
2. Topological structures
According to the scope of interaction among individuals, the topologies of the 11 NIOAs can be roughly grouped into two categories: global neighborhood topology (GNT) and local neighborhood topology (LNT). In this paper, we consider the neighborhood topology within each iteration of the NIOAs, which leads to the following observations and conclusions. GA is LNT, because it can only exchange information between two genes in each iteration; DE is also LNT for a similar reason; ABC can be regarded as LNT too, because onlookers only follow certain employed foragers, and scouts generate new solutions randomly and have no interaction with other bees; FA belongs to LNT because a firefly moves towards another firefly that has greater luminance; CS is LNT because individuals either generate new solutions independently (see Equation (18)) or produce new solutions and exchange information with other individuals (see Equation (20)); similar to GA, IA is also LNT because two antibodies exchange information through crossover operators. As each particle updates its position using $p_{g}(t)$, PSO is regarded as GNT; BA is classified as GNT for the same reason as PSO; GWO is also GNT because each wolf in GWO updates its position following the best three wolves; GSA belongs to GNT because each particle updates its position following the total force from all the other particles; HS updates solutions from the HM that stores the best harmony vectors of the whole population, so it is GNT. The topologies of the 11 NIOAs are illustrated in Figure 4, where each circle represents an individual; a solid line indicates that two individuals exchange information in the current iteration, and a dotted line indicates that two individuals may exchange information, in the sense of probability, during the whole evolution process.

**Figure 4.** Topologies of 11 compared NIOAs.
GNT and LNT have their own advantages and disadvantages. Generally speaking, all individuals in GNT are connected to each other and are attracted to the global best solution of the whole population; its merits include rapid convergence and strong exploitation ability, but it is more likely to become trapped in a local optimum. On the contrary, each individual in LNT connects only to several other individuals in its neighborhood and is attracted by the best position of that neighborhood. LNT can make individuals search diverse regions of the problem space and has strong exploration ability, but it may converge slowly.
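The practical difference can be sketched as the choice of guide in one iteration; the random-sample neighbourhood used for LNT below is purely illustrative, since each NIOA realises its local topology differently:

```python
import random

def pick_guide(fitness, i, topology="GNT", k=3):
    """Return the index of the individual that individual i learns from
    this iteration (minimisation).

    GNT: the single global best guides everyone -> fast convergence,
         but a higher risk of being trapped in a local optimum.
    LNT: only a small neighbourhood is visible -> slower convergence,
         but more diverse exploration.
    """
    if topology == "GNT":
        pool = range(len(fitness))  # whole population is visible
    else:
        # illustrative LNT: a random neighbourhood of k other individuals
        pool = random.sample([j for j in range(len(fitness)) if j != i], k)
    return min(pool, key=lambda j: fitness[j])
```

Under GNT every individual receives the same guide, so information spreads in one step; under LNT the guide varies per individual, which is what preserves diversity at the cost of speed.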
3. The interactions of algorithmic components
It is necessary to consider the contribution of each component in NIOAs, as the interactions of algorithmic components can reflect the core optimization power of the overall method [136]. According to the learning strategies and topological structures of NIOAs, the interactions of algorithmic components can be described as follows. GA, IA and DE exchange information through three components: selection, crossover and mutation operators. For GA, selection and crossover are the two main components, and information exchange happens between two genes in each iteration; mutation is executed with a low probability, which can increase the diversity of the population. Owing to its local topology, GA converges slowly. IA uses affinity to guide the searching process. For DE, mutation is the main operation; it generates new solutions by perturbing the difference vectors of several individuals. PSO updates the velocity of each particle using its historical and globally optimal solutions; its topology is a full connection, and information exchange is very fast, so it easily falls into local optima. ABC has three components: employed foragers, scouts and onlookers. Employed foragers learn from other randomly selected individuals to update their velocities, onlookers find solutions by sharing information with a specific employed forager and scouts generate new solutions randomly. The topological structure of ABC is LNT, and the roles of bees can change dynamically. BA has a mechanism of exchanging information similar to that of PSO; it updates the velocity using the global optimum, and its topological structure is GNT. FA updates its position by exchanging information with another firefly; it can be regarded as a special case of PSO, but it belongs to LNT. A particle in GSA updates its velocity according to its acceleration, which is derived from the total force exerted by all the particles; thus, GSA belongs to GNT. GWO has four types of grey wolves: alpha (α), beta (β), delta (δ) and omega (ω). GWO saves the top three best solutions (α, β, δ) obtained so far and obliges the other wolves to update their positions according to the positions of the best three wolves. HS exchanges information with the best solutions (stored in harmony memory) according to a certain probability; it belongs to GNT.
4. Time complexities analysis
The time complexity of the 11 NIOAs is described in Table 3 below, where D is the number of dimensions of the objective function, M is the number of individuals of each NIOA and N is the total number of iterations, as defined in Table 1. In order to calculate the time complexity of the individual operations in the NIOAs, we divide the operations into components and assign corresponding computational costs. Specifically, $T_{\text{init}}$ denotes the computational cost of initialization, $T_{\text{eval}}$ that of evaluating a single solution and $T_{\text{iter}}$ that of one iteration of the main loop of an NIOA, which is determined by the cost of updating solutions ($T_{\text{upd}}$), evaluating solutions ($T_e$) and calculating statistics ($T_{\text{stats}}$). For the 11 compared NIOAs, $T_{\text{init}}$, $T_{\text{eval}}$, $T_e$ and $T_{\text{stats}}$ have the same values, calculated as follows: $T_{\text{init}} = D \cdot M$, $T_{\text{eval}} = D$, $T_e = M \cdot T_{\text{eval}}$, $T_{\text{stats}} = M$. Thus, the computational cost of one iteration is $T_{\text{iter}} = T_{\text{upd}} + T_e + T_{\text{stats}}$, where $T_{\text{upd}}$ varies with the NIOA, and the total computational cost of an NIOA (for example, PSO) is $T_{\text{PSO}} = T_{\text{init}} + T_{\text{iter}} \cdot N$. From Table 3, we can see that the time complexity of PSO, GA, ABC, BA, CS, DE, GWO and HS is $O(D \cdot M \cdot N)$, that of GSA and IA is $O((D+M) \cdot M \cdot N)$ and that of FA is $O(D \cdot M^2 \cdot N)$. The actual running times on the benchmark functions (discussed in Section 4.2.3) agree with these estimates.
**Table 3.** The time complexity of the 11 NIOAs.
| NIOAs | Time Complexity | Comments |
|-------|-----------------|----------|
| PSO | $T_{\text{upd}} = T_{\text{vec}} + T_{\text{pos}} = D \cdot M + D \cdot M = 2 \cdot D \cdot M$; $O(T_{\text{PSO}}) = O(D \cdot M + (3 \cdot D \cdot M + M) \cdot N) \approx O(D \cdot M \cdot N)$ | $T_{\text{upd}}$ denotes the cost of updating velocity ($T_{\text{vec}}$) and position ($T_{\text{pos}}$) |
| GA | $T_{\text{upd}} = T_{\text{cross}} + T_{\text{mut}} = D \cdot M + D \cdot M = 2 \cdot D \cdot M$; $O(T_{\text{GA}}) = O(D \cdot M + (2 \cdot M \cdot D + M) \cdot N) \approx O(D \cdot M \cdot N)$ | $T_{\text{upd}}$ denotes the cost of crossover ($T_{\text{cross}}$) and mutation ($T_{\text{mut}}$) operations |
| ABC | $T_{\text{upd}} = T_{\text{emp}} + T_{\text{act}} + T_{\text{onk}} = D \cdot M / 2 + D \cdot M / 2 + M = D \cdot M + M$; $O(T_{\text{ABC}}) = O(D \cdot M + (2 \cdot M \cdot D + 2M) \cdot N) \approx O(D \cdot M \cdot N)$ | $T_{\text{upd}}$ denotes the cost of updating the positions of employed foragers ($T_{\text{emp}}$), scouts ($T_{\text{act}}$) and onlookers ($T_{\text{onk}}$) |
| BA | $T_{\text{upd}} = T_{\text{freq}} + T_{\text{vec}} + T_{\text{pos}} = D \cdot M + D \cdot M + D \cdot M = 3 \cdot D \cdot M$; $O(T_{\text{BA}}) = O(D \cdot M + (4 \cdot M \cdot D + M) \cdot N) \approx O(D \cdot M \cdot N)$ | $T_{\text{upd}}$ denotes the cost of updating the frequency ($T_{\text{freq}}$), velocity ($T_{\text{vec}}$) and positions ($T_{\text{pos}}$) |
| IA | $T_{\text{upd}} = T_{\text{den}} + T_{\text{act}} + T_{\text{cross}} + T_{\text{mut}} = M \cdot M + M \cdot D + D \cdot M + D \cdot M = M(M + 1) + 2 \cdot D \cdot M$; $O(T_{\text{IA}}) = O(D \cdot M + (M \cdot (M + 4) + 3 \cdot M \cdot D) \cdot N) \approx O(D \cdot M \cdot N + M \cdot M \cdot N) = O((D + M) \cdot M \cdot N)$ | $T_{\text{upd}}$ is the cost of updating the density ($T_{\text{den}}$), activity ($T_{\text{act}}$), crossover ($T_{\text{cross}}$) and mutation ($T_{\text{mut}}$) operations |
| FA | $T_{\text{upd}} = M \cdot M \cdot D$; $O(T_{\text{FA}}) = O(D \cdot M + (M \cdot M \cdot D + M \cdot D + M) \cdot N) \approx O(D \cdot M^2 \cdot N)$ | $T_{\text{upd}}$ is the cost of updating the positions of fireflies |
| CS | $T_{\text{upd}} = 2 \cdot M \cdot D$; $O(T_{CS}) = O(D \cdot M + (3 \cdot M \cdot D + M) \cdot N) \approx O(D \cdot M \cdot N)$ | $T_{\text{upd}}$ is the cost of updating the host nests of cuckoos |
| DE | $T_{\text{upd}} = M \cdot D + M \cdot D + M$; $O(T_{DE}) = O(D \cdot M + (4M \cdot D + 2 \cdot M) \cdot N) \approx O(D \cdot M \cdot N)$ | $T_{\text{upd}}$ is the cost of the crossover, mutation and selection operations |
| GSA | $T_{\text{upd}} = T_{\text{grav}} + T_{\text{vec}} + T_{\text{pos}} = M \cdot M + M \cdot D + M \cdot D$; $O(T_{GSA}) = O(D \cdot M + (M \cdot M + 3 \cdot M \cdot D + M) \cdot N) \approx O(D \cdot M \cdot N + M \cdot M \cdot N) = O((D + M) \cdot M \cdot N)$ | $T_{\text{upd}}$ is the cost of updating gravitational acceleration ($T_{\text{grav}}$), velocity ($T_{\text{vec}}$) and position ($T_{\text{pos}}$) |
| GWO | $T_{\text{upd}} = M \cdot D$; $O(T_{GWO}) = O(D \cdot M + (2M \cdot D + M) \cdot N) \approx O(D \cdot M \cdot N)$ | $T_{\text{upd}}$ denotes the cost of updating the positions of wolves |
| HS | $T_{\text{upd}} = M \cdot D$; $O(T_{HS}) = O(D \cdot M + (2M \cdot D + M) \cdot N) \approx O(D \cdot M \cdot N)$ | $T_{\text{upd}}$ is the cost of updating harmony vectors |
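The cost model behind Table 3 can be checked numerically; a sketch for PSO, using the decomposition $T_{\text{PSO}} = T_{\text{init}} + (T_{\text{upd}} + T_e + T_{\text{stats}}) \cdot N$ from the text:

```python
def pso_total_cost(D, M, N):
    """Operation count for PSO under the paper's cost model.

    T = T_init + (T_upd + T_e + T_stats) * N, which is Theta(D * M * N).
    """
    T_init = D * M        # initialise M individuals of dimension D
    T_upd = 2 * D * M     # velocity update + position update
    T_e = M * D           # evaluate M solutions, cost D each
    T_stats = M           # bookkeeping per individual
    return T_init + (T_upd + T_e + T_stats) * N
```

For the paper's low-dimensional setting ($D = 10$, $M = 50$, $N = 1500$) this gives about $2.3 \times 10^6$ elementary operations; doubling $D$ roughly doubles the count, consistent with $O(D \cdot M \cdot N)$.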
### 4. Performance Comparison and Analysis for the 11 NIOAs
#### 4.1. The Description of BBOB Test Functions
In order to evaluate the performance of the 11 NIOAs, 30 Black-Box Optimization Benchmarking (BBOB) functions are adopted as fitness functions, which were proposed at the 2017 IEEE Congress on Evolutionary Computation. The optimum values of the 30 BBOB functions (F1–F30) range from 100 to 3000 in steps of 100; they include unimodal, multimodal, hybrid and composition functions. For the sake of fairness, each algorithm is run 20 times independently, and the population size of every algorithm is 50; all the BBOB functions are tested in low dimension ($D = 10$) and high dimension ($D = 50$), and the numbers of iterations are 1500 and 15,000 when $D$ equals 10 and 50, respectively. In order to analyze the sensitivity of the compared NIOAs to their parameter settings, we apply two groups of parameters for each of the 11 NIOAs. The first group is consistent with the parameters of the 11 original algorithms. The second group differs from the first and is randomly selected according to the principles of the 11 NIOAs. For example, the mutation probability $\theta_3$ in GA should generally not be too large, and its value in the second group is set to 0.25. The parameters of the compared NIOAs are described in Table 4. For GA and IA in this work, we adopt roulette wheel selection for the selection operators, multi-point and one-point crossover for the crossover operators and the basic mutation method, which chooses one or more genes to change randomly, for the mutation operators.
**Table 4.** The parameters of the 11 NIOAs.
| Algorithms | Parameters I | Parameters II |
|------------|--------------|---------------|
| GA | $\theta_1 = 1$, $\theta_2 = 0.8$, $\theta_3 = 0.2$ | $\theta_1 = 1$, $\theta_2 = 0.75$, $\theta_3 = 0.25$ |
| PSO | $c_1 = 2$, $c_2 = 2$ | $c_1 = 1.5$, $c_2 = 1.5$ |
| ABC | $F = M/2$, $Limit = 20$ | $F = M/3$, $Limit = 30$ |
| BA | $\alpha = 0.9$, $\gamma = 0.9$, $A_{\text{max}} = 100$, $A_{\text{min}} = 1$, $f_{\text{max}} = 100$, $f_{\text{min}} = 1$ | $\alpha = 0.8$, $\gamma = 0.8$, $A_{\text{max}} = 150$, $A_{\text{min}} = 1$, $f_{\text{max}} = 150$, $f_{\text{min}} = 1$ |
| IA | $P_c = 0.8$, $P_m = 0.2$ | $P_c = 0.75$, $P_m = 0.25$ |
| FA | $\gamma = 0.6$, $step = 0.4$, $\beta_0 = 1$ | $\gamma = 0.5$, $step = 0.5$, $\beta_0 = 1.1$ |
| CS | $\alpha = 1$, $P_a = 0.25$ | $\alpha = 1.1$, $P_a = 0.15$ |
| DE | $F = 0.5$, $CR = 0.1$ | $F = 0.6$, $CR = 0.2$ |
| GSA | $G_0 = 100$, $\alpha = 20$ | $G_0 = 90$, $\alpha = 15$ |
| GWO | None | None |
| HS | $HMCR = 0.995$, $PAR = 0.4$, $BW = 1$ | $HMCR = 0.85$, $PAR = 0.5$, $BW = 0.9$ |
4.2. Performance Comparison and Analysis on Benchmark Functions
4.2.1. The Comparison and Analysis on the Accuracy, Stability and Parameter Sensitivity
From the best fitness values, we derive four criterion values, i.e., the best, the average, the worst and the standard deviation, to quantitatively indicate the effect of each algorithm on a given BBOB function under a certain dimension with a given group of parameters over 20 repeated experiments. They are denoted BEST, AVERAGE, WORST and STD. Tables S2–S7 of Supplementary Material B show the experimental results for the four criterion values on all 30 BBOB functions under the first group of parameters in the low dimension ($D = 10$). Tables S8–S13 of Supplementary Material C show the experimental results in the high dimension ($D = 50$). The values in bold represent the algorithm among the 11 NIOAs with the best result.
Similarly, the experimental results for the four criterion values under the second group of parameters are presented in Tables S14–S19 of Supplementary Material D for the low dimension ($D = 10$) and Tables S20–S25 of Supplementary Material E for the high dimension ($D = 50$). Again, the values in bold represent the algorithm with the best result.
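The four criterion values can be computed from the per-run best fitness values as follows; whether the paper uses the population or sample standard deviation is not stated, so `pstdev` here is an assumption:

```python
import statistics

def criterion_values(run_best):
    """BEST/AVERAGE/WORST/STD over the best fitness of repeated runs
    (minimisation; population standard deviation is an assumption)."""
    return {
        "BEST": min(run_best),
        "AVERAGE": statistics.mean(run_best),
        "WORST": max(run_best),
        "STD": statistics.pstdev(run_best),
    }
```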
Based on the above experimental results, we count the number of times that the 11 NIOAs achieve a “good” result on the 30 functions, as described in Table S26 of Supplementary Material J. In order to analyze the performance of the NIOAs objectively, a result for the BEST criterion is regarded as “good” if it is within 20% of the optimal solution; for the STD criterion, values less than 50 for F1–F10, less than 150 for F11–F20 and less than 300 for F21–F30 are considered “good” results. To compare the performance of the 11 NIOAs, the results of Tables S2–S13 of Supplementary Materials B and C are summarized in Table 5, and the results of Tables S14–S25 of Supplementary Materials D and E are summarized in Table 6. As described in Tables 5 and 6, the bold number is the number of wins for an NIOA on a specific criterion, and the corresponding winning functions are shown in the brackets that follow. In addition, as described in Tables S27–S30 of Supplementary Material K, we also give the mean error (AVERAGE ± STD) values of the 11 NIOAs on the 30 BBOB functions, where the values in bold represent the algorithm with the best result.
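The counting rule for a “good” result can be sketched as below; reading “within 20% of the optimal solution” as a relative band around the optimum is our interpretation of the text:

```python
def is_good(best, std, f_opt, f_index):
    """The text's 'good' rule: BEST within 20% of the optimum, and STD under
    a per-group cap (F1-F10: 50, F11-F20: 150, F21-F30: 300)."""
    cap = 50 if f_index <= 10 else 150 if f_index <= 20 else 300
    return abs(best - f_opt) <= 0.2 * abs(f_opt) and std <= cap
```

For example, with F1's optimum of 100, a run with BEST = 110 and STD = 40 counts as good, while BEST = 130 (30% off) or STD = 60 does not.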
In this section, we not only compare the accuracy and stability of the different NIOAs, but also analyze the sensitivity of each selected NIOA by setting two groups of parameters for the compared NIOAs. It should be noted that it is impossible to find a super NIOA that solves all optimization problems, and it is more meaningful to design an NIOA for a specific problem. However, the experimental results may help researchers understand the learning strategies and topologies of these NIOAs. In addition, the two groups of parameters chosen to analyze the sensitivity of the 11 NIOAs to their parameter settings can roughly reflect the robustness of the compared NIOAs.
1. Analysis of accuracy and stability
We compare the 11 algorithms in terms of accuracy and stability under the two groups of parameters; the rankings of accuracy and of stability of the compared NIOAs are roughly the same. Based on the experimental results of Tables 5 and 6, Table S26 of Supplementary Material J and Tables S27–S30 of Supplementary Material K, the following observations can be made:
(1) As described in Table S26, for accuracy in low dimensional space, all the NIOAs obtain good results on at least half of the BBOB functions, while in high dimensional space, the accuracy of all the NIOAs is much worse; only GSA, DE and CS obtain relatively good results. For stability in low dimensional space, all the NIOAs obtain good results on at least half of the BBOB functions, while in high dimensional space, the stability of all the NIOAs is much worse; only CS, DE, ABC and GSA obtain relatively good stability.
(2) As shown in Tables 5 and 6, DE and CS achieve better solutions and stability than the other nine NIOAs for both groups of parameters. In particular, for the BEST criterion under the two groups of parameters, DE achieves the best results among the 11 NIOAs, and DE and CS are the most stable algorithms under the different parameter settings. According to the mean error values of Tables S27–S30, DE and CS obtain clearly better results in low dimensional space than the other nine NIOAs, and in high dimensional space, CS, DE and GWO obtain better mean error values than the other eight NIOAs.
**Table 5.** The number of wins and corresponding functions of each criterion for 11 NIOAs on parameters I.
| NIOAs | Criteria | WORST | AVERAGE | BEST | STD |
|-------|----------|-------|---------|------|-----|
| DE | D = 10 | **10** (F5, F6, F8, F9, F10, F17, F20, F27, F29, F30) | **10** (F5, F6, F8, F9, F10, F17, F19, F20, F27, F30) | **13** (F6, F9, F11, F14, F15, F16, F17, F18, F19, F20, F21, F27, F30) | 7 (F5, F6, F8, F9, F23, F28, F30) |
| | D = 50 | **9** (F4, F6, F8, F16, F20, F25, F27, F28, F30) | **8** (F6, F9, F11, F20, F25, F27, F29, F30) | **8** (F6, F9, F11, F20, F22, F25, F27, F30) | 7 (F4, F6, F21, F25, F27, F28, F30) |
| CS | D = 10 | **17** (F2, F3, F4, F11, F12, F13, F14, F15, F16,F18, F19, F21, F23, F24, F25, F26, F28) | **16** (F2, F3, F4, F11, F12, F13, F14, F15, F16, F18, F21, F22, F24, F25, F26) | **7** (F2, F3, F4, F12, F13, F15, F16, F17, F18, F19, F20, F21, F27, F29) | **17** (F2, F3, F4, F10, F11, F12, F13, F14, F15, F16, F17, F18, F19, F20, F21, F27, F29) |
| | D = 50 | **8** (F3, F12, F14, F15, F18, F19, F24, F29) | **10** (F1, F3, F4, F12, F13, F14, F15, F18, F19, F28) | **8** (F3, F4, F12, F13, F14, F18, F19, F28) | **9** (F3, F10, F14, F15, F16, F18, F19, F22, F29) |
| HS | D = 10 | - | - | - | - |
| | D = 50 | **2** (F9, F23) | **2** (F8, F23) | **5** (F5, F8, F21, F23, F29) | **3** (F9, F23, F24) |
| GSA | D = 10 | **4** (F1, F7, F9, F22) | **3** (F1, F7, F9) | **3** (F1, F7, F9) | **5** (F1, F7, F9, F22, F25) |
| | D = 50 | **6** (F1, F2, F7, F11, F13, F26) | **3** (F2, F7, F26) | **4** (F1, F2, F7, F26) | **9** (F1, F2, F5, F7, F8, F11, F12, F13, F26) |
| GWO | D = 10 | - | **1** (F29) | **3** (F5, F8, F22) | **2** (F25, F26) |
| | D = 50 | **3** (F5, F17, F21) | **7** (F5, F10, F16, F17, F21, F22, F24) | **3** (F16, F17, F24) | - |
| ABC | D = 10 | - | **1** (F23) | **5** (F10, F23, F24, F28, F29) | - |
| | D = 50 | **2** (F10, F22) | - | - | **1** (F17) |
| PSO | D = 10 | - | - | **1** (F6) | - |
| | D = 50 | - | - | **2** (F10, F15) | - |
| FA | D = 10 | - | - | - | - |
| | D = 50 | - | - | - | **1** (F20) |
| BA | - | - | - | - | - |
| GA | - | - | - | - | - |
| IA | - | - | - | - | - |
**Table 6.** The number of wins and corresponding functions of each criterion for 11 NIOAs on parameters II.
| NIOAs | Criteria | WORST | AVERAGE | BEST | STD |
|-------|----------|-------|---------|------|-----|
| DE | D = 10 | 11 (F5, F6, F8, F10, F14, F15, F17, F18, F19, F20, F30) | 12 (F5, F6, F8, F15, F17, F18, F19, F20, F23, F27, F29, F30) | 14 (F5, F6, F8, F9, F14, F15, F16, F17, F18, F19, F20, F23, F27, F30) | 9 (F5, F6, F8, F15, F18, F19, F25, F28, F30) |
| | D = 50 | 7 (F4, F6, F20, F25, F26, F27, F28) | 4 (F6, F9, F25, F27) | 5 (F6, F9, F25, F27, F30) | 5 (F4, F6, F25, F27, F28) |
| CS | D = 10 | 16 (F1, F2, F3, F4, F11, F12, F13, F16, F21, F23, F24, F25, F26, F28, F29) | 14 (F2, F3, F4, F11, F12, F13, F14, F16, F21, F22, F24, F25, F26, F28) | 11 (F2, F3, F4, F11, F12, F13, F21, F22, F25, F26, F28) | 15 (F1, F2, F3, F4, F11, F12, F13, F14, F16, F17, F20, F21, F23, F27, F29) |
| | D = 50 | 12 (F1, F2, F11, F12, F13, F14, F15, F18, F19, F24, F29, F30) | 13 (F1, F2, F4, F11, F12, F13, F14, F15, F18, F19, F28, F29, F30) | 11 (F2, F4, F11, F12, F13, F14, F15, F18, F19, F28, F29) | 12 (F1, F2, F11, F12, F13, F14, F15, F17, F18, F19, F29, F30) |
| HS | D = 10 | - | - | - | 1 (F24) |
| | D = 50 | - | - | - | 7 (F5, F8, F16, F21, F23, F24, F26) |
| GSA | D = 10 | 3 (F7, F9, F22) | 3 (F1, F7, F9) | 3 (F1, F7, F9) | 3 (F7, F9, F22) |
| | D = 50 | 2 (F7, F10) | 3 (F7, F10, F26) | 4 (F1, F7, F10, F26) | - |
| GWO | D = 10 | - | 1 (F10) | 2 (F10, F29) | - |
| | D = 50 | 7 (F5, F8, F16, F17, F21, F22, F23) | 9 (F5, F8, F16, F17, F20, F21, F22, F23, F24) | 8 (F5, F8, F16, F17, F20, F21, F23, F24) | - |
| ABC | D = 10 | - | - | - | 1 (F26) |
| | D = 50 | - | - | - | - |
| PSO | D = 10 | - | - | 1 (F24) | - |
| | D = 50 | 1 (F3) | 1 (F3) | 1 (F3) | 3 (F3, F7) |
| FA | D = 10 | - | - | - | 1 (F10) |
| | D = 50 | - | - | - | 2 (F20, F22) |
| GA | D = 10 | - | - | - | - |
| | D = 50 | 1 (F9) | - | - | 2 (F9, F10) |
| BA | D = 10 | - | - | - | - |
| | D = 50 | - | - | - | - |
| IA | D = 10 | - | - | - | - |
2. Analysis of the parameter sensitivity on the 11 NIOAs
Undoubtedly, the optimization results of all the compared NIOAs are sensitive to their parameter settings, as described in Supplementary Materials B–E. In this work, in order to analyze the degree of sensitivity of the compared NIOAs, we consider an NIOA sensitive to its parameter settings if the two results on the same BBOB function under the two groups of parameters differ by more than one order of magnitude (one result is more than 10 times the other). The statistical results are shown in Table 7; if the experimental results of a function differ by two orders of magnitude, two stars are marked on the corresponding function, and so on. As described in Table 7, the following observations can be made:
**Table 7.** The sensitivity comparison of each criterion for 11 NIOAs under two groups of parameters.
| NIOAs | Criteria | WORST | AVERAGE | BEST | STD |
|-------|----------|-------|---------|------|-----|
| DE | D = 10 | - | - | - | - |
| | D = 50 | 7 (F1 **, F2, F9, F12, F13 ***, F15 **, F30 **) | 8 (F1 **, F2, F7, F10, F12, F13 **, F15, F30) | 7 (F2, F3, F7, F8, F10, F18, F22) | 11 (F1 **, F2, F3, F4, F9, F12, F13 **, F15, F18, F22, F30 **) |
| CS | D = 10 | - | - | - | - |
| | D = 50 | 2 (F1, F18) | 1 (F18) | 2 (F8, F12) | 2 (F14, F29) |
| HS | D = 10 | - | - | - | - |
| | D = 50 | 8 (F1 ***, F2, F4, F5, F9, F12 **, F13, F30) | 8 (F1 ***, F2 **, F4, F8, F12 **, F13, F19, F30) | 8 (F1 **, F2, F4, F8, F12 **, F13, F19, F30) | 10 (F1 ***, F2, F4, F9, F12 **, F13, F15, F18, F25, F30) |
| GSA | D = 10 | - | - | - | - |
| | D = 50 | 6 (F1, F2, F9, F12 **, F13, F14 **) | 3 (F12 ***, F14, F22) | 4 (F8, F14, F19, F22) | 7 (F2 **, F9, F12 ***, F13, F14 **, F18, F19) |
| GWO | D = 10 | - | - | - | - |
| | D = 50 | 4 (F1, F9, F18, F19) | 3 (F4, F7, F13) | 3 (F1, F2, F19) | 3 (F13, F19, F24) |
| ABC | D = 10 | 2 (F18, F30) | 2 (F18, F30) | 2 (F18, F30) | 2 (F18, F30) |
| | D = 50 | 6 (F1, F11, F12, F13, F18, F19) | 2 (F1, F15) | 1 (F19) | 4 (F12, F13, F14, F19) |
| PSO | D = 10 | 12 (F1 **, F2, F3 **, F4 **, F11, F12, F13 **, F14 **, F15, F18, F19, F30) | 14 (F1 **, F2 **, F3 **, F4 **, F11, F12 **, F13 **, F14 **, F15 **, F18, F19 **, F26, F28, F30 **) | 15 (F1 ***, F2 **, F3 **, F4 **, F9, F11, F12 **, F13 **, F14 **, F15 **, F18 **, F19 **, F26, F28, F30) | 12 (F1 ***, F2 **, F3 **, F4 **, F7, F9, F12, F13, F14, F15 **, F18, F19, F30) |
| FA | D = 10 | - | - | - | - |
| | D = 50 | 4 (F2, F14, F17, F29) | 4 (F12, F14, F18, F29) | 4 (F12, F14, F15, F17) | 4 (F1, F13, F14, F17) |
| BA | D = 10 | - | - | - | - |
| | D = 50 | 3 (F2, F11, F19) | 6 (F2, F3, F12, F14, F15, F26) | 2 (F3, F9) | 6 (F2, F4, F11, F13, F19, F28) |
| GA | D = 10 | 3 (F1, F18, F19) | 1 (F1) | 1 (F1) | 2 (F1, F18) |
| | D = 50 | 4 (F15, F19, F26, F30) | 6 (F1, F14, F15, F18, F19, F26) | 5 (F1 **, F13, F15, F18, F26) | 4 (F1, F11, F15, F30) |
| IA | D = 10 | - | - | - | - |
| | D = 50 | - | 1 (F14) | - | 2 (F1, F13) |
(1) DE, CS, HS, GSA, GWO, FA, BA and IA are sensitive to their parameter settings in high dimensional space and relatively insensitive in low dimensional space, which indicates that the parameters of these NIOAs should be selected carefully when using them on high dimensional problems. Specifically, DE and HS are the most sensitive to their parameter settings in high dimensional space.
(2) ABC, PSO and GA are sensitive to their parameter settings in both high and low dimensional spaces, and PSO is the most sensitive to its parameter settings.
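The order-of-magnitude rule behind Table 7 can be sketched as a star count; treating the star count as the floor of the log-ratio of the two results is our reading of the text:

```python
import math

def sensitivity_stars(r1, r2):
    """Orders of magnitude separating two results for the same function under
    the two parameter groups: 0 = not counted as sensitive, 1 = listed in
    Table 7, 2 = '**', 3 = '***', and so on."""
    hi, lo = max(abs(r1), abs(r2)), min(abs(r1), abs(r2))
    if lo == 0:
        return math.inf  # degenerate case: treat as maximally sensitive
    return max(0, int(math.log10(hi / lo)))
```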
According to the above observations, we draw the following preliminary conclusions: (1) The NIOAs that have an explicit learning strategy for solution updates can acquire better performance than the NIOAs that rely on large randomness (for example, probabilistic methods) to learn from other individuals. For example, in GA, the progeny individual replaces the parent individual with a certain probability, while in DE, an individual is only updated when the new individual is better than the old one; the cuckoos in CS update their positions once the new solution generated by Lévy flights is better than the old one, while an individual in BA is updated by the better one only under the probability constraint $rand_2 < A$, which means that it may not be updated by the better individual; IA executes crossover and mutation operations by choosing antibodies with large activity degrees, but the computation of activity has some uncertainties, for example, the design of the thresholds $h_1$ in Equation (12) and $h_2$ in Equation (13). It seems that algorithms that continuously and randomly generate new solutions and firmly learn from excellent individuals can achieve better performance.
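The contrast between greedy and probabilistic replacement described above can be sketched as follows; `probabilistic_replace` is a deliberately simplified stand-in for GA-style acceptance, not any specific GA variant:

```python
import random

def de_replace(old, trial, f):
    """DE's greedy selection: the trial vector survives only if it is
    no worse than the target (minimisation)."""
    return trial if f(trial) <= f(old) else old

def probabilistic_replace(old, child, p=0.8):
    """Schematic GA-style replacement: the child displaces the parent with a
    fixed probability, regardless of quality (an illustrative simplification)."""
    return child if random.random() < p else old
```

With `de_replace`, the population's quality never degrades, which matches the observation that firmly learning from better individuals pays off; probabilistic acceptance can discard an improvement or keep a regression.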
(2) As the number of dimensions increases, all the NIOAs become more sensitive to their parameter settings, which indicates that it is more difficult to choose a suitable set of parameters for NIOAs on high dimensional problems; the exceptions are DE, CS, GSA and GWO, which perform better in high dimensional space than the other seven NIOAs.
4.2.2. The Efficiency Comparison and Analysis
For the sake of error elimination, we compute the average value at each iteration over 20 independent experiments and obtain the change curves of the globally optimized fitness under 1500 and 15,000 iterations on the 30 functions for the low and high dimensions, respectively. For the first group of parameters, when $D$ equals 10, the convergence curves of the 11 NIOAs on the 30 BBOB functions are described in Figures S1–S30 of Supplementary Material F, and for the case of $D = 50$, the corresponding curves are described in Figures S31–S60 of Supplementary Material G. For the second group of parameters, the corresponding convergence curves are described in Figures S61–S90 of Supplementary Material H and Figures S91–S120 of Supplementary Material I, respectively. Based on these experimental results, the following observations can be made:
(1) FA and HS have the worst optimization efficiency on most of the 30 functions in both the low and high dimensions, because they either evolve solutions through completely random strategies, as HS does (see Equations (43) and (44)), or learn from other individuals in a local topology while being perturbed by a random factor, as FA does (see Equation (17)).
(2) As the number of iterations increases, the curves of most compared NIOAs tend to become stable, while those of ABC and DE oscillate for most of the 30 functions in both the low and high dimensions; the amplitude and frequency of the oscillations of ABC are greater than those of DE, and for both algorithms they are larger in the high dimension than in the low dimension. Over the whole iteration period, however, the optimization results of ABC and DE gradually improve.
(3) PSO, GSA and GWO have the fastest convergence speed on most of the 30 functions in both the low and high dimensions, because all of them adopt an explicit strategy of learning from the global best solution; that is, the individuals in these NIOAs learn firmly from the global optimum, which leads to rapid convergence.
4.2.3. The Comparison of Running Time
The running times of the 11 NIOAs on the 30 BBOB functions are summarized in Table S31 of Supplementary Material L. Based on these data, the following observations can be made:
(1) DE and CS are the fastest algorithms on all 30 functions for the dimensions of 10 and 50, respectively. FA is the slowest algorithm in both the low and high dimensional spaces; its running time on the 30 functions is 1–2 orders of magnitude higher than that of the other 10 NIOAs. GSA has the second-worst running time in both dimensional spaces. The slow running times of FA and GSA echo their time complexity given in Table 3 of Section 3.3.
(2) PSO, GA, ABC, BA, GWO and HS are fast when $D = 10$, while in the high dimensional space their running time is obviously longer than in the low dimensional space. Thus, from the viewpoint of running time, these algorithms are more suitable for low dimensional problems, whereas DE and CS are suitable for problems in both high and low dimensional spaces.
(3) For $D = 10$, the running time of FA is about 20 times that of DE on almost all the functions; when $D = 50$, FA is at least 20 times slower than CS (at most 35 times) on all the functions. The differences in running time among the NIOAs are therefore very large, and it is important to select fast NIOAs for optimization problems with strict running-time requirements.
4.3. Statistical Tests for Algorithm Comparison
In this study, we consider two statistical tests: the Friedman test [137] and the Nemenyi test [137]. The Friedman test is used to analyze the performance of the compared NIOAs. Table 8 provides the Friedman test statistic $F_F$ and the corresponding critical value for each evaluation criterion. As shown in Table 8, the null hypothesis (that all of the compared algorithms perform equivalently) was clearly rejected for each evaluation criterion at a significance level of $\alpha = 0.05$ for the experimental results in both the 10- and 50-dimensional spaces. Consequently, we proceed to conduct a post hoc test [137] in order to analyze the relative performance among the compared NIOAs.
**Table 8.** Summary of the Friedman statistics $F_F$ ($k = 11$, $N = 30$) and the critical value for each evaluation criterion ($k$: number of compared algorithms; $N$: number of data sets).
| Dimensions | NIOAs Parameters | Criteria | $F_F$ | Critical Value ($\alpha = 0.05$) |
|------------------|------------------|------------|-----------|---------------------------------|
| 10-dimensional space | Parameters I | WORST | 89.9707 | 1.8634 |
| | | BEST | 79.0949 | |
| | | AVERAGE | 94.9530 | |
| | | STD | 34.6416 | |
| | Parameters II | WORST | 80.1152 | |
| | | BEST | 78.3713 | |
| | | AVERAGE | 95.4905 | |
| | | STD | 27.9553 | |
| 50-dimensional space | Parameters I | WORST | 68.9997 | 1.8634 |
| | | BEST | 69.7277 | |
| | | AVERAGE | 71.4619 | |
| | | STD | 32.7366 | |
| | Parameters II | WORST | 61.3683 | |
| | | BEST | 92.1188 | |
| | | AVERAGE | 75.6435 | |
| | | STD | 14.9259 | |
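The $F_F$ statistics in Table 8 can be reproduced from the raw per-function results. A minimal sketch, assuming the Iman–Davenport variant of the Friedman statistic (which is compared against the $F$ distribution with $(k-1)$ and $(k-1)(N-1)$ degrees of freedom and matches the critical value of about 1.86 reported for $k = 11$, $N = 30$):

```python
import numpy as np
from scipy import stats

def friedman_iman_davenport(errors):
    """errors: (N, k) array -- N benchmark functions x k algorithms,
    lower values are better.  Returns (F_F, average ranks per algorithm)."""
    N, k = errors.shape
    # Rank the algorithms on each function (rank 1 = best, ties averaged).
    ranks = np.apply_along_axis(stats.rankdata, 1, errors)
    R = ranks.mean(axis=0)                         # average rank per algorithm
    chi2 = 12.0 * N / (k * (k + 1)) * (np.sum(R ** 2) - k * (k + 1) ** 2 / 4.0)
    F_F = (N - 1) * chi2 / (N * (k - 1) - chi2)    # Iman-Davenport correction
    return F_F, R

# The null hypothesis is rejected when F_F exceeds the critical value of the
# F distribution with (k - 1, (k - 1)(N - 1)) degrees of freedom:
crit = stats.f.ppf(0.95, 10, 10 * 29)              # k = 11, N = 30
```

Each $F_F$ value in Table 8 comes from one such $(30 \times 11)$ matrix of results (one matrix per criterion, dimension, and parameter group).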
The Nemenyi test [137] is used to test whether each NIOA performed competitively against the other compared NIOAs in both the 10- and 50-dimensional spaces. In this test, two NIOAs are considered to have a significant difference in performance if their average ranks differ by at least the critical difference value given by
$$CD = q_{\alpha} \sqrt{\frac{k(k+1)}{6N}}.$$
Here, $q_{\alpha}$ is equal to 3.219 at the significance level $\alpha = 0.05$, and thus CD takes the value of 2.7563 ($k = 11$, $N = 30$). Figures 5 and 6 show the CD diagrams for each of the four evaluation criteria for the experimental results in the 10-dimensional space under the two groups of parameters. As CS obtains the best average rank on the 30 functions, CS is taken as the control algorithm. Any compared NIOA whose average rank is within one CD of that of CS is connected to CS with a red line, as shown in Figures 5 and 6; algorithms that are unconnected to CS are considered to have a significantly different performance from it. In Figure 5a (WORST), for example, the average rank of CS is 1.6333, and adding the CD gives a critical value of 4.3896. Since CSA, BA, GA, HS, PSO, FA and IA obtained average ranks of 5.6333, 5.8333, 6.9, 7.2667, 8.7, 9.2 and 10.5667, respectively, they are significantly worse than CS. From Figures 5 and 6, we can see that CS and DE obtain the best average ranks on all four criteria, followed by ABC and GWO. CS and DE perform obviously better than the other NIOAs; in other words, CS and DE obtain the best solutions and the best stability in the low dimensional space.
**Figure 5.** Comparison of DE (control algorithm) against other compared algorithms using the Nemenyi test for the experimental results in 10-dimensional space under parameters I.
Figure 7 shows the CD diagrams for the four evaluation criteria in the 50-dimensional space under the first group of parameters, and Figure 8 shows them under the second group. CS and DE still perform well in the high-dimensional space. In particular, under the first group of parameters, DE obtains the best average rank on the WORST, BEST and AVERAGE criteria and the second-best average rank on the STD criterion, whereas CS ranks first on the STD criterion. For the second group of parameters, CS ranks first on all four criteria, while DE ranks second on the WORST and STD criteria.
**Figure 7.** Comparison of DE (control algorithm) against other compared algorithms using the Nemenyi test for the experimental results in 50-dimensional space under parameters I.

**Figure 8.** Comparison of DE (control algorithm) against other compared algorithms using the Nemenyi test for the experimental results in 50-dimensional space under parameters II.
4.4. Performance Comparison on Engineering Optimization Problem
In order to further compare the performance of the 11 NIOAs, we apply them to a constrained engineering optimization problem, namely the tension/compression spring design. A spring is a general mechanical part that can produce a large elastic deformation under load. The weight of a spring (such as the valve spring of an internal combustion engine cylinder or the springs of various buffers) has a great influence on the normal operation of the relevant mechanical equipment. The design, shown in Figure 9, aims to minimize the weight of a tension/compression spring [138]. The constraints include the minimum deflection, shear stress, surge frequency and a limit on the outer diameter.
There are three design variables: the wire diameter $x_1$, the mean coil diameter $x_2$ and the number of active coils $x_3$, which together define the following constrained problem:

$$\min F(X) = (x_3 + 2)x_2 x_1^2$$

s.t. $g_1(X) = 1 - \frac{x_2^3 x_3}{71,785 x_1^4} \leq 0,$

$$g_2(X) = \frac{4x_2^2 - x_1 x_2}{12,566(x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \leq 0,$$

$$g_3(X) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \leq 0,$$

$$g_4(X) = \frac{x_1 + x_2}{1.5} - 1 \leq 0,$$

where $0.05 \leq x_1 \leq 2$, $0.25 \leq x_2 \leq 1.5$, $2 \leq x_3 \leq 15$.
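For reference, the objective and constraints can be written directly as code. This sketch assumes the standard form of the tension/compression spring benchmark ($x_1$ the wire diameter, $x_2$ the mean coil diameter, $x_3$ the number of active coils) and evaluates a near-optimal design commonly reported in the literature:

```python
def spring_weight(x):
    """Objective F(X) = (x3 + 2) * x2 * x1^2 (proportional to spring weight)."""
    x1, x2, x3 = x
    return (x3 + 2) * x2 * x1 ** 2

def spring_constraints(x):
    """The four inequality constraints g_i(X) <= 0 of the spring benchmark."""
    x1, x2, x3 = x
    g1 = 1 - x2 ** 3 * x3 / (71785 * x1 ** 4)                  # min deflection
    g2 = ((4 * x2 ** 2 - x1 * x2) / (12566 * (x2 * x1 ** 3 - x1 ** 4))
          + 1 / (5108 * x1 ** 2) - 1)                          # shear stress
    g3 = 1 - 140.45 * x1 / (x2 ** 2 * x3)                      # surge frequency
    g4 = (x1 + x2) / 1.5 - 1                                   # outer diameter
    return [g1, g2, g3, g4]
```

At the frequently cited design $x \approx (0.051689, 0.356718, 11.288965)$, the objective evaluates to roughly 0.012665 (matching the BEST values in Table 9) and all four constraints are satisfied to within rounding.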
Not only the objective function but also the constraints must be considered when solving such constrained optimization problems. The penalty function method is one of the most commonly used constraint-handling techniques; it transforms the constrained optimization problem into an unconstrained one by adding a penalty term to the original objective function. In this study, we adopt the dynamic penalty function [139], defined as follows:
$$F(X) = f(X) + (C * t)^{\alpha} \sum_{i=1}^{m} G_i^{\beta}(X)$$
$$G_i(X) = \max(0, g_i(X))$$
Here, $t$ is the current iteration number and $C, \alpha, \beta$ are three parameters, generally set to $C = 1$, $\alpha = 1$, $\beta = 2$. We run each compared NIOA 20 times independently with 1000 iterations per run. Table 9 gives the experimental results of the 11 compared NIOAs on the spring design problem. We can observe that all 11 NIOAs have very close BEST values (between 0.0126 and 0.0132). The CS algorithm ranks first on all four criteria: WORST, AVERAGE, BEST and STD. For the WORST criterion, CS, FA, DE, CSA and GWO achieve good results. The results of this engineering optimization problem indicate that all 11 NIOAs obtain good results, and that CS, FA, DE, CSA and GWO are better and more stable than the other six NIOAs.
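The dynamic penalty transformation above is straightforward to implement; a minimal sketch with the stated defaults $C = 1$, $\alpha = 1$, $\beta = 2$:

```python
def dynamic_penalty(f_val, g_vals, t, C=1.0, alpha=1.0, beta=2.0):
    """Penalized fitness F(X) = f(X) + (C*t)^alpha * sum_i max(0, g_i(X))^beta.
    f_val: raw objective value; g_vals: constraint values g_i(X) (feasible
    when g_i <= 0); t: current iteration number."""
    violation = sum(max(0.0, g) ** beta for g in g_vals)
    return f_val + (C * t) ** alpha * violation
```

Because the factor $(C \cdot t)^{\alpha}$ grows with the iteration counter $t$, infeasible solutions are tolerated early in the search and penalized increasingly heavily as the run progresses, while feasible solutions ($g_i \leq 0$ for all $i$) are never penalized.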
Table 9. Experimental results of 11 NIOAs on spring design problem.
| Algorithm | WORST | AVERAGE | BEST | STD |
|-----------|---------|---------|---------|---------|
| GA | 0.029080| 0.016709| 0.012691| 0.004163|
| PSO | 0.030457| 0.015028| 0.012746| 0.005420|
| ABC | 0.016446| 0.014202| 0.012827| 0.001862|
| BA | 0.043477| 0.023193| 0.013137| 0.011202|
| IA | 0.031477| 0.021735| 0.013134| 0.006729|
| FA | 0.012880| 0.012733| 0.012718| $3.48 \times 10^{-5}$ |
| CS | **0.012670** | **0.012666** | **0.012665** | **$1.27 \times 10^{-6}$** |
| DE | 0.013397| 0.013007| 0.012755| 0.000201|
| CSA | 0.013073| 0.012953| 0.012740| $9.06 \times 10^{-5}$ |
| GWO | 0.012821| 0.012715| 0.012672| $3.00 \times 10^{-5}$ |
| HS | 0.032620| 0.020375| 0.012877| 0.006328|
5. Challenges and Future Directions
How to improve the performance of NIOAs is a complex problem, influenced jointly by the methods of parameter tuning, topology structure and learning strategy. In this study, we draw some preliminary conclusions in order to provide a reference framework for the selection and improvement of NIOAs. In the past 30 years, a large number of meta-heuristic algorithms (more than 120 by our count) and their variants have been proposed to provide efficient and effective solutions to optimization problems in the field of AI. Although great progress has been made on NIOAs, which have been widely and successfully applied in various fields, challenging problems still exist, mainly in the following four aspects.
1. The first is the lack of sufficient research on the fundamental theories and tools of NIOAs. From our observation, the challenges of fundamental research on NIOAs include the following four points.
(1) The biological or natural mechanisms imitated by the NIOAs are not yet fully clear. Most NIOAs are proposed by experts in psychology or in computer science and engineering, and close collaboration with biologists is extremely important in order to deeply understand and abstract such mechanisms and functions, so that NIOAs can be reasonably and effectively grounded in nature, biology and the real environment.
(2) It is also necessary to lay a solid foundation of mathematical theory to support NIOAs. Examples include rigorous time complexity analysis and convergence proofs, a deep analysis of the topological structures of various NIOAs, a suitable and comprehensive theoretical explanation for balancing the contradiction between easily falling into local optima and slow convergence, and an in-depth analytic study of methods for automatic parameter tuning in order to solve the parameter-dependence problem. Specifically, the classic fundamental works [140–142] have achieved some results in time complexity analysis and convergence proofs, and their authors list future research directions, which we summarize as follows. For topology analysis, it has been indicated that for some specific algorithms a local neighborhood topology is more suitable for complex problems [143], and an investigation into the PSO paradigm found that the effect of the population topology interacts with the function being optimized [144]. Although these previous efforts have recommended population topologies, they still have not precisely identified the topological factors that lead to the best performance on a range of functions [144]. An automatic parameter-tuning process is usually computationally expensive, especially for real-world application problems; therefore, it is desirable to have a benchmark test suite for NIOA tuning that is easy to use [145]. Due to the lack of a solid mathematical foundation, almost all NIOAs work in black-box mode; thus, researchers keep proposing so-called "novel" algorithms and declaring that their optimizers find better solutions than other NIOAs [136].
(3) Research is insufficient on extending basic continuous NIOAs to different optimization problems, including COPs and MOOPs. The present study of the different learning strategies and topological structures of more than 120 MHAs can provide diverse solutions to COPs and MOOPs. In fact, the current research on the mathematical theories mentioned in (2) and on problem extensions mainly focuses on a few NIOAs, including GA, PSO and DE, so further research on more NIOAs is required.
(4) Another problem is the lack of visualization platforms for NIOA research. From our observation, there are few discussions of this aspect apart from an early simple attempt [146]. In addition, few benchmark tests suit specific optimization problems such as automatic parameter tuning [145]. Owing to the insufficient theoretical investigation of NIOAs, it is quite difficult to clearly distinguish the characteristics of different NIOAs (most of the algorithms look very similar), and this, per se, becomes another optimization problem: the optimal selection of an NIOA for a given problem. This is also a motivation for our attempt to compare and analyze 11 common NIOAs theoretically and experimentally.
2. The second is that NIOAs are less capable of solving continuous optimization problems in complex environments. Real environments are complicated: the optimization problems can be high-dimensional, large-scale, multi-modal and multi-objective; the optimization environments can be dynamic, highly constrained and uncertain; and the fitness evaluations may be noisy, imprecise and time-consuming, while the fitness functions themselves can be non-deterministic. The complexity of real environments poses a great challenge to NIOAs. Although some efforts [147–149] have been made to address these problems, handling them remains very difficult.
3. The third is that there are too few combinations of NIOAs with other related disciplines. NIOAs intrinsically have a parallel and distributed architecture, yet little attention has been paid to combinations with parallel and distributed technologies, including GPU-based hardware, robot swarms and cloud platforms. A few works [150–152] focus on these issues, and such interdisciplinary research holds great potential for NIOAs.
4. The fourth is that little effort has been made to apply NIOAs to various problem domains in a targeted way. On the one hand, it is impossible for one single NIOA to adapt to all application problems; on the other hand, a certain kind of NIOA may be more effective for certain kinds of problems [134]. Existing enhanced NIOA methods (for example, combinations of different NIOAs) lack an in-depth and targeted discussion of why the enhancements are adopted. Furthermore, various NIOAs have been adopted to handle the same application problem, but it is often unclear why a particular NIOA was chosen (researchers just happened to use it).
Consequently, it is our belief that researchers should tackle the following four problems in future NIOA research; they indicate four research directions for the NIOA field.
1. Strengthening the theoretical analysis of NIOAs. Some theoretical problems of NIOAs have been studied only for specific NIOAs (for example, PSO), such as time complexity analysis, convergence proofs, topology analysis, automatic parameter tuning, and the mechanisms of the exploitation and exploration processes. Many problems remain unsolved in the existing work [140–142], and the theoretical analysis of other NIOAs needs to be deepened. COPs and MOOPs should be further studied by extending and combining the various existing NIOAs. Furthermore, it is necessary to develop a visualization platform for deconstructing, modeling and simulating the interactions of components in NIOAs, to make it convenient to study the mechanisms of self-organization, direct/indirect communication and the processes of intelligent emergence in various swarm systems and application cases. It is also necessary to establish a benchmark test suite and an easy-to-use algorithm toolbox for different problems, for example, automatic parameter tuning and the aforementioned problems in complex environments.
2. Designing novel NIOAs to solve complicated optimization problems. Many real-world optimization problems are very complex, such as multi-modal and multi-objective problems, constrained or uncertain problems, large-scale optimization problems, and optimization problems with noisy, imprecise or time-varying fitness evaluations. Designing more targeted and effective NIOAs for these problems is another important direction.
3. Deep fusion with other related disciplines. To improve the performance of current NIOAs, it is indispensable to combine them with related disciplines and directions, such as distributed and parallel computing, machine learning, quantum computation and robot engineering. More concretely, because NIOAs naturally possess the characteristics of distributed parallelism, it is easier and more natural to implement them in distributed and parallel environments, such as cloud platforms and GPU-based hardware. Furthermore, for some large-scale optimization problems, a robot swarm can be a good solution that combines NIOAs and robot engineering. With support from machine learning methods, NIOAs can handle multi-modal multi-objective optimization problems efficiently; the other way around, NIOAs can provide optimization support for machine learning tasks, such as clustering and association rule mining.
4. Combination with specific applications. It is necessary to design customized NIOAs for specific application problems; the topological structure, learning strategy and parameter-selection method of a customized NIOA can be tailored to a specific problem in order to achieve good convergence speed and optimization performance. Existing applications rarely involve targeted design of NIOAs; most of them use NIOAs directly or cannot explain the rationale for the algorithm design with respect to the specific problem.
6. Conclusions
Nature-Inspired Optimization Algorithms (NIOAs) can provide satisfactory solutions to NP-hard problems, which are difficult and sometimes even impossible for traditional optimization methods to handle. Thus, NIOAs have been widely applied in various fields both theoretically and in practice; examples include function optimization problems (convex, concave, high or low dimension, single peak or multiple peaks), combinatorial optimization problems (the traveling salesman problem (TSP), knapsack problem, bin-packing problem, layout-optimization problem, graph-partitioning problem and production-scheduling problem), automatic control problems (control system optimization, robot structure and trajectory planning), image-processing problems (image recognition, restoration and edge-feature extraction) and data-mining problems (feature selection, classification, association rule mining and clustering).
Many NIOAs and their variants have been proposed in the last 30 years. However, for specific optimization problems, researchers tend to choose NIOAs based on their narrow experience or biased knowledge, because there has been no overall and systematic comparison and analysis of these NIOAs. This study aims to bridge this gap; the contributions of this paper are fourfold. First, we summarize a uniform formal description of NIOAs and analyze the similarities and differences among the 11 common NIOAs. Second, we comprehensively compare the performance of the 11 NIOAs, which reflects the essential characteristics of each algorithm. Third, we present a relatively comprehensive list of all the NIOAs proposed so far, the first attempt to systematically summarize existing NIOAs. Fourth, we comprehensively discuss the challenges and future directions of the whole NIOA field, which can provide a reference for further research on NIOAs. We are not aiming to find a super algorithm that can solve all problems in different fields once and for all (an impossible task). Instead, we propose a useful reference to help researchers choose suitable algorithms for different application scenarios in order to take good advantage of, and make full use of, the different NIOAs. We believe that with this survey, more novel problem-oriented NIOAs will emerge in the future, and we hope that this work can serve as a good reference and handbook for NIOA innovation and applications.
Undoubtedly, it is necessary and meaningful to make a comprehensive comparison of the common NIOAs, and we believe that more efforts are required to further this review in the future. First, the state-of-the-art variants of the 11 common NIOAs will
be compared and analyzed comprehensively, covering their convergence, topological structures, learning strategies, methods of parameter tuning and application fields. Second, there are more than 120 MHAs with various topological structures and learning strategies; for example, the recently proposed chicken swarm optimization (CSO) and spider monkey optimization (SMO) algorithms have hierarchical topological structures and grouping/regrouping learning strategies. Thus, a comprehensive analysis of the various topological structures and learning strategies of NIOAs is another direction for future work.
**Supplementary Materials:** The supplementary figures and tables are available online at [https://www.mdpi.com/article/10.3390/e23070874/s1](https://www.mdpi.com/article/10.3390/e23070874/s1).
**Author Contributions:** Conceptualization, Z.W. and C.Q.; methodology, Z.W.; software, C.Q.; formal analysis, Z.W.; investigation, Z.W., B.W. and C.Q.; data curation, B.W. and C.Q.; writing—original draft preparation, Z.W.; writing—review and editing, W.W.S.; visualization, Z.W. and W.W.S.; supervision, B.W. and W.W.S.; project administration, Z.W.; funding acquisition, Z.W. and B.W. All authors have read and agreed to the published version of the manuscript.
**Funding:** This research was supported in part by the National Science Foundation of China (Grant nos. 71972177 and 61363075), the National High Technology Research and Development Program (“863” Program) of China (Grant no 2012AA12A308), the Yue Qi Young Scholars Program of China University of Mining and Technology, Beijing (Grant no. 800015Z1117), the Department of Science and Technology of Jiangxi Province of China (Grant nos. 20161BGC70078 and KJLD13031) and the Department of Education of Jiangxi Province of China (Grant no. GJJ180270).
**Institutional Review Board Statement:** Not applicable.
**Informed Consent Statement:** Not applicable.
**Data Availability Statement:** All the data are generated by the 30 BBOB functions and 11 NIOAs; the results are included within the manuscript and Supplementary Materials.
**Acknowledgments:** The authors are grateful for the anonymous reviewers who have made many constructive comments.
**Conflicts of Interest:** The authors declare no conflict of interest.
**References**
1. Fister, I., Jr.; Yang, X.S.; Brest, J.; Fister, D. A Brief Review of Nature-Inspired Algorithms for Optimization. *Elektrotehniški Vestn.* **2013**, *80*, 116–122.
2. Holland, J.H. *Adaptation in Natural and Artificial Systems*; University of Michigan Press: Ann Arbor, MI, USA, 1975.
3. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
4. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Space. *J. Glob. Opt.* **1997**, *11*, 341–359. [CrossRef]
5. Dervis, K.; Bahriye, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. *J. Glob. Optim.* **2007**, *39*, 459–471.
6. Colorni, A.; Dorigo, M.; Maniezzo, V. Distributed optimization by ant colonies. In Proceedings of the 1st European Conference on Artificial Life, York, UK, 11–13 November 1991; pp. 134–142.
7. Yang, X.S.; Deb, S. Cuckoo Search via Levy Flights. In Proceedings of the 2009 World Congress on Nature and Biologically Inspired Computing, Coimbatore, India, 9–11 December 2009; pp. 210–214.
8. Yang, X.S.; Gandomi, A.H. Bat algorithm: A novel approach for global engineering optimization. *Eng. Comput.* **2012**, *29*, 464–483. [CrossRef]
9. Yang, X.S. *Nature-Inspired Metaheuristic Algorithms*; Luniver Press: Beckington, UK, 2008.
10. Bersini, H.; Varela, F.J. The Immune Recruitment Mechanism: A Selective Evolutionary Strategy. In Proceedings of the International Conference on Genetic Algorithms, San Diego, CA, USA, 13–16 July 1991; pp. 520–526.
11. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. *Adv. Eng. Softw.* **2014**, *69*, 46–61. [CrossRef]
12. Esmat, R.; Hossein, N.P.; Saeid, S. GSA: A Gravitational Search Algorithm. *Inform. Sci.* **2009**, *179*, 2232–2248.
13. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. *Simulation* **2001**, *76*, 60–68. [CrossRef]
14. Lim, T.Y. Structured population genetic algorithms: A literature survey. *Artif. Intell. Rev.* **2014**, *41*, 385–399. [CrossRef]
15. Rezaee Jordehi, A. Particle swarm optimisation for dynamic optimisation problems: A review. *Neural Comput. Appl.* **2014**, *25*, 1507–1516. [CrossRef]
16. Dervis, K.; Beyza, G.; Celal, O.; Nurhan, K. A comprehensive survey: Artificial bee colony (ABC) algorithm and applications. *Artif. Intell. Rev.* **2014**, *42*, 21–57.
17. Chawla, M.; Dutta, M. Bat Algorithm: A Survey of the State-Of-The-Art. *Appl. Artif. Intell.* **2015**, *29*, 617–634. [CrossRef]
18. Dasgupta, D.; Yu, S.H.; Nino, F. Recent Advances in Artificial Immune Systems: Models and Applications. *Appl. Soft Comput.* **2011**, *11*, 1574–1587. [CrossRef]
19. Fister, I., Jr.; Yang, X.S.; Brest, J. A comprehensive review of firefly algorithms. *Swarm Evol. Comput.* **2013**, *13*, 34–46. [CrossRef]
20. Mohamad, A.B.; Zain, A.M.; Bazir, N.E.N. Cuckoo search algorithm for optimization problems—A literature review and its applications. *Appl. Artif. Intell.* **2014**, *28*, 419–448. [CrossRef]
21. Swagatam, D.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. *IEEE Trans. Evol. Comput.* **2011**, *15*, 4–31.
22. Esmat, R.; Elahae, R.; Hosseini, N.P. A comprehensive survey on gravitational search algorithm. *Swarm Evol. Comput.* **2018**, *41*, 141–158.
23. Dorigo, M.; Blum, C. Ant colony optimization theory: A survey. *Theor. Comput. Sci.* **2005**, *344*, 243–278. [CrossRef]
24. Hatta, N.M.; Zain, A.M.; Sulhehuddin, R.; Shayfull, Z.; Yusuff, Y. Recent studies on optimisation method of Grey Wolf Optimiser (GWO): A review (2014–2017). *Artif. Intell. Rev.* **2019**, *52*, 2651–2683. [CrossRef]
25. Alia, O.M.; Mandava, R. The variants of the harmony search algorithm: An overview. *Artif. Intell. Rev.* **2011**, *36*, 49–68. [CrossRef]
26. Chakraborty, A.; Kar, A.K. Swarm Intelligence: A Review of Algorithms. In *Nature-Inspired Computing and Optimization*; Springer: Berlin/Heidelberg, Germany, 2017; pp. 475–494.
27. Ab Wahab, M.N.; Nefti-Meziani, S.; Atiyab, A. A Comprehensive Review of Swarm Optimization Algorithms. *PLoS ONE* **2015**, *10*, e0122827. [CrossRef]
28. Kar, A.K. Bio inspired computing—A review of algorithms and scope of applications. *Expert Syst. Appl.* **2016**, *59*, 20–32. [CrossRef]
29. Chu, S.C.; Huang, H.C.; Roddick, J.F. Overview of Algorithms for Swarm Intelligence. In Proceedings of the 3rd International Conference on Computational Collective Intelligence, Gdynia, Poland, 21–23 September 2011; pp. 28–41.
30. Parpinelli, R.S. New inspirations in swarm intelligence: A survey. *Int. J. Bio-Inspir. Comput.* **2011**, *3*, 1–16. [CrossRef]
31. Monismith, D.R.; Mayfield, B.E. Slime Mold as a Model for Numerical Optimization. In Proceedings of the 2008 IEEE Swarm Intelligence Symposium, St. Louis, MO, USA, 21–23 September 2008.
32. Havens, T.C.; Spain, C.J.; Salmon, N.G.; Keller, J.M. Roach Infestation Optimization. In Proceedings of the 2008 IEEE Swarm Intelligence Symposium, St. Louis, MO, USA, 21–23 September 2008.
33. Abbass, H.A. MBO: Marriage in Honey Bees Optimization A Haplometrosis Polygynous Swarming Approach. In Proceedings of the 2001 IEEE Congress on Evolutionary Computation, Seoul, Korea, 27–30 May 2001; pp. 207–214.
34. Burnet, F.M. *The Clonal Selection Theory of Acquired Immunity*; Cambridge Univ. Press: Cambridge, UK, 1959.
35. Xiao, R.B.; Wang, L. Artificial Immune System Principle, Models, Analysis and Perspectives. *Chin. J. Comput.* **2002**, *25*, 1281–1292.
36. Reynolds, C. Flocks, herds, and schools: A distributed behavioral model. *Comput. Graph.* **1987**, *21*, 25–34. [CrossRef]
37. Konak, A.; Coit, D.W.; Smith, A.E. Multi-objective optimization using genetic algorithms: A tutorial. *Reliab. Eng. Syst. Safe.* **2006**, *91*, 992–1007. [CrossRef]
38. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. *IEEE Trans. Evol. Comput.* **2002**, *6*, 182–197. [CrossRef]
39. Deb, K.; Beyer, H.G. Self-adaptive genetic algorithms with simulated binary crossover. *Evol. Comput.* **2001**, *9*, 197–221. [CrossRef] [PubMed]
40. Payne, A.W.R.; Glen, R.C. Molecular Recognition Using A Binary Genetic Search Algorithm. *J. Mol. Graph. Model.* **1993**, *11*, 74–91. [CrossRef]
41. Stanimirović, Z.; Kratica, J.; Đugošija, D. Genetic algorithms for solving the discrete ordered median problem. *Eur. J. Oper. Res.* **2007**, *182*, 983–1001. [CrossRef]
42. Leung, X.W.; Wang, Y.P. An Orthogonal Genetic Algorithm with Quantization for Global Numerical Optimization. *IEEE Trans. Evol. Comput.* **2001**, *5*, 41–53. [CrossRef]
43. Tsai, J.T.; Liu, T.K.; Chou, J.H. Hybrid Taguchi-genetic algorithm for global numerical optimization. *IEEE Trans. Evol. Comput.* **2004**, *8*, 365–377. [CrossRef]
44. Sarma, K.C.; Adeli, H. Fuzzy genetic algorithm for optimization of steel structures. *J. Struct. Eng.* **2000**, *126*, 596–604. [CrossRef]
45. Yuan, X.H.; Yuan, Y.B.; Zhang, Y.C. A hybrid chaotic genetic algorithm for short-term hydro system scheduling. *Math. Comput. Simul.* **2002**, *59*, 319–327. [CrossRef]
46. Jiao, L.C.; Wang, L. A Novel Genetic Algorithm Based on Immunity. *IEEE Trans. Syst. Man Cybern.* **2000**, *30*, 552–561. [CrossRef]
47. Juang, C.F. A Hybrid of genetic algorithm and particle swarm optimization for recurrent network design. *IEEE Trans. Syst. Man Cybern. Part B* **2004**, *34*, 997–1006. [CrossRef] [PubMed]
48. Li, B.B.; Wang, L. A hybrid quantum-inspired genetic algorithm for multiobjective flow shop scheduling. *IEEE Trans. Syst. Man Cybern. Part B* **2007**, *37*, 576–591. [CrossRef]
49. Tripathi, P.K.; Bandyopadhyay, S.; Pal, S.K. Multi-Objective Particle Swarm Optimization with time variant inertia and acceleration coefficients. *Inform. Sci.* **2007**, *177*, 5033–5049. [CrossRef]
50. Ahmed, E.; Shawki, A.; Robert, D. Strength Pareto Particle Swarm Optimization and Hybrid EA-PSO for Multi-Objective Optimization. *Evol. Comput.* **2010**, *18*, 127–156.
51. Zhan, Z.H.; Zhang, J.; Li, Y.; Chung, H.S.H. Adaptive Particle Swarm Optimization. *IEEE Trans. Syst. Man Cybern. Part B* **2009**, *39*, 1362–1381. [CrossRef]
52. Shi, Y.H.; Eberhart, R.C. Fuzzy adaptive particle swarm optimization. In Proceedings of the Congress on Evolutionary Computation 2001, Seoul, Korea, 27–30 May 2001; pp. 101–106.
53. Mirjalili, S.; Lewis, A. S-shaped versus V-shaped transfer functions for binary Particle Swarm Optimization. *Swarm Evol. Comput.* **2013**, *9*, 1–14. [CrossRef]
54. Liao, C.J.; Iseng, C.T.; Luarn, P. A discrete version of particle swarm optimization for flowshop scheduling problems. *Comput. Oper. Res.* **2007**, *34*, 3099–3111. [CrossRef]
55. Zhao, X.C. A perturbed particle swarm algorithm for numerical optimization. *Appl. Soft Comput.* **2010**, *10*, 119–124.
56. Wang, Y.; Liu, J.H. Chaotic particle swarm optimization for assembly sequence planning. *Robot. Comput. Integrat. Manuf.* **2010**, *26*, 212–222. [CrossRef]
57. Liu, H.; Cai, Z.X.; Wang, Y. Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization. *Appl. Soft Comput.* **2010**, *10*, 629–640. [CrossRef]
58. Santucci, V.; Milani, A. Particle Swarm Optimization in the EDAs Framework. In Proceedings of the 15th Online World Conference on Soft Computing in Industrial Applications, Electr Network, Online, 15–17 November 2010; pp. 87–96.
59. Plevris, V.; Papadakakis, M. A Hybrid Particle Swarm-Gradient Algorithm for Global Structural Optimization. *Comput. Civ. Infrastruct. Eng.* **2011**, *26*, 48–68. [CrossRef]
60. Bachaous, M.; Pandey, M.K.; Mahajan, C. Designing an integrated multi-echelon agile supply chain network: A hybrid taguchi-particle swarm optimization approach. *J. Intell. Manuf.* **2008**, *19*, 747–761. [CrossRef]
61. Akbari, R.; Hedayatzadeh, R.; Zarati, K.; Hassanzadeh, B. A multi-objective artificial bee colony algorithm. *Swarm Evol. Comput.* **2012**, *2*, 39–52. [CrossRef]
62. Song, X.Y.; Yan, Q.F.; Zhao, M. An adaptive artificial bee colony algorithm based on objective function value information. *Appl. Soft Comput.* **2017**, *53*, 384–401. [CrossRef]
63. Kashan, M.H.; Nahavandi, N.; Kashan, A.H. DisABC: A new artificial bee colony algorithm for binary optimization. *Appl. Soft Comput.* **2012**, *12*, 342–352. [CrossRef]
64. Pan, Q.K.; Tasgetiren, M.F.; Suganthan, P.N.; Chua, T.J. A discrete artificial bee colony algorithm for the lot-streaming flow shop scheduling problem. *Inform. Sci.* **2011**, *181*, 2455–2468. [CrossRef]
65. Teimouri, R.; Baseri, H. Forward and backward predictions of the friction stir welding parameters using fuzzy-artificial bee colony-imperialist competitive algorithm systems. *J. Intell. Manuf.* **2015**, *26*, 307–319. [CrossRef]
66. Xu, C.F.; Duan, H.B.; Liu, F. Chaotic artificial bee colony approach to Uninhabited Combat Air Vehicle (UCAV) path planning. *Aerosp. Sci. Technol.* **2010**, *14*, 533–541. [CrossRef]
67. Jadon, S.S.; Tiwari, R.; Sharma, H.; Bansal, J.C. Hybrid Artificial Bee Colony algorithm with Differential Evolution. *Appl. Soft Comput.* **2017**, *58*, 1–24. [CrossRef]
68. Kang, F.; Li, J.J.; Xu, Q. Structural inverse analysis by hybrid simplex artificial bee colony algorithms. *Comput. Struct.* **2009**, *87*, 861–870. [CrossRef]
69. Yang, X.S. Bat algorithm for multi-objective optimisation. *Int. J. Bio-Inspir. Comput.* **2011**, *3*, 267–274. [CrossRef]
70. Khooban, M.H.; Niknam, T. A new intelligent online fuzzy tuning approach for multi-area load frequency control: Self Adaptive Modified Bat Algorithm. *Int. J. Electr. Power Energy Syst.* **2015**, *71*, 254–261. [CrossRef]
71. Mirjalili, S.; Mirjalili, S.M.; Yang, X.S. Binary bat algorithm. *Neural Comput. Appl.* **2014**, *25*, 663–681. [CrossRef]
72. Osaba, E.; Yang, X.S.; Diaz, F.; Lopez-Garcia, P.; Carballo, R. An improved discrete bat algorithm for symmetric and asymmetric Traveling Salesman Problems. *Eng. Appl. Artif. Intell.* **2016**, *48*, 59–71. [CrossRef]
73. Wang, G.G.; Guo, L.H. A Novel Hybrid Bat Algorithm with Harmony Search for Global Numerical Optimization. *J. Appl. Math.* **2013**, *2013*, 696491. [CrossRef]
74. Perez, J.; Valdez, F.; Castillo, O. Modification of the Bat Algorithm using Fuzzy Logic for Dynamical Parameter Adaptation. In Proceedings of the IEEE Congress on Evolutionary Computation, Sendai, Japan, 25–28 May 2015; pp. 464–471.
75. Gandomi, A.H.; Yang, X.S. Chaotic bat algorithm. *J. Comput. Sci.-NETH* **2014**, *5*, 224–232. [CrossRef]
76. Fister, I.; Fong, S.; Brest, J.; Fister, I. A Novel Hybrid Self-Adaptive Bat Algorithm. *Sci. World J.* **2014**, *2014*, 709738. [CrossRef] [PubMed]
77. Wang, H.; Wang, W.J.; Cui, L.Z.; Sun, H.; Zhao, J.; Wang, Y.; Xue, Y. A hybrid multi-objective firefly algorithm for big data optimization. *Appl. Soft Comput.* **2018**, *69*, 806–815. [CrossRef]
78. Baykasoglu, A.; Ozsoydan, F.B. Adaptive firefly algorithm with chaos for mechanical design optimization problems. *Appl. Soft Comput.* **2015**, *36*, 152–164. [CrossRef]
79. Zhang, J.; Gao, B.; Chai, H.T.; Alma, Z.Q.; Yang, G.F. Identification of DNA-binding proteins using multi-features fusion and binary firefly optimization algorithm. *BMC Bioinform.* **2016**, *17*, 1–12. [CrossRef]
80. Sayadi, M.K.; Hafezlakotkob, A.; Naini, S.G.J. Firefly-inspired algorithm for discrete optimization problems: An application to manufacturing cell formation. *J. Manuf. Syst.* **2013**, *32*, 78–84. [CrossRef]
81. Wang, G.G.; Guo, L.H.; Duan, H.; Wang, H.Q. A New Improved Firefly Algorithm for Global Numerical Optimization. *J. Comput. Theor. Nano.* **2014**, *11*, 477–485. [CrossRef]
82. Chandrasekaran, K.; Simon Sishaj, P. Optimal deviation based firefly algorithm tuned fuzzy design for multi-objective UCP. *IEEE Trans. Power Syst.* **2013**, *28*, 460–471. [CrossRef]
83. Coelho dos Santos, L.; de Andrade Bernard, D.L.; Mariani, V.C. A Chaotic Firefly Algorithm Applied to Reliability-Redundancy Optimization. In Proceedings of the 2011 IEEE Congress of Evolutionary Computation, New Orleans, LA, USA, 2011; pp. 517–521.
84. Abdullah, A.; Deris, S.; Mohamad, M.S.; Hashim, S.Z.M. A new hybrid firefly algorithm for complex and nonlinear problem. In *Distributed Computing and Artificial Intelligence*; Springer: Berlin/Heidelberg, Germany, 2012; pp. 673–680.
85. Satapathy, P.; Dhar, S.; Dash, P.K. Stability improvement of PV-BESS diesel generator-based microgrid with a new modified harmony search-based hybrid firefly algorithm. *IET Renew. Power Gen.* **2017**, *11*, 566–577. [CrossRef]
86. Luh, G.C.; Chueh, C.H.; Liu, W.W. Moia: Multi-objective immune algorithm. *Eng. Optim.* **2003**, *35*, 143–164. [CrossRef]
87. Shao, X.G.; Cheng, J.J.; Cai, W.S. An adaptive immune optimization algorithm for energy minimization problems. *J. Chem. Phys.* **2004**, *120*, 11401–11406. [CrossRef] [PubMed]
88. Zhao, X.C.; Song, B.Q.; Huang, P.Y.; Wen, Z.C.; Weng, J.L.; Fan, Y. An improved discrete immune optimization algorithm based on PSO for QoS-driven web service composition. *Appl. Soft Comput.* **2012**, *12*, 2208–2216. [CrossRef]
89. Isan, J.I.; Ho, W.H.; Liu, T.K.; Chou, J.H. Improved immune algorithm for global numerical optimization and job-shop scheduling problems. *Appl. Math. Comput.* **2007**, *194*, 406–424. [CrossRef]
90. Sahar, S.; Polat, K.; Kodaz, H.; Günes, S. A new hybrid method based on fuzzy-artificial immune system and k-nn algorithm for breast cancer diagnosis. *Comput. Biol. Med.* **2007**, *37*, 415–423. [CrossRef] [PubMed]
91. He, H.; Qian, F.; Du, W.L. A chaotic immune algorithm with fuzzy adaptive parameters. *Asia-Pac. J. Chem. Eng.* **2008**, *3*, 695–705. [CrossRef]
92. Lin, Q.Z.; Zhu, Q.L.; Huang, P.Z.; Chen, J.Y.; Ming, Z.; Yu, J.P. A novel hybrid multi-objective immune algorithm with adaptive differential evolution. *Comput. Oper. Res.* **2015**, *62*, 95–111. [CrossRef]
93. Ali Riza, Y. An effective hybrid immune-hill climbing optimization approach for solving design and manufacturing optimization problems in industry. *J. Mater. Process. Tech.* **2009**, *209*, 2773–2780.
94. Chandrasekaran, K.; Simon Sishaj, P. Multi-objective scheduling problem: Hybrid approach using fuzzy assisted cuckoo search algorithm. *Swarm Evol. Comput.* **2012**, *5*, 1–16. [CrossRef]
95. Mlakar, U.; Fister, I.; Fister, I. Hybrid self-adaptive cuckoo search for global optimization. *Swarm Evol. Comput.* **2016**, *29*, 47–72. [CrossRef]
96. Rodrigues, D.; Pereira, L.A.M.; Almeida, T.N.S.; Papa, J.P.; Souza, A.N.; Romos, C.C.O.; Yang, X.S. BCS: A Binary Cuckoo Search algorithm for feature selection. In Proceedings of the 2013 IEEE International Symposium on Circuits and Systems, Beijing, China, 19–23 May 2013; pp. 465–468.
97. Ouaraab, A.; Ahiod, B.; Yang, X.S. Discrete cuckoo search algorithm for the travelling salesman problem. *Neural Comput. Appl.* **2014**, *24*, 1659–1669. [CrossRef]
98. Guerrero, M.; Castillo, O.; Garcia, M. Fuzzy dynamic parameters adaptation in the Cuckoo Search Algorithm using Fuzzy logic. In Proceedings of the IEEE Congress on Evolutionary Computation, Sendai, Japan, 25–28 May 2015; pp. 441–448.
99. Wang, G.G.; Deb, S.; Gandomi, A.H.; Zhang, Z.J.; Alavi, A.H. Chaotic cuckoo search. *Soft Comput.* **2016**, *20*, 3349–3362. [CrossRef]
100. Kanagaraj, G.; Ponnambalam, S.G.; Jawahar, N.; Nilakantan, J.M. An effective hybrid cuckoo search and genetic algorithm for constrained engineering design optimization. *Eng. Optim.* **2014**, *46*, 1331–1351. [CrossRef]
101. Wang, G.G.; Gandomi, A.H.; Zhao, X.J.; Chu, H.C.E. Hybridizing harmony search algorithm with cuckoo search for global numerical optimization. *Soft Comput.* **2016**, *20*, 273–285. [CrossRef]
102. Ali, M.; Siarry, P.; Pant, M. An efficient Differential Evolution based algorithm for solving multi-objective optimization problems. *Eur. J. Oper. Res.* **2012**, *217*, 404–416. [CrossRef]
103. Cui, L.Z.; Li, G.H.; Lin, Q.Z.; Chen, J.Y.; Lu, N. Adaptive differential evolution algorithm with novel mutation strategies in multiple sub-populations. *Comput. Oper. Res.* **2016**, *65*, 155–173. [CrossRef]
104. Wang, L.; Pan, Q.K.; Suganthan, P.N.; Wang, W.H.; Wang, Y.M. A novel hybrid discrete differential evolution algorithm for blocking flow shop scheduling problems. *Comput. Oper. Res.* **2010**, *27*, 509–520. [CrossRef]
105. Pan, Q.K.; Tasgetiren, M.F.; Liang, Y.C. A discrete differential evolution algorithm for the permutation flowshop scheduling problem. *Comput. Ind. Eng.* **2008**, *55*, 795–816. [CrossRef]
106. Maulik, U.; Saha, I. Modified differential evolution based fuzzy clustering for pixel classification in remote sensing imagery. *Pattern Recognit.* **2009**, *42*, 2135–2149. [CrossRef]
107. Dos Santos, C.L.; Ayala, H.V.H.; Mariani, V.C. A self-adaptive chaotic differential evolution algorithm using gamma distribution for unconstrained global optimization. *Appl. Math. Comput.* **2014**, *234*, 452–459.
108. Li, X.; Yin, M. Parameter estimation for chaotic systems by hybrid differential evolution algorithm and artificial bee colony algorithm. *Nonlinear Dynam* **2014**, *77*, 61–71. [CrossRef]
109. Sayah, S.; Hamouda, A. A hybrid differential evolution algorithm based on particle swarm optimization for nonconvex economic dispatch problems. *Appl. Soft Comput.* **2013**, *13*, 1608–1619. [CrossRef]
110. Wang, L.; Zou, F.; Hei, X.H.; Yang, D.D.; Chen, D.B.; Jiang, Q.Y.; Cao, Z.J. A hybridization of teaching–learning-based optimization and differential evolution for chaotic time series prediction. *Neural Comput. Appl.* **2014**, *25*, 1407–1422. [CrossRef]
111. Reza, H.H.; Modjtaba, R.A multi-objective gravitational search algorithm. In Proceedings of the 2nd International Conference on Computational Intelligence, Communication Systems and Networks, Liverpool, UK, 28–30 July 2010; pp. 7–12.
112. Mirjalili, S.; Lewis, A. Adaptive gbest-guided gravitational search algorithm. *Neural Comput. Appl.* **2014**, *25*, 1569–1584. [CrossRef]
113. Yuan, X.H.; Ji, B.; Zhang, S.Q.; Tan, H.; Hou, Y.H. A new approach for unit commitment problem via binary gravitational search algorithm. *Appl. Soft Comput.* **2014**, *22*, 249–260. [CrossRef]
114. Mohammad Bagher, D.; Hosseini, N.P.; Mashaalati, M. A discrete gravitational search algorithm for solving combinatorial optimization problems. *Inform. Sci.* **2014**, *268*, 94–107.
115. Sombra, A.; Valdez, F.; Melin, P.; Castillo, O. A new gravitational search algorithm using fuzzy logic to parameter adaptation. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 1068–1074.
116. Gao, S.C.; Vairappan, C.; Wang, Y.; Cao, Q.P.; Tang, Z. Gravitational search algorithm combined with chaos for unconstrained numerical optimization. *Appl. Math. Comput.* **2014**, *231*, 48–62. [CrossRef]
117. Jiang, S.H.; Ji, Z.C.; Shen, Y.X. A novel hybrid particle swarm optimization and gravitational search algorithm for solving economic emission load dispatch problems with various practical constraints. *Int. J. Electr. Power* **2014**, *55*, 628–644. [CrossRef]
118. Sahu, R.K.; Panda, S.; Padhan, S. A novel hybrid gravitational search and pattern search algorithm for load frequency control of nonlinear power system. *Appl. Soft Comput.* **2015**, *29*, 310–327. [CrossRef]
119. Mirjalili, S.; Saremi, S.; Mirjalili, S.M.; Coelho, Leandro Dos, S. Multi-objective grey wolf optimizer: A novel algorithm for multi-criterion optimization. *Expert Syst. Appl.* **2016**, *47*, 106–119. [CrossRef]
120. Rodriguez, L.; Castillo, O.; Soria, J. Grey wolf optimizer with dynamic adaptation of parameters using fuzzy logic. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, 24–29 July 2016; pp. 3116–3123.
121. Li, L.G.; Sun, L.J.; Guo, J.; Qi, J.; Xu, B.; Li, S.J. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding. *Comput. Intell. Neurosci.* **2017**, *2017*, 3116–3123. [CrossRef] [PubMed]
122. Radu-Emil, P.; Radu-Codrut, D.; Emil, M.P. Grey Wolf optimizer algorithm-based tuning of fuzzy control systems with reduced parametric sensitivity. *IEEE Trans. Ind. Electron.* **2017**, *64*, 527–534.
123. Mehak, K.; Sankalap, A. Chaotic grey wolf optimization algorithm for constrained optimization problems. *J. Comput. Des. Eng.* **2018**, *5*, 458–472.
124. Zhang, S.; Luo, Q.F.; Zhou, Y.Q. Hybrid Grey Wolf Optimizer Using Elite Opposition-Based Learning Strategy and Simplex Method. *Int. J. Comput. Int. Appl.* **2017**, *6*, 1–38. [CrossRef]
125. Zhang, X.M.; Kang, Q.; Cheng, J.F.; Wang, X. A novel hybrid algorithm based on Biogeography-Based Optimization and Grey Wolf Optimizer. *Appl. Soft Comput.* **2018**, *67*, 217–214. [CrossRef]
126. Gao, K.Z.; Suganthan, P.N.; Pan, Q.K.; Chua, T.J.; Cai, T.X.; Chong, C.S. Pareto-based grouping discrete harmony search algorithm for multi-objective flexible job shop scheduling. *Inform. Sci.* **2014**, *289*, 76–90. [CrossRef]
127. Wang, L.; Yang, R.X.; Xu, Y.; Niu, Q.; Pardalos, P.M.; Fei, M. An improved adaptive binary Harmony Search algorithm. *Inform. Sci.* **2013**, *232*, 58–87. [CrossRef]
128. Geem, Z.W. Original derivative of harmony search algorithm for discrete design variables. *Appl. Math. Comput.* **2008**, *199*, 223–230. [CrossRef]
129. Peraza, C.; Valdez, F.; Garcia, M.; Melin, P.; Castillo, O. A New Fuzzy Harmony Search Algorithm using Fuzzy Logic for Dynamic Parameter Adaptation. *Algorithms* **2016**, *9*, 69. [CrossRef]
130. Alatas, B. Chaotic harmony search algorithms. *Appl. Math. Comput.* **2010**, *216*, 2687–2699. [CrossRef]
131. Yuan, Y.; Xu, H.; Yang, J.D. A hybrid harmony search algorithm for the flexible job shop scheduling problem. *Appl. Soft Comput.* **2013**, *13*, 3259–3272. [CrossRef]
132. Layeb, A. A hybrid quantum inspired harmony search algorithm for 0–1 optimization problems. *J. Comput. Appl. Math.* **2013**, *253*, 14–25. [CrossRef]
133. Wang, Z.W.; Qin, C.; Wan, B.T.; Song, W.W. An Adaptive Fuzzy Chicken Swarm Optimization Algorithm. *Math. Probbl. Eng.* **2021**, *2021*, 8896794.
134. Li, Z.Y.; Wang, W.Y.; Yan, Y.Y.; Li, Z. PS-ABC: A hybrid algorithm based on particle swarm and artificial bee colony for high-dimensional optimization problems. *Expert Syst. Appl.* **2015**, *42*, 8881–8895. [CrossRef]
135. Pan, T.S.; Dao, T.K.; Nguyen, T.T.; Chu, S.C. Hybrid Particle Swarm Optimization with Bat Algorithm. In Proceedings of the 8th International Conference on Genetic and Evolutionary Computing, Nanchang, China, 18–20 October 2015; pp. 37–47.
136. Soerensen, K. Metaheuristics—the metaphor exposed. *Int. Trans. Oper. Res.* **2015**, *22*, 3–18. [CrossRef]
137. Derrac, J.; Garcia, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. *Swarm Evol. Comput.* **2011**, *1*, 3–18. [CrossRef]
138. Sadollah, A.; Bahreininejad, A.; Eskandar, H.I.; Hamdi, M. Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems. *Appl. Soft Comput.* **2013**, *13*, 2592–2612. [CrossRef]
139. Joines, J.; Houck, C. On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GA’s. In Proceedings of the 1st IEEE Conference on Evolutionary Computation, Orlando, FL, USA, 27–29 June 1994; pp. 579–584.
140. He, J.; Yao, X. Drift analysis and average time complexity of evolutionary algorithms. *Artif. Intell.* **2001**, *127*, 57–85. [CrossRef]
141. Mohammad Reza, B.; Zbigniew, M. Analysis of Stability, Local Convergence, and Transformation Sensitivity of a Variant of the Particle Swarm Optimization Algorithm. *IEEE Trans. Evol. Comput.* **2016**, *20*, 370–385.
142. Mohammad Reza, B.; Zbigniew, M. Stability Analysis of the Particle Swarm Optimization Without Stagnation Assumption. *IEEE Trans. Evol. Comput.* 2016, 20, 814–819.
143. Chen, W.N.; Zhang, J.; Lin, Y.; Chen, N.; Zhan, Z.H.; Chung, H.S.H.; Li, Y.; Shi, Y.H. Particle Swarm Optimization with an Aging Leader and Challengers. *IEEE Trans. Evol. Comput.* 2013, 17, 241–258. [CrossRef]
144. Kennedy, J.; Mendes, R. Population structure and particle swarm performance. In Proceedings of the 2002 Congress on Evolutionary Computation, Honolulu, HI, USA, 12–15 May 2002; pp. 1671–1676.
145. Huang, C.W.; Li, Y.X.; Yao, X. A Survey of Automatic Parameter Tuning Methods for Metaheuristics. *IEEE Trans. Evol. Comput.* 2020, 24, 201–216. [CrossRef]
146. Hart, E.; Ross, P. GAVEL.—A New Tool for Genetic Algorithm Visualization. *IEEE Trans. Evol. Comput.* 2001, 5, 335–348. [CrossRef]
147. Ryoji, T.; Hisao, I. A Review of Evolutionary Multimodal Multiobjective Optimization. *IEEE Trans. Evol. Comput.* 2020, 24, 193–200.
148. Ma, X.L.; Li, X.D.; Zhang, Q.F.; Tang, K.; Liang, Z.P.; Xie, W.X.; Zhu, Z.X. A Survey on Cooperative Co-Evolutionary Algorithms. *IEEE Trans. Evol. Comput.* 2019, 23, 421–440. [CrossRef]
149. Jin, Y.C.; Wang, H.D.; Chugh, T.; Guo, D.; Miettinen, K. Data-Driven Evolutionary Optimization: An Overview and Case Studies. *IEEE Trans. Evol. Comput.* 2019, 23, 442–458. [CrossRef]
150. Djennouri, Y.; Fournier-Viger, P.; Lin, J.C.W.; Djennouri, D.; Belhadi, A. GPU-based swarm intelligence for Association Rule Mining in big databases. *Intell. Data Anal.* 2019, 23, 57–76. [CrossRef]
151. Wang, J.L.; Gong, B.; Liu, H.; Li, S.H. Multidisciplinary approaches to artificial swarm intelligence for heterogeneous computing and cloud scheduling. *Appl. Intell.* 2015, 43, 662–675. [CrossRef]
152. De, D.; Ray, S.; Konar, A.; Chatterjee, A. An evolutionary SPDE breeding-based hybrid particle swarm optimizer: Application in coordination of robot ants for camera coverage area optimization. In Proceedings of the 1st International Conference on Pattern Recognition and Machine Intelligence, Kolkata, India, 20–22 December 2005; pp. 413–416. |
COVID-19
How the Pandemic Has Affected Medical Resources
Gene Therapy:
A CURE FOR HEMOGLOBINOPATHIES?
Debunking IG Therapy Myths
TO IMPROVE PATIENT OUTCOMES
THE INCREASING PREVALENCE OF Metabolic Syndrome
TRANSITIONING HEALTHCARE TO THE Retail Sector
NEW DEVELOPMENTS IN UNIVERSAL FLU VACCINES | PAGE 46
Guaranteed Channel Integrity® 8 Critical Steps
**STEP 1: Purchasing**
At FFF, we only purchase product from the manufacturer—never from another distributor or source—so the integrity of our products is never in question.
**STEP 2: Storage**
The healthcare products we store and transport are sensitive to temperature variations. Our state-of-the-art warehouse is temperature-controlled, monitored 24/7, and supported with backup generators in the event of power loss. In addition, we only stack products double-high to minimize pressure on fragile bottles and containers.
**STEP 3: Specialty Packaging**
At FFF, we use only certified, qualified, environmentally-friendly packaging, taking extra precautions for frozen and refrigerated products.
**STEP 4: Interactive Allocation**
FFF’s unique capability of interactive allocation is made possible by our field sales team’s close relationship with our customers. Our team understands customers’ ongoing requirements, responds to their immediate crises, and allocates product in real-time to meet patients’ needs.
Our commitment to a secure pharmaceutical supply chain is demonstrated by our flawless safety record. The 8 Critical Steps to Guaranteed Channel Integrity have resulted in more than 11,600 counterfeit-free days of safe product distribution.
800.843.7477 | Emergency Ordering 24/7
**STEP 5: Delivery**
Our delivery guidelines are in compliance with the State Board of Pharmacy requirements. Products we deliver must only be transported to facilities with a state-issued license, and only to the address on the license. We make no exceptions. And we will not ship to customers known to have a distributor’s license.
**STEP 6: Methods of Delivery**
We monitor for extreme weather conditions, and when the need arises, we ship overnight to maintain product efficacy. We also track patient need during life-threatening storms to make sure products are delivered when and where patients need them most.
**STEP 7: Verification**
In compliance with U.S. Drug Supply Chain Security Act (DSCSA) requirements, every product shipped from FFF is accompanied by a packing slip that includes information regarding the manufacturer and presentation, as well as the three T’s: Transaction Information, Transaction History, and Transaction Statement.
**STEP 8: Tracking**
To meet DSCSA requirements, FFF provides product traceability information on all packing slips. In addition, Lot-Track® electronically captures and permanently stores each product lot number, matched to customer information, for every vial of drug we supply.
Up Front
5 Publisher’s Corner
Shifting to a Wellness Model of Care
By Patrick M. Schmidt
Features
16 Effects of COVID-19 on Medical Resources
By Diane L.M. Cook
22 Healthcare Disrupted: Transitioning Primary Care, Diagnostics and Chronic Disease Management to the Retail Healthcare Sector
By Amy Scanlin, MS
30 Gene Therapy for Hemoglobinopathies
By Meredith Whitmore
34 Fact or Fiction: Debunking the Myths Surrounding IG Therapy to Improve Patient Outcomes
By Luba Sobolevsky, PharmD, IgCP, Rachel Colletta, BSN, CRNI, IgCN, and Amy Clarke, RN, BSN, IgCN
42 The Rise of Metabolic Syndrome: A Cause for Concern
By Jim Trageser
BioFocus
46 Industry Insight
Universal Flu Vaccines Advance from Concept to Clinical Trials
By Keith Berman, MPH, MBA
52 Patient Profile
Osteoporosis: A Patient’s Perspective
By Trudie Mitschang
53 Physician Profile
Osteoporosis: A Physician’s Perspective
By Trudie Mitschang
BioTrends Watch
6 Washington Report
Healthcare legislation and policy updates
8 Reimbursement FAQs
Interpreting Payment Rule and Revenue Cycle Terminology
By Bonnie Kirschenbaum, MS, FASHP, FCSHP
10 Healthcare Management
Accreditation Can Drive Business Capacity for Healthcare Organizations
By José Domingos
12 Industry News
Research, science and manufacturer updates
About BioSupply Trends Quarterly
BioSupply Trends Quarterly is the definitive source for industry trends, news and information for healthcare professionals in the biopharmaceuticals marketplace.
BioSupply Trends Quarterly (ISSN 1948-2620) is a national publication, with quarterly themed issues.
Publisher: FFF Enterprises, Inc., 44000 Winchester Road, Temecula, CA 92590
Subscriptions to BioSupply Trends Quarterly are complimentary. Readers may subscribe by calling (800) 843-7477 x1351.
The opinions expressed in BioSupply Trends Quarterly are those of the authors alone. They do not represent the opinions, policies or positions of FFF Enterprises, the Board of Directors, the BioSupply Trends Quarterly Advisory Board or editorial staff. This material is provided for general information only. FFF Enterprises does not give medical advice or engage in the practice of medicine.
BioSupply Trends Quarterly accepts manuscript submissions in MS Word between 600 and 2,500 words in length. Email manuscripts to, or request submission guidelines at, email@example.com. BioSupply Trends Quarterly retains the right to edit submissions. The contents of each submission and their accuracy are the responsibility of the author(s), and all submissions must be original work. No submission may be published elsewhere without the written permission of BioSupply Trends Quarterly. A copyright agreement attesting to this and transferring copyright to FFF Enterprises will be required.
Advertising in BioSupply Trends Quarterly
BioSupply Trends Quarterly has a circulation of 40,000, with a readership of more than 10,000 decision-makers comprising general practice physicians, hospital and clinic chiefs of staff and buyers, pharmacy managers and buyers, specialist physicians and other healthcare professionals.
For information about advertising in BioSupply Trends Quarterly, you may request a media kit from Ronale Tucker Rhodes at (800) 843-7477 x1362, firstname.lastname@example.org.
Shifting to a Wellness Model of Care
AS THE healthcare industry adapts to an ever-changing landscape, transitioning to a wellness model of care looks to be in its future. That means adjusting to meet an increasing patient demand for care post-pandemic by expanding healthcare staffing, especially in certain sectors; focusing on high-quality care and outcomes by switching from a fee-for-service model to a patient-centered model; acknowledging and meeting the needs of “healthcare consumers;” and embracing a new emphasis on preventing disease rather than treating it.
The forces driving this changing landscape are numerous, but most acknowledge the COVID-19 pandemic currently tops the list of contributors. As we highlight in our article “Effects of COVID-19 on Medical Resources” (p.16), these effects range from staffing and revenue shortages to supply chain management challenges. Declines in patient visits and procedures during the pandemic substantially reduced revenue, with 75 percent of hospitals reporting adverse impacts. Yet, despite the downturn in visits and procedures, adequate staffing continues to be problematic, as nurses were already in short supply prior to the pandemic. Recent surveys by several major healthcare organizations show nurses are now leaving their jobs due to forced overtime, burnout and fear of contracting the SARS-CoV-2 virus. And lack of staff isn’t limited to nursing. In another study, some 43 percent of physician respondents also reported burnout. Fortunately, the federal government is providing millions of dollars in funding to address these shortages, and many hospitals are starting to report rising revenues. What’s more, nursing and medical school enrollment is on the upswing. However, supply chain challenges will continue until the system resolves the issues the pandemic raised.
As concerns over the pandemic diminish, the industry is bracing for a surge in patients due to an aging population and “healthcare consumers,” defined as patients engaged in their healthcare through technologies such as electronic health records, telehealth and wearables. An answer to this service gap, according to many, involves retail health centers (RHCs). As reported in our article “Healthcare Disrupted: Transitioning Primary Care, Diagnostics and Chronic Disease Management to the Retail Healthcare Sector” (p.22), while RHCs are not new, their growth is driven by healthcare consumers’ desire for more convenient office hours and clear pricing. RHCs, mainly staffed by physician assistants and nurse practitioners, provide a growing number of services, which can result in discord between these facilities and primary care practices. Yet, despite this friction, RHCs appear to be here to stay, and there seems to be no argument that they are serving patients in more convenient locations with hours and pricing that better suit consumer needs.
As always, we hope you enjoy the additional articles addressing the ways in which healthcare is shifting in this issue of BioSupply Trends Quarterly, and find them both relevant and helpful to your practice.
Helping Healthcare Care,
Patrick M. Schmidt
Publisher
OCR Issues Guidance on HIPAA, COVID-19 Vaccinations and the Workplace
The U.S. Department of Health and Human Services’ Office for Civil Rights (OCR) issued guidance to help the public understand when the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule applies to disclosures and requests for information about whether a person has received a COVID-19 vaccine. According to the guidance, the HIPAA Privacy Rule does not apply to employers or employment records because it applies only to HIPAA-covered entities (health plans, healthcare clearinghouses and healthcare providers that conduct standard electronic transactions) and, in some cases, to their business associates.
“We are issuing this guidance to help consumers, businesses and healthcare entities understand when HIPAA applies to disclosures about COVID-19 vaccination status and to ensure that they have the information they need to make informed decisions about protecting themselves and others from COVID-19,” said OCR Director Lisa Pino.
HHS Announces the Availability of $25.5 Billion in COVID-19 Provider Funding
The U.S. Department of Health and Human Services (HHS) is making $25.5 billion in new funding available for healthcare providers affected by the COVID-19 pandemic. This funding includes $8.5 billion in American Rescue Plan (ARP) resources for providers who serve rural Medicaid, Children’s Health Insurance Program (CHIP) or Medicare patients, and an additional $17 billion for Provider Relief Fund (PRF) Phase 4 for a broad range of providers who can document revenue loss and expenses associated with the pandemic. “This funding critically helps healthcare providers who have endured demanding workloads and significant financial strains amidst the pandemic,” said HHS Secretary Xavier Becerra. “The funding will be distributed with an eye toward equity to ensure providers who serve our most vulnerable communities will receive the support they need.”
Consistent with the requirements included in the Coronavirus Response and Relief Supplemental Appropriations Act of 2020, PRF Phase 4 payments will be based on providers’ lost revenues and expenditures between July 1, 2020, and March 31, 2021. PRF Phase 4 will reimburse smaller providers — who tend to operate on thin margins and often serve vulnerable or isolated communities — for their lost revenues and COVID-19 expenses at a higher rate compared to larger providers. PRF Phase 4 will also include bonus payments for providers who serve Medicaid, CHIP and/or Medicare patients who tend to be lower income and have greater and more complex medical needs. The Health Resources and Services Administration (HRSA) will price bonus payments at the generally higher Medicare rates to ensure equity for those serving low-income children, pregnant women, people with disabilities and seniors.
Similarly, HRSA will make ARP rural payments to providers based on the amount of Medicaid, CHIP and/or Medicare services they provide to patients who live in rural areas as defined by the HHS Federal Office of Rural Health Policy. ARP rural payments will also generally be based on Medicare reimbursement rates. “We know that this funding is critical for healthcare providers across the country, especially as they confront new coronavirus-related challenges and respond to natural disasters,” said Acting HRSA Administrator Diana Espinosa. “We are committed to distributing this funding as equitably and transparently as possible to help providers respond to and ultimately defeat this pandemic.”
To expedite and streamline the application process and minimize administrative burdens, providers will apply for both programs in a single application. HRSA will use existing Medicaid, CHIP and Medicare claims data in calculating payments. The application portal opened Sept. 29, 2021. To help ensure these provider relief funds are used for patient care, PRF recipients will be required to notify the HHS Secretary of any merger with, or acquisition of, another healthcare provider during the period in which they can use the payments. Providers who report a merger or acquisition may be more likely to be audited to confirm their funds were used for coronavirus-related costs, consistent with an overall risk-based audit strategy.
Interim Rule Advances Key Protections Against Surprise Medical Bills
An interim final rule with comment period to further implement the No Surprises Act — a consumer protection law that helps curb the practice of surprise medical billing — details a process that will take patients out of the middle of payment disputes, provides a transparent process to settle out-of-network (OON) rates between providers and payers, and outlines requirements for healthcare cost estimates for uninsured (or self-pay) individuals. Other consumer protections in the rule include a payment dispute resolution process for uninsured or self-pay individuals. It also adds protections in the external review process so individuals with job-based or individual health plans can dispute denied payment for certain claims. “No one should have to go bankrupt over a surprise medical bill,” said U.S. Department of Health and Human Services (HHS) Secretary Xavier Becerra. “With today’s rule, we continue to deliver on President Biden’s Competition Executive Order by promoting price transparency and exposing inflated healthcare costs. Our goal is simple: giving Americans a better deal from a more competitive healthcare system.”
The rule is the third in a series implementing the No Surprises Act, a bipartisan consumer protection law. In early September, a rule was issued to help collect data on the air ambulance provider industry, in addition to a rule in July on consumer protections against surprise billing. Collectively, these rules took effect Jan. 1, 2022, and ban surprise billing for emergency services, as well as certain nonemergency care provided by OON providers at in-network facilities, and limit high OON cost-sharing for emergency and nonemergency services for patients.
“Price transparency is a reality in almost every aspect of our lives except healthcare,” said CMS Administrator Chiquita Brooks-LaSure. “The Biden-Harris Administration is committed to changing this. With today’s final rule, we are requiring healthcare providers and healthcare facilities to provide uninsured patients with clear, understandable estimates of the charges they can expect for their scheduled healthcare services.” ✦
Biden-Harris Administration Advances Key Protections Against Surprise Medical Bills; Giving Peace of Mind to Millions of Consumers Plagued by High Costs. U.S. Department of Health and Human Services press release, Sept. 30, 2021. Accessed at www.hhs.gov/about/news/2021/09/30/biden-harris-administration-advances-key-protections-against-surprise-medical-bills.html.
CMS Launches New Medicare.gov Tool to Compare Nursing Home Vaccination Rates
The Centers for Medicare & Medicaid Services (CMS) is making it easier to check COVID-19 vaccination rates for nursing home staff and residents by making vaccination data available in a user-friendly format. CMS and the Centers for Disease Control and Prevention are also continuing to use this data to monitor vaccine uptake among residents and staff and to identify facilities that may need additional resources or assistance to respond to the pandemic. “CMS wants to empower nursing home residents, their families and caregivers with the information they need when choosing care providers for their loved ones. As we continue to work with our partners to monitor the spread of COVID-19 and keep nursing home residents safe, we want to give people a new tool to visualize this data to help them make informed decisions,” said CMS Administrator Chiquita Brooks-LaSure. “CMS knows that nursing home staff want to protect their residents and is calling on them to get vaccinated now. The COVID-19 vaccine is safe, effective and accessible to all at no out-of-pocket cost.”
Medicare and Medicaid-certified nursing homes have been required to report weekly COVID-19 vaccination data for both residents and staff since May, and CMS has been posting the information on the CMS COVID-19 Nursing Home Data website at data.cms.gov/covid-19/covid-19-nursing-home-data. The addition of this new consumer-friendly data feature is another valuable tool for patients, residents and families to understand the quality of nursing homes when making healthcare decisions. ✦
CMS Launches New Medicare.gov Tool to Compare Nursing Home Vaccination Rates. Centers for Medicare & Medicaid Services press release, Sept. 21, 2021. Accessed at www.cms.gov/newsroom/press-releases/cms-launches-new-medicaregov-tool-compare-nursing-home-vaccination-rates.
Interpreting Payment Rule and Revenue Cycle Terminology
By Bonnie Kirschenbaum, MS, FASHP, FCSHP
MANY FIND information concerning payments for drugs, biologicals, radiologicals, vaccines and other products and supplies difficult to understand. The goal of this column, therefore, is to put into perspective some of the terms used in the payment rule sets: the inpatient rules take effect with the fiscal year on Oct. 1, while the outpatient and physician fee schedule rules take effect with the calendar year on Jan. 1.
Coding for Payment
Telling the patient’s story accurately and completely, in a manner that can be translated into codes, is essential. Since all payment transactions are transmitted electronically, the codes chosen must match what actually occurred during the patient visit/encounter/admission. This series of codes sent to the payer is not only used for payment but also becomes the clinical record that drives future decisions about treatment and payments.
The basis for transactions includes the disease state(s), problem list and symptoms the patient presents with, which are assigned very specific ICD-10-CM codes representing diagnosis classifications. In 2022, there are updates to files that need to be incorporated into provider systems to ensure the problem list is accurately represented (www.cms.gov/medicare/icd-10/2022-icd-10-cm). Failure to update will result in a denied payment due to lack of medical necessity.
Drugs, biologicals, vaccines, radiologicals and other products and services are reported to payers as healthcare common procedure coding system (HCPCS) and/or current procedural terminology (CPT) codes, along with national drug codes (NDCs). The list of HCPCS Level II codes and descriptors is approved and maintained jointly by the alphanumeric editorial panel/workgroup, whose members represent the Centers for Medicare and Medicaid Services (CMS), America’s Health Insurance Plans and the Blue Cross and Blue Shield Association. CPT codes and descriptions are copyrighted by the American Medical Association.
Category I CPT codes describe surgical procedures, diagnostic and therapeutic services, and vaccine codes, while Category III CPT codes describe new and emerging technologies, services and procedures. Level II HCPCS codes (also known as alphanumeric codes) identify drugs, devices, ambulance services, durable medical equipment, orthotics, prosthetics, supplies, temporary surgical procedures and medical services not described by CPT codes. Drugs and biologicals are found in sections A, C, J, P and Q. Often, the term “J” codes is used when referring to payment codes. However, looking in only the J section of the table misses listings in all the rest of the coding tables. For example, the most lucrative new pass-through drugs almost exclusively have C codes.
From a CMS outpatient perspective, drugs, biologicals, vaccines and other products are assigned status indicators (SI). These can be found in Addendum B, which is updated quarterly and contains thousands of line items. Pharmacy products are assigned G, K, N and R SIs; pass-through products are assigned SI G; separately payable outpatient drugs based on a daily dollar value threshold ($130 per day based on average sales price [ASP]) are assigned SI K; drugs that will be paid for as part of a bundle/package are assigned SI N; and all blood products are assigned SI R. (See www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/HospitalOutpatientPPS/Addendum-A-and-Addendum-B-Updates.)
More specifically, pass-through products are assigned a three-year transitional pass-through payment period, with additions and expirations updated quarterly. The Medicare, Medicaid and SCHIP Balanced Budget Refinement Act of 1999 (Pub. L. 106–113) provided pass-through payment provisions that require the Department of Health and Human Services to make additional payments to hospitals for current orphan drugs as designated under section 526 of the Federal Food, Drug and Cosmetic Act; current drugs, biologicals and brachytherapy sources used in cancer therapy; and current radiopharmaceutical drugs and biologicals. “Current” refers to those drugs or biologicals that were hospital outpatient services under Medicare Part B for which transitional pass-through payment was made on the first date the hospital outpatient prospective payment system (OPPS) was implemented. Transitional pass-through payments also are provided for certain new drugs and biologicals that were not being paid for as a hospital outpatient department service as of Dec. 31, 1996, and whose cost is “not insignificant” in relation to OPPS payments for the procedures or services associated with the drug or biological. For pass-through payment purposes, radiopharmaceuticals are included as drugs.
All drugs with a SI G designation are paid at ASP+6% regardless of whether a facility is purchasing under the 340B drug program or not. The key is to be aware of the expiration of this G status and plan accordingly because the HCPCS code assigned to the product may change and the new SI may be either K or N. SI K products remain at ASP+6% for non-340B facilities but fall to ASP-22.5% for those purchasing under the 340B program. SI N products are bundled and are no longer eligible for waste billing. An incorrect HCPCS code results in an automatic payment denial.
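The status indicator rules above can be sketched as a simple lookup. This is an illustrative sketch only, not an official CMS artifact; the rates reflect the rules described in this column for pharmacy products.

```python
def payment_rate(si: str, is_340b: bool) -> str:
    """Return the payment formula that applies to a drug's outpatient
    status indicator (SI), per the rules described above."""
    if si == "G":   # pass-through: ASP+6% regardless of 340B status
        return "ASP+6%"
    if si == "K":   # separately payable above the daily dollar threshold
        return "ASP-22.5%" if is_340b else "ASP+6%"
    if si == "N":   # packaged/bundled: no separate payment, no waste billing
        return "packaged (no separate payment)"
    if si == "R":   # blood and blood products
        return "blood/blood product payment rules"
    raise ValueError(f"unhandled status indicator: {si}")

print(payment_rate("G", is_340b=True))   # ASP+6%
print(payment_rate("K", is_340b=True))   # ASP-22.5%
```

Note how the same HCPCS code can pay very differently once a G status expires: the drug may move to SI K (where 340B purchasing cuts the rate) or to SI N (where it is bundled entirely), which is why tracking pass-through expirations matters.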
**Average Sales Price (ASP)**
ASP is a market-based price that is updated quarterly to reflect the weighted average of all manufacturer sales prices and includes all rebates and discounts privately negotiated between manufacturers and wholesaler/distributor purchasers (with the exception of Medicaid and certain federal discounts and rebates). It should be noted that ASP does not reflect the price a facility pays for the drug, which may be higher. CMS publishes quarterly updated fee schedules that include the 6 percent markup, which is the amount paid to facilities and practices not using 340B purchasing. Purchasing under 340B requires some simple arithmetic to calculate reimbursement. Remember, this applies only to SI K drugs. To determine ASP for SI K drugs, divide the published ASP+6% by 106 and then multiply by 100 — or simply multiply the published ASP+6% by 0.9434. Since 340B-purchased products are paid at ASP-22.5%, deduct 22.5 percent from the ASP just calculated to determine payment (see ASP Payment Example for 340B Reimbursement).
Keep in mind that for all payments regardless of 340B status, CMS pays 80 percent of the amount due, and the patient is responsible for the remaining 20 percent (either personally or through a secondary payer).
These updates are provided electronically and automatically to all facilities and practices eligible for CMS payments. Providers can sign up for complimentary online notifications of changes and updates (public.govdelivery.com/accounts/USCMS/subscriber/new?pop=t&topic_id=USCMS_7819).
**Sequestration**
Sequestration is an important concept to understand since it reduces Medicare reimbursement and all other government payments by 2 percent. Sequestration stems from the budget limits Congress created in the 2011 Budget Control Act. At that time, there was consensus to use sequester threats to force deficit-limit agreements. The threats didn’t work, and the sequester was implemented to cut spending from 2013 through 2021. Its expiration dates have since been extended repeatedly as each budget deficit looms larger (now into the 2030s).
How do past and present political squabbles affect facilities? The sequestration payment cut implemented in 2013 cut reimbursement by 2 percent for all government payments, including those for healthcare. This 2 percent reduction applies only to the 80 percent Medicare reimburses and not to the 20 percent patient co-pays.
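A minimal worked example of that split, using a hypothetical $100 allowed amount:

```python
def sequestered_medicare_payment(allowed_amount: float,
                                 sequestration_rate: float = 0.02):
    """Split an allowed amount into the CMS payment (reduced by the
    sequester) and the patient's 20% co-pay, which the sequester does
    not touch. Illustrative sketch of the rule described above."""
    cms_share = allowed_amount * 0.80        # CMS portion before the cut
    patient_share = allowed_amount * 0.20    # co-pay is unaffected
    return cms_share * (1 - sequestration_rate), patient_share

cms_paid, patient_owes = sequestered_medicare_payment(100.00)
# cms_paid = 78.40, patient_owes = 20.00: the 2% cut costs the provider
# $1.60, i.e. an effective 1.6% reduction on the total allowed amount.
```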
The COVID-19 pandemic prompted a pause of the 2 percent sequestration cut, a pause that has been extended several times. However, the proposed infrastructure bill discussions maintain a Dec. 31, 2021, expiration with no further extensions of the pause.
**Claims Denials Can Be Overcome**
The most common reasons for denied claims are incomplete claims and coding errors, coupled with failure to document medical necessity in the electronic record (or services that are not medically necessary). Understanding the terms discussed here and ensuring IT departments/providers stay compliant will help prevent these denials. Other payment denial issues include site-of-care shift rulings not recognized by a facility, multiple payers/stakeholders that are not recognized, payer-mandated step therapies and other commercial and Medicare Advantage payer requirements.
Accreditation Can Drive Business Capacity for Healthcare Organizations
By José Domingos
Ongoing Quality Improvement
For any healthcare organization, from a group practice to a corporate entity or hospital system, maintaining performance improvement should be the primary goal in seeking accreditation. Performance improvement is central to sustaining all other objectives — fulfilling legal requirements, attaining higher reimbursement and strengthening competitive advantage.
There is considerable evidence to show accreditation programs improve outcomes across a wide spectrum of clinical conditions.\(^1\) Actively engaging the entire organization — from administrators and practitioners to facility engineers and human resources — in a culture of improvement embeds the practice of accreditation into daily policies and procedures to improve the quality of care and strengthen the organization.
Quality improvement is a pervasive theme across accreditation standards, regardless of setting. The broad issues addressed may be rooted in patient safety and clinical care, but they are also building blocks of a high-performance organization. Elements include:
- Developing a broadly conceived program to touch every area of an organization through data collection activities. Whether employee-based or contracted service, there is very little operationally that cannot be covered by a comprehensive, effective quality improvement program.
- Attaching specific, measurable goals to each service area to establish data-driven, evidence-based protocols. Data for data’s sake is not useful. Context makes the data actionable.
- Fully communicating results to ensure engagement and establish accountability spanning from front-line staff through the governing body. At the staff level, quality data are collected and compared with past performance. At the management level, patterns are identified and recommendations are made to maintain a positive trajectory or adjust to correct off-target trends. The executive level holds ultimate responsibility for the quality of services delivered, and as the quality reporting is communicated upward, there is continuing evaluation of whether performance is serving to advance the organization’s mission and strategic goals.
In short, the more frequently organizations are thinking about accreditation, the easier it is to integrate the standards into daily, frontline activities and managerial decision-making. For executive leaders who embrace a performance improvement process as the nexus of their operating plan, an accreditation focus brings added value to business operations. Continuous, small course corrections are easier and more sustainable than instituting major overhauls when a survey is approaching. This principle applies equally to standards compliance and management of the business.
Optimizing Efficiencies
With healthcare organizations operating on slim margins, operational efficiency is critical to success. Administrators and other leaders are responsible for compliance with complex federal and state laws while simultaneously seeking to manage and reduce costs.
For an organization considering expansion, ensuring consistency in quality of care across all services and locations is essential. Whether a home health agency wants to expand into home infusion therapy or a physician group seeks a hospital partner for a joint venture in outpatient surgery, an accreditation resource offering comprehensive service solutions can support sustainable business growth. Taking an integrated approach promotes consistency of practice, optimizing efficiencies across service lines and locations.
Similarly, sharing best practices across service lines and/or facilities is a major benefit for an organization, regardless of size. For a system, a single accreditor facilitates internal benchmarking opportunities. For a smaller setting looking to expand service lines, it streamlines the launch process.
Using an already accredited facility as a template of quality care allows providers to adapt their model of success in other areas. With these best practices established, healthcare organizations also can demonstrate to investors the value of a new operation.
The documented benefits of accreditation are many and include enabling the establishment of better organizational structures and processes, promotion of quality and safety cultures and improvements in patient care.\(^2\) In a survey of health departments that had been accredited for one year, more than 90 percent reported experiencing benefits such as stimulation of quality improvement and performance improvement opportunities, increased accountability and transparency, and improved management processes.\(^3\)
Accreditation standards offer a framework to help organizations develop improved structures and operational excellence. Healthcare leaders should use the accreditation process to inform strategic management and operational decisions.
**Differentiating from Competitors**
Accreditation status can differentiate a healthcare organization within the community and offers significant competitive advantages. Achieving accreditation assures patients and potential partners that an organization provides the highest quality of care, giving them the confidence to choose your facility over one that is not accredited.
The ideal accreditor provides ongoing, comprehensive guidance and services to meet a range of needs such as recognition for specialties that distinguish facilities from their competitors. For example, a stroke center designation for a hospital means the local EMS can transport the patient to that facility knowing the patient will receive the specialized care necessary for quick assessment and treatment. This type of recognition focuses on the organization’s ability to provide a specialized service and stresses to the public the organization is dedicated to meeting the community’s need.
While accreditation standards are designed to meet federal and state requirements, healthcare providers should consider an implementation strategy that is customized and tailored to their organization to ensure adequate differentiation and relevant risk management. Ongoing access to accreditation resources, experts and education helps organizations identify high-risk areas and adjust to regulatory changes more smoothly and efficiently.
By using best practices and data collected to meet accreditation requirements, a process is already in place to adjust for risk or update methods and procedures to improve quality of care. This proactive approach to risk management should limit errors and lead to safer processes. As testament, many liability insurers recognize the benefits of accreditation and reduce premiums for accredited organizations.
Accreditation can be a vital tool to optimize and expand your healthcare business. Through ongoing support from an accreditation provider, an organization can realize the value of accreditation beyond the survey. Its optimal impact is achieved when an organization uses quality improvement and risk management to extend accreditation as a capacity-building tool.
**References**
1. Wols S. The Impact of Accreditation on Healthcare Quality Improvement: A Systematic Review Study. Journal of Health Organization and Management 2016; 31(1). doi:10.1108/JHOM-12-2015-0074.
2. Nicklin W, Furtado T, van Osterberg P, O’Connor M, and McCauley N. Leveraging the Full Value and Impact of Accreditation. International Journal for Quality in Health Care 2016; 28(2):102-104.
3. Kornstein J, Mehl M, Siegfried A, Hensley B, Bender K, and Carro L. Evaluating the Impact of National Public Health Department Accreditation — United States, 2016. Morbidity and Mortality Weekly Report. 65(30):804-808.
**JOSÉ DOMINGOS** is president and CEO of Accreditation Commission for Health Care (ACHC), a nonprofit healthcare accrediting organization with 35 years of experience promoting safe, quality patient care. ACHC develops solutions trusted by healthcare providers nationwide and is committed to offering exceptional, personalized service and a customized, collaborative accreditation experience tailored to individual needs. To reach José, email email@example.com or call (855) 937-2242. For more information about ACHC, visit www.achc.org.
Studies Reveal Antibody Responses of Pregnant Women Infected with SARS-CoV-2
Two recently published studies characterized the antibody responses of pregnant women infected with SARS-CoV-2 and the effect of fetal sex on those responses. The findings have direct clinical implications for COVID-19 infection, as well as for future maternal-fetal vaccination strategies.
One of the studies involved a systems serology approach to phenotype the anti-SARS-CoV-2 antibodies in the sera of pregnant, nonpregnant and lactating women following administration of mRNA-1273 or BNT162b2 COVID-19 vaccines. Results indicated pregnant women showed lower SARS-CoV-2 antibody titers, restricted IgG subclass responses and a decreased FcR-binding capacity following the first dose of the vaccine compared to nonpregnant women. However, minimal differences were observed after the second dose between pregnant and lactating women and nonpregnant women. Increased natural killer (NK) cell-activating antibodies following the second dose were observed only in lactating women.
Differences in responses to each mRNA vaccine formulation were also observed in pregnant women. For the mRNA-1273 vaccine, immune responses were enriched for neutrophil and NK cell-recruiting antibodies. In contrast, for the BNT162b2 vaccine, they were more enriched for less specific IgG1 and FcγRIIa-binding antibodies.
Concerning passive immunity, higher SARS-CoV-2 antibodies were observed in maternal sera compared to cord sera, most likely due to immunization at a later stage of the pregnancy. Additionally, this reduction in transfer may be due to a lower abundance of FcγRIIa-binding antibodies in pregnant women. However, in lactating women, higher antibodies with greater functional and FcR-binding qualities were observed after vaccination.
The other study investigated the antibody and antiviral interferon responses in COVID-19-infected and -uninfected pregnant women and whether the sex of the fetus had an impact on those responses. To determine the effect of fetal sex on the antibody response, the anti-SARS-CoV-2 antibody titers were quantified along with functions and specificities in maternal and cord blood sera of pregnancies with female and male fetuses.
Results indicated mothers carrying male fetuses had lower titers of IgG antibodies for all SARS-CoV-2-specific antigens. This suggests the fetal sex affects the maternal antibody responses. Furthermore, the transfer ratio of SARS-CoV-2 antibodies was lower in cord blood for male pregnancies compared to female pregnancies.
Placental staining and genome analyses were also conducted to determine whether sex-specific differences in placental FcR expression existed. Results indicated increased expression of FcRn, FcγRII and FcγRIII, as well as increased co-localization of FcRn and FcγRIII, in the male-derived placenta. Glycan profiling revealed that in male pregnancies, higher titers of antibodies were modified by glycosylation and fucosylation. Fucosylated antibodies are transferred less efficiently via FcγRIIa binding, which explains the lower IgG transfer in male pregnancies.
According to the researchers, the studies emphasize the need for incorporating pregnant women at different stages of gestation in clinical trials for the development of vaccines.
Ottav C, Semmes EG, and Coyne CB. Pregnancy Influences Immune Responses to SARS-CoV-2. *Science Translational Medicine*, Oct. 19, 2021. doi:10.1126/scitranslmed.a787701.
FDA Approves Avacopan to Treat Rare Autoimmune Disease
The U.S. Food and Drug Administration (FDA) has approved ChemoCentryx Inc.’s Avacopan, sold under the brand name Tavneos, to treat antineutrophil cytoplasmic antibody-associated vasculitides — a group of conditions characterized by destruction and inflammation of small blood vessels and affecting different organs, particularly the kidney. Avacopan works by blocking the activity of a protein called C5a receptor that is responsible for causing numerous inflammatory diseases.
The company received mixed reviews from an expert panel to the FDA in May, with the committee’s vote split 9-9 on whether the efficacy data supported the drug’s approval.
ChemoCentryx Gets U.S. FDA Nod for Drug to Treat Rare Autoimmune Disease. Reuters. Oct. 8, 2021. Accessed at leaderpost.com/pmn/business-pmn/chemocentryx-gets-u-s-fda-nod-for-drug-to-treat-rare-autoimmune-disease.
Medicines
Kedrion to Market RYPLAZIM to Treat Rare Disease
Kedrion Biopharma, an international biopharmaceutical company specialized in the manufacture and distribution of plasma-derived therapeutic products used in treating rare and serious diseases, is now marketing and distributing RYPLAZIM (plasminogen human-tvhm) in the United States to treat plasminogen deficiency type 1, also known as C-PLGD, an ultra-rare condition affecting fewer than 2,000 people in the U.S. A lifelong disease, C-PLGD produces its most severe symptoms in infants and children. And, given its rarity, the condition is probably underdiagnosed in the U.S.
“The most important mission at Kedrion Biopharma is to improve the lives of people with rare and serious diseases,” said Val Romberg, CEO. “As the newest addition to our growing portfolio of products, RYPLAZIM is an excellent example of that dedication. RYPLAZIM meets an urgent unmet medical need for people who face plasminogen deficiency type 1, a potentially devastating, but treatable, medical condition. We are pleased and gratified to be in a position now to help these patients.”
Research
IVIG Plus Glucocorticoids Effective for Treating COVID-19 Pediatric Syndrome
A large multicenter clinical trial has found intravenous immune globulin (IVIG) plus glucocorticoids may be better than IVIG alone for treating multisystem inflammatory syndrome in children (MIS-C) caused by COVID-19.
In the study, 596 patients with MIS-C were treated at one of 58 U.S. hospitals, 87 percent (518) of whom were treated with at least one immunomodulatory agent. The median age of the patients was 8.7 years. More than half of the patients (286; 55 percent) had involvement of five or more organ systems, and 196 (38 percent) met the complete or incomplete criteria for Kawasaki disease, a vasculitis of childhood that the investigators noted has some overlapping presentations with MIS-C and responds well to IVIG therapy, the standard of care for the disease.
The primary outcome of the study was cardiovascular dysfunction, a composite of left ventricular dysfunction or shock resulting in the use of vasopressors, on or after day two of therapy. Secondary outcomes included the need for adjunctive treatments such as a glucocorticoid in patients not already receiving them, a biologic or a second dose of IVIG, and a persistent or recurrent fever.
Results showed initial treatment with IVIG plus glucocorticoids (103 patients) was associated with a lower risk for cardiovascular dysfunction on or after day two than IVIG alone (103 patients). The risks of the components of the composite outcome also were lower among those who received IVIG plus glucocorticoids: Left ventricular dysfunction occurred in 8 percent and 17 percent of the patients, respectively. The incidence of shock resulting in vasopressor use also was lower in the IVIG plus glucocorticoid regimen: 13 percent versus 24 percent with IVIG alone. The use of adjunctive therapy was lower among patients who received IVIG plus glucocorticoids than among those who received IVIG alone (34 percent vs. 70 percent), but the risk for fever was unaffected (31 percent and 40 percent).
Methylprednisolone was the most common glucocorticoid prescribed (353 patients; 68 percent), administered at a dose of 2 mg/kg of body weight per day in 284 of the patients (80 percent), and in pulse doses of 10 mg/kg to 30 mg/kg of body weight per day in 69 patients (20 percent).
The researchers acknowledged earlier studies have shown glucocorticoids and IVIG may be an effective regimen for MIS-C, but in many cases those studies included fewer patients and produced less pronounced results. A French study, for example, “suggested” a lower incidence of cardiovascular dysfunction. “In our larger U.S. cohort, we confirmed that cardiovascular function was better, and the incidence of administration of adjunctive treatments was lower” among patients given the combined regimen versus those given IVIG alone.
Researchers Move Closer to Classifying Long COVID as an Autoimmune Disease
In a study published in September, researchers suggested some people who get COVID-19 develop autoantibodies that attack their own proteins, a hallmark of many autoimmune diseases, which leads to inflammation that could trigger long COVID. Now, the National Institutes of Health is conducting a $470 million study to determine why COVID-19 symptoms persist for so long among many patients.
In the study, the researchers analyzed blood samples from 32 COVID-19 patients who donated plasma to the University of Arkansas, and another 15 who had been hospitalized there. Approximately 81 percent of the plasma donors and 93 percent of the hospitalized patients had developed a particular autoantibody that inhibited their ACE2 enzymes, which serve as ports of entry for the coronavirus to invade the body’s cells, but they’re also vital to calming the immune system down. When not enough ACE2 is present, the immune system can produce too much inflammation. “It’s the inhibition of that ACE2 enzyme that basically is plugging up the system,” said John Arthur, MD, PhD, a researcher at the University of Arkansas for Medical Sciences. “It’s like if you’ve got a bunch of hair in the drain and the water starts to accumulate on top.”
However, more research is needed to determine whether these ACE2 antibodies cause long COVID. Researchers also aren’t sure yet whether severe infections produce more autoantibodies than mild ones. A May study found that to be the case, but Dr. Arthur noted that long COVID is also common among people whose infections were initially mild.
If the theory that long COVID is an autoimmune disease holds, it would have implications for COVID-19 treatments. Certain blood-pressure medications, for instance, could be used to stifle the harmful cascade of inflammation. And there’s already some evidence that vaccines help alleviate long COVID symptoms, perhaps because they help regulate the antibody response.
Bendix A. Scientists Are Getting Closer to Classifying Long COVID as an Autoimmune Disease. Business Insider. Sept. 11, 2021. Accessed at www.businessinsider.in/science/news/scientists-are-getting-closer-to-classifying-long-covid-as-an-autoimmune-disease/articleshow/86301277.cms.
Sponsor a child with hemophilia
It's rewarding and teaches unforgettable lessons
Facing another morning infusion, 10-year-old Andrew* looks at the picture of his beneficiary, 12-year-old Abil from the Dominican Republic, and sees Abil's swollen knees from repeated untreated bleeds. Each time this reminds Andrew just how fortunate he is to live in a country with factor.
Become part of our world family. A sponsorship is only $22 a month!
A child is waiting for you at: www.saveonelife.net
Or email: firstname.lastname@example.org
* name has been changed
Making a Difference in Our Patients’ Lives...
so he can say “I do”
so he can witness her first steps
so he can cheer her on when she graduates
so he can enjoy time with his grandchildren
Nufactor is committed to exceptional customer service, product and patient safety, and secure product availability and affordability. Excellence is our standard, and we’ve earned the most respected name in homecare. Our customers know we care about them, and that makes all the difference.
Nufactor Specialty Pharmacy has earned the Joint Commission Gold Seal of Approval.
(800) 323-6832
www.nufactor.com
©2022 Nufactor, Inc. is the specialty pharmacy subsidiary of FFF Enterprises, Inc., the nation’s most trusted distributor of plasma products, vaccines and other biopharmaceuticals.
The SARS-CoV-2 virus has not only caused more than 44 million cases of illness and over 700,000 deaths\(^1\) in the United States, it has also wreaked havoc on the nation’s healthcare system. Despite extensive pandemic preparedness plans, the healthcare system was unprepared for the COVID-19 pandemic, which caused widespread adverse effects on medical resources, from healthcare, staffing and revenue shortages to supply chain management challenges, all of which hindered the nation’s ability to provide specialized care for COVID-19 patients.
These shortages and challenges have cost the healthcare system hundreds of billions of dollars, and costs are expected to continue into the future. According to a recent article, “The pandemic is expected to cause a $3.3 trillion deficit in 2020, which is about 15 percent of the United States’ gross domestic product.”\(^2\) And,
adds McKinsey & Company, a healthcare system and services management consulting firm, “While the direct impact of COVID-19 has already been substantial, additional layers of delayed or indirect impact have the potential to dwarf the immediate effects. These additional layers of impact related to COVID-19 could result in $125 billion to $200 billion in incremental annual U.S. health system cost.”
**Declines in Healthcare Visits/Procedures**
Due to fears of contracting the SARS-CoV-2 virus and its more deadly variants such as Delta, many patients decided not to visit hospitals, resulting in delayed or canceled routine or emergency treatments, including surgeries. Coupled with undulating surges of COVID-19 patients at hospitals, this caused extensive healthcare shortages. According to McKinsey & Company, a recent survey it conducted showed U.S. hospital patient volumes moved back to 2019 levels in June 2021. “From March 2020 through July 2021, private sector systems surveyed in the U.S. reported, on average, between a 5 and 15 percent decrease in volumes by site of care compared to 2019 levels. Over this 17-month period, survey respondents reported that procedural volumes were down 13 percent; outpatient visits were down 13 percent; emergency room visits were down 12 percent; and inpatient admissions were down 7 percent,” says John Schulz, associate partner at McKinsey & Company.
**Staffing Shortages**
For several decades, the United States has faced a severe, chronic shortage of nurses. Unfortunately, the COVID-19 pandemic exacerbated this shortage, and it will continue to do so until the pandemic is long over. Even the substantially reduced patient visits and procedures during the first, second and third waves were not enough to quell the ever-growing nursing shortage, which only deepened during the fourth wave. In fact, countless nurses have left their jobs due to forced overtime, burnout and fear of contracting the SARS-CoV-2 virus.
To gauge how serious the nursing shortage is, in August 2021, the American Association of Critical-Care Nurses surveyed 6,000 critical care nurses about the pandemic’s impact on their careers; 66 percent said their experiences during the pandemic have caused them to consider leaving nursing.
On Sept. 1, 2021, the American Nurses Association (ANA), which represents 4.2 million nurses, urged the U.S. Department of Health and Human Services (HHS) “to declare the current and unsustainable nurse staffing shortage facing our country a national crisis.” Included in ANA’s letter is a directive that HHS must “convene stakeholders to identify short- and long-term solutions to staffing challenges to face the demand of the COVID-19 pandemic response.”
Two weeks later, ANA publicly supported the federal government’s “Path Out of the Pandemic: President Biden’s COVID-19 Action Plan” announced Sept. 7. “ANA supports the Biden Administration plan to use every lever to increase the number of people vaccinated as the only way to get out of this crisis [pandemic],” said ANA President Ernest Grant, PhD, RN, FAAN. Increasing the number of people getting the COVID-19 vaccine is expected to help ease the current Delta surge being experienced by hospitals and reduce the pressure and stress on nurses who care for COVID-19 patients.
On Oct. 14, 2021, it was announced the Biden Administration would direct $100 million to the National Health Service Corps to help address the healthcare worker shortage. The announcement came after the loss of 17,500 U.S. healthcare employees in September, according to the Bureau of Labor Statistics. In addition, the agency reported the country has lost 524,000 healthcare employees since the start of the pandemic, with the industry’s employment sitting at just under 16 million. The biggest job losses in the industry in September occurred in nursing, hospitals and residential care.
In McKinsey & Company’s 2021 Future of Work in Nursing survey, it found 22 percent of nurses indicated they might leave their current position of providing direct patient care in the next year, with more than half reporting they were seeking another career path, a nondirect care role or retirement. Gretchen Berlin, a senior partner at McKinsey & Company,
said the July 2021 survey of 100 private sector hospitals found operational leaders reported nursing turnover in the second quarter of 2021 was up 4.7 percentage points, and the nursing vacancy rate was up 3.7 percentage points (Figure).\(^9\)
In addition, said Berlin, research conducted earlier in the pandemic (September 2020) found physicians are also experiencing burnout, which can contribute to shortages: “Almost 43 percent of the respondents reported experiencing burnout to some extent. Physicians reported seeing more medical complications, negative economic impact and higher costs as a result of patients putting off necessary care. A majority of the respondents said they are worried about their practice making it through the COVID-19 pandemic, and about a third of the respondents said that they are more likely to pursue a partnership with a larger organization, preferably with a health system, primarily for financial stability reasons.”
On a positive note, the American Association of Colleges of Nursing reported a 5.6 percent increase in 2020 nursing student enrollment.\(^{10}\) And, the Association of American Medical Colleges reported a 1.7 percent increase in first-year students in the 2020 academic year and an 18 percent increase in medical student enrollment in 2021.\(^{11}\)
**Revenue Shortages**
With patient volumes down and hospitals experiencing multiple surges of COVID-19 patients over the previous 18-month period, revenues were understandably down. And although revenues are slowly returning to pre-pandemic levels, the amount of revenue lost during the four waves of the pandemic over a two-year period might never be recovered.
KaufmanHall, a healthcare management consulting firm, released its 2021 Healthcare Performance Improvement Report in October, which found “volumes in many service lines remain below pre-pandemic levels, putting downward pressure on revenues.” One highlight of the report was that “75 percent have experienced adverse revenue cycle impacts during the pandemic, including a higher percentage of Medicaid patients and increased rates of denial.”\(^{12}\)
According to two other recent reports from KaufmanHall, “a resurgence of COVID-19 cases from rapid spread of the highly contagious Delta variant is raising new uncertainties for hospitals, health systems and physician practices across the country.”\(^{13}\) The company’s September 2021 National Hospital Flash Report, which draws on data from more than 900 hospitals, says the spread of the hyper-transmissible Delta variant continued to strain hospitals and healthcare systems nationwide in August.

**Figure. Factors Influencing Nurses’ Decision to Leave Their Job** (% of respondents, n = 314)

| Important | Neutral | Not important | Factor |
|-----------|---------|---------------|--------|
| 59 | 33 | 8 | Insufficient staffing levels |
| 56 | 37 | 7 | Demanding nature/intensity of workload |
| 54 | 39 | 7 | Emotional toll of job |
| 51 | 40 | 9 | Don’t feel listened to or supported at work |
| 50 | 37 | 13 | Physical toll of job |
| 46 | 45 | 9 | Family needs and/or other competing life demands |
| 43 | 45 | 12 | Seeking higher paid position |
| 42 | 39 | 19 | Insufficient personal protective equipment |
| 38 | 38 | 24 | Retirement |
| 37 | 52 | 11 | Too much uncertainty or lack of control |
| 30 | 53 | 18 | Lack of respect from some patients or their families |
| 26 | 46 | 28 | Don’t feel prepared or trained sufficiently |
| 23 | 46 | 31 | Fear of COVID-19 infection for self or family |
| 21 | 57 | 22 | Don’t see an appealing professional development pathway |
However, more than a year and a half into the pandemic, even though COVID-19 continues to undermine performance improvement efforts, revenues are starting to rise. “Given the increase in higher acuity cases and yearly rate changes, U.S. hospitals saw revenues increase year-to-date compared to both 2019 and 2020 for a sixth consecutive month,” states the report. “Gross operating revenue rose 9.6 percent year-to-date versus 2019 and 16.6 percent year-to-date versus 2020 [not including the Coronavirus Aid, Relief and Economic Security Act]. Outpatient revenue saw the biggest increases at 10 percent year-to-date versus 2019 and 20.3 percent versus 2020, while inpatient revenue was up 5.6 percent year-to-date compared to 2019 and 11.8 percent year-to-date compared to 2020.”\(^{14}\)
In addition, KaufmanHall’s August 2021 Physician Flash Report, which draws on data from nearly 100,000 providers representing more than 100 specialties, shows “physician groups across the country saw productivity and revenue improvements in the second quarter compared to the same period in 2020 and to pre-pandemic levels seen in the fourth quarter of 2019. However, significant increases in expenses and continued high levels of physician investment compared to the pre-pandemic period remain areas of concern. The changes are among multiple dramatic swings experienced across key physician performance metrics for the quarter, especially compared to the second quarter of 2020 when nationwide shutdowns and widespread concerns over potential exposure to the virus caused patient visits to plummet at the start of the COVID-19 pandemic.”\(^{15}\)
As of March 1, 2021, HHS’s $178 billion provider relief fund gave almost all Medicare-enrolled healthcare providers grants that amounted to at least 2 percent of their previous annual patient revenue, which can be used to cover lost revenue and unreimbursed costs associated with the pandemic.\(^{16}\) However, the grant is not a full representation of the costs of the pandemic. Additional costs include indirect costs borne across several dimensions, including increased caregiver turnover and clinical costs associated with patients whose medical conditions worsened during the last 18 months. There are also costs from projects that were stalled or revamped due to the pandemic. For example, hospitals in the process of redesigning waiting rooms might have pivoted to allow for more social distancing or screening capabilities. Other hospitals might have reevaluated their need for the number of airborne infection isolation rooms or more air filtration.
“Our healthcare system is still learning the full breadth and scale of these effects of the pandemic, and the overall cost to the healthcare system remains uncertain, especially as additional variants continue to emerge and we gain a greater understanding of complications from the virus, including long-haul patients,” says Neil Rao, a partner at McKinsey & Company.
**Supply Chain Management Challenges**
In addition to patient, staffing and revenue shortages, the healthcare system also experienced abrupt adverse challenges in its supply chain management system. Many of these challenges were rooted in how the system operated prior to the pandemic.
One of those challenges is that the United States healthcare system is designed to provide highly individualized care for complex diseases such as cancer or disorders of the central nervous system. When a pandemic causes mass illness of a specific organ system, as COVID-19 has done with the respiratory system, it stresses the healthcare system far beyond what it was prepared for.
According to Daniel Moskovic, a partner at McKinsey & Company, the personal protective equipment and ventilator shortages experienced in the COVID-19 pandemic could theoretically have been mitigated by maintaining adequate/more supplies, rapidly introducing new supplies and/or putting into place product utilization and reengineering protocols. “The first two strategies would require substantial investment, which is unfavorable in an overall push to reduce healthcare costs year-to-year;
Most Trusted
In Specialty Distribution Since 1988
At FFF Enterprises, we know that at the end of every transaction, there’s a patient waiting for that product. That’s why you can count on us to supply your patients with safe, reliable medications at the best prices available.
(800) 843-7477 | FFFENTERPRISES.COM
©2022 FFF Enterprises Inc.
Healthcare Disrupted:
Transitioning Primary Care, Diagnostics and Chronic Disease Management to the Retail Healthcare Sector
Retail health centers are increasingly offering convenient care at lower prices, but debate surrounds their entry into the primary care arena.
By Amy Scanlin, MS
In March 2010, the healthcare industry changed when the Affordable Care Act (ACA) was enacted and a flood of newly insured became empowered to seek, consider and choose their own healthcare options. This historic event, coupled with a near simultaneous advancement in healthcare technology (electronic health records [EHRs], telehealth and wearables capable of tracking and reporting data without user intervention) turned the industry on its head. Ten-plus years later, this newly engaged public has evolved in many respects into a new type of patient: the healthcare consumer.
Healthcare consumers seek simplicity and efficiency; they want convenient office hours and clear pricing. Enter the retail health center (RHC), a growing big-box and stand-alone trend that is filling voids and drawing interest, as well as raising questions. For some healthcare consumers, the lures of an RHC are convenience of location and availability of providers. For others, the lures are simple and more affordable pricing structures.
When RHCs first arrived on the scene in 2000, there were some considerable unknowns. For instance, would they cause care to be fragmented? How would they use EHRs, and would there be compatibility issues with other EHR systems? Yet, despite these unknowns, RHCs have continued to grow, serving an unmet healthcare need, particularly with declining numbers of primary care physicians.
The Doctor (or Physician Assistant/Nurse Practitioner) Will See You Now
Providers must deliver on patient needs. When operating hours and perceived level of care, including scheduling and billing, don’t meet patient expectations, the inclination may be to seek care elsewhere. This is where RHCs are gaining market share.
In turn, traditional healthcare is attempting to meet healthcare consumers’ needs by providing extended hours, easier appointment scheduling (including online portals) and improved access to telehealth. But expansion of hours and services isn’t always easy, particularly considering the prohibitive cost of staffing and technology. More than half of healthcare visits occur on weekends and holidays, hours RHCs seem better able to accommodate. “RHCs are not urgent care clinics,” stresses Nate Bronstein, COO of the Convenient Care Association (CCA). “We are not replacing doctors; we play an expanded role in the continuum of health.”
Originally created to treat limited acute conditions, RHCs have in many cases expanded facilities and services to routine care and management of chronic conditions. In fact, they are often patients’ first contact with the healthcare system. Generally, they are located within a 10-mile radius of nearly 50 percent of the population, and approximately 60 percent of their 50 million patients do not have an established primary care provider. According to Tine Hansen-Turton, founding executive director of CCA, about 40 percent to 60 percent of those seeking care in RHCs do so for primary or chronic conditions.
**Practice Authority**
More than half of U.S. states and the District of Columbia have passed legislation permitting full practice authority for nurse practitioners (NPs), meaning they can evaluate, diagnose, order and interpret diagnostic tests and initiate and manage treatments for patients, including prescribing medications, under the exclusive licensure authority of their state board of nursing. According to the American Academy of Nurse Practitioners (AANP), those states without full practice authority generally see greater geographic healthcare disparities, higher chronic disease burdens, primary care shortages, higher costs of care and lower standings on national health rankings. For example, Bronstein notes that Texas and Florida, the two states with the greatest number of RHCs, also have the greatest number of health disparities.
In these more restrictive states, NPs working in RHCs provide care under the remote supervision of an established medical practice, so they are not permitted to see patients and prescribe treatments without physician oversight. According to CCA, the fewer providers available, the more expensive these RHC practices become thanks to increasing collaborative agreement fees, insurance and other needed resources. CCA says the additional overhead could be as much as 5 percent to 10 percent. But, “that hasn’t impacted the model,” says Bronstein. “There are still more clinics needed.” Even so, he says, by granting full NP and physician assistant (PA) practice authority, the U.S. healthcare provider shortage could be reduced by 89 percent.
**Out of the Box**
But the American Medical Association (AMA) disagrees. AMA takes issue with RHCs as a solution to primary care shortages, particularly in underserved communities. In its opinion, the level of care offered in RHCs is below that of traditional healthcare facilities.
And, while the American Academy of Family Physicians (AAFP) encourages use of RHCs, it does not think it should be at “the expense of the comprehensive, coordinated and longitudinal care available through a medical home.” In AAFP’s view, chronic care management and comprehensive longitudinal care should be provided by a primary care physician and medical home team, not by a retail clinic. In addition, it says in cases where certain chronic conditions could be managed in retail clinics, care management should only be under a collaborative agreement between the patient’s primary care physician and the retail healthcare facility specifying the “guidelines, procedures and protocols to be used to provide such care.”
Further, AMA urges patients seeking treatment in RHCs to become informed about the qualifications of the staff providing treatment, as well as their limitations in diagnosis and treatment. It also recommends RHCs have an established referral mechanism in the event the scope of care is beyond that of the practitioner or retail clinic.
However, with 89 percent of practicing NPs receiving training in primary care settings, AANP believes NPs play a significant role in providing patients a viable healthcare option. It cites satisfaction surveys that rate NP care as equal or superior to physician care for the same problems, and notes that NPs make up the most rapidly growing component of the primary care workforce.
CCA agrees NPs and PAs fill valuable primary care roles, and that establishing relationships with the larger healthcare community is essential. Citing strong partnerships between member RHCs and hospitals, including RHCs established by healthcare systems, Hansen-Turton says, “we are a national referral service for them.”
Introducing ALBUTEIN FlexBag™
NOW AVAILABLE
ALBUTEIN is more convenient than ever
- Easy-to-open protective overwrap
- No requirement for vented infusion sets or filters
- Flexible container that allows flexible storage
- Easy-to-remove twist-off cap
- Both the ALBUTEIN FlexBag flexible container and protective overwrap are latex-free and do not contain polyvinyl chloride (PVC), diethylhexyl phthalate (DEHP), or other plasticizers
- Durable and easy-to-spike port designed to avoid needle sticks
Order ALBUTEIN FlexBag™ 5% and 25% today!
The first and only 5% albumin in a 500 mL bag
| 25% SIZES | 5% SIZES |
|-----------|----------|
| 50 mL | 250 mL |
| 100 mL | 500 mL |
Please see Important Safety Information and brief summaries of full Prescribing Information for ALBUTEIN FlexBag 5% and 25% on adjacent pages.
Important Safety Information
ALBUTEIN® 25% (albumin [human] U.S.P.) is indicated for: hypovolemia, cardiopulmonary bypass procedures, acute nephrosis, hypoalbuminemia, ovarian hyperstimulation syndrome, neonatal hyperbilirubinemia, adult respiratory distress syndrome (ARDS), and prevention of central volume depletion after paracentesis due to cirrhotic ascites.
ALBUTEIN® 5% (albumin [human] U.S.P.) is indicated for: hypovolemia, cardiopulmonary bypass procedures, hypoalbuminemia, and plasma exchange.
ALBUTEIN 5% and 25% are contraindicated in patients with a history of hypersensitivity to albumin preparations or to any of the excipients, and in patients with severe anemia or cardiac failure with normal or increased intravascular volume.
Allergic or anaphylactic reactions require immediate discontinuation of the infusion and implementation of appropriate medical treatment.
Hypervolemia may occur if the dosage and rate of infusion are not adjusted to the patient’s volume status. At the first clinical signs of fluid overload, the infusion must be slowed or stopped immediately. Use albumin with caution in conditions where hypovolemia and its consequences or hemodilution could represent a special risk to the patient.
The colloid-osmotic effect of human albumin 25% is approximately five times that of blood plasma. Therefore, when concentrated albumin is administered, care must be taken to assure adequate hydration of the patient. Patients should be monitored carefully to guard against circulatory overload and hyperhydration. Patients with marked dehydration require administration of additional fluids.
Concentrated (20% - 25%) human albumin solutions are relatively low in electrolytes compared to 4% - 5% human albumin solutions. Regularly monitor the electrolyte status of the patient and take appropriate steps to restore or maintain the electrolyte balance when albumin is administered.
Regular monitoring of coagulation and hematology parameters is necessary if comparatively large volumes are to be replaced. Care must be taken to ensure adequate substitution of other blood constituents (coagulation factors, electrolytes, platelets and erythrocytes).
Regularly monitor hemodynamic parameters during administration of ALBUTEIN® 5% and 25% (albumin [human] U.S.P.).
ALBUTEIN 5% and 25% must not be diluted with sterile water for injection as this may cause hemolysis in recipients.
Albumin is a derivative of human blood. Based on effective donor screening and product manufacturing processes, it carries an extremely remote risk for transmission of viral diseases. A theoretical risk for transmission of Creutzfeldt-Jakob disease (CJD) is also considered extremely remote. No cases of transmission of viral diseases or CJD have ever been identified for ALBUTEIN 5% or 25%.
The most serious adverse reactions with use of albumin are anaphylactic shock, heart failure and pulmonary edema. The most common adverse reactions are anaphylactoid type reactions. Adverse reactions to ALBUTEIN normally resolve when the infusion rate is slowed or the infusion is stopped. In case of severe reactions, the infusion should be stopped and appropriate treatment initiated.
Please see accompanying full Prescribing Information for ALBUTEIN 5% and 25%.
ALBUTEIN
FlexBag 5% (albumin [human] U.S.P.)
5% solution
These highlights do not include all the information needed to use ALBUTEIN FlexBag 5% safely and effectively. See full prescribing information for ALBUTEIN FlexBag 5%.
ALBUTEIN FlexBag 5% (albumin [human] U.S.P.)
5% solution
Initial U.S. Approval: 1978
INDICATIONS AND USAGE
ALBUTEIN 5% is an albumin solution indicated for:
• Hypovolemia.
• Cardiopulmonary bypass procedures.
• Hypoalbuminemia.
• Plasma exchange.
DOSAGE AND ADMINISTRATION
For Intravenous Use Only
Dosage and infusion rate should be adjusted to the patient’s individual requirements.
| Indication | Dose |
|-----------------------------|----------------------------------------------------------------------|
| Hypovolemia | Adults: Initial dose of 20 g (including renal dialysis). For acute liver failure: initial dose of 12 to 25 g. |
| Cardiopulmonary bypass procedures | Adults: Initial dose of 25 g. |
| Hypoalbuminemia | Adults: 50 to 75 g. For pre- and post-operative hypoalbuminemia: 50 to 75 g. For burn therapy after the first 24 h: initial dose of 25 g and dose adjustment to maintain plasma protein concentration of 2.5 g per 100 mL. Third space protein loss due to infection: initial dose of 50 to 100 g. |
| Plasma exchange | The dose required depends on the volume of plasma removed during the procedure. |
Do not dilute with sterile water for injection as this may cause hemolysis in recipients.
DOSAGE FORMS AND STRENGTHS
ALBUTEIN 5% is a solution containing 50 g per L of total protein of which at least 95% is human albumin.
CONTRAINDICATIONS
• Hypersensitivity to albumin preparations or to any of the excipients.
• Severe anemia or cardiac failure with normal or increased intravascular volume.
WARNINGS AND PRECAUTIONS
• Suspicion of allergic or anaphylactic reactions requires immediate discontinuation of the injection and implementation of appropriate medical measures.
• Hypervolemia may occur if the dosage and rate of infusion are not adjusted to the patient's volume status. Use with caution in conditions where hypovolemia and its consequences or hemodilution could represent a special risk to the patient.
• Monitor electrolytes, coagulation and hematology parameters, and hemodynamic status when albumin is given.
• Do not mix with sterile water for injection.
• This product is made from human plasma and may contain infectious agents, e.g., viruses and, theoretically, the Creutzfeldt-Jakob disease agent.
ADVERSE REACTIONS
The most common adverse reactions are anaphylactoid type reactions.
To report SUSPECTED ADVERSE REACTIONS contact Grifols Biologicals LLC at 1-800-GRIFOLS (1-800-474-3657) or FDA at 1-800-FDA-1088 or www.fda.gov/medwatch.
USE IN SPECIFIC POPULATIONS
• Pregnancy: No human or animal data. Use only if clearly needed.
Manufactured by:
Grifols Biologicals LLC
5555 Valley Boulevard
Los Angeles, CA 90032, U.S.A.
U.S. License No. 1894
Revised: 07/2021
ALBUTEIN
FlexBag 25% (albumin [human] U.S.P.)
25% solution
These highlights do not include all the information needed to use ALBUTEIN FlexBag 25% safely and effectively. See full prescribing information for ALBUTEIN FlexBag 25%.
ALBUTEIN FlexBag 25% (albumin [human] U.S.P.)
25% solution
Initial U.S. Approval: 1978
INDICATIONS AND USAGE
ALBUTEIN 25% is an albumin solution indicated for:
• Hypovolemia.
• Cardiopulmonary bypass procedures.
• Acute nephrosis.
• Hypoalbuminemia.
• Neonatal hyperbilirubinemia.
• Ovarian hyperstimulation syndrome.
• Adult respiratory distress syndrome (ARDS).
• Prevention of central volume depletion after paracentesis due to cirrhotic ascites.
DOSAGE AND ADMINISTRATION
For Intravenous Use Only
Dosage and infusion rate should be adjusted to the patient’s individual requirements.
| Indication | Dose |
|-----------------------------|----------------------------------------------------------------------|
| Hypovolemia | Adults: Initial dose of 25 g (including renal dialysis). For acute liver failure: initial dose of 12 to 25 g. |
| Cardiopulmonary bypass procedures | Adults: Initial dose of 25 g. |
| Acute nephrosis | Adults: 25 g together with diuretic once a day for 7 - 10 days. |
| Hypoalbuminemia | Adults: 50 to 75 g. For pre- and post-operative hypoalbuminemia: 50 to 75 g. For burn therapy after the first 24 h: initial dose of 25 g and dose adjustment to maintain plasma protein concentration of 2.5 g per 100 mL. Third space protein loss due to infection: initial dose of 50 to 100 g. |
| Ovarian hyperstimulation syndrome | Adults: 50 to 75 g over 4 hours and repeated at 4-12 hour intervals as necessary. |
Do not dilute with sterile water for injection as this may cause hemolysis in recipients.
DOSAGE FORMS AND STRENGTHS
ALBUTEIN 25% is a solution containing 250 g per L of total protein of which at least 95% is human albumin.
CONTRAINDICATIONS
• Hypersensitivity to albumin preparations or to any of the excipients.
• Severe anemia or cardiac failure with normal or increased intravascular volume.
WARNINGS AND PRECAUTIONS
• Suspicion of allergic or anaphylactic reactions requires immediate discontinuation of the injection and implementation of appropriate medical measures.
• Hypervolemia may occur if the dosage and rate of infusion are not adjusted to the patient's volume status. Use with caution in conditions where hypervolemia and its consequences or hemodilution could represent a special risk to the patient.
• When concentrated albumin is administered, care must be taken to assure adequate hydration of the patient.
• Monitor electrolytes, coagulation and hematology parameters, and hemodynamic status when albumin is given.
• Do not mix with sterile water for injection.
• This product is made from human plasma and may contain infectious agents, e.g., viruses and, theoretically, the Creutzfeldt-Jakob disease agent.
ADVERSE REACTIONS
The most common adverse reactions are anaphylactoid type reactions.
To report SUSPECTED ADVERSE REACTIONS contact Grifols Biologicals LLC at 1-800-GRIFOLS (1-800-474-3657) or FDA at 1-800-FDA-1088 or www.fda.gov/medwatch.
USE IN SPECIFIC POPULATIONS
• Pregnancy: No human or animal data. Use only if clearly needed.
Manufactured by:
Grifols Biologicals LLC
5555 Valley Boulevard
Los Angeles, CA 90032, U.S.A.
U.S. License No. 1894
Revised: 05/2019
Oversight bodies agree on adherence to certain standards governing the setup and operation of RHCs, most importantly the regulatory, certification and education requirements specific to the state in which care is delivered. Standards such as the use of evidence-based guidelines for diagnosing and treating patients, use of appropriate EHRs, and evaluation of quality of care through peer and collaborating-physician reviews and patient satisfaction surveys are in some cases law and in others best practice.
**Bridging the Gap**
For both traditional care settings and RHCs, opportunity can only exist in a proactive relationship in which established primary care patients know where to turn in the event care is needed outside of normal business hours. RHCs must have trusted places to turn when in-depth care is needed or when patients prefer a physician.
Established relationships also reduce the risk of fragmented care. As in the telephone game, the more relay points a message passes through, the more diluted it becomes. With patients' consent, notification and forwarding of records to a primary care provider can be automatic, reducing the risk of information gaps and duplicated treatment protocols. The mere establishment of relationships is not a panacea for fragmentation, however: the more care access points, the greater the risk of information lost in transit or in translation. Dialogue with the patient and any outside providers is the key to closing those gaps.
Importantly for the stressed healthcare system, partnerships between hospitals, doctor offices and RHCs can help to reduce hospital readmissions, particularly when patients cannot get in to see their primary care providers. RHCs are also a viable option for follow-up visits within 30 days of hospital release, which are associated with lower rates of readmission.
**Value-Based Care**
The movement toward value-based healthcare is resulting in shifting treatment to outpatient settings, reduced costs and improved patient experiences. It is also spurring a trend in consolidation whereby smaller entities are merging with larger entities to improve economies of scale and operational efficiencies. However, these larger entities, thanks to a dearth of competition, may be able to charge patients higher rates to better match insurance reimbursements.
On the other hand, RHCs that are staffed primarily by PAs and NPs offer a lower-cost alternative (in some cases between 30 percent and 80 percent) to traditional healthcare and generally accept most public and private insurance plans. In fact, 60 percent of smaller insurance plans and 73 percent of large plans cover services provided in RHCs, although AMA urges caution against the encouragement of retail clinics to take advantage of lower costs through
reduced or waived copayments. However, it is ACA’s position that patients seeking care in RHCs do so predominantly for minor ailments and reassurance that their condition is on the right track. Therefore, in its view, RHCs are potentially “inconsistent with value-based care and payment” because they create “new use” through “improved access.” Furthermore, AMA also estimates that were treatment for low-level conditions sought in RHCs versus emergency departments (about 20 percent of total visits), the healthcare system could save $4 billion annually.
Data Concerns
Retailers already collect and analyze a wealth of consumer data. When RHCs are added to the mix, where does customer marketing cross the line into violation of patient privacy?
Adequate use and data protection, including EHRs and telehealth technology, are at the forefront of healthcare. From Health Insurance Portability and Accountability Act (HIPAA) protections, Standards for Privacy of Individually Identifiable Health Information (known as the Privacy Rule) to the Health Information Technology for Economic and Clinical Health Act, numerous laws protect patient information and privacy. Even so, doing the bare minimum legally required may not be sufficient to satisfy healthcare consumers who have become increasingly concerned about privacy in the wake of breaches and poor security measures plaguing all aspects of online industries.
This raises a question: Although HIPAA protections prevent the sharing of patient care information with the retailer, what protections are in place when retailers identify, through point-of-sale transactions, who is being seen or who pays for care in RHCs? As data are collected, customers become viable marketing contacts, particularly when they are opted in by default or actively opt in to retailer marketing.
The risks of this information collection came to light in April 2021, when consumer advocates urged District of Columbia Attorney General Karl A. Racine to stop some retail pharmacies' practice of collecting customer information for marketing purposes as customers signed up for COVID-19 vaccinations or inquired about appointment availability. Certainly, a 21st-century extension of the Hippocratic oath could reasonably extend to patient data privacy, as is required for the ACA.
Here to Stay
At a time when primary care provider shortages are estimated to grow from 45,000 in 2020 to upwards of 51,000 by 2033, RHCs provide a necessary and viable option for patients seeking care. While the debate continues, perhaps with both parties agreeing to disagree on whether these clinics should be used in a primary care context, RHCs are here to stay, offering care and meeting patients and customers where they are: in their communities where they already shop, and with hours and pricing that may better suit their needs.
From a provider standpoint, RHCs offer an opportunity to reach an entirely new patient population, whether as a practicing clinician in an RHC or as part of the referral network for primary or specialty care. Clearly, these alliances between complementary providers have the potential to empower a greater focus on and awareness of health for the benefit of healthcare. As the industry balances the struggle between matching the long-term goals of patient health with short-term accessibility and payment options, it may be that the RHC model provides a key to success. Through RHCs’ adherence to established quality and practice standards and their ability to sustain satisfaction metrics while focusing on accessibility, healthcare consumers have every opportunity and every advocate for success.
References
1. Convenient Care Association. Convenient Care Clinics: Addressing Healthcare Needs. Accessed at www.ccaclinics.org/industry/PDF-CCA-Healthcare-Needs-2017.pdf.
2. American Medical Association. Report of the Council on Medical Service, Retail Health Clinics. (Report Nos. 705-A-16). 2016. Accessed at www.ama-assn.org/system/files/pdf/corp/media-browse/public/about-ama/councils/Council%20Reports/council-on-medical-service/cmo-report-7-16-16.pdf.
3. American Academy of Family Physicians (AAFP). Policies on Retail Clinics. Accessed at: www.aafp.org/about/policies/all-retail-clinics.html.
4. American Academy of Family Physicians. Patient Privacy in Primary Care. Accessed at www.aapf.org/advocacy/advocacy-resource/position-statements/patient-privacy-in-primary-care.
5. Electronic Privacy Information Center. Coalition Letter to Attorney General Karl A. Racine. April 2, 2021. Accessed at epic.org/2021/04/coalition-letter-dc-pharmacy-data-collection-140421.pdf.
6. Association of American Medical Colleges. New Findings Confirm Predictions on Physician Shortage. April 28, 2019. Accessed at www.aamc.org/news-insights/research/new-findings-confirm-predictions-physician-shortage.
AMY SCANLIN, MS, is a freelance writer and editor specializing in medical and fitness topics.
BioSupply® is the online product ordering platform by FFF Enterprises Inc., the largest and most trusted distributor of plasma products, vaccines, biosimilars and other specialty pharmaceuticals and biopharmaceuticals. Visit fffenterprises.com to learn more about us.
Point, Click, Confidence.
BioSupplyOnline.com makes ordering your products easy, fast and convenient!
Available Products
- Albumin/Plasma Protein Fraction
- Ancillaries
- Antithrombin
- BioSurgical
- Coagulation Factor
- Essential Medicines
- Hyperimmune Globulin
- Immune Globulin
- Influenza Vaccines
- Oncology
- Pediatric Vaccines
- Pharmaceuticals
We are proud to be an accredited NABP® Verified-Accredited Wholesale Distributor for all authorized U.S. plasma products manufacturers.
BioSupply is a quick and easy-to-use platform offering instant access to the critical-care products you need when you need them. Our customer-driven online portal empowers you to order what you want, when you want it, with just one click so you can better manage your inventory. With over 33 counterfeit-free years, you know you are buying from a trusted leader in the industry. BioSupply offers:
- At-a-glance access to your account information
- Links to view open orders and ordering history
- Shortcuts to frequently purchased products
- FFF Sales Team contact information
- Detailed product pages
- Product alternatives if products are back-ordered or unavailable
- Convenience and accessibility to drop-ship products
- Shopping Cart feature displays account number and shipping address to minimize purchasing errors
- My Favorites feature for frequently ordered products
- BioVision reporting tool provides analysis of purchasing patterns
For ordering support, contact our Wow! Customer Care team:
P: (800) 843-7477 Emergency Ordering available 24/7/365
F: (800) 418-4333
E: email@example.com
©2022 FFF Enterprises Inc. All Rights Reserved. FL379-NM Rev120821
New clinical studies show gene therapy may offer a cure for these chronic and expensive diseases in five to 10 years.
While hemoglobinopathies are not necessarily common in the United States, with only approximately 100,000 adults and children affected, they are more often found in other areas of the world.\(^1\) Approximately 7 percent of the world’s population are carriers, and hemoglobinopathies are the most common monogenic diseases, especially widespread in Asia, the Mediterranean and Africa.\(^2\) Today, hemoglobinopathies are spread globally because of increased migration rates.\(^3\) They are also a major health concern, with roughly 330,000 children born with the diseases worldwide every year. In the United States, Hispanic-Americans and Black or African-American populations are more at risk for hemoglobinopathies, and they often carry the autosomal recessive disease (two inherited mutated genes, one from each parent).\(^4\)
Patients living with hemoglobinopathies typically cope with a level of uncertainty or even grief because their lives are so deeply affected by the illnesses. They are often anxious, for example, about their constant need for comprehensive resources to ensure their effective and costly care. And, they are almost invariably concerned about their long-term prognosis.
What Are Hemoglobinopathies?
Hemoglobinopathies are a group of disorders passed down through families in which there is abnormal production or structure of the hemoglobin (the red protein responsible for transporting oxygen in the blood) molecule (Figure). The most common hemoglobinopathies are sickle cell disease (SCD) and thalassemia. SCD, an umbrella group of hemoglobinopathies that includes sickle cell anemia, is an inherited disorder caused by an abnormal form of a protein called beta-globin, which causes red blood cells to become sickle (crescent)-shaped and inflexible.
Thalassemia is an inherited blood disorder caused by a defect in the gene that helps control the production of hemoglobin. There are two main types of thalassemia: alpha and beta, which differ according to which protein is altered. In both cases, people with thalassemia have fewer healthy red blood cells. Two other rare hemoglobinopathies include congenital sideroblastic anemia and congenital dyserythropoietic anemia caused by low levels of functioning red blood cells and often high levels of iron in the body. All types rob the body of adequate blood and oxygen, which damages the kidneys, liver and spleen, among other organs, and can be fatal.\(^4\)
Currently, the only cure for SCD is a blood and bone marrow transplant, typically from a human leukocyte antigen-matched sibling; however, only a small number of people are eligible for this treatment. Other treatments, which are more widely available to patients who cannot afford or otherwise access a transplant, can reduce symptoms and prolong life. Severe cases of thalassemia are sometimes managed with frequent blood transfusions, while milder cases are prescribed folic acid to help treat anemia, typically to augment other therapies. For patients who are unresponsive to such remedies and are merely managing symptoms, life without an available cure can be devastating.\(^4\)
**Gene Therapy: A Cure for Hemoglobinopathies?**
Today, gene therapy is providing a glimmer of optimism for hemoglobinopathy patients, with successful clinical trials pointing to a more accessible cure.
In simplified terms, gene therapy adds modified, functional copies of the beta-globin gene into a patient’s hematopoietic stem cells so the body can make functional hemoglobin molecules and, therefore, functional red blood cells. In several ongoing studies, patients with six or more months of follow-up after treatment for SCD had median sickle cell hemoglobin levels reduced to 50 percent or less of total hemoglobin without blood transfusions. And in thalassemia, studies found sufficient hemoglobin production to reduce or eliminate the need for transfusion support. Medical researchers are nearing approval of what would be the first-ever gene therapy for either condition.\(^4\)
According to the authors of one recent study, “Gene therapy for hemoglobinopathies is now founded on transplantation of autologous hematopoietic stem cells genetically modified with a lentiviral vector expressing a globin gene under the control of globin transcriptional regulatory elements. Preclinical and early clinical studies showed the safety and potential efficacy of this therapeutic approach, as well as the hurdles still limiting its general application. In addition, for both beta-thalassemia and SCD, an altered bone marrow microenvironment reduces the efficiency of stem cell harvesting and engraftment. These hurdles still need to be addressed for gene therapy for hemoglobinopathies to become a clinical reality.”\(^5\)
*The New England Journal of Medicine* has published the work of two groups of researchers who used different types of gene therapy techniques that target the transcription factor BCL11a involved with globin switching, which have improved clinical outcomes in patients.
with SCD and thalassemia. According to Mark Walters, MD, a researcher at the University of California’s Blood and Bone Marrow Transplant Program, “These trials herald a new generation of broadly applicable curative treatments for hemoglobinopathies.” In one clinical trial with two patients, one with thalassemia and the other with SCD, researchers administered CRISPR-Cas9 gene edited hematopoietic stem and progenitor cells (HSPCs) with reduced BCL11A expression in the erythroid lineage. The product, CTX001, had been shown in a preclinical study to restore γ-globin synthesis and reactivate production of fetal hemoglobin. Both patients underwent busulfan-induced myeloablation prior to receiving the treatment. The researchers suggested the CRISPR-Cas9-based gene-edited product could change the paradigm for patients with these conditions if it is found to successfully and durably graft, produce no “off-target” editing products and, importantly, improve clinical course.
In the second trial, which included six patients with SCD, researchers described results with infusion of gene-modified cells derived from lentivirus insertion of a gene that knocks down BCL11a by encoding an erythroid-specific, inhibitory short-hairpin RNA. They found that at median follow-up of 18 months, all patients had engraftment and a robust and stable HbF induction broadly distributed in red cells. And, clinical manifestations of SCD were reduced or absent during the follow-up period. “The field of autologous gene therapies for hemoglobinopathies is advancing rapidly,” lead researcher Erica Esrick, MD, and colleagues reported, “including lentiviral trials of gene addition in which the nonsickling hemoglobin is formed from an exogenous γ-globin or modified β-globin gene.”
Deepa Manwani, MD, director of pediatric hematology at Children’s Hospital and professor at Albert Einstein College of Medicine in New York City, maps out other major aspects of hemoglobinopathies in her American Society of Hematology presentation “Moving From Science Fiction to Clinical Reality.” In it, she answers key questions regarding the illnesses and their treatment. When asked about the rationale for pursuing gene therapy for beta-hemoglobinopathies and SCD, Dr. Manwani says, “These are very common hematologic disorders with a very high cost of care, as well as burden of disease, to the patients. There are limited options for treatment, and specifically for curative treatment. The only curative treatment that’s currently approved outside of genetic therapies, most of which are in clinical trials, is stem cell transplantation. [However], since these are genetic disorders, those treatments are available to a minority of patients. Less than 15 percent, for instance, of sickle cell patients will have a matched sibling donor who doesn’t have the disease since it’s genetic. That’s why it’s very important that these patients have access to newer therapies that can be accessed by many, many patients.”
With regard to recent advances in gene therapies, Dr. Manwani believes it is a “very, very exciting time. Three decades ago, when I decided I would focus my research on beta-hemoglobinopathies, we were talking about gene therapies being a reality in five years, and then we were talking about it every five years like it was going to happen, and it didn’t happen.” The problem was, she says, that “the gene that’s abnormal, the beta-globin gene, is very, very large, and that plagued scientists because they were not able to get it in and be expressed at the right levels. It was challenging technically.” But she also states that in the last five years, she and fellow researchers have seen “tremendous improvements,” and now the gene can be expressed “through a lentiviral vector.” The technology is now easier because the gene “gets into the right place, expressed
at the right level, and these patients are actually on clinical trials doing extremely well. It gives me great hope, and I think that this provides great promise for our patients.”
Major challenges in clinical trials versus real world use, according to Dr. Manwani, include treatment expense and the length of treatment. “This is not a therapy that is inexpensive and quick,” she says. “It’s a commitment on the part of the patient, and it is also extremely expensive. So this is not a therapy that’s a pill that can be taken by, for instance, children in Africa where the largest burden of sickle cell disease is. So it will be again, at least initially, available to fewer patients in high-resource settings, but I think that this opens the window to these types of therapies, and this is how we will continue to advance and finally provide those therapies to a wider group of patients at a lower cost.
“One of the biggest problems with the current strategies is it requires what is known as an autologous bone marrow, or stem cell transplantation approach, where the patient’s stem cells are actually harvested and modified, but then the patient has to receive chemotherapy to wipe out their bone marrow before these modified stem cells can be given back to the patient, and that’s not trivial therapy,” explains Dr. Manwani. “For one, it results in infertility. So those types of very toxic, preparative regimens can be a huge problem, especially facing these very difficult decisions about whether to opt for this therapy or not. And I think that researchers are well aware of the urgent need for different ways of preparing the patient’s bone marrow to receive the modified stem cells back. There’s some very exciting research that’s ongoing. And [recently], we’ve heard about so many wonderful advances, it gives me great hope. I think that we’re finally in an era where we’re going to continue to move forward, and at a very fast pace. I think in the next five to 10 years, we’ll see better and better approaches to delivering this type of care with less toxicity.”
Of course, the high cost of this therapy must be overcome, which can be a reality since various organizations and agencies are funding the work. For instance, says Dr. Manwani, in the U.S., the National Heart, Lung and Blood Institute is partnering with the Bill and Melinda Gates Foundation to focus on funding research that will allow this therapy to be delivered more easily without the high cost. One change that might be required to accomplish that, she explains, is called *in vivo* gene therapy, where the gene therapy can be given as a single shot to correct the abnormal gene without requiring the stem cell transplantation.
Given the initial reports with CRISPR/Cas9 and other viral vectors, she believes that is not outside the realm of possibility. “Even having the high-cost therapy in the high-resource settings is a huge step forward,” says Dr. Manwani. “When we talk to our patients, they tell us, ‘I know that as doctors and scientists you want everything to be perfect before you move forward, but we want the treatments that are possible now.’ I think that the shared decision-making that goes into actually preparing a patient for this type of therapy is going to be very important at this stage.”
**A Cure May Be Forthcoming**
The innovations, medical insights and genetic problem-solving will certainly not end with these studies and advances. As gene therapy for all diseases is developed and honed, patients worldwide who have unbearable, chronic and even life-threatening conditions such as hemoglobinopathies may soon be free of their physical ailments and emotional medical concerns.
**References**
1. Naegele R, Lam L, Mumford J, and Fleurenog B. Hematologic Conditions: Common Hemoglobinopathies. *FP Essentials*, 2019 Oct;485:24-31. Accessed at pubmed.ncbi.nlm.nih.gov/2019/10/485/24-31.
2. Kohne E. Hemoglobinopathies: Clinical Manifestations, Diagnosis, and Treatment. *Deutsches Arzteblatt International*, 2011;108(31-32):532-40. Accessed at www.researchgate.net/publication/31814428_Hemoglobinopathies_Clinical_Manifestations_Diagnosis_and_Treatment.
3. Zitterlein HA, Hartvold C, Reaver Flores S, et al. A Small Key to Open the Door: Gene Therapy for the Treatment of Hemoglobinopathies. *Frontiers in Genome Editing*, Feb. 4, 2021. Accessed at www.frontiersin.org/articles/10.3389/fgened.2020.576005/full.
4. Cleveland Clinic. #1. Gene Therapy for Hemoglobinopathies. Accessed at www.clevelandclinic.org/Programs/Top-10-Medical-Innovations-Top-10-for-2021/1-Gene-Therapy-for-Hemoglobinopathies.
5. Giuliani F, Cazzaniga M, and Mavilio F. Gene Therapy Approaches to Hemoglobinopathies. *Hematology/Oncology Clinics of North America*, 2017 Oct;31(5):835-852. Accessed at pubmed.ncbi.nlm.nih.gov/28858936.
6. Bentley EM. Two Gene Therapies for Sickle Cell Disease and β-Thalassemia. HCP Live, Jan. 27, 2021. Accessed at www.hcplive.com/view/two-gene-therapies-for-sickle-disease-thalassemia.
7. Manwani D. ASH 2019: Gene Therapy for Beta Hemoglobinopathies. Accessed at www.youtube.com/watch?v=7GfW4kqjyjo.
**MEREDITH WHITMORE** is an English professor and freelance journalist in the Northwest.
Fact or Fiction: Debunking the Myths Surrounding IG Therapy Improves Patient Outcomes
Experts set the record straight about common misunderstandings regarding IVIG and SCIG products, their administration and possible reactions.
By Luba Sobolevsky, PharmD, IgCP,
Rachel Colletta, BSN, CRNI, IgCN,
and Amy Clarke, RN, BSN, IgCN
Photo courtesy of EMED Technologies featuring its SCiG60 Infusion System
IMMUNE GLOBULIN (IG) is made from pooled plasma collected from thousands of donors. It contains antibodies against a broad spectrum of bacteria and viruses, and it is used primarily to treat three categories of illnesses: primary immune deficiencies, autoimmune neuromuscular disorders and certain rheumatologic conditions. Historically, the first intravenous IG (IVIG) therapy was approved in 1981 to treat primary humoral immunodeficiency disorders, and in 2006, the first subcutaneous IG (SCIG) therapy was approved. Today, a growing number of patients are treated with IG, and the number of IG products and routes of administration continue to evolve. Yet, while patients treated with IG experience healthier lives, many may have misconceptions about the products, how they are administered and the reactions they can cause.
**Myth or Fact?**
**IG Products Are Interchangeable**
Myth: IG products are *not* interchangeable. While all products contain similar amounts of IgG antibodies, the similarities end there. Brands of IG can differ in IgG monomer, dimer and aggregate concentrations. They also differ in concentrations of IgA and IgM (Figure 1). Stabilizers, additives, sodium content, osmolarity and levels of impurities vary from product to product. Because of these differences, IG products cannot be used interchangeably or be mixed together.
Product differences should be considered when choosing the ideal product for each patient. In addition to product differences, patient differences such as comorbidities, tolerability, history of product use and patient lifestyle must be taken into account when choosing a product and route of administration.
Because products are tolerated differently by individuals, first doses of any product should be administered with caution. This is true even when a patient switches from one brand of product to another.
**Product differences should be considered when choosing the ideal product for each patient.**
The IG clinician’s role is to assess product tolerability and communicate with the healthcare team to ensure the patient has a safe and positive infusion experience.
Potential adverse drug reactions (ADRs) are listed in manufacturers’ labeling, which considers both ADRs noted during clinical trials and those seen with all IG products in general.
Overall, IG therapy is safe and well-tolerated in most patients, and clinical efficacy of all products is comparable. However, all IG products contain boxed warnings. IVIG products contain boxed warnings for thrombosis and renal dysfunction/acute renal failure (ARF), whereas SCIG and facilitated SCIG (fSCIG) products contain only boxed warnings for thrombosis (see *A History of the Thrombosis Boxed Warning*). Therefore, it is imperative that a thorough clinical assessment be conducted before starting therapy and that patients be monitored closely during the infusion and at post-infusion follow-up.
*Thrombosis.* Risk factors for thrombosis include advanced age (a threshold is not specified); prolonged immobilization; hypercoagulable conditions (easy/excessive blood clotting), which may be inherited (e.g., Factor V Leiden) or acquired (e.g., cancer, certain cancer medications, obesity, HIV/AIDS, pregnancy); history of venous or arterial thrombosis; use of estrogen; indwelling vascular catheters; hyperviscosity conditions, including hypergammaglobulinemia, markedly increased triglycerides, cryoglobulinemia and paraproteinemia (e.g., macroglobulinemias, monoclonal gammopathy of undetermined significance [MGUS], multiple myeloma); and cardiovascular risk factors. However, thrombosis may occur in the absence of known risk factors.
Mitigation strategies for thrombosis include:
- Administering at the minimum dose feasible (when there is no specific recommendation, large doses can be divided over several days or administered on alternate days)
- Administering at the minimum infusion rate feasible (some brands include no recommendation and other brands recommend 3 mg/kg/minute maximum to 4 mg/kg/minute maximum)
- Ensuring adequate hydration in patients before administration (requirements differ between adult and pediatric patients)
- Monitoring for signs and symptoms of thrombosis (for example, deep vein thrombosis symptoms include lower-leg swelling and pain in knees; pulmonary embolism (PE) symptoms include shortness of breath/pain with breathing and chest pain; myocardial infarction symptoms include chest pain; and transient ischemic attack/cerebrovascular accident symptoms include confusion, slurred speech, drooling and loss of consciousness)
- Assessing blood viscosity in patients at risk for hyperviscosity
- Educating patients about the signs and symptoms of thrombosis
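To make the rate ceilings above concrete, a labeled maximum in mg/kg/minute can be converted into a pump setting in mL/hr once the product concentration is known (a 10% product carries 100 mg of IgG per mL). This is an illustrative sketch only; the weight, rate cap and concentration shown are hypothetical, and actual infusion rates must come from the specific product's prescribing information:

```python
def max_pump_rate_ml_per_hr(weight_kg: float, rate_mg_kg_min: float,
                            concentration_pct: float) -> float:
    """Convert an IG infusion-rate ceiling (mg/kg/minute) into mL/hr.

    concentration_pct is the product strength, e.g. 10 for a 10% product,
    which contains concentration_pct * 10 mg of IgG per mL.
    """
    mg_per_ml = concentration_pct * 10.0   # 10% -> 100 mg/mL
    mg_per_min = weight_kg * rate_mg_kg_min
    return mg_per_min / mg_per_ml * 60.0

# Hypothetical example: 70 kg patient, 4 mg/kg/min cap, 10% IVIG.
print(max_pump_rate_ml_per_hr(70, 4, 10))  # 168.0 mL/hr
```

The same gram dose infused as a more concentrated product runs at a lower mL/hr for the same mg/kg/minute ceiling, which is why rate limits are stated in mg/kg/minute rather than in volume terms.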
Anti-thrombotic therapy concurrent with IVIG should be considered for patients at high risk of thrombosis.
*Renal dysfunction and acute renal failure (ARF).* Renal dysfunction, ARF, osmotic nephrosis and death may occur with IVIG products in predisposed patients. Renal dysfunction and ARF occur more commonly in patients receiving IVIG products containing sucrose. However, since the last sucrose-containing product was withdrawn from the market in 2018, renal dysfunction and ARF could occur with any brand.
Risk factors for renal dysfunction and ARF include any degree of pre-existing renal insufficiency, diabetes mellitus, age older than 65 years, volume depletion, sepsis, paraproteinemia (e.g., macroglobulinemias, monoclonal gammopathy of undetermined significance, multiple myeloma) and patients receiving known nephrotoxic drugs.
Package insert recommendations for mitigation strategies for renal dysfunction and ARF vary among brands, with some including more instruction than others and some recommendations described outside the boxed warning. Strategies include:
• Administering at the minimum dose feasible (same as thrombosis)
• Administering at the minimum infusion rate feasible (same as thrombosis)
• Administering at the minimum concentration available (this pertains to only two brands)
• Ensuring adequate hydration in patients before administration (same as thrombosis)
• Periodic monitoring of renal function and urine output in patients judged to be at increased risk of developing ARF
• Assessing renal function, including measurement of BUN and serum creatinine, before the initial infusion and at appropriate intervals thereafter
• Considering discontinuation if renal function deteriorates
An anaphylaxis kit should be readily available when every dose is administered, and the patient’s vital signs must be monitored. And, since anaphylaxis can occur with any infusion no matter how long the patient has been receiving IG therapy, patients should not self-infuse or be left alone for any period of time during the infusion.
After anaphylaxis symptoms resolve, the decision to restart an infusion should be made by the prescriber, patient, nurse and pharmacist. Mitigation strategies to prevent anaphylaxis include pretreatment with an antihistamine and corticosteroid, choosing another IVIG or SCIG brand if it is not IgA autoantibody-related or, if it is IgA autoantibody-related, switching to products containing lower levels of IgA or SCIG therapy.
Much more common than an anaphylactic reaction is an anaphylactoid reaction. Anaphylactoid reactions are similar in presentation to anaphylactic reactions since patients experience shortness of breath and chest tightness. However, these symptoms are much more gradual in onset and severity. Anaphylactoid reactions generally occur within the first half of the infusion and will dissipate with no intervention once the infusion has ended. And, because anaphylactoid reactions are not IgE-mediated, patients will typically experience hypertension rather than hypotension.
Anaphylactoid reactions may be caused by IgG aggregates or impurities not removed during the manufacturing and purification processes; however, the true cause of these reactions remains unknown. It is important for clinicians to understand these differences and be prepared to treat patients accordingly.
Serious ADRs include aseptic meningitis, hemolytic anemia and transfusion-related acute lung injury (TRALI).
Severe aseptic meningitis generally occurs after an infusion and lasts hours to days. The cause is IG-induced inflammation of the meninges, and it is often described as severe and debilitating. Frequently, it is accompanied by nuchal (nape of the neck) rigidity, drowsiness, photophobia, painful eye movements and nausea (with or without vomiting). Cerebrospinal fluid studies may show increased white blood cell count and protein with a negative culture.

History of the Thrombosis Boxed Warning
• 1986: First report in a Lancet Letter to the Editor
• 1986-2010: Thromboembolism (TE) reports were consistent with the number of grams sold
• 2010: A small cluster of cases was reported to FDA by a manufacturer
• 2010-2011: There were increased reports of TE with a U.S. brand and a foreign brand of IG
• 2011: Manufacturers implemented thrombogenic testing of products
• 2013: FDA required addition of thrombosis to boxed warning
Risk factors for aseptic meningitis include high doses of IG, rapid infusion rate, dehydration and a history of migraines. Pretreatment is generally ineffective; however, there have been some reports of success with IV corticosteroids, IV hydration and antimigraine medication. Treatment may require aggressive pain management.
Mitigation strategies for aseptic meningitis include:
- Reducing the daily dose by dividing over several days
- Alternating days of dosing
- Reducing the maximum infusion rate
- Switching to an IVIG 5% product
- Switching to a different IVIG brand or to SCIG
Hemolytic anemia is caused by destruction of red blood cells by anti-A and anti-B blood type antibodies contained in IG products; severe hemolysis can lead to renal dysfunction/failure or disseminated intravascular coagulation (a blood clotting disorder).
Risk factors for hemolytic anemia include non-O blood types, underlying inflammatory states, immune-mediated disorders and high IVIG doses (e.g., greater than 2 grams/kg, single or divided).
Signs and symptoms of hemolytic anemia generally present within days or weeks and may include fatigue (mild hemolysis), dark urine, jaundice of skin or eyes, heart murmur, increased heart rate and enlarged spleen/liver, which may be life-threatening and require blood transfusions. Therefore, patients should be educated about the signs and symptoms and when to call the prescriber.
Since there is little published evidence about the prevention of hemolytic anemia, mitigation strategies should include:
- Understanding the patient’s blood type
- Administering at the slowest rate feasible
- Reducing the daily dose, dividing the dose over several days or alternating days
- Considering a hemoglobin/hematocrit (Hgb/Hct) test within approximately 36 hours before IVIG, and again at seven days to 10 days if the patient is high-risk
TRALI is a rare but potentially fatal complication of receiving blood products. It causes severe respiratory distress, pulmonary edema (non-cardiogenic), hypoxemia (below-normal level of oxygen in the blood), normal left ventricular function and fever. Symptoms typically appear within one hour to six hours following IVIG. It may be managed using oxygen therapy with adequate ventilatory support. There are no particular risk factors or mitigation strategies.
**Myth or Fact?**
**ADRs Can’t Be Mitigated**
Myth: ADRs can be mitigated. These reactions are generally related to factors such as the rate of infusion, the patient’s hydration status, patient comorbidities (e.g., history of migraine) and product choice. Whenever possible, the goal should be to prevent ADRs from occurring, which can usually be accomplished by patient education and proper product selection and administration.
---
**SCIG Therapy Clinical Standards**
**Pre-Infusion**
- Review the patient’s documentation.
- If clinical information is missing prior to the initial visit, perform an assessment.
- Is the patient appropriate for SCIG administration?
- Is the patient or caregiver able to self-infuse?
- Check for the presence of adequate tissue.
- Will adherence be an issue?
- Determine number of sites, volume to infuse, how much volume per site and site location, and then make sure the equipment and supplies are available.
- Assess patient/caregiver’s knowledge, provide ongoing education as needed, and document education and training.
**During Infusion**
- Document vital signs: baseline, rate changes and at completion.
- Document patient tolerance.
- Document infusion issues.
- Document IG brand, dose, lot number(s) and expiration dates.
- Document number of needle sites, gauge, length and flow rate.
- Document location of infusion site(s) used.
- Monitor for ADRs, document ADR management and inform pharmacy and prescriber.
**Post-Infusion**
- Provide training, education and support:
- Teach patient/caregiver to log infusions.
- Encourage independence in self-administration.
- Explain responses to therapy based on disease state.
- Explain potential reactions and troubleshooting.
- Know who/how/what to contact should issues arise:
- Facilitate referrals to community organizations, support groups and financial assistance organizations.
Managing ADRs starts with a risk assessment performed by the pharmacist prior to the start of therapy to determine what, if any, comorbidities exist, as well as the patient’s history with IG therapy and with previous products.
Product and route selection are the first steps in mitigating ADRs. Patients tolerate products differently, so there is not a one-size-fits-all solution for product selection. Prior history, comorbidities and patient lifestyle factors should be considered.
A well-hydrated patient runs a lower risk of experiencing infusion-related ADRs. Patients should be instructed to begin hydrating one day to two days before the infusion, and hydration should be continued throughout the infusion and into the next day. If patients are not able to consume the amount of fluids needed to fully hydrate, IV hydration may be used as a supplement.
Premedications may be administered as needed, so patients should be assessed for their need for analgesics, antihistamines, antiemetics, etc.
Customizing the infusion rate to patient tolerability is critical. Since most ADRs are related to rate of infusion, a three-step ramping process should be used for every infusion. If ADRs occur, the infusion should be stopped and restarted at the previous infusion rate when symptoms subside. Remember that infusion rates vary from patient to patient and should be reassessed with each infusion.
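As a rough illustration of the stepwise approach described above, the sketch below builds a three-step ramp schedule and, after an ADR, steps back to the previous step's rate. This is a hypothetical sketch only: the function names and rate values are illustrative, and actual rates must come from the product's prescribing information, the prescriber's orders and the patient's documented tolerance.

```python
# Hypothetical sketch of a three-step infusion rate ramp.
# Rates are illustrative (units could be mL/kg/hr); real values come from
# the prescribing information and the prescriber's orders.

def three_step_ramp(start_rate, max_rate, steps=3):
    """Return a list of (step, rate) pairs ramping from start_rate
    toward max_rate in equal increments."""
    increment = (max_rate - start_rate) / (steps - 1)
    return [(i + 1, start_rate + i * increment) for i in range(steps)]

def rate_after_adr(schedule, current_step):
    """Per the text: stop the infusion, then resume at the previous
    step's rate once symptoms subside."""
    previous = max(current_step - 1, 1)
    return schedule[previous - 1]
```

For example, `three_step_ramp(0.5, 4.0)` yields steps at 0.5, 2.25 and 4.0; an ADR at step 3 would mean resuming at the step-2 rate once symptoms subside. Because tolerated rates vary from patient to patient and from infusion to infusion, the schedule would be rebuilt from the patient's current maximum each time.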
Effective and frequent communication with the healthcare team is imperative. Patients should be encouraged to report any ADRs so appropriate intervention can be taken. Patients should not have to manage severe ADRs during and after their infusions.
Common mild to moderate IVIG infusion-related reactions include headache (most common) often occurring during the infusion due to mild/moderate blood pressure changes; diarrhea, fatigue, low-grade fever, nausea and other flu-like symptoms, which may last up to 72 hours and can be treated symptomatically; rash/hives; and blood pressure changes. Methods to mitigate these reactions include:
- Stopping the infusion until symptoms resolve, and resuming at a slower rate (recommended)
- Slowing the rate of infusion (may indicate the maximum tolerated rate for the product)
- Repeating ordered antihistamine and analgesic premedications if enough time has passed
A number of factors can contribute to SCIG ADRs. Infusion-related factors include a history of infusion reactions, first infusion, amount of drug infused, rate of infusion and dehydration. Patient-related factors include infection or fever at time of infusion, age, autoimmunity, comorbidities (e.g., diabetes, hypertension, cardiovascular disease) and smoking.
Whenever possible, the goal should be to prevent ADRs from occurring.
SCIG ADRs can be local or systemic (Figures 2 and 3). Local reactions are common, occurring in 75 percent of patients. These include immediate swelling and redness at the site of infusion that usually resolves within 24 hours to 48 hours and lessens with subsequent infusions. In fact, occurrence and severity have been shown to decrease over repeated SCIG administrations. Systemic reactions are rare, occurring in less than 1 percent of patients. These include back pain, migraine, diarrhea, fatigue, nausea, vomiting, rash and arthralgia (joint pain).
When experiencing local site reactions, the following management strategies can be tried:
- If tape sensitivity is suspected, use a skin preparation, different tape or change out the Tegaderm.
- Insert needle using a dry priming technique to decrease redness, itching and site reactions.
- Rotate needle insertion sites.
- Increase activity to help diffuse the product.
- Apply cold compresses 20 minutes on and 20 minutes off.
- Apply a cold topical anesthetic cream to the site, or use a device such as Buzzy.
- Slow or stop the infusion, and restart as the patient tolerates.
There may also be other issues related to SCIG infusions. If there is pain at the site of needle insertion, the needle length should be checked to ensure it is appropriate, and ice, a topical anesthetic or a device such as Buzzy can be used.
If there is leaking at the infusion site, the following should be checked:
- Needle dislodgement
- Needle length
- Subcutaneous tissue (is it adequate to absorb the volume of medication?)
- Infusion rate (is it too fast?)
If the infusion is taking too long, patency of tubing, rate of tubing and needle size should be assessed. Additionally, the site location should be assessed to determine if an additional site is needed. Lastly, the pump should be checked to ensure it is operating correctly.
If there is an acute or delayed infusion reaction (hives, swelling in the mouth or throat, itching, trouble breathing, fainting or dizziness), the infusion should be stopped and the infusion reaction protocol initiated per the prescriber's orders. Also, patients should contact their healthcare provider or emergency medical service if symptoms occur during self-administration.
Importantly, for each infusion, it should be checked and documented that the right drug and dose is being administered to the right patient using the correct route and duration.
The IgNS Standards of Practice published by the Immunoglobulin National Society (IgNS) recommend using a minimum of three rate ramping stages.
Myth or Fact?
Maximum Infusion Rates Vary from Patient to Patient
Fact: Maximum infusion rates do vary from patient to patient. IG infusions are titrated stepwise to a maximum rate tolerated by the patient and per the prescribing information, prescriber’s orders and organizational policies. Other factors that may impact the maximum infusion rate are the patient’s hydration status and comorbidities. For patients at risk for renal dysfunction and thrombotic events, IG should be administered at the minimum infusion rate feasible and no greater than the maximum rate specified in each manufacturer’s current prescribing information. Maximum infusion rates may vary from infusion to infusion based on the patient’s state of health on the infusion day.
Similar to finding the patient’s ideal product, finding the patient’s personal maximum infusion rate is key to providing a positive infusion experience.
Myth or Fact?
Long-Term IVIG Patients with No Adverse Reactions Do Not Require a Healthcare Professional to Monitor Infusions
Myth: Anaphylactic reactions can happen with any infusion, even in long-term patients with no history of adverse reactions. A patient who is experiencing an anaphylactic reaction will most likely not be able to manage the acute onset of symptoms.
Patient health status and lot-to-lot variability of IG products are a few of the reasons these reactions can occur at any time. IgNS recommends all IVIG infusions be monitored from start to finish by a competent healthcare clinician.
Luba Sobolevsky, PharmD, IgCP, is executive director of the Immunoglobulin National Society (IgNS). Rachel Colletta, BSN, CRNI, IgCN, is director of educational resources at IgNS, and Amy Clarke, RN, BSN, IgCN, is the national director of nursing practice at Optum Infusion Pharmacy.
Editor’s note: This article was prepared from a presentation delivered by the authors at the 2021 Immune Deficiency Foundation National Conference.
The Rise of Metabolic Syndrome: A Cause for Concern
An abundance of food and low activity levels have resulted in an increasing prevalence of metabolic syndrome that causes serious health risks.
By Jim Trageser
Despite considerable technological achievements, the law of unintended consequences still holds sway over human endeavors. Take hunger as an example. For millennia, hunger was the bane of rulers across the globe. Until recently, every society struggled to provide enough sustenance for its people, and at the end of the day, most human beings went to bed hungry. Malnutrition brought with it a host of medical issues, including rickets, stunted growth and anemia, and physicians were well-acquainted with treating them.
But by the late 1700s, the Industrial Revolution brought forth new planting and harvesting machines that allowed farmers to grow more food on the same acreage. Over the ensuing decades, better understanding of irrigation and crop rotation also contributed to increasing yields, as did selective crossbreeding of crops. The arrival of rail and steamships coupled with modern refrigeration allowed for the development of vast new distribution networks that could bring food from the farm to cities quickly. Automated canning factories and the development of quick-freezing methods combined with the earlier developments ensured most Americans (and soon, others around the world) had access to more food than their parents and grandparents could ever have imagined. Moreover, it was more varied and more affordable than had been enjoyed by royalty a century earlier.
All these advances greatly reduced the incidence of mass starvation across the globe. But as the law of unintended consequences kicked in, two developments arose out of the sudden, unexpected bounty of cheap, available foodstuffs:
1) The newly efficient agricultural sector needed far fewer farm workers to harvest the additional food, leading to a mass exodus from the countryside and into cities (a process still occurring in parts of India, China and Africa). And, these new city dwellers found themselves with jobs that were far less physically strenuous than the farmwork in which their parents and grandparents had engaged. Plus, the advent of radio and television also led to many people’s leisure hours being spent sitting passively.
2) Cheap, available food led to a dramatic rise in the average daily caloric intake of most people. It turned out that when food was plentiful and affordable, people consumed more than their bodies needed.
This combination of too much food and too little physical activity has led to what is an unprecedented outbreak of diseases formerly associated with wealth: obesity, cardiovascular disease and type 2 diabetes. Today, these conditions affect people from all demographics in the West. In fact, the poor are more likely to suffer from some of these than are the wealthy.
Metabolic syndrome is one condition associated with overnutrition and a sedentary lifestyle, another unexpected development from the successful effort to reduce mass starvation. And it is affecting more people than ever before — more than a third of all U.S. adults.\(^1\)
**What is Metabolic Syndrome?**
Metabolic syndrome is the name given to a collection of risk factors that heighten the chance of developing heart disease, stroke and/or type 2 diabetes.\(^2\) The National Institutes of Health lists these risk factors as (Figure 1):\(^3\)
- A large waistline (35 inches or greater for women, 40 inches or greater for men)
- A high triglyceride level
- A low HDL cholesterol level
- High blood pressure
- High fasting blood sugar
Having three or more of these indicators generally leads to a diagnosis of metabolic syndrome.
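Translated into logic, the five risk factors above can be counted directly. The sketch below is a minimal, hypothetical illustration (the function names and argument units are not from any clinical system); the thresholds are those listed above and shown in Figure 1.

```python
# Hypothetical sketch: count the five NIH risk factors for metabolic
# syndrome. Three or more generally leads to a diagnosis.

def metabolic_syndrome_risk_count(sex, waist_in, triglycerides_mg_dl,
                                  hdl_mg_dl, systolic_bp, diastolic_bp,
                                  fasting_glucose_mg_dl):
    """Return the number of risk factors present (0-5)."""
    criteria = [
        waist_in >= (40 if sex == "male" else 35),       # large waistline
        triglycerides_mg_dl >= 150,                      # high triglycerides
        hdl_mg_dl < (40 if sex == "male" else 50),       # low HDL
        systolic_bp >= 130 or diastolic_bp >= 85,        # high blood pressure
        fasting_glucose_mg_dl >= 110,                    # high fasting glucose
    ]
    return sum(criteria)

def meets_metabolic_syndrome_definition(count):
    """Three or more risk factors generally leads to a diagnosis."""
    return count >= 3
```

For example, a man with a 42-inch waist, triglycerides of 180 mg/dl and HDL of 35 mg/dl, but normal blood pressure and fasting glucose, would have three risk factors and so would generally meet the definition. Note that the threshold numbers vary slightly among published definitions, so the values here should be treated as illustrative.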
What is now referred to as metabolic syndrome was first described in 1966 by French physician Jean-Pierre Camus, although he referred to it as a “metabolic trisyndrome.”\(^4\) Twenty-two years later, Gerald Reaven, MD, referred to this cluster of factors as “Syndrome X” in a talk at the American Diabetes Association national meeting, which led to a flurry of interest in this condition. Studies and papers about it accelerated as researchers realized this was a growing problem associated with the abundance of food and a growing amount of highly processed foods heavy in sugars in the average diet.

---

**Figure 1. What Is Metabolic Syndrome?**

- **High BP:** ≥130/≥85 mmHg
- **Central Obesity:** >40 inches for men, >35 inches for women
- **Low HDL:** <40 mg/dl in men, <50 mg/dl in women
- **Glucose Intolerance:** fasting glucose ≥110 mg/dl
- **High TG:** ≥150 mg/dl
At one time, metabolic syndrome was considered the same condition known as insulin resistance, which occurs when the body’s cells don’t react normally to insulin, preventing glucose from being absorbed into cells. However, while there remains a high correlation between metabolic syndrome and insulin resistance, they are generally now viewed as two distinct albeit related conditions.
As the name metabolic syndrome indicates, researchers believe this collection of risk factors is likely caused by an underlying “abnormal carbohydrate and lipid metabolism.”
Smoking, high alcohol intake and high levels of stress also seem to have a correlative relationship. Other possible contributing factors include sleep apnea, gallstones and ovarian cysts.
Health Risks Associated with Metabolic Syndrome
Individuals with metabolic syndrome are already suffering damage to their cardiovascular system, as well as their ability to process nutrients at the cellular level. Hence, they are at elevated risk for developing full-blown heart disease and type 2 diabetes.
Recent research suggests the long-term systemic inflammation caused by obesity is the driving factor in developing metabolic syndrome, with patients having a high correlation for high-sensitivity C-reactive protein, a marker for systemic inflammation. Other inflammation markers found in higher-than-normal levels in patients diagnosed with metabolic syndrome include tumor necrosis factor-alpha, interleukin (IL)-6, IL-18 and oxidation of LDL.
In fact, the serious bodily damage caused by metabolic syndrome has led the American Heart Association to predict it will soon eclipse smoking as the main cause of heart disease.
Prevalence of Metabolic Syndrome
A major study conducted a decade ago found the incidence of metabolic syndrome among adults in the United States increased from 25.3 percent in 1994 to 34.2 percent in 2012. The researchers noted the correlation between the increase in the percentage of adults with metabolic syndrome and the percentage of adults who are overweight or obese, which now tops two-thirds of the population in the United States. (Even in Kazakhstan, which is not yet as developed as the United States, more than 20 percent of the population was obese as of 2017.)
Among the clinically obese, 61.6 percent suffer from metabolic syndrome, according to one recent study. But even 8.6 percent of American adults at a healthy weight had metabolic syndrome.
More troubling than the increase in the percentage of adults developing metabolic syndrome, though, are recent signs that children are also now suffering the effects of a high-fat, nutrient-poor diet combined with a lack of physical activity. A 2017 study in Chile found 18 percent of children had early onset obesity, and half of those remained obese into their teens and had a high-risk factor for metabolic syndrome.
Considering the troubling worldwide numbers, it is easy to see why the American Heart Association has labeled metabolic syndrome as the greatest future threat to cardiovascular health in the United States.
Metabolic Syndrome and Expected Life Span
While metabolic syndrome obviously increases the chances of developing life-threatening conditions such as arteriosclerosis or suffering a stroke, research indicates that charting a clear mortality risk from the diagnosis remains fuzzy. Numerous studies have shown patients with metabolic syndrome have a higher mortality rate than those without it, but nearly all of these studies caution against trying to determine a quantitative value. In fact, researchers have pointed out that other underlying conditions also contribute to mortality, and trying to assign mortality rates to what are overlapping conditions is impossible.
**Best Practices**
The reality is while there are genetic factors at work in triggering metabolic syndrome, it is largely driven by behavior. The most effective way to reverse a diagnosis is weight loss and an increase in physical activity. When these are both achieved, even at modest levels, blood pressure generally improves, and weight loss also lowers the systemic inflammation associated with obesity. However, changing behavior in human beings is one of the most challenging tasks (and attempting to do so undoubtedly contributes significant stress to the professional lives of physicians).
Controlling blood pressure with medication will not reverse a diagnosis of metabolic syndrome, but it will significantly reduce the risk of cardiovascular damage. Controlling triglyceride levels and cholesterol are also effective methods of lowering the long-term health risks of metabolic syndrome.
One recent study recommended a treatment blending lifestyle changes with proven medications to lower risks while pursuing longer-term improvements. According to the researchers, “While therapeutic lifestyle changes (TLCs) should be strongly recommended, clinicians should not let the perfect be the enemy of the possible. Evidence-based doses of statins, aspirin and angiotensin-converting enzyme inhibitors, or angiotensin II receptor blockers, should be prescribed as adjuncts, not alternatives, to TLCs.”
**Looking Ahead**
Humans spent millions of years honing the skills necessary to find enough food to sustain another day, so adjusting to an overabundance of food is likely to take some time. Individuals are programmed by nature to seek out high-calorie foods, and overcoming that innate drive that allowed our ancestors to survive is difficult for most. This explains why recent studies show the prevalence of metabolic syndrome continues to rise. And, as more nations raise the standard of living for their people, metabolic syndrome will undoubtedly increase in those societies as well.
While new treatments and medications to assist with control of symptoms or assisting with weight loss will undoubtedly come to market, it is unlikely there will ever be a magic pill that allows people to simply undo the effects of poor eating habits. Consequently, for the foreseeable future, the only effective treatment for metabolic syndrome will consist of working with patients to establish healthy eating and exercise regimens, augmented with medications to regulate blood pressure, triglycerides and cholesterol.
As one of the greatest public health crises of the next generation, it is a challenge that will likely be met in clinical settings rather than in research laboratories.
**How to Prevent Metabolic Syndrome**
- **Know your genetics:**
- Understand what to work against
- **Keep stress levels low:**
- Exercise
- Meditation
- Talk with family or friends
- Visit a mental health professional
- **Avoid too much inactivity**
- Avoid sitting all day
- Engage in moderate to vigorous exercise several times a week
- Expend at least 1,000 calories a week during exercise
- **Eat a heart-healthy diet:**
- Fruit
- Vegetables
- Whole grains
- Soy products
- Soluble fiber
- Omega-3 fatty acids
- **Limit consumption of:**
- Alcohol
- Sodium
- Saturated fats
- Refined carbohydrates
**References**
1. Mayo Clinic. Metabolic Syndrome. Accessed at www.mayoclinic.org/diseases-conditions/metabolic-syndrome/symptoms-causes/syc-20351924.
2. Cleveland Clinic. Metabolic Syndrome. Accessed at my.clevelandclinic.org/health/diseases/10783-metabolic-syndrome.
3. National Heart, Lung and Blood Institute. Metabolic Syndrome. Accessed at www.nhlbi.nih.gov/health/topics/metabolic-syndrome.
4. Camus JP. Gout, Diabetes, Hyperlipemia. A Metabolic Triad from Revue du Rhumatisme et Des Maladies Osteo-Articulaires. Jan-Feb 1968. Accessed at pubmed.ncbi.nlm.nih.gov/5931881.
5. Reaven GM. Banting Lecture 1988: Role of Insulin Resistance in Human Disease. Diabetes. 1988 Dec;37(12):1558-601. Accessed at pubmed.ncbi.nlm.nih.gov/3085678.
6. American Heart Association. About Metabolic Syndrome. Accessed at www.heart.org/en/health-topics/metabolic-syndromes/about-metabolic-syndrome.
7. Johns Hopkins Medicine. Metabolic Syndrome. Accessed at www.hopkinsmedicine.org/health/conditions-and-diseases/metabolic-syndrome.
8. Roberts CB, Kukull WA, and Bandeen-Roche K. Metabolic Syndrome and the Risk of Renal Impairment: The Modification by Exercise Training. Comprehensive Physiology, January 2013. Accessed at www.ncbi.nlm.nih.gov/pmc/articles/PMC3429661.
9. El-Saed A, Muntner P, Raffaele A, et al. Metabolic Syndrome: The Linking Mechanism and the Complications. Archives of Medical Science. March 31, 2016. Accessed at www.ncbi.nlm.nih.gov/pubmed/27160085.
10. Misra JK, Chaudhury N and Akincigil T. Metabolic Syndrome Prevalence by Race/Ethnicity and Sex in the United States, National Health and Nutrition Examination Survey, 1988–2012. Preventing Chronic Disease. March 16, 2017. Accessed at www.cdc.gov/pcd/issues/2017/17_0287.htm.
11. Daspuro V, Yelourov G, Radnyeva I, et al. The Life Expectancy of Diabetic Patients with Metabolic Syndrome After Weight Loss Study Protocol for a Randomized Clinical Trial. Trials Journal, April 8, 2013. Accessed at trialsjournal.biomedcentral.com/articles/10.1186/1745-6215-14-100.
12. Shi TK, Wang B, and Natrajran S. The Influence of Metabolic Syndrome in Predicting Mortality Risk among U.S. Adults: Implications for Public Health Interventions to Attain a Normal Weight: Preventing Chronic Disease. May 31, 2020. Accessed at www.cdc.gov/pcd/issues/2020/20_0204.htm.
13. Perkins S, Hsu S, Lai E, Reyes A, et al. Early Onset Obesity, and Risk of Metabolic Syndrome Among Children Adolescents: Preventing Chronic Disease. Oct. 12, 2017. Accessed at www.cdc.gov/pcd/issues/2017/17_0132.htm.
14. Knowler WC, Barrett-Connor E, Fowler SE, et al. Metabolic Syndrome and Mortality in Older Adults. JAMA Internal Medicine, May 12, 2008. Accessed at jamanetwork.com/journals/jamainternalmedicine/fullarticle/200000.
15. Shelving DH, Perumareddi P, and Hemmeker C. Metabolic Syndrome: Journal of Cardiovascular Pharmacology and Therapeutics. July 2017. Accessed at pubmed.ncbi.nlm.nih.gov/28875739.
**JIM TRAGESER** is a freelance journalist in the San Diego area.
“Scientists have identified components of the influenza virus that do not really change much at all. The critical challenge is getting a vaccine to induce a response to those components.”
— Anthony Fauci, MD, Director, National Institute of Allergy and Infectious Diseases
YEAR AFTER YEAR, the extraordinary mutability of influenza (flu) viruses that enables their progeny to escape immune detection to reinfect us translates into more than 450,000 hospitalizations and more than 40,000 flu-related deaths annually.\(^1\) If this were not enough, the ongoing COVID-19 pandemic serves as a harsh reminder that we may someday face an influenza pandemic to rival the catastrophic 1918 pandemic that claimed more than 650,000 U.S. lives — at a time when our population was less than one-third the size it is today.
This ability of influenza viruses to continually reinvent themselves has another important ramification. The spontaneous RNA mutations in the large mushroom-like head region of the hemagglutinin protein that decorates the viral membrane surface necessitate a complex, costly global effort each year to isolate and produce new vaccines against emergent influenza strains believed most likely to circulate in the upcoming flu season.
Adding to the fact that selecting the eventual epidemic strains is an imperfect art, ongoing genetic drift of the selected A and B strains over the months that elapse before availability of mass-produced vaccines can enable them to evade antibody-mediated immunity induced by the inactivated whole-virus or synthetic antigen vaccine. As a consequence, the effectiveness of seasonal flu vaccines can differ widely from one year to the next; over the last decade, it has ranged from about 50 percent to as low as 20 percent.\(^2\) This in turn partly accounts for why more than one-half of U.S. adults don’t elect to get the annual flu shot.\(^3\)
For decades, virologists and public health experts have touted the concept of “universal” flu vaccines capable of inducing broad immune protection against both seasonal and pandemic influenza outbreaks. Ideally, such vaccines would eliminate the need for annual vaccination, and provide at least some degree of herd immunity to help reduce infection risk in those who fail to get immunized. The National Institute of Allergy and Infectious Diseases (NIAID) has defined several criteria for any universal influenza vaccine, including the ability to:
• Be at least 75 percent effective;
• Protect against both group I and II influenza A viruses;
• Provide durable protection that lasts at least one year; and
• Be suitable for all age groups.
Advances over the last decade in virology and molecular genetics have enabled academic, government and industry scientists to design, produce and
test a diverse spectrum of universal flu vaccine candidates. Today, more than 100 university and private sector-based laboratories are working on novel universal flu vaccines of one type or another, at least 16 of which are currently in clinical-stage development (Table).4
The strategy behind all of these candidate vaccines essentially amounts to eliciting a robust host immune response to one or more viral proteins — the hemagglutinin (HA) stem domain, matrix proteins M1 and M2, nucleoprotein (NP) and neuraminidase (NA) — that are highly conserved across different influenza strains and subtypes. But the vaccines themselves and the technology platforms used to produce them broadly fall into six distinct categories (Table):
- Nucleic acid-based vaccines
- Recombinant influenza virus-based vaccines
- Recombinant protein vaccines
- Virus-vectored vaccines
- Virus-like particle (VLP) vaccines
- Non-VLP nanoparticle vaccines
Several vaccine candidates in each of these categories have advanced to human trials, and numerous others are being tested in animal models to characterize their safety, immunogenicity and tolerability.
**Nucleic Acid-Based Vaccines**
Population-based experience over this last year of the COVID-19 pandemic has proven that messenger RNA (mRNA) vaccines are safe and highly protective against multiple strains of SARS-CoV-2.
*mRNA lipid nanoparticle vaccines* (*Moderna*). In essence, mRNA is a temporary set of instructions that directs cells to make a protein. This may include virtually any membrane-bound or soluble viral antigen, mimicking the antigen expression that occurs in a natural infection. A particularly strong appeal of mRNA influenza vaccines is the ability to rapidly formulate and manufacture them on a large scale, helping to avert the problem of antigenic drift that occurs over the roughly six months between early identification of anticipated circulating strains and large-scale production of whole-virus influenza vaccines in chicken eggs or mammalian cells.
Over the five years prior to the COVID-19 pandemic, Moderna had already been developing mRNA vaccines targeting a number of viral infections, including seasonal and pandemic influenza. The company recently completed a pair of Phase I dose-ranging studies evaluating lipid nanoparticle-encapsulated mRNA vaccines directed against potentially pandemic avian H10N8 and H7N9 influenza viruses.5 Both vaccines were well-tolerated and elicited robust humoral immune responses in healthy adult volunteers, as measured both by hemagglutinin inhibition (HAI) and microneutralization assays.
*Modified mRNA vaccines* (*Pfizer/BioNTech*). In September 2021, Pfizer announced the first study participants had received a single dose of monovalent
---
**Table. Universal Influenza Vaccine Candidates in Preclinical and Clinical Development**
| Platform | Preclinical | Phase I clinical trials | Phase II clinical trials | Phase III clinical trials |
|---------------------------------|-------------|-------------------------|--------------------------|---------------------------|
| Virus-like particle (VLP) vaccines | 17 vaccines | M2e-based VLPs (Sanofi) | | Quadrivalent VLP (Medicago) |
| Non-VLP nanoparticle vaccines | 18 vaccines | Stabilized headless HA stem nanoparticles (NIAID/Sanofi) FluMos-v1 (NIAID) | OVX836 (Osivax) | NanoFlu (Novavax) |
| Nucleic acid vaccines | 14 vaccines | mRNA lipid nanoparticles (Moderna) Modified mRNA vaccine (Pfizer) Micro-consensus DNA vaccine (Inovio/Wistar Institute) | | |
| Recombinant flu virus vaccines | 8 vaccines | cHA-based LAIV combinations (Icahn/Mount Sinai) CodaVax (Codagenix) | DeltaFLU (Vivaldi Biosciences) RedeeFlu M2SR (FluGen) | |
| Recombinant protein vaccines | 22 vaccines | M2e-based recombinant fusion proteins (VA Pharma) | FLU-v (Imutex/SEEK) | |
| Virus-vectored vaccines | 14 vaccines | NasoVax (Altimmune) | MVA-NP+M1 (Vaccitech) | |
or bivalent investigational mRNA influenza vaccines.\textsuperscript{6} This Phase I trial in more than 600 healthy adults aged 65 years to 85 years will assess their safety, tolerability and immunogenicity relative to an FDA-approved standard quadrivalent influenza vaccine used as a control. While this is a seasonal mRNA flu vaccine, its performance in this and later efficacy studies is an important first step toward gauging the potential utility of a pandemic mRNA flu vaccine.
Numerous other novel mRNA and DNA-based vaccine candidates are currently in preclinical development. For example, collaborators at the University of Pennsylvania and the Icahn School of Medicine have shown that a single intradermal dose of their modified mRNA-lipid nanoparticle vaccine targeting a combination of conserved influenza virus antigens (HA stem, NA, NP) induced a strong immune response and provided protection from challenge with pandemic H1N1 virus at 500 times the lethal dose in a murine model.\textsuperscript{7} Strong immunogenicity and broad protection against pandemic viruses were also shown in ferrets immunized with Denmark-based Statens Serum Institut’s polyvalent influenza A DNA vaccine, which encodes HA and NA proteins derived from the pandemic 2009 H1N1 and 1968 H3N2 virus strains, as well as matrix proteins from the pandemic 1918 strain.\textsuperscript{8}
**Recombinant Influenza Virus-Based Vaccines**
Two of four recombinant virus-based universal flu vaccines currently in clinical development have advanced to Phase II testing: live attenuated influenza virus (LAIV) vaccines developed by FluGen in Madison, Wis., and Austria-based Vivaldi Biosciences.
*Single-replication (SR) recombinant live influenza vaccine (FluGen).* Licensed from the University of Wisconsin, FluGen’s novel M2SR vaccine contains genetically engineered influenza viruses in which a portion of the M2 gene has been deleted. Delivered intranasally like another licensed LAIV, FluMist, M2SR can infect cells and express the entire spectrum of influenza RNA and proteins, but cannot produce any infectious virus particles or cause any pathological signs of infection. Further, the M2SR vaccine can be engineered to express HA and neuraminidase antigens common to different influenza virus strains.
Healthy adults enrolled in a Phase II human challenge study received a single low intranasal dose of the “supraseasonal” M2SR vaccine constructed with the H3N2 virus Bris2007, then were challenged with an H3N2 influenza strain seven years drifted from the vaccine. Despite the mismatch of vaccine and challenge strains, the subset of subjects with a neutralizing antibody response had significantly reduced rates of infection and illness after challenge.\textsuperscript{9} A dose-escalation study has shown that up to 10-fold higher doses of M2SR induce a protective immune response in a higher proportion of recipients. In May 2021, with support from NIAID, FluGen initiated the first placebo-controlled study of M2SR in older adults aged 65 years to 85 years, who are most vulnerable to serious complications and death from the flu.
*Replication-deficient LAIV vaccine (Vivaldi Biosciences).* Austria-based Vivaldi recently completed Phase I and II clinical testing of DeltaFLU, another intranasally administered LAIV universal influenza vaccine that lacks a specific viral protein and therefore cannot replicate in the recipient. According to the company, findings indicate that DeltaFLU “shows potential for universal protection against all influenza A and B virus strains, including drifted seasonal influenza strains and emerging pandemic strains.”\textsuperscript{10}
Vivaldi has also announced positive preclinical data that supports further development of a novel intranasal combination vaccine called Delta-19, which is designed to confer protection against both COVID-19 and all influenza strains.
*Chimeric hemagglutinin (cHA)-based LAIV vaccine (Icahn/Mount Sinai).* This research team has developed a sequential chimeric HA vaccination strategy that combines the highly conserved stem domain with immunodominant head domains from avian influenza virus subtypes. Boosting with a cHA construct that contains the same stem but a different head induces a stronger recall response against the stem than the initial low-level “immune priming” response.
A Phase I study in healthy 18- to 39-year-old subjects documented a strong, durable and functional immune response targeting the conserved HA stem domain, suggesting that “chimeric hemagglutinins have the potential to be developed as universal vaccines that protect broadly against influenza viruses.”\textsuperscript{11}
**Recombinant Protein Vaccines**
A number of laboratories have developed recombinant peptide vaccines that match conserved antigens present in specific internal or external viral proteins. Among the leading efforts is a collaboration between UK-based Imutex and NIAID to conduct Phase IIb clinical studies of FLU-v, a mixture of four recombinant peptides that originate from highly conserved internal proteins (M1, M2 and NP) common to all influenza A and B viruses.
A pair of recently completed Phase II studies found healthy adults who received a single dose of an adjuvanted version of FLU-v mounted a protective T cell-mediated response and were significantly less likely than control subjects to develop mild-to-moderate flu following intranasal challenge with a single H1N1 strain.\textsuperscript{12,13}
In addition to the potential for FLU-v to confer protective immunity against any influenza strain, the selective cellular immune response could be of particular benefit for the 10 percent to 20 percent of the general population who fail to mount a good antibody response against the exposed HA region of the virus.
Numerous other laboratories across the globe are currently in preclinical development with their own universal recombinant protein vaccines to try to induce T cell and humoral immunity directed against conserved epitopes on the viral HA stem, M1, M2 and NP proteins. But several of the most advanced candidate vaccines have failed in clinical testing, most disappointingly BiondVax Pharmaceuticals’ M-001 vaccine comprising nine highly conserved HA head domain epitopes common to some 40,000 isolated influenza virus strains. After 15 years of largely encouraging preclinical and clinical findings, the company announced in late 2020 that data from a pivotal Phase III trial of M-001 failed to show a significant difference in flu illness or severity in more than 12,000 adult subjects (half of whom were age 65 and older) over the 2018-2019 flu season.\textsuperscript{14}
**Virus-Vectored Vaccines**
Similar to how gene therapy uses viral vectors to carry genetic instructions to host cells to express key missing functional proteins, novel vaccines are being developed to induce our cells to express influenza virus proteins that are largely conserved across strains and subtypes.
Of more than a dozen initiatives in progress, Vaccitech’s modified vaccinia Ankara (MVA)-vectored construct expressing influenza A-derived NP and M1 protein has completed a Phase IIb safety and immunogenicity study in 846 adults aged 65 years and older. While this MVA-NP+M1 vaccine induced a substantial M1-specific T cell response,\textsuperscript{15} the study sample was too small to draw any conclusions about potential efficacy endpoints such as incidence and duration of influenza-like illness (ILI) or number of days with moderate or severe symptoms during an ILI episode.\textsuperscript{16}
Other promising virus-vectored influenza vaccines have reached Phase II clinical development. In particular, a single dose of Altimmune’s intranasally delivered replication-deficient adenovirus-based vaccine, NasoVax, mediates expression of the HA protein found on a targeted flu virus strain, and elicits robust mucosal and systemic immune responses. However, it is strain-specific and therefore is not designed to confer broad protection against other flu strains and subtypes.\textsuperscript{17}
**Virus-Like Particle Vaccines**
Comprising one or more viral structural proteins, virus-like particles (VLPs) are molecules that closely resemble their live virus counterparts, but are noninfectious because they contain no viral genetic material. More than a decade ago, intranasal immunization of mice with recombinant VLPs generated from structural proteins of the pandemic 1918 H1N1 virus was first shown to be protective against a lethal challenge with both the 1918 virus and a highly pathogenic avian H5N1 virus.\textsuperscript{18}
Furthest along among nearly 20 influenza VLP development programs is Medicago, a privately held Canadian firm whose investigational HA-bearing quadrivalent VLP (QVLP) vaccine is produced in a relative of the tobacco
plant. In a large-scale multinational study in elderly participants covering two influenza seasons between 2017 and 2019, the QVLP vaccine met its primary noninferiority endpoint relative to standard quadrivalent influenza vaccine for the prevention of ILI caused by any strain.
Non-VLP Nanoparticle Vaccines
Perhaps most exotic of all are the nanoparticle vaccines, which are novel constructs of conserved viral antigens displayed on a nonviral nanoparticle. A prime example is a stabilized headless HA stem nanoparticle vaccine being co-developed by Sanofi Pasteur and NIAID. Numerous HA stem portion “spikes” are presented on the surface of a microscopic nonhuman ferritin nanoparticle, mimicking the natural organization of HA on the influenza virus.
While HA stem antigens were derived from an H1N1 flu virus, this candidate vaccine protected both mice and ferrets against a lethal H5N1 flu virus, despite the fact that H5N1 is an entirely different viral subtype. This vaccine has also elicited broadly neutralizing antibody responses to diverse H1 and H3 viruses in nonhuman primates. NIAID completed a safety, tolerability and immunogenicity study earlier this year, and findings are currently being analyzed.
Another non-VLP nanoparticle vaccine showing promise is NIAID’s “mosaic” quadrivalent flu vaccine that displays 20 HA antigens arranged in repeated patterns, sending a strong “danger” signal to the immune system that prompts a vigorous antibody response. Dubbed FluMos-v1, this universal flu vaccine candidate began Phase I clinical testing in May 2021.
Many Candidate Vaccines Boost Prospects
“Our ultimate aspirational goal is to have vaccines that you can give relatively infrequently — maybe every five or 10 years — that provide protection against the broad array of influenza viruses that we encounter,” NIAID Director Anthony Fauci, MD, recently noted. But he cautioned that effective universal flu vaccines could arrive in a stepwise fashion, with successive iterations providing protection against increasing portions of the numerous influenza subtypes and groups.
Time will tell, but the many novel universal flu vaccine candidates entering the pipeline and advancing from preclinical development to human testing offer new hope that the realization of a decades-old dream is not far off.
References
1. Salje H, Metzler-Badju A, Fitzgerald MC, et al. Future epidemiological and economic impacts of universal influenza vaccine. Proc Natl Acad Sci USA 2019 Oct 8;116(41):20786-92.
2. U.S. Centers for Disease Control and Prevention. Effectiveness of Inactivated Influenza Vaccine (IIV) in Older Adults. Accessed at www.cdc.gov/flu/vaccines-work/effectiveness-studies.htm.
3. U.S. Centers for Disease Control and Prevention. Flu vaccination coverage among adults aged ≥18 years. Accessed at www.cdc.gov/flu/flu-activity-surveillance/coverage-1818estimates.htm.
4. Center for Infectious Disease Research and Policy, University of Minnesota. Universal Influenza Vaccine Technology Landscape. Accessed at nr.cidrap.umn.edu/universal-influenza-vaccine-technology-landscape.
5. Adams RA, Klimov AI, Smolentsev I, et al. mRNA vaccines against H10N8 and H7N9 influenza viruses of pandemic potential are immunogenic and protective against homologous challenge in phase I randomized clinical trials. Vaccine 2019 May 31:3059-64.
6. Pfizer. A Study to Evaluate the Safety, Tolerability, and Immunogenicity of a Modified RNA Vaccine Against Influenza. Accessed at clinicaltrials.gov/ct2/show/NCT04409034.
7. Freyn AW, da Silva JR, Rosado VC, et al. A multi-targeting, nucleoside-modified mRNA influenza vaccine provides broad protection in mice. Mol Ther Nucleic Acids 2020 Jul;20(1):150-60.
8. Gakidou E, Majumdar S, Skelton S, et al. Protective efficacy of a polyvalent influenza A DNA vaccine against both homologous and heterologous challenge in the ferret model. Vaccine 2021 Aug 9;39(35):4098-106.
9. Eiden J, Voldkast B, Rudenko D, et al. M2-deficient single-replication influenza virus elicits robust immune responses associated with protection against homologous challenge with H1N1 and H5N2 influenza strain. J Infect Dis 2021 July 23; online ahead of print.
10. Vivaldi Biosciences. Oct 21, 2021 press release. Accessed at vivaldibiosciences.com/news-events/press-releases/2021/10/21.
11. Nachtegaal R, Fosar J, Nabaty A, et al. A chimeric hemagglutinin-based universal influenza virus vaccine approach induces broad and lasting immunity in an adjuvanted, placebo-controlled phase I trial. Nat Med 2020 Dec;26(12):1706-14.
12. Plengeborg D, Dille J, de Groot S, et al. Immunogenicity, safety and efficacy of a standalone universal influenza vaccine, FLU-v, in healthy adults aged 18–64 years. Vaccine 2020 Jul;38(30):4317-26.
13. Plengeborg D, James E, Fernandez A, et al. Efficacy of FLU-v, a universal influenza vaccine, in a randomized Phase IIb human influenza challenge study. NPJ Vaccines 2020 Mar 19;5:25.
14. BiondVax announces topline results from Phase 3 clinical trial of the M-001 universal influenza vaccine candidate, Oct 23, 2020. Accessed at www.biondvax.com/press-releases/2020/10/biondvax-announces-topline-results-from-phase-3-clinical-trial-of-the-m-001-universal-influenza-vaccine-candidate.
15. Paksundan S, Ahmed MS, Sharma R, et al. Modified vaccinia Ankara-based vaccine expressing NP and M1 activates mucosal M1-specific CD4+ T cells. J Virol 2020 Sept 12;94(20):8071-89.
16. Sanofi. Nov 19, 2020. Fujimori PM, et al. Efficacy and safety of a modified vaccine combined with QV in people aged 65 and older: a randomized controlled clinical trial (IMVIKTUS). Vaccine 2021;98B:851.
17. Tasker S, O’Reilly W, Syrjänen K, et al. Safety and immunogenicity of a modified vaccinia Ankara-based vaccine in a phase 2 randomized, controlled trial. Vaccine (Bristol) 2018 Mar 5;9(5):224.
18. Pereira A, Ahuja A, Singh A, et al. Intranasal vaccination with 1918 influenza virus-like particles protects mice and ferrets from lethal 1918 and H5N1 influenza viral challenge. J Virol 2009 Jun 8;83(12):5728-34.
19. Lee BJ, Ahuja A, Singh A, et al. Efficacy, immunogenicity, and safety of a plant-derived, quadrivalent, virus-like particle influenza vaccine in adults (18–64 years) and older adults (≥65 years): two multicenter, randomized phase 3 trials. Lancet 2020 Nov 7;396(10261):1491-1503.
20. Yassine HM, Bogart LA, McTernary PM, et al. Hemagglutinin-stem nanoparticles generate heterosubtypic influenza protection. Nat Med 2015 Sep;21(9):1066-70.
21. National Institute of Allergy and Infectious Diseases (NIAID), Trivac. Accessed at www.niaid.nih.gov/news-events/trivac-universal-flu-vaccine.
22. C-Span. Dr. Anthony Fauci and Rick Bright on a Universal Flu Vaccine. Accessed at www.c-span.org/video/?510782-1/trivac-universal-flu-vaccine.
KEITH BERMAN, MPH, MBA, is the founder of Health Research Associates, providing reimbursement consulting, business development and market research services to biopharmaceutical, blood product and medical device manufacturers and suppliers. He also serves as editor of International Blood/Plasma News, a blood products industry newsletter.
SMART system. SMART refrigeration.
MinibarRx®, the SMART refrigeration system designed specifically for storage, handling and inventory management.
• Ready to provide your patients treatment NOW
• Improve the quality of care for your patients
• Improve the patient experience; no waiting or rescheduling of treatment
Through our solution...
• Forward-Deployed Inventory (FDI)
• Reduce financial acquisition and inventory costs
• Invoice on dispense
• Right-sized inventory
• Effortless and seamless technology
• Enhanced data granularity
• State-of-the-art temperature control and monitoring
• Full suite of smart management tools
...and much more!
© 2022 MinibarRx, LLC. All Rights Reserved FL709-SP 020520
firstname.lastname@example.org | MinibarRx.com
Osteoporosis: A Patient’s Perspective
By Trudie Mitschang
Mary Hettinger, who has a family history of osteoporosis, understood the importance of getting regular bone density screenings, and since her diagnosis at age 61, she has made lifestyle changes to decrease her risk of bone breaks.
WHEN MARY Hettinger was diagnosed with osteoporosis in 2019 at age 61, the news came as no surprise. Six years earlier, Mary tested positive for osteopenia, a condition that indicates an overall weakening of the bones and is often a precursor to osteoporosis. Because Mary’s mother also had osteoporosis, Mary says she knew her risk factor was higher than average. “My mom suffered from the disease and had that telltale hunch in her back,” she recalls. “I started getting bone density screenings early because I knew osteoporosis can be present long before symptoms appear.”
The word “osteoporosis” literally means “porous bone.” It’s a disease that weakens bones and puts those who suffer from it at increased risk for bone fractures due to diminished bone mass and strength. According to the National Osteoporosis Foundation, approximately 10 million Americans have osteoporosis and another 44 million have low bone density, placing them at increased risk for developing osteoporosis later in life. The condition is more common in women than men, affecting almost one in five women aged 50 and older. And, while genetics plays a role (as was the case with Mary), decreased estrogen levels after menopause, a diet low in calcium, a sedentary lifestyle, caffeine consumption and smoking tobacco can all contribute to bone mass loss as people age.
Because many people with osteoporosis do not know they have it until they break a bone, regular bone density screenings are one of the best ways to obtain an early diagnosis and begin potential interventions. “After my doctor told me I had osteopenia, I did make some small lifestyle adjustments, like giving up caffeine,” says Mary. “I was already fairly active and embraced a healthy diet overall, but since my osteoporosis diagnosis, I have made exercise an even higher priority.”
Mary works full time as a management consultant, a career that includes large chunks of time spent in front of a computer screen. Since her job is primarily sedentary, Mary blocks time on her calendar to go to the gym three to four times a week, where she combines strength training with aerobic classes to keep herself strong and limber. “I do weight-bearing routines at least twice a week, including a group exercise class that uses barbells,” she says. “We also have a home gym with weights in our basement in case I miss a class.”
After her diagnosis, Mary’s doctor also recommended she include more calcium-rich foods in her diet and prescribed generic alendronate, a medication that has been shown to slow the progression of bone loss. In addition to that once-weekly prescription pill, Mary takes a twice-daily calcium and vitamin D supplement.
Statistics show people who suffer osteoporotic bone breaks are most likely to have them occur in the hip, spine or wrist, so Mary (whose osteoporosis is currently limited to her spine) is careful to avoid high-risk activities like skiing or softball. In terms of her overall health, Mary says she considers herself fortunate to have caught the disease early enough to treat it. At the time of this writing, her next bone density screening was in December 2021, and she is hopeful the medication and lifestyle adjustments she’s made are making a positive difference. “I am a big believer in screenings,” says Mary. “If you have any family history of low bone density or other risk factors, tell your doctor and do your research. I know doctors don’t always love it when patients do Internet research and, of course, there is misinformation out there, but I felt more empowered when I learned all the facts about this condition.”
Reference
1. National Osteoporosis Foundation. Osteoporosis Fast Facts. Accessed at nof.org/wp-content/uploads/2015/12/Osteoporosis-Fast-Facts.pdf.
Osteoporosis Fast Facts
- Osteoporosis is often called a “silent disease” because individuals cannot feel their bones getting weaker; they may not even know they have it until after they break a bone.
- Osteoporosis-related bone breaks cost patients, their families and the healthcare system $19 billion annually.
- By 2025, experts predict osteoporosis will be responsible for three million fractures resulting in $25.3 billion in costs.
- Osteoporosis is preventable. Studies show building strong bones during childhood and adolescence can help prevent osteoporosis later in life.
- Osteoporosis is manageable. Eating a healthy diet and exercising regularly can help slow or stop the loss of bone mass and help prevent fractures from occurring.
Osteoporosis: A Physician’s Perspective
Dr. Sarah Berry is at the forefront of osteoporosis research, studying modifiable risk factors for falls.
SARAH BERRY, MD, MPH, has dedicated her life to the study of bone health. She is the associate director at the Musculoskeletal Research Center, associate scientist at the Hinda and Arthur Marcus Institute for Aging Research and associate professor of medicine at Harvard Medical School, Beth Israel Deaconess Medical Center. Dr. Berry’s primary research has focused on outcomes following hip fractures both in the community and nursing homes. Given the strong link between falls and fractures, she is also interested in studying novel and modifiable risk factors for falls and is at the forefront of osteoporosis research.
BSTQ: Have you seen or been involved in any promising research for how to effectively treat or manage osteoporosis?
Dr. Berry: Currently, I am participating in a multisite study to test the effects of low-dose testosterone combined with exercise in older women recovering from a hip fracture. We don’t yet know the results of the trial and whether the testosterone will be helpful, but it is exciting to consider and learn about new approaches.
BSTQ: What dietary changes can help prevent osteoporosis?
Dr. Berry: Dairy foods are high in calcium, which is important to maintain bone health. It is particularly important that children and teenagers consume enough calcium since this is a period of rapid bone growth. There is some evidence to support adequate protein intake is also important to maintain bone health.
BSTQ: What is a little-known fact about osteoporosis?
Dr. Berry: Osteoporosis is a silent disease. Typically, people don’t realize they have weak bones until they have a fracture. Because of that, it’s better to focus on preventing osteoporosis rather than waiting to find out you have it.
BSTQ: At what age should someone request a bone density screening?
Dr. Berry: Women should get screened beginning at age 65, and men beginning at age 70. However, if a man or woman has risk factors such as paralysis or a history of adult fracture, they should get screened earlier.
BSTQ: How do you screen for osteoporosis?
Dr. Berry: A bone density test is similar to an X-ray (but with less radiation than a chest X-ray). It measures how tough your bones are. Another option is to use the FRAX model, an online tool developed by the World Health Organization that assesses the risk of osteoporosis over a 10-year period based on age, weight, family health history and other factors.
BSTQ: Is it ever too late to “grow” new bones?
Dr. Berry: Your skeleton turns over every 10 years. After age 30, the rate of bone loss outpaces the rate of bone growth. Bone loss also increases in women after menopause. However, it is possible to rebuild bone and increase bone strength.
BSTQ: What advice do you have for patients with high-risk factors for osteoporosis?
Dr. Berry: I recommend speaking with your doctor. Exercise, especially weight-bearing exercise like walking and dancing, is helpful to strengthen bones.
BSTQ: What factors make osteoporosis a life-threatening disease?
Dr. Berry: Most people don’t realize it can be deadly because of common complications such as infections, blood clots and loss of mobility. Pain medications can affect cognition and cause confusion. Twenty percent of people with hip fractures die within a year, while another 20 percent end up needing long-term care.
BSTQ: Your research encompasses risk factors for falls. Tell us more about that.
Dr. Berry: Prescription medications are one of the most common risk factors for falls because so many cause side effects. It’s important for patients to speak with their doctor regularly about their medicines and ask about the lowest dose available that still works for them.
BSTQ: What lifestyle adjustments can people make to strengthen their bones and prevent osteoporosis?
Dr. Berry: It is so important to exercise because it strengthens the bones, which can prevent falls. Talk to your doctor to see if you are getting enough calcium and vitamin D or if you need to be taking a prescription medication to prevent fractures. Understand the indication for all your medications, and work with your doctor to use the lowest dose effective for you. It’s important to lay the foundation for strong bones now, no matter your age. By incorporating good healthy habits, you can reduce the risk of fracture later.
TRUDIE MITSCHANG is a contributing writer for BioSupply Trends Quarterly magazine.
Medical Terminology: A Quick & Easy Reference Book – Basics of Terminology, Anatomy, and Abbreviations
Author: Medical Resources Team
This companion to medical study guides includes information related to CPC billing and coding, nursing entrance exams such as TEAS and HESI A2, NCLEX, MCAT, certified medical assistants, nursing assistants/aides and more. In addition, the book covers all facets of medical terminology for health professionals, including a refresher course on Greek and Latin affixes, all affixes listed alphabetically and by body system, a full list of common medical abbreviations and a list of body positions and anatomical reference terms.
www.amazon.com/MEDICAL-TERMINOLOGY-Reference-Terminology-Abbreviations/dp/B08512NQ5
Physician Leadership: The 11 Skills Every Doctor Needs to Be an Effective Leader, 1st Edition
Author: Karen J. Nichols, DO
This book is a concise guide for busy physicians doing their best to successfully lead people and organizations. It covers foundational leadership essentials every physician needs to master to transform themselves from a highly motivated novice leader into an effective, skilled and productive leader. Each chapter offers readers a summary of the crucial points found within, sample questions, exercises and a bibliography of the relevant academic literature for further study. Actionable, real-world advice for practicing and aspiring physicians is provided, including a thorough introduction to personal approach and style when interacting with patients, managers, boards and committees; an exploration of how to employ the principles of effective communication to achieve desired results and practical techniques for implementing those principles; practical discussions of the role perspectives play in shaping an organization’s culture and how those perspectives affect leadership efficacy; and in-depth examinations of approaches to decision-making that get buy-in from others and achieve results.
www.amazon.com/Physician-Leadership-Skills-Doctor-Effective/dp/1119817544
The Resilient Healthcare Organization: How to Reduce Physician and Healthcare Worker Burnout, 1st Edition
Author: George Mayzell, MD, MBA
The Resilient Healthcare Organization focuses on physicians’ and healthcare professionals’ experiences and how they overcame a loss of enthusiasm for work, feelings of cynicism and a low sense of personal accomplishment. Emotional exhaustion, depersonalization and perceived ineffectiveness are the cardinal features that define “burnout,” which affects almost 50 percent of physicians and 30 percent to 70 percent of nurses. The book addresses why burnout is viewed as a threat and how it can be fought, and it includes a discussion of the contributing factors and solutions at the health system and societal levels. Additionally, the book explores the current and future etiology and impacts on physicians and healthcare professionals, with a significant emphasis on solutions at both the individual and system levels.
www.amazon.com/Resilient-Healthcare-Organization-Physician-Burnout/dp/1032173025
Review of Medical Microbiology and Immunology, 17th Edition
Authors: Warren E. Levinson, Peter Chin-Hong, Elizabeth A. Joyce, Jesse Nussbaum and Brian Schwartz
This book covers both basic and clinical aspects of bacteriology, virology, mycology, parasitology and immunology. Important infectious diseases are discussed using an organ system approach using a mix of narrative text, color images, tables, figures, Q&As and clinical vignettes. This updated edition reflects the latest research, treatment and developments, as well as a chapter on COVID-19 with images.
www.amazon.com/Review-Medical-Microbiology-Immunology-17th-ebook/dp/B09H3NB6RL
Transition from Clinic- to Home-Based IVIG/SCIG Is Successful to Decrease Exposure to COVID-19
A recent study shows the transition from clinic-based to home-based intravenous immune globulin (IVIG)/subcutaneous IG (SCIG) infusion can be successfully accomplished to decrease potential exposure during a pandemic in a high-risk immunosuppressed population, with no impact on patient satisfaction, adherence or efficacy. In addition, home-based infusions were associated with a reduction in costs to patients and an increase in available chair time in the infusion clinic.
In the study, criteria were developed to identify high-risk immunosuppressed patients who would be appropriate candidates for potential conversion to home-based IVIG infusions. Data were collected via chart review, and cost analysis was performed using Medicare Part B reimbursement data. A patient outcome questionnaire was developed for administration through follow-up phone calls.
From March 2020 to May 2020, 45 patients met criteria for home-based infusion, with 27 patients (60 percent) agreeing to it. Posttransition patient outcomes assessment, conducted in 26 patients (96 percent), demonstrated good patient understanding of the home-based infusion process. No infusion-related complications were reported, and 24 patients (92 percent) had no concerns about receiving future IVIG and/or SCIG doses at home. No patient tested positive for COVID-19 during the study period. Clinic infusion visits decreased by 26.6 visits per month, resulting in a total of 106 hours of additional available infusion chair time per month and associated cost savings of $12,877.
Perreault S, Schiffer M, Clinchy-Jarmozko V, et al. Mitigating the risk of COVID-19 exposure by transitioning from clinic-based to home-based immune globulin infusion. Am J Health Syst Pharm 2021 Jun 7;78(12):1112-1117.
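The chair-time figure reported above can be checked with back-of-the-envelope arithmetic. This is only an illustrative sketch: the four-hour average infusion duration is an assumption introduced here, not a number stated in the study.

```python
# Rough consistency check of the reported clinic capacity figures.
# ASSUMPTION: an average clinic IVIG infusion occupies a chair for ~4 hours
# (hypothetical; the study reports only the monthly totals).
avoided_visits_per_month = 26.6   # reported decrease in clinic infusion visits
assumed_hours_per_visit = 4.0     # assumed average chair time per infusion

freed_chair_hours = avoided_visits_per_month * assumed_hours_per_visit
print(round(freed_chair_hours))   # ~106, consistent with the reported total
```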
Study Shows COVID-19 Infection Correlates with Autoimmune Markers
A new study demonstrates how severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection could be associated with an autoimmune response and the development of autoantibodies.
In the study, the researchers investigated whether SARS-CoV-2 stimulates autoantibody production and contributes to activation of autoimmunity. Forty adult patients (mean age 66.8 years) admitted to Alessandria Hospital between March 2020 and April 2020 were enrolled. All patients had a confirmed COVID-19 diagnosis and no previous clinical record of autoimmune disease. Forty blood donors were analyzed for the same markers and considered healthy controls. The patients had high levels of common inflammatory markers such as C-reactive protein, lactate dehydrogenase, ferritin and creatinine. Interleukin-6 concentrations were also increased, supporting the major role of this interleukin during COVID-19 infection. Lymphocyte numbers were generally lower compared with healthy individuals. All patients were also screened for the most common autoantibodies.
Results showed a significant prevalence of antinuclear antibodies, antineutrophil cytoplasmic antibodies and ASCA immunoglobulin A antibodies. Patients with a de novo autoimmune response had the worst acute viral disease prognosis and outcome. According to the researchers, the results support the hypothesis that COVID-19 infection correlates with autoimmunity markers. However, they concluded further investigation is necessary to define the possible link between SARS-CoV-2 infection and autoimmune disease onset.
Sacchi MC, Tamiazzo S, Stabbione P, et al. SARS-CoV-2 infection as a trigger of autoimmune response. Clin Transl Sci 2021 May;14(3):898-907.
## Medicare Immune Globulin Reimbursement Rates
Rates are effective Jan. 1, 2022, through March 31, 2022
| Product | Manufacturer | J Codes | ASP + 6% (before sequestration) | ASP + 4.3%* (after sequestration) |
|--------------------------|--------------------|-----------|---------------------------------|-----------------------------------|
| **IVIG** | | | | |
| ASCENIV | ADMA Biologics | J1554 | $963.54 | $948.09 |
| BIVIGAM | ADMA Biologics | J1556 | $140.98 | $138.72 |
| FLEBOGAMMA DIF | Grifols | J1572 | $71.79 | $70.63 |
| GAMMAGARD SD | Takeda | J1566 | $139.18 | $136.95 |
| GAMMAPLEX | BPL | J1557 | $101.61 | $99.98 |
| OCTAGAM | Octapharma | J1568 | $83.23 | $81.89 |
| PANZYGA | Octapharma/Pfizer | 90283/J1559 | $130.09 | $128.01 |
| PRIVIGEN | CSL Behring | J1459 | $90.02 | $88.57 |
| **IVIG/SCIG** | | | | |
| GAMMAGARD LIQUID | Takeda | J1569 | $93.33 | $91.84 |
| GAMMAKED | Kedrion | J1561 | $93.02 | $91.53 |
| GAMUNEX-C | Grifols | J1561 | $93.02 | $91.53 |
| **SCIG** | | | | |
| CUTAQUIG | Octapharma | 90284/J3590 | $135.26 | $133.09 |
| CUVITRU | Takeda | J1555 | $147.48 | $145.11 |
| HIZENTRA | CSL Behring | J1559 | $117.85 | $115.96 |
| HYQVIA | Takeda | J1575 | $154.19 | $151.72 |
| XEMBIFY | Grifols | J1558 | $132.96 | $130.83 |
*ASP + 4.3% applies only after April 1, 2022, after which a 1% reduction in payment will apply until July 1, 2022, unless further Congressional action is taken to extend the moratorium.
Calculate your reimbursement online at www.FFEnterprises.com.
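The payment math behind the table can be sketched in a few lines. This is an illustrative sketch only, not FFF's online calculator; the per-billing-unit ASP is back-computed from the table's ASP + 6% column.

```python
# Sketch of the Medicare Part B immune globulin payment formula used in the
# table above: ASP + 6% before sequestration, ASP + 4.3% after sequestration.
def payment_per_unit(asp: float, sequestered: bool = False) -> float:
    """Payment for one billing unit given the average sales price (ASP)."""
    markup = 1.043 if sequestered else 1.06
    return round(asp * markup, 2)

# Example: BIVIGAM's listed rates imply an ASP of about $133.00 per unit.
asp_bivigam = round(140.98 / 1.06, 2)        # back out ASP from ASP + 6%
print(asp_bivigam)                           # 133.0
print(payment_per_unit(asp_bivigam))         # 140.98, matches the table
print(payment_per_unit(asp_bivigam, True))   # 138.72, matches the table
```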
## Immune Globulin Reference Table
| Product | Manufacturer | Indication | Size |
|--------------------------|--------------------|------------|--------------------|
| **IVIG** | | | |
| ASCENIV LIQUID, 10% | ADMA Biologics | PI | 5 g |
| BIVIGAM LIQUID, 10% | ADMA Biologics | PI | 5 g, 10 g |
| FLEBOGAMMA 5% DIF Liquid | Grifols | PI | 0.5 g, 2.5 g, 5 g, 10 g, 20 g |
| FLEBOGAMMA 10% DIF Liquid | Grifols | PI, ITP | 5 g, 10 g, 20 g |
| GAMMAGARD S/D Lyophilized, 5% (Low IgA) | Takeda | PI, ITP, B-cell CLL, KD | 2.5 g, 5 g, 10 g |
| GAMMAPLEX Liquid, 5% | BPL | PI, ITP | 2.5 g, 5 g, 10 g, 20 g |
| GAMMAPLEX Liquid, 10% | BPL | PI, ITP | 5 g, 10 g, 20 g |
| OCTAGAM Liquid, 5% | Octapharma | PI | 1 g, 2.5 g, 5 g, 10 g, 25 g |
| OCTAGAM Liquid, 10% | Octapharma | ITP, DM | 2 g, 5 g, 10 g, 20 g, 30 g |
| PANZYGA Liquid, 10% | Octapharma/Pfizer | PI, ITP, CIDP | 2.5 g, 5 g, 10 g, 20 g, 30 g |
| PRIVIGEN Liquid, 10% | CSL Behring | PI, ITP, CIDP | 5 g, 10 g, 20 g, 40 g |
| **IVIG/SCIG** | | | |
| GAMMAGARD Liquid, 10% | Takeda | IVIG: PI, MMN; SCIG: PI | 1 g, 2.5 g, 5 g, 10 g, 20 g, 30 g |
| GAMMAKED Liquid, 10% | Kedrion | IVIG: PI, ITP, CIDP; SCIG: PI | 1 g, 2.5 g, 5 g, 10 g, 20 g |
| GAMUNEX-C Liquid, 10% | Grifols | IVIG: PI, ITP, CIDP; SCIG: PI | 1 g, 2.5 g, 5 g, 10 g, 20 g, 40 g |
| **SCIG** | | | |
| CUTAQUIG Liquid, 16.5% | Octapharma | PI | 1 g, 1.65 g, 2 g, 3.3 g, 4 g, 8 g |
| CUVITRU Liquid, 20% | Takeda | PI | 1 g, 2 g, 4 g, 8 g, 10 g |
| HIZENTRA Liquid, 20% | CSL Behring | PI, CIDP | 1 g, 2 g, 4 g, 10 g; 1 g PFS, 2 g PFS, 4 g PFS |
| HYQVIA Liquid, 10% | Takeda | PI | 2.5 g, 5 g, 10 g, 20 g, 30 g |
| XEMBIFY Liquid, 20% | Grifols | PI | 1 g, 2 g, 4 g, 10 g |
CIDP Chronic inflammatory demyelinating polyneuropathy
CLL Chronic lymphocytic leukemia
DM Dermatomyositis
ITP Immune thrombocytopenic purpura
KD Kawasaki disease
MMN Multifocal motor neuropathy
PI Primary immune deficiency disease
PFS Prefilled syringes
## 2021-2022 Influenza Vaccine
**Administration Codes:** G0008 (Medicare plans)
**Diagnosis Code:** V04.81
| Product | Manufacturer | Presentation | Age Group | Code |
|--------------------------|--------------------|------------------|--------------------|--------------|
| **Quadrivalent** | | | | |
| AFLURIA (IIV4) | SEQIRUS | 0.5 mL PFS 10-BX | 3 years and older | 90686 |
| AFLURIA (IIV4) | SEQIRUS | 5 mL MDV | 6 months and older | 90688 |
| AFLURIA PEDIATRIC (IIV4)| SEQIRUS | 0.25 mL PFS 10-BX| 6-35 months | 90685/90687 |
| FLUAD (aIIV4) | SEQIRUS | 0.5 mL PFS 10-BX | 65 years and older | 90694/90654 |
| FLUARIX (IIV4) | GSK | 0.5 mL PFS 10-BX | 6 months and older | 90686 |
| FLUBLOK (RIV4) | SANOFI PASTEUR | 0.5 mL PFS 10-BX | 18 years and older | 90682 |
| FLUCELVAX (ccIIV4) | SEQIRUS | 0.5 mL PFS 10-BX | 2 years and older | 90674 |
| FLUCELVAX (ccIIV4) | SEQIRUS | 5 mL MDV | 2 years and older | 90756* |
| FLULAVAL (IIV4) | GSK | 0.5 mL PFS 10-BX | 6 months and older | 90686 |
| FLUMIST (LAIV4) | ASTRAZENECA | 0.2 mL nasal spray 10-BX | 2-49 years | 90672 |
| FLUZONE (IIV4) | SANOFI PASTEUR | 0.5 mL PFS 10-BX | 6 months and older | 90686 |
| FLUZONE (IIV4) | SANOFI PASTEUR | 0.5 mL SDV 10-BX | 6 months and older | 90686 |
| FLUZONE (IIV4) | SANOFI PASTEUR | 5 mL MDV | 6 months and older | 90688 |
| FLUZONE HIGH-DOSE (IIV4)| SANOFI PASTEUR | 0.7 mL PFS 10-BX | 65 years and older | 90662 |
aIIV4 Egg-based adjuvanted quadrivalent inactivated injectable
ccIIV4 Cell culture-based quadrivalent inactivated injectable
IIV4 Egg-based quadrivalent inactivated injectable
LAIV4 Egg-based live attenuated quadrivalent nasal spray
RIV4 Recombinant quadrivalent injectable
*Providers should check with their respective payers to verify which code they recognize for Flucelvax Quadrivalent 5 mL MDV product reimbursement this season.
WHEN MINUTES MATTER
VIPc CABINET INSTALLATION
REAL-TIME 24/7 MONITORING & INVENTORY VISIBILITY
RFID TAG TECHNOLOGY
PRODUCT SAFETY
THE PRODUCTS YOU NEED ARE ALREADY THERE
With FFF’s Verified Inventory Program (VIPc)® Forward-Deployed Inventory (FDI) System, you have the VERIFIED Advantage: less time managing inventory, plenty of time to care for your patients, and more time focusing on patient care.
- Eliminate carrying costs
- Real-time 24/7 monitoring
- Inventory PAR levels
- Increased visibility
- Accurate reporting
- Temperature alerting
- High-touch service
- Integrated split-billing
- Emergency ordering 24/7
CONVENIENT WEB PORTAL & AUTOMATED ALERTS
CUSTOMIZED INVENTORY
NATIONWIDE COVERAGE
(800) 843-7477 | FFFenterprises.com
PREPARE TO #FightFlu NEXT SEASON
With MyFluVaccine.com easy online ordering
Don’t give flu a fighting chance to be the co-respiratory disease we confront next season. Together, let’s #fightflu. Visit MyFluVaccine.com and place your order today to help minimize the impact of the 2021-22 flu season.
YOU PICK THE DELIVERY DATE(S) – Conveniently secure YOUR best delivery date(s)
YOU PICK THE QUANTITY – Choose from a broad portfolio of products
WE SAFELY DELIVER – Count on FFF’s secure supply channel with Guaranteed Channel Integrity™
YOU PICK THE PREFERRED DATE • YOU PICK THE QUANTITY • WE DELIVER
MyFluVaccine.com | 800-843-7477 | FFFenterprises.com
© 2022 FFF Enterprises, Inc. All Rights Reserved. FLB58 SP 010521
Together Chaffee County
2020 COMPREHENSIVE PLAN
PREPARED BY CUSHING TERRELL
Photo: Scott Peterson
ACKNOWLEDGMENTS
The 2020 Chaffee County Comprehensive Plan update would not have been possible without the support, input, guidance and consideration of the Chaffee County Board of County Commissioners and Planning Commission.
**Board of County Commissioners | December 2020**
Keith Baker
Rusty Granzella
Greg Felt
**Planning Commission | December 2020**
Anderson Horne, Chair
Bruce Cogan
Bill Baker
Marjo Curgus
JoAnne Allen
David Kelly
Hank Held
**Chaffee County Planning Staff**
Dan Swallow, Director of Development Services
Jon Roorda, Planning Manager
Christie Barton, Planner
The 2020 Comprehensive Plan update was primarily driven by the voices of hundreds of participants representing the people of Chaffee County. Special thanks goes to those who participated in the process by contributing thoughts and ideas online or in-person at any or all of the workshops, open houses, drop-ins or other events.
**2020 Chaffee County Comprehensive Plan**
Prepared by:
**Cushing Terrell**
Cushing Terrell [cushingterrell.com](http://cushingterrell.com)
303 E 17th Ave, Suite 105 | Denver, CO 80203
Citizen Organizations and Community Groups 2019 to 2020
The following citizen-led advisory organizations and community groups dedicated time to ensuring the Comprehensive Plan Vision, Goals, Strategies and Implementation Actions aligned with specific community ideals.
- Arkansas Headwaters Recreation Area
- Central Colorado Conservancy
- Chaffee County Community Foundation
- Chaffee County Early Childcare Council
- Chaffee County Equity Coalition
- Chaffee County Visitor’s Bureau
- Chaffee Green
- Chaffee Shuttle
- Clean Energy Chaffee
- Economic Development Corporation
- Envision Chaffee
- Full Circle Restorative Justice
- GARNA
- Guidestone
- Heritage Area Advisory Board
- Housing Policy Advisory Committee
- Public Health Department
- Small Business Development Center
- Sustainable Salida
- Transportation Advisory Board
- Upper Arkansas Conservation District
- Watershed Partnership
# Contents
1. **INTRODUCTION** .......................................................... 12
2. **VISION, GOALS & STRATEGIES** ........................................... 20
3. **FUTURE LAND USE PLAN** .................................................. 41
- 3.1 **PURPOSE & OVERVIEW** .................................................. 42
- 3.2 **FUTURE LAND USE DESIGNATIONS** ................................. 46
- 3.3 **LAND USE POLICY RECOMMENDATIONS** ......................... 63
- 3.4 **BUENA VISTA SUB AREA PLAN** ....................................... 67
- 3.5 **MID-VALLEY/NATHROP SUB AREA PLAN** .......................... 74
- 3.6 **SALIDA SUB AREA PLAN** ............................................... 81
- 3.7 **PONCHA SPRINGS/MAYSVILLE SUB AREA PLAN** ............... 88
- 3.8 **SCENARIO ALTERNATIVES** ............................................. 95
4. **IMPLEMENTATION PLAN** .................................................... 107
5. **DATA & TRENDS** .............................................................. 123
6. **APPENDICES** ................................................................. 184
- A.1 **COMMUNITY ENGAGEMENT SUMMARY** ............................ 185
- A.2 **COMMUNITY PROJECT RECOMMENDATIONS** .................. 199
- A.3 **MODEL CONSERVATION SUBDIVISION GUIDELINES** ........... 203
- A.4 **DECISION MAKING GUIDANCE** ....................................... 209
- A.5 **BACKGROUND** ............................................................. 212
Adopted December 2020
Maps, Figures & Illustrations
Future Conditions Maps:
County-wide Future Land Use .......................... 62
Sub Area Plans .............................................. 66
Buena Vista Sub Area:
Existing Land Use ..................................... 70
Future Land Use ....................................... 71
Pattern of Development ............................. 72
Existing & Proposed Infrastructure ............ 73
Mid-Valley Sub Area:
Existing Land Use ..................................... 77
Future Land Use ....................................... 78
Pattern of Development ............................. 79
Existing & Proposed Infrastructure ............ 80
Salida Sub Area:
Existing Land Use ..................................... 84
Future Land Use ....................................... 85
Pattern of Development ............................. 86
Existing & Proposed Infrastructure ............ 87
Poncha Springs Sub Area:
Existing Land Use ..................................... 91
Future Land Use ....................................... 92
Pattern of Development ............................. 93
Existing & Proposed Infrastructure ............ 94
Growth Scenarios ......................................... 95
Existing Conditions Maps:
Recreation & Trails ....................................... 130
Agriculture .................................................. 133
Housing Density .......................................... 136
Traffic ....................................................... 141
Wildland Urban Interface ............................... 148
Sensitive Lands ........................................... 149
Wetlands & Riparian Areas ............................. 151
Jobs & Employment ...................................... 155
Existing Land Use ........................................ 161
Points of Interest ......................................... 176
Land Ownership .......................................... 179
Natural Resources ........................................ 180
Elevation .................................................... 182
Well Density, 2003 ...................................... 184
Nitrate Concentration in River Basin, 2001 .... 182
Figures
Future Land Use Table ................................... 60
Population Projections ................................... 124
Housing Inventory ........................................ 135
Housing Growth Projections ............................ 159
Cost of Growth ............................................ 163
Community Engagement Input Summary .......... 189
The Together Chaffee County Comprehensive Plan is the first long-range vision and plan for Chaffee County in over 20 years. Despite some similarities in community perspectives toward growth from the late 1990s to today, unprecedented changes in the way people live and work required an updated strategic approach and growth framework. The previous comprehensive plan from the year 2000 could not have addressed 2020’s challenges, which include a wave of incoming residents, recreation pressures, a changing climate and a growing home-based employment sector. Some of these stem from the 2020 global public health crisis, but all undoubtedly affect the way Chaffee County moves, socializes and communicates. Although the prior plan was substantially outdated, recent efforts including Envision Chaffee County, its Recreation in Balance program, a next-generation wildfire protection plan, and the creation of the Chaffee County Office of Housing create the foundation for this new plan. The aim is to create a healthier, more connected and economically successful county that sustains the lifestyles and landscapes that attract so many people to Chaffee County.
The following summarizes the new tools and ideas in the 2020 Comprehensive Plan that will help plan for a successful future.
**DEEP-ROOTED COMMUNITY ENGAGEMENT**
The Together Chaffee County Comprehensive Plan is the culmination of a nearly two-year engagement process anchored by several cycles of community input. Chaffee County has an attentive and engaged community with local expertise in many topics relevant to comprehensive planning. From neighborhood meetings at the Maysville fire hall, to large open houses at the Steamplant, to drop-ins with local hotel owners, the process included widespread geographic outreach to capture the voice of county residents. At farmers market booths, visitors talked about their first impressions of the County, while large employers were engaged to discuss the challenges they face with changing housing and market conditions. After 13 in-person events, numerous online forums and worksessions, and several surveys, over 3,500 data points were synthesized to develop the plan's foundation.
This plan could not have been created without the attention, effort and time commitment of the Chaffee County Planning Commission and Board of County Commissioners during joint worksessions; in the closing six months, the plan was vetted, fine-tuned and reorganized to ensure efficacy and simplicity.
A NEW VISION FOR THE FUTURE
Much of the community and stakeholder input gathered in the first 10 months was distilled into the Vision, Goals and Strategies: aspirational statements that emphasize how the County will achieve desired outcomes.
Revised Goals and Strategies: The Vision, Goals and Strategies found in Section 2 were revised to reflect contemporary aspirations of current and future Chaffee County residents and visitors. These are organized into seven planning themes: 1) People and Community Services, which addresses healthy and equal access to services and fiscal responsibility; 2) County Character, to protect the County’s culture, heritage, recreational activities and lifestyle; 3) Affordable and Inclusive Housing, to help supply a mix of housing types and affordability levels; 4) Resilient and Sustainable Environment, to strive for a sustainable future in the face of hazards and to foster stewardship of the natural environment; 5) Connectivity, Mobility and Access, which seeks to make all modes of transport easier and safer across the County; 6) Jobs and Economy, which focuses on building upon the County’s unique qualities to foster economic growth and a diverse workforce; and 7) Growth and Land Use, which focuses directly on changing land use patterns.
Scenario Planning: Scenarios for future growth were illustrated in the 2000 Comprehensive Plan and were updated to foster community-wide discussion on growth management direction and goals. Future growth scenarios were created using maps, models and illustrative concepts to embody three different outcomes. The community was re-engaged to determine a desired scenario, and advantageous elements of several scenarios were assembled to inform maps and policies.
GROWTH MANAGEMENT
Providing guidance for the location, design, pattern and character of new development in unincorporated Chaffee County emerged as a key motivation for introducing new growth management policies. Throughout the process, citizens voiced the idea of “keeping the town, town and the country, country,” meaning new growth should occur in existing communities where infrastructure and services already exist. Rather than limit or shift development, several new tools are offered to promote smarter growth in places where a vision had previously not been provided. This is accomplished largely through the Future Land Use Plan and Sub Area Plans, and is also implemented with expanded guidance for conservation subdivision usage and design concepts.
Future Land Use Plan and Maps: Found in Section 3 of this plan, the Future Land Use Plan and associated maps provide a vision for growth in the unincorporated county by illustrating the character and intensity of uses and activities that occur on lands outside municipalities. These maps offer guidance for decisions on land use applications and are derived from input from landowners, stakeholders, municipal representatives and the general public, as well as an infrastructure analysis that projected future development patterns based on proximity to amenities and utilities. Future land use planning has not been a significant tool for leaders and decision makers in the past; this plan is intended to be adapted and amended to ensure flexibility and compatibility with future conditions and to form a basis for revisions to the Land Use Code.
Sub Area Planning: Because of the geographic range and unique qualities of Chaffee County’s rural communities, Sub Area Plans are introduced to capture locally specific needs and to continually coordinate land use plans between the County and local jurisdictions. The four sub areas include: 1) Buena Vista; 2) Mid-Valley/Nathrop; 3) Salida; and 4) Poncha Springs/Maysville. Sub Area policies and projects focus on lands adjacent to municipal boundaries (with the exception of Mid-Valley/Nathrop, where there are no incorporated communities) and were developed with members of town or city administration, planning departments and local elected officials.
**Focus Areas for Future Planning:** Sub areas consider land use and planning policy at a more granular level, but the Future Land Use Plan goes further to recognize Focus Areas where additional master planning is beneficial due to particular challenges or opportunities on specific sites. Focus Areas generally would benefit from additional study and outreach to overcome infrastructure issues, physical constraints or to gather support for housing or economic development projects that further community goals.
**AN APPROACH TO MITIGATE REGIONAL HOUSING CHALLENGES**
The Together Chaffee County Comprehensive Plan builds on the energy and momentum generated when the County’s Office of Housing was created, and it offers land use-oriented strategies to help all Chaffee County residents gain access to safe and attainable housing. The Office of Housing is a county-wide department that takes a multi-jurisdictional approach to housing issues, and the Comprehensive Plan offers a similarly regional set of goals and strategies to promote the supply of affordable units and to avoid displacement of residents. This plan honors the housing production targets from the Chaffee County Housing Needs Assessment. It offers a future land use pattern that acknowledges locations where housing growth may occur as existing communities expand, while steering rural residential development toward subdivisions that use conservation subdivision design concepts and do not further strain the County’s ability to provide services.
**Conservation Subdivision Design:** Designing subdivisions with conservation in mind is an approach to laying out new residential plats so that a significant percentage of buildable area is permanently protected to create interconnected networks of conservation lands or open space. This is seen as a tool to implement a county-wide vision for preserving agricultural and working landscapes in rural areas while allowing for well-designed residential growth. The Future Land Use Plan discusses ideal locations for such designs. While Chaffee County’s current Land Use Code allows for and promotes conservation subdivisions, this Comprehensive Plan offers a model for design concepts that is intended to be implemented through a formal ordinance.
**A PLAN FOR ACTION**
Through several worksessions with local community groups, advisory committees, volunteers and local leaders, the Comprehensive Plan concludes with an ambitious Implementation Plan of actions and steps to enact community goals. The Comprehensive Plan cannot be implemented by one jurisdiction, agency or group; therefore, the implementation actions are intended to be fostered by any and all pertinent groups, not solely by the County itself. Intended as a constantly evolving set of targets, the matrix outlines the priority and level of resources needed for each action, as a guidepost for future leaders to review, revise and complete over time.
1. Introduction
Photo: Scott Peterson
Anyone who lives in or visits Chaffee County loves the community’s exceptional beauty, strong western heritage, quality recreation and friendly people. But Chaffee County is at an important crossroads, where the decisions it makes today will have long-lasting implications. Over the past 20 years, tourism has become the County’s largest industry, bringing an influx of people here to visit and to live.
However, the steady increase in population creates both opportunity and new challenges. The County has seen an economic expansion, yet its benefits are not being experienced by everyone. Residential and commercial development is placing significant strain on the region’s land availability, housing affordability, and infrastructure capacity. And, finally, the community’s social fabric is stressed as long-standing and newer values systems come into conflict.
The County realized the 2000 Comprehensive Plan was woefully inadequate when it came to providing guidance on how to address these, and many more, challenges. In 2019, it undertook the comprehensive plan update to develop a clear vision and a roadmap that would enable the County to anticipate and direct community change, rather than simply react to it.
The Chaffee County Comprehensive Plan 2020 is the guiding policy document, supported by Colorado law, that the County government can use to inform decision making about land use, development, infrastructure and investments. It works in conjunction with other more detailed subordinate County plans and policies, such as the Hazard Mitigation Plan and the Chaffee County Land Use Code. Over the next few years, the County will work to align and integrate its plans for consistency with the vision articulated in this new comprehensive plan.
WHAT IS A COMPREHENSIVE PLAN?
The 2020 Comprehensive Plan is a guidebook for how Chaffee’s citizens, workforce, visitors and developers can help the County grow into a sustainable and connected community. The maps, charts, descriptions and illustrations that make up this comprehensive plan communicate Chaffee County’s 2020 values to policymakers and leaders, and will serve as a guide as future generations make the County their own.
Under Colorado law, the 2020 Comprehensive Plan is an advisory document and is not regulatory in nature. Its role is to provide the priorities needed to guide updates to Chaffee County’s policies and development regulations. For the plan to become reality, the County will need to create and adopt a variety of regulatory tools (e.g., zoning provisions) and non-regulatory tools (e.g., cluster development incentives).
This plan is designed to be used in conjunction with other County and Town planning documents. More specific subordinate plans, such as the Community Wildfire Protection Plan, should be consulted alongside this plan when making decisions related to wildfire planning.
WHY IS UPDATING THE COMPREHENSIVE PLAN IMPORTANT?
The Together Chaffee Comprehensive Plan has been updated to communicate an aspirational vision for the future, to build consensus on growth outcomes, and to recommend policies and projects that protect working landscapes while preserving private property rights. As new planning issues emerge, new tools and approaches can be developed and incorporated into this land use plan, building upon recent successes to tackle the County’s most complex challenges.
A community is like an ecosystem; to thrive, all of the parts must work together to sustain the County’s future. This comprehensive plan is meant to help policymakers implement programs that protect the health, safety and welfare of our community and preserve our community character for future generations. To do this in a community that is 83% public land, we must have a comprehensive plan organized around stewardship of our ecological resources. This plan extends that ethic to the private lands in Chaffee County, where most of the community lives and works.
To reach its vision, Chaffee County is not starting from scratch; this comprehensive plan instead incorporates how past decisions were made and provides new visionary thoughts and ideas to strive for better outcomes.
To use this plan, future decision-making should be based on analyzing alternatives and understanding trade-offs. The public outreach process reached far and wide to understand the public vision, including from previously unreached or marginalized communities. One outcome of the extensive outreach, for example, was a vision to “keep town, town and rural, rural”: focusing density in towns to avoid sprawl in the rural parts of the County. The Plan demonstrates these ideas in the Future Land Use Maps. This plan maintains the use of strategies and maps to convey community values on growth, and creates a venue for cross-jurisdictional coordination in the Sub Area Plans.
Photo: Scott Peterson
LAND USE PLANNING PRIORITIES
While the Implementation Plan contains many prioritized actions spanning multiple sectors, all of them important, the comprehensive plan, and in particular the Future Land Use Plan, establishes a vision for the physical development of Chaffee County that is explicitly a priority and responsibility of the Chaffee County government. This vision reflects two interrelated and interdependent principles: first, a desire to focus high-quality growth near existing communities; and second, to ensure development in unincorporated areas meets high-quality design and use standards. Because the current land use code and planning policies do not enable this vision, the County needs to prioritize:
1. A land use code update that includes a new zoning code that allows for a mix of development types at different densities.
2. Municipal – County collaboration and the necessary infrastructure analyses to enable more development in and around the municipalities at greater densities.
3. Completion of the resource assessments, maps, and plans to guide the development of appropriate subdivision and zoning standards that will protect priority sensitive areas, open lands and community assets.
4. Completion of a Regional Multimodal Transportation Plan that addresses the need for better multimodal connectivity.
5. Identification of specific sites and zoning districts that prioritize the development of affordable and attainable housing to support the economy and local workforce.
6. Identification of funding sources to support development of multimodal infrastructure (roads, pedestrian and bicycle trails, airport, rail, telecommunication, freight and transit) and affordable housing.
The Implementation Plan includes the specific actions associated with the above priorities. The County Planning Commission will work to steward implementation and communicate progress annually.
TOGETHER CHAFFEE: MOVING FROM PLANNING TO ACTION
Over the past few years, our community has repeatedly demonstrated we can work together to solve big challenges. We have seen the value and benefits of efforts like Envision Chaffee County and the Housing Policy Advisory Committee in creating community-driven solutions. Together Chaffee integrates this collaborative approach into the comprehensive plan. The Together Chaffee planning effort has built bridges across the community’s municipalities, organizations, agencies, and community groups to create an integrated vision and action plan that reflects the tremendous work already underway in our community to actualize our shared community values. While the County will take a leadership role in stewarding plan implementation, success will be based on how we continue to work together to get things done. Together Chaffee will continue to engage the community, communicate progress, and celebrate successes as the plan is implemented.
HOW TO USE THE COMPREHENSIVE PLAN
This comprehensive plan communicates the community’s vision and priorities for the future that can inform community action and development. For the nonprofit sector, this plan demonstrates community support for policy and program development. For the development community and property owners, this plan provides greater clarity about the desired character of neighborhoods and type of future development. For local government, it reflects community desires for resource investments and should inform decision making for consistency with the Plan’s Vision and Goals.
The plan includes:
1. **The Vision, Goals, and Strategies:** Establish the aspirational and strategic direction for the County on where it wants to go in the future for seven thematic areas that address issues related to the community, the economy, the natural environment, and the built environment.
2. **The Future Land Use Plan and four Sub Area Plans:** The physical manifestation of the plan, this guides the long-term physical change in the County. This includes the County-wide Future Land Use framework, description of Future Land Uses applied to the unincorporated areas of the County, and Future Land Use Maps.
3. **The Implementation Plan:** A matrix of the specific action steps necessary to accomplish the Goals and Strategies. It includes the institutional lead and priority level, with an emphasis on the next 1-3 years.
4. **Community Trends Summary:** A summary of data reflecting current conditions and drivers of change in the County. This trend data is useful to understand the extent and scope of issues, and it provides supporting data for grant makers and policy decisions.
5. **Appendices:** The public process and input summary and other valuable information can be found here as a reference.
A comprehensive plan should be a living document. Should conditions in the County change significantly, the Comprehensive Plan can be amended to reflect more current realities or be updated completely.
Guiding Principles and Values
GUIDING PRINCIPLES
The development of this Comprehensive Plan was grounded in the following principles that should also be used as the County moves into action with implementation.
STRENGTHEN CIVIC CULTURE: Support, improve, and strengthen public and civic processes.
FOSTER REGIONAL COLLABORATION: Foster a climate of intergovernmental and interagency collaboration and achieve continuity and systemic integrity by synchronizing and coordinating plans across intergovernmental, interagency, interdepartmental, and private volunteer organizations and non-governmental organizations to lower costs, increase efficiencies, and maximize results.
RESPECT PROPERTY RIGHTS: Balance regulatory, voluntary, and incentive-based approaches to achieve community goals while respecting all parties.
SUPPORT INNOVATION & CREATIVITY: Approach community development and decision-making with openness and creativity, and seek win-win solutions.
PROMOTE HEALTH & EQUITY: Include health considerations in policy making across different sectors that influence health (transportation, agriculture, land use, housing, public safety, and education) to address policy and structural factors.
ACT HOLISTICALLY: Given all things are connected, consider decisions across scales from the human, site, and landscape scale and across the comprehensive plan’s themes. Consider both the connections among things and the potential for unintended consequences when making decisions.
MANAGE GROWTH: Direct growth to compatible areas where growth makes sense using analysis to consider economic, physical, social, and ecological constraints. Plan and develop sustainably, with the goal of protecting our natural resources, wildlife and viewsheds.
BUILD PROSPERITY: Make future-oriented decisions that contribute to the health of our economic, social, and environmental values.
COMMUNITY VALUES
A community’s values reflect its culture and who it is at its best. These values capture the intangible qualities of Chaffee County that residents love and that make it a great place to live. While the Comprehensive Plan goals and strategies are intended to support, protect, and enhance these values, community leaders and residents should also use them to guide future action.
- **We are a caring community.** We value our strong sense of community where we know our neighbors, are welcoming and friendly, and support each other when in need.
- **We are good stewards.** We value the natural beauty in our backyard provided by the Arkansas River, surrounding mountains, and valley. We care for the environment that supports our communities and natural systems.
- **We are civically minded and engaged.** We value the willingness of people to work together and collaborate towards the betterment of our community.
- **We are authentic.** We value our rich heritage, distinctive communities and creative residents that contribute to our unique local character that sets us apart from other places.
- **We are a healthy and active community.** We prioritize the physical and mental wellbeing of all community members.
- **We are a great community for children and families.** We value being an ideal place to raise a family where we support young people and parents by providing services, affordable housing, a strong educational foundation and safe environment.
- **We are a safe community.** We value living in a County with close-knit communities where familiarity fosters trust. We respect our diversity, and we strive to ensure everyone has access to economic opportunities that offer financial security.
2. Vision, Goals & Strategies
Goals are broad statements that push forward the aspirational elements of the plan as stated in the public vision. Goals - along with associated strategies - are statements representing recommendations of what needs to be done, how, and by whom.
Action Steps, which are found in the Implementation Plan on page 107, are a translation from strategies to strategic moves that individuals, organizations or agencies can perform to carry out the plan’s goals.
The Plan Elements are a way to organize key elements of the Comprehensive Plan using seven value-based themes. Themes were derived from the topics the community stated were highly valuable in the input process. The Plan Elements provide a discussion on how Chaffee’s existing community characteristics will guide long term change.
Chaffee County manages growth and land use in a manner consistent with our community values, our recognition of the natural wonder in which we live, and in fairness and equity to the people who live here and respecting the importance of private property rights.
Chaffee County cultivates a vibrant and resilient year-round economy, valuing renewable energy and innovation, offering diverse employment opportunities and an affordable, thriving, healthy mountain community.
Chaffee County stewards its resources in a manner that enhances community resilience and the natural environment for future generations.
Chaffee County is a harmonious community where all people are valued, diversity is welcomed, all are respected and have opportunities to achieve their highest potential.
Residents and visitors celebrate Chaffee County’s access to the outdoors, unique landscapes and heritage which unites the community around a common connection to place.
Chaffee County residents have the opportunity to live in safe, stable, healthy and affordable homes.
All Chaffee County businesses, residents and visitors can move around easily and safely with access to a high-quality multi-modal transportation system.
VISION STATEMENT:
Chaffee County is a harmonious community where all people are valued, diversity is welcomed, all are respected and have opportunities to achieve their highest potential.
Buena Vista Parade
Photo: Scott Peterson
GOAL 1.1: Identify as a generous and inclusive community embodying our community values (see page 18) where all people feel valued and are motivated to be engaged in making Chaffee County a more amazing place to live.
STRATEGIES:
A. Actively promote a harmonious community (County) through community-building, including events and gathering spaces and places with the support of the Community Foundation and non-governmental groups.
B. Identify and support diverse communities and activities that support these communities with the support of the Community Foundation and non-governmental organizations.
C. Create a County-wide kindness awareness project and support existing initiatives that promote community understanding.
GOAL 1.2: Recognize the contribution the creative arts make to our community character and economy and enhance and expand creative arts activities and opportunities.
STRATEGIES:
D. Support the creative arts organizations through promotion, awareness, branding, events and transportation.
GOAL 1.3: Promote and support physical and mental health for all residents.
STRATEGIES:
E. Invest in facilities and infrastructure that aid in physical activity and promote wellness.
F. Support individual and community health behaviors that reduce the disease burden in the community.
G. Encourage practices and activities to achieve healthy food access for all residents.
GOAL 1.4: Develop an aging-friendly community.
STRATEGIES:
H. Develop an age in place action plan.
I. Provide a wide range of housing types accessible to people at all stages of life.
J. Ensure adequate access to health care for later stages of life.
GOAL 1.5: Develop a child, youth and family-friendly community.
STRATEGIES:
K. Provide an adequate supply of affordable childcare and after school programs that support working families.
L. Invest in comprehensive services for the health and wellbeing of children, youth and families that address disparities and inequities.
M. Ensure County leadership offers support to all schools for programs considered in their master plans that seek County-level support.
**GOAL 1.6:** Create a culture where all residents feel motivated and empowered to be involved in making Chaffee County an amazing place to live.
**STRATEGIES:**
N. Recognize and support the wealth of volunteerism present in Chaffee County.
O. Enhance access to information about County activities that support civic participation.
**GOAL 1.7:** Ensure emergency services are adequately funded and staffed to maintain high quality service in the County as the population increases.
**STRATEGIES:**
P. Review the impact fees for fire and other services for new development to determine if they are sufficient to meet future level-of-service needs.
Q. Strengthen relationships with proven diversion programs that can reduce impacts to the criminal justice system as the population increases.
**GOAL 1.8:** Support vulnerable residents with appropriate services.
**STRATEGIES:**
R. Collaborate with local coalitions and service providers to deliver services for Chaffee County’s most vulnerable residents including people who are experiencing homelessness, domestic violence, or food insecurity.
**GOAL 1.9:** Maintain a fiscally-stable and effective County government.
**STRATEGIES:**
S. Ensure growth does not adversely affect fiscal health.
T. Coordinate with municipal, state and other governments to determine potential for cost savings through inter-jurisdictional agreements to provide new services.
U. Communicate routinely with the public through the most convenient and accessible channels during and in anticipation of public health events or human services concerns.
V. Complete a Chaffee County Strategic Plan.
County Character
VISION STATEMENT:
Residents and visitors celebrate Chaffee County’s access to the outdoors, unique landscapes and heritage which unites the community around a shared connection to place.
Photo: Scott Peterson
GOAL 2.1: Responsibly manage the County’s recreation opportunities across all jurisdictions to: A) Maintain quality of life and outdoor recreation experiences, B) Sustain recreation-based economic contribution, and C) Maintain healthy forests, waters, wildlife and working lands.
STRATEGIES:
A. Adopt the Recreation in Balance (RIB) program and its efforts to maintain healthy forests, waters and wildlife in balance with outdoor recreation.
B. Collaborate with municipalities and public agencies on recreation assessments, planning, and project development to meet regional recreational needs.
C. Work with local non-profit organizations and public partner agencies to identify, maintain, and provide equitable recreational access to public lands.
GOAL 2.2: Preserve the County’s historic and cultural resources and working landscapes recognizing how heritage contributes to economic development and broadens awareness of local culture and history.
STRATEGIES:
D. Support the Chaffee County Heritage Area Advisory Board, Historic St. Elmo & Chalk Creek Canyon, and the Buena Vista Historic Preservation Commission.
E. Identify and preserve historically significant structures and sites and create a database, whether geographic or tabular, that is accessible to all. Reference the Colorado Historic Register and the National Register.
F. Enhance historic preservation education, outreach, and awareness.
G. Protect Chaffee County’s iconic viewsheds and Scenic Byways designation.
H. Keep working lands working by supporting agricultural economics and helping agricultural operations manage increasing conflicts and costs associated with increasing population and visitation/recreation use.
GOAL 2.3: Update the regulatory framework to support quality of life goals.
Affordable & Inclusive Housing
VISION STATEMENT:
Chaffee County residents have the opportunity to live in safe, stable, healthy and affordable homes.
New housing in Buena Vista
GOAL 3.1: Support the development of affordable housing within all jurisdictions in Chaffee County.
STRATEGIES:
A. Collaborate regionally to address the affordable housing issues faced by Chaffee County.
B. Adopt policies that promote the development and preservation of housing types across the housing spectrum that serve residents across a range of demographics and incomes.
C. Develop a dedicated housing fund to be used to support the Chaffee Housing Authority’s affordable housing programs, affordable housing development projects and affordable housing preservation.
D. Work with the Chaffee Housing Authority to meet the regional affordable housing production goals as established in the Chaffee County Housing Needs Assessment.
E. Ensure residents have access to safe and livable homes.
Connectivity, Mobility & Access
VISION STATEMENT:
All Chaffee County businesses, residents and visitors can move around easily and safely with access to a high-quality multimodal transportation system.
Photo by Scott Peterson
GOAL 4.1: Design and fund a multimodal transportation network that provides options of travel and serves existing population and activity centers as its priority.
STRATEGIES:
A. Create the Chaffee County Multimodal Transportation Plan (CCMTP) to address elements of Safety, Mobility, Economic Vitality, Maintenance, and Strategic Policies.
B. Adopt the CCMTP as an addendum to the Chaffee County Comp Plan 2020 and use it to guide consistency in decision making across other plans.
C. Update the 2007 Chaffee County Trails Master Plan as part of the CCMTP to improve trail connectivity and town trail connections.
D. Develop a Sustainable Funding Plan to offset County investment needed for the multimodal system.
GOAL 4.2: Enhance safety in Chaffee County by reducing fatalities and serious injuries in all modes.
STRATEGIES:
E. Work collaboratively to support the Colorado Department of Transportation goal of zero transportation-related deaths.
GOAL 4.3: Improve mobility and access across Chaffee County.
STRATEGIES:
F. Prioritize multimodal transportation opportunities and choices in all project developments by integrating bicycle, pedestrian, transit and telework connectivity.
G. Maintain and improve access to public lands and recreational assets.
GOAL 4.4: Improve economic vitality through strategic transportation investments.
STRATEGIES:
H. Provide appropriate infrastructure to support economic development.
GOAL 4.5: Improve multimodal infrastructure maintenance across Chaffee County.
STRATEGIES:
I. Maintain the quality and functionality of the existing multimodal system now and in the future.
J. Provide a safe and efficient County-wide multimodal network that minimizes maintenance costs and supports achieving the community’s vision and goals.
K. Maintain and improve public transportation in Chaffee County through coordinated planning and investments.
**GOAL 4.6:** Ensure development regulations support transportation goals for safety, mobility, economic vitality, maintaining the transportation system and strategic policies.
**STRATEGIES:**
L. Update the development code to achieve transportation goals.
Resilient & Sustainable Environment
VISION STATEMENT:
Chaffee County stewards its resources in a manner that enhances community resilience and the natural environment for future generations.
Photo: Scott Peterson
GOAL 5.1: Cut the risk severe wildfire poses to human life, water, critical infrastructure, homes, wildlife habitat and recreation assets/economy in half by 2030, while also enhancing habitat and forest health.
STRATEGIES:
A. Implement the Chaffee County Community Wildfire Protection Plan led by the Envision Forest Health Council - an existing collaborative connecting 21 agencies and organizations.
GOAL 5.2: Maintain and improve community preparedness, emergency response and recovery capacity to ensure public health and safety.
STRATEGIES:
B. Enhance government hazard mitigation and response planning.
C. Plan and support vulnerable populations to be resilient to hazards, epidemics, extreme weather and climate-related events, including drought and flooding concerns.
D. Promote disaster-resilient community infrastructure including housing, utilities, transportation systems and healthcare resources.
E. Build and grow in a manner that is resilient to wildfire and other natural hazards.
GOAL 5.3: Identify, promote, and expand the use of energy efficient practices and renewable energy resources.
STRATEGIES:
F. The County will collaborate with Federal, State and municipal governments, voluntary organizations, energy experts, businesses and others to establish and commence implementation of energy conservation goals.
G. Lead by example in County operations to conserve energy, use renewable energy sources in an effective manner, and take steps to reduce greenhouse gas emissions.
GOAL 5.4: Protect public health and the environment through proper waste management.
STRATEGIES:
H. Minimize land filling through an integrated waste management system in accordance with the state hierarchy of waste reduction, reuse, recycling, composting, and waste to energy.
GOAL 5.5: Manage water resources to ensure a resilient and sustainable water supply that can support people and ecosystems.
STRATEGIES:
I. Ensure new development has a sufficient and sustainable water supply.
J. Promote water conservation and efficiency in new development, redevelopment and County facilities.
K. Support an integrated water resource management approach and collaborate across sectors, jurisdictions and agencies on implementation.
L. Support policies and programs that protect agricultural water rights and agriculture’s contribution to hydrological functions.
GOAL 5.6: Protect the water quality and quantity in Chaffee County’s rivers and stream systems.
STRATEGIES:
M. Develop a river corridor overlay map to guide and inform development impacting the river corridor.
N. Mitigate impacts of older, less reliable septic systems.
O. Protect vegetation that enhances infiltration of precipitation for groundwater recharge and erosion prevention.
P. Ensure erosion and stormwater standards adequately protect water quality and functions of groundwater recharge.
Q. Adopt development standards, based on best management practices, that reliably protect the Arkansas River and its tributaries.
R. Support increased collaboration and alignment of efforts to protect the watershed between different agencies, organizations, and governments.
S. Support payments for ecosystem services programs for landowners to enhance floodplain function and to protect river corridors.
T. Increase knowledge and understanding about the health of the Arkansas River and its watershed.
GOAL 5.7: Protect critical wildlife habitat, connectivity and migration corridors cross-jurisdictionally.
STRATEGIES:
U. Develop mapping and geospatial modeling of the most impactful wildlife habitat and migration corridors in Chaffee through the Envision Recreation Plan and the Chaffee Recreation Suitability Map.
V. Use the wildlife map and model data to develop and then implement wildlife protection standards for all new development.
W. For highest priority habitat areas, adopt a conservation subdivision overlay with appropriate development standards.
X. Work with private landowners, non-profits, and government agencies to invest in the protection and restoration of priority wildlife habitats.
GOAL 5.8: Become a model County for sustainability.
STRATEGIES:
Y. Develop a County-wide Sustainability Plan.
Z. Conduct education and outreach to increase understanding, build consensus on the needs of our environment, and support citizens’ and government’s actions in achieving sustainability.
AA. Create the capacity in the County to oversee implementation.
VISION STATEMENT:
Chaffee County cultivates a vibrant and resilient year-round economy, valuing renewable energy and innovation, offering diverse employment opportunities and an affordable, thriving, healthy mountain community.
Downtown Buena Vista
Photo by Scott Peterson
GOAL 6.1: Develop, grow, and maintain existing local businesses.
STRATEGIES:
A. Provide business support for a vibrant agricultural economic sector through a variety of financial incentives and programs.
B. Provide educational opportunities for entrepreneurs and small business owners.
GOAL 6.2: Attract new and innovative industries that align with community values and provide long-term employment for a diverse workforce.
STRATEGIES:
C. Recruit and support industry sectors including those focused on: agriculture, food production, recreation, sustainability, and light manufacturing.
D. Invest in quality-of-life services in Chaffee County that make it an attractive location for entrepreneurs and location-neutral employees to relocate.
E. Leverage the need for Forest Treatment and Community Wildfire Protection Plan to create jobs.
GOAL 6.3: Manage the tourism sector to ensure the values and resources that make Chaffee County a great and desirable destination are protected and economic benefits are year-round.
STRATEGIES:
F. Work closely with the land management agencies, Recreation in Balance program, the Chaffee County Heritage Board, the Chaffee County Visitor’s Bureau, the Chaffee County Community Foundation, the Chaffee County Economic Development Corporation and others to develop strategies for year-round economic resiliency.
GOAL 6.4: Provide appropriate infrastructure to support economic development.
STRATEGIES:
G. Provide high-quality infrastructure including mobility, telecommunications, public utilities and workforce housing. Support improvements to broadband coverage, connectivity and diversity.
GOAL 6.5: Support the needs and advance opportunities for the County’s workforce.
STRATEGIES:
H. Support and expand post-secondary educational institutions including community colleges, vocational and trade school programming to promote a locally-grown workforce.
GOAL 6.6: Ensure the land use code supports economic development goals.
STRATEGIES:
I. Update the code to reduce and/or eliminate unnecessary regulations.
VISION STATEMENT:
Chaffee County will manage growth and land use in a manner consistent with our community values, our recognition of the natural wonder in which we live, and in fairness and equity to the people who live here and respecting the importance of private property rights.
St. Elmo
Photo: Scott Peterson
GOAL 7.1: Create distinct communities by focusing growth around existing towns in a way that enhances community character and maximizes public investments in infrastructure and services.
STRATEGIES:
A. Strengthen cooperation and intergovernmental agreements to execute annexation, utility connections, and public services.
B. Use strategies to incentivize and direct growth to existing towns, such as density bonuses.
C. Encourage flexible and creative development in the unincorporated center of Nathrop and encourage more pedestrian-oriented development in Johnson Village and infill in the municipalities.
E. Identify market-based incentives to adopt into the land use code that support planned development and achieve desired resource protection.
F. Develop an overlay zone and review process to protect environmentally important lands.
G. Consider land use policies that continue to preserve high value scenic and historic resources in the County and support implementation of the Collegiate Peaks Scenic & Historic Byway’s Corridor Management Plan.
GOAL 7.3: Ensure adequate and well-planned infrastructure meets the needs of current and future residents and businesses, including telecommunications, water and wells, wastewater and sustainable energy.
STRATEGIES:
H. Encourage development in areas that have the ability to provide infrastructure.
I. Ensure the planning, funding and construction of infrastructure projects are coordinated across government agencies and jurisdictions and allow public input.
GOAL 7.4: Create a regulatory framework that supports the new vision of this plan.
STRATEGIES:
J. Update the County Land Use Code.
3. Future Land Use Plan
How and where our community grows has a major influence on how community members get around, the character of the County and its communities, natural resources, and sense of community. The Future Land Use Plan responds to the need to accommodate a growing population and the subsequent demand for additional housing, services, and employment in a manner consistent with the Comprehensive Plan Vision.
The purpose of the Future Land Use Plan is to establish a framework that illustrates the desired locations and patterns for this growth. While development potential is constrained by the fact the county is 83% publicly-owned land, analysis indicates there is approximately 25,908 acres of physically developable land which is currently undeveloped. There is another 12,740 acres with residences, but on lots over 40 acres in size, which could be further subdivided. While only advisory, the Future Land Use Plan is intended to be a roadmap to guide updates to the land use code and to nudge new development in a direction consistent with the community’s vision for orderly, efficient, and sustainable growth in appropriate locations of the unincorporated county and in or near existing communities.
**ACCOMMODATING PROJECTED GROWTH**
While projecting increases in population is inexact, it offers an opportunity to consider how the county might grow. The table on the following page uses U.S. Census estimates to quantify how much growth the County needs to allocate in the Future Land Use Plan. These estimates indicate that over the next 10 years, the county’s total population will increase by around 4,000 people.
When considering that a portion of the population will live outside the municipalities, the implications for land use planning are clear. At the base-level forecast, about 600 new housing units may be necessary in the unincorporated county; this number increases to over 1,700 under the more rapid population growth estimate. If the county were to continue developing as it does now, at 1 dwelling unit per 2 acres, accommodating this growth in new subdivisions would occupy about 1,200 acres. However, illustrating the value of better-planned growth, allocating this development into higher-density subdivisions around the towns (8 dwelling units per acre) reduces the land needed to accommodate this new growth to about 75 acres. The Future Land Use Plan aims to intensify development potential around existing communities and maximize investments in infrastructure in order to reduce pressure on the County’s rural areas, agricultural lands, and sensitive ecological assets.

County Growth Projection 2020-2030

| | 2020 Estimate | 2030 Projection (Base) | 2030 Projection (Lower Bound) | 2030 Projection (Upper Bound) | Net Growth (At Base Projection) | Net Growth (At High Projection) | Est. Land Required |
|----------------------|---------------|------------------------|-------------------------------|-------------------------------|---------------------------------|---------------------------------|-------------------|
| **Population** | 20,799 | 24,899 | 21,210 | 28,588 | 4,100 | 7,789 | – |
| **Housing Units** | 5,980 | 6,575 | 5,456 | 7,695 | 596 | 1,715 | 1,260 to 3,677 Ac |
| **Employment** | 8,400 | 9,463 | 8,532 | 10,394 | 1,063 | 1,994 | 70 to 150 Ac |

*2020 figures are interpolated from 2000 to 2018 US Census estimates. Includes populations within municipalities
**Only includes units in the unincorporated county (US Census)
Population growth also results in new economic activity. Chaffee County could add about 1,000 jobs in the next decade. While most of the employment is centered in the municipalities, especially in and around Salida, the Future Land Use Plan recommends that the land use code allow for more mixed-use development and redevelopment of existing commercial areas to support a growing and diversifying economy.
ENHANCING REGIONAL COORDINATION
Keeping “the country, country and the Towns, towns” has been a long-articulated ideal in Chaffee County. However, achieving this desired growth pattern has been challenged by a lack of adequate infrastructure and of effective intergovernmental coordination on development between the County and the municipalities. Fortunately, at the time this plan was developed, intergovernmental agreements (IGAs) were either adopted or in process between the County and Salida, Buena Vista, and Poncha Springs, demonstrating a significant shift toward regional planning.
The capacity of road infrastructure, water rights availability, and water and sanitation services will be the primary factors driving how growth occurs in and around the municipalities. The Future Land Use Plan was developed in close coordination with the municipalities and aligned with their more recently developed growth and infrastructure plans. In particular, it reflects the Three Mile Area Plans for each of the municipalities. These State-required plans address future growth within an area of influence extending up to 3 miles beyond the town boundaries, including how each municipality plans to offer utility service extensions, areas targeted for annexation, and areas of desired growth. This Future Land Use Plan reflects an alignment among all the jurisdictions on these preferred areas of growth.
FUTURE LAND USE MAPS
The Future Land Use Plan includes county-wide Future Land Use Maps (FLUMs) and FLUMs for each of the four Sub Areas. The FLUMs represent desired future land use character and are intended to inform the development of future zoning districts and code revisions. The maps reflect areas of the County where existing conditions are unlikely to change and areas where infill and expansion of suburban and urban growth will occur based on:
- Proximity to a municipal planned growth area
- Presence and capacity of existing water and sanitation infrastructure
- Location on or near a major road or transportation network
- Character and intensity of surrounding or proposed development
- Location relative to natural resources constraints including natural hazards, high fire risk zones, wildlife habitat and conservation areas, and scenic resources
- Real estate market conditions.
Where transportation plans exist, the Future Land Use Maps include desired or planned road and transportation infrastructure extensions. Roads are indicated by dashed red lines and pedestrian routes by dashed green lines.
FUTURE LAND USE DESIGNATIONS
The Future Land Use Designations articulate the general character of land use areas identified on the maps. These designations are not regulatory, nor do they reflect specific parcel boundaries. Instead, they are intended to guide an update to the zoning code and inform development review for consistency with the kind of growth desired for an area. The Future Land Use Plan contains both descriptions of the future land use and character as well as a two-page future land use table which offers a summary for use in reviewing the maps.
Future Economic Activity Nodes, Focus Areas and Opportunity Sites
The Sub Area Plans also identify areas that would benefit from more detailed site- or neighborhood-scale master plans. These Nodes, Focus Areas and Opportunity Sites were identified because they offer potential for redevelopment or infill where change is likely, have unique characteristics or attributes that are underutilized or present development challenges, contain an existing small community or neighborhood, and/or may require physical or infrastructure improvements. These areas would benefit from additional planning processes, such as a master plan, corridor plan, or neighborhood plan, to offer greater guidance for future development. They are listed in the table on the following page.
The Opportunity Sites differ from the nodes and focus areas in that they are unique parcels in the unincorporated county that offer a high likelihood of development that could further the goals of the Comprehensive Plan. These sites were identified using GIS-based criteria that selected parcels within:
- 400 feet of existing infrastructure including water, sanitation, roads/sidewalks, etc.
- 2,000 feet of existing amenities or activity nodes (schools, parks, grocery stores, healthcare institutions, cultural institutions) and representing a roughly 20-minute walking radius
- 500 feet of a trail network
And also considered:
- Physical site constraints
- Recent nearby development
These sites were then evaluated based on property ownership and included in the infrastructure analysis for feasibility of extending key infrastructure, particularly public water and sanitation lines to service the parcel.
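At its core, the screening described above is a set of proximity tests: a parcel qualifies only if it falls within every distance threshold. The sketch below is a rough illustration of that logic, not the County's actual GIS workflow; the coordinates, feature lists, and function names are hypothetical, and a real analysis would run against parcel polygons in a projected coordinate system:

```python
from math import hypot

# Hypothetical planar coordinates in feet. A real analysis would use
# GIS layers (water, sanitation, roads, amenities, trails), not points.
FEATURES = {
    "infrastructure": [(0, 0), (5000, 1200)],  # water/sanitation/road access
    "amenities":      [(800, 300)],            # schools, parks, groceries, etc.
    "trails":         [(200, -100)],           # trail network access points
}

# Distance thresholds (feet) taken from the criteria listed above
THRESHOLDS = {"infrastructure": 400, "amenities": 2000, "trails": 500}

def dist(a, b):
    """Straight-line distance between two planar points."""
    return hypot(a[0] - b[0], a[1] - b[1])

def is_opportunity_site(parcel_xy):
    """A parcel passes only if every layer has a feature within its threshold."""
    return all(
        any(dist(parcel_xy, feature) <= limit for feature in FEATURES[layer])
        for layer, limit in THRESHOLDS.items()
    )
```

A complete screening would then apply the secondary considerations (physical site constraints, recent nearby development) and the ownership and infrastructure-extension analysis described above, so this conveys only the distance-criteria step.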
**SUB AREA PLANS**
Given that Chaffee County spans such a large and diverse geography, the Comprehensive Plan includes four Sub Area Plans for the areas of:
1. Buena Vista
2. Mid Arkansas Valley/Nathrop
3. Salida
4. Poncha Springs/Maysville
Each Sub Area Plan was developed based on analysis of the municipalities’ long-range plans and Three Mile Area Plans, Intergovernmental Agreements (IGAs), existing and platted subdivisions, vacant land, current studies, population and housing projections, and public input. Each Plan includes a Vision, a Future Land Use Map (FLUM), and additional land use planning considerations for development of specific sites and neighborhoods.
These Sub Area Plans are intended as a guide for development that accounts for the unique qualities of each area and supports greater collaboration and coordination between the County and Municipalities to achieve desired growth consistent with the community’s vision.
3.2 | Future Land Use Designations
Public Lands
FUTURE LAND USE & CHARACTER
Objectives:
• Support multi-jurisdictional interests on the conservation, protection and responsible use of public lands.
• Balance recreational activities with high-value scenic and ecological resources, including sensitive wildlife habitat, riparian areas, scenic byways and/or the wildland urban interface.
• Partner with the multiple jurisdictions involved to continue to provide public access.
• Create a future land use framework for public lands that protects backcountry areas with privately-owned mining claims from hazards.
Location Description: Municipal, County, State or Federally-controlled public lands and open space.
General Character: No major change in land use envisioned. Land uses are generally related to supporting recreation and outdoor tourism-based activities, the administration of public lands and access to outdoor resources. No residential development is permitted on public lands except on mining claims under specific circumstances. Mining claims wholly surrounded by public lands are in private ownership and have challenges with access, sanitation, and natural hazards, particularly wildfire. The Mining Claim designation accompanies this designation, as well as a recommended future Backcountry Overlay to protect landowners under specific circumstances.
Envisioned Density Range: Residential uses are limited in Public Lands but may exist in mining claims under specific circumstances.
Open Space
FUTURE LAND USE & CHARACTER
Objectives:
• Provide for additional open space in and around existing or proposed development to provide opportunities for active and passive recreational space.
Location Description: Permanently protected open space held as a land dedication, parkland or conservation easement.
General Character: Intended to preserve open space, particularly in subdivisions, on critical conservation areas or along existing or proposed trail networks to expand County recreation and connectivity goals. Features both passive and active open space uses including trails, parks, open areas, public recreation facilities and conservation areas.
Envisioned Density Range: Not applicable.
Mining Claim
FUTURE LAND USE & CHARACTER
Objectives:
• Provide for recreational opportunities on small privately-owned parcels currently existing in or surrounded by public lands.
• Allow for seasonal limited-access shelters that do not contribute to wildland fire risk.
• Prevent insensitive buildings or uses from proliferating in backcountry environments.
Location Description: Tracts of privately-owned land formerly used as mining claims as designated by the County Assessor.
General Character: Limited and seasonal backcountry cabins or structures with approved facilities and constructed to standards for minimal impact or no degradation of natural resources, particularly waterways, and avoidance of natural hazards.
Envisioned Density Range: One shelter per individual parcel.
Rural/Agricultural
FUTURE LAND USE & CHARACTER
Objectives:
• Preserve the character of rural areas in Chaffee County.
• Support the agricultural economy.
• Create opportunities for advancing agricultural and sustainable farming practices.
• Promote the preservation of open land through conservation subdivisions.
Location Description: Areas with large acreages of mostly undeveloped land distant from urban settings.
General Character: Intended for areas with very low-density residential development, farmsteads, and/or agricultural activities. Uses include farming and ranching, value-added agricultural manufacturing, low impact renewable energy and other ancillary uses that support the agricultural economy, conservation of open lands and habitat and the rural character of the County. Includes areas with natural hazards such as steep slopes and floodplains. Residential uses are generally on large parcels, housing related to agricultural operations and not in platted subdivisions. Large developments in the future should reflect open space design practices such as clustering or conservation subdivisions.
Envisioned Density Range: One Dwelling Unit per 2 Acres, cluster subdivisions or larger lot residential.
Rural Residential
FUTURE LAND USE & CHARACTER
Objectives:
• Provide opportunities for large-lot, low density residential uses in appropriate locations.
• Manage appropriate land uses at the interface between residential and agricultural uses.
• Promote the preservation of open space through conservation subdivisions.
Location Description: Areas of the unincorporated county with single family residential houses in platted subdivisions with lots found farther from municipalities and not intended to receive utility services.
General Character: Generally designated for established residential subdivisions where no change in use is intended. Accommodates low density residential uses with activities that are consistent with the current Rural Zone District in the Chaffee County land use code including small scale farming and agricultural activities. May support parks, trails, and open space facilities.
Envisioned Density Range: Typical lot sizes are currently upwards of 1 Dwelling Unit per 20 Acres. Promotion of similar densities to adjacent subdivisions to maintain existing character is encouraged, creating a range of development types around the County. Future developments at 1 Dwelling Unit per 2 Acres should reflect open space design practices such as clustering or conservation subdivisions.
Suburban Residential
FUTURE LAND USE & CHARACTER
Objectives:
• Maintain orderly and consistent growth of existing neighborhoods in the County’s municipalities or established unincorporated communities.
• Provide a mix of housing to serve a range of people and incomes, including the workforce, the elderly, and families of various income levels.
• Adhere to an orderly and efficient vision that encourages denser growth near existing communities and anticipates municipal annexation through intergovernmental agreements.
Location Description: Areas in or near existing communities of the unincorporated county or municipalities or along major transportation corridors that are included in Intergovernmental Agreements and where municipal services may be extended in the future.
General Character: Generally low density residential uses consistent with the RES Zone District in the current Chaffee County land use code. Intended to support affordable and attainable housing goals with a diversity of housing types to serve a range of income levels. May accommodate locally serving commercial uses that offer amenities to the neighborhood such as small scale retail and home occupied businesses.
Envisioned Density Range: 1 to 4 Dwelling Units per Acre; Higher densities are contingent on incentives for projects that incorporate affordable housing into the development, and projects that are located near amenities or activity centers. Building heights range from 1 to 3 stories depending on location.
Mixed Residential
FUTURE LAND USE & CHARACTER
Objectives:
• Provide a mix of housing to serve a range of people and incomes, including the workforce, the elderly, and families of various income levels.
• Locate near major transportation corridors and near activities and services such as schools, grocery stores and employment nodes.
• Target orderly and efficient growth patterns that encourage denser development near existing communities. Development here should not strain the capacity of municipal roads, water, sanitation or other utilities/services.
• Anticipate municipal annexation and servicing through intergovernmental agreements.
• Promote a jobs/housing land use balance by creating opportunities for housing near places of work.
Location Description: Areas desired for annexation adjacent to existing incorporated or unincorporated communities or along major transportation corridors where higher densities may be appropriate and near existing water and/or sanitation utilities.
General Character: Envisioned to accommodate a mix of housing types and residential densities, affordable housing, institutional uses such as schools or public facilities, and appropriately scaled commercial uses for walkable amenities. Encourages non-traditional subdivision design with smaller lots and conservation subdivisions to promote more compact development.
Envisioned Density Range: 4 to 16 Dwelling Units per Acre; Higher densities are contingent on incentives for projects that incorporate affordable housing into the development, and projects that are located near amenities or activity centers. Building heights range from 1 to 3 stories depending on location.
Rural Commercial
FUTURE LAND USE & CHARACTER
Objectives:
• Promote job growth and economic development by expanding opportunities for businesses that generate employment and diversify Chaffee County’s economic base.
• Accommodate existing commercial in unincorporated Chaffee County.
Location Description: Existing and future commercial areas
General Character: A diverse mix of locally and regionally serving commercial activities that are integral to the region’s economy such as business parks, flex offices, utilities, sand and gravel storage, personal services, etc. Residential uses may be permitted in mixed use developments with density bonuses to incentivize the inclusion of affordable units. Development standards require buffers between residential and commercial uses.
Envisioned Density Range: 2 to 16 Units per Acre when incorporated into commercial development. Building height between 1 and 3 stories.
Rural Mixed-Use
FUTURE LAND USE & CHARACTER
Objectives:
• Promote a mix of uses best served by transportation corridors and near existing agricultural activities.
• Provide opportunities for low-density, low-impact commercial uses that accommodate rural businesses.
• Promote job growth and economic development by preserving sites for highway-oriented manufacturing and freight transportation.
Location Description:
General Character: Similar in types of uses as that of the Mixed Use Corridor, but at lower densities and intensities to reflect the surrounding rural character. Envisioned to promote economic development consistent with the existing recreation, agricultural, and tourism based economy such as recreation companies, campgrounds, and hospitality activities.
Envisioned Density Range: 1 Unit per 2 Acres to 2 Units per Acre with higher densities contingent upon bonuses given to incorporate affordable housing into development. Building heights between 1 and 3 stories.
Mixed-Use Corridor
FUTURE LAND USE & CHARACTER
Objectives:
• Promote a mix of uses best served by major transportation corridors and near existing higher intensity activities.
• Adhere to a targeted, orderly and efficient vision for growth that encourages denser buildings near existing communities with existing services and infrastructure.
• Target mixed-use and denser uses to areas near existing communities.
• Promote job growth and economic development by locating sites for transportation-dependent offices, services and goods delivery near major highway corridors.
Location Description: Areas located near existing municipalities, along or near major transportation corridors, included in intergovernmental agreements for the municipalities’ three-mile planning areas, and within existing or future utility service areas.
General Character: Targeted growth designation that envisions a mix of uses and higher densities to promote growth near existing communities and around gateways. Includes locally and regionally serving commercial activities that support the local economy, such as transportation-dependent retail, personal services, offices and goods delivery. Well connected to schools and other activity nodes with roads, trails, pathways and sidewalks. Development should be consistent with municipal plans and designed to municipal standards to ensure compatibility of development for potential annexation.
Envisioned Density Range: 2 to 16 Dwelling Units per Acre; Higher densities are contingent on bonuses given to projects that incorporate affordable housing. Building heights are between 1 and 3 stories.
Light Industrial
FUTURE LAND USE & CHARACTER
Objectives:
- Support a sustainable economy centered on clean energy, agriculture, and manufacturing using renewable resources.
- Bolster existing businesses and manufacturing uses that provide long-term jobs.
- Support business that may be large in scale and character by locating them next to similar uses and away from residential neighborhoods.
- Harbor opportunities for next-generation industries that align with the economic and environmental goals in this plan.
Location Description: Areas located farther away from existing and future residential areas.
General Character: Intended to accommodate existing and future manufacturing areas that support the economic development goals of this plan. Includes singular use and larger buildings with potentially noxious activities. Includes uses such as sand and gravel facilities, large scale energy generation, value added manufacturing with associated offices or limited onsite housing appropriate for operations.
Envisioned Density Range: Not applicable.
Future Economic Activity Node
FUTURE LAND USE & CHARACTER
Objectives:
- Promote innovation and pursue new industries as well as training or educational facilities (e.g. vocational or college institutions) that support and train the workforce for such businesses.
- Target ideal areas to apply for Colorado Enterprise Zone Tax Credits or other incentives to promote development that satisfies this plan’s economic goals and strategies.
- Target areas or sites within the Opportunity Zone (Census Tract 000401) to fast track eligible projects in the northern half of the County.
- Incentivize and recruit businesses that foster opportunities for workers in Chaffee County to further their economic growth.
- Pursue commercial activities that promote the County’s identity as a leader in sustainable development, environmental stewardship and balanced outdoor recreation.
Location Description: Johnson Village, south of Salida, adjacent to Poncha Springs and in Nathrop.
General Character: Focused area for future economic development implemented through strategic allocation of resources and projects that assist in the creation of new businesses or jobs. May include areas designated for renewable resource infrastructure, community gateways, tourism oriented commerce, new and emerging industries, or new planned mixed use development. Offers potential for annexation into municipalities.
Opportunity Sites
FUTURE LAND USE & CHARACTER
Opportunity sites are parcels that—due to certain criteria—offer a high likelihood for development that could further the goals and strategies of the Comprehensive Plan. Located in the unincorporated county, these sites illustrate locations where the County may focus resources to catalyze surrounding development or build specific land uses that have otherwise been lacking, particularly affordable housing. Although the County only proposes these as theoretical opportunities in 2020, these sites communicate to the development community and the public the locations in Chaffee County where partnerships may be formed to cultivate the kind of buildings the public would like to see. These Opportunity Sites (indicated with the symbol on the FLUM) are recommended for special design and development standards that would supplement standards identified in Article 2 of the Chaffee County Land Use Code and would apply to major new development that meets the intent of this plan.
Objectives:
- This designation is given to public or private parcels in the unincorporated county identified through an infrastructure analysis.
- Potential sites for supplying affordable/workforce housing since their locations meet many of the criteria for such projects.
- Determined through community input and GIS infrastructure analysis.
- Offer opportunities to meet this comprehensive plan’s goals and strategies through their future development.
- Facilitated by public-private partnership particularly in providing infrastructure.
| Land Use Designation | Density Range | Description |
|------------------------------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Public Lands | -- | Municipal, county, state, or federally controlled public lands and open space. |
| Open Space | -- | Permanently protected open space held as a land dedication, parkland, or conservation easement. |
| Mining Claim | -- | Mining claims as designated by the County Assessor with small seasonal structures. |
| Rural/Agriculture | 1 DU/2 Acres | Areas with large acreages of mostly undeveloped land distant from urban settings that contribute to agriculture and the rural quality of the County. Intended for very low density residential, farmsteads, agriculture, ancillary uses, and/or conservation subdivisions. |
| Rural Residential | 1 DU/2-20 Acres | Areas of the unincorporated county in existing platted subdivisions on parcels between 2 and 20 acres. Generally designated for established residential subdivisions where no change in use is intended. |
| Suburban Residential | 1-4 DU/Acre | Areas in or near existing communities or along major transportation corridors that are included in Intergovernmental Agreements and where municipal services may be extended in the future. Generally low density residential intended to support affordable and attainable housing goals with a diversity of housing types to serve a range of income levels. |
| Mixed Residential | 4-16 DU/Acre | Areas desired for annexation adjacent to existing incorporated or unincorporated communities or along major transportation corridors where higher densities may be appropriate and near existing water and/or sanitation utilities. Envisioned to accommodate a mix of housing types and residential densities, affordable housing, institutional uses such as schools or public facilities, and appropriately scaled commercial uses appropriate for walkable amenities. Encourages non-traditional subdivision design with smaller lots and conservation subdivisions to promote a more compact development form. |
| Rural Commercial | 2-16 DU/acre | A diverse mix of locally and regionally serving commercial activities that are integral to the region’s economy such as business parks, flex offices, utilities, sand and gravel storage, personal services, etc. Residential uses in mixed use development with density bonuses to incentivize the inclusion of affordable units. Development standards with buffers between residential and commercial uses. |
| Rural Mixed Use | 1 DU/2 Acres to 2 DU/Acre | Envisioned to promote economic development consistent with the existing recreation, agricultural, and tourism-based economy such as recreation companies, campgrounds, and hospitality activities. |
| Mixed Use Corridor | 2-16 DU/Acre | Areas located near existing municipalities, along or near major transportation corridors, included in intergovernmental agreements for the municipalities three-mile planning areas, and within existing or future utility service areas. Areas targeted for a mix of uses and higher densities to promote growth near existing communities and around gateways. |
| Light Industrial | -- | Areas located farther away from existing and future residential areas intended to accommodate existing and future manufacturing areas that support the economic development goals of this plan. |
**Other Land Use Character Areas**
| Character Area | Description |
|------------------------------|-------------------------------------------------------------------------------|
| Future Economic Activity Node | Areas in Johnson Village, south of Salida, adjacent to Poncha Springs and in Nathrop that offer opportunity for economic development. |
| Opportunity Sites | Determined through community input and GIS infrastructure analysis, these are sites that offer opportunities to meet this comprehensive plan’s goals and strategies through their future development. Opportunity sites also provide potential locations for affordable/workforce housing developments since their locations meet many of the criteria for such projects. |
CHAFFEE COUNTY:
Future Land Use Map
- Rural/Agricultural, No Change in Use
- Rural Commercial
- Mixed Use Rural
- Mixed Use Corridor
- Rural Residential
- Open Space
- Public Lands
- Urban/Rural Residential
- Industrial
- Mining Claim
- 1/2 Mile Area
- Municipal Boundary
- Conservation Area
Note: Areas with no color are designated as No Change in Use.
3.3 | Future Land Use Policies
COUNTY-WIDE FUTURE LAND USE POLICIES AND RECOMMENDATIONS
To achieve the Comprehensive Plan goals, additional land use tools should be integrated into the County’s land use regulations. The policies, described below, complement the goals, strategies and actions already in the plan by providing additional guidance on implementation.
Develop a Natural Resources Overlay
While the community placed high value on the County’s natural assets, the County’s land use code currently lacks adequate protection of both water bodies and wildlife habitat. The Comprehensive Plan’s Resilient & Sustainable Environment section includes specific actions for increasing protection through the adoption of new land use tools. A natural resource overlay is an effective tool for maintaining views, a healthy watershed, and protection of wildlife habitat patches and corridors.
Overlay zones are most effective when they integrate:
- Maps
- Site development standards for vegetation management, setbacks, resource avoidance, etc.
- Resource assessments as part of the development application materials
- Flexible site design
- Density limitations
- Development review procedures and criteria for fair evaluation of proposals.
The County should capitalize on the numerous natural and scenic resource studies being conducted by Envision Chaffee County, Colorado Parks & Wildlife, etc. as a foundation to initiate the development of appropriate natural resource overlays for wildlife habitat and scenic resources.
Adopt an Arkansas River & Tributaries Overlay
The Arkansas River and its tributaries are central to the County’s identity, recreation economy, and ecosystem. While the Browns Canyon section of the Arkansas River is protected and managed under the Browns Canyon Management Plan, development pressure along other reaches of the river is increasing at a rapid pace. An overlay would ensure the use, access, and natural features of the river and streams are preserved and protected. An overlay could:
- Create a regional shared vision for land uses along river corridors
- Include clear guidance on how to conduct riparian area and floodplain assessments
- Establish streamside setbacks, buffers, and limits on vegetation disturbance that offer greater protection to riparian habitat, water quality, and the visual quality of the river corridor
- Include design standards, site planning guidelines, and review procedures intended to preserve wetlands, floodplains, open space and river views.
Chaffee County is fortunate to be home to many professionals who are experts in this field and can offer guidance to the County on how to develop an appropriate scope of work to accomplish this task.
Adopt a Conservation Subdivision Policy
Chaffee County has both a statutory and cluster subdivision standard that supports more creative site design and promotes open area conservation. However, the Rural Cluster Subdivision Ordinance, Article 5, Sec. 5.3 of the land use code, needs to be revised to strengthen the use of flexible site design to better protect natural, scenic and heritage resources while allowing for economically feasible developments. The adoption of a Conservation Subdivision Ordinance is a central strategy in the Future Land Use Plan for the Agriculture/Rural, Rural Residential, and Suburban Residential designations.
To update the land use code, the County should:
1. Consider revising the LUC and application process to mandate a study of the property’s ecological attributes during the design process to:
- Identify and avoid critical habitat of endangered species
- Locate the Wildland Urban Interface
- Locate flood/fluvial or other hazard areas
- Locate productive agricultural lands
- Identify other sensitive or critical areas as defined by the Chaffee County Planning Commission, County Commissioners or other recommending bodies.
2. Enhance the pre-application process to ensure subdivision design meets standards earlier in the process.
3. Illustrate desired design standards in a “Conservation Development Design Guideline” book for distribution.
4. Increase or make flexible bonus lot allowances.
5. Incorporate a post-development review process to ensure standards have been met.
6. Provide accessible asset mapping for property in the County, available to the public.
SEE MODEL CONSERVATION SUBDIVISION DESIGN GUIDELINES IN THE APPENDIX (page 204).
Scenic Resource Overlay
The Chaffee County Heritage Area Advisory Board is in the process of updating the management plan for the Collegiate Peaks Scenic Byway, a major county asset and economic driver. The County should align its development standards with the results of this updated plan and adopt an overlay zone to provide clear and objective guidance for property owners and developers on how to conduct a site analysis for visual resources, mitigation standards, and flexible site design options.
Conduct Focus Area Master Planning
Each of the six focus areas has unique conditions that warrant further study. Developing a master plan for each area offers an opportunity to explore its specific opportunities and constraints. The master plans should explore:
- Physical and ecological constraints
- Need for improved public infrastructure, access, facilities or other physical improvements
- Opportunities for catalytic or visionary projects
- Collaborative partners
- Goals and an action plan for site enhancements.
Develop a Backcountry Overlay
Residential development in the high country, on inholdings surrounded by public lands, and on mining claims poses risks to public safety and environmental sustainability. The County should assess these areas to better understand access, natural hazards, and other resource constraints, informing the development of a Backcountry Overlay intended to ensure that uses in these areas do not burden public safety resources, are built to appropriate safety standards, and meet environmental protection standards.
Sub Area Plans
- Buena Vista
- Mid-Valley/Nathrop
- Salida
- Poncha Springs/Maysville
3.4 | Buena Vista Sub Area Plan
VISION & CHARACTER
• The Buena Vista Sub Area has experienced and embraced an evolving identity, ranging from mining to music.
• This historical charm and instant access to the Arkansas River will continue to attract curious and active travelers coming for a weekend or staying for a lifetime. Continued support of the area’s significant rafting community and culture will pave the way for a vibrant economic future.
• Leveraging the Town’s diverse and growing economy, new businesses reflect the community’s values of sustainability and healthy lifestyles; businesspeople are attracted to the Buena Vista area’s recreational economy and ease of access.
• As the main gateway to Chaffee County from the Front Range, this Sub Area benefits from steady vehicle traffic that supports highway-oriented businesses, balanced by alternative mobility options including an extensive trail system connecting rural areas to the town center.
• The Buena Vista Sub Area—through logical growth in places like Johnson Village—can promote community goals of supplying affordable housing by envisioning a mix of unit types at the right size and in the right places.
• The IGA between the Town and County supports coordination on growth management with agreement on allocation of new development and extension of services and infrastructure.
SUB AREA FUTURE LAND USE
Elements of the FLUM include:
- **Mixed Use Corridor:** Along the community gateways and corridors of Highways 24 and 285 to promote economic development. Intended to support existing commercial uses (e.g., rafting companies and auto shops) that are located along the highways as well as provide opportunity for a mix of new commercial businesses that are limited in size and scope. Major commercial activities should be directed to the municipalities.
- **Rural Mixed-Use:** Extends along the Highways 285 and 24 corridors north and south of Town where surrounding land uses are more rural in character.
- **Mixed Residential:** Aligns with lands within the Town of Buena Vista’s planning areas where future development is likely.
- No change is proposed for existing residential subdivisions with lots under 20 acres in the **Rural Residential** and **Rural/Agriculture** designations.
- No change is proposed for **Public Lands** except for a parcel identified for a land exchange in the Town’s Three Mile Plan.
- The average in-town residential lot is 2,500 square feet.
- Cottonwood Creek, which supplies water to portions of the Town, is over-appropriated, placing limits on development potential within that zone.
- The northern edge of Buena Vista has limitations for water infrastructure due to elevation challenges.
- The Buena Vista Sanitation District reaches 75% capacity during the summer.
- Highway 285 is seeing increasing traffic issues and congestion.
SUB AREA SPECIFIC POLICIES
While these policies may also be relevant throughout the County, the following were identified as regional priorities or have greater relevance in this Sub Area.
- Use the Buena Vista Water Resource Master Plan and the recent County Infrastructure Study to estimate the feasibility of water service expansion.
- Explore funding models (taxes, in lieu of fees, etc.) for the development and maintenance of regional open space and trails.
- Require new development to include inclusive open space and/or parks.
- Plan for development of public spaces and facilities for youth.
- Promote the production of local food and resources for small scale agriculture.
SUMMARY OF KEY ISSUES
- The subdivisions around Buena Vista still have some limited capacity for infill with 310 vacant lots.
- The Sub Area’s Plan is congruent with the Buena Vista Three Mile Plan which identifies 10 Areas of Desired Growth and Municipal Service Areas.
JOHNSON VILLAGE
FOCUS AREA, NODE, AND
OPPORTUNITY SITE
Johnson Village, with high traffic and visibility, offers opportunities for future growth, building on existing commercial development that caters to the recreation and tourism economy. The area along Highway 285 is envisioned as a future economic activity node intended to promote uses such as:
- Food and beverage
- Recreation businesses
- Retail commercial
- Affordable and workforce housing
- Travel centers and convenience stores
- Renewable energy
- Designated campgrounds
However, development in this area is not without challenges. A master plan for Johnson Village should be developed to address the following issues:
- Water and sewer infrastructure capacity will limit future commercial and residential growth. An assessment is needed of the availability of water rights, infrastructure capacity, and costs of service extensions.
- A study should be conducted to assess financial implications of incorporation, annexation, or a special district and how to finance infrastructure system expansion.
- Economic development in Johnson Village should not compete with the Town of Buena Vista. However, this area should be assessed for potential as a State Opportunity Zone.
- Creating a true “village” that balances highway orientation with residential quality of life, including pedestrian and bike connectivity, infrastructure such as sidewalks and safe crossings, signage, commercial design standards, and the area’s role as a County gateway.
BUENA VISTA: Existing Land Use
- Residential Vacant (platted, unbuilt)
- Residential Medium Density (multi-family)
- Residential Low Density (<2 acre lots)
- Residential Suburban (2-5 acre lots)
- Residential Rural (>5 acre lots)
- Mobile Home
- Commercial
- Commercial / Industrial Vacant
- Rural Commercial
- Institution
- Industrial
- Agriculture / Open Space
- Open Space (dedicated)
- Recreation
- Mining Claim
- Public Land
- City Limits
BUENA VISTA SUB AREA:
Future Land Use Map
- Rural/Agricultural: No Change in Use
- Rural/Agricultural: Future Economic Activity Node
- Mixed Use Corridor
- Mixed Use Residential
- Suburban Residential
- Rural Mixed Use Corridor
- Industrial
- Public Lands
- 3-Mile Area
- Municipal Boundary
- Conservation Area
- Three Mile Plan Areas of Desired Growth
Note: Areas with no color are designated as No Change in Use.
Adopted December, 2020
BUENA VISTA SUBAREA:
Pattern of Development, 2009-2019
- Building Permits 2009 to 2012
- Building Permits 2013 to 2016
- Building Permits 2017 to 2019
- 5-Mile Area
- Municipality Boundary
BUENA VISTA SUB-AREA:
Existing & Proposed Utilities
- Sanitary Line, Existing
- Water Line, Existing
- Sanitary Line, Conceptual
- Water Line, Conceptual
- Sanitary Connection
- Water Connection
- Lift Station, Conceptual
- Sensitive Area
- Municipal Boundary
Potential Growth/Service Expansion Areas
3.5 | Mid-Valley Sub Area Plan
VISION & CHARACTER
• The Mid-Valley and Nathrop areas exhibit the visual characteristics of the County that attract folks to this area: vast open spaces, a tight-knit small town culture, and neighbors that know each other, whether seasonal visitors or year-round residents.
• Here you’ll find working landscapes with a long legacy of farming and ranching, clean air and water, and abundant access to the surrounding recreational amenities.
• Old and new developments are designed to “fit” in the landscape rather than overwhelm it, preserving view corridors and clustering houses to leave as much of the valley bottom open as possible while still encouraging the right size and type of development.
SUB AREA FUTURE LAND USE
Elements of the Mid-Valley FLUM include:
- **Rural Mixed Use** along the Highway 285 corridor around Nathrop to support limited economic development intended for existing recreation-oriented companies.
- **Mixed Residential** and **Suburban Residential** focus on small scale, neighborhood oriented development replacing current commercial and residential zone districts.
- In **Rural Residential** and **Agricultural/Open Space**, generally no change is envisioned for existing development with lots under 20 acres.
SUB AREA SPECIFIC POLICIES
While these policies may also be relevant throughout the County, the following were identified as regional priorities or have greater relevance in this Sub Area.
- Review and update the County’s existing Dark Skies standards to comply with current best practices and increase efficacy.
- Zone the Nathrop Townsite as mixed-use commercial.
- Protect historic properties in Nathrop.
- Place emphasis on conservation subdivisions to retain rural character of Mid-Valley.
SUMMARY OF KEY ISSUES
- Growth pressures and agricultural conversions are of concern. The Sub Area has been the site of one of the County’s largest subdivisions in the past decade.
- Existing subdivisions have approximately 261 vacant lots that could potentially develop.
- Some subdivisions are served by community water and/or sewer systems. The Chateau Chaparral wastewater treatment facility does not meet water quality protection standards.
- Significantly increasing intensity of development in Nathrop would require very expensive infrastructure investments.
- The County landfill needs an updated master plan.
NATHROP FOCUS AREA AND NODE
The Nathrop townsite has seen minimal change in the past, despite the Highway 285 corridor being zoned commercial in the existing land use regulations. This planning effort identified a community desire for economic activities oriented towards the neighborhood rather than towards highway businesses. Given limited infrastructure, and the fact that the townsite is bisected by the highway, limiting pedestrian access, only modest future commercial uses along the highway are envisioned. Community-serving commercial is desired for areas adjacent to residential development.
The node is intended to promote:
- Neighborhood serving commercial food and beverage
- Existing recreation-oriented businesses
- Workforce housing.
A neighborhood master plan for this area should address:
- A fiscal analysis to assess the feasibility of infrastructure improvement and service extension
- The appropriate governance and/or management model to sustain adequate infrastructure
- Adequacy and opportunity for affordable/attainable housing development
- A Townsite Overlay to support desired economic or mixed-use development.
**CHALK CREEK/MT. PRINCETON RESORT NODE**
This area includes a mix of uses and intensities. Residents of the area value the openness and do not want to see continued subdivision of lands in the future Rural Residential and Rural/Agriculture designations into small-lot subdivisions. The Mt. Princeton Resort is a keystone of the County’s economic health, and its future development plans are accounted for in the Future Land Use Map. A Scenic Resource Overlay is envisioned for the Highway 285 Scenic Byway and stretches of Chalk Creek Road.
MID-VALLEY: Existing Land Use
- Residential Vacant (platted, unbuilt)
- Residential Low Density (<2 acre lots)
- Residential Suburban (2-5 acre lots)
- Residential Rural (>5 acre lots)
- Mobile Home
- Commercial
- Institution
- Agriculture / Open Space
- Open Space (dedicated)
- Recreation
- Water Rights
- Public Land
MID-VALLEY SUB AREA:
Future Land Use Map
- Forest/Agricultural: No Change in Use
- Mixed Use Rural
- Mixed Residential
- Public Lands
- Rural Residential
- Rural Mixed Use Corridor
- 3-Mile Area
- Municipal Boundary
- Conservation Areas
Note: Areas with no color are designated as No Change in Use.
MID-VALLEY SUBAREA
Pattern of Development, 2009-2019
- Building Permits 2009 to 2012
- Building Permits 2013 to 2016
- Building Permits 2017 to 2019
- 5-Mile Area
- Municipal Boundary
MID-VALLEY SUB-AREA:
Existing & Proposed Utilities
- Sanitary Line, Existing
- Water Line, Existing
- Sanitary Line, Conceptual
- Water Line, Conceptual
- Lift Station, Conceptual
- Sanitary Connection
- Water Well, Conceptual
- 3-Mile Area
- Municipal Boundary
Potential Growth/Service Expansion Areas
Sources: Esri, USGS, NOAA
3.6 | Salida Sub Area Plan
VISION & CHARACTER
• The Salida Sub Area, surrounding the largest municipality in the County and the County Seat, attracts and balances a higher percentage of growth and development than other areas.
• Its major thoroughfares efficiently and safely bring people to town in all forms of transport, including a well-connected system of trails that allow one to ride a bike from the Main Street to the mountaintop.
• With a diverse mix of places to live and work within and outside of town, coupled with a milder climate than found elsewhere, the Salida community thrives year-round.
• Cultural identity is strong in this Sub Area, from its historic ranches and rural art studios to energetic sporting events and a busy restaurant scene.
• Salida, being the County Seat, is at the center of Chaffee County’s administrative and governing activity.
SUB AREA FUTURE LAND USE
- **Mixed Use Corridor** is designated for the community gateways and transportation corridors along State Highways 50 and 285 and State Route 291. These areas are intended to support existing commercial uses, such as business and industrial parks, similar to the County’s current COM and IND zones.
- **Industrial** includes the area near and surrounding Harriet Alexander Field Airport to accommodate future aviation business and/or facilities.
- **Rural Commercial** includes the area surrounding the businesses and Smeltertown industrial park south of State Highway 291.
- Tiered residential development is accommodated in **Mixed Residential** in areas covered by the County – Salida Intergovernmental Agreement, particularly west of the municipal boundaries, where future development and annexation into the municipality is desired due to existing or future utility services. The next tier includes **Suburban Residential** uses which surround Mixed Residential uses.
- The average lot size in Salida is 0.73 acres, although parcels are being split to create more development potential in town.
- Water and sanitation systems in the City of Salida run at about 50% of capacity during peak summer months; Salida’s provision of sanitation services to Poncha Springs also affects this capacity.
- Affordable housing is less available in Salida than anywhere else in the County.
SUB AREA SPECIFIC POLICIES
While these policies may also be relevant throughout the County, the following were identified as regional priorities or have greater relevance in this Sub Area.
- Build on existing policy for stub roads to ensure all new developments include road and trail easements that allow for connection between existing and future subdivisions.
- Coordinate with the City of Salida to assess the need for additional policy balancing short term rentals and full-time residences in the Sub Area.
- Extend the community buffer concept between Salida and Poncha Springs to include low-density development on the Mesa for the neighborhood between CR 140 and CR 145 and between CR 120 and Airport Road.
- Collaborate with the City of Salida on the Highway 50 Corridor Plan.
- Collaborate with the City of Salida to accommodate new development in desired areas of growth with appropriate infrastructure based on the City’s service capacity.
- Develop a plan for managing increased access and traffic along CR 120.
SUMMARY OF KEY ISSUES
- Development constraints (floodplains, steep slopes, and public land) have pushed development in the unincorporated county onto the Mesa, along Highway 50 and the South Arkansas toward Poncha Springs, and out along Highway 291. The land west of Salida along the South Arkansas and between Salida and Poncha Springs along Highway 50 has conservation easements, including the Hutchinson Ranch, which acts as a community buffer between the two municipalities.
- Higher density development (in Mixed Use Corridor and Mixed Residential areas) should include more intensive joint review with the City of Salida.
- Collaborate with the City of Salida on trail connections between City and County jurisdictions.
**COUNTY ENCLAVE FOCUS AREA**
This roughly 30-acre focus area is wholly surrounded by the City of Salida and is located at the southern gateway to the City at the junction of US Highway 50 and State Route 291. Further master planning is warranted here due to the strategic location for commercial use and the adjacency to the Vandaveer Ranch master-planned development. A future master plan of this area should anticipate annexation into the municipality.
Considerations for future master planning in this Focus Area should include:
- Ensure future land use consistency with the City.
- Evaluate infrastructure provision to potential development in anticipation of annexation.
- Align new rights-of-way, including trail and pathway connections, as proposed in future transportation plans.
- Assess the feasibility of supplying affordable housing.
- Identify opportunities for gateway features.
- Plan for floodplain mitigation.
- Work with the City of Salida and CDOT regarding the Highway 291/50 intersection improvements and any necessary upgrades for the South Arkansas River bridge over Highway 50 and CR 105.
SALIDA: Existing Land Use
- Residential Vacant (platted, unbuilt)
- Residential Low Density (<2 acre lots)
- Residential Suburban (2-5 acre lots)
- Residential Rural (>5 acre lots)
- Mobile Home
- Commercial
- Commercial / Industrial Vacant
- Rural Commercial
- Institution
- Industrial
- Agriculture / Open Space
- Open Space (dedicated)
- Recreation
- Mining Claim
- Public Land
- City Limits
SALIDA SUB AREA:
Future Land Use Map
- Rural/Agricultural: No Change in Use
- Rural/Agricultural: Change in Use
- Mixed Use Rural
- Mixed Use Corridor
- Mixed Use Industrial
- Open Space
- Park/Open Space
- Rural Residential
- Suburban Residential
- Rural Mixed Use Corridor
- Rural Mixed Use Industrial
- Mining Claim
- 5-Mile Area
- Municipal Boundary
- TNC Area
- Conservation Area
- Future Downtown
- Future Business Opportunity Sites
- Future Road Connections
- Future Rail Connections
- Arkansas River
- South Arkansas River
- Salida Municipal Services Area
Note: Areas with no color are designated as No Change in Use.
SALIDA SUBAREA:
Pattern of Development, 2009-2019
- Building Permits 2009 to 2012
- Building Permits 2013 to 2016
- Building Permits 2017 to 2019
- 5-Mile Area
- Municipal Boundary
SALIDA SUB-AREA:
Existing & Proposed Utilities
- Sanitary Line, Existing
- Water Line, Existing
- Sanitary Line, Conceptual
- Water Line, Conceptual
- Sanitary Connection
- Water Connection
- 5 Mile Area
- Municipal Boundary
Potential Growth/Service Expansion Areas
3.7 | Poncha Springs Sub Area Plan
VISION & CHARACTER
• Uniquely nestled at the base of high mountain peaks, Poncha Springs and Maysville celebrate their small-town character and offer a variety of landscapes from open working ranches to rocky cliff faces.
• Poncha Springs continues to be a welcoming and physically diverse community that is a safe and affordable place to live and work as it grows and changes.
• Old and new developments provide a balance of commercial and residential growth to create walkable neighborhoods and remain a family and business-friendly community.
FUTURE LAND USE MAP
Elements of the Poncha Springs/Maysville FLUM include:
- **Mixed Use Corridor** includes the entrances along Highway 50 and is intended to support existing commercial uses located in business and industrial parks. Development in this designation should complement, not detract from, the efforts of the Town of Poncha Springs to create commercial centers.
- Tiered residential uses include **Mixed Residential** for land within the Intergovernmental Agreement areas, particularly east of the municipal boundary, where future development and annexation into the municipality is desired and utilities exist or are planned. The next tier includes **Suburban Residential** uses which surround Mixed Residential uses.
- No change in use is envisioned for residential subdivisions with lots under 20 acres in **Rural Residential** or **Rural/Agriculture**.
SUMMARY OF KEY ISSUES
- The average parcel size in Poncha Springs is 3.2 acres; however, the Town has a form-based development code that enables much smaller lot sizes.
- The South Arkansas River corridor is a long-term target for conservation and many private or municipally owned easements already exist.
- Public land access and recreational trails are underdeveloped around Poncha Springs.
- Expansion of the Town’s water supply will likely require a multi-zoned system.
- Intergovernmental coordination between Poncha Springs and the County on land use planning is improving and should continue to develop a shared vision for planned growth.
- County Road 120 traffic is increasing as more people use the road between Poncha Springs and Salida to access businesses located on or near CR 120.
- Maysville wells occasionally run dry.
SUB AREA SPECIFIC POLICIES
While these policies may also be relevant throughout the County, the following were identified as regional priorities or have greater relevance in this Sub Area.
- Work with HOAs on private roadways to balance increasing recreational use with residential traffic, particularly around access to trailheads.
- Conduct a CR 120 Transportation Study.
COUNTY FAIRGROUND FOCUS AREA
This 60-acre, County-owned parcel is adjacent to the Town of Poncha Springs and offers development potential due to its size. A master plan for this site should determine how this parcel could be used in the public’s best interest including:
- Affordable housing development
- Rights of way alignments
- Public facilities, such as schools or recreation facilities
- Expansion of uses/facilities that support the County Fair.
HIGHWAY 50 CORRIDOR FOCUS AREA
A corridor plan for Highway 50 between Salida and Poncha Springs should address land use and transportation including:
- Directing commercial growth along the highway that does not detract from Salida’s or Poncha Springs’ desire for town centers.
- Establishing safe pedestrian and vehicle routes, access, and crossings.
- Promoting a distinction between rural and urban uses that emphasizes community gateways for both Salida and Poncha Springs.
HIGHWAY 285 FOCUS AREA
The area north of Poncha Springs at the intersection of Highway 285 and Airport Road is envisioned as Rural Commercial on the Future Land Use Map due to land use decisions from previous decades that established a commercial node at that location. The community’s vision, however, is to focus growth towards the town. When facing future development in this area, the County should work with Poncha Springs and surrounding landowners to assess the most appropriate type of development at this intersection.
MAYSVILLE TOWNSITE FOCUS AREA
Maysville’s townsite residents enjoy the peaceful location and access to the outdoors. Considerations for a neighborhood plan for this area include:
- Assess extent of current infrastructure (roads, water, wastewater, cellular, broadband) to ensure it is adequate and safe for existing residences.
- Understand whether infrastructure improvements would benefit the neighborhood vision for this area and fiscal options for implementation.
- Explore pathway and trail connectivity projects to connect to Poncha Springs and a regional network.
PONCHA SPRINGS: Existing Land Use
- Residential Vacant (platted, unbuilt)
- Residential Low Density (1-2 acre lots)
- Residential Suburban (2-5 acre lots)
- Residential Rural (>5 acre lots)
- Commercial
- Rural Commercial
- Industrial
- Agriculture / Open Space
- Open Space (dedicated)
- Recreation
- Mining Claim
- Water Rights
- Public Land
- City Limits
PONCHA SPRINGS SUB AREA:
Future Land Use Map
- Rural/Agricultural: No Change in Use
- Rural Commercial
- Mixed Use Rural
- Rural Corridor
- Mixed Residential
- Open Space
- Public Lands
- Rural Residential
- Rural Mixed Residential
- Rural Mixed Use Corridor
- Industrial
- Water Claim
- 3-Mile Line
- Municipal Boundary
- Conservation Area
Note: Areas with no color are designated as No Change in Use.
Future Economic Activity Node
Opportunity Sites
Proposed Vehicle Connections
Proposed Pedestrian Connections
PONCHA SPRINGS/MAYSVILLE SUBAREA:
Pattern of Development, 2009-2019
- Building Permits 2009 to 2012
- Building Permits 2013 to 2016
- Building Permits 2017 to 2019
- 5-Mile Area
- Municipal Boundary
PONCHA/MAYSVILLE SUB-AREA:
Existing & Proposed Utilities
- Sanitary Line, Existing
- Water Line, Existing
- Sanitary Line, Conceptual
- Water Line, Conceptual
- Sanitary Connection
- Water Connection
- Municipal Well, Existing
- 3-Mile Area
- Municipal Boundary
Potential Growth/Service Expansion Areas
3.8 | Scenario Alternatives
GROWTH IS GOING TO HAPPEN AND WE HAVE TO HAVE A PLAN AND BE MORE PROGRESSIVE…IF WE DO NOT DO THAT, THE GROWTH WILL STILL HAPPEN, BUT WE WILL NOT HAVE CONTROL OVER IT. GROWTH IS NOT A BAD THING.
— Public Comment
Land use planning scenarios offer a method to assess how different policy decisions can support desired development patterns (Scenarios B and C) that differ from the current trend (Scenario A). Both Scenario B and Scenario C shift the balance of development from the current trend to proposed development areas (municipal and/or unincorporated) in order to protect the County’s agricultural, ecological, and scenic resources. The discussion of scenarios resulted in the selection of a Preferred Alternative for Chaffee County that blended Scenario B: Conservation, Corridors, and Connectivity with Scenario C: Growth Focused to Existing Communities.
The Preferred Alternative aims to:
- Focus high-quality growth near existing communities, while acknowledging that limits on infrastructure capacity and service expansion will constrain desired densities and patterns over the next decade.
- Allow for well-designed development in unincorporated areas if it meets high-quality design and use standards.
The current land use code and planning policies do not support this preferred alternative, so a code update is essential. While all of the goals, strategies, and actions outlined in this comprehensive plan support achieving this vision, the County needs to prioritize the following in particular:
1. Update the zoning code to ensure a mix of development types at different densities are allowed, as identified in this plan.
2. Increase the regional capacity for implementing the Intergovernmental Agreements and for planning preferred growth areas.
3. Complete the studies and analyses needed to support infrastructure development that will enable more development, at greater densities, in and around the municipalities.
4. Complete the resource assessments, maps, and plans needed to inform subdivision and zoning standards that protect priority sensitive areas, open lands, and community assets.
5. Develop a regional multimodal transportation plan that addresses the need for greater road, trail, and pedestrian connectivity.
6. Identify sites and districts that prioritize and incentivize the development of affordable and attainable housing to support the economy and local workforce.
7. Identify funding sources that support infrastructure (road, trail, pedestrian, recreation) and affordable housing.
To encourage new development contributing to this vision, the County should include a “Conformance with Preferred Scenario” standard in the land use application review process to make findings on a proposed project’s compliance in furthering the goals of the Comprehensive Plan.
These scenarios were presented for public discussion at open houses, and the results were workshopped at a Planning Commission worksession. No single scenario best captured the County’s ideal growth outcome; however, key elements from Scenarios B and C were combined into a preferred alternative designed to establish a new framework that resolves key growth management challenges.
Scenario A: Existing Land Use Framework
Description: Scenario A
- Continues growth patterns under the existing land use code, zoning map, and subdivision regulations.
Issues:
- Tools to promote conservation-oriented and well-designed residential development (such as conservation subdivision design standards) in the unincorporated county are missing or incomplete.
- Land use framework needs additional guidance to allow Staff/Planning Commission/BOCC to make findings that direct growth to existing communities based on community vision.
- Existing zoning allows for a broad range of densities and land use types in the unincorporated county.
Scenario B: Conservation, Corridors and Connectivity
Scenario C: Growth Focused to Existing Communities
Scenario B: Aspiration
- Envisions high-quality growth near existing communities and corridors.
Issues Resolved:
- Lack of high-quality and well-located development near existing communities and in proximity to existing infrastructure.
How to Get There:
Strategies/Actions
- **4.2.E:** Reduce highway traffic by promoting active land uses in and near existing communities.
- **5.7.W:** Develop “Chaffee County Certified” design guidelines.
- **7.1.B:** Use strategies to incentivize/direct growth to existing communities (density bonuses, cluster developments, conservation development, or other density targeting regulations).
- **7.1.C:** Engage local community members to develop specific area plans.
- **7.3.H:** Create dedicated funding source for aiding municipalities with infrastructure and annexation.
Scenario C: Aspiration
- Envisions improved growth management standards that promote well-designed subdivisions and rural preservation in the unincorporated county.
Issues Resolved:
- Haphazard rural subdivision location
- Lack of quality subdivision design in rural areas
- Lack of preservation tools for environmentally sensitive areas and elsewhere in the rural county
How to Get There:
Strategies/Actions
- **5.1.D:** Collaborate with landowners on preservation strategies.
- **5.6.R:** Develop wildlife conservation map.
- **7.1.B:** Look at what peer communities are doing to incentivize good subdivision design.
- **7.2.E:** Identify incentives to adopt into the land use code that support planned development and resource protection.
- **7.2.G:** Consider integrating visual resource with an environmentally-important lands overlay.
Preferred Scenario
Preferred Scenario: Aspiration
- Focuses high-quality growth near existing communities.
- Allows for development in unincorporated county if it meets advanced design and use standards.
Recommendations for Implementation:
Strategies/Actions
1. Implement all Strategies and Actions from Scenarios B and C.
2. Establish a Natural Resources Overlay Land Use District, through Envision Chaffee and the HDGP Grant, by revising the Chaffee County Land Use Code to:
- Identify lands viable for preservation through an environmental resources study.
- Map such areas and initiate a public review process.
- Officially adopt overlay as part of land use code revision.
3. Through intergovernmental coordination, designate Opportunity Sites for envisioning future growth, including affordable and workforce housing.
4. Explore designating targeted areas near existing municipalities for desired development.
5. Incorporate a “Conformance with Preferred Scenario” commentary into the land use application review process to make findings on a project’s applicability in furthering the Preferred Scenario’s vision.
Scenario A:
EXISTING LAND USE FRAMEWORK
Within the Existing Land Use Framework Scenario, future growth and land development would occur within the density, design and character standards as currently regulated by the zoning and subdivision code.
Outcome Objectives:
Based on existing trends, by 2030, 50% of all new and existing households in unincorporated areas will be within a half mile (comparable to a 20 minute walk) of a main street center, corridor, or neighborhood center with access to goods and services to meet some of their daily needs. This is compared to about 40 percent in 2020.
Policy Considerations:
- Maintain existing minimum lot sizes across all zone districts.
- Continue growth management practices that increase density for residential development that includes public water and sanitation (rather than single on-site treatment).
- Assumes continued build-out of platted but unbuilt parcels in existing subdivisions will occur.
- Development within 400 feet of existing infrastructure should make a reasonable attempt to connect to the existing system.
- Continuation of County/Town coordination through intergovernmental agreements.
- Considers that infill within municipal boundaries will occur at a rate equivalent to that in unincorporated areas.
- Land occupied by new development will spread throughout intergovernmental agreement areas with municipalities, and land formerly in agriculture will be further subdivided.
SCENARIO A
Potential Growth ‘Heat Map’
Colors correspond to projected absorption of total units (e.g. 40% of all new growth in housing units will be located in the top category)
- 40% of total units
- 30%
- 20%
- 10%
RUR Zone
Public Lands
The Conservation, Corridors and Connectivity Scenario assumed the same number of housing units and the same absorption and growth rates as Scenario A, but generally conserves the highest-value landscapes—whether for preservation, recreation or otherwise—according to environmental conditions and community values as expressed in the input process. This includes agricultural and working landscapes, scenic areas, high flood, fire or fluvial hazard areas, and critical wildlife habitat.
The intention of this Scenario was to illustrate a moderate shift in land use patterns from continued subdivision of formerly unplatted land to directed growth toward places where existing transportation, water, sanitation infrastructure and amenities currently exist.
Outcome Objectives:
By 2030, 60% of all new and existing households in unincorporated growth areas will be within a half mile (comparable to a 20 minute walk) of a main street center, corridor, or neighborhood center with access to goods and services to meet some of their daily needs. This is compared to about 40 percent in 2020.
Between 2020 and 2030, 25% of new housing will locate in rural areas. Rural areas are defined as outside of the cities, towns, and unincorporated urban growth areas (i.e. more than 10 minutes driving time from an incorporated place).
Policy Considerations:
- Adjust location of RUR zone district.
- Allowable density increase in targeted areas near transportation network, job/activity nodes, adjacent to municipalities and infrastructure.
- Encourage denser housing types like duplex, townhomes, multi-story apartments in these areas.
- Identifies Opportunity Sites for envisioning future growth that meets the public vision, including affordable/workforce housing.
- Includes incentives for density if affordable housing is built over targeted unit counts (e.g. inclusionary land use regulations for affordable housing).
- Introduction of potential locations for affordable/workforce housing on Future Land Use Map as well as strategic actions to supply such housing on the respective properties.
SCENARIO B
Potential Growth ‘Heat Map’
Colors correspond to projected absorption of total units (e.g. 40% of all new growth in housing units will be located in the top category)
- 40% of total units
- 30%
- 20%
- 10%
RUR Zone
Public Lands
Scenario C is an outcome where the land use regulatory framework only permits low density, large-lot development in outlying unincorporated areas and strongly incentivizes development in and near municipalities that would be annexed, or to existing unincorporated villages such as Nathrop or Johnson Village. In discussion at Open Houses, Scenario C was favored by the public.
Outcome Objectives:
By 2030, 75% of all new and existing households in unincorporated growth areas are envisioned to be within a half mile (comparable to a 20 minute walk) of a main street center, corridor, or neighborhood center with access to goods and services to meet some of their daily needs. This is compared to about 40 percent in 2020.
Between 2020 and 2030, 15% of new housing is envisioned to locate in rural areas. Rural areas are defined as outside of the cities, towns, and unincorporated urban growth areas (i.e. more than 10 minutes driving time from an incorporated place). Municipalities, through annexation and intergovernmental coordination with the County, will absorb the majority of new and existing households within planning areas as defined in intergovernmental agreements and three-mile plans.
Policy Considerations:
- Consider increase of minimum lot sizes in RUR zone.
- Adjustment of RUR zone district location through map amendment.
- Increase minimum lot sizes in transitional residential/agricultural interface areas where RUR and RES zones are adjacent.
- Incorporation of Natural Resource Overlay (NRO) areas or a “Backcountry” land use overlay on select REC and public lands. Several counties in Colorado have adopted similar land use codes, along with a mechanism such as transfer of development rights to ensure equitable transfers.
- Address mining claim land uses: Develop a structure that allows the transfer of development rights from mining claims “sending areas” to receiving areas that the County would not be responsible for/taxpayers would not be burdened with in the case of a wildfire.
- Establish targeted areas for Affordable/Workforce Housing based on Opportunity Sites on future land use map. This designation acts as a land use overlay in conjunction with bonuses for number of units supplied.
- Incorporate a development scorecard in the review process to establish criteria for density increases in targeted areas. These criteria may include proximity to the transportation or trail network, job/activity nodes, adjacency to municipalities, and proximity to existing infrastructure.
- Duplexes, townhomes, and multi-story apartments encouraged in areas adjacent to municipalities in anticipation of annexation.
- Identifies Opportunity Sites for County partnerships in supplying affordable/workforce housing, as well as actions or incentives to supply housing in strategic areas.
- Includes incentives for density if affordable housing is built over targeted unit counts, particularly robust inclusionary zoning in select zone districts to be determined after community input.
- Specifies density “Target Areas” near municipalities desired for growth, where the County supports and directs resources to housing projects that supply affordable/workforce units. These areas will be annexed into the municipalities.
- Considers revising intergovernmental agreements specific to annexation for consistency with future land use maps.
- Suggests partnering with municipalities to share costs of infrastructure to attract desired development, particularly affordable/workforce housing on designated Opportunity Sites.
- Adopt Conservation Subdivision Design Standards.
SCENARIO C
Potential Growth ‘Heat Map’
Colors correspond to projected distribution of total units (e.g., 40% of all new growth in housing units will be located in the top category)
- 40% of total units
- 30%
- 20%
- 10%
RUR Zone
Public Lands
NRO/Backcountry Overlay
4. Implementation Plan
The Implementation Matrix below organizes Action Steps that are associated with Strategies that are found in Section 2. Included are project initiation timeframe goals, as well as organizational lead and estimated cost. This is a living document that will be updated on the Together Chaffee website as more time is spent refining implementation.
Each Action Step’s corresponding Goal and Strategy is linked in the “Ref” column.
$ = Under $100,000
$$ = $100,000 - $5 Million
$$$ = Over $5 Million
## 1. PEOPLE & COMMUNITY SERVICES
| Ref | Implementation Action | Lead | Timeline | Cost |
|-----|---------------------------------------------------------------------------------------|-------------------------------------------|----------|---------------|
| 1.1.A | Facilitate opportunities for inclusive community-building events, such as a Chaffee Heritage Day Celebration. | | | |
| 1.1.A | Facilitate communication of community events through a central events calendar / County Visitors Bureau. | | | |
| 1.1.A | Seek opportunities to introduce diversity and encourage inclusive behaviors in the County conversation. | Full Circle Restorative Justice | | |
| 1.1.A | Address harm, crime and conflict in the community in ways that build community, integrate all those impacted, serve victims and increase public safety. | Full Circle Restorative Justice | | |
| 1.1.B | Assess the needs of all residents in Chaffee County, particularly vulnerable or underserved groups in our region. | Chaffee County Equity Coalition | | |
| 1.1.B | Provide/facilitate access to essential County functions particularly to overcome communication barriers. | Chaffee County Equity Coalition | | |
| 1.1.B | Raise awareness of the cultural contributions to our County by diverse communities. | Chaffee County Equity Coalition | | |
| 1.1.B | Promote inclusivity in all county functions and activities, so as to be welcoming to all faiths and belief systems. | Chaffee County Equity Coalition | | |
| 1.1.B | Address issues of isolation and deficient services. | Chaffee County Equity Coalition | | |
| 1.2.D | Support arts education for all ages and arts organizations that promote and offer arts education throughout the County. | Chaffee County Community Foundation | | |
| 1.2.D | Form a County-wide collaborative to represent the arts and creative communities. | Chaffee County Community Foundation | | |
| 1.2.D | Support and promote young creative artists. | Chaffee County Community Foundation | | |
| 1.2.D | Acknowledge the financial contribution that the creative arts make to the economy of Chaffee County. | Chaffee County Community Foundation | | |
| Ref | Implementation Action | Lead | Timeline | Cost |
|-----|--------------------------------------------------------------------------------------|-----------------------------|--------------------|------|
| 1.3, E | Maximize use of existing facilities and coordination with municipalities for health, wellness, and fitness activities. | | | |
| 1.3, F | Support the Chaffee County Health Coalition with implementation actions to address community health issues. | Chaffee County Public Health | Short & ongoing | $ |
| 1.3, F | Update the Community Health Improvement Plan. | Chaffee County Public Health | Medium & ongoing | $ |
| 1.3, F | In partnership with diverse stakeholders launch an awareness campaign of the effects of behaviors that negatively affect health and the availability of preventive health services. | Chaffee County Public Health | Short & ongoing | $ |
| 1.3, F | Foster a working relationship between Chaffee County Public Health and the Planning Commission to share expertise and resources to achieve shared goals. | Chaffee County Public Health | Short & ongoing | $ |
| 1.3, F | Work with CCPH to develop an orientation for new PC members on the interconnections between land use planning and public health. | Chaffee County Public Health | Short & ongoing | $ |
| 1.3, F | Review and adopt recommendations from the Chaffee County Food Assessment. | Guidestone, CCHED | | |
| 1.3, F | Support the Chaffee Local Foods Coalition’s work to build a resilient food system and food access. | Guidestone, CCHED | | |
| 1.4, H | Evaluate assets and needs of seniors in areas such as parks and public spaces, housing, community participation, respect and social inclusion, communication and information, community and health and in-home services, and grief support services. | Chaffee County Public Health | Short & ongoing | $ |
| 1.4, H | Financially and culturally support community organizations that serve seniors. | Senior Master Plan, CCPH | Short & ongoing | $ |
| 1.4, H | Support the Chaffee County Early Childcare Council recommendations for expanding childcare access in the county by addressing facilities needs. | Chaffee County Early Childhood Council | Short & ongoing | $$$ |
| 1.5, K | Support the Chaffee County Early Childcare Council recommendations for expanding childcare access in the county by addressing staffing needs. | Chaffee County Early Childhood Council | Short & ongoing | $$ |
| 1.5, K | Support the Chaffee County Early Childcare Council recommendations for expanding childcare access in the county by supporting economic viability of childcare providers. | Chaffee County Early Childhood Council | Short & ongoing | $ |
| 1.5, K | Support new centers and childcare homes through the licensing process. | Chaffee County Early Childhood Council | Short & ongoing | $ |
| 1.5, L | Assess needs for health services in the schools. | BV & Salida School Districts | Short | |
| 1.5, L | Implement services to meet the needs identified in the health services for schools assessment. | BV & Salida School Districts | | |
| Ref | Implementation Action | Lead | Timeline | Cost |
|-----|--------------------------------------------------------------------------------------|-------------------------------------------|----------|----------|
| 1.5, L | Support extracurricular youth programming. | BV & Salida School Districts | | |
| 1.6, N | Support and encourage local community volunteer organizations. | CCCF | | |
| 1.6, N | Encourage and facilitate participation and public engagement in community projects. | CCCF | | |
| 1.7, Q | Perform facilities needs assessments and capital improvements planning to ensure service needs are being accommodated in proportion to the pace of growth. | CCCF | | |
| 1.8, R | Conduct a county-wide assessment of vulnerable individuals and their needs. | Human Services | | |
| 1.8, R | Conduct an inventory of available services for vulnerable populations. | Human Services | | |
| 1.8, R | Strengthen outreach systems and a central coordinated access point to information about accessing services. | CCPH | Short & ongoing | |
| 1.8, R | Partner with and financially support entities uniquely positioned to assess community member needs and provide advocacy and support for victims and survivors. | Full Circle Restorative Justice | | |
| 1.8, R | Provide funding as necessary to public agencies and resources that provide services to crime victims, the incarcerated, unhoused, etc. | Full Circle Restorative Justice | | |
| 1.8, R | Prepare for anticipated stresses on all support systems that will occur due to anticipated population increase. | Full Circle Restorative Justice | | |
| 1.9, V | Assess which County services are vulnerable to future financial strain. | County Planning Department | | |
| 1.9, V | Conduct an impact fee study for services or facilities affected by growth. | County Planning Department | | |
| 1.9, V | Perform capital improvement plans and facility assessments to ensure adequate facilities and staffing needs are accounted for. | County Planning Department | | |
| 1.9, V | Enhance systems for civic participation in local governance. | County Planning Department | | |
## 2. COUNTY CHARACTER
| Ref | Implementation Action | Lead | Timeline | Cost |
|-----|--------------------------------------------------------------------------------------|-------------------------------------------|----------|----------|
| 2.1, A | Support the development and execution of the RIB’s Chaffee County Recreation and Resource Protection Plan. | Envision | Short | Funded |
| 2.1, B | Develop appropriate master plans, such as a County-wide Parks, Open Space, Trails and Recreation Master Plan, that are amendments to the comprehensive plan and guide land use decision making and investments. | Envision | Short | Funded |
| Ref | Implementation Action | Lead | Timeline | Cost |
|-----|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|----------|------|
| 2.1, B | Envision Chaffee County will facilitate development and implementation of the Chaffee Recreation Plan, with leadership by an in-place Chaffee Recreation Council including top leaders from all land management agencies, municipal governments, county and community. The plan is fully funded and in progress for 2021 completion, followed by implementation. | Envision | Short | Funded |
| 2.1, B | Map all existing recreation and tourism-related lodging and services to identify current areas of focused commercial activity. | Envision | Short | Funded |
| 2.1, B | Promote clustering of future tourism-related lodging in the land use code update to minimize impacts on residential or agricultural areas. | Envision | Short | Funded |
| 2.1, B | Communicate Chaffee Recreation Plan and the Recreation Suitability Map when complete to the group working on the Trails Master Plan (the Rec Plan should act as a precursor). | Envision | Short | Funded |
| 2.1, B | Ensure all resulting map data and modeling is communicated to PC for use in development planning. | Envision | Short | Funded |
| 2.1, C | Support organizations that provide access opportunities for individuals, youth, and families who are typically unable to enjoy the outdoors. | CCHP | Long | |
| 2.2, E | Update the Chaffee County historic structures and sites inventory. | CCHAAB | Medium | $$ |
| 2.2, E | Identify and nominate properties for National Historic Register designation. | CCHAAB | Medium | $ |
| 2.2, E | Collaborate with property owners on preservation strategies. | CCHAAB | Medium | |
| 2.2, D | Appoint a Planning Commissioner, County Commissioner, or staff to represent the County on the CCHAAB. | County Planning Department | Short | No cost |
| 2.2, F | Raise awareness of historic structures and sites. | CCHAAB | Long | $$ |
| 2.2, F | Support CCHAAB with enhancing collaboration and coordination amongst regional organizations working on protection of community assets such as, Buena Vista Heritage, Salida Museum, Salida Historic Preservation Committee, Hutchinson Homestead & Learning, Salida Area Parks, Open Space & Trails. | CCHAAB | Long | $$$ |
| 2.2, F | Provide funding for promotion of county heritage historic sites and events. | CCHAAB | Short | $ |
| 2.2, F | Support historical and cultural education in local schools. | CCHAAB | Long | $$ |
| 2.2, G | Support updates to the Chaffee County Heritage Area & Collegiate Peaks Scenic and Historic Byways Management Plan (SHBMP). | CCHAAB | Short | $$ |
| 2.2, G | Integrate the SHBMP into local and regional economic development plans. | County/CCVB/CCEDC | Medium | $ |
| 2.2, G | Align trail and transportation corridor design and development with the SHBMP. | County Planning Department | Long | $$ |
| 2.2, G | Integrate the updated SHBMP recommendations into the County’s land use code update. | County Planning Department | Long | $ |
| Ref | Implementation Action | Lead | Timeline | Cost |
|-----|--------------------------------------------------------------------------------------|-------------------------------------------|----------|------|
| 2.2, H | Provide education for new residents and visitors on the value of agricultural lands and how to support them. | CCC, Upper Ark Conservation District, Envision | Medium | $ |
| 2.2, H | Support agricultural producers to be sustainable, with consideration of impacts to agriculture in code (tax code, land use code), enforcement of the Chaffee County Right to Ranch policy and support for programs that help brand and promote local agricultural products. | Envision, Chaffee County Right to Ranch, Guidestone, PC | Medium | $ |
| 2.2, H | Involve an agricultural representative in the Chaffee Recreation Plan to ensure it considers impacts of recreation to rancher operations. | Envision | Short | $ |
| 2.3 | Review and amend development regulations related to subdivisions to achieve consistency with County plans for public land access, trail easements, open space dedications as well as municipal standards in urban growth areas. | Planning Commission | Short | $ |
| 2.3 | Develop a GIS map of potential mining areas by conducting an assessment of geology, mining claims, and mining rights to inform the update to the land use code. | | | |
## 3. AFFORDABLE & INCLUSIVE HOUSING
| Ref | Implementation Action | Lead | Timeline | Cost |
|-----|--------------------------------------------------------------------------------------|-------------------------------------------|----------|------|
| 3.1, A | Support the soon-to-be-created Chaffee Housing Authority by meeting financial commitments and participation by County leadership. | Elected officials, appointed officials, and municipal staff | Short | $$$ |
| 3.1, A | Designate the Chaffee Housing Authority as the public entity to represent Chaffee County in affordable housing projects. | Elected officials, appointed officials, and municipal staff | Short initially | $ |
| 3.1, A | Update the Regional Housing Assessment every 5 years. | Housing Authority | Short | $$ |
| 3.1, A | Use the Chaffee Housing Authority Strategic Plan and Regional Housing Assessment to periodically measure progress in achieving regional goals for affordable housing. | Housing Authority, with input from other housing entities | Medium | $ |
| 3.1, B | Incentivize housing for low- and moderate-income households in designated districts using tools such as: fee reductions, density bonuses, and expedited approval. | Elected officials, appointed officials, and municipal staff | Short | $ |
| 3.1, B | Research applicability of a County-wide inclusionary zoning ordinance for major subdivisions. | Elected officials, appointed officials, and municipal staff | Medium | $ |
| 3.1, B | Refer to the Chaffee Housing Authority’s Strategic Plan to align land use policies between the County and municipalities to support the production of affordable housing. | Housing Authority, HPAC to convene parties to draft policies | Medium | $ |
| 3.1, B | Help fund and facilitate a County-wide study on vacation rentals and the feasibility of an impact fee to contribute to affordable housing projects. | Housing Authority | Short | $ |
| Ref | Implementation Action | Lead | Timeline | Cost |
|-----|--------------------------------------------------------------------------------------|-------------------------------------------|----------|------|
| 3.1, B | Fast track approval of plans for continuum of care services facilities and affordable housing for seniors near municipalities. | County staff | | $ |
| 3.1, B | Research and adopt zoning and development standards within designated districts that permit different housing types including duplexes, multiplex, tiny homes, ADUs, etc. as use by right. | Planning Collaborative (convened by HA) | Medium | $ |
| 3.1, C | Build support for an affordable housing development and preservation fund. | Housing Authority & HPAC | Short | $ |
| 3.1, C | Create a local impact investment structure to facilitate local investment in affordable housing development. | Housing Authority & CCCF | Short | $ |
| 3.1, D | Acquire and bank land and/or structures for future development, redevelopment, or financial equity. | Housing Authority | Short | $$$ |
| 3.1, D | Identify potential contributions to the acquisition, rehabilitation or development of existing and/or new affordable developments. | Elected officials, appointed officials, & municipal staff | Short initially | $$$ |
| 3.1, D | Identify potential financial contributions to development costs of public private partnerships which result in permanent or long-term affordability. | Elected officials, appointed officials, & municipal staff | Short initially | $$$ |
| 3.1, D | Promote, facilitate and contribute to public private partnerships for the development of affordable housing. | Housing Authority | Short initially | $$$ |
## 4. CONNECTIVITY, MOBILITY & ACCESS
| Ref | Implementation Action | Lead | Timeline | Cost |
|-----|--------------------------------------------------------------------------------------|-------------------------------------------|----------|------|
| 4.1, A | Task the Transportation Advisory Board (TAB) with development of the CCMTP. | TAB, Planning Manager | Short | $$ |
| 4.1, D | Task the TAB with development of a Five-Year Sustainable Funding Plan with add-a-year/drop-a-year updates annually to the Board of County Commissioners. | TAB, Planning Manager | Medium | |
| 4.2, E | Involve County Sheriff and emergency service agencies in the planning, building and the everyday use of County airports, roads, freight routes, transit routes and bike and pedestrian trails to reduce user conflict and increase public safety. | TAB, BoCC, County Staff, Sheriff's Office | Medium | |
| 4.2, E | Reduce vehicle trips on highways by promoting telecommunication, active land uses in and near existing communities, enhancing access to employment, recreation, public events, services and businesses by use of public transportation and other multimodal options. | TAB, County staff | Medium | |
| 4.2, E | Identify areas with safety hazards and explore ways to improve safety and comfort. Prioritize bicycle and pedestrian infrastructure projects to promote safe nonvehicular commuting. | TAB, County staff, local communities | Medium | |
| Ref | Implementation Action | Lead | Timeline | Cost |
|-----|--------------------------------------------------------------------------------------|-------------------------------------------|----------|-----------------------------|
| 4.2, E | Improve County highway bicycle and pedestrian crossings. Identify Safe Routes to School and prioritize bicycle/pedestrian improvements to and from schools. | TAB, County staff, local communities | Medium | |
| 4.5, I | Create an ongoing annual Strategic Assessment and Maintenance Plan. | TAB, County Staff | Medium | |
| 4.5, I | Maintain existing County roads, bridges and trails that connect residents, workers and visitors to destinations across the County. | County Staff | Short | Requires increase in budget |
| 4.5, I | Develop County management and operations solutions to create and extend the life cycle of corridors for all uses. | County Staff, BoCC | Medium | Requires increase in budget |
| 4.5, I | Coordinate with local partners across sectors and jurisdictions on all multimodal projects. | TAB, County Staff, BoCC | Medium | Variable |
| 4.5, I | Coordinate during the planning and design of improvement projects applying the context of “transportation corridors” for all modes to provide for efficiency of construction of multiple projects at once to maximize funding. | TAB, County Staff, CDOT, local communities | Medium | Variable |
| 4.5, J | Align road development standards and road hierarchies across municipalities and the County for consistency with the Chaffee County Multimodal Transportation Master Plan and designated growth areas. | TAB, County Staff, BoCC, CDOT, local communities | Short | Variable |
| 4.5, J | Conduct a fiscal impact study to assess costs of road construction and to address issues such as future maintenance and State Highway access improvements. | TAB, County Staff, BoCC, CDOT | Medium | Variable |
| 4.5, J | Develop a multi-year Capital Improvements Plan for County roads to prioritize road improvements. | BoCC, County Staff, PC | Medium | Variable |
| 4.5, J | Assess strategies for maintaining the ability of agriculturalists to safely move agricultural machinery and move livestock and agricultural products around the county. | BoCC, ranching representatives, CDOT, Sheriff | Medium | Variable |
| 4.1, C | Prioritize trail construction and improvements that connect high concentrations of people to activity centers and connect those with high mobility needs and under-served populations to community services, schools and housing. | TAB, SPOT, local communities | Medium | Variable |
| 4.1, C | Align open space and trail dedication policies, plans, and standards across municipalities and the County. | TAB as part of CCMTP | Short | Variable |
| 4.1, C | Invest in the construction of multi-use long-distance trail connections that link Salida, Poncha Springs, and Buena Vista to each other, utilizing designated transportation corridors throughout the County. | TAB as part of CCMTP | Short | Variable |
| 4.1, C | Use rights of way for trails where available to maximize limited funding. | TAB as part of CCMTP, RIB | Short | Variable |
| 4.3, G | Work collaboratively with federal, state, and municipal partners to identify public land access points to prioritize for maintaining access as the region grows. | TAB, local communities, BoCC, PC | Short | Variable |
| Ref | Implementation Action | Lead | Timeline | Cost |
|-----|--------------------------------------------------------------------------------------|-------------------------------|----------|------|
| 4.3, G | Assess existing facilities and access points for capacity and manage to minimize over-use. | County Staff, SPOT, BoCC | Short | Variable |
| 4.5, K | Study the feasibility of a more robust and efficient mass transit system in the County and/or region. | TAB, Chaffee Shuttle, CDOT, RIB | Medium | Variable |
| 4.5, K | Provide support to the transit providers to expand service to have local circulating routes in each community and between communities for recreational activities, service and employment to connect Chaffee County to services and activities outside the county. | Chaffee Shuttle, BoCC, PC | Short | Variable |
| 4.5, K | Implement a one-stop-shop for transit with a call center, website and mobile application to make accessing transit services easier. | Chaffee Shuttle | Short | $$ |
| 4.4, H | Identify transportation barriers to business employers and employees, including tourism, and assist with their needs. | TAB, Chaffee Shuttle | Medium | Variable |
| 4.4, H | Assure public access to County telecommunication systems. | TAB | Short | Variable |
| 4.4, H | Develop freight movement and accessibility criteria to improve delivery options. | TAB | Short | Variable |
| 4.4, H | Invest in the construction of bicycle lanes, sidewalks or multi-use paths, broadband/fiber optic cable on both sides of the road when County roads undergo significant maintenance projects. | BoCC, PC, TAB | Short | Variable |
| 4.4, H | Evaluate the expansion of airports to support their commercial activities. | Airport Managers in Salida and BV | Short | Variable |
| 4.6, L | Review development regulations for safe and redundant access in natural hazard zones and amend to ensure future County developments are designed to provide for redundancy of ingress/egress. | TAB, BoCC, County Staff, PC | Short | Variable |
| 4.6, L | Review and amend development regulations to require consideration of planned multimodal transportation infrastructure within developments such as safe routes to school, transit stations/shelters, bus bike racks, park-and-rides, transit pull-ins/pullouts, telecommunication links and connections, etc. | TAB, BoCC, County Staff, PC | Short | Variable |
| 4.6, L | Ensure the development review process includes consistency with transportation related plans, such as the Chaffee County Multimodal Transportation Plan, in order to achieve transportation goals between and within developments. | TAB, BoCC, County Staff, PC | Short | Variable |
| 4.6, L | Consider the use of the TAB as a review agency for developments where consistency with state, regional, and county transportation goals is applicable. | BoCC, PC, TAB | Medium | $ |
## 5. RESILIENT & SUSTAINABLE ENVIRONMENT
| Ref | Implementation Action | Lead | Timeline | Cost |
|-----|---------------------------------------------------------------------------------------|-------------------------------------------|----------|------|
| 5.1, A | Treat 30,000 acres of public and private lands by 2030 to cut in half the risk that wildfire poses to community resources, while also enhancing wildlife habitat. | Envision Forest Health Council | Short | $$$ |
| 5.1, A | Develop collaborative funding to support above treatment (with estimated cost of $45 million) with the Forest Health Council and creation of an Upper Arkansas Forest Fund that builds upon/leverages Agency and Common Ground funds. | Envision Forest Health Council, National Forest Foundation | Short | $$$ |
| 5.1, A | Implement Chaffee Chips, an inter-agency program enabling private landowners to create defensible space and enhance the health of their local forest. | Envision, Chaffee Fire, CSFS | Short | $ |
| 5.1, A | Implement Collaborative Communications to transparently track and celebrate progress. | Envision | Short | $ |
| 5.1, A | Collaborate with the Planning Commission and the Envision Forest Health Council to implement immediate risk code modifications identified in the CWPP (by 2021) and to update zoning/codes to enhance fire resilience in new development. | County Staff, PC, Envision Forest Health Council | Short | |
| 5.2, B | Complete the Chaffee County Hazard Mitigation Plan Update. | County Staff | Medium | |
| 5.2, B | Collaborate with Planning Commission to integrate the recommendations for hazard mitigation into the land use code update. | County Staff | Medium | |
| 5.5, J | Adopt a revised water supply adequacy standard for development. | County Staff, PC | Short | $ |
| 5.2, E | Invest in and protect wetlands and riparian areas that attenuate floods and capture sediment to build our resilience to fire and flood. | | Long | |
| 5.2, E | Execute a program of payments for ecosystem services for landowners who enhance floodplains and river corridors. | Common Ground, CCC | Medium | Funded |
| 5.2, F | Maintain existing and historic ditches that are essential to agriculture and support groundwater recharge. | UAWCD | Long | $$$ |
| 5.2, F | Research and develop market alternatives to "buy and dry." | UAWCD | Medium | |
| 5.3, G | Formalize and adopt a Chaffee County Energy Plan, building off the plan created by Clean Energy Chaffee (CEC) in March 2020. | CEC, Chaffee Green, GARNA, Salida Sustainability | Medium | $ |
| 5.3, G | Establish goals and strategies of the Energy Plan based on reducing the identified major sources of carbon emissions in the County, in line with Governor’s Roadmap to 100% Renewable Energy by 2040 and Bold Climate Action. | Clean Energy Chaffee, Salida Sustainability | Short | $ |
| 5.3, G | Work with experts in the field of energy conservation and with energy providers to complete a modified energy supply cost/benefit analysis. | Clean Energy Chaffee, Salida Sustainability | Medium | $ |
| 5.3, G | Using the results of the energy supply cost/benefit analysis, identify and plan for implementation of the lowest cost alternatives to reduce carbon emissions. | Clean Energy Chaffee, Salida Sustainability | Medium | $ |
| 5.3, G | Develop and adopt procedures and protocols to improve the energy efficiency of County operations and facilities. | Clean Energy Chaffee, Salida Sustainability | Long | $ |
| 5.3, G | Make cost-effective bulk purchases of energy-efficient supplies and equipment for the County's own use and for municipalities, businesses, and homeowners. | Clean Energy Chaffee, Salida Sustainability | Medium | $ |
| 5.3, G | Educate the public on energy, DIY and finance options as needed. Research and encourage involvement of resources to educate the consumer on how to address energy challenges. | Clean Energy Chaffee, Salida Sustainability | Short | $ |
| 5.3, G | To promote job growth and energy efficiency in single family homes, create and host a partnership with Energy Smart Colorado to assist with and subsidize the cost of professional energy audits for homes. | County Staff, CCEDC | Short | $$ |
| 5.3, G | Support and encourage citizens in their efforts to establish solar gardens within the County. | Clean Energy Chaffee | Medium | $ |
| 5.3, G | Create a map of future energy sites by geothermal and solar potential. | Clean Energy Chaffee | | |
| 5.3, G | Promote the 100% free energy audit program from Northwest Colorado Council of Governments to promote energy efficiency and lower utility bills for low income residents. | County DHS, CEC | Short | $ |
| 5.3, H | As County-owned vehicles are retired, replace them with electric vehicles, supported by an appropriately extended vehicle charging infrastructure. | CEC, Salida Sustainability | Medium | $$$ |
| 5.3, H | Develop an electric vehicle charging infrastructure throughout the County. | CEC, Salida Sustainability | Medium | $$$ |
| 5.3, H | Coordinate permitting and siting processes across jurisdictions to expedite renewable energy development. | CEC, Salida Sustainability | Medium | $$$ |
| 5.3, H | Establish an audit-and-retrofit procedure for commercial buildings and low-income housing to improve their energy efficiency to a high-performance standard. | CEC, Salida Sustainability | Medium | $$ |
| 5.3, H | Expand non-vehicle transportation alternatives throughout the County and encourage human-powered transportation and pedestrian-oriented land use patterns to reduce greenhouse gas emissions. | | Medium | |
| 5.3, H | Investigate the possibility of reviving the dormant railroad tracks in the Upper Arkansas Valley. | | Medium | |
| 5.3, H | To promote job growth and energy efficiency in commercial buildings, expand and promote the "Colorado Commercial Property Assessed Clean Energy Program" (C-PACE) to provide affordable, long-term financing for energy efficiency, water efficiency, and renewable energy projects. | CCEDC, CEC | Short | $ |
| 5.4, I | Evaluate the current waste management systems in the County on their effectiveness and efficiency in diverting waste from the landfill. | Chaffee Green, GARNA | Short | Funded |
| 5.4, I | Develop a waste management plan independent of the Sustainability Plan, and create a team to determine how to achieve its goals. | Chaffee Green, GARNA | Medium | |
| 5.4, I | Develop a County-wide integrated waste management system that focuses on next-generation recycling techniques and composting. | Chaffee Green, GARNA | Medium | $ |
| 5.5, L | Work closely with the Upper Arkansas Conservation District to enhance resiliency in water rights management. | County Staff | | |
| 5.5, L | Collaborate with Buena Vista to adopt an overlay for Cottonwood Creek that reduces future well density and protects the Town of Buena Vista's water supply and riparian corridor. | County Staff and PC | Medium | |
| 5.5, L | Collaborate with Salida on protection of the municipal water supply along the South Arkansas. | County Staff | Medium | |
| 5.5, K | Assess County facility cost benefits for integration of water efficiency. | County Staff | Medium | |
| 5.5, K | Integrate water efficiency into the land use code update. | County Staff | Medium | |
| 5.6, M | Explore a river corridor overlay as part of the land use code update to identify best use. | County Staff, PC | Short | |
| 5.6, N | Assess the potential of increased nitrate concentrations from septic failure. | CCPHED | Medium | $$ |
| 5.6, O | Review current streamside setback, wetland, and riparian habitat standards for consistency with best management practices. | County Staff, PC | Short | |
| 5.6, P | Assess land use code for vegetation disturbance limits and revegetation standards, zero net runoff for development and low impact development, and other best practices. | County Staff, PC | Medium | |
| 5.6, R | Support an Arkansas River health study to inform a long-range action plan for the river and watershed. | UAWCD | Medium | $$$ |
| 5.6, R | Support the development of a watershed collaborative to develop a Stream Management Plan and/or Integrated Water Resource Management Plan. | UAWCD, GARNA | Medium | $$$ |
| 5.7, W | Work with CPW and nonprofit partners to identify habitat priority conservation areas for focal species. | Envision | Short (in progress) | $ |
| 5.7, U | Envision Chaffee County will engage with the Colorado Forest Restoration Institute at CSU, the Chaffee Recreation Council and the community to complete mapping/modeling by summer 2021. | Envision | Short (in progress) | |
| 5.7, V | Ensure all Envision data is housed at the County. | Envision, County Staff | Medium | |
| 5.7, V | Research best practices in land use mitigation standards for focus species (wildlife human conflict, migration corridors, winter habitat, riparian habitat, etc.). | County Staff, PC | Short | |
| 5.7, V | Use research to inform the development of appropriate development standards that mitigate impacts to wildlife. | County Staff, PC | Short | |
| 5.7, U | Work with CPW and nonprofit partners to use the habitat conservation map to inform location and development standards for highest priority habitat. | County Staff, PC | Short | $$$ |
| 5.7, X | Leverage funding, grants, and collaborative partnerships to enhance stewardship and protect wildlife habitat on private lands. | Envision Forest Health Council | Medium | |
| 5.7, X | Educate the community regarding wildlife needs and what community members can do to protect wildlife. | Envision Rec Council | Medium | |
| 5.8, Y | Assist in developing a citizen advocacy group, like Clean Energy Chaffee, to coordinate education and outreach to citizens on reducing carbon footprints individually and County-wide. | BoCC | Long | |
| 5.8, Y | Expand on the accomplishments Salida has made in completing a Greenhouse Gas (GHG) Inventory by conducting a County-wide GHG Inventory. Data from the inventory can be used as a tool for setting goals and priorities around GHG emission reductions. | BoCC | Medium | |
| 5.8, Z | Develop Chaffee County Certified Design that offers design guidance for voluntary actions based on local knowledge of the County weather including information about solar orientation, insulation, driveway orientation to minimize snow drifts, etc. | Chaffee Green, GARNA, CEC | Short | $$ |
| 5.8, Z | Chaffee County accepts the Governor’s Sustainability Challenge to integrate environmental science with community-based education approaches, resulting in collaboration, participation and innovation. | Chaffee Green, GARNA, CEC | Medium | $ |
| 5.8, Z | Work with local builders and residents to educate them on good building practices and the adoption and application of the latest building methods and codes. | Chaffee Green, GARNA, CEC | Medium | $$ |
| 5.8, Z | Identify key priority areas for an outreach campaign to educate the general public, and other stakeholders | Chaffee Green, GARNA, CEC | Medium | $ |
| 5.8, AA | Organize County services and staff to provide a focus on ecological health and services by appointing an existing employee or creating a new position called "Sustainability Manager" by 2021. | Chaffee Green, GARNA, County Staff | Short | $ |
| 5.8, AA | Support this new position to focus on serving the public needs relative to smart energy transition resulting in employment and lower utility bills. | | Medium | $ |
## 6. JOBS & ECONOMY
| Ref | Implementation Action | Lead | Timeline | Cost |
|-----|--------------------------------------------------------------------------------------|-----------------------------|----------|------|
| 6.1, A | Educate on "Agritourism" and other innovative economic methods to maintain working landscapes and boost the local economy. | Guidestone, Envision | Short | $ |
| 6.1, A | Create a County wide Agricultural Sustainability Advisory committee to develop, promote, and support programs to support local agriculture and ranching encompassing environmental concerns, economic concerns, etc. | Guidestone, CCCF, UAWCD | Medium | $ |
| 6.1, A | Support education in farming and ranching through local educational institutions (high school, CMC). | UAWCD, Guidestone, CSU Extension | Medium | $ |
| 6.1, B | Encourage the formation of a citizen entrepreneur advocacy group that focuses on the specific needs of existing businesses. Educate on economic tools for marketing, cooperative, branding, etc. | Small Business Development Center | Short | $ |
| 6.2, E | Develop an early wins plan. | CCEDC, Envision | Medium | $ |
| 6.2, C | Attract new agriculture, aquaculture, and small-scale farming industries to support local food production. | CCEDC | Medium | |
| 6.3, F | Ensure the Collegiate Peaks Scenic and Historic Byways is integrated into the Chaffee County Visitor's Bureau annual work plan. | CCHAAB, GARNA, County Staff, CCVB | Short | $$ |
| 6.5, H | Work with CMC and other educational partners to offer programming. | CMC, CCEDC, Salida & BV School Districts | Short | |
| 6.6, I | Host a building and development community charrette to discuss opportunities to streamline and improve the land use regulations to support economic opportunity, creativity, and innovation. | County Staff, CCEDC | Short | |
| 6.4, G | Maintain and update the broadband plan. | County Staff, CCEDC | Short | |
## 7. GROWTH & LAND USE
| Ref | Implementation Action | Lead | Timeline | Cost |
|-----|--------------------------------------------------------------------------------------|-------------------------------------------|----------|------|
| 7.1, B | Research land use incentives used by peer communities to direct growth to inform the land use code update. | PC (in retreat) | Medium | $ |
| 7.1, C | Engage local community members and municipalities to develop and approve specific area plans and engage in joint planning efforts. | PC (in retreat) | Medium | $ |
| 7.2, E | Research and prioritize strategies to achieve better development including but not limited to density bonuses, cluster developments, conservation development, transfer of development rights and other appropriate strategies. | PC | Short | $ |
| 7.2, E | Update the development regulations to integrate most appropriate strategies. | PC | Short | $$ |
| 7.2, F | Identify existing gaps and deficits in overlapping agency responsibilities for protecting critical land values. | County Staff | Short | $ |
| 7.2, F | Assess relevant plans and the land use code for existing policies and processes to inform recommendations for development. | County Staff | Short | $ |
| 7.2, F | Update land use regulations as necessary based on assessment. | County Staff | Short | $ |
| 7.2, G | Update the development code to include recommendations in the updated Collegiate Peaks Scenic Byway’s Corridor Management Plan. | County | Medium | $$ |
| 7.2, G | Develop a scenic byways visual resource assessment methodology to include in an update to scenic resources development standards. | CCHAAB | Medium | $$ |
| 7.2, G | Consider integrating visual resource with a scenic value overlay. | County Staff | Medium | $$ |
| 7.3, H | Create a collaborative environment between municipalities and County to analyze the cost/benefit of providing infrastructure or services. | County Staff | Short | $$ |
| 7.3, H | Create a dedicated funding source for aiding municipalities with infrastructure service costs when annexations occur. | County Staff | Medium | $ |
| 7.3, H | Hold County-owned parcels near existing municipalities for future schools, housing, water/wastewater treatment plants, landfills, and other future infrastructure needs. | County Staff | Long | $ |
| 7.3, I | Create an accessible geographic database to communicate information on existing conditions such as county infrastructure, facilities, utilities, wildlife habitat and migration corridors. | County Staff | | |
| 7.4, J | Write a proposal to DOLA to fund an update of the development code. | County Staff | Short | $$$ |
| 7.4, J | Research regulatory policy examples to achieve community goals. | County Staff | Medium | $ |
| 7.4, J | Create overlays showing land use types/zones, viewsheds, open space, existing and proposed multimodal transportation routes, telecommunications, and water supply. | County Staff, Office of Housing | Medium | $ |
| 7.4, H | Develop Focus Area master plans for Johnson Village, Nathrop Townsite, Maysville Townsite, and Highway 50 Corridor. | | | |
## 5. Data & Trends
### PEOPLE & COMMUNITY SERVICES
**DATA & DISCUSSION**
**WHY THIS THEME IS IMPORTANT**
With changing demographics and an increasing population, Chaffee County is experiencing shifting priorities and growing pains when providing community services to its residents and their changing needs. From education to senior services, the current and future residents of the County rely on these services to meet essential needs.
**KEY DATA POINTS**
**Population & Projections**
- Chaffee County has experienced unprecedented growth over the last 20 years, with the fastest growth rates occurring in recent years. According to the Department of Local Affairs (DOLA), from 2015 to 2018 Chaffee County added approximately 1,438 people, an average growth rate of 7.7%, up 3.3 percentage points from the 2010-2015 rate (4.5%).
- Utilizing U.S. Census data from 2000 through today, an exponential smoothing forecast estimates that Chaffee County’s population will increase by 33% by 2035, reaching 26,949 people based on historic growth trends.
Population Growth: Population and projection, 2000 to 2040 (Colorado Department of Local Affairs, Cushing Terrell)
Barring unforeseen circumstances, Chaffee County will add between 6,000 and 13,000 new residents between 2020 and 2035, all of whom will need adequate housing, services, goods, and recreational activities to maintain the livability that attracts people to this community.
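The 33% projection above can be sanity-checked with simple compound-growth arithmetic. A minimal sketch, assuming a hypothetical 2020 baseline of about 20,262 residents (implied by the 26,949 figure); the plan's actual forecast used an exponential smoothing model on Census data, which this does not reproduce:

```python
# Sketch: compound-growth check of the 2035 population projection.
# The 20,262 baseline is an assumption (26,949 / 1.33), not a plan figure.

def project(base: float, total_growth: float, years: int) -> float:
    """Spread a total growth fraction evenly over a number of years."""
    annual_rate = (1 + total_growth) ** (1 / years) - 1  # ~1.9%/yr here
    return base * (1 + annual_rate) ** years

pop_2035 = project(base=20_262, total_growth=0.33, years=15)
print(round(pop_2035))  # ~26,948, consistent with the plan's 26,949
```

A 33% increase over 15 years works out to a bit under 2% per year, noticeably slower than the 7.7% average rate cited for 2015-2018.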
**Demographics**
- **Chaffee County’s population is aging**, with a median age of 49.1, which is much older than the state’s median age of 36.6.
- An estimated 64% of the population is between the ages of 30 and 75, and half is over the age of 50. The County’s population pyramid indicates current and future demand for housing and services for an aging population, such as assisted living facilities and expanded health care.
- Personal incomes in Chaffee County are below the state average but are increasing at a faster rate. According to US Census 5-year estimates, the median income of Chaffee County residents increased from $42,941 in 2010 to $54,580 in 2018. This 27% increase outpaces the state of Colorado's 22% increase in median income over the same period. According to HUD and CHFA, the Area Median Income (AMI) for 2020 is $50,000 for a single person.
- **An increase in second homes and occasional-use homes points to a more seasonal population.** In 2018, the US Census estimated that 19% of housing units in the County, around 2,069 homes, were for seasonal or occasional use only. At an average of 2.2 persons per unit in 2018, that represents an estimated seasonal population of around 4,552 persons.
**Age and Gender: Population Pyramid**
*Source: US Census ACS 2017*
*(Chart: population counts by five-year age cohort, from under 5 years through 85 years and older, split by male and female.)*
**Median Income Comparison, 2010 - 2018**
| Area | Median Income 2010 | Median Income 2018 | % Change |
|---------------|--------------------|--------------------|----------|
| Chaffee | $42,941 | $54,580 | 27% |
| Colorado | $56,456 | $68,811 | 22% |
| United States | $51,914 | $60,293 | 16% |
*Source: US Census ACS 2018*
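The % Change column above can be verified directly from the two income columns; a quick Python check using the table's own values:

```python
# Verify the % Change column of the median income comparison table.
incomes = {
    "Chaffee":       (42_941, 54_580),
    "Colorado":      (56_456, 68_811),
    "United States": (51_914, 60_293),
}
for area, (y2010, y2018) in incomes.items():
    pct = round((y2018 - y2010) / y2010 * 100)
    print(f"{area}: {pct}%")  # Chaffee 27%, Colorado 22%, United States 16%
```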
**Schools**
- The County is served by two public school districts, one in Buena Vista and the other in Salida. Each has an elementary, middle/junior, and high school.
- The Montessori School and the Darren Patterson Christian School also operate in the County.
**Higher Education**
- Residents within the Salida School District voted in 2019 to join the Colorado Mountain College (CMC) Tax District and pay the attendant mill levy on real property. These funds will be used to support much-needed courses and programs through the College.
- CMC is already gearing up its programming in the southern half of the County, with significant emphasis on technical training programs and careers.
**Medical Care**
- The Heart of the Rockies Medical Center in Salida is the only hospital in the County, and there are many services not available at this facility that require trips to the Front Range.
- Services identified as lacking or needed include urgent care, pediatric dentists, transitional care, substance abuse programs, continuum of care, and hospice.
**Behavioral Health**
- Solvista Health provides mental and behavioral health services for Fremont, Chaffee, Lake and Custer Counties out of Salida. In 2019, they received a grant to build a treatment center adjacent to the Heart of the Rockies Medical Center.
**Emergency Response Services**
- Capacity for first responders like police, fire and EMS, although potentially adequate for the current population, might be challenged as they serve a growing population. Goal 1.7 “Ensure emergency services are adequately funded and staffed to maintain high quality service in the County as the population increases” is intended to find ways to proportionally increase facilities, personnel, and resources for emergency service branches at the same pace of proposed growth. Additional studies and capital planning should be completed to build an understanding of servicing needs.
**COMMUNITY INPUT**
- Almost half (45%) of residents who took the first online community survey have lived in Chaffee County for over 10 years, while 38% have lived here for less than 5 years.
- When asked what they valued most about Chaffee County, 25% of all answers mentioned the people and the sense of community. Many mentioned the friendly people of Chaffee, and the tight-knit nature of the Arkansas Valley community.
- About 39% of survey respondents thought that Chaffee was experiencing the right amount of growth, 37% thought that there was too much growth, and 20% responded that there was way too much growth. One respondent stated, "Growth is going to happen and we have to have a plan and be more progressive…if we do not do that, the growth will still happen, but we will not have control over it. Growth is not a bad thing!"
**COMP PLAN IMPLICATIONS**
With changing demographics comes the need to create the facilities, supply the housing types and construct mobility features for all ages and abilities, including healthy recreation options. This means projects should be oriented to accessibility with gathering spaces that have features and facilities for all ages, such as playgrounds, shady resting areas and safe, visible gathering spaces where the community can interact.
Additional focus is therefore placed in existing town centers, where a critical mass of housing, jobs and job creation opportunities and gathering spaces would encourage the vitality and vibrancy desired by the Chaffee community.
Much study and discussion have been devoted to social capital within Chaffee County and how it relates to growth and development. The value of tapping into Chaffee's social networks was communicated in the Envision Chaffee process, which identified several grassroots-level concerns and ambitions, including (paraphrased from 'Envision Chaffee County'):
- Chaffee County has a severe shortage of licensed child care locations: providers decreased from 27 to 12 in recent years, and 65% of respondents to the 2018 Child Care Availability Survey indicated they may leave due to child care shortages.
- Concern about finding common ground and building community capacity to discuss difficult issues and to develop and achieve solutions.
- Support vulnerable residents: resources for domestic violence programs and related services are currently limited at the county level. Goal 1.8, "Support vulnerable residents with appropriate services," is intended to consider expanding those programs to offer support for such services.
Fiscal responsibility is paramount to this comprehensive planning effort, particularly as a global pandemic continues to impact the day-to-day financial well-being of Chaffee County. To the extent feasible, development and growth should occur only if it does not place an undue burden on the County's finances or its ability to provide a high level of public services. This theme therefore includes Goal 1.9, "Maintain a fiscally effective County government," along with a list of strategies consistent with the community's desire to maintain financial health while promoting the right amount and character of growth.
County character is visualized in the historic architecture of the core business districts of town and city centers, the historic ranches lining the Collegiate Peaks Byway, the ruins of old mines, and the many pedestrian and bike trails connecting people to recreation opportunities.
Character comes out in the feeling of being in Chaffee County and walking down the streets of Salida, Buena Vista, and Poncha Springs. It comes out of driving down Collegiate Peaks Byway, or one of the winding mountain passes. It’s the sounds of the Arkansas River and the glee of people rafting its rapids.
Character can be measured by the demographic, social, or economic characteristics of the County’s people. It can also be measured visually by surveying the natural and built environment. Both means of measurement are important to understand in discussing goals and policies that could bring about or maintain desired character in the future. The following discusses this theme in terms of how character has shifted and the implications of such change.
Maintaining public management of public lands at all levels is a mission of Chaffee County’s leadership. This Comp Plan builds upon that legacy.
Recent growth (residential subdivision, land development, or infill redevelopment) has been a catalyst for change in community character. The threats facing Chaffee County are common to many other mountain and resort communities, including the loss of agricultural landscapes and open spaces to suburban sprawl. This exacerbates the jobs-housing imbalance as residences continue to be built on the easiest-to-develop land farther out in the unincorporated county instead of near employment centers.
**Key Data Points**
**Parks and Public Land**
- **Chaffee County is 83% public land** owned by Federal, State and Local organizations.
- Resolution 2017-10, "Recognizing the Value of Federal Public Lands," was adopted in 2017 to state the County's continuing support for public management, maintenance, and control of federal lands. The resolution states the public value inherent in these lands and intends to maintain that public use for future generations. Chaffee County does not maintain a formal park or recreation system or facilities such as campgrounds. **Citizens have access to large tracts of state and federal lands for recreational purposes, hundreds of miles of developed trails, and over 900 private and public campsites in the County.**
The U.S. Forest Service (USFS) manages 70% of Chaffee County's public land (itself 83% of the County). The Forest Service has over 300 miles of trails within the County's forests.
The Bureau of Land Management manages 8% of public trails in Chaffee County.
The State of Colorado owns over 20,000 acres in Chaffee County, managed by the Colorado State Forest Service, State Land Board, Department of Corrections and the Colorado Division of Wildlife.
The Arkansas Headwaters Recreation Area (AHRA) is managed through a unique partnership of Colorado State Parks, the Bureau of Land Management, Colorado Division of Wildlife and the U.S. Forest Service. The AHRA manages recreation along a 152-mile extent of the Arkansas River.
**Historic & Cultural Resources**
Chaffee County has a tremendous bank of natural and cultural resources such as abundant wildlife, scenic natural areas like the Chalk Cliffs, historic towns and sites, natural hot springs and pools, and blue-ribbon trout rivers and streams.
The County has a wealth of sites of historic and archaeological interest. These range from mines to historic cemeteries and ghost towns such as Turret. Of particular note is the town of St. Elmo, a National Historic District. Currently, there are no land development or zoning regulations that protect these historic and archaeological resources in the County.
Salida’s Creative District and art scene have offered residents increased opportunities to interact with art after being selected as one of only two inaugural “Certified Creative Districts” in Colorado in 2012.
Many widely varied cultural and community events have put Chaffee County on the map as a unique destination to enjoy the creative arts, the Arkansas River, music, and history.
**Agriculture**
In recent years, farms and ranches in Chaffee County have been increasing in number while decreasing in size. According to USDA Census of Agriculture estimates, from 2012 to 2017 the number of ranches and farms in the County increased by 30%, while the acreage of farm and ranchland decreased by 15%.
Most sales from farming and ranching in Chaffee County come from livestock, poultry and products (67%) while 33% come from crop farming.
There are approximately 16,464 acres of irrigated land in Chaffee County, which is 25% of land in farms and ranches.
90% of farms in Chaffee County are family farms.
**Chaffee County Agriculture**
| County Farms | 2017 | % Change since 2012 |
|--------------|------|---------------------|
| Number of farms | 289 | +30% |
| Land in farms (acres) | 66,297 | -15% |
| Average farm size (acres) | 229 | -34% |
| Land in farms by use | 2017 |
|----------------------|------|
| Cropland | 26% |
| Pastureland | 62% |
| Woodland | 7% |
| Other | 5% |
Source: USDA Census of Agriculture, 2017
CHAFFEE COUNTY: Trails & Recreation (map; legend: trails, trailheads, campgrounds, group campgrounds, fishing access, hotels/lodges/resorts, interpretive visitor centers, lakes/reservoirs, picnic sites, recreation residences, ski areas)
Recreation
- **Recreation use is growing by 15% per year.** The County currently sees about 4.3 million visitors per year; at the current growth rate this will double to more than 8 million in 6-7 years, with impacts increasing in parallel. Of 1,005 dispersed campsites monitored in 2019, 37% had trash and/or human waste and roughly 40% are within 100 feet of water. In the Fourmile area, dispersed campsite numbers are increasing by 23% per year, doubling every 4-5 years along with their impacts. Recreation growth was also identified as a top-five community concern in the Envision survey and as the second greatest threat to forest health (just after severe wildfire). The challenge is retaining quality experiences and economic benefits while also maintaining the health of forests, waters, wildlife and working lands, all of which are currently being impacted by recreation growth.
- There are **over 800 miles of trails** within Chaffee County for hiking, biking and horseback riding. Some trails are open for ATV/OHV and dirt bike recreation, like the 10,000-acre Fourmile Travel Management Recreation Area.
- The **102 miles of whitewater on the Arkansas River** that runs through the County are open to fishing, white water rafting and kayaking. Boaters can find everything from Class IV and Class V rapids, to milder Class II and Class III sections.
- Monarch Mountain offers over 800 skiable acres, 670 of which can be accessed by ski lifts. There are 66 total trails, and the Mountain gets an average annual snowfall of 350 inches.
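The doubling figures cited in the recreation bullets above are consistent with simple linear extrapolation. The sketch below is illustrative only; it assumes growth in constant annual increments of the current base (the plan's apparent method) and is not taken from the plan itself:

```python
# Doubling times under linear growth: each year adds a fixed percentage
# of the current base, so reaching `factor` times the base takes
# (factor - 1) * 100 / annual_pct years.

def years_to_multiply(annual_pct_growth: float, factor: float) -> float:
    """Years for a quantity to reach `factor` times its base under
    linear growth of `annual_pct_growth` percent of the base per year."""
    return (factor - 1) * 100 / annual_pct_growth

# Visitor growth of 15%/yr: ~6.7 years to double (matches "6-7 years").
visitor_doubling = years_to_multiply(15, 2)

# Fourmile dispersed campsites at 23%/yr: ~4.3 years to double
# (consistent with "doubling every 4-5 years").
campsite_doubling = years_to_multiply(23, 2)
```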
**COMMUNITY INPUT**
- When asked what the greatest risk to Chaffee County’s current quality of life was, 26% of survey responders mentioned uncontrolled growth and over-development, while another 26% believed that the lack of affordable workforce housing posed the biggest threat.
- Responders appeared to value the County’s open space and were cautious of growth encroaching on natural lands and changing the character of the County.
- Survey responders took pride in the character of the County – when asked what makes Chaffee County a great place to live, 23% answered the rural, small-town feel of their community. Many mentioned the laid-back or “slow” lifestyle as something to be preserved.
- While 97% of the 1,500 citizens who participated in the Envision survey indicated that “working lands are important to my quality of life,” there has been a 15% decrease in agricultural lands in the last 5 years or ~3% a year. At that rate, agricultural lands will be halved in about 16 years, vastly impacting County character and also the ecosystem (see Theme 5).
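The 16-year halving estimate in the last bullet follows from linear extrapolation of the roughly 3% annual loss. A short sketch (illustrative, not from the plan; the compound-decline figure is added only for comparison):

```python
import math

# Linear decline: losing a fixed 3% of today's agricultural acreage each
# year halves the total in 50 / 3 ≈ 16.7 years (the plan's "about 16 years").
linear_halving_years = 50 / 3

# If instead the 3% loss compounded on each year's remaining acreage,
# halving would take log(0.5) / log(0.97) ≈ 22.8 years.
compound_halving_years = math.log(0.5) / math.log(1 - 0.03)
```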
Agriculture, recreation, western heritage and open space contribute to Chaffee County’s character and allow its people their chosen lifestyles. The Comp Plan recognizes this and proclaims the need for flexibility in the face of change. Character is after all defined by the County’s people and when the people change, the plan must be adaptable.
Development of rural lands for suburban single family residential use has been a topic of debate for decades. Newcomers create demand for lots which increases land prices. This incentivizes the sale of rural land, while chronically declining commodity prices, family dynamics and Federal estate taxes can and do force the sale of agricultural lands. Similar issues were evident in the 2000 Comprehensive Plan and more recently the issue was discussed in the Citizens’ Land Use Roundtable. Demand continues to drive the need to provide housing for newcomers. Ranchers and landowners are able to take advantage of the market by conveying pieces of vacant or agricultural land to developers for residential subdivision. Meanwhile, smaller pieces of farmland have been conveyed through subdivisions exempt from full land use review through exemptions in state law.
This pattern has left unincorporated Chaffee County with little land available for subdivision once public lands and previously platted subdivisions are removed from consideration (see Theme 7: Growth & Land Use for additional discussion of developable land). Accordingly, future land use policy in Part III of this plan provides guidance to keep agricultural lands in the rural county while respecting the rights of ranchers, farmers and landowners to manage their lands in their best interest for future generations.
CHAFFEE COUNTY: Agriculture (map; legend: irrigation ditches, irrigated lands, lands leased for agricultural use)
Adopted December 2020
WHY THIS THEME IS IMPORTANT
Understanding the interrelationship between housing costs, jobs, local economics and sustainable growth, Affordable and Inclusive Housing plans for a future where housing is accessible for all segments of the population. This theme centers on the lifestyle that includes all people of all ages, social groups and income levels.
Along with the State of Colorado and the country itself, Chaffee County and the region face challenges to providing sustainable housing for its workforce. A regional housing shortage, lack of diversity of housing, and low-wage jobs have priced certain income levels out of the County, and have continued to push existing residents elsewhere.
Understanding the regional nature of housing, the community partnered in completing the Chaffee County Housing Needs Assessment in 2016 which assessed trends, evaluated supply and demand conditions and identified gaps where resources should be focused.
In general, the study found that most new jobs added to the County in the past 20 years have been low-wage, tourism-based work; coupled with the rapidly increasing cost of housing and land, this has widened the gap between wages, incomes and housing. With the high cost of construction preventing adequate numbers of new homes from reaching the market, and with non-local ownership and short term rentals constraining the supply of housing available to new workers, the gap will continue to widen.
KEY DATA POINTS
Pace of Housing Growth
- The number of housing units in the County has increased, with growth of 2,739 units County-wide from 2000 to 2018, according to DOLA. However, the County only grew by 1,975 households over that time. Since one household is equivalent to one occupied housing unit, this faster growth of housing units compared to households indicates an increase in second home ownership.
- According to US Census 5-year estimates, 26.8% of housing units in Chaffee County are vacant, much higher than the state vacancy rate of 10%. The Census also estimates that in 2018, 2,069 units (19% of total housing units) were used as second homes, classified as “for seasonal, recreational or occasional use”, compared to 5% statewide.
- A large portion of the housing stock in Chaffee is in the unincorporated county, according to DOLA. Of the approximate 11,188 housing units, 29% are in Salida, 14% are in Buena Vista, 4% are in Poncha Springs, and 53% are in the unincorporated area. Development in the unincorporated area is generally low density, dispersed, and without municipal utilities.
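The second-home inference in the first bullet above is a matter of simple unit accounting, sketched here using the DOLA figures cited in this section:

```python
# DOLA figures, 2000-2018: units added vs. households added county-wide.
new_units = 2739
new_households = 1975

# One occupied housing unit corresponds to one household, so units that
# outpace households point to unoccupied stock (e.g. second homes).
excess_units = new_units - new_households  # 764 units
```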
The Chaffee County Housing Needs Assessment concluded that the region’s households in most need of housing are those below 120% AMI. It identified a 1,262-unit shortfall for households at 60% AMI and below, an 834-unit shortfall for those at 60-120% AMI, and a 330-unit shortfall for the “missing middle” households between 120 and 160% AMI in 2016.
**Housing Type**
- Based on 2018 US Census estimates, single-family detached housing units made up 81% of the county-wide housing stock, followed by mobile homes at 9.8%.
- In the unincorporated areas of the County (excluding the municipalities), the majority (85%) of housing units are single family detached, up 9% since 2010.
- The number of mobile home units in unincorporated Chaffee decreased from 20% of all units to just 11% from 2010 to 2017.
The housing stock has become less diverse over the last 15 years in the County, with only 5% of units being multifamily. According to the Housing Needs Assessment, there has been very little apartment construction due to a lack of available sites with the right zoning and rents that did not cover the cost of construction.
**Housing Cost and Affordability**
- The dramatic increase in County housing costs largely occurred within the last two to three years. According to the Realtors of Central Colorado, the median sales price for a single family home in Chaffee County was $426,978 in December 2019, up 11% from 2018 and up 21% from 2016.
- Similarly, the median sales price for a townhouse or condo was $325,200 in December 2019, up 12% from 2018 and up 27% from 2016.
- According to US Census 5-year estimates, 23% of the total housing units in Chaffee County are rentals.
### Housing Inventory by Type
| Area | Single Family | Duplex | Multi Family | Mobile Home | Townhome/ADU | Total |
|---------------|---------------|--------|--------------|-------------|---------------|-------|
| Unincorporated county | 4,860 | 29 | 58 | 648 | 95 | 5,700 |
| Salida | 2,139 | 232 | 368 | 233 | 94 | 3,066 |
| Poncha Springs| 236 | 8 | 53 | 28 | 19 | 344 |
| Buena Vista | 1,250 | 31 | 113 | 9 | 27 | 1,430 |
| Johnson Village| 41 | 0 | 0 | 119 | 33 | 193 |
| Maysville | 90 | 0 | 0 | 0 | 0 | 90 |
| **Total** | **8,575** | **300**| **592** | **918** | **235** | **10,630** |
| % of Total | 80.7% | 2.8% | 5.7% | 8.6% | 2.2% | 100% |
Source: US Census ACS 2017
CHAFFEE COUNTY: Housing Density (map; legend: 0-2, 2.1-25, 25.1-75 and 75.1-150 units per acre)
According to the 2016 Chaffee County Housing Needs Assessment, rents have been rising and were between $1,200 and $1,600 per month. Most rental properties are single family homes, and when properties become available they are usually rented within one month. An average rent of $1,200 per month is not affordable to a household earning less than 100% AMI. Anecdotally, current local ads show rents between $1,600 and $2,000.
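The affordability claim above can be illustrated with the common 30%-of-gross-income standard. The threshold is an assumption for illustration; the Housing Needs Assessment may define affordability differently:

```python
# Annual gross income needed so that rent stays at or below a given
# share of income (30% is the conventional affordability threshold,
# assumed here for illustration, not taken from the plan).

def income_needed(monthly_rent: float, share_of_income: float = 0.30) -> float:
    return monthly_rent * 12 / share_of_income

low_end = income_needed(1200)   # $48,000/yr needed for $1,200/mo rent
high_end = income_needed(2000)  # $80,000/yr needed for $2,000/mo rent
```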
**Homelessness**
- Although accurate counts of persons experiencing homelessness are difficult to find for Chaffee County, school officials report an unexpectedly high number of students who lack “housing security”.
- Chaffee has a Homeless Coalition formed in 2019 that focuses on finding solutions and resources for homeless community members. Limited wintertime facilities are available in the south end of the County.
- Homelessness has had significant impacts on the management of public lands, as the rapidly increasing housing prices in the County have forced community members to live in tents and vehicles full-time without proper facilities.
**COMMUNITY INPUT**
- When asked what the greatest risk to Chaffee’s quality of life was, 26% of responders to the first community survey referred to lack of affordable housing.
- Similarly, 22% of responders mentioned that affordable and/or workforce housing was the number one thing that Chaffee County should focus on in a new comprehensive plan.
- Ninety percent of community survey responders lived in a single-family detached house, while only 3% lived in a townhouse, 2% lived in a duplex or apartment, and 2% lived in a mobile home.
- The majority (89%) owned their home; of those who rented, over half (53%) stated that the greatest barrier to owning was that it was too expensive in their community, while 24% said that there were not enough of the dwelling types they needed.
- The majority (63%) of responders thought that short term housing rentals (STRs) should be reined in within Chaffee County, while only 10% thought they should be encouraged, and 26% said that they were not an issue for them personally.
- In Survey #2, 25% of responders said they would support a flat-rate annual licensing fee for STRs that contributes to an affordable housing fund, while 22% said they would support a fee as a percentage of the listing price paid by the vacationer. Only 5% of Survey #2 responders said that they did not support any additional fees for STRs.
- Responders were split on what type of housing they would like to see more of in Chaffee County. Forty percent answered free-standing small homes and small-lot freestanding houses, while other popular answers included duplex-fourplex (18%) and vertical mixed-use buildings (12%).
- In response to the open-ended question “What type of affordable community housing should the County produce?”, over one quarter of responders described medium- to high-density apartments in and around existing towns and activity nodes.
- Just over half (53%) of Survey #2 responders were supportive of creating a local dedicated tax for affordable housing, while 29% said they would not support a tax and 19% said they did not know.
- When asked what type of tax they would support, 35% of responders said a hotel/lodging tax, 22% said an excise tax or development impact fee, 15% said a real estate tax or dedicated property tax, and 13% said they did not support a tax for affordable housing.
- When asked if they would support an inclusionary housing ordinance requiring 10-20% affordable housing for every new subdivision developed in the County, 69% responded yes and 31% said that they would not support such an ordinance.
**COMP PLAN IMPLICATIONS**
Ultimately the question the community must answer is “How can we continue to supply a strong mix of housing types and prices so that we can alleviate the social and economic impacts of losing segments of the population who are unable to participate in the community, either economically or culturally, due to housing costs?” Long commutes, displacement, and empty homes change the community’s character and vibrancy.
The dominant housing type is the single family detached unit, and that is unlikely to change. But as people of all types continue to choose Chaffee County as their next home as they depart large cities, the single family home-type will be challenged to absorb the mounting growth pressure. Demand in the rental market is for attached units and apartments, as well as units for seasonal and other workers. Opportunities to increase supply at modest densities will be beneficial in relieving this pressure.
Planning to provide opportunities for a mix of housing - particularly affordable units - for all incomes across the county began with the Housing Needs Assessment and is continued in the Comp Plan by expanding opportunities for housing in the right locations near existing communities and through strategic action steps. Such locations should be designated in the Sub Area Future Land Use Maps.
From the community input, conservation subdivision design appears to be an increasingly viable alternative to sprawl in applicable locations on transportation corridors or within a municipal planning area where infrastructure may be accommodated. County regulations should incentivize or provide guidance for developers to pursue these designs instead of traditional large-lot rural subdivisions.
As such, the County has made this part of their vision through the goals, strategies, projects and future land use maps. All of the above strive to promote compact, dense housing and mixed use development in the most appropriate areas near or within existing communities.
WHY THIS THEME IS IMPORTANT
In this document, transportation incorporates Connectivity, Mobility & Access within Chaffee County to create a connected network for all types of transportation movement through the Arkansas Valley and beyond.
This theme is important as it addresses existing transportation patterns and provides proactive alternatives and practices to accommodate future growth pressures and economic trends. Issues that will affect transportation in the future include population growth, County land development, infrastructure needs and the funding and maintenance of existing and planned transportation networks. This comprehensive plan sets objectives to provide and promote multimodal alternatives to travel, improve connectivity within and between towns, adopt and support safety strategies, and support public transportation and telecommunication investments.
Of particular concern is potential growth between the County’s employment centers of Buena Vista and Salida. Increasing vehicle traffic on the highways, combined with bicycle and pedestrian safety concerns, creates complicated issues throughout the County.
KEY DATA POINTS
County Roads
- The Road & Bridge department has a staff of 18 employees and a budget of $3 Million per year. The budget is funded by HUTF, PILT and a small portion of property tax.
- The County manages 334 miles of roads; 156 miles are paved, the majority of which are in poor condition. The asphalt budget allows for 2 miles of 2-inch overlay and 10 miles of chip-and-seal per year. The County also manages 59 bridges.
- 178 miles of road are gravel. The challenge with gravel roads is keeping road users satisfied with road condition and dust; demand for dust control is increasing.
- School bus routes utilize approximately 110 miles of county roads.
- Maintenance of these roads involves geohazards such as landslides, rock falls and flooding, tunnels, pipelines (gas, water, other fuels, etc.), signs, incidents such as roadkill and damage from weather and crashes, and telecommunications infrastructure (phone, internet, cable TV, etc.).
Highway Traffic
- Three major highways serve Chaffee County; U.S. Highway 50 is the primary east/west link and Highways 285 and 24 are the primary north/south links. Traffic backups occur on these highways during the summer months when high concentrations of tourists move throughout the County.
CHAFFEE COUNTY: Traffic (map; projected change in Average Annual Daily Traffic from 2010 to 2020; legend: 21-30%, 31-40% and 41-55% change)
Average Annual Daily Traffic (AADT) has increased significantly on portions of both US 285 and US 24 from 2010 to 2020. The section of US 24 that goes through Buena Vista saw an AADT increase of 46% in ten years.
Other areas that have seen traffic increases include US 285 at Poncha Pass, US 285 through Johnson Village, and the intersection of Highway 50 and 291.
**Commuting Patterns**
- The majority of Chaffee County residents both live and work in Buena Vista and Salida, indicating that most residents live where they work.
- According to US Census estimates, almost half (47%) of the workforce in Chaffee both live and work in the County, while 25% work in the County but live outside and 28% live in Chaffee County but commute outside for work.
- An estimated 70% of workers in Chaffee County drove alone to work in 2018, according to US Census estimates. Only 7% carpooled, and 11% walked or biked.
**Place of Work vs. Place of Residence**
| Area | Where talent works | Where talent lives |
|------------|--------------------|-------------------|
| Salida | 58% | 57% |
| Buena Vista| 32% | 30% |
| Poncha Springs | 3% | 3% |
| Nathrop | 4% | 10% |
| Monarch | 2% | |
Source: Chaffee County Economy Overview, 2018
**Telecommunications**
- Chaffee County has emergency and commercial telecommunication systems including broadband fiber-optic cable, cell and radio towers, telephone lines and satellites, providing internet/cellular and television access to subscribers.
- Telecommuting has become more prevalent in recent years (12% of workers worked from home in 2018), thus putting higher demands on the systems. Tourism, which is a major sector of the county’s economy, also strains the existing telecommunications.
- Broadband access is essential to economic development and to the purchase and delivery of goods for both businesses and residents.
**Chaffee Shuttle Operations**
| Chaffee Shuttle Riders | Percent of riders |
|------------------------|-------------------|
| 60+ disabled | 27% |
| 60+ (non-disabled) | 30% |
| Under 60 disabled | 14% |
| General public | 30% |
| Chaffee Shuttle Trips | Percent of trips |
|-----------------------|------------------|
| Shopping | 27% |
| Medical appointments | 25% |
| Social/recreational activities | 13% |
| Nutrition, employment or education | 11% |
Source: Chaffee Shuttle, 2019
Alternative Transportation
- **The Chaffee Shuttle** is a transit organization operated under the non-profit agency of Neighbor to Neighbor Volunteers. It is a shared ride and public transit service, providing transportation to Chaffee County residents for medical appointments, work, shopping, and social activities. The Shuttle has been expanding to provide connecting services for San Luis Valley residents to and from Chaffee County for the same types of trips. Of significance, it is the only public transit agency within CDOT’s Transportation Planning Region #5, which includes Chaffee, Alamosa, Conejos, Costilla, Mineral, Rio Grande and Saguache Counties.
- **The Chaffee Shuttle operates with a small office staff and drivers, all of whom are paid through grants, donations and some fees charged for fixed routes.**
- Bustang, a fee-based service offered by the Colorado Department of Transportation (CDOT), runs a daily bus line from Denver to Gunnison that stops in Buena Vista and Salida. It also runs buses from Salida to Buena Vista, Salida to Fairplay and Salida to Denver.
- Other transportation options include private taxi and non-profit ride services.
Trails
- Chaffee County has a network of motorized and non-motorized trails used for hiking, walking, mountain biking, and ATV/OHVs. Over the past 30 years, non-motorized systems have developed in each of the three municipalities with connections along county roads leading to the multi-use trails in the public lands surrounding the valley.
- Planning efforts for future trail connections and extensions continue as trail popularity has grown substantially. Continued trail improvements within and between communities, counties and the country are encouraged through public support. A Trails Master Plan was completed by the County in 2008 and will be updated as part of the Chaffee County Multimodal Transportation Plan (CCMTP).
Aviation
- Chaffee County is served by two general aviation airports. Harriet Alexander Field is located two miles west of Salida and is owned jointly by the City of Salida and Chaffee County and is operated by the County. It serves a variety of private, commercial and government users, including area hospitals.
- Central Colorado Regional Airport is located one mile south of Buena Vista. It is owned by the Town of Buena Vista and is operated by Arkansas Valley Aviation. It serves a mix of private, commercial and government users, including firefighting, search and rescue, and emergency medical operations. (Note: Airport Master Plans for both airports are adopted by this comprehensive plan).
Rail, Trucking and Freight
- The former Denver and Rio Grande Western provides an intact railroad corridor from the southeast corner of the County to the northern border. It has been in Active Reserve status with the Surface Transportation Board (STB) since 1998. Although there have been a number of efforts to utilize the corridor over the years, the STB is holding the corridor for the possible resumption of rail traffic.
Survey #1 Results (including chart: “How far do you travel to get to work?”)
- Based on the first online survey, Chaffee residents voiced concern over the lack of multimodal options and infrastructure in the County. They were concerned with State Highway congestion, parking issues and that Highway 285 is the only way to get through the County.
- Safety concerns were also often brought up when discussing pedestrian and bicycle interactions on the Highways across the County.
- Although persons who telecommute from home or commute to Chaffee County to work were not surveyed, these are important elements of Chaffee County’s economy.
- Responders took great pride in the existing trail network and its importance for active lifestyles. Thirty-three percent of all responses to the question “What spaces in and around Chaffee County are most important to you?” mentioned trails.
- Due to the Chaffee Shuttle’s limited capacity (and the lack of awareness among citizens that this service is available), the need for a County-wide circulator shuttle was brought up frequently through the engagement process. In an investment activity at the Together Chaffee Drop-in Events, 7% of residents voted to invest in a County-wide circulator shuttle.
Community input indicates the existing trail network is critical to the County’s identity, yet progress is needed to ensure recreational and commuter travel is safe and convenient for Chaffee’s residents over the next decade.
Implications for this comprehensive plan include the need for maintaining and improving the County transportation system by analyzing existing conditions and implementing policies, procedures, funding and infrastructure to accommodate the future functionality of the County.
This plan encourages the use of existing studies and promotes intergovernmental, agency and community planning in the decision making process concerning the County transportation corridors: roads, trails, waterways, railroads, easements, rights-of-way, air space and telecommunications. The completion of the Chaffee County Multimodal Transportation Plan is essential to meeting these needs.
Land use and connectivity or mobility intersect particularly when exploring ideas to promote density over sprawl. Transit-oriented development has long been discussed as a strategy, such as building density near Bustang stops or along high-volume non-vehicle transport routes.
WHY THIS THEME IS IMPORTANT
Fostering a Resilient & Sustainable Environment means being a regional and national leader in policy-making and resource allocation that emphasizes sustainable development. It means approaching County planning ecologically, understanding that Chaffee’s way of life and some of its highest valued ecological assets require active protection and monitoring.
This comprehensive plan is meant to protect the health, safety, and welfare of our community and preserve our community character for future generations. To do this in a community that is 83% public land, we must have a comprehensive plan organized around stewardship of our ecological resources. This plan extends that ethic to the private lands in Chaffee County, where most of the community lives and works.
KEY DATA POINTS
With continuing efforts from the Greater Arkansas River Nature Association and Chaffee Green, community-based organizations, and the general public, Chaffee County has endeavored to become a leader in the field of sustainable and resilient Western living. There are numerous projects, programs, and efforts driven by grassroots and governmental agencies aimed at pushing the Resilient and Sustainable Environment agenda.
Sustainability-oriented agencies, nonprofits, or other groups have emerged in Chaffee County in the past 20 years. The following organizations had direct input into this comprehensive plan:
- GARNA
- Chaffee Green
- Central Colorado Conservancy
Chaffee Common Ground and Citizen Advisory Committee
- An outcome of the Envision Chaffee County action plan, Common Ground is a community-led initiative to maintain the quality of life and resources that attract so many people to the region.
- Enabled through a ballot-initiated grant process that leverages a portion of local sales tax, Common Ground makes possible programs and projects that protect “the county’s most spectacular scenic views, the health of forest ecosystems, watersheds and water quality, and wildlife and their habitats. The Common Ground Fund helps preserve our community’s unique character and enhances the assets that support our local economy” (from Common Ground’s website).
- Common Ground is overseen by a Citizen Advisory Committee that provides recommendations to the County Commissioners on efforts or projects.
- Common Ground’s guiding principles are congruent with this Comp Plan’s motivations (as discussed on page 197) with forest health, sustainable agriculture and mitigating impacts from recreational uses as high priority subjects.
- Along with the Recreation in Balance Program from Envision Chaffee, this Comp Plan supports the efforts put forth in the mission of these initiatives, as well as the urgency of completing critical tasks to further their effectiveness. Further information on these tasks can be found in the CCG 2019 Annual Report (link).
**Forest Health**
- In the Envision Chaffee effort, forest health was identified as one of the top two concerns of our community.
- Decades of fire suppression have led to a mostly climax forest that lacks diversity and is susceptible to disease. **The spruce beetle is transforming the spruce zone from a forest with 3-4 standing dead trees per acre to 120 standing dead trees per acre.**
- Spruce budworm and beetle infestations, along with a disease wiping out stands of aspen trees, compound the damage. This has set the stage for catastrophic wildfire.
**Wildfire Risk**
- According to the Chaffee County Wildfire Protection Plan, wildfires can be classified by how they are managed on a scale of Type 5 (very small fires) to Type 1 (large, complex fires and natural disasters). Ten years ago, the Upper Arkansas River Headwaters Region in Chaffee and Lake Counties had only ever experienced one Type 3 wildfire. In the decade since, there have been two more Type 3s (Treasure and Lodgepole), the first Type 2 (Hayden Pass), and the first two Type 1s (Weston Pass and Decker).
- While lightning statistically causes the most forest fires, rapid growth in recreation use exacerbates the threat. Fire management will be a critical concern when dealing with future fires.
- The Collegiate Peaks Wilderness covers a significant portion of the western side of the County, and much of the municipal water supply derives from that region. Fire management practices, coupled with the difficulty of the terrain, make fighting fire in this region extremely difficult and dangerous.
- The CWPP calls for a proactive approach to prioritizing forest treatment, not to eliminate fire, but to try to eliminate **catastrophic** fire.
**Preserved/Protected Lands**
- With over 502,500 acres of public land, making up 83% of the County, it’s undeniable that Chaffee’s economic success and future growth depend upon preserving its natural assets. The same land that provides spaces for outdoor recreation and drives the tourism industry is also crucial habitat for fish and wildlife, as well as the forage that sustains the county’s grazing agricultural operations.
- All of Chaffee’s public land is managed by agencies, such as the U.S. Forest Service, the National Park Service, and the U.S. Bureau of Land Management, whose responsibilities include managing snowpack repositories, runoff, and surface reservoirs. The proper management of all these elements directly affects the quality and quantity of the water supply on which the people of Chaffee County rely.
**Water Resources**
• Water is a limited resource, and in Chaffee County it is an issue of concern voiced often by the public and County leadership.
• **The Upper Arkansas River Basin averages 12 inches of precipitation or less per year;** rivers and streams rise with spring runoff from snowmelt, and flows decline significantly during the hot summer months, making it essential to manage water prudently throughout the year.
*Map: Chaffee County Wildland Urban Interface (structures within the Wildland Urban Interface (WUI))*

*Map: Chaffee County Sensitive Lands (floodplain; protected lands; wildlife habitat significance: highest, higher, high; biodiversity significance: outstanding, very high, high; city limits). Adopted December 2020.*
• The Upper Arkansas Water Conservancy District is responsible for managing water resources through storage, augmentation, legal and engineering activities.
• In Colorado, water in every natural river and stream is owned through water rights, and every drop of water in the Arkansas River Basin is appropriated and thereby owned by individual entities, private and public.
• In Chaffee County, water flows through irrigation ditches which are owned and maintained by ranchers across the County.
• Protection of riparian corridors is critical to the survival of wildlife species in the County.
**Watershed Health**
• A number of cooperating federal and state agencies have worked to clean the Arkansas River of decades of mining and other contaminants that had negative health impacts on the river and its native trout population.
• One major indicator of the success of these cleanup efforts is the **return of a healthy trout population**. The Arkansas is rated a Gold Medal fishery, a designation requiring a body of water to consistently support a minimum trout standing stock of 60 pounds per acre; the Arkansas averages 170 pounds per acre.
• A major concern for the Arkansas River is pollution by sedimentation from erosion. A 2010 watershed assessment determined that historical human uses of the water have put pressure on the river: its flows have been channelized, bends straightened, banks eroded and wetland habitat degraded.
• Declining forest health and tree mortality impact the ability of the watershed to hold snow until summer, and weaken the groundwater recharge function as well.
**Wildlife & Habitat Loss**
• Chaffee is fortunate to have abundant wildlife. Some species are thriving while there are issues with other species of State concern.
• Elk are currently managed at the low end of the target range. The mule deer population declined 33% in the past decade, and deer collar data indicate that deer wintering in Chaffee County spread across 8 other counties in other months, so the County’s winter habitat affects regional populations. Chaffee is also a population center for threatened and endangered (T&E) boreal toads, which are rapidly declining.
• Studies elsewhere (e.g. Eagle County) have demonstrated up to a 50% decline in elk populations in 10 years directly related to rural land development and impacts from increasing recreation, exactly what is happening in Chaffee now.
• CPW also indicates that crossings of major highways by migration corridors are a key threat as traffic increases with growing population and visitation.
*Map: Chaffee County Wetlands (wetlands; riparian areas)*
**Energy Use**
- Clean Energy Chaffee (CEC) is a citizen group dedicated to the advancement of clean energy and energy conservation in the County. The group produced a Clean Energy Plan, in which it recommends strategies for Chaffee County to achieve net-zero carbon emissions by 2050. This plan should be used as a guiding document regarding energy use in the County.
- Private energy providers and the State of Colorado are making great efforts to expand renewable energy use statewide. The Colorado Energy Plan updated in 2018 aims to achieve 55% renewable energy on the grid by 2026, and reduce carbon emissions by about 60 percent from 2005 levels through a $2.5 billion investment.
- From geospatial surveys in Chaffee County, the areal extent of the geothermal reservoir in the Mt. Princeton area has been estimated by the Colorado Division of Water Resources as containing between 3.81 Qs and 68.6 Qs of energy. A “Q” is equal to one quadrillion (10 to the 15th power) British Thermal Units (BTUs). If early estimates prove accurate, this is a major renewable energy source. One Q is equivalent to 160 million barrels of oil.
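As a back-of-the-envelope check, the reservoir range can be restated in oil-equivalent barrels. This sketch uses only the equivalence stated above (1 Q ≈ 160 million barrels of oil); the derived totals are illustrative, not figures from the plan:

```python
# Convert the estimated geothermal reservoir range to oil-equivalent barrels,
# using only the equivalence stated in the text: 1 Q = 1e15 BTU ~ 160 million
# barrels of oil. The resulting totals are derived for illustration.

BARRELS_PER_Q = 160e6

low_q, high_q = 3.81, 68.6        # estimated reservoir range, in Qs
low_bbl = low_q * BARRELS_PER_Q   # roughly 0.6 billion barrels
high_bbl = high_q * BARRELS_PER_Q # roughly 11 billion barrels
```

Even at the low end of the estimate, the reservoir would be equivalent to several hundred million barrels of oil.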
- Congress has recently established tax incentives for businesses locating in Opportunity Zones, two of which lie south of Buena Vista. This is in addition to State Tax Incentives available throughout most of Chaffee County. In late 2019, Congress re-instituted and increased both the Investment Tax Credit and the Production Tax Credits, which incentivize most types of renewable energy investments.
**Community Input**
- Those who provided community input voiced concern that development and growth are encroaching on open space and sensitive environments.
- Of particular concern was wildlife habitat and corridors, watershed health, water quality and quantity, wildfire risk, air quality, waste disposal and landfill capacity, and overall ecosystem health.
- One responder to the first online community survey stated “The current County Land Use Code, with 2-acre minimum lot size for single family residential is the primary driving force in creating sprawl, compromising water quality, wildlife habitat, and impacting the landscape to the maximum degree.”
- Another said “The greatest risk to Chaffee County’s quality of life is the loss of our wilderness areas and wildlife due to growth, wildfire, and not putting an emphasis on sustainability.”
- In Survey #2, when asked if they would support the creation and adoption of a County-wide Sustainability Plan, 90% of responders said yes. Without a sustainability department or manager, many residents expressed their concern that Chaffee County is lagging behind other Colorado communities in planning for sustainability.
- In the effort to update the County’s Wildfire Protection Plan, a survey was conducted to better understand perceptions about forest fire, fire resilience, treatment activities and preparedness for a major wildfire event. The results of the survey indicated that nearly half of citizens were not prepared for wildfire, 40% had no established evacuation plan and 62% had no arrangements related to children at home alone during an emergency.
**Survey #2 Results: Would you support the creation and adoption of a County-wide sustainability plan?**
- 90% YES
- 10% NO
- Survey responses also indicated that private landowners feel little urgency to remove vegetation or to change the characteristics of their homes to protect their residences from wildfire.
- Regarding new private land development, the survey data indicated strong support for wildfire-related provisions in building codes.
- Community members recognize that renewable energy competes with other sectors for inputs, particularly land. Poor siting can adversely affect local residents and disrupt tourism, which is a large source of income and employment in Chaffee County.
- Securing local social acceptance, by stating clear benefits to the local community, will be critical to introducing renewable energy projects.
- In addition, several key participants in Envision Chaffee and key personnel behind the Recreation in Balance effort were interviewed and provided input to this Comp Plan’s goals, strategies and action steps as found in this theme.
COMP PLAN IMPLICATIONS
The key intersection between sustainable planning, climate change and County growth is fire resiliency planning. The heightened awareness caused by the Decker Fire of 2019 may have had a galvanizing effect regarding community awareness of high fire hazard areas and the real dangers of threats to existing homes in such areas.
This heightened awareness should be harnessed not only to prevent future hazards by limiting new growth in hazardous areas, but also to direct prevention efforts, projects and resources not only toward fire, but toward combating climate change as a whole by promoting smart and efficient growth through the Comprehensive Plan.
Alternative energy should not be considered a standalone sector within Chaffee County’s economy. Potential backward and forward linkages with local industries such as forestry or tourism should be developed through an integrated approach to renewable energy deployment. Collective action should be stimulated through intermediate institutions active in the community, and policy makers should aim to involve a larger number of stakeholders in policy interventions to stimulate sustainable development and improve local support.
WHY THIS THEME IS IMPORTANT
Conventionally known as a recreational mecca and tourist destination, Chaffee County aims to reinvent itself as a more diverse and productive place of business so as to attract workers from elsewhere and balance the inflow and outflow of people commuting throughout the valley in their own vehicles.
Housing and employment characteristics are affected by the land use patterns in a region, and this plan strives to balance opportunities for new jobs with housing to mitigate impacts such as increased traffic.
Attracting new and diverse industries would broaden Chaffee County’s economy while providing an opportunity for the County’s workers with differing education and backgrounds to obtain employment.
Support for existing businesses across the region is also critical, with the growing trends in remote employees working from home. Supporting the technology and infrastructure required for such businesses is a key component of plan implementation.
KEY DATA POINTS
Jobs & Employment
- According to US Census LEHD estimates, there were 7,350 jobs in Chaffee County in 2018. Jobs are projected to increase to an estimated 9,500 jobs by 2030.
- An emerging trend in Chaffee County in recent years is the expanding population of remote workers: in 2018, 11.8% of residents worked from home, higher than the statewide rate of 7.7%.
- Envision Chaffee found: “telecommuting represents a rapid change in the fabric of our County, with 37% of people living in Chaffee County now working in other locations, an increase from 3% in 2000. Continued broadband development will enable continued telecommuting economy growth.”
Labor Force
- According to US Census estimates, in 2017 Chaffee County’s labor force participation was 52.9%, which is lower than the state participation of 68.2%.
Employment in Chaffee County
| Area | % of total jobs |
|--------------------|----------------|
| Salida | 44% |
| Buena Vista | 21% |
| Poncha Springs | 5% |
| Unincorporated county | 28% |
Source: US Census LEHD estimates, 2018
*Map: Chaffee County Jobs & Employment (employers by size: 1-5, 6-25, 26-50, 51-75, and more than 75 workers employed)*
Unincorporated Chaffee County: 2,076 jobs
Buena Vista: 1,659 jobs
Salida: 3,265 jobs
Poncha Springs: 350 jobs
In 2018, the ratio of jobs to labor force in Chaffee was 0.86:1, indicating a shortage of jobs in the County. The ratio has risen from 0.68:1 in 2010 and 0.77:1 in 2015, meaning jobs have grown faster than the labor force over the past 10 years.
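The ratios above also imply an approximate labor-force size. This is a sketch derived only from the 2018 job count (7,350) and the stated ratios; the Census figures themselves are not reproduced in this excerpt:

```python
# Back-of-the-envelope check on the jobs-to-labor-force figures quoted above.
# Only the 2018 job count (7,350) and the ratios (0.68, 0.77, 0.86) come from
# the plan; the implied labor-force size is derived here for illustration.

jobs_2018 = 7_350
ratios = {2010: 0.68, 2015: 0.77, 2018: 0.86}

implied_labor_force_2018 = jobs_2018 / ratios[2018]  # approx. 8,500 people
ratio_growth = ratios[2018] - ratios[2010]           # positive: jobs grew faster
```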
**Job Sectors**
- Chaffee County’s economy benefits from its world-renowned recreational opportunities and the tourists that they attract, and the County’s labor force and employment picture reflects a tourist economy.
- According to Census estimates from 2017, the top three job sectors in the County were accommodation and food services (16.9%), health care and social assistance (13.5%), and retail trade (12.3%). Accommodation/food services and retail trade together account for roughly 29% of jobs, suggesting that almost one third of Chaffee County workers are in tourism-facing sectors and may have to work multiple jobs at relatively low wages. Other prominent job sectors are public administration (9.7%) and construction (9.2%).
- The dominance of food service and retail employers in the County indicates a lack of large-scale employers. According to the 2018 Chaffee County Economy Overview, 42.6% of businesses employ between 1 and 4 workers, 27.9% employ 5-9 workers, 17.5% employ 10-19 workers, and less than 3% employ over 50 workers.
- **Government remains the largest employer with almost 2,000 jobs.** There is some growth in administration and professional jobs, along with wholesale trade that reflects some diversification.
- The growth of craft brewing, wine, and distillery businesses, along with marijuana companies, highlights the diversification and growth of the wholesale trade sector.
- Health care and construction are the fastest growing and among the highest paying jobs in the County. The Heart of the Rockies Regional Medical Center is the largest employer followed by the Buena Vista Correctional Facility.
• Monarch Mountain Ski area, which anchors the economy during the winter season, and Mt. Princeton Hot Springs, are among the largest employers.
• Building on an already established music scene, new hotels and music venues have catalyzed the expansion of this industry along with the creation of seasonal jobs. Such visitor-oriented, job-creating venues and events provide additional seasonal employment; however, their impacts to public services or facilities should not place undue strain on the County’s ability to provide a safe and fiscally responsible environment.
**COMMUNITY INPUT**
• Residents who responded to the first online survey appeared to be concerned with the lack of economic diversity in Chaffee County. When asked what the top issues facing planning in Chaffee County were, lack of job diversity and low wages was the third most common answer.
• Survey responders were split on the types of businesses they thought are most needed in the County. The most common answers were better paying businesses (23%), small/local businesses (19%), high-tech businesses (13%), and more restaurants (10%).
• In response to the open-ended question “What can Chaffee County do to make businesses more successful?”, survey responders offered a variety of ideas. Some thought that small and local businesses should be given financial assistance in the form of tax breaks and incentives, others mentioned workforce housing to support employees.
• Other common answers included providing high-speed internet to businesses and offering more post-high school educational opportunities like vocational training programs.
**COMP PLAN IMPLICATIONS**
The public vision for Chaffee’s future economic identity pushes towards innovative and sustainability-oriented businesses, while looking for opportunities to foster the next-generation worker and workspace. The sizable number of remote workers and sole proprietorships seen in the data provided by Envision Chaffee may indicate latent demand for live/work uses or buildings where entrepreneurs can start up their dream businesses.
Similarly, promoting new office formats such as shared office concepts offers young or cash-burdened future businesspeople lower overhead costs, flexible leasing, equipment and technology, and a quality environment conducive to creativity and innovation.
GROWTH & LAND USE
DATA & DISCUSSION
WHY THIS THEME IS IMPORTANT
Growth and Land Use directly addresses the regulatory framework in place which has guided growth across the Upper Arkansas Valley. Growing smart in a community with limited resources and tax base to provide public services creates the potential for challenges if the pace of growth exceeds its ability to provide services and infrastructure. Understanding the capacity of public systems and associated facilities is an important function of the Comprehensive Plan. A growth plan must provide for a pattern of development that has mechanisms to harness growth - or in some cases leverage it - to ensure adequate levels of civic services are maintained.
With three incorporated municipalities, each with its own Three-Mile Planning Area, housing almost half of the County’s homes, and the other half scattered throughout various unincorporated nodes, there is considerable overlap in long-range land use strategies. Communicating the intended vision becomes critical to building consensus on what happens on the ground throughout the County.
KEY DATA POINTS
Amount of Developable Land
The comprehensive planning process addresses growth and capacity. The amount of land physically available for development (here, generally land that could be subdivided for single-family housing) helps determine how, where and what kind of growth can be managed in the long term. A planning-level inventory was used to calculate the amount of land that could potentially be developed. The parameters for an available property were:
- Privately-owned
- Vacant or partially-vacant: residential property occupied by an allowed land use which is large enough to be further subdivided or developed
- Not previously platted in a subdivision
- Not in an environmentally sensitive or unbuildable area (e.g. conservation easement, floodplain/wetlands, habitat, steep slopes).
Based on these parameters, there are approximately 38,648 acres that are physically able to be developed. About 12,740 acres of that total are partially-used residential properties over 40 acres that could still be subdivided further.
Housing Distribution in Chaffee County
| Area | % of Housing units |
|--------------------|--------------------|
| Salida | 30% |
| Buena Vista | 14% |
| Poncha Springs | 4% |
| Unincorporated county | over 50% |
Source: Chaffee Housing Needs Assessment, 2016
Housing Unit Growth: *Pace of housing growth and projection, 2000 to 2040 (US Census, Cushing Terrell)*
**Pace of Housing Growth**
- Utilizing U.S. Census data from 2000 through today, an exponential smoothing forecast estimates that **Chaffee County’s housing stock will increase by 19% by 2035, reaching 6,874 housing units** based on historic trends. This forecast also predicts an upper confidence bound of 8,184 units, which indicates that plans for where future growth should occur should accommodate this number of new housing units.
- According to the 2016 Housing Needs Assessment, of the approximately **10,400 housing units in Chaffee County**, 30% are in Salida, 14% in Buena Vista, 4% in Poncha Springs, and over **50% in the unincorporated county**. The Assessment also found that 75% of new housing in Chaffee County was built in the unincorporated area over the past 15 years (2000–2015).
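The exponential-smoothing forecast cited above can be illustrated with a minimal Holt linear-trend sketch. The housing-unit series below is purely hypothetical (the plan's actual Census input series is not reproduced in this document), so the numbers are for demonstration only:

```python
# Minimal Holt linear-trend (double exponential smoothing) sketch of the kind
# of forecast described above. The input series is HYPOTHETICAL illustrative
# data, not the Census housing-unit counts used in the plan.

def holt_forecast(series, horizon, alpha=0.5, beta=0.3):
    """Forecast `horizon` steps ahead with Holt's linear-trend method."""
    level = series[0]
    trend = series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        # Update the smoothed level, then the smoothed trend.
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

# Hypothetical county housing-unit counts at regular intervals
units = [8700, 8900, 9150, 9400, 9700, 10050, 10400]
projection = holt_forecast(units, horizon=3)  # three periods ahead
```

With a positive fitted trend the projection grows linearly in the horizon; confidence bounds like the 8,184-unit upper bound cited above would come from the forecast's error variance, which this sketch omits.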
**Water**
- Water is supplied in the Salida, Buena Vista and Poncha Springs Sub-Areas by municipal water systems. **Outside of the municipalities, a number of community water systems exist within county limits.**
- There are three non-transient, non-community water systems that serve at least 25 of the same people at least six months per year (Mount Princeton Hot Springs, Monarch Mountain Lodge and Salida KOA Campground).
- Additionally, there are 45 transient, non-community water systems, defined as water systems that serve at least 25 people more than 60 days per year.
**Sanitation**
- Buena Vista, Johnson Village, Salida and Poncha Springs are served by municipal sanitation districts.
- There are 8 minor wastewater dischargers permitted through the Colorado Department of Public Health and Environment (CDPHE) under Regulation 22 – Site Location and Design Approval Regulations for Domestic Wastewater Treatment Works in Chaffee County.
- **Outside of the municipalities of Chaffee County, most homes use on-site wastewater treatment systems (OWTS), also known as septic systems and leach fields.**
- According to CDPHE, Chaffee County has experienced a 15% annual increase in OWTS permits filed.
**Electricity**
- Chaffee County is served by two electric providers: Sangre de Cristo Electric and Xcel Energy.
**Broadband & Cell Phone Coverage**
- High-speed, redundant broadband is not available in many areas of Chaffee County. Colorado Central Telecom and Spectrum provide internet service to customers throughout the County, although community members have cited the lack of redundancy as a significant issue.
- Cell phone coverage has also been reported as unreliable, as outages and poor coverage are common (particularly in the north end of the County). These weaknesses inhibit diversification of the economy and creation of higher paying jobs.
**Fire and Emergency Services**
- Chaffee County Fire Protection District’s (CCFPD) predominantly volunteer force is responsible for the efficient mitigation of emergencies and delivering service to the community within the 1,000 sq mile Fire District.
- CCFPD has 35 pieces of mobile equipment/apparatus allocated between **6 fire stations (located in Buena Vista, Nathrop, Poncha Springs and Maysville)** that are professionally staffed by **40+ volunteer firefighters and 4 paid firefighters**.
- The Salida Fire Department consists of professional fire personnel on duty 24/7 to serve the Salida municipal area with mutual and automatic aid agreements to assist throughout Chaffee County.
**Law Enforcement**
- Law enforcement is provided by the Chaffee County Sheriff’s Department. The department has 23 sworn officers and received 6,535 calls in 2019. The department struggles with employee retention due to a lack of funding to pay officers competitive salaries.
- There are two municipal police forces in Salida and Buena Vista, with mutual aid agreements with surrounding agencies.
**Road and Bridge**
- Chaffee County Road and Bridge is responsible for maintaining approximately 300 miles of roads in Chaffee County. There are 44 bridge structures of various types in Chaffee County.
- The primary responsibility of the Road and Bridge Department is the routine maintenance of county roads. This includes plowing snow, grading dirt roads, maintaining borrow ditches, cattleguards and drainage along county roads, weed and brush control, and patching asphalt.
- **County road maintenance is currently unable to keep up with the impacts of growth**, and public expectations of road conditions continue to rise.
**Waste Services**
- The Chaffee County Landfill is located off Highway 285, approximately 11 miles from Salida and 14 miles from Buena Vista. Many County residents contract with private haulers to pick up and dispose of household trash.
- The landfill property and landfill enterprise are critical elements of County infrastructure, and while the projected lifespan of the landfill is many decades, replacing it will be a massive challenge. Efforts to best manage this resource should be continued.
*Map: Chaffee County Existing Land Use (legend: residential vacant (platted, urban); residential medium density (multi-family); residential low density (<2-acre lots); residential suburban (2-5 acre lots); residential rural (>5-acre lots); mobile home; commercial; commercial/industrial vacant; rural commercial; transportation; industrial; agriculture/open space; open space (dedicated); recreation; mining claim; water rights; public land; city limits)*
Existing Land Use
| Land Use | % of total acres |
|---------------------------|------------------|
| Public land | 83% |
| Agriculture | 7% |
| Residential | 9% |
| Recreation/mining claims | 2% |
Source: Cushing Terrell, 2019
The majority (83%) of Chaffee County is public land, most of which is open space. An estimated 7% of the land is agriculture/open space, over 45,000 acres. The remaining 10% of land is mostly (7%) rural residential (on lots larger than 5 acres). Approximately 3,100 acres in unincorporated Chaffee County are Residential Vacant - which are platted but unbuilt lots.
Chaffee County’s intergovernmental agencies operate four recycling centers in the County, and according to the UAACOG Waste Optimization Regional Study, the diversion rate for Chaffee County increased from 4.8% in 2005 to 16% in 2016. However, the amount of waste generated per household was 8.45 pounds per day, compared to the national average of 4.41 pounds per day. Desire for a more robust recycling program was expressed by the community and County leadership.
Community members that participated in the first online survey expressed concern over the nature of growth and development in Chaffee. When asked what the greatest risk to Chaffee County’s quality of life is, 25% of responders mentioned uncontrolled, sprawling growth in rural areas.
One survey responder answered “The greatest risk to Chaffee County’s current quality of life is over-development of rural areas: rural sprawl. Development must be regulated.”
Which of the following natural resource protections would you support in Chaffee County? Survey #2 Results
- All of the above: 38%
- Scenic Resource Overlay: 17%
- Natural Resource Overlay: 16%
- Wildlife permeability requirements: 15%
- Clustering/buffering requirements: 13%
- None of the above: 2%
- Other responders pointed to the need “to accommodate the large and growing population,” and another stated: “Sprawl development that stretch our infrastructure, place homes in greater fire danger and consume more water.”
- Across all engagement channels, the 4th most common comment was concern over the capacity of existing infrastructure (water, sewer, roads) to sustain growth. The 5th most mentioned topic was the idea of density over sprawl - residents would rather see dense growth in existing residential and commercial centers to preserve open space in the rural parts of the County.
- In Survey #2, when asked which tools used to direct growth to municipalities they would support, 38% of responders wanted to use every tool listed (all of the above), 17% wanted a Scenic Resources Overlay, 16% wanted a Tiered Natural Resources Overlay, 15% wanted wildlife/human conflict design requirements and 13% wanted clustering and buffering incentives and requirements.
- Responders to Survey #2 were closely split on their feelings toward the 2-acre minimum lot sizes in the Rural Zone; 34% said they should stay the same, 30% said they should be increased to 5 acre minimums, 28% said they should be increased to 10 acre minimums, and 8% said they should be decreased.
**COST OF GROWTH**
Authors and researchers have generally posited that sprawling residential development may create a financial burden on local governments, as the tax revenue generated from a new subdivision rarely covers the cost of its ongoing servicing and maintenance. While it is difficult to know the true cost of providing services to future growth, the cost of providing public infrastructure and services for “new sprawling development is higher than to service that same population in a smart growth or infill development” (W. Coyne, *The Fiscal Cost of Sprawl*, 2003). In Colorado, where local government finance is constrained by TABOR limits and restrictions established in the 1990s, this issue is even more pressing: Chaffee County’s values and revenues are likely to see dramatic variations, as typically consistent revenue streams from seasonal tourism will be affected by variations in visitation in 2020.
Continuation of low-density residential growth has the potential to place additional strain on the services and activities (for example emergency services, schools, recreational facilities, or capital infrastructure projects) required to maintain Chaffee County’s quality of life, and additional discussion is warranted to understand the effects of growth on the County’s budget and revenues.
**Property Tax and Growth**
- Chaffee County’s taxable value and revenues have increased from 2016 to 2019 across all land categories.
**Taxable Value in 2019**
- Lake County: $241,015,313
- Park County: $480,858,626
- Chaffee County: $495,320,120
- Fremont County: $525,500,048
- Gunnison County: $722,124,010
Increase in Taxable Value (County-wide, 2016-2019)
- Lake County: 14%
- Park County: 17%
- Chaffee County: 27%
Taxable Value Distribution, Chaffee County (County-wide, 2019)
- Residential: 53.6%
- Commercial: 22.7%
- Industrial: 2.4%
- Agricultural: 2.4%
- Natural Resources: 1.0%
- Other: 17.9%
Chaffee County’s recent increase in county-wide taxable value (27%) from 2016 to 2019 was much larger than nearby Lake County’s (14%) and Park County’s (17%).
Increases in assessed values of residential property contributed to much of the overall increase, and residential property made up the majority of all taxable value in 2019 (53.6%).
Budget
Other significant revenue sources include sales taxes and building permit fees. Both had been steadily increasing until 2020, and the outlook is now unclear.
Ballot Issue 1A, passed in 2018, increased sales taxes to support conservation efforts. 1A has allowed transfers of some funding to other departments where support is needed, such as Road and Bridge. Despite these transfers, Road and Bridge is still unable to financially accomplish its major projects.
COMP PLAN IMPLICATIONS
The county budget for the last five years has largely been balanced with the help of increases in property taxes, sales taxes and building permit fees, but any change in these sources would threaten the ability to cover expenses, which in 2020 were estimated at $36,823,855. Even prior to the events of 2020 creating uncertainty in these sources, the County grappled to provide adequate FTE employment at standard pay levels and was unable to complete essential projects such as chip seal/asphalt services on the county’s roads. Anecdotal evidence suggests Chaffee County’s real estate market is as busy as it has ever been, and it is likely the inflow of people moving into Chaffee County from coastal areas or large cities to pursue their lifestyle choices will continue.
If new residential growth continues at present rates and if property values continue to increase, Chaffee County will struggle to generate revenue to provide adequate levels of service or infrastructure without new and creative sources. TABOR limitations will further challenge the County’s service providers. Shifting policy initiatives such as a proposed repeal of the Gallagher Amendment further complicate long-term financial predictions. As such, additional advisory information is needed to ensure fiscal responsibility when approving new development.
To accommodate new growth, the County should continue to establish policies that equip decision makers with the best possible information. This includes relying on smart growth principles in this plan that support infill development and pushing or incentivizing conservation subdivision design. Areas where this Comp Plan proposes recommendations that will affect Growth & Land Use include:
- With TABOR hampering the ability to raise property taxes, the County should explore additional funding sources for providing infrastructure and services.
- Amend or modify Future Land Use Maps regularly to ensure the growth rate of new residential subdivisions will not outpace the County’s ability to fund services.
- Look at creative uses of existing revenue sources such as Payments in Lieu of Taxes (PILT).
- Create Future Land Use Maps that envision infill development near municipalities that will save the County on services/infrastructure.
- Implement action steps for land use code revisions.
- Plan for additional funding for the Road and Bridge department for maintenance of county roads.
- Evaluate all land use applications in terms of cost/benefit to the County.
**How do you feel about the 2-acre minimum lot sizes in the Rural Zone District? (Survey #2 Results)**
| Percentage | Response |
|------------|---------------------------|
| 34% | They should stay the same |
| 30% | They should be increased to 5 acres |
| 28% | They should be increased to 10 acres |
| 8% | They should be decreased |
Buena Vista Sub Area
EXISTING CONDITIONS
Population
- The Buena Vista Sub Area has an estimated population of 6,333 - with 3,613 people living in unincorporated Chaffee County and 2,720 in the Town of Buena Vista.
- According to the State Demographer’s numbers, the population of the Town of Buena Vista is expected to surpass 3,000 people in 2020.
Jobs
- The Buena Vista Sub Area had an estimated 1,999 jobs in 2017, with 1,659 of those in the Town of Buena Vista and about 24 in Johnson Village. The remaining unincorporated area around the Town had 316 jobs in 2017.
- According to geographic data from the US Census, many of these jobs in the County are found in rural subdivisions, indicating many people work out of their homes.
Land Use and Development Patterns
- Growth has been affected by public lands and conservation areas established in decades past. As a result, the overall amount of available land for future growth is limited when one considers sensitive areas (e.g. wildlife habitat, fire hazard areas).
- Existing subdivisions already occupy much of the Sub Area’s land, and will remain in place as low-density residential land uses. These older subdivisions exhibit a very suburban development pattern that consumes land less efficiently than the public vision calls for; as of 2020, that vision is to promote development near community centers and corridors.
- These subdivisions have many unbuilt parcels, indicating they will continue to add housing units without the approval of any new subdivisions. Among the area’s subdivisions, approximately 310 lots are vacant and available in subdivisions like Game Trail. Although many of these lots are “holding” parcels purchased by an adjoining landowner to maintain as vacant lots, such parcels could potentially supply additional housing without consuming more open land.
- The Buena Vista Three-Mile Plan designates 10 Areas of Desired Growth (ADG) where future growth is desired, but that are outside of the Municipal Services Area (MSA) where the Town can currently provide water.
- The Three-Mile Plan provides future growth policies for new development, annexation and provision of infrastructure in these areas. The Future Land Use Map is congruent with these recommendations in its vision for future growth, and anticipates development in the Residential Mixed future land use district to be annexed into the Town, and developed to the Town of Buena Vista standards for access and infrastructure.
- Based on approved land use and building permit records for the last 10 years, much of the residential development in and around the community has been in existing platted subdivisions, in new subdivisions or within the Town limits. Approximately 210 permits were issued in the unincorporated county of the Buena Vista Sub Area from 2009 to 2019, of which 94% were residential (see Buena Vista Sub Area: Patterns of Development map on the following pages). Estimates for permits inside the Town are 169 total residential permits between 2018 and 2019 (source: Town of Buena Vista).
Physical Character
- In the Buena Vista Sub Area, the average parcel size is just over five acres. **In the Town of Buena Vista, the average parcel size is 1.5 acres, while in the unincorporated area surrounding it, the average parcel size is 9.8 acres.** Excluding some larger commercial parcels, the average in-town residential lot size in downtown Buena Vista is 2,500 square feet (0.057 acres), which is significantly smaller than the County’s average lot size (16.8 acres).
Recreation
- The Buena Vista Sub Area has an estimated 7.8 miles of trails managed by the Town of Buena Vista and four fishing access points. There are four campgrounds located in the Sub Area.
Existing Infrastructure & Capacity for Growth
Water
- The Buena Vista water system currently contains one source, which provides up to 1.5 million gallons per day (MGD) of potable water. **It is anticipated that a portion of the existing service lines will need to be modified to meet the demands of new, higher-density users.**
- Additionally, watering landscapes and lawns in the summer creates a huge demand on the water supply. Reducing this type of high-maintenance outdoor space through redevelopment and zoning modifications may aid the health of the overall water supply.
- The only area of concern for natural growth is to the north of the existing water district. Due to elevations increasing in this direction, water pressure within the system will fall below operational standards without the addition of a booster pump.
Sanitation
- The Buena Vista Sanitation District (BVSD) provides sanitary sewer services to those properties within the District limits and to the unincorporated community of Johnson Village through the Intergovernmental Agreement with Chaffee County.
- The District limits include all of the properties within the Town of Buena Vista, as well as the Buena Vista Correctional Complex and a number of individual parcels, which were accepted into the BVSD, but did not annex into the Town.
- The BVSD also accepts septage collected from area septic systems, at a rate not to exceed 5,000 gallons per day (gpd). The infrastructure of this system consists of approximately 25 miles of public sewer pipe, ranging from 8 inches to 21 inches in diameter.
- According to 2019 collection data, the plant is currently operating near **35% of capacity in the winter and 75% in the summer.** The facility is rated to treat up to 1.5 million gallons per day (MGD). Its discharge permit will expire in October 2020, and the renewal may be subject to new terms and conditions, which are expected to be issued before the end of 2024.
Mid-Valley/Nathrop Sub Area
EXISTING CONDITIONS
Population
- The Mid-Valley Sub Area had an estimated population of 3,309 in 2017. Population is geographically centered toward the main County Road corridors and in Nathrop.
Jobs
- The Mid-Valley Sub Area had an estimated 221 jobs in 2017. The largest employer in this area is Mount Princeton Hot Springs, with employment fluctuating seasonally.
Land Use and Development Patterns
- The Mid-Valley/Nathrop Sub Area represents the open space and rural character that the public desires to maintain in land use and activity.
- In the past 10 years, growth has occurred around very large ranches, many of which have been owned by families for multiple generations. As a result, the overall amount of available land for future growth is limited considering privately-held ranches and sensitive areas (e.g. wildlife habitat, fire hazard areas). However, this land is not necessarily protected in perpetuity by conservation easements, agricultural covenants, or similar controls.
- Some older rural subdivisions have many unbuilt parcels, indicating they will continue to add housing units without the approval of any new subdivisions. Approximately 261 lots in subdivisions such as Mesa Antero are vacant and could potentially be built on. Many of these lots may be held by neighboring parcel owners to maintain adjacent open space.
Physical Character
- The average parcel size in the Mid-Valley Sub Area is 11.4 acres, the largest average among the four Sub Areas.
Recreation
- The Mid-Valley Sub Area has an estimated 3.6 miles of trails and one campground. There are three fishing access areas on the Arkansas River and Chalk Creek.
EXISTING INFRASTRUCTURE & CAPACITY FOR GROWTH
Water
- The Mid-Valley Sub Area is served by community water systems, as there is no municipal water available.
- Nathrop has a transient, non-community water system which serves Chateau Chaparral and parts of the Nathrop Townsite. Chateau Chaparral has a minor wastewater treatment facility, which is rated for 12,100 gallons per day (gpd). The Chateau Chaparral WWTF is a small sequencing batch reactor which was upgraded a few years ago and is meeting permit discharge limits. The WWTF was designed to provide service to the Chateau Chaparral development and is operating near the 12,000 gpd capacity.
- Mount Princeton Hot Springs is served by a non-transient, non-community water system. The Mount Princeton Hot Springs minor wastewater treatment facility is rated for 0.0936 MGD, and is currently in compliance.
Sanitation
- The Mid-Valley Sub Area is not part of a sanitation district and is primarily served by on-site wastewater treatment systems (OWTSs), also known as septic systems and leach fields.
Future Infrastructure Planning
Specifically in Nathrop, growth is paced by the availability of public utilities. The current pace of development generally meets the community’s needs in 2020, as Nathrop is considered rural in character with limited rationale for residential growth due to its distance from amenities and services. However, lower land values and land availability have spurred recent construction of new low-income housing, and improved infrastructure could further supply affordable housing in Nathrop. Yet building additional housing would require major infrastructure improvements at very high cost. Connecting to the BVSD wastewater treatment plant, for example, would require a four-mile sanitary line extension with several lift stations.
Another high-cost option is to establish a special district to eventually provide an internal network of infrastructure and improve existing wastewater treatment systems to move away from on-site septic and individual wells, which may pose public health risks in the future.
Updated long-range planning for the County Landfill site near Centerville is warranted, as the site’s current use as a landfill may be evaluated for enhanced facility upgrades or another use entirely.
Salida Sub Area
EXISTING CONDITIONS
Population
- The Salida Sub Area has an estimated population of 7,284 – with 1,719 people living in unincorporated Chaffee County and 5,565 living in the City of Salida.
Jobs
- The Salida Sub Area has an estimated 4,020 jobs. The City of Salida is home to 3,511 of those jobs and Smeltertown to 27, while the unincorporated area surrounding the City has 482 jobs.
Land Use and Development Patterns
- Development surrounding the City of Salida is affected by the County’s two main waterways—the Arkansas and South Arkansas Rivers. Floodplain, wildlife corridors and steep slopes to the south and east of the City have pushed much of the residential subdivision in unincorporated areas to the bench above Salida near the airport and along the Highway 50 and 285 corridors.
- Some subdivisions have unbuilt parcels, indicating they will continue to add housing units without the approval of any new subdivisions. Among the area’s unincorporated subdivisions, approximately 102 lots are vacant and available.
- Approximately 98 permits were issued in the unincorporated county of the Salida Sub Area from 2009 to 2019 (see Salida Sub Area: Patterns of Development map on the following pages).
- Open space immediately to the west of Salida has remained undeveloped, either through conservation easements or because wetlands or other physical constraints made subdivisions more costly to service than lands with municipal services.
- Community trails traverse this area, and public input identified it as highly valued for open space preservation.
Physical Character
- The estimated average parcel size in the Salida Sub Area is 2.5 acres, while in the City of Salida it’s 0.2 acres for the average developed lot. In the unincorporated area surrounding the City, the average parcel size is 8.4 acres.
Recreation
- The Salida Sub Area has an estimated 14.3 miles of trails and two campgrounds. There are two fishing access areas on the Arkansas River.
EXISTING INFRASTRUCTURE & CAPACITY FOR GROWTH
Water
- The City of Salida provides potable water service to all of its residents, as well as several in the surrounding unincorporated areas. In total, there are approximately 2,500 taps in operation.
- There are three sources in the Salida system: two infiltration galleries and a surface intake from the South Arkansas River. The current combined production capacity is 5.3 million gallons per day (MGD).
- Current usage is at approximately 50% of production capacity in the summer (highest usage), with an average demand of roughly 1.5 MGD throughout the course of the year.
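The capacity arithmetic above can be sketched with a small calculation. This is a minimal illustration using only the figures cited in the text (5.3 MGD combined production capacity, roughly 1.5 MGD average demand, summer usage at about 50% of capacity); it is not a model of the actual Salida water system.

```python
# Illustrative sketch of the Salida water capacity figures cited above.
CAPACITY_MGD = 5.3             # combined production capacity (MGD)
AVG_DEMAND_MGD = 1.5           # average year-round demand (MGD)
SUMMER_USAGE_FRACTION = 0.50   # peak summer usage as a share of capacity

# Average utilization implied by the stated average demand
avg_utilization = AVG_DEMAND_MGD / CAPACITY_MGD

# Summer demand and remaining headroom implied by the 50% figure
summer_demand_mgd = CAPACITY_MGD * SUMMER_USAGE_FRACTION
summer_headroom_mgd = CAPACITY_MGD - summer_demand_mgd

print(f"Average utilization: {avg_utilization:.0%}")
print(f"Estimated summer demand: {summer_demand_mgd:.2f} MGD")
print(f"Summer headroom: {summer_headroom_mgd:.2f} MGD")
```

Under these assumptions, average demand works out to roughly 28% of production capacity, leaving about 2.65 MGD of headroom even at peak summer usage.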
Sanitation
- The Salida Sewer System provides service within the City of Salida and the Town of Poncha Springs municipal boundaries. The intergovernmental agreement between the Town and the City states that the transmission line along Highway 50 must be maintained and upgraded so that it does not become a limiting factor to the growth or development of the Town of Poncha Springs.
- The wastewater treatment plant (WWTP) also accepts the waste that is pumped from private septic systems, known as septage, from a number of collection companies that operate in Chaffee and surrounding counties.
- In total, the plant treats an average 0.61 million gallons per day (MGD). Approximately 10% of this is collected from the Town of Poncha Springs, another 30% is from commercial properties in Salida, and the remainder is residential (via public sewer or septage).
- The infrastructure of this system consists of approximately 45 miles of public sewer pipe, ranging from 8 inches to 30 inches in diameter. This facility is rated to treat up to 2.7 million gallons per day (MGD). Given the 2014–2018 collection data, the plant typically operates around 30% capacity and has reached approximately 50% capacity during the summer, when outdoor watering and tourist populations are at their highest.
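To put the percentage figures above in volume terms, a quick sketch using only the numbers stated in the text (2.7 MGD rated capacity, roughly 30% typical and 50% summer utilization); this is illustrative only, not an operational model of the plant.

```python
# Illustrative conversion of percent-of-capacity figures to flows (MGD),
# using the rated capacity and utilization figures cited above.
RATED_CAPACITY_MGD = 2.7  # rated treatment capacity (MGD)

def flow_at_utilization(capacity_mgd: float, utilization: float) -> float:
    """Return the treated flow (MGD) implied by a utilization fraction."""
    return capacity_mgd * utilization

typical_flow = flow_at_utilization(RATED_CAPACITY_MGD, 0.30)
summer_flow = flow_at_utilization(RATED_CAPACITY_MGD, 0.50)
summer_headroom = RATED_CAPACITY_MGD - summer_flow

print(f"Typical flow: {typical_flow:.2f} MGD")
print(f"Summer flow: {summer_flow:.2f} MGD (headroom {summer_headroom:.2f} MGD)")
```

At the stated utilization levels, typical treated flow is about 0.81 MGD and summer flow about 1.35 MGD, leaving roughly half the rated capacity available even in peak season.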
Poncha Springs Sub Area
EXISTING CONDITIONS
Population
- The Poncha Springs Sub Area has an estimated population of 2,282 - with 1,487 people living in unincorporated Chaffee and 795 living in the Town of Poncha Springs.
Jobs
- The Poncha Springs Sub Area has an estimated 463 jobs, 364 of those in the Town of Poncha Springs and 99 in the unincorporated county.
Land Use and Development Patterns
- Historically, development patterns in this Sub Area have been affected by conservation areas and public lands. Public lands include the San Isabel National Forest, federal BLM lands, Colorado State Land Board lands, Chaffee County lands (fairgrounds) and Town of Poncha Springs-owned lands, mostly south of the Town.
- The South Arkansas River is a scenic riparian corridor running through the sub area, which is rich in wildlife diversity and provides habitat for native species. As such, the river corridor has several conservation easements that will preserve the land for agricultural uses or open space.
- Other large land holdings such as portions of the Hutchinson Ranch near Poncha Springs have conservation easements which push growth elsewhere. Overall, there are about 984 acres of land under conservation easements in this Sub Area.
- The Town of Poncha Springs has annexed lands in recent years to accommodate municipal growth and extension of services. As such, much of the recent development activity and building permit activity has occurred within the Town’s boundaries.
- About 310 building permits have been issued in the last 10 years in the sub area for residential or commercial construction. About 61% of those (190 permits) were within or immediately adjacent to the Poncha Springs municipal boundary, many of those within new subdivisions such as Little River Ranch and Quarry Station.
- Rural subdivisions have seen additional development activity, particularly in the last five years in subdivisions such as Cameron Meadows Estates, Eagle Moon Ranch and Weldon Creek Subdivision. Average parcel size for these developments was approximately 6.5 acres.
Physical Character
- The average parcel size in the Poncha Springs Sub Area is 8.1 acres; in the Town of Poncha Springs it’s 3.2 acres and in the unincorporated area surrounding the Town, the average is 14.3 acres.
Recreation
- The S. Arkansas River corridor is a long-term target for conservation, and many easements already exist or are planned by the Town. Likewise, river trails and fishing access points exist or are planned along the river. The FLUM identifies areas where proposed trails will further connect the Town to public lands and National Forest access. A regional trail system that will connect Maysville to Poncha Springs and Salida is in the planning stages and has support from various community organizations. Poncha Pass is a popular hill climb for road cyclists. Additionally, there are two campgrounds in Maysville.
EXISTING INFRASTRUCTURE & CAPACITY FOR GROWTH
Water
- The Town of Poncha Springs provides potable water to its nearly 1,000 customers. Any property interested in connecting to public water must be annexed into the Town if it is not already located within those boundaries.
- The Town currently has 6 wells. The total current production capacity is 0.63 million gallons per day (MGD).
- Once treated, the potable water is stored in one of three tanks across Town. The total storage capacity is 0.66 million gallons (MG), which is adequate to meet required fire flows. The overall distribution system is currently contained within a single pressure zone. It is anticipated that any further expansion to the system would require booster pumps and/or pressure-reducing valves (PRVs), which would create a multi-zoned system.
Sanitation
- The Salida Sewer System serves all properties contained within the City of Salida and the Town of Poncha Springs municipal boundaries. The intergovernmental agreement between the Town and the City states that the transmission line along Highway 50 must be maintained and upgraded so that it does not become a limiting factor to the growth or development of the Town of Poncha Springs.
- The wastewater treatment plant (WWTP) also accepts the waste that is pumped from private septic systems, known as septage, from a number of collection companies that operate in Chaffee and surrounding counties.
- In total, the plant treats an average 0.61 million gallons per day (MGD). Approximately 10% of this is collected from the Town of Poncha Springs, another 30% is from commercial properties in Salida, and the remainder is residential (via public sewer or septage).
- This facility is rated to treat up to 2.7 million gallons per day (MGD). Given the 2014–2018 collection data, the plant typically operates around 25% capacity and will stretch to 50% capacity during the summer, when outdoor watering and tourist populations are at their highest.
REGIONAL CONTEXT
GEOGRAPHY
Chaffee County is one of the most strikingly beautiful areas in the United States. Surrounded by high mountain peaks, it is graced with alpine rivers and streams, broad expanses of ranch land and meadows, and landscapes that vary from rolling pinon and juniper forests to rugged wilderness. Located in the Upper Arkansas Valley on the eastern slope of the continental divide, land in Chaffee County ranges from about 6,900 feet to over 14,000 feet in elevation.
There are more 14,000-foot peaks here than in any other county in the United States. The Collegiate Peaks are the most striking and prominent physical feature in the County, providing a breathtaking backdrop for the County and some of the most important views from venues such as Trout Creek Pass. Running north to south, Mounts Oxford, Harvard, Columbia, Yale and Princeton grace the western viewshed of the Collegiate Peaks Byway. These dramatic peaks are the core of the Sawatch, the highest continuous mountain range in North America.
The Arkansas River is the other primary physical feature of the County, running roughly through its midsection from Granite to Salida. US Highways 285, 24 and 50, the three major transportation corridors in the County, parallel the Arkansas River as it runs north to south and then west to east in the County. The views from these three highways towards the surrounding mountains and across the Arkansas River do much to establish the rural, scenic character of the County.
Over a billion years ago a long sequence of immense uplifts interspersed by sea sedimentation, volcanic action, and water and wind erosion began in the region. In more recent geologic time, huge glaciers chiseled the Upper Arkansas Valley’s monumental Collegiate Peaks. The dramatic white of Mt. Princeton’s Chalk Cliffs comes from kaolinite in the granitic rock, altered by hot-spring waters leaching up along fault lines.
Most of the County has slopes of over 10% and over one third has slopes in excess of 25%. The Valley soils are typically thin, rocky, and somewhat alkaline. However, soils in a broad band running along the west side of the Arkansas River are generally suitable for development and agricultural uses.
ENVIRONMENT & ECOLOGY
Ecosystems change with elevation, from prairie grasses and cottonwoods along valley watercourses to alpine tundra. Pinon-juniper hills with yucca and cacti rise up to mountain slopes blanketed successively by ponderosa pine, Douglas fir, aspen, Colorado blue spruce, white fir, lodgepole pine, Engelmann spruce, subalpine fir, bristlecone pine and limber pine. Wildflowers grace the spring and early summer landscape. In the fall, aspen color blazes from yellow-gold to flame orange.
[Map: CHAFFEE COUNTY: Points of Interest. Legend: ▲ Mountain Destinations; ■ Municipalities or Unincorporated Places]
Adopted December 2020
Many people come to Chaffee County for the alpine scenery, extensive public lands, exceptional whitewater and the Collegiate Peaks. As such, the County celebrates the natural and ecological resources found in the Arkansas Valley by promoting their conservation for existing and future generations to enjoy.
Ecological assets of high value include:
- Arkansas River and its riparian area
- Wildlife habitat, including several species of North American megafauna like elk, deer, black bear and moose
- Lakes and reservoirs (Trout Creek)
- Browns Canyon National Monument
- Collegiate Peaks Wilderness Area
- Buffalo Peaks Wilderness Area
- Cottonwood Hot Springs
- Mt. Princeton Hot Springs
**WILDLIFE**
The Arkansas River and tributaries harbor prize native wild brown trout. Alpine lakes, streams and reservoirs are stocked with rainbow and cutthroat trout reared in Chalk Creek and Mt. Shavano state fish hatcheries. Elk, moose, mule deer, bighorn sheep, mountain goats, mountain lions, bobcats, coyotes, bears and beaver thrive in the area. Large elk herds gather in the fall and winter on the grasslands lining the County roads near Buena Vista, Mt. Antero and Mt. Princeton. Bald eagles, peregrine falcons, wild turkeys, mountain bluebirds and hummingbirds are also found in Chaffee County. Yellow-bellied marmots, pikas and white-tailed ptarmigan live in the alpine tundra.
AMENITIES AND ACTIVITY NODES
Residents and visitors are drawn to Chaffee because of its wealth of amenities associated with outdoor activities. Hiking one of the fifteen 14,000-foot peaks, road and mountain cycling on off-street paths and trails, whitewater rafting, fly fishing, rock climbing, and Nordic and alpine skiing are all popular activities in Chaffee County.
Retail, shopping and entertainment destinations in Salida and Buena Vista also bring people to the County. The two hot springs (Cottonwood and Mt. Princeton) attract people from all over the state. All of these destinations bring in families and individuals from across the nation and beyond who stay at local hotels, shop in local stores and eat at local cafes, all of which makes tourism an important share of Chaffee’s current and future economy.
Elk herd
Photo by Scott Peterson
[Map: CHAFFEE COUNTY: Land Ownership. Legend: Private Lands; Chaffee County; State of Colorado; Browns Canyon National Monument; National Forest Wilderness; Bureau of Land Management; United States Forest Service; City Limits]
[Map: CHAFFEE COUNTY: Natural Resources. Legend: Permitted Mine; Mineral Estate / Subsurface Ownership; Mining Claim; City Limits]
RIVER AND FLOODPLAIN
The Arkansas River and its tributaries can pose hazards to structures and property in the County when these channels flow at their peak due to warm temperatures and snowmelt. The River has flows that range from 275 cubic feet per second (CFS) in the winter to over 4,000 CFS during spring runoff. Peak flows typically occur in June or early July, presenting challenging conditions not only for those who choose to recreate on the rivers, but also to property adjacent to the rivers in or near the flood hazard areas as provided by flood insurance studies from the Federal Emergency Management Agency (FEMA).
This comprehensive plan intends to continually guide development away from flood hazard areas, or where necessary, to mitigate impacts by following federal, state and local regulations regarding development in flood hazard areas.
FIRE AND WILDLAND URBAN INTERFACE (WUI)
The Wildland Urban Interface is defined as the fringe area where urban or residential areas meet with undeveloped land or vegetation. Historically in the Western US, considerable housing development has occurred in the WUI, as it is often the most desirable and valuable land with scenic vistas, dense forests and privacy. The idea of the WUI was formally introduced in a 1987 Forest Service Research document but was not properly recognized as a major component for federal land use and fire management until the 2000 National Fire Plan.
Over 80% of Chaffee’s landscape is federal land, with the vast majority forested, posing high risk of wildfire. Nearly half of the County’s population resides outside of municipalities, many in the WUI.
A 2019 survey found that 80% of residents believe that a major wildfire is very or extremely likely within the next five years in Chaffee County, and that 58% of residents are concerned about a fire near their residence. Land management agencies are challenged to keep pace with the need to effectively manage our forests. Recognizing the urgency of the issue, Chaffee County partners with them, as well as local fire districts, homeowners and the community to improve wildfire resilience, water quality, and wildlife habitat. Chaffee Common Ground funds projects through these partnerships to improve forest health and reduce wildfire danger to make the community safer and help protect our forests for future generations.
[Map: CHAFFEE COUNTY: Elevation (Feet). Legend bands: 6,900–8,100; 8,101–9,400; 9,401–10,600; 10,601–11,900; 11,901–13,100; 13,101–14,400]
GROUNDWATER
The majority of the groundwater within Chaffee County is managed by the Colorado Division of Water Resources and the Upper Arkansas Water Conservancy District (UAWCD), as it has been deemed tributary to the Upper Arkansas River. The UAWCD was founded in 1979 to protect and secure water in the Upper Arkansas Valley. The UAWCD now spearheads basin-wide projects that secure and augment groundwater and increase the water supply available to the valley. Part of the District’s endeavor is to oversee and augment water rights throughout Chaffee County, as all of the wells within Chaffee County have been deemed tributary to the Arkansas River. The United States Geological Survey completed a study in 2003 to evaluate characteristics of groundwater in Chaffee County including aquifer characteristics, aquifer depletion, aquifer storage, and nitrate/nitrite contamination.
The heatmap (Figure 1) on the next page shows areas with high well density, and based on the hydrogeology, the density may be more than the aquifer can support with groundwater withdrawals.
Figure 2 from the USGS report shows areas of increased nitrate + nitrite contamination. The EPA regulates nitrate and nitrite with a maximum contaminant level of 10 milligrams per liter (mg/L), as these contaminants are classified as an acute health risk and can cause methemoglobinemia ("Blue Baby" syndrome). Areas with higher nitrate + nitrite concentrations may be correlated with areas of high densities of failing septic systems or septic systems installed in unsuitable soils.
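The EPA threshold described above lends itself to a simple screening check. The sketch below uses the 10 mg/L maximum contaminant level stated in the text; the well names and sample concentrations are invented purely for illustration.

```python
# Hypothetical screening of well samples against the EPA maximum
# contaminant level (MCL) for nitrate + nitrite of 10 mg/L as nitrogen.
EPA_MCL_MG_L = 10.0  # acute-health-risk threshold cited in the text

def exceeds_mcl(concentration_mg_l: float) -> bool:
    """True if a sample's nitrate + nitrite concentration exceeds the MCL."""
    return concentration_mg_l > EPA_MCL_MG_L

# Invented sample concentrations (mg/L) spanning the ranges mapped in Figure 2
samples = {"well_a": 0.8, "well_b": 3.9, "well_c": 6.3, "well_d": 12.4}

# Flag only the wells exceeding the regulatory threshold
flagged = {name: c for name, c in samples.items() if exceeds_mcl(c)}
print(flagged)  # only well_d exceeds the 10 mg/L MCL
```

A check like this could sit behind the regional OWTS database contemplated later in this section, flagging areas where septic density and nitrate readings warrant follow-up.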
Comp Plan Implications
The availability and quality of groundwater are major concerns. The Arkansas River Basin is overappropriated, and Colorado has a compact with Kansas. There is no such thing as a "free" river, as there is always a call in place. Many uses of water, such as wells or ponds, result in out-of-priority depletions to the system: water is lost or consumed out of priority, which can injure surface water rights, either here or downstream. The state engineer has determined that there is a nexus between surface water and groundwater, so well pumping reduces surface water availability. Augmentation replaces the depleted water so that surface water rights are not injured. The availability of an augmentation plan (both the water itself and the legal and engineering work) is a critical component of growth in areas outside of the municipal service areas. Even within the municipalities, municipal wells need augmentation.
These state reports suggest that the time may have come to consider, at the planning level, measures to address existing and potentially outdated OWTSs in the rural county. One option is implementing a Transfer of Title ordinance per Regulation 43 (see Lake County’s 2018 On-Site Wastewater Treatment Regulations for policy precedent). The early stages of such a policy should be for information-gathering purposes only, so as to build a regional database of systems.
At the appropriate time, the County may decide to incorporate remediation measures that would require a landowner to upgrade the system to specifications required of new on-site treatment systems.
Source:
Watts, K. R. (2005). Scientific Investigations Report 2005-5179: Hydrogeology and Quality of Ground Water in the Upper Arkansas River Basin from Buena Vista to Salida, Colorado, 2000-2003. Retrieved January 6, 2020, from https://pubs.usgs.gov/sir/2005/5179/pdf/SIR2005-5179.pdf.
Figure 1: Density of domestic and household wells in and near the study area, 2003. (Map legend: well density classes from less than 0.025 to 0.4–0.5 wells per acre; USGS 1:100,000 base data, Albers Equal Area Conic projection.)
Figure 2: Concentrations of dissolved nitrite plus nitrate, as nitrogen, in water from basin-fill aquifer wells in the upper Arkansas River Basin, September–October 2001. (Map legend: concentration classes from less than 1 to 4–6.3 mg/L; USGS 1:100,000 base data, Albers Equal Area Conic projection.)
6. Appendices
A.1 | Community Engagement Process Summary...........176
A.2 | Community Project Recommendations..................189
A.3 | Model Conservation Subdivision Ordinance.......193
A.4 | Decision Making Guidance.............................................199
A.5 | Background & Regional Context.................................202
A.1 | Community Engagement Process Summary
1. **Basecamp**
Chaffee County Comprehensive Plan kickoff! Including Open House #1 and historic data deep dive.
2. **Gather**
Data gathered by team. Open House #2 and neighborhood workshops are held.
3. **Explore**
Interviews with stakeholders, Community Advisory Board, and student workshops are held.
4. **Challenge**
Conclusions are made based on engagement, initial recommendations presented to community.
5. **Narrow**
Together Chaffee Drop-ins and Open Houses in Buena Vista and Salida provide input from public on Plan draft. Strategy sessions held with focus groups.
6. **Workshop**
Planning Commission and Board of County Commissioners hold their first joint meeting to refine every element of the draft. PC holds outdoor in-person input session.
7. **Recommend**
An implementation workshop is held with stakeholders to develop the implementation plan. A virtual public input session is held for revising the Future Land Use Plan.
8. **Summit**
Plan approved.
Input from the people living, working, visiting and leading the community was critical to the validity and success of the 2020 Comprehensive Plan, along with several other inputs such as relevant plans and studies. The following summarizes the results of the public input process and those other inputs, showing how this comprehensive plan drew on a breadth of ideas to craft implementation measures.
**PUBLIC OUTREACH PROCESS AND SUMMARY**
Over the 18-month course of the initial public outreach process, nearly 5,500 data points were recorded and analyzed to determine recommendations for the Comprehensive Plan Update.
**WHEN did outreach occur?**
Shortly after the project kickoff in June 2019, the first open house was held at Mount Princeton Hot Springs on June 26, 2019. The process continued for 18 months, with 16 live, in-person events and a constant presence at the project website, [TogetherChaffeeCounty.org](http://TogetherChaffeeCounty.org), where a series of online surveys were available and where news and updates were continually posted. Due to the COVID-19 pandemic, in-person public engagement was minimal after March 2020; however, virtual events were held, and members of the Planning Commission hosted two outdoor in-person input sessions in October 2020.
**WHAT type of outreach were people engaged in?**
The comprehensive plan update required broad, high-quality input from a variety of sources which meant utilizing a variety of techniques and venues. This is summarized as follows:
4 **Public Open Houses** were designed to be open to all participants and scheduled for after-work hours in locations across the community (Mount Princeton, Salida, X and X). Each Open House was designed to build on the information received at the previous event:
- Open House #1 introduced the comprehensive planning process and received comment on themes established through the Envision Chaffee initiative.
- Open House #2 presented findings and key themes from Open House 1, and engaged participants in a live poll about planning topics.
- Open House #3 & #4 presented draft plan items such as goals and strategies, proposed projects and growth scenarios.
5 Neighborhood Meetings were scheduled in strategic locations to capture information specific to the neighborhoods where people live and work across the County. Targeted for intimate discussion among groups of neighbors, these sessions identified very local issues for areas including Buena Vista, Poncha Springs, Maysville, Johnson Village and Mid-Valley/Nathrop.
2 Together Chaffee Drop-ins were held at the Scout Hut in Salida and a workspace on Main Street in Buena Vista. These events presented draft plan items like Goals & Strategies and offered a summary of what we had heard so far through engagement for attendees to comment on. In addition to providing an opportunity for one-on-one conversations about the future of Chaffee County with planning staff, the Drop-in events allowed participants to spend half a million dollars in “Chaffee Bucks” on various potential investments.
3 High School Classroom Sessions were held at Buena Vista High School and Salida High School with approximately 45 junior- and senior-level students. Prior to the sessions, the students were asked to complete worksheets about their favorite parts of Chaffee County, their fears for the future, and what they felt the County’s greatest needs were. When presented with a summary of the worksheets, the students held an open-ended discussion. They also participated in the investment exercise conducted at the Drop-in events.
52 One-on-One Stakeholder Interviews were conducted with identified key stakeholders, including local government heads, representatives of cultural institutions, and prominent figures in Chaffee County’s business community and leadership.
6 Focus Group Work Sessions were held to workshop the vision, goals and strategies for each of the Plan’s themes in January 2020. Community groups and subject experts convened to refine the draft.
15 Joint Work Sessions were held with Planning Commission and the Board of County Commissioners between May and July of 2020 to refine an initial draft released in March. County leadership went through each section of the plan and made changes so that this document could better guide future decision making. The public was welcome to attend these virtual meetings, which had much higher attendance than regularly scheduled Planning Commission meetings.
1 Implementation Work Session was held to develop the Implementation Plan. Leaders, representatives from community groups and County staff convened to assign responsibility for initiating the Action Steps in this plan, as well as a timeline for implementation and estimated cost. This resulted in actionable steps to achieve the Goals and Strategies in this plan, utilizing existing groups with boots on the ground in Chaffee County.
1 Virtual Public Meeting was held in September 2020 to present the draft plan and gather public input. Almost 70 people attended, and a recording of the meeting was posted to the project website with space for the public to leave comments.
2 Comp Plan Pop-Ups at the Salida and Buena Vista Farmers’ Markets were held to give folks a chance to provide in-person input in a safe, outdoor environment. County staff and members of the Planning Commission had large Future Land Use Maps and copies of the Future Land Use Plan available for public comment.
All throughout the public outreach process, the same engagement information was available online, with additional mapping outlets on the Together Chaffee website.
Tools and methods used included visual preference surveys, online surveys, quick comment cards, online mapping applications, live audience mobile device polling, "Chaffee Bucks" investing, Zoom meetings and online forums. Advertising methods included the project website, direct email, flyers, social media platforms, and local print and radio media outlets. A sub-committee called the Outreach Team was formed for the sole purpose of ensuring that underrepresented portions of the population were aware of and attended public events.
**WHY was it important to create such a broad outreach plan?**
In past planning efforts, phone or mail-out surveys were the predominant tools for capturing public input and feedback on planning ideas and land use policy. At the start of a new decade, however, response rates for these traditional methods are dwindling, and with most residents having an online presence via smartphones, engagement is higher than ever when digital input collection is used.
It was important for the team to spread the word as widely as possible, so any and all outlets and venues proved useful. When a demographic was found to be less engaged, the outreach plan was adjusted to develop new methods to capture less-vocal cohorts.
**HOW much information and input was received?**
Of the 5,490 unique data points received, the majority came through direct scoring or voting mechanisms (i.e., dot voting, investment exercises, or otherwise directly voting on something in person or online), giving the Comprehensive Plan solid grounding on tangible elements such as improvement projects or growth scenarios.
**WHAT did the Comprehensive Plan do with all this data?**
Using the quantitative, qualitative, anecdotal and direct comments, the input team mined the data to find direct commonalities that could be combined with all other inputs and translated into the community Vision, the seven Plan Element themes, Goals, Strategies and Action Steps.
**SO WHAT did everyone say?**
When all input channels and methods were added up, the results for the top ten most important themes/issues were:
1. Increase affordable housing options (“workforce housing”)
2. Growth management, sprawl prevention, protecting open space and ranches
3. Need for a County-wide Sustainability Plan, include sustainability in planning and building regulations
4. Water/sewer/road infrastructure capacity to sustain growth/infrastructure maintenance issues
5. Density over sprawl, encourage mixed-use residential and commercial in towns
6. Affordable childcare/preschool
7. Need for a recreation center in the County
8. More specific/different planning and building regulations and development review
9. Need for more multifamily/apartments in the County
10. Limit/regulate short-term rentals
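As a rough illustration of how a ranked list like the one above can be produced, the sketch below sums per-theme tallies across input channels and sorts the totals. The channel names and counts are hypothetical placeholders, not figures from the plan:

```python
from collections import Counter

# Hypothetical per-channel tallies; channels and counts are illustrative
# assumptions, not data from the Comprehensive Plan.
channels = {
    "dot_voting":    {"affordable housing": 42, "growth management": 37, "sustainability": 20},
    "online_survey": {"affordable housing": 55, "growth management": 31, "childcare": 18},
    "chaffee_bucks": {"affordable housing": 30, "sustainability": 25, "childcare": 12},
}

# Combine every channel's tally into one total per theme.
totals = Counter()
for tally in channels.values():
    totals.update(tally)

# Rank themes by combined score across all input channels, highest first.
ranked = totals.most_common()
for rank, (theme, score) in enumerate(ranked, start=1):
    print(f"{rank}. {theme}: {score}")
```

The same additive approach works regardless of how many channels or themes are involved, which is why mixing dot votes, online surveys, and investment exercises still yields a single prioritized list.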
CONCLUSIONS
The 2020 Chaffee County Comprehensive Plan update received an outpouring of input and support from an active, sophisticated and diverse community. Throughout this plan you will find quotes, themes, comments and stories that complement and support its findings and policy recommendations.
Because of the quality and amount of input received, this plan was able to capture and represent the identity and future vision of the Chaffee community in every goal, strategy, action step or illustration. Special thanks goes out to all who participated.
SUMMARY OF NEIGHBORHOOD MEETINGS
Buena Vista Neighborhood Meeting
The main topics discussed at the Buena Vista Neighborhood Meeting were the Town’s role in the County, mobility, housing and growth.
Many residents believed that Buena Vista has not fully taken shape as the community it wants to be, indicating that there is room to define its identity. Meeting attendees were proud of how open-hearted and open-minded their community is, and of its unique historical presence without the look of a “classic town.” One resident stated that Buena Vista is the “Gateway to the outdoors – the community is based on people being outside.”
The mobility discussion primarily focused on non-vehicle transportation options; many residents voiced concern that the County has not been proactive in pursuing bike lanes, and that they desired more sidewalks in BV.
Cost of housing was the main issue of concern for the residents of Buena Vista, who have seen the negative impacts of housing unaffordability affect them and their neighbors. Many residents at the meeting believed that density is the key to affordable housing and viewed The Farm as an example to be replicated. A few action items that residents referenced were enacting restrictions on short-term vacation rentals, the purchase of County and Town land for deed-restricted housing and creating incentives in the Town to get developers to build affordable housing and annex into Town.
Attendees made the connection between housing and jobs, and spoke of how businesses are unable to retain employees due to the shortage of affordable rental properties. Most meeting attendees agreed that they wanted to continue to be environmental stewards and hoped that pursuing businesses in the green economy will bring attractive, higher-paying jobs to the County.
Johnson Village Neighborhood Meeting
The main topics discussed at the Johnson Village Neighborhood Meeting focused on community identity, water infrastructure capacity, and annexation.
Residents of Johnson Village understand the important role it plays as one of the three main gateways into the County, and acknowledge that historically they have been an overlooked, pass-through community. While some meeting attendees said they would like the convenience of having restaurants and a grocery store in the Village, others were less concerned with the drive to Buena Vista for these amenities. Residents were concerned with the availability and capacity of water infrastructure, and how businesses are leaving the area due to water access issues. When the potential to annex into Buena Vista was brought up, residents were skeptical but saw the potential benefit of having better access to water. Growth in the County has created traffic issues coming through Johnson Village on Highway 285, especially in summer months. With increased traffic, one resident stated “We need visibility - proper signage is key to support commercial growth.”
Nathrop Neighborhood Meeting
The main topics discussed at the Nathrop Neighborhood Meeting were community identity, protection of ranchland and open space, and commercial development.
The residents of Nathrop and the Mid-Valley area agreed that they love the rural character of their neighborhoods and take pride in the sense that they are a refuge from more developed areas of the County. One resident stated “I love that you can drive by where my house is, not blink, and still miss it.” Another resident pointed out that the stretch from Nathrop to Centerville is the prime example of what rural character means in the Arkansas Valley.
Meeting attendees voiced their fear of development pressure in their community, particularly concerning the agricultural lands owned by a few remaining ranchers. The discussion between the ranchers and their neighbors focused on how to balance community objectives with the private property rights of residents whose families have been a part of Chaffee County for over 100 years. Another community goal discussed was environmental and river protection, particularly the sale and purchase of mining rights.
Some meeting attendees recognized that Nathrop is one of the last areas in the County that is considered by the public to be affordable to buy property or homes for families, but there are very few amenities for families like a place to buy food, a playground, or a gas station. Others were less concerned about the drives to Salida or Buena Vista for those amenities.
Open House #2 (Salida)
The main topics discussed at Open House #2 held in Salida were affordable housing, trail connectivity, open space and ranches, health and childcare, watershed health and sustainability.
The residents of Salida value their walkable downtown with local businesses and a focus on the river. They are a self-proclaimed “lifestyle community,” and are known for their friendly people and recreational assets. They want to continue to be a city with a vibrant local economy and fear the effects that short term rentals and big box stores will have on the community’s character. The severe lack of affordable housing and rapidly increasing home values were a major concern discussed at the meeting, as well as the effect these issues have on retaining a local workforce. Residents expressed a desire to continue the pedestrian-oriented grid network of downtown into the Highway 50 corridor, which is now auto-centric and dangerous to cross as a pedestrian. Trail connectivity within and outside of the City was important to many meeting attendees.
Many residents stated “keep the city areas city, and the rural areas rural”, which appeared to be agreed upon by the greater Salida community. People were interested in ways to preserve ranch lands while protecting the private property rights of ranchers. Meeting attendees were also concerned about the lack of healthcare services in the County like mental and behavioral health, assisted living and childcare. Finally, concern over the impacts of growth on the health of the watershed and the desire to incorporate sustainability in planning were expressed.
Poncha Springs Neighborhood Meeting
The main issues raised at the Poncha Springs Neighborhood Meeting were connectivity, growth, commercial development, town identity and economic vitality.
The connectivity discussion centered on the intersection of Highways 285 and 50. Poncha residents recognize that being at a major crossroads in the County has disadvantages, like traffic congestion and the difficulty of non-vehicle highway crossings. However, the Town benefits from the access and visibility these two major highways provide. Residents stated they would like a safe way for pedestrians to cross the highways, like a signaled crosswalk. The highways physically divide the neighborhoods of Poncha, which has made it difficult to integrate or create a cohesive identity.
Residents and Town Staff both acknowledged that Poncha has embraced growth, while other communities within the County have less available land and infrastructure to accommodate larger residential developments. The lack of commercial development has prevented Poncha Springs from having a town center, and residents expressed the need for a place to gather with their community members, as Town Hall is currently the only option. Residents stated they would like more small businesses where they can interact with other locals.
Skiing is an important facet of the town’s identity, and meeting attendees saw this as a market opportunity for a hotel to capture skiers who would otherwise drive through Poncha to Salida for lodging. This could enhance the Town’s economic vitality, which would benefit from an industry that retains long-term jobs. One resident stated “Poncha Springs is an escape from Salida and its congestion — but it comparably lacks tourism appeal and the missing commercial development has created an identity confusion due to the lack of a ‘downtown’ area.”
**Maysville Neighborhood Meeting**
The main topics discussed at the Maysville Neighborhood Meeting were community character, recreational assets, short-term rentals, commercial development and traffic.
An almost exclusively residential area, Maysville was described by one meeting attendee as “A community hub for surrounding areas.” Residents at the Maysville Neighborhood Meeting value the small, historic, rural feel of their community, while also appreciating their proximity to municipalities. One resident stated “we have all of the perks of living in a suburb minus the location.” People were concerned that the lakes owned by the local utility provider—historically a recreational amenity for locals—may be sold for water rights. Meeting attendees would like improved regulation and enforcement of short-term rentals, as they cause water shortages for residents and provide less benefit to the community since Maysville does not collect taxes on them. People would also like to see cellular phone service and broadband expanded and improved.
Maysville community members are proud of the community-driven effort to preserve the School House, which serves as a landmark. Residents in attendance seemed to share the notion that they did not want commercial activities developed in their residential neighborhoods. Concerns about traffic, highway crossings, dirt road maintenance and signage were also discussed at the meeting.
As identified through the Envision Chaffee effort, most issues faced by Chaffee County are connected to growth, and the County’s capacity to continue to provide safe and reliable public services, housing and economic opportunities to all citizens. But on a deeper level, the fundamental issue is change, and where people have negative feelings or resistance to change, that usually ties back to changes, real or perceived, in quality of life. In 2020, issues include the health of our forests and watershed, the impacts of increasing visitation, the quickly rising cost of living, lack of affordable housing, shifting demographics and changing community values, and threats to the future of agriculture. All of these evoke a sense of loss from long-time community members. Newer arrivals bring a different perspective than many of the long-time residents, and while they may embody a changing set of values, they share many concerns about Chaffee County’s future.
**ISSUES OF CONCERN**
The following issues of concern are based on the pressing challenges the County is dealing with in 2020 and are intended to be broad enough to guide decision-making despite certain changes in conditions, such as shifts in economic markets. The following is a discussion of these issues as they provide motivation for the comprehensive planning elements and land use actions put forth in this plan.
**Natural Resources and Healthy Landscapes:**
1. **Forest health** in Chaffee County is a major concern, as historic practices of fire suppression and global climate change have set the stage for catastrophic wildfire while pests and disease currently ravage the forests. Real threats include spruce budworm and pine beetles, which are resulting in massive tree mortality. Wildfire in nearly any part of the County threatens human health, life and property, and uncertainty about the forest’s future is immense. High priority should be placed on forest treatment as identified in the Community Wildfire Protection Plan.
2. **Water:** The availability and quality of water are major concerns, not just for the County but for the region and nation. Water in the Arkansas River Basin is over-appropriated, and surface and groundwater supplies are depleting. Surface water rights are delivered based on the availability of native water to satisfy such rights in accordance with the Prior Appropriation System. Because there is a nexus between surface and ground water, well pumping creates an impact on surface water availability. Augmentation replaces the depleted water so that surface water rights are not injured. The availability of an augmentation plan, covering both the water itself and the legal and engineering framework, is a critical component of growth in areas outside the places served by municipal water service areas. The Upper Arkansas Water Conservation District and its augmentation plan have been critical to most development in the unincorporated parts of the county. Water quality has increasingly been questioned as it relates to aging septic systems.
3. **Sprawl:** Development pressure threatens to change the working lands and open spaces to sprawling subdivisions and change the character of Chaffee County from rural to suburban. Success in maintaining agricultural lands and functions needs to be as much about making agriculture successful as it is about protecting lands from development.
4. **Working Lands:** Preservation of working lands and appreciation of the ecological function they provide is growing stronger. There are several large conservation easements in the planning stages, and the Office of Housing has model legislation to adopt conservation subdivision planning.
5. **Wildlife Habitat:** As land has developed in Chaffee County, wildlife habitats and corridors are under increasing threat. How does wildlife fit into development patterns and private property rights?
6. **Recreation in Balance:** Increased use of recreational assets has impacted the landscape and watershed health. Continued support for the Recreation in Balance program is critical to preserving our quality of life.
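The augmentation accounting described in the Water item above is, at its core, simple arithmetic: a plan must replace the portion of pumped groundwater that is consumptively used and never returns to the stream. A minimal sketch follows, where the pumped volumes and consumptive-use fractions are illustrative assumptions, not values from any actual augmentation plan:

```python
# Hypothetical augmentation accounting. The 10% in-house and 85% irrigation
# consumptive-use fractions below are illustrative assumptions only.

def depletion(pumped_af: float, consumptive_fraction: float) -> float:
    """Acre-feet of pumped groundwater that never return to the stream."""
    return pumped_af * consumptive_fraction

# A household well and a small irrigation well in one plan year (acre-feet).
household = depletion(pumped_af=0.4, consumptive_fraction=0.10)   # in-house use
irrigation = depletion(pumped_af=2.0, consumptive_fraction=0.85)  # lawn/garden

# The augmentation plan must replace the total depletion to the river so
# that senior surface water rights are not injured.
required_augmentation = household + irrigation
print(f"Replacement water owed: {required_augmentation:.2f} acre-feet")
```

This is why the availability of augmentation water is a gating factor for growth outside municipal service areas: every new well adds to the replacement obligation.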
**Built Environment & Infrastructure:**
7. **Railroad:** The history of Chaffee County has been physically shaped by the railroads, and the future use of rail corridors presents interesting potential. The corridor is a unique and valuable asset that should be kept intact, whatever its final use may be, though it is not currently being actively maintained. The corridor could potentially be used for routing utilities in addition to use by trains.
8. **Transportation:** Roads and their capacity for vehicle traffic continue to present challenging conditions in the face of growth. County road maintenance is not able to keep up with the impacts of increasing traffic and, at the same time, public expectations of road conditions continue to rise.
**Human Environment:**
9. **Schools and Institutions:** Growth in population impacts schools and their institutional capacity to deliver a high-quality learning environment to the County’s youth. Care must be taken to ensure public institutions have the resources and facilities they need; this includes K-12 systems as well as post-secondary, vocational and supplemental learning environments. School superintendents and representatives from institutions such as Colorado Mountain College and other regional stakeholders must help find tools and procedures that ensure growth does not outpace the County’s ability to serve those who desire further learning.
10. **Demographic Change:** The County’s population is, on average, about 12 years older than that of the State of Colorado. The number of retirees who have worked an entire career in order to move to Chaffee County has been substantial, while younger people are leaving. A high-quality continuum of care means there is a smooth transition between different stages of healthcare needs, and providing this is a challenge. Many couples are forced to move away upon developing health issues that are not addressed locally. Plans to build facilities to mitigate these issues exist; however, none have come to fruition.
11. **Public Health:** Physical, mental and social public health and wellness is of utmost importance at the time of this Comprehensive Plan. Rural areas like Chaffee County struggle in general with a lack of resources, and in times of global strife, even more strain is placed on the County’s public health officials and stakeholders. Anecdotally, there are two Chaffee Counties: one that is self-sufficient with social care and safety nets, and a significant number of folks who do not have such resources. This disparity is growing as the cost of living increases. The services provided by non-profits and faith-based groups continue to be essential.
12. **Behavioral Health** is a key concern for the future, and ensuring Chaffee County’s communities have adequate facilities for the treatment and transport of those in need is a major concern. The County is fortunate to have entities such as Sol Vista, which as of this planning effort had received a significant grant to start work on a treatment center adjacent to the hospital to serve this critical need. Transporting patients to the Front Range—rather than providing services for them in Chaffee County—is a growing burden.
13. **Culture, Arts and Music:** Cultural activities have come to reshape the region’s identity in a short time. Several large music festivals and events drive a portion of the economy and contribute to the County as a cultural destination. Being “on the map” for music and social events continues to bring in ancillary activities like weddings and seasonal visitors like never before. The Salida Creative District, along with community organizations like Chaffee Arts in Buena Vista, contributes to this and further cements the area’s position as a place to experience authentic Colorado culture. People will continue to come to the County from across the state and country; many will stay for a short time, and some will stay for the rest of their lives.
14. **Recreation** and its place in the identity and culture of Chaffee County is another major motivation for the policies in this plan. With high seasonal traffic on county roads and state highways, crowded waterways, intrusion onto private property near camp sites, and over 100,000 people hiking the County’s 14,000-foot peaks every season, impacts from human activity on environmental resources are highly visible and strongly felt when considering the recreational economy in Chaffee County.
15. **Affordable Housing:** Along with the State of Colorado and the country itself, Chaffee County and the region face challenges to providing sustainable housing for its workforce. A regional housing shortage, lack of diversity of housing, and low-wage jobs have priced certain income levels out of the County, and have continued to push existing residents elsewhere. Permanently affordable housing is critical to ensuring Chaffee County is an equal-opportunity place to live, work and recreate.
A.2 | Community Project Recommendations
Projects came from stakeholder and community input, and review of existing plans and studies, including:
- Buena Vista Three-Mile Plan
- Salida Comprehensive Plan
- Poncha Springs Comprehensive Plan
The list of projects was then brought to the community in Open Houses 3 and 4 where the public voted on which projects they would like to see prioritized, which is indicated under the "votes" column.
Attendees vote on proposed projects
## Buena Vista Sub Area Specific Projects
| Buena Vista Sub Area Projects | Votes |
|------------------------------|-------|
| 1 Significant affordable housing project in the Sub Area | 17 |
| 2 Study the execution/prioritization of improving the sanitary system in Johnson Village to comply with Buena Vista Sanitation District standards (refer to the Infrastructure Study) | 4 |
| 3 Bring people and vitality to Johnson Village by zoning land for commercial and recreational uses, facilitate appropriate growth as it relates to the capacity of infrastructure | 2 |
| 4 Collaborate with the Town of Buena Vista to identify water resources safety risk for Cottonwood Creek | 1 |
| 5 Identify and convert one or both of the junk yards in Johnson Village to a project for public benefit | 1 |
| 6 Install proper signage in Johnson Village and SH 24 North that signifies the gateway into the County and wayfinding to the County’s assets | |
| 7 Create a Johnson Village Master Plan to set long-term neighborhood strategies for infrastructure, connectivity and land use | |
| 8 Engage with the Town on an alluvial water storage study and project for Cottonwood Creek to ensure coordination in the long-term management of water supply in the area | |
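The vote counts gathered at Open Houses 3 and 4 lend themselves to a simple cross-sub-area ranking. The sketch below uses a handful of vote totals from the tables in this appendix (a small sample, not the full project list) to produce one prioritized list:

```python
# Sample of (sub-area, project, votes) rows drawn from the appendix tables;
# this is a subset for illustration, not the complete project inventory.
projects = [
    ("Buena Vista", "Significant affordable housing project", 17),
    ("Buena Vista", "Paved bike path at Crossman Avenue", 14),
    ("Salida", "Significant affordable housing project", 6),
    ("Mid-Valley", "Cogan's Curve on CDOT priority list", 5),
    ("Poncha Springs", "SH 285 / US 50 intersection improvements", 3),
]

# Rank across all sub-areas by community votes, highest first.
prioritized = sorted(projects, key=lambda p: p[2], reverse=True)
for subarea, name, votes in prioritized:
    print(f"{votes:>3}  {subarea}: {name}")
```

A combined ranking like this makes it easy to see County-wide priorities (affordable housing dominates) while the per-sub-area tables preserve local context.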
## Buena Vista Sub Area Transportation Projects
### Work with CDOT on the following projects in order of prioritization:
| Project | Votes |
|------------------------------------------------------------------------|-------|
| 1 Construct a paved bike path at Crossman Avenue | 14 |
| 2 Improve the intersection at US 24 and DePaul Avenue | 5 |
| 3 Construct a bike/ped path from Johnson Village to Browns Canyon Monument with safe highway crossing | 4 |
| 4 (Current project underway) - Collaborate with all project stakeholders to improve the intersection at US 24 and Steele Drive | 1 |
| 5 Improve pedestrian crossings along Highway 285 | |
| 6 Make US 285 through Johnson Village into more of a main street/entryway with pedestrian amenities like sidewalks and highway crossings | |
### Work with the Town of Buena Vista on the following projects:
| Project | Votes |
|------------------------------------------------------------------------|-------|
| 7 Connect the subdivisions outside Buena Vista's boundaries with roads (referring to the Buena Vista Transportation & Three Mile Plan and prioritize connectivity) | 3 |
| 8 Collaborate with project stakeholders (CDOT, RR, etc.) on the improvement of intersection at US 24 and Steele Drive | 1 |
| 9 Continue the paved bike trail along Gregg Drive (CR 321) through the County past the municipal boundary | 1 |
| 10 Construct a road connecting Gregg Drive and CR 306 at CR 361 | 1 |
## Mid-Valley Sub Area Specific Projects
| Mid-Valley Sub Area Projects | Votes |
|-----------------------------|-------|
| 1. Work with the residents of Nathrop to develop a Community Master Plan for Nathrop (refer to Infrastructure Study) | 4 |
| 2. Begin master planning for a comprehensive water/wastewater system to mitigate quality issues and promote community amenities | 2 |
| 3. Significant affordable housing project in the Sub Area | 2 |
| 4. Develop something like the old "Nathrop Mall" - a small market (like the general store at St Elmo) in the old townsite to provide easy access to food in the Mid-Valley Sub Area | |
| 5. Construct a park and playground in Nathrop | |
| 6. Construct bike lanes in Nathrop and various subdivisions in the Mid-Valley area | |
## Mid-Valley Sub Area Transportation Projects
**Work with CDOT on the following projects in order of priority:**
| Mid-Valley Sub Area Transportation Projects | Votes |
|--------------------------------------------|-------|
| 1. Ensure that Cogan’s Curve is on CDOT’s list of prioritized improvement projects | 5 |
| 2. Construct a public trail and bridge from Nathrop to Browns Canyon National Monument | 1 |
| **County Road Project Prioritization** | |
| 3. Improve County Road 162 from US 285 to Mount Princeton | 1 |
| 4. Construct bike lanes in Nathrop and various subdivisions in the Mid-Valley area | |
## Salida Sub Area Specific Projects
### Salida Sub Area Projects
| Salida Sub Area Projects | Voting |
|--------------------------|--------|
| 1. Significant affordable housing project in the Sub Area (City of Salida) | 6 |
### Salida Sub Area Transportation Projects
**Work with CDOT on the following projects in order of prioritization:**
| Salida Sub Area Transportation Projects | Voting |
|----------------------------------------|--------|
| 1. Work with both CDOT and the City of Salida to complete projects and plans outlined in the Future 50 Highway Plan | |
| 2. Completion and implementation of Corridor Plan; Intersection control evaluation of US 50/SH 291 intersection | |
| **Work with the City of Salida on the following projects:** | |
| 3. Monarch Spur Trail Improvements | 1 |
## Poncha Springs Sub Area Specific Projects
| Poncha Springs/Maysville Sub Area Projects | Voting |
|-------------------------------------------|--------|
| 1 Referencing the future Trails Master Plan, utilize Conservation Subdivision Design standards and other means to preserve areas by the river for potential trail development | 1 |
| 2 Maintain access to recreational opportunities along the river in Maysville to allow for continued recreational use | |
| 3 Team to perform studies that explore solutions to rural water supply issues around Maysville | |
| 4 Significant affordable housing project in the Sub Area | |
## Poncha Springs Sub Area Transportation Projects
Work with CDOT on the following projects in order of prioritization:
| Poncha Springs Sub Area Transportation Projects | Voting |
|-------------------------------------------------|--------|
| 1 Ensure that improvements to the intersection of SH 285 and US 50 are on the CDOT priority list | 3 |
| 2 Improve US HWY 50 pedestrian crossings prioritizing intersections where existing and future housing exist (for example, US 50 & CR 120) | 2 |
| 3 Enhance the bicycle experience of the Continental Divide Cycling Route - Highway 285 along Poncha Pass | 3 |
| 4 Explore paths to construction of a multi-use path between Poncha Springs and Maysville | 1 |
A.3 | Model Conservation Subdivision Guidelines
DESIGN STANDARDS
Purpose: To create a very livable neighborhood, interspersed with functional open space, to improve the quality-of-life of the new residents.
The following is a memorandum provided by Randall Arendt, FRTPI, ASLA, for the Chaffee County Office of Housing to serve as a model for a future ordinance. Among the special features of any such new neighborhood are the following sixteen design concepts that are recommended to be incorporated into designs for projects with public, or central, water and sewer infrastructure, where greenway design is desired.
1. **Foreground Meadow:** Many roadside parcels can be developed with an attractive side facing the main public road, but with homes pulled back from it to reduce negative effects of traffic noise, etc. The park-like open space that is thereby created can be planted with a variety of deciduous and coniferous trees. This area buffers homes from the busy road running along the front of the property, and provides greater backyard livability. It also presents the traditional front facades to the public street, instead of lining it with backyards or fences. In the sketch at right, the conventional approach is at the top; the recommended approach is at the bottom.
2. **Numerous Neighborhood Greens:** Many properties lend themselves very well to the concept of creating separate but related “outdoor rooms”, defined by central open space. This special new neighborhood offers a variety of greenspace ranging from a central green to an informal ballfield. In the above example, the author redesigned a conventional extension to a small village by trimming lot sizes and creating both a neighborhood green and an informal playing field with the acreage that was not needed for house lots.
3. **Terminal Vistas:** Greenspace is deliberately positioned either at the ends of streets, or along the outside edge of curving streets, so that the visibility of these amenities will be maximized.
4. **Closes:** As an alternative to the standard cul-de-sac, a “close” consists of a one-way street looping around a small central green. The turning radius at the far end is designed to meet engineering standards for turning movements required by long vehicles, such as fire trucks and moving vans.
Sometimes the central green area can be used as a rain garden, where stormwater pools for a few days before being absorbed by the soil, cleansing the runoff and replenishing the aquifer.
5. **Green Streets:** When garages are located in the back, accessed by rear laneways, opportunities exist for eliminating the street that traditionally runs in front of houses. In this example, the street area has been landscaped as a green space, with sidewalks for pedestrians. While a sidewalk down the middle might seem obvious, it does not create the same parklike atmosphere as two sidewalks bordering a central green. In a street grid pattern, these streets would be located in lieu of minor cross-streets, and could be repeated in line across a number of blocks to form a greenway spanning an entire neighborhood, perhaps linking homes with a larger park, shops, or a school.
This design approach can be used in neighborhoods of single-family detached homes, or in ones involving attached housing, such as condominium units or apartments.
6. **Bungalow Courts:** Bungalow courts, sometimes called “pocket neighborhoods”, can be seen as an extension of the “green street” design concept illustrated at bottom right. However, there are differences, the principal one being that they are designed around a central, shared open space: the basic building block around which homes are arranged. Sometimes homes frame it on all four sides, and often the neighborhood green is broader than that in a “green street” design. The definitive book on this subject is *Pocket Neighborhoods*, by Ross Chapin (2011, Taunton Press).
7. **Rain Gardens:** Provision should be made for the creation of “rain gardens” within parks and the greenspace bounded by the neighborhood greens. These engineering features allow the first flush of runoff from most storms to infiltrate directly into the ground, irrigating the trees and other park vegetation, and also replenishing the aquifer.
These design elements work best when the street pavement is sloped inward toward the central greenspace, with curbing along only the outside edge of the street, not the inside edge, to allow sheet runoff to enter the rain gardens.
8. **Attached Greens**: An "attached green" is one where a row of houselots abuts the greenspace directly, with the street located at the far edge of the green (and garage access provided via rear laneways running along the back lot-lines). It is a useful design approach, particularly along busy streets. This orientation greatly enhances the livability of homes, whose residents step directly from their front porches right onto the greens.
9. **Mid-Block Connections**: Footpaths and sidewalks should provide ways for pedestrians to cut across long blocks midway between street intersections. In Britain they are known as "twittens".
10. **Garage Orientation**: When lots are less than 60 feet wide, builders often locate garages as appendages to the housefronts, with the result that protruding garage doors become a central feature of the street facades, dominating the streetscape and defining the neighborhood in a distinctly non-traditional way. A far better alternative is to recess front-loaded garages, or to provide rear laneways. One way that municipalities can easily ensure that front-facing garages do not dominate streetscapes is to require a minimum front setback for garage doors that is 10-15 feet deeper than the front setback for homes.
When lots are less than 60 feet across, they really benefit when homes have rear-facing garages (accessed via laneways). Municipalities can require such laneways, and can prohibit front-facing garage doors, when lot widths are less than that.
When laneways are provided, they should be planted with shade trees, just the same as the streets in front (see next design concept, below). Rear access can take the form of private common drives or “back lanes”, maintained by condominium corporations.
11. **Back Lanes for Rear Garage Access:** Laneways are important design features to avoid front-facing garages that dominate streetscapes on lots less than 60 feet wide, or with semi-detached (duplex) homes having two-car garages.
12. **Semi-Detached Homes:** Front-facing garage doors are possible with duplexes having single-car garages, as illustrated at right. But when these homes have two-car garages, they can be done successfully only with rear access via laneways. (The image at right second from bottom is a two-family home designed to resemble a single-family home. Each front porch has its own front door, and the porch structures help to visually subordinate the two entrances. This effect is enhanced by one porch being set back farther from the street than the other.)
13. **Shade Trees:** It cannot be emphasized enough how very important shade tree planting is along neighborhood streets, particularly in treeless agrarian landscapes. Shade trees should be planted at 40-foot intervals on both sides of every street, between the curb and the sidewalk, in tree-lawns at least six feet wide. If they are not required at the outset, they are seldom planted afterwards, and almost never in any consistent manner.
The photo at bottom right shows shade trees in Boise, which was originally built in a very open, treeless landscape, demonstrating what can be accomplished in such areas.
If the jurisdiction fails to require such shade tree planting from the developer as a condition of approval, chances are they will never be planted, and decades later the streetscape will remain barren and unattractive. The upper left photo on this page shows how well shade trees grow in new developments located on former farmland, which is often subdivided in rural areas.
14. **Sidewalks for Safety:** Separating pedestrians from cars and trucks is always a worthy cause. Older communities routinely required sidewalks as a matter of course. However, in recent decades the importance of this basic safety feature has often been overlooked.
15. **Traffic Calming:** The two one-way streets encircling the first “outdoor room” or neighborhood green (middle right) are designed with T-intersections requiring drivers to come to a full-stop before proceeding through. Other parts of the street networks have been consciously designed to calm traffic, by deliberately introducing tighter curves and three-way intersections where motorists must slow down. Another traffic-calming device is the informal central median. Where fencerow trees exist, they can be incorporated into the design. Where they do not, such medians should be planted with canopy shade trees capable of filling the “celestial space” overhead, in the fullness of time.
16. **Drainageways as Boulevard Features:** Retaining a line of existing drainageways can help preserve key features of the rural landscape and add value to any project.
AN ECOLOGICAL APPROACH TO PLANNING
Ecology is sometimes thought of as being synonymous with environmentalism, but it is much more than that. “Ecology” is the interdisciplinary study of relationships of organisms with their physical surroundings – including the built environment – and with each other. Humans as organisms are an important element of this discussion, as is the understanding that ecology examines these relationships on an ecosystem or landscape scale.
There are four laws of ecology that apply at this level of discussion:
1. Everything is connected to everything else (What affects one affects another).
2. Everything must go somewhere (Matter cannot be destroyed). Waste products and byproducts of all processes must go somewhere. The fundamental question we must ask ourselves when making land use and permitting decisions is “Where do the waste and byproducts go? How do we dispose of them? What are the potential impacts to human and ecosystem health and stability?”
3. Nature is the ultimate arbiter (Nature knows best). Natural disasters and unforeseen events – wildfire, flooding, rockslides, disastrous climate and weather events, chemical spills, and more – are likely to overcome our best human designs and engineering. Similarly, the two previous laws illustrate the potential for unintended consequences – how using one pesticide may lead to the rise of another more damaging pest, for example.
4. There is No Such Thing as a Free Lunch (All decisions involve trade-offs). Every gain comes at some cost. There is no such thing as “zero impact.” For example, an electric vehicle is charged with electricity produced by some source – either fossil fuel or perhaps solar panels containing rare-earth metals that may have been mined a long ship-ride away. The batteries in the vehicle are similarly manufactured and must ultimately be disposed of properly (see # 2 above). Building large developments of “affordable housing” in rural areas may be an attractive option in terms of land costs, but wells, waste disposal, transportation, and other costs may exceed any savings rendering the housing unaffordable and imposing other costs on the community.
When applied to county-wide land use policy, these tenets guide this plan’s philosophy of promoting logical land use patterns in appropriate areas where growth makes sense by its true costs and benefits. An ecological approach finds the best place for something by layering information to optimize suitability. This is particularly applicable to the Growth Scenarios in Part III, which use GIS to weigh physical conditions and determine the most suitable locations for certain human activities.
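In GIS terms, the layering approach described above is a weighted-overlay suitability analysis: each data layer scores every cell of a grid, and a weighted sum ranks locations. The following is a minimal sketch in plain Python; the layer names, weights, and scores are hypothetical illustrations, not the County's actual GIS criteria.

```python
# Weighted-overlay suitability sketch. Each layer rates every grid cell
# from 0 (unsuitable) to 1 (ideal); weights express relative importance.
# All layers, weights, and values below are hypothetical examples.

layers = {
    "slope": [[0.9, 0.7, 0.2], [0.8, 0.5, 0.1]],   # gentler slope scores higher
    "water": [[0.6, 0.8, 0.9], [0.4, 0.7, 0.3]],   # water availability
    "town":  [[0.8, 0.9, 0.5], [0.6, 0.4, 0.2]],   # proximity to existing towns
}
weights = {"slope": 0.5, "water": 0.3, "town": 0.2}

def suitability(layers, weights):
    """Return a grid of weighted suitability scores (same shape as the layers)."""
    first = next(iter(layers.values()))
    rows, cols = len(first), len(first[0])
    return [
        [sum(weights[name] * grid[r][c] for name, grid in layers.items())
         for c in range(cols)]
        for r in range(rows)
    ]

scores = suitability(layers, weights)

# The highest-scoring cell is the "most suitable" location under this model.
best = max(
    ((r, c) for r in range(len(scores)) for c in range(len(scores[0]))),
    key=lambda rc: scores[rc[0]][rc[1]],
)
```

A real analysis would use raster layers (e.g. slope, floodplain, infrastructure distance) at parcel resolution, but the arithmetic is the same weighted sum per cell.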
PRINCIPLES FOR DECISION MAKING
TESTS FOR COURSES OF ACTION
A course of action is a plan or series of plans that will accomplish, or is related to, the accomplishment of an objective that will take us toward achieving our vision. The following tests are considerations when evaluating a possible course of action.
ADEQUACY: Will the course of action (COA) actually accomplish the goal when carried out successfully? Will it solve the problem? Is it aimed at the right objectives?
FEASIBILITY: Do we have the necessary resources – funding, personnel, equipment, facilities - or can they be obtained within reasonable timeframes and costs?
ACCEPTABILITY: Even though the contemplated action will accomplish the objective and we have the resources, is it worth the cost? Is it consistent with community values? Will the public support it?
COMPLETENESS: Does the recommendation adequately answer:
- Who will be responsible for executing it?
- What specific tasks and actions are required?
- When is it to begin? When should it be completed?
- Where will it take place?
- How will it be accomplished? There is nothing to prevent clearly explaining, in general terms, how the COA will be executed.
CONSISTENCY: Does the recommendation support our vision? Is it consistent with the Chaffee County Comprehensive Plan and regional and intergovernmental agreements?
CONSTRAINTS: Does the COA encompass all the things we are required to do by higher authority, thus limiting our freedom of action? Constraints are things we MUST do.
RESTRAINTS: Does the COA comply with all directives by higher authorities that prohibit certain actions? Restraints are things we MUST NOT do.
COLLABORATIVE PRINCIPLES
To collaborate as partners with our state, regional, municipal, civic, business, and NGO/PVO partners, we recognize, value, and utilize the following decision-making principles.
- During all meetings, discussions, brainstorming sessions, and decision-making processes, we will be respectful, be constructive, and be productive.
- Interagency working groups and committees will endeavor to make decisions based on a gradient of consensus. When possible, decision items will be presented and decided upon in one meeting. Policy-level and more complex decisions (i.e., policy recommendations to elected bodies) will be vetted through standing interagency committees (inter alia, the Planning Commission, Transportation Advisory Board, Administrators Committee, Housing Policy Advisory Committee, and Heritage Area Advisory Board) or task-oriented interagency working groups.
- Members of standing committees and interagency working groups must be empowered to speak for the organizations they represent. It is understood for some decisions organizational representatives will need to consult with their boards and leadership before making a decision or before committing their organization to a particular course of action.
- Government members serving in advisory or ex-officio capacities can (and in some cases should) abstain from taking a formal position in a standing committee or interagency working group.
- In making decisions, each member of a standing or ad hoc interagency working group will indicate his or her concurrence on a specific proposal using the five-point gradient below:
1. Full endorsement: I like it! I fully support it.
2. Support with reservations: I have some reservations, which I have shared, but I can live with it. I will not withhold support of the group or the decision-making body’s ultimate decision and will work for project success.
3. Abstain: This issue either does not pertain to me or my organization cannot take a position on this.
4. Cannot support: I register my organization's dissent on this proposal or COA because negative effects will outweigh the benefits. NOTE: in a consensus model, this is tantamount to blocking the proposal, though the ultimate decision may lie with an elected decision-making body. A 4 should be used sparingly and only as a last resort after efforts to resolve concerns have failed.
5. Indecision: If a member cannot make a decision without more information or without consulting with their board or leadership, the member must specify what additional information is needed and what guidance he or she needs from leadership. Ideally, the need for more information and leadership guidance are managed before the meeting.
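The five-point gradient above can be expressed as a simple data model. The sketch below is illustrative only: the blocking effect of a "4" and the deferring effect of a "5" come from the text, but the names and the `outcome` helper are hypothetical.

```python
from enum import IntEnum

class Concurrence(IntEnum):
    """Five-point gradient of consensus from the collaborative principles."""
    FULL_ENDORSEMENT = 1   # "I like it! I fully support it."
    WITH_RESERVATIONS = 2  # Reservations shared, but can live with it.
    ABSTAIN = 3            # Does not pertain / cannot take a position.
    CANNOT_SUPPORT = 4     # Dissent: tantamount to blocking the proposal.
    INDECISION = 5         # Needs more information or leadership guidance.

def outcome(votes):
    """Classify a proposal under a gradient-of-consensus model (illustrative)."""
    if Concurrence.CANNOT_SUPPORT in votes:
        return "blocked"    # a single 4 blocks in a pure consensus model
    if Concurrence.INDECISION in votes:
        return "deferred"   # gather information / consult leadership first
    return "consensus"      # 1s, 2s, and abstentions all carry the proposal

votes = [Concurrence.FULL_ENDORSEMENT, Concurrence.WITH_RESERVATIONS,
         Concurrence.ABSTAIN]
```

Note that the actual process leaves final authority with elected decision-making bodies; this model only records the working group's concurrence.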
**CONSIDERATIONS FOR DECISIONS ON LAND USE AND DEVELOPMENT PROJECTS.**
The following are guidelines to staff and decision makers for making determinations on land use approvals under the future land use framework of this Comprehensive Plan:
- Does the project generally meet the intent of the future land use map, its designations, use locations, and any other special elements?
- Can the project’s location be reconsidered to better meet the intent of any of this plan’s goals, strategies or land use character designations?
- Does the project directly further a goal, strategy, incentive or other project as mentioned in this or any other ancillary plan or study?
- If a residential use is being considered, does the use further Strategy 3.1.B promoting the development and preservation of housing types across the housing spectrum that serve residents across a range of demographics and incomes?
- Does the project generally advance the community vision of the Preferred Scenario?
A.5 | Background
LOCAL HISTORY
EARLY SETTLEMENT
Chaffee County’s history is largely influenced by the geology and geography that make this portion of the Arkansas River Valley such a unique place. The settling of Chaffee County saw a succession of Native Americans, miners, ranchers and railroads. By the 1600s the Ute Indians roamed the valley, which they valued for its rich hunting grounds and temperate climate. Several of the highest mountains, including Antero, Shavano, Tabeguache, Ouray, Chipeta and Pahione, are named after the Utes. By the early 19th Century, the Arapaho and Cheyenne occasionally wintered near the Arkansas River.
MINING
Colorado’s 1859 gold discoveries brought a mining boom to Chaffee County when thousands of “Pikes Peak or Bust” prospectors moved on from the Front Range to the central mountains. Situated in the center of the Colorado Mineral Belt, the Chaffee mountain ranges yielded a wealth of gold, silver, copper, iron, zinc, lead, limestone, marble, fluorite, travertine, gypsum and feldspar.
Of the many mines located within Chaffee County, the Mary Murphy Mine in the Chalk Creek District, operated by the Mary Murphy Gold and Silver Mining Company of St. Louis, was the most famous. Another prominent mine operating in Chaffee County was the Calumet Iron Mine run by the Colorado Coal and Iron Company (CC&I). Mining in Chaffee County peaked between 1885 and 1888, when production of gold, silver and lead totaled more than $1 million each year. CC&I quarried large quantities of limestone from the Monarch District above Garfield for use in the Pueblo steel mills into the 1980s. The loading tipple is still visible from US Highway 50.
Smelters located in Buena Vista and Salida processed the raw ore, and provided employment for hundreds of men until the end of World War I. The 300-foot brick chimney in Salida is all that remains of a once prominent industry.
RAILROADS
The three railroad lines contributed immensely to the growth of mining, agriculture and towns in Chaffee County. The Denver, South Park & Pacific Railroad crossed South Park from Denver and reached Buena Vista in 1880 on the way to Leadville, the mines near St. Elmo, and across the Continental Divide to Crested Butte.
The Colorado Midland was based in Colorado Springs and traversed Trout Creek Pass on its way to the silver mines in Aspen. The line was abandoned after World War I and is now the backbone of the Buena Vista trail system.
The Denver & Rio Grande Railroad (D&RG) traveled from Denver to Leadville following the Arkansas River, and founded Salida in 1880, erecting extensive railroad facilities. The D&RG provided Chaffee County with a direct rail connection to the East Coast via Pueblo and St. Louis, and to the Pacific Coast via Salt Lake City and San Francisco. By 1920 the D&RG was still a prominent travel route, but highways began to eclipse the railroads after WWII and the D&RG ended passenger and freight service to the valley in 1964. Union Pacific still owns this rail line connecting Pueblo and Minturn, but it has not been used since 1997.
BUENA VISTA
Buena Vista was settled in 1864 as people were drawn to its fertile land. Known as BV by the locals, it became an agricultural center eventually serviced by three railroads. To the south, the small township of Nathrop saw its beginnings as the ranch owned by Charles Nachtrieb in 1865.
COLORADO STATEHOOD
Colorado became a state in 1876, and in 1879 the state government divided Lake County into northern and southern parts. The southern portion was named Chaffee County after Jerome Chaffee, a businessman and politician who had invested in local mines. The town of Granite, in northern Chaffee County, was designated as the county seat, but later residents voted to move the county seat to Buena Vista, a more centrally located city.
TOWN SETTLEMENTS
With the establishment of the Colorado Territory in 1861, present-day Chaffee County became part of Lake County. The mining boom, and the railroads that followed it, shaped today’s historic town centers in Salida and Buena Vista.
Salida was founded in 1880 with the arrival of the Denver & Rio Grande Railroad. “Salida,” Spanish for “exit,” refers to its location where the rail line exited Bighorn Sheep Canyon and entered the Upper Arkansas River Valley. The 1887 SteamPlant provided electricity to the County for 70 years. In 1928 Chaffee County voted to move the County seat for a second time, from Buena Vista to Salida. Construction of the new Chaffee County Courthouse in Salida was finished in 1932.
**RANCHING HISTORY**
A 150-year cattle ranching heritage runs deep in Chaffee County. Following the end of the gold rush in the late 1860s, homesteaders began to establish farms and ranches along the river. Early ranchers found many advantages to settling in Chaffee County: fertile bottomland, public domain grasslands, a temperate climate, water for irrigation and a good market in the mining boomtowns and Front Range cities. Ranchers drove their cattle to the high country for summer grazing while the irrigated pastures produced alfalfa, hay, oats, wheat and barley.
ECONOMY
In 1939 a federal Works Progress Administration (WPA) project built the Salida Hot Springs complex. Monarch Ski Area was another WPA project, which officially opened in 1939, though skiers had been using the mountain since 1914.
The corrections business has been a consistent employer in Chaffee County since 1891, when the first state reformatory was built near Buena Vista. The facility began as a reformatory for juvenile offenders housing between 94 and 153 juvenile inmates in its first two decades of operation. In 1978 it became an adult, medium-custody facility: the Buena Vista Correctional Facility (later renamed the Buena Vista Correctional Complex). The site now has capacity for up to 1,259 inmates and is one of the largest correctional facilities in the state.
No longer a mining community or transportation hub, Chaffee was missing a strong economic and cultural identity into the early 20th Century. Agriculture and ranching, however, continued to be a mainstay of the area’s character. Like many communities in Colorado, Chaffee’s economy shifted to tourism, catering to visitors who traveled there to take advantage of the area’s abundant natural resources and recreation opportunities.
The Chaffee County Economic Development Corporation (CCEDC) was formed in 2009 and hired its first director part-time in 2010. Over the past decade the CCEDC has focused its efforts on developing and strengthening the economic development “ecosystem” in Chaffee County.
2000 Comprehensive plan
The recent generation of planning began in the late 1990s in response to the County’s rapid growth, abundance of available land, and lack of water. This prompted the creation of the 2000 Comprehensive Plan to help guide future development. The Plan placed emphasis on focusing residential and commercial growth around the existing towns of Buena Vista, Salida, and Poncha Springs. It stated that growth in the unincorporated county should only occur where infrastructure was available to accommodate it. The preferred development scenario served as an update to the future land use map.
The Plan additionally addressed the County’s financial infeasibility to provide infrastructure and services, increasing prices in the housing market, and loss of agricultural land.
Implications for the 2020 Comprehensive Plan
The 2000 Comprehensive Plan was written at a similar moment in Chaffee’s growth and development as is seen in 2020. The pace of growth has exceeded that seen in the late 1990s, and the potential for more growth is imminent.
Although the 2000 Plan had aspirations of focused growth, it was not followed by an updated land use code that legally enforced growth in existing towns and connected to available infrastructure. Consequently, 75% of new housing in Chaffee County was built in the unincorporated area from 2000 to 2015. Additionally, although the 2000 Plan offered support for a program to purchase and sell development rights on agricultural lands, it never designated receiving areas for the transfer of these rights, leaving the program obsolete.
Chaffee County Heritage Area & Collegiate Peaks Scenic Byway and Historic Byway Management Plan 2008
The Chaffee County Heritage Area & Collegiate Peaks Scenic and Historic Byway Management Plan was created to protect and preserve the distinct natural environment and the cultural, historical and recreational resources unique to Chaffee County. It is both the state-mandated Corridor Management Plan for the Collegiate Peaks Scenic Byway and the County-wide Heritage Area Management Plan.
The Plan addresses such issues as growth and development, land use policy, scenic character, recreational conflicts, lack of knowledge of significant resources, economic viability, wayfinding and visitor amenities. It recommends the pursuit of key funding strategies through Federal and State grants for the protection of historic and natural resources. The Plan includes an inventory of natural, cultural, historic, and
archaeological resources and recommends that the County provide education to residents and tourists alike as part of the management plan.
**Implications for the 2020 Comprehensive Plan**
Resources inventoried in the Heritage Area & Byway Management Plan were considered in the Future Land Use mapping of this Plan.
**Chaffee County Citizen’s Land Use Roundtable Recommendations 2008**
The Citizen’s Roundtable was formed in an effort to update Chaffee County’s land use code and provided recommendations to improve County land use patterns and procedures. These recommendations were approved by the Board of County Commissioners in 2008. The Roundtable recommendations included the creation of new land use classifications and their proposed densities, and the implementation of tools to encourage and financially motivate ranchers to seek alternatives to land subdivision.
The Roundtable recommended focused growth by planning appropriate land uses in and around existing towns and focusing commercial and industrial development in or near existing developed commercial areas to preserve the unique community character of Chaffee County. The Roundtable directed the County to promote clustered development and allow higher densities in appropriate areas to leave more land in productive use. Clustering incentives could include a more streamlined approval process, negotiable open space (to promote quality, not quantity), increased density and lower fees.
Land planning staff was directed to develop a voluntary Agricultural Overlay District that allows site-specific higher density, flexibility in land sales as well as some additional commercial land uses.
**Implications for the 2020 Comprehensive Plan**
The Roundtable continued the ideas brought forth in the 2000 Comprehensive Plan, and offered regulatory tools and incentives to promote focused growth. Recommendations from the Roundtable fed directly into the regulatory framework provided in this plan.
**Chaffee County Housing Needs Assessment 2019**
The Chaffee County Housing Needs Assessment provides an understanding of the dynamics, interdependencies, and the face of County housing needs. It identified unaffordable housing prices, inventory shortages, and an increase in second home buyers and investors as the issues causing the most strain on the County’s housing market. The analysis found that the greatest need for housing is for households earning less than 60% AMI, as the supply of these units has decreased – partially due to the increase in the cost of new construction and the lack of available sites with higher-density zoning.
**Implications for the 2020 Comprehensive Plan**
Recommendations in the Housing Needs Assessment were introduced into the comp plan update wherever feasible. The assessment provided a basis of knowledge for exploring specific challenges and opportunities in the HPAC workshop with stakeholders.
**HISTORY OF ZONING SUMMARY (RELATED TO LOT SIZES)**
| Zoning Resolution | Residential (R-1) | Agricultural Residential (AR) | Agricultural Suburban (AS) | Agricultural (A) | Recreation (RC) | Commercial (C) | Industrial (I) |
|-------------------|------------------|-------------------------------|---------------------------|-----------------|----------------|---------------|--------------|
| 1974 Zoning Resolution | 2 acres* | 10 acres | 10 acres | 35 acres** | 10 acres | No minimum | No minimum |
| 1979 Zoning Resolution | 2 acres* | 10 acres | 2 acres | 35 acres** | 2 acres | No minimum | No minimum |
| 1984 Zoning Resolution | 2 acres* | 2 acres* | 10 acres* | 35 acres** | 2 acres | No minimum | No minimum |
| 1990 Zoning Resolution | 2 acres*** | 2 acres*** | 2 acres*** | 5 acres for agriculture, 2 acres for rural residential | 5 acres for agriculture, 2 acres for rural residential | 2 acres | 2 acres |
*Unless connected to public water/sewer system OR Planned Unit Development (PUD)
**Unless active mining claim
***1 acre with connection to public water; 1/2 acre with public water AND sewer
**Zoning Resolutions**
A series of zoning resolutions in Chaffee County’s planning history had a dramatic impact on patterns of development, particularly in the agricultural (later renamed “rural”) zones. The minimum lot sizes in these zone districts gradually decreased with each resolution, resulting in the current 2-acre minimum lot size in all zones. The trend began with the **1979 Zoning Resolution**, which changed the minimum lot size in the Agricultural Suburban and Recreation Zones from 10 acres to 2 acres.
The **1984 Zoning Resolution** renamed the Agricultural Zone Districts to “Rural Residential” at 2-acre minimum lots (unless connected to central water and sewer) and “Rural Suburban” at 10-acre minimum lots. The Agricultural Zone, renamed “Rural”, remained at 35-acre minimums. The **1990 Zoning Resolution**, which initiated efforts to create the 2000 Comprehensive Plan, decreased the minimum lot sizes in the Rural Suburban Zone from 10 acres to 2 acres (or 1 acre with public water and ISDS, half an acre with public sewer). This Resolution also shrank the minimum lot sizes in the Rural Residential Zone from 10 acres to 2 acres. The Rural Zone, which had remained at 35-acre minimum lot sizes until 1990, changed to 5 acres for agriculture and 2 acres for rural residential.
In November of 2008, the County adopted **Resolution 2008-69** directing the Planning Commission to revise the land use code to reflect the recommendations of the Citizens’ Roundtable and set the minimum lot size in rural Chaffee at 5 acres. In January 2014, the County adopted, by Ordinance 2014-01, a revised land use code setting the maximum residential density in the rural areas of the County at 1 unit for each 5 acres being developed. It allowed for a variation in lot sizes in a development down to 1 acre as long as the average for the development was 1 unit for each 5 acres.
**CURRENT COUNTY ZONING**
| Zoning Resolution | Residential (Res) | Rural (RUR) | Recreation (REC) | Rural Commercial (RCR) | Commercial (COM) | Industrial (IND) |
|-------------------|------------------|-------------|-----------------|------------------------|------------------|-----------------|
| 2014 Zoning Resolution | 2 acres* | 2 acres* | 2 acres* | No minimum** | No minimum** | No minimum** |
*1 acre with public water; 1/2 acre with public water AND sewer
**With connection to a central sewer or water system
Finally, Resolution 2016-52 included a text amendment to increase density in all zone districts (except Commercial and Industrial, which have no minimum lot sizes) to 1 residence for each 2 acres being developed, and retained the allowed variation in lot size. The clustering density incentive became irrelevant with the 2016 increase in rural density. An increase in 2-acre subdivisions on historically agricultural land, where they had never existed before, created a denser residential landscape.
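As an illustration, the density-averaging rule described in these resolutions can be sketched in a few lines of code. This is a hypothetical sketch for clarity, not part of the County’s land use code; the function and variable names are invented.

```python
# Illustrative sketch (not from the plan): checking a proposed rural
# subdivision against a density-averaging rule. Individual lots may be
# as small as 1 acre, provided the development as a whole averages at
# most 1 unit per N acres (2 acres under Resolution 2016-52, 5 acres
# under Ordinance 2014-01).

def meets_density_rule(lot_sizes_acres, acres_per_unit=2, min_lot_acres=1):
    """Return True if every lot meets the minimum size and the
    development averages at most 1 unit per `acres_per_unit` acres."""
    total_acres = sum(lot_sizes_acres)
    units = len(lot_sizes_acres)
    all_lots_ok = all(size >= min_lot_acres for size in lot_sizes_acres)
    average_ok = total_acres / units >= acres_per_unit
    return all_lots_ok and average_ok

# Four lots totaling 8 acres: the two 1-acre lots are allowed because
# the development still averages 1 unit per 2 acres.
print(meets_density_rule([1, 1, 3, 3]))        # True
# Five units on the same 8 acres averages only 1.6 acres per unit.
print(meets_density_rule([1, 1, 2, 2, 2]))     # False
```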
**Implications for the 2020 Comprehensive Plan**
The decrease in minimum lot sizes in rural zone districts caused a development pattern of scattered small and large lots that impacted the “rural feel” of the County. The public and County leadership look to this plan for a way to preserve open landscapes and agricultural land while protecting private property rights and remaining an open and welcoming community that is attainable for people of all socio-economic backgrounds.
**Chaffee County Community Health Assessment and Health Improvement Plan**
The Chaffee County Community Health Assessment and Health Improvement Plan provides a summary of the health and wellness landscape of the County and a community-wide action plan to address several top health concerns derived from the assessment. The top five concerns identified through the assessment are: lack of affordable housing, availability of providers, lack of assisted living, substance use, and mental health.
The Community Health Improvement Plan recommends pursuing three categories of health as top priorities. The first priority is children’s oral health, with action items that include public water fluoridation and expanded access to oral health care and education. The second priority is behavioral health, where the assessment recommends advancing policy approaches to improve social and emotional health and to prevent prescription drug abuse. The third priority is senior services, with action items that include supporting and promoting existing programs that serve seniors and identifying and filling gaps in senior services.
**Implications for the 2020 Comprehensive Plan**
The top five concerns identified in the Health Assessment and Improvement Plan are congruent with issues identified through engagement for the 2020 Comprehensive Plan Update, and are addressed through the Goals, Strategies and Projects in this plan.
**Envision Chaffee County 2019**
Envision Chaffee County is a community action plan that charts the course for the future of Chaffee County. It defines four vision statements based on the widely adopted values of Chaffee County residents: our forests, waters, and wildlife are healthy and in balance with outdoor recreation; our community members are able to live locally and benefit from a resilient economy; our community remains friendly, engaged, and culturally connected; and we have sustainable agriculture, beautiful rural landscapes, and development focused in and around towns. Envision Chaffee County identified many strategic initiatives to address challenges, one of which was the update to the Comprehensive Plan.
**Implications for the 2020 Comprehensive Plan**
Envision Chaffee County formed the basis for developing the 2020 Comprehensive Plan Update’s six-theme planning framework, which formed the organizational method for most of the plan. Community values heard in the public process for Envision were congruent with those heard in 2020, particularly in relation to preserving open space and affordable housing.
**Chaffee County Community Wildfire Protection Plan 2020**
The Chaffee County Community Wildfire Protection Plan was developed in an effort to harness community momentum and deliver solutions that reduce wildfire risk. This forest
health action plan identifies risks posed by severe wildfire and preventative measures to be taken by the County and its residents to decrease risk and improve forest health. Part of the planning effort included a community survey, which identified that Chaffee County residents are not prepared for a major wildfire event. The plan calls for prevention methods like fuel treatment, and found that treating 5 to 10% of the Chaffee County landscape may reduce the risk that severe wildfire poses to community assets by 50 to 70%.
**Implications for the 2020 Comprehensive Plan**
Fire resiliency planning played a critical role in the development of this plan’s Goals, Strategies and Action Steps. The Community Wildfire Protection Plan provided methods to reduce future hazards by eliminating future growth in hazardous areas and pursuing prevention efforts, projects and resources.
**Chaffee County Clean Energy Plan**
Clean Energy Chaffee (CEC) is a citizen group dedicated to the advancement of clean energy and energy conservation in the County. In March 2020, the group produced a Clean Energy Plan, in which they recommend strategies for Chaffee County to achieve net zero carbon emissions by 2050. This plan should be used as a guiding document regarding energy use in the County.
**MODAL PLANS, PROGRAMS, SURVEYS, STUDIES AND IGAS**
**Chaffee County Trails Master Plan (2003, updated 2014)**
The Chaffee County Trails Master Plan (CCTMP) is a tool for the development of a county-wide trail system. The plan supplements the municipal Comprehensive Plans and assists agencies, groups and individuals with objectives and maps to guide future trail development in the county.
The plan incorporates existing and proposed trail plans and maps from the three municipalities (Buena Vista, Salida and Poncha Springs) and identifies trail opportunities within the county that will connect communities and provide public access to trails on public lands.
The goal of the plan is to create an interconnected trail system in Chaffee County by identifying missing links and connections, identifying opportunities for new trails and interpretation, protecting important access routes, encouraging coordination and cooperation in trail planning efforts and incorporating existing plans into its plan.
**Additional Modal Plans, Programs, Surveys, Studies and IGAs**
Over the years a variety of modal plans, studies, programs, surveys and IGAs have developed within and among Chaffee County municipalities, agencies and organizations.
Examples of plans include the Salida Regional Transportation Plan (2009), the Buena Vista M3P Trail Plan, highway and downtown improvement plans, and parks, open space and trails plans. Community programs have included Share the Road and Safe Routes to School, and surveys and studies have been conducted pertaining to wildlife, water, parking and traffic. Intergovernmental Agreements have also helped coordinate land development and growth within and between the three municipalities and other public properties.
**Implications for the 2020 Comprehensive Plan**
**Chaffee County Multimodal Transportation Master Plan**
The Chaffee County Comp Plan has incorporated the growth trends and strategies of existing plans and recommends the development of the Chaffee County Multimodal Transportation Plan (CCMTP). This forthcoming document, to be adopted by reference in the 2020 Comp Plan, will explore safety, mobility, economic vitality, maintenance and strategic policies, which are addressed in the Transportation Theme in the Chaffee County Comp Plan 2020.
**GLOSSARY OF TERMS**
The following definitions clarify the Comprehensive Plan’s descriptions for commonly used terms found in this plan.
**Affordable Housing:** For sale or for rent housing units are considered affordable if individuals and families spend no more than 30% of their income on housing expenses.
**Area Median Income (AMI):** The middle point of an area's income distribution for families. Half of families in an area will earn more than the median and half earn less than the median.
**Attainable Housing:** A descriptive term referring to housing that is supplied in amounts and types that are attainable to all members of the community. It refers to the composition of housing supply rather than a housing type intended for an individual or group. Generally, attainability means housing costs do not exceed 30% of gross household income.
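The 30%-of-income threshold used in the affordability definitions above can be illustrated with a small calculation. This is a hypothetical sketch for clarity, not a formula from the plan; the function names are invented.

```python
# Illustrative sketch: applying the 30%-of-gross-income affordability
# threshold from the definitions above.

def is_affordable(annual_income, monthly_housing_cost, threshold=0.30):
    """Housing is considered affordable when monthly housing expenses
    do not exceed 30% of gross monthly income."""
    monthly_income = annual_income / 12
    return monthly_housing_cost <= threshold * monthly_income

def max_affordable_cost(annual_income, threshold=0.30):
    """Largest monthly housing cost that stays within the threshold."""
    return threshold * annual_income / 12

# A household earning $48,000/year can spend up to $1,200/month.
print(max_affordable_cost(48_000))     # 1200.0
print(is_affordable(48_000, 1_100))    # True
print(is_affordable(48_000, 1_500))    # False
```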
**Connectivity:** The physical characteristics of the transportation system that promote a complete, efficient network for all modes of transport.
**HPAC:** The Housing Policy Advisory Committee, an organization of volunteers working to improve the availability of housing in Chaffee County.
**Infill Development:** Development or redevelopment of vacant or underutilized lands that are generally in previously developed areas, as opposed to agricultural areas.
**Market Rate Housing:** Housing units that are designed and supplied at fair market rents or sales prices within the real estate market.
**Mixed Use Development:** A project or development that incorporates more than one unique land use on a parcel or development site.
**Mobility:** The capability of a person to move across the landscape between destinations and activities on all forms of transportation.
**Resiliency:** The capability of the County’s systems, infrastructure, or its inhabitants to maintain a quality of life through unforeseen shocks and stresses, while positively adapting and transforming towards a more sustainable future.
**Smart Growth:** A way to develop a community that encourages a mix of building and land uses, diverse living and mobility options, infill development, density where appropriate, and high public interaction.
**Sustainable Development:** Strategic projects, programs and policies that provide both short and long-term solutions to benefit the people, environment and economy of the Chaffee community.
**TAB:** The Transportation Advisory Board consists of volunteers who work to provide advice on matters associated with transportation planning and implementation of transportation services.
**WUI:** Wildland Urban Interface. The fringe area where urban/residential areas meet with undeveloped land and vegetation.
This document consists of downloaded copies of 11 articles (numbers 21-31) published in the electronic journal "Educational Policy Analysis Archives" during 1999: (1) "Facing the Consequences: Identifying Limitations of How We Categorize People in Research and Policy" (Cynthia Wallat and Carolyn Steele); (2) "Teachers in Charter Schools and Traditional Schools: A Comparative Study" (Sally Esposito, Rick Ginsberg, and Brian Cobb); (3) "Academic Approval and Review Practices in the United States and Selected Foreign Countries" (Don G. Creamer and Steven M. Janosik); (4) "Autonomia Universitaria no Brasil: Uma Utopia?" (Maria de Lourdes de Albuquerque Favero); (5) "The Quality of Researchers' Searches of the ERIC Database" (Scott Hertzberg and Lawrence Rudner); (6) "Solving the Policy Implementation Problem: The Case of Arizona Charter Schools" (Gregg A. Garn); (7) "Homeschooling and the Redefinition of Citizenship" (A. Bruce Arai); (8) "Project Hope and the Hope School System in China: A Re-evaluation" (Samuel C. Wang); (9) "Block Scheduling Effects on a State Mandated Test of Basic Skills" (William R. Veal and James Schreiber); (10) "Grade Inflation Rates among Different Ability Students, Controlling for Other Factors" (Stephanie McSpirit and Kirk E. Jones); and (11) "Children's Rights and Education in Argentina, Chile, and Spain" (David Poveda, Viviana Gomez, and Claudia Messina). (SLD)
Education Policy Analysis Archives
(Articles 21 thru 31)
editor:
Gene V. Glass
Arizona State University
| Article | Title | Pages |
|---------|----------------------------------------------------------------------|-------|
| 21 | Wallat & Steele: Facing the Consequences: Identifying the Limitations of How We Categorize People in Research and Policy | 18 |
| 22 | Bomotti, Ginsberg & Cobb: Teachers in Charter and Traditional Schools: A Comparative Study | 27 |
| 23 | Creamer & Janosik: Academic Program Approval and Review Practices in the United States and Selected Foreign Countries | 19 |
| 24 | Fávero: Autonomia Universitária no Brasil: Uma Utopia? | 15 |
| 25 | Hertzberg & Rudner: Quality of Researchers' Searches of the ERIC Database | 14 |
| 26 | Garn: Solving the Policy Implementation Problem: The Case of Arizona Charter Schools | 22 |
| 27 | Arai: Homeschooling and the Redefinition of Citizenship | 23 |
| 28 | Wang: Project Hope and the Hope School System in China: A Re-evaluation | 21 |
| 29 | Veal and Schreiber: Block Scheduling Effects on a State Mandated Test of Basic Skills | 15 |
| 30 | McSpirit and Jones: Grade Inflation Rates among Different Ability Students, Controlling for Other Factors | 19 |
| 31 | Poveda, Gómez & Messina: Children's Rights and Education in Argentina, Chile and Spain | 26 |
Education Policy Analysis Archives
Volume 7 Number 21
July 8, 1999
ISSN 1068-2341
A peer-reviewed scholarly electronic journal
Editor: Gene V Glass, College of Education
Arizona State University
Copyright 1999, the EDUCATION POLICY ANALYSIS ARCHIVES.
Permission is hereby granted to copy any article if EPAA is credited and copies are not sold.
Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.
Facing the Consequences: Identifying the Limitations of How We Categorize People in Research and Policy
Cynthia Wallat
Florida State University
Carolyn Steele
Florida State University
Abstract
Social policy researchers and policy rules and regulation writers have not taken advantage of advances in assessing ways in which social representations of ideas about people can convey alternative explanations of social life. During the past decade a growing number of scholars have considered how representational practices and the representations that are outcomes of such practices have value. Neglecting to consider representational practices has consequences including failure to mobilize and sustain alternative ideologies that reject narrow perspectives on families and communities. As evidenced by recent OMB rulings on census categories, the dominant sense of meaning of population—and hence family and community—is quite similar to the 17th century sense of people as objects of a particular category in a place from which samples can be taken for statistical measurement. However, the contrastive analysis presented in this paper points out how sustained attention to consequences of use of sets of information categories collected to enumerate population to inform social policy can still materialize. In the wake of federal welfare reform, policy makers are particularly interested in questions of benefit relative to social service delivery and community revitalization. The presentation includes lessons learned from several dozen family, youth, school and community research projects.
Introduction
During the past few years, the population categories of race, ethnicity, and gender have been scrutinized by legal and political institutions, as well as social science disciplines and associations (e.g. Begley, 1995; Hollins, King & Hayman, 1994; Hill & Greenhalgh, 1997; Hughey, 1998; Hutchinson & Smith, 1996; Schlosberg, 1998). Acting on recommendations presented by Members of the Presidential Advisory Board on Race, known as the President's Council for One America, the fiscal 2000 budget included a proposal to create new types of social science population data that will provide ways to measure racial bias in everyday life and educate the public about population categories such as racial and ethnic groups (Ross, 1999; Watson, 1998). At the same time, Federal Courts are reexamining the nature and legitimacy of principles of public justification of decades-old consent decrees that led to dividing public school populations into different groups (Siskind, 1994). In academic arenas, the goal of formulating a knowledge base for teaching about diverse populations has been judged inadequate on several counts. "A major element in the confusion and conflict surrounding the field of 'ethnic phenomena' has been the failure to find any measure of agreement about what the central concepts of ethnicity signify or how they should be used" (Hutchinson & Smith, 1996, p. 15). Assessment of the analytical contributions of idioms of population such as pluralism and multiculturalism has also been negative. One set of negative judgments is that continued concern with technical matters of demography fails to advance understandings of renewed ethnic polarizations and the conditions in which numerous ethnic, religious or cultural groups coexist within a society (e.g. Greenhalgh, 1995; Higham, 1998; McNicoll, 1994; Schlosberg, 1998; Webster, 1997).
Representatives of multiple social science disciplines argue the need for policy scientists to remake population analysis by incorporating historical contingency and societal specificity in narrative modes of explanation. Schlosberg (1998) argues that such approaches provide "an acknowledgment of multiplicity—an openness to ambiguity and the differences it spawns" (p. 603). Restating McNicoll's (1992) plea for a demography for a more turbulent world, Greenhalgh (1995) calls for policy researchers to direct audiences' attention to studies that attempt multilevel analysis to provide explanations that embrace "not only the social and economic, but also the political and cultural aspects of demographic change" (p. 49). Greenhalgh (1995) raises the question: How can the agenda of studying population as a phenomenon of interest across social science disciplines be contextualized in the social and economic terms of demography and in political and cultural terms as well?
Overview
In this article, we provide examples of current work in social science disciplines which addresses the policy research argument that understanding the impact of changes in human numbers on social and
cultural life requires moving beyond current standards of empirical categories. For example, the United Nations suggests enumeration of the structure of the world's populations and their patterns of change involves collecting information on at least 4 sets of empirical facts: (1) Demographic, including sex, age, marital status, birthplace, place of usual residence, relationship to head of household, number of children; (2) Economic or type of activity, occupation; (3) Social and Political, including language, ethnic or religious affiliation; (4) Educational including literacy or level of education, school attendance (cf. "census" Encyclopedia Britannica Online, 1999).
The meaning of these sets of words and ideas about people is taken for granted and used as a referent in social policy, courts and other legal institutions to link the individual with society. Yet few researchers make clear how their categorization and measurement of individuals along social identity and ethnic lines is linked to a conceptual foundation or theoretical base. "While conceptually researchers are pointing to the dynamics and fluid nature of ethnicity, empirically they are measuring ethnicity [and social identity] as a static entity" (Leets, Giles & Clement, 1996, p. 11).
The common tendency has been to use measurement categories such as those suggested by the United Nations to project that the world will include 6 billion people in the 21st century. Such projections are made without examination of just what it is about standard categories of human numbers that will impact social life (Kertzer, 1995). Consequently, policy researchers point to a need for exploring how different categories of people are linked to different communicative practices (Wallat & Piazza, 1991; 1997). One argument is that a focus on "plurality of meanings" and "variable functions of communication" could bring attention to both internal and external influences on the "construction of the subjectivity that group membership and citizenship built upon" (Schlosberg, 1998, p. 160). Practices of communication as a key issue in policy research are proposed as a strategy to: (a) affirm the theoretical richness of available notions of pluralism such as "the irreducible plurality of the social realm" (cf. Schlosberg, 1998, p. 586), and (b) provide "an acknowledgment of multiplicity—an openness to ambiguity and the differences it spawns" (cf. Schlosberg, 1998, p. 603).
Reconsidering the need within social science to expand its discursive practices to address the consequences of the projected 21st century number of 6 billion people on economy, government and society is also a current focus of the American Anthropological Association (Hill & Greenhalgh, 1998). Marking 1998 as the bicentennial of the publication of "Essay on the Principle of Population as it Affects the Future Improvement of Society," association members have reminded social scientists that the empirical observations on the realities of poverty reported by Thomas Malthus in 1798 have defied attempts to identify factors that increase the likelihood that institutional adaptation will occur fast enough to deal with current and prospective populations (Bean, 1990, p. 27).
The American Anthropology Association Annual Program Meeting Chair Susan Greenhalgh suggested that population questions,
including, Who is counting whom? Why is counting taking place? and, How are the variables constructed?, can be reformulated and addressed as areas of inquiry. Examples of such areas for examination include: (a) Population categories as pattern, that is behavior conceptualized as social organization and culture change, (b) Population categories as discourse, that is how notions of discourse shape construction of discursive categories, (c) Population categories as politics, that is attention to the negotiations and contestations surrounding population as an issue or problem.
Commentaries by members of the association on the proposed questions provide further suggestions on how they might be developed as a framework for analysis of social science literature. Charles Briggs (1998), for example, suggests focusing on the extent to which public discourse terms can be taken in a marked sense, as issues of standard population measurement versus representations of populations as contested categories of cultural, political and economic power. Such contrastive analysis could provide examples of the extensive variety of ways of seeing and interpreting the study of humankind.
The work reported in this article is organized to address these questions, areas of inquiry, and framework for analysis. For our purpose, a contrast between population as a marked term and representations of population is as follows: the marked sense of population is what can be learned about a social-political construct enacted in legislation as social control indicators that are countable, manageable and amenable to manipulation in policy prescription; representations in observational studies include what has been learned from accounts of the consequences of social control statistics of populations such as ethnicity on understanding individuals' development of social identity. We propose that policy analysis can take advantage of how advances in assessing social representations of people convey alternative explanations of social life. We point to examples of recent ethnographies that illustrate consequences of use of prevailing categories of the substance of people embedded in social policy. In the wake of federal welfare reform, policy makers are particularly interested in questions of benefit relative to family and community revitalization and possible misdirection of funding contingencies. For example, The Congressional Record provides hundreds of references for the terms "youth" and "community services" in policy debates and appropriation hearings (http://thomas.loc.gov). Our presentation includes findings from studies of youth organization projects supported through such policy initiatives. Overall, the findings from studies of youth organizations and dominant health and education institutions suggest that the formulation of appropriation rules and regulations for adolescent members of American families may be misdirected by standard categories of people. Ethnographers of schools and communities illustrate how young people represented in policy as populations at risk are resisting pejorative values embedded in such appropriation categories.
Rather, they portray their styles of social and individual identity in ways that leave ethnic and racial population categories behind (e.g. Davidson, 1995; Heath & McLaughlin, 1993; McCarthy, 1997; Miron, 1996; Munoz, 1998). Thus a more anthropologically oriented position, one that avoids a priori assumptions about social identity or community affiliation, is indicated.
**What Do We Mean by Population Categories? Who Is Counting Whom? Why Is Counting Taking Place? How Are the Variables Constructed?**
During the past several decades scholars from a number of disciplines have focused on the practices used across the human sciences to shape and create objects of knowledge such as population. Researchers trace the historical development of ideologies as particular ways of "seeing" and interpreting collective identity to the 17th century (e.g., Popkewitz, 1991, Laosa, 1984). Popkewitz highlights tensions which have accompanied the intersection of knowledge, power and historically situated practices in the following way:
Beginning in the 17th century, there was a shift from a classical view in which [a] word was representative of the object [observed] to a world in which people [were attributed with the capacity to] reflect and be self-conscious about their historical conditions. A view of change occurred that tied progress to reason...and systematic human intervention to social institutions. The new sets of relations between knowledge and social practice inhered in a variety of social relations. Accompanying the emergent [ideology indexed as the] Enlightenment was the creation of the nation-state, where, for the first time, people were assigned a collective identity that was both anonymous and concrete. Abstract concepts of...constitutional, democratic rules produced new sets of boundaries, expectations, and possibilities of the general notion of citizen. At the same time, people could be considered in specific and detailed ways as populations that could be characterized into subgroups distinct from any sense of the whole. The concept of population made possible new technologies of control, since there was greater possibility for the supervision, observation, and administration of the individual. (p. 32)...
*People* came to be defined as populations that could be ordered through the political arithmetic of the state, which the French called *statistique*. State administrators spoke of social welfare in terms of biological issues such as reproduction, disease, and education (individual development, growth, and evolution). Human needs were seen as instrumental and empirical in relation to the functioning of the state. (p. 38)
Laosa (1984) cited policies established over the past 400 years in which children, youth and families were defined by a variety of ancestry ties, codified as people in treaties and laws, and denied opportunities to deal with their social and economic subordination (cf.
p. 7). As evidenced by recent OMB rulings on census categories, the dominant sense of meaning of population—and hence family and community—is quite similar to the 17th century sociologists' sense of "population" as objects of a particular category in a place from which samples can be taken for statistical measurement. In contrast to the 100 plus possible social identity representations identified in the 1980 *Harvard Encyclopedia of American ethnic groups* (Thernstrom, 1980) and the 1998 *Atlas of American Diversity* (Shinagawa & Jang, 1998), the year 2000 census information will delimit the meaning of population to five minimum categories for data on race and two categories for data on ethnicity (i.e., American Indian or Alaska Native, Asian, Black or African American, Native Hawaiian or Other Pacific Islander, White, Hispanic or Latino).
In 1995, Ruth McKay prophesied that delimitation of social identity would continue to occur in standards for the classification of federal data on population because of conceptual and affective problems that occurred in interviews conducted to try out new versions of race and ethnicity questions. "Many respondents were uncomfortable answering any question about race, because they feared the questionnaire was really about racism, and...a covert attempt to learn if they were really racist" (McKay & de la Puente, 1995, p. 4). Interview questions were based upon a technical frame of reference for collection of data needed to monitor policy prescriptions rather than local knowledge (cf. Pike, 1954). Questions asked included, "Please tell me what you think is the most important characteristic that defines race [and] Do you think there is any difference between race, ethnicity, and ancestry?.... Several respondents thought the [interviewer] was asking about the ethical character of races. One [person] thought the word 'characteristic' meant that we were asking about [their] character" (McKay & de la Puente, 1995, p. 4). Hence, by law and policy, U.S. population means the marked standards designed by the Office of Management and Budget for collecting data on the race and ethnicity of broad population groups in this country, "and are not anthropologically or scientifically based" (Office of Management and Budget, 1997).
Examination of Congressional bills during 1997-1998 (*http://thomas.loc.gov*) also suggests that population issues will continue to be legislatively framed as population management, family planning, and ancestry and social-economic identity. We question whether continued reliance on a technical base for policy evaluation perpetuates the use of stereotypes. To counter myths or broad social meanings that shape experience and evaluation of attributes requires finding ways to "pay attention to the particulars, the specifics, the concrete reality, with all its blemishes and contradictions" (Lye, 1997, p. 2).
Under these circumstances, attempting to counter prevailing population ideology by further engaging in examining "practices of decoding and re-encoding, of translation and interlocution, and of rhetorical deconstruction" (Brown, 1995, p. 13) may seem foolhardy. Yet, Charles Goodwin (1994) argues that the phenomena of legal argumentation surrounding social policies be subjected to further
attention as objects of knowledge that members of the profession can contest. In his article "Professional Vision," Goodwin illustrates how the activities of coding, highlighting, and producing and articulating ways of seeing and interpreting can be applied to the politics of representation. He believes this may occur as the following three questions are reformulated in a new era of studies on discursive practices used across social science: (a) What are the conditions in which modes of representation are accepted in social science and humanities as objective, valid, or legitimate? (b) How are accounts of social norms made adequate to their respective purposes and audiences through discursive and political practices? (c) How can sustaining interest in rhetorical analysis of genres or texts be directed towards attention to claims, proofs, and propositions as well as to the communicative contexts in which "members of a profession hold each other accountable and contest the constitution and perception of the objects that define their professional competence" (p. 606)?
Richard Brown (1995) has also produced a collection of arguments by anthropologists and sociologists to persuade others to make problematic the construction and presentation of representations by focusing on the how of representation--of objectivity, of native view, of group, of culture--and so forth (p. 13). The unifying perspective presented by Brown is that an emphasis on deconstruction and rhetorical analysis may counter current pessimism and suspicion flagged in both academic and public discourses on the limits of social science (cf. Wallat & Piazza, 1999).
According to John Van Maanen (1995), however, the consequences of the introspection of written representations of culture produced by specific ethnographers since the 1960s, as well as of the spread of methodological self-consciousness across the "cultural representation business," remain to be seen. What is needed are examples of how this turn towards displaying the problems that social science representations face, and towards cracking open representational practices, alters--if at all--traditional practices in educational, community, and legal arenas (cf. Van Maanen, 1995).
The following section provides a compilation of such examples.
**Focus on the Extent to Which the Term Should Be Taken in its Marked Sense, As Issues of Population versus the Representations of Populations**
The value of Charles Briggs' advice is beginning to emerge in studies of school populations. Briggs recommended developing critiques of the concept of population through contrastive analysis: the marked sense of the term in legal documents, such as government standards for the classification of federal data on race and ethnicity, set against representations of populations that may demystify such standards by drawing attention to particulars of family and community experiences. Such contrastive analysis is possible because primary sources for reviewing school population issues, as they are marked in reports developed by the National Center for Education Statistics (*http://nces.ed.gov*) through funds appropriated to this agency, can be read alongside a growing number of published collections of life experience narratives.
Recent ethnographies of African American and Asian American students and their teachers, families, and communities (e.g., Fordham, 1996; Lee, 1996) "pay attention to the particulars, the specifics, the concrete reality, with all its blemishes and contradictions" (Lye, 1997, p. 2). A central contribution of such studies is the researchers' ability to point out that a major consequence of population categories in educational domains is that "Whiteness remains the dominant racial ideology, not by promoting Whiteness as superior, but by promoting Whiteness as normative" (Spina & Tai, 1998, p. 36). For example, the population category "at risk youth" continues to be a term synonymous with Black and Latino youth, while Asian American students are represented as "academic superstars." The power of the dominant normative stance "does not stop at simply defining Others.... It supports the assumption that White youth are not all 'at risk' nor are they all 'academic superstars.' This position grants White youth the privilege to determine their own academic destiny" (p. 36).
Reviewers of such ethnographies of students, teachers, families and communities (e.g., Sleeter, 1992) provide a means of publicly contesting limited knowledge of concrete realities and continued use of "prefabricated panethnicity" (Spina & Tai, 1998, p. 40) such as White, Black, Hispanic and Latino in public discourse. Educational researchers are beginning to recognize that more can be learned about "how power lies not in the making of generalizations, but in making generalizations stick" (Spina & Tai, 1998, p. 36). As Greg Urban stated in his response to the year-long *Anthropology Newsletter* discussion on the known and unknown in social science, the question should not be: What is the relationship between the culture being represented in an ethnography and the world? "Rather, because culture is both in the world and about the world, the question [we should be asking participants in our studies to help us explore is] What is the relationship between culture that is out there and culture that is a representation of what [you believe] is out there?" (Urban, 1997, p. 1). Compilations of stories of youth, families and communities, representing individuals' attempts to define their personal and social identity, provide new images of the concept of power through considering how persons receive, resist, contest, or transform dominant representations.
**Facing the Consequences of Traditional Research on Youth Development**
The General Accounting Office (GAO) has identified 131 programs administered by 16 different federal departments and other agencies that direct four billion dollars a year at communities represented as disadvantaged to support the creation of empowerment zones, comprehensive community services delivered through schools, gang prevention efforts, and programs that serve runaway or delinquent youth (U.S. General Accounting Office, 1996). A study
panel that produced the 1996 National Research Council (NRC) report "Youth Development and Neighborhood Influence: Challenges and Opportunities" (Chalk & Phillips, 1996) considered the long term gains and consequences of such federal support and concluded that investments in social strategies and community resources to promote youth development require "more attention to the types of social resources that youth seek out and create, as well as consideration of the ways in which youth gain information and control over their environment" (p.25).
The study panel also noted that such efforts require shifting from a priori problem categories, such as delinquency and dropping out of school, to social setting perspectives and approaches that may stimulate "interest in recognizing how adolescents themselves perceive role models of successful adult behavior, how they protect themselves during periods of danger or uncertainty, and how they seek out individuals or groups that constitute community assets capable of helping" (Chalk & Phillips, 1996, p. 7).
The NRC report noted the contribution of private foundations to research and development efforts along these lines, as well as pointing out that ethnographic research has alerted social science to new possibilities for family and community research and policy. The study panel noted that research efforts that rely on demographic and census data to assess change and development within neighborhoods and to examine pathways by which ethnicity and racial heritage messages affect youth development "have revealed many uncertainties in understanding how teenagers negotiate critical transitions...the formation of self identity, and the selection of life options" (p. 3). Private foundation projects were cited as examples of ways of dealing with issues in the concept of population, of formulating new policies on children, youth and families, and of crafting new lines of research inquiry highlighting the need to integrate the children, youth and family development literature with research on community development and organizations. Efforts mentioned include the Casey Foundation's nationwide Kids Count project to identify model programs and policies (http://www.aecf.org), the Ford Foundation's Community Revitalization programs (http://www.fordfoundation.org), the Carnegie Foundation on Adolescent Development (http://www.carnegie.org), and foundation sponsored research grants programs.
One such foundation's research grants program provides an excellent example of the questions, areas of inquiry, and framework for analysis described in the introduction section of this paper. The Spencer Foundation (http://www.spencer.org) supported a five-year study of 60 different organizations described by local city officials as located in "'the projects,' 'the barrio,' or, alternately 'communities suffering from poverty, crime, [and] severe ethnic tensions'" (Heath & McLaughlin, 1993, p. 5).
The project called "Language, socialization, and neighborhood based organizations," included exploring how members of neighborhood based organizations in the 1990s perceive their social settings, as well as tracing 20th century family and youth policy
notions (James, 1993). Fundamental differences among the crafters of youth policy and the youth from 60 different organizations who participated in this study ranged from perspectives on the role of ethnicity to the types of processes and structures that set up contingent attributes of valuable life experiences. Youth avoid programs defined in terms of population policy labels that treat people as statistical object categories, such as reduction in crime or lowered rates of school dropouts. Youth do not elect to participate in programs that label them as deviant, 'at risk,' or in some way deficient or negative. "'What works' for inner city youth conforms to the contexts in which an activity is embedded and to the subjective realities of the youth it intends to advance, not to distant bureaucratic directives" (p. 227).
**Summary**
The formulation of population categories to aid in understanding the nature of people and the properties of sociocultural systems holds consequences for social science and public policy. Following scientific conventions, the many things that can be said or predicated of objects of inquiry can be subject to criticism of method and substance. Correspondingly, difficult questions have been raised for centuries about procedures for observing events, processes or phenomena glossed as the study of human nature. However, changes in the world we have lived in during the past few decades have brought a host of new, more concrete issues into the social science intellectual agenda (Greenhalgh, 1995). Concepts, categories and representations of people are being scrutinized in terms of how events, processes or phenomena are ordered and denoted.
Major consequences of the realities of funding formulas based upon statistical meanings of people that began taking hold in the 17th century are being uncovered as organizations attempt to serve youth and families. The challenge to explicate and use local knowledge, in contrast to relying on a priori categories of people, in the design and delivery of services is being formulated in reports of personal life experiences of health, education and social service providers and the children, youth and families who constitute the pluralistic community of these dominant social institutions. Such life experience stories explicate patterns of exclusion, as well as elicitation methods for overcoming patterns of silence about exclusion (e.g., Davidson, 1996; McCarthy, 1997; Miron, 1996; Munoz, 1995; Olsen, 1997; Pang & Cheng, 1998; Spindler & Spindler, 1994; Wallat & Steele, 1997).
As Kenneth Pike (1954) pointed out nearly a half century ago when he introduced the concepts of emic and etic knowledge, documenting the structure of local knowledge, including how individuals receive or resist dominant representations such as ethnic identity, stands in sharp contrast to continuing to document categories of people marked by statisticians as a means of collecting technical descriptions of objects.
**Note**
Portions of this paper were presented at the American Anthropological Association Annual Meeting, December 1998, Philadelphia.
References
Bean, F. D. (1990, April 1). Too many, too rich, too wasteful. *The New York Times*, Section 7, p.27.
Begley, S. (1995, February 13). Three is Not Enough: Surprising New Lessons From the Controversial Science of Race. *Newsweek*, pp. 67-69.
Briggs, C. (1998). Linguistic anthropology and "Population and the Anthropological imagination." *Anthropology Newsletter*, 39 (2), 45.
Brown, R. H. (Ed.). (1995). *Postmodern representations: Truth, power, and mimesis in the human sciences and public culture*. Chicago, IL.: University of Illinois Press.
Census. *Encyclopedia Britannica Online*. [Online] Available: http://www.eb.com:180/bol/topic?/eu=22402&sctn=2
Chalk, R. & Phillips. D. A. (Eds.). (1996). *Youth development and neighborhood influences: Challenges and opportunities*. Washington, D.C.: National Academy Press.
Davidson, A. L. (1996). *Making and molding identity in schools: Student narratives on race, gender, and academic engagement*. Albany, N.Y.: State University of New York Press.
Fordham, S. (1996). *Blacked out: Dilemmas of race, identity, and success at Capital High*. Chicago, IL.: The University of Chicago Press.
Goodwin, C. (1994). Professional vision. *American Anthropologist*, 96 (3), 606 - 633.
Greenhalgh, S. (Ed.). (1995). *Situating fertility: Anthropology and demographic inquiry*. New York, NY: Cambridge University Press.
Heath, S. B. & McLaughlin, M. W. (Eds.). (1993). *Identity and inner city youth: Beyond ethnicity and gender*. New York: Teachers College Press.
Higham, J. (1998). Multiculturalism and Universalism: A History and Critique. In M. W. Hughey (Ed.), *New Tribalisms: The Resurgence of Race and Ethnicity* (pp. 212 - 236). New York, NY: New York University Press.
Hill, J. & Greenhalgh, S. (1998). Population and the Anthropological imagination: 97th AAA Annual Meeting Theme. *Anthropology Newsletter*, 39 (1), 1,7.
Hollins, E., King, J. & Hayman, W. (Eds.) (1994). *Teaching diverse populations: Formulating a knowledge base*. Albany, NY: State University of New York Press.
Hughey, M. (Ed.). (1998). *New tribalisms: The resurgence of race and ethnicity*. New York, NY.: New York University Press.
Hutchinson, J. & Smith, A. (Eds.) (1996). *Ethnicity*. New York, NY: Oxford University Press.
James, T. (1993). The winnowing of organizations. In S. B. Heath & M. W. McLaughlin (Eds.) *Identity and inner city youth: Beyond ethnicity and gender* (pp. 176 - 195). New York, N.Y.: Teachers College Press.
Kertzer, D. I. (1995). Political-economic and cultural explanations of demographic behavior. In S. Greenhalgh (Ed.), *Situating fertility: Anthropology and demographic inquiry* (pp. 29-51). New York, NY.: Cambridge University Press.
Laosa, L. M. (1984). Social policies toward children of diverse ethnic, racial, and language groups in the United States. In H. W. Stevenson & A. E. Siegel (Eds.), *Child development research and social policy* (pp. 1 - 109). Chicago, IL.: The University of Chicago Press.
Lee, S. J. (1996). *Unraveling the "model minority" stereotype: Listening to Asian American youth*. New York, NY.: Teachers College Press.
Leets, L., Giles, H. & Clement, R. (1996). Explicating ethnicity in theory and communication research. *Multilingua*, 15, 115-147. [Online] Available: http://www.stanford.edu/~leets/leets.html
Lye, J. (1997). *Ideological analysis: Some questions to ask of the text*. [Online] Available: http://www.brocku.ca/english/courses/4F70/characteristics
McCarthy, C. (1997). *The uses of culture: Education and the limits of ethnic affiliation*. New York, N.Y.: Routledge.
McKay, R. B. (1995, November). *Social and cultural considerations in developing the Current Population Survey Supplement on race and ethnicity*. Paper presented at the meeting of the American Anthropological Association, Washington, D.C. [McKay, R. B. & de la Puente, M. (1995). Cognitive research in designing the CPS Supplement on race and ethnicity. *Proceedings of the Bureau of the Census' 1995 Annual Conference* (pp. 435 - 445). Rosslyn, VA.]
McNicoll, G. (1992). The agenda of population studies: A commentary and a complaint. *Population and Development Review*, 18 (3), 399-420.
Miron, L. F. (1996). *The social construction of urban schooling: Situating the crisis*. Cresskill, N.J.: Hampton Press.
Munoz, V. L. (1995). *Where "something catches:" Work, love, and identity in youth*. Albany, N.Y.: State University of New York Press.
Nordhaus, William. D. (1996, January 14). Elbow room. *The New York Times*.
Office of Management and Budget. (1997). *Revisions to the standards for the classification of federal data on race and ethnicity: Notice of decision*. [Online] Available: http://www.whitehouse.gov/WH/EOP/OMP/html/fedreg/Ombdir15.html
Olsen, L. (1997). *Made in America: Immigrant students in our public schools*. New York, N.Y.: The New Press.
Pang, V. O. & Cheng, L. L. (Eds.) (1998). *Struggling to be heard: The unmet needs of Asian Pacific American children*. Albany, N.Y.: State University of New York Press.
Pike, K. (1954). *Language in relation to a unified theory of the structure of human behavior*. Glendale, CA.: Summer Institute of Linguistics. (Available in J. Van Willigen & B. R. Dewaly (1985). *Training manual in policy ethnography. Special Publication No 19*. Washington, D.C.: American Anthropological Association)
Popkewitz, T. S. (1991). *A political sociology of educational reform: Power / knowledge in teaching, teacher education, and research*. New York, NY.: Teachers College Press.
Ross, S. (1999, February 3). President proposes formal study of racial bias in today's America. *Tallahassee Democrat*, pp. 1A, 5A.
Schlosberg, D. (1998). Resurrecting the pluralist universe. *Political Research Quarterly*, 51 (3): 583 - 615.
Shinagawa, L. H. & Jang, M. (1998). *Atlas of American diversity*. Walnut Creek, CA.: AltaMira Press, A Division of SAGE Publications.
Siskin, L. (1994, July 13). Counting upon the law. *The Recorder: American Lawyer Media*.
Sleeter, C. (1992). The white experience in America: To whom does it generalize? *Educational Researcher*, 17 (3), 13 - 23.
Spina, S. U. & Tai, R. H. (1998). The politics of racial identity: A pedagogy of invisibility. *Educational Researcher*, 27 (1), 36 -40, 48.
Spindler, G. & Spindler, L. (1994). *Pathways to cultural awareness: Cultural therapy with teachers and students*. Thousand Oaks, CA.: SAGE Publications.
Thernstrom, S. (Ed.). (1980). *Harvard encyclopedia of American ethnic groups*. Cambridge, MA.: Harvard University Press.
Thompson, J. B. (1990). *Ideology and modern culture: Critical social theory in the era of mass communication*. Stanford, CA.: Stanford University Press.
Urban, G. (1997). Culture: In and about the world. *Anthropology Newsletter*, 38 (2), 1-7.
U.S. General Accounting Office. (1996). *At-risk and delinquent youth*. (GAO Report No. HEHS-96-34). Washington, D.C.: General Accounting Office.
Van Maanen, J. (Ed.). (1995). *Representations in ethnography*. Thousand Oaks, CA.: SAGE Publications.
Wallat, C., & Piazza, C. (1999). *Critical examinations of the known and unknown in social science: Where do we go from here?*. Manuscript submitted for publication.
Wallat, C. & Piazza, C. (1997). Early childhood evaluation and policy analysis: A communicative framework for the next decade. *Education Policy Analysis Archives*, 5 (15). [Online] Available: http://epaa.asu.edu/epaa/v5n15.html
Wallat, C. & Piazza, C. (1991). Perspectives on production of written educational policy reports. *Journal of Education Policy*, 6 (1), 63 - 84.
Wallat, C. & Steele, C. I. (1997). Welfare reform: The positioning of academic work. *The Qualitative Report*, 3 (1), 1 - 18. [Online] Available: http://www.nova.edu/ssus/QR/QR3-1
Watson, D. L. (1998, May 15). Connerly Gives Heat not Light to National Dialogue on Race. *The Baltimore Sun*.
Webster, Y. (1997). *Against the multicultural agenda: A critical thinking alternative*. Westport, CT: Praeger.
**About the Author**
Cynthia Wallat
Department of Educational Foundations and Policy Studies
306 Stone Building
Tallahassee, FL 32306-4070
E-Mail: firstname.lastname@example.org
Cynthia Wallat is Professor of Social Sciences and Education in the Department of Educational Foundations and Policy Studies at Florida State University. Her research emphases and publications address:
socialization and language; comparative social policy; and institutional and professional development. Her interest in relating language and policy centers on demonstrating how attention to forms of communication and community can address the known and unknown about diversity in and out of school.
Carolyn Steele
School of Social Work
University Center
Florida State University
Tallahassee, FL 32306-2570
Email: email@example.com
Carolyn Steele is Associate Professor in the clinical track curriculum at the School of Social Work at Florida State University. In addition to her teaching, research and practice interests in how the field of human services can broaden its analytical and educational functions in terms of curriculum development related to clinical social work, and the psychological and social problems related to physical and mental health and illness, she is a licensed psychologist, clinical social worker, and marriage/family therapist.
Copyright 1999 by the Education Policy Analysis Archives
The World Wide Web address for the Education Policy Analysis Archives is http://epaa.asu.edu
General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, firstname.lastname@example.org or reach him at College of Education, Arizona State University, Tempe, AZ 85287-0211. (602-965-9644). The Book Review Editor is Walter E. Shepherd: email@example.com. The Commentary Editor is Casey D. Cobb: firstname.lastname@example.org.
EPAA Editorial Board
| Name | Institution/Position |
|-----------------------------|-----------------------------------------------------------|
| Michael W. Apple | University of Wisconsin |
| John Covaleskie | Northern Michigan University |
| Alan Davis | University of Colorado, Denver |
| Mark E. Fetler | California Commission on Teacher Credentialing |
| Thomas F. Green | Syracuse University |
| Arlen Gullickson | Western Michigan University |
| Aimee Howley | Ohio University |
| William Hunter | University of Calgary |
| Daniel Kallós | Umeå University |
| Thomas Mauhs-Pugh | Green Mountain College |
| William McInerney | Purdue University |
| Les McLean | University of Toronto |
| Anne L. Pemberton | email@example.com |
| Richard C. Richardson | New York University |
| Dennis Sayers | Ann Leavenworth Center for Accelerated Learning |
| Michael Scriven            | firstname.lastname@example.org                             |
| Robert Stonehill | U.S. Department of Education |
| David D. Williams | Brigham Young University |
| Greg Camilli | Rutgers University |
| Andrew Coulson | email@example.com |
| Sherman Dorn | University of South Florida |
| Richard Garlikov | firstname.lastname@example.org |
| Alison I. Griffith | York University |
| Ernest R. House | University of Colorado |
| Craig B. Howley | Appalachia Educational Laboratory |
| Richard M. Jaeger | University of North Carolina - Greensboro |
| Benjamin Levin | University of Manitoba |
| Dewayne Matthews | Western Interstate Commission for Higher Education |
| Mary McKeown-Moak | MGT of America (Austin, TX) |
| Susan Bobbitt Nolen | University of Washington |
| Hugh G. Petrie | SUNY Buffalo |
| Anthony G. Rud Jr. | Purdue University |
| Jay D. Scribner | University of Texas at Austin |
| Robert E. Stake | University of Illinois---UC |
| Robert T. Stout | Arizona State University |
EPAA Spanish Language Editorial Board
Associate Editor for Spanish Language
Roberto Rodríguez Gómez
Universidad Nacional Autónoma de México
email@example.com
Adrián Acosta (México)
Universidad de Guadalajara
firstname.lastname@example.org
Teresa Bracho (México)
Centro de Investigación y Docencia Económica-CIDE
bracho.dis1.cide.mx
Ursula Casanova (U.S.A.)
Arizona State University
email@example.com
Erwin Epstein (U.S.A.)
Loyola University of Chicago
firstname.lastname@example.org
Rollin Kent (México)
Departamento de Investigación Educativa- DIF CINVESTAV
email@example.com
Javier Mendoza Rojas (México)
Universidad Nacional Autónoma de México
email@example.com
Humberto Muñoz García (México)
Universidad Nacional Autónoma de México
firstname.lastname@example.org
Daniel Schugurensky (Argentina-Canadá)
OISE U.T. Canada
email@example.com
Jurjo Torres Santomé (Spain)
Universidad de A Coruña
firstname.lastname@example.org
J. Felix Angulo Rasco (Spain)
Universidad de Cádiz
email@example.com
Alejandro Canales (México)
Universidad Nacional Autónoma de México
firstname.lastname@example.org
José Contreras Domingo
Universitat de Barcelona
email@example.com
Josué González (U.S.A.)
Arizona State University
firstname.lastname@example.org
Maria Beatriz Luce (Brazil)
Universidad Federal de Rio Grande do Sul- UFRGS
email@example.com
Marcela Mollis (Argentina)
Universidad de Buenos Aires
firstname.lastname@example.org
Angel Ignacio Pérez Gómez (Spain)
Universidad de Málaga
email@example.com
Simon Schwartzman (Brazil)
Fundação Instituto Brasileiro de Geografia e Estatística
firstname.lastname@example.org
Carlos Alberto Torres (U.S.A.)
University of California, Los Angeles
email@example.com
Education Policy Analysis Archives
Volume 7 Number 22
July 22, 1999
ISSN 1068-2341
A peer-reviewed scholarly electronic journal
Editor: Gene V Glass, College of Education
Arizona State University
Copyright 1999, the EDUCATION POLICY ANALYSIS ARCHIVES.
Permission is hereby granted to copy any article if EPAA is credited and copies are not sold.
Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.
Teachers in Charter Schools and Traditional Schools: A Comparative Study
Sally Bomotti
Rick Ginsberg
Brian Cobb
Colorado State University
Abstract
Teachers from charter and traditional schools in Colorado were queried about their perceptions of their level of empowerment, school climate, and working conditions. Using a cluster sampling design, approximately 160 teachers from 16 charter schools and 100 teachers from seven traditional schools were surveyed with a combination of several well-established instruments measuring empowerment, school climate, and working conditions. Factor analyses yielded three composite variables for each of the three constructs. One-way analyses of variance were used to explore differences in these teachers' perceptions. Results yielded consistent and practically significant differences between charter and traditional school teachers' perceptions of empowerment, school climate, and working conditions. Not all of these differences, however, were consistent with expectations given the educational and legislative contexts driving Colorado's charter school movement. Implications and recommendations for future research are given.
Introduction and Background to the Study
Charter schools are one of the fastest-spreading, most dynamic, and most controversial educational reform movements to emerge in response to widespread demands for better public schools and more school choice. A majority of states have now passed legislation allowing parents, teachers, and community members to start these more autonomous schools, which receive public funds but operate unfettered by most state and local school district regulations governing other public schools. Charter schools attract a diverse array of people who advocate reform of the current public school system for a variety of reasons. But at the heart of the charter school concept is a shared set of assumptions about how and why such schools will improve public education (Corwin & Flaherty, 1995; García & García, 1996; RPP International, 1998; Wells and Associates, 1998). Supporters claim that, in exchange for freedom from burdensome rules and regulations, charter schools will be more accountable for student learning. In addition, charter schools will infuse a healthy competition into a bureaucratic and unresponsive public system by providing more educational choices to parents and students. Because of their enhanced autonomy, they will encourage educational innovation, provide more professional opportunities for teachers, and operate more efficiently than regular public schools. For these reasons, charter schools are also expected to serve as educational research and development laboratories and a spur to reform of the public education system as a whole. The appeal of these ideas is apparent in the speedy rate at which new charter schools are opening. Although counts vary, it is conservatively estimated that, during the 1997-1998 school year, about 781 charter schools were in operation in 23 states and the District of Columbia, serving more than 100,000 students (RPP International, 1998). President Clinton has called for quadrupling the number of charter schools by the year 2002.
This study was designed to examine the claim that charter schools offer teachers opportunities to enhance their professional lives. Charter school laws passed in many states explicitly intend to empower teachers to become more self-directed professionals by providing them with the autonomy, flexibility and authority they need to design new and innovative approaches to teaching and learning (Contreras, 1995; Mulholland & Bierlein, 1993; Wells and Associates, 1998). Advocates suggest that such empowerment means that teachers in charter schools will be encouraged to take on aspects of a more "professional" role outside the classroom. Examples of this more professional role could include exerting greater influence over school-wide decisions and having more say in how they organize their day and how they structure relationships with colleagues (Corwin & Flaherty, 1995). Ultimately, according to this theoretical perspective, these more empowered teachers would be better able to serve their students by creating educational environments that will lead to improved student outcomes (Marks & Louis, 1997).
Research Questions
This study examines whether charter schools provide more professional opportunities for teachers by comparing the perceptions of teachers in charter schools and traditional public schools about aspects of teaching and their work environment. There were three central questions:
1. How do charter school teachers perceive issues of empowerment compared to teachers in traditional public schools?
2. How do charter school teachers perceive aspects of school climate compared to teachers in traditional public schools?
3. How do charter school teachers perceive aspects of working conditions compared to traditional public school teachers?
**The Colorado Charter Schools Act**
When Colorado legislators passed one of the nation's earliest and strongest charter school laws in 1993, they explicitly adopted the perspective that local control of schools and "teacher professionalism" must increase if public education is to improve. The Colorado charter school law is considered "strong" because it includes a mechanism for appealing disputed charter school applications to the Colorado State Board of Education. That is, local boards of education and/or school districts do not alone have final say over whether a charter school will or will not be approved for their district.
According to the state's Charter Schools Act, a charter school in Colorado is a public school operated by a group of parents, teachers, and/or community members as a semi-autonomous school of choice within a school district, operating under a contract between the members of the charter school community and the local board of education. Such schools were purposefully created to provide an avenue for educators and others "to take responsible risks and create new, innovative, more flexible ways of educating all children within the public school system." Essential characteristics of charter schools were to be school-centered governance, autonomy, and a clear design for how and what students learn. Another clearly stated objective was "to create new professional opportunities for teachers, including the opportunity to be responsible for the learning program at the school site." During the 1998 legislative session, the Colorado General Assembly re-authorized the Charter Schools Act without a future sunset, signaling the evolution of charter schools from a reform experiment to a permanent part of the public education infrastructure in Colorado. Another bill, passed in 1999, increased the required amount of state per-pupil allotment going to charter schools from 80% to 95% of the public school average, as well as the base upon which that percentage was calculated. Although charter schools still served only a small percentage of the state's public school students in 1999, the charter schools movement found a receptive audience in Colorado. During the 1998-1999 school year, for example, approximately 60 charter schools were in operation statewide, serving approximately 14,000 students, and it was anticipated that another ten to twelve
schools would open in the 1999-2000 school year (Colorado Department of Education, 1999).
What We Know About Teachers in Charter Schools
Charter schools are still a relatively new phenomenon, and their sheer diversity has made them a difficult subject to study in any systematic way. Significant research on charter schools is just beginning to emerge. Like map-makers in a foreign land, the early charter school researchers have been largely concerned with describing the broad contours of the movement and the new schools it has produced through the collection of descriptive statistics and case studies that provide portraits of different charter schools in various states. Questions have focused on areas such as the variations in charter school laws; the reasons charter schools are started; their educational programs; their start-up problems; school, parent, teacher and student characteristics and satisfaction levels; charter school relations with school districts; and questions about how to measure student achievement and hold charter schools accountable for improved student learning. Although questions about the experiences of teachers are included in most questionnaires and evaluations of charter schools, only a couple of relatively small studies to date have teachers as their primary focus.
Early studies of charter schools have, however, provided preliminary evidence about who teaches in charter schools. Charter school teachers are often younger than their counterparts in traditional schools, have less teaching experience, hold fewer advanced degrees, and are mostly--but not always--certified, although charter school legislation generally does not require certification (Center for Applied Research and Educational Improvement [CAREI], 1997; Colorado Department of Education, 1996; 1998; Finn, Manno, Bierlein, & Vanourek, 1997). Teachers report going to work in charter schools for a variety of reasons, including more freedom and flexibility, family teaching and learning atmosphere, increased decision-making, dedicated staff, and enhanced accountability (Bierlein, 1996). The research suggests that charter school teachers are generally quite satisfied so far with their experiences despite what appear to be some fairly common concerns, such as heavy workloads, inadequate facilities, relatively low salaries and tenuous job security (Bierlein, 1996; Corwin & Flaherty, 1995; Finn, Manno, Bierlein, & Vanourek, 1997; Wells and Associates, 1998). In Colorado, for example, the average teacher salary in 32 charter schools included in the state's most recent evaluation study was $26,802, significantly lower than the $37,240 state-wide average teacher salary (though this may be a by-product of the years of experience of teachers in the different types of schools). Of the teachers who responded to the evaluation questionnaire, 5% were current members of their local teachers association, compared to about 80% statewide. Teachers reported high levels of satisfaction with most key aspects of their schools, but listed as their top concerns inadequate facilities/resources, heavy workload, parents, leadership/board, staff/teachers, and salary/benefits.
In one of the earliest and most extensive studies on charter schools nationwide, researchers at the Hudson Institute (Finn, Manno, & Bierlein, 1996) collected survey data from 521 teachers working in 36 charter schools in 10 states. The researchers found that charter school teachers are a diverse lot who prize what the school is doing, like working in it, and believe it is succeeding. Satisfaction was highest when it came to educational matters (curriculum, teaching, class size, etc.) and lowest when it came to non-educational matters (food, facilities, sports, etc.). Similarly, the Minnesota Charter Schools Evaluation (CAREI, 1997) also found that teachers reported high levels of satisfaction with their charter school experience (81 percent satisfied or very satisfied versus 6 percent dissatisfied or very dissatisfied). About one in four expressed dissatisfaction with the condition of their school building or salaries. However, this evaluation report also noted that compared to teachers nationwide who have completed the same survey, charter school staff members' level of satisfaction is fairly typical for all categories surveyed.
**Theory to Practice: Charter Schools and the Empowerment of Teachers**
Although appearing new to many observers, the charter school concept is actually based on ideas that have been evolving among educational policy-makers, practitioners, and researchers for the past 25 years (Anderson & Marsh, 1998). Most of these ideas revolve around the educational benefits to be derived from the de-centralization of school decision-making and include such related goals as the re-organization of schools around the task of improving teaching and learning, and the need to enhance the professionalism of teachers. Teacher professionalism, in particular, emerged as an educational reform initiative in the mid-1980s and often accompanied policies to increase decision-making authority and accountability at the school level. Recognizing teachers as a source of technical expertise for the improvement of schools, advocates of enhanced professionalism or "empowerment" argued for increasing the authority of teachers over both school and classroom working conditions (Marks & Louis, 1997; Sykes, 1990). A "professional" conception of teaching, as opposed to a more centralized, bureaucratic conception of teaching, came to include such attributes as: (a) school-level decentralized decision-making in democratically governed schools; (b) extensive professional control with collective autonomy and decision-making authority over curricula, school policies, assessment, budget, hiring, and evaluation of peers; (c) collegiality among staff; (d) flexible work schedules; (e) collaborative, on-going professional development of teachers; and (f) accountability as measured by the effectiveness of instruction (Boettiger, 1998; Dusewicz & Beyer, 1988). In short, enhanced teacher professionalism, or empowerment, is generally viewed as teacher participation in all decision-making directed toward carrying out the school's instructional mission, both in the classroom and throughout the school.
Several studies have addressed the issue of teacher professionalism, or empowerment, in charter schools, although there is little agreement as to the definition of the term as it applies to charter schools, or on how to measure it. For example, it is not always clear whether questions about teacher empowerment refer to an enhancement of teachers' long-standing classroom autonomy or increased teacher decision-making in a wider, school-wide arena. Most of the data are based on teacher self-reports with very little use of comparisons from traditional schools. Nevertheless, there is some preliminary evidence that charter school teachers do tend to feel more "professional"—however the term is defined. For example, Shore (1997) explored newly created opportunities for teachers in California charter schools. Based on textual analysis of 86 charter proposal documents and interviews with selected directors and teachers, she concluded that most charter school teachers have primary responsibility for governance, participate in hiring and peer evaluation, experience fewer bureaucratic constraints, and have considerable control over their working environments. Corwin and Flaherty (1995) also asked questions about what roles charter school teachers perform. In an analysis of 230 questionnaires returned by teachers in 66 charter schools operating in California, they found that teachers reported more influence over the curriculum and discipline policy than over grouping students and in-service instruction. Teachers in new (not converted) charter schools, elementary schools, and "high-autonomy" charter schools, in particular, reported having more influence and being less constrained by rules. A high percentage of teachers said they experimented more in the classroom, were freer to teach as they wished, and had more influence over the content and subjects that they teach. Most of these teachers considered the charter structure essential or valuable to changed practice. 
Finn, Manno, Bierlein, and Vanourek (1997) reported that most of the charter school teachers they surveyed were finding "personal fulfillment and professional reward" (Part I, p. 1) and had more chances to be involved with school policy making and planning. More than 90% of the teachers said they were "very" or "somewhat" satisfied with charter schools' educational philosophy, size, fellow teachers and students; more than three-quarters of the teachers said they were satisfied with school administrators, the level of teacher decision-making, and the challenge of starting a new school. More than 70% of Colorado charter school teachers recently surveyed reported that they were satisfied with "teacher participation in school decisions" (Colorado Department of Education, 1999).
Other researchers point out that the experience of teachers in charter schools is likely to vary, based on school culture and context (Datnow et al., 1994). That was the conclusion of the Minnesota Charter Schools Evaluation (CAREI, 1997) which was based in part on site visits at 16 different charter schools. The evaluation found that the professional roles of teachers vary dramatically: some schools have a designated principal who serves as the authority figure; while others have expanded or significantly modified the teacher role to include additional responsibilities.
One of the most extensive studies of charter schools to date—the
UCLA Charter School Study (Wells and Associates, 1998)--suggested that the enhanced teacher empowerment found in some charter schools can be a mixed blessing, bringing more freedom but little support. In their case studies of 17 charter schools and 10 school districts in the state of California, these researchers found that teachers in charter schools value their freedom, relatively small classes, and esprit de corps, but heavy workloads are an issue. They continue: "On the issue of empowerment, our primary findings are mixed. First, the teachers in our study found great satisfaction in the intimate, personal settings that small charter schools offered and took professional pride in being among a select group of school reform pioneers. Yet, many of these teachers, inundated by non-classroom responsibilities, struggled with weariness and exhaustion, and openly speculated about their ability to sustain their level of commitment over the long haul" (p. 49).
**Feelings or Fact?**
From the beginning of the charter school movement, a few studies have reported that teachers said they "felt like professionals" in charter schools (Bierlein, 1996), but offered little, if any, data on how those feelings were directly tied to practice. Interestingly, two recent reports have raised the possibility that such feelings of enhanced professionalism--or what Wells and Associates (1998) call the "esprit de corps effect"--may be more feeling than reality. The UCLA study noted that teachers in charter schools often differentiated themselves from teachers in regular public schools, and that those differences included being "more professional" than their public school counterparts, and feeling great pride in their charter school setting. Yet, despite this "esprit de corps" in these schools, they found little difference in how teachers actually taught. They concluded, "most teachers could not say what it was that they do in a charter school that they could not have done in a regular public school, indicating that their new professional identity may be based on factors other than their teaching practice" (p. 51).
A second study, conducted by SRI International for the California state legislature (Anderson & Marsh, 1998), mentioned the same uncertainty as to what teachers mean when they say they feel more powerful in charter schools. After conducting case studies of 12 charter schools and collecting descriptive data from telephone interviews with members of 124 charter schools, researchers reported that charter school status gave staff members a sense of empowerment and of being part of a significant reform process. One teacher explained: "The fact that we are a charter, that we are in charge of our destiny, has forced an attitude change. We have a sense of power we never had before, whether it is true or an illusion" (p. 19).
**Method**
**Sample and Data Collection**
This study employed a comparative survey design to compare
perceptions of teachers in charter schools and traditional public schools (TPS) about aspects of their work and work environment. A matched cluster sampling procedure was used to access comparable samples of teachers in the two types of schools. First, the cooperation of administrators and teachers in about two dozen Colorado charter schools that had been operating for at least two years in 1997-1998 was solicited, resulting in a sample of 16 charter schools from across the state. Those schools were then matched, to the degree possible, with existing traditional public schools, based on school location and grades taught. Given the special nature of charter schools, close matches were usually not possible: in Colorado, charter schools on average are significantly smaller than TPSs and generally do not fit the traditional grade level configuration of elementary, middle, or high school. Many serve a combination of elementary and middle school students, and some include all grades. Because charter schools are small, two or three charter schools were matched to each cooperating TPS, with seven TPSs used as matches. Thus, the schools represented the "clusters" in the cluster sampling design. The final sample of teachers was then drawn from the two cluster types of schools, and included 99 charter school teachers and 103 traditional public school teachers. A total of 217 surveys were administered to charter school teachers and 219 to TPS teachers. Response rate was 46% for charter school teachers and 47% for TPS teachers.
**Instrumentation**
The survey instrument included forty forced-choice, five open-ended, and eight demographic questions. The forced-choice items measured dimensions of teacher "empowerment," school climate, and working conditions. The "empowerment index" was derived from Marks and Louis (1997) which included fourteen questions the authors divided into four dimensions. We ran a factor analysis with our data and derived three composite variables we labeled as "empowerment in the school wide arena" (dealt with issues like involvement of teachers in hiring, budgeting, determining professional assignments, determining content of in-service programs), "empowerment in the classroom with students" (dealt with issues like how much control teachers felt over disciplining students, determining the behavior code, setting policy on grouping of students, involvement and influence in decisions that affect them), and "empowerment in the classroom with curriculum content" (included control over selecting content, teaching techniques, instructional materials, and establishing the curriculum).
The "school climate" scale was adopted from Dusewicz and Beyer (1988) which included twelve questions presented in three dimensions. Again, we factor analyzed the scale with our data and derived three new composite variables. These included "collective responsibility for teaching and learning" (items like shared responsibility for achieving school goals, all being involved in goal establishment, articulation, and review, participatory techniques being employed, teachers working amicably on common problems, the
school having a consistent and shared value system), "emphasis on academic learning" (dealt with the school having high expectations for academic achievement, there being an academic emphasis and belief that all can learn, staff believing that they can help all students to learn, the school motivating students to learn), and "school rewards students for high achievement" (included items like the school giving honors and awards for academic achievement, the school providing opportunities for children to excel and recognizing such efforts).
The working conditions component of the instrument included a "job satisfaction scale" from Bacharach (1986) and additional questions about working conditions derived from Ginsberg and Berry (1990). A factor analysis of these fourteen questions yielded three composite variables. They were labeled as "job contentment" (included authority to carry out work, sense of present job in light of career expectations, the chance the job provides to be successful, current satisfaction with school discipline, and the extent to which working conditions enable effectiveness), "teaching and learning conditions" (dealt with satisfaction with workload, class size, preparation and planning time), and "physical plant and support conditions" (included satisfaction with issues like the school's physical condition, the classroom's physical condition, instructional resources available, opportunities for professional growth, job security, and salary).
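The three components of the instrument were each reduced to composite variables built from groups of Likert items. As a minimal sketch of how such composites are typically scored, the snippet below averages each respondent's answers within hypothetical item groupings; the actual item-to-factor assignments came from the study's factor analyses and are not reproduced here.

```python
# Sketch: scoring composite variables as the mean of their constituent
# Likert items (1-5 scale). The item groupings below are hypothetical;
# the study derived its groupings from a factor analysis of real data.

SUBSCALES = {
    "empowerment_schoolwide": [0, 1, 2, 3],   # e.g., hiring, budgeting items
    "empowerment_students":   [4, 5, 6],      # e.g., discipline, grouping items
    "empowerment_curriculum": [7, 8, 9],      # e.g., content, materials items
}

def composite_scores(responses):
    """Return a dict of subscale -> mean item score for one respondent."""
    return {
        name: sum(responses[i] for i in items) / len(items)
        for name, items in SUBSCALES.items()
    }

# One hypothetical respondent's answers to ten Likert items:
respondent = [3, 2, 2, 3, 4, 5, 4, 4, 3, 4]
print(composite_scores(respondent))
```

Averaging within subscales in this way yields one score per composite per teacher, which is the unit of analysis for the group comparisons reported below.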
We conducted reliability analyses of each sub-scale we derived from the instrument. The Alpha coefficients were all acceptable, ranging from .59 to .87, with all scales but one at the .7 level or above. The five open-ended questions asked teachers to describe the most positive and negative things about being a teacher at their schools, if their students regularly worked with computers, and (for charter school teachers who have taught in a regular public school) how teaching in a charter school differed from teaching in a TPS. Teachers were also asked to volunteer any other comments they might have. Demographic questions addressed gender, age, race/ethnicity, the highest degree earned, years of teaching experience, certification status, and grade(s) taught.
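The Alpha coefficients reported above are Cronbach's alpha values; a self-contained sketch of the standard formula, applied to made-up item responses (the study's actual survey data are not shown), illustrates the computation:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item)."""
    k = len(items)
    n = len(items[0])
    # Total scale score for each respondent.
    totals = [sum(item[r] for item in items) for r in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical 4-item subscale answered by six respondents (one list per item):
items = [
    [3, 4, 5, 2, 4, 3],
    [3, 4, 4, 2, 5, 3],
    [3, 3, 4, 3, 4, 2],
    [3, 4, 5, 2, 4, 3],
]
print(round(cronbach_alpha(items), 2))  # internally consistent items give alpha near .9
```

Values in the .7-.9 range, like most of the sub-scales here, are conventionally treated as acceptable to good reliability.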
The higher proportion of females in the charter school sample is related to the fact that many charter schools in Colorado serve elementary or elementary and middle school students and a higher proportion of females teach at those levels. An analysis of gender and grade level taught by respondents broken out by type of school shows that the male teachers in the sample tend to cluster at the high school level in traditional public schools. Moreover, only a small minority of Colorado charter schools fit the traditional grade level configuration of elementary, middle, or high school, making true matched comparisons to traditional public schools difficult.
Data Analysis
We began our analyses of the research questions by conducting one-way analyses of variance for each of the three sets of dependent variables, with type of school as the two-level factor. Given the wide discrepancies between the two groups of teachers on the "years of experience" and "school size" variables noted in Table 1, however, we followed up these initial ANOVAs with one-way analyses of covariance using the "years of experience" and "school size" variables as covariates. None of the ANCOVA analyses produced probability values that deviated from the findings of statistical significance obtained from the ANOVA analyses. Thus, the ANOVAs are reported for ease of understanding. In addition, a series of tables are given which present descriptive statistics and corresponding effect sizes. Finally, the open-ended questions were individually analyzed for themes and patterns.
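With a single two-level factor, the one-way ANOVA F statistic is simply the between-group mean square over the within-group mean square. A minimal sketch on made-up composite scores (the group values below are illustrative, not the study's data):

```python
from statistics import mean

def one_way_f(groups):
    """F statistic for a one-way ANOVA over a list of score groups."""
    all_scores = [x for g in groups for x in g]
    grand = mean(all_scores)
    # Between-group sum of squares: weighted squared deviations of group means.
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group's mean.
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical composite scores for charter vs. traditional teachers:
charter = [4, 5, 3, 4]
traditional = [2, 3, 3, 2]
print(one_way_f([charter, traditional]))  # F with (1, 6) degrees of freedom
```

Obtaining the associated p value requires the F distribution (e.g., from a statistics package); only the statistic itself is computed here.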
Results
Demographic Characteristics of Teachers
Survey respondents were asked a number of demographic questions, including their gender, age, race/ethnicity, years of teaching experience, highest degree earned, certification status, and grade(s) taught. Table 1 provides a comparison of the final sample of charter school teachers and traditional public school teachers revealing some differences between the two groups: in general, more charter school teachers are female, are slightly younger, have earned fewer post-baccalaureate degrees, and have fewer years of teaching experience than their TPS counterparts. The basic differences between the two groups are very similar to those reported in two Colorado charter school evaluations (Colorado Department of Education, 1998; 1999). One result of the difficulty of closely matching schools is an imbalance in response from the two groups in regards to grade level taught, with respondents from charter schools including more elementary teachers and respondents from traditional public schools including more high school teachers.
Table 1
Descriptive Information of Charter and Traditional School Teachers
| Characteristic | Charter Schools | Traditional Schools |
|--------------------------------|-----------------|---------------------|
| **Gender** | | |
| Female | 89.9% | 63.7% |
| Male | 10.1% | 36.3% |
| **Age (mean)** | 39.1 | 42.1 |
| **Race/ethnicity** | | |
| White | 96.9% | 94% |
| Black | 0% | 1% |
| Hispanic | 0% | 3% |
| Native American | 2% | 0% |
| Asian | 1% | 2% |
| **Highest degree earned** | | |
| Bachelor's | 59.6% | 38.2% |
| Master's | 40.4% | 60.8% |
| Doctorate | 0% | 1% |
| **Years of teaching experience (mean)** | 9.1 | 15 |
| **Grade(s) taught** | | |
| Elementary | 63.5% | 30.4% |
| Middle school | 24% | 7.8% |
| High School | 7.8% | 61.8% |
| **Size of School (mean)** | 327 | 998 |
| **Certification status** | | |
| Certified in teaching area | 82.5% | 97% |
| Not certified in teaching area | 7.2% | 3.0% |
| Not certified | 10.3% | 0% |
**Analyses of Research Questions**
The first research question asked: "How do charter school teachers perceive issues of empowerment compared to teachers in traditional public schools?" The ANOVA comparisons for each of the empowerment composite variables are presented in Table 2, and corresponding
descriptive and effect size information in Table 3. As can be seen in Table 2, analyses of two of the three empowerment variables, "empowerment in the school wide arena," and "empowerment in the classroom with students", yielded statistically significant differences between the teachers from the charter schools and the teachers from the traditional schools \((F[1,191] = 8.60, p = 0.004)\), and \((F[1,196] = 11.00, p = 0.001)\) respectively. Looking at the means in Table 3, teachers in the traditional schools perceived themselves to be more empowered in the school-wide arena (3.00 and 2.60 respectively), but less so in the classroom with students (3.64 and 4.03 respectively). Effect sizes associated with those mean differences (-0.48 and +0.46 respectively) suggest moderately strong practical significance to those mean differences.
**Table 2**
One-Way ANOVAs for Empowerment Variables
| Empowerment Variables | df | F Value | p |
|----------------------------------------|------|---------|------|
| In the school-wide arena | 1, 191 | 8.60 | 0.004|
| In the classroom with students | 1, 196 | 11.00 | 0.001|
| In the classroom with curriculum content | 1, 199 | 1.31 | 0.254|
**Table 3**
Descriptive Statistics and Effect Sizes for Empowerment Variables
| Empowerment Variables | n | Mean | SD | ES |
|----------------------------------------|-----|------|-----|-----|
| **In the school-wide arena** | | | | |
| Charter schools | 92 | 2.60 | 1.07| -0.48|
| Traditional schools | 101 | 3.00 | 0.85| |
| **In the classroom with students** | | | | |
| Charter schools | 99 | 4.03 | 0.81| +0.46|
| Traditional schools | 99 | 3.64 | 0.86| |
| **In the classroom with curriculum content** | | | | |
| Charter schools | 98 | 3.50 | 1.27| -0.18|
| Traditional schools | 103 | 3.68 | 1.01| |
Finally, the mean scores for the third empowerment dimension, "empowerment in the classroom with curriculum content," were similar for teachers in traditional and charter schools, with no statistically significant difference.
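The effect sizes in Table 3 are standardized mean differences. Using the reported means and standard deviations for "empowerment in the classroom with students," a pooled-standard-deviation Cohen's d reproduces the reported +0.46 to within rounding (the study's exact pooling convention is an assumption here):

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var)

# Table 3, "in the classroom with students": charter (4.03, SD .81, n 99)
# vs. traditional (3.64, SD .86, n 99).
d = cohens_d(4.03, 0.81, 99, 3.64, 0.86, 99)
print(round(d, 2))  # close to the reported +0.46
```

By common rules of thumb, absolute values near 0.5 indicate a moderate practical difference, which is consistent with the interpretation in the text.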
The second research question asked: "How do charter school teachers perceive aspects of school climate compared to teachers in traditional public schools?" Analysis of variance (ANOVA) comparisons for each of the school climate composite variables are presented in Table 4, and corresponding descriptive and effect size information in Table 5. As can be seen in Table 4, the pattern of findings was very similar to the findings with the empowerment variables reported above, with two of the three contrasts ("school rewards students for high achievement" and "emphasis on academic learning") achieving statistical significance ($F[1,200] = 11.47$, $p = 0.001$, and $F[1,200] = 18.81$, $p < 0.001$), and the third not. Also similar to the empowerment variables, the directions of those findings differed for each of the two contrasts achieving statistical significance.
**Table 4**

One-Way ANOVAs for School Climate Variables
| School Climate Variables | df | F Value | p |
|-------------------------------------------------|------|---------|-------|
| School rewards students for high achievement | 1, 200 | 11.47 | 0.001 |
| Emphasis on academic learning | 1, 200 | 18.81 | 0.000 |
| Collective responsibility for teaching and learning | 1, 197 | 3.36 | 0.068 |
Looking at Table 5, teachers in traditional schools perceived their respective schools to have a climate that rewarded their students for high achievement at a significantly greater level than teachers in charter schools (4.32 and 3.92, respectively). Conversely, charter school teachers perceived the schools in which they worked to have significantly greater emphasis on academic learning than did their traditional school teacher counterparts (4.52 and 4.11 respectively). Effect sizes associated with those mean differences (-0.56 and +0.54 respectively), again similar to the empowerment variables, suggest moderately strong practical significance to those mean differences.
**Table 5**
Descriptive Statistics and Effect Sizes for School Climate Variables
| School Climate Variables | n | Mean | SD | ES |
|--------------------------------------------------------------|----|------|------|-----|
| School rewards students for high achievement | | | | |
| Charter schools | 99 | 3.92 | 0.94 | |
| Traditional schools | 103| 4.32 | 0.71 | -0.56 |
| Emphasis on academic learning | | | | |
| Charter schools | 99 | 4.52 | 0.56 | |
| Traditional schools | 103| 4.11 | 0.74 | +0.54 |
| Collective responsibility for teaching and learning | | | | |
| Charter schools | 99 | 3.65 | 0.97 | |
| Traditional schools | 100| 3.88 | 0.84 | +0.28 |
The final research question asked: "How do charter school teachers perceive aspects of working conditions compared to traditional public school teachers?" Tables 6 and 7 present the results of the analyses in a manner similar to the empowerment and school climate variables.
For the composite variable we labeled as "job contentment," the charter school teachers had a slightly higher mean score than the traditional public school teachers, but this difference was not statistically significant, using a criterion alpha level of $p < 0.0167$. In terms of "teaching and learning conditions," the charter school teachers had a statistically significant ($F[1,197] = 12.41, p = 0.001$) higher mean score (3.44) than did the traditional public school teachers (2.94). The effect size for this comparison (+0.49) was also indicative of a legitimate practical difference between the perceptions of these two groups of teachers. For the third composite working conditions variable, "physical plant and support conditions," the traditional public school teachers had a higher mean score (3.72) than the charter school teachers (2.65). This difference was statistically significant ($F[1,198] = 40.82, p < 0.001$), and the effect size of -0.93 reflected the strongest magnitude of differences between these two groups of teachers.
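The criterion alpha of 0.0167 is a Bonferroni adjustment: the family-wise alpha of 0.05 divided by the three comparisons within each variable family. Applied to the p values reported in Table 6, only "job contentment" fails the adjusted criterion:

```python
# Bonferroni-adjusted criterion for three comparisons per variable family.
family_alpha = 0.05
n_comparisons = 3
criterion = family_alpha / n_comparisons  # approx. 0.0167, as used in the text

# p values from Table 6 (working conditions composites):
p_values = {
    "job contentment": 0.046,
    "teaching and learning conditions": 0.001,
    "physical plant and support conditions": 0.000,
}
significant = {name: p < criterion for name, p in p_values.items()}
print(significant)  # only "job contentment" falls short of the adjusted criterion
```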
Table 6
One-Way ANOVAs for Job Satisfaction Variables
| Job Satisfaction Variables | df | F Value | p |
|--------------------------------------------|-----|---------|-----|
| Job contentment | 1, 196 | 4.04 | 0.046 |
| Teaching and learning conditions | 1, 197 | 12.41 | 0.001 |
| Physical plant and support conditions | 1, 198 | 40.82 | 0.000 |
Table 7
Descriptive Statistics and Effect Sizes for Job Satisfaction Variables
| Job Satisfaction Variables | n | Mean | SD | ES |
|---------------------------------------------|-----|------|-----|-----|
| **Job contentment** | | | | |
| Charter schools | 98 | 4.11 | 0.86| |
| Traditional schools | 100 | 3.87 | 0.82| +0.29 |
| **Teaching and learning conditions** | | | | |
| Charter schools | 98 | 3.44 | 0.95| |
| Traditional schools | 101 | 2.94 | 1.01| +0.49 |
| **Physical plant and support conditions** | | | | |
| Charter schools | 98 | 2.65 | 1.21| |
| Traditional schools | 102 | 3.72 | 1.15| -0.93|
Discussion
The findings related to empowerment issues reveal some expected and some surprising results. That traditional school teachers have a statistically significant higher mean score than the charter school teachers on the empowerment variable, "empowerment in the school wide arena," contradicts much of the rhetoric and early literature suggesting that teachers in charter schools will be able to take on a more "professional" role outside the classroom, such as participation in hiring decisions, budgeting, determining professional assignments, the content of in-service programs, or any other school-wide issue that may ultimately impact the delivery of the instructional program. Clearly, the literature on teacher empowerment included such school-wide issues as part of their conception of what empowerment should mean (Marks & Louis, 1997; Sykes, 1997), yet our comparative data show that such hopes for teachers in charter
schools may not be as predicted. This finding may reflect the emphasis in many school districts on site-based management practices. In addition, those who suggested greater authority and increased decision-making for charter school teachers (e.g., Bierlein, 1996; Finn et al., 1997; Mulholland & Bierlein, 1993; Shore, 1997) based their findings on analyses of laws and/or reports of charter teachers alone, without the use of comparative data. The nature of governance and parent involvement at many Colorado charter schools may also be a factor: although the state's Charter Schools Act provides for the founding of charter schools by teachers, almost all of them are parent-founded schools in which parents hold a majority on governing boards. Some of the responses to our open-ended questions may help explain this reality for charter school teachers, as many complained about poor administration or overly intrusive charter school boards. Other than their concern about inadequate facilities, such school-wide management-related issues were the most common negative comments reported to us. For example, charter teachers reported:
- "The board is made up of parents and many are not educators and lack knowledge and experience which causes many problems;"
- "there is a lack of trust by the administration and the board;"
- "the most negative thing about this school is the politics occurring between the board, the administration and the staff;"
- "parent control of the school is excessive...many want to pick the textbooks and don't know how to do it...;"
- "Our governing board has too much power! They are micro-managing and do not value teachers;"
- "Our board is inflexible and only listen to a small parent component;"
- "The most negative thing is a parent board, who are not educators, making academic decisions."
On the other hand, the finding that charter school teachers had a statistically significant higher mean score than traditional public school teachers for classroom-related empowerment supports the researchers and charter school advocates who predicted greater autonomy, influence, freedom, and flexibility in these schools (e.g., Bierlein, 1996; Corwin & Flaherty, 1995; Shore, 1997). And the charter school teachers' responses to our open-ended question about what they liked most about teaching in this setting underscored this sense of classroom empowerment with students. Teachers consistently reported that they enjoyed a great deal of flexibility, that the small class size allowed them to do a variety of different things, and that they enjoyed working with students who clearly wanted to be there. Some typical comments included:
- "I get to teach here...there is less disciplining;"
- "I have the freedom to try new techniques;"
- "I actually get to teach...I am not pulled out of the classroom for the multitude of district pull-outs;"
- "I have the freedom to be creative and innovative while expanding and elaborating on the global core curriculum;"
- "the teaching situation allows great freedom and flexibility;"
- "I have the freedom to develop my program as I see appropriate;"
- "we have enthusiastic students who want to be here;"
- "There are few discipline problems and students come prepared to learn;"
- "the small class size allows for close relationships with students--none can be ignored or fall into the woodwork."
Interestingly, there was no difference in the responses of charter or traditional public school teachers in the area of "empowerment with the curriculum content." This finding is intriguing because the potential ability
of smaller, more autonomous charter schools to serve as laboratories for teacher-driven educational innovation has always been a strong argument for such schools. Our comparison revealed that teachers in both types of schools felt reasonably positive about their degree of control over curriculum (mean of 3.50 for charter school teachers, 3.68 for traditional school teachers). In the open-ended comments, both sets of teachers consistently made positive remarks about the flexibility they felt they had over curriculum decisions. While the charter school teachers were more adamant in their responses to our open-ended questions, the statistical comparison suggests that this feeling may be driven more by the hype surrounding the charter school movement than by any real difference in curriculum-related empowerment. To some degree, this finding is probably related to what we label the "back-to-the-future" nature of the educational programs in many Colorado charter schools: a significant number of the state's charter schools (17 of the 49 schools existing in 1997) are back-to-basics schools that use the largely pre-determined, highly structured Core Knowledge curriculum. Interestingly, representatives from a number of professional educational organizations in the state (Colorado Parent Teacher Association, Colorado Education Association, Colorado Association of School Executives, Colorado Association of School Boards, and the Colorado Federation of Teachers) recently expressed concern that charter schools have not, as yet, established themselves as labs of innovation or experimentation (Colorado Department of Education, 1999).
Given the literature regarding charter schools, our findings regarding the three school climate composite variables also reveal some unexpected results. Since charter school teachers are hired to fit the specific mission of the charter, it was expected that the charter school teachers would score higher on the school climate variable "collective responsibility for teaching and learning." But just as Wells and Associates (1998) and Anderson and Marsh (1998) found that charter school teachers could not articulate why they felt their professional identity was different from traditional public school teachers, we found no statistically significant difference on the school climate variable dealing with shared responsibility, collective action and common mission/goals. Interestingly, the charter school teachers expressed a strong sense of a common mission and shared goals in the open-ended question about what was positive in their school, which the public school teachers only rarely asserted. Yet, the statistical comparison shows no difference in this school climate factor.
However, the charter school teachers did have a statistically significant higher mean score on the school climate factor we labeled as "emphasis on academic learning." Perhaps this is where the idea of a shared mission in the charter schools is being expressed. Clearly, the comments by charter school teachers highlighted their academic emphasis and the ability to focus on academics given small class size and few discipline problems. But it is almost counterintuitive that the traditional public school teachers would then have a statistically significant higher mean score on the school climate variable, "school rewards students for high achievement." This may be a result of the state's emphasis and
pressure on improving test scores, and suggests that charter schools expect high achievement and therefore do not reward it as the traditional public schools do. But this seems inconsistent with the opposite difference reported regarding emphasis on academic learning.
The results regarding working conditions were most consistent with the current literature. Concerning what we labeled as "job contentment," we found no statistically significant difference between the charter and traditional public school teachers. While much of the charter school literature reports that teachers in these schools are very satisfied with their jobs (Colorado Department of Education, 1999; Finn et al., 1997; RPP International, 1997), our finding of no difference in job contentment underscores what the Minnesota evaluation reported, namely that the charter teachers' levels of satisfaction were typical for other teachers nationwide. Clearly, both the traditional public school teachers and charter school teachers had areas of distinct dissatisfaction. These are revealed in the differences in the working conditions composite variables of "teaching and learning conditions" and "physical plant and support conditions." The charter school teachers had a statistically significant higher mean score on the teaching and learning conditions factor, while the traditional public school teachers had a higher score on the physical plant and support conditions factor. These findings support the literature, which reports that charter school teachers appreciate smaller class sizes, fewer discipline problems, and active, supportive parents, while they disdain poor facilities, classroom conditions, lack of support materials, and questionable job security (Bierlein, 1996; Colorado Department of Education, 1999; Corwin & Flaherty, 1995; Finn et al., 1997; RPP International, 1997). Indeed, lack of financial support and poor facilities were the most common concerns expressed by charter school teachers to our open-ended query about the most negative aspect of teaching there.
In terms of support in schools, the responses to our question about computer use revealed that traditional public school teachers had far greater access to computers than most charter school teachers, underscoring the lack of support these teachers cited. And as our statistical comparisons suggest, for the public school teachers it was issues like student apathy, discipline problems, and large class size that dominated their concerns expressed in the open-ended questions about the most negative aspects of teaching.
Conclusions and Recommendations
Our goal was to examine the claim that charter schools will empower teachers to become more self-directed professionals by providing them with the increased autonomy, flexibility, and authority necessary to assume responsibility for the development and delivery of new, innovative approaches to teaching and learning at their school sites. Although charter schools obviously employ a corps of dedicated teachers who feel energized by the role they play in founding these new schools, a review of our findings shows that, for the most part, the rhetoric and early research findings regarding enhanced teacher "empowerment" in charter schools outpaces the reality of actual teacher experience when compared to the experiences of teachers in traditional public schools. The data do indicate that charter school teachers enjoy more professional flexibility
within the four walls of their classrooms. However, they are generally not taking on an expanded role in the larger school arena, and do not appear to have any deeper involvement in curricular decision-making or innovation than their TPS counterparts. Perhaps the most obvious conclusion that can be drawn from these results is that teachers in both charter schools and traditional public schools are relatively content with their work and have much in common, despite some fairly significant differences between the two groups when it comes to identifying primary sources of job-related satisfactions and dissatisfactions.
Regarding teachers' traditional classroom role, the data suggest that working in smaller, more independent charter schools does provide teachers with a sense of empowerment. This "freedom to teach" is, perhaps not surprisingly, related to smaller class size and better disciplined students. Charter school teachers generated higher mean scores on the composite variable called "empowerment in the classroom with students" (which deals largely with classroom management and student behavior), and on the working conditions variable called "teaching and learning conditions" (which deals largely with class size). Charter school teachers consistently reported in the open-ended responses that they enjoyed a great deal of flexibility in the classroom, that small class size allowed them to do a variety of different things, and that they enjoyed working with students who chose to be there with more involved parents. They also related their sense of classroom empowerment to the charter structure--although that perception appears to be more related to class size and student discipline than it is to any real difference in teacher participation in school governance or control over the curriculum. Interestingly, there was no statistically significant difference between charter school teachers and traditional public school teachers on the composite variable called "empowerment in the classroom with curriculum content." The fact that the processes surrounding the instructional core remain similar to other public school settings indicates that the charter school movement has not resulted in the degree of educational innovation and experimentation envisioned by its advocates, either in individual teacher's classrooms, or on the school-wide level. 
The additional fact that traditional school teachers have a statistically significant higher mean score than charter school teachers on the variable "empowerment in the school wide arena" underscores the impression that charter schools, as practiced in Colorado, are not delivering on the significantly enhanced level of teacher professionalism hoped for by educational reformers.
Charter school teachers also scored significantly higher on the school climate factor we labeled as "emphasis on academic learning." Again, this finding is probably not surprising. Given that the central argument for charter schools is that they will be more accountable for the academic achievement of their students than regular public schools, it makes sense that charter school teachers would pay sharp attention to this critical mission. Smaller class size, fewer discipline problems, and more involved parents certainly help (when queried about the most negative aspects of teaching, traditional public school teachers were more likely than charter school teachers to complain about student apathy, discipline
problems, and large class size). However, this picture is not without its complications: one of the most interesting findings of this study is the lack of a statistically significant difference between charter school teachers and traditional public school teachers on the school climate variable dealing with collective responsibility for teaching and learning, which includes such critical measures as shared responsibility for achieving school goals, a shared value system, school-wide review of values and goals, participatory management, and teachers working together on common problems. Because charter schools are in a better position to hire teachers to fit their specific missions, it was expected that charter school teachers would score higher on this variable. The unexpected results are probably an indication of the dedication and commitment of both groups of teachers, and they underscore again what the two groups have in common rather than their differences.
One of the most encouraging findings of this study is the relatively high level of contentment both groups of teachers find when they go to work each day, despite distinct areas of dissatisfaction. Indeed, the two groups' areas of dissatisfaction are almost mirror images of each other. While charter school teachers value their smaller class sizes and greater freedom to focus on academics, they are considerably less pleased with their school facilities, the availability of instructional resources (including technology), their salaries, or their job security. Traditional public school teachers, on the other hand, are much more satisfied with almost all aspects of the support they receive, but tend to be somewhat less satisfied with the teaching and learning conditions they find in their classrooms. Aside from pointing out that being a teacher in any setting has its rewards and its frustrations, these findings raise an important issue of sustainability for charter school teachers: Given their relative lack of support, how long will they be willing or able to keep going?
Finally, this study raised some very interesting issues about the mixed blessings of high parental involvement in charter schools. There is no doubt that concerned and supportive parents are an invaluable resource for children and for schools. On the other hand, parents who want greater involvement in their children's schooling can apparently be a very formidable group with whom to work. For teachers who are scrambling to help set up new schools, get an educational program running, and meet high expectations, intrusive parental involvement can present another set of challenges-- particularly if parents insist on interjecting themselves into academic decision-making. At the very least, it is ironic that parents who create schools that are theoretically intended to enhance the professional roles of teachers can so often undermine their own good intentions.
In terms of future directions, we see a number of areas for potential research. Clearly, this study did not control for any measure of school effectiveness, and future studies should examine the impact of student performance in both types of schools on teacher empowerment issues. One of the most interesting and compelling directions for future research is drawn from the widely divergent policy contexts that surround the charter school initiatives in each state. They differ so dramatically (Bulkley, 1999; Mauhs-Pugh, 1995) that it may be nearly impossible to conduct state-delimited research and expect to generalize to other states. If multiple
states are included in future research studies, the states themselves should probably be included as a variable in the analyses, at least until their contribution in those analyses is found to be non-significant.
Another potential area for future research is suggested by the relatively large standard deviations on many of the composite variables reported in Tables 3, 5, and 7. Given that the charter school and TPS samples are fairly large, the large number of standard deviations exceeding .8 indicates potentially interesting within-group variations that could illuminate the between-group variations that were reported.
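As a quick illustration of the dispersion point, a check of the standard deviations reported in Table 7 (the only composite-variable table reproduced in this excerpt) shows every reported SD exceeding the 0.8 threshold the authors flag:

```python
# Standard deviations reported in Table 7, by group (charter, traditional).
sds = {
    "Job contentment": (0.86, 0.82),
    "Teaching and learning conditions": (0.95, 1.01),
    "Physical plant and support conditions": (1.21, 1.15),
}

threshold = 0.8
flagged = [name for name, pair in sds.items()
           if any(sd > threshold for sd in pair)]
print(f"{len(flagged)} of {len(sds)} composite variables have an SD above {threshold}")
# prints "3 of 3 composite variables have an SD above 0.8"
```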
The highly charged political nature of charter schools also necessitates rigorous attention to important design features, particularly when multiple schools are involved and student performance is among the dependent variables. Such design features might include: (1) waiting until charter schools are at least two or three years old to give them a chance to mature and "become themselves"; (2) equating (either through sampling design or covariance measurement) students at entry into the schools; and (3) disaggregating analyses to accommodate important school-level variables such as mean enrollments of low SES students, special education students, language minority students, and mobility rates. The stakes are high as research continues on this important experiment in American schooling. It is essential that future research quality be meticulously high as well to inform the policy debate in a credible, even unassailable way for stakeholders from across the policy arena.
References
Anderson, L., and Marsh, J. (1998). *Early results of a reform experiment: Charter schools in California*. Menlo Park, CA: SRI International.
Bacharach, S. B. (1986). *The learning workplace: The conditions and resources of teaching*. Ithaca, N.Y.: Organizational Analysis and Practice, Inc.
Bierlein, L. A. (1996). *Charter schools: Initial findings*. Baton Rouge: Louisiana State University, Louisiana Education Policy Research Center.
Boettiger, B. (1998). *Colorado's charter school policy and teacher professionalism: School level interpretations*. Paper presented at the annual meeting of the American Educational Research Association. San Diego, CA.
Bulkley, K. (1999). *Telling stories: The political construct of charter schools*. Paper presented at the annual meeting of the American Educational Research Association, Montreal, Canada.
Center for Applied Research and Educational Improvement (1997). *Minnesota charter schools evaluation: Interim report*. Minneapolis: The University of Minnesota.
Colorado Department of Education (1999). *1998 Colorado charter schools evaluation study*. Denver, CO: Clayton Foundation.
Colorado Department of Education (1998). *1997 Colorado charter schools evaluation study*. Denver, CO: Clayton Foundation.
Colorado Department of Education (1996). *Colorado charter school information packet and handbook*. Denver: Author.
Contreras, A.R. (1995). Charter school movement in California and elsewhere. *Education and Urban Society*. 27 (2), 213-228.
Corwin, R. G. and Flaherty, J. F. (1995). *Freedom and innovation in California's charter schools*. Dallas, TX: Southwest Regional Laboratory.
Datnow, A., (1994). *Charter schools: Teacher professionalism and decentralization*. Paper presented at the annual meeting of the American Educational Research Association. New Orleans, LA.
Dusewicz, R. A., and Beyer, F. S. (1988). *Dimensions of excellence scales: Survey instruments for school improvement*. Philadelphia, PA: Research for Better Schools.
Finn, C., Manno, B., and Bierlein, L. (1996). *Charter schools in action: What have we learned? First-year report*. Washington, DC: Hudson Institute.
Finn, C., Manno, B., Bierlein, L., and Vanourek, G. (1997). *Charter schools in action: Final report*. Washington, DC: Hudson Institute.
Garcia, G. F., and Garcia, M. (1996). Charter schools--another top-down innovation. *Educational Researcher*. 25(8), 34-36.
Ginsberg, R., and Berry, B. (1990). *Teaching in South Carolina: A retirement initiative*. Columbia, S.C.: South Carolina Educational Policy Center.
Marks, H. M., and Louis, K. S. (1997). Does teacher empowerment affect the classroom? The implications of teacher empowerment for instructional practice and student academic performance. *Educational Evaluation and Policy Analysis*. 19(3), 245-275.
Mauhs-Pugh, T. (1995). Charter schools 1995: A survey and analysis of the laws and practices of the states. *Education Policy Analysis Archives*. 3(13), http://epaa.asu.edu/epaa/v3n13/.
Mulholland, L. A., and Bierlein, L. (1993). *Charter schools: A glance at the issues*. Tempe, AZ: Arizona State University Morrison Institute for Public Policy.
RPP International (1997). *A study of charter schools: First-year report*. Washington, DC: U. S. Department of Education.
RPP International (1998). *A national study of charter schools. Second-year report*. Washington, DC: U. S. Department of Education.
Shore, R. (1997). New professional opportunities for teachers in the California charter schools. *International Journal of Educational Reform*. 6(2), 128-138.
Sykes, G. (1990). Fostering teacher professionalism in schools. In R. F. Elmore & Associates (Eds.). *Restructuring schools: The next generation of school reform* (pp. 59-96). San Francisco, CA: Jossey-Bass.
Wells, A. S., Grutzik, C., and Carnochan, S. (1996). *Underlying policy assumptions of charter school reform: The multiple meanings of a movement*. Paper presented at the annual meeting of the American Educational Research Association. New York City, NY.
Wells and Associates (1998). *UCLA Charter School Study*. Los Angeles: University of California at Los Angeles.
**About the Authors**
**Sally Bomotti**
Sally Bomotti, Ph.D., is an Assistant Professor in the School of Education at Colorado State University. She also works as a research associate at the Research and Development Center for the Advancement of Student Learning, a university-school research collaborative based in Fort Collins. Her recent research has focused on charter schools and school choice.
**Rick Ginsberg**
**Brian Cobb**
School of Education
Colorado State University
Brian Cobb is a Professor in the School of Education at Colorado State University and Co-Director of the Research and Development Center for the Advancement of Student Learning, a community research collaborative in Ft. Collins, Colorado. His research interests presently focus on a variety of educational reform topics including charter schools, high stakes testing, and block scheduling.
Copyright 1999 by the *Education Policy Analysis Archives*
The World Wide Web address for the *Education Policy Analysis Archives* is [http://epaa.asu.edu](http://epaa.asu.edu)
General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, College of Education, Arizona State University, Tempe, AZ 85287-0211 (602-965-9644). The Book Review Editor is Walter E. Shepherd. The Commentary Editor is Casey D. Cobb.
EPAA Editorial Board
| Name | Institution/Position |
|-----------------------------|-----------------------------------------------------------|
| Michael W. Apple | University of Wisconsin |
| John Covaleskie | Northern Michigan University |
| Alan Davis | University of Colorado, Denver |
| Mark E. Fetler | California Commission on Teacher Credentialing |
| Thomas F. Green | Syracuse University |
| Arlen Gullickson | Western Michigan University |
| Aimee Howley | Ohio University |
| William Hunter | University of Calgary |
| Daniel Kallós | Umeå University |
| Thomas Mauhs-Pugh | Green Mountain College |
| William McInerney | Purdue University |
| Les McLean | University of Toronto |
| Anne L. Pemberton            |                                                           |
| Richard C. Richardson | New York University |
| Dennis Sayers | Ann Leavenworth Center for Accelerated Learning |
| Michael Scriven             |                                                           |
| Robert Stonehill | U.S. Department of Education |
| David D. Williams | Brigham Young University |
| Greg Camilli | Rutgers University |
| Andrew Coulson              |                                                           |
| Sherman Dorn | University of South Florida |
| Richard Garlikov            |                                                           |
| Alison J. Griffith | York University |
| Ernest R. House | University of Colorado |
| Craig B. Howley | Appalachia Educational Laboratory |
| Richard M. Jaeger | University of North Carolina Greensboro |
| Benjamin Levin | University of Manitoba |
| Dewayne Matthews | Western Interstate Commission for Higher Education |
| Mary McKeown-Moak | MGT of America (Austin, TX) |
| Susan Bobbitt Nolen | University of Washington |
| Hugh G. Petrie | SUNY Buffalo |
| Anthony G. Rud Jr. | Purdue University |
| Jay D. Scribner | University of Texas at Austin |
| Robert E. Stake | University of Illinois - UC |
| Robert T. Stout | Arizona State University |
EPAA Spanish Language Editorial Board
Associate Editor for Spanish Language
Roberto Rodríguez Gómez
Universidad Nacional Autónoma de México
Adrián Acosta (México)
Universidad de Guadalajara
J. Félix Angulo Rasco (Spain)
Universidad de Cádiz
Teresa Bracho (México)
Centro de Investigación y Docencia Económica-CIDE
Alejandro Canales (México)
Universidad Nacional Autónoma de México
Ursula Casanova (U.S.A.)
Arizona State University
José Contreras Domingo
Universitat de Barcelona
Erwin Epstein (U.S.A.)
Loyola University of Chicago
Josué González (U.S.A.)
Arizona State University
Rollin Kent (México)
Departamento de Investigación Educativa-DIE/CINVESTAV
Maria Beatriz Luce (Brazil)
Universidad Federal de Rio Grande do Sul-UFRGS
Javier Mendoza Rojas (México)
Universidad Nacional Autónoma de México
Marcela Mollis (Argentina)
Universidad de Buenos Aires
Humberto Muñoz García (México)
Universidad Nacional Autónoma de México
Angel Ignacio Pérez Gómez (Spain)
Universidad de Málaga
Daniel Schugurensky (Argentina-Canadá)
OISE/UT, Canada
Simon Schwartzman (Brazil)
Fundação Instituto Brasileiro de Geografia e Estatística
Jurjo Torres Santomé (Spain)
Universidad de A Coruña
Carlos Alberto Torres (U.S.A.)
University of California, Los Angeles
Academic Program Approval and Review Practices in the United States and Selected Foreign Countries
Don G. Creamer
Steven M. Janosik
Virginia Polytechnic Institute and State University
Abstract
This report outlines general and specific processes for both program approval and program review practices found in 50 states and eight foreign countries and regions. Models that depict these procedures are defined, and the strengths and weaknesses of each are discussed. Alternatives to current practice by state agencies in the U.S. are described that might provide for greater decentralization of these practices while maintaining institutional accountability.
Introduction
Responding to multiple challenges in the governance and
coordination of higher education, state agencies increasingly are examining their structures for carrying out their mandates. Writing for the Education Commission of the States (ECS), McGuinness (1997) predicted that changes would be necessary to correct some structures that were designed for earlier times. Challenges such as the integration of technology into delivery systems for higher education, market pressures, instability in state government leadership, and growing political involvement in state coordination and governance are among the most compelling forces that make these changes likely.
One of the most common responsibilities of state coordinating or governing agencies for higher education is academic program approval and program review. Program approval refers to the process for approval of new academic programs by state higher education agencies or boards and generally is done to curb unnecessary duplication of programs among public institutions. Program review refers to the process of critique of existing academic programs and generally is seen as a strategy for quality and productivity improvements. According to Barak (1991), 45 state agencies undertake some form of program approval and 34 state agencies review at least some existing programs (some reviews are conducted at the state level).
Concerns for quality and accountability in higher education are increasing in state governments and agencies, putting pressure on traditional statewide coordination functions of these agencies such as program approval and review, budgeting, planning, and monitoring quality. Increasingly, state agencies are seeking ways to decentralize certain functions while, at the same time, increasing accountability. McGuinness (1997) predicted that states would turn to two quality assurance mechanisms to accommodate these trends. First, he saw reliance shifting from regulatory controls to incentives to ensure public interests. Second, he surmised there would be more coordinated tactics among state and federal governments, accrediting agencies, institutional governing boards, and disciplinary and professional organizations.
This study was undertaken at the request of the State Council of Higher Education for Virginia (SCHEV), which sought policy alternatives to its current academic program approval and review practices that would enable a more decentralized approach to the process in an environment of greater accountability. The study sought to provide these policy alternatives by first collecting baseline information about current academic program approval and review practices in all 50 states and in selected foreign countries and, second, by formulating policy alternatives based upon a synthesis of these findings and upon reasoned judgment about such practices in higher education.
**Methodology**
All 50 states in the U.S. and eight foreign countries and regions, including Australia, Canada, England, Germany (Lower Saxony), Hong Kong, the Netherlands, New Zealand, and Scotland, were selected for study. Data for this study were obtained in two forms: (a) documents obtained from web sites or direct mail and (b) interview results with academic officers of state agencies (no interviews were conducted with agency representatives from foreign countries or regions). Documents were analyzed to illuminate their academic program approval and review policies and practices. Semi-structured interviews with agency academic officers were conducted to determine perceived strengths and weaknesses of current practices and future plans for change in these procedures. Useful data in one or both forms were obtained from all foreign countries and regions and from 46 states. Information from these sources varied in content. Some written policies were explicit and detailed; others were vague and confined in scope. Likewise, interview results varied from expansive and illuminating to narrow and sketchy. Most data were instructive, however, clearly revealing current practice. Inquiries about future plans for change were met with limited success. Either the officers did not know what changes might occur in their agencies or were cautious in their comments for a variety of reasons.
Information from these sources was analyzed for patterns in responses. These synthesized patterns of practice were used to report the findings from the study. Discovered patterns were more normative among program approval practices than from program review practices; therefore, variations in review practices also were synthesized from the data and reported as substratum patterns.
**Findings**
Findings are presented in four parts: (a) generalized findings about program approval processes, (b) generalized findings about program review processes, (c) summary of program approval and review practices, and (d) generalized program approval and review findings from foreign countries and regions. Generalized findings about program approval and program review are followed next by steps in the processes and, finally, by strengths and weaknesses of the generalized model presented. Distinctive features of the practices from foreign countries and regions follow their generalized findings.
**Program Approval Practices in the States**
Practices regarding state agency program approval can be summarized and displayed in a generalized model. This model is depicted in Figure 1 and shows widely accepted practices at the institution and at the state agency level where multiple decisions and actions are possible.
Notes:
1. Early screening is used by some states to save time and resources. Good proposals are helped. Poor proposals are discouraged.
2. The process used by state agencies may include internal review by staff, external reviewers, or peer reviews. Criteria include need, demand, duplication, cost, ability to deliver, etc.
3. Some states will aid institutions to revise their proposals even after a program has been disapproved.
4. Review at this stage comes as part of a conditional approval or as part of the criteria for full approval. It may involve rigorous review and include accreditation agencies, outside consultants, or agency staff.
**Figure 1. Typical State Approval Process**
When proposals are disapproved at the state agency level, some agencies may help institutions improve their proposals and encourage them to resubmit. When proposals are approved at the state agency level, some agencies schedule a subsequent review as part of the approval process while others grant automatic continuation unless a review is triggered by productivity concerns.
**Steps in Program Approval by State Agencies**
Program approval procedures by state agencies generally followed these steps:
1. Institution determines the feasibility of its intent to plan a new program.
2. Institution notifies the state agency of intent.
3. Institution prepares a draft proposal containing a brief statement identifying the program and addressing the following issues:
• Relation to institutional mission, strategic plan, goals and objectives;
• Projected source of resources (reallocation, external funds, request for new dollars);
• Student need;
• Relationship to other programs in the system and region.
4. The state agency distributes the proposal to other affected institutions to elicit comments and recommendations.
5. State agency staff comments and makes recommendations on the draft proposal.
6. Institution submits the full proposal addressing some or all of the following issues:
• Centrality to institutional mission and planning
• Need for the proposed program
o Societal need
o Occupational need
• Student availability and demand (Enrollment level)
• Reasonableness of program duplication, if any (not including general education programs)
• Adequacy of curriculum design and related learning outcomes
• Adequacy of resources to support the program
o Adequacy of finances
o Faculty resources
o Library resources
o Student affairs services
o Physical facilities and instructional equipment
• Adequacy of program administration
• Adequacy of the plan for evaluation and assessment of the program
• Diversity plan for increasing the number of students from underrepresented populations
• Accreditation (Is there a recognized accreditation agency for the program? Will accreditation be pursued?)
• Use of technology
7. The full proposal is reviewed by one or a combination of appropriate governance bodies, external consultants, and/or program review committees consisting of a representative(s) of the program proposing unit, state agency staff, and/or external experts in the area.
8. State agency takes action to:
• Approve (provisional approval or full approval)
• Disapprove
• Defer
9. If provisionally approved, the institution will address the issues raised by the state agency. The state agency reviews the program after a relatively short period (e.g., one year).
10. If fully approved, the institution will develop and implement the program.
11. If disapproved, the institution may have the right to appeal.
12. After the graduation of the first class of the new program, the program may receive an in-depth comprehensive review.
13. The program's status is changed to that of a current program.
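The routing of a full proposal through the agency actions in steps 8 through 11 can be sketched in code. This is purely an illustrative model, not any state's actual rules: the proposal attributes and decision logic below are hypothetical simplifications of the criteria listed in step 6.

```python
from enum import Enum, auto
from dataclasses import dataclass

class Decision(Enum):
    FULL_APPROVAL = auto()
    PROVISIONAL_APPROVAL = auto()
    DISAPPROVAL = auto()
    DEFERRAL = auto()

@dataclass
class Proposal:
    title: str
    mission_central: bool       # centrality to institutional mission
    need_documented: bool       # societal/occupational need shown
    resources_adequate: bool    # finances, faculty, library, facilities
    unresolved_issues: int = 0  # issues flagged by reviewers (step 7)

def agency_action(p: Proposal) -> Decision:
    """Hypothetical routing of a full proposal to one of the four
    agency actions (step 8); real agency criteria are more nuanced."""
    if not (p.mission_central and p.need_documented):
        return Decision.DISAPPROVAL           # may be appealed (step 11)
    if not p.resources_adequate:
        return Decision.DEFERRAL              # revise and resubmit later
    if p.unresolved_issues > 0:
        return Decision.PROVISIONAL_APPROVAL  # re-reviewed after ~1 year (step 9)
    return Decision.FULL_APPROVAL             # develop and implement (step 10)
```

A proposal that is mission-central, needed, and resourced, but with reviewer concerns outstanding, would land in provisional approval under this toy scheme, mirroring step 9.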
**Summary of Strengths and Weaknesses of Program Approval Processes by State Agencies**
Certain strengths and weaknesses in current practice were identified as follows:
**Strengths of Current Practice**
- Tends to improve the quality of the academic program
- Increases interinstitutional communication and collaboration
- Incorporates future assessment criteria and accountability measures
- Ensures demand and need
- Reduces duplication
- Conserves resources
- Stresses application of state planning priorities
**Weaknesses of Current Practice**
- Reduces autonomy of the institutions
- Can delay the initiation of needed academic programs
- Decision making may be politicized or arbitrary
- Staffing requirements may be excessive
**Generalized Patterns of Program Review Practices in States**
Not all states conduct program reviews. Where they are conducted, as in a majority of states, reviews follow differentiated or even idiosyncratic patterns. Though practiced in a variety of approaches, program review procedures in state agencies can be normatively represented in a conceptual scheme. This arrangement is depicted in Figure 2.
Notes:
1. Programs are selected for review on a cyclical or "triggered" basis. Cyclical patterns are based on varying recurring time frames. "Triggered" reviews occur in response to results from productivity measures or to interest on the part of the state legislature.
2. As programs are selected, some states use a peer review process as a precursor to the full review process. This process helps in the data collection phase.
3. Program reviews take a variety of forms. They may be done in conjunction with self-studies or accreditation visits. The institution, state agency, or external consultants may conduct them.
4. May be formal or informal. Programs that are approved conditionally are usually given a specific period of time to correct shortcomings. The programs are monitored and additional reviews may be conducted to determine the program's fate.
5. May lead to modification, consolidation, or elimination.
**Figure 2. State Agency Academic Program Review Process Model**
This conceptual scheme suggests three generalizations about academic program review processes. First, some external agent such as the state legislature or state agency selects programs for review. The review may be triggered by a concern such as productivity or mission-related matters. Second, institutions are requested to take certain actions such as conducting self-studies of program effectiveness. Third, state agencies take certain actions such as forming agency review committees or other structures which may include internal or external consultants and/or representatives from accrediting agencies to determine a program's approval status. Reviews, where conducted, often are focused on disciplines (or discipline clusters) or on broad categories such as degree level programs.
Academic program review practices can be placed in one of three general approaches:
*Independent Institutional Review.* In this approach, the state agency delegates the authority to conduct program reviews to the institution. The state agency does not exercise any supervision or audit of the processes (e.g., Michigan, Minnesota, Nevada, and New Jersey).
*Interdependent Institutional Review.* In this approach, the institution conducts the program review on a regular basis but does so under the guidance and audit of the state agency. The institution determines the review processes and criteria to be used consistent with the context and characteristics of the institution. The institution submits its program review report to the state agency according to an annual or cyclical state-determined plan. Program review reports conducted in this manner often include:
- Descriptive program information,
- Year of last program review,
- Documentation of continuing need,
- Assessment information related to expected student learning outcomes and the achievement of the program’s objectives,
- Plans to improve the quality and productivity of the program, and
- Program productivity indicators.
Based on the information that the institution provides, the state agency will make recommendations to modify, consolidate, or eliminate the program(s) (e.g., Hawaii, Kansas, and Montana).
*State-Mandated Review.* In this approach, the state agency determines the procedures and criteria of the program review, and conducts or commissions the review of the selected programs within the state system. The state agency staff will participate in the review process. System-wide (lateral) program review of similar programs within the state may be carried out at the same time, as can be seen in Illinois. The state agency also may conduct post-audit reviews of new programs following the graduation of the first class using pre-determined criteria (e.g., Georgia and North Dakota).
Variations on these program review approaches include the use of productivity reviews (normally triggered by evidence of below-standard efficacy) and cyclical reviews. When productivity reviews are incorporated into the process, productivity indicators (such as credit hours, course enrollments, number of majors, number of degrees awarded, cost, and related information) are examined annually as reported by the institution. The state agency identifies low-productivity and/or duplicative programs and takes action based on its determinations (e.g., Virginia and New Hampshire). Sometimes when reviews are triggered in this manner, the state agency reviews all similar programs in the state (e.g., Montana). When cyclical reviews are conducted, all programs are examined on some pre-determined schedule such as once each 3, 5, 7, or 10 years (e.g., South Carolina and Illinois).
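The "triggered" productivity screen described above amounts to comparing each program's annual indicators against minimum floors. The sketch below illustrates that logic only; the specific indicators kept and the threshold values are hypothetical, since actual floors vary by state and by degree level.

```python
from dataclasses import dataclass

@dataclass
class ProgramStats:
    name: str
    degrees_awarded: int  # degrees conferred in the reporting year
    majors: int           # enrolled majors
    credit_hours: int     # annual credit-hour production

# Hypothetical floors for illustration; real thresholds differ by state.
MIN_DEGREES, MIN_MAJORS, MIN_CREDIT_HOURS = 5, 15, 300

def triggered_for_review(s: ProgramStats) -> bool:
    """Flag a program for a 'triggered' productivity review when any
    indicator falls below its floor (illustrative logic only)."""
    return (s.degrees_awarded < MIN_DEGREES
            or s.majors < MIN_MAJORS
            or s.credit_hours < MIN_CREDIT_HOURS)
```

In a state like Montana, a flag on one program might then widen into a review of all similar programs statewide, as noted above.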
External consultants may be used as a complement to any of the generalized approaches to program review. External consultants form an advisory committee to participate directly in the program review process. On-site visitations may be performed. Most states require the use of external consultants. External consultants may be selected from several groups of experts:
1. **External evaluators:** Qualified professionals selected from in-state or out-of-state to provide objectivity and expertise.
2. **Representatives from peer institutions with similar programs:** Selected from similar institutions with similar programs to permit informed exchange and to establish comparable standards (e.g., Georgia and Wisconsin).
3. **Accreditation agencies:** Representatives from specialized and regional accreditation agencies recognized by the state agency may be used in the reviews (e.g., Montana and Georgia).
4. **Representatives from state agencies of elementary and secondary education:** Selected to achieve better linkage among the different educational levels.
5. **Local lay people and other interested parties:** Selected to address societal and occupational needs.
The consultants and/or representatives comment upon the quality of the program, resources available to the program, outcomes of the program, program costs, and other factors. An external review report is provided on the findings and each institution may have the opportunity to review the report and make comments. The final report and comments of the institution are reviewed by the state agency where further action may be taken.
The generalized academic program review approaches may occur in combination with one another and may be combined with the use of external consultants. Some of these combinations may be described as follows:
- Example 1 features interdependent and state-mandated reviews with the use of external consultants (e.g., Arizona, Wisconsin, and Idaho).
- Example 2 features interdependent review and the use of consultants (e.g., Washington and Georgia).
- Example 3 features independent review and the use of external consultants (e.g., Michigan, Minnesota, Nevada, and New Jersey).
- Example 4 features state-mandated review characterized by productivity review approaches or cyclical state-mandated reviews in combination with the use of external consultants (e.g., Virginia and West Virginia).
- Example 5 features independent review under state agency guidelines (e.g., New Hampshire).
**Summary of Strengths and Weaknesses of Program Review Processes by States**
Strengths and weaknesses of program review practices may vary according to the model or approach chosen; however, they may be characterized generally as follows:
**Strengths of Program Review Practices**
- Provides an on-going quality assurance check
- Even when done on an irregular basis, the process serves as an incentive to ensure quality at the institutional level
- When outside reviewers are used, a greater measure of objectivity can be obtained
**Weaknesses of Program Review Practices**
- Institutions may focus on the review process and do little with the results
- Reviews are not done with great enough frequency to provide real quality control
- Process is time consuming
- Process is expensive
**Summary of Program Approval and Review Practices by States**
Program approval and program review can be seen as part of the integrated components of quality assurance practices within a state system of higher education. In this view, program approval is the initial and authorizing stage of program quality assurance and program review is a continuation and revalidation of the approval process. The objectives of program approval and program review are the same: ensure mission compatibility, maintain academic standards, assure continuing improvement of academic programs, and guarantee accountability of academic programs. Issues in both program approval and program review also are the same: mission compatibility, need, program structure, availability of resources (financing, faculty and staff, facilities, technology, etc.), and quality assurance.
Program approval and program review processes can be both internal and external, that is, they can be carried out both within the institutions themselves and/or by external agents. External agents may include the state agency, external consultants, peer institutions, accreditation agencies, and other interested parties.
Internal program approval and program review can best safeguard the institution's autonomy, integrate the processes with institutional self-improvement efforts, be more flexible, and boost the morale of the faculty and administrators of institutions. However, internal program approval tends not to provide sufficient stimulation and motivation for improvement. External program approval and review procedures, standing apart from the internal program operating processes, exercise outside monitoring, challenge existing program development notions, ensure maximum objectivity and expertise, and encourage the exchange of good practices. However, external review approaches may intrude on institutional autonomy and bring extra financial and reporting burdens to the institutions.
Distinctions between program approval and review practices between undergraduate and graduate programs cannot be clearly drawn from this study. Some states clearly are more concerned with one level of academic program than the other, but no systematic pattern in these concerns was evident from the data.
**Program Approval and Review Practices by Foreign Countries and Regions**
Program approval and review practices of Australia, Canada, England, Germany (Lower Saxony), Hong Kong, the Netherlands, New Zealand, and Scotland are summarized as a single practice. In these international practices, program approval and program review often are intertwined and are called quality assurance.
Quality assurance approaches in international locations are similar to practices in the United States in many respects. Three general models are evident:
1. Self-regulating (regulation by the institution or provider of the educational program), as seen in Canada where universities have the authority and responsibility for quality assurance.
2. Externally regulated (regulation by an external agency), as seen in Australia. The federal government of Australia plays a direct and intrusive role in educational policy.
3. A combination of the two (mixed or collaborative regulation), as seen in most of the countries and regions, such as in England, Scotland, the Netherlands, Hong Kong, Germany (Lower Saxony), and New Zealand, though the degree of the external control varies to a great extent. For example, in England and Scotland, quality assurance is more government-driven than in the Netherlands where the institutions are delegated more autonomy. This approach features institutional self-evaluation and cyclical review conducted by a quality assurance agency.
**Distinctive Features of Program Approval and Review Practices in Foreign Countries and Regions**
- Institutional self-regulation (self-study) is combined with external quality assurance agency review or audit. The quality assurance agency ensures that the institutions implement their own quality assurance procedures effectively.
- The institution may either design its own quality assurance procedures or adopt a formal quality assurance policy determined by the quality assurance agency or by the government. Adopting the formal quality assurance policy helps to emphasize system priorities and ensures consistency and comprehensiveness of comments and judgment of external reviewers across the system.
- External reviewers (assessors) play a very important role to ensure objectivity and expertise, promote the exchange of good practices, and respond to the needs of the society. In some countries, external reviewers are drawn from foreign countries (e.g., Hong Kong and the Netherlands), from industry (e.g., the Netherlands), and from the local lay people (e.g., Hong Kong). External reviewers may receive training from the quality assurance agency before visiting institutions under review (e.g., Scotland).
- In some countries, quality assurance initiatives are very extensive, including an assessment of institutional teaching and learning practices of all academic programs and an assessment of the research skills and training of junior academic staff (e.g., United Kingdom and Germany).
- Quality assurance results are scored (e.g., United Kingdom), ranked (e.g., the Netherlands) or published (e.g., United Kingdom) in some countries. Decision-making, such as funding and program elimination, is based on these scores or ranks.
- On-site visits involve meetings with groups of faculty, students, administrative staff, and those responsible for running support services. Time is spent in direct observation of teaching and learning.
- To reduce the administration burden, participants are encouraged to share proposals, databases, and trend analyses electronically (e.g., New Zealand).
**Discussion**
According to Barak (1998), more than half of the 50 states are considering deregulation or decentralization of program approval and review practices. This study confirmed a widespread interest in finding alternatives to current practices that still meet statutory or policy requirements.
Academic program approval and review procedures generally are conducted to address program quality and program productivity at the institutional level. Statewide concerns include access and capacity, quality, occupational supply and demand, and program costs and institutional productivity. Interest in decentralization or deregulation of academic program approval and review policies in a context of accountability was evident in state agencies though most demonstrated this interest only in their future plans.
Evidence from this study suggested that overall program approval and review practice in 50 states and eight foreign countries and regions can be distilled into three conceptual models for practice:
- **State Regulatory Model.** A centralized model for quality control characterized by development and application of centralized regulatory requirements for program approval and review by a state-level agency.
- **Collaboration Model.** A consolidated model for institution and state agency cooperation characterized by program approval and review procedures jointly developed and administered by the institution and the state agency.
- **Accreditation Model.** A decentralized, standards-based model characterized by the development and application of standards and guidelines for program approval and review and by cyclical audit by state and consulting agents from outside the institution.
The State Regulatory Model and Collaboration Model are derived from practices in the 50 states. The Accreditation Model primarily is used in foreign countries and regions, though aspects of accreditation are used in program review practices in this country. These models can be depicted along a continuum of state control as shown in Figure 3.
Analysis of information from this study was used to formulate two suggested alternative models for consideration by SCHEV—the Quality Assurance Audit Model (see Figure 4) and the Modified Collaboration Model (see Figures 5 and 6).
The Quality Assurance Audit Model is a decentralized model of program approval and review characterized by:

- Delegation of appropriate state agency authority to institutional governing boards,
- Development and application of institutional-level quality assurance policies and procedures (referring to policies and practices that include quality, duplication, and productivity issues), and
- Cyclical or triggered state-level audit of these policies and procedures.

**Figure 4. Quality Assurance Audit Model of Program Approval and Review**

The Modified Collaboration Model is a centralized model of program approval and review characterized by:

- Shared institution and state-level oversight authority,
- Institutional-level program approval by classification according to mission relatedness (within mission, related to mission, outside of mission) and the requirement for new resources, and
- Cyclical reviews by the state-level agency (for example, at five-year intervals) depending upon the classification of initial approval.

**Figure 5. Modified Institution/State Collaboration Model**
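The Modified Collaboration Model's stratification by mission relatedness and resource demand can be sketched as a toy classifier. The track names and routing rules below are hypothetical illustrations of the stratification idea, not SCHEV policy or any state's actual procedure.

```python
from enum import Enum

class Mission(Enum):
    WITHIN = "within mission"
    RELATED = "related to mission"
    OUTSIDE = "outside of mission"

def review_track(mission: Mission, needs_new_resources: bool) -> str:
    """Illustrative stratification: more scrutiny for programs that are
    farther from mission and/or require new dollars (hypothetical rules)."""
    if mission is Mission.OUTSIDE or needs_new_resources:
        # Highest-risk cases for the institution and the state.
        return "state-level review before approval"
    if mission is Mission.RELATED:
        return "institutional approval, cyclical state review"
    # Within mission, funded by reallocation: maximum institutional control.
    return "institutional approval, reporting only"
```

Under rules like these, a mission-central program funded by reallocation stays almost entirely in institutional hands, while anything requesting new dollars or stretching the mission draws state-level attention first.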
The degree of centralization among the five models, those that represent current practice and those proposed as alternatives, can be depicted as shown in Figure 7 on the continuum of state agency control.
Both alternative models are attractive for different reasons relative to state interests in deregulation or decentralization of program approval and review practices. The Quality Assurance Audit Model places the agency in a policy/coordination role that enables the agency staff to provide broad oversight for the process of quality assurance. The state agency would be integrally involved in process development and management but would leave the implementation of the process to its respective institutions.
The most apparent disadvantage of the Quality Assurance Audit Model is that too much authority and control are delegated to the institutions (although this concern runs counter to stated interests in deregulation or decentralization). However, a periodic system-wide audit of program offerings noting year-to-year changes might serve as an excellent way to monitor institutional activity. Self-study reports and accreditation visits, processes already in place in most public institutions, would provide additional information on institutional decision making in the area of program approval and review.
The Modified Collaboration Model is attractive because it stratifies the approval and review process based on two critical factors--mission and cost. The model prescribes that additional attention be given to programs that require supplementary resources and fall outside an institution's current mission--the areas of greatest risk to the institution and the state. At the same time, however, institutions building new mission-related programs by reallocating existing resources receive additional control and authority. The disadvantage in this process is that risk-taking and innovation may be reduced if institutions act to avoid the more rigorous reviews that come with programs that may fall outside their current mission or require new resources.
**Implications**
State agencies that wish to modify their current academic program approval and program review practices to accomplish goals of deregulation or decentralization in an environment of accountability may find suitable policy alternatives. Most current practices are reasonably well portrayed in either the State Regulatory Model or the Collaboration Model. As currently practiced, however, neither of these models accomplishes the goals of deregulation or decentralization very well.
Two alternative models to current practice were developed as part of this study. These new models, when appropriately constructed on policies consistent with the applicable statutory requirements, can release state agencies from burdensome practices without relinquishing responsibility or diminishing accountability. Both the Quality Assurance Audit Model and the Modified Collaboration Model may serve this purpose although clearly the Quality Assurance Audit Model moves the agencies further from current practice than does the Modified Collaboration Model.
State coordinating and governing boards across the country are struggling to find new, more effective ways of dealing with program approval and program review. This synthesis of current practice, along with the two alternative models suggested here, may prove helpful as these discussions continue.
**Notes**
1. Special thanks are due to Virginia Tech doctoral students Chunmei Zhao, Michael Perry, and Miya Simpson who assisted in all phases of the research for this project.
2. A copy of the complete study report and a complete bibliography of materials used for the original study is available at: http://cpaa.asu.edu/cpaa/v7n23/v7n23.pdf
**References**
Barak, R. J. (1991). *Program review and new program approval of state education boards*. (Report to State Higher Education Executive Officers) Denver, CO: SHEEO.
Barak, R. J. (September, 1998). Personal communication.
Creamer, D. G., Janosik, S. M., Zhao, C., Simpson, M., & Perry M. (1998). *Academic program approval and review practices in states and selected foreign countries*. (Special Study Report to the State Council of Higher Education for Virginia). Blacksburg, VA: Virginia Tech.
McGuinness, A. C., Jr. (1997). Essay: The functions and evolution of state coordination and governance in postsecondary education. In Education Commission of the States, 1997 state postsecondary education structures sourcebook: *State coordinating and governing boards* (pp. 1-48). Denver, CO: Education Commission of the States.
**About the Authors**
**Don G. Creamer**
*email@example.com*
Don G. Creamer is Professor and Coordinator of Higher Education and Student Affairs in the Department of Educational Leadership and Policy Studies at Virginia Polytechnic Institute and State University (Virginia Tech). He is Director of the Educational Policy Institute of Virginia Tech.
**Steven M. Janosik**
Steven M. Janosik is Associate Professor and Senior Policy Analyst in the Department of Educational Leadership and Policy Studies at Virginia Polytechnic Institute and State University (Virginia Tech). He is Associate Director of the Educational Policy Institute of Virginia Tech and recently served as Deputy Secretary of Education for the Commonwealth of Virginia.
Copyright 1999 by the *Education Policy Analysis Archives*
The World Wide Web address for the *Education Policy Analysis Archives* is [http://cpaa.asu.edu](http://cpaa.asu.edu)
General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, [firstname.lastname@example.org](mailto:email@example.com) or reach
him at College of Education, Arizona State University, Tempe, AZ 85287-0211. (602-965-9644). The Book Review Editor is Walter E. Shepherd: firstname.lastname@example.org. The Commentary Editor is Casey D. Cobb: email@example.com.
**EPAA Editorial Board**
| Name | Institution |
|-----------------------------|--------------------------------------------------|
| Michael W. Apple | University of Wisconsin |
| John Covaleskie | Northern Michigan University |
| Alan Davis | University of Colorado, Denver |
| Mark E. Fetler | California Commission on Teacher Credentialing |
| Thomas F. Green | Syracuse University |
| Arlen Gullickson | Western Michigan University |
| Aimee Howley | Ohio University |
| William Hunter | University of Calgary |
| Daniel Kallós | Umeå University |
| Thomas Mauhs-Pugh | Green Mountain College |
| William McInerney | Purdue University |
| Les McLean | University of Toronto |
| Anne L. Pemberton | firstname.lastname@example.org |
| Richard C. Richardson | New York University |
| Dennis Sayers | Ann Leavenworth Center for Accelerated Learning |
| Michael Scriven | email@example.com |
| Robert Stonehill | U.S. Department of Education |
| David D. Williams | Brigham Young University |
| Greg Camilli | Rutgers University |
| Andrew Coulson | firstname.lastname@example.org |
| Sherman Dorn | University of South Florida |
| Richard Garlikov | email@example.com |
| Alison I. Griffith | York University |
| Ernest R. House | University of Colorado |
| Craig B. Howley | Appalachia Educational Laboratory |
| Richard M. Jaeger | University of North Carolina -- Greensboro |
| Benjamin Levin | University of Manitoba |
| Dewayne Matthews | Western Interstate Commission for Higher Education|
| Mary McKeown-Moak           | MGT of America (Austin, TX)                      |
| Susan Bobbitt Nolen | University of Washington |
| Hugh G. Petrie | SUNY Buffalo |
| Anthony G. Rud Jr. | Purdue University |
| Jay D. Scribner | University of Texas at Austin |
| Robert E. Stake | University of Illinois—UC |
| Robert T. Stout | Arizona State University |
**EPAA Spanish Language Editorial Board**

Associate Editor for Spanish Language: Roberto Rodríguez Gómez, Universidad Nacional Autónoma de México

| Name | Institution |
|-----------------------------------------|--------------------------------------------------------|
| Adrián Acosta (México) | Universidad de Guadalajara |
| Teresa Bracho (México) | Centro de Investigación y Docencia Económica-CIDE |
| Ursula Casanova (U.S.A.) | Arizona State University |
| Erwin Epstein (U.S.A.) | Loyola University of Chicago |
| Rollin Kent (México) | Departamento de Investigación Educativa, DIE/CINVESTAV |
| Javier Mendoza Rojas (México) | Universidad Nacional Autónoma de México |
| Humberto Muñoz García (México) | Universidad Nacional Autónoma de México |
| Daniel Schugurensky (Argentina-Canadá) | OISE/UT, Canada |
| Jurjo Torres Santomé (Spain) | Universidad de A Coruña |
| J. Félix Angulo Rasco (Spain) | Universidad de Cádiz |
| Alejandro Canales (México) | Universidad Nacional Autónoma de México |
| José Contreras Domingo | Universitat de Barcelona |
| Josué González (U.S.A.) | Arizona State University |
| María Beatriz Luce (Brazil) | Universidade Federal do Rio Grande do Sul-UFRGS |
| Marcela Mollis (Argentina) | Universidad de Buenos Aires |
| Angel Ignacio Pérez Gómez (Spain) | Universidad de Málaga |
| Simon Schwartzman (Brazil) | Fundação Instituto Brasileiro de Geografia e Estatística |
| Carlos Alberto Torres (U.S.A.) | University of California, Los Angeles |
Education Policy Analysis Archives
Volume 7 Number 24 August 12, 1999 ISSN 1068-2341
A peer-reviewed scholarly electronic journal
Editor: Gene V Glass, College of Education
Arizona State University
Associate Editor for Spanish Language
Roberto Rodríguez Gómez
Universidad Nacional Autónoma de México
Copyright 1999, the EDUCATION POLICY ANALYSIS ARCHIVES.
Permission is hereby granted to copy any article if EPAA is credited and copies are not sold.
Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.
Autonomia Universitária no Brasil: Uma Utopia? (University Autonomy in Brazil: A Utopia?)
Maria de Lourdes de Albuquerque Fávero
Abstract
The purpose of this work is to trace the historical stages through which university autonomy in Brazil has evolved. It begins with 1931 when Minister Francisco Campos conceded "relative autonomy" to the universities and describes developments to the present day. The history of autonomy in Brazilian universities is related to various political regimes and national movements through which Brazil has passed in the last 70 years. Final thoughts on the challenges facing academic autonomy in present-day Brazil are presented.
Preliminaries
The reflections developed here center on the principle of university autonomy, recognized as one of the core questions of the university in Brazil; many other questions revolve around it. The history of the country's university institutions shows that autonomy has frequently been denied, through legal provisions or through mechanisms of control and containment. Among us, the question predates even the creation of the first official university (Moacyr, 1942: 71-88; Cunha, 1986). The term first appears in education legislation in 1911, in the Rivadávia Corrêa Reform (Decree No. 8,659). The issue was raised in response to a movement to contain the growth of enrollments in the faculties, both official and private, fostered by the unrestricted admission of secondary school graduates.
The result did not have the expected effects. On the one hand, it reduced the number of students entering the official institutions, who were now required to pass an entrance examination; on the other, the same did not occur in the so-called "free schools" which, relying on the principle of autonomy guaranteed by decree, offered candidates every facility. As a consequence, in 1915 the term autonomy was removed by the Carlos Maximiliano Reform (Decree No. 11,530), which reorganized secondary and higher education in the country. Public higher education institutions lost, among other prerogatives, the right to elect their own leaders, who came to be freely appointed by the President of the Republic from among the tenured or retired full professors.
On the basis of the 1915 Reform, the Federal Government created in 1920, through Decree No. 14,343, the country's first university institution, the Universidade do Rio de Janeiro, with the rector and the directors of its units appointed by the President of the Republic. From then on, control over the federal universities became ever more explicit. In the Higher Education Reform carried out by Minister Francisco Campos in 1931, one especially prominent point is the grant of relative autonomy to the university, as a preparation for full autonomy. Despite the justification that it was not possible, at that moment, to grant the universities "full autonomy" in either the didactic or the administrative sphere, the question remained, strictly speaking, open.
A closer analysis of the statement of purpose accompanying Decree No. 19,851, which issued the Statute of Brazilian Universities, and Decree No. 19,852, which provided for the Reorganization of the Universidade do Rio de Janeiro (URJ), reveals a certain ambiguity: at times autonomy is assured to the university, at times it is admitted only in restricted form, alternating momentary openings and closures, which is itself a form of control and centralization. This becomes clear in the organization of the curricula of the higher education institutions (IES). Administrative autonomy is likewise limited: rectors, unit directors, and members of the Technical Administrative Council (CTA) are chosen by the Government from three-name lists.
With this reform, the Universidade do Rio de Janeiro was obliged to carry out its first reorganization. Its statutes were reformulated to conform to the provisions of the decrees mentioned above. In July 1937 it was reorganized a second time, the Central Government being concerned to give it a national character by renaming it Universidade do Brasil (UB). Among the proposals presented at the federal level since 1935 concerning this university, one is particularly expressive: "the federal university should constitute the most solid redoubt, in which traditions are safeguarded, principles established, and guidelines laid down that assure the Brazilian nation the continuity of progress, equilibrium and liberty." It continues: "To such a university, which thus proposes to exercise so great a national influence, the name Universidade do Brasil is well suited" (MESP, 1935: 31). Although the conception of the Universidade do Brasil as a "standard model" had been justified since 1935, what one finds is that, from 1937 on, this institution became the model imposed by the central government on the country's other universities and higher education programs.
The ideological guidelines that steered education during the Estado Novo (1937-45) were strongly centralizing and authoritarian in character, which created serious problems for the country's university institutions. In this period the universities became victims of a monolithic organization of the State, without any autonomy. All services were subjected to an exacerbated centralization, from which followed the notion that the educational process could be the object of strict legal control. Under this orientation the Government claimed for itself, as we shall see below in the case of the UB, the full right to designate university leaders in commission. Thus both the rector and the unit directors were chosen by the President from among the respective full professors.
After the Estado Novo, in 1945, and still under the Provisional Government, the Universidade do Brasil came to enjoy administrative, financial and disciplinary autonomy under Decree No. 8,393/45. The rector was once again chosen by the President of the Republic from a three-name list, as provided in the 1931 Reform. Unit directors came to be appointed by the rector, with the prior authorization of the President of the Republic obtained through the Ministry of Education, the choice being made from a three-name list drawn up by the respective faculty assembly.
The National Education Guidelines and Framework Law of 1961 (Law No. 4,024) established, in general terms, that the universities would enjoy administrative, financial, didactic and disciplinary autonomy. It is important to recall, however, that the provisions of the original bill defining the types of autonomy were vetoed. The law itself then restricted the autonomy granted to the universities by including, among the powers of the Federal Education Council, that of "approving the statutes of the universities and conducting inquiries, through special commissions, in any higher education establishment, with a view to faithful compliance with this law." It should also be noted that, while this provision had no major consequences before the military coup of 1964, from then on it was applied, in some cases, in a quite discretionary fashion.
One of the basic concerns of the military regime installed in 1964 was to modernize the university. Decree-Laws No. 53/66 and 252/67 were the starting point for broader measures aimed at modernizing the higher education institutions (IES). Under these provisions the federal universities had to reformulate their statutes, which brought significant changes to their internal power structure. Only in 1968 was the University Reform consolidated, with Law No. 5,540 of November 28 of that year. A careful reading of this law shows that, while recognizing the principle of the university's didactic-scientific, disciplinary, administrative and financial autonomy, it simultaneously limits it. That limitation was strongly reinforced by the exceptional acts issued by the military government, above all Institutional Act No. 5 (AI-5) of December 13, 1968, and Decree-Law No. 477 of February 1969, based on § 1 of that Act. In the latter instrument the military government defined the disciplinary infractions committed by teachers, students, and staff or employees of public or private establishments, and the measures to be adopted in each case.
Complementing the provisions of that Decree-Law, further measures were issued by the Government, such as Ministerial Directives No. 149/69 and 3,525/70. These measures contributed still further to the paralysis of the members of the university institutions, opening space for persecutory actions by university administrators against their subordinates.
Among the ordinary legislation on higher education enacted after Constitutional Amendment No. 1/69, Law No. 6,420 of June 3, 1977 deserves mention: it amended art. 16 of Law No. 5,540/68 to require six-name lists for the selection of the heads of official schools. In the federal universities organized as autarchies, the Rector and Vice-Rector came to be appointed by the President of the Republic from a list drawn up by an Electoral College, generally composed of the University, Teaching and Research, and Trustee Councils. In some cases the drawing up of this list was marked by interests foreign to the university, the result of a political process manipulated by the rector's office. In others, the list served to guarantee the established external power, in a nontransparent way, the inclusion and/or selection of names of its preference.
As for the public university foundations, after Law No. 6,733/79 they held no autonomy whatsoever in choosing their leaders. Rector and Vice-Rector were chosen by the President of the Republic without even the requirement of a six-name list, as were the members of the University's Governing Council. Leadership positions thus became positions of personal trust.
As mentioned earlier, it is also worth recalling that in the history of the country's university institutions this was not the first time the Central Power claimed for itself the full right to designate, in commission, the leaders of public universities. Shortly before the Estado Novo was installed, Law No. 452 of July 5, 1937 reorganized the Universidade do Rio de Janeiro and instituted the Universidade do Brasil as the standard model for the other universities. Art. 27 of that law established that both the Rector and the Directors of the teaching establishments were to be chosen by the President of the Republic from among the respective full professors and appointed in commission. This form of selecting university leaders, adopted at a moment of intense centralization and authoritarianism in Brazil and practiced during the Estado Novo, was taken up again three decades later by the Meira Mattos Commission. Analyzing the "crisis of authority of the Brazilian educational system," the Commission recommended, among other things, "altering the current system of appointing University Rectors and Directors of Higher Education Establishments, giving the President of the Republic the power to fill such posts independently of any nomination by the respective universities or faculty assemblies." Twelve years later this procedure came to be applied in the public university foundations, as a consequence of Law No. 6,733/79 (Sguissardi, 1993).
During the military regime, the gravity of what happened to the university is not clearly expressed in the legal instruments, even though some of them, such as Decree-Law No. 477/69, were exceedingly blunt. The gravity was expressed in the regime of terror and silence to which the university and society were subjected. A typical example was the creation and maintenance of "security advisory offices" inside the universities, designed to prevent democratic mechanisms, even when provided for by law, from being used effectively, so that "perfect order" would be guaranteed and "peace" could reign. These "advisory offices" were only fully abolished in the federal public universities in 1985.
It should also be noted that the reform helped to strengthen the concentration of authoritarian power inside the university institutions, through power mechanisms monopolized in large part by factions of the old leadership cliques, who feared a process of radicalization and contestation against the regime. This situation intensified and reached its fullest expression when the mechanism for electing the leaders of the public universities was altered. We refer to the replacement of the three-name list by the six-name list, which made oversight by voters and the academic community more difficult, increasing the possibility of including trusted personnel or favoring the designs of the established power (Cunha, 1986). It is worth recalling that, if the 1970s were marked by student demobilization, the product of years of authoritarianism, it was at the end of that decade that the faculty movement emerged. In the 1960s faculty had been an absent collective: until then they had not made themselves felt as an organized force. Only later did they begin to fight in solidarity for the democratization of the universities and for their autonomy.
At the threshold of the 1980s, the struggle for the redemocratization of society, and of the university as part of it, resumed in the country. A significant number of professors also became aware that some of the university's most relevant problems concerned power and decision-making, in the relation between representatives and represented, governors (the State, the maintaining entities) and governed. An alternative project of university reform, to be effective, would therefore have to be tied to a project of democratizing society. Among the questions running through the discussions were the academic, scientific and administrative autonomy of the university, as well as the State's growing disengagement from public schooling. With these concerns, representatives of the faculty associations of Rio de Janeiro drew up a proposal presented at the Annual Meeting of the SBPC (Sociedade Brasileira para o Progresso da Ciência), held in Fortaleza in July 1979. It should be recorded, however, that while the faculty were discussing a proposal for university reform, the Government, dispensing with the participation of the academic community, created an Interministerial Commission to examine three draft bills: the special-regime autarchy; the selection and appointment of leaders; and the restructuring of the higher education teaching career. In the face of the academic community's reaction, the first two were shelved while Minister Eduardo Portela still headed the Education portfolio. The third was signed into law in December 1980, at the close of a national strike of federal university faculty, by the then Minister of Education, General Rubem Ludwig. From the so-called "Nova República" onward, other measures were adopted with respect to the university institutions. In March 1985 the National Commission for the Reformulation of Higher Education was instituted, and the idea of autonomy permeates its entire Final Report. It is worth noting, however, that if in some respects the Commission's proposals advanced the causes of autonomy and democratization, this was no accident: it was the fruit of years of struggle by the academic community as a whole, and by the faculty movement in particular, which since 1979 had been organizing, together with other entities, to claim its rights, confronting in some cases and moments the arbitrariness and authoritarianism of the constituted power (Fávero, 1994, pp. 149-77).
With the aim of rethinking and better adapting the National Commission's proposals contained in its *Relatório Final*, the *Grupo Executivo para a Reformulação da Educação Superior* (GERES) was created within the MEC in February 1986. With respect to university autonomy, GERES does not assure the university this principle, since there can be no autonomy without the democratization of the university.
Closing this section, we observe further that university autonomy, when poorly understood, may end up reinforcing not only state tutelage but also corporatist interests within the university itself.
**The principle of autonomy in the 1988 Constitution and in the current Education Guidelines and Framework Law**
The Federal Constitution of 1988 enshrined university autonomy in its art. 207, which provides: "The universities enjoy didactic-scientific, administrative, and financial and patrimonial management autonomy, and shall observe the principle of the inseparability of teaching, research and extension."
Note the precision of the terms: "the universities enjoy autonomy (...) and shall observe the principle (...)." The verbs are imperative. In its proper sense, the word principle conveys the idea "of origin, beginning, primary cause" (Ferreira, 1986: 1393). And this is the idea present in the expression "principle of university autonomy": it designates not a constitutional principle or a programmatic constitutional norm, but a university principle, or even a principle of "educational law," inherent to university activity rather than to the legal order, "in the sense of an axiological orientation for the understanding of the national legal system" (Ranieri, 1994: 100). Understood in this way, autonomy is the primary cause of university activity, and it is in this sense that the expression "principle of autonomy" must be understood.
It is worth recalling that the expression *autonomous entity* belongs to domestic public law. Such an entity governs itself internally, but externally its limits are traced by the Constitution, that is, by the mode of its political participation within a sovereign nation. We also call attention to the fact that, although the Constitution makes clear that the University enjoys all the attributes ascribed to autonomy, at no point is it said to enjoy political autonomy, since it is neither a nation nor a State (Cury, 1991: 27). Autonomy, as provided in art. 207, is an institutional mode of being and demands freedom for the university to determine itself. This article cannot, however, be analyzed in isolation, since the Constitution must be viewed as a whole and interpreted systematically. We thus cannot discuss it without relating it to other constitutional provisions, such as art. 212, which deals with the public resources destined for public and private education, and art. 206, which provides for the freedom to learn, to teach, to research and to disseminate thought and knowledge as basic principles of education (Barracho, 1996, pp. 1-2). (Note 1)
Understood in this perspective, didactic-scientific autonomy implies the university's freedom to: a) establish its objectives, organizing teaching, research and extension, without any doctrinal or political restrictions, in undergraduate and graduate programs and in others carried out under its responsibility; b) define lines of research; c) create, organize, modify and abolish courses; d) draw up the academic calendar and the regime of teaching work; e) set criteria and rules for the selection, admission, promotion and transfer of students; and f) confer degrees, diplomas, certificates and other academic titles. Along the same lines, from the administrative point of view, the universities have full freedom to: a) organize themselves internally, establishing their decision-making bodies in whatever form they see fit; b) draw up and reformulate their statutes and bylaws; c) establish their teaching and technical-administrative staffing, in accordance with their didactic-scientific planning.
The third dimension concerns autonomy of financial and patrimonial management. In its most current sense, to manage means "to have management over: to administer, direct, govern, run" (Ferreira, 1986, p. 848), which implies the power to draw up, execute and restructure budgets, and to constitute and dispose of assets. For the public universities this means: a) granting the university the competence to draw up its budget and execute its expenditures, starting from its basic units and submitting them to the approval of its higher collegiate bodies; b) receiving the resources that the Public Power is obliged to transfer to it for personnel, capital expenditures and other operating costs; c) administering the income from its own assets and disposing of it, in the form set out in its statute; d) receiving inheritances, bequests and financial cooperation arising from agreements with public and private entities; e) entering into contracts for works, purchases, alienation or concession, in accordance with the administrative procedures of public bidding. From the foregoing it can be inferred that, if on the one hand university autonomy in the full sense has never existed in the country, despite being proclaimed in the Constitution and in official documents, on the other hand there is, in an ever more conscious way, a struggle for the effective construction of that autonomy on the part of entities, scientific associations and organized groups inside and outside the universities. Yet the National Education Guidelines and Framework Law (LDB), signed into law in December 1996, does not meet these aspirations.
A careful reading of that law fails to make clear that the university's autonomy is intended to guarantee freedom in the production and transmission of knowledge, as well as the self-management of its resources in pursuit of its purposes, and that administrative and financial-patrimonial autonomy derive from, and are subordinate to, didactic-scientific autonomy, as means of guaranteeing its effectiveness.
In view of the above, what is now needed is work in defense of the principles adopted by the Constitution and of what was sought during the passage of the National Education Guidelines and Framework Law. Drawing on Marilena Chauí, we ask: "should the public university enjoy academic autonomy to define its activities and the way of carrying them out? (...) Is it the university that autonomously decides in what, how and when to relate to business, or the reverse?" It is not a matter, as that author well demonstrates, "of sanctifying or demonizing the interests of business corporations, nor those of university corporations, but of asking whether the discussion of the democratic public university should be conducted on the terrain of interests or on that of rights. If on that of interests, it must be proved that some are more legitimate than others; if on that of rights, then university autonomy is a precondition for defining fields of interest" (Chauí, 1995, p. 61).
A careful analysis of the Ministry of Education's latest document on "University Autonomy" (MEC, 1999) will likely lead one to conclude that, in the case of the public universities, as Chauí aptly notes, "in fact, university autonomy is reduced to the management of revenues and expenditures, in accordance with the management contract by which the State sets goals and performance indicators that determine whether or not the contract is renewed. Autonomy thus means the entrepreneurial management of the institution and provides that, to meet the goals and reach the indicators imposed by the management contract, the university has 'autonomy' to 'raise funds from other sources by forming partnerships with private companies.'" In the language of the Ministry of Education, "flexibilization is the corollary of autonomy" (Chauí, 1999: 3). In this perspective, "the university's position in the service-provision sector gives a very definite meaning to the idea of university autonomy and introduces terms such as university quality, university evaluation and university flexibilization" (Ibid.).
As regards the public institutions, the MEC document Fundamentos para uma Lei que regule a autonomia das universidades federais... reduces autonomy, as already noted, to the management of revenues and expenditures in accordance with the management contract, under which the State sets goals and indicators. This appears explicitly in items 8 through 12 of the document, which deal, respectively, with: "a) the possibility of expanding the managerial, budgetary and financial autonomy of the federal universities through the execution of an institutional development contract, at the federal university's option; b) the legal requirements inherent to the contract; c) the terms of the contract and its affinity with other legal texts in force; d) the managerial, budgetary and financial advantages arising from execution of the contract; e) final provisions: a transitional rule on personnel and the competence of the Minister of State for Education to issue the operational regulations" (MEC, 1999: 8-11).
After a close analysis of this document, we are led to ask: what can be done so that the public university does not end up becoming a mere provider of services, relegating to second place its role as a social institution that should aspire to the universality of knowledge, to reflection and to critique?
In conclusion, the question remains: what is to be done?
If autonomy is understood not as an end in itself but as a necessary condition for guaranteeing the university's reasons for being, it must not be forgotten that it is not a gift but the result of an exhausting conquest. We recall, too, that the university is not an abstract entity, separate from the society that maintains it and from the State that gives it legal existence. And if, on the one hand, university autonomy in the full sense has never existed in Brazil, despite being proclaimed in the latest Constitution and in official documents, on the other hand a struggle is being waged for the effective construction of this principle. That struggle, however, cannot exclude evaluation and social oversight of university production, based on knowledge of and attention to its practices.
It must not be forgotten that the country is passing through a difficult moment, especially for the public university. We live in a period marked by the "success" of the neoliberal model, even though its social and cultural failures are already being felt in other Latin American countries. In this regard, the World Bank text La Enseñanza Superior. Las lecciones derivadas de la experiencia is quite illuminating. Reading it shows how the proposals presented by the Ministry of Education in Brazil align with the document's recommendations. In the latter, the critique of public university institutions appears not as a mere abstract accusation but in relation to the material conditions of society, through the government's adoption of the neoliberal ideology, which defends "the transformation of the space of political discussion into a strategy of advertising persuasion; the celebration of the supposed efficiency and productivity of private initiative as opposed to the inefficiency and waste of public services; the redefinition of citizenship by which the political agent becomes an economic agent and the citizen a consumer: all these are important central elements of the global liberal project. It is within this global project that the redefinition of education in market terms is inscribed" (Banco Mundial, 1995: 15). From this standpoint, those who criticize the public university propose as a way out the "university of results," the "university of services," whose standard model is supplied by business. To increase efficiency and quality in higher education, the World Bank proposes, among others, the following key points: "a) fostering greater differentiation among institutions, including the establishment of private institutions; b) providing incentives for public institutions to diversify their sources of funding, among them student cost-sharing and the linking of fiscal funding to results; and c) redefining the role of government in higher education and adopting policies concretely aimed at prioritizing the objectives of quality and equity" (Ibid.: 29).
With regard to autonomy, the document proposes that "greater institutional autonomy is the key to the success of the reform of public higher education, so that resources may be used more efficiently," and that "recent experience has indicated that autonomous institutions respond better to incentives to improve quality and increase efficiency" (Ibid.: 69-70). The scope of this proposal becomes clearer when one reads what is thought, for example, about the organization of a national research system (Ibid.: 80-1) or about "the strategies governments should use to implement the reforms" (Ibid.: 29 and 95). In the name of an efficient and modernized institution, what is sought is to privatize the public institutions, inhibiting creative work and autonomy and making the university function like a business, in which "the public space of discussion and the exercise of democracy will become ever more distant" (Silva, 1994, p. 26).
Quanto aos que têm compromisso com a universidade pública, é extremamente importante que se construa um projeto alternativo para essa universidade em sintonia com as demandas mais amplas da sociedade, não se limitando apenas a discutir o conteúdo das propostas neoliberais e conservadoras. Não basta, portanto, discutir a crise a partir de nós mesmos e das questões mais imediatas. Urge construir uma compreensão histórica de universidade, enquanto instituição que transcende pessoas e gerações, tendo-se presente que esta instituição aponta para o futuro e ultrapassa governos (Vieira, 1991, p. 16), pois sua missão é promover o avanço do saber, da descoberta e ser espaço de socialização do saber.
Assim sendo, será preciso não apenas reagir às críticas às universidades públicas, muitas delas provenientes daqueles que defendem um modelo neoliberal para o País, mas apresentar propostas para o cumprimento efetivo das funções básicas da universidade na sociedade, da qual ela é parte, em contraposição ao que tem sido proposto por alguns autores ou artifícies de medidas legais: "uma universidade de serviços". Em suma, é imprescindível recuperar na universidade pública, mais do que nunca, a autoridade resultante do conhecimento. Tal empenho cabe sobretudo a nós, que integramos e produzimos a universidade e não ao governo e a outros setores ligados ao poder instituído ou ao mundo empresarial. Enfim, urge reconstruir, com seriedade e competência o trabalho universitário.
Finalizando, cabe recordar: como lugar de pesquisa, de produção de conhecimento, a universidade é ao mesmo tempo, espaço de socialização do saber, na medida em que divulga e socializa o saber nela e por ela produzido. Vista sob essa ótica, a autonomia universitária não é um fim em si mesmo, mas condição necessária para a concretização dos fins da universidade. É uma exigência que se apóia
no próprio ser dessa instituição, não uma dádiva, mas uma utopia a ser conquistada.
Notes
1. On this subject, see also ANDES-SN, *A Diretoria discute com a sociedade brasileira questões relacionadas à autonomia e novas propostas de financiamento ao ensino superior*, 1999.
References
ANDES-SN. *A Diretoria discute com a sociedade questões relacionadas à autonomia e novas propostas de financiamento ao ensino superior*. 1999, mimeo.
BANCO MUNDIAL. *La Enseñanza Superior*. Las lecciones derivadas de la experiencia. Washington, D.C.: Banco Mundial, 1995.
Brasil. *Constituição da República Federativa do Brasil*, promulgada em 05 de outubro de 1988.
Brasil. MESP. Ministério da Educação e Saúde Pública. *Plano de Reorganização do Ministério da Educação e Saúde Pública*. Rio de Janeiro: Imprensa Nacional, 1935.
Brasil. Ministério da Educação. *Autonomia Universitária. Fundamentos para uma Lei que regula a autonomia das universidades federais, nos termos em que estabelece a Lei de Diretrizes e Bases da Educação Nacional, assim como dispõe sobre a possibilidade de ampliação da autonomia mediante contrato de desenvolvimento institucional*. Brasília, abril de 1999.
BARRACHO, José Alfredo de Oliveira. *Autonomia Universitária: questões constitucionais ilegais à autoaplicabilidade do art. 207* (Parecer ANDIFES). Belo Horizonte, 28 de setembro de 1996.
CHAUÍ, Marilena. Em torno da universidade de resultados e de serviços. *Revista USP*, São Paulo, n. 25, mar./maio, 1995.
CHAUÍ, Marilena. A Universidade operacional, In: *Folha de S. Paulo*, Caderno Mais, Domingo, 9 de maio de 1999, 3.
CUNHA, Antônio Geraldo de. *Dicionário Etimológico. Nova Fronteira da Língua Portuguesa*. Rio de Janeiro: Nova Fronteira, 1982.
CUNHA, Luiz Antônio. Autonomía universitaria: desafíos conceptuales y políticos, In: *Autonomía Universitaria: tensiones y esperanzas*, Washington: OEA, 1986, 61-73 (Serie Universidad, n. 1)
CURY, Carlos Roberto Jamil. A Questão da autonomia universitária. *Revista Universidade e Sociedade*, ano 1, n. 2, novembro 1991.
FÁVERO, Maria de Lourdes de A. Autonomia universitária: necessidade e desafios. Cadernos CEDES, n. 22, 1988, p. 7-16.
FÁVERO, Maria de Lourdes de A. Vinte e cinco anos de reforma universitária: um balanço. In: Morosini, M. C. (org.) Universidade no Mercosul. São Paulo: Cortez, 1994, p. 149-77.
FERREIRA, Aurélio Buarque de Holanda. Novo Dicionário da Lingua Portuguesa. 2. ed., Rio de Janeiro: Editora Nova Fronteira, 1986.
MOACYR, Primitivo. *A Instrução e a República*, v. 4. Rio de Janeiro: Imprensa Nacional, 1942, 71-88.
RANIERI, Nina. *Autonomia Universitária. As Universidades Públicas e a Constituição Federal de 1988*. São Paulo: EDUSP, 1994.
SGUSSARDI, Valdemar. Universidade, Fundação e Autoritarismo. O Caso da UFSCAR. São Carlos: EDUFSCAR, 1993.
SILVA, Tomaz Tadeu da. A "nova" direita e as transformações na pedagogia política e na política. In: GENTILI, P. A.A. e SILVA, T. T. da (orgs.) Neoliberalismo, Qualidade Total e Educação. Petrópolis: Vozes, 1994, p. 11-29.
VIEIRA, Sofia Lerche. A Universidade Federal em Tempos Sombrios. Revista Universidade e Sociedade, ano 1, n. 2, novembro 1991, p. 10-16.
About the Author
Maria de Lourdes de Albuquerque Fávero
Coordinator of PROEDES
Faculdade de Educação, UFRJ, and CNPq Researcher
Brazil
Copyright 1999 by the Education Policy Analysis Archives
The World Wide Web address for the Education Policy Analysis Archives is http://epaa.asu.edu
General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, email@example.com or reach him at College of Education, Arizona State University, Tempe, AZ 85287-0211. (602-965-9644). The Book Review Editor is Walter E. Shepherd: firstname.lastname@example.org. The Commentary Editor is Casey D. Cobb: email@example.com.
Education Policy Analysis Archives
Volume 7 Number 25 August 25, 1999 ISSN 1068-2341
A peer-reviewed scholarly electronic journal
Editor: Gene V Glass, College of Education
Arizona State University
Copyright 1999, the EDUCATION POLICY ANALYSIS ARCHIVES.
Permission is hereby granted to copy any article if EPAA is credited and copies are not sold.
Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.
The Quality of Researchers’ Searches of the ERIC Database
Scott Hertzberg
ERIC Clearinghouse on Assessment and Evaluation
University of Maryland
Lawrence Rudner
ERIC Clearinghouse on Assessment and Evaluation
University of Maryland
Abstract
During the last ten years, end-users of electronic databases have become progressively less dependent on librarians and other intermediaries. This is certainly the case with the Educational Resources Information Center (ERIC) Database, a resource once accessed by passing a paper query form to a librarian and now increasingly searched directly by end-users. This article empirically examines the search strategies currently being used by researchers and other groups. College professors and educational researchers appear to be doing a better job searching the database than other ERIC patrons. However, the study suggests that most end-users should be using much better search strategies.
A critical component of conducting almost any kind of research is to examine the literature for both related content and previously employed research methods. By reviewing the related literature, researchers are better able to formulate their research questions, build on past research, and design more effective studies. In the field of education, a usual first step in identifying related literature is to search the over 950,000 citations included in the Educational Resources Information Center (ERIC) database.
With its availability on the Internet and on CD-ROM, the ERIC database is now accessed by a wider, more diverse, and less specialized audience. In May 1999, the ERIC Clearinghouse on Assessment and Evaluation alone had over 3,500 users searching the ERIC database daily. This is quite a change from 10 years ago, when access to the ERIC database was typically restricted to trained reference librarians who had accounts with commercial information service organizations such as Dialog.
The question studied in this paper is the quality of the search strategies of today's end-users. We present effective strategies for searching the ERIC database, a brief summary of the literature on end-user searching, and empirical information on the quality of end-users' searches of the ERIC database installed at the ERIC Clearinghouse on Assessment and Evaluation web site.
Effective Strategies for Searching the ERIC Database
The Educational Resources Information Center is the largest source of educational information in the world. The most well-known and frequently used body of information produced by the ERIC system is the ERIC database, which contains close to one million citations and abstracts reflecting both published and "gray" literature (conference papers, contractor reports, etc.) gathered by the 16 ERIC subject-area clearinghouses. For over thirty years, the database has been a widely used and well-known research tool.
The ERIC database can be accessed through various media. Researchers may search the database via Dialog, the Internet, or CD-ROMs produced by several vendors. Although the database is still searchable by way of paper indexes, electronic formats are the concern here because they are largely responsible for the surge in end-user searching.
There are some good search practices that are applicable to all electronic versions of the database. One of the most important tactics is the use of Boolean operators (AND, OR, NOT) to refine queries. One-word and one-phrase searches are rarely sufficient. When using Boolean operators, avoid the common mistake of confusing the function of "AND" and "OR". The query *Portfolios AND Nongraded Evaluation* retrieves only documents containing both descriptors, while a search for *Portfolios OR Nongraded Evaluation* retrieves a set of documents that have either or both of the descriptors.
Another fundamental rule for successful searching is to use all relevant descriptors (ERIC indexing terms). Find all related and narrower terms that apply and link them into the search with the Boolean operator "OR". Using all relevant descriptors increases recall (i.e. comprehensiveness of retrieval) and often reveals useful citations not found when searching using only one or two descriptors. The ERIC database is a very well indexed database, but has not been constructed with perfect consistency over the past 30 years. Further, the terms preferred by any individual end-user may not be the same as the terms preferred by the ERIC indexers. For example, ERIC uses *Test Wiseness*, *Student Evaluation* and *Disadvantaged Youth*. The terms *Test Preparation*, *Student Assessment* and *Disadvantaged Students* are not ERIC descriptors. Failing to use the controlled vocabulary terms will result in a search that misses highly relevant documents.
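The Boolean logic above can be made concrete with a minimal sketch that treats each document as a set of assigned descriptors. Note that the document IDs and descriptor assignments below are invented for illustration; only the descriptor names come from the article.

```python
# Toy corpus: each document is a set of assigned ERIC descriptors.
# The ED numbers and assignments are invented for illustration.
docs = {
    "ED001": {"Portfolios", "Nongraded Evaluation"},
    "ED002": {"Portfolios", "Student Evaluation"},
    "ED003": {"Nongraded Evaluation"},
}

def search_and(term_a, term_b):
    """AND: only documents carrying BOTH descriptors match."""
    return {d for d, terms in docs.items() if term_a in terms and term_b in terms}

def search_or(term_a, term_b):
    """OR: documents carrying EITHER descriptor (or both) match."""
    return {d for d, terms in docs.items() if term_a in terms or term_b in terms}

print(sorted(search_and("Portfolios", "Nongraded Evaluation")))  # ['ED001']
print(sorted(search_or("Portfolios", "Nongraded Evaluation")))   # ['ED001', 'ED002', 'ED003']
```

The OR search returns a strict superset of the AND search, which is why OR-ing related descriptors increases recall while AND-ing narrows it.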
Because of these gaps between the database's controlled vocabulary and natural language, use of *The Thesaurus of ERIC Descriptors* (Houston, 1995) is essential to successful searching. The thesaurus, which has been published in print since the creation of the database, is now available on many CD-ROM versions of the database and, uniquely, at the website of the ERIC Clearinghouse on Assessment and Evaluation (ERIC/AE).
The thesaurus is incorporated in the Search ERIC Wizard, one of the user interfaces for the ERIC/AE's Internet version of the database (http://ericae.net/scripts/ewiz/annain2.asp). The ERIC Wizard interacts with users to indicate whether a search term is an actual ERIC descriptor. If a term entered by a user is not a descriptor, the Wizard suggests alternatives. When the correct descriptor is located, the
Wizard displays an array of related and narrower terms. The user may then choose from the first term or the related terms to construct a search of the database.
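The Wizard's descriptor check can be pictured with a toy lookup like the following sketch. The mini-thesaurus and synonym mappings here are invented for illustration (the real Wizard works against the full *Thesaurus of ERIC Descriptors*), though *Student Evaluation* and *Test Wiseness* are genuine descriptors mentioned above.

```python
# Invented mini-thesaurus: controlled descriptors with related terms.
thesaurus = {
    "Student Evaluation": ["Grading", "Informal Assessment"],
    "Test Wiseness": ["Test Coaching"],
}
# Natural-language terms an end-user might try, mapped to descriptors.
synonyms = {
    "Student Assessment": "Student Evaluation",
    "Test Preparation": "Test Wiseness",
}

def lookup(term):
    """Return (descriptor, related terms); if the term is not itself a
    descriptor, suggest the controlled-vocabulary alternative."""
    if term in thesaurus:
        return term, thesaurus[term]
    if term in synonyms:
        descriptor = synonyms[term]
        return descriptor, thesaurus[descriptor]
    return None, []

print(lookup("Student Assessment"))  # maps to 'Student Evaluation' and its related terms
```

A user who typed *Student Assessment* would thus be steered to *Student Evaluation* and shown its related terms to OR into the query.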
**Hints for Effective Searching**
- Use Boolean operators (AND, OR, NOT) to craft good queries.
- Expand the query by ORing appropriate narrower and related terms.
- Use the print or an electronic ERIC thesaurus to find useful descriptors.
- Use the Building Block or Pearl Building methods.
- Conduct multiple searches.
An added feature of the search engine installed on the ERIC/AE website is a **Find Similar** link. The **Find Similar** feature performs a popular search strategy known as *Pearl Building*. *Pearl Building* involves constructing new searches around descriptors found in the good results of preliminary searches. The **Find Similar** link for a particular citation will produce a new set of documents based on the first document's descriptors. This function often retrieves useful documents not found in the first search. You can choose the best documents from the second set of citations and continue to re-circulate the search until you no longer find any new, relevant hits. You may also edit the descriptors of a selected document to search only for the descriptors judged relevant to your needs.
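The re-circulating loop behind *Pearl Building* can be sketched as follows. This is a simplified model under stated assumptions: the corpus, document IDs, and descriptor assignments are invented, and every hit is treated as "good" (a real searcher would keep only the relevant ones each round).

```python
# Invented corpus: document ID -> set of assigned descriptors.
corpus = {
    "ED100": {"Portfolios", "Student Evaluation"},
    "ED101": {"Student Evaluation", "Grading"},
    "ED102": {"Grading", "Report Cards"},
    "ED103": {"Art Education"},
}

def search(descriptors):
    """OR search: any document sharing a descriptor is a hit."""
    return {d for d, terms in corpus.items() if terms & descriptors}

def pearl_build(seed):
    """Re-seed each round with the descriptors of the hits found so far,
    stopping when a round turns up no new documents."""
    found, frontier = set(), set(seed)
    while True:
        hits = search(frontier) - found
        if not hits:
            return found
        found |= hits
        frontier = set().union(*(corpus[d] for d in found))

print(sorted(pearl_build({"Portfolios"})))  # ['ED100', 'ED101', 'ED102']
```

Starting from the single descriptor *Portfolios*, the loop reaches ED101 via *Student Evaluation* and ED102 via *Grading*, while the unrelated ED103 is never pulled in.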
Another good technique for organizing a complex search, applicable to all search situations, is the *Building Blocks* method. On a piece of paper, write out the two or three most essential components of a given question. These are the building blocks of the search. Construct a search by linking the building blocks with what you believe are the correct Boolean operators. If the resultant search is not very successful, expand it by attaching related descriptors to one or more of the building blocks. Continue to add to the building blocks and, if necessary, rearrange the Boolean operators until you achieve satisfactory results. Inherent in this method is the necessity of conducting multiple queries for a given search.
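The *Building Blocks* method amounts to AND-ing together groups of OR-ed descriptors, then widening one group at a time. Here is a minimal sketch of that structure; the corpus and descriptor assignments are invented for illustration.

```python
# Invented corpus: document ID -> set of assigned descriptors.
corpus = {
    "ED200": {"Portfolios", "Elementary Education"},
    "ED201": {"Student Evaluation", "Elementary Education"},
    "ED202": {"Portfolios", "Higher Education"},
}

def search(blocks):
    """Each block is a set of OR-ed descriptors; blocks are AND-ed:
    a document matches only if every block contributes a term."""
    return {d for d, terms in corpus.items() if all(terms & b for b in blocks)}

blocks = [{"Portfolios"}, {"Elementary Education"}]
print(sorted(search(blocks)))  # ['ED200']

# Expanding one block with a related descriptor widens the result set:
blocks[0] |= {"Student Evaluation"}
print(sorted(search(blocks)))  # ['ED200', 'ED201']
```

Expanding a block with OR grows recall without abandoning the AND structure that keeps the search on topic.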
**Literature Review**
This section summarizes some of the literature on end-user searching, with particular attention to the quality of end-user results, the quality of search strategies, time spent on a search, use of thesauri, the frequency of multiple searches, and experience. Since this study is concerned with end-user searching of an electronic database through an Internet interface, both studies of users of on-line databases and studies of users of Internet search engines are relevant. Studies of the first type are quite numerous, as on-line databases have been widely used for over 20 years. Relevant literature on the search behavior of Internet users, on the other hand, is still rather scarce.
Quality of end-user results
There is a large body of literature claiming that most end-users obtain poor results when searching for themselves (Lancaster, Elzy, Zeter, Metzler and Yuen, 1994; Bates and Siegfried, 1993; Tolle and Hah, 1985; Teitelbaum and Sewell, 1986). Lancaster, Elzy, Zeter, Metzler and Yuen, for example, compared faculty and student searches of ERIC on CD-ROM to searches conducted by librarians. They noted that most of the end-users found only a third of the relevant articles found by the librarians.
There are several studies, however, where end-users are able to search online databases with good results. Sullivan, Borgman and Wippern (1990) compared the searching of 40 doctoral students given minimal training with searches done by 20 librarians. The 40 students were no less satisfied with their searches of ERIC and Inspec than with the results retrieved by the librarians and, in fact, found their own searches to be more precise. Similarly, the patent attorneys in Vollaro and Hawkins (1986) felt that intermediaries could have done a better job, but were largely satisfied with their own searches. Both studies observed that the end-users still had trouble searching databases. Sullivan, Borgman and Wippern noted that the end-users "made more errors, prepared less well than intermediaries and had less complete results."
There are a few explanations for why some end-users may search more successfully than others. Yang (1997) observed that certain concepts and metaphors used by novice users to construct searches were beneficial to searching. Marchionini, Dwiggins and Katz (1993) suggested that subject expertise helps end-users search more effectively.
Strategies
Several studies have concluded that end-users use poor searching techniques, marked by overly simple statements and limited use of Boolean operators or other commands (Bates and Siegfried, 1993; Tolle and Hah, 1985; Teitelbaum and Sewell, 1986). In their study of 27 humanities scholars, Bates and Siegfried (1993) observed that 63% of the searches contained only one or two terms and 25% included no Boolean operators at all.
Nims and Rich (1998) studied over 1,000 searches conducted on the Search Voyeur webpage hosted by Magellan. The Search Voyeur site allows users to spy on the searches of other users. The researchers found a profusion of poorly constructed searches. Searchers performed one-word searches when more complex queries linked with Boolean operators were necessary. Overall, a mere 13% of the searchers used Boolean operators. The study, which observed how the general public searches the entire World Wide Web, suggests that end-users may have more trouble searching Internet databases than older online databases. End-users of Internet databases may be less familiar with the search protocols and may have higher expectations of the technology's ability to make up for their poor searching techniques.
Time Spent Searching
Looking at the transaction logs of 11,067 search sessions on computers linked to Medline at the National Library of Medicine, Tolle and Hah (1985) found that end-users averaged significantly less time searching than librarians. Patrons in the study averaged 15 minutes of searching per session, while librarians in the control group averaged 20 to 25 minutes.
**Use of a Thesaurus**
In their study of 41 patent attorneys searching Inspec, Vollaro and Hawkins (1986) observed that the majority of the end-users did not utilize the database's thesaurus. Interviews revealed that most of the subjects did not feel familiar enough with the main functions of the database to effectively use the thesaurus (which they considered an advanced feature). The study suggests that end-users may be under-utilizing online thesauri, but the subject remains largely unexamined.
**Number of Queries**
Conducting multiple searches is often essential to successful searching. Yet studies suggest that only around half of all end-users perform more than one search per session (Spink, 1996; Huang, 1992). Spink conducted 100 interviews with academic end-users at Rutgers University and found that only 44% conducted multiple searches per session.
**Experience**
The most significant factor determining searching success appears to be experience using a database. In a recent study of law school students searching Quicklaw, Yuan (1997) showed that the search repertoires of students became more complex and effective over time. Tolle and Hah (1986) found a correlation between experience and the frequency of multiple searches. Only 8% of the experienced users in the study stopped searching after a failed search, while the rate of stopping was 11% for moderately experienced users and 20% for inexperienced users.
**Summary**
The quality of end-user searching appears to vary depending on the individual end-user. Some searchers are stronger than others because of skills they bring to searching or gain from using an online database over time. However, the literature suggests that most end-users could be doing better. Even the studies that recorded a high level of end-user satisfaction observed that end-users rely on overly simple searches, make frequent errors, and fail to attain comprehensive results.
**Method**
For two days in early November 1998, all patrons wanting to search the ERIC database installed at the ERIC/AE website were required to complete a 10-item background questionnaire. For each
patron, we then tracked a) the maximum number of OR's in their searches as a measure of search quality, b) the number of queries per session, c) whether they used the thesaurus or the free-text search engine, d) the number of hits examined, and e) the amount of time devoted to searching the ERIC database per session.
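Measure (a) above, the maximum number of OR's in any query of a session, can be computed from a session's query log roughly as follows. The log lines here are invented examples; the article does not describe the exact log format.

```python
# Hypothetical log of the queries issued in one search session.
session_queries = [
    "portfolios",
    "portfolios OR nongraded evaluation",
    "portfolios OR nongraded evaluation OR student evaluation",
]

def max_ors(queries):
    """Search-quality proxy: the most OR operators in any single query,
    counting whitespace-delimited OR tokens case-insensitively."""
    return max(q.upper().split().count("OR") for q in queries)

print(max_ors(session_queries), "ORs;", len(session_queries), "queries")  # 2 ORs; 3 queries
```

The number of queries per session, measure (b), is simply the length of the log.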
Data were collected on 4,086 user sessions. Because some browsers were not set to accept identifiers, we were not always able to relate background data to session information. Accordingly, our analysis is based on the 3,420 users with background and corresponding session information.
Participation in the study was entirely voluntary; patrons could go elsewhere to search the ERIC database. However, our questionnaire was short and our data collection was unobtrusive. Based on the prior week's log, we estimate our retention rate was over 90%.
Results
We asked our end-users, "What is the primary purpose of your search today?" As shown in Table 1, most patrons were searching in connection with preparing a research report.
Table 1
Purpose of searching the ERIC database
| Purpose                                      | N    | Percent |
|----------------------------------------------|------|---------|
| Research report preparation                  | 1825 | 53.4%   |
| Class assignment                             | 601  | 17.6%   |
| Professional interest                        | 554  | 16.2%   |
| Lesson planning                              | 177  | 5.2%    |
| Background for policy making                 | 175  | 5.1%    |
| Classroom management                         | 88   | 2.6%    |
| **TOTAL**                                    | **3420** | **100.0%** |
Some searching characteristics of the entire sample and of groups of individuals who identified themselves as college librarians, college professors, and researchers are presented in Table 2. College librarians are presumably the most trained and most experienced user group, while college professors and researchers are presumably the most diligent user group.
Most variables were fairly normally distributed. Accordingly, means and standard deviations (std dev) are presented in the table. The amount of time spent searching, however, was quite skewed. Central tendency and variability for time are represented by medians and
semi-interquartile ranges (sir).
Table 2
Searching Characteristics for Select User Groups
| | n | Quality (max ORs) Mean | Quality Std dev | N queries Mean | Hits examined Mean | Hits examined Std dev | Time (seconds) Median | Time sir |
|---------------------|------|------|------|------|------|-------|-----|-----|
| College Librarian   | 96   | .91  | 3.89 | 2.66 | 3.11 | 5.41  | 207 | 240 |
| Researcher          | 445  | .42  | 1.26 | 3.04 | 4.85 | 10.23 | 376 | 408 |
| College Professor   | 209  | .37  | 1.10 | 2.49 | 5.58 | 15.09 | 361 | 345 |
| All users           | 3420 | .44  | 1.77 | 2.75 | 3.65 | 8.65  | 352 | 351 |
A good search incorporates Boolean operators to capture appropriate terms. As a measure of search quality, we noted the maximum number of OR's used in any query during a patron's session. The data indicate that there is about one OR in every two search sessions. College librarians tend to conduct the most complicated searches and college professors conducted the simplest searches. To provide an additional perspective on these numbers, we computed the number of OR's used in the 84 pre-packaged search strategies at http://ericae.net/scripts/ewiz/expert.htm. These search strategies were developed by the top reference librarians across the entire ERIC system. The mean number of OR's used in these high quality, general purpose searches was 2.9 with a standard deviation of 2.8. Thus, the data show that online users tend to be conducting very simple searches that do not take account of subject matter nuances.
The typical user performs 2 to 3 queries per search session and there is little variability across groups. In contrast, the reference staff at the ERIC Clearinghouse on Assessment and Evaluation typically conduct 3 to 6 searches when responding to patron inquiries.
Not using the ERIC thesaurus to guide a search is equivalent to guessing which terms are used by the ERIC indexers. Using the thesaurus, one can employ the proper terms in a search. College librarians and college professors use the thesaurus much more often than most users. Yet less than half of the searches at the ERIC/AE site take advantage of this unique feature.
For any given topic in education, there is typically a large number of related papers and resources. To find all the resources that meet their specific purposes, users need to examine a large number of citations. College professors and researchers are much more diligent than other users in examining citations. Further, as noted by the variance, some professors and researchers are looking at a very large number of citations. Still, the average number of citations examined is quite small, typically about 5 or 6 hits for the most diligent groups. It appears that most patrons, especially those who are not trained researchers, are not looking beyond the first page of hits.
The study showed that the median amount of time spent searching the ERIC/AE site is about 6 minutes. College professors and researchers spend slightly more time than the typical user searching for information. College librarians spend considerably less time searching.
At a minimum, we would like to see at least one OR in the query, more than one query, and at least four hits examined. Only 153 (4.5%) of our examined 3420 users met these criteria.
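The minimal criteria above translate directly into a filter over session records. The session rows below are invented for illustration; only the criteria and the 153-of-3420 figure come from the article.

```python
# Hypothetical per-session records with the measures tracked in the study.
sessions = [
    {"max_ors": 2, "n_queries": 3, "hits_examined": 6},  # meets all criteria
    {"max_ors": 0, "n_queries": 1, "hits_examined": 2},  # fails all three
    {"max_ors": 1, "n_queries": 1, "hits_examined": 5},  # only one query
]

def meets_minimum(s):
    """At least one OR, more than one query, at least four hits examined."""
    return s["max_ors"] >= 1 and s["n_queries"] > 1 and s["hits_examined"] >= 4

passing = [s for s in sessions if meets_minimum(s)]
print(len(passing))                 # 1
print(round(153 / 3420 * 100, 1))   # 4.5, the article's passing rate in percent
```

Applied to the study's 3,420 sessions, this filter passes only 153 of them, i.e. the 4.5% reported above.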
**Discussion**
Our findings with regard to Internet searching of the ERIC database are consistent with the broader literature on end-user database searching. Some researchers may be doing a better job than most patrons. Nevertheless, most end-users are conducting few searches, crafting poor searches, not using the thesaurus, and are examining only a few potential hits. While there are times an end-user may want to quickly look up something, such as finding a reference, research report preparation usually involves finding a collection of several relevant, high quality studies. This work cannot be done quickly. Ninety-five percent of the searches we examined do not meet our minimal criteria. From our point of view, these results are very disappointing. Patrons are not using effective search strategies and cannot possibly find the best and most relevant articles in the database being searched.
We have reason to believe that most end-users are satisfied with any somewhat-relevant hit and are not looking for the best citations. After we added the Find Similar option to our search engine, we noted that few end-users were taking advantage of the feature. We posted a short survey for a few hours asking why. The vast majority of users (80%) told us they were able to find what they wanted on the first page of hits. The reality is that with the default search options, hits are presented in what is basically chronological order. The ranked relevance option does not necessarily present the best quality documents first. Users may be satisfied, but they are not finding the best.
We cannot place enough emphasis on the need to use the *Thesaurus of ERIC Descriptors* when constructing a search strategy. In addition to the need to include related and narrower terms, the philosophy behind the *ERIC Thesaurus* and its structure necessitate added diligence on the part of the searcher. The *ERIC Thesaurus* is designed to reflect the terms used in the professional and scholarly education literature. It is not a strictly hierarchical thesaurus with a rigid set of mutually-exclusive term arrays. Thus, the *ERIC Thesaurus* is populated with terms that partially overlap and its structure sometimes necessitates variable search strategy design. For example, to find the documents that address the evaluation of instructional methods or activities one should search "Course Evaluation" OR "Curriculum Evaluation". This is a problem with the social sciences in general as terms are less well defined, more fluid and less strictly hierarchical than in the physical sciences.
We occasionally hear frustration from the research community with regard to the ERIC database. The data imply that much of the
end-user frustration is due to poor end-user searches. This is not to say that the ERIC database is without faults. The ERIC system has basically been level-funded for the past 20 years, and there has been no system-wide examination of ERIC's acquisition and processing efforts in that time. As a result, there are gaps in ERIC coverage. At our own clearinghouse, we have noted that the 39 journals we process for inclusion in the ERIC database produce some 1,100 articles a year. Yet, due to our budget, we have usually been limited to entering 700 articles per year. We process few international journals and are slow to add new journals, regardless of their quality or prominence.
We believe there has also been a steady decline in the "gray" literature portion of the ERIC database. Of the approximately 5,500 papers presented at the annual meetings of the American Educational Research Association, for example, only about 1,200 are entered into the ERIC database. Many authors do not have prepared papers, and many who have papers do not respond to solicitation requests. Authors should view ERIC as a reproduction service: we make copies of papers available to others. Inclusion in the ERIC database means only that a paper has met some minimal acceptability criteria; it is not equivalent to peer-reviewed publishing, and it should not preclude an author from submitting the paper to a refereed journal. Accordingly, we see no reason an author should not submit a paper to ERIC. In fact, submitting high-quality papers can result in more people seeing the research and, in turn, more authors contributing their own papers. Thus, we believe many authors are not assuming their share of the responsibility for building the ERIC resource.
While ERIC database content has its limitations, we believe the lack of end-user search skills is the major impediment to locating the best and most relevant resources. Poorly formed searches and poor search strategies cannot possibly find the best citations. We are encouraged by the conclusions of Sullivan, Borgman and Wippern (1990). With minimal training and a bit of diligence, end-users *can* attain satisfactory results. It is our hope that readers of this article will follow the suggestions outlined at the beginning of this paper and, concomitantly, increase their chances of finding the best and most relevant documents in the ERIC database.
**Note**
We wish to thank Dagobert Soergel, Jim Houston and Ted Brandhorst for their useful suggestions on an earlier version of this paper.
**References**
Bates, M. J., Siegfried, S. L., and Wilde, D. N. (1993). An Analysis of Search Terminology Used by Humanities Scholars: The Getty Online Searching Project Number 1. *Library Quarterly*, 63(1), 1-39.
Ching, Y. S. (1997). Qualitative Exploration of Learners' Information Seeking Processes Using the Perseus Hypermedia System. *Journal of the American Society for Information Science*, 48(7), 667-669.
Houston, J. (1995). *The Thesaurus of ERIC Descriptors* (13th ed.). Phoenix, AZ: Oryx Press.
Huang, M. H. (1992). Pausing Behavior of End Users in Online Search. Unpublished doctoral dissertation, University of Maryland.
Lancaster, F. W., Elzy, C., Zeter, M. J., Metzler, L., and Yuen, M. L. (1994). Comparison of the Results of End User Searching with Results of Two Modes of Searching by Skilled Intermediaries. *RQ*, 33(3), 370-387.
Marchionini, G., Dwiggins, S., and Katz, A. (1993). Information Seeking in a Full-Text End-User-Oriented Search System: The Roles of Domain and Search Expertise. *Library and Information Science Research*, 15 (Winter), 35-69.
Nims, M. and Rich, L. (1998, March). How Successfully Do Users Search the Web? *College and Research Libraries News*, 155-158.
Spink, A. (1996). Multiple Search Sessions: A Model of End-User Behavior. *Journal of the American Society for Information Science*, 47(3), 603-609.
Sullivan, M. V., Borgman, C. L., and Wippern, D. (1990). End-Users, Mediated Searches, and Front-End Assistance Programs on Dialog: A Comparison of Learning, Performance, and Satisfaction. *Journal of the American Society for Information Science*, 41(1), 27-42.
Tolle, J. E. and Hah, S. (1985). Online Search Patterns. *Journal of the American Society for Information Science*, 36(3), 82-93.
Teitelbaum, S. and Sewell, W. (1986). Observations of End-User Online Searching Behavior Over Eleven Years. *Journal of the American Society for Information Science*, 37(7), 234-245.
Vollaro, A. J. and Hawkins, D. T. (1986). End-User Searching in a Large Library Network. *Online*, 10(7), 67-72.
Yuan, W. (1997). End-User Searching Behavior in Information Retrieval: A Longitudinal Study. *Journal of the American Society for Information Science*, 48(3), 218-234.
**About the Authors**
Scott Hertzberg is a Research Assistant at the ERIC Clearinghouse on Assessment and Evaluation, College of Library and Information Services, 1129 Shriver Laboratory, University of Maryland, College Park, Maryland, 20742. He specializes in social science information services.
Lawrence Rudner is the Director of ERIC Clearinghouse on Assessment and Evaluation, College of Library and Information Services, 1129 Shriver Laboratory, University of Maryland, College Park, Maryland, 20742. He specializes in assessment and information services. He can be reached at email@example.com.
Copyright 1999 by the Education Policy Analysis Archives
The World Wide Web address for the Education Policy Analysis Archives is http://epaa.asu.edu
General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, firstname.lastname@example.org or reach him at College of Education, Arizona State University, Tempe, AZ 85287-0211. (602-965-9644). The Book Review Editor is Walter E. Shepherd: email@example.com. The Commentary Editor is Casey D. Cobb: firstname.lastname@example.org.
EPAA Editorial Board
| Name | Institution/Position |
|-----------------------------|-----------------------------------------------------------|
| Michael W. Apple | University of Wisconsin |
| John Covaleskie | Northern Michigan University |
| Alan Davis | University of Colorado, Denver |
| Mark E. Fetler | California Commission on Teacher Credentialing |
| Thomas F. Green | Syracuse University |
| Arlen Gullickson | Western Michigan University |
| Aimee Howley               | Ohio University                                           |
| William Hunter | University of Calgary |
| Daniel Kallós | Umeå University |
| Thomas Mauhs-Pugh          | Green Mountain College                                    |
| William McInerney | Purdue University |
| Les McLean | University of Toronto |
| Anne L. Pemberton | email@example.com |
| Richard C. Richardson | New York University |
| Dennis Sayers | Ann Leavenworth Center for Accelerated Learning |
| Michael Scriven | firstname.lastname@example.org |
| Robert Stonehill | U.S. Department of Education |
| David D. Williams | Brigham Young University |
| Greg Camilli | Rutgers University |
| Andrew Coulson | email@example.com |
| Sherman Dorn | University of South Florida |
| Richard Garlikov | firstname.lastname@example.org |
| Alison I. Griffith | York University |
| Ernest R. House | University of Colorado |
| Craig B. Howley | Appalachia Educational Laboratory |
| Richard M. Jaeger | University of North Carolina—Greensboro |
| Benjamin Levin | University of Manitoba |
| Dewayne Matthews | Western Interstate Commission for Higher Education |
| Mary McKeown-Moak | MGT of America (Austin, TX) |
| Susan Bobbitt Nolen | University of Washington |
| Hugh G. Petrie | SUNY Buffalo |
| Anthony G. Rud Jr. | Purdue University |
| Jay D. Scribner | University of Texas at Austin |
| Robert E. Stake | University of Illinois—UC |
| Robert T. Stout | Arizona State University |
EPAA Spanish Language Editorial Board
Associate Editor for Spanish Language
Roberto Rodríguez Gómez
Universidad Nacional Autónoma de México
email@example.com
Adrián Acosta (México)
Universidad de Guadalajara
firstname.lastname@example.org
Teresa Bracho (México)
Centro de Investigación y Docencia Económica-CIDE
bracho@dis1.cide.mx
Ursula Casanova (U.S.A.)
Arizona State University
email@example.com
Erwin Epstein (U.S.A.)
Loyola University of Chicago
firstname.lastname@example.org
Rollin Kent (México)
Departamento de Investigación Educativa- DIE/CINVESTAV
email@example.com
firstname.lastname@example.org
Javier Mendoza Rojas (México)
Universidad Nacional Autónoma de México
email@example.com
Humberto Muñoz García (México)
Universidad Nacional Autónoma de México
firstname.lastname@example.org
Daniel Schugurensky (Argentina-Canadá)
OISE/UT, Canada
email@example.com
Jurjo Torres Santomé (Spain)
Universidad de A Coruña
firstname.lastname@example.org
J. Félix Angulo Rasco (Spain)
Universidad de Cádiz
email@example.com
Alejandro Canales (México)
Universidad Nacional Autónoma de México
firstname.lastname@example.org
José Contreras Domingo
Universitat de Barcelona
email@example.com
Josué González (U.S.A.)
Arizona State University
firstname.lastname@example.org
María Beatriz Luce (Brazil)
Universidad Federal de Rio Grande do Sul- UFRGS
email@example.com
Marcela Mollis (Argentina)
Universidad de Buenos Aires
firstname.lastname@example.org
Angel Ignacio Pérez Gómez (Spain)
Universidad de Málaga
email@example.com
Simon Schwartzman (Brazil)
Fundação Instituto Brasileiro de Geografia e Estatística
firstname.lastname@example.org
Carlos Alberto Torres (U.S.A.)
University of California, Los Angeles
email@example.com
Education Policy Analysis Archives
Volume 7 Number 26 August 29, 1999 ISSN 1068-2341
A peer-reviewed scholarly electronic journal
Editor: Gene V Glass, College of Education
Arizona State University
Copyright 1999, the EDUCATION POLICY ANALYSIS ARCHIVES.
Permission is hereby granted to copy any article if EPAA is credited and copies are not sold.
Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.
Solving the Policy Implementation Problem:
The Case of Arizona Charter Schools
Gregg A. Garn
University of Oklahoma
Abstract
When Republican legislators in Arizona failed to approve educational vouchers in four consecutive legislative sessions, a charter school program was approved as a compromise. The charter school policy was written during a special summer session and within three years, over 30,000 students were enrolled in 260 charter schools across the state. Republican policy makers, who failed to enact voucher legislation, proclaimed the charter school program to be an overwhelming success and protected it from amendments by Democrats and potential actions of bureaucrats that could have altered the policy intent.
Research on the implementation of policy indicates that state and local implementors frequently undermine or alter legislative intentions. However, when Arizona policy makers approved the charter school policy, they overcame this persistent implementation phenomenon and, in fact, succeeded in preserving the legislative intentions in the working program. This policy study analyzes how they were able to achieve this elusive result. Key policy makers attended to four significant features of policy implementation in creating the charter school policy: communication, financial resources, implementor attitudes, and bureaucratic structure. Manipulating these key variables allowed policy makers to reduce implementation slippage.
The Implementation Problem
Contrary to the desires of federal, state, and local policy makers, policies are not self-executing. After policy enactors develop legislation, various stages precede a working program. Simply because legislators express explicit intentions in policy does not guarantee those aims will be preserved through the implementation process. Frequently, implementors misconstrue or disagree with the conceived purpose and undermine legislative intent.
Beginning in the 1970s with the work of Pressman and Wildavsky (1973), studies on the implementation of government policy over the following 16 years illustrated the problem of convincing local implementors to adhere to the spirit of government mandates. This implementation problem has been repeatedly identified in studies of agricultural, economic, energy, environmental, labor, penal, public health, urban planning, technology, and welfare policies at the state and federal levels. Baum (1981; 1984) and Clune (1984) identified similar frustrations in the implementation of judicial policy.
In the late 1970s, research on federal and state educational policy also identified the implementation problem (Barro, 1978; Berman & McLaughlin, 1978; Weatherley & Lipsky, 1977). Over the following two decades, educational researchers have continued to highlight the implementation problem in their work (Elmore & McLaughlin, 1981; Hall, 1995; Hall & McGinty, 1997). In a comprehensive review of the literature, Odden (1991) concluded:
In short, early implementation research findings coupled with somewhat later findings on the local educational change process concluded that local response was inherently at odds with state (or federal) program initiative. If higher levels of governments took policy initiatives, it was unlikely local educators would implement those policies in compliance with either the spirit, expectations, rules, regulations or program components. (p. 2) (Note 1)
Social scientists from various disciplines studying an array of social programs acknowledge that policies emanating from higher levels of government are inherently problematic. McLaughlin (1998) identified local capacity and will as two paramount variables that affect the outcomes of the implementation process.
The local expertise, organizational routines, and resources available to support planned change efforts generate fundamental differences in the ability of practitioners to plan, execute, or sustain an innovative effort. The presence of will or motivation to embrace policy objectives or strategies is essential to generate the effort and energy necessary to a successful project. (p.72) (Note 2)
Despite the preponderance of research indicating slippage during the implementation of social policy, legislators are not completely impotent after enacting legislation. McDonnell and Elmore (1987) identified four discrete methods policy makers can use to increase the likelihood that policy intentions are preserved in working programs: "They can set rules, they can conditionally transfer money, they can invest in future capacity, and they can grant or withdraw authority to individuals and agencies" (p. 140). Baum (1984) described two additional sources of power policy enactors hold over policy implementors: they can investigate and publicize. "These powers allow legislators to embarrass an agency and its officials" (p. 41).
What is consistent over three decades of research in the policy implementation literature of social policy is that armed with these "policy instruments," more often than not, policy enactors fail to manipulate the actions of policy implementors. Current research on the implementation of education policy is sparse and further exploration of this area is necessary. Accordingly, this study sought to clarify the nexus between policy development and program enactment by
focusing on the implementation process. The actions of policy makers and the contextual environment surrounding the implementation of the Arizona Charter School policy were analyzed using a case study methodology. The purpose of this research was to investigate three interrelated research questions concerning the design and implementation of Arizona's charter school legislation.
1. How did policy makers articulate the intent of the charter school policy?
2. After three years of a working charter school program, were they satisfied with the results?
3. How were state policy makers able to preserve their original intentions through the implementation process?
**Methodology**
A descriptive and exploratory case study approach was utilized for this policy study because "how" or "why" questions were posed, I had little control over the events, and the focus was on a contemporary phenomenon (Yin, 1994, p. 1). The study was completed using data from the analysis of documents, observations of key actors, and focused interviews with policy makers and policy implementors.
Relevant documentary information from a variety of sources, including articles from Arizona newspapers, minutes from the Committee on Education meetings of the state legislature, and relevant charter school statutes, was analyzed. Data from documents were used to verify and strengthen data from other sources (Stake, 1995; Yin, 1994). Key actors were observed in various contexts, including Committee on Education meetings in the Arizona State Senate and the Arizona House of Representatives during the 1998 legislative session. Also, observations of the meetings of the State Board of Education and the State Board for Charter Schools were completed from 1995-1998. (Note 3) My role in the field was toward the observer side of the participant observer continuum (Gold, 1969).
The third significant source of data came from focused interviews. A semi-structured interview protocol was employed with 24 key actors from the following four groups:
1. Legislative insiders
2. Administrative staff and board members from the two state level charter school sponsoring agencies (the State Board of Education and the State Board for Charter Schools)
3. Administrative staff members from the Arizona Department of Education, including the Superintendent of Public Instruction
4. Administrative staff members from the Office of the Auditor General
Interviews were taped and transcribed. All participants were given a chance to comment on the content of the interview transcripts and all 24 granted permission for the quotations used in this study. In
some cases participants insisted on receiving credit for their comments, while others preferred to remain anonymous.
Data collection and analysis occurred simultaneously through a process of reduction, display, and verification (Miles & Huberman, 1994). When data from one source was collected it was coded and compared with data collected from the same source at another time, as well as data collected from alternative sources. As this process continued, patterns emerged. These patterns often became themes that were refined and challenged against data from competing sources. Eventually distinct categories developed, and conclusions emerged.
This qualitative case study provides a rich account of the Arizona policy-making context; however, generalization is limited, and this single case provides little insight into national trends. A multi-state comparison would be useful in such a pursuit. Moreover, most of the data collected were based on the perceptions of policy makers and implementors. The recent nature of the reform, combined with the minimal reporting requirements for charter schools, resulted in a meager amount of quantitative data.
Although this research is focused on the ways that Arizona legislators attempted to insure their intent was carried out, the author takes no position on the question of whether this goal and these objectives are desirable in themselves. Others have pointed out the value of "loose coupling" (Weick, 1982), "street-level bureaucracy" (Lipsky, 1980), and other ways in which legislative or regulatory intent is modified or, in extreme situations, even subverted, for the good of all. Finally, this study does not address whether or not this particular reform, charter schools, produces meaningful changes in classroom practice (see Bomotti, Ginsberg & Cobb, 1999 and Knapp, 1997 for further discussion of this type of research). Unarguably an important question, it was beyond the scope of this research.
**Legislative Intent**
Across the United States many organizations, including Republicans, Democrats, teachers unions, business organizations, and parent groups have climbed on the charter school bandwagon. Many of the groups promote disparate ideologies, but every organization has specific motivations for its support. (Note 4)
In the US, the charter school concept has been driven by three distinct ideologies. Consequently, policy makers define the problem to be solved by charter schools differently in various states. Some state legislators argue that the current bureaucratic system of public education has stifled educational improvement and innovation in the United States. Charter schools in these states typically are granted a blanket waiver from most rules and regulations. Other state policy makers believe that market mechanisms will improve the public school system. In these states charter schools must compete for and maintain their student population. Finally, a few state legislators maintain that teacher professionalism must be increased before any real improvements in public education will occur. In these states teachers
have the power to make and implement decisions that affect learning in the classroom (Garn, 1998, p. 50).
This research first examined how key legislative insiders in Arizona defined the problem and articulated the intent of their charter school policy. Determining intent does not easily lend itself to precise measurement. However, the triangulation of various data sources confirmed the purpose of the policy.
The former Chair of the House Education Committee, Lisa Graham-Keegan, defined the "problem" that the charter school policy was intended to solve in a 1994 article that appeared in the *Arizona Republic*: "I hope this reform will begin to demonstrate that you don't need all of the bureaucratic overlay we now have in public schools. What they [charter schools] are getting is freedom from regulations in return for greater [market] accountability" (Mattern, 1994, p. A1). This was corroborated in the interview data. A leading legislator in the Arizona Senate reflected on the original intentions.
The bureaucratic administration and the monopoly that public schools used to have are now being eroded by charter schools. Charter schools have to compete in a market for students. So, if they for whatever reason can't attract children to go to that school, they are not going to have a school. And that's the whole key to charter schools; that's what disciplines them and that is their accountability mechanism. (Senate Education Committee Member Tom Patterson, March 16, 1998)
Furthermore, even legislators from the minority party, who were ideologically opposed to market accountability, recognized the aims of the charter school reform.
Well, for the rest of the world, the non-charter public schools, there is this perception, and also laws, which say, 'If I am going to give you the money out of the purse, then you have to give me accountability back.' So, what happened with these [charter schools] was that by using the definition of the 'innovativeness' of charter schools, we can just give them the money and part of the 'innovativeness' is not bothering them about the details of how the money is being spent. So, I guess, I mean, to me there is no [bureaucratic] accountability. (House Education Committee Member Kathy Foster, March 24, 1998)
Well, right now, currently, there's an atmosphere in the state that the 'buyer beware,' 'let the market forces drive them,' 'people are voting with their feet,' any number of clichés. As far as voting with their feet or the rhetoric you hear that charter schools are more accountable because there is an actual contract they have to adhere to. Well, the oversight of this contract is lame at best. [The Department of Education] and even the charter school boards themselves, and local districts that have all chartered, there has been very little monitoring of activities and adhering to their charter. (Senate Education Committee member Mary Hartley, March 23, 1998)
Arizona legislators created a charter school policy that was intended to address two intertwined problems. First, they wanted to reduce the bureaucracy with which public schools must contend. Second, they wanted to inject market mechanisms into the public school system.

**Satisfaction with the Results**

Building on the first research question, the second goal of the study was to determine whether Arizona's policy makers were satisfied with the results of the working charter school program. Data from interviews with key actors indicated that they were pleased with the effect of the legislation.
Well, we hoped that it [charter school policy] would have a large impact and I think it is more successful than we anticipated it would be in the time span. Arizona is probably one of the leading states in the number of charters that have been granted and we have a few failures, but we expected that. (Chair of the House Education Committee, Dan Schottle, March 25, 1998)
It's [the charter reform] been one of those things that I think we had a pretty clear idea of what kind of principles we wanted it based on, and particularly what kind of accountability we wanted for charter schools.... And we were astonishingly successful, but I don't think we realized, or I certainly did not realize all the implications of that at the time and what a large and profound public policy movement this would be. (Senate Education Committee Member Tom Patterson, March 16, 1998)
I am not a plan-ahead person, and I don't know what will happen in the future, and I certainly did not know with the charter school legislation when I was working on it. That is just not the way I work. However, I did know there were some good principles in that legislation and then it took off.... Charter schools have just opened up one more venue for school choice. They vastly surpassed the number of schools that I or anyone else anticipated.... Yes, I am happy with the program, and, yes I think it is working like I wanted it to. (Personal communication, Superintendent of Public Instruction, Lisa Graham-Keegan, April 21, 1998)
The interview data were confirmed by data from documentary sources. All proposed charter school legislation from 1995-1998 was coded into three categories: bills that reinforced the intent, measures that subverted the intent, and acts not related to the intent. (Note 5) Moreover, the proposals were grouped by party preference. Assuming that proponents would protect the program from bills that would alter the policy intent, the documentary record was clear. Although many amendments (proposed by Democrats) would have subverted the legislative intent, very few of those made it out of the House or Senate Education Committees, and even fewer were written into law. Bills proposed by Republicans, by contrast, reinforced the intent and were more likely to be written into law.
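The coding procedure can be pictured as a simple cross-tabulation of proposed bills by sponsoring party and effect on legislative intent. The minimal Python sketch below uses invented placeholder records, not the bills actually analyzed in the study:

```python
from collections import Counter

# Hypothetical coding records (party, effect on legislative intent);
# these tuples are invented for illustration and are not the study's data.
coded_bills = [
    ("Republican", "reinforced intent"),
    ("Republican", "reinforced intent"),
    ("Republican", "not related"),
    ("Democrat", "subverted intent"),
    ("Democrat", "subverted intent"),
    ("Democrat", "not related"),
]

# Cross-tabulate: how many bills of each type did each party propose?
tally = Counter(coded_bills)
for (party, effect), count in sorted(tally.items()):
    print(f"{party:<12}{effect:<20}{count}")
```

With real data, the same tabulation (extended with a flag for whether each bill passed committee or became law) would make the pattern described above directly visible in the counts.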
Legislators involved in passing the charter law in 1994, who remained in office through 1998, explicitly understood their role in protecting the principles expressed in the statute. Senator John Huppenthal, Chair of the Senate Education Committee, stated that "They [charter schools] are still getting sucked back into the bureaucracy. I've been able to defeat any legislation that would harm the charter schools" (March 23, 1998). The stability of the political support structure from 1994 to 1998 contributed to the preservation of intentions. Champions of the charter school policy remained in powerful positions and were able to protect the program from amendments that could potentially subvert the aims of the policy. Senator John Huppenthal served on the Education Committee from 1993 through 1998 and chaired the committee from 1995 through 1998. The Chair of the House Education Committee, Lisa Graham-Keegan, resigned from the House of Representatives and soon after was elected Superintendent of Public Instruction. Representative Dan Schottle assumed leadership of the House Education Committee in 1995 and maintained a strong defense of charter schools. Senator Tom Patterson was the first to introduce the idea of charter school reform to the Arizona Legislature. Formerly the Majority Leader, his support and defense of the charter policy was invaluable to the preservation of intentions.
In sum, the legislators who enacted the statute remained in powerful positions. These champions were pleased with the working program and worked diligently to protect it. With regard to the first two research questions, the data were clear. Policy makers wanted to limit the bureaucratic requirements for charter schools and replace them with market accountability mechanisms. Moreover, after four years of charter school operation in the state, they were satisfied that the policy had achieved those objectives. The final step in this policy study was to address the third and larger research question: How were Arizona policy makers able to preserve the original legislative intent through the implementation phase when so many mandates are subverted?
**Avoiding Implementation Slippage**
To address the final research question required a framework that could isolate the linkages between the national and state political levels, state political and state bureaucratic levels, and state bureaucratic and charter school levels. Hall and McGinty's (1997) mesodomain framework was useful in clarifying how "the realization of intentions is shown as both constrained and enabled by (1) organizational context and conventions, (2) linkages between multiple sites and phases of the policy process, (3) the mobilization of
resources, and (4) a dynamic and multifaceted conceptualization of power" (p. 439).
The National Level
At the national level, George H. W. Bush pushed hard for systemic reform of the district public school system and was the first American president to endorse charter schools. Charter school legislation was first approved in Minnesota during the 1991 legislative session. Since that time, the charter school reform has evolved into a national movement as 34 states, the District of Columbia, and Puerto Rico have approved this policy. Bush's successor, William J. Clinton, recognized this national education reform trend and called for the development of 3,000 charter schools by 2001 (Clinton, 1997). Accordingly, federal funds for charter school research were first approved in 1994 through amendments to the Elementary and Secondary Education Act. Federal stimulus funds to charter school operators (to defray start-up costs) increased from $6 million in 1995 to $100 million in 1998 (Wohlstetter & Griffin, 1997).
Information about charter schools spread nationally through various channels, but two organizations took the lead. The first issue network was the Center for School Change at the Hubert H. Humphrey Institute of Public Affairs at the University of Minnesota (Nathan, 1996). The Pioneer Institute, a conservative think tank located in Massachusetts, was the second organization to take an early lead in publicizing this reform (Wohlstetter, Wenning, & Briggs, 1995).
National to State Political Linkage
Ted Kolderie, a Senior Policy Analyst at the University of Minnesota's Hubert H. Humphrey Institute, visited Arizona in 1993 to explain the charter school concept. Kolderie, who was influential in lobbying the Minnesota Legislature on the merits of charter schools, emphasized the professionalism for teachers embodied in the reform. In Arizona, his vision of charter schools was rejected. Providing teachers with more autonomy was not a problem that Arizona's leading legislators wanted to fix. In addition to disagreeing with the core ideology (as described by Kolderie), in 1993 notable policy makers in Arizona were pondering more radical educational change--school vouchers. Vouchers for all children were a top education reform priority for influential Republican members of the legislature and Arizona's Republican Governor, Fife Symington. During the 1993 legislative session, Symington stated that he would defeat any education reform that did not contain a voucher program.
Conversely, Arizona Democrats, the minority party, were fundamentally against the concept of a voucher program. They were able to unite and, with a few moderate Republicans, mustered enough support to defeat voucher proposals in the 1991, 1992, and 1993 legislative sessions. By 1994, the calls for educational reform were incessant. The public and the media were increasingly demanding that legislators "do something." As Arizona's 1994 legislative session
ended, again without voucher legislation, the pressure intensified.
In the early 1990s, staff members at the Goldwater Institute, a conservative think tank located in Phoenix, developed several alternative voucher proposals, ranging from limited to full participation. When voucher legislation was defeated in four successive sessions, Goldwater staff members promoted charter schools as a viable policy option. Goldwater officials closely monitored the school choice issue networks and were aware of the Pioneer Institute's work on charter schools. They saw the potential in the concept, but rather than focusing on the teacher autonomy, the Goldwater Institute's proposal emphasized radically decreasing bureaucratic oversight and forcing charter schools to compete for students. Behind closed doors in a Republican caucus, the Goldwater Institute's plan was modified without input from Democratic legislators. Authored by House Education Chair Lisa Graham-Keegan and championed by leading legislators, it quickly passed through the special session and was enacted into law on September 15, 1994.
Unlike in many other states, in Arizona charter school legislation was approved as a compromise in place of vouchers. Although both Democratic and Republican legislators voted for the bill, it is too simplistic to argue that there was bipartisan support. Democrats were against any plan that would divert funding from the district public schools. However, they were worried they would not have the votes to defeat another voucher bill. Conversely, Republicans were displeased they had failed at their original voucher intentions, but were anxious to pass an education reform that increased parental and student choice while decreasing bureaucratic oversight.
State Political to State Bureaucratic Linkage
The state political to state bureaucratic linkage was critical to the preservation of policy makers' intentions. Although policy makers had clearly articulated intentions for the charter school plan, this did not guarantee that state level bureaucrats would promote those interests during implementation. There has been a history of discord between the state Department of Education and state legislators; the latter feeling that bureaucrats too frequently misinterpreted the aims of the policy and the former feeling they were constantly being asked to do too much with too little. Due to the institutional distrust, policy makers took two explicit steps to ensure that state level bureaucrats did not undermine their intentions. First, the legislature minimized the authority of the Department of Education to regulate charter schools. McDonnell and Elmore (1987) stated that "Selecting or creating an implementation agency is often as important a choice for policymakers as transferring money or specifying rules" (p.138). The legislation granted two state sponsoring boards (the State Board of Education and the newly created State Board for Charter Schools) general sponsorship and oversight responsibilities for charter schools. This shifted the authority away from the Arizona Department of Education to regulate public charter schools.
To reinforce this shift in authority, legislators included a statute that provided the governor with the power to appoint members to the
State Board for Charter Schools. Although Governor Symington was originally opposed to charter legislation, it was not because he opposed increasing school choice. Rather he wanted additional choices (including private and religious schools) and supported education vouchers. However, he quickly reversed course and championed the charter school reform when he realized vouchers were not a viable policy option. As a strong proponent of school choice, Symington appointed seven individuals to the State Board for Charter Schools who supported the legislative intent. Board members understood that they first needed to approve as many applications as allowed under the law, and that second, they would play a "hands off" role in oversight (Garn & Stout, in press). The members of the State Board of Education, while supportive, were so to a lesser extent because of a slightly more diverse board makeup. (Note 6)
In addition to transferring much of the authority for charter schools to the State Sponsoring Boards, legislators used a second policy instrument to ensure state bureaucrats would not interfere with the spirit of the legislation. They passed the charter school reform as an unfunded mandate for state level administrative staff. The Arizona Department of Education, the Office of the Auditor General, the State Board of Education and the State Board for Charter Schools received no additional funding for charter school staff. This proved to be an effective policy instrument in limiting the influence of bureaucratic agencies. The Arizona Department of Education [ADE] and the Office of the Auditor General illustrate this point.
The Arizona Department of Education could easily justify an oversight role for charter schools. The legislative statute creating this agency speaks of a responsibility for all public schools. However, without additional funds to hire charter school support staff, ADE's role was effectively limited. Moreover, the charter statute asked little of the Department beyond providing general support to the sponsoring boards on an as-needed basis. Without clearly articulated statutory demands or funding to hire charter school support staff, ADE was in no position to institute any meaningful oversight of charter schools.
The Office of the Auditor General faced the same dilemma as ADE: it had statutory responsibilities, but received no additional funding to carry out those duties. The Office of the Auditor General was created to ensure that public entities were using taxpayer dollars appropriately. Arizona Revised Statute §41-1279.03 requires this office: "to be an independent source of impartial information concerning state and local governmental entities and to provide specific recommendations to improve the operations of those entities" (http://www.azleg.state.az.us/ars/41/1279). Accordingly, this agency had responsibilities for conducting and reviewing financial audits of public schools. Because charter schools are publicly funded, they came under the purview of this agency. Similar to the Arizona Department of Education, the Office of the Auditor General was not allocated additional funding to meet this charge.
One fundamental objective of the charter reform was to make sure that charter schools were not caught up in the same bureaucratic
rules and regulations as the district public schools. Transferring authority to specially appointed bureaucratic agencies and limiting funds to government agencies for administrative staff effectively achieved that goal.
An equally important contextual factor ensured the original intentions embodied in Arizona's charter school policy were intact during the state political to state bureaucratic linkage. Lisa Graham-Keegan was the author of the charter legislation as Chair of the House Education Committee. The charter school legislation took effect in September 1994, and she was elected to the position of State Superintendent of Public Instruction in November 1994.
Wohlstetter (1991) argued that "success of educational reforms was tied directly to the political agendas and self interests of their legislative sponsors or champions" (p.289). Other legislators clearly recognized Graham-Keegan's self interest in the charter school policy. "The Superintendent of Public Instruction is a strong proponent of charter schools. As a matter of fact, I would say sometimes to the disadvantage of the non-charter schools" (House Education Committee Member Kathy Foster, March 24, 1998).
The Superintendent of Public Instruction had a place on both the State Board of Education and the State Board for Charter Schools. Observations of both boards recorded over three years verified that Graham-Keegan used her position as expert on these layperson-dominated boards to ensure that the legislative intent was preserved. (Note 7)
Moreover, in her capacity as CEO of the Department of Education, Graham-Keegan was able to make sure that her staff did not misconstrue the aims of the policy. Although the legislature had transferred authority away from this agency and withheld funding, Graham-Keegan took several additional steps. First, Keegan ran on a platform of cutting the bureaucracy within ADE. One of her first actions after the election was to initiate a major downsizing of staff at ADE. The year before she took office the Department of Education had 460 full-time staff members. By 1996, she reduced the number of full-time staff to 231 (personal communication, ADE Payroll Division, April 1998). However, the Department was unable to function effectively with such low staffing provisions, much to the concern of some Democratic legislators.
Well, for one thing they [ADE] could add a few more staff people and they could keep them longer than six months. I don't think myself, I've called over there and gotten the same person twice. There's no continuity of staff at all. I think that speaks volumes of what's going on. (Senate Education Committee Member Mary Hartley, March 23, 1998)
However, legislators from the minority party were forgotten players in education policy, and only after school district leaders began to vociferously complain about the quality of services, did the numbers rise to 348 full-time staff by April 1998 (personal communication, ADE Payroll Division, April 1998). Consequently, fewer staff had
more responsibilities, further limiting the possibility for bureaucratic oversight of charter schools. Graham-Keegan took another explicit step in order to limit bureaucratic interference from ADE. She discouraged effective communication among the various divisions in the Department of Education. Her justification for this uncoordinated approach was as follows:
Our main efforts are not to be onerous on all schools. We don't have an internal structure of people who just focus on charter school issues. We have people from all departments dealing with the schools and don't isolate it anyway. We have specialists who work in various areas in all public schools. (Personal communication, Lisa Graham-Keegan, April 21, 1998)
The practical result of this uncoordinated approach to charter school oversight was that each unit within the Department had no idea what the other divisions were doing, which gave rise to the belief, reiterated in interviews with ADE staff, that "somebody else must be looking at that." Keegan's position on the state sponsoring boards, a major staff reduction and persistent turnover, coupled with a lack of coordinated leadership, disabled the bureaucratic response to charters. These contextual factors, in addition to the explicit steps taken by legislators to transfer authority away from state agencies and limit funding, allowed the original aims of the charter school policy to remain intact during the state political to state bureaucratic linkage.
State Bureaucratic to Charter School Linkage
The final linkage in the charter school reform was from the state bureaucratic to the charter school level. The institutional distrust between political leaders and state level implementors was equally strong between political leaders and local implementors. Selecting a system-changing policy instrument transferred authority away from district administrators and teachers, much as it did with state level implementors. Just as handpicked individuals were appointed to the newly created state boards, local implementors were recruited. Most of the charter school applicants participated in the Goldwater Institute's charter school project. Mary Gifford, a staff member at the Goldwater Institute during the early 1990s, said in a 1998 interview,
We were integral in getting that legislation through in the summer of 1994, and then the Goldwater Institute launched a two-year charter school program, a project at that time. The first year [we] aimed at getting the word out on charter schools: setting up conferences, developing a how-to-apply type of manual, [and] trying to get as many qualified applicants as possible before the board so we could get charters up and running. (Mary Gifford, March 9, 1998)
Consequently, those at the school level were socialized early
on as to the intentions of state level policy makers. Some of the charter school directors were formerly district teachers who were frustrated with the constraining rules, regulations, and levels of bureaucracy. Others came from private industry and wanted to run their school like a business. Whatever the rationale, virtually all of the charter school directors attended the Goldwater seminars. Therefore, the individuals who were creating the policy at the point of implementation understood the intentions of policy authors; they would not have to endure the same level of bureaucratic reporting as district public schools, but they would be forced to attract and maintain their student population. More importantly, individuals at the smallest unit had the capacity and will to implement these principles (McLaughlin, 1998).

In addition to transferring authority away from district public school personnel and recruiting local implementors, the charter school policy also removed one linkage in the policy process. Traditionally policy is interpreted at the state department of education, then at the central district office, and finally it is passed along to schools within the district. However, in a charter school, the district and school are one and the same. Consequently, one potential linkage, where original aims could have been misconstrued or subverted, was averted with the charter school policy. In sum, local implementors were recruited, socialized, and had the will to support the legislative intent.
Conclusion
The distortion of intentions for Arizona's charter school policy when put into practice was minimal, a finding at odds with most of the research on education (and social policy) implementation. From the literature, it appears that four variables influence successful policy implementation: communication, financial support, will, and bureaucratic structure (Edwards, 1980; McLaughlin, 1998; Weatherly & Lipsky, 1977). Arizona policy makers addressed all four features, significantly increasing the chances that the legislative intent, embodied in the charter school policy, would be preserved in practice.
First, key Arizona legislators effectively communicated their intentions to state and local implementors. They did so by clearly articulating their intent to decrease the bureaucratic structure in statute. Arizona Revised Statute (ARS) §15-183E obligates charter schools to comply with federal, state, and local rules, regulations, and statutes relating to health, safety, civil rights, and insurance. In addition, charter schools must provide a non-sectarian, comprehensive curriculum and design a method to measure pupil progress. With the exception of the aforementioned requirements, charter schools are "exempt from all statutes and rules relating to schools, governing boards, and school districts" (ARS §15-183E5).
This blanket waiver liberated charter schools from over 1000 pages of rules and regulations by which district public schools must abide. Moreover, the specific responsibilities for the Office of the Auditor General were omitted in the charter school statute, and there was only a single vague reference to the role expectations for the Arizona Department of Education. By explicitly excluding a clear
description of the responsibilities for state regulatory agencies, policy makers reinforced their message of limited bureaucratic controls. Key legislators preserved their intent over the following four years by defeating proposals that would limit competition or increase reporting requirements for charter schools. Arizona policy makers were also acutely aware of the impact financial support would have on implementation efforts. To limit excessive bureaucratic oversight, they simply refused to appropriate funds for state level bureaucrats. The administrative staff for the State Board for Charter Schools through 1998 consisted of an Executive Director and one administrative assistant. (Note 8)
The State Board of Education staff was also very lean. During the first two years of the charter school program, the staff included an Executive Director and one secretary; the same staffing provision as before the state board gained charter school responsibilities. In November 1997, the SBE created a new position, Director of the Charter School Division for the State Board of Education, who was given all responsibilities for SBE-sponsored charter schools. The simple yet effective strategy of withholding funds for administrative staff also thwarted the efforts of the Arizona Department of Education and the Office of the Auditor General to bureaucratically monitor Arizona charter schools.

The attitudes of individuals implementing policy were a third critical influence on successful implementation. Arizona policy makers transferred authority away from state and local implementors who lacked the "appropriate" attitude toward the charter school policy. Republican legislators understood that many local implementors were hostile to the charter school idea. District school administrators were threatened by the potential loss of students and funding. District school teachers felt the charter schools were a way to erode inroads made by the teachers' unions. Consequently, individuals from non-traditional backgrounds (e.g., the military, health care, private schools and industry) were recruited to run the public charter schools.
The same technique was used at the state level. The Department of Education lost some of its authority when charter schools were granted a blanket waiver from the rules, regulations, and reporting requirements established for district schools. This authority was assumed by two "charter friendly" boards: the State Board of Education, whose members were all appointed by pro-school-choice governors, and the State Board for Charter Schools, where the main criterion for membership was a strong disposition toward increasing school choice. Transferring authority to individuals who had a favorable inclination toward the policy intent dramatically decreased the chance of slippage.
Finally, organizational fragmentation at the Arizona Department of Education, combined with minimal bureaucratic structure for the state level sponsoring boards ensured that the policy intent was preserved in the working program. Lisa Graham-Keegan, the author of the charter school bill, had an unusually large amount of power over the implementation of the policy and proved to be a key actor in preserving the original aims. In her capacity as Superintendent
of Public Instruction, Graham-Keegan had a seat on both state sponsoring boards and was able to influence the behavior of board members who deferred to her judgment. Her position also allowed her to constrain the actions of bureaucrats at the Arizona Department of Education.
The intent of the Arizona charter school policy was preserved through a series of purposefully employed policy instruments and reinforced by a supportive contextual environment. Policy makers created a system-changing reform, which successfully transferred authority away from state and district level personnel, both of whom had historically altered legislative intent. Hall (1995) stated, "Policy production is a very complex process requiring much integration and coordination. It depends on the collective activity of many actors.... There are many places for contingency and numerous opportunities for altering the patterns of the past and context" (p.409). Arizona policy makers were able to maximize the potential for the preservation of their intentions by their explicit actions to produce a policy that limited bureaucratic oversight and neutralized the influence of policy actors who traditionally play key roles in shaping policy in practice.
Notes
1. For a more thorough review of the implementation research see Elmore & Sykes, 1992; Fuhrman, Clune & Elmore, 1988; or McLaughlin, 1998.
2. See also Kaufman, 1973; Van Meter & Van Horn, 1975; or Edwards, 1980 for a further discussion on the character of implementors.
3. These two state entities were responsible for approving new schools as well as general oversight.
4. For example, business leaders tend to favor the market-based nature of the reform. Conversely, teacher organizations favor the reform because it provides teachers with more autonomy.
5. For example, amendments that would increase reporting requirements or restrict the choices of customers would run counter to the spirit of the legislation.
6. All of the members were appointed by pro-school-choice Republican governors, although their position on this issue was not the main criterion for appointment, as it was for the members serving on the SBCS.
7. Although Keegan was never a professional educator, board members repeatedly deferred to her judgment because of her position as Superintendent of Public Instruction.
8. The SBCS had five Executive Directors in the first three years of the charter school program.
References
Barrow, S. (1978). Federal education goals and policy instruments: An assessment of the strings attached to categorical grants in education. In M. Timpane (Ed.), *The Federal Interest in Financing Schools*. Santa
Monica, CA: The Rand Corporation.
Baum, L. (1984). Legislatures, courts, and the dispositions of policy implementors. In G. C. Edwards III (Ed.), *Public Policy Implementation* (pp. 29-57). Greenwich, CT: JAI Press.
Baum, L. (1981). Comparing the implementation of legislative and judicial policies. In D. A. Mazmanian & P. A. Sabatier (Eds.), *Effective Policy Implementation* (pp. 39-62). Lexington, MA: Lexington Books.
Berman, P., & McLaughlin, M. W. (1978). Federal programs supporting educational change: Vol. VIII. Implementing and sustaining innovations. Santa Monica, CA: RAND Corporation.
Bomotti, S., Ginsberg, R., & Cobb, B. (1999). Teachers in charter schools and traditional schools: A comparative study. *Education Policy Analysis Archives*, 7(22). (Entire issue). Available online at [http://epaa.asu.edu/epaa/v7n22.html](http://epaa.asu.edu/epaa/v7n22.html)
Clinton, B. (1997). Public school choice and accountability in education. President Clinton's call to action for American education in the 21st century, [http://www.ed.gov/updates/PresEDPlan/part6.html](http://www.ed.gov/updates/PresEDPlan/part6.html).
Clune, W. H. (1987). Institutional choice as a theoretical framework for research on educational policy. *Educational Evaluation and Policy Analysis*, 9(2), 117-132.
Edwards, G. C. III. (1980). Implementing public policy. Washington, DC: Congressional Quarterly Press.
Elmore, R. & Sykes, G. (1992). Curriculum policy. In P. Jackson (Ed.), *Handbook of Research on Curriculum* (pp.185-215). New York: Macmillan.
Elmore, R. F., & McLaughlin, M. W. (1981). *Rethinking the federal role in education*. Washington, DC: US Department of Education.
Fuhrman, S. H., Clune, W., & Elmore, R. (1988). Research on education reform: Lessons on the implementation of policy. *Teachers College Record*, 90(2), 237-258.
Garn, G. (1998). The thinking behind Arizona's charter school movement. *Educational Leadership*, 56(2), 48-50.
Garn, G., & Stout, R. T. (In press). How a good theory failed in practice. In R. Maranto, S. Milliman, F. Hess, and A. Gresham (Eds.), *School Choice in the Real World: Lessons from Arizona Charter Schools*. Boulder, CO: Westview Press.
Gold, R. L. (1969). Roles in sociological field observation. In G. McCall & J. L. Simmons (Eds.), *Issues in participant observation* (pp. 30-39). Reading, MA: Addison-Wesley.
Hall, P. M. (1995). The consequences of qualitative analysis for sociological theory: Beyond the microlevel. *The Sociological Quarterly*, 36(2), 397-423.
Hall, P. M., & McGinty, J. W. (1997). Policy as the transformation of intentions: Producing program from statute. *The Sociological Quarterly*, 38(3), 439-467.
Kaufman, H. (1973). *Administrative feedback: Monitoring subordinates' behavior*. Washington, DC: Brookings Institution.
Knapp, M. S. (1997). Between systemic reforms and the mathematics and science classroom: The dynamics of innovation, implementation, and professional learning. *Review of Educational Research*, 67(2), 227-266.
Lipsky, M. (1980). *Street-level bureaucracy: Dilemmas of the individual in public services*. New York: Russell Sage Foundation.
Mattern, H. (1994, October 4). It's not much now. It's humble, but that will change: Phoenix site will give rise to one of state's 1st charter schools. *The Arizona Republic*, p. A1.
McDonnell, L. M., & Elmore, R. F. (1987). Getting the job done: Alternative policy instruments. *Educational Evaluation and Policy Analysis*, 9(2), 133-152.
McLaughlin, M. W. (1998). Listening and learning from the field: Tales of policy implementation and situated practice. In A. Hargreaves, A. Lieberman, M. Fullan, & D. Hopkins (Eds.), *International Handbook of Educational Change* (Part One). Boston, MA: Kluwer Academic Publishers.
Miles, M. B., & Huberman, A. M. (1994). *An expanded sourcebook: Qualitative data analysis* (2nd ed.). Thousand Oaks, CA: SAGE Publications.
Nathan, J. (1996). *Charter schools: Creating hope and opportunity for American education*. San Francisco, CA: Jossey-Bass Publishers.
Odden, A. R. (1991). The evolution of education policy implementation. In A.R. Odden (Ed.), *Education Policy Implementation* (pp. 1-12). Albany, NY: State University of New York Press.
Pressman, J., & Wildavsky, A. (1973). *How great expectations in Washington are dashed in Oakland, or why it's amazing that federal programs work at all*. Berkeley, CA: University of California.
Stake, R. E. (1995). *The art of case study research*. Thousand Oaks, CA: SAGE Publications.
Van Meter, D. S., & Van Horn, C. E. (1975). The policy implementation process: A conceptual framework. *Administration and Society, 6*, 445-488.
Weatherly, R., & Lipsky, M. (1977). Street level bureaucrats and institutional innovation: Implementing special education reform. *Harvard Educational Review, 47*(2), 171-197.
Weick, K. E. (1982). Administering education in loosely coupled schools. *Phi Delta Kappan, 63*(10), 673-676.
Wohlstetter, P. (1991). Legislative oversight of education policy implementation. In Odden, A. R. (Ed.), *Education Policy Implementation* (pp. 279-295). Albany, NY: State University of New York Press.
Wohlstetter, P., & Griffin, N. C. (1997, March). *Creating and sustaining learning communities: Early lessons from charter schools*. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.
Wohlstetter, P., Wenning, R., & Briggs, K. (1995). Charter schools in the United States: The question of autonomy. *Educational Policy, 9*(4), 331-358.
Yin, R. K. (1994). *Case study research: Design and methods*. Thousand Oaks, CA: SAGE Publications.
**About the Author**
**Gregg A. Garn**
The University of Oklahoma
Educational Leadership & Policy Studies
820 Van Vleet Oval
Norman, OK 73019-2041
Email: firstname.lastname@example.org
B.A. University of Northern Iowa, 1994
M.S. Arizona State University, 1996
Ph.D. Arizona State University, 1998
Gregg Garn is an Assistant Professor of Educational Leadership and Policy Studies at the University of Oklahoma where he teaches courses in politics and policy. His research interests include school choice,
policy development and implementation, and the politics of education.
Copyright 1999 by the Education Policy Analysis Archives
The World Wide Web address for the Education Policy Analysis Archives is http://epaa.asu.edu
General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, email@example.com or reach him at College of Education, Arizona State University, Tempe, AZ 85287-0211. (602-965-9644). The Book Review Editor is Walter E. Shepherd: firstname.lastname@example.org. The Commentary Editor is Casey D. Cobb: email@example.com.
EPAA Editorial Board
| Name | Institution/Position |
|-----------------------------|-----------------------------------------------------------|
| Michael W. Apple | University of Wisconsin |
| John Covaleskie | Northern Michigan University |
| Alan Davis | University of Colorado, Denver |
| Mark E. Fetler | California Commission on Teacher Credentialing |
| Thomas F. Green | Syracuse University |
| Arlen Gullickson | Western Michigan University |
| Aimee Howley | Ohio University |
| William Hunter | University of Calgary |
| Daniel Kallós | Umeå University |
| Thomas Mauhs-Pugh | Green Mountain College |
| William McInerney | Purdue University |
| Les McLean | University of Toronto |
| Anne L. Pemberton | firstname.lastname@example.org |
| Richard C. Richardson | New York University |
| Dennis Sayers | Ann Leavenworth Center for Accelerated Learning |
| Michael Scriven | email@example.com |
| Robert Stonehill | U.S. Department of Education |
| David D. Williams | Brigham Young University |
| Greg Camilli | Rutgers University |
| Andrew Coulson | firstname.lastname@example.org |
| Sherman Dorn | University of South Florida |
| Richard Garlikov | email@example.com |
| Alison J. Griffith | York University |
| Ernest R. House | University of Colorado |
| Craig B. Howley | Appalachia Educational Laboratory |
| Richard M. Jaeger | University of North Carolina--Greensboro |
| Benjamin Levin | University of Manitoba |
| Dewayne Matthews | Western Interstate Commission for Higher Education |
| Mary McKeown-Moak | MGT of America (Austin, TX) |
| Susan Bobbitt Nolen | University of Washington |
| Hugh G. Petrie | SUNY Buffalo |
| Anthony G. Rud Jr. | Purdue University |
| Jay D. Scribner | University of Texas at Austin |
| Robert E. Stake | University of Illinois--UC |
| Robert T. Stout | Arizona State University |
EPAA Spanish Language Editorial Board
Associate Editor for Spanish Language
Roberto Rodríguez Gómez
Universidad Nacional Autónoma de México
firstname.lastname@example.org
Adrián Acosta (México)
Universidad de Guadalajara
email@example.com
Teresa Bracho (México)
Centro de Investigación y Docencia Económica-CIDE
bracho dis1.cide.mx
Ursula Casanova (U.S.A.)
Arizona State University
firstname.lastname@example.org
Erwin Epstein (U.S.A.)
Loyola University of Chicago
email@example.com
Rollin Kent (México)
Departamento de Investigación Educativa- DIE/CINVESTAV
firstname.lastname@example.org
email@example.com
Javier Mendoza Rojas (México)
Universidad Nacional Autónoma de México
firstname.lastname@example.org
Humberto Muñoz García (México)
Universidad Nacional Autónoma de México
email@example.com
Daniel Schugurensky (Argentina-Canadá)
OISE/UT, Canada
firstname.lastname@example.org
Jurjo Torres Santomé (Spain)
Universidad de A Coruña
email@example.com
J. Félix Angulo Rasco (Spain)
Universidad de Cádiz
firstname.lastname@example.org
Alejandro Canales (México)
Universidad Nacional Autónoma de México
email@example.com
José Contreras Domingo
Universitat de Barcelona
firstname.lastname@example.org
Josué González (U.S.A.)
Arizona State University
email@example.com
María Beatriz Luce (Brazil)
Universidad Federal de Rio Grande do Sul- UFRGS
firstname.lastname@example.org
Marcela Mollis (Argentina)
Universidad de Buenos Aires
email@example.com
Angel Ignacio Pérez Gómez (Spain)
Universidad de Málaga
firstname.lastname@example.org
Simon Schwartzman (Brazil)
Fundação Instituto Brasileiro de Geografia e Estatística
email@example.com
Carlos Alberto Torres (U.S.A.)
University of California, Los Angeles
firstname.lastname@example.org
Education Policy Analysis Archives
Volume 7 Number 27 September 6, 1999 ISSN 1068-2341
A peer-reviewed scholarly electronic journal
Editor: Gene V Glass, College of Education
Arizona State University
Copyright 1999, the EDUCATION POLICY ANALYSIS ARCHIVES.
Permission is hereby granted to copy any article if EPAA is credited and copies are not sold.
Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.
Homeschooling and the Redefinition of Citizenship
A. Bruce Arai
Wilfrid Laurier University
Abstract
Homeschooling has grown considerably in many countries over the past two or three decades. To date, most research has focused either on comparisons between schooled and homeschooled children, or on finding out why parents choose to educate their children at home. There has been little consideration of the importance of homeschooling for the more general issue of citizenship, and whether people can be good citizens without going to school. This paper reviews the research on homeschooling, as well as the major objections to it, and frames these debates within the broader issues of citizenship and citizenship education. The paper shows that homeschoolers are carving out a different but equally valid understanding of citizenship and that policies which encourage a diversity of understandings of good citizenship should form the basis of citizenship education both for schools and homeschoolers.
Introduction
There has been a heightened interest in homeschooling in both popular and academic circles in recent years. The numbers of homeschoolers across North America, Australia and Western Europe have grown significantly over the past two decades (Knowles, Marlow and Muchmore, 1992; Thomas, 1998), and this growth shows no sign of abating. The number of "how-to" manuals has exploded, as has the number of support groups and regional, national and international support organizations.
Most of the debates about homeschooling have been framed as primarily educational issues. For example, the most common theme in discussions of homeschooling is whether or not homeschooled kids are disadvantaged in the education they receive, versus children who attend regular school (Rudner, 1999). Other issues which have received significant attention are the legality of the practice (Marlow, 1994), the motivations of parents to homeschool (Knowles, 1991; Mayberry, 1988; Mayberry and Knowles, 1989), and the different ways in which homeschooling is accomplished (Mayberry, 1993; Thomas, 1998). In most of these discussions, the implications of homeschooling for citizenship are downplayed in favour of educational or methodological concerns.
However, the broader issue of the place of homeschooling in contemporary democratic societies can be better understood as a more fundamental debate about the nature of citizenship, and the place of the school as a major agent of socialization in the construction of citizens. In short, most of the concerns about and objections to homeschooling are worries about whether homeschooled children will grow up to be good citizens.
This paper begins with an overview of the major objections to homeschooling, and how these objections can be seen as concerns about citizenship. The next section summarizes international trends in citizenship education in schools, especially the concept of multidimensional citizenship. This is followed by a review of the international evidence on homeschooling, and how homeschoolers are implicitly creating a different vision of citizenship by keeping their children out of school. Finally, some policy implications, for schools and for homeschoolers, are outlined.
**Objections to homeschooling**
When parents decide to homeschool their children, they face many hurdles. These include self-doubt about their decision, worries about the reactions of family and friends, bureaucratic interference from school officials, and sometimes even problems with the legality of their decision, depending on how they choose to pursue homeschooling and the laws of their jurisdiction (Marlow, 1994; Mayberry, et al., 1995). But the most common question which homeschoolers hear, from bureaucrats, educators, teachers, family and friends alike is, "What about socialization?" (Holt, 1981; 1983)
**Socialization**
The "socialization question", as it is known among homeschoolers, is actually an omnibus inquiry which usually leads to more specific questions. Homeschooling parents are often asked questions like, "Don't you worry that your kids will grow up to be weird?", "How will you prepare them for the real world?", or, "Will they be able to get a job?" These are really concerns about homeschoolers not participating in one of our most important institutions of proper socialization. It is useful to break this larger question about socialization down into its major components.
*The inability to cope.* One interpretation of the socialization question is that students who are homeschooled will not be able to cope with the harsh realities of life beyond their family environment (see Luffman, 1997). In school, the argument goes, children learn valuable skills such as the ability to work with others, to handle interpersonal conflicts, to work in groups or teams, and to make personal sacrifices for the betterment of the group. These are vital skills later in life. Homeschooled children, who will not necessarily acquire these skills because of the protective cocoon of the home, will then be at a disadvantage when they grow up (Menendez, 1996).
A different version of the same argument is that homeschooled children will be unprepared for the harsh and competitive nature of the labour market. They will then turn to government assistance, their parents, or a life on the margins of society in an attempt to reproduce the utopian bubble in which they were raised. In either version, parents are doing their children a great disservice by not giving them the opportunity to learn these skills at school. This quickly leads to a conclusion about the desirability of compulsory schooling, which will be addressed later. But the point here is that without school, and the valuable "job skills" it teaches, homeschooled children will not be willing or able to compete with their schooled counterparts (Pfleger, 1998; see also Webb, 1989).
In addition to job skills, schools teach children a great deal about social expectations (Pfleger, 1998). Standards of behaviour, dress, etiquette and morality are all powerfully reinforced through schooling. That is, school "normalizes" people because they learn important social norms and their sanctions, even if they choose not to follow them. School provides a kind of "informed consent" in that people who choose to ignore social prescriptions do so in full awareness of the penalties that they will likely encounter. Homeschooled children do not receive this majoritarian filtering of norms, but are more likely to pick up their parents' idiosyncratic understandings of the world. They will again be disadvantaged because they will not realize what constitutes conforming and nonconforming behaviour once they leave the family and enter the wider society (see Taylor, 1986).
*Bias and narrow curricular content.* A second issue which is sometimes referred to by the socialization question is whether or not parents can provide their children with a sufficiently broad education. In school, critics argue, children are exposed to many different teachers, each with their own areas of expertise. No parent, no matter how intelligent and dedicated, could possibly provide this breadth of understanding for their children. The necessary conclusion, if these premises are valid, is that schooled children receive a better education than homeschooled children (Menendez, 1996). Many of these critics will admit, though, that homeschooled children receive much more individual attention than children in school, and that this may offset some of the advantages of having many teachers.
The problem of bias and narrow curricular content is more serious when parents deliberately set out to teach their children a "distorted" or erroneous view of the world. This claim is usually reserved for those people who keep their children out of school because they want to teach them a dogmatic view of the world, such as a belief in creationism. Occasionally, people who try to instill "new age" values or beliefs in their children are accused of bias. According to this argument, there are two problems with parents who teach their kids a distorted view of the world.
First, there is the problem that these parents know full well what the dominant social attitudes, beliefs and understandings are, and they have deliberately chosen to teach their kids something else. These people are not good citizens because they are purposefully flouting established conventions and disadvantaging their children in the process (Menendez, 1996). The second problem is related to the problem of the inability of homeschooled children to cope in the real world. Because these kids have been fed a biased and inaccurate view of the world, they will not fit into the wider society when they are forced to live on their own. If these homeschoolers are returned to school at some point, it is the school system and taxpayers who have to provide the resources to correct mistakes made by the parents (Pfleger, 1998).
*Lack of exposure to others.* A final major thread of the socialization objection to homeschooling is that homeschooled children do not receive enough exposure to other people and their distinctive ways of life. Especially in this multicultural era, schools teach students from extremely diverse cultural and ethnic backgrounds. All students benefit from this diversity because they learn about other ways of life, and the values of tolerance, difference and novelty. Homeschoolers, on the other hand, do not receive this exposure because they are cooped up in the home. Not only is this a less enriching environment, but it can undermine social cooperation if homeschoolers do not learn the value of tolerance of others. Homeschooling, according to this argument, runs the danger of producing a less unified culture, including people with higher levels of prejudice than if everyone went to school (Menendez, 1996).
All of these criticisms about the lack of socialization for homeschooled versus schooled children are primarily about what schools teach beyond the regular curriculum. That is, the value of tolerance and cooperation, an awareness of the dominant culture, and a broad perspective on life are not things which are taught directly, but which children learn in order to participate in the formal lessons of school. So these are things that homeschoolers cannot teach their children by simply picking up a book and lecturing out of it. These are "life skills" which can be taught most effectively through school because of its communal organization.
**Elitism**
Homeschoolers have also been accused of being elitist. The argument takes one of two forms. The first one is that the current public system is in disarray, but parents have a duty to try to improve that system to make it better for all children. Taking a child out of school may be fine for that one student, but it does nothing to improve the situation for all of the other children who are left in school. Homeschooling then, is an ungenerous act because those parents who choose it are shirking their duty to the other families who stay in the system (Menendez, 1996). In addition, if middle and upper class parents leave the school, this removes active and concerned parents who might otherwise fight for improvements. Occasionally, this criticism takes on a class or ethnic dimension as well. That is, homeschooling may be a viable solution to poor schools for middle and upper class families with a stay-at-home parent, but it is not an option for the lower classes where both parents must work in order to survive. Since ethnic minorities are over-represented in the lower classes, homeschooling is a way for ethnic elites to protect the education of their own children while abandoning children from other ethnic backgrounds.
A second version of the elitism criticism of homeschooling is that homeschooling can only be done by parents with high levels of education. The argument is that homeschooling may work for the well-educated elites because they have the ability to teach their kids at home, but people who don't have high levels of education must rely on the public school system (Menendez, 1996). Again, this is a way for elites to maintain privilege. The interesting thing about both versions of the elitist argument is that their implications for the public school system contrast sharply with the socialization arguments above. In the socialization arguments, school was seen as superior to homeschooling, while in the elitist argument school is viewed as inferior to the home, at least for elites.
**Higher education**
Another worry of critics of homeschooling is that homeschooled kids will be disadvantaged in their abilities to apply for post-secondary education opportunities. This criticism is different from all of the other criticisms because it is a concern that is shared by homeschoolers. The argument from the critics is that homeschoolers will not have the credentials (namely a high school or equivalent diploma) to apply for college, trade school or university. Therefore, homeschooled children will be forced either to go to school anyway to earn these credentials, or to demonstrate their abilities through some other means. This can prove difficult because most post-secondary institutions have little or no experience or interest in evaluating the qualifications of homeschooled applicants. Again, the criticism is that children will be punished for unwise parental decisions.
**Citizenship and choice in education**
All of the above criticisms of homeschooling are really concerns about parental choice in education, and the conflict between parental rights and state rights in education. Worries about coping in the real world, getting along with others, working for the common good rather than individual privilege, and being able to contribute to society through higher education are all based on a vision of what good citizens do. Because of this, they are also concerns about citizenship and whether or not homeschoolers will fit into the larger society in the proper ways.
One of the most sophisticated arguments against parental choice in education, including the choice to homeschool, is Eamonn Callan's (1997) *Creating Citizens: Political Education and Liberal Democracy* (see also Callan, 1995). Callan's argument stems from the ongoing debates in political philosophy concerning the nature of rights, democracy, rationality, fairness, and justice, and how we can construct schools which promote these principles. He argues that a true common school, in which all students receive a common curriculum, with some reasonable departures, provides the best way of ensuring a vibrant sense of citizenship among present and future generations. This sense of citizenship is built around the virtues of a critical tolerance of diversity, the power of rational thought and argument, and commitment to a defensible moral code. Citizens who develop these graces will have an understanding of the world which will give them the freedom to choose how they live their life, which is the ultimate aim of the liberal democratic state. Moreover, it is through common schooling that these attributes are best developed. As Callan wrote,
> Schooling is likely the most promising institutional vehicle for that understanding since the other, extra-familial social influences that impinge heavily on children's and adolescents' lives—peer groups, the mass media of communication and entertainment—do not readily lend themselves to that end (Callan, 1997, p. 133).
Callan has in mind a very particular form of schooling here which he refers to as "schooling as the great sphere" (Callan, 1997, p. 134). This is a form of schooling in which children are helped to explore the world and in the process they acquire the abilities to decide for themselves how and where they wish to live in that world. Callan further argues that schooling as the great sphere should be mandatory for all children, except in some clearly defined circumstances. The reason is that the preservation of a liberal democratic state depends on it. As he wrote,
> The need to perpetuate fidelity to liberal democratic institutions and values from one generation to another suggests that there are some inescapably shared educational aims, even if the pursuit of these conflicts with the convictions of some citizens (Callan, 1997, p. 9).
This is reminiscent of the early mandate of public education systems to provide the people of the country with the skills to allow them to become proper citizens (Wong, 1997). The key question concerning homeschooling, then, is when it is permissible not to send a child to a common school. Callan has argued that parents have a right to keep their children out of school in only two circumstances. The first is when a parent's right to freedom of association with their children would be jeopardized by sending them to school. If the teachings of the common school would so alienate a parent from a child that they could no longer sustain an adequate parent-child relationship, then the state must allow these parents to keep their children out of a common school. The second situation is when a community creates a separate educational system which helps preserve the integrity of that community. For example, if a distinct community was able to construct a set of educational institutions, and these institutions were necessary to preserve the integrity of that community, then the state should grant children in that community an exemption from the common school. The example he uses is an Amish community that cannot preserve its integrity if its children attend a common school.
However, Callan is clear that these are very unusual circumstances, and exemptions are only to be granted after careful scrutiny of each case. Parents cannot keep their child out of school simply because they think it is in the best interests of the child to do so. He explicitly argues that parents do not have the right to reject great sphere schooling for their children. The reason is that this would interfere with the child's future "zone of personal sovereignty" (Callan, 1997, p. 155) by keeping the child "ethically servile" (Callan, 1997, p. 155) to her or his parents. Children who are ethically servile to their parents are those who have been raised in "ignorant antipathy" toward all points of view other than that of their parents. In other words, parents do not have the right to keep their children out of a common, great sphere school because their children could be brainwashed into believing in only their parents' very limited view of the world. This is not only harmful for the child so brainwashed, but also for the larger society. As Callan wrote,
> Large moral losses are incurred by permitting parents to rear their children in disregard of the minima of political education and their children's right to an education that protects their prospective interest in sovereignty (Callan, 1997, p. 176).
Further, he argues that, "Those who would argue for the right of parents to veto the great sphere are effectively demanding a right to keep their children ethically servile" (Callan, 1997, p. 155). In Callan's argument, the personal rights of the child are combined with state rights to the preservation of liberal democracy to cancel out parental rights to make choices about their children's education. There appears to be little room in his proposal for homeschooling. Homeschooling would only seem possible under extreme circumstances when parents would be at risk of losing their relationship with their children, or if they happened to belong to a community in which homeschooling was the chosen method of preserving a distinctive way of life. But since the reason for requiring attendance at school is to help create good citizens, the issue becomes what sort of citizenship education children receive in school.
**Citizenship and citizenship education**
The concept of citizenship is interesting because while there is general agreement about some of the elements which form a core definition of the concept, there is wide disagreement about its final composition, and about which elements should receive more prominence than others. Most understandings of citizenship include some combination of five elements: group identification; rights or entitlements; responsibilities or duties; public participation; and common values (Derricott et al., 1998; Touraine, 1997; Callan, 1997). Various models of citizenship have been proposed and debated (see Delanty, 1997 for a good review of the major positions), but there is no single vision of citizenship which is acceptable to all. Perhaps this is not surprising given that citizenship is a fundamentally political concept. Similarly, there are many different proposals about the nature and content of citizenship education.
Starting with the earliest ideas of citizenship, there was an important distinction between good people and good citizens in ancient Greece. Good people lived their lives according to a set of legitimate moral principles, but good citizens carried the additional burden of participating actively in the public life of the society (Cogan, 1998). And this participation required a certain level of education.
With the development of industrial capitalism and the rise of public education, the school became a primary site for citizenship education (McKenzie, 1993). Early versions of citizenship education in most countries stressed several elements, including nationalism and national history, individual rights and responsibilities, and factual information about a country's geography and systems of governance (McKenzie, 1993; Wong, 1997). In many cases, schools continued to emphasize one's duty to participate in the public life of the society. In these early years, participation meant not only following political events and voting in elections (if one had the right to vote) but also working within the local and church communities to which one belonged. That is, children were taught that they have a duty to work actively to improve the conditions of life for themselves and others in their immediate environment (Fogelman, 1991; Wong, 1997).
Over time, more and more emphasis was placed on "civics," or the facts about a country's political system, and less attention was paid to participation and community identification, beyond formal political participation in elections. In many countries, citizenship education was confined to history courses, and later to social studies courses (McKenzie, 1993; Wong, 1997). This led to the teaching of a more formalistic understanding of citizenship, one which stressed rights and responsibilities rather than participation and group identification. When participation was stressed, the fear was that it was incomplete and did not result in strong bonds between individuals and their communities. As Touraine (1997, p. 146) says, "In today's mass society, everyone talks of participation; but participation tends to mean dissolving into what David Riesman called 'The Lonely Crowd'". In other words, in many schools participation was a rather vacuous moral injunction to be publicly involved. This has begun to change with the development of "community service" elements in many curricula (Cogan and Derricott, 1998; Fogelman, 1991; McKenzie, 1993). Schools appear to be rediscovering that participation in the daily events of life is important for the education of proper citizens.
Fogelman has shown that although citizenship education has stressed public involvement, there is a clear gap between students' attitudes and their behaviours. In a survey of British students, many reported that public involvement, especially in helping others, is important, but very few were actually involved in such activities. For example, the percentage of students who thought charitable work (e.g., helping the elderly or the disabled, preserving the environment) was important ranged between 37% and 71%, but only 6% to 12% of students were actually involved in these activities (Fogelman, 1991).
**Multidimensional citizenship**
Kubow, Grossman and Ninomiya (1998) and others (Cogan and Derricott, 1998) have recently articulated an idea of "multidimensional citizenship". Multidimensional citizenship for them has four components, the personal, the social, the temporal and the spatial, which encourage students to reflect on their own behaviour, their relations with others both locally and globally, and their relationships to the past and the future. Multidimensional citizenship is based on the principles of toleration of and cooperation with others, non-violent conflict resolution, rational argument and debate, environmentalism, respect for human rights, and participation in civic life. This vision of citizenship, they argue, must become the philosophical foundation for schools of the future.
Kubow et al. (1998) argue that in the personal dimension, compulsory schooling should develop a personal sense of virtue in all students and that this cannot be done in isolated courses. Rather, the school must be a model of virtue in all respects, from the behaviour of teachers, administrators and students to the place of the school in the life of the community. Schools should provide students with opportunities to integrate into their communities in numerous ways to foster proper attitudes and behaviours. Moreover, other social institutions such as families, churches and volunteer organizations must help schools achieve this mission by reinforcing the principles of multidimensional citizenship.
The inculcation of virtue through schooling is a theme that also runs through Callan's (1997) ideas, as well as those of others such as Holmes (1995). For example, Callan stresses that contemporary common schools can and should promote "virtue" in their students, and Holmes wants major changes to the school system so that schools can build "character" in pupils. In both cases, these qualities cannot be taught simply in history or social studies courses, but must be an integral principle upon which an adequate school is founded. Moreover, character and virtue involve more than adherence to the values of respect for the law, tolerance of others and non-violent conflict resolution; they must also include a belief in the power of rational thought and argument, and a constant search for the good, the true and the right.
The social element of multidimensional citizenship encompasses the active commitment of citizens to participate in "civil society," which is not simply a formal political space. Rather, civil society takes in a much broader range of actions, from public highway clean-ups and parades to the use of public spaces and running for political office. The energetic participation of all people in these actions is a primary goal of education for multidimensional citizenship. The spatial element forces us to think of our place in the world, without giving any one reference point an exclusive claim on our identities. Rather, we need to recognize that we are all pulled in many directions by spatial and other affinities, and that we do not have to choose one at the expense of others. So, for example, one can be a North American and an environmentalist at the same time, without any necessary contradiction. Finally, the temporal dimension encourages us to think about our place in the march of time. We need to recognize that our actions are shaped by those who preceded us, and that we have a responsibility to those who will come after us.
All four elements need to be developed and explicitly recognized in school curricula, according to Kubow et al. (1998). One of the interesting things about the idea of multidimensional citizenship is that the four dimensions all involve many different skills and values, and people may combine aspects of the four elements in many ways to produce different, but no less valid, forms of citizenship. For example, citizenship for some people might include a very strong environmental commitment which for them means a focus on internationalization and globalization as the basis of environmental problems. For others, environmentalism means cleaning up the chemical waste from a local factory. In multidimensional citizenship, both of these incarnations are valid. We are not required to agree on one and only one vision of the good citizen.
Homeschooling seems to have little place in any of the above discussions of the relationship between citizenship and education. In all cases, schools are argued to have an important, even primary, role in the cultivation of new citizens, and in some cases it is argued that parents do not even have the right to exempt their children from this education. Yet the number of homeschoolers in most countries continues to grow. The key issue, then, is whether homeschoolers pose a threat to citizenship because they do not go to school. That is, do homeschoolers make good citizens? In the following section, I will argue that the answer to this question is "yes", but that there are important differences between the vision of citizenship promoted in schools and that found among homeschoolers.
**Homeschooling**
Homeschoolers have responded to the above charges of not being good citizens, and have begun to create a different understanding of citizenship through their actions. The counter arguments to the charges of lack of socialization, elitism, post-secondary qualifications and parental rights to choice in education reveal that homeschoolers do not accept the assumption that schools are a primary agent in the construction of all good citizens. Further, the majority of them do not want to isolate themselves from the larger society, as is commonly presumed. Rather, they seek meaningful integration into the society, and in doing so, have come to produce a different but equally valid understanding of citizenship.
**Socialization**
Homeschoolers have been charged with failing to provide their children with the tools necessary to cope in the wider world. The contention of this criticism is that school provides this preparation. However, homeschoolers recognize that school is not the only means by which children learn coping skills, nor is it necessarily the best. Homeschooled children, far from being isolated in their homes, are often heavily involved in sports, music, church and wilderness groups (i.e., scouts and guides) outside the home (Mayberry et al., 1995; Ray, 1994; Thomas, 1998; Knowles, 1998). To play on a sports team or in a band, or to be a member of a Guide troop, requires that children learn how to interact with others, which means they need to learn the values of tolerance, mutual respect and cooperation. Homeschooling parents contend that their children learn the supposed coping skills in these activities, so learning them at school is unnecessary.
Some homeschooling parents react to this criticism more harshly, arguing that the supposed coping skills learned in school are simply unintended consequences of the communal organization of schools. Moreover, parents also provide instruction in these skills and values, so it is erroneous for schools to claim all of the credit for these abilities (Gatto, 1997). It is not the case that just because a child is homeschooled that he or she will not learn what is necessary for proper interpersonal interaction.
The charge of bias and narrow curricular content has also been addressed by homeschoolers. The criticism depends, they argue, on the assumption that all teachers are unbiased, or that their biases offset one another. This is unlikely according to homeschooling parents, so there is no necessary reason to think that children in school will receive an unbiased education. In addition, many parents use standardized curricula and/or make extensive use of public and college/university libraries in their home education, which reduces potential bias and idiosyncrasy (Ray, 1994; 1997).
The criticism of narrow mindedness is most serious when parents set out to indoctrinate their children in a particular world view. For example, if some homeschooling parents wanted to ensure that their kids believed that the world was flat, set out to teach them this, and made sure that no other views contaminated this truth, most people would rightly have a problem with this approach. However, homeschoolers view this as a parenting problem, not a homeschooling problem (Sheffer, 1997). They argue that children can be indoctrinated into malicious or erroneous world views even if they attend school, and that it is up to their critics to show that indoctrination is more likely in homeschooling than in public education. For example, homeschoolers contend that most racists have attended school. Raising bigoted, intolerant or violent children then can be done as easily if they attend school as if they stay home.
Homeschooling parents have responded to the charge that their children do not receive sufficient exposure to others in two main ways. First, they claim that their children do get exposure to others through their other activities such as sports and music, as noted above. Second, many of them also claim that the exposure to diversity that kids actually receive in school is probably overemphasized because schools demand a high level of conformity in the first place. The organization and structure of schools requires that diversity fit into specific patterns such as the daily schedule of classes and extra-curricular activities. Also, in school children have little opportunity to interact with people who are not almost exactly the same age, thereby robbing them of the ability to learn from those older and younger than themselves. Therefore, real exposure to other ways of life probably does not happen in school, according to many homeschoolers (Thomas, 1998).
Elitism
Some homeschoolers are understandably upset at the suggestion that they are being elitist by keeping their kids at home. This is especially true of the selfish version, where homeschoolers are perceived to be abandoning the public education system and the kids who remain in it. Although homeschooling is usually a response to problems or perceived problems at school (Knowles, 1991), homeschoolers recognize that home education is not for everyone. They wish only to be accorded the same respect for their decision as is given to parents who decide to send their kids to school.
As for being part of the elite, homeschooling families, from the many surveys that have been done, are not part of the financial elite, although the large majority of them are white (Mayberry et al., 1995; Ray, 1994; 1997). And while there are problems with all of these surveys (see Welner and Welner, 1999 for a summary of problems which apply to these as well as other surveys of homeschoolers), they all show homeschooling families to have an average or slightly below average level of family income, and slightly higher levels of education, in comparison with the general population. However, homeschoolers are quick to point out that home education can be and is being done by parents with very low levels of education as well. Indeed, many home educating parents would find it ironic if they had to attend school just so their kids could stay home.
**Higher education**
Homeschooling parents, as noted above, are as worried about their children's chances of entering post-secondary institutions as are some critics of homeschooling. Their response has usually been one of planning, and trying to find out what institutions would require while there is still time for their kids to acquire the necessary credentials or documentation (Ray, 1994; 1997). For example, if getting into university requires a high school diploma, many homeschooled kids will end up spending a year or more in school, or taking correspondence courses, to get the diploma. Homeschoolers point out that this has the unintended benefit of forcing these teenagers to think about what they want to do and then work toward that goal, instead of just finishing school and then choosing among the options that happen to be available.
Other homeschoolers are unwilling to attend school or take correspondence courses, and try to change the entry requirements of post-secondary institutions. Some homeschoolers approach college and university registrars and try to convince them that they are qualified for admission without the regular high school diploma. The success of this approach of course depends very heavily on the persuasive abilities of the student, and probably more importantly the regulatory context within which the institution must work. In some jurisdictions (for example, in most provinces in Canada) colleges and universities receive government funding only for students who meet specific entrance criteria which usually includes a high school diploma or recognized equivalent. Universities do not receive funding for students who do not meet these criteria, so there is no incentive to accept these students.
**Homeschooling and citizenship**
Moving beyond homeschoolers' responses to the criticisms levelled at them to the larger body of research on homeschooling, there is evidence to suggest that homeschoolers are involved in a process of constructing an alternative vision of citizenship for themselves and their children, albeit largely implicitly. Consistent with the notion of multidimensional citizenship, homeschoolers are combining a different mix of attributes to become good citizens. In particular, they emphasize participation and the importance of family as the basis of a different definition of citizenship.
In school, citizenship education emphasizes history, geography and social studies lessons, with some limited participation in extra-curricular activities both inside and outside the school. However, as Fogelman (1991) shows, the amount of extra-curricular
participation is limited. For homeschoolers, participation in the public sphere is a more important component of their education. They are much more involved in things like volunteer work than schooled children, which also further offsets socialization criticisms. For example, Ray (1994; 1999) found that over 30% of homeschooled kids 5 years old or older in both the US and Canada were actively involved in volunteer work, compared to the 6 to 12% found by Fogelman for schooled kids.
In other activities, homeschooled kids also exhibit high participation levels, although perhaps not any higher than schooled children. In the same surveys noted above, Ray found that 98% of homeschooled kids in the US were involved in 2 or more regular activities outside the home (Ray, 1999) and that Canadian homeschoolers had an average of almost 9 hours per week of contact with non-family adults and over 12 hours per week of contact with non-sibling children (Ray, 1994). And while the generalizability of these results must be treated with some caution, there is some evidence to substantiate the claim that homeschooled kids are very involved in activities outside the home. This suggests that homeschooled kids and their parents are keen to integrate into the wider society rather than pulling back from it, as is commonly presumed.
Mayberry and Knowles (1989), Knowles (1991) and Mayberry (1988) have also shown that "family unity" is a major factor in many parents' decisions to educate their kids at home. They feel that homeschooling promotes, or at least allows them to have, much stronger relationships with their children than would be possible if they went to school. These parents feel that these strong relationships are important not just in themselves but also for fostering two important characteristics in their children.
First, children with strong family relationships have the confidence to explore the world in challenging and sometimes unconventional ways. For instance, Thomas (1998) suggests that strong family bonds allow children to learn at their own pace, to maintain a heightened level of curiosity and to be involved in intense learning processes. As he says, "At home, on the other hand, children spend most of their time at the frontiers of their learning. Their parents are fully aware of what they already know and of the next step to be learned. Learning is therefore more demanding and intensive" (Thomas, 1998, p. 46).
Homeschooling parents also feel that a strong family will give their children the ability and the confidence to be more independent and to think for themselves. Indeed, raising kids who are willing and able to think for themselves is a primary goal of many homeschooling parents (Knowles, 1991; Thomas, 1998). There is also some evidence to suggest that homeschooled kids see their relationships with their families as crucial to their own independence (Sheffer, 1997). It may be the case, then, that some homeschoolers would fall under Callan's "freedom of association" exemption from mandatory great sphere schooling. That is, strong family bonds, whether they are the motivation for or an effect of homeschooling, could be jeopardized by not allowing parents the right to homeschool.
The strong bonds in homeschooling families are also thought to be the basis of deliberate and informed participation in the larger society, especially later in life (Sheffer, 1997). Many homeschooling parents find the level of consumerism and/or materialism in the "dominant society" to be too high and they want their kids to be able to resist these intense pressures. Some homeschooling parents have pulled their kids out of school because of the peer pressure and the availability of drugs and alcohol, while others mentioned that the pressure to be part of the "in crowd" was antithetical to the way they wished to raise their children (Marshall and Valle, 1996).
Homeschooling, then, is a way for these parents to live out a lifestyle which is somewhat different from the norm, and to raise their children to make their own decisions about how they wish to live. In other words, these parents share Callan's vision of raising and educating children to make informed and reasonable choices about their lives.
**Policy Implications**
While the form and content of citizenship education among homeschoolers are clearly different from what children receive in school, it is not an inferior experience. Homeschoolers, in other words, can be good citizens. Here I have argued that homeschoolers, despite being accused of not being good citizens, are actually engaged in a process of defining their own vision of what it means to be a citizen. They clearly do not believe that compulsory schooling is a necessary prerequisite of adequate citizenship, and they prefer to stress the importance of family and participation in public activities as the basis of their understanding of the good citizen. The key issue now is what this implies for educational policies about homeschooling and compulsory schooling.
The major implication for compulsory schooling in this paper is that schools cannot be the only, or even the primary, agent of citizenship education for all children. Homeschooled kids can be good citizens, even if their vision of citizenship is somewhat different than that taught in schools. This undermines the arguments that schooling should be compulsory for all children in order to preserve "democracy", and that wanting a right to not send children to a common school is necessarily to want to keep them ethically servile. Most homeschooled children and their parents, just like most schooled children and their parents, are fervent supporters of democracy and have no interest in ethical servility.
Schooling is not an antidote to ethical servility, and policies surrounding the compulsory nature of school should be re-examined in light of this. Specifically, the need to educate all children to be good citizens has always been a cornerstone of mandatory schooling policies, so if these policies are to be retained, they need to account for the fact that children can become good citizens without going to school. This is not to suggest that a rationale for compulsory schooling is impossible, but only that it cannot be based primarily on constructing good citizens.
As for the content of citizenship education which is taught in
schools, the argument in this paper is consistent with policies which would continue to build on the importance of participation as a crucial element of citizenship education. This would not only help to legitimate the definition of citizenship being modelled by homeschoolers, but would also close the gap between what is taught in school and what is taught by home educators.
Further, schools should continue to pursue policy initiatives which promote multidimensional citizenship. Schools need to recognize that there is no one best version of being a good citizen, but that there are many valid interpretations of an ideal member of society. Moreover, multidimensional citizenship suggests that becoming a citizen is a constant process, and that people's ideas about good citizenship can change. Perhaps all educators, including those who teach at home, need to consider multidimensional citizenship as an important component of helping children become citizens.
Finally, it is clear that there are no guarantees for creating good citizens. Homeschoolers have an alternative and very powerful understanding of citizenship, but this does not mean that we should relinquish all citizenship education in schools, or that schools should adopt the vision of citizenship shared by many homeschoolers. This is no more a cure for poor citizenship than is forcing everyone to take civics classes. Rather we need to recognize and evaluate the validity of alternative definitions of citizenship, and to recognize that it does not have to be taught at school.
For homeschoolers, the policy implications are a little less clear, because they are much less likely to have a "policy" on citizenship education than are schools. However, homeschoolers should recognize that there are good elements to citizenship education in schools as well. For example, basic facts of national history and governance are often very important for informed participation in a democracy. Most of the people that homeschooled kids will encounter later in life will have this understanding, and those people will presume that homeschoolers have it as well. Homeschoolers need to be prepared to deal with these expectations, either by acquiring the relevant knowledge or convincing others of the validity of their experiences.
In addition, homeschooling parents and children must recognize that they are not just keeping their kids at home, and that they are not just making a statement about parental rights in education. Rather, they are also helping to define and shape what it means to be a citizen of their country. They must be prepared to think in these broader terms, and to recognize that what they are doing has some good elements and some bad elements, just as citizenship education in schools has strengths and weaknesses. In other words, homeschooling is not just about where kids will learn their ABCs, it affects the very definition of what it means to be a member of a society.
Note
The author gratefully acknowledges that financial support for this research was received from a standard research grant from the Social
Sciences and Humanities Research Council of Canada (SSHRC) and an internal grant partly funded by WLU Operating funds, and partly by the SSHRC Institutional Grant awarded to WLU.
References
Callan, E. (1995) Common Schools for Common Education. *Canadian Journal of Education*, 20(3):251-71.
Callan, E. (1997) *Creating Citizens: Political Education and Liberal Democracy*. Oxford: Clarendon Press.
Cogan, J.J. (1998) The Challenge of Multidimensional Citizenship for the 21st Century. Pp. 155-68 in J.J. Cogan and R. Derricott (eds.) *Citizenship for the 21st Century: An International Perspective on Education*. London: Kogan Page.
Delanty, G. (1997) Models of Citizenship: Defining European Identity and Citizenship. *Citizenship Studies*, 1(3): 285-304.
Derricott, R., A. Gotovos, Z. Matrai, S. Karsten, R. Case, K. Osborne, K. Skau, K. Otsu, S. Pitiyanuwat, C. Rukspollmuang and W. Parker (1998). National Case Studies of Citizenship Education. Pp. 21-76 in J.J. Cogan and R. Derricott (eds.) *Citizenship for the 21st Century: An International Perspective on Education*. London: Kogan Page.
Fogelman, K. (1991). Citizenship in Secondary Schools: The National Picture. Pp. 35-48 in K. Fogelman (ed). *Citizenship in Schools*. London: David Fulton Publishers.
Gatto, J.T. (1991) *Dumbing Us Down: The Hidden Curriculum of Compulsory Schooling*. Philadelphia: New Society Publishers.
Holmes, M. (1998). *The Reformation of Canada's Schools: Breaking Barriers to Parental Choice*. Montreal and Kingston: McGill-Queen's University Press.
Holt, J. (1983) *Learning All The Time*. Philadelphia: New Society Publishers.
Holt, J. (1981) *Teach Your Own: A Hopeful Path for Education*. New York: Delta/Seymour Lawrence.
Illich, I. (1971) *Deschooling Society*. New York: Harper & Row.
Knowles, J.G. (1998) Home Education: Personal Histories. Chapter 14, pp 302-31 in M.L. Fuller and G. Olsen (eds). *Home-School Relations: Working Successfully with Parents and Families*. Toronto: Allyn and Bacon.
Knowles, J.G., S. Marlow and J.A. Muchmore (1992) From Pedagogy to Ideology: Origins and Phases of Home Education in the United States, 1970-1990. *American Journal of Education*, 100(1): 195-235.
Knowles, J.G. (1991) Parents' Rationales for Operating Home Schools. *Journal of Contemporary Ethnography*, 20(2): 203-230.
Kubow, P., D. Grossman and A. Ninomiya (1998). Multidimensional Citizenship: Educational Policy for the 21st Century. Pp. 115-34 in J.J. Cogan and R. Derricott (eds.) *Citizenship for the 21st Century: An International Perspective on Education*. London: Kogan Page.
Luffman, J. (1997) A Profile of Home Schooling in Canada. *Education Quarterly Review*, 4(4):30-47.
Marlow, S.E. (1994) Educating Children at Home: Implications for Assessment and Accountability. *Education and Urban Society*, 26(4): 438-60.
Marshall, J.D. and J.P. Valle (1996) Public School Reform: Potential Lessons from the Truly Departed. *Education Policy Analysis Archives*, 4(12). [online]. Available at http://epaa.asu.edu/epaa/v4n12/.
Mayberry, M., J.G. Knowles, B. Ray and S. Marlow (1995) *Home Schooling: Parents as Educators*. Thousand Oaks: Corwin Press.
Mayberry, M. (1993) Effective Learning Environments in Action: The Case of Home Schools. *School Community Journal*, 3(1): 61-68.
Mayberry, M. (1988) Characteristics and Attitudes of Families Who Home School. *Education and Urban Society*, 21(1): 32-41.
Mayberry, M. and J.G. Knowles (1989) Family Unity Objectives of Parents Who Teach Their Children: Ideological and Pedagogical Orientations to Home Schooling. *Urban Review*, 21(4): 209-225.
McKenzie, H. (1993) *Citizenship Education in Canada*. Ottawa: Canada Communication Group, Cat. No. Ym32-2/32E.
Menendez, A.J. (1996) *Homeschooling: The Facts*. Silver Spring MD: Americans for Religious Liberty.
Pfleger, K. (1998, April 6) School's Out. *The New Republic*. 11-12.
Ray, B. D. (1999). *Home Schooling on the Threshold: A Survey of Research at the Dawn of the New Millennium*. Salem OR: National Home Education Research Institute Publications.
Ray, B. D. (1997). *Strengths of Their Own--Home Schoolers Across America: Academic Achievement, Family Characteristics, and Longitudinal Traits*. Salem OR: National Home Education Research Institute Publications.
Ray, B. D. (1994). *A Nationwide Study of Home Education in Canada: Family Characteristics, Student Achievement, and Other Topics*. Lethbridge: Home School Legal Defense Association of Canada.
Rudner, L. M. (1999). Scholastic achievement and demographic characteristics of home school students in 1998. *Education Policy Analysis Archives*, 7(8). [online]. Available at http://epaa.asu.edu/epaa/v7n8/.
Sheffer, S. (1997) *A Sense of Self: Listening to Homeschooled Adolescent Girls*. New York: Heinemann.
Taylor, J. W. (1986) *Self-Conception in Homeschooling Children*. Doctoral Dissertation, Andrews University.
Thomas, A. (1998). *Educating Children at Home*. London: Cassell.
Touraine, A. (1997) *What is Democracy?* Boulder: Westview Press.
Webb, J. (1989) The Outcomes of Home-Based Education: Employment and Other Issues. *Educational Review*, 41(2):121-33.
Welner, K.M. and K.G. Welner (1999) Contextualizing Homeschooling Data: A Response to Rudner. *Education Policy Analysis Archives*, 7(13). [online]. Available at http://epaa.asu.edu/epaa/v7n13/.
**About the Author**
**A. Bruce Arai**
Assistant Professor
Department of Sociology and Anthropology Wilfrid Laurier University
Waterloo, Ontario Canada
(519) 884-0710 ext. 3753
Email: email@example.com
Bruce Arai teaches courses in research methods, statistics, and the sociology of work at Wilfrid Laurier University. His research interests include homeschooling, educational assessment, and economic sociology, particularly self-employment.
The World Wide Web address for the *Education Policy Analysis Archives* is [http://epaa.asu.edu](http://epaa.asu.edu)
General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, firstname.lastname@example.org or reach him at College of Education, Arizona State University, Tempe, AZ 85287-0211. (602-965-9644). The Book Review Editor is Walter E. Shepherd: email@example.com. The Commentary Editor is Casey D. Cobb: firstname.lastname@example.org.
**EPAA Editorial Board**
| Name | Institution/Position |
|-----------------------------|-----------------------------------------------------------|
| Michael W. Apple | University of Wisconsin |
| John Covaleskie | Northern Michigan University |
| Alan Davis | University of Colorado, Denver |
| Mark E. Fetler | California Commission on Teacher Credentialing |
| Thomas F. Green | Syracuse University |
| Arlen Gullickson | Western Michigan University |
| Aimee Howley | Ohio University |
| William Hunter | University of Calgary |
| Daniel Kallós | Umeå University |
| Thomas Mauhs-Pugh | Green Mountain College |
| William McInerney | Purdue University |
| Les McLean | University of Toronto |
| Anne L. Pemberton | email@example.com |
| Richard C. Richardson | New York University |
| Dennis Sayers | Ann Leavenworth Center for Accelerated Learning |
| Michael Scriven | firstname.lastname@example.org |
| Robert Stonehill | U.S. Department of Education |
| David D. Williams | Brigham Young University |
| Greg Camilli | Rutgers University |
| Andrew Coulson | email@example.com |
| Sherman Dorn | University of South Florida |
| Richard Garlikov | firstname.lastname@example.org |
| Alison I. Griffith | York University |
| Ernest R. House | University of Colorado |
| Craig B. Howley | Appalachia Educational Laboratory |
| Richard M. Jaeger | University of North Carolina—Greensboro |
| Benjamin Levin | University of Manitoba |
| Dewayne Matthews | Western Interstate Commission for Higher Education |
| Mary McKown-Moak | MGT of America (Austin, TX) |
| Susan Bobbitt Nolen | University of Washington |
| Hugh G. Petrie | SUNY Buffalo |
| Anthony G. Rud Jr. | Purdue University |
| Jay D. Scribner | University of Texas at Austin |
| Robert E. Stake | University of Illinois—UC |
| Robert T. Stout | Arizona State University |
EPAA Spanish Language Editorial Board
Editor Asociado
Roberto Rodríguez Gómez
Universidad Nacional Autónoma de México
email@example.com
Adrián Acosta (México)
Universidad de Guadalajara
firstname.lastname@example.org
Teresa Bracho (México)
Centro de Investigación y Docencia Económica-CIDE
bracho dis1.cide.mx
Ursula Casanova (U.S.A.)
Arizona State University
email@example.com
Erwin Epstein (U.S.A.)
Loyola University of Chicago
firstname.lastname@example.org
Rollin Kent (México)
Departamento de Investigación Educativa- DIE/CINVESTAV
email@example.com
firstname.lastname@example.org
Javier Mendoza Rojas (México)
Universidad Nacional Autónoma de México
email@example.com
Humberto Muñoz García (México)
Universidad Nacional Autónoma de México
firstname.lastname@example.org
Daniel Schugurensky (Argentina-Canadá)
OISE/UT. Canada
email@example.com
Jurjo Torres Santomé (Spain)
Universidad de A Coruña
firstname.lastname@example.org
J. Félix Angulo Rasco (Spain)
Universidad de Cádiz
email@example.com
Alejandro Canales (México)
Universidad Nacional Autónoma de México
firstname.lastname@example.org
José Contreras Domingo
Universitat de Barcelona
email@example.com
Josué González (U.S.A.)
Arizona State University
firstname.lastname@example.org
María Beatriz Lucci (Brazil)
Universidade Federal do Rio Grande do Sul - UFRGS
email@example.com
Marcela Mollis (Argentina)
Universidad de Buenos Aires
firstname.lastname@example.org
Angel Ignacio Pérez Gómez (Spain)
Universidad de Málaga
email@example.com
Simon Schwartzman (Brazil)
Fundação Instituto Brasileiro de Geografia e Estatística
firstname.lastname@example.org
Carlos Alberto Torres (U.S.A.)
University of California, Los Angeles
email@example.com
Project Hope and the Hope School System in China: A Re-evaluation
Samuel C. Wang
University of Illinois at Urbana-Champaign
Abstract
I investigate the creation, development, contributions and limits of Project Hope, a huge government-endorsed education project seeking non-governmental contributions to overcome educational inadequacy in poverty-stricken rural communities in transitional China. By reexamining the composition of sponsored students, the locations of Hope Primary Schools and non-educational orientations for building and expanding schools, I argue that Project Hope and its Hope School system have not contributed to educational access, equality, equity, efficiency and quality as they should have. Poverty-reduction-oriented curriculum requirements in Hope Primary Schools are theoretically misleading and realistically problematic.
Introduction
According to what is published on the official homepage of the China Youth Development Foundation (CYDF), the founder of Project Hope, "[Project Hope's] mission is to raise much-needed funds for the improvement of educational conditions in China's poor areas and promote youth development in China. Its goal is to safeguard the educational rights of children in poor areas. In line with government policy of raising educational funds from a variety of sources, Project Hope mobilizes Chinese and foreign materials and financial resources to help bring dropouts back to school, to improve educational facilities and to promote primary education in China's poverty-stricken areas" (CYDF 1996a).
Seeking non-governmental financial and physical support, both in China and overseas, for the improvement of primary education in economically underdeveloped regions of China, Project Hope tries to help enroll in school those school age children who cannot go to school, or who drop out of school, because of poverty. It tries to improve educational conditions, particularly classroom and school facilities, in underdeveloped rural areas. Furthermore, Project Hope tries to contribute to poverty reduction in local areas by adding agricultural and technical knowledge and skills to the curricula and instruction of its Hope Primary Schools and by encouraging the schools' participation in business operations (Yue, 1991; Huang, C. 1994; Tou, Cheng and Huang 1995).
Project Hope has sponsored the schooling of tens of thousands of children in poor areas. To date, it has sponsored the construction and renovation of over 5,000 primary schools in poor areas (Guangming Daily 1997; CYDF 1998). As a non-governmental charitable project, it has special political and educational legitimacy, power, and influence in China. The phrase "Project Hope" has become a household word among the Chinese people.
This article traces the origins and development of Project Hope and the Hope Primary School system, which are marked by high politicization and bureaucratization, and investigates their contribution to the development of rural basic education. By examining the demographics of sponsored students, the locations of Hope Primary Schools and non-educational orientations for building and expanding schools, I argue that Project Hope and the Hope School System have not contributed as they should have to educational access, equality, equity, efficiency and quality in poverty-stricken areas. I suggest that poverty-alleviation-oriented curriculum and instruction requirements in Hope Primary Schools are theoretically misleading and realistically problematic. With the growth of political liberalization in the central government and sustained economic growth in China, the signs of competition for financing rural basic education have already appeared; rural basic education will experience a new stage of expansion. The theoretical basis of my analysis is the philosophy of basic education for literacy and socialization of children and the World Bank's guidelines for education: highest priority for investment in basic education for educational access, equality, equity and efficiency in developing countries. The main methods applied here are historical
analysis based on documentary records and macro-economic analysis based on the criteria for educational access, equity and quality.
**Origins and Development**
On October 30, 1989, only months after the Tiananmen Tragedy, the CYDF, a sub-organization of the Communist Youth League (CYL), declared that Project Hope had been set up to help school age children in poverty-stricken areas to enroll in school. The league, under the leadership of the Communist Party of China (CPC), is regarded as a supporting hand to the CPC in conducting national youth activities according to its political guidelines. The league set up the CYDF in March 1989, allegedly for the purpose of promoting activities related to youth development. Winning endorsement and support from state leaders and the central government and becoming nationally visible since its inception, Project Hope added the topic of poorly supported rural basic education to the list of issues that government officials hoped would distract the public from focusing on the aftermath of the Tiananmen Tragedy. However, it is very difficult to trace the exact political origins of Project Hope from available documents and to verify the hypothesis that it was a politically motivated project undertaken at this critical time.
Soon after its founding in 1989, Project Hope sponsored 11 children who could not go to school because of poverty in north China's Hebei Province. On May 15, 1990, the project sponsored the renovation of an old primary school in Jinzhai County, in east China's Anhui Province, and renamed the school "Hope Primary School." Since then, the organized solicitation of donations and gifts for the project started to acquire momentum. A great number of old schools were renovated and new schools were built with the project's sponsorship. All these schools were uniformly named the Hope Primary School. Project Hope has received aggregate donated funds and gifts of 1.257 billion RMB (about 151.5 million US dollars) and has sponsored 1.8 million school age children from poor rural families to enroll in schools. The project has sponsored the construction and renovation of 5,256 primary schools (Guangming Daily 1997). Thus, Project Hope has set up a special school system, the Hope Primary School system.
Interestingly, the Hope School system, just like the reemerged private school system in China, to a certain extent is not within the dominant state public school system in terms of school financing. Private education has reappeared since the middle 1980s (Deng 1997; Kwong, 1997; Mok, 1997), with school numbers and enrollments reaching 0.4 percent of the total number of schools and student population in China in 1997 (Wang 1997). Private schools are under the supervision of the Superintendent Office; therefore, they are under the control of state educational authorities in terms of macro administration and political monitoring (Deng 1997). Organizationally and administratively, only the Hope Primary School system is to any great extent outside the hierarchy of the State Education Commission, renamed the Ministry of Education after 1998; and more often than not, it operates independently of the educational authorities.
The Chinese educational system was centralized and politicized to a great extent in terms of administration and financing until the middle 1980s, when a series of educational changes and reforms took place in line with the state economic reform and modernization strategies. The success of vanguard agricultural and economic reforms in the late 1970s and early 1980s provided physical resources for new educational expansion (Riskin, 1993). In 1985, the CPC Central Committee enacted the "Decision of the CPC Central Committee on the Reform of China's Education Structure." In the following year, the National People's Congress turned it into the Education Act, the first education law since 1949, when the new China was founded. The most important features of this fundamental reform are the legalization of 9-year compulsory universal education; the decentralization of educational administration; the diversification of the educational financing system; and the vocationalization of secondary education (Lewin, Little, Xu and Zheng, 1994; Zhu and Lan, 1996). For primary education, according to Tsang (1996), the most important change was that the central government would shed almost all of its financing responsibilities in order to encourage lower level governments and communities to tap their great potential to finance education. The local governments, plus the provincial governments that provided a minor share of funds, became almost totally responsible for the financing of primary and secondary education. According to the data of the State Education Commission, provincial and local governments accounted for 99.98 percent of budgeted expenditure for primary education in 1991 and 99.97 percent in 1992. The national-level investment in basic education remained inadequate after the reform, much below the average level of developing countries (Tsang, 1994, 1996).
The local resources for education in "poverty-stricken areas" (state- or province-categorized "poor counties") were notably inadequate. In 1995, there were 592 counties categorized as "poor," with 85 million people whose average annual income was less than 268 RMB (about 32 US dollars per capita). Among these, there were 195 "extremely poor" counties with 58 million people whose average annual income was less than 200 RMB (about 24 US dollars per capita) (Huang, 1995). The average annual income per capita in these poor counties was far below the UN poverty indicators, which regard PPP (purchasing power parity) income below $60 per month per capita as poverty and below $30 as extreme poverty (Psacharopoulos and Patrinos, 1994). In these poor counties, people lacked subsistence levels of food and clothing. Naturally, it was very expensive for poverty-stricken households and local governments to provide basic education to school-age children. The teaching force in these counties also continued to be inadequate in both quantity and quality. Even though tuition was free in all public schools, school-age children could not go to school, or dropped out, because their parents were unable to pay general fees and because the children were needed as farm or household helpers. In addition, school buildings and facilities were in very poor condition. In some extremely poor mountainous counties, teachers taught students in unsafe and
undesirable places such as dilapidated temples or caves, according to many reports (Cheng, 1992; Huang, 1994; Zhang and Ma, 1996). Striving to obtain enough food and clothing for local people to subsist, the political authorities and school communities in these poor counties were not able to invest adequately in education. They were very eager to accept any financial contributions from outside to help expand education.
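The gap the paragraph describes can be made concrete with a little arithmetic, using the exchange rate implied by the text's own figures (268 RMB ≈ 32 US dollars, i.e. roughly 8.4 RMB per dollar). A small sketch, noting this is a market-rate conversion rather than a PPP adjustment:

```python
# Market-rate conversion implied by the figures quoted above
# (268 RMB/year ~= 32 USD/year, i.e. ~8.4 RMB per US dollar).
RMB_PER_USD = 268 / 32

def annual_rmb_to_monthly_usd(annual_rmb: float) -> float:
    """Convert an annual per-capita income in RMB to monthly US dollars."""
    return annual_rmb / RMB_PER_USD / 12

poor = annual_rmb_to_monthly_usd(268)            # "poor" county income ceiling
extremely_poor = annual_rmb_to_monthly_usd(200)  # "extremely poor" ceiling

print(f"'poor' counties:           ~${poor:.2f}/month (UN poverty line: $60/month)")
print(f"'extremely poor' counties: ~${extremely_poor:.2f}/month (extreme-poverty line: $30/month)")
```

At under $3 per month, even a substantial PPP adjustment would leave these incomes far below the UN thresholds cited above.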
The trends of educational decentralization and financing diversification under reform, and long-standing poverty in underdeveloped areas in particular, provided Project Hope with the sociopolitical atmosphere to come into being and grow quickly. The initiators of the project took on the gritty issue of long-awaited rural education expansion by making the best use of the opportunities created by change and reform to promote basic education investment in poor areas. Where local governments were not able to take care of basic education in all poor counties, Project Hope's participation in and contribution to primary education expansion grew significantly.
**Empowerment by the Central Authorities**
Almost all the leading officials at state and provincial levels gave Project Hope unusually enthusiastic endorsement and support. The reasons behind this were multiple. They could be due to the real sympathy the leading politicians felt for poor children and their sincere willingness to support basic education expansion in rural areas. They could be due to the close personal connections between the CYDF organizers and leading politicians, to the political need for the authorities to avert public attention from the tight governmental budget for basic education after reform, or possibly to a desire to divert attention from the recently suppressed student movements. Leading officials in China have a long tradition of writing calligraphy to express their reflection, admiration and other personal attitudes. Although he had vowed to stop writing anything for others as a show of personal endorsement in the 1990s, the late leader Deng Xiaoping wrote the title of Project Hope for CYDF to show his endorsement of the program on September 5, 1990. He then donated 5,000 RMB (about 600 US dollars) on two occasions in the name of "an old CPC member," and encouraged his family members to make donations. The late President Li Xiannian wrote the title for the first Hope Primary School, built in Anhui Province. The Party Secretary General and President Jiang Zemin and then Premier Li Peng followed suit. Almost all the important politicians and celebrities emulated Deng by showing support and making personal donations to the project in one way or another (Huang, 1994; Legal Daily, 1997). Consequently, all governmental departments at different levels related to rural education gave the green light to the implementation and development of the varied programs of this non-governmental project. More often than not, the programs of Project Hope were even given the highest priority on the government agenda in some poor counties and prefectures.
**Domestic and Overseas Solicitation**
With endorsement and support from state and provincial leaders, the mass media joined the publicity campaign for the project. The media broadcast a great number of touching stories about how poor kids longed for schooling and about how ordinary citizens as well as high-profile officials and celebrities helped dropouts go to school. Books, television features and films related to the project were produced, publicized and won popular acceptance. In April 1992, a program named "A Million of Love Hearts" for children in poor areas was launched to seek donations from urban areas across the country. In January 1994, another program named "Project Hope: One Home for One Dropout" covered the whole country. The purpose of this program was to encourage well-off families, in both rural and urban areas across the country, each to sponsor the schooling of one poor child. In May 1997, a program entitled "Project Hope: The Last Large-scale Domestic Donation Solicitation" was launched nationwide. Through these programs and many others, Project Hope became well known in urban as well as rural areas. It was estimated that every government employee made a donation to the project at least once. According to a random survey by the State Science and Technology Evaluation Center, 98 percent of the respondents in Beijing knew of the project, and 80.8 percent knew of it through television, newspapers and other media. Eighty-two percent of the respondents had made donations to people in poor areas or areas hit by natural disasters, and 73.1 percent had made donations to Project Hope (Beijing Youth Daily, 1997). In recent years, corporations, and foreign enterprises such as the multinationals Motorola, Coca-Cola and Philips in particular, were attracted by the publicity for Project Hope. Corporate donations and gifts accounted for the major portion of the funds in recent years.
Well-organized publicity work also targeted potential donors overseas, especially entrepreneurs in Hong Kong, Taiwan and Macao. A great number of individuals from Japan, the US and other countries made donations and sponsored poor students. In 1996, Project Hope created its homepage on the Internet and made its programs more accessible to international communities. The CYDF held an international conference entitled "Project Hope and Fund Raising in China." In addition, three students who were sponsored by the project were selected to participate in the passing of the Olympic Torch in the US that year.
During 1980-1988, the total number of school-age children not in school in China was estimated to have reached 37 million due to various socioeconomic reasons. In the early 1990s, the number decreased thanks to the government's growing will and more serious work in implementing the 9-year compulsory education policy, but it was still estimated that over one million students dropped out each year. Most of the dropouts were in poverty-stricken areas (Yue, 1991).
Project Hope's outstanding efforts and activities against this bleak rural education background have been warmly accepted by people from all walks of life in the country. It has sponsored the return to
school of nearly 2 million dropouts and constructed over 5,000 primary schools. Currently, its organizers seek to raise the aggregate number of sponsored students to 3 million, and the number of constructed and renovated schools to 6,000, by the year 2000. In addition, and more ambitiously, Project Hope has attempted to train primary school teachers and has tried to use the Hope Primary Schools to directly reduce poverty in families and communities. It is widely believed that Project Hope has made a great contribution to the development of rural basic education in poor areas. In some poverty-stricken areas it is even regarded by some policy-makers, as well as by the public, as the only hope for developing basic education and reducing poverty.
**Politicization and Bureaucratization**
Not only did the state and provincial leaders support the project directly and make personal donations, but they also made use of the project for various political purposes. The donations and gifts from high officials were always in the headlines of the national and local media, which portrayed them as caring about poor children and their schooling. In addition, the central political authorities used the project as an important means of implementing top-down "ideological education." Through supporting this campaign, it was expected that CPC members, government officials and staff, as well as ordinary citizens, would be taught to continue the popular traditions of the party and avoid corruption and other social ills exacerbated by the introduction of a free-market economy. According to the People's Daily (1995) and Guangming Daily (1997)--mouthpieces of the central party authorities and the central government--Project Hope has become one of the most effective and most influential "ideological education" programs in recent years.
On March 10, 1994, former Premier Li Peng made his "Government Work Report" to the National People's Congress. He specifically emphasized Project Hope by urging people to "mobilize the social forces to continue the implementation of Project Hope." For three consecutive years since 1994, the "White Paper of China Human Rights Development" detailed the yearly statistics of the project's achievements in school enrollment and school building as important indicators of human rights improvement in China (Xinhua News Agency, 1997).
Behind the fanfare of this highly politicized educational scene were the bare facts of rural basic education. First, though there have been significant increases in the financial and physical resources invested in education since the 1980s, national investment in education remains relatively low compared to other countries. In 1992, the per-student budgeted expenditure for primary education in China, as a ratio of per capita GNP, was only 6.8 percent, substantially lower than the average of 10-11 percent for countries in Asia (Tsang, 1996). By contrast, public spending per student in the higher education sector as a percentage of GNP per capita was 193 percent in 1990 and 175 percent in 1994, much higher than the 1990 average of 98 percent in
East Asian countries (World Bank, 1997). Second, "minban" teachers (literally "people-managed" teachers, or community-supported teachers), most of whom teach in poor school communities in rural areas, have been decreasing by tens of thousands each year due to governmental policies aimed at eliminating them and due to the differential pay they receive. The shortage of rural teachers has become more serious. In 1998, the number of minban teachers decreased to about 2.3 million, accounting for about 40 percent of the total teacher population in rural areas. They do not enjoy "equal pay for equal work," the golden rule that China has always pledged to obey, both in Mao's egalitarian era before reform and in the free-market era since the late 1970s. Most of these teachers actually live under the poverty line; some even join the ranks of the extremely poor, because poor and extremely poor communities are not able to pay their salaries and benefits for months or even years (Paine, 1991; Cheng, 1992; Zhang, 1994; Wang, 1997).
Third, the education surcharges and taxes legalized by central and local governments, which are aimed at the development of basic education and at supplementing minban teachers' salaries in particular, are difficult to collect, or are misused by being invested in township enterprises when they are collected. Both public and minban teachers, in particular minban teachers working in poor communities, more often than not are paid in IOUs for their work, and school facilities remain in poor condition or worsen (Cheng, 1992; Wang, 1996; Zhang and Ma, 1996). All of these undesirable situations related to rural schools and teachers could have been much improved if the state and provincial authorities had shown the same enthusiasm for basic education expansion in poor counties that they showed in endorsing Project Hope.
Bureaucratization of the project and the Hope Primary School system is closely linked to its politicization. In addition to its own leaders and operational departments, CYDF has invited a number of politicians, officials and celebrities to be honorary leaders. CYDF's honorary president is the former top legislator Wan Li, the former Chairman of the Standing Committee of the National People's Congress. Its president, Li Keqiang, is a member of the Standing Committee of the National People's Congress. A number of retired political elites, such as generals and former leading members of the secretariat of the State Council, and even some current officials of the Ministry of Education and representatives of the upper-house-like People's Political Consultative Conference, were invited to hold the titles of supervisors for the implementation of Project Hope programs. It was arranged that they would occasionally visit prefectures and counties to supervise and examine the implementation of Project Hope programs and report to the public and external decision-making bodies on behalf of the CYDF and Project Hope.
Just like a centralized governmental organization, CYDF and Project Hope have developed their top-down national networks from Beijing-based headquarters down to county-level branches in almost all provinces and autonomous regions. These parallel the hierarchy of the public educational system under the Ministry of Education.
Seemingly, they are non-governmental, non-profit social welfare promotion organizations. As a matter of fact, they have been shaped into another pseudo-governmental organization with its personnel actually on the government payroll. In most extremely poor counties, the local economy is simply subsistence-level agriculture; the local budget sometimes cannot cover the payrolls of the over-staffed governmental departments. Furthermore, it is difficult and time-consuming for external financial aid for education to reach these remote poor counties. The CYDF and Project Hope branches in these counties most often have nearly nothing to contribute and merely add to the burden of fragile local financial, administrative and educational systems. In some counties, when a limited amount of sponsorship for a program is available, various kinds of corruption (such as misuse of funds, cheating, and falsification of records) beleaguer the program. Having become fully aware of these organizational and management problems, the headquarters attempted to adjust and downsize its organization and improve management and efficiency by eliminating ineffective and problematic prefecture- or county-level branches. Such a goal, as the leaders of CYDF admit, is difficult to achieve (CYDF, 1996).
**For Equality, Equity, Efficiency and Quality?**
A great deal of attention has been focused on the issues of equality, equity, efficiency and quality in education investment by educational researchers, policy-makers in governments, and international institutions represented by the World Bank (Psacharopoulos and Patrinos, 1994; World Bank, 1995). Originally, the goal of Project Hope was to increase educational equality, equity and quality by sponsoring the education of school-age children of poor families and by improving school facilities. As mentioned above, the school-age children who cannot go to school, or who drop out, because of poverty number one million across the country per year, and such children account for from 20 percent to over 50 percent of school-age children in different poor counties (Tsang, 1996). Therefore, the students that Project Hope can help account for only a very small percentage of the total population of these children. The question arises: among them, who should receive the limited sponsorship? On basic economic principles, the obvious answer should be those from the poorest families, so that sponsorship is used with the greatest marginal utility and effect. In fact, however, the financial aid more often than not goes to students related to power groups such as administrative authorities, government employees, the school principal and teachers, as well as to students from relatively well-off families.
Where should Hope Primary Schools be located; who should be enrolled?
There are two funding standards that CYDF set for Hope Primary School construction. If the donated funds reach 200,000 RMB (about 24,100 US dollars), a new school is to be constructed with the funds. If 100,000 RMB (about 12,200 US dollars) are available, an old dilapidated school is to be renovated. According to the requirements of CYDF, the location for a new school should be the township center. In reality, a great number of newly built Hope Primary Schools are built in county centers, cities and towns. For renovations, the location is required to be at least a village center school, not, for instance, a small simple school shared by a cluster of households. Most renovated and reconstructed Hope Primary Schools, however, are former township center schools. Obviously, the county centers, the township centers, and even the village centers are relatively economically developed and have higher average household incomes than the surrounding areas of the county. This is especially the case in extremely poor and remote mountainous counties. Generally, in these central areas there are comparatively fewer school-age children who cannot go to school or who drop out because of poverty. Thus, the building of new schools and school renovation contribute less to educational access, equity and equality for poorer children in the county than they should.
My visit in 1996 to a beautiful Hope Primary School located in a town center in Anhui Province revealed substantial inequality and inequity in the school's student enrollment. The school I visited was the only Hope Primary School under the jurisdiction of a township-level government, the lowest level of government, in a county categorized as being of "extreme poverty." From randomly sampling and interviewing thirty K-1 and K-2 students and their parents, and from interviewing the principal and teachers, I found, first, that all students' families had adequate food and clothes. This meant that the families were not "poor" according to local governmental poverty criteria--as discussed above--despite the fact that in this mountainous county the average annual income per capita was less than 200 RMB (about 24 US dollars). All interviewed families reported that they could regularly give their children pocket money during the academic semester. They all responded that they would have been able to pay the general fees if their children had instead been sent to an ordinary public school in the town, which was tuition-free but charged general fees. Second, 51 percent of the total enrollment lived in the prosperous township center with its booming business. Forty-one percent lived in the three neighboring villages, within about a half-hour to one-hour walk of the school; a few students in senior grades from well-off families rode bikes to school. These students all had family members working in the town, most in township factories. Only 8 percent of students came from the outer six villages, which were home to 59 percent of the population under the township's jurisdiction. Almost all of this 8 percent had a family member working in the township, either as a factory worker or as a self-employed businessman, so that the students had places for board and lodging.
The principal and 5 teachers interviewed stated that it was impossible for the school to enroll children whose family members were peasants in the outer poorer villages in deep mountains, which were anywhere
from a one-hour to a five-hour walk from the township center. They admitted that it was financially impossible for the school to support students' lunches, not to mention board, lodging or transportation to some villages.
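The enrollment figures above can be set against the population shares to quantify how under-represented the outer villages were; a rough back-of-the-envelope check using only the percentages reported from the interviews:

```python
# Enrollment shares vs. population shares from the 1996 township visit above.
enrollment_share = {
    "township center":    0.51,
    "nearby villages":    0.41,
    "outer six villages": 0.08,
}
population_share_outer = 0.59  # the outer six villages held 59% of the population

# Representation ratio: 1.0 would mean enrollment proportional to population.
ratio = enrollment_share["outer six villages"] / population_share_outer
print(f"Outer-village representation ratio: {ratio:.2f}")
# ~0.14: children from the outer villages were enrolled at roughly
# one-seventh of the rate that proportional enrollment would imply.
```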
The typical rhetoric concerning the Hope Primary Schools' location in highway-accessible town or county centers is that more of the general public will see the exemplary buildings and teaching activities of the school. More importantly, the higher-level authorities who sometimes inspect grassroots units can easily witness the physical outcome of Project Hope. Thus, it is hoped, they will provide greater attention and support to basic education in the county. If schools were built in remote mountainous villages that cars and buses cannot reach, the authorities would not see the evidence of educational development in the area. One concludes that in locating a school site and in building or renovating a school, local policy-makers orient themselves to the response of higher-level authorities and of relatively well-off students and their families, rather than to the educational needs and expectations of economically disadvantaged children and families living in geographically disadvantaged places (Tou et al., 1995).
After a school site is located, on what scale should a Hope Primary School be built or renovated?
Though the distribution of donated funds for school building or renovation is fixed at standard amounts, local governments can supplement them with extra funds when available, if they think it necessary. Given the huge regional differences and economic variation across the country, it is impossible for CYDF to apply uniform standards strictly. Local politicians use the opportunity of school building and expansion to exercise their power and to seek local funds to invest in education according to local economic conditions. Nevertheless, some, if not most, local authorities unrealistically aim at building the best, the biggest and the most beautiful school building in the region. In many cases, because of careless planning and mismanagement, the building of the main school infrastructure expends all the funds before the school can be completed. Then no more money is available for purchasing the accessory parts and items needed to complete the school. For instance, the government of rural Xinguo County in east China's Jiangxi Province made the ambitious decision to build the best primary school building in Ganzhou Prefecture. The county government even sent professionals to the city where the prefecture headquarters was located to investigate what the most modern and beautiful primary school building should look like before starting construction. The county used 200,000 RMB (about $24,100 US) in externally donated funds and gifts, plus over 100,000 RMB in locally collected funds, to build a Hope Primary School. After the main building was finished, the builders found no money was left to complete washrooms and laboratories or to order blackboards, desks and chairs, not to mention equipment and books for the school laboratories and library. It was impossible to collect more
non-governmental funds in this poor county. The county and lower level governmental branches were unable to provide extra financial assistance. This Hope Primary School was left as nothing but an unfinished modern building (Tou et al., 1995). It then took many years for the county to complete the school and make it useable. Unfortunately, a great number of Hope Primary School buildings were built or half built in the same way.
Because of complicated dual leadership in administration and management, most Hope Primary Schools cannot fully realize their potential for internal and external efficiency. As required by the CYDF, a Hope Primary School should upon completion be immediately put under the leadership and administration of the local Educational Bureau or Office at the county or town level, the grassroots level in the hierarchy of the Ministry of Education. However, the school is built with funds obtained by, and under the instruction of, the local CYDF and Project Hope Office, which are affiliated with the local branch of the CYL (the Communist Youth League), itself functioning as a governmental branch. Thus, the Hope Primary School is subject to two administrators. Since the League branch solicits and accepts the donations and gifts, and is responsible for building the school, it always has the greater decision-making power in administration and management. The local educational authorities are often set aside in decision making about Hope Schools, but they will not easily withdraw from the competition for power and influence, particularly over a high-profile Hope Primary School.
In addition, who should be the principal, who should be teachers in the new school, what kind of poor children should be enrolled, and what special management policies should be practiced in the school? All these equality, equity and efficiency-related questions receive conflicting answers from the two different administrative authorities with different motivations and orientations. Hence, constant conflict and tensions ensue between the two power systems.
When the two administrators can not reach a compromise, as is usually the case, the Hope Primary School becomes the victim of their conflicts, competition and antagonism. Teaching quality and student achievement are thus negatively affected.
**Curriculum and Instruction: For Poverty Reduction?**
China has long worked under a nationally uniform core curriculum in primary education. Alterations of core curricula are under the absolute control of the Ministry of Education. Naturally, Hope Primary Schools are expected to follow the standard practice of all public primary schools regarding what core curricula should be taught. The curricula and instruction do not include vocational and technical education at the primary school level, but only at the level of junior and senior secondary schools and beyond. This is in line with World Bank educational policy recommendations: "Basic education encompasses general skills such as language, science and mathematics, and communication skills that provide the foundation for further
education and training. It also includes the development of attitudes necessary for the work place. Academic and vocational skills are imparted at higher levels, on-the-job training and work-related continuing education update those skills." (World Bank, 1995).
In early 1994, Vice Premier Li Lanqing, who took charge of national education policy, suggested that the Hope Primary School should differ from other general public primary schools in curriculum and instruction. The Hope Primary Schools should give elementary graduates the agricultural and technical knowledge, and develop in them the skills, needed to help families and communities in the drive to alleviate poverty. Obviously, such educational goals for children ages 7 to 13 in primary schools were unrealistic given the inadequacy of the schools and the communities. What is more, these goals are inconsistent with the commonly held philosophy of universal primary education (Wang, 1995). In the education reform of 1985, the educational authorities proposed vocationalization at the secondary school level and beyond. Almost half of the secondary schools in the country have been gradually turned into vocational and technical secondary schools since then (Lewin et al., 1994). This policy orientation and implementation were regarded as realistic, viable and effective because they were based on China's economic development strategies, on the advice of educational professionals, and on the related experiences of other countries. The vice premier's radical educational policy proposal for Hope Primary Schools did not win warm support from educators, especially professors and researchers under the then State Education Commission. But the CYL and CYDF followed the proposal and demanded that the Hope Primary Schools define their own character by educating students in agricultural, scientific and technological skills as well as cultural knowledge. Since then, the Hope Primary School has been required to take the path of "combining agricultural and technical knowledge and skills with cultural contents" by adding farming and technical education to the core curriculum.
In addition, more radical policy guidelines were adopted to encourage Hope Primary Schools to develop school economies, such as the school-affiliated business operations, and furthermore, to develop schools as technical extension stations in poor rural areas. Later, CYDF explicitly required Hope Primary Schools to become agricultural and technical extension stations or centers in local communities (CYDF 1996b).
According to case studies by Peking University graduates, four direct obstacles lay in the way of this "new path" (Tou et al., 1995).
- First, there were no places such as experimental fields or laboratories to implement agricultural and technical education. When not even desks and chairs were available or adequate for students, it was difficult or nearly impossible for the school to obtain extra facilities such as land plots or laboratory equipment to teach plant cultivation techniques or electrical skills, for example.
- Second, teachers were notably inadequate in both quantity and quality in poor counties. When the supply of teachers for vocational and technical schools at the secondary level was inadequate, it was to be expected that it would also be inadequate for the agricultural and technical curricula of the Hope Primary Schools. This was partly because vocational and technical qualifications were not yet required of Hope Primary School teachers, and most probably would not be considered by planners of teacher education. The Hope Primary Schools occasionally had to invite experienced farmers and technicians, or secondary vocational school teachers, into classrooms, as mere gestures toward implementing vocational and technical curricula and instruction.
- Third, the elementary students were obviously too young and cognitively unprepared for vocational and technical training, and the acquired knowledge and skills would most probably be obsolete years later when they graduated and entered the labor force.
- Fourth, parents opposed the non-cultural content of the curricula and instruction. If children had to spend a significant amount of time on agricultural work in school, peasants would prefer that they be household helping hands or learn farming skills on the farm instead. Even some local administrative and educational leaders believed that it was not realistic for primary school students to be directly involved in programs of poverty alleviation and economic development for families and communities.
According to Chinese educational professionals, some light agricultural work is necessary. Children should be educated to have solid ethics and a good attitude toward productive and vocational work through such experiences. But the Hope Primary School has little alternative to being a general primary school rather than a vocationally or technically oriented school carrying heavy political and economic expectations (Sha, Zhou, Fang and Xu, 1995). As the World Bank (1995) pointed out: "Education alone will not reduce poverty; complementary macroeconomic policies and physical investments are also needed." When actual investments in the poverty stricken counties were rare and problematic, the great hopes and expectations placed on the children of Hope Primary Schools to contribute to the reduction of poverty and to economic development are only daydreams, even if the students are otherwise adequately equipped with vocational and technical knowledge and skills.
**Conclusion**
Encouraged by its political endorsements, its great achievements and popularity, Project Hope has become more and more ambitious in its educational endeavors. The leaders of CYDF and Project Hope expect the Hope Primary School to be not only the hope for children of poor families, but also the hope for parents and local communities to rid themselves of poverty and become well off. Unless
poor children are lucky enough to live in the more advantageous villages or unless they are related to locally powerful people, the children's chances of truly escaping poverty through schooling are small. The expectations placed on the Hope Primary School have become a burden for the young children and teachers; these expectations are unrealistic and problematic.
The leaders of Project Hope now plan to set up at least one Hope Primary School in each of the over 500 poor counties in China. Because donations and gifts are not yet adequate, Hope Primary Schools currently exist in only about 100 poor counties. The leaders also have new plans beyond continuing to sponsor children and building schools in every poor county. In addition to transforming Hope Primary Schools into agricultural and technical stations and training students with poverty-reduction skills as mentioned above, the new goals and plans include the following:
- First, to seek donations and gifts to set up the Hope Library in every Hope Primary School. If one donates 3,000 RMB (approximately US$ 362), 500 books for children will be purchased and a small library in a Hope Primary School will be set up.
- Second, to build up Training Bases for Hope Primary School Teachers. The first National Training Center for Hope Primary School Teachers, at least the physical building, has been completed in Zhejiang Province.
- Third, to establish the Project Hope Award for Outstanding Teachers. Dozens of dedicated and experienced teachers from extremely poor school communities have been selected for the awards. They were invited to Beijing to accept the honors.
- Fourth, to organize the selection of a number of outstanding students from Hope Primary Schools as Hope Stars (CYDF 1997a, 1997b).
At present, more and more people and institutions unrelated to the system of the CYL and CYDF use, consciously or unconsciously, the title of "Project Hope" for financing basic education, or for non-educational or profit making purposes. They thus challenge the political and educational authority of CYDF and Project Hope for rural basic education expansion. To safeguard its best and exclusive interests, CYDF registered its service trademark of Project Hope with the China Trademark Bureau in April 1997. This places in legal jeopardy any individual or any institution using the title "Project Hope" in China without approval from CYDF and Project Hope. It is estimated that over 400 Hope Primary Schools in China have recently been built with the funds from sources outside CYDF and Project Hope. Apparently, all these schools will have to either change names or join the Hope Primary School system under CYDF in the near future.
In November 1997, the State Education Commission and the Ministry of Finance set up the "State Compulsory Education Scholarship For Children in Poor Areas" by earmarking 130 million
RMB (about 15.7 million US dollars) to support poor children in the state-categorized "poor" and "extremely poor" counties. It is expected that every year over 600,000 students will receive the scholarship (CYDF 1997). Though started much later and on a smaller scale compared with Project Hope's programs, this was the biggest effort ever made by the State Education Commission after educational reform in the middle 1980s to directly sponsor the schooling of children of poor families in poor areas.
When CYDF and Project Hope play bigger roles and the Hope Primary School system attempts to exercise greater influence in basic education and community development in poor areas, the Ministry of Education will become more active in rural basic education expansion. The ministry along with other powerful ministries will probably make greater financial contributions and work out more carefully-designed policy guidelines for all primary schools including Hope Primary Schools in rural areas. This will improve educational access and quality for school-age children of poor families. Along with the state's poverty-alleviation and economic development programs, it is expected that educational equity and efficiency as well as the governmental goal of universalization of 9-year basic education will be gradually realized in the future.
Note
The author wishes to express deep appreciation to friends and former colleagues, Prof. Zhou Quanhua of Peking University and Senior Correspondent Han Lin of the monthly China Today, for assistance in collecting published information in China. The author, however, is solely responsible for the information, ideas, and views expressed here.
References
Beijing Youth Daily (1997), September 11, 1997.
Cheng, K. M. (1992). The true situation of education in mainland China. Taipei, Taiwan: Taiwan Commercial Press.
CYDF (1996a) The English-language homepage of CYDF and Project Hope. Available at: http://www.chinaprojecthope.org/phc.htm.
CYDF (1996b) The Newsletter. May 15, 1996.
CYDF (1997a) Questions and Answers Regarding Project Hope. Beijing: CYDF.
CYDF (1997b) Newsletter. November 20, 1997.
CYDF (1998) Project Hope--The Certificate of Young Volunteer. Beijing: CYDF.
Guangmin Daily (1997) Editorial, October 3, 1997.
Huang, C. (1994) Project Hope in China. Beijing: China Radio and TV Publishing House.
Huang, S. (1995) The Dilemma and Solutions of Rural Basic Education in Poor Areas. In Tou, M., et al. (Eds.), *Hope Primary Schools in China: Investigations and Reflections on Project Hope*, 59-82.
Kwong, J. (1997) The reemergence of private schools in socialist China. Comparative Education Review, 8, 244-259.
Legal Daily (1997). To meet the 21st century with "Hope." June 18, 1997.
Lewin, M. K., et al. (1994) Educational innovation in China: Tracing the impact of the 1985 reforms. Essex, England: Longman Group Limited.
Mok, K. (1997) Private challenges to public dominance: the resurgence of private education in the Pearl River Delta. Comparative Education, 33 (1), 43-60.
People's Daily (1995). Special Column "Love Heart for Project Hope." Also in CYDF Newsletter, May 15, 1996.
Paine, L. (1991) Reforming Teachers. In Irving Epstein (Ed.), Chinese Education: Problems, Policies and Prospects. New York: Garland Publishing Inc.
Psacharopoulos, G. (1985) Education for Development. London: Oxford University Press.
Psacharopoulos, G. and Patrinos, H. A. (1994) Indigenous People and Poverty in Latin America: An Empirical Analysis. Washington, D.C.: World Bank.
Riskin, C. (1993) China's political economy: The quest for development since 1949. London: Oxford University Press. pp. 290-302.
Shan, W., et al. (1995) The Management of Hope Primary Schools. In Tou Meng, et al. (Eds.): Hope Primary Schools in China: Investigations and Reflections on Project Hope. pp. 211-220.
Tou, M., Cheng, J. and Huang, H. (Eds., 1995) Hope Primary Schools in China: Investigations and Reflections on Project Hope. Beijing: China Science and Technology Press.
Tsang, M. C. (1996) Financial Reform of Basic Education in China.
Wang, C. (1997) "Minban" Education: The Elimination of People-managed Teachers in Reforming China. The Midwest Comparative and International Education Society Conference 1997 at the University of Illinois at Urbana-Champaign.
Wang, D. (1997) Sili xuexiao: wenti yu chulu [Private Schools: Problems and Solutions]. *People's Education*, March 1997.
Wang, D. (1995) *On Hope Primary School*. In Tu Meng, et al. (Eds., 1995). *Hope Primary Schools in China: Investigations and Reflections on Project Hope*.
Wang, S. (1995) *The Role Expectation of Hope Primary School*. In Tu Meng, et al. (Eds., 1995). *Hope Primary Schools in China: Investigations and Reflections on Project Hope*. 16-21.
Wang, Z. (1996) The Analysis of Communist China's universalization of 9-year Compulsory Education. *Studies of Communist Problems*, 8. 34-35.
World Bank (1995) Priorities and Strategies for Education. Washington D. C.: World Bank.
World Bank (1997) China Higher Education Reform. Washington D. C.: World Bank.
Xinhua News Agency (1997). March 31, 1997.
Yue, X. (1991) China Project Hope. Beijing: Science Literature Press. 20-22.
Zhang, H. and Ma, S. (1996) Guanyu nongcun pinkun diqu jiaoshi wenti de sikao [Reflections on the Teachers in Rural Poor Regions]. *People's Education*, 1, 10-12.
Zhang, W. (1997) Ruhe zuodao minban jiaoshi yu gongban jiaoshi tonggong tongchou [Minban Teachers vs. Gongban (public) Teachers: How to Achieve Equal Pay for Equal Work?]. *People's Education*, 3, 15-16.
Zhu, Y. and Lan, J. (1996) Educational reform in China since 1978. In Hu, Hong and Stavrou (Eds.), *In Search of a Chinese Road Towards Modernization: Economic and Educational Issues in China's Reform Process*. Wales: The Edwin Mellen Press.
**About the Author**
Samuel C. Wang
Department of Educational Policy
University of Illinois at Urbana-Champaign
360 EPS, 1310 South Sixth Street
Champaign, IL 61801
Email: firstname.lastname@example.org
Samuel C. Wang is a Ph.D. candidate in Comparative Education and Social Sciences and a research assistant at the University of Illinois at Urbana-Champaign. With bachelor's and master's degrees obtained in China, he served in institutions of English education, publishing, and science education before pursuing doctoral study in the U.S. He was an Editorial Assistant with the journal *Educational Theory* for 1997-1998. His main research interests include education in Asia and education and human development.
Copyright 1999 by the *Education Policy Analysis Archives*
The World Wide Web address for the *Education Policy Analysis Archives* is [http://epaa.asu.edu](http://epaa.asu.edu)
General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, email@example.com or reach him at College of Education, Arizona State University, Tempe, AZ 85287-0211. (602-965-9644). The Book Review Editor is Walter E. Shepherd: firstname.lastname@example.org. The Commentary Editor is Casey D. Cobb: email@example.com.
**EPAA Editorial Board**
| Name | Institution/Position |
|-----------------------------|-----------------------------------------------------------|
| Michael W. Apple | University of Wisconsin |
| John Covaleskie | Northern Michigan University |
| Alan Davis | University of Colorado, Denver |
| Mark E. Fetler | California Commission on Teacher Credentialing |
| Thomas F. Green | Syracuse University |
| Arlen Gullickson | Western Michigan University |
| Aimee Howley | Ohio University |
| William Hunter | University of Calgary |
| Daniel Kallós | Umeå University |
| Thomas Mauhs-Pugh | Green Mountain College |
| William McInerney | Purdue University |
| Les McLean | University of Toronto |
| Anne L. Pemberton | firstname.lastname@example.org |
| Richard C. Richardson | New York University |
| Dennis Sayers | Ann Leavenworth Center for Accelerated Learning |
| Michael Scriven | email@example.com |
| Robert Stonehill | U.S. Department of Education |
| David D. Williams | Brigham Young University |
| Greg Camilli | Rutgers University |
| Andrew Coulson | firstname.lastname@example.org |
| Sherman Dorn | University of South Florida |
| Richard Garlikov | email@example.com |
| Alison J. Griffith | York University |
| Ernest R. House | University of Colorado |
| Craig B. Howley | Appalachia Educational Laboratory |
| Richard M. Jaeger | University of North Carolina — Greensboro |
| Benjamin Levin | University of Manitoba |
| Dewayne Matthews | Western Interstate Commission for Higher Education |
| Mary McKeown-Moak | MGT of America (Austin, TX) |
| Susan Bobbitt Nolen | University of Washington |
| Hugh G. Petrie | SUNY Buffalo |
| Anthony G. Rud Jr. | Purdue University |
| Jay D. Scribner | University of Texas at Austin |
| Robert E. Stake | University of Illinois — UC |
| Robert T. Stout | Arizona State University |
EPAA Spanish Language Editorial Board
Associate Editor for Spanish Language
Roberto Rodríguez Gómez
Universidad Nacional Autónoma de México
firstname.lastname@example.org
Adrián Acosta (México)
Universidad de Guadalajara
email@example.com
J. Félix Angulo Rasco
(Spain)
Universidad de Cádiz
firstname.lastname@example.org
Teresa Bracho (México)
Centro de Investigación y Docencia Económica-CIDE
bracho dis1.cide.mx
Alejandro Canales (México)
Universidad Nacional Autónoma de México
email@example.com
Ursula Casanova (U.S.A.)
Arizona State University
firstname.lastname@example.org
José Contreras Domingo
Universitat de Barcelona
email@example.com
Erwin Epstein (U.S.A.)
Loyola University of Chicago
firstname.lastname@example.org
Josué González (U.S.A.)
Arizona State University
email@example.com
Rollin Kent (México)
Departamento de Investigación Educativa- DIE/CINVESTAV
firstname.lastname@example.org
María Beatriz Luce (Brazil)
Universidad Federal de Rio Grande do Sul- UFRGS
firstname.lastname@example.org
Javier Mendoza Rojas
(México)
Universidad Nacional Autónoma de México
email@example.com
Marcela Mollis (Argentina)
Universidad de Buenos Aires
firstname.lastname@example.org
Humberto Muñoz García
(México)
Universidad Nacional Autónoma de México
email@example.com
Angel Ignacio Pérez Gómez
(Spain)
Universidad de Málaga
firstname.lastname@example.org
Daniel Schugurensky
(Argentina-Canadá)
OISE/UT, Canada
email@example.com
Simon Schwartzman (Brazil)
Fundação Instituto Brasileiro e Geografia e Estatística
firstname.lastname@example.org
Jurjo Torres Santomé (Spain)
Universidad de A Coruña
email@example.com
Carlos Alberto Torres
(U.S.A.)
University of California, Los Angeles
firstname.lastname@example.org
Block Scheduling Effects on a State Mandated Test of Basic Skills
William R. Veal
University of North Carolina-Chapel Hill
James Schreiber
Indiana University
Abstract
This study examined the effects of a tri-schedule on the academic achievement of students in a high school. The tri-schedule consists of traditional, 4x4 block, and hybrid schedules running at the same time in the same high school. Effectiveness of the schedules was determined from the state mandated test of basic skills in reading, language, and mathematics. Students who were in a particular schedule their freshman year were tested at the beginning of their sophomore year. An ANCOVA was performed using schedule type as the independent variable and cognitive skills index (CSI) and GPA as covariates. For reading and language, there was no statistically significant difference in test results. There was a statistically significant difference for mathematics-computation. Block mathematics is an ideal format for obtaining more credits in mathematics, but the block format does little for mathematics achievement and conceptual understanding. The results have content-specific implications for schools, administrations, and school boards that are considering adopting block scheduling.
The past decade has provided schools with many opportunities to reform education at a local level. One reform movement that has gained popularity in the past few years is block scheduling. More than fifty percent of secondary schools in the United States have opted to change their schedule to one that involves longer classes (Canady & Rettig, 1995). Proponents of school reform often view block scheduling as a way to extend traditional class periods into longer blocks of uninterrupted class time and to improve student achievement (Bevevino, Snodgrass, Adams, & Dengel, 1998; Canady & Rettig, 1995; Cobb, Abate, & Baker, 1999; Queen & Isenhour, 1998; Canady & Rettig, 1996). As the trend continues to grow throughout the United States, teachers, parents, administrators, and university professors are seeking evidence of the impact of block scheduling on student achievement. As reformers have sought better ways to increase student achievement in the high schools, the question of time used for instruction has become a major focus.
**Literature Review**
There have been many debates at the district and school levels about the perceived benefits of block scheduling, and the results of studies have both supported and opposed its implementation. Previous studies have reported favorable teacher attitudes and perceptions about block scheduling through the use of surveys (Pullen, Morse, & Varrella, 1998; Sessoms, 1995; Tanner, 1996). Other studies have reported on the relationship between block scheduling and student grade point averages (Buckman, King & Ryan, 1995; Edwards 1993; Holmberg, 1996; Schoenstein, 1995). These studies focused mainly on trends in grade point averages over the period of implementation. Mixed results have been reported on state standardized test scores (North Carolina Department of Public Instruction, 1996) and standardized test scores (Bateson, 1990; Hess, Wronkovich & Robinson, 1998; Lockwood, 1995; Wild, 1998). Most of these studies support the longer traditional schedule over the 4x4 block in science, for example, yet support the 4x4 block schedule in mathematics and social studies. Graduation rates have also been reported to benefit from the 4x4 schedule (Carroll, 1995; Monroe, 1989; Sessoms, 1995). The findings of these studies have been inconsistent, sometimes reporting gains for students on block scheduling, sometimes reporting no differences, and
sometimes reporting losses compared with students on traditional scheduling. Several large-sample studies, for example, have reported results in multiple subject areas. Hess, Wronkovich, and Robinson (1998) and Wronkovich, Hess, and Robinson (1997) used "retired" copies of SAT II Achievement Tests. Using the Otis-Lennon Scholastic Aptitude Test as a covariate, they conducted regression analyses on pre- and post-tests. The study concluded that there were no significant differences in student achievement between 4x4 semester and traditional schedule types in geometry and history, and a significant difference in biology and English with 4x4 semester schedule students achieving higher scores than the traditional schedule.
In a second study, done by The College Board (1998), tests were examined for student achievement differences in four subject areas: calculus, biology, U.S. history, and English literature. An analysis of covariance using the PSAT/NMSQT as a covariate was performed on Advanced Placement examination scores. Students who were taught AP English literature under an extended traditional class time (meeting every day for more than 60 minutes) scored significantly higher than students in a traditional schedule and in both fall and spring 4x4 schedules. Students who took the AP U.S. history exam in both the traditional and extended traditional formats outperformed those in the 4x4 block schedules. Students enrolled in extended traditional AP biology and calculus classes outperformed those students in a traditional format and in the 4x4 block schedules. However, these results might be expected if more time was spent on a daily basis learning any subject. Moreover, the results reported the effects of the traditional, extended traditional, and 4x4 schedules, but did not include other types of block scheduling (e.g., block 8, alternating block, trimester, or hybrid).
Cobb, Abate, and Baker (1999) used a post-test only, matched pairs design to evaluate standardized achievement in mathematics, reading, and writing. The researchers found that block students performed significantly less well on the mathematics standardized test. There were no differences in achievement on the standardized reading and writing test scores. The literature is consistent on the inconsistency of achievement of students within the block schedule.
Most studies have examined students after they have switched to a new schedule. Few studies have directly compared student achievement within the same school utilizing different schedules. The purpose of this paper is to add to the literature base a study that investigated student achievement on standardized tests of reading, language, and mathematics. The test results were evaluated based upon the three schedule types within the same school. Systematic examinations of the effects of block scheduling are needed if research is to adequately inform reform movements and decisions.
**Methods**
**Context**
In the spring of 1994, discussions were held on changing the traditional day schedule at South Springfield High School (SSHS). The
change to a 4x4 alternative schedule was proposed after five years of study and consideration. However, a compromise tri-schedule was implemented rather than a 4x4 block schedule. The tri-schedule included three schedule types (traditional, 4x4 block, and hybrid) running at the same time during the school day. The traditional schedule consisted of six 55-minute classes that were taught for the entire school year. The 4x4 block schedule consisted of four 87-minute classes that were taught in one semester. The hybrid schedule consisted of three traditional and two block classes taught each day.
South Springfield High School is a large, four-year school located in a medium-sized college town in the Midwest. The student population of 1800 is mostly white and includes children from the city and rural areas of the county. In the fall of 1997, SSHS began the scheduling format described earlier. Under this format, both traditional and block courses were offered in all subject areas except the performing arts and advanced placement classes. The total contact time in block courses was approximately 37 hours less than for yearlong traditional courses (Table 1). This equated to 40 fewer class meetings for block classes than traditional classes.
**Table 1**
**Descriptive Information for Classes under Block and Traditional Formats**
| Schedule Descriptors | Traditional | Hybrid | 4X4 Block |
|---------------------------------------|-------------|--------------|-----------|
| Class Time (mins./day) | 55 | 55 and 87 | 87 |
| Number of Days of Instruction | 180 | 180 and 90 | 90 |
| Class Time (mins./school year) | 9900 | 9900 and 7830| 7830 |
| Classes/Day | 6 | 5 | 4 |
| Classes/Year | 6 | 7 | 8 |
| Hours/Day | 6.5 | 6.5 | 6.5 |
| Credits | 12 | 14 | 16 |
| Teacher Utilization Rate<sup>a,b</sup> | 83% | 83%<sup>b</sup> | 75% |
a. Defined as the total teaching contact hours divided by the total class time during a day.
b. Teacher utilization rate was the same for all teachers due to contract and union regulations.
**Students**
During their freshman year, the students were randomly assigned to a block or traditional schedule. Due to scheduling concerns with special education students and Advanced Placement classes,
students were then asked to switch into different classes than originally assigned. This resulted in the formation of the hybrid schedule to accommodate the course requests. Learning from the first year's scheduling dilemma, scheduling for the second year was student driven: students submitted requests to take certain classes in either the block or traditional format. Based upon frequency counts, some classes were offered only once in one format and multiple times in the other. Because classes were distributed proportionately in this way, student choice was ultimately limited to certain class formats.
**State Mandated Test of Basic Skills**
The Indiana Statewide Testing for Educational Progress (ISTEP+) is a state mandated test of basic skills that all students in Grades 3, 6, 8, and 10 must take. All 10th graders (sophomores) are required to take all three sections of the ISTEP+, regardless of their state of residence or school in the previous year. The results include only those students who took all three sections of the test (N = 327); due to absences, some students did not take certain portions.
The areas tested include reading, language, and mathematics. The sub-areas of reading are comprehension and vocabulary. The sub-areas of language are mechanics and expression. The sub-areas of mathematics are concepts and applications, and computation. In addition to these sub-areas, each area has a total score, and there is a battery score for the entire test. For the purposes of this study, only scores on the sub-areas are reported, since each total area is composed of its two sub-areas and the battery is a composite of all six sub-areas. Normal Curve Equivalent (NCE) scores and the Cognitive Skills Index (CSI) were taken from the result printout for analysis. The NCE and CSI scores are norm-referenced. The NCE scores (1-99) are based upon an equal-interval scale; using NCE scores allowed us to compare scores among schedule groups. The CSI describes an individual's overall performance on the ISTEP+ aptitude test, comparing the student's cognitive ability with that of students of the same age. The CSI is a normalized standard score with a mean of 100 and a standard deviation of 16. The test was administered over a four-day period, three hours per day, and each section was timed. Table 2 shows descriptive information about the students who took the ISTEP+ test.
**Table 2**
**Descriptive Statistics of Students Taking ISTEP+**
| Schedule Type | N | 1997-98 Freshman GPA | CSI |
|---------------|-----|----------------------|-----|
| Traditional | 117 | 2.73 | 113.06 |
| Block | 141 | 3.01 | 113.08 |
| Hybrid | 75 | 3.25 | 116.99 |
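To make the two norm-referenced scales concrete, here is a minimal sketch (not from the study itself): the CSI parameters (mean 100, SD 16) are those stated above, and the scaling constant 21.06 is the conventional one for Normal Curve Equivalents, assumed here rather than taken from the ISTEP+ documentation.

```python
# Sketch: relating z-scores in the norm group to the two scales used here.
# CSI: normalized standard score with mean 100, SD 16 (as stated in the text).
# NCE: equal-interval 1-99 scale, conventionally 50 + 21.06 * z.

def csi_from_z(z: float) -> float:
    """Cognitive Skills Index: mean 100, standard deviation 16."""
    return 100 + 16 * z

def nce_from_z(z: float) -> float:
    """Normal Curve Equivalent, clipped to the 1-99 reporting range."""
    return max(1.0, min(99.0, 50 + 21.06 * z))

# A student one standard deviation above the norm-group mean:
print(csi_from_z(1.0))            # 116.0
print(round(nce_from_z(1.0), 2))  # 71.06
```

On this reading, the hybrid group's mean CSI of 116.99 in Table 2 sits roughly one standard deviation above the norm-group mean.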
Analysis
Because it was impossible to obtain a randomized or matched sample in the present study, analysis of covariance (ANCOVA) was used; the statistical tests were run in SPSS. The ANCOVA for each dependent variable was a one-factor fixed-effect design (schedule type: traditional, block, hybrid) with CSI (cognitive skills index) and cumulative GPA as simultaneous covariates.
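The heart of such an ANCOVA is comparing covariate-adjusted group means. As an illustrative sketch only (toy data; a single covariate standing in for CSI, whereas the actual analysis used SPSS with CSI and GPA simultaneously), the classic adjustment shifts each group's outcome mean to the grand covariate mean along the pooled within-group slope:

```python
# Sketch of ANCOVA-style adjusted means with one covariate (toy data).
# The real analysis used SPSS with two covariates (CSI and GPA); this
# illustrates only the adjustment logic, not the F-tests.

def mean(xs):
    return sum(xs) / len(xs)

def adjusted_means(groups):
    """groups: dict name -> list of (covariate, outcome) pairs.
    Returns dict name -> covariate-adjusted outcome mean."""
    # Pooled within-group slope of outcome on covariate.
    num = den = 0.0
    for pairs in groups.values():
        xbar = mean([x for x, _ in pairs])
        ybar = mean([y for _, y in pairs])
        for x, y in pairs:
            num += (x - xbar) * (y - ybar)
            den += (x - xbar) ** 2
    b = num / den
    grand_x = mean([x for pairs in groups.values() for x, _ in pairs])
    # Classic ANCOVA adjustment: shift each group mean to the grand
    # covariate mean along the pooled slope.
    return {
        g: mean([y for _, y in pairs]) - b * (mean([x for x, _ in pairs]) - grand_x)
        for g, pairs in groups.items()
    }

# Hypothetical (covariate, outcome) pairs, e.g. (CSI, math NCE score):
toy = {
    "traditional": [(110, 70), (120, 75), (100, 66)],
    "block":       [(115, 68), (125, 74), (105, 63)],
}
print(adjusted_means(toy))
```

Because the block group here happens to have the higher covariate mean, its adjusted mean drops below its raw mean, while the traditional group's rises; this is the same equalization the study performs with CSI and GPA.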
Results
Reading
Both of the sub-areas for reading were analyzed and determined to be non-significant by schedule type, and thus their results are not reported. Using reading-total as an example, CSI and GPA provided significant regression effects \((F[1,331] = 160.740, p < .001; F[1,331] = 6.308, p < .001)\) respectively. No main effect for schedule type was found for reading-total \((F[2,331] = 1.470, p = .231)\).
Language
Both of the sub-areas for language were also analyzed and determined to be non-significant by schedule type, and thus their results are not reported. Using language-total as an example, CSI and GPA provided significant regression effects \((F[1,331] = 140.809, p < .001; F[1,331] = 51.153, p < .001)\) respectively. No main effect for schedule type was found for language-total \((F[2,331] = .679, p = .508)\).
Mathematics
The ANCOVA results for mathematics-computation were significant. The covariates CSI and GPA provided significant regression effects for the dependent variable \((F[1,331] = 155.369, p < .001\) and \(F[1,331] = 53.196, p < .001\)) respectively (Table 3). A significant main effect for schedule type (Table 3) was found \((F[2,331] = 4.380, p = .013)\). Table 4 shows the unadjusted mean scores for the mathematics-computation section of the ISTEP+ based upon schedule type. Traditional schedule students scored significantly higher on mathematics-computation than block and hybrid students (Table 5). The traditional and block students had a mean difference of 4.175 \((p = .006)\) and the traditional and hybrid students had a mean difference of 4.181 \((p = .022)\).
Table 3
ANCOVA for Dependent Variable Mathematics-computation
Table 4
Means\(^a\) for Mathematics-computation by Schedule
| Schedule | Mean | Std. Error |
|----------|--------|------------|
| Traditional | 69.115 | 1.128 |
| Block | 64.940 | 1.008 |
| Hybrid | 64.934 | 1.399 |
\(^a\) Evaluated at the covariate values in the model: CSI = 113.9819, CUMGPA = 2.9750.
Table 5
Pairwise Comparisons for Dependent Variable Mathematics-computation
| (I) Schedule | (J) Schedule | Mean Difference (I-J) | Std. Error | Sig. |
|--------------|--------------|-----------------------|------------|------|
| Traditional | Block | 4.175 | 1.521 | .006 |
| Traditional | Hybrid | 4.181 | 1.823 | .022 |
| Block | Hybrid | 0.005 | 1.720 | .997 |
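As a quick consistency check (simple arithmetic on the published figures, not part of the source analysis), the pairwise differences in Table 5 can be recomputed from the adjusted means in Table 4. The block-hybrid difference comes out as 0.006 rather than the reported 0.005 only because Table 4's means are themselves rounded:

```python
# Recomputing Table 5's mean differences from the Table 4 adjusted means.
adjusted = {"traditional": 69.115, "block": 64.940, "hybrid": 64.934}

pairs = [("traditional", "block"), ("traditional", "hybrid"), ("block", "hybrid")]
for i, j in pairs:
    # Differences from rounded means; SPSS reports them from full precision.
    print(f"{i} - {j}: {adjusted[i] - adjusted[j]:.3f}")
# traditional - block:  4.175
# traditional - hybrid: 4.181
# block - hybrid:       0.006  (Table 5 reports 0.005 from unrounded means)
```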
For the dependent variable mathematics-concepts and applications, CSI and GPA provided significant regression effects \((F[1,331] = 188.767, p < .001\) and \(F[1,331] = 41.867, p < .001)\), respectively. No main effect for schedule type was found \((F[2,331] = 1.456, p = .235)\); tables are not provided for these non-significant results. Even though three schedules existed at the high school and all students were enrolled in one of them, students took mathematics in either a traditional or a block format. The ANCOVA results in Table 5 would indicate that the traditional schedule is better for student achievement than the hybrid and block schedules. Mathematics, however, was not taught in a hybrid format, only in a block or traditional format. Thus an ANCOVA was performed on mathematics-computation separating the students based upon their
mathematics class format. The covariates CSI and GPA provided significant regression effects \((F[1,332] = 164.238, p < .001\) and \(F[1,332] = 43.876, p < .001)\) respectively (Table 6). A significant main effect for mathematics class format was not found \((F[1,332] = 0.018, p = .892)\).
**Table 6**
**ANCOVA with Dependent Variable Mathematics-computation for All Sophomores**
| Source | Sum of Squares | df | Mean Square | F | Sig. |
|----------|----------------|----|-------------|------|------|
| CSI | 24069.004 | 1 | 24069.004 | 164.238 | .000 |
| CUMGPA | 6429.975 | 1 | 6429.975 | 43.876 | .000 |
| Format | 2.703 | 1 | 2.703 | .018 | .892 |
| Error | 48068.272 | 328| 146.550 | | |
**Discussion**
**Reading and Language**
No schedule is significantly better than another for student achievement on ISTEP+ reading and language scores. After adjusting for differences in CSI and GPA, students' scores on the reading and language portions of the ISTEP+ were comparable. In essence, schedule type did not influence student scores positively or negatively. These findings confirm the results of previous studies: Cobb, Abate, and Baker (1999) and Holmberg (1996) reported no differences in student achievement on reading and writing standardized test scores. In terms of the development of reading and language skills, as long as students are taking classes for the same amount of time each year, reading and language scores might be expected to remain the same. Perhaps all classes that a student might take under any schedule format reinforce reading and language skills by incorporating some kind of reading and language component into their curricula. Reading and language skills are needed in virtually all types of curricula and are thus reinforced across all classes.
**Mathematics**
The traditional schedule seems better for the understanding and retention of mathematical computation as determined from ISTEP+ scores for sophomores. Some studies have reported that block scheduling was desirable because it allowed for more credits and classes to be taken (Queen & Isenhour, 1998). What has not been examined is how a decrease in total time throughout the year due to a
schedule change might influence mathematics learning. Does taking a mathematics class everyday with a longer total percentage of time in class benefit a student over taking more mathematics classes with less time in each math class?
Table 6 shows the ANCOVA results for mathematics-computation based upon the mathematics format of all students taking the ISTEP+. The non-significant results indicate that the mathematics format taken by students does not have an impact on their standardized mathematics test scores. Thus, schedule type was not a factor in the test scores for sophomores even though parts of the curriculum were left out of the block format classes due to time constraints (see Table 1). It is also interesting to note that the students were equalized using the two covariates: an initial glance at the unadjusted means might suggest that the traditional students actually did better, but this was not the result. Another issue that has been discussed as an advantage of block scheduling is that students can take more classes, including more core classes such as mathematics, under the 4x4 block schedule (Queen & Isenhour, 1998). At SSHS, proponents of block scheduling used this argument to bolster support for block scheduling. If a student could take more mathematics courses, could the student complete and understand the curriculum? In order to answer this question we examined 76 sophomores who took more than one mathematics class their freshman year. Of those students, one was in the traditional schedule and one was in the block schedule; seventy-three were hybrid. These hybrid students had the opportunity to take their mathematics classes in either a block or traditional format: twenty-two of the 73 took their mathematics classes in a block format, and 51 took them in a traditional format. Table 7 shows the ANCOVA results for mathematics-computation for those hybrid students who took their freshman mathematics classes in either the traditional or block format.
Those students who had mathematics for a longer daily period (block) all year scored the same on the ISTEP+ mathematics section as those students in a traditional format after adjusting for CSI and GPA. This result indicates that taking more than one mathematics class does not increase a student's mathematics achievement. Thus, while the argument that block scheduling would allow more students to take more mathematics classes is true, the additional classes did not translate into increased learning, given the curriculum content lost to the shorter total class hours of the block format.
**Table 7**
**ANCOVA with Dependent Variable Mathematics-computation for Hybrid Sophomores**
Moreover, those hybrid students who took more than one math class their freshman year scored similarly when they took their mathematics classes in the block schedule. In essence, the hybrid students who took more than one math class their freshman year not only took math daily, but were immersed in mathematics for a longer period of time every day for an entire year. Even though these students lost content in the block format, they made up for the loss with an increased amount of mathematics content at higher levels. These results support the conclusion that mathematics is best learned and understood under a daily format. Also, more time spent on learning mathematics concepts in an extended period seems to reinforce those concepts. In essence, block mathematics is good for taking more mathematics classes and obtaining more graduation credits, but the block format per se does little to increase students' understanding of mathematics.
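The covariate adjustment behind the ANCOVA comparisons above can be illustrated with a small sketch (not the authors' code, and with fabricated data). By the Frisch-Waugh result, the covariate-adjusted group difference equals the slope from regressing covariate-residualized test scores on the covariate-residualized group indicator, which is why a raw mean gap can shrink or vanish after adjustment.

```python
def simple_ols(x, y):
    """Intercept and slope of y on x by least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return my - slope * mx, slope

def residuals(x, y):
    """Residuals after regressing y on x."""
    a, b = simple_ols(x, y)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Fabricated data: a covariate (e.g., prior GPA), a group indicator
# (block = 1), and a score generated as 2 + 0.5*covariate + 1.0*group.
cov = [2, 3, 4, 8, 9, 10]
group = [0, 0, 0, 1, 1, 1]
score = [2 + 0.5 * c + 1.0 * g for c, g in zip(cov, group)]

# Unadjusted mean difference between the groups.
raw_diff = sum(score[3:]) / 3 - sum(score[:3]) / 3

# Covariate-adjusted difference: slope of residualized score on
# residualized group indicator (Frisch-Waugh).
_, adj_diff = simple_ols(residuals(cov, group), residuals(cov, score))

print(round(raw_diff, 3), round(adj_diff, 3))  # raw gap of 4.0 shrinks to 1.0
```

The raw gap of 4.0 mixes the group effect with the groups' different covariate levels; adjustment recovers the 1.0 built into the fabricated data.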
Another issue is the possible "gap in learning" resulting from a block schedule student taking mathematics his/her first semester freshman year and not taking it again until his/her sophomore year. We were unable to determine the effect of the "gap in learning" associated with the 4x4 block schedule. The mathematics-computation scores suggest that the "gap in learning" was not a significant factor in mathematics achievement, contrary to what many have previously perceived (Kramer, 1996; Wronkovich, Hess, & Robinson, 1997). We can speculate that the "gap in learning" was not an issue, since scores on the mathematics-computation section were not significantly different between students in the traditional and block schedules (see Table 6).
The results found in this study confirm those found in some other studies, while conflicting with others. Learning mathematics under an extended schedule format (daily and greater than 60 minutes) was advantageous for students on an Advanced Placement achievement test (The College Board, 1998). These results also confirm findings by Cobb, Abate, and Baker (1999). Several studies have reported higher grades for students in block mathematics (e.g., Carroll, 1995; Stennett & Rachar, 1973). In essence, some mathematics results attributed to scheduling type in the literature are tenuous at best. Fewer studies using standardized tests have been completed and reported in the literature (Cobb, Abate, & Baker, 1999; Hess, Wronkovich, & Robinson, 1998; The College Board, 1998).
Conclusions
This study supports the importance of daily instruction and contact time to student achievement in mathematics as distinct from other academic skills. However, the mechanisms that determine this relationship are less clear, and educational policy makers would be unwise to conclude that one type of schedule is generally better than others independent of how different schedules influence the number and type of courses that students take across the secondary curriculum. More research is needed to address the concern of "time-of-discipline." Does a block schedule improve student achievement even when the total amount of time is decreased within discipline areas? Which academic areas are most negatively and positively affected by the switch to a particular schedule type? Should one schedule be the model for all schools? These are important questions that need to be answered by researchers in different academic areas.
References
Bateson, D. J. (1990). Science achievement in semester and all-year courses. *Journal of Research in Science Teaching, 27*(3), 233-240.
Bevevino, M. M., Snodgrass, D. M., Adams, K. M., & Dengel, J. A. (1998). *An educator's guide to block scheduling: Decision making, curriculum design, and lesson planning strategies*. Boston: Allyn and Bacon.
Buckman, D., King, B., & Ryan, S. (1995). Block scheduling: A means to improve school climate. *NASSP Bulletin, 79*(571), 9-18.
Canady, R., & Rettig, M. (1995). *Block scheduling: A catalyst for change in high school*. Princeton, NJ: Eye on Education.
Canady, R. L., & Rettig, M. D. (Eds.) (1996). *Teaching in the block: Strategies for engaging active learners*. Princeton, NJ: Eye on Education.
Carroll, J. M. (1995). The Copernican Plan evaluated: The evolution of a revolution. *Phi Delta Kappan, 76*, 104-110, 112-113.
Cobb, R. B., Abate, S., & Baker, D. (1999). Effects on students of a 4 x 4 junior high school block scheduling program. *Education Policy Analysis Archives, 7*(3). (Entire issue; available online at http://epaa.asu.edu/epaa/v7n3.html.)
Edwards, C. (1993). The 4 x 4 plan. *Educational Leadership, 53*(3), 16-19.
Hess, C., Wronkovich, M., & Robinson, J. (1998). Measured outcomes of learning in the block. Manuscript submitted for publication.
Holmberg, T. (1996). Block scheduling versus traditional education: A comparison of grade-point averages and ACT scores. Unpublished doctoral dissertation, University of Wisconsin, Eau Claire.
Kramer, S. L. (1996). Block scheduling and high school mathematics instruction. *The Mathematics Teacher, 89*, 758-767.
Lockwood, S. (1995). Semesterizing the high school schedule: The impact of student achievement in Algebra and Geometry. *NASSP Bulletin, 79*(575), 102-108.
Monroe, M. J. (1989). BLOCK: Successful alternative format addressing learner needs. Paper presented at the Annual Meeting of the Association of Teacher Educators, St. Louis, MO.
North Carolina Department of Public Instruction, Division of Accountability Services. (1996). Blocked scheduled high school achievement: Comparison of 1995 end-of-course test scores for blocked and non-blocked high schools. Raleigh, NC: Evaluation Services Section, Division of Accountability.
Pullen, S. L., Morse, J., & Varrella, G. F. (1998). A second look at block scheduling. Paper presented at the Annual Conference of the National Association of Science Teachers, Las Vegas, NV.
Queen, J. A., & Isenhour, K. G. (1998). *The 4x4 block schedule*. New York: Eye on Education, Inc.
Schoenstein, R. (1995). The new school on the block schedule. *The Executive Educator, 17*(8), 18-21.
Sessoms, J. C. (1995). Teachers' perceptions of three models of high school block scheduling. Unpublished doctoral dissertation, University of Virginia, Charlottesville.
Stennett, R. G., & Rachar, B. (1973). Gains in mathematics knowledge in Grade 10 semestered and non-semestered programmes. London, Ontario: London Board of Education. (Micromedia Limited Use Microlog Order No. ON00775).
Tanner, B. M. (1996). Perceived staff needs of teachers in high schools with block schedules. Unpublished doctoral dissertation, University of Virginia, Charlottesville.
The College Board. (1998, May). Block schedules and student performance on AP Examinations. *Research News, RN-03*. New York: College Entrance Examination Board.
Wild, R. D. (1998, April). Science achievement and block schedules. Paper presented at the Annual Meeting of the National Association for Research in Science Teaching, San Diego, CA.
Wronkovich, M., Hess, C. A., & Robinson, J. E. (1997). An objective look at math outcomes based on new research into block scheduling. *NASSP Bulletin, 81*(593), 32-41.
**About the Authors**
**William R. Veal**
Assistant Professor, Science Education
The University of North Carolina-Chapel Hill
CB #3500 Peabody Hall
Chapel Hill, NC 27599-3500
(919) 962-9891
(919) 962-1533 Fax
Email: email@example.com
William Veal is an assistant professor of science education at UNC-Chapel Hill. He taught secondary science in a block schedule in Salt Lake City. His other research interests lie in the development of pedagogical content knowledge in preservice science educators and the implications for teacher education.
**James Schreiber**
Indiana University
Bloomington, IN
James Schreiber is a former high school mathematics teacher and currently works as a Senior Research Associate at the Indiana Center for Evaluation. He has been involved with education at different levels, focusing specifically on international mathematics achievement. He is currently working toward his Ph.D. in Educational Psychology at Indiana University.
Copyright 1999 by the *Education Policy Analysis Archives*
The World Wide Web address for the *Education Policy Analysis Archives* is [http://epaa.asu.edu](http://epaa.asu.edu)
General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, [firstname.lastname@example.org](mailto:email@example.com) or reach him at College of Education, Arizona State University, Tempe, AZ 85287-0211. (602-965-9644). The Book Review Editor is Walter E. Shepherd: [firstname.lastname@example.org](mailto:email@example.com). The Commentary Editor is Casey D. Cobb: [firstname.lastname@example.org](mailto:email@example.com).
**EPAA Editorial Board**
| Name | Institution/Position |
|-----------------------------|-----------------------------------------------------------|
| Michael W. Apple | University of Wisconsin |
| John Covaleskie | Northern Michigan University |
| Alan Davis | University of Colorado, Denver |
| Mark E. Fetler | California Commission on Teacher Credentialing |
| Thomas F. Green | Syracuse University |
| Arlen Gullickson | Western Michigan University |
| Aimee Howley | Ohio University |
| William Hunter | University of Calgary |
| Daniel Kallós | Umeå University |
| Thomas Mauhs-Pugh | Green Mountain College |
| William McInerney | Purdue University |
| Les McLean | University of Toronto |
| Anne L. Pemberton | firstname.lastname@example.org |
| Richard C. Richardson | New York University |
| Dennis Sayers | Ann Leavenworth Center for Accelerated Learning |
| Michael Scriven | email@example.com |
| Robert Stonehill | U.S. Department of Education |
| David D. Williams | Brigham Young University |
| Greg Camilli | Rutgers University |
| Andrew Coulson | firstname.lastname@example.org |
| Sherman Dorn | University of South Florida |
| Richard Garlikov | email@example.com |
| Alison J. Griffith | York University |
| Ernest R. House | University of Colorado |
| Craig B. Howley | Appalachia Educational Laboratory |
| Richard M. Jaeger | University of North Carolina—Greensboro |
| Benjamin Levin | University of Manitoba |
| Dewayne Matthews | Western Interstate Commission for Higher Education |
| Mary McKeown-Moak | MGT of America (Austin, TX) |
| Susan Bobbitt Nolen | University of Washington |
| Hugh G. Petrie | SUNY Buffalo |
| Anthony G. Rud Jr. | Purdue University |
| Jay D. Scribner | University of Texas at Austin |
| Robert E. Stake | University of Illinois--UC |
| Robert T. Stout | Arizona State University |
EPAA Spanish Language Editorial Board
Associate Editor for Spanish Language
Roberto Rodríguez Gómez
Universidad Nacional Autónoma de México
firstname.lastname@example.org
Adrián Acosta (México)
Universidad de Guadalajara
email@example.com
Teresa Bracho (México)
Centro de Investigación y Docencia Económica-CIDE
bracho@dis1.cide.mx
Ursula Casanova (U.S.A.)
Arizona State University
firstname.lastname@example.org
Erwin Epstein (U.S.A.)
Loyola University of Chicago
email@example.com
Rollin Kent (México)
Departamento de Investigación Educativa- DIE/CINVESTAV
firstname.lastname@example.org
email@example.com
Javier Mendoza Rojas (México)
Universidad Nacional Autónoma de México
firstname.lastname@example.org
Humberto Muñoz García (México)
Universidad Nacional Autónoma de México
email@example.com
Daniel Schugurensky (Argentina-Canadá)
OISE/UT, Canada
firstname.lastname@example.org
Jurjo Torres Santomé (Spain)
Universidad de A Coruña
email@example.com
J. Félix Angulo Rasco (Spain)
Universidad de Cádiz
firstname.lastname@example.org
Alejandro Canales (México)
Universidad Nacional Autónoma de México
email@example.com
José Contreras Domingo
Universitat de Barcelona
firstname.lastname@example.org
Josué González (U.S.A.)
Arizona State University
email@example.com
María Beatriz Luce (Brazil)
Universidad Federal de Rio Grande do Sul- UFRGS
firstname.lastname@example.org
Marcela Mollis (Argentina)
Universidad de Buenos Aires
email@example.com
Angel Ignacio Pérez Gómez (Spain)
Universidad de Málaga
firstname.lastname@example.org
Simon Schwartzman (Brazil)
Fundação Instituto Brasileiro e Geografia e Estatística
email@example.com
Carlos Alberto Torres (U.S.A.)
University of California, Los Angeles
firstname.lastname@example.org
Grade Inflation Rates among Different Ability Students, Controlling for Other Factors
Stephanie Mc Spirit
Eastern Kentucky University
Kirk E. Jones
Eastern Kentucky University
Abstract
This study compares grade inflation rates among different ability students at a large, open admissions public University. Specifically, this study compares trends in graduating grade point average (GPA) from 1983 to 1996 across low, typical and higher ability students. This study also tests other explanations for increases in graduating GPA. These other explanations are changes in 1) ACT score 2) gender 3) college major and 4) vocational programs. With these other explanations considered, regression results still report an inflationary trend in graduating GPA.
Time, as measured by college entry year, is still a significant positive predictor of GPA. More directly, comparisons of regression coefficients reveal lower ability students as experiencing the highest rate of grade increase. Higher grade inflation rates among low-aptitude students suggest that faculty might be using grades to encourage learning among marginal students.
In this study, we examine grade inflation at a public open-enrollment university. There has been little attention to grade inflation within public institutions (Moore, 1996, p.2). Yet the media have provided ample coverage of grade inflation at selective colleges and elite universities (see Reibstein and King, 1994; Strauss, 1997; Archibold, 1998; Sowell, 1994; Shea, 1994; Gose, 1997). In fact, a reading of the newspapers would suggest that the steady proliferation of A and B grades and the steady climb in grade point average (GPA) are at issue only among top-tier institutions. However, a review of other reports (Beaver, 1997; Franklin, 1991; Moore, 1996; Stone, 1995; Van Allen, 1990) shows that grade inflation is also a concern within less selective colleges and universities.
This study focuses on an open admission, public university that typically enrolls 13,000 undergraduates annually. The relatively large size of the university, combined with the fact that most other institutions are also relatively non-selective in their admissions criteria (Beaver, 1997, p.5), makes this study's report on grade inflation more applicable to the vast majority of other colleges and universities than the media's focus on grade inflation at top-tier institutions. In examining grade inflation, this study examines trends in graduating GPA from 1983 to 1996. Our general findings suggest that students have been graduating with consistently higher grade point averages since 1983. We believe these findings show "grade inflation" because we statistically controlled for a number of alternative explanations (justifications) for the rise in graduating GPA. We speak to these other influences in the following section.
However, these general findings are not our most important results. Our most important results are based on further analysis of graduating GPA with student aptitude. We wondered whether the faculty, over the years, had changed their grading behavior to accommodate one student group over another. Subsequently, we compared rates of grade increase between low, typical and higher ability students over time. Few grade inflation studies have made similar comparisons, though several studies have hinted that grade inflation rates may differ across different student ability groups (Bearden, Wolf and Grosch, 1992, p.740; Kolevzon, 1981, p.200; Prather, Smith and Kodras, 1979, p.20; Sabot and Wakeman-Linn in Shea, 1994, p.A46). Some studies suggest that high ability students gravitate toward departments that hold more stringent grading standards and lower ability students gravitate toward departments that grade higher (Bearden, Wolf and Grosch, 1992, p.740). On the other hand, Sabot and Wakeman-Linn suggest the reverse, in that traditionally low grading departments have experienced the highest rate of grade increase (quoted in Shea, 1994, p.A46; also Kolevzon, 1981, p.200; Prather, Smith and Kodras, 1979, p.20). On that account, current grade inflation rates might be steepest among the high aptitude student groups. In short, there is some comment to suggest that rates of grade inflation might be related to student ability. This paper examines more fully the extent to which faculty might have altered their grading behavior toward one student group over another.
In making our own distinctions between differences in student aptitude, we relied on student scores on the American College Test (ACT). We acknowledge the potential class bias in using the ACT as an aptitude measure. We remind readers that ACT score, at best, measures college readiness and is not a measure of cognitive ability. Few grade inflation studies have hesitated to use ACT score as a measure of college aptitude. Most studies, for example, that control for an increase in student preparation as an explanation for an increase in grades have relied on the ACT (Breland, 1976; Chesen-Jacobs, Johnson and Keene, 1978; Cluskey, Griffin and Ehlin, 1997; Kwon, Kendig and Bae, 1997; Mullen, 1995; Olsen, 1997; Taylor, 1985; Remegius, 1979). Like other studies, we also use ACT as a statistical control on grade increase. Unlike other studies, we also rely on student ACT to categorize students into low, typical and higher academic ability groups. We then use these distinctions to check for differences in rates of grade inflation between students of low, typical and higher college aptitude. Results show important and significant differences in grade inflation rates between student aptitude groups. These results remain significant upon controlling for the influence of other factors.
**Literature Review**
**Controlling for Other Explanations of Grade Increase**
**Aptitude**
A rise in college grades might be due to factors other than grade inflation. An increase in high grades, for example, might be due to an increased presence of more college-prepared students. Early studies examined the influence of increased student preparation levels as an explanation for rising grades. Each found little evidence to
suggest that increases in grade point average were due to improvements in student preparation (Breland, 1976; Chesen-Jacobs, Johnson and Keene, 1978; Taylor, 1985; Remegius, 1979). A recent study reaches similar conclusions: Cluskey, Griffin and Ehlin (1997) find little evidence that increases in GPA are due to an influx of more college-able students; in fact, a negative correlation between GPA and ACT is noted (p.274), with grades rising and average ACT declining over the years. Yet other recent studies reach different conclusions, documenting a significant rise in student aptitude and preparedness levels over the years at their respective institutions (Olsen, 1997; Mullen, 1995; Kwon, Kendig and Bae, 1997). Olsen notes that the average incoming student scored in the 90th percentile on the ACT in 1994, whereas in previous years, the typical student ranked in the 70th percentile (p.4). Considering the rising academic caliber of the student body, Olsen suggests that the corresponding increase in student GPA is warranted and not due to an inflationary spiral in college grading (p.7). Mullen, likewise, finds a significant increase in ACT score over the years and concludes that the increase in GPA is the result of more college-prepared students (1995, p.12). In short, in identifying grade inflation at their respective institutions, researchers have examined the confounding effect that increases in student aptitude and preparation levels have in explaining grade increase. Researchers at separate institutions have reached separate conclusions on whether identified grade rise is the earned result of increases in student preparation levels or the result of grade inflation.
This leads to the standard empirical definition of grade inflation: That is, if grades rise over a period, without a corresponding increase in student aptitude levels (as measured typically through ACT score), then researchers have "probable cause" to assume that grade increase is due to an inflationary trend in faculty grading (Cluskey et al., 1997 p.273; see also Carney, Isakson and Ellsworth, 1978, p.219). This standard definition and how it has been applied in some studies has been improved upon in others: for example, a number of other studies control for other student and institutional-related factors that might explain an increase in high grades besides a rise in ACT score.
Age
Several recent studies, for example, point to the growing presence of older, more mature, serious-minded college students as a possible explanation for grade increase. Kwon, Kendig and Bae (1997) note a positive correlation between age and grades: As GPA increased from 1983 to 1993, average student age also increased from 19 to 22 years (p.52); moreover, further tests show student age as a significant positive predictor of student GPA (p.53). Olsen (1997) corroborates this, in that being a mature student returning to school also served as a positive predictor of college GPA (p.10). Thus, research suggests that an increase in the number of older, more serious-minded college students may serve to explain an increase in high grades at some institutions.
Gender
Another demographic influence to control for is gender. Early studies noted that the influx of female students in the seventies might explain part of the increase in GPA (Birnbaum, 1977, p.527). A recent national study confirms that female students continue to earn, on average, significantly higher college marks than their male counterparts (Adelman, 1995, p.267). Studies suggest that a notable increase in female students might explain some of the aggregate rise in grade point average. Thus, gender would be another demographic factor to control for before attributing grade increase to grade inflation.
Course Withdrawals
Apart from demographic shifts, many studies note institutional changes that might explain a rise in high grades. Some studies, for example, cite university changes in withdrawal policies as a contributing explanation for rising grade point average (Chesen-Jacobs, Johnson and Keene, 1978, p.14; Hoyt and Reed, 1976). Universities that implement more lenient withdrawal policies make it easier for students to withdraw from courses that threaten their grade point average (Weller, 1986, p.125). While faculty might continue to grade the same, GPA might climb due to more liberal withdrawal policies. This would be another factor to consider before implicating faculty of grade inflation.
College Major
Other studies comment that the migration of student majors from low to high grading departments is a principal factor behind grade inflation (Bearden, Wolf and Grosch, 1992; Prather, Smith and Kodras, 1979; Sabot and Wakeman-Linn, 1991; Summerville, Ridley and Maris, 1990). According to this view, not all academic departments are equally responsible for grade inflation insofar as faculty in certain disciplines might inflate grades more than others. Lanning and Perkins (1995) note that faculty in the College of Education are often indicted as contributing more to grade inflation because of more emphasis on mastery learning approaches and more collaborative relations with students as future teachers. Here, a movement of students into the education field might lead to an aggregate rise in GPA for which not all faculty in all departments are responsible. Other studies note that to counteract the flight of students to higher GPA departments, traditionally low grading departments might be inflating grades more in order to recruit and retain majors (Sabot and Wakeman-Linn in Shea, 1994, p.A46).
Vocational Programs
Other studies have attributed aggregate grade increase to the growth of vocational programs within the university (Sabot and Wakeman-Linn, 1991, p.159). Such programs, they have argued, grade more on mastery and learning-competency models than other, more academic departments (Goldman, 1985, p.103). If more A and B grades are awarded in job-oriented programs than in other college departments, then a movement of students into more vocationally oriented curricula might account for an aggregate rise in high grades. This would then be another factor to control for before charging faculty with grade inflation.
In summary, prior research reports a half dozen plausible explanations for an aggregate increase in GPA other than faculty simply dispensing higher marks. These other possible explanations are 1) an increase in student aptitude and preparedness levels, 2) an increase in older, more mature college students, 3) an increase in the number of female students, 4) an increase in the leniency of university withdrawal policy, 5) a movement of students into higher grading departments, and 6) a movement of students into more vocationally oriented college programs. Each of these changes might explain or justify an aggregate increase in grade point average over the years. In this study, we control for these other plausible influences prior to identifying grade trends as "grade inflation."
Research Design
Sample Controls
Age. The influence of age on grades is held constant in our analyses by drawing a homogeneous sample of traditional college-age students. Our student sample consists of students who entered the university as full-time freshmen; the average entering age of students in our sample is nineteen years (S.E. = .05). With age held constant across our sample, an increase in graduating GPA within our sample cannot be attributed to changes in student age.
For each year of our investigation, we randomly selected 500 records of entering full-time freshmen, which resulted in a relatively large panel of freshman records. Yet, like inflation, retention is also an issue at public, open-enrollment universities, and not all students in our initial panel went on to graduate. As a result, our analysis of trends in graduating GPA is based on 1,986 graduating seniors who entered the university as full-time freshmen between 1983 and 1992. Our data are more current than those entry years imply: students who entered in 1992, for example, have had time to graduate, so data on graduating GPA are recorded up through and including the 1996 graduating year.
University Withdrawal Policy. Liberal changes in university withdrawal policies might explain an increase in average GPA. Students might use liberal withdrawal options to drop courses that they are failing or that threaten their GPA. This, however, is not a notable influence in our analysis, since our university adopted a more liberal withdrawal policy at approximately the same time that our analysis of grade trends begins. In the second year of our 14-year investigation (1984/85), the university adopted a more lenient withdrawal policy, under which students have up to eight weeks of class to withdraw from a course and receive a generic "W". Before the change in policy, withdrawal while failing (w/f) or withdrawal while passing (w/p) was noted on the student transcript. Thus, from
1983/84 to 1984/85 the number of students using their withdrawal options increased significantly and has remained steady over the remaining thirteen years of our analysis. (Note 1)
**Statistical Controls**
We statistically control influences of aptitude, gender, college major and vocational program on graduating GPA. To control influences of changes in student aptitude levels, data on ACT score are used. In using ACT as a control on aptitude, we adjusted pre-1989 student scores to equate with post-1989 enhanced version scores based on the standard ACT conversion chart. By adjusting pre-1989 scores, this allows for more accurate comparisons in ACT score across time.
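The pre-1989 score adjustment described above can be sketched as a simple lookup. The mapping values below are hypothetical placeholders, not the published ACT concordance; in practice each pre-1989 composite would be converted using the actual chart.

```python
# HYPOTHETICAL excerpt of a concordance table mapping original-scale
# ACT composites to the post-1989 enhanced scale. Real values come
# from ACT's published conversion chart.
OLD_TO_ENHANCED = {15: 16, 16: 17, 17: 18, 18: 19, 19: 20, 20: 21}

def to_enhanced_scale(composite, entry_year):
    """Return an ACT composite expressed on the enhanced (post-1989) scale."""
    if entry_year >= 1989:
        return composite                      # already on the new scale
    # Fall back to the unadjusted score if the chart has no entry.
    return OLD_TO_ENHANCED.get(composite, composite)
```

With every score on one scale, comparisons of ACT across entry cohorts become meaningful.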
To control for the influence of an increase in female students, gender enters the analysis as a dummy variable (0 = male, 1 = female). To account for shifts in student major composition as another explanation for grade increase, we based our control at the college level. Table 1 lists the nine colleges, along with the corresponding average graduating GPA for our sample of full-time entering freshmen. A review of Table 1 indicates notable differences in average graduating grade point average across colleges. Students in the College of Natural and Mathematical Sciences (mean graduating GPA = 3.17, S.E. = .042) and the College of Education (mean graduating GPA = 3.09, S.E. = .025) receive, on average, higher grades over our 14-year period. Consequently, a migration of students into either of these two colleges over the years would lead to a natural bump in graduating GPA that would not necessarily implicate individual faculty for grade inflation. To control for this influence, the college graduating averages (listed in Table 1) are included as a control variable in our analysis.
**Table 1**
Average Graduating GPA by College, 1983-1996
| College | Average Graduating GPA |
|----------------------------------------------|------------------------|
| College of Allied Health and Nursing | 2.76 (.029) |
| College of Arts and Technology | 3.05 (.026) |
| College of Arts and Humanities               | 3.00 (.038) |
| College of Business | 2.84 (.025) |
| College of Education | 3.09 (.025) |
| College of Health, P.E., and Recreation | 2.81 (.036) |
| College of Law Enforcement | 2.82 (.029) |
| College of Natural and Mathematical Sciences | 3.17 (.042) |
| College of Social and Behavioral Sciences | 2.94 (.030) |
Note: Numbers in parentheses are standard errors.
To determine the influence of vocational programs on grade inflation, we dummy-coded college major into two categories. The College of Law Enforcement, which contains the programs of police studies, correctional services, and fire safety, and the College of Applied Arts and Technology, which contains the programs of agriculture, military science, and human environmental science, along with several other programs, were both coded into one category (= 1), while the other colleges (listed in Table 1) were coded into the other (= 0). This dummy variable therefore estimates the influence of vocational programs on graduating GPA.
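The two dummy variables described above (gender and the vocational-college indicator) can be sketched as a small coding helper. The set of vocational colleges comes from the text; the variable names are our own.

```python
# Colleges coded 1 on the vocational dummy, per the text above.
VOCATIONAL_COLLEGES = {
    "College of Law Enforcement",
    "College of Applied Arts and Technology",
}

def code_predictors(gender, college):
    """Dummy-code a student record for the regression analysis."""
    return {
        "female": 1 if gender == "F" else 0,              # 0 = male, 1 = female
        "vocational": 1 if college in VOCATIONAL_COLLEGES else 0,
    }
```

Each student record then contributes a 0/1 pair to the design matrix, so the regression coefficients on these columns estimate the gender and vocational-program effects on graduating GPA.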
**Measuring Rates of Grade Inflation** We use ordinary least squares (OLS) regression to examine the extent of grade inflation on graduating GPA by student entry year. Under the null hypothesis of no inflation, student entry year should not be a significant predictor of graduating GPA. That is, time of entry into the University should not influence grade point average. Yet, under conditions of grade inflation, time becomes an important influence on GPA, with recently enrolled students earning significantly higher grade point averages upon graduation than students of ten years prior. Moreover, if student entry year is a significant predictor of graduating GPA, then we would expect it to remain significant when other possible explanations (controls) are added into the regression analysis.
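The null-hypothesis test described above can be sketched with ordinary least squares on synthetic data. Everything below is fabricated for illustration only (the authors do not publish their data or code): we build in a known upward drift and check that the entry-year slope recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: entry years 0..13 (standing in for 1983..1996),
# with a built-in upward drift of 0.02 GPA points per year plus noise.
years = rng.integers(0, 14, size=500)
gpa = 2.8 + 0.02 * years + rng.normal(0.0, 0.3, size=500)

# Design matrix [1, year]; the OLS slope b1 estimates the annual drift.
# Under the no-inflation null, b1 would not differ significantly from zero.
X = np.column_stack([np.ones_like(years, dtype=float), years])
(b0, b1), *_ = np.linalg.lstsq(X, gpa, rcond=None)

print(round(b1, 3))  # close to the built-in 0.02 drift
```

In the paper's analysis the same regression additionally includes the control variables (gender, ACT score, average college GPA) as extra columns of the design matrix.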
**Measuring Student Ability Levels** The principal purpose of this study is to compare grade trends between students of varying incoming ability. To make such comparisons, we base our distinctions on the quartile and interquartile ranges of ACT scores in our sample. This results in the following subgroups: students with composite ACT scores from 10 through 17, from 18 through 21, and greater than or equal to 22 are categorized as low, typical, and higher ability students, respectively.
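The subgrouping just described is a straightforward bucketing of composite ACT scores. A minimal sketch (the function name is our own, not the authors'):

```python
def act_group(act: int) -> str:
    """Map a composite ACT score to the paper's aptitude subgroups."""
    if 10 <= act <= 17:
        return "low"
    if 18 <= act <= 21:
        return "typical"
    if act >= 22:
        return "higher"
    raise ValueError(f"ACT score {act} is outside the sample range")
```

Note the cut points are closed on both ends, so 17 falls in the low group and 18 in the typical group.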
Separate OLS regression analyses are then used to compare differences in grade inflation rates between these student aptitude groups. To determine whether observed differences in rates of grade increase between student groups are statistically significant ($p < .05$), we then examine the interaction effect of ACT subgroup with student entry year. We explain this procedure in more detail below.
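The interaction test works by adding a (subgroup dummy × entry year) column to the regression; its coefficient estimates the difference in annual slopes between the two groups. The sketch below uses synthetic data with a built-in slope difference (0.03 vs. 0.02 per year, echoing the low-vs.-typical contrast) and is an illustration under our own assumptions, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Synthetic illustration of one subgroup pair, e.g. low (=1) vs.
# typical (=0) ACT students, with a built-in difference in annual
# drift: 0.03 for the low group, 0.02 for the typical group.
year = rng.integers(0, 14, size=n)
low = rng.integers(0, 2, size=n)
slope = np.where(low == 1, 0.03, 0.02)
gpa = 2.8 + slope * year + rng.normal(0.0, 0.15, size=n)

# Design matrix [1, year, dummy, dummy * year]; the last column is
# the interaction term whose coefficient b[3] measures the subgroup
# difference in inflation rate (built in here as ~0.01 per year).
X = np.column_stack([np.ones(n), year, low, low * year])
b, *_ = np.linalg.lstsq(X, gpa, rcond=None)

print(round(b[3], 3))
```

A significance test on b[3] (coefficient divided by its standard error) then decides whether the two groups' inflation rates genuinely differ, which is the comparison reported in the text.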
**Regression Results**
Table 2 summarizes our regression results. Model A reports the influence of student entry year, and other potential influences, on graduating GPA for our full sample ($n = 1,968$) of graduating full-time freshmen. Significant slope coefficients on each of our control variables suggest that each is important to graduating GPA. For example, regression results report gender as a significant influence on graduating GPA, with female students graduating, on average, with significantly higher grade point averages than male students. Moreover, gender remains a significant predictor when college major and ACT score are controlled in the regression. This suggests that the higher aggregate GPA among female students is not due solely to females migrating to higher grading departments; irrespective of college major as well as ACT score, female students tend to graduate with grade point averages .123 points higher than their male counterparts. In short, regression results on gender show, following national trends, that female college students are more grade conscious than male college students.
**Table 2**
*Graduating GPA: Regression Estimates of Grade Inflation, 1983-1996: Full-time Entering Freshmen*
| | Model A Full Sample | Model B Low ACT Students (ACT 10 - 17) | Model C Typical ACT Students (ACT 18 - 21) | Model D Upper ACT Students (ACT > 21) |
|----------------------|---------------------|----------------------------------------|------------------------------------------|-------------------------------------|
| n, sample size | 1,968 | 379 | 896 | 693 |
| R², squared mult. R | .308 | .18 | .13 | .22 |
| b₀, intercept | -.018 (.182) | -.021 (.438) | -.71 (.35) | .60 (.330) |
| Student Entry Year | .021*** (.003) | .031*** (.006) | .018*** (.004) | .019*** (.005) |
| ACT score | .053*** (.002) | .035** (.013) | .059*** (.011) | .058*** (.006) |
| Gender | .136*** (.018) | .122*** (.037) | .065* (.026) | .229*** (.031) |
| Average College GPA | .570*** (.065) | .665*** (.145) | .785*** (.095) | .305** (.110) |
Interaction Effects:
- Low vs. typical ACT × Student Entry Year
- Low vs. higher ACT × Student Entry Year
- Typical vs. higher ACT × Student Entry Year
Note: Numbers in parentheses are standard errors.
* p < .05; ** p < .01; *** p < .001
But our emphasis is not on the influence of gender, nor ACT scores, nor college major in predicting GPA upon graduation. These are added as controls in our analysis to determine whether increases in graduating GPA are the result of grade inflation or these factors. With these other influences controlled for, Model A reports a significant slope coefficient for student entry year. This shows that the year of entry into the university is a significant predictor of graduating GPA despite changes in ACT score, gender and college major. The slope coefficient for student entry year (b₁ = .021) shows a steady increase in graduating GPA from 1983 to 1996. The coefficient shows an approximate rise of .021 grade points annually since 1983. Looked at over a five-year trajectory, regression results estimate that graduating GPA has risen, on average, more than one tenth (.1) of a grade point every five years since 1983.
Models B, C and D compare grade inflation rates between low, typical and higher ability students. Slope coefficients on student entry year for each model are revealing. Comparisons of these coefficients across models show the rate of inflation in graduating GPA to be higher for low aptitude students than for the other student subgroups. Regression coefficients measure the grade inflation rate for typical and upper ACT students at annual increases of .018 (Model C) and .019 (Model D) grade points, respectively. By contrast, the rate of grade inflation for lower aptitude students (Model B) was estimated at .031 grade points annually. This suggests that roughly every three years since 1983, lower aptitude students have experienced an average rise of one-tenth (.1) of a grade point in graduating GPA.
To determine whether these differences in rates of grade increase are statistically significant, we tested the dummy interaction of ACT subgroup with student entry year. A significant coefficient on this variable would indicate that ACT subgroup and year of entry interact to predict graduating GPA. This would suggest important ACT subgroup differences in grade inflation rates across time. The first tested interaction, of low (=1) versus typical (=0) ACT subgroups with student entry year, is significant. This suggests an important difference in rates of grade inflation between low and typical ACT students, with low aptitude students experiencing significantly higher rates of grade inflation. On the other hand, results report a nonsignificant interaction between low (=0) versus higher (=1) aptitude students and student entry year. Therefore, results show no important difference in grade inflation rates between low and higher aptitude students. Finally, results also report a nonsignificant interaction between typical (=0) and higher (=1) aptitude students and student entry year. This also shows no important difference in annual rates of grade increase between typical and higher ability students. In summary, interaction effects report higher grade inflation rates for lower aptitude students in comparison to typical ability students.
Table 3 reports our final control, for the influence of vocational programs on grade inflation. The influence of job-oriented programs on grade trends among lower ACT students might be most relevant, since less college-prepared students may be more likely to enroll in college programs that provide job-related training. This may be an especially important control variable in explaining increases in grade point average among less college-ready students.
**Table 3**
*Graduating GPA:*
*Regression Estimates of Grade Inflation, 1983-1996*
*Controlling for the Influence of Vocational Programs*
| | Model A Full Sample | Model B Low ACT Students (ACT 10 - 17) |
|----------------------|---------------------|---------------------------------------|
| **n, sample size** | 1708 | 297 |
| **R², squared mult. R** | .28 | .18 |
| **b₀, intercept** | 1.58 (0.055) | 1.86 (0.228) |
| **Student Entry Year** | .023*** (0.003) | .034*** (0.007) |
| **ACT Score** | .055*** (0.003) | .035* (0.014) |
| **Gender** | .196*** (0.019) | .207*** (0.041) |
| **Vocational** | .032 (0.021) | .018 (0.048) |
Note: Numbers in parentheses are standard errors.
* p < .05; ** p < .01; *** p < .001
Model A in Table 3 reports the influence and control of vocational programs on graduating GPA for our full sample. With vocational curricula included as a control variable, entry year remains a separate and significant influence on GPA upon graduation. This suggests that our initial assumption, that part of the rise in graduating GPA might be the result of a migration of students into more job-related curricula, is not even a partial explanation for the identified rise in graduating GPA. The same findings apply to lower aptitude students (Model B): vocational programs were not significant predictors of graduating GPA, nor were they important controls in explaining grade inflation among lower aptitude students.
**Conclusion**
With any number of aptitude, institutional, and other demographic factors held constant in our analysis, our general regression model reports a consistent climb in graduating GPA from 1983 to 1996. Further, our subgroup models show an even higher rate of grade inflation among the lower aptitude student group over the years. Moreover, by controlling for the influence of other institutional and demographic explanations of the grade rise, we believe we have isolated the aspect of grade increase that might be due to changes in individual faculty grading behavior. Considering, for example, the substantial increase in grade point average among lower aptitude students, our findings show that faculty seem to be more benevolent in assigning grades to low ability students than perhaps fourteen years ago. This suggests a possible change in faculty grading behavior, in that faculty might increasingly be relying on grades to encourage and stimulate learning among more marginal students.
In short, it seems as if faculties at open-admissions universities may embrace the equalizing mission of higher education more so than faculties at selective colleges and elite universities. In the classroom, this might mean that faculty are dismantling the hierarchy of learning that is implied by a normal distribution of grades. Outside the classroom, this might mean that faculty are grappling with broader issues of opportunity and social mobility (Birnbaum, 1977, pp. 523-524). During the Vietnam War, for example, grades took on deeper significance than a report on course performance. Likewise, college grades today may carry deeper significance as a college degree increasingly becomes a prerequisite for future economic survival. Consequently, faculty today, as during the Vietnam War, might be giving more good grades out of concern for the futures of students generally, and of more marginal students especially.
All told, grade inflation has been denounced as a faculty failure to impart meaningful distinctions between students. Thus, it supposedly shows a lack of faculty accountability to students, parents, and the larger society. Yet grade inflation might go beyond finger-pointing and blunt accusation; it might reflect a complex social mix in which faculty, through grades, might be trying to foster positive feeling toward learning, and in which faculty might be awarding higher marks to confer the necessary credentials, and thus future prospects of employment and job security, on outgoing students. On their face, these may be benign, even benevolent, approaches to the meaning and purpose of grades. Yet we wonder whether grades are the appropriate mechanism with which to tackle burning issues of mobility, opportunity, and job security. On this latter dimension, we wonder whether such a program of grade encouragement and credentialing might not reinforce an ideology of equal opportunity through education. Thus, rather than ameliorating the current system of economic inequities and class hierarchy, current grading trends might be providing the necessary justifying ideology for it.
**Note**
1. Data obtained from our Office of Institutional Research report the percentage of course withdrawals for each year of our investigation as follows: 1983/84 = 4.76; 1984/85 = 7.04; 1985/86 = 6.93; 1986/87 = 7.03; 1987/88 = 7.17; 1988/89 = 7.37; 1989/90 = 7.68; 1990/91 = 8.25; 1991/92 = 8.33; 1992/93 = 9.09; 1993/94 = 10.06; 1994/95 = 9.64; 1995/96 = 9.24.
**Acknowledgments**
This article was born out of the work of an Ad Hoc Committee to study the problem of grade inflation at Eastern Kentucky University. The authors would like formally to acknowledge the work of other Committee members in lobbying the Faculty Senate to pass any number of recommendations to curb grade inflation on campus. Other members of the Ad Hoc Committee on Grade Inflation were Ann Chapman, Paula Kopacz, and Richard Chen. The authors would also like to thank Aaron Thompson and Karen Carey of Eastern Kentucky University, as well as Scott Hunt of the University of Kentucky, for comments on earlier drafts.
**References**
Adelman, C. (1995). *The New College Course Map and Transcript Files: Changes in Course-Taking and Achievement, 1973-1993*. U.S. Department of Education: National Institute of Post Secondary Education.
Archibold, R. (1998, February 18). Just because the grades are up, are Princeton students smarter? *The New York Times*, p.A1.
Bearden, J., Wolf, R., & Grosch, J. (1992). Correcting for grade inflation in studies of academic performance. *Perceptual and Motor Skills, 74*, 745-746.
Beaver, W. (1997). Declining college standards: It's not the courses, it's the grades. *The College Board Review, 181*, 2-7.
Birnbaum, R. (1977). Factors related to university grade inflation. *Journal of Higher Education, 48*(5), 519-539.
Breland, H. (1976). Grade inflation and declining SAT scores: A research view point. Paper presented at the Annual Meeting of the American Psychological Association. Washington, D.C., September 1976. (ERIC Document Reproduction Service No. ED134610).
Carney, P., Isakson R., & Ellsworth R. (1978). An exploration of grade inflation and some related factors in higher education. *College and University, 53*(2), 217-230.
Cluskey, G.R., Griffin N., & Ehlin C. (1997). Accounting grade inflation. *Journal of Education for Business, 73* (May / June), 273-277.
Farley, B. (1995). 'A' is for average: The grading crisis in today's colleges. *Issues of Education at Community Colleges: Essays by Fellows in the Mid-Career Fellowship Program at Princeton University*. (ERIC Document Reproduction Service No. ED384384).
Goldman, L. (1985). The betrayal of the gatekeepers: Grade inflation. *Journal of General Education, 37*(2), 97-121.
Gose, B. (1997). Duke rejects a controversial plan to revise the calculation of grade-point averages. *Chronicle of Higher Education, 43*(28), G53.
Hoyt, D.& Reed J. (1976). Grade inflation at KSU: Student and faculty perspectives. Research Report No. 36. June 1976. (ERIC Document Reproduction Service No. ED160018).
Jacobs-Chesen, L., Johnson, J. & Keene J. (1978). University grade inflation after controlling for courses and academic ability. Bureau of Evaluative studies and Testing, Division of Research and Development, Indiana University, Bloomington, Indiana 47401. (ERIC Document Reproduction Service No. ED156050).
Kolevzon, M. (1981). Grade inflation in higher education: A comprehensive survey. *Research in Higher Education*, 15 (3), 195-212.
Kwon, I., Kendig N.& Bae M. (1997). Grade inflation from a career counselor perspective. *Journal of Employment Counseling*, 34 (2), 50-54.
Lanning, W. & Perkins, P. (1995). Grade inflation: A consideration of additional causes. *Journal of Instructional Psychology*, 22 (2), 163-168.
Moore, P. (1996). Tenured weasels: Getting a degree -but not an education at public universities. (an electronic forum with Internet Address: http://www.bus.lsu.edu/accounting/faculty/lcrumbley/weasels.edu).
Mullen, R. (1995). Indicators of grade inflation. Paper presented at the Annual Forum of the Association for Institutional Research (35th, Boston, MA, May 28-31, 1995). (ERIC Document Reproduction Service No. ED386970).
Olsen, D. (1997). Reality or myth? Student preparation level vs. grades at Brigham Young University, 1975-1994. Paper presented at the Annual Forum of the Association for Institutional Research (37th, Orlando, FL., May 18-21, 1997). (ERIC Document Reproduction Service No. ED410880).
Prather, J., Smith, G. & Kodras J.(1979). A longitudinal study of grades in 144 undergraduate courses. *Research in Higher Education*, 10 (1), 11-24.
Reibstein, L. & King, P. (1994, June 13). Give me an A, or give me death. Newsweek, p.62.
Remigius, D. (1979). Declining ACT scores and grade inflation at Southeastern Louisiana University. Research/Technical Report. (ERIC Document Reproduction Service No. ED177992).
Sabot, R. & Wakeman-Linn, J. (1991). Grade inflation and course choice. *Journal of Economic Perspectives*, 5 (1), 159-170.
Shea, C. (1994). Grade inflation's consequences: students are said to desert the sciences in favor of easy grades in humanities. *Chronicle of Higher Education*, 40 (18), p. A45.
Sowell, T. (1994, July 4). A gentleman's A. *Forbes*, 154, p. 82
Stone, J.E (1995). Inflated grades, inflated enrollment, and inflated budgets: An analysis and call for review at the state level. *Education Policy Analysis Archives*, 3(11). (Entire issue.) (Available online at http://epaa.asu.edu)
Strauss, V. (1997, June 12). Colleges seek to slow grade inflation rates. *Washington Post*. p. A1.
Summerville, R., Ridley, D., & Maris, T. (1990). Grade inflation: The case of urban colleges and universities. *College Teaching*, 38 (1), 33-38.
Taylor, H. (1975). Grade inflation. Paper presented at the Symposium on Grading. University of Victoria, Victoria, British Columbia, Canada, October 10, 1975. (ERIC Document Reproduction Service No. ED150204).
Van Allen, G. (1996). Educational morality: A task of risking the economic corruption of academic excellence. A position paper. (ERIC Document Reproduction Service No. ED317232).
Weller, D. (1986). Attitude toward grade inflation: A random survey of American Colleges of Arts and Sciences and Colleges of Education. *College and University*, 61(2), 118-127.
**About the Authors**
**Stephanie J. McSpirit**
Stephanie J. McSpirit received her Ph.D. from the University of Buffalo in 1994, after which she accepted a position at Eastern Kentucky University. She is an Assistant Professor of Sociology, where she teaches courses in research methods and statistics. For the past three years she has examined trends in grade point averages along with faculty views on grade inflation. This research has been an outgrowth of serving on the EKU Faculty Senate Ad Hoc Committee on Grade Inflation.
*Email: email@example.com*
*Web page: http://www.anthropology.eku.edu/MCSPRITI/*
**Kirk E. Jones**
Kirk E. Jones received his Ph.D. from Iowa State University in 1991. He has been a member of the faculty at Eastern Kentucky University since 1990. He is an Assistant Professor of Mathematics where he teaches courses ranging from college algebra through real and complex analysis. For the past three years he has served as Chair of the EKU Faculty Senate Ad Hoc Committee on Grade Inflation. Recent scholarly activity has been an outgrowth of serving on this University committee.
Email: firstname.lastname@example.org
Web page: http://cagle.eku.edu/~jones/jones.html
Copyright 1999 by the Education Policy Analysis Archives
The World Wide Web address for the Education Policy Analysis Archives is http://epaa.asu.edu
General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, email@example.com or reach him at College of Education, Arizona State University, Tempe, AZ 85287-0211. (602-965-9644). The Book Review Editor is Walter E. Shepherd: firstname.lastname@example.org. The Commentary Editor is Casey D. Cobb: email@example.com.
EPAA Editorial Board
| Name | Institution |
|-----------------------------|--------------------------------------------------|
| Michael W. Apple | University of Wisconsin |
| John Covaleskie | Northern Michigan University |
| Alan Davis | University of Colorado, Denver |
| Mark E. Fetler | California Commission on Teacher Credentialing |
| Thomas F. Green | Syracuse University |
| Arlen Gullickson | Western Michigan University |
| Aimee Howley | Ohio University |
| William Hunter | University of Calgary |
| Daniel Kallós | Umeå University |
| Thomas Mauhs-Pugh | Green Mountain College |
| William McInerney | Purdue University |
| Les McLean | University of Toronto |
| Anne L. Pemberton | firstname.lastname@example.org |
| Richard C. Richardson | New York University |
| Dennis Sayers | Ann Leavenworth Center for Accelerated Learning |
| Michael Scriven | email@example.com |
| Robert Stonehill | U.S. Department of Education |
| David D. Williams | Brigham Young University |
| Greg Camilli | Rutgers University |
| Andrew Coulson | firstname.lastname@example.org |
| Sherman Dorn | University of South Florida |
| Richard Garlikov | email@example.com |
| Alison I. Griffith | York University |
| Ernest R. House | University of Colorado |
| Craig B. Howley | Appalachia Educational Laboratory |
| Richard M. Jaeger | University of North Carolina—Greensboro |
| Benjamin Levin | University of Manitoba |
| Dewayne Matthews | Western Interstate Commission for Higher Education|
| Mary McKeown-Moak | MGT of America (Austin, TX) |
| Susan Bobbitt Nolen | University of Washington |
| Hugh G. Petrie | SUNY Buffalo |
| Anthony G. Rud Jr. | Purdue University |
| Jay D. Scribner | University of Texas at Austin |
| Robert E. Stake | University of Illinois—UC |
| Robert T. Stout | Arizona State University |
EPAA Spanish Language Editorial Board
Associate Editor for Spanish Language
Roberto Rodríguez Gómez
Universidad Nacional Autónoma de México
firstname.lastname@example.org
Adrián Acosta (México)
Universidad de Guadalajara
email@example.com
J. Félix Angulo Rasco
(Spain)
Universidad de Cádiz
firstname.lastname@example.org
Teresa Bracho (México)
Centro de Investigación y Docencia Económica-CIDE
bracho dis1.cide.mx
Alejandro Canales (México)
Universidad Nacional Autónoma de México
email@example.com
Ursula Casanova (U.S.A.)
Arizona State University
firstname.lastname@example.org
José Contreras Domingo
Universitat de Barcelona
email@example.com
Erwin Epstein (U.S.A.)
Loyola University of Chicago
firstname.lastname@example.org
Josué González (U.S.A.)
Arizona State University
email@example.com
Rollin Kent (México)
Departamento de Investigación Educativa- DIE/CINVESTAV
firstname.lastname@example.org
María Beatriz Luce (Brazil)
Universidad Federal de Río Grande do Sul- UFRGS
firstname.lastname@example.org
Javier Mendoza Rojas (México)
Universidad Nacional Autónoma de México
email@example.com
Marcela Mollis (Argentina)
Universidad de Buenos Aires
firstname.lastname@example.org
Humberto Muñoz García (México)
Universidad Nacional Autónoma de México
email@example.com
Daniel Schugurensky (Argentina-Canadá)
OISE/UT, Canada
firstname.lastname@example.org
Angel Ignacio Pérez Gómez (Spain)
Universidad de Málaga
email@example.com
Simon Schwartzman (Brazil)
Fundação Instituto Brasileiro de Geografia e Estatística
firstname.lastname@example.org
Jurjo Torres Santomé (Spain)
Universidad de A Coruña
email@example.com
Carlos Alberto Torres (U.S.A.)
University of California, Los Angeles
firstname.lastname@example.org
Education Policy Analysis Archives
Volume 7 Number 31
October 12, 1999
ISSN 1068-2341
A peer-reviewed scholarly electronic journal
Editor: Gene V Glass, College of Education
Arizona State University
Copyright 1999, the EDUCATION POLICY ANALYSIS ARCHIVES.
Permission is hereby granted to copy any article if EPAA is credited and copies are not sold.
Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.
Children's Rights and Education in Argentina, Chile and Spain
David Poveda
Viviana Gómez
Claudia Messina
Autonomous University of Madrid
**Abstract**
This article is a first attempt to relate the UN Convention on the Rights of the Child to education policy. It compares three countries, Argentina, Chile and Spain, in an attempt both to present particular problems that are of pressing concern in each and to propose a framework that might reveal some possible obstacles to the implementation of children's rights. The article is divided into three sections. In the first section, a comparative review of the formal dispositions and legislative changes in the three countries is presented. Some of the most notable contrasts are briefly contextualized in the history of each nation-state. In the second section, particular problems in each nation are reassessed through the lens of the Convention. Three cases are examined: in Argentina, the funding and organization of public compulsory education; in Chile, an instance of international cooperation in education; in Spain, the relations between public and private education and ethnic segregation. Finally, a general framework is discussed, using these three cases as examples.
**Introduction**
A tenet of modernity is to consider education as one of the most important means of advancing a society and enhancing the quality of life of its citizens. Contrary to common thinking, this idea (as captured in proposals such as universal compulsory education or public funding of schools) has been forwarded by European, North American and South American countries since the middle of the nineteenth century. After World War II, with the consolidation of the "welfare state," the commitment to education has been greatly increased; and the development of strong state educational systems is currently considered a bastion of a country's social capacity.
Such commitments represent general ideas that can lead to several different, and even incompatible, interpretations and consequences. Therefore, it is necessary to try to understand how these expressions of commitment can be translated into specific and coherent proposals. The agenda set by the United Nations' Convention on the Rights of the Child (1989) (also referred to here as "the Convention") advances education as a fundamental right and provides guidelines for its implementation. However, many other problems remain unresolved, allowing for great variability among nation-states in how this right is provided to children. Some of the pressing concerns include issues such as: What are the resources needed to provide quality education? How can education act to lessen socio-economic inequalities? What is the nature of international cooperation programs? What is the commitment of countries with scarce (or not so scarce) resources to education?
These questions work on two distinct but interrelated dimensions. At one level, there is the problem of interpreting the meaning of the articles of the Convention. The proposals of this document are the result of particular historical and social constructions of childhood (Casas, 1998). Discussions about the implications of the Convention can have more impact than would be apparent at first glance. As a binding document for those countries that have ratified it, it may be used as a legal instrument at both the national and supra-national level. For example, in the European Union the European Court of Justice has the capacity to overturn judicial decisions and procedures established at the state level and may use the *Convention on the Rights of the Child* as a referent, since it is a document ratified by all its members (Verhellen, 1997). Currently, this is becoming clear as the tragic and much publicized "Thompson and Venables" case is being reviewed by the European Court, with potential implications for legal procedures in England and Wales (Jones, 1997). At a second level, children's rights are social practices, and formal education in particular is an institution that stems from the ideals and practical constraints that states and citizens put into operation. The heterogeneity, contradictions and divergent interests of different social groups and institutions account for the range of forms of schooling that one finds across and within nations.
Interest in these topics has been increasing in recent years and is supported by the existence of a European Network on Children's Rights and work currently being done on the topic in the Department of Developmental and Educational Psychology of the Autonomous University of Madrid. As a result, the authors of this article began discussing these matters and contrasting our different experiences. As researchers and educational professionals from three different countries (Argentina, Chile and Spain), we found that several contrasts and questions emerged when we discussed some of the issues that the Convention poses. These three countries reflect diverse and complex realities and, although they share certain historical and cultural ties, are located in different regions of the world with their own social and economic histories. Currently, Latin America (including Chile and Argentina) is experiencing important social, political and economic changes. Formal education and the life conditions of children are clearly part of these transformations and deserve attention. Spanish educational policy is in a period of rapid and significant transformation, resulting from the full implementation of an educational reform begun at the start of the 1990s and the political changes occurring at this time (Marchesi & Martín, 1998). *The Convention on the Rights of the Child* (1989) has been ratified by all three countries, and thus can be used as a lens to probe and contrast the characteristics of the three nations. Most importantly, it can be used as an instrument to highlight and interpret selected problems being experienced by each country.
This article is a first attempt to elucidate this topic and is primarily concerned with establishing some baseline questions and data that may allow further research on particular problems. A description at the formal level, especially one contrasting Spain and the Latin American countries, can be of more interest than initially apparent. As Spanish and Latin American educational research and policy are currently construed, Spain seems to be placed in a consulting position, offering services and standards that Latin American countries have not attained. Such a claim may be supportable as it relates to the economic and political resources that can currently be mobilized in each nation. Yet, as we will see, this view is not accurate with respect to the intentions and efforts that have taken place in education in the second half of this century in Spain, Chile or Argentina. The political history and legislative developments in education on each side of the Atlantic have had their own evolution and ideological sources, without Spain being a specific referent for Argentina or Chile during most of this time. A case analysis of each situation allows us to delve into educational and social problems that are of utmost concern. In particular, analyzing them against some of the tenets of the Convention introduces new possibilities that are less often explored in discussions of these topics.
The presentation is divided into three parts. First, a comparative analysis of the formal arrangements and legal dispositions proposed by the three countries regarding education will be made. This permits an assessment of those aspects in which they converge, those in which they diverge, and what may be the underlying reasons for these commonalities and variations. Second, a case analysis of each country will be presented. The case analyses are not structured according to the same questions in the three contexts. Our choice has been to present instances in each nation that are both of interest to us and have been controversial in educational discussions of each country, thus presenting a small portrait of trends and tensions in each region. Finally, we forward a conceptual framework, using the *Convention on the Rights of the Child* (1989) as a matrix, that may allow us to make some generalizations about the type of situations these cases represent.
**Meeting Children's Rights and Education at the Formal Level**
A number of articles in the Convention refer, directly or indirectly, to the goals and conditions that should be part of an educational system that meets children's rights. Based on the content of the articles, it is possible to arrange them under four thematic clusters:
- The means that make education accessible.
- The means that support education in groups with special needs.
- Curricular and pedagogical goals.
- The rights of specific social groups.
**Table 1**
**Formalization of Rights Regarding the Means to Make Education Accessible**
| KEY RIGHTS | ARGENTINA | CHILE | SPAIN |
|------------|-----------|-------|-------|
| FREE AND COMPULSORY PRIMARY EDUCATION (art. 28-1a). | Compulsory Education from 5 to 14 years of age: Last year of pre-school (5 years of age) General Basic Education (6-14 years of age). | Compulsory Education from 6 to 13 years of age: General Basic Education (6-13 years of age). | Compulsory Education from 6 to 16 years of age: Primary Education (6-12 years of age). Compulsory Secondary Education (13-16 years of age). |
| DEVELOPMENT OF ACCESSIBLE PROFESSIONAL AND GENERAL SECONDARY EDUCATION (art. 28-1b). | Polimodal Education (15-17 years of age): humanistic, social and scientific and technical (1 extra year) tracks. | Secondary Education (14-18 years of age): scientific, humanistic and technical-professional (1 extra year) tracks. | Pre-university Baccalaureate (16-18 years of age): humanistic, social and scientific tracks. Professional Education (two cycles, 16-19/19-21 years of age): multiple professional modules. |
| ACCESSIBLE HIGHER EDUCATION, BASED ON CAPACITY (art. 28-1c). | Open admission to the General Basic Cycle at Public Universities. Upon successful completion, access to degree cycle. | Access to Public University determined by National Aptitude Test developed by the Ministry of Education. | Access to University determined by: Selective Exam, organized by the public universities, and grades during pre-university education. |
| DEVELOPMENT OF INFORMATION AND ACADEMIC AND PROFESSIONAL COUNSELING PROGRAMS (art. 28-1d). | Psychophysical Units at the district level: composed by social workers, psychologists and doctors. Orientation Departments at the universities. | Psychopedagogical Teams at the district level. A professional and academic counselor in each secondary school. | Psychopedagogical Teams at the district level for primary schools. Psychopedagogical Orientation Teams in each secondary school. |
Table 1 summarizes which dispositions in each country make education accessible. Art. 28 aims at making at least basic education
compulsory (sect. 1a), developing secondary education both in its academic and professional strands (sect. 1b), making higher education accessible to larger parts of the population based on capacity criteria (sect. 1c), and providing counseling and support services to students during their education (sect. 1d). At one level there appears to be much agreement on these issues, since all countries have engaged in educational reforms (all three passed legislation in the 1990's) that meet these demands, and these new policies mostly propose arrangements that address the same principles. However, there are a number of interesting contrasts, most notably the number of years of compulsory education and the age-range in which it is placed.
The beginning of compulsory education ranges from 5 (Argentina) to 6 years of age (Chile and Spain) and the ending ranges from 13 (Chile) to 16 years of age (Spain). Within the educational system, this allows for a variety of arrangements, from making the last year of kindergarten compulsory in the case of Argentina to having a separate compulsory secondary stage in the case of Spain. These variations are the result of policy design and practical constraints in the arrangement of the educational system but may also reflect important social issues. At the entry level, although pre-school education is encouraged and supported for several theoretical-pedagogical reasons, making it compulsory is intertwined with social factors. In Spain, the implicit push for early childhood education has been related to the increasing number of middle-class mothers working outside the home; this began to become a priority in the 1980's, with some municipalities (primarily in large cities) establishing early education centers. However, in Chile and Argentina, early childhood education began over thirty years ago as part of the extension of education to larger sectors of the population; thus in its origins it was directed to lower-working class children and families. With regard to school exit, it is important to consider how well "harmonized" the legal minimum age to enter the workforce (or apprenticeship arrangements) is with the end of compulsory education; as we will see below, the two are not always coordinated, and this mismatch has been a contributing factor in different legal reforms.
Finally, Argentina's policy regarding entrance at public universities is noteworthy. An important political claim during its democratic transition was making university education tuition-free, with open (unrestricted) access for all students who had completed pre-university secondary education. This goal was achieved, making Argentina the only country of those studied here (and also contrasting with many other countries in the world) with an open admissions policy that makes higher education accessible to a much larger student population than before. However, this has introduced other "mechanisms" that are not common in the rest of the countries, such as dividing higher education into a "general" cycle and a "degree" cycle with a series of selection exams between the two stages.
Table 2
Formalization of Rights Regarding Support Conditions for Populations with Special Needs
| KEY RIGHTS | ARGENTINA | CHILE | SPAIN |
|---------------------------------------------------------------------------|----------------------------------------------------------------------------|----------------------------------------------------------------------|-----------------------------------------------------------------------|
| ACCESS TO EDUCATION AND OTHER FORMS OF SOCIAL INTEGRATION FOR PHYSICALLY OR MENTALLY HANDICAPPED CHILDREN (art. 23-3). | Legislation regarding the social integration of people with special needs. | Specialists in each school to attend to students with learning difficulties. | Special Education schools. Mainstreaming programs. |
| FINANCIAL ASSISTANCE IN CASE OF NEED, AT ALL EDUCATIONAL LEVELS (art. 28-1b). | Financial assistance during compulsory education: family subsidies, meal programs, grants for school materials. Scholarship programs for post-compulsory education. | Free school texts in primary education. President of the Republic school grants (meal programs, school materials). Grants for secondary and higher education. | Support during primary education and early childhood education: meal programs, school material grants. Grants for secondary and higher education. |
| PROGRAMS TO INCREASE SCHOOL ATTENDANCE (art. 28-1c). | Legislation regarding programs to facilitate and guarantee school completion. | Very low attrition rates. Objective of financial support programs (see above). | Social Work programs and action plans in rural areas and socio-economically disadvantaged urban areas. Objective of financial support (see above). |
| LEGISLATION REGARDING MINIMUM WORK AGE AND PROTECTION AGAINST CHILD LABOR (arts. 32-1; 32-2a). | Minimum work age: 15. | Minimum work age: 15. | Minimum work age: 16. |
To make educational rights effective, the Convention explicitly discusses the arrangements that should be provided for certain groups. Art. 23-3 discusses the accommodations that should be made for children with disabilities, including those that relate to making education accessible. Arts. 28-1b-1c, 32-1 and 32-2a make reference to provisions that seem necessary to make education accessible to students from underprivileged circumstances. As reflected in Table 2, at one level all countries seem to have formalized these points in the educational system. Special Education arrangements for "gross
disabilities" (physical, mental handicap) exist from the beginning of childhood education, both in separate special-needs schools/units and in "mainstreaming" programs; and screening/educational adaptations for students with learning difficulties start in primary education. Some private organizations in each country (O.N.C.E in Spain, Teletón in Chile) play an important role in providing resources for children with special needs, both inside their institutions and in public schools. Also, several efforts exist to support underprivileged students, such as free-lunch programs and nutritional supplements, scholarships and financial assistance. However, the disparity of circumstances in which these arrangements are implemented makes comparisons very difficult, since target populations range from urban low-class students to extremely isolated rural and indigenous populations.
An important contrast is the minimum age for entering the workforce. Spain is the only one of the three nations in which this age coincides with the end of compulsory education, an arrangement that began with the 1990 educational reform (before this, compulsory education ended at 14 years of age, while the minimum age to work was 16). In the other two countries, the minimum working age is one or two years above the end of compulsory education, which leaves an uncertain gap for youths who leave school early.
Table 3
Formalization of Rights Regarding Curricular and Pedagogical Objectives
| KEY RIGHTS | ARGENTINA | CHILE | SPAIN |
|---------------------------------------------------------------------------|----------------------------------------------------------------------------|----------------------------------------------------------------------|-----------------------------------------------------------------------|
| EDUCATION GEARED TOWARDS THE FULL DEVELOPMENT OF CHILDREN'S PERSONALITY, PHYSICAL AND MENTAL CAPACITIES (art. 29-1a). | Legislation states that "education should provide permanent and integral instruction so students can self-realize as persons". | Legislation proposes that education should enhance the correct development of children's personality, physical and mental capacities. | Each educational level (pre-school, primary and secondary) states a series of curricular and educational goals. |
| EDUCATION RESPECTFUL OF HUMAN RIGHTS AS EXPRESSED BY THE UNITED NATIONS (art. 29-1b). | Legislation embracing these principles in education. | Cross-curriculum themes. | Cross-curriculum themes. |
| EDUCATION RESPECTFUL OF NATIONAL AND FAMILY CULTURAL VALUES (art. 29-1c). | Legislation making this explicit. | Legislation upholding the right of parents to choose schools for their children. | Part of the curriculum. Regional de-centralization allows for regional subject curricula. |
| EDUCATION IN TOLERANCE AND RESPECT FOR GENDER, ETHNIC, RELIGIOUS AND CULTURAL DIFFERENCES (art. 29-1d). | Mentioned as a general principle of the Federal Law of Education. | Mentioned as a general principle of the Organic Law of Education. | Cross-curricular theme. |
| EDUCATION RESPECTFUL OF THE NATURAL ENVIRONMENT (art. 29-1e). | Cross-curricular themes. | Ecology as a curricular subject. | Part of the Natural Sciences curriculum. Cross-curricular theme. |
| DISCIPLINE RESPECTFUL OF CHILDREN'S RIGHTS AND DIGNITY (art. 28-2). | Legislation banning physical punishment as a disciplinary measure. | Legislation banning physical punishment as a disciplinary measure. | Legislation banning physical punishment as a disciplinary measure. |
The Convention throughout Article 29-1 makes a series of general recommendations about the values and objectives that education should pursue. Education should contribute to the full
development of the child (art. 29-1a) and teach respect and appreciation for human rights (art. 29-1b), national and personal values (art. 29-1c), values and cultures other than one's own (art. 29-1d), gender equality (art. 29-1d) and the natural environment (art. 29-1e). Table 3 reveals that the three countries have some curricular arrangements or general statements that attempt to develop these ideas. These are developed either as cross-curricular themes--the preferred arrangement in Spain--or as specific subjects--as in Chile, where Ecology is a distinct content area in schools. Finally, all countries (in line with art. 28-2) have banned physical punishment as a disciplinary measure in schools.
Table 4
Formalization of Rights Regarding Specific Social Groups
| KEY RIGHTS | ARGENTINA | CHILE | SPAIN |
|------------|-----------|-------|-------|
| RIGHTS OF ETHNIC, LINGUISTIC, RELIGIOUS AND INDIGENOUS MINORITIES TO HAVE THEIR OWN CULTURAL LIFE (art. 30). | Legislation regarding: The right of indigenous populations to preserve their cultural life and the learning of their mother-tongue language. The development by the state of indigenous educational programs. The adequacy of the educational resources to regional needs. The right of students to preserve their religious, moral and political convictions. | Legislation recognizing Mapuche (the largest indigenous group) as an official language. Used as language of instruction in Basic Education. Catholic Religion as an optional subject. | Catholic Religion as an optional subject, other religions not available unless specifically organized at the school. Autonomous communities with co-official languages (Euskera, Galician and Catalonian) have bilingual education. |
| CAPACITY OF PRIVATE INDIVIDUALS TO RUN THEIR OWN EDUCATIONAL INSTITUTIONS (art. 29-2). | Legislation regarding the supervision and granting of the capacity to set up private schools. Private schools subsidized by the state. | Legislation allowing private individuals or entities to collaborate with the State in educational matters. Private schools subsidized by the state. | Legislation regulating the functioning of private schools. Concert system to state-fund privately run schools. |
All countries have a number of social groups that have made claims for special arrangements in education. Table 4 shows how, on the one hand, in all countries private groups have been able to develop educational institutions parallel to state-run schools (cf. art. 29-2 of the Convention). Institutions in all countries are required to meet a series of legal dispositions and criteria set by the state (or public educational authority), and all countries have arrangements for providing financial support to certain private schools. For example, in Spain the Catholic Church was the primary provider of education until the 1980's; the concert system was developed to allow the Church to continue playing this role without expense to families.
Indigenous populations and other minorities must be acknowledged in the educational system as proposed in art. 30 of the
Convention. In the case of Argentina and Chile, several instructional arrangements (including bilingual education and native-culture curricula) exist for indigenous populations. In the case of Spain, the main developments have been made at the regional level, including local-regional history as part of the curriculum and bilingual education in those regions with languages other than Spanish. However, very little has been developed regarding minority populations in the country, such as gypsies or immigrant groups, beyond compensatory education programs. Finally, Catholic religious education is available as an optional subject in the public schools of Chile and Spain.
This summary reflects how children's rights have been introduced at the formal level in the educational policies of each country. An overview of these data shows a wide degree of consensus and some particular contrasts. However, children's rights as social practices are only poorly reflected at the formal level: it is the real conditions of the day-to-day schooling of children that reveal how rights are put into effect. To analyze this completely for each country is an insurmountable task; however, it is possible to present a particular aspect of each nation. This choice to focus on a few particular aspects is justified both because it captures current debates in the educational community of each society and because it highlights a relevant dimension of children's rights.
**Case Studies of Children's Rights and Education**
Focusing on particular examples provides another perspective on children's rights. The political and daily realities of work in schools are placed in the foreground. The goal of this section is to present examples of this. The problems facing educational resources in Argentina serve as an example of the obstacles that may exist to providing compulsory education to all (art. 28-1a). Chile's cooperative experience is a good example of how to pursue international cooperation in line with art. 28-3. Spain's distribution of students in schools along ethnic and class lines reflects the political controversies around arts. 28-1 and 29-2 regarding equality of educational opportunity and private education.
**A. Children's Rights and Quality Compulsory Education in Argentina**
Argentina is a large and complex country about which it is difficult to speak of a single national reality. On the one hand, it has an area of more than 2 million square kilometers, with vast regional and climatic differences. On the other hand, the population is distributed very irregularly: 46% of the population lives in the capital, Buenos Aires, and its province. These characteristics account for important social differences that make the discussion of averages and general indicators (illiteracy rates, schooled population and the like) neither very informative about differences inside the country nor illuminating in international contrasts (van den Eynden, 1993).
Argentina, like other Latin American countries, is confronted
with high external debt that greatly affects its chances of development. This economic situation has led to a series of adjustment policies that particularly affect social and educational funding. Another important aspect of these policies has been the privatization of public services. All these measures have increased unemployment and widened the socio-economic division in the population. The effect of this situation is not clearly reflected in quantitative assessments of education nor in the elaboration of formal policy, but it has had an important impact on the quality of the education that students receive in Argentina (van den Eynden, 1993).
Since 1993, Argentina has operated under new educational policies developed in the Federal Law of Education. According to this legislation, the educational system is divided into three levels: initial (3 years), General Basic Education (9 years) and Polimodal (3 years). Compulsory education covers the last year of the initial level (pre-school) and the nine years of G.B.E. This reform represents a very important step, since it restructured a highly outdated educational system developed over a century ago (Law 1.420, written in 1884) under very different political conditions. The reform not only extends compulsory education but modifies important aspects of it, such as: implementing a democratic and federalized educational bureaucracy, a reaffirmation of the role of the state in educational matters, a formalization of financial mechanisms and a commitment to procedures geared towards quality education (Ministerio de Cultura y Educación de la Nación, 1994).
As part of these principles, the government has established programs aimed at the improvement of educational outcomes in populations "that have not covered their basic needs" (as they are defined by the Ministry of Culture and Education). In a recent letter to the Argentinian representative of the International Monetary Fund, the Minister of Education, Susana Decibe (1998), claimed that the quality of education has improved by between 12% and 24% in different regions, with this improvement being highest in the regions with the largest proportion of socio-economically disadvantaged populations. Also, net rates of school attendance have increased significantly during the 1990's.
However, as stated above, these general indicators obscure differences between contexts. Data from the Fifth National Quality Assessment Program, developed by the Ministry of Education in 1997, highlight these differences: urban schools have higher scores than rural schools, private education surpasses public education, and the differences among provinces continue to be high (Clarin, 21 May, 1998). During 1998, only five provinces made significant strides toward educational reform (Clarin, 17 May, 1998). These measures affect mainly the implementation of the third cycle of G.B.E. and involve: 1) hiring new teachers with specialized qualifications; 2) restructuring infrastructures; 3) administrative re-organization. Measures that mostly depend on financial resources were lacking. These changes can be implemented progressively and should be completed by the year 2000; however, recent reports indicate that half of the provinces will have trouble meeting this deadline (Clarin, 17 May, 1998).
These actions could be interpreted as an indication that educational reform is being undertaken, albeit more slowly and with less impressive results than expected. Nevertheless, this interpretation does not capture instances in which measures are undertaken that are clearly contrary to the educational reform. For example, the province of Cordoba (fifth largest and third most populated province of the country) accepted the educational reform but subsequently reduced its role in education by 50%. As described by Puiggros (1996), Cordoba transferred schools to the municipalities and the private sector and consolidated or closed a number of secondary schools. Furthermore, the province's government reduced primary education to six years and made the seventh year part of secondary education, in line with suggestions made by the World Bank for reducing educational investments (Puiggros, 1996; Akkari and Pérez, 1999). The consequence of this measure is that reaching the sixth grade of primary education will be a goal very difficult to attain for large sectors of the population. This means that although in Argentina the law states that ten years of education are compulsory, in several instances the educational system cannot guarantee the infrastructures and the human and social resources to provide even six years of compulsory schooling--a situation that greatly compromises the efficacy of art. 28-1a of the Convention.
B. International Cooperation and Educational Improvement in Chile
International cooperation in the area of education is stressed in art. 28-3 of the Convention. The history of these programs in Chile goes back about two decades and is related to the social and political changes that the country has undergone during this period. Sweden has played a significant role in bilateral and cooperative agreements during this time, with programs that have been especially effective in the area of education. An important characteristic of the Swedish approach to cooperation is a focus on democratization, economic reform and development.
Work began in the 1960s with small bilateral programs. In 1972, Sweden included Chile in its cooperation programs. When Allende was overthrown in the military coup d'etat, the programs were canceled and Swedish funds were re-allocated to non-governmental agencies and international institutions. According to Gustafsson and Sjöstedt (in Gajardo, 1994), this also meant expanding Swedish humanitarian aid to other countries of the region. After popular protest and a referendum, democracy was reinstated in 1989 with political elections later that year. During this transition period, several countries showed an interest in cooperating with Chile to consolidate its newly recovered democracy. Sweden considered that a priority area should be programs with underprivileged populations: "to strengthen trust in democracy it is important to achieve short term goals that have a positive effect on the lives of poor people" (Gajardo, 1994, p. 10).
In 1990, Chile had about 5 million people below the poverty line, which represented about one third of the population. The coalition of democratic parties that won the elections had developed a
number of action plans geared towards this sector of society. One of the priority areas was education, which was articulated in the proposal of the 900 Schools Program. During 1989 and 1990, several Swedish delegations traveled to Chile to discuss in detail the nature of cooperative programs, which at this stage were already defined as educational. The nature of these discussions and the relationship between the agencies of both countries gave the program important characteristics:
"The open dialogue between ASDI and AGCI (the cooperation agencies of each country) and the institutions responsible for the project as well as the great experience of the executive board, the majority professionals from Chilean NGO's focused on research and educational development, played a very important role in how assistance was designed. These elements provided Sweden with guarantees of high quality work and allowed it to minimize its participation in the execution of the program" (Gajardo, 1994, p. 10).
"From the beginning the 900 Schools Program was seen as a model for other regions. The program itself was strategically very important and was very innovative as a rehabilitation effort of socio-economically deprived sectors. On the side of Sweden, it was the first time that a program was undertaken with so little direct participation" (Gajardo, 1994, p. 11).
The 900 Schools Program was initially aimed at the 900 schools that showed the lowest performance on the national assessment scores obtained by SIMCE (System of Measurement of the Quality of Education). These evaluations had begun in 1988 (MEC, 1993) so there were data on all the schools of the country, making the selection of the lowest scoring schools relatively easy. The selected schools, mainly located in socio-economically deprived regions and remote areas of the country, participated in a program focused on:
- Improvement of infrastructures of educational facilities.
- Receipt of school texts, classroom libraries and other pedagogical material.
- In-service professional development for teachers.
- Learning workshops for students with difficulties, directed by counselors from the community.
- Assistance for school directors in the creation of educational improvement programs stemming from the teacher body.
- A pilot project focused on multi-aged classroom schools for rural areas.
The project had a duration of three years, during which Denmark also began to cooperate, providing funds for the last two components discussed above. A total of 1,385 schools, 222,491 students, 7,267 teachers, 2,086 community counselors and 312 technical supervisors participated in the program, with results that were largely successful (Gajardo, 1994). From 1988-90 to 1990-92, the proportion of schools that scored above the regional average went from 15.52% to 28.3%, the proportion scoring below the regional average decreased from 16.14% to 13.4%, and the proportion of schools that did not improve from year to year decreased from 38.75% to 35.2%. Overall, 64.8% of the schools improved their scores after the program.
In 1993, the *900 Schools Program* and Swedish and Danish cooperation concluded. Since then, the program has been absorbed by the MECE program (*Improvement of the Quality and Equity of Education*) of the Chilean Ministry of Education (MEC, 1997). This program incorporates all of the characteristics of the *900 Schools Program*, expanding them to all public schools in the country, and it is projected to extend to middle and secondary education.
**C. The Relationship between Private-Public Schools and Educational Rights in Spain**
As mentioned above, Spain has a system of public funding for private schools. The development of this type of institution (*concerted schools*) during the democratic transition served to recognize the role of the Catholic Church in Spain, and especially its important role in education (Turner and Goicoa, 1988). Currently, 80% of *concerted schools* are run by the Catholic Church, an arrangement that supposedly allows these centers to continue their work without restricting the scope of their student population. In any case, an important requirement is that these schools participate in all state-run programs regarding education (such as professional development efforts, mainstreaming programs or compensatory education).
The organization of this type of schooling is intertwined with the vast development that public schooling has experienced in Spain since the early 1980s. In 1982, the Socialist Party became the governing party and maintained its position for the next 13 years. During this period, the policy and educational reforms that characterize the current educational system were undertaken (*Ley Orgánica del Derecho a la Educación*, 1985; *Ley de Organización General del Sistema Educativo*, 1990). Parallel to these legislative changes was a significant increase in public spending, reaching 7.8% of the GDP in 1984; these increases were necessary to meet the growth in numbers of students and to develop modern infrastructures. After this, spending increases slowed towards the end of the 1980's to the 4-6% range (Turner and Goicoa, 1988; Moltó et al., 1997). In 1996, the Popular Party (of center-right ideology) became the governing party, a change that has had a significant impact on the educational policy developed up to that moment.
The distribution of students within schools has changed dramatically. For example, although at the beginning of the 1970s secondary education was practically monopolized by private schools (Feito, 1991), in the 1997-98 school year 72.2% of students went to public secondary schools and 27.8% studied in private secondary schools (the data from the Spanish Ministry of Education collapse private and *concerted* schools into the same category). Overall, in the 1997-98 academic year, approximately 70% of all pre-university students attended public schools. However, this redistribution masks important ethnic and class differences in the student body of these schools:
- Approximately 90% of minority and non-EU immigrant students attend public schools (Consejo Escolar-Spanish Ministry of Education and Culture, 1997; Bartolomé, 1997).
- Choosing between public and *concerted* schools is a process highly influenced by socio-economic status (Moltó et al., 1997).
In many respects this evolution and the recent educational policy changes reflect the tensions between two ideological views regarding education. On the one hand, the reforms proposed during the 1980s focused on providing equality of educational opportunity to all students (Marchesi, 1996). On the other hand, current governmental policy favors "freedom of choice" for parents and considers that the allocation of funds should be determined by supply-demand criteria (Díaz, 1997). These trends reflect the opposition between two distinct conceptions of the role of the state in relation to education and schooling that have been present in Spain at least since the democratic transition (Feito, 1991). This discussion between "freedom of choice" (the state as subsidiary) and "equality of opportunity" (public schooling as the principal force) currently constitutes one of the most important debates among educators, parents and educational policy makers in Spain. In many ways, this discussion and the student distributions are not unlike those of other European countries (Ferrandis, 1988).
It is difficult to assess whether these movements respond to any "objective" differences in educational quality. However, two research projects in public schools in lower socio-economic sections of Madrid seem to show that the move to *concerted schools* by many families is caused by perceptions of lower educational quality and a problematic social environment in public schools (Poveda, 1997; Gómez, 1999). Research in other regions of Spain has also highlighted for many years how public and *concerted* schools are divided across ethnic lines, with public schools enrolling the majority of gypsy students and *concerted* schools being conspicuously all-white Spanish (Abajo, 1997; Knipmeyer, 1980).
The differing realities just presented are not susceptible to simple comparative analysis. However, they reflect problems confronting the implementation of children's rights. In the following section, we try to present a number of conceptual lines that may help organize some of the general difficulties that emerge when putting these rights into practice.
**Discussion and Conclusions**
Where children's rights are concerned, in this case as they relate to education, effective practice is a complex matter that is not automatically guaranteed by making the rights explicit in a formal document. Rights, as principles that regulate social organization, are intertwined with and complicated by the ideological and day-to-day tensions of our current societies. In the case of educational rights it is possible to describe a series of principles that explain why the application of these rights is troublesome. The process can be problematic because:
- **Condition a)** As they have been articulated by different social groups, educational rights can be incompatible with (or in conflict with) each other. Thus, although all groups have a legitimate claim to the rights they demand, advancing the rights of one group implies withdrawing rights from an opposing group.
- **Condition b)** Implementing certain rights implies having a series of material, social and psychological prerequisites necessary to make them effective. These prerequisites, in some cases, can be considered human/children's rights themselves, which often leads to the proposal that human/children's rights are organized in a hierarchical-pyramidal manner (**Condition b1**). In other cases, the determination of prerequisite conditions is the result of some form of "rational analysis" (e.g., scientific research, political discussion), which although an important form of knowledge construction is characterized by a high degree of uncertainty (**Condition b2**).
Furthermore, as pointed out by Verhellen (1994), educational rights can be construed as rights *to* education, rights *in* education and rights *through* education. This organization is very helpful in articulating how educational rights should be effective. However, as we will see below, assigning *a priori* each article of the Convention to the different categories is not as univocal as presented by Verhellen (1994). What is interesting in these two frameworks is that they shed much light on our understanding of the implications of the three nation case studies presented above.
In Spain, it is obvious that different social groups defend each ideological position regarding the basic mechanisms of education. Furthermore, these two groups have polarized their discussion around the development of public or private schooling. Since in the end the discussion is about the allocation of funds and resources, as well as the pedagogical-policy lines that should govern education, it is reasonable to suggest that they stand in opposition as discussed above (**Condition a**). The argument that defends "freedom of choice," especially as it implies forms of private education, is a claim about a right to education: the right to choose the type of schooling one wants (availability of private institutions) and to guarantee state support of that education, so it is accessible not only to the economically capable (development of *concerted* schools). As such, it is picked up in art. 29-2 of the Convention, which delineates the rights and responsibilities of private educational institutions. The argument that defends "equality of educational opportunity" is also explicitly stated
as a right in art. 28-1 of the Convention (including basic measures to make it effective). However, although Verhellen considers it a right to education, "equality of educational opportunity" is mainly a right through education, especially when presented as an outcome that makes effective equality of social opportunity (Green, 1971), which is how we think it should be construed.
Therefore, we have a situation of conflicting rights and social groups that make claims on different domains of education. Also, as shown in the data presented above, this double system has created an ethnically and socio-economically segregated student body. One way of assessing this is to see how other educational and children's rights can be developed within an educational system with these characteristics. Segregation, given that the "separate but equal" suggestion has never proven true, hampers many aspects of equality of educational opportunity and may eventually put at serious risk the meritocratic principles by which many attempt to justify social inequalities in Western democratic societies (Rivière, Rivière y Rueda, 1997). Furthermore, preparing children to live peacefully and with tolerance in multicultural societies is a right (art. 29-1d) that is made effective through and in education and is clearly incompatible with segregation along ethnic and economic lines. Given these considerations, it seems that current policy trends in Spanish education should be critically re-assessed through the lens of children's rights.
Argentina's current educational system faces important financial restrictions that result in limitations of infrastructures and human resources. The outcome is a diminished capacity to undertake its role adequately. Art. 29-1a of the Convention asserts that education should help to develop children's full capacities: intellectual, physical and personality. It is easy to understand that "on the part of" the child this means that he or she should meet a series of physical and psychological conditions (security, physical well-being and the like) that will allow him/her to cope with the demands of schooling and take advantage of the possibilities it offers. These conditions themselves are considered rights and are explicitly reflected in the Convention (this would be a common example of Condition b1 above).
"On the part of" the school, it is also possible to speak of necessary circumstances. The institution and the people who work in it must also meet a series of conditions so that they can face the demands placed on education and the principles reflected in the Convention. As Verhellen (1994) pointed out, educational rights are in important respects made effective by adults. This obviously refers to parents being able to assert their children's rights; but on the part of the school (and the adults who work in that school), it also means having the means to make these rights effective. Providing quality education is something that cannot be accomplished without a firm legislative commitment, proper infrastructures, resources for professional development and positive mid-term and long-term expectations for educators. All of these can be considered prerequisites that are part of the child's educational rights (Condition b1 above) and should be made effective through social and economic policies that guarantee the accomplishment of these goals.
Argentina has made important investments in time, human and economic resources to undertake its educational reform. However, international institutions and expert consultants have apparently not been able to fully understand these efforts. Some of their suggestions have been especially unfortunate because they did not take into account the complex and diverse reality of the country.
Cooperation between Chile, Sweden and Denmark puts into effect art. 28-3 of the Convention and represents one of the most important intervention areas between states with differing economic resources. However, intervention programs rest on a series of important principles that are not easily defined. First, intervention is based on an "assessment of needs," but justifying how, by whom, and why these needs are put forward is not clear-cut. Second, educational cooperation as defined in the Convention (art. 28-3) should be aimed at providing access to "technical knowledge" and "modern educational methods," but explaining exactly what these constitute is again a difficult task. Therefore, making operational proposals constitutes an important part of the elaboration of the program (a process and result of "rational analysis"; Condition b2 above).
When considering developing countries or "emergent economies," these questions have often been very controversial, and cooperation programs tend to assume a series of characteristics of the receiving country (a lack of knowledge and skills, or an absence of professional staff); the absence of these assumed deficits in Chile is what makes the 900 Schools Program a good model. Cooperation is usually highly asymmetrical, with the "target" country playing a minimal role in the intervention process. However, the Chile-Sweden-Denmark experience showed that the "target" country (Chile) was capable of generating useful information for the decision-making process, formulating objectives and program frameworks, implementing the program, assessing progress and giving a global appraisal of the intervention. Past theorizing (still proposed today) considered "developing countries" to be helpless, incapable of formulating mid-term and long-term goals for their progress and unable to act on their reality. In contrast to this, the 900 Schools Program showed that an intervention approach in which the "cooperating" country provides adequate financial resources and shows trust in the professional capacity of the "host" country will produce encouraging results. Carlos Rodríguez (1998) captured this attitude well when stating that "if the poor are given the opportunity and adequate incentives, they apply their resources rationally and progress" (p. 14). Chile's cooperative experience shows how these ideas can be put into practice, even in socio-economically and politically unfavorable conditions.
The data presented here illustrate how children's educational rights travel a long journey from the Convention to national legislation to actual day-to-day practices. We believe that daily practices are what constitute children's rights as real principles by which to measure life standards. However, using this criterion does not relegate the other two dimensions (the Convention and legislation) to an insignificant role. In fact, the relationship between these three dimensions is dialectic; this dynamic among them is what legitimizes and delegitimizes real practices.
The Convention is a statement in many cases too vague to provide univocal suggestions as to what to do in schools. Legislation tries to advance this process and give directives at the national or regional level on how to manage schools. However, writing and implementing legislation is a political process characterized by power and resource struggles between different constituencies. This does not mean that the resulting configuration cannot be assessed, since it is in the light of how it captures children's rights that we can consider it legitimate. Research into these questions, however preliminary, is part of this process.
References
Abajo, J. E. (1997). *La Escolarización de los niños gitanos: el desconcierto de los mensajes doble-vinculares y la apuesta por los vínculos sociales y afectivos*. Madrid: Ministerio de Trabajo y Asuntos Sociales.
Akkari, A. and Pérez, S. (1999). Investigación en educación en América Latina: una continuación del debate. *Education Policy Analysis Archives*, 7(10). (Available online at http://epaa.asu.edu)
Bartolomé, M. (1997). La escolarización de los hijos de inmigrantes extranjeros en la Educación Primaria en Cataluña. In M. Bartolomé (ed.). *Diagnóstico a la Escuela Multicultural*, (65-81). Barcelona: CEDECS.
Casas, F. (1998). Social Representations of Childhood. In A. Saporiti (ed.). *Exploring Children's Rights: Third European Intensive Erasmus Course on Children's Rights*. Milano: FrancoAngeli.
*Clarín*. (21/5/98). Cuando los datos marcan diferencias, p. 56.
*Clarín*. (17/5/98). Los alcances de la reforma, p. 5 (suplemento de educación).
Decibe, S. (1998). Carta de Lic. S. Decibe al Dr. M. Figuerola. available at: http://www.mcyd.gov.ar
Díaz, N. (1997, Feb. 3rd). La reforma educativa: asignatura pendiente del PP. *El Siglo de Europa*, 254. 31-37.
Feito, R. (1991). Concapa y Ceapa: Dos modelos de intervención de los padres en la gestión de la enseñanza. *Educación y Sociedad*, 9. 35-57.
Ferrandis, A. (1988). *La Escuela Comprensiva: Situación Actual y Problemática*. Madrid: MEC/CIDE.
Gajardo, M. (1994). *Cooperación internacional y desarrollo de la*
Gómez, V. (1999). *Efectos del Programa de Educación Compensatoria en el Desarrollo Psicológico del Niño: ¿Es Realmente Compensatoria la Educación Compensatoria?* Unpublished Doctoral Dissertation, Universidad Autónoma de Madrid.
Green, T. (1971). Equal Educational Opportunity: The Durable Injustice. In R. Heslep (ed.) *Philosophy of Education 1971: Proceedings of the Twenty Seventh Annual Meeting of the Philosophy of Education Society-Dallas, April 4-7.* Dallas: Studies in Philosophy and Education.
Jones, M. (1997). The Case of Thompson and Venables and the UN Convention on the Rights of the Child. In P. Jaffe, H. Rey and D. Roth (eds.) *The Future of Children's Rights: Student Contributions from Across Europe.* Genève: Université de Genève.
Knipmeyer, M. (1980). El Polígono de la Cartuja de Granada: el sistema escolar y sus problemas. In M. Knipmeyer, M. González Bueno and T. San Román. *Escuelas, pueblos y barrios: tres ensayos de antropología educativa* (11-92). Madrid: Akal.
Marchesi, A. y Martín, E. (1998). *Calidad de la enseñanza en tiempos de cambio.* Madrid: Alianza.
Marchesi, A. (1996). Calidad y Equidad Frente a Proximidad. *El País Digital-Debates.* 2/7/96, originally available http://www.elpais.es/p/d/debates/marchesi.htm
Ministerio de Cultura y Educación de la Nación. (1994). *Ley Federal de Educación: la Escuela en Transformación.* Buenos Aires: Secretaría de Programación y Evaluación Educativa.
Ministerio de Educación de Chile (1997). *Programa de Mejoramiento de la Calidad y Equidad de la Educación Básica.* Santiago de Chile: MEC.
Ministerio de Educación de Chile (1993). *Sistemas educativos nacionales: Chile.* Santiago de Chile: Organización de Estados Iberoamericanos para la Educación, la Ciencia y la Cultura/Ediciones Mar de Plata.
Moltó, M.; Palafox, J.; Pérez, F. and Uriel, E. (1997). Gasto Público y Privado en Educación. In SSAA. *Educación, vivienda e igualdad de oportunidades (II Simposio sobre Igualdad y Distribución de la Renta y la Riqueza).* (163-214). Madrid: Argentaria-Visor.
Poveda, D. (1997). *Un Análisis Etnográfico de la Interacción en el Aula en Relación a la Alfabetización.* Doctoral Dissertation Project,
Puiggros, A. (1996). Para el gobierno la educación es una mala inversión. *Clarín Digital.* (4/9/96). available at http://www.clarin.com.ar/diario/96-09-04/puig_01.htm
Rivière, A.; Rivière, J. and Rueda, F. (1997). Igualdad Social y Educación. In SSAA *Educación, vivienda e igualdad de oportunidades* (*II Simposio sobre Igualdad y Distribución de la Renta y la Riqueza*). (9-42). Madrid: Argentaria-Visor.
Rodríguez, C. (1998). Pobres Pobres. *El País*. 6/6/1998, p. 14.
Turner, S. y Goicoa, A. (1988). Spain. In G. Kurian (ed.). *World Education Encyclopedia, vol. II.* (1115-1134). Facts on File Publications.
van den Eynde, A. (1993). La Educación en América Latina. *Infancia y Sociedad*, 19. 51-67.
Verhellen, E. (1994). *Convention on the Rights of the Child*. Leuven: Garant.
**About the Authors**
**David Poveda**
Universidad Autónoma de Madrid
Dpto. de Psicología Evolutiva y de la Educación
Ctra. Colmenar Viejo km. 15
Madrid 28049 Spain
David Poveda currently teaches in the School of Psychology of the Universidad Autónoma de Madrid, where he is also completing his doctoral degree. His research focuses on classroom interaction, cultural diversity and literacy.
**Viviana Gómez**
Viviana Gómez currently teaches in the School of Education of the Universidad Pontificia Católica de Chile. Her research interests focus on social inequalities in education. She has recently completed a study of compensatory education in Spanish schools.
**Claudia Messina**
Claudia Messina has been a secondary school teacher and counselor in Buenos Aires, Argentina. Currently, she is completing her doctoral degree in the Universidad Autónoma de Madrid. Her research focuses on teacher training and practical reasoning.
Copyright 1999 by the *Education Policy Analysis Archives*
The World Wide Web address for the *Education Policy Analysis Archives* is [http://epaa.asu.edu](http://epaa.asu.edu)
Copyright © 1967 Experimental Aircraft Association Air Museum Foundation, Inc.
Winner Of First AC Spark Plug's "Tip Of The Month" Contest
Practically every aircraft homebuilder has at one time or another developed a short cut or a building tip that has saved him both time and money. Realizing this, the AC Spark Plug "Tip of the Month" contest was inaugurated by SPORT AVIATION in 1960. Since that time a gold-mine of ideas has been uncovered. You will find a collection of many of these invaluable tips on the pages that follow. (If you have an aircraft building tip, please forward it to EAA Headquarters.)
Winner of the first AC Spark Plug "Tip of the Month" contest this month is Randy Varner, 60 Raines Park, Rochester 13, N. Y., EAA 4187 and a member of Chapter 44. Second place winner, who will be awarded an EAA lapel pin for his efforts, is Charles Putnam, 2659 Carleton Ave., Los Angeles 65, Calif., EAA 2859.
"My tip of the month is a punch to punch out wing rib gussets from 1/16 in. mahogany plywood. I made mine from a scrap piece of chrome vanadium steel 2½ in. O.D. by 3 in. long. I drilled and bored a hole lengthwise 2 in. I.D. I then chamfered the outside edge of what is now a tube on a 30° angle, making a razor sharp cutting edge. Next I hardened the cutting edge in oil and polished it to a mirror finish on a buffing wheel, being sure to buff away from the sharp edge.
"I use my punch in an arbor press, and I can punch out 400 gussets per hour. It can also be used in a vise or by laying a piece of flat stock on the top of the punch and hitting it with a hammer. Be sure to place a piece of hard wood beneath the gusset material when doing this so as not to damage the cutting edge.
"After cutting out the circular blanks, I tack ten of them together with a ¾ in. very fine brad. I then draw two diametral lines at 90° to each other on the top circle and cut along them on a band saw. This gives me 40 quarter-circle gussets. I find these make much neater gussets than rectangular ones, and the production rate is terrific. Almost every chapter has a machinist who can make these punches for pennies from scrap steel."
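The production figures quoted in the tip hang together, and a quick bit of arithmetic shows how: ten blanks per stack, four quarters per blank. The little script below is only an illustration using the article's own numbers.

```python
# Check of the gusset-punch production arithmetic quoted in the tip.
# All figures come from the article; the variable names are ours.

blanks_per_hour = 400      # circular blanks punched per hour on the arbor press
blanks_per_stack = 10      # blanks tacked together with a fine brad
quarters_per_blank = 4     # two diametral band-saw cuts quarter each blank

gussets_per_stack = blanks_per_stack * quarters_per_blank
print(gussets_per_stack)   # 40, matching the "40 quarter circle gussets"

# If every punched blank is later quartered, an hour's punching yields:
gussets_per_hour = blanks_per_hour * quarters_per_blank
print(gussets_per_hour)    # 1600 quarter-circle gussets
```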
Charles Putnam offers this tip:
"Here is a method of sawing .063 and up aluminum and aluminum alloys of all types on home workshop equipment. To the best of my knowledge it has never been used or printed before. With it I have sawed 12 ft. x 4 ft. x 3/16 in. 24 ST plates, ripped 22 ft. x 2 in. x 3/16 in. channels into angles, and cut up 1 in. material into small pieces.
"I did all of this on a 6 in. Sears Dunlap table saw powered with a ¼ hp motor. I used a $2.00 Sears 4 in. plywood saw blade, turning 3400 rpm which was all the ¼ hp motor would pull. The saw blades and the material were lubricated and cooled with beeswax. The beeswax was applied in the following manner:
"1. Coat the blade with as much wax as is practical before starting a cut of any size. Heat will gradually melt the wax and throw it into the cut.
"2. Rub a line of wax along the line of the cut on the material. Wax on the bottom of the material sticks it to the saw table.
"3. Lightly apply wax on the teeth of the saw blade at the joint of the cut at about 1 in. intervals while cutting.
"4. Feed the work with a light pressure. Too much pressure will overheat and load up the blade.
"5. When the work is finished, wash off the wax with gasoline.
"With care a blade will cut 400 to 600 ft. of ¼ in. material between sharpenings. You'll find that .063 aluminum cuts at about the same speed as a 1 in. board. With more speed I think the saw blade would do better and faster work. I have tried many lubricants and saw blades but I find the Sears plywood blade and the beeswax were by far the best."
Bullets For Alignment
By Bud Oliver, EAA
In all the excellent material that I've seen presented in SPORT AVIATION and the Amateur Builders Manuals, I've never read anything on assembly and rigging techniques. Many times I have shivered and cringed as I watched fellows hammer bolts into strut and wing fittings while assembling and rigging an aircraft. In many cases the assemblers are unaware of the proper techniques to use to avoid trouble.
When you are holding something in alignment, such as a wing to fuselage root fitting, and then proceed to take the actual bolt that you are going to secure it with and attempt to drive it into place with a hammer, you are certain to get varying degrees of the following results (sometimes all of them): ruined bolt threads; galled bolt and fittings; bent bolt; elongated fitting holes; bent, twisted and cracked fittings; loss of paint or plating.
Two persons can assemble any plane whose component parts they are able to lift, with absolutely no damage, by using the following procedure. Assemble the entire plane using bolts at least one size smaller in diameter than the bolts that you will use on the completed job. If possible, these bolts should be inserted opposite to the direction that the actual bolts will go in. In this way the entire plane will easily go into approximate alignment and the bolts will go in with finger pressure alone. (Fig. 2).
Now make a bullet of the proper diameter and length for the alignment of all fittings. To make the bullet, just take an old bolt that is the same diameter that the fitting requires and grind one end to a bullet nose shape and cut the other end off square. Only the unthreaded bolt shank is used. The head of the bolt is cut off and the threaded end is used for the bullet head end so that the threads are ground away (Fig. 1). For tight places where a long bullet cannot be used, make up a short one as shown.
The bullet is given a thin coat of Parker Threadlube or Lubriplate, or white lead and oil (to stop galling of similar metals) and inserted into the fitting in the same direction that the final bolt will go in. The bullet is then tapped in place with a soft drift and hammer until it is flush with the face of the fitting (Fig. 3). The bolt is then tapped into place. It will push the bullet out of the fitting ahead of it (Fig. 4).
You may notice that I illustrated one bullet with an eye at the point. This is the cotter pin hole of the original bolt from which the bullet was made. Often there are places where the bullet cannot be driven in. In these cases you can often pull the bullet into the hole with stainless steel safety wire inserted through this hole (Fig. 5).
This idea can also be used in many other places. Not too long ago I
bought a Piper PA182 that had stalled out at 250 ft. and augered right into the ground with a full load. The airplane was so badly damaged that only the rudder and one aileron were usable. You can imagine what the cross-over exhaust that passed across the front of the engine crankcase looked like. It not only was flat, but it had the impression of one of the case studs driven into it until the metal failed.
I made a bullet out of cold-rolled steel and drilled a 3/8 in. hole through the center to take 1/8 in. control cable (Fig. 6). I pushed a 1/4 in. steel rod into the collapsed exhaust tube until I was able to push the cable through it. Then I had it made. All I had to do was tie the exhaust tube to a post and pull on the cable with a chain hoist, tapping on the exhaust tube in the area around the bullet (Fig. 7). When the bullet came through - presto! A good exhaust pipe again! In this case I only had to weld up the one little break where the stud was driven through. All this work is done cold, because getting stainless steel red hot doesn't make it form any easier.
Storage Should Be Watched
Do not store fresh lumber near furnaces, radiators or other sources of dry heat. Warping and end-splitting is probable. If stored on edge, sheet plywood is apt to develop permanent distortion; it should be stored flat. Wooden propellers should never be stored standing against a wall with one tip on the floor.
EAAer's Testing "Laboratory"
Seven 100 lb. bags of sal ammoniac are supported by this wing section built by Raymond Reed, Wonewoc, Wis. STRUX plastic is used between ¾ in. wood spars, the whole being fiberglassed. Advantages claimed are no leading or trailing edge strips, no doping, no rib-stitching, no drag wires and fittings, hail-proof construction, and true airfoil shape. The test section weighs 10 lbs.
RESINS AFFECT STYROFOAM
Here's a tip which may save others a lot of trouble. The wing tips on my Tailwind are formed of Styrofoam, which I had carefully formed to shape. I then attempted to cover the wings with fiberglass, using a polyester resin. To my dismay, the Styrofoam soon began to dissolve or melt under the influence of this resin and in the end my nice wing tips had shrivelled about half an inch. Two weeks work went out the window! Another two weeks were spent in making some balsa wing tips and the polyester went onto them fine. I found out later that if I had used an epoxy resin I would have had no reaction from it with the original Styrofoam tips.—George M. Sager, Williamsburg, Va.
FIREWALL FLANGE AND SUPPORT
by Gerald Landes
3201 Vassar Drive, Irving, Texas
This is a tip on construction I believe worth passing on to other members who may be having trouble in building a neat, light and strong firewall flange and cowl support for the wrap around. Many may already use this method but others may be stymied as I was for awhile. Although it is not wholly my idea, the method results in a light and very strong simple component.
The major material is aluminum angle with ½ in. sides and 1/16 in. thick. We acquired ours from a scrap metal dealer at 30c a pound. A pound is a piece about 16 ft. long, so you see it is very light and cheap. The angle must be hard and yet easily bent without fracturing. I have used 24 ST and 53 ST successfully.
After the firewall has been cut to shape from light stainless steel, usually .012 to .020 in. thick, the angle can be riveted to the back side to form a mounting flange for the wrap-around cowl and to provide a good stiff firewall free of anything but rivet heads.
To prepare the angle so it can be formed to the circumference of the firewall, it must be cut so that it can be bent easily. By making saw cuts one in. apart on one side it can be formed to any desired arc or curve. The shorter the turn radius, the wider the saw cut must be. For mild curves a 1/16 in. wide cut is sufficient — for most corners ¼ in. is needed. Then the angle is ready to be riveted.
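The relation between kerf width, cut spacing and bend radius can be sketched with a small-angle approximation: each segment turns by (spacing ÷ radius) radians, and closing the kerf across the ½ in. leg absorbs roughly leg height times that angle of material. This formula is our own back-of-the-envelope model, not from the article, but it reproduces the figures quoted above:

```python
def kerf_width(segment_spacing, leg_height, bend_radius):
    """Approximate saw-cut (kerf) width needed so each cut just closes when
    the slotted angle is bent to bend_radius (all dimensions in inches).
    Small-angle model (an assumption, not from the article): each segment
    turns by segment_spacing / bend_radius radians, and the closing kerf
    spans the leg_height of the angle."""
    return leg_height * segment_spacing / bend_radius

# Article's setup: saw cuts 1 in. apart on the 1/2 in. leg of the angle.
print(kerf_width(1.0, 0.5, bend_radius=8.0))  # 0.0625 -> the 1/16 in. kerf (mild curve)
print(kerf_width(1.0, 0.5, bend_radius=2.0))  # 0.25   -> the 1/4 in. kerf (tight corner)
```

On this model the 1/16 in. cut suits a bend radius of about 8 in. and the ¼ in. cut a radius of about 2 in., which squares with the article's "mild curves" versus "most corners" advice.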
The proper rivet spacing is about 2 in., or in the middle of every second saw cut. This will provide enough rivets to secure the angle to the firewall. All holes should be drilled through the angle and the firewall at the same time. This will leave a burr on the front of the firewall which must be removed before riveting. The rivets are inserted so that the rivet heads are on the front of the firewall.
The rear supports are made in the same way, only ST aluminum is used in place of the stainless steel. These supports are bolted onto the fuselage by clips welded to the structure. Vibrating cowl problems should be eliminated with this arrangement.
BENDING LEADING EDGE ALUMINUM
At first thought the logical way to apply aluminum sheet to the leading edge of a wooden wing would be to attach the edge of the metal to the bottom of the spar, bend it up around the leading edge strip, and back down onto the top of the spar. But an attempt to do it this way often will lead to an irregular, rough bend that makes the leading edge aerodynamically unsatisfactory.
From Athens, Greece, comes a letter written by member Jim Schnell. Jim says that over 20 years ago he dropped into a well known lightplane factory to pick up a ship and was invited to tour the place. There he saw workmen bending leading edge aluminum in a simple but effective way.
They had two long planks set up at bench height on suitable legs, with a suitable gap of a few inches between them, and their edges rounded off. The flat metal was then set over the planks and a long iron pipe was put on top of the metal, directly over the gap in the planks, and pressed down into the gap. The pipe had a radius slightly less than the leading edge radius of the airfoil to allow for spring. This put a smooth bend of perfect radius into the metal, giving it a "U" shape such that it was a simple matter to slide it over the nose of the ribs and tack it down onto the spars.
Jim is with TWA in Athens, has a plane, and invites any EAA members passing through to drop in for a visit.
A SCARFING JIG
When Gord Maunder and I decided to build a Jodel D-9, an all-wood single seater design, it didn't take us long to realize that with all the scarf joints to be made on the skins for the fuselage, box spar, etc., using a hand file was for the birds! So we kicked around a few ideas we had for a mechanical method of turning out a near perfect scarf consistently.
We finally came up with a simple jig, shown in the accompanying photos. Using a 1/2 in. electric drill (a ball bearing drill is best), clamp it on the base channel of the jig, insert a sanding drum (obtainable at most hobby or auto stores) into the chuck of the drill. The drill can be adjusted to any angle or height to accommodate various thicknesses or lengths of scarf. If used flat, it can double as a thickness planer for pieces such as cap strips, etc.
We used special "U" channel 10 in. long. The sides of the channel were short and just nicely held the drill body. Two pieces of 3/4 in. x 3/4 in. angle were used for the vertical bracket of the jig, with 1/2 in. x 2 in. slots milled into them near the top, so that when bolted to the bench or table the whole jig can be raised or lowered. A small turnbuckle is attached to the lower end of the vertical member and to the aft end of the channel, via bolts as shown. A strap clamp completes the jig and holds the drill in place.
We found that we could produce perfect scarfs with about three to four passes under the drum, taking off a little material at a time so as not to tear the feathering edge. In 3/16 in. birch plywood we did 13 in. of scarf in one minute. It took from 20 to 30 minutes to do a 7 in. scarf with a file by hand. We'll be glad to furnish further information to anyone who wants to write.
A Simple Method For Drawing Large Radii
By Chet Klier, EAA 4980
My tip is a method for drawing large radii, such as an arc for wing tips, engine cowl layout, tail surfaces or bulkheads. The material required is simple - a steel measuring tape. Drill a 1/32 in. diameter hole in the center of the tape on the 1 in. increment line (see drawing). This hole will provide a pivot hole around which the steel tape will revolve. A scribe or nail should be used for the pivot point.
The location of the next 1/32 in. diameter hole or holes will depend on the radius of the arc you wish to draw. A pencil point is inserted in the second hole and you simply walk the arc around, holding the tape taut. One word of caution - be sure to add 1 in. to all radius dimensions because you have lost this on the location of the first hole.
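The "add 1 in." caution above amounts to a one-line rule: the pivot hole sits at the 1 in. mark, so the pencil hole for a desired radius R goes at the tape mark R + 1. A small sketch (the example radii are made up for illustration):

```python
# The tape-measure radius rule: pivot hole at the 1 in. mark, so the
# pencil hole for radius R is drilled at tape mark R + 1 in.
# Example radii below are illustrative, not from the article.

def pencil_hole_mark(radius_in):
    """Tape mark (in.) at which to drill the pencil hole."""
    return radius_in + 1.0

for r in (12.0, 30.0, 48.0):
    print(f"{r:.0f} in. radius -> drill at the {pencil_hole_mark(r):.0f} in. mark")
```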
JIG FOR HOLDING HINGE TUBING
by James H. Campbell
There are numerous little items overlooked by the average person when building an airplane, such as the hinges on the tail surfaces and the little pieces of tubing forming them. When I was putting the hinges on the tail surfaces of my Model D Baby Ace, I ran into the problem of properly aligning the tubes for the hinges and a way to be certain they would be in the center of the leading and trailing edges of the tail surfaces.
The tubing used for the hinges of my particular bird is 3/8 x .065 x 3/8 in. 4130. When laying out the tail surface jigs, I made the gap between the leading edge and the trailing edge of the respective tail surfaces 1/2 in. This allowed me to align the hinges and get them properly centered on the tube edges.
I selected a piece of .049 x 1/2 in. tubing approximately 3 in. long, then cut and ground it down as shown in the accompanying drawing. Mount this tube section between the leading and trailing edges of the tail surfaces as shown. This will allow the 3/8 in. hinge tube to be held right in the center of the gap, with a ledge on each side for laying a piece of 1/16 in. weld rod for a filler.
Insert a 1/4 in. bolt through the hinge stock, and apply the torch and welding rod to the hinges and respective bearers. Move the jig fixture to the next location and repeat the process. This will give you perfectly aligned hinges and no zig-zag pattern for prospective eyeball engineers to criticize.
SMALL NAIL DISPENSER
The winner selected this month is Dale Johnson of Midland, Mich., who has applied some ingenuity to a sticky problem.
In his words, "small aircraft nails are hard to handle. This nail dispenser is quickly made and will save twenty minutes on each rib. The nails are put inside, then tip upright, the thumb over the hole. Shake gently, and nails will hang through the slot, retained by their heads. Several nails can be removed at a time with the thumb and finger."
"The sketch is self-explanatory. I used one one-inch diameter tube for the holder and two one-quarter inch diameter tubes three inches long for the supporting legs. A one-inch diameter disc closes the bottom of the tube. Saw the slots, and file the edges smooth." Fill with nails and shake away.
MAKE A "TEST" WING
Various articles and textbooks on aerodynamics all stress the importance of maintaining correct airfoil curves and providing a smooth surface when building wings. In particular, stress is laid on the importance of avoiding ridges and sharp edges running in a spanwise direction on the forward third of the upper surface of a wing. One way of making certain that the shape of your ribs, method of applying leading edge material, fabric sag characteristics and other factors influencing the fabric's surface contour will result in a smooth surface is to make a dummy wing. Ribs can be sawn from low cost interior plywood and three or four of them assembled on scrap lumber "spars". Such common material is all right provided an exact duplicate of the real wing parts' shape is made. Covered with cheap muslin and given enough coats of clear and silver dope to develop a taut surface, this dummy wing will show exactly how fabric will look on your real wing, and any needed smoothing-up can be done while building the real wing's structure.
Rotate Fuselage To Aid Welding
by E. A. Fessenden Lafayette, N.Y.
I find it a big help to be able to rotate the fuselage while welding. Leave two of the longerons (one top and one bottom) long on the rear of the fuselage and weld a piece of scrap across the corners. Weld another piece of scrap 90 degrees approximately in the center.
I drilled a 3/4 in. hole with a steel drill in the jig for the motor mount and drove in a scrap piece of 3/4 in. tubing to rotate the front. Take a couple of scrap boards and bore a 7/8 in. hole through and nail to a saw horse for end supports. Take a piece of hardwood 2 x 4 and drill a 3/4 in. hole, then saw a slot through the hole. Drive one large nail through the board into the 2 x 4 (see photo). Bore a 3/4 in. hole through the 2 x 4 at 90 degrees and insert a carriage bolt. Tighten to obtain the correct tension. The large hole in the supporting board and the nail will allow the 2 x 4 to move and correct for misalignment of 3/4 in. supporting tubes. Make one for each end.
The accompanying photos will clarify any questionable points.
Welding Cluster Joints
Member Jim Frost of Tulsa, Okla., tells us that when building his Stits Playboy he encountered trouble doing a good job on some of the cluster joints in the steel tube fuselage. Due to the thin walls of the tubing, it was essential to use a small torch tip, otherwise the tubing would burn through quickly. However, the comparatively large amount of metal at the cluster was able to draw heat away from the weld so fast that good penetration and a smooth bead were hard to achieve. Jim got a common blow torch and set it up so that its flame would play onto the cluster joint as a whole, keeping the metal mass at a uniform high temperature. Then the small tip on the welding torch was able to melt the metal at the actual joint easily, but without burning through the tubing. This is the kind of practical tip we love to pass along to readers so, fellows, if you have hit upon some way of handling a job better, don't procrastinate about telling us!
Suggestions On Metal Ribs
By Paul E. Best, EAA 2441
The customary method of making wing ribs and parts like fuselage formers of sheet metal is by means of form blocks and a hammer. Two hardwood boards are sawed to the outline of the desired part and the sheet metal blank is clamped between them. Then the projecting flange of sheet metal is hammered over, crimping where needed to remove kinks from the flange. The process was fully described on pages 5 and 6 of the July, 1937 issue of this magazine.
However, three years later amateurs still seem to prefer wood truss ribs, and it seems to me that the reason for their reluctance to change over to metal ribs may be due to a lack of awareness of the difference in cost and fabrication time between wood and metal. Many years ago the lightplane factories gave up wood ribs, and planes such as the Cub, Taylorcraft and Aeronca all had metal ribs even though wood spars and fabric covering were retained. As you might suppose, the reason was the important one of cost . . . the cost of the material and the amount of labor required to form it.
The average wooden truss wing rib using spruce strips and mahogany plywood gussets calls for about $2.00 worth of material and takes 75 to 90 minutes to assemble. To cite a common example, take the Baby Ace rib. It has a total of 38 small gussets, one on each side of every joint. With 26 ribs in the wing this means 988 gussets per plane. Each gusset has an average of six tiny nails in it, or a total of 5,928 nails in the whole wing! It is no wonder that even in the early C-3, the Aeronca people tried to get away from the cost of mahogany plywood and the labor of driving endless tiny nails by adopting fiber gussets, glued on in a jig designed to keep them from shifting as pressure was applied. On page 10 of the August, 1958 issue of this magazine is shown the Jurca type rib, in which strips of veneer are used in place of gussets. The required width of veneer could be homemade by slicing material off a board of the proper thickness with a table saw, and doubtless a jig can easily be made to position the truss members and veneer strips accurately to eliminate the need for nails.
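The Baby Ace figures above check out arithmetically:

```python
# Checking the Baby Ace rib arithmetic quoted above.

gussets_per_rib = 38   # one on each side of every joint
ribs_per_wing = 26
nails_per_gusset = 6   # average

gussets = gussets_per_rib * ribs_per_wing
nails = gussets * nails_per_gusset
print(gussets, nails)  # 988 5928
```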
However, my interest in metal ribs was such that I have studied them carefully and would like to share my discoveries with others. A rib made of .020 ga. 2024 ST aluminum requires 2.5 sq. ft. of metal costing $1.50 as compared to the $2.00 average for wood. If a small plane needed 24 ribs, a savings of $12.00 could be made on material alone, and about six hours of labor would also be saved. Utility grades of sheet aluminum available from building supply houses and mail order stores are even cheaper and while at present the use of non-aeronautical materials is frowned on, I feel that with sufficient testing and investigation to establish their reliability,
ribs made of this aluminum could work out well and be the cheapest one could imagine.
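The material-cost comparison above, worked out with the article's period prices (quoted figures, not current ones):

```python
# Cost savings of metal ribs over wood, using the article's figures.

wood_rib_cost = 2.00   # $ of spruce and mahogany plywood per wooden rib
metal_rib_cost = 1.50  # $ for 2.5 sq ft of .020 2024-ST per metal rib
ribs = 24              # ribs in the example small plane

saving_per_rib = wood_rib_cost - metal_rib_cost
total_saving = saving_per_rib * ribs
print(f"${saving_per_rib:.2f} per rib, ${total_saving:.2f} for {ribs} ribs")
# $0.50 per rib, $12.00 for 24 ribs
```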
In the Air Force I came in contact with a $40,000 machine in a sheet metal shop. This little gadget forms ribs, bulkheads and other parts rapidly in a rather simple way. A metal blank for the part is cut to shape, with the desired flange width added to the circumference. This is put into a holding die and held in place by an overhead clamp arm fitted with fingers. The hammer, in principle, amounts to an upward-swinging "trap door", the arc of motion of which can be set at any desired angle. The operation is very rapid, there being a choice of 60 or 120 cycles per minute. The hold-down and the hammer are synchronized; the hold-down grips the sheet metal, the hammer comes up, drops down, then the hold-down releases its grip slightly so that pressure applied to the part being formed lets it be moved along so that, in what resembles sewing-machine fashion, the flange is rapidly bent up. To minimize warping and distortion, it is customary to pass the work around the machine three times, putting a successively greater bend into the flange until the 90-degree bend is attained.
Inspired by this, I eventually hit upon a method of reproducing the forming action with simple hand equipment. My tool is nothing more than a maple or oak stick about a foot long, 3/4 in. thick and 1-1/2 in. wide. One end has a slit cut in it with a thin-blade saw, this slit being of the same depth as the desired flange width. You lay out the part on the metal, being sure to add the flange width to its circumference. Hold the metal flat on the edge of a smooth table, push the tool over the metal's edge, and bend up about 20 degrees. Move the tool along on the metal a distance about half the tool's width and bend up again, and just keep going all around the part. On the second time around bend the flange to 40 or 50 degrees, and on the last time to 90 degrees. The first pass puts in the bends which establish the contour of the part. Moving along half the tool width at a time assures a smooth bend and uniform contour. It is even possible to work at flange-bending while watching TV or baby-sitting!
When the work with this tool is done, the rib or bulkhead will probably be twisted due to the strains in the flange metal. Fluting or crimping the flange will take this out. A pair of cheap pliers can be modified for the purpose by brazing a pair of small shaped blocks to their jaws. Another way I have tried successfully is to get a piece of 3/16 in. steel rod. The end is given a few wraps of tape to avoid scratching. Open a vise about 3/8 in. and lay the flange over the end of this opening. Lay the rod on top of the flange and tap down with a hammer to put the flute in. Space the flutes as needed to remove the twisting; usually one every few inches will do. Flute all ribs the same, using the first as a guide.
For making the stiffening flanges in the lightening holes, simple male and female dies turned from hardwood can be used. The male die can have a pilot pin in its center which fits a hole in the female die. This will keep the two in alignment while they are squeezed together with a vise, an arbor press, a hydraulic jack or even just a bolt and nut. For most ribs it is necessary to make two or three dies for holes of varying size. Finish off the ribs by burring the raw edge of the flange so it will not chafe the wing fabric.
The advantage of my method of flanging is that troublesome, time-consuming distortion due to hammering is eliminated, and an accurate form block need not be made for each part in the airplane.
And here are a few extra tips. The approved manner of attaching metal ribs to wood spars is through a flange bent 90 degrees at the rib hole. At least three aircraft nails should be used in each flange. If ribstitching cord is used to attach fabric in normal manner, when it is drawn up tight after each loop the unsupported edge of the flange could be pulled down, possibly weakening or warping the rib. Also, the metal's edge could cut the cord in time. Partly for this reason and partly to speed up work, production planes use a variety of clips and strips to attach fabric to ribs. For the amateur, common self-tapping sheet metal screws are probably the best. A tiny washer is used under the head of each screw, partly to distribute pressure over the rib tape and partly to prevent the tape from developing unsightly twists and lumps when the screws are snugged down. In the November, 1939 issue of Popular Science Magazine a simple home-made sheet metal brake was described; I made one 28 inches long, found that it works well for straight bends, and would recommend it to other EAA members.
When building my airplane I was faced by the problem of marking for rib stitching on the fabric of the wings without a helper to hold one end of the chalk line. The idea I hit on worked so well that I am telling others about it.
I got a length of electrical conduit long enough to stretch from root to beyond the outboard ribs. About 3/8 in. from each end I drilled a hole, going through one wall only and making sure both were on the same line. The holes were tapped for 10-32 screws and into each went a machine screw about 1 1/2 in. long, with a plain nut threaded onto each up to its head. The screws were turned up tight against the opposite, inner side of the tubing.
The chalk line was stretched tight between the two screw heads and the nuts snugged up to hold it against the bottoms of the heads. The result was something similar to an oversize violin bow.
The rib spacing was marked on the end ribs with a black pencil to get a good, dark mark. One end of the chalkline is lowered onto the rib on the far end from where one is standing, and the near end lowered onto the mark at the near end of the wing. Then it is quite easy to snap the line and move on to the next mark. All ribs will be uniformly marked. It is possible to mark a wing the size of a Cub with this gadget in five minutes.
How to tie the knot for rib stitching. Knot is shown in center of cap strip for clarity; on an actual rib the line of knots would run along one side of the capstrip so that knots can be pressed down with the thumb and avoid having them show through as lumps in the pinked tape.
Quite by coincidence, we received a news release and a letter on the same subject at about the same time. Our friends at Cooper Industries, Inc., 2149 E. Pratt Blvd., Oak Grove, Ill., sent the accompanying photo of their new PLIERENCH. According to their release, the tool is geared for a 10-to-1 leverage ratio and has 100 percent parallel jaws. Selling for $12.75, it is quality made and the kit includes a variety of jaws and cutters.
From member Hap Wilson, president of Chapter 17, Marysville, Tenn., we have the following tip:
"When buying any kind of vise grip plier, get the type which keeps its jaws parallel as it opens and closes. Instead of concentrated pressure, the load is distributed and the tendency of the plier or wrench to slip is much reduced. Such a tool also prevents localized marring of work. It will hold onto a nut in some inaccessible place while one man turns the bolt from the other side of a bulkhead, etc. Two pieces of sheet metal can be firmly clamped together for drilling, with little danger of their twisting out of alignment. When working around firewalls, tail surfaces, etc., one-man assembly is a cinch; clamp the tool onto the bolt head and put the bolt into its hole from inside the aircraft. Rotate the tool until the end of its handle binds on a nearby flange or projection, and then run the nut on from outside."
Another firm, Precision Equipment Co., 4407 Ravenswood Ave., Chicago 40, Ill., sells a gadget consisting of two common vise grip pliers which are attached to a base clamp by means of ball-jointed, adjustable arms. It looks approximately like the arms and jaws of a lobster. Clamped to the workbench, it will hold two parts in any desired position for welding or assembly. Price is comparable to the above-mentioned tool and details can be had from Precision by writing to them.
To further simplify rib stitching, do the marking on the bottom surface of the wing while it is supported bottom-side-up on sawhorses. Guide holes are punched through the bottom fabric to each side of every rib at the stitch locations, with the needle. Hold the needle at right angles to the lower surface, feel it up against the upper cap strip and push it out through the top of the wing. This is done with no lacing cord in the needle's eye. When all holes have been punched on top and bottom, the spacing on the top surface will be automatic and no further measuring or marking will be needed.
A further simplifying step is to mount the wing panel in a vertical position with its leading edge down. To save many steps, get four or five ribstitching needles and run all of them through at the same time, thus pulling through that many for each movement from one side to the other. When pushing a needle through, put your eye close to the hole in the fabric directly above the stitch being worked, and guide the needle point quickly and easily into the proper pre-punched hole on the other side. If five needles are used at a time, you can do five stitches while walking back and forth only once.
Drawing by Don Cookman
Note: For a smoother finish, knots can be slipped to one side of cap strip.
Another month has rolled around again and another award has been made for the best tip received, this one from Henry E. Winslow, EAA 595 of Inglewood, Calif.
Here in Hank's own words is a tip on making metal trim tabs:
One of the most fascinating things about building one's own plane is the variety of materials available to use in its construction. Too often, however, the builder seems unwilling to change the choice of materials to suit the unit. Because of this a great many homebuilts have ugly looking rectangles of tubing covered with fabric for their trim tabs, when a neat, lightweight one could be constructed of aluminum. (Refer to Fig. 1).
The trim tab described in this article is easy to construct and will give a professional look to your homebuilt. See the example in Fig. 2.
The only tools needed are ordinary hand tools, with the exception of a sheet metal brake. The skin could be hand formed; however, the time involved in making a form and a clamp to hold it is much too great when less than five minutes at the brake will finish the bending operation.
The three micarta ribs are made up first, then the brackets are made and attached with a couple of 6-32 countersunk headed screws and stop nuts. One of the outboard ribs is notched for the tab horn so the countersunk holes will be in the horn on that side.
Bend up the skin over a 3/32 in. radius bar to about 70 deg. on the first bend, then you will be able to get a 70 deg. angle on the second bend. Clamp the trailing edges together and put the skin back in the brake with about 1/2 in. of the leading edge protruding from the radius bar. Now clamp down LIGHTLY until the skin forms the proper contour.
Slip in the center rib and drill the rivet holes through the assembly. Select the proper length rivets and rivet in the rib. This can be done with a ball-pein hammer but care must be taken not to crush the micarta as it is quite brittle. Now fit the end ribs by cutting slots in the leading edge of the aluminum tab for the two brackets to extend through and drill and rivet as with the center rib. Now drill and rivet the trailing edge and the tab is complete.
It is a good idea to spray zinc chromate on the inside of the skin before riveting it up. I used brazier head rivets on the tab as the skin is too thin to countersink and dimpling requires much more work especially as the trailing edge cannot be dimpled anyway. It is good design practice to have the horn arms tilt forward so that the center line of the bolt holes passes through the center of the bracket bolt hole. If the trailing edge on your control is not straight the proper contour can be followed by varying the length of the micarta ribs to suit and trimming the trailing edge to the proper curve.
The question may be raised as to the forming of a streamlined leading edge. It should be easy to form but I am not sure that the gradually curved nose will be stiff enough to resist the flexing and warping it will encounter in service. However, as the tab is very simple to construct, the reader might try building one up and see if that configuration still has the necessary stiffness. Again the use of formed metal ribs instead of micarta might be tried. The problem of riveting the skin to the ribs near the trailing edge will tax one's patience, however, and unless the builder has had quite a bit of sheet metal experience I suggest that he stick to the simpler tab described in this article.
A Recipe For Wing-Root Receptacles
By Tom Roddy, EAA 3705
Box 92, Rockwood, Tenn.
Want to be sure you have a snug fit between the wing root and the fitting on the fuselage? Then combine the following, as in the accompanying photograph:
1. Cut sheet stock for fuselage fitting; fit into wing root before wing root has been bolted into wing; secure with C clamps.
2. Weld up accessible seams; tack-weld any others. Remove clamps from structure and finish welding remaining seams.
3. Place structure back inside wing root, block up under drill press and drill through the two at once for main wing-root bolt to fit into.
4. Separate pieces again and cut pieces of tubing for compression member in fuselage fitting; weld compression member in place.
5. After wing-root fitting has been fastened into wing, slip fuselage part of fitting into the wing-root part, insert bolt through pre-drilled hole.
6. Position wings in correct attitude, butted against the fuselage, as shown in accompanying photograph. Tack-weld fuselage part of fitting to fuselage, using several plies of sheet asbestos to prevent scorching wing. Pull bolts; remove wings.
7. Jig fuselage fittings securely to prevent any shifting during welding; complete welds.
Getting Smooth Cuts
Neatness and accuracy being as important as they are in aircraft work, there is much interest in proper tools and methods for getting clean cuts in wood. Most of us have used ordinary table saws with rip and combination blades and have been unhappy with the rough cuts which result. Yet, it is quite possible to get cuts with a table saw that are so smooth that a few light passes with sandpaper afterward will eliminate tooth marks nearly 100 percent.
Low-price blades are often of thin metal, and blades made for general ripping work are also quite thin so that their kerfs will be as narrow as possible with subsequent savings of wood. The trouble with any thin blade is that it is apt to "chatter" at high speed and thus throw tooth marks into the wood. It can also produce wavy cuts by reason of a tendency to follow the grain. Ripsaws almost always have some set to their teeth, to make the kerf wider than the blade and minimize binding, warping and crooked cutting.
The correct saw to use for aircraft work is a "cabinet maker's blade". They come in different designs; some are hollow-ground with thick teeth, thin disc, and thick hub. Some have thick hubs with a step-down out near the teeth to keep kerf reasonably narrow. The teeth have no set to them and usually are quite thick as compared to the lighter general purpose blades. Some have very small teeth but others have fairly large ones, while retaining the essential feature of no set and a thick, stiff disc.
When cutting long, thin strips it is desirable to make some kind of guide, perhaps of wood with spring "fingers", which will hold the wood snugly against both the table and fence and keep it from bending. When wood bends or chatters, tooth marks are put into it. Feed wood in at as nearly uniform a rate as possible because halts and jerks also make tooth marks.
Many firms make good cabinet maker's blades but to get EAA members started on the right track it can be mentioned here that Sears, Roebuck and Co. has a "Thin Rim—Satin Cut Combination Blade", Cat. No. 9-3254, which upon trial has produced wonderfully smooth cuts in spruce, fir, pine and mahogany.
Owners of cabinetmaker's blades should not use them to cut plywood. The glue lines in plywood are surprisingly hard and brittle and can dull a fine saw rather quickly. Special blades for cutting plywood are also available, which feature very small teeth to keep edge splintering to a minimum.
SAVE THAT GARAGE FLOOR!
Before starting to use a spray gun in a garage or other building having a concrete floor, wet the floor with a garden hose. Dope and primers in the form of spilled drops and overspray globules will not adhere to the concrete and a much better cleanup job can be done after doping.
Shaping Tube Ends On A Metal Lathe
By Francis H. Spickler, EAA 4209
One of the most important steps toward making a good weld is to produce parts that fit accurately. The writer is using a simple attachment for a metal lathe to shape tube ends to fit other tubes accurately and quickly and at any angle.
The main body of the attachment is made of hard maple 2 in. x 2 in. x 4 in. A 3/4 in. x 4 1/2 in. hexagon head machine bolt is turned as shown for a 9 in. South Bend metal lathe, or modified as necessary to fit the lathe available. The small piece of mild steel is fitted to the block and screwed in place in order to keep the block in proper alignment with the compound rest at all times and yet permit easy exchange of blocks for forming any desired size of tubing.
Slide the head of the bolt in the "T" slot on the compound rest and slip the block on the bolt through the 11/16 in. hole. Seat the block so that the piece of mild steel slips into the "T" slot and fasten the block securely with a washer and nut. Place a drill of the desired size tubing in a chuck on the spindle of the lathe and drill a hole through the block. A second hole for a different size tube is also drilled the same way. Remove the block from the lathe and split it in half by sawing on a circular saw and the attachment is completed. As many blocks can be made as desired to prepare for tubing of various sizes. By placing the 11/16 in. hole slightly off center a larger hole and a smaller hole can easily be accommodated on one block.
In using the attachment the piece of tubing is clamped in place on the compound rest, the compound rest is set to the desired angle and tightened. A standard reamer with the diameter of the tube against which the shaped tube will butt is mounted between centers of the lathe. Flood the reamer with cutting oil, feed the tube into the turning reamer with the cross feed, and in a matter of moments one has a perfectly fitted pair of tubes.
Fitting tubes on an angle is no problem. One only has to measure the angle of the center line of the tubes on the jig or from an accurate plan, set up the compound rest for the desired angle, and feed the tube into the reamer as it turns. The fit will be better than absolutely necessary with less than a minute spent in making the cut.
To obtain the proper length, cut the raw stock as closely as possible to length, form one end, form the second end being careful to align the tube in the block so that the two ends will be on the proper angle with each other. Next try the tube in the jig. At this point it is easy to measure how much must be removed to achieve the proper length. If the lathe has a graduated cross feed it is a simple matter to remove precisely the required amount. With practice one can usually cut the tube to the proper length so that it fits accurately the first time.
The writer first tried the idea without the mild steel guide block. It worked satisfactorily, but alignment of the tube was tedious, and took more time than forming the tube end. Cutting and fastening this small item saves much time on setting up the tubes for forming.
Carbon steel reamers are satisfactory as long as plenty of cutting oil is used with relatively slow speeds. Of course high speed reamers will stand up better.
Be sure to remove all cutting oil before welding in accordance with good welding practice.
---
ERASE THOSE BAD HABITS
by Bill Porter, Jr.
Tucson, Ariz.
Use an ordinary firm eraser to get rid of fuzz, or splinters, or curls of wood that are ever present after cutting or sawing wood items, such as gussets, longerons, cap strips, etc.
It is quite often most inconvenient and time consuming to fashion a sandpaper "tool" for this work and even then extremely difficult to reach some areas initially forgotten.
The eraser can be cut to reach or to fit any area or shape. It is very inexpensive. It is convenient to use and removes the "fuzz" faster and cleaner than sandpaper. The rubber will not snag a small edge splinter and end up tearing out a large sliver of wood, thereby ruining, or at least marring, an otherwise nice piece of work. It can't round off an edge meant to be square — as sandpaper will.
The eraser has been a valuable "tool" during the construction of my all-wood fuselage. But then I've been very particular about the workmanship — it apparently paid good dividends, the FAA Inspector OK'd it on the first inspection — found no discrepancies — and commended the workmanship.
The eraser can be cut to any size or shape to use on holes, inside cuts, or what have you
Try it awhile — you'll use it thereafter! It's a real inexpensive and effective time-saver, and adds that extra little bit making for good workmanship.
Bounce into that group practicing good workmanship by erasing those bad habits! ! !
---
CONSTRUCTION TIP
For those who are restoring planes that use 8 cylinder engines such as the OX5, Hisso, etc.: such engines originally used the Dixie type 800 magnetos. The Bendix-Scintilla VMN-8 is almost a direct replacement, requiring very little modification. The VMN-8's are modern magnetos and are very reliable, where the Dixie 800 was not. These mags are no longer made but can be obtained from the M & J Magneto Service Co., Wichita Falls, Texas. They have these in stock in both new and used condition. I offer this tip because it took me several years of searching to locate a source for these magnetos.
HOW THE PROFESSIONALS DO IT!
One light airplane factory uses a trick you wouldn't believe if you had not seen it done. They strike longerons with a rubber mallet to bow them out about a quarter of an inch. When dope tightens the fabric, it pulls them in so they are straight rather than bowed in between cluster joints. On fuselages where the longerons are quite thin and long between joints, put fabric on with less than normal tension to prevent dope from pulling it too tight for the good of the longerons. If too much heat is applied to Ceconite it will shrink even more when doped and can even make the structure collapse.
It is considered good aircraft practice to drill holes slightly undersize in vital fittings and then ream the holes to true and accurate final size. Due to shifting of the work, bending of the drill, variation in grind, increasing dullness with use, etc., twist drills cannot be relied on to make consistently accurate holes. M-M-A, Inc., Lancaster, Pa., makes tap guides sold under the trade name 3-I-Q which hold hand-turned taps at exact right angles to the work and insure true, uniform tapping. When taps go in crooked, they bite a lot of metal from one side of the hole and too little from the other side, giving unreliable threads and causing tap breakage.
Don't use a scriber to mark steel tubing for cutting; scratches that deep are sure to be starting places for cracks. Get a silver colored pencil of the type used to mark blueprints. It marks steel tubing well, even when oily or greasy, and can be seen even when the metal is heated for tack welding.
To make the cutting of steel tubing faster and neater try one of the chromeplated tubing cut-off wheels for table saws, available from Sears and power tool dealers. An ordinary plumber's tubing cutter works well, too.
All steel tubing fits should have gaps not over 1/16 in. Slight looseness at the ends of tubes is used to allow for heat expansion and avoid a weld at one end pushing things out of alignment at the far end. But too-large gaps lead to excess use of rod for filling, with weak joints and danger of burning through tube walls.
GRINDIN'EST WOOD GRINDER EVER
Not having a spare power table saw (or a table saw, period), the 'Grindin'est Grinder' came into existence about two hours after latching onto a Montgomery Ward "Power-Grit" metal disc. It was built from loose parts and scrap wood as a temporary-temporary no cost tool. After two sets of wing ribs and various other items, it has become a standard. The pictures tell the story, except how useful and fast it is. A few items of note are:
1. Take all of the end play out of the electric motor.
2. Notch out the bottom of the table top for the motor arbor and disc clamping washers. Notch so that the washers extend almost through the table top. Place the coarse side of the disc outward for main use.
3. Face the table top with hard sheet material, as otherwise the plywood becomes gouged and makes grinding real jerky, and can ruin parts.
4. Grind only the non pitch woods — mahogany, birch, spruce, hemlock, fir and such. GRIND NO METAL.
5. If you goof up and gum up the disc, try cleaning with turpentine or paint remover and a wire brush. If you have really burned the gum onto the disc, try Sears' "Gum and Pitch Remover", catalog No. 9K4918 (8 oz. for $.45).
Costs (approx.), including loose parts when not on hand.
Sears' 8" Karbo Grit Disc .................. $4.47
Cat. No. 9K3000
Ward's 8" Power-Grit Disc ................. 5.98
Cat. No. 84B4103
Used 1/4 horsepower electric motor .......... 5.00
Tool Arbor for motor .......................... .70
Switch and box ................................. .60
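Totaled up, the parts list comes to $16.75. A quick Python check, using the prices exactly as printed above:

```python
# Prices as printed in the article's parts list.
costs = {
    "Sears 8 in. Karbo Grit disc": 4.47,
    "Ward's 8 in. Power-Grit disc": 5.98,
    "used 1/4 hp electric motor": 5.00,
    "tool arbor for motor": 0.70,
    "switch and box": 0.60,
}
total = sum(costs.values())
print(f"${total:.2f}")  # $16.75
```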
J. Floyd Blair
8612 Bangor Dr.
Ft. Worth 16, Tex.
EAA 5157
Chapter 34
Hole Aligning Made Easy
Photos by the author
This month's award goes to Harry A. Scott of Inglewood, Calif., for a simple tip which should help improve workmanship with more precisely aligned parts. "To obtain drilled holes perpendicular to the surface of a part, one must align the drill press table normal to the cutting tool. This insures holes that can mount matching parts on opposite surfaces such as the wing attach fittings shown in Fig. 1."
"A means to accomplish this is to use a dial indicator. (Can be purchased from either a tool store, hardware, or automotive store). First, locate the drill press table in the position that is going to be used. Then insert the indicator in the chuck adjusting it so as to reach the lever as far out from the center line of the chuck as is practical. Lower the chuck until the indicator lever contacts the table, lock the spindle, and set the dial to read zero. Now, by hand, rotate the chuck to determine the slope of the surface of the table. Make corrections by slightly tapping the table with a mallet. These steps are all illustrated in Fig. 2.
"This procedure will result in good aircraftsmanship and the accompanying personal gratification."
Making Gasoline Tanks
When making up welded gasoline tanks, it is a good idea to put all seams on the outside as shown in the accompanying sketch. There are good reasons:
1. It helps the amateur to do a good job because tolerances are not critical.
2. The different sections are easy to clamp together for welding.
3. The lips act as stiffeners for the tank.
4. It keeps the heat of welding away from the main body of the tank, thus minimizing warpage.
The bends are about one-fourth inch wide and are fusion-welded.
Another important thing to keep in mind when making gas tanks is to provide them with internal baffles. These not only keep gasoline from sloshing around, but strengthen the tank and help it withstand the weight of the fuel when flying through maneuvers, in rough air, etc. — Dick Blair, Vincentown, N. J.
WATER IN THE TANK
A large proportion of aircraft engine stoppages are due to water in the fuel system, and this means that amateur aircraft builders must give careful thought to their tank arrangements to preclude this kind of dangerous and costly trouble. The mere inclusion of a common fuel strainer in the line between tank and carburetor is not always dependable insurance against water in the carburetor. Some gas tanks have "standpipes" in their outlets, some have sumps, their shape varies widely, and where some planes sit level on tricycle gears, others sit on the ground in tail-low attitude on conventional gears. During climbing, turning and gliding flight the airplane's position changes. You may remove water from the gas strainer on the firewall and feel a sense of security, but it has been shown by experience and tests that there can sometimes be an appreciable quantity of water in the tank, confined there by some peculiarity of the tank shape, airplane position or outlet design. In flight this water can get into the fuel lines as the gasoline sloshes around. Engine failures attributed to "carburetor icing" often turn out to have been caused by unsuspected water in the tanks. Therefore the prudent homebuilder should study his contemplated fuel system carefully to see if water, which always settles to the bottom, can be accumulated in sizeable quantity before finally flowing out of the tank and showing up in the fuel strainer. The best safeguard of all is to have a quick-acting drain cock in the lowest part of any fuel tank, and to drain a few ounces of gasoline from it after each refuelling. With this positive means of testing for water, the fuel filter can then serve as a double safeguard in addition to catching bits of dirt suspended in the gasoline. The most carefully designed and built amateur airplane can still get into serious trouble when a tested and proven factory-built engine quits through water in the gas.
A Simple Fuselage Jig
By Frank C. Sabo, EAA 269
This fuselage jig is easy to construct and saves on the amount of wood needed for the job. First thing to do is to check your plans for the length of the fuselage so that you can determine how long to make the jig. I used two 2 x 6 pieces of pine 14 ft. long for my jig. Next I obtained some ½ in. plywood and cut strips 8 in. wide and as long as the needed width of the fuselage with extra space to spare. These are nailed to the 2 x 6's starting about 3 in. from one end so that when you come to join the two halves of the fuselage you will have room to tack weld the front cross tube in place (see Fig. 1). The pieces are spaced according to the distance of the cross members of the fuselage as shown. Place on two saw horses and level and you're ready to use the jig.
By using plywood cut into narrow pieces, only half the material usually used is required. I used white pine blocks 1½ in. by 2 in. by ¾ in. thick to hold the tubing in place for tack welding. For the cutting and fitting of tubing refer to the Amateur Builders Manual.
Upon completion of the two sides of the fuselage, the jig can be used to hold the sides upright while tacking the top and bottom cross pieces into place, also all diagonals. Square off the ends of the 2 x 6's and nail a piece of ½ in. plywood on the end so that it will come up about two-thirds of the way on the front of the fuselage (see Fig. 2). Next use a square to make a center line vertically on the plywood, and then take a string to run a center line the full length of the jig. You will do all of your measuring for the tubes from this point. Nail blocks to hold up the sides as shown. Remember to always work from the center line — take half of the diameter of the tube you are using and mark on each side of the center of a tube so you will know where the blocks are to be nailed.
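The center-line layout rule above reduces to a half-diameter offset each way from the tube's center. A minimal Python sketch, with illustrative offsets not taken from any plans:

```python
# Nail the holding blocks half a tube diameter either side of the
# tube's center, measured from the jig center line. Numbers are
# illustrative only.

def block_edges(tube_center_offset_in: float, tube_dia_in: float):
    """Offsets from the jig center line of the two block faces
    that trap a tube of the given diameter."""
    half = tube_dia_in / 2.0
    return (tube_center_offset_in - half, tube_center_offset_in + half)

# A 3/4 in. tube whose center lies 11 in. off the center line:
print(block_edges(11.0, 0.75))  # (10.625, 11.375)
```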
Try to keep the fuselage as square as possible during tacking. I used small turn-buckles and wire as shown in Fig. 3 to keep my fuselage square while working on it. Wrap wire around the longeron and into one eye of the turnbuckle, and another wire from the opposite diagonal into the other eye. Take up on the turnbuckle to line up the fuselage. By forming the X close to the clusters it is easier to square up each bay as you weld up the fuselage. I used this in tack welding also.
It is very interesting to note the reaction an article sometimes receives. A short time ago I submitted a description of a baggage compartment I built in the turtle-deck of my Stits Playboy, enclosing a photo of the installation. Only two days after the article was published in SPORT AVIATION I received a letter from a member in Florida. He agreed with my baggage compartment construction but having noticed the headrest in the photo wanted information on how I had constructed it. And so it goes! Often that which the writer overlooks is a most interesting project to the reader; which leads us to the reason for this article on how to build an aluminum headrest in one easy lesson.
The materials for the project consist of a piece of .025 aluminum 16 in. by 28 in., a small piece of 3/4 in. plywood, some foam rubber and a bit of imitation leather.
The aluminum is rolled to about an 8 in. diameter. Then the plywood head is cut out and fitted to the turtle-deck. When this is accomplished the plywood is inserted in one end of the rolled aluminum and nailed down. Draw a center line down the aluminum and then draw two more lines from the bottom side of the plywood head to intersect at the center line. This gives the bottom contour of the headrest.
Before cutting off the excess aluminum mark out the tabs. (I used four to a side on mine). Make up some wood blocks and glue them between the stringers on the turtle-deck. This forms a support to screw the tabs to. Now you are ready to trim the excess aluminum from the headrest leaving the tabs to be bent up.
Pieces of "U" channel rubber glued to the edges of the aluminum finish off the sides that rest against the fabric.
Sponge rubber trimmed and glued to the face of the plywood and covered with imitation leather finish up the front. Any good contact cement fastens the rubber and leather securely. My own preference is "Grip" cement. The plywood head is nailed or screwed flush with the forward end of the headrest and screwing the headrest to the turtle-deck blocks completes the job.
If more than one headrest is to be constructed the builder could very easily use the first aluminum headrest as a mould to make up any number of fiberglas copies. In that case it would be well to make the dimensions slightly larger (3/4 in.) and use the inside of the aluminum as the mould. In this way the outside of the fiberglas will be smooth and requires but a minimum of sanding.
Plywood formers must be fitted around the outside of the headrest to hold the aluminum rigid to the dimensions of the turtle-deck while the fiberglas is setting up.
---
WORKING WITH 75ST
The aluminum alloy known as 75ST is rather prone to have small edge cracks spread and to avoid this certain points should be observed when forming it. Parts which have been cut out of this metal in a shear should be filed back one metal thickness to remove edge cracks. Holes should be drilled, not punched, because punching leaves edge cracks too. Avoid cold dimpling because the dimples will crack. However, hot machine dimpling is acceptable.—A. E. Griffin, EAA 2426.
---
DON'T NAIL CAPSTRIPS
Everyone knows that a wing rib picks up air loads from the covering material and serves to transmit them to the spars. To do this job, ribs are built in the form of trusses, and we test and analyze them as such. But, points out member Bob Waaser of Key Largo, Fla., it is essential to remember that the cap strip material is also subject to concentrated shear loads where it passes over the spars and transmits the air load. Driving nails through typically thin cap strips will appreciably weaken them at this critical location. So, don't do it! Instead, drive nails through the vertical members of the rib at the spar opening.
Sheet Metal Brake
By Grover A. Chaplin, EAA 5507
4748 W. 162nd St., Lawndale, Calif.
For a long time I have read with interest the various articles in SPORT AVIATION describing the ways the boys bend sheet metal parts. As I work with sheet metal in the aircraft industry, I got to thinking how much simpler it would be if a small sheet metal brake were designed that could be cheaply and easily built, eliminating form blocks, hammering on material, heating to bend, etc.
The brake I have designed is rather small in its present form, but this was a matter of choice as all I wanted was a brake to bend small brackets. The drawing could be enlarged to most any scale to suit the individual needs. The basic form could be lengthened or widened to suit.
The brake in its present form will bend .062 chrome moly with a 3/8 in. radius with a nice clean bend, and will form "U" brackets with an inside dimension of 9/16 in., or 1/2 in. with a little forming on the second flange.
The simplest method I have found on a "U" bracket is to bend the first flange and then use a spacer from the inside edge of the bent up flange to the hold down shoe to set your dimension. This spacer should be the size you want the inside dimension to be.
The only other point I would like to bring up is that the center of the hinge pin must be on the exact center of the bottom plate in both planes; failure to do so will result in either a very sloppy bend or a brake that will not make a full 90 deg. bend, as it will jam if the pin is too low.
The brake now will bend up to 5 in. long pieces. Rather than design a complex hold-down for the shoe, I have found that the simplest way to get the pressure is to clamp the brake in the vise. One more thing to bring to mind, if the brake is to be lengthened I would advise that the metal thickness be increased in proportion to the increase in length.
Top and side view
These parts were made with the brake.
AILERON WELL HOLES
Sometimes a little forethought can save many hours of repair work. This idea is to forestall a time consuming repair should the fabric loosen from the metal in the aileron well.
Especially, if the construction of the aileron well has a reverse curve to it, this idea will solve the problem of fabric pulling loose from the aluminum and causing interference with the aileron.
The secret is to punch holes in the aluminum, then dope fabric on the back side of the well over the holes. When the wing is covered the dope will bond to the other fabric on the back through the holes so that it will not pull away even if the dope fails to stick to the aluminum.
To make neat holes first lay out a hole pattern in the aluminum, then drill pilot holes with a hand drill just large enough to take the stud of a 3/4 in. chassis punch. It is a simple matter to screw down the punch to cut clean 3/4 in. holes. Fastening down the fabric to the inside of the well makes the job ready for covering.
GEAR JACK PAD
Many EAAers have the Wittman type spring steel gear on their homebuilts. This jack pad is designed to make the job of changing brake pucks, or the wheel, a safe and easy one. Just slip it on and then jack up the gear on that side and there you are all set to work on the wheel.
The pad is made of a strap of 1/8 in. cold rolled about 1 1/2 in. wide. Heat with a torch and bend the strap into a "U" shape 5/8 in. wide across the inside of the "U". 5 in. legs should be long enough.
A "V" is next bent up from the same material and welded across the strap. Slide the assembly onto the landing gear leg as far up as desirable, then mark for the bolt hole.
Drilling the bolt hole finishes the operation. To try it on for size, slip it on either gear leg with the "V" down, insert the bolt and then slide it up the leg till it binds. The face of the jack goes between the leg and the "V" (see photo) and all that is left is to raise the jack. Of course we have already chocked the other two wheels, haven't we!!! (?)
TOOL FOR BENDING ALUMINUM EDGES
Necessity is often the mother of invention! That thought is not new but was certainly proven recently. After fastening down the leading edge on a pair of wings recently it was noticed that the edges had not been bent down.
The first attempt to do the job was with a pair of pliers that had wide jaws silver soldered to them. This was not satisfactory as it was not only hard to hold the pliers so they would not slide around but the drag and anti-drag wires were in the way of the pliers when the bending operation was tried.
The next try is shown in the photo. It worked very well giving the same edge distance to the bend every time.
All the tool consists of is a couple of bars with a hole drilled near each end. The holes were spaced back of the edge so that the forward edge of the bolt was 3/8 in. back. The bars were made 3/4 in. shorter than the length of edge to be bent down. This is necessary in order that the metal close to the rib is not torn during the bending operation. Also a radius was filed on the edge of the bars to make a gentle curve to the bend.
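The bar sizing above amounts to a little arithmetic. A Python sketch follows; the 3/4 in. shortening comes from the text, while the edge length is made up for illustration:

```python
# Bar sizing for the edge-bending tool. The 3/4 in. figure is from
# the text; the 24 in. edge length below is illustrative.

def bar_dimensions(edge_length_in: float):
    """Bar length and the resulting end gap when the bar is centered.

    The bar is 3/4 in. shorter than the edge being bent, leaving an
    equal gap at each end so the metal next to the ribs is not torn.
    """
    bar_len = edge_length_in - 0.75
    end_gap = 0.75 / 2.0  # equal clearance at each end
    return bar_len, end_gap

# For a 24 in. length of leading edge to be bent down:
print(bar_dimensions(24.0))  # (23.25, 0.375)
```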
To operate, slide the assembly over the edge to be bent down until the bolts touch the edge. Be sure to leave an equal space between each end of the bar and the rib. Then clamp the bars together with one or two "C" clamps. A steady pull on the clamps bends down the edge.
A 30 deg. bend is more than sufficient and after one section is bent unclamp and move to the next till all edges have been bent down.
Hints For The Homebuilder
By Donald K. Howard
22 Arch St., Brockton, Mass.
Simple though any operation may seem, we usually find that there is a right and wrong way, and this is also true when bending a steel tube. Often in welding we find that we build in stresses that result in deformation of the structure so that it will no longer meet points of attachment, etc., or we may want to curve a tube for some reason or other. Common practice is to heat the tube red hot and push, but we find that in doing so we usually flatten the outer radius and wrinkle the inside surface. The following method enables us to "shrink" the inside radius and retain a true circular cross-section.
At the beginning of the desired bend heat a spot locally bright cherry red approximately the diameter of the tube on the inside of the bend. Apply a load by hand in the direction of the bend and watch the color of the spot. As the color darkens to a dull red "flash" the tip of your flame across the outside of the bend directly in line with the spot. Several light passes will be enough. Do not heat the outside surface to a red heat. This operation expands the outside of the bend as the inside cools under a load, and causes it to compress or "shrink". If a long curve is required progress along the tube spot by spot until the desired radius is obtained. To reposition a tube pulled out of alignment by welding, spot close to the weld. Remember, let the heat do the work and apply only enough load to shrink the inside surface as the heat is applied to the outside. Don't hurry.
Sounds complicated? Try it and see.
---
STEAMING RIB CAPSTRIPS
FIG. 1: a 5/8 in.-3/4 in. tube, flattened and welded shut at the bottom, filled with water to approximately the level shown, and heated with a welding torch or equivalent source of heat.
In the "good old days" of wooden ribs the following method was used in lieu of a steam box. Flatten the end of a 5/8 in. or 3/4 in. dia. tube about 3 ft. long and weld tight. Insert this end in the vise with the open end up. Fill with water for a depth of eight or so inches and insert one or two capstrips. Apply heat locally using a welding torch, blowtorch, or what have you to boil the water rapidly. This will allow you to bend the sharp area at the leading edge of the upper capstrips. With one "cooking" while you are nailing the other in your rib jig, you will always have one ready.
---
FIG. 2
New leading edge material is always a problem to apply. Many an otherwise fine job has been second rate in appearance because of an uneven or irregular leading edge. Remember, perhaps the most important section of your airfoil is its leading edge and the forward upper third of the upper camber. The simple clamp shown will enable you to pull the L.E. skin down tightly to the rib contour. Actual practice is to have one man work the clamp while the other fastens the skin. Start by fastening the skin to the lower capstrip at the back edge. Using the clamp wrap the skin around the L.E. and temporarily fasten in several places. Start at one end and pull each station tightly to the rib and fasten permanently. The handle may be approximately 2 ft. of 5/8 in. tube flattened on one end. The strap may be light sheet steel or a length of spring steel strapping used to tie large bundles. The filler block shown is the depth of the lower capstrip and prevents the strap from pulling around the spar at this point. Coat hanger wires make a good link.
---
Don't trust every last detail in your plans. Most plans turn out to contain at least minor errors of commission or omission. If things don't jibe, look for such mistakes before tearing apart the work you have done. It is better to write to the designer and clear up a vague point rather than to have it haunt you while flying.
---
In designing your own exhaust pipes of steel tubing, always give careful thought to the effects of expansion under running heat. If design does not take this into account, the combination of expansion strain and vibration will cause early failure. Failure of exhaust pipes inside a modern tight cowling is much more hazardous than in an open cowling of older type.
---
Never build a fuel tank without first making a cardboard mock-up to check for dimensions and fit.
---
Lufkin Tool Co. has a very useful tool which many homebuilders would find great use for. It is a steel tape graduated in consecutive inches, with decimal graduations, 50 feet long. The list price is $14.80, Catalog No. C213CX.
Tips For Builders
Keep your FAA agent informed of progress on your project. Notify him before starting construction and before each major component is primed, covered or closed in. Don't cover anything up before he has inspected and approved it.
---
CLEANING ENGINE PARTS
By Carl H. Buecker
6603 Coldwater Rd., Ft. Wayne, Ind.
The job of cleaning engine parts during overhaul with facilities available at home is not an easy one. Vapor degreasers and gunk tanks are usually not standard household appliances. The result is the homebuilder has a real job on his hands when he wants to overhaul and clean up an engine. I was recently faced with this problem and after some experimentation, I tried Tide washday detergent. My wife had a large box of Tide and I made generous use of it. I placed the parts in a pan, poured in plenty of Tide and added just enough water to cover the parts. This saturated solution will clean many parts at room temperature if allowed to stand 12 to 24 hrs. To speed the process and for parts having heavy carbonized deposits, place the pan on the stove at low heat (140 deg. to 160 deg. F). Two to four hours at elevated temperature will nicely clean pretty rough looking parts. If they don't come out quite clean give them more time or increase the temperature a little. Crusty aluminum pistons come out shining like a new dime. There is no etching of the aluminum. Steel parts clean up nicely too. Brushing the parts with an ordinary scrub brush and washing in clean hot water, then drying and oiling completes the job.
* * * * *
LEADING EDGE JIG
By W. H. Hadley, EAA 3511
2690 Heather Dr., East Lansing, Mich.
The leading edge jig and nose trueing jig were made by me in order that I could true up all the ribs on the EAA Biplane that I am building.
First, I made short pieces of spars and put all the ribs for one wing on them in order to hold my ribs firm.
Second, I made two templates out of masonite of the nose section of the rib from the front of the front spar to the leading edge.
Third, I made a short piece of leading edge and screwed this between the two templates.
Fourth, I put the ribs in a vise and filed down the cap strips to line up with the templates.
In this way all the nose sections are even and in line. For the trailing edge jig I used scrap pieces of white pine 1 in. thick and as you can see by the drawing I cut one end of the template off about 3 to 4 in. from the tip of the rib. Alongside of each template I nailed a ¼ by ¾ in. piece of wood, and across the top of the templates I nailed two pieces of ½ by ¾ stock. The distance between templates equaled the width of nine ribs. I then nailed the template to my work bench and slid my ribs on and filed across the cap strips and squared off the ends of the ribs.
I trust that this idea of mine will be useful to someone trying to figure out a way to true up his ribs as I did a lot of monkeying around with a lot of Rube Goldberg ideas before I hit on this one.
TAIL ASSEMBLY JIGS
By Palmer Johnson
3434 Fairhaven, Salem, Oreg.
Here is an outline of a frame I made to hold the tubing for the stabilizer for my Cougar. I laid out the plans on plywood first, then took two 2 x 4s the length of the width of the plywood and nailed them on edge to the plywood. I drew a center line across the 2 x 4s at about 1/4 of their depth, then in the 4 ft. pieces of 2 x 4 cut notches down past the center line by one-half the thickness of the tubing. These notches will hold the tubing in place so short pieces can be cut and spot welded in place. Then all can be lifted out and turned over to weld the rest of the tubing in place. As each side is the same, the form can be used for each side. I used a conduit bender to bend my tubing. (See drawing).
EYEBALL ENGINEER ON THE TAILWIND LANDING GEAR
By Tom Roddy, EAA 3705
The first step in installing the Tailwind landing gear, as I see it, is to anchor the fuselage — preferably with the floor level — so that the gear leg, when placed roughly in position, will remain an inch or two off the floor; it should rest on a stack of thin wooden blocks. The top of the gear leg, I suggest, should be secured with wire and/or "C" clamps.
As seen in the illustration, it is necessary to place two unwarped planks (A, B) across the fuselage floor so that they extend about two feet out the side. This is for reference in measuring and sighting straight down on the center of the anticipated bend at the axle.
Incidentally, this bend area should be marked in the center with a small cross mark of a bright, easy-to-see color (F).
Sight your planks (A, B) from both front and sides to insure that they are parallel with each other and the floor itself. The position of plank B must be such as will make its rear edge exactly the specified distance for the axle behind the bulkhead used as a reference, per plans. Then, when framing square (C) is laid across A and B, and sighted down line D, the projection is the fore-aft position of F. But don't forget that C must be placed so as to give one-half the inside measurement of the gear width, using the center line of the fuselage as a reference. In addition to this, D must be pointed neither right nor left, but straight down — 90 deg. to the floor of the fuselage. This can be done by placing vertically on one of the planks, and in the same plane with the firewall, a small drawing triangle with its 90 deg. corner at AC or BC.
Sight down D and move F into position, fore or aft, left or right. At the same time, slide the gear leg up or down as required for proper depth below fuselage floor, per plans, by measuring down (E) with tape from B plank to F by using blocks (J) to maintain the adjustments. Be sure to allow for half the diameter of the longeron (and also the thickness of the plank if measuring from the top), since the reference is the center line of the longeron and likewise the center of the mount bolt.
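The allowance described above can be checked with a little arithmetic. This is only an illustrative sketch; all dimensions here are hypothetical examples, not from the Tailwind plans.

```python
# Hedged sketch of the tape-measure allowance described above.
# All dimensions are hypothetical examples, in inches.
plan_depth = 14.0        # depth of point F below the longeron center line, per plans (example)
longeron_dia = 0.875     # longeron tube diameter (example)
plank_thickness = 0.75   # plank B thickness, if measuring from its top (example)

# Measuring with a tape from the top of plank B down to F, the reading
# must include half the longeron diameter plus the plank thickness,
# since the plan reference is the center line of the longeron.
tape_reading = plan_depth + longeron_dia / 2 + plank_thickness
print(tape_reading)  # 15.1875
```

The point is simply that the tape reading is not the plan dimension itself; forgetting the half-diameter of the longeron puts the axle off by nearly half an inch in this example.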
When everything checks (this will likely be after much juggling), place tack-welds on the tube (which slips over the bolt) at "T". Then cut tube H and tack weld it into position.
When all the foregoing has been accomplished on both gear legs, tack-weld tube G into position. The framework is now triangulated sufficiently so that it will not move out of line anywhere while the other tubes are cut and tacked in to complete the gear mount. Place nuts on all four foundation bolts before making the final welds; weld up the gear mount before jigging the engine mount.
Don't skip anything — but HURRY; you want to get the legs bent at the bottom and wheels on this thing!
FORMING LEADING EDGES
By Bernard J. Schaknowski
127 Ruth Ave., Syracuse, N.Y.
Just came up from the shop in the basement. I formed and temporarily attached the plywood leading edge on the Jodel D-9 wing. Total time on this operation — 3 hrs. To steam the plywood I rented a small portable wallpaper steamer. The steam from this was very effective, about 10 minutes on a 6 ft. section and it was ready to form on the leading edge. It worked far better than expected.
WING STAND
By Raymond Sippel, EAA 6991
12 N. Goodwin Ave., Elmsford, N.Y.
The construction of this wing stand is very simple and of no great cost, for any materials may be used. My dimensions are given because it just so happened that I had these odd pieces of wood available. I made four of these stands, two for each wing panel.
These stands are ideal for rib stitching as well as storage stands for those fellows, like myself, who have to work in confined spaces.
The cut-out in piece No. 3 should be the same as the nose section of your wing ribs and of course may be copied direct from the rib jig of a completed rib.
Two wood screws are used to fasten piece No. 3 to piece No. 1. Three wood screws are used to fasten piece No. 2 to piece No. 1. I used glue along with the screws although I don't believe it necessary in the event you prefer to dismantle the stand later. I lined the cut out in piece No. 3 with foam rubber to cushion the leading edge of the wing as can be seen in the photo.
I believe the sketch of the completed stand is an ample guide for its construction.
APPLYING FABRIC OVER PLYWOOD
By Robert A. Greimel, EAA 9905
69 Burley St., Danvers, Mass.
Many builders, on their first try at applying fabric over plywood, attempt to lay the fabric in wet dope and work it toward the edges to remove bubbles and wrinkles. More often than not, when the dope has dried, numerous bubbles appear under the fabric and much time is lost attempting to remove them. The following method has been found to produce uniformly good results.
1. Meticulously smooth the wood structure, using plastic putty in any dents, scratches, gouges or seams, as any imperfections will show through the fabric.
2. Apply at least one coat of dope-proof sealer and allow to dry thoroughly.
3. Apply two brush coats of clear dope, allowing each to dry, then sand lightly with fine emery paper to remove any bumps or brush hairs.
4. Machine sew the fabric so as to make an envelope with the open end at the wing root. Trim the edges 3/8 in. from the sewed seam. Use fabric wide enough to wrap around the wing from trailing edge to trailing edge and long enough so a single length reaches from tip to root. As most homebuilts have a short wing chord, 60 in. or 90 in. wide fabric will do. Turn the envelope inside out so the seam edges are on the inside and slide it over the surface being covered, taking care to remove all stray threads, because they also will show through. Staple, tack or sew the edges in the aileron cut-out and at the root. A word of caution: the fabric should be just snug enough to remove wrinkles, not stretched taut.
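A rough check of the fabric-width claim above can be sketched out. The chord and the wrap margin here are hypothetical example figures, not from any particular set of plans.

```python
# Rough check that common fabric widths wrap a short-chord wing.
# Numbers are illustrative only.
chord = 28.0            # example wing chord, inches
seam_allowance = 0.375  # 3/8 in. trim left outside the sewed seam, each edge

# Wrapping trailing edge to trailing edge takes a bit over twice the chord;
# the airfoil's curvature and depth are folded in here as a hypothetical
# 10 percent margin.
wrap = 2 * chord * 1.10 + 2 * seam_allowance
print(wrap <= 60.0)  # True: 60 in. fabric covers this example chord
```

For this example the wrap distance comes to just over 62 in., so 60 in. fabric would not quite make it and the 90 in. width would be the one to buy; a longer-chord wing pushes the requirement up proportionally.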
5. Soak the fabric with water and allow ample time for it to dry completely. A garden hose, with the nozzle set for a fine spray does an excellent job.
6. Next, brush on a heavy coat of dope (thinned, if necessary, to a smooth brushing consistency), vigorously working it into the fabric for good penetration. Allow to dry, then brush on a second coat.
7. Dope 2 in. pinked edge tape over all seams. To make pinked edge tape lie flat around curved tips, dope down one end a distance of several inches, let it dry, then
stretch with all your strength as you work around the curve, doping down the tape as you go. At the end, use a spring clamp to hold the tape until it dries.
8. Continue applying clear dope over the entire surface, carefully picking loose brush hairs and sanding lightly with fine emery paper, until a glossy smooth surface is attained. Pay particular attention to the edges of the tapes while sanding and when you're done you won't be able to find them.
9. Spray on two coats of silver pigmented dope and follow with the color. If you use enamel for a finish coat, wait a few days for the dope to dry thoroughly.
That's all there is to it, but I might mention that a coat of good auto wax such as Simoniz adds years to the life of the finish; and, in the interest of economy and weight saving, you should consider the use of balloon cloth, or certainly nothing heavier than intermediate fabric, rather than the more readily available Grade "A".
Sounds like a lot of work? Well, it is, but the end result makes it worth the effort, and your "amateur-built" will truly look "professional built".
SKIN DIMPLING
By Jack C. Cummings
5457B Underhill St., Otis AFB, Mass.
Dimpling the surface of sheet metal to be installed on a wing or fuselage appears difficult if one does not have access to an expensive dimpling machine. You can countersink the metal, but you lose some strength. I have devised a method of dimpling the skin in preparation for installing flush rivets. This method will work well through .032 thickness. Take an old rivet gun set and cut off the tip. See attached Figure 1. Drill a 3/16 in. hole in the center to a depth of 3/8 in. Chamfer off the inside and outside edges for smoothness. Next, pick up from an aircraft parts surplus store or mail order house steel dies (i.e. dimpling tools, squeezer sets), male and female, for the size rivet you plan to use. See Figure 2. The dies will fit into the modified rivet set.
Now, for use. For dimpling the skin, take a flat steel bar 1/2 in. thick, 4 to 5 in. long and 1 to 2 in. wide and drill a 3/16 in. hole in the center. Place the male die here. Lay the pre-drilled aluminum over the die with the "pilot" sticking up through the hole. See Figure 3. Take the modified rivet set with female die and place over the "pilot". With a hammer, strike the set with sufficient force to dimple the skin. After a little practice, you will be able to hit the set with the correct blow each time.
You may also dimple holes in the ribs by mounting the male die on the end of a steel bar and placing the modified rivet set into the rivet gun. See Figure 4.
If you can get dies for AN470 rivets, 3/32, 1/8, 5/32, you can use the modified rivet set for riveting. See Figure 5.
All in all, this tool will come in very handy for a number of things.
HAIR DRYER PREVENTS BLUSHING
It is the privilege of all good EAA wives to come to the aid of their husbands, and girls, you can do just that by letting your husband use your hair dryer for quickly drying small doped areas or patches on his plane. Better still, do it for him if you are on the lookout for new ways of making yourself indispensable.
The use of a hair dryer not only makes the dope dry faster, but prevents blushing. This is especially important in an unheated garage or hangar, or in damp weather.
Hold the dryer fairly close to the patch or doped area or use a corrugated cardboard box with the top cut off facing downward and cut a hole in the bottom of the box (facing upward) for the dryer to fit through. The box concentrates the heat on the doped area and makes it dry even faster.
Marian Armstrong
ATTENTION! BABY ACE OWNERS
If you like winter flying but haven't the hot blood to go along with the open cockpit temperatures, here are some pictures of a modification that might interest you. It was engineered by some of the members of Lehigh Valley Chapter 70 for their Baby Ace after noting the picture of Kenneth Halliday's Ace in Sport Aviation, July 1962.
Photo No. 1 shows the aluminum panel added onto the door and the track in which the Plexiglas hatch slides. Our member, Dick Kessler, has his hand on the thumb screw used to lock the sliding plastic in the forward closed position.
Photo No. 2 has the door closed but the sliding hatch in its open position. Stops are on the front of the Plexiglas to limit the rearward travel in the short track behind the door.
In photo No. 3 you can see the hatch closed and ready for flight. No longer does the pilot need helmet, goggles and those heavy cumbersome clothes that are necessary to keep warm. The FAA approves and so do we members of Chapter 70.
APPLYING AN ALUMINUM LEADING EDGE
By Ed Gumell, Chapter 45
When I was constructing the wings for my Stits Flut-R-Bug, I went into the leading edge bending problem with some trepidation, but as it worked out, it came out perfectly with nary a wrinkle or bulge, and pulled in tight. Using .016 H-14 (I think it should work up to .065) aluminum, with the wing upside down, on horses, flat and level, I nailed the first sheet (18 in. x 72 in.) tight to the bottom cap strips using 3/8 in. cement coated aircraft nails. Then used two straight 2 x 4s and clamped them to the outside edge with the smallest clamps that would fit. The idea was to get the "C" clamps positioned to have as little interference with the final part of the bend as possible. Now, the weight of the 2 x 4 has already started about a third of the bend, so nail it down. Here, while the wing is still upside down, I used rope (tied to the clamps) pulled snugly and brought around about sixty percent. Now, we get wifie or another pair of hands and turn the wing over gently. You should be able to nail the nose section now and all that's left is to tighten up the ropes evenly, and nail it down the rest of the way. Mine was in six foot sections, but it should work in longer ones as long as the wing is square and true.
And, here is a bonus tip, thrown in free! — The cutting and trimming of tubing up to .065 can be handled very easily with aviation snips. Of course, left and right cutters should be obtained. After a few practice cuts, it's surprising what nice, neat joints can be made with very little finish filing necessary. Straight butts or angles are equally simple and easy.
FUSELAGE JIGS UNNECESSARY FOR STEEL TUBING AIRCRAFT
By Harry C. Peterson, EAA 4878
Brown's Mobile Home Park
R. 4, Hwy. 6, Davenport, Iowa
The use of a jig, as we all know, is to insure true and square fuselage ladders; also in the case of the fuselage to have exact duplications of both ladders. My method takes in all these things but does away with all the time, material and effort that goes into the jig.
The only materials needed are the steel tubing, a good flat cement floor, hack saw and welding torch (a wife or helper will come in handy). To begin with we don't make the side ladders, instead we make the top and bottom. Cut your main longerons to the necessary length that the plans call for, four in all. Now lay out a straight line on the floor with white chalk. This will be the center line that will be used to measure to each side and lay out the main longerons. See Fig. 1.
Lay out chalk lines and draw the fuselage out as shown in Fig. 1 to full scale on the floor. Then on two main longerons bevel one end of each so they will fit the stern post and follow the chalk line to No. 3 cross member. See Fig. 2.
Tack weld the two longerons together at the bevel. Cut the No. 3 cross member to the size as called for on your plans and tack on both ends to the longerons; continue to heat these tacked joints and while the aforementioned helper stands on the point end, bend the longeron in to where the No. 2 cross member will be stationed. Repeat this last phase out to No. 1 cross member and you will have the top ladder pretty complete.
Now cut the stern post to size and file the point of the ladder to fit snug on the stern post and weld together. The next step is to make the bottom ladder. Follow the method used on the top ladder, with the exception that the length will vary slightly; that is, the bottom ladder may have to be bent up to meet the stern post. If this is the case, the simplest way to find out how much longer the bottom ladder must be is to cut the upright members for station No. 2, tack them at right angles to the top ladder as shown in Fig. 3 and measure down from these to the stern post.
Complete the bottom ladder the same as the top. Cut and tack the uprights for stations No. 1 and 2 to the top ladder the same as the No. 3 uprights shown in Fig. 3. When this is done take the bottom ladder and tack to these uprights. Heat the longerons on the bottom ladder at station No. 3 and bend down to fit the stern post. (Fig. 4). This procedure can be followed wherever the bottom ladder needs bending. From here on it is just a simple matter to cut and install the rest of the cross members and uprights.
One thing I would like to make clear now. This is not an untried method. Two fuselages have been successfully built this way. The photos accompanying this article should lend proof to this.
* * * *
ROUTING WITH A CIRCULAR SAW
By John W. Irwin, EAA 4703
18 Orchard Place, Wappingers Falls, N. Y.
Since I have no router, I used my radial saw to rout my fuselage members. Any circular saw can be used. This method tapers the end of the routed portion to prevent sudden change of cross section.
First, mark off the sections to be routed.
Next, set the fence on the saw to dimension "A" in diagram. If using a circular saw, set the blade to protrude above the table the amount of dimension "B".
Lower the member onto the saw blade (or, on a radial saw, lower the blade into the member) to distance "B". Then slide the member longitudinally until you have a kerf the length of the portion to be routed. Do this on each side of the member in each section to be routed. On a radial saw, remember to raise the blade when crossing areas not to be routed.
Now take a chisel and you will find that the wood between kerfs will lift right out. The kerfs are polished hard by the blade and the chisel will follow them nicely.
The ends of the kerf are tapered to the curve of the blade. I did all members for a Pietenpol fuselage in one evening.
All that is required is a light sanding to remove any splinters left by the saw blade. The kerfs can be deepened toward the tail where the stresses are less.
* * * *
USES OF EMERY CLOTH
By Kalman Saufnauer,
EAA 1201
115 Locust Ave., Hollister, Calif.
Saw slot in rod, insert strip of emery cloth in slot and wrap around rod. Use drill motor or drill press to deburr and smooth holes or edges.
Insert strip of sponge rubber in wide slot and coat with lapping compound to polish inside of hole, felt or leather may also be used as each has its merits. Use also to back up emery cloth for polishing uneven surfaces.
Tape emery cloth to plate glass for surface sanding. Hold the part flat, preferably with both hands, and use long strokes or a circular motion. Long pieces may be obtained from sanding belts, and long glass from auto doors. Use only safety plate; safety sheet, sometimes found, will have waviness in its surface, while plate has been ground to a flat surface.
Regarding plate glass — did you know a special Aircraft Safety Plate is available less than 3/16 in. in thickness? It is about half the thickness of auto glass, just the thing for seaplanes or roadables if you wish to use windshield wipers. You might salvage some from the roll-up windows of Fairchild 24's, Stinson's, etc.
* * * *
GOT COLD FEET?
By Kalman Saufnauer,
EAA 1201
115 Locust Ave., Hollister, Calif.
Also, you cold weather types can keep your feet warm with foam neoprene skin-diver socks. They worked fine in an unheated airplane.
Welding Hinge Bushings
Do you find it difficult welding hinge bushings to pipe control surfaces? If so, try my method.
Make a jig as shown in the drawing. The steel need not be .064, any thickness will do. Drill the holes in the plate as per pipe thickness and keep the ¼ in. holes spaced as required, weld the pieces to plate as shown. Chuck the control surfaces in a vise, check with level for plumb, next position the jig, keeping top plate level, secure with C clamp, insert drill rod through bushings and through jig, tack weld all four corners, remove jig and continue welding.
Crash Helmet Advice
Among our thousands of members can be found persons having all kinds of specialized knowledge and skills. For example, Vaughn M. Greene of San Francisco knows a lot about motorcycles. In a recent letter he comments that it is an unwritten law among members of the Vincent Owners Club not to get on a bike without wearing a good crash helmet. In respect to our urging that amateur aircraft builders use such protective gear, he passes along a word of caution. In hopes of saving money, the airplane builder is tempted to buy surplus helmets. This can be dangerous because sometimes helmets are declared surplus by military agencies for the reason that they have been subjected to shocks. The only way another person or agency can detect some of the hidden effects of such shocks is by X-ray. In buying a surplus helmet that seems to be sound on the surface, one takes a chance on getting one with hidden flaws which could materially reduce the amount of protection afforded. If a man can afford $1,000 to $3,000 for an airplane building project, a new $40 helmet should be within his reach. Greene says he would recommend the full coverage jet pilot type of helmet and suggests the Bell TX and new Buco types. Since we often fly open planes, it is important that the faceplates should be able to withstand wind pressure safely. The two just named have been tested at up to 200 mph.
Bending Cap Strips
Here's a tip on bending cap strips that has worked for me. I latched onto an old steamer for sterilizing baby bottles. It is the type that uses electrical energy going through water to generate heat. It has a hood over it, under which is a rack for holding the bottles.
I took the lifting knob from atop the hood and installed it on one side. Then, using the hole in the top of the hood as a center, I cut tabs and bent them upward. I had a piece of 1¼ in. I.D. aluminum tubing about 18 in. long, which I inserted into the hole in the top of the hood and allowed it to go into the hood about an inch. A piece of plumber's tape and a bolt made a clamp to hold the tabs tight against the tubing.
The rack for the bottles was stripped of its grid and a piece of brass screen wire laid on top to prevent the ends of the strips from going through the rack and into the water. The steam coming up through the hood and tube (which makes a flue to draw the steam) will soften strips excellently without soaking them directly in water, which seems to take some of the "life" out of the strips. The 1¼ in. tubing will hold four strips easily and allow sufficient flow of steam up the tube. My bending block holds four strips, so the steamer is adequate for my needs.
Attaching Aluminum Fittings To Wing Spars
By Bill Wolleat, EAA 1953
4300 F Ave. N.E., Cedar Rapids, Iowa
I am submitting an exceptionally strong method of attaching aluminum fittings to wing spars or any other member where there are great shear forces.
I think this method is superior to the plug system in that it transmits the shear forces directly from the wood to the attachment fitting instead of to the bolts, thereby spreading the stress over a wider area of the metal fittings.
These rings should be cut from steel tubing of .040 to .060 and be cut about 3/16 in. long.
Now, using a fly cutter with a steel cutting bit to cut a groove to fit the thickness of your rings and a pilot bit slightly smaller than the holes you will use, set the stop gauge on your drill press and cut your grooves in the aluminum fittings to a depth of 1/16 in., then using the same cutting tool cut the grooves in the wood to 3/8 in.
All rings should be cut through on one side so that they will conform to the grooves better.
When rings are pressed into grooves there should be about 1/16 in. gap in the ring where it is cut open.
Use a diameter best suited to the size of fittings used.
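The 1/16 in. gap called out above corresponds to only a small mismatch between the tubing and groove diameters, which is why the split rings conform so readily. This sketch is purely illustrative; the 1 in. groove diameter is a hypothetical example.

```python
import math

# Illustrative arithmetic for the ring gap described above.
# A ring cut from tubing of mean diameter D keeps its circumference pi*D;
# sprung open to conform to a groove of mean diameter d (d > D), the cut
# opens a gap of roughly pi*(d - D).
groove_dia = 1.000                      # example groove mean diameter, inches
gap = 1.0 / 16.0                        # gap specified after pressing in
tube_dia = groove_dia - gap / math.pi   # tubing diameter that yields that gap

print(round(groove_dia - tube_dia, 3))  # 0.02
```

In other words, tubing only about twenty thousandths undersize relative to the groove opens the specified 1/16 in. gap, so the stock need not match the groove diameter exactly.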
Uniform Swage on Spar Ends
by Bob White
609 N. Lindberg, Griffith, Ind.
To get a uniform swage on the spar ends of the Little Toot tail surfaces I used two pieces of 2 in. angle 12 in. long.
On the inner spar establish where the swage must start in order to clear the fabric as it tapers to the tip. Place the angles in the vise so they form the desired angle as in Fig. 1. Then heat the required distance from the end of the tube to be used. Place the heated tube between the angles and tighten the vise. The operation may have to be repeated depending on the length and the amount of the swage. Be careful to keep the tube hot. A cold tube will crack.
If the swage is to be used on a rear spar, one side may be kept straight by heating at a point where the swage begins and placing between the angles as in Fig. 2.
Scaling On Steel Tubing?
If excessive scaling occurs on steel tubing and the base metal is pitted, too much heat is being used for welding. Even a small flame can burn base metal if held too long at one place. Undercutting severely weakens a weld. If edges of weld metal do not taper smoothly into base metal, the welding is too cold. Hacksaw your practice welds in half to see cross-sections of your bad work such as undercutting, burning and inadequate penetration.
Tube Bending Simplified
I am submitting a tool tip of the month in AC's contest. Whether or not it merits an award is something else, but I have found it indispensable for forming tubing, formers, wing-tip bows, etc. Square tubing and flat stock can just as easily be formed.
Any old piece of angle iron (at least 5/16 in. or 3/8 in. thick) should be used. It can be clamped in a vise so the forming or bending can be done very simply.
The pivoting block is drilled slightly larger than the 3/8 in. bolt that goes through it, which in turn is screwed into the tapped hole in the top face of the angle iron. If a bend is made beyond the original curvature it can readily be brought back by just turning it in the jig and applying pressure.
The pivoting block automatically adjusts itself to the size material being used.
I made mine up so it would handle up to 1/2 in. stock and in less than 15 minutes formed a set of wing-tip bows for the Stits Playboy.
The material is inserted between the two blocks, and bending is commenced where desired; the material is slowly advanced as you bend, until the desired shape is formed. The pivoting block could also be made as a round plate with the hole drilled offset, but as I have drawn it, it works marvelously.
FITTING COWLS
By Frank Wiggins
3356 Clairemont Dr.
San Diego 17, Calif.
If the ship is to be metal cowled from the cockpit forward, this plan will let you use almost any type of nose cowl and will give you the exact template for the firewall. On my Mini-Plane I used a P.A. 18 nose cowl and had to figure this out in order to make the firewall to match. Bolt the cardboard where the firewall will be, then run the strings from the cockpit coaming to the edge of the nose cowl. The cardboard will have to be slotted so that the strings will run straight and tight.
I made a plywood adapter to fasten the cowl to the front of the engine. It was a plywood circle cut out with two or four bolts and a block of wood to fasten to the cowl. I then cut out a hole in the exact center and a tight fit for the prop shaft. This will hold the cowl in the exact center for all other measurements. It's hard for me to explain or show you, but it works out well and maybe you can figure it out.
FITTING AIRCRAFT TUBING ON A LATHE
I would like to share with other members who have access to a lathe, an exceptionally simple and inexpensive method of fitting aircraft tubing. We used this method of fitting the tubing on our Stits Playboy, and we had the fuselage completed and on the gear in less than 100 hours.
The tubing is held in a block that is mounted on the cross-feed head of the lathe, a reamer is chucked in the lathe head, and the tubing is oriented so that the reamer cut results in an almost perfect fitting member. This method is not only quicker than most other methods, but it also results in stronger welded joints.
The fitting device can be made from a 4 in. by 4 in. piece of hardwood and several bolts for almost no cost. We made a block to fit each size of tubing, and it usually took about 15 minutes to make each block. We tried making one large diameter block and using sleeves to make it fit the smaller tubing, but this was not satisfactory.
To construct the block, we found that the best way to do it was to cut the block to size and drill the hole for the 1/2 in. hold-down bolt. Next mount the block on the cross-feed head and locate the center of the tubing hole. Mark the center lines of the hole on the end face of the block so that they can be used as reference lines during the cutting process. Now drill the tubing hole, saw the slot into the tubing hole and install the tubing lock bolt and nut.
A straight reference line is now drawn on the side of the piece of tubing that is being fitted, and the angle between the member to be fitted and an adjacent member is measured with a protractor. The tubing is placed in the block and the reference line on the tubing is aligned with one of the reference marks on the block. Next the cross-feed head is loosened and the measured angle is set up between the piece of tubing and the reamer. The cross-feed head is locked and the tubing is ready to cut. Feed the tubing into the reamer slowly while applying cutting oil. When the first end is cut to the proper depth, measure the angle between the members at the other end and reverse ends of the tubing in the block. Align the reference line on the tubing with the correct reference mark on the block so that the finished cuts will be oriented properly with respect to one another. Now set the tubing at the proper angle and repeat the cutting process until the member is of the correct length.
A little practice on some scrap pieces of tubing will give the operator enough experience to start fitting the tubing on his plane. This method was a great time saver for us and I hope it will benefit other members in their work.
WHEN WORKING with fiberglass, a smooth finish is often hard to obtain without hard sanding work and filling. When applying the last coat of resin, a glass-smooth finish without low spots can be obtained by stretching thin polyethylene or vinyl plastic over the wet resin and attaching it to any dry place with Scotch tape. The plastic will not stick, and when the resin has set it is readily removed. This film forces the resin to lie smooth and fill all indentations completely.

If a lightweight curved panel is desired, aluminum screen wire may be used in place of fiberglass cloth or mat. Many shapes can be formed without molds, as the screen wire can be stretched into shape. Cheap polyester resin can be used with the wire, and plastic film taped to the wire will hold the resin in the mesh until set. Engine cowl pieces are suitable for this method, and forming a mold over the engine is not required.

Using brushes in polyester resin does not require the purchase of expensive solvents such as ketones to clean the brush and the hands. The nearest supermarket carries a cheap cleaner which does the job even when diluted with water. It's good old Lestoil. Just soak the brush a short time, then rinse with warm water.
ENGINE MOUNT CONSTRUCTION AND VERTICAL FIN DESIGN
By J. C. Long, EAA 9436
I built the engine mount for my Playboy using a piece of 3/8 in. plywood erected in front of the fuselage as a jig to simulate the engine. Ray Stits' instructions for building the Playboy say to use a piece of 1/4 in. thick hard asbestos, but plywood was available and asbestos was not.
It was very discouraging when the jig board burned to pieces before welding was half finished. I was able to finish the welding by doing part of it on the tool bench and the rest in another jig but was lucky that warpage did not ruin the job in this piecemeal process.
My subsequent experience with modifying the engine mount to support a Continental 90-14 engine came off better due to the lesson I learned building the mount.
The Continental -14 engine has Ford bushings in the mounting holes which makes it necessary to add an additional brace to the engine mount, on both sides, due to the flexibility of the Ford bushings. The top members of the Stits engine mount are cantilevered out, which is adequate when bolted to the -8 and -12 Continentals but not stiff enough for the -14 engine.
In adding the brace tubes I used, instead of plywood a piece of .065 in. thick sheet steel for jigging the front of the engine mount. Behind the sheet I bolted two pieces of scrap 2 in. x 6 in. wood for additional stiffness. This proved entirely adequate to hold the front of the engine mount to the exact bolt hole centers of the engine while welding in the braces.
From the experience, my advice on jigging to build an engine mount is to use the piece of plywood erected in front of the fuselage, but with a piece of sheet steel on the face of the plywood. The plywood furnishes rigidity to prevent warping of the sheet steel while welding near it, and the sheet steel is a barrier between the welding flame and the plywood. Needless to say, even with this precaution it is advisable to weld the front of the engine mount as quickly as possible, before so much heat is conducted through the sheet and the mounting bolts that the plywood begins to burn around the bolts, as it eventually will if you are slow in completing the welding. Even then, the trouble is not as critical as it would be without the sheet steel facing, since the bolt center dimensions will be maintained by the sheet steel.
An additional tip: when necessary to fit a curved piece of tubing to a
structure and the curve must be determined by trial, use a piece of aluminum tube, which bends easily, to determine the length and shape required. Trace the curve of the aluminum tube on a piece of plywood and drive heavy nails at close intervals along the curve. Heat and bend the steel tubing around the curved line of nails.

This is also a very fine way of bending tubing for the outline of your empennage. Use a 1/16 in. welding rod as a ship's curve to draw the shape of the tail surface; in other words, hold one end of the rod fast at the top of the fin post, for example, and flex the welding rod so as to make it bend to the shape that makes a nice looking outline of the fin. Trace along the welding rod. Drive nails at intervals along the pencil line. Tack weld one end of the tubing to the top of the fin post. Heat and bend the tubing around the nails. My entire tail group was built using nothing but nails driven on each side of the pieces to hold them in place. Besides eliminating the need for jig blocks, it is faster and less trouble.
P.S.: Somebody please publish an article on building aluminum cowlings around engines.
PORTABLE NICOPRESS SQUEEZE
By Kalman E. Saufnauer, EAA 1201
Several years ago a cable splice was required in a hard-to-reach location; removing fairleads to pull the cable out was impossible without damaging the fabric finish. This was a factory goof, not on a homebuilt. Consequently, it was desired to perform the splicing from within the aircraft, even though it could not be accomplished with the standard tool.
The portable squeezer shown was developed because regular nicopress tools require a large unobstructed area for operation. Homebuilders with limited requirement for a squeeze may not wish to lay out the cash required for two or three sizes even at surplus prices; many A & P mechanics have only the 1/8 in. size, if any. (The writer was one of these cheapskates who borrowed from a larger shop).
It is suggested a 3/4x1 in. bar of steel be used, although 3/4 in. square would be enough, since the Nicopress sleeve is copper with cadmium plating. Cold rolled is probably satisfactory; however, the one shown was of 4130 C.M. The mating faces must be smooth and straight. A pair of bars should be clamped tightly and drilled for the 3/8 in. bolts, which are then installed and tightened. These provide alignment dowels for further drilling and are later used for squeezing. When all four bolts are securely tightened, you may carefully drill pilot holes between the mating faces of the bars. Use of a center punch and rather heavy hammer may be required to prevent drill chipping or wandering when starting the pilot hole. A machinist's center drill will facilitate hole starting. Please use a smooth running drill press for all operations; the squeeze holes must not be "egged" oversize by a sloppy bearing or imperfect drill bit. Drill these holes 1/16 in. undersize, then slowly and carefully drill to size with a sharp drill; check hole size with the drill shank. If a reamer is not available, wrap emery cloth around a rod or rat tail file and polish to size. (The rod may be slit at one end to hold the emery cloth, which is inserted in the slot, then wrapped around it; a drill motor is used to rotate the rod.) This makes a very small diameter sanding drum. The drawing gives drill sizes and all dimensions. For those whose chuck cannot use a 3/8 in. drill it is suggested you use a 1/2 in. drill, then file a slot in one side of each bar to clear the 1/4 in. cable eye; use a 1/4 in. rat tail file. Use emery cloth to lightly round all edges in holes.
View showing finished squeeze and gauge. The 1/8 in. cable in the squeeze has been rotated 90° to better show correct appearance of finished sleeve. Correct use of gauge is shown on 1/16 in. cable, also shown is thimble in cable eye.
View of cable and sleeve being correctly squeezed. Notice sleeve relationship to tool. The three sizes of sleeves are shown, also the 1/8 in. thimble. Note: 1/16 in. and 3/32 in. cables use 3/32 in. thimble not shown.
Only two bolts are needed to squeeze a cable sleeve; bolts should be hardened, as even AN bolts can only be used a few times on 1/8 in. cable before the threads gall. Use of a thread lubricant is advisable. Hardened bolts may be found in auto connecting rods.
Bolts must not be threaded within 1/8 in. of the mating faces because the shank is needed as an alignment dowel. Short rods or dowels may be used for alignment when using a vise to squeeze. 1/8 in. cable should have three squeezes, 3/32 in. should be squeezed in two places, while once will suffice on 1/16 in. cable. When making multiple squeezes, start at the cable end and work toward the eye; the sleeve grows lengthwise and so will tighten the eye around the thimble, which, of course, must be used on primary control cables. The same holds true if a bushing is used.
Note to budding A & P types — tighten each bolt no more than one turn without tightening its mate. Also, leave excess cable on the short end so you can pull the eye tight while squeezing. Then cut with a sharp chisel and anvil about 3/8 in. from the sleeve, being careful not to damage the long cable; use a piece of aluminum bent around the cable as a shield. Before placing in the squeeze, the sleeve may be pinched enough to hold the cable eye tight by use of vise-grips on the flat sides; do not overdo it or the squeeze may not remove your marks. This method may be used to fit cables to correct length on the plane, then carefully remove and squeeze at the workbench.
Finished sleeves should not be misshapen, and dimension "D" must fit the proper gauge slot. If too small, ream the squeeze holes; but if too large, you must sand or file the mating surfaces and redrill the holes, or make a new tool. Gauge slots should be carefully filed square in the vise and checked with a drill shank. Holes in the gauge as shown will enable you to fit it to the squeezing bolts for storage.
---
**MORE ON RIB-STITCHING WITHOUT A HELPER**
A few months ago this column described a method of doing ribstitching on a wing without a helper. Member John E. Mueller writes in to tell of another method which is certainly worth passing along. Chalk lines are snapped on the wing as usual, to mark the locations of ribstitching. With the needle, holes are punched in the fabric alongside the ribs and at all the chalk marks, on both surfaces of the wing. The wing is then hung or propped up and it is possible by placing one eye very close to an adjacent hole to see through the wing well enough to guide the needle point into the appropriate hole on the other side.
On a wing of average chord, place it horizontally; usually it is possible to push the needle up and down while looking around the leading and trailing edges and do the stitches at the ends by visual reference to the chalk lines. Then prop the wing up and do the stitching in the central portion of the wing by this sighting-through method. This greatly reduces the amount of walking back and forth and enables one man to do the job almost as fast as two men can.
TAKING THE WARP OUT OF TUBING
By John A. Sons, EAA 3588
An interesting letter arrived recently from John A. Sons, of Albuquerque, N. M. He says that his welding experience has shown that a common problem in welding steel tube fuselages is that the longerons tend to bow inward when cross pieces and diagonals are welded into place. This is caused by contraction of the longeron metal at the point of the weld, which is of course mostly on one side of the tubing. It seems to shorten the inner side of the longeron, making it bend inward. The cure is to heat the outside of the tubing as shown in the sketch, just enough to soften it and let the tension in the tubing remove the bend automatically. Trial-and-error will show just how much heat to use: just be careful not to let one spot get so hot that movement of the metal causes a ripple or kink.
Sketch by Don Cookman
DISCOVERS USE FOR RUBBER MALLET
By Stephen du Pont
Buck Hill Farm, Southbury, Conn.
Very few amateur constructors know all the tricks of sheet metal working, and after 30 years of amateur same I discovered the rubber mallet.
The problem was to make a 1-15/16 in. dural tube about four inches long that could be opened along the side. A thermometer capillary with a sensing coil (not electrical, but gas and fluid operated, such that the coil could not be removed) was to be inserted inside the tube; then, a suitable notch having been cut, the tube was to be closed upon the capillary, not tight but snug, and the short section of 1-15/16 in. tubing inserted into the ventilator duct of a sailplane. Thus the thermometer coil lay inside the two inch ventilator duct and the capillary passed through the side of the duct. The purpose was calibration of airspeed instrument static vent position error for FAA flight test, and calibration of the airspeed indicator, using an extra airspeed instrument, a trailing bomb with static vent well below, and a movie camera to photograph the panel. Temperature was wanted during the test.
A piece of rod a little longer than the sheet of .032-24ST3 material was set over the vise jaws and the four by seven inch piece of dural was roughly formed into a very poor tube by hand and in the vise. The bent dural was laid over the rod and lightly hammered with the rubber mallet. The rubber hits the sheet metal and has a tendency to form the metal around the rod for a short distance. Even a ¾ inch rod can be used to work a very accurate 2 inch diameter, and a nearly perfect tube with overlapping seam ready for riveting can be formed. The formed tube of 1-15/16 inch diameter was considerably larger than the rod. It just depends on how hard you hit it, and how much you move the sheet metal around.
The radius can be carried right to the edge of the sheet, and if you put in too much it is easy to unwrap it and start over.
Cone shaped radii, and angle bends with bend radii can be formed this way, depending on what you use as a mandrel, and how you move it around.
The rubber mallet acts like a baby hydro press with a rubber box attached. It appears to be well known by the professional sheet metal worker but missed by many of us who are forced to improvise, or are in a hurry.
"PANIC CAN" HANDY WHEN DOPING
By A. G. Ronay, EAA 7068
923 Bennington, Houston 22, Texas
Undoubtedly, many times EAA'ers, while doping away, find that they're fresh out of clear dope for those last finishing touches. Frequently a gallon or less would do the trick, hence the "Panic Can." This would be particularly handy for those EAA'ers like myself, who can afford only minimum purchases of clear dope. The dope normally lost, i.e., drippings on the sides of the can, brush pot, brush handle or that spilled on the workbench or floor, is saved with the use of the "Panic Can."
Take an empty, round, one gallon can and clean thoroughly; mark this can in some way and set it aside. Each time after cleaning your brush, pour the thinner into the marked can, seal and set aside. Each time before beginning the next doping session, peel the hardened dope from the sides of the can, brush pot, brush handle or the spillage on the workbench or floor and drop it into the "Panic Can." Don't fret if the hardened dope picks up fabric scraps, dirt, junk, etc. — you'll fix that later.
Now then, Mr. EAA'er, when you find you're fresh out of dope and can't finish that last wing section, do you panic? Nope. You reach for your "Panic Can," filter out the contaminants (use a commercial lacquer filter or, in a pinch, a section of Ma's nylon stocking), and finish the job.
SCREW DRIVER HOLDS SMALL NAILS
By H. C. Foster, EAA 10200
Box 34, Wexford, Pa.
Here is a tip that is particularly helpful in the building of wooden wing ribs or any other construction which requires the use of one-quarter or three-eighths inch long aircraft nails. Any craftsman who has worked with extremely short nails encounters the problem of holding the nail upright prior to the first blow of the hammer. It is virtually impossible to hold an 18, 19 or 20 gauge nail one-quarter inch long with the fingers without these fingers taking the brunt of the hammer blow intended for the nail. It has been common practice to use a pair of long nosed pliers to pick up the nail and hold it in place for the initial blow of the hammer.
An alternate method, quicker and easier than the use of pliers, is the use of a small magnetic screw driver generally available in any hardware store. This type of screw driver with a magnetic tip enables the craftsman to pick up the nail and hold it in place for hammering. With just a little practice it is possible to make considerably better time than any other method that might be used to hold the extremely small nails used in substantial quantities in the construction of wooden aircraft components.
TACK WELDING TIPS
By Al Griffin
2567 Eleventh Ave., Hayward, Calif.
Here are some tips I will pass along for what they are worth:
1. I cut 4 inch holes in my plywood fuselage jig at the cluster joints to facilitate tack welding. Also makes nice finger holds for removing frame from jig.
2. Put a piece of paper between two blocks 1 inch by 2 inch by 4 inch. Bore holes through block to match the tube sizes you are using. (¾-¾-¾). Remove paper and you have a handy holder for filing and cutting tubing.
3. In laying out my tail feathers I swung the radius on the jig by boring two small holes the correct distance apart in a scrap of aluminum.
4. On the Miniplane compression strut fittings a friend turned mine out on a lathe, but if I had to make them again I would construct them as follows: Cut a piece of round stock the I.D. of the compression strut. I'd make it about ¼ inch long, and fasten it to the ¾ inch strap with a countersunk screw or rivet.
DRILLS HOLES FOR LARGER MAGAZINE
By Herman P. Katschke
15227 Hiawatha Dr., Orland Park, Ill.
Since SPORT AVIATION magazine has increased its pages, I can't punch holes in it with a hand punch. To overcome this I took a 5/16 inch drill and drilled a hole through the table of my drill press. Then I ground off the opposite end of the drill flat. By putting the drill in upside down I just pull down the lever for a neat, clean hole.
A HINGE IDEA
By Jerry Nolan
648 Soundview, Bronx, N.Y.
Here is a hinge idea: I'd like to see some small outfit pre-weld these hinge assemblies and market them. All one would have to do is finish weld the hinge to the two rudder posts.
FINISHING OF AIRCRAFT
By Orv Lippert, EAA 9159
President, Chapter 134, Central Michigan
Riverdale, Mich.
This concerns the finishing of aircraft. Normally many otherwise fine looking homebuilts suffer from an "amateur" paint job. This on close inspection shows touch-up marks, paint peel, runs, sags, etc.
First of all, on preparing a new surface for finish: lay out your color plans on a separate sheet of paper, with either oil or water colors. This will help you as a guide for later maskings. You can't pull off the masking tape halfway through to know just how it's going to look.
On a dope finish airplane water sand the last coat of dope with No. 220 wet or dry paper, using plenty of water. When it dries check to see if there are any spots that are through, or rough places. If there are, touch them up. Then give the whole surface a wet coat of silver CAB dope. Go away and come back the next day.
Never touch the surface with sandpaper again. Now you are ready for color. If dope, go ahead and shoot your first color on just as with the silver. (Make sure that after you use CAB dope once, you NEVER go back to nitrate dope.) If enamel, I spray one real light coat over the whole surface. Then I heat the rest of the enamel hot enough that it will spray without reducer, and follow up the first coat with a second heavy coat. (Don't even try to enamel with outside air temperature below 80 degrees F.) Then go away and come back the next day.
Now notice, if the drying temperature has been 80 degrees or more, I usually go right ahead and mask on the fresh enamel. (Wait for the screams on this.) First of all, when starting to mask, do you have any tape on hand? If so, throw it away, and go buy some FRESH tape. Use only \( \frac{3}{4} \) inch tape for contact with the fresh painted surface. If the local supply stores don't stock \( \frac{3}{4} \) inch tape, don't paint until you get some. Then lay out your second color with tape. Mask. Use \( \frac{1}{2} \) inch or \( \frac{3}{4} \) inch tape to stick the masking paper to the \( \frac{3}{4} \) inch tape, NOT TO THE PAINTED SURFACE. Go ahead and spray your second coat of enamel. Go have a cup of coffee, perhaps take the wife out to the local FAA office for a refreshing bit of humor, or something. Don't go to bed just yet, however; as soon as this second coat is dry, approximately two to three hours, remove the mask, being very careful not to touch the last coat. Then get away from the job until the next day. If a two color job is all that you are working for, you are all set. A light hosing down with cold water will harden the enamel before flying the new paint. This will keep bugs from slamming into the soft surface as you take your pride and joy up for all to behold.
This primarily concerns finishing a surface with enamel. However, the \( \frac{3}{4} \) inch tape works equally well on dope finishes and will not pull along the edges.
In case you are unfortunate enough to have to wait several days before attempting a second coat of enamel, wait for at least a week or 10 days; otherwise the new thinned coat of enamel over the improperly set up first coat will cause an interesting effect similar to an alligator's tail end. This can be preserved, if one likes, as an unusual crackle job and will cause comment wherever you show it.
---
**Turtleback Baggage Compartment**
*By Henry E. Winslow, EAA 595*
Most turtlebacks on the open cockpit homebuilts I have seen are made up from wooden formers with stringers fitted into slots notched into the formers. I use a different type of former which gives me more room to use as a baggage compartment behind the back rest in the turtleback itself.
Instead of wooden formers I used \( \frac{1}{4} \) in. 4130 tubing bent to shape and welded directly to the top longerons of the aircraft. Next I bent up a quantity of "U" shaped clips from .020 cold rolled plate \( \frac{1}{2} \) in. wide. These are tack welded to the top of the tubing formers to take the wood stringers. I held my stringers to the clips with Tinnerman screws and nuts.
The back of the baggage compartment is made of \( \frac{3}{8} \) in. plywood fastened to the former tubing with several straps and the \( \frac{1}{8} \) in. plywood floor is strapped to the top longerons.
The front former is made of \( \frac{1}{4} \) in. plywood as this gives needed stiffness to tack and dope the fabric to. I cut my former out to leave a 1\( \frac{1}{2} \) in. edge all around and a cover plate of \( \frac{1}{4} \) in. plywood is hinged at the bottom to swing forward when the latch on the top is released to give access to the baggage compartment.
Be sure to figure out how much baggage you can carry in the compartment, and then it is wise to cut that figure to 80 percent so you will not get too close to exceeding the rearmost center of gravity requirement. Type up, or have engraved, a plate to be mounted on the inside of the backrest plate giving the max. baggage weight allowable. In almost any ship the CG should be such that 30 lbs. could be carried. This is enough to take care of a change of clothes for an overnight trip and some spares such as a tie down kit, spark plugs, spare oil, etc. Some web straps should be screwed or bolted to the floor to tie the baggage down so it will not slide around in rough air nor, in the case of the tie down rods, poke a hole thru the fabric.
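The 80 percent rule above is simple arithmetic; a minimal Python sketch (the 50 lb. starting figure is a hypothetical example, not from this article):

```python
def placard_weight(cg_limit_capacity_lbs: float) -> float:
    """Cut the computed baggage capacity to 80 percent, as suggested above,
    to stay clear of the rearmost center of gravity limit."""
    return 0.8 * cg_limit_capacity_lbs

# Hypothetical example: a computed 50 lb. capacity would be placarded at 40 lbs.
print(placard_weight(50))  # -> 40.0
```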
As the cover plate hinges and so is not a permanent installation it is a simple matter to upholster it for a neat appearance. Also this area is a good one to screw some transparent holders to for carrying the ship's airworthiness certificate, table of limitations, radio license, etc. as they are sure to be seen by anyone entering the cockpit.
Henry Winslow lives at 314 East Hazel St., Inglewood, Calif.
TIP FOR CUTTING TUBING
Most mechanics resort to the time-honored method of hacksawing and filing when fitting the ends of steel tubing together for welding. The amateur aircraft builder, however, finds this to be tedious when cutting the many tubes in a fuselage. One of the simpler ways of doing it faster is to get a dressing tool and shape the face of a grinding wheel into semi-circular form. However, I have found that there is yet another simple, inexpensive trick, particularly helpful when many tubes have to be fitted.
Purchase a standard hole cutting saw the same size as the tubing. These saws are available at all good hardware stores. Put it into a drill press set for high speed, place the tubing to be cut on the drill table with any suitable vee-block and clamp arrangement, making sure the tube is held so that it cannot turn in any direction, and just bring the saw down. These saws will cut over their own diameter in depth and as a result the tube will be severed completely in one pass. The end shape which results is perfect for a 90° joint. I have found this scheme works best on right angle cutoffs, but you can experiment on other cuts if you wish. The "U" cuts made in this way can quickly be deepened with a round-faced grinding wheel or even a rat tail file when 45 deg. joints are needed. One firm which makes hole saws is the L. S. Starrett Co., Athol, Mass., and they are available in sizes from 9/16 in. to 6 in. — Bruce E. Graham, EAA 4303.
PLIERWRENCH AIDS AIRCRAFT WORK
Our friends at Cooper Industries, Inc., 2149 E. Pratt Blvd., Elk Grove Village, Ill., send along this photo of their handy "PLIERENCH". Having a geared action, this tool has a 10-to-1 gripping power and is shown here actually raising the nose wheel of an airplane off the floor. It can be used to grip polished and plated nuts to avoid the scuffing action of fixed-jaw wrenches, it will bend springs to shape, gets stripped and chewed nuts off, serves as a powerful clamp, a pipe wrench, tube cutter and internal-external wrench. Tool sells for $12.75 including pouch, universal jaw, wire jaw, pipe jaw, internal-external jaw and tube cutter.
A Low Cost Air Compressor
By Ralph J. Cox, EAA 11804 and Thomas Hurley, EAA 6675
203 Locust St., Santa Cruz, Calif.
OLD HOT water heater tanks can be had for the asking from friends and relatives, or other EAA members, and with the small leaks plugged or welded closed, they can be used for this project.
Screw a ¾ in. pipe cap on the cold water inlet on both tanks (one tank can be used, but does not provide the leisure time for spraying), and hook a short piece of hose or a pipe connector from the hot water outlet to the bottom drain on the second tank, using reducer fittings. Connect the spray hose to the hot water outlet on the second tank, and make sure that all of the connections are secure and do not leak.
Lead the water hose to the drain outlet on the first tank, and incoming water at household pressure (usually close to 40 lbs. in many areas) will maintain steady even air pressure. If you do not have adequate line pressure, then the deal is off! But most areas do, so we seriously doubt that many people will have any trouble. For a water level gauge, use a small petcock brazed into the tank, or a very small hole drilled near the top of the second tank will do the same. When the water comes sizzling out of the small leak provided herein, then you know that the tanks need draining into the petunia bed. Obviously, the system isn't very portable, but then it's cheap, and that's something!
Mayday . . . SOS . . . Mayday
Attention All Homebuilders
By Norm Ginn
Van Nuys, Calif.
THE PHOTOGRAPHS will immediately bring to your attention a very vital part of your plane, the control system push-pull tubes. This is a common type of control on both "homebuilt" and "store-boughten" aircraft.
A recent fatality here on the west coast is the reason for this article. EVERYONE who has a plane flying should pull the inspection plates and check ALL of your push-pull tubes.
The critical item is the "screw-in ball joints" that we all use on the ends of our push-pull tubes. MAKE SURE you have a right-hand thread with lock nuts on each end. If vibration loosens the lock nuts, the tube will loosen on one end and tighten on the other. If a left and a right hand thread are used, the tube can unscrew at both ends at once and come off. If a tube has started to work loose, the paint at the joint will be cracked.
Another very important item is the "control stops." The aileron stops should be at the stick or torque tube and not at just the bell crank in the wings. Aileron movement of the stick with no stops could possibly put pressure on the elevator push-pull ball joint which might cause the lock nut to be loosened.
The stick side movement (aileron action) should not be great enough to twist the self-centering rod ends to such a degree that it is actually starting to turn the rod ends.
Don't say after reading this article, "My controls are OK. I couldn't have made a mistake." EVERYONE CAN MAKE A MISTAKE. After the accident, FAA started checking COMMERCIAL aircraft and, you guessed it, they found a helicopter with a left and a right hand thread. Bulletins are being sent out to all areas on this subject. Make sure YOU are around to read them.
Some Notes On The Testing Of Non-Standard Fabrics For The Construction Of Amateur-Built Aircraft
By Reed Johnson
1678 Lincoln St., Berkeley, Calif.
The desire of the homebuilder to experiment and devise in the construction of his "baby" is one of the reasons for the existence of our organization. Most of us are constantly looking for better, easier, or more economical ways of doing things, consistent with safe practices. It was for this reason that I began looking for a material which would be better than cotton and cheaper than Ceconite. I then conducted some tests of a number of easily obtained fabrics that could be used for homebuilts. The following covers the methods used in making these tests and the final results obtained.
First of all, I was interested in the comparative weights of the different fabrics, so I die-cut 3/8 in. samples of the various fabrics I desired to test. The reason for the die-cutting was the need for absolute uniformity in size due to the very small samples taken. These samples were then weighed on an analytical balance for comparative weights. It must be emphasized that all of the data obtained is comparative. No attempt was made to obtain absolute values. The number of samples and the quantity of material that could have been tested would have been beyond what I wished to concern myself with. Therefore, the samples were all taken, in each case, from the same bolt of fabric and no distinction was made between different runs. In these tests, all material that tested considerably below the tensile strength of Grade A cotton was discarded and no data on such material is presented. It is interesting to note, in this regard, that all of the Dacron-cotton fabrics were inferior.
The apparatus for testing was very simple. Two clamps were made to grip the samples with an even grip and with polished jaws so that the fabric would not be cut by the jaws of the clamps. One clamp was suspended in a doorway and the other was attached to a plastic bucket. The samples were clamped in these two clamps and lead weights were placed in the bucket, in increments of 1 1/2 lbs., until the sample pulled apart. Considerable care was used to insure that the pull was taken by the full width of the test pieces which were 3/8 in. in width, all cut at the same time.
All samples broke within the length of the piece and not at the clamps. This would indicate that the results were truly comparable. After discarding all unsatisfactory samples, the results in tensile strength and comparative weight were as follows:
| Fabric | Comparative Loading | Comparative Weights |
|-----------------|----------------------|---------------------|
| Grade A Cotton | 49 lbs. 2 oz. | 58.05 milligrams |
| Ceconite | 77 lbs. 5 oz. | 50.45 milligrams |
| Polaron | 77 lbs. 5 oz. | 42.00 milligrams |
| M.W. 16B1446 | 70 lbs. 6 oz. | 38.95 milligrams |
| M.W. 16B1233 | 56 lbs. 11 oz. | 25.90 milligrams |
| M.W. 16B1232 | 39 lbs. 8 oz. | 25.60 milligrams |
| M.W. 16B1589 | 49 lbs. 0 oz. | 25.40 milligrams |
All samples with an M.W. number represent the catalog number in a Montgomery Ward catalog for the Oakland, Calif. supply house. I presume that the numbers are the same for other Montgomery Ward locations. The following is a Summary of Fabrics:
Grade A Cotton is the standard aircraft cotton.
Ceconite is the covering material sold by the Cooper Engineering Co. of Van Nuys, Calif. The price of Ceconite per inch yard is 5.9 cents.
Polaron is the trade name for a flat weave 100 percent Dacron material produced by Travis Fabrics Inc., of New York. One retail outlet is Freifelds' at 2042 University Ave., Berkeley 4, Calif. The price is $1.49 per yard in 44 in. width, or 3.39 cents per inch yard.
M.W. 16B1446 is sold by Montgomery Ward and Co., and is listed as 100 percent polyester Dacron Uniform cloth. It is priced at $1.37 per yard in 44 in. width, or 3.1 cents per inch yard.
M.W. 16B1233 is a Dacron Crepe. The crepe pattern disappears when it is shrunk with a hot iron. The price per yard in 44 in. width is $1.64, or 3.73 cents per inch yard.
M.W. 16B1232 is listed in Montgomery Ward's catalog as, "Dacron Batiste," and sells at $.97 per yard in a 47 in. width, or 2.06 cents per inch yard. This is a very sheer material that would be suitable for ultra-lights or sailplanes.
M.W. 16B1589 is an all-Dacron material that is available only in a printed pattern, but that matters little since it would be painted anyway. It is listed at $1.47 per yard in 44 in. width, or 3.34 cents per inch yard.
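Each per-inch-yard figure quoted above is simply the listed price of one running yard divided by the bolt width in inches. A minimal Python sketch checking the conversions (prices in dollars and widths in inches copied from the listings above; the script itself is only an illustration):

```python
# Price per "inch yard" in cents = 100 * (dollars per yard) / (width in inches).
# Figures are taken from the fabric listings above; Ceconite is quoted
# directly at 5.9 cents per inch yard and so is not converted here.
fabrics = {
    "Polaron":      (1.49, 44),
    "M.W. 16B1446": (1.37, 44),
    "M.W. 16B1233": (1.64, 44),
    "M.W. 16B1232": (0.97, 47),
    "M.W. 16B1589": (1.47, 44),
}

for name, (price, width) in fabrics.items():
    cents = 100 * price / width
    print(f"{name}: {cents:.2f} cents per inch yard")
```

Run as-is, this reproduces the 3.39, 3.1, 3.73, 2.06 and 3.34 cent figures given in the text.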
Summarized, the above tests show that M.W. 16B1233, M.W. 16B1232 and M.W. 16B1589, (which are Dacron fabrics sold by Montgomery Ward and Co.) are equal to, or very nearly equal to grade A cotton in strength, but weigh less than half the weight of grade A cotton.
M.W. 16B1446 is a Dacron material that is a little over 43 percent stronger than grade A cotton and has about two-thirds of its weight.
Polaron is equal in strength to Ceconite but has only four-fifths of its weight and, best of all, is just a little more than half the price.
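The ratios quoted in the last three paragraphs follow directly from the table. A small Python sketch, using the breaking loads and comparative weights from the table above (loads converted to ounces, then taken as ratios against Grade A cotton):

```python
# Breaking load as (lbs, oz) and comparative weight in milligrams,
# copied from the table above.
data = {
    "Grade A Cotton": ((49, 2), 58.05),
    "Ceconite":       ((77, 5), 50.45),
    "Polaron":        ((77, 5), 42.00),
    "M.W. 16B1446":   ((70, 6), 38.95),
    "M.W. 16B1233":   ((56, 11), 25.90),
    "M.W. 16B1232":   ((39, 8), 25.60),
    "M.W. 16B1589":   ((49, 0), 25.40),
}

def ounces(lbs_oz):
    lbs, oz = lbs_oz
    return 16 * lbs + oz

cotton_load = ounces(data["Grade A Cotton"][0])
cotton_wt = data["Grade A Cotton"][1]

for name, (load, wt) in data.items():
    strength = ounces(load) / cotton_load  # breaking load relative to cotton
    weight = wt / cotton_wt                # sample weight relative to cotton
    print(f"{name}: {strength:.2f}x cotton strength at {weight:.2f}x its weight")
```

M.W. 16B1446 comes out at about 1.43 times cotton's strength and 0.67 of its weight, and Polaron's weight is 0.83 of Ceconite's, matching the "43 percent stronger," "two-thirds," and "four-fifths" figures in the text.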
All of the fabrics listed tighten up with a hot iron the same as Ceconite and all must use reinforcing tape and rib cord of greater life expectancy than cotton or linen. I have been informed that Ceconite is basically Dacron, though only the Cooper Engineering Co. knows for sure just what it is.
In an effort to find something less costly than the $6.25 asked for a half pound of Ceconite rib stitching cord, I tested a Dacron cord distributed by the Brownell Co. of Moodus, Conn. It is known as Bonded Dacron type B, and is used for making archery bow strings. It is sold by many archery supply houses for $7.50 per ¼ lb. Since it is much lighter than Ceconite rib cord, a quarter pound will go as far as, or farther than, a half pound of the Ceconite product. It tests at 33½ lbs. breaking strength. This is a little less than the standard requirement set up in CAM 18 of 40 lbs. minimum, but it must be remembered that this standard was set for linen cord, which deteriorates at a greater rate than Dacron. The strength standard could be more than met for "do not exceed speeds" in excess of 150 mph by doubling the cord, by using the double loop knot, or by using 15 percent closer stitching. Even so, it will be cheaper and the smaller bulk, even doubled, makes a neater job.
Herter's of Waseca, Minn., sells a Dacron cord that appears to be the same as Brownell's, even to the spool it is wound on. It is sold plain or pre-waxed at a price of $1.39 per ¼ lb. spool. The samples I tested broke at 31 lbs. 7 oz. and 29 lbs. 10 oz., respectively. This is not a significant difference from the more expensive Brownell material. Used double, this cord is well above the recommended minimum. The catalog number of the pre-waxed cord is: QN3H1A and for the plain unwaxed cord is: QN3H1. Either type on a ¼ lb. spool sells for $1.39.
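Assuming, as the text does, that doubling the cord doubles the breaking strength, the margins against the CAM 18 minimum work out as follows (a quick Python sketch using the single-strand test figures above):

```python
# CAM 18 minimum for rib-stitching cord (a standard set for linen), in lbs.
CAM18_MIN = 40.0

# Single-strand breaking strengths from the tests above, in lbs.
cords = {
    "Brownell Bonded Dacron type B": 33.5,          # 33 1/2 lbs.
    "Herter's pre-waxed":            31 + 7 / 16,   # 31 lbs. 7 oz.
    "Herter's plain":                29 + 10 / 16,  # 29 lbs. 10 oz.
}

for name, single in cords.items():
    doubled = 2 * single  # two plies share the load
    verdict = "above" if doubled > CAM18_MIN else "below"
    print(f"{name}: single {single:.1f} lbs, doubled {doubled:.1f} lbs "
          f"({verdict} the 40 lb. minimum)")
```

All three cords fall short of 40 lbs. as single strands but clear it comfortably when doubled, which is the point made in the text.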
A neat way to double this cord is as follows: start by cutting a length twice as long as required for the stitching. Clip one end of the single cord to some solid object as an anchor. Then thread the other end through the needle and fasten the end to a clip or screw eye held in the chuck of a ¼ in. drill. Stretch the cord and run the drill for only a few seconds, which will tightly twist the cord. If the needle is positioned in the middle of this piece and then the two ends of the cord brought together after disconnecting from the drill and anchor but carefully held taut, the cord will twist itself into a very neat two-ply cord. Some experimentation may be needed to determine the number of seconds to run the drill for each yard of cord. If you have used the pre-waxed material, it will be all ready to use. For my set-up I have a strong spring clip screwed to the door, with another at the correct position to hold the needle at right angles to the stretched cord and at its midpoint. Be sure the thread is stretched in a straight line through the eye of the needle or there will be more twist to one ply than the other. This may tend to make it kink up. Keep the single twisted cords taut or they will twist into a hopeless snarl.
Twisting three lengths together will make the equivalent of the Ceconite cord, but two are strong enough for homebuilts of the average speed class. One strand is probably enough for "do not exceed speeds" of 130 mph. The foregoing does not actually involve the amount of trouble that you might surmise from reading about it. If you are working alone it is convenient to have several needles so you will not need to prepare one at a time. You can make up real long lengths by using two lengths side by side, twisting each separately for the same length of time and then placing them together and allowing them to twist together.
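The cost and strength comparison above can be checked with a little arithmetic. The sketch below uses only the prices and breaking strengths quoted in the text (the 31 lb. 7 oz. figure is written as 31.44 lb.):

```python
# Cost-per-ounce and doubled-strength check for rib stitching cords,
# using the figures quoted in the article.
CAM18_MIN_LB = 40.0  # CAM 18 minimum breaking strength (set for linen cord)

cords = {
    # name: (price in dollars, ounces per spool, single-strand strength in lb)
    "Ceconite rib cord":  (6.25, 8, None),   # strength not quoted
    "Brownell Dacron B":  (7.50, 4, 33.5),
    "Herter's pre-waxed": (1.39, 4, 31.44),  # 31 lb. 7 oz.
}

for name, (price, ounces, strength) in cords.items():
    line = f"{name}: ${price / ounces:.2f}/oz"
    if strength is not None:
        doubled = 2 * strength
        verdict = "meets" if doubled >= CAM18_MIN_LB else "fails"
        line += f", doubled = {doubled:.1f} lb ({verdict} the {CAM18_MIN_LB:.0f} lb minimum)"
    print(line)
```

As the text says, even the weakest sample used double comes out well above the 40 lb. minimum, at a fraction of the cost per ounce.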
I have found nothing better than the reinforcing tape sold by Cooper Engineering, for use with Ceconite. It looks like and is used like Scotch tape, since it is adhesive on one side and is simply pressed into place. No pins are needed and it is not necessary to impregnate it with dope. The "Super Seam" cement put out by Cooper Engineering is great stuff and obviates the necessity of sewing on the cover. It can be cemented, by following their instructions, for any speed limit.
For wing tape I am making my own from M.W. 16B1446 with a pair of pinking shears. I can cut an awful lot of tape for 15 cents a yard and to any special width I need. Since I made these tests, I find that the fabric used by Pete Bowers on his "Fly Baby" is the same as M.W. 16B1446. Not such a bad idea after all.
Protection for the Open Cockpit Aircraft
By Rollin C. Caler, EAA 11984
1113 New Mexico St., Boulder City, Nev.
PROTECTION OF the cockpit of my Corben "Baby Ace" from being a source of spare parts for adults, and a playground for children, has very successfully been provided by the use of a simple cockpit cover. This semi-rigid cover goes on quickly, fitting over the windshield and back between the struts, dropping readily into place with very little fore and aft movement. The material is inexpensive .021 in. galvanized iron, obtainable at any builder's supply or hardware store. The edges are wrapped with any kind of cloth tape that has its own adhesive material on it.
A ⅛ in. stranded cable is threaded through ⅜ in. holes in the cover, dropping around the fuselage and brought up from underneath to meet at the solder-spliced eyes, where a padlock can be inserted to seal the cover. The cable used was the clothesline cable that I already had on hand, but extra theft protection would be provided by the use of hard aircraft cable.
While this is not a cover to take cross-country, it has the advantages of being inexpensive, durable, easy to put on and remove, and easy to make... a cardboard template was formed and used as a pattern for cutting the cover from the sheet metal. Besides keeping the children and adults out more effectively than cloth covers, it offers another bonus... the cover is locked to my tie-down ring when the airplane is gone, thereby helping to discourage other aircraft owners from parking their airplanes in my spot.
Leading Prop Edge Clamp
By Maj. Antoni Bingelis, EAA 2643
1111 Carlos Dr., Lincoln 5, Nebr.
THE EXTRA effort taken to insure a good joint when gluing the leading edge skin to flaps, ailerons, or even wings, will help insure maximum strength of structure.
The simple clamp illustrated is easy to make from odds and ends found around the average shop and requires no welding. Its design permits clamping pressure where it is most needed... directly over the rib. The clamp is especially valuable if the structure's nose ribs are of thin plywood stock and nail-strip clamping is out of the question. The dimensions are not critical and the device is self adjusting to various spar depths.
For my flaps I made a separate clamp to fit over each rib. The plywood skin was first pre-formed and prepared for gluing. I then cut 1 inch wide rubber loops from an old inner tube and slipped one over each rib and completely around the flap frame. This later provided a cushion under the metal straps and the use of protective webbing as shown in the sketch was not necessary. The inner tube loops also contributed a partial clamping effect and held the skin in place while the clamps were placed and carefully tightened.
NOTE: Be sure your structure is free from warpage and is properly aligned prior to final gluing and clamping of the leading edge.
Notice that the bolt or rod is slipped through the bottom loop of the metal band and is held in place under the rib and behind the "U" frame by tension exerted through the strap when the eyebolt's nut is tightened.
The spacer of plywood holds the clamp frame away from the spar to insure that there is no interference with the untrimmed edge of the plywood cover being glued.
This same gadget can also be used to hold a balky metal leading edge cover in exact position for nailing.
---
New Wrought Aluminum Alloy Commercial Designation System
By Harold Passow, EAA 2709
A NEW wrought alloy commercial designation system for aluminum and aluminum alloy products (sheet, plate, forgings, tubing, extrusions) has been developed by the Aluminum Association. This system of identification became effective on October 1, 1954 and all wrought material produced after that date is marked according to the new system. Casting alloys are not affected.
The temper designations such as —T4, —T6, remain the same and are affixed to the alloy designation in the same manner. Thus, Alclad 24S-T4 is now Alclad 2024-T4, 61S-0 is now 6061-0 and 75S-T6 is now 7075-T6.
The new designations are number changes only. The alloys are the same, in all respects, as before and like alloys are completely interchangeable.
The old and new designations for the wrought alloys are as follows:
| Old Designation | New Designation |
|-----------------|-----------------|
| 2 S | 1100 |
| 3 S | 3003 |
| 14 S | 2014 |
| 17 S | 2017 |
| A17 S | 2117 |
| 24 S | 2024 |
| 43 S | 4043 |
| 52 S | 5052 |
| 53 S | 6053 |
| 56 S | 5056 |
| 61 S | 6061 |
| 62 S | 6062 |
| 75 S | 7075 |
| XA78 S | X7178 |
| 78 S | 7178 |
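Since the change is a pure renumbering, the table above can be expressed as a simple lookup. The sketch below is illustrative (the `convert` helper is not part of any standard; it just splits off the temper suffix, which carries over unchanged):

```python
# Old-to-new wrought aluminum designations, per the Aluminum
# Association renumbering tabulated above.
OLD_TO_NEW = {
    "2S": "1100",   "3S": "3003",   "14S": "2014",  "17S": "2017",
    "A17S": "2117", "24S": "2024",  "43S": "4043",  "52S": "5052",
    "53S": "6053",  "56S": "5056",  "61S": "6061",  "62S": "6062",
    "75S": "7075",  "XA78S": "X7178", "78S": "7178",
}

def convert(designation: str) -> str:
    """Convert an old designation such as '24S-T4' to '2024-T4'.
    The temper suffix (-T4, -T6, -0, ...) is kept as-is."""
    alloy, _, temper = designation.partition("-")
    new = OLD_TO_NEW[alloy.replace(" ", "")]
    return f"{new}-{temper}" if temper else new

print(convert("24S-T4"))  # 2024-T4
print(convert("61S-0"))   # 6061-0
print(convert("75S-T6"))  # 7075-T6
```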
DEBURRING SHEET ALUMINUM
By Henry E. Winslow, EAA 595
Mira Loma Circle Apt., Unit 14A
1600 W. 5th St., Oxnard, Calif.
Deburring sheet aluminum with a file is a long, tedious process, and often the results are not as desired. Here is a tool which not only does an excellent job of deburring, but is fast and neat. It also works well on curved edges. The main part is one of the double blades from a wire stripper used by electricians. A rod is fitted through the hole in the blade and peened over. A handle is fitted on the other end of the rod. To finish the tool the rod is bent so that the blade angle to the work is tilted toward the handle about 10 deg. from vertical.
To use it is simplicity itself! One of the half-round holes in the blade is placed on the edge of the aluminum and the tool is pulled toward yourself, deburring both edges as it goes.
TUBING CLAMPS
By Thomas W. Martin, EAA 12149
Meeting Grove Lane, Norwalk, Conn.
When joining the two fuselage sides of a welded steel structure, the following idea, which I found very useful, might be helpful to others.
The tubing will stay right where you want it if you employ Stanley 404 picture frame clamps. These are made of cast metal, and will hold a true 90 deg. angle for welding or tacking.
Trimming Windshields
By P. Richard Coughlin, EAA 7333
109 W. Seneca Turnpike, Syracuse 5, N.Y.
HERE'S AN IDEA on how to apply a very neat, attractive edge moulding to the plastic windshield of an open-cockpit plane. There is on the market a product called "Silvatrim", a plastic channel material of "U" shape, or, to be more accurate, of pear shape as the upper ends of the "U" are tapered down to fair into the surface to which the stripping is applied. It is made by Glass Laboratories, 883 65th St., Brooklyn 20, N.Y., and sold through auto supply shops in strip and roll form. It is supposed to be applied to the rear edges of auto doors for trimming, and for this use it has a chrome-plated plastic exterior surface of good appearance. The inside of the groove is coated with pressure-sensitive material which holds tighter with the passage of time. It is very inexpensive, is easy to form, and readily cut. It will bend to any curvature down to about six inches radius. When applied to the edge of an airplane windshield it looks just like a tailor-made metal edging. It is quite weather resistant and, besides looking well, it no doubt increases the resistance to cracking of a plastic windshield.
BRONZE WOOL RECOMMENDED
If steel wool is used in maintenance work inside an airplane, tiny steel particles dropped from it will eventually cause rust spots on surfaces upon which they fall. Boat yards use bronze wool, available from marine supply houses, to avoid that trouble. Non-magnetic, too, and particles won't be picked up by charged electrical components — fine for use when overhauling magnetos, etc.
NAIL SCREEN
By John W. Grega, EAA 3808
355 Grand Blvd., Bedford, Ohio
The most troublesome operation in nailing gussets, or skinning a wing, is picking up the small nails with a magnetic tack hammer.
With this set-up, all the nails can be picked up right the first time, as all the heads of the nails will be facing up. There will be no need to position the nail on the hammer head so it can be driven straight.
A frame can be assembled out of 1 in. by 2 in. stock, about 12 in. square, and ordinary household screen tacked or stapled to it.
Ready-made aluminum framed screen can be used, providing it is of a convenient size.
The nails are strewn across the screen, the screen is picked up in both hands, and moved sharply sideways, back and forth, until all the nails fall in the screen openings, points down.
The frame is set on the work bench, ready for use. The magnetic end of the hammer is touched to the nail head, the nail picked up and driven home through the gusset.
The frame, with the nails on it, can be stored after the job is completed, and when the next job is started, the frame will be ready for use.
ATTACHMENT OF PLEXIGLASS TO WINDSHIELD FRAME
By Paul Stadler, EAA 7463
1459 Acheson, San Diego 11, Calif.
If anyone has ever worked with plexiglass for their windshield and canopy, they will know how cumbersome it can be to attach it to the frame. The holes are usually large enough to allow for the rubber bushings, yet too close to the edge to avoid cracks. The bolts are usually too tight to allow for expansion and contraction, and yet you want a tight cabin with as little noise as possible. Why not try this next time?
Drill a series of ⅛ in. holes around the edge, spaced 1½ in. apart, and dull the edges of the holes with fine sandpaper, working also between the holes and the outer edge of the plexiglass, both inside and out. Lace with 1 in. fiberglass tape, but keep it loose. Then mix a small amount of epoxy-resin, and paint the plexiglass and the tape, beginning at the first hole, drawing it up tight, and then proceeding to each following hole and doing the same. Cover the lacing with a plastic wrapping, and clamp it down with a strip of spruce or plywood to flatten the fiberglass tape against the plexiglass, and let it set.
This method will insure a neat job, and a safe and trouble-free installation. A few sheet metal screws may be necessary to hold things in place during this operation, but they can be removed later.
---
REMOVING RIBS FROM THE JIG MADE EASY
By Stewart Steinberg, EAA 2128
1638 Churchill Rd., Sarnia, Ontario, Canada
While building the ribs of my homebuilt airplane, I was faced with the problem of getting the ribs out of the jig. I was having trouble with the glue that pressed out sticking to the jig board, so I tried this on the jig board and it worked very well.
When using full-size plans of a rib, to save a lot of time, and get your rib out of the jig easier and with less pressure applied on the rib, try this: Fasten your full-size rib print to your jig board with tacks, or Scotch tape, then cover the print with a medium-heavy clear plastic, and fasten it over the print with tacks or tape. Now you can put your holding blocks around the edge of your print to form your rib by using an Exacto knife. Cut the plastic and the print through to the wood, slightly smaller than the blocks, then put a drop of Bond-Fast on the blocks and nail them to the jig board. The blocks will help hold the print and the plastic tight to the jig board.
Now you can start to build your ribs. You will find that any glue that is pressed out of the joints cannot stick to the jig, due to the plastic cover under the rib. You will also find that your rib is easily removed from the jig.
PLATING PRECAUTIONS
By Charles Lasher, EAA 1419
1430 W. 29th St., Hialeah, Fla.
AMATEUR AIRCRAFT builders should be very cautious about chromium and cadmium plating. Seeing highly attractive plated parts on other airplanes, the temptation is to have similar parts of one's own airplane plated.
But, there's more to it than meets the eye! Non-structural parts, such as engine rocker arm covers, wheel hub caps, door handles and so on, can be plated by any commercial plating shop with no precautions other than what may be needed to obtain an attractive job.
Structural parts which are to be plated should be taken only to a shop which specializes in, and is equipped to do, industrial plating, as opposed to simple decorative plating. The kind of work coming under the industrial plating classification includes plating done to protect parts from corrosion, to increase the wear resistance of parts, to build parts up to certain dimensions, to repair old parts by building up worn spots, and so on.
The higher the grade of steel used for a part, the more important it is to have such an expert shop do the plating. Improper chemical content of plating solutions — and there are many kinds in use — and improper procedures in doing the plating will often drive hydrogen into the steel and make it brittle. Most of the hydrogen can be removed by heat treating, hence the importance of taking the work to a shop which understands such advanced plating processes and is equipped with ovens of suitable size to heat plated parts to 300 deg. F or more.
In general, don't plate structural parts just to make them look nice. If you must plate, pick an ethical shop and make sure they know that they are plating aircraft parts. Be cautious with steel parts such as chrome moly and anything harder. Never replate hard steel items such as streamlined wires, bolts, bearings, AN hardware, rocker arms, etc. If for any reason plating of such items seems essential, consult real experts first.
Economical Paint Pot
By Alvin E. Johnson, EAA 6599
R. D. I, Box 276, Oxford, Pa.
RECENTLY, WHILE looking around in a local A & E shop, I saw this money and time saving idea. Everyone who has done a large paint or dope job knows that refilling a quart spray gun cup is very bothersome, and the cost of a big pressure pot is certainly out of the question, especially if it would be seldom used.
All that is needed for this economical paint pot is a clean 5 gal. dope can, and 15 or 20 ft. of paint hose from Sears and Roebuck. The local hardware or auto parts store should have a pipe fitting that can be soldered near the bottom of the dope can. Cut the top out of the can for ease of filling and ventilation. Connect the hoses, strain the dope into the can, hang it up and you are ready to spray. Let the law of gravity work for you, and save you time and money.
Wing Leading Edge Clamp
PLYWOOD TO SPRUCE
By F/L J. E. Riley, EAA 7118
RCAF Station Vancouver
Richmond, British Columbia, Canada
YOU WILL immediately recognize by the photo that I employed Gene Slade's method of nailing down the plywood skin. To review his method, the skin was positioned on the main spar by driving two nails, one at each end, through the skin into the spar and then the nail heads were cut off. The skin was fitted to close tolerance at the leading edge and the root area, and then the skin was lifted free of the positioning nails in the spar. The wing framework was then coated with glue, the skin repositioned, and nailed down with pre-nailed wooden strips. Sequence of nailing was first along the main spar; down the center rib from spar to trailing edge, and progressively each rib until the aft section of the wing was completed, and then the forward section ribs from the spar to leading edge were nailed.
Instead of nailing at the leading edge, I employed a custom clamp which was easy and inexpensive to con-
(Continued next page)
WING LEADING EDGE CLAMP . . .
(Continued from preceding page)
struct and simple to apply. The resulting mating of the plywood to the spruce leading edge was clean, tight, and perfectly smooth.
Actually, the clamp was born of necessity because I had only myself to skin the wing, and except for help from my wife in applying the adhesive to the frame, I put on all four skins without help. (I used epoxy and worked at 60 deg. F, which gave me more than two hours of working time.)
The accompanying sketches, I hope, will provide sufficient explanation to construct the clamp. I made mine in two lengths for ease in handling. The clamping at the leading edge was the last function of the skinning process. Wax paper was placed between the clamp and the skin to prevent adhesion of skin to clamp.
INEXPENSIVE
ENGINE OVERHAUL STAND
By C. E. Bombardier EAA 9398
4539 N. 49th Ave.
Phoenix, Ariz.
An inexpensive engine overhaul stand can be made from an old or damaged metal propeller that cannot be economically repaired. These propellers are usually badly bent. Therefore, heat the blades enough to roughly straighten them out by hammering or prying. The pitch angle can be taken out at this time if desired. However, it isn't necessary.
The propeller can be bolted to a rolling engine stand made from scrap angle iron, or the propeller can be bolted to a bench or between two benches, whichever you like. I prefer the rolling stand because it can be taken directly to the airplane so that the engine can be removed and placed immediately on the stand for disassembly. If the pitch angle hasn't been removed, wooden wedges will have to be made so that the propeller will lay flat. Drill two holes through each blade and fasten to the bench or rolling stand. Steel bolts of the proper length and either ⅜ in. or ½ in. diameter will do well.
Various engines use different propellers with differing hubs. I'm sure that a visit to your nearest propeller overhaul shop will net you an old propeller bent beyond repair, and one that will fit the bill.
ACCURATE DRILL GUIDE
By Joe Kirk, EAA 2023
3405 Harrington, Rockford, Ill.
"PUNCH-POINTER" MAKE FROM DRILL ROD 1 FOR EACH HOLE.
30° INCLUDED ANGLE
"DRILL GUIDE" MAKE FROM 1" SQ. COLD ROLLED STEEL — FOR LONG LIFE MAKE FROM TOOL STEEL & HARDEN.
NOTE:
BORE HOLES ACCURATELY ON DRILL PRESS OR VERTICAL MILL.
CENTER LINES MARKED ON WORK PIECE.
1. PRE-PUNCH WITH PUNCH-POINTER.
2. SLIP PUNCH-POINTER THRU DRILL GUIDE & LOCATE POINTER IN PRE-PUNCHED HOLE
3. CLAMP DRILL GUIDE IN PLACE
4. REMOVE POINTER & DRILL HOLE WITH ELECTRIC HAND DRILL OR BRACE.
DRILL GUIDE FOR DRILLING SPARS, ETC. WHERE ACCURATELY LINED UP HOLES ARE REQUIRED THRU THICK MATERIAL & IN REMOTE ASSEMBLIES.
Safety Alert
U.S. Civil Aviation
FROST
Frost does not change the basic aerodynamic shape of the wing, but the roughness of its surface spoils the smooth flow of air, thus slowing the airflow. This slowing causes early airflow separation over the affected airfoil, resulting in a loss of lift and early wing stall.
REMEMBER
A heavy coat of hard frost will cause a 5 to 10 percent increase in stall speed.
An airplane with frost may not become airborne at the normal take-off speed because of premature stalling.
It is also possible, once airborne, that the aircraft could have insufficient margin of airspeed above stall that moderate gusts or turning flight could produce incipient or complete stalling.
Remove All Frost From Wings Before Take-Off
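The 5 to 10 percent figure translates into a concrete speed penalty. The sketch below assumes a hypothetical 50-mph clean stall speed purely for illustration:

```python
# Effect of a heavy coat of hard frost on stall speed, using the
# 5 to 10 percent increase quoted above.
# The 50-mph clean stall speed is an assumed example figure.
clean_stall_mph = 50.0

for increase in (0.05, 0.10):
    frosted = clean_stall_mph * (1 + increase)
    print(f"{increase:.0%} increase: stall speed rises from "
          f"{clean_stall_mph:.0f} to {frosted:.1f} mph")
```

A normal rotation speed that clears the clean stall by only a few mph may thus fall at or below the frosted stall speed, which is why the aircraft may fail to become airborne.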
Jig for Wing Attach Fittings
By Dick Albrecht, EAA 11105
52 Eucalyptus Road
Annapolis, Md.
I thought and thought about what kind of a jig to use, to put the wing attaching plates on the fuselage of the Miniplane that I am building. I finally came up with the following:
I took two pieces of one inch angle iron, cut and drilled them to the right size required, and then, after cutting the heads off of four 5/16 inch bolts, welded them in the corners. I welded the bushings on the inside of the wing attaching plates and then cut and notched them until they fit in the right position over the bottom longerons. Then I cut two more pieces of one inch angle iron to the right size required and drilled them so that I could bolt them to the other two pieces of angle iron. I put the attaching plates on the 5/16 bolts and snugged them down with nuts, and then put the whole jig on the fuselage. After leveling the fuselage I used a level on the top two pieces of angle iron and a protractor on the two side pieces of angle iron. (The plans called for 2 degrees from the top longeron.) When everything was in place I tack welded the plates, then boxed them in, welded everything up and cut out the jig with a hack saw.
Attached is a drawing of the jig. It worked fine for me.
SCARFING JIG
By Harris G. Hanson, EAA 12204
Fort Nelson, British Columbia, Canada
As a metal worker of 25 years experience, I felt some misgivings in tackling the all-wood Jodel D-11. My chief worry was the scarfing of plywood. It seemed that a vertical sander was the logical solution, but how to hold the large flimsy sheets was the real question. The jig used is simple and can be varied to suit available equipment. A sander plate on any bench saw would be satisfactory. Some sort of screw feed inward, on the fence, is desirable so that light cuts can be taken and progress noted. A coarse open grit paper was found to be the best.
Everything was tried to hold the plywood on the jig, but it wasn't until Best Test Rubber Paper Cement, made by Union Rubber and Asbestos Co. of Trenton, N.J. was found that the process became practical. This adhesive adheres but does not penetrate wood fibers. It never dries and is easily removed by rubbing with the fingertips. Coat both the jig and plywood surface. One coating on the jig will do for several pieces. The finished work is easily removed with a putty knife. A few nails can be used where the plywood has a tendency to bulge out. The heads grind off with the wood and the nails pull through when removing the sheet. Liberally wetting the edge to be scarfed, after gluing to the jig, stops splintering of the feather edge. Similarly, it helps in sawing plywood to wet the bottom ply along the saw line. The rubber cement is also excellent for holding the paper disc on the sander. When using a transparent glue like Aerolite 300, the scarf joints produced by this method are almost invisible.
FABRIC TENSION TESTER
By Samuel R. Bigony
994 Hale St., Pottstown, Pa.
Engaged in the restoration of a vintage aircraft for the past several years, I designed a fabric tension tester for use when covering with grade A cotton or Ceconite fabric. By using this tester, every section of the wing, fuselage or control surfaces can be brought up to the same tension. The springs can be purchased at any good hardware or automotive supply store.
ATTACHING CABANES
By H. P. Whittaker
1243 Popular Ave., S.W.
Canton, Ohio 44710
After leveling your fuselage or aircraft for the purpose of attaching cabanes or for weight and balance check, it is handy to weld a tab on an exposed longeron and drop a plumb bob to a plate and center punch mark. Later this can be used for weight and balance check after the aircraft is completed.
AN INEXPENSIVE LEAKPROOF FUEL GAUGE FOR INVERTED FLYING
By Ross E. Diehl, EAA 8142
4743 E. Ave. R-12, Palmdale, Calif.
I NEEDED A LEAKPROOF fuel gauge for my "Miniplane" that could be used for inverted flying and, since I had heard that some light aircraft use a Model "A" fuel gauge unit, I purchased one from the local salvage yard and proceeded to modify it.
Items 8, 10, 11, 12, 13 and 14 shown in the accompanying figure, were not changed as they were in good condition.
The back plate and fuel gauge housing were riveted to the fuel tank skin since access to the back plate was not possible.
After modifying it as shown, I mounted it on the lower left side of the aft vertical panel of the tank. It proved to be very effective and leakproof.
VARNISHING
Ralph R. Driscoll, EAA 10742
2699 Fruitland Blvd. S.W., Cedar Rapids, Iowa
While varnishing between gussets on ribs, forcing a brush into this ¼ in. space broke the bristles and cut them off the brush. I now use a folded, twisted pipe cleaner, dipped into the varnish and worked well between the gussets. I also find this helpful in other inaccessible places and for saturating the drilled fitting holes in wood. Work the varnish in thoroughly with a circular motion and, when it has become tacky, use the bit used to drill the hole, turned counterclockwise, to remove the excess varnish and maintain hole diameter.
In forming this, start the fold at 2½ in. from one end. Leave about ½ in. flat at the folded end, then start twisting. The ½ in. flat part serves as an eye and picks up a good supply of varnish. When completed, bend the single strand to a hook. Hook over the edge of varnish container, and it is at hand when needed.
IMPROVING AEROBATIC MOVIES
When taking movies of aerobatic flying against a cloudless sky, if the plane is kept in the center of the view finder there is no background to give a sense of speed and motion. For some of your shots hold the camera still and let the plane move across the view finder. The resulting movies will have a better sense of speed.
GETTING IN THE LITTLE WOMAN'S HAIR
By F. Wiedenmeier, EAA 10009
Palmer Lake, Colo.
You may say that's easy, just start building a plane. I'll agree, but I think I've found a rather painless way. That is, if you don't steal hers, but go to the drug store and buy your own. I'm talking about those little urethane hair curler rolls. Mine are 1½ in. in diameter by 2 in. in length, with a ¼ in. bore, but I have seen some smaller.
When using epoxy, I've lost quite a few brushes as the pot life comes fast to an end. Cleaning is also a messy problem.
One day "The Little Gal" brings home a sack of these little gems and I stole one without getting caught. By bending up a handle, as in the photo, I had the slickest little glue roller you ever saw! It can be done in any width to match the job. When finished, slip it off the handle and toss it in the scrap box, because it has cost less than a dime for a 2 in. roller or a nickel for the 1 in. job, and performed a fast, even job. It will work as well on other glues or for painting in a hard-to-get-at location.
BENDING WORK MADE EASY
By Edgar C. Smith
Secretary-Treasurer, Chapter 47
Below you will find an inexpensive, easily made, and most useful tool for anyone who has fittings to fabricate in sheet metal. It is not original with me, but was excerpted from the November, 1960 issue of Mill and Factory. I claim no credit, but feel that it is of such universal value that it might perhaps be published in the "mag" as a bonus item, and should definitely be included in any compilation of hints and tools for the aircraft workshop.
Tool makers, mechanics, and maintenance men are frequently called upon to make some brackets, etc., from sheet metal or band iron.
To assist in this work a New York reader made up a bending tool for use in a bench vise. It is made from angles and channels as shown. He used 1 x 1 x ⅝ and 1½ x 1½ x 3/16 and 3 in. channels.
These sizes match up very well as shown. The angles are secured to the channels by welding inside the angles at each end.
MARKING METAL
By Duane Sunderland
5 Griffin Dr., Apalachin, N.Y.
The single practice which has been the biggest help to me in metal working has been the use of a good marking pencil. Since I find that many homebuilders are not familiar with this technique, I am sure it would be a valuable tip to be published in SPORT AVIATION.
For marking tubing and for all layout work on steel, I use a silver pencil. This pencil leaves a very clear mark and can even be seen when the metal is being tack welded and it doesn't blow away as soapstone does. It, of course, makes no scratches on the surface being marked. It marks well even on oiled surfaces. The pencil is made by Eagle and is designated "CHEMI-SEALED" VERITHIN Silver 753. It is commonly used for marking blueprints.
EXPERIMENTS WITH VARNISH
By Edward M. Sampson, EAA 13365
Box 38, Belview, Minn.
The article on covering wood surfaces, which appeared in the December, 1963 SPORT AVIATION, has aroused my interest. As of now, I haven't done any covering on my "Fly Baby" project, but have experimented with the effects of dope on the various types of varnish.
I have found that Gilt Edge No. 1000 Spar Supreme marine varnish, manufactured by Farwell Osmun Kirk and Co. of St. Paul, Minn., is impervious to both nitrate and butyrate dope. No lift of the varnish occurs, and there is no penetration. This varnish is a high grade polyurethane type formulation.
"C" CLAMPS MADE OF OAK
By Larry Hawes, EAA 10319
149-06 85 Rd., Jamaica 35, N.Y.
Wanting clamps to add pressure while nail gluing ribs and not wanting to buy so many "C" clamps I cut pieces
of oak and made the clamps as shown in the sketch. They do all that is needed and are very inexpensive.
The size shown can be varied to suit any other need. For a wider span the size of the wood pieces should be larger and the running thread of greater diameter.
The pressure developed can easily be measured directly on a scale. For some of us steel may be more easily obtainable, but the use of the running thread is the prime idea.
With the use of filler blocks these clamps are equally efficient on other than flat surfaces.
When using wood it is important to drill the holes WITH the grain. Have fun.
MANUFACTURING "C" CLAMPS
By Stan Olive, EAA 12697
Sub Office—90 Union St., Saint John, N.B. Canada
Recently I stumbled on a very effective and inexpensive method of manufacturing "C" clamps, which may be of interest to other members engaged in the construction of a wooden aircraft. Simply cut a "C" shape out of 3/4 in. plywood. Length and thickness of tangs can be varied to suit the gluing job, and thickness of backing blocks and/or wedges will decide the adjustment for size. A local wood working factory uses these by the hundreds in the manufacture of slab doors, and reports wonderful results. I think you will agree that the possibilities of such a simple gadget are almost unlimited.
RIVET HOLE FINDER
By Fred W. Luddeke
P. O. Box 36, Chickasaw, Ala.
The hole finder idea is really very simple, but I have not seen it mentioned previously, so here it is: I have been using this type for over 20 years and have found it to be the most foolproof one that can be used in almost any location except where there is a very tight radius. Even then it can be used by using a smaller size drill bit and pulling the hole to the lower side of the drill guide. Practice will teach you how to do this. Wider strips should be used if the length is to exceed 14 inches as they tend to lose proper alignment when made too long and too thin.
PRESSURE GLUING METHOD
By L. J. Weishaar, EAA 9250
1924 N. 6th St., Springfield, Ill.
Here's a trick which may not be original but is, as far as I'm concerned, my own brainstorm.
The problem: Obtaining glue pressure over large, flat areas which are inaccessible to C-clamps. In my case it was the solid-core, ply-covered front fuselage bulkhead on the "Turbulent."
The solution: Cut top and bottom caul boards from a suitably heavy plywood (the area to be covered determines the stiffness required). For each caul, cut a piece of newspaper to the same shape but slightly smaller in all dimensions. On top of these, fit pieces of newspaper cut to yet smaller dimensions but still roughly the same shape. Continue building this paper "contour model" toward the center of the area with pieces of paper of decreasing sizes. As you work toward the center, the corners should be more and more rounded so that if, for instance, you start with a rectangle, the smallest center piece would be an oval. The whole thing looks like a symmetrical high pressure area on a weather map. Fasten the paper hill to the caul with an "X" of masking tape.
With the sandwich to be glued assembled between the paper-covered cauls, mount C-clamps around the periphery of the cauls, tighten securely and there you are!
Before the actual gluing is attempted, the pressure developed should be tested by "feel" by assembling the cauls face-to-face and noting the clamp pressures required to close the edge gaps. If it doesn't seem to be sufficient, either make the contour lines closer together or double the thickness of each contour.
I bored a few lightening holes in the solid core and, by gluing one side at a time, was able to note by the glue squeeze-out on the first side that a good joint was obtained over the whole area.
ASH WOOD AND VOLKSWAGEN CARBURETORS
By Capt. William E. Brown, EAA 10669
R. D. 4, Athens, Ohio
If some of the members have had difficulty, as I have had, in locating ash lumber of aircraft quality, this knowledge might be of help. Most wood specialty houses stock baseball bat blanks, 2½ in. by 2½ in. by 38 in., of very high quality ash. The Craftsman Wood Service Co. of 2725 S. Mary St. in Chicago 8, Ill., has them for $1.25 each.
Secondly, this information from the "Volkswagen Handbook" published by HOT ROD magazine may be of interest: "The VW Solex carb, when run without an air cleaner, gives an extremely lean mixture. It is necessary to increase the main jet size about two sizes larger to correct this." This might save some burned valves on some of the conversions using the Solex carb. The J. C. Whitney Co. sells an adjustable air-metering jet for the Solex which could be used to enrich the mixture without changing the main jet.
For The Birds
By Milt Colden, EAA 1855
Clintonville, Wis.
I have a tip that is a prize winner, but will not win me a prize as it is not a construction tip. However, it must be published as it is a cure-all for one of the aircraft owner's worst problems, namely, Birds and Bird-Dirt. The idea is so simple that one just couldn't think of it and the idea was given to me by an old German lady who used it in her garden. Take an old piece of fur from a coat collar or anything like it and tack it around the top and two sides of a piece of 2x2 about 8 in. long, leaving about 6 in. hanging from the end of it to resemble a tail. Put one or more of these in the top area of your hangar and you will never again have Bird trouble. About three years ago a doctor friend of mine with a nice new Comanche had Bird trouble so I mentioned this to him and he tried it out with two such "Cats" nailed to his rafters. Not a bird dropping since. Two months ago I did the same with two such "Cats" and I haven't had a dropping since on my Cyclops. So if you owners are having trouble just give it a try. I have also been informed by the old gal that a stuffed owl will give the same effect.
TRAILING EDGE MATERIAL MANUFACTURE
By J. E. Riley, EAA 7118
1 Mills Crescent, Saskatoon, Saskatchewan, Canada
For anyone having difficulty locating trailing edge material for control surfaces as used in the "Tailwind", the following suggestion may prove helpful:
First, purchase 4130 steel square tubing of the required gauge and size to conform to the width of the trailing edge required. In the case of the "Tailwind", I used 3/8 in. by .035 in. Next, rip the tubing diagonally (use a band saw or rip saw), thus giving two trailing edges from the one lineal measure. Now dress the edges (either by grinding or filing) to the desired width; 3/16 in. for the "Tailwind." Using a brake or jig in a vise, bend the one side over to the required angle. There is no danger of a crack forming, because the tubing already contains the correct bending radius.
WELDING TABLE FROM BARBECUE
By Chet Klier
1014 Prosperity Ave., St. Paul, Minn.
This handy and economical welding table can easily be constructed from an ordinary home barbecue. Moist sand is placed in the fire pan and then spread to give an even, smooth surface. The firebrick are then placed on top of the sand bed and fitted into place. The bricks are easily shaped with hammer and chisel to fit the round edges of the fire pan. (All bricks should be numbered so that they can be easily replaced after the barbecue has been repossessed for outdoor cooking).
A brazier hood will provide a wind screen and spark shield while welding small fittings. Hot pieces can be placed on the top of the hood to cool after welding. Drawings show detail of the welding table.
SAFETY ALERT
U.S. General Aviation
CARBURETOR ICE
Vaporization of fuel in the carburetor will lower the carburetor air temperature as much as 60 deg. F. With moisture in the air, ice will form in the carburetor when the temperature reaches the freezing range. This condition could result in a critical loss of power.
An outside air temperature range of 40 deg. F. to 60 deg. F. is most conducive to carburetor ice; however, it can occur at temperatures as high as 90 deg. F. Conditions are more critical when operating at reduced throttle settings. Float type carburetors are more prone to icing than the pressure type. The temperature range and the degree of icing depend upon the carburetor design and installation.
REMEMBER
Be aware of the possibility of icing under varying operating conditions and use the recommended procedures for safe operation.
CIVIL AERONAUTICS BOARD
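The figures in the alert above reduce to simple subtraction; here is a minimal Python sketch, using only the alert's own numbers (a drop of up to 60 deg. F from fuel vaporization, icing once the carburetor air reaches the 32 deg. F freezing range). The function name and the one-line model are this sketch's own, not the Board's.

```python
# Rough model of the carburetor-ice figures quoted in the alert above.
# The 60 deg. F venturi drop and the 32 deg. F threshold come from the
# text; the subtraction model itself is only an illustration.

MAX_VENTURI_DROP_F = 60.0  # max cooling from fuel vaporization, per the alert
FREEZING_F = 32.0

def carb_ice_possible(outside_air_temp_f):
    """True if the carburetor air could be cooled into the freezing range."""
    coldest_carb_air_f = outside_air_temp_f - MAX_VENTURI_DROP_F
    return coldest_carb_air_f <= FREEZING_F

print(carb_ice_possible(70.0))  # True: 70 - 60 = 10 deg. F
print(carb_ice_possible(90.0))  # True: 90 - 60 = 30 deg. F, still freezing
print(carb_ice_possible(95.0))  # False: 95 - 60 = 35 deg. F
```

Note that the second case agrees with the alert's warning that ice can occur at outside temperatures as high as 90 deg. F when moisture is present.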
Cut Metal With A Band Saw
Hands ache from using a hack saw to make the many cuts involved in making sheet steel aircraft fittings? Then send $1.00 to Rockwell Manufacturing Co., Pittsburgh 8, Pa., for a copy of their booklet, "Getting the Most Out of Your Band Saw and Scroll Saw." It contains a rather detailed chapter on the subject of using common home workshop wood-cutting type band saws to cut sheet metal stock and tubing quickly and neatly. Special metal-cutting blades are readily available from hardware supply houses to do the work, and they cost only a trifle more than common wood cutting blades. Soft grades of aluminum can even be cut with sharp wood-cutting blades. It is possible to bolt together several sheets of metal and saw out a number of identical fittings at once. By all means, get this publication if you are doing much work with sheet steel, tubing and aluminum.
Handling Small Nails
By Henry C. Foster, EAA 10200
Box 34, Wexford, Pa.
Here is a tip that is particularly helpful in the building of wooden wing ribs or any other construction which requires the use of one-quarter or three-eighths inch long aircraft nails. Any craftsman who has worked with extremely short nails encounters the problem of holding the nail upright prior to the first blow of the hammer. It is virtually impossible to hold an 18, 19 or 20 gauge nail one-quarter inch long with the fingers without those fingers taking the brunt of the hammer blow intended for the nail. It has been common practice to use a pair of long-nosed pliers to pick up the nail and hold it in place for the initial blow of the hammer. An alternate method, quicker and easier than the use of pliers, is the use of a small magnetic screw driver generally available in any hardware store. This type of screw driver with a magnetic tip enables the craftsman to pick up the nail and hold it in place for hammering. With just a little practice it is possible to make considerably better time than any other method that might be used to hold the extremely small nails used in substantial quantities in the construction of wooden aircraft components.
Rosette Welds
By Robert A. White, EAA 10484
609 N. Lindberg, Griffith, Ind.
There are a number of things that the more experienced builders take for granted, which the amateur has to find out the hard way. Some of the things just aren't in the books, if you have the books.
For instance . . . I found that my drawings called for rosette welds on tubes with zero clearance between the two tubes. It is very hard to get the inner tube hot enough to weld without burning the outer tube. I also found that if a hole is drilled in the inner tube, about half the diameter of the hole in the outer tube, the heat will penetrate the inner tube more readily and it is a simple matter to fill. I might say that where there is no clearance between the tubes, CAM-18 states that the rosette is not necessary at all, depending upon the type of joint.
Aileron Locks
By Raymond L. Shamblem, EAA 10195
225 Viking Rd., Charleston, W. Va.
To make these simple and inexpensive aileron locks, a piece of sheet metal is marked to fit the aileron slot in the wing. Mark off a 1½ in. margin on the top and bottom, then mark off these lines in one inch increments. Drill or punch a ¼ in. hole at each one inch mark.
Cut from the edge to each hole, so that a series of tongues will result. Bend each tongue 90 degrees, alternating to the right and the left. Slip a piece of sponge rubber tubing on each tongue to protect the wing. This lock slips into place easily, and the sponge rubber tubing keeps it in place. The locks depicted in the picture were made for a Schweizer 1-26 sailplane.
Zinc-Chromate Tip
By Alan Zingelmann, EAA 1256
3555 Madrid Dr., Westerville, Ohio
In warm weather, zinc-chromate primer that has been reduced with thinner will soon turn to a jelly-like consistency that cannot be re-mixed. Should you find yourself with an unused quantity of reduced chromate on hand, you may store it for several months in a screw top container (instant coffee jar, etc.) in your refrigerator. Although chromate and thinner will separate, a brisk shaking will make it ready for use.
APPLYING ALUMINUM LEADING EDGE
By Charles T. Vogelsong, EAA 10199
R. 3, Dillsburg, Pa.
I have seen many hints on applying leading edge aluminum. I had the job to do on my "Flut-R-Bug" and felt that there must be a simpler way without special jigs, clamps, tools, etc. I found that all that is needed are some pieces of hardwood strips, ¾ in. by 2 in. by 8 in., a supply of used baler twine that any farmer will give you for free, and some scrap blocks. You will need as many strips and pieces of twine as ribs over which the leading edge is placed. Slots are sawed in the strips about 1½ in. deep on each end, just wide enough to slip the twine into, and not so wide as to let a knot in the twine through. The distance between the slots should be about the same as the maximum thickness of the rib over which the leading edge fastens. If the aluminum is hard, some may wish to pre-shape it between two pieces of 2 by 4's, using a piece of pipe as a press between the 2 by 4's. The aluminum is placed over the rib ends and can be tacked, if desired, along one of the edges to hold it in the proper position. Make a knot in one end of the piece of baler twine and slip it into one of the slots of the strips. Place the twine about ¼ in. on either side of the rib. (This will allow clearance for nailing into the ribs).
The slotted strip is placed behind the rear spar, the twine is run over the aluminum and back through the slot in the other end of the strip. A second knot is placed in the twine after it passes through the second slot. I placed a piece of ¾ in. scrap plywood between the strip and the spar to prevent damage. Now by slanting the strip in reference to the spar, a block can be placed between it and the spar. The necessary leverage for tightening is obtained by straightening the strip. Different thicknesses of blocks will give correct tightness. (You obtain terrific pressure in this manner!).
This will snug the aluminum right up to the rib. One of these jigs is used at each rib. The complete leading edge can be positioned to satisfaction before any permanent nailing is done. The illustration shows the steps for tightening. The twine will slide easily on the aluminum but will not mar it if it is the hard kind. A wider strap can be placed under the twine if it is feared it will crease the aluminum. The main advantage is that the leading edge can be strapped into its exact position before using a nail. Readjustment of the strings can be made to work out any irregularities before nailing. After the leading edge is nailed, just release the strips, slip the knots out of their slots, and you are ready for the next piece of aluminum. Results were perfect for my installation; expenses for equipment exactly nothing!
---
RIB NOSE SECTION SLOTS
By Don Simons, EAA 9191
163 New York Ave., Youngstown, Ohio
To cut neat slots in the nose section of a rib, make a metal pattern from any suitable material (aluminum, terneplate, tin, etc.), to slip over the front of the rib. Bend the tabs down to get the correct position, and provide a slot to allow marking the intended slot to be cut out.
After marking the ribs to show the intended cut-out, make a disc from plywood of the correct thickness. We used a disc about 6 in. in diameter. Glue medium-grade sandpaper to the edge of the disc and mount it on any ¾ hp motor. Clamp the motor firmly at the end of a bench, so that the ribs can be pushed straight into the revolving disc.
We had to change sandpaper three times to finish two sets of "Miniplane" ribs. The total time spent making the marking pattern, disc and set-up to do the job . . . 30 minutes. Spoilage . . . zero!
Winter Air Vent Covers
By Edward B. Price, EAA 21541
19 Orton Rd., West Caldwell, N.J.
When installing the new windshield on my 1946 Taylorcraft BC-12D, which I had just repainted and reassembled, I also installed "snap-vents" for ventilation. Now that cold weather is here, I find that too much cold air comes in for the cabin heat to overcome, so I found an answer which I would like to pass along.
The "snap-vent" is removed from the 2 in. diameter hole and a small plastic disposable beaker, manufactured by Econo-Lab, is pushed into the hole until it is tight. The excess on the inside is removed with a sharp knife or razor blade. Approximately 1 in. of height was about right. The beakers are tapered and will fit any size near the 2 in. diameter hole in which the "snap-vent" is installed.
These plastic beakers are cheap. I also use one on the fuel tank to keep rain out when the ship is tied down. There is more drag from the plug than there is from the vent, and it can be easily removed when the vent is required.
DECIDING COLOR SCHEME AND TRIM
By Alan Zingelmann, EAA 1256
3555 Madrid Dr., Westerville, Ohio
Undecided on the color scheme and trim for your airplane? An economical method that will let you experiment with many ideas is to mark the fuselage, etc. with contrasting colored tape (¼ in. by 1 in. strips of black tape on silver dope, 3 to 6 in. apart) to provide a scaled reference. Then take a "Brownie" photograph to include the full length, from a position at right angles to the center of the object, (this will minimize scale distortion). Be sure to fill your negative with only the structure that is part of the problem. Take this to the drug store or photo shop and ask for an 8 in. by 10 in. matte finish enlargement; then you have a scaled view that may be drawn upon with pencil, ink, etc. When your art satisfies, you may take the full scale dimensions of your color scheme directly from the reference marks the tape provides on the photograph. An 8 in. by 10 in. photograph of a "Cub" size fuselage will enable you to read one inch increments with dividers. Picture quality is not very important; if the scale marks and outline of major details show, you will have good results for a cost of $2.00 or $3.00.
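The conversion the tape marks make possible is a single ratio: the known full-scale spacing of the marks against their spacing on the print. A minimal sketch of the arithmetic follows; the function name and the specific measurements in the example are hypothetical, not from the article.

```python
# Converting a distance drawn on the enlargement back to full scale,
# using the known spacing of the tape reference marks. The specific
# numbers below are hypothetical examples.

def photo_to_full_scale(measured_on_print_in, mark_spacing_on_print_in,
                        mark_spacing_actual_in):
    """Scale a distance measured on the photograph up to real inches."""
    scale = mark_spacing_actual_in / mark_spacing_on_print_in
    return measured_on_print_in * scale

# Tape marks placed 6 in. apart on the fuselage appear 0.25 in. apart
# on the 8 in. by 10 in. print, so the scale is 24:1. A 1.5 in. stripe
# drawn on the print then lays out as 36 in. on the airplane.
print(photo_to_full_scale(1.5, 0.25, 6.0))  # 36.0
```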
A time saving method for penciling in locations for registration numbers, trim stripes, etc., is to select a horizontal reference that you wish your numbers, etc., to be parallel and at right angles to. Then support your structure so that the datum you have selected (stringer, longeron, skin joint) is made level. A parallel horizontal line can be drawn through any point measured from this datum, using a carpenter's 24 in. level.
A vertical line can be drawn through any point measured horizontally, using the level to plumb the line through the point. Mild skin curvatures, and stringer and longeron projections will not detract greatly from the accuracy of this method.
SANDING TOOL
By Hal Sanders, EAA 1109
4555 Finley Ave., Los Angeles, Calif.
If a sanding disc is not on hand for your table saw, a high-speed one is easily made:
1. Take an old saw blade of appropriate size for your table saw and grind off the teeth.
2. Trace the disc pattern on different grits of sandpaper and cut out the circles and center hole.
3. Use a commercial adhesive made for the purpose to glue the sandpaper to the ground-down blade, with a different grit on each side according to personal requirements.
4. The blade is then mounted conventionally, resulting in a high-speed sanding disc. The fence can be used as a guide for straight sanding if not too much material is to be removed.
TAPPING AND THREADING TIP
By Charles C. Putnam, EAA 2859
2659 Carleton Ave., Los Angeles, Calif.
The following is a method of tapping and threading hard or soft aluminum:
1. Use a sharp tap or die, preferably one that has not been used on steel.
2. Apply "Hinds" honey and almond hand cream, or a similar hand cream, to the tap or die and the piece of work. Flood both for best results.
3. Turn the tap or die in one direction only. Do not back off intermittently as with steel.
4. When threading, clean the flutes of the die often and apply more hand cream. When tapping, remove the tap as soon as the drag increases slightly more than normal, and clean the tap and hole and apply more hand cream. If the tap loads up too much, it damages the threads. I usually remove the tap two or three times while tapping the average depth hole.
I have used other types of hand cream with good results, but Hinds seemed to do the best work. I have also tried several different cutting oils and lubricants, but none of them worked as well as any of the hand creams. I believe that the reason hand cream works so well is because of the lanolin or glycerine content, or both.
BRAZING AND WELDING TIP
By Eugene W. Slade, EAA 768
111 Siesta Lane, Marietta, Ga.
While repairing a Go-Kart for a friend, I removed some old brazing in preparation for welding. Some of the braze remained even after filing, and caused the welds to crack down the center upon cooling.
It is suggested that brazing only be used where you are absolutely sure that welding will never be required, as the base metal may have to be cut out and replaced to obtain a high-strength joint.
Check all welding rods which you buy . . . a new brazing rod is on the market which looks like copper-coated weld rod. If even a small amount of brazing rod were to get into an important joint, it would surely fail at a later time due to the weakened weld. It is suggested that two pieces of scrap be brazed just to test this out. Then remove most of the braze, and weld the joint, paying particular attention to the way that the weld acts during welding. After welding, clamp one end in a vise, and you can break off the other piece with a hammer!
Bending Aluminum Sheet
By Rim Kaminskas, EAA 3476
482 Patrician Way, Monrovia, Calif.
Bending sheet aluminum for a leading edge of a wing always presents a problem. An easy way to accomplish this is illustrated here. Picture No. 1 shows all that is required... three boards and three clamps. Bend the aluminum sheet between the three boards as shown in picture No. 2. Then clamp it as shown in pictures No. 3 and No. 4. Remove the clamps and you will have a perfectly formed leading edge as shown in picture No. 5.
The radius at the bend may be controlled by the thickness of the board in the middle.
---
SPRAY "TENT" IN GARAGE
By David Mason, EAA 8828
I HAVE a suggestion for members who may have a problem of where to do their spraying of dope or enamel. I made a "tent" inside my garage of .004 in. polyethylene sheet, available at most builders' supply stores. Stringing some inexpensive clothes line cable across the garage about a foot below the ceiling, I hung the plastic film over it. Clothes pins hold the film and the end pieces in place.
The polyethylene costs about $15-$20 for a 100 foot roll, 25 to 30 feet wide. This "tent" keeps the overspray from getting on other items stored in the garage, and is easily removed when the painting is finished. Being nearly clear, the plastic allows light in from ceiling fixtures or windows, and extra lighting is not necessary.
---
SAFETY ALERT
U.S. GENERAL AVIATION
FUEL EXHAUSTION
Each year there are over 100 accidents as a result of fuel exhaustion. Ten percent of these are fatal. These accidents could have been prevented by proper pre-flight preparation and en route planning.
REMEMBER
CHECK YOUR FUEL SUPPLY PRIOR TO DEPARTURE, MONITOR THE RATE OF FUEL CONSUMPTION IN FLIGHT, AND PLAN TO ARRIVE AT YOUR DESTINATION WITH AN ADEQUATE FUEL RESERVE.
CIVIL AERONAUTICS BOARD
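The pre-flight planning the alert calls for amounts to comparing endurance against trip time plus a reserve. Here is a hedged sketch of that check; the 45-minute reserve and the sample fuel figures are this example's assumptions, since the Board asks only for an "adequate" reserve without naming one.

```python
# Endurance-versus-trip check in the spirit of the alert above.
# The 0.75 hr reserve and the sample numbers are assumptions, not
# figures from the Board.

def fuel_adequate(usable_gal, burn_gph, trip_hr, reserve_hr=0.75):
    """True if usable fuel covers the trip plus the chosen reserve."""
    endurance_hr = usable_gal / burn_gph
    return endurance_hr >= trip_hr + reserve_hr

print(fuel_adequate(24.0, 6.0, 3.0))  # True: 4.0 hr endurance vs 3.75 hr needed
print(fuel_adequate(24.0, 6.0, 3.5))  # False: only 4.0 hr against 4.25 hr needed
```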
A Sturdy Wing Stand
By Bill Ware, Jr., EAA 3328
422 Wesson St., El Dorado, Ark.
While the idea presented here certainly is not original, these wing stands will be found to be very stable, will protect the wings, and can be used for different wings regardless of variation in airfoil. There's no telling how many hangars and shops use similar wing sling stands.
Each stand consists of two uprights at least 3 ft. high. A length of upholsterer's webbing or a length of heavy canvas strap is tacked to both uprights so as to form a sling. The strap is installed so as not to touch the stand's base. It will adjust automatically to the contours of the wing's leading edge . . . again, regardless of what airfoil is used! The uprights are padded in any way possible . . . flattened fire hose, strips of old carpets or rugs, folded rags, etc.
The sling stand sketches contain arbitrary dimensions which can be varied to accommodate whatever scrap lumber is available. The builder may use nails, screws and glue as he sees fit. Materials for the sling straps and padding are left to the discretion and ingenuity of the builder.
Furniture-type caster wheels may be installed on the floor model "wing sling stand", making it possible to move both wing and stands across the hangar floor without removing the wing from the stands.
Knife Edging of a Paint Job
By Harrison P. Whittaker, EAA 1089
1243 Poplar Ave., SW., Canton, Ohio
To get that "knife edge" on a paint job when masking between two or more colors, brush a coat of clear dope along the edge of the masking tape before applying the color coat.
When the masking tape is removed, a "knife edge" will result between the two color areas.
The Versatile Abrasive Wheel
By Dr. Earl T. Johnson, EAA 16252
P. O. Box 367, Glendale, Oreg.
In the course of several construction kinks encountered in the building of my Jodel D-11, I have found that the grit abrasive wheel has many uses. It is particularly effective for beveling leading and trailing edges of spars, finishing rough cut spars or cap strips and as a general all-around replacement for a disc sander.
Since it has no kerf, an abrasive wheel may be much more accurately set with relatively little effort. It will produce a finished cut with one pass, will rip or cross saw like a blade and can be used in many other ways that a blade cannot. Called a "Karbo Grit Abrasive Wheel", it can be purchased from Sears and Roebuck in various sizes for about $6.00. I have not noticed mine wearing out, although I suppose that it eventually will.
However, so far it has given plenty of use with no maintenance.
A Lightweight Generator And Battery For A Lycoming Engine
By R. W. Riter, EAA 12838
Sky Harbor Airport, Northbrook, Ill.
THE LIGHTEST-WEIGHT generator and battery which I could find that could be used on the O-290-G was an Auto-lite GJG-4001M-6M generator that, I believe, was used on a Johnson outboard, and two 6-volt batteries from motorcycles. These batteries measured 1½ in. by 4¾ in. by 5 in., and weighed about 4¼ lbs. each.
These fit into my aluminum battery box which has inside dimensions of 3½ in. by 4¾ in. by 5 in.
The generator was used and in need of repair so, for $10.00, I took it home and investigated. It is a small compact unit, well constructed with ball bearings on both ends and made for high rpm. Rotation has to be changed, and the big job is to remove the brush brackets and turn them around so that the brush angle is correct with respect to the direction of rotation. Other than that, it will work.
I drilled two ¾ in. holes in the housing for blast cooling. I made a new commutator end bracket from aluminum, incorporating a mounting arm. The drive-end bracket is a bolt-on steel type with the mounting arm and provision for the adjustable arm. The bracket that bolts to the engine was made from steel and a 2½ in. aluminum pulley was used.
The engine drive pulley was made from an old starter gear-pulley combination with the gear and excess aluminum cut off on the lathe. However, this could be left on if a starter was to be used at any time.
The regulator is a 12V, 10 Amp. Autolite VBO-4201C-2. This set-up, giving up to 10 amps output, is adequate for a small radio, lights, etc., and would probably be all right for a starter if a large battery, solenoid and wiring were used.
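Whether the 10 amp output is "adequate for a small radio, lights, etc." is a load-summing exercise. The sketch below shows the budget check; only the 10 amp generator rating comes from the text, while the individual load currents and the 1 amp charging margin are invented for illustration.

```python
# Electrical load budget against the 10 amp generator output mentioned
# above. The individual load currents below are hypothetical examples.

GENERATOR_AMPS = 10.0  # the Autolite unit's output, per the article

def within_budget(loads_amps, margin_amps=1.0):
    """True if the summed loads still leave the chosen margin for charging."""
    return sum(loads_amps) + margin_amps <= GENERATOR_AMPS

hypothetical_loads = {"radio": 2.0, "nav lights": 3.0, "instruments": 1.5}
print(within_budget(hypothetical_loads.values()))  # True: 6.5 + 1.0 <= 10
```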
Repairing Fabric Covered Aircraft
By Orville Lippert, EAA 9159
9595 N.E. County Line Rd., Riverdale, Mich.
I WILL TRY to cover a few points on the repairing of fabric covered aircraft.
We will assume, to start off with, that your one-and-only has been the recipient of a beautiful enamel paint job. Excellent weather protection, good looking, glossy with about one-half the work necessary to get a similar effect with dope, to say nothing of the ease of matching cowling and metal parts to the fabric both in color and gloss. Anyway, to your utter horror, a patch has become necessary due to some other clod borrowing the little jewel and dragging a wing tip into the frozen tundra. "Benevolent Joe", the friendly AP, can't get the machine into his heated hangar, and you would like to fly it again before spring.
So the first step is to either get the bird into an unheated "T" hangar, or put on your "long Johns" and scrape the snow and ice off, and get to work. First, determine the size of the damage. I have found that on small patches (not to be misconstrued to mean several panels), a temporary repair can be effected on enameled fabric even in sub-zero weather by the following method:
Obtain some old doped fabric from someone who has recovered a panel. Preferably this should be a silver doped finish, although color dope is satisfactory. Either nitrate or CAB dope is OK. Maybe CAB would be a little more desirable, but not a great deal. Cut a pinked patch from this old material, allowing about a one-inch overlap around the damaged area. Dip this patch in acetone and apply immediately over the damaged area. On a doped surface apply face up, and the acetone remaining in the cloth side of the patch will be enough to weld the patch to the hide. On an enameled surface, probably you will have to place the doped face of the patch to the wing surface. Smooth down around the edges until the acetone dries in about a minute. You are then ready to fly. As soon as you care to, you can shoot your color back on.
I have made satisfactory emergency repairs in 10 below zero weather with this method. But remember, this would have to be classed as a temporary repair, and permanent repairs should be made as soon as possible in accordance with CAM 18.
Tube-Cutting Jig
By Andrew H. Harness, EAA 12899
2805 SW. 55th, Oklahoma City, Okla.
A very practical tube-cutting jig can be made from a piece of scrap tubing with an inside diameter big enough to swallow the largest tube that might be cut. Weld the tube to a plate edge for securing in a vise, as well as to hold the jig together after the miter slots are cut.
INEXPENSIVE PROPELLER HUB EXTENSION
By Russell W. Riter, EAA 12838
Sky Harbor Airport, Northbrook, Ill.
Here is a simple and inexpensive way to get a propeller extension if you don't need over 3½ in.
I purchased a junked propeller for $2.00, sawed off both blades, and got my money back when I sold the blades for scrap. This propeller fitted the Lycoming O-290 engines.
The outer disc was then finished in the lathe, and I drilled 1¼ in. holes about 1 in. deep between the bolt holes on the back side. On the front side, I drilled two equal-spaced ¾ in. holes through to the 1¼ in. holes. These holes are to make it as light as possible. The mounting-bolt holes were reamed to a straight bore since they were compressed and out-of-round from previous torquing of the propeller bolts.
A counterboring tool with a pilot to fit the reamed bolt holes was made, as well as a cutter to fit the bushings that were pressed into the holes. The bushings are ¾ in. O.D. and ¾ in. I.D. by 1 5/16 in. long. Cessna p/n 0442129-1 will work or they can be turned and plated.
A front-centering bushing was turned from a piece of aluminum and shrunk in. The outside diameter of the part that sticks out is the same as the crankshaft. Since I didn't have a piece of aluminum, I sawed off the end from a scrap Hartzell blade and turned it to fit.
This type of extension is a little heavier than a spool type, but there should be no worries about strength or failure.
If less than 3½ in. is needed, cut off what you do not need while the outside diameter is being turned, but be sure to keep both faces parallel or the propeller will not run true.
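The warning about keeping the faces parallel is easy to quantify: whatever angular error exists across the hub face is multiplied out to the propeller tips. A small sketch of that proportion follows; all of the dimensions in the example are hypothetical, not from the article.

```python
# Tip wobble produced by non-parallel extension faces: the face error
# across the hub diameter sets an angle that the full propeller span
# magnifies. All dimensions below are hypothetical examples.

def tip_wobble(face_error_in, hub_face_dia_in, prop_dia_in):
    """Axial runout at the tips from a face error across the hub."""
    return prop_dia_in * (face_error_in / hub_face_dia_in)

# A 0.005 in. error across a 6 in. hub face grows to roughly 0.06 in.
# of wobble at the tips of a 72 in. propeller.
print(round(tip_wobble(0.005, 6.0, 72.0), 3))  # 0.06
```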
PANEL VIBRATOR FOR SAILPLANE TESTS
A PANEL vibrator for a sailplane was wanted for use during a Photo Panel recorded test of sailplane performance.
An out-of-balance propeller, or rather a windmill, in the ventilator tube was suggested by Bill Welch, and was made as follows:
A short piece of aluminum tubing, 1/4 in. in diameter and short enough to go across the duct without hitting, was formed into a two-bladed windmill as shown.
Then a piece of dural scrap stock was wrapped into a hoop as shown. The windmill was mounted by a wire (welding rod) axle secured to the dural coil as shown. A piece of dural 1/16 inch wide and as long as the chord of the blade was attached to the end of one blade with contact cement to provide the out of balance. This device was forced into the near end of the ventilator duct and it shakes as bad as an engine with three AC spark plugs and one you name it.
TAPING WING-TIP BOWS
By James E. Bell, EAA 3786
6 Sheldon Dr., Spencerport, N.Y.
Applying 3 in. tapes on wing-tip bows and tail surfaces can be quite a problem. Some builders cut wedges from the tape to go around the edge, which works all right, but here is a tip that was taught to me by "Squeeck" Hepler.
Make a lengthwise fold in the 3 in. tape and crease it. Dope about 4 to 6 in. of it on at the beginning of the curve and allow the dope to dry. Then dope the rest, or even a part of the curve if it is large. A good, even pull on the tape will allow the tape to fit the curve perfectly and make a much neater finished product. The crease which was made in the tape allows for a good center guide.
---
RIB CONSTRUCTION FOR RUDDER OR FIN
By Henry E. Winslow, EAA 595
Mira Loma Circle Apts., Unit 14A
1600 W. 5th St., Oxnard, Calif.
A strong rib construction for rudder or fin is made from 4130 sheet stock. It is first bent into "U" shape, then filed to fit and welded in place.
This type of construction has the advantage of forming the proper contour when two different sizes of tubing are used at the tail post and leading or trailing edge.
Also position-light wires may be routed through it without weakening the structure materially, as would be the case of a rib made of round tubing.
---
CUTTING DRAIN GROMMET HOLES
By Robert G. Harmon, EAA 1254
2980 NW. 173rd Terr., Opa Locka, Fla.
For cutting the fabric from drain grommet holes, I have found that a ¼ in. auger bit is just the thing! I purchased one for $1.00, and then used a whetstone to put a good sharp edge on the cutting surfaces. By putting a tap handle on it and using very light pressure, it will cut a very clean hole in no time, and by continuing right-hand rotation after the cut is made, the fabric slug will stay on the bit when withdrawn.
It also worked very well to cut the ¼ in. drain holes through the plywood on my "Cougar" wings. I would advise trying out this method first on a scrap piece in order to determine how light the pressure should be to prevent the back surface of the plywood from splintering.
---
ALIGNING BOLT FOR WELDING
By John Stinger, EAA 3782
P. O. Box 131, Beaver Falls, Pa.
I'm sure that there isn't one homebuilder who has not experienced difficulty in getting a bolt out of a fitting or hinge where it was necessary to use a bolt to hold the parts in alignment while welding . . . that is, unless he was building an all-wood airplane. To make the job easy, be sure to clean off all of the cadmium and file several flat spots on the bolt. Then, after welding, the bolt can be easily removed.
GROUND HANDLING SAFETY
By Arlo Schroeder, EAA 4902
114 SW. 6th, Newton, Kans.
Have the wheel chocks disappeared from your airport? Who knows what happened to them? How, then, do you start an airplane that has no parking brake?
Arlo Schroeder is shown checking the installation of a glider tow-hook on the tailwheel spring bolt of Bob Stephens' "Special."
The purpose of the hook is to secure the tail of the airplane during engine-starting operations. This eliminates the chase and possible solo flight of an airplane by itself when there is no one to man the controls while the engine is being started by hand. When the pilot is ready to taxi the airplane, the hook is released from the cockpit.
The hook can be homebuilt by the individual or it can be purchased from the Schweizer Aircraft Corporation.
OUTER RIB JIGS
By A. J. Meuse, EAA 5374
RCAF Station
Lamacoza, Quebec, CANADA
This suggestion can save countless hours in making ribs for a tapered wing.
Take a piece of commercial plywood approximately 1/16 to 3/32 in. thinner than the capstrip size being used, of sufficient length and width to accommodate the largest rib plan plus 4 in. on all four sides. Next, draw the chord line on the plywood, using the largest rib plan for reference. Then reproduce all the rib outlines on the plywood, using the chord lines as reference. Then take the smallest rib outline and cut out the center. Put the plywood with the rib outline over the corresponding rib plan or template, placing wax paper between them. When the plan is smoothed out, nail the jig down and then you are ready to place the inner support blocks (cross-member supports) and construct your ribs. When they are finished, take the plywood jig and cut out the next size and do the same as the first. Simple, isn't it?
TEMPORARY METAL-TURNING LATHE
By Graydon L. Sharpe, EAA 3784
R. 2, Augusta, Maine
For the person who has a drill press but no metal-turning lathe, and who wants to square the ends of small bushings, chuck the bushing in the drill press, turn it on, and bring the bushing down onto a flat mill file that has been secured to the table. The file will need
(Continued on bottom of next page)
In building the wings for my Stits "Flut-R-Bug", I found that cutting a piece of 1/4 in. plywood to fit over the spars at the required rib spacing helped a great deal when installing the ribs.
By clamping the plywood jig to the spars, and then clamping the rib to the plywood jig, the rib is held snug and straight, with good backing for gluing and tacking the 1/4 in. round gussets on one side. The next day, the jig would be moved up to the next rib for the same operation, then going back to finish the opposite side of the previous rib.
It may seem like a slow process, but many homebuilders can't afford to go too fast, so a little progress each day keeps the enthusiasm alive much better than long waiting periods in between.
---
**Landing Gear Material**
Responding to the many inquiries received in connection with the type of material used in the spring landing gears, we asked Steve Wittman for his advice in this matter, and he replied as follows:
"The desirable steel for the Wittman spring landing gear is SAE 6150, but it is not easy to find. A steel that is satisfactory and readily available is 4140. Both should be heat-treated 400 to 425 Brinell. We purchase our steel from . . . High Alloy Steel Co., 5100 W. 73rd Street, Chicago 38, Ill."
---
**HELPFUL HINTS . . .**
(Continued from preceding page)
frequent cleaning while the work is in progress. Bushings can also be reduced in diameter by holding the file against the surface while it is chucked in the press or a hand electric drill.
---
**GLUE APPLICATOR**
*By Dale Johnson, EAA 4258*
3704 Cambridge, Midland, Mich.
When building wood ribs, a very efficient and effective glue applicator can be had by purchasing a paint striper as shown. They can be purchased from Sears and Roebuck Co. Mix your glue and roll it on. Just the right amount can be applied to both cap strips and gussets. When done with the ribs, the tool can later be used for that fine job of pin striping when painting your completed ship.
NAIL POSITIONING TOOL
By Mel Sutter, EAA 14377
524 W. Market St., Akron, Ohio
One of the most tedious steps in building wing ribs is the nailing of the small ¼ in., 20 gauge nails into the gussets, and this method will help subtract many hours from the job of building the wings.
A large house nail or spike was ground down to the shape as shown in the picture. Using no other tools, small holes are pressed into the thin plywood with the large nail.
This step is followed by placing the small aircraft nails into the holes with the fingers only. This last step can be delegated to some other member of the family, even the children, because the positioning of the nails has been pre-determined.
---
NAILING WING RIB GUSSETS
By William C. Kilburn, EAA 14168
Plantation Circle, Asheboro, N.C.
In getting nails started in wing gussets prior to mixing glue, here is a method that will not only eliminate some of the frustration of trying to handle the small nails with half-glued fingers, but will also insure ending up with a more accurate job of nailing:
STEP 1. Make one rib, using any method, and after the glue dries, pull the nails and locate exactly where the nails should have been located. Mark each spot.
STEP 2. At each spot that was marked, drill a hole just large enough to let the heads of your aircraft nails pass clear through. This completes the jig. To use it, put a gusset on the work bench and put the jig on top of it, with the gusset properly located where it will go. Drop a nail into each hole. Use a nail set or a common nail with the point filed off to reach into each hole and lightly drive the aircraft nail into the gusset. After all the gussets are pre-nailed for one side of a rib, turn the nailing jig over and nail a set of gussets for the other side.
The picture shows the process of pre-nailing gussets for a "Cougar" false rib. Shown are both pre-nailed gussets for one side of a rib, and the jig being used to pre-nail a gusset for the other side. What appears to be nails on the jig are actually the drilled holes.
---
OPENING HOLES IN METAL FITTINGS
By Graydon L. Sharpe, EAA 3784
R. 2, Augusta, Maine
To enlarge a hole in a steel or aluminum fitting that does not require a close-tolerance fit (such as a fitting through which a tube passes and is welded around the perimeter), one easy way is to chuck the tang of an appropriate-size rat-tail file in a carpenter's brace. Inserting the small end of the file into the existing hole, rotate the brace counter-clockwise with pressure inward on the file. Because the tooth pattern on the file is helicoid, like a screw thread, if turned clockwise it would tend to screw into the work and would take too much of a bite. By turning counter-clockwise, the amount of cutting can be easily controlled by the amount of pressure held inward on the file, and with no grabbing.
---
NON-SLIP SCREWDRIVER
By Graydon L. Sharpe, EAA 3784
R. 2, Augusta, Maine
Sometimes in the removal of tightly installed or corroded Phillips-head screws, the screwdriver tends to slip up and out of the screw slots, rounding off the shoulders and thus making it easier to slip on each succeeding try. Instead, try this! When first going at the job, apply a tiny bit of fine valve-lapping compound on the tip of the screwdriver for each screw. This fine abrasive much increases the friction between the screwdriver and the slots, and more energy can then be applied toward torque and less to trying to hold the screwdriver down into the slots. No word should be needed on clean-up of the abrasive after use.
---
SABRE SAW VERSATILE FOR HOMEBUILDERS
By Edward J. Gumell, EAA 7038
5641 Willow Terrace Dr., Bethel Park, Pa.
I think most of us in this homebuilding of aircraft have at one time or another run into a situation where we have need of metalworking tools of some kind — lathes, band saws, etc. Metal fittings of 4130 from .065 up can be worked by hand only with difficulty if at all. A little gimmick that has worked extremely well for me is the use of a heavy duty sabre saw as a very portable and versatile band saw.
When I bought my Craftsman heavy duty sabre saw a few years back, included in the kit were the usual woodworking blades and one very fine-toothed metal blade which I tried on some .090 and promptly forgot because it burned up. A little later, however, I needed the use of a metal band saw but had no access to one and had to improvise, which is when I tried the method I now do almost all my cutting with on gauges from .065 up to .250 in mild steel.
I had some standard 10 in. Griffin high-speed hacksaw blades (I think any good blade would do as well). I snapped one off to a 3 in. length and ground the top back edge for three-quarters of an inch to fit my saw holder. In my particular case, about a quarter inch was left to go in the holder. This will probably vary with the saw, but it should work in most cases. The first blade I used was an 18 tooth, which works well in heavier material, and I have since used up to a 24 tooth for lighter metals.
The use is the same as though wood were being cut, except that the sheet should be well clamped on a heavy board to minimize up and down vibration. The speed of cutting will be faster than that of a table band saw, but will have to be found out through experimentation. One thing to remember, don't force the cut! You will find that after a certain temperature is reached the blade will move very easily. Another thing, wear safety glasses of some kind; the blade is moving up and down around 3,000 strokes per minute and is cutting on the up stroke and throws out small steel chips quite forcibly.
Radii can be cut down to one-half inch by grinding the blade to a smaller width. The usual width is one-half inch, so I took time and made a set in graduations of 1/16th down to a quarter which has taken care of all my needs so far. Another thing that helps on long straight cuts is to use a guide and a little cutting oil or grease along the line of cut. Aluminum, brass and cast iron have all been cut this way, remembering the softer the metal, the coarser-toothed the blade that can usually be used. Because the blade is moving at such a terrific speed, the edges of the piece being cut rarely have to be touched except for possible rounding-off.
---
**RIB STITCHING**
*By Noel M. Walker, Jr., EAA 5150*
*Tazewell, Va.*
Except for the extra walking around, I find that doing rib stitching alone is as easy as doing it with two men. In fact, I use this same procedure when I have help, as it saves a great amount of time in trying to hit the mark on the other side.
First, I stand the wing on its leading edge in a simple rack, and then mark the chalk lines. Then I take my needle and punch the holes for the needle on both sides of the wing, either all or as many ribs as I expect to sew at the time. Then I place a light on each side of the wing and by looking through one of the needle holes above the hole in which I am inserting the needle, I can see the light shining through the opposite hole and can also see the needle. It is easy to aim the needle through the opposite hole and saves punching those accidental holes and the guesswork involved in finding the right spot. I believe that this saves about half the time on a two-man job, and it is surprising how much of a wing you can reach from one side.
---
**Airframe Demagnetizer**
*By M. B. Standing, EAA 11383*
*135 Sheridan Way*
*Woodside, Calif.*
When installing the compass in my Stits "Sky-coupe", I found so much residual magnetism in the cabin area, as a result of welding the 4130 airframe, that the compass would point only in a single direction. An expenditure of 90 cents for a surplus TV choke provided material from which to construct a simple demagnetizer.
Fig. 1 shows the external appearance of the choke before alterations. Fig. 2 is a schematic drawing of the general shape of the laminated iron core and the copper coil that is wound around the central core. The electrical resistance (DC) of the coil measured 100 ohms. This would give a flow of about 0.8 amps when connected to a house circuit.
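As a side note on the 0.8 amp figure: simple Ohm's law on the 100-ohm DC resistance alone would give 110/100 = 1.1 amps, so the stated current presumably reflects the coil's added inductive reactance at 60 cycles. A minimal back-of-envelope sketch (my own assumption of a simple series R-L model, not part of the original article) of the reactance and inductance implied by the stated values:

```python
import math

V = 110.0  # house-circuit volts (AC)
I = 0.8    # stated current, amps
R = 100.0  # measured DC resistance of the coil, ohms
F = 60.0   # line frequency, Hz

Z = V / I                    # total impedance implied by the stated current
XL = math.sqrt(Z**2 - R**2)  # inductive reactance, assuming a series R-L circuit
L = XL / (2 * math.pi * F)   # coil inductance implied by that reactance

print(f"Z = {Z:.1f} ohms, XL = {XL:.1f} ohms, L = {L:.2f} H")
```

This works out to roughly a quarter-henry coil, which is plausible for a TV choke of this kind.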
To convert the choke for demagnetizing work, it is necessary to remove a portion of both outside cores. First, however, drill through the plates and install two brass machine screws to keep the core laminations from separating. Cut the outside cores at the location shown and discard the outside pieces of laminated cores. The shape of the remaining core will now be that of the letter H with the coil around the center section. File any sharp corners. Connect the leads to a suitable length of electric wire and a standard electrical plug. Add a wooden handle if you wish.
To use the demagnetizer, move it slowly back and forth along the desired section of airframe while it is connected to the 60 cycle/110 volt house circuit. Position the unit so that the airframe tubing acts to complete the magnetic flux path emanating from one end of the H. Do not turn off the current while the unit is against the airframe. To do so will result in the airframe being strongly magnetized at that point.
I suggest that all welded clusters and connections be tested for magnetism before applying any final cover. This can be done by bringing a compass close to the weld and noting any strong deflection of the needle. Believe me, it pays to demagnetize the bare airframe before covering rather than to run the chance of having to do it after the fabric and paint are on.
BOX SPAR CLAMP
By Stanley W. Wilkin
EAA 10764
184 Islington Ave., N.
Islington, Ontario, CANADA
This very simple clamp is an idea that I came up with to help me make an extra dollar to carry on with the building of my aircraft.
My chums asked me if I would build a box spar for a mast for their sailboat. The construction of the mast was very similar to a box spar of an aircraft wing.
I took on the job to build this 32 foot spar, but in the back of my mind, the thing that I did not know was how I was going to clamp it simply and not lose my shirt in labor for jigging.
I hit on this idea for a clamp and made 120 of them from ¾ in. packing box lumber planed on one side. This was obtained free from my place of employment. The time required to make the clamps was only eight hours. I used two 1½ in. long nails in each block and no glue to make the clamp. As the spar was tapered, this made the clamps easy to adjust by moving the block behind the wedge to the right width for the spar and driving in the nails.
The clamp can be made to fit any width of spar just by cutting the base block to suit. I used a base block that was 6½ in. long. I glued A, B and C first with the filler blocks in place, and then glued D.
I hope that this idea will be of some help to some of the EAA members who are short on clamps.
---
TRAMMELING A SWEPTBACK WING
By
Ellis S. Barrett
EAA 15787
E. Surry Rd., Keene, N.H.
and
G. D. Wilson
EAA 11422
Fitzwilliam, N.H.
Fig. 1
The problem of how to trammel a pair of sweptback wings with precision has undoubtedly been solved before. However, outlined here is our method which is simple and very accurate. It can be done with a trammel bar, a scale, and a minimum of skull work.
We solved the problem for the upper wings of a PJ-260. However, the method can be easily adapted to any wing.
Let us assume that the wing has 9¼ deg. of sweepback and the spars are 25 in. on centers (measured parallel with the ribs).
If you draw a line perpendicular to and intersecting the center line of the rear spar at a compression tube or rib location, use the center line of the compression tube or rib, and the center line of the front spar, you form right triangle ABC. (Fig. 1). Angle CAB equals 90 deg., angle CBA equals 9¼ deg. This can be proved geometrically. Line CB equals 25 in.
Using basic trigonometry, you determine the length of line CA.
\[
\text{Sine of } 9\frac{1}{4} \text{ deg.} = \frac{\text{CA}}{25}
\]
\[
.1607 = \frac{\text{CA}}{25}
\]
\[
4.0175 \text{ in.} = \text{CA}
\]
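The arithmetic above generalizes to any sweep angle and spar spacing. As a cross-check, here is a minimal sketch (the function name is my own, not from the article) that computes the offset CA directly:

```python
import math

def spar_offset(sweep_deg: float, spar_spacing_in: float) -> float:
    """Offset (line CA) by which to displace the front and rear spar butts
    so that trammeling square lines yields the desired sweepback."""
    return spar_spacing_in * math.sin(math.radians(sweep_deg))

# 9 1/4 deg. of sweepback, spars 25 in. on centers (measured parallel with the ribs).
# Full precision gives about 4.019 in.; the article's 4.0175 in. comes from
# the sine rounded to .1607.
print(f"{spar_offset(9.25, 25.0):.4f} in.")
```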
Now take both pairs of spars and clamp them together with the front and rear spar butts displaced by 4.0175 in. (Fig. 2). With a square, scribe several trammel lines across the spars, preferably in the vicinity of the compression tubes.
When you assemble the wings, use these trammel lines and trammel as if the wings had no sweepback. (Fig. 3). Lo and behold! The wing has 9 1/4 deg. sweepback.
Trammeling at other than compression tube locations presented no problems. Both wings have a predetermined amount of sweepback. There are no eyeball measurements and no jigs required. But, perhaps more important, both wings have exactly the same sweepback.
CARBURETOR AIR SCREEN COVER AND CONTROL LOCK
By Rollin C. Caler, EAA 11984
1113 New Mexico St., Boulder City, Nev.
Protection of the carburetor air screen from blowing dust and sand while the aircraft is tied down at the airport can be quickly accomplished with the use of a simple slip-on type cover made of .021 in. galvanized iron obtained at the local builders' supply or hardware store.
The two dimensions of the screen are first marked on the flat sheet, then about 1 1/2 in. is added outside and parallel to the inner lines. A 90 deg. cut with sheet metal shears is made at each corner to allow bending to a box shape. The remaining sharp ends should be cut round to prevent injury. The sides are formed over a block of wood using a mallet. In bending, the inner line should be "saved", which will give a slightly oversize effect. The sides should then be bent in more than 90 deg. to recover the original dimensions, then bent out to give a smooth, spring-like slide-on surface. The outside face can be painted red, and a red cloth streamer attached, as a reminder to remove the cover before starting the engine.
Tailor-made gust locks for ailerons are easily made by more use of this galvanized iron and 3/4 in. scrap wood. A sheet of this metal is placed between the aileron and the adjacent wing rib and about 6 in. long pieces of 3/4 by 3/4 in. are placed above and below the rib to give the outer outline of the lock. After marking, the sheet should be cut slightly undersize to prevent unnecessary sharp edges. The assembly is then nailed together using nails long enough to go through both sides and clinched. The nails go through the metal quite easily. I used three nails on top and three on the bottom.
The outer surfaces should be painted red to be easily seen and removed during the pre-flight inspection. The inner surfaces should remain unpainted to prevent discoloring the aircraft surfaces. The forward edge of the metal should be filed smooth to prevent damage to the aircraft fabric. Attachment to the wing depends on what struts, etc. are present. On my Corben "Baby Ace," I found a spare piece of vinyl-coated No. 12 solid electrical wire made a quick and durable attachment to the rear strut.
TUBE FLANGING TOOL
By Hilton McNeal
EAA 5902
4390 S. Tamiami Trail
Ft. Myers, Fla.
In the past, it was a slow, tedious process to make a flanged tube for joining 2 in. flexible tube to a flat surface such as the vent for cabin heat. I took two 3-in. dia. chunks of steel and machined them, as shown in the sketch, for a male and female die. This tool enabled me to easily form a finished flanged tubing nipple in a very short time by putting a 2 3/4 in. blank length of 2 in. soft aluminum tubing in the female die and then pressing the male die down on the top to make the flare. Release the press, tap lightly, and the blank comes out perfectly flared and ready to use by drilling the mounting holes. This sure beats the old hand forming, welding, or riveting two or three pieces to form this part, and takes only a fraction of the time.
"COUGAR" FOLDING-WING DETAILS
By Marvin D. Becker, EAA 3238
11571 Palmwood Dr., Garden Grove, Calif.
Folding wings are not a new idea, and there have been many methods of accomplishing this feature in past designs. The method used on my "Cougar" has worked fine for me, but perhaps you can improve a few features to suit your needs.
There have been many inquiries about the details on this folding-wing which prompted me to present the material here.
Very little extra weight penalty is added with the full universal at the rear spar, the front spar-attach point which is beefed up and extended 2 in., and the nuts welded to gussets for wing stowage.
Setting the ship up for flight takes 20 to 30 minutes depending on how many questions I answer for the group that gathers. The front-spar and lift strut are completely removed for folding. The struts are stowed in the cockpit. A set of tail lights with stop and turn signals is slipped on the prop blades for travel to and from the airport and the "Cougar" is pulled on its gear backwards, by a "bolt-on fuselage" tow-bar made from large streamline tubing.
I have logged about 1,000 miles of trailer operation since the first flight on September 12, 1961 without any problems. She fits very well in my garage; the highest point is 6 ft. 6 in. at the aileron with the wings folded. See the May, 1961 issue of SPORT AVIATION for more pictures and construction details on the "Cougar."
CRIMPING TOOL
By Russell W. Riter
Sky Harbor Airport
Northbrook, Ill.
I have formed a lot of ribs and bulkheads over plywood using 2024T3 up to .063, but, as anyone who has formed them knows, when a curve is formed while bending the 90 deg. flange, the main material takes a curve when it usually should be flat and straight. The flange can be drilled and cut out between rivet positions; however, I have made a tool that makes a little crimp between rivet positions and straightens the rib. A
couple of pieces of angle iron were cut and the edges were all rounded and smoothed. They were then brazed to a pair of vise-grip pliers.
After forming and trimming the flanges, mark the rivet spacing and then crimp between the rivet spacing just enough to straighten the rib.
Also shown in the picture is one size of lightening hole flange die. I made a set from 1 in. to $3\frac{3}{4}$ in., by $\frac{1}{4}$ in. increments. Simply lay out the holes, saw out with a hole saw, smooth the edges and press the flange with a vise or a press. They look as professional as a factory job.
---
**SPORTSMAN'S CRADLE**
By Kenneth C. Walton, EAA 12488
1 Stuart Ave., Chateaugay, N. Y.
This simple and inexpensive cradle will greatly simplify the problem of handling and moving the fuselage-hull of a Volmer VJ-22 "Sportsman" during its construction. There is no reason why the cradle could not also be adapted to an original-design amphibian.
---
**LIFTING TEMPLATES FROM FULL-SCALE PRINTS**
By Daniel D. Dwyer, EAA 11545
2735 Wilma, Wichita, Kansas
If you have a set of prints that show wing fittings or other such small metal parts full scale, take a piece of clear celluloid or acetate (technically referred to as cellulose-acetate sheeting), place the transparent material over the parts shown on the print and tack it to the print with four small pieces of masking tape wherever it is convenient. This material is transparent and a lot like plexiglas, but should only be about .015 thick. It can be purchased from an office supply dealer if no other convenient source is available.
Trace the outline of the part on the transparent material with a sharp knife, razor blade, utility knife or, better yet, an Exacto knife. This is a tool common to model builders for cutting balsa, shaped and handled like a pencil, with interchangeable cutting edges on the end.
It isn't necessary to cut too deep into the celluloid material. Anyone can draw the line with the cutting tool without any danger of cutting the print. Use a straightedge to follow the straight lines, and washers or coins to guide around the radii. Locate the holes with a scribe. Lift the celluloid from the print and bend along the lines, and it will break along these lines very easily. It helps to use tweezers or pliers to break around the contours or radii. Knock off any burrs with sandpaper before using the pattern to scribe the flat stock. Your print will not be marked or damaged in any way.
Simple Construction Of Fiberglas Wing Tips
M. B. Standing, EAA 11383
135 Sheridan Way, Woodside, Calif.
When I got around to making the wings for my Stits Skycoupe, I decided that I wanted something different in the way of wing tip design. Ray had designed the tips to use a single \( \frac{1}{2} \) in. tube bow supported by the outermost rib and the two spar ends. The fabric stretches tight from the last rib to the bow a la Airnocker. I wanted something more fancy.
I figured the best way to start was to place some small riblets between the regular wing rib and the bow. These were cut from \( \frac{1}{8} \) in. plywood to the contour I wanted. These are shown in Fig. 1.
To get a rounded effect at the bow I next cemented pieces of polystyrene foam between the riblets and at the leading edge of the wing. These are also shown in Fig. 1.
My plan was to next cover each side of the tip with one layer of fiberglas — after sanding the polystyrene to the right shape. This was a good idea up until I put the resin on the glass cloth and tried to stretch it over the riblets. If the cloth was on the top of the wing, I got nothing but hills and valleys! When I turned the wing on edge or upside-down to get rid of the saw-tooth effect, the cloth would fall on the floor! Obviously, a supporting medium was needed.
In looking around for something to use, I spotted the \( \frac{1}{8} \) in. corrugated cardboard that Mr. Reynolds had shipped my sheet aluminum in. Just the thing — cut strips and glue these between the riblets. The result is shown in Fig. 2.
But I wasn’t out of the woods yet! The top surface for the first half of the chord was composed of too many flat segments with breaks in the curvature at the riblet locations. Solution was to glue more styrofoam to the flat spots and then sand to a smooth shape. This was then covered with one layer of 7 oz. glass cloth, using the usual polyester boat resin. You can see the finished product in Fig. 3.
I had originally thought to remove the cardboard after the fiberglas had set up. However, it adds so much rigidity to the tip with so little additional weight that I’m going to leave it in the wing.
One word of caution is in order. There are two types of styrofoam. One type dissolves when it comes in contact with Ambroid glue, lacquer thinner, polyester resin, etc. It will, however, stand polyvinyl-acetate glues (Elmer's, Wilhold, etc.). The second type, which I obtained from the local airplane hobby shop, was not affected by these organic materials.
SAFETY ALERT
U.S. GENERAL AVIATION
MISUSE OF BRAKES
Misuse or excessive use of light aircraft brakes will reduce their reliability and service life.
To maintain effectiveness and reliability, it is suggested that you:
1. Permit aircraft speed to be reduced aerodynamically before using brakes.
2. Taxi in a manner requiring minimum brake use. Do not “drag” brakes at any time.
3. Do not use brakes while there is lift on the wings.
4. Apply brakes smoothly with an increase in pressure as necessary for maximum effectiveness.
5. Exercise caution during touch-and-go landing as brakes may become over-heated.
REMEMBER
GOOD BRAKES CAN PREVENT ACCIDENTS
CIVIL AERONAUTICS BOARD
EAA Air Museum Foundation, Inc.
A Non-Profit Organization Devoted To The History And Development of Sport Aviation
Aviation Museum: 11311 W. Forest Home Ave., Franklin, Wisconsin —
Mailing Address: Box 229, Hales Corners, Wisconsin 53130
PREPARED BY PAUL H. POBEREZNY AND S. H. (WES) SCHMID
Reprint 1977
HARRY PAYNE'S ICONIC GREAT WAR "OILETTE"
RAPHAEL TUCK & SONS
Dear CMMRG Members:
WELCOME TO NUMBER 200! This is a momentous occasion for our Study Group and a significant milestone for our parent society BNAPS.
Doug and I wish to especially thank all those members who contributed to this very special issue, along with those (past and present) who have submitted articles, participated in meetings at our BNAPEX conventions, and given their support to our group over the years. A very big "thank you" to everyone!
Since our group's formation in Calgary in 1973, through this publication, our members have greatly contributed to the research and study of Canadian military postal history. In perusing some back issues I came upon some poignant observations by our late former Editor J. Colin Campbell. Colin wrote in the September 1986 Newsletter (Issue #69, whole p. 464): "We would like to think that the study group has advanced the research and recording of Canadian Military Postal History a considerable distance". That was an amazing twenty-five years ago and I believe that his comments still ring true today.
Given this special occasion it is naturally time for us to reflect upon our past and the many significant contributions made by our fellow members, students, and friends within our study group "family" who we have lost over the years. This anniversary issue is dedicated to their memory.
We would also wish to salute our group's former officers: Colin Campbell, Ken Ellison, Henk Burgers, E.R. "Ritch" Toop, and W.J. "Bill" Bailey for their tireless efforts, hard work, and dedication which greatly assisted in bringing our group and this publication so far.
We hope that members enjoy this anniversary issue. Our next issue will continue with our regular format. A CMMRG meeting will be held on SATURDAY, SEPTEMBER 3, 2011 AT 2:30 PM at BNAPEX 2011 in NORTH BAY, ONTARIO!
C. DOUGLAS SAYLES
Chairman/Treasurer
DEAN W. MARIO
Editor
Congratulations to the BNAPS Military Mail Study Group on their 200th issue. Since 1973 your Group has listed, described and illustrated a wide variety of items related to mailings by, or addressed to, members of the Canadian Armed Forces. There also has been considerable material related to the procedures of mail distribution in the services. The information has resulted in several major handbooks. Almost as impressive as reaching a 200th issue is that the newsletter has been maintained over a period of almost 38 years, and has averaged more than five issues a year over that period. As President of BNAPS I want to express my appreciation to all the contributors over the years, and especially to the previous newsletter editors, Colin Campbell, Ken Ellison, and Henk Burgers and to present editor (since 1995!) Dean Mario. Well done.
Robert Lemire, President BNAPS
Boer War Connections
Henk Burgers
Canada sent troops overseas for the first time in 1899, when it became involved in what was actually the Second Boer War after Great Britain requested that it send a military force to South Africa to assist in the fight against the Boers. Prime Minister Wilfrid Laurier decided to raise a special force, the 2nd (Special Service) Battalion of the Royal Canadian Regiment of Infantry.
Figure 1. GENERAL ORDER 107 of 1899 for the Militia, authorizing the raising of an infantry Battalion for Active Service in South Africa.
Figure 2. Special GENERAL ORDER authorizing a Force for Special Service in South Africa consisting of The Canadian Mounted Rifles and a Brigade Division of field artillery consisting of Batteries "C", "D" and "E" Royal Canadian Artillery.
En Route
On 30 October 1899, the first contingent of 1000 soldiers sailed from Quebec City on the Allan Line's SS Sardinian, a converted cattle ship. They arrived in Cape Town on 29 November 1899 and on December 1 boarded trains for Belmont, where they joined the British rear guard for the next two months.
The second contingent included the five-member Postal Corps detachment, commanded by Lt WR Ecclestone of Hamilton. It left Halifax on the SS Laurentian on 21 January 1900. Its postal kit included an oval rubber date stamp inscribed "CANADIAN CONTINGENT EN ROUTE SOUTH AFRICA". This stamp was used on 30 January 1900 when letters were posted at St. Vincent in the Cape Verde Islands where the ship stopped for coaling. It was also used on 31 January and 15 February 1900.
After arrival in South Africa, the "EN ROUTE" part was removed and this stamp and another one were used on Canadian mail in Capetown until 27 December 1900.
Figure 4. Letter to Truro, NS, from Canadian soldier Kaulbach, son of the Archdeacon of NS, to his mother, posted in St. Helena on 15 December 1899.
There are ten covers known from St. Helena. Neither Kaulbach's rank nor his first name is known, because none of the covers is endorsed with his regiment or rank. This was his first letter home, written en route to South Africa and posted in St. Helena, 5000 miles from home. The reverse bears a Truro arrival marking of JA 15/1900. The envelope flap bears the shipping line's (Union Steam Ship Company Limited) seal in blue. The ship arrived in Capetown on 16 December, 11 days' sailing from St. Helena. It would appear that the St. Helena date stamp may be wrong and should read 5 December.
The Canadian postal detachment was part of the British Army Post Office system in South Africa. This explains why many covers are seen bearing the cachet plus a British military postmark.
Figure 5. British Army Field Post Office with indicia 36 DRCDS dated 7 May 1900 ties GB 1d lilac on cover to Mrs Kaulbach at The Rectory, Truro, NS. Stamp selvage used to seal homemade envelope.
The letter in Fig. 5 was posted at APO 1, located then in Brandfort, Orange River Colony. It served the Army Headquarters of the South African Field Force.
The other Canadian contingents used civilian offices or British Field Post Offices, depending on what they had access to. Only a third of the Canadian mail received the cachet, as mail was presorted and bagged for Canada in Bloemfontein or Pretoria so there was no need to do this again in Cape Town. Some BFPO numbers seen include 1, 17, 21, 30 and 100. APOs include 43, 50, 52, 54 and 55.
The postal staff returned to Canada on 20 January 1901 and the corps was disbanded for the time being. After this, all Canadian mail used APOs or civilian post offices.
Marching to Pretoria
On 21 January 1900, the first members of the 2nd Canadian contingent sailed from Halifax. On board the SS Laurentian were two artillery batteries (D and E) from the Royal Canadian Artillery. The ship arrived in Cape Town on 17 February. They were followed on 27 January by the 2nd Battalion, Canadian Mounted
Rifles, sailing on board *SS Pomeranian*, which arrived on 26 February. The final draft left Halifax in February, on board the Elder-Dempster liner *SS Milwaukee*, and carried the 1st Battalion, CMR, along with C Battery, RCA.
The Second Contingent cover in Figure 6 was sent on 25 July 1900 by Pte BC d'Essum of the 2nd Bn CMR to his father in Hamilton.
Figure 6. Orange Free State franked cover to Hamilton, endorsed "Via New York". Stamp overprinted VRI/1d and cancelled by ARMY [P.O./TS] AFRICA/JL 25. NY arrival marking AU 20 on front and Hamilton arrival CDS of AUG 20 on reverse.
The gunners of course also sent mail home. The cover in Figure 7 carries a manuscript endorsement "Canadian Contingent Field Force Brit. S. Africa" and three strikes of the FIELD P.O / BRITISH ARMY S. AFRICA datestamp (Proud Type IV, locally made, with a damaged second 0). This datestamp was used by FPO 43, 2nd series, the Advanced Depot at Bloemfontein, from 1 May 1900 to 7 August 1902.
Figure 7. Canadian Contingent Cover with Orange River Colony 1d and Canadian 2c Map stamp to Newmarket, Suffolk, England. Posted at Bloemfontein by Pte M. Boone of 'E' Field Battery, RCA.
The reverse bears Newmarket arrival marking CDS AU 20/00. The map stamp was perhaps used as a patriotic label; the cover may be philatelic. However, an article in SG Stamps of Oct 2008 states that often stamps of soldiers' home countries were used, as well as GB stamps. This is one of 2 known entires with Canadian stamps; illustrated in Rowe, p. 82.
**The South African Constabulary**
The British government raised a para-military force to police the conquered Boer republics, setting up the South African Constabulary (SAC) for this purpose. In August 1900, two months after the fall of Pretoria, Major-General Robert Baden-Powell was appointed to command the force. However, despite optimistic British expectations that the 8,500-strong constabulary could assume responsibility for pacifying the countryside, the Boers continued to fight on following the capture of their capitals.
Figure 7 shows a letter from a member of the SAC to Merritton (now part of St. Catharines), Ontario.
Figure 7. Cover from Sgt Ball of 20 Troop, SAC, in Bloemfontein to Merritton, ON. The 1d Orange Free State VRI surcharge is cancelled by a Bloemfontein civilian cancel of 4 July 1902. Also rated "T" with 4 due, which should not have been collected on a soldier's letter. The cover bears an APO Bloemfontein dispatch marking, a Hamilton CDS transit marking and a Merritton square circle receiving marking.
Patriotic Passions
There was a good deal of patriotic fervour in the British Empire and Canada certainly shared this. One of the results was an outpouring of postcards, preprinted stationery and other items all demeaning the Boers.
Figure 8 shows an example of this rather undignified propaganda.
Figure 8. A registered letter from Toronto to Pittsburgh, PA featuring "Oom" (Uncle) Paul Kruger, The Cause of It All. York Street duplex cancel FE 15/02. Reverse bears another York Street duplex, a Toronto split ring, a Buffalo NY transit marking, another transit marking, and a Pittsburg Registration Div. arrival marking in violet.
The Home Front
Some 2,000 other troops either served as garrison troops in Halifax, Nova Scotia, to free British troops to serve at the front, or landed after the fighting ended. Canada replaced the British regiment in Halifax for the duration of the war with a Canadian unit, the 3rd (Special Service) Battalion, Royal Canadian Regiment of Infantry. Figure 10 shows an example of correspondence to a member of the battalion.
Royal Review Souvenir Marking
Although strictly speaking not Boer War covers, the postal arrangements for the Royal Review of 1901 are usually discussed in the same breath. The postal corps detachment had returned to Canada in January 1901 and was promptly disbanded. The men went back to their civilian jobs in the post office.
Subsequently Capt Ecclestone and three of his staff were called up for Militia service in Toronto during 8-12 October of that year to provide postal facilities for soldiers mustered there for the visit of the Duke and Duchess of Cornwall and York. It was at this 'ROYAL REVIEW' on 11 October 1901 that the first souvenir military postal marking was used.
Figure 11. Cover to Belleville, ON, franked with 2c QV numeral. Front has round stamp of Assistant Postmaster Toronto in blue. Reverse bears Belleville CDS arrival marking OC 16/01.
The South African War, or Second Boer War as it was also known, lasted from 1899 to 1902 and involved more than 7,000 Canadians, including 12 female nurses. A total of 267 soldiers died in the conflict. Six contingents were sent overseas between 1899 and 1902. The last draft of the sixth contingent, consisting of part of the 4th Regiment CMR and the 5th Regiment, sailed from Halifax on the SS Corinthian on 23 May 1902 and arrived at Cape Town on 18 June 1902.
The first to return (at the end of their enlistment period) was the 2nd (SS) Battalion RCRI, which left Cape Town on the SS Idaho on 1 October 1900 and arrived in Halifax on 1 November 1900. Peace was signed at Pretoria on 31 May 1902, and the last of the sixth contingent returned via Liverpool, boarding the SS Lake Erie on 3 September 1902 and disembarking at Quebec on the 13th. A number of Canadians remained in South Africa with the South African Constabulary and some other, more or less irregular, units such as the Canadian Scouts.
References
[1] W.J. Bailey & E.R. Toop, *Canadian Military Postal Markings, 1881-1995* (Charles G. Firby Publications, Waterford, MI, 1996).
[2] W.J. Bailey & E.R. Toop, Ed. B.B. Proud, *The Canadian Military Post*, Volume 1, (B.B. Proud, 1984).
[3] K. Rowe, *The Postal History of the Canadian Contingents in the Anglo-Boer War 1899-1902*, (Vincent G. Greene Philatelic Research Foundation, 1981).
[4] Canadian War Museum, http://www.warmuseum.ca/cwm/exhibitions/boer/boerwarhistory_e.shtml
[5] The Canadian Encyclopedia, http://www.thecanadianencyclopedia.com/index.cfm?PgNm=TCE&Params=NA1ARTM0012043
[6] Proud-Bailey Co. Ltd, *History of the British Army Postal Service, Vol. 1*, Edward B. Proud, Ed., 1982.
[7] Canadian Troops to South Africa, 1899 – 1902, http://www.rootsweb.ancestry.com/~alwcobit/LER/Boer/index.htm
[8] W. Sanford Evans, *The Canadian Contingents and Canadian Imperialism*, (Eugene G. Urmal, Ottawa).
********
CANADIAN RAILWAY TROOPS IN PALESTINE AND SYRIA—
IN THE SHADOW OF T.E. LAWRENCE
--Robert Toombs
The summer of 1918 saw General Sir Edmund Allenby's Egyptian Expeditionary Force (E.E.F.) holding a front from Haifa on the Mediterranean in the west to the Jordan River in the east, just north of the Dead Sea. Allenby was planning a major offensive against Turkish forces and requested from the War Office in London a contingent of expert railway bridge builders. Canadian railway troops in France were approached on August 3, 1918 and the assignment was approved on August 20, 1918. Six officers and 255 men, drawn mainly from eight Canadian railway battalions, were assembled by the 12th Battalion, Canadian Railway Troops (C.R.T.), at Verton, France. This newly-formed 1st Bridging Company, C.R.T., sailed from Marseilles on September 20, via Malta, and arrived in Palestine on September 30, 1918.
While the 1st Bridging Company was in transit from France to Palestine, the Meggido battles had commenced on September 19, 1918. The Turkish Army was soundly beaten there and retreated, following the Haifa—Dera'a connector of the main Hejaz Railway line, eastwards up the Yarmuk River which joins the Jordan River, a few miles south of the Sea of Galilee near Samakh (see FIG. 1). In so doing, they sabotaged two key 180-foot long bridges near Samakh (see note below). The retreat continued north from Dera'a, up the Hejaz Railway main line through Syria.
The brilliant destructive work done on the railway line approaching Damascus from the south by Lt. Colonel T.E. Lawrence, leading Emir Feisal's troops, likely precluded a regrouping of Turkish forces about Damascus, which fell to the British and Allies on October 1, 1918. The Turkish retreat continued beyond Damascus. On October 9, Allenby accepted an offer from Emir Feisal to operate northwards from Damascus with 1,500 cavalry and camel-men against Turkish forces between Hama and Aleppo (see note below). The retreating Turks, under the German General Liman von Sanders, blew up railway installations in the Hama area. On October 26, the British entered Aleppo and then advanced eight miles beyond it along the Hejaz Railway line towards Alexandretta. An Armistice was signed with Turkey on October 30, 1918, formally ending hostilities in this theatre.
An operating railway was urgently needed by the British to supply their rapidly advancing forces and to consolidate their increasing territorial gains. Starting October 5, 1918, shortly after their September 30 arrival in Palestine, the 1st Bridging Company, assisted by 560 men of the Egyptian Labour Corps, began the restoration of two of the key bridges on the Yarmuk River near Samakh (T.E. Lawrence had attacked these bridges behind Turkish lines in 1917, but failed).
FIG. 1. Sketch map showing a portion of the Hejaz Railway route running from Arabia to Constantinople. The branch line from Dera'a west via Samakh terminates at Haifa on the Mediterranean. Shown are the general areas of operations of the Canadian Railway Troops at Samakh (in present day Jordan) and around Hama (in present day Syria).

These Canadian railway troops were ravaged by poor local health conditions in the swampy Yarmuk River valley in the Samakh area. By mid-October sickness (malaria and influenza) had reduced the 1st Bridging Company's strength by 75%. Many were hospitalised; two died from malaria and two from pneumonia. However, by October 26 the bridges at Samakh were restored to service and British rail traffic then flowed through to Damascus.
At the end of October 1918 the Canadians relocated to the Hama area, north of Damascus, where they worked to restore the railway line until the first week of February 1919. One month later, on March 14, the 1st Bridging Company, C.R.T., sailed for France. Some of its members remained behind in hospitals to recuperate.
FIG. 2 shows a cover addressed to St. John, New Brunswick, postmarked
December 26, 1918 at Safed, Palestine. The sender was Sapper Emil Ramm, a thirty-year-old Norwegian-Canadian bachelor who enlisted at St. John in 1916. The cover was censored by Lt. J.F. Sanderson, C.R.T.
The postmark "Field Post Office/X/26 DE 18/SZ 61" is a stationary post office attributed to Safed, Palestine from November 1918 to February 1919 (Kennedy and Crabb, p. 154). The Censor Marking No. 1386 is of the CM7 Type (numbers ranging from 701 to 4501) in use by the E.E.F. between November 1915 and January 1919 (K & C, p. 145). There are no back markings.
FIG. 2. The only recorded cover (to date) from the Canadian Railway Troops in Palestine and Syria. Postmarked "Field Post Office/X/26 DE 18/SZ 61".
A four-page letter from Sapper Ramm was enclosed in the above cover. As far as is known, this is the first reported correspondence from the Canadian Railway Troops in Palestine and Syria. FIG. 3 illustrates the enclosed letter [reduced to 60%.Ed.] and a full transcript follows.
Sapper Ramm's letter [as written.Ed.] is transcribed below:
"Palestine 23.12.1918
Dear Miss.
I suppose you will rather be surprised to hear from a stranger. Got your address from a girlfriend of yours, and thought I my write you a few lines from this so called holy land. I wont tell you her name you have to find that out yourself. We are here a few hundred Canadians in a Coy. called the "first Canadian Bridging Company". At present we are close to the Sea of Galilee building bridges across the Yarmuk River which runs into the Sea of G.
After doing our bit in France for about two years, we [2] volunteered to come out here. Been cursed that day ever since. This country is no good for white men, to much sickness. Only been over here now three months and out of that nearly every man in the Coy. spent a month in Hospital. I was in Hospital in Alexandria for 34 days, pretty sick. Don't like that place at all, and never want to see it again.
The war is over now, and let's hope the day will soon come for us to leave this Country for Canada. I hear the Spanish "Flu" is very bad over there. Hope you will steer clear of it, as it is no fun to be sick. Well Miss Christmas is only a day of [3] now. Hope you will have a merry one and a happy New Year and lots of them. Sorry to say that we have to spend it in such a God forsaken Place. Never mind next year we will have at home. My address is now, if you want to answer, 743128. Sapper E. Ramm 1st Bridging Coy. Canadian Railway Troops E.E.F. c/of Army Post Office London.
It better to send mail there as we never know when we leave here. The alway have a record of all Troops [4] now news is getting scarce, so I have to close for this time. Hoping to hear from your side of the World soon. I remain yours sincerely Sapper E. Ramm One of the 115 Batt. Please write soon."
A partial copy of the 1916 Attestation Paper of Ramm follows in FIG. 4. Although this is the first recorded postal history from the C.R.T. in the Near East, there are possibly a few such other "sleepers" out there awaiting discovery. Good hunting!
REFERENCES
Kennedy and Crabb. *The Postal History of the British Army in WWI-Before and After: 1903-1929*. February 1977.
Swettenham, John A., Captain, Royal Canadian Engineers. Report No. 85, Historical Section (G.S.), Army Headquarters, Operations in Palestine, 1918-1919. Directorate of History, Canada, October 20, 1959.
Virk, D.S. *Indian Army Post Offices in the Second World War*. The Army Postal Service Association, New Delhi, 1982.
Confidential War Diary. 1st Bridging Company, Canadian Railway Troops, August 3, 1918-August 23, 1918. National Archives, Canada.
TRIPLICATE.
115th Battalion, C.E.F.
ATTESTATION PAPER.
CANADIAN OVER-SEAS EXPEDITIONARY FORCE.
No. 743128.
Folio.
QUESTIONS TO BE PUT BEFORE ATTESTATION.
1. What is your surname? .................................................. Ramm
2a. What are your Christian names? ........................................... Emil
2b. What is your present address? ............................................ 3rd Battalion, 1st Division, N.B.
3. In what Town, Township or Parish, and in which County were you born? ................................................................. Newbury
4. What is the name of your next-of-kin? ..................................... Mother: Annie
5. What is the relationship of your next-of-kin? ............................... Mother
6. What is the date of your birth? .............................................. 10th July 1888
7. Are you married? ........................................................................ No
8. Are you willing to be vaccinated or re-vaccinated and immunised? ........ Yes
9. Do you now belong to the Active Militia? ..................................... No
10. Have you ever served in any Military Force? ................................. No, since enlistment of Private Service
11. Do you understand the nature and terms of your engagement? ............... Yes
12. Are you willing to be attested to serve in the Canadian Overseas Expeditionary Force? ................................................................. Yes
DECLARATION TO BE MADE BY MAN ON ATTESTATION.
I, ____________________________, do solemnly declare that the above are answers made by me to the above questions and that they are true, and that I am willing to fulfil the engagements by me now made, and I hereby undertake and agree to serve in the Canadian Overseas Expeditionary Force, subject to being called to any arm of the service thereof, for the term of one year, or during the war now existing between Great Britain and Germany should that war last longer than one year, and for six months after the termination of that war provided His Majesty should so long require my services, or until legally discharged.
(Signature of Recruit)
Date: 8th April 1916. (Signature of Witness)
OATH TO BE TAKEN BY MAN ON ATTESTATION.
I, ____________________________, do make Oath, that I will be faithful and bear true Allegiance to His Majesty King George the Fifth, His Heirs and Successors, and that I will as in duty bound honestly and faithfully defend His Majesty, His Heirs and Successors, in Person, Crown and Dignity, against all enemies, and will observe and obey all orders of His Majesty, His Heirs and Successors, and of all the Generals and Officers set over me. So help me God.
(Signature of Recruit)
Date: 8th April 1916. (Signature of Witness)
FIG. 4. Part of 743128, Sapper Ramm's enlistment papers.
*******
THE CANADIAN EXPEDITIONARY FORCE:
SIBERIA, 1918-1919
--Ged Taylor
One of the least-known Canadian military undertakings was that in which over 4,000 troops were sent to Siberia during 1918-1919. Some reasons for their despatch originated with the Russian Revolution, the subsequent break-down of the Eastern Front, and the Allied need to support friendly troops still fighting in North Russia.
British and Canadian troops were committed to this Russian area of operations, landing at Vladivostok in August 1918 (British) and in October 1918 and January 1919 (Canadian).
With the Canadian advance party went an officer and three other ranks of the Canadian Postal Corps, equipped to operate a full postal service. Part of their equipment included special "Siberian Force" hammers. The Canadians complemented British forces as well.
During the Unit's seven months of operation, many difficulties were experienced in moving the mail in and out of the area. A good deal of mail transport was dependent upon Japanese mail steamers operating through Vladivostok to Japan, the U.S.A., and Canada.
Post card view of Vladivostok Harbour to Kitchener, Ontario dated March 12, 1919 from a Rifleman in the 259th Battalion, Canadian Rifles, C.E.F. (S).
Due to the difficulties of securing troopships in 1918, the advance party of 700 Canadian troops did not reach Vladivostok until October 26, 1918. The main body of 2,700 all ranks followed on, arriving in the port on January 15, 1919.
Lt. Ross (No. 5 C.P.C.) had, after initial problems, established a satisfactory postal service from Vladivostok. In addition to the Canadian units, the C.P.C. also serviced mails for the British Mission and British Forces along the Trans-Siberian Railway as far as Omsk. The Field Post Office closed May 27, 1919.
A letter and contents on Japanese pictorial newspaper written on March 12, 1919 from a soldier in the 20th Canadian Machine Gun Company to his sister.
It bears the Siberian Exp. Force hammer dated March 19, 1919.
The enclosed letter, from #3132701, W.P. Krug, 20th Machine Gun Company, C.E.F. Siberia, follows:
"Dear Elizabeth,
To-day I'll drop just another line as this afternoon being devoted to sports gives me breathing time for a little correspondence. I hardly think this is gentlemen's [sic] correspondence paper but one thing I've found it to be very poor quality.
On Monday, Ireland and I took a walk down town. We had an all day pass. I found the city very interesting as the stores were open and the business section seemed unusually busy. I was pleasantly surprised with the large departmental stores. These show considerably empty shelves and bears evidence to the difficulty of procuring goods and perhaps also to unsettled conditions in Russia.
Hong Kong must be a city of some size and importance. Textiles, groceries and [?] of all description comes here from Hong Kong, Shanghai, and Japan. But everything is so dear Ireland bought an ordinary bath towel and paid 15 ruples which is about $1.50. We were in Chinatown and went thro [sic] their market. Many things were on display and I made no purchases except a pair of sandals as its no use buying when space in your kitbag is nil and besides we never know whether we'll be going home or proceed up the line.
To-day I'm told is the anniversary of the Russian Revolution and a holiday all over Russia. By the way this morning I addressed a parcel containing two sheets of music to you at Chesly....
I suppose you know more of our movements than we do. We all hope to come home in April. We're not having to [sic] bad a time but will consider our happiest moments in Siberia when on a ship we see Vladivostok fade in the distance.
To-day the boys are playing baseball just outside the barracks. It has also started to snow and after this inning the game will be postponed. This is the first snowstorm I've seen in Siberia and the ground now is almost white.
All morning we had field tactics and I was taking ranges. Most of the time I lay on the dead grass almost asleep in the sun. "Ain't it funny what a difference just a few hours make".
But now I'll close, the paper is giving out and I've only one envelope per sheet.
With love,
Weel"
Censor marking devices numbered 001 to 035 were prepared and taken to Russia in December 1918. Numbers 021 to 035 were not issued, and numbers 008, 011, and 015 have not been seen.
The largest number of examples recorded for any of the devices is seven, of number 007, which was used by the Canadian Base Depot in Vladivostok (located in the East Barracks, some two miles outside of the city) for a period of sixty-nine days.
The "PASSED/BY/CENSOR/006" device was used by the No. 5 Canadian Postal Corps unit, which occupied premises close to the Egerscheldt Docks in Vladivostok.
The censoring officer's signature is that of Hon. Major H. McCausland, the Senior Protestant Chaplain.
Another interesting marking related to the Canadians in Siberia is that of the circular "BASE DEPOT/SIBERIA," which is often seen in coloured ink.
The following cover, addressed to the U.S.A., is dated March 26, 1919. The Base Depot Orderly Room handstamp in purple ink is dated one day earlier. It bears no censor markings.
Cover from #3037910, H.T. Symons, 85th Battery, Vladivostok.
His enclosed letter, dated March 23, 1919, was a lengthy eight-pages of which part is reproduced below:
"Dear Julia,
You can't imagine just how happy I was to hear from my wee aunt once more. It sure seemed an age since I heard from anyone, so I went near crazy when a mail arrived day before yesterday postmarked Detroit. It seems to me everyone are [sic] holding their letters back until they hear from me, which I suppose is very foolish to me for I suppose they are all held up between here and Canada for we sure have a poor mail service [mail arrived on March 22, 1919 on the S.S. Suva Maru.GT].
If I had been wise I would have had you send letters straight out here for the U.S. mail service is so much better.
It has been hard to write letters here after the first couple went, everything and every day is the same, just filling in time and the last couple of weeks we have been confined to our barracks on account
of [a] Bolshevik uprising down town. At least that is who we blame it all on when the troops leave this place it will simply be one H...for since rumours have been going around that the troops are leaving there have been hold ups and even worse.
Vladivostok has a name of being a bad city and it sure deserves it all. I never saw such a rotten place in my life. It even beats Creighton mine when it was in its worse state. However if all rumours are true we will be home very soon. Perhaps as soon as this letter for the very latest one is we are sailing next Thursday but that hardly seems probable [the first Canadians to depart for home left on the S.S. Monteagle on April 21, 1919.GT].
The last papers we had from home said we would all be out before the end of April which might just be paper talk. It will be just as well for there doesn't seem to be any chance to go up the line. And all we are doing here is keeping order to a certain extent amongst the Russians I understand the Yanks are staying a while longer. Poor fellows have had seven months of it already. I suppose you heard we are all getting the D.S.O. when we return to Canada (Didn't See Omsk)....
******
BOER WAR CHRISTMAS GREETINGS, 1897-1898
--Hal Kellett
FIG. 1.
The Princess of Wales Own Rifles (P.W.O.R.) was created on January
16, 1863 as the 14th Battalion Volunteer Militia Rifles of Canada. Shortly after the wedding of the Prince of Wales (later King Edward VII) to Princess Alexandra of Denmark, the Regiment became the P.W.O.R.
FIG. 1 illustrates a Christmas item dated 1897-1898, and is from Lt. Colonel H.R. Smith and Officers of the 14th Battalion, P.W.O.R. This regiment subsequently went to South Africa.
The booklet shown in FIG. 2 was sent by Lt. Colonel McLean and Officers of the 62nd St. John Fusiliers as a Christmas and New Year's greeting in 1898. The recipient was the Minister of Militia for Canada during the Boer War, Sir Frederick William Borden. Borden's son, Lt. Harold Borden, was killed in the war.[Both items reduced to 60%. Ed.]
******
UNUSUAL BOER WAR RATES: REGISTERED TO CANADA AND A SHORT-PAID PARCEL RATE
--Hal Kellett
Registered covers from the Boer War are not common. The cover shown below is registered to Montreal, Canada and is dated November 26, 1901 at Pretoria. There is a red triangular Pretoria "PASSED PRESS CENSOR" marking on the front. The back of the cover bears a London registered cancel dated December 22, 1901, and a Montreal receiver of 1902.
The following parcel wrapper was sent by Canadian soldier Private (Trooper) Green who remained in South Africa and joined the South African Constabulary Force (Canada's first peace-keeping mission).
The item was mailed from Pretoria, Transvaal on January 11, 1902 and was addressed to Halifax, Nova Scotia. The stamps are the Transvaal British Occupation "E.R.I." Provisionals (Scott #248). Eight stamps short-paid the 9d Empire Rate for a thirty-six ounce parcel (eighteen × ½d per two ounces) by 1d. It was taxed "10 centimes" (2¢) and charged 4¢ collect for double-the-deficiency at Halifax. A triangular "PASSED PRESS CENSOR" from Pretoria was stamped on the front. Here is a scarce South African Constabulary 4th class rate.
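The assessment above can be laid out step by step (assuming, as the stated 1d deficiency implies, that the eight stamps were 1d values):

```latex
\begin{align*}
\text{Rate due} &= \frac{36\text{ oz}}{2\text{ oz}} \times \tfrac{1}{2}\text{d} = 18 \times \tfrac{1}{2}\text{d} = 9\text{d}\\
\text{Postage affixed} &= 8 \times 1\text{d} = 8\text{d}\\
\text{Deficiency} &= 9\text{d} - 8\text{d} = 1\text{d} \approx 10\text{ centimes} \approx 2\text{\textcent}\\
\text{Due at Halifax} &= 2 \times 2\text{\textcent} = 4\text{\textcent} \quad \text{(double the deficiency)}
\end{align*}
```

The deficiency was expressed in centimes because UPU postage-due markings were stated in gold centimes, the Union's common accounting unit, before being converted to the destination country's currency for collection.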
******
POSSIBLE FIRST KOREA AIRMAIL COVER TO THE UNITED KINGDOM
--Mike Street
The following cover, obtained recently on eBay, was mailed to Swindon, Wiltshire, England by a Lance Corporal in the 191st Canadian Infantry
Workshops, Royal Canadian Electrical and Mechanical Engineers, serving in Korea. It bears a CDS CFPO 27 dated "15 IX 51".
The 45¢ in postage pays the correct triple airmail rate (15¢ per ounce), as if the letter had been mailed from Canada to the United Kingdom. As far as I know this is the first known airmail cover to the U.K. from a Canadian serving in Korea. Neither Ritch Toop nor, as far as I can remember, Steve Luciuk had such an item in their collections.
********
THE S.S. EROS
--John Burnett
With this article are illustrated two covers from my King George VI Canadian collection. Both covers ended up at the same location and figuring out just what happened was a real challenge.
The first cover FIG. 1 is addressed to Paris, France from Montreal. The cancel dates the mailing as May 29, 1940.
FIG. 2 was mailed from Aquadell, Saskatchewan on May 24, 1940 and is addressed to a Canadian Army trooper in care of the "Base Post Office" in Ottawa, Ontario for forwarding to an active duty military person.
Both covers ended up in Montreal for sea transport to England (mail to Paris was routed via London), and the Canadian trooper was probably in
England awaiting deployment.
The mail was loaded on the 5,888 ton ship S.S. Eros, which also carried raw copper, ferrochrome and small arms. On June 3, 1940 [Hoggarth and Gwynn note May 30, p. 250. Ed.] the Eros departed Montreal and was nearly across the Atlantic when, on June 7 at 0322, she was spotted by the German U-48 and fired upon from a range of 3,000 meters. Eros was seriously damaged and her crew of sixty-two abandoned ship. U-48 left after seeing the crew abandon ship.
FIG. 1. Montreal to Paris [reduced to 80%. Ed.]
Eros was taken in tow by HMS Berkley and assisted by HMS Bandit, an ocean-going tug that was nearby searching for another ship that had sent out a "mayday" call the day before. HMS Bandit towed the sinking Eros to Tory Island (off the northwest coast of Ireland) and beached her. Some of Eros's holds were flooded while others remained dry. Once she was beached, the cargo (including the mail) was off-loaded.
The cover to France was stored in a dry hold while the cover to the army trooper ended up wet (as evidenced by the condition of the envelope and the missing stamp). Both covers were marked "SALVED FROM THE SEA" (a British term meaning "salvaged") [Hoggarth/Robin Eros Type 2 cachet. Ed.], and then forwarded to London for continuation of their journey.
During the period all this was happening, France fell to the Germans and so FIG. 1 received another handstamp "SERVICE SUSPENDU" (meaning
mail service to France had been suspended due to the war). The envelope was returned to Canada and received by the Dead Letter Office in Ottawa on August 14, 1940, where it was opened, the address was found and noted in pencil in the upper left corner of the cover, and it was then officially sealed with postal sealing tape.
FIG. 2. Aquadell to Ottawa [at 80% Ed.]
The envelope to the Canadian trooper was forwarded, but to this day the contents have not been removed: they are stuck together from being submerged in the flooded hold. Eros is seen in FIG. 3.
You might think, "how does he know all this stuff about these covers?" The answer is "the Internet" and lots of time devoted to the research. This is something you can do just as easily, and I encourage you to try to write something about your collection for the Study Group's Newsletter.
[For more on the Eros and other Canadian-related military ship disasters, members may be interested in J.E. Kraemer's "The Battle of the Atlantic and Canadian Mail," The Canadian Philatelist, 268, Vol.46 (3), May-June 1995 and N. Hoggarth and R. Gwynn's Maritime Disaster Mail: A Study of Mail Salvaged From Maritime Disasters, as Casualties of War, Collision, Fires, Shipwrecks, and Stranding. Bristol: Stuart Rossiter Trust, 2003. Ed.]
A permanent engineering corps unit was formed in 1903 under the direction of Major-General the Earl of Dundonald, CB, CVO, General Officer Commanding, who oversaw the transformation of the Canadian Militia (1902-1904). This rather unusual registered cover was sent by a member of the Canadian Engineers while at the London Military Camp and bears a CDS dated "PM/AP 17/16". The rate was 5¢ for the registration fee plus 2¢ per ounce and 1¢ War Tax for a domestic letter. The sender presumably overpaid by 1¢. An Edmonton, Alberta CDS dated April 21 is on the reverse.
The London Military Camp CDS was proofed on June 26, 1915.
Bailey, W.J. and E.R. Toop. *Canadian Military Postal Markings, 1881-1995*. Vol.I. Waterford, MI: C.G. Firby Publications, 1996, p. 140.
A 1917 COVER FROM SASKATCHEWAN THAT MADE IT TO FRANCE, ENGLAND, AND BACK TO REGINA
--Robert Henderson
This envelope, now separated from its contents, was sent by Mrs. Fred Scott of Willowbrook, Sask. to her husband, who was thought to be serving in France with the 46th Battalion, C.E.F.; Fred was part of "D" Co., 19th Reserve Reinforcements. The cover passed through Winnipeg, Manitoba on April 25, 1917.
The 46th Battalion had been formed in Moose Jaw, Sask. on February 1, 1915, and was sent to France by August 16. A book entitled *The Suicide Battalion* by Jim McWilliams and R. James Steel (Hurtig Publishers, Edmonton, 1978, Ed.) provides a history of the unit.
Postmarks on the reverse indicate that it passed through FPO 182 on June 18, 1917 (4th Canadian Brigade), APO 2. CAN. SEC. on July 13, 1917 (Rouen, France), and the CANADIAN RECORD OFFICE/POSTAL DESPATCHED on July 25, 1917 (London, England).
A "Wounded" purple ink marking, over a red pencil mark indicating "France", is at the upper left on the front of the cover. A gummed label over a portion of the original address redirects the letter to Edinburgh War Hospital, Bangour, West Lothian.
The cover turned up some ninety-four years later in the hands of a Regina antique dealer, from whom it was recently acquired. [Several companies of the 46th fought at Vimy Ridge, and were especially engaged in the assault on "The Pimple" (a prominent knoll atop the northern end of the Ridge). Could Scott have been wounded at Vimy Ridge? Ed.]
******
GREAT WAR ORDERLY ROOM MARKINGS
--Dean Mario
The study of orderly room markings has been virtually ignored by military postal history collectors. Those from the Great War are especially difficult to find. However, markings such as this shield type from the 10th Canadian Reserve Battalion (dated February 21, 1918) are appealing, and a study of them would be challenging.
******
TOWER OF LONDON.
Scaffold Site.
Site of the Scaffold on which many famous persons were beheaded, including Queen Anne Boleyn, Queen Katherine Howard and Lady Jane Grey. In the background is the Chapel of St Peter-ad-Vincula, often called the chapel of the most famous of English Royal prisons.
Avian Repellents: Options, Modes of Action, and Economic Considerations
J. Russell Mason
*Utah State University, Logan*
Larry Clark
*Colorado State University*
Russell Mason, J. and Clark, Larry, "Avian Repellents: Options, Modes of Action, and Economic Considerations" (1995). *National Wildlife Research Center Repellents Conference 1995*. 26.
[http://digitalcommons.unl.edu/nwrcrepellants/26](http://digitalcommons.unl.edu/nwrcrepellants/26)
J. Russell Mason, U.S. Department of Agriculture, Animal and Plant Health Inspection Service, Denver Wildlife Research Center, c/o Predator Ecology and Behavior Project, Utah State University, Logan, UT 84322-5295
Larry Clark, U.S. Department of Agriculture, Animal and Plant Health Inspection Service, National Wildlife Research Center, Colorado State University, Engineering Research Center, Foothills Campus, NWRC Room B05, Fort Collins, CO 80523
ABSTRACT
The present manuscript considers visual, auditory, tactile, chemosensory, and physiologic repellents currently available for use in the United States. Discussion of tactile, chemosensory, and physiologic repellents is emphasized for three reasons. First, these products are preferred by users. Second, application of these substances is regulated by state and federal agencies. Third, only four active ingredients are legally available at the present time. This lack reflects difficulties in obtaining regulatory approval and limited market size.
KEY WORDS
bird, fear, irritation, repellents, sensory, sickness
INTRODUCTION
For birds, repellents can be visual (e.g., eyespot balloons [Shirota et al. 1983], flagging [Mason et al. 1993]), auditory (e.g., distress calls [Aubin 1990, Blokpoel 1976], propane exploders [Linz et al. 1993], shell crackers [Cummings et al. 1986, Mott et al. 1990]), tactile (e.g., clay-based seed coatings [Avery et al. 1989], polybutene products [Timm 1983]), chemosensory (e.g., methyl anthranilate [Mason et al. 1993]), or physiologic (e.g., Mesurol [Rogers 1980]). Under conditions of normal use, repellents act directly on pests but, importantly, they are not lethal. Hence, 4-aminopyridine and other lethal "frightening" agents (Eschen and Schafer 1986) are toxicants, not repellents. Of the 43 products registered as bird damage control chemicals in the United States, only seven (16.3%) are repellents. Within this small group of products, the active ingredient in four is polybutene. Capsaicin, denatonium saccharide, and naphthalene are the active ingredients in the remaining three products. Only polybutene has
demonstrated utility; the available evidence suggests that birds are indifferent to the other materials (Clark et al. 1990, Mason 1987).
**TYPES OF REPELLENTS**
**Visual Repellents**
These often are inexpensive (e.g., $0.80/acre for plastic flags; Mason et al. 1993, Mason and Clark 1994), and they tend to be effective, if only for short periods. Typical examples of visual repellents include balloons (Shirota et al. 1983, Mott 1985), kites (Fazlul Haque et al. 1985), plastic flagging, and mylar streamers (Bruggers et al. 1986, Dolbeer et al. 1986a, Mason et al. 1993, Mason and Clark 1994, Timm 1983). Functionally, visual repellents cause startle responses, as do aposematic colors (e.g., orange, red, silver; Reidinger and Mason 1983, Lipcius et al. 1980) and cues associated with predators (e.g., hawk silhouettes, eyespots, raptor models; Conover 1982, Inglis 1980, Inglis et al. 1983). However, startle responses eventually diminish (often within days or a few weeks) as a function of several variables, including weather conditions, bird numbers, and the availability of nearby unprotected foods (e.g., Feare et al. 1986).
**Auditory Repellents**
These include both sonic and ultrasonic devices. Among the former, propane cannons are commonly used for the control of bird depredation and nuisance problems (Linz et al. 1993). Provided that units are moved every few days, cannons can be effective when one is placed for every 10 acres of crop. Repellency is enhanced when shooting is implemented concurrently, or when other measures are taken to slow birds' habituation to noise (Slater 1980, Inglis 1984). Electronic triggers that detect the presence of birds and selectively fire cannons are now available (Adams Dominion, Inc., Crestwood, KY).
A variety of other sonic frightening devices, including electronic noise systems, synthetic bird calls, and pyrotechnics, are sometimes used in addition to exploders (Aubin 1990, Feare et al. 1986). These systems can be effective against loafing and roosting birds (e.g., Blokpoel 1976). However, they have little utility against feeding birds in agricultural settings and are not any more effective than propane cannons alone (Feare et al. 1986). Repellency is variable, and depends on the persistence and skill of the operator, the attractiveness of the crop, the number of birds present, and the availability of alternative food sources (e.g., Mott 1978; Mott and Timbrook 1986, Salmon and Conte 1981).
Ultrasonic devices are offered as deterrents to roosting and loafing birds (Krzysik 1987). These devices have no demonstrated utility (e.g., Theissen et al. 1957, Theissen and Shaw 1957, Martin and Martin 1984, Kerns 1985, Griffiths 1986, Woronecki 1988), probably because birds are physiologically incapable of detecting ultrasound (i.e., frequencies above 20,000 Hz; e.g., Summers-Smith 1963).
**Tactile Repellents**
Clay-based seed coatings that become tacky when wet are effective bird repellents under some conditions (Avery et al. 1989; Decker et al. 1990). For example, the estimated loss of clay-coated rice in a Texas field trial averaged 17%, compared with 36.5% in control plots (Decker et al. 1990). However, when bird numbers are high and/or when alternative foods are relatively unpalatable or sparse, clay-based coatings confer little protection (Avery, pers. commun.).
Polybutene products (e.g., tacky pastes and liquids) repel birds from ledges or other roosting structures (Timm 1983). These products often contain other ingredients, including mineral oil, lithium stearate soap, diphenylamine, zinc oxide, and castor oil (Timm 1983). While effective, polybutene-based repellents are thermally labile, and melting repellent can deface structures to which it is applied. Both clay coatings and polybutene are considered pesticides.
**Chemosensory and Physiologic Repellents**
These substances are effective either because they are painful or because they cause sickness. If the latter, then food avoidance learning is involved (Avery 1985, Reidinger and Mason 1983). If the former, then the repellent typically acts by stimulating pain receptors (i.e., trigeminal chemoreceptors) in the mouth, nose, and eyes (Green et al. 1990). Although many birds possess adequate or even superior olfactory and gustatory capabilities (e.g., Berkhoudt 1985, Clark and Mason 1989), smell and taste, per se, are rarely of consequence for bird damage control (Mason and Otis 1990).
At present, no effective chemosensory and physiologic repellent is legally available in the United States.
METHODS DEVELOPMENT
The remainder of this discussion is organized into four areas. The first three areas are agricultural repellent needs, nonagricultural repellent needs, and conservation applications. The final area is consideration of a simple economic decision-making model.
Agricultural Needs
Background
Reliable measures of economic loss caused by wildlife are unavailable. Nevertheless, national surveys of farmers by the U.S. Agricultural Statistics Service (A. P. Wywialowski, pers. commun.) can be used as a general index of where research may be needed. In the Eastern United States, 52.5% (n = 4,463) of farmers who raised field crops reported some losses. Of these, 86.5% attributed losses to wildlife (Figure 1). For those farmers who raised vegetables, fruits, or nuts, 41.8% (n = 877) reported some losses, with 62.5% of this damage attributed to wildlife. For those farmers who stored feed, seed, or grain on their properties, 23% (n = 2,634) reported some losses, 27% of which was attributed to wildlife.
FIGURE 1. The percentage loss attributed to various sources by farmers in the eastern United States who raise field crops (top), store seeds and grains (middle), or grow fruits, nuts, and vegetables (bottom). Data from Wywialowski and Beach (1991).
The economics of damage varies greatly among crops (Table 1). For example, a 1972 survey of sunflower fields in North Dakota and Minnesota showed that the mean loss to birds was only 13 kg/ha (Besser 1978). Because 174,500 ha were planted in sunflower during that year, we can estimate that the national loss was 2,270 metric tons (Putt 1978). At an average value of $230 per metric ton (Cobia 1978), bird damage cost growers $522,100. On the other hand, Avery et al. (1991) estimated that birds destroyed 11% of the national blueberry crop in 1989. Because total blueberry production during that year was 158 million pounds, and the average price was $0.50/pound, Avery estimated that bird damage may have cost growers as much as $8.5 million from a total market size of $77.3 million.
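The two estimates above are straightforward unit conversions; a minimal sketch reproducing them (the helper function is illustrative, not from the paper):

```python
def crop_loss(rate_kg_per_ha, area_ha, price_per_tonne):
    """Scale a per-hectare damage rate to a national loss in tonnes and dollars."""
    tonnes = rate_kg_per_ha * area_ha / 1000.0
    return tonnes, tonnes * price_per_tonne

# Sunflower, 1972 (Besser 1978; Putt 1978; Cobia 1978):
# 13 kg/ha over 174,500 ha at $230 per metric ton.
tonnes, dollars = crop_loss(13, 174_500, 230)
print(tonnes)   # 2268.5 t, rounded to 2,270 in the text
print(dollars)  # 521755.0; the text's $522,100 comes from the rounded 2,270 t

# Blueberry, 1989 (Avery et al. 1991): ~11% of a 158-million-pound
# crop at $0.50/pound.
print(0.11 * 158e6 * 0.50)  # roughly $8.7 million ("as much as $8.5 million")
```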
Bird damage has been documented in many agricultural contexts other than food crops. For example, feed consumption and contamination by birds are problems for feedlot and grain storage operators (Feare 1975, 1979, 1980, Twedt and Glahn 1982). Birds associated with livestock and poultry also represent a potential vector for economically important diseases such as transmissible gastroenteritis (Gough and Beyer 1982, Pilchard 1965), tuberculosis (Bickford et al. 1966), and avian influenza (Alexander et al. 1979). Histoplasmosis, a human respiratory disease, is associated with roosting blackbirds and starlings.
An issue that is increasingly significant is the hazard that modern agricultural chemicals present to birds. Pelleted agricultural chemicals and treated seeds are essential components of no-till conservation farming, a practice that will be used on 60% of the cropland in North America within 20 years (Crosson 1982). These farming practices generally benefit wildlife by providing cover and food (Castrale 1987), and they are environmentally safe relative to pesticide spray applications (Greig-Smith 1987). However, pelleted chemicals and treated seeds are dangerous to birds that forage in treated fields (Best and Gionfriddo 1991, Greig-Smith 1988, Schafer et al. 1983, U.S. Environmental Protection Agency 1989). In recognition of this hazard, the U.S. Environmental Protection Agency has threatened a generic ban on the use of granular products. Although the cost of such a ban is difficult to gauge, it is obviously large (Figure 2, Table 2). Particulate formulations are a major fraction of the pesticide market, and the principal source of income for some chemical companies (Mason and Turpin 1990).
**Existing Repellents**
There are no effective chemicals legally available for use in agricultural settings in North America.
**New Repellents; Near-Term Possibilities**
These substances may already be registered for agricultural use (e.g., insecticides or fungicides with bird repellent properties; Avery and Nol 1991, Avery et al. 1993, Avery and Decker 1991, Babu 1988, Crocker and Reid 1993). Alternatively, they might be approved for
Table 1. Estimates of Economic Losses Caused by Birds to Selected Agricultural Commodities for Which Damage and Dollar Values Are Reported
| FIELD CROPS | $ Value | % Loss | $ Loss | Reference |
|-------------|---------------|--------|----------|--------------------|
| **Field Corn** | | | | |
| Ohio | 1,726,800,000 | 0.7 | 3,880,000| Stickley et al. 1979|
| Ohio | 737,500,000 | 0.8 | 5,900,000| Dolbeer 1981 |
| Ohio | 968,571,428 | 0.7 | 6,780,000| Dolbeer 1981 |
| Ohio | 4,507,142 | 0.1 | 450,714 | Andrews and Henze 1985|
| Ohio | 5,000,000 | 1.0 | 5,000,000| Dolbeer 1980 |
| Michigan | 544,000,000 | 0.3 | 1,360,000| Dolbeer 1981 |
| Kentucky | 240,000,000 | 0.5 | 1,200,000| Stickley et al. 1979|
| 10 states<sup>a</sup> | 137,241,666 | 0.4 | 380,000 | Stickley et al. 1979|
| **Sweet Corn** | | | | |
| Ohio | 1,000,000 | 2.0 | 200,000 | Dolbeer, pers. commun.|
| **FRUIT** | | | | |
| **Blueberry** | | | | |
| National | 79,000,000 | 10.8 | 8,500,000| Avery et al. 1991 |
| National | 32,000,000 | 5.0 | 1,600,000| Mott and Stone 1973|
| Michigan | 8,333,333 | 6.0 | 500,000 | Stone et al. 1974 |
| **Cherries** | | | | |
| Britain | 44,726,774 | 11.5 | 5,163,264| Feare 1979 |
| Michigan | 25,000,000 | 17.4 | 4,250,000| Guarino et al. 1974|
| National | 138,888,889 | 17.4 | 24,166,667| Crase et al. 1976 |
| **Grapes** | | | | |
| National | 683,920,900 | 0.4 | 2,600,000| Lee, pers. commun. |
<sup>a</sup> The 10 states were Illinois, Indiana, Iowa, Michigan, Minnesota, Missouri, Nebraska, Ohio, South Dakota, and Wisconsin. Together, these states produced 79.4% of the corn crop in 1981.
FIGURE 2. Quantities of agricultural chemicals used by U.S. farmers by type of application. Data are derived from the 1987 Census of Agriculture.
Table 2. Estimated Net U.S. and Worldwide Agricultural Chemicals (AgChem) Sales ($ Millions) by the Top 10 Producers (T. Miller, American Cyanamid, Pers. Commun.), and Total Sales of All Products by These Companies (Values are drawn from 1988–90 Annual Reports of These Companies)
| Company | Estimated U.S. Agric. Sales<sup>a</sup> | Estimated World Agric. Sales<sup>b</sup> | Total World Sales<sup>c</sup> |
|------------------|----------------------------------------|----------------------------------------|-------------------------------|
| DuPont | 711 | 3,076.5 | 15,064 |
| Ciba-Geigy | 672 | 3,109.6 | 17,600 |
| Dow Elanco | 545 | 1,023.0 | 8,293 |
| Monsanto | 520 | 1,377.0 | 4,825 |
| American Cyanamid| 510 | 1,100.0 | 24,449 |
| ICI | 447 | 4,189.0 | 13,612 |
| Rhone Poulenc | 255 | 2,239.0 | 16,039 |
| BASF | 207 | 3,047.0 | 2,150 |
| Mobay USA | 185 | 358.3 | 3,287 |
| FMC | 183 | 521.0 | 22,297 |
| All others | 850 | 3,176.7 | — |
<sup>a</sup> Net sales estimates for the U.S. market were provided by T. Miller, American Cyanamid Corporation.
<sup>b</sup> Sales estimates for the world agricultural market were extracted from corporate earnings statements contained in annual reports.
<sup>c</sup> Total sales obtained from corporate earnings statements contained in annual reports.
use as human or animal feed additives. Compounds in this category include cinnamic acid derivatives (Avery and Decker 1992, Crocker and Perry 1990, Crocker and Reid 1993), cinnamyl alcohol and benzoate derivatives (Jakubas et al. 1992), anthranilate derivatives (Mason et al. 1989), acetophenone, benzoic acid and triazine derivatives (Clark and Shah 1991a, Clark et al. 1991, Mason et al. 1991a), and d-pulegone (Mason 1990). Also, a variety of inert materials exert some bird repellency, including bentonite clays (Daneke and Decker 1988, Avery et al. 1989, Decker et al. 1990) and activated charcoal (Mason and Clark 1994).
Whatever the repellent in question, one strategy to contain registration costs may be to target nonagricultural uses where ecological concerns and residue requirements (vis-a-vis food contamination) are relatively less. Such nonagricultural uses are described below.
**New Repellents; Long-Term Possibilities**
Research that explores fundamental concepts in avian foraging may yield practical results. Four lines of investigation appear especially promising. First, basic examination of structure-activity relationships between the chemistry of known irritants and avoidance behavior may lead to the reliable prediction of new sensory repellents (Mason et al. 1991a,b, Clark and Shah 1991a, Clark et al. 1991, Shah et al. 1991). Second, basic examination of physiologic repellents (i.e., those that act by causing malaise) could lead to the development of new products. For example, intestinal membrane disaccharidases may constrain the feeding behavior of some birds (e.g., those species that are unable to digest concentrated sucrose solutions; Martínez del Rio and Stevens 1989, Brugger 1992). Although sucrose may not be repellent in some feeding contexts (Clark and Mason 1993), it is possible that the simple addition of sucrose to livestock feed could economically reduce depredation and disease hazards that birds present at feedlots. Third, selective breeding and genetic engineering of plants could produce crop varieties that are bird tolerant. This approach has been investigated with maize (Dolbeer et al. 1982), sorghum (Bullard et al. 1981), rape (Inglis et al. 1992), sunflower (Dolbeer et al. 1986b), and pears (Greig-Smith et al. 1983). More broadly, phenylpropanoids, a class of common phenolic compounds in plants, are bird repellent and insecticidal (Buchsbaum et al. 1984, Crocker and Perry 1990, Jakubas et al. 1992). Because production of phenylpropanoids in plants is focused in specific plant tissues (i.e., husks, pericarp, aleurone; Collins 1986, McCallum and Walker 1990), it may be possible to maximize the repellency of endogenous chemical defenses against birds (e.g., by concentrating chemicals in achene surface tissues) while minimizing the impact of the defense on the nutritive value or palatability of the grain once these surface tissues are removed (Jakubas et al. 1992).
Finally, many plant chemical defenses against insect predators are well-described, and some of these materials are repellent to birds as well (Crocker and Perry 1990, Guilford et al. 1987). For example, cucurbitacins are triterpenoid glycosides that occur in plants belonging to the Cucurbitaceae and Cruciferae families (Robinson 1983). These substances deter insect feeding (Metcalf 1985) and repel birds (Mason and Turpin 1990).
Nonagricultural Needs
Background
Nonmigratory waterfowl are a nuisance in urban and suburban locations (Cummings et al. 1991). Grazing geese damage turf (Laycock 1982), and their feces adversely affect public health (Conover and Chasko 1985) and contribute to eutrophication of ponds and streams (Conover and Chasko 1985, Mott and Timbrook 1986). The overall economic impact of problems caused by waterfowl in these settings has not been quantified, but the cost of capturing geese for relocation can exceed $12/bird (Thompson 1991). One survey of golf course superintendents found that they would be willing to pay $60/ha for effective Canada goose control (Cummings et al. 1991). There are about 14,500 golf courses in the continental United States (U.S. Golf Association, pers. commun.).
Other species cause nuisance and public health problems by carrying garbage from dumps (Dolbeer et al. 1988b), roosting in urban and suburban areas (Chick et al. 1980, Dolbeer et al. 1988b,c, Tosh et al. 1970), and causing structural damage (Stemmerman 1988). In Missouri, the annual cost of damage by woodpeckers to electrical transmission poles exceeds $350,000 (Stemmerman 1988). If the average cost of damage is merely $250,000 per state, then the national annual cost exceeds $12.5 million.
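The extrapolation in this paragraph is a one-line multiplication; a sketch, using the paper's assumed $250,000-per-state average:

```python
# Back-of-envelope national estimate for woodpecker damage to transmission
# poles, using the paper's illustrative average of $250,000 per state.
avg_cost_per_state = 250_000
n_states = 50
national_cost = avg_cost_per_state * n_states
print(f"${national_cost:,}")  # $12,500,000 -- the "$12.5 million" in the text
```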
**Existing Repellents**
Naphthalene and polybutene are registered to repel roosting birds (Timm 1983). However, naphthalene has no demonstrated utility as an avian repellent (e.g., Clark et al. 1990, Dolbeer et al. 1988a). In field tests, applications of naphthalene 32.5 times higher than the registered rate have no repellent effect (Dolbeer et al. 1988b). Undoubtedly, polybutene has bird repellent activity under some circumstances, as the number of products containing this substance attests (100% of commercial roost repellents). Again, however, experimental data in support of this claim are sparse.
**New Repellents; Near-Term Possibilities**
Some of the materials that we described for agricultural purposes could serve as useful repellents in nonagricultural contexts. These chemicals include food and flavor additives like anthranilate derivatives, and registered agricultural chemicals like ziram. Registrations for the use of methyl anthranilate at land fills and in sterile ponds are expected in 1994 (PMC Specialties Group, Inc., pers. commun.). In addition, materials such as methoxyacetophenones, 4-ketobenztriazine, veratryl amine, and N-acetyl veratryl amine (Mason et al. 1991b, Clark et al. 1991) may prove useful. Several of these substances are already used as synthetic intermediates for food additives, pharmaceuticals, and agricultural chemical coatings.
**New Repellents; Long-Term Possibilities**
Long-term possibilities that we described for agricultural needs also are applicable here.
Conservation Needs
Background
Industrial byproducts and mine effluvia are frequently stored in open outdoor impoundments that pose serious risks to wildlife (Allen 1990, Kay 1990). Waterfowl, shore birds, and other species are attracted to the freestanding water and risk exposure to both acute and chronic toxicants (Ohlendorf et al. 1989, Williams et al. 1989).
The costs of protecting birds from mine and industrial effluvia are readily quantified. U.S. sales from the gold and silver industry exceeded $3.3 billion in 1989. Because cyanide is used to extract these metals from ore, leachate impoundments are highly toxic to wildlife. Eliminating cyanide from ponds by quenching is expensive, costing between $240,000 and $400,000/year for a mid-sized operation. Excluding birds from ponds until cyanide reclamation or quenching can be achieved is also costly, running between $9,000 and $13,000/acre (Schroeder 1990). Echo Bay Minerals Company spent $7.2 million to neutralize cyanide and exclude birds from a 363-acre pond at a mine site. Despite substantial reductions in avian mortality, Echo Bay still paid $500,000 in fines to the U.S. Fish and Wildlife Service.
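The per-acre exclusion figures cited above can be scaled to Echo Bay's 363-acre pond; a rough sketch (illustrative arithmetic only, not a calculation from the paper):

```python
# Rough exclusion-cost range for a 363-acre pond at the per-acre figures
# cited from Schroeder (1990): $9,000-$13,000/acre.
acres = 363
low_per_acre, high_per_acre = 9_000, 13_000
print(acres * low_per_acre, acres * high_per_acre)  # 3267000 4719000
# i.e., several million dollars for exclusion alone, which is of the same
# order as Echo Bay's $7.2 million for neutralization plus exclusion.
```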
Airports also pose risks to wildlife (Blokpoel 1976), and frequent collisions between birds and aircraft represent a hazard to human health and safety (Dolbeer et al. 1993). In 1989, bird strikes caused $80 million damage to U.S. military aircraft and $100 million damage to civilian aircraft (Dolbeer, pers. commun.). In many instances, birds are attracted to airports after rains because of the free-standing water which accumulates on runways. As in the case of mining operations, traditional hazing operations are ineffective because birds simply move from one location to another, and quickly become accustomed to the harassment.
**Existing Repellents**
No repellent chemicals are registered in the United States for any conservation use.
**New Repellents; Near-Term Possibilities**
A variety of substances may have utility as bird repellent additives to standing water. These include sensory repellents like methyl anthranilate, 4-ketobenztriazine, and anthranilic acid. The major obstacle blocking the practical application of these compounds is the development of delivery systems that (1) preserve the chemical integrity of repellents in the hostile environments that wastewater presents (Clark and Shah 1991b, 1993), and (2) assure that the chemical is
concentrated in ways that maximize the likelihood of contact with target birds (e.g., on the surface of ponds).
**New Repellents; Long-Term Possibilities**
The development of chemical repellents for use in small, shallow pools of water is a fairly simple matter. However, the development of substances that can be added to large ponds is physically and ecologically more complex. Further, toxic impoundments negatively affect members of all vertebrate classes, not just birds. The identification of broadly repellent materials is likely to be a long-term process, as all the available evidence suggests that there are dramatic differences among vertebrate classes in their responsiveness to chemical irritants (Szolcsanyi et al. 1986, Mason and Otis 1990).
**CONCLUSIONS**
The path from discovery of a candidate repellent to product availability can be thought of as a filtering process (Figure 3). Each step along the process constrains development. Accordingly, the smaller the initial number of candidate repellents, the less the likelihood of successfully developing a new product. Because serendipity has too often been responsible for repellent discovery, and registration, manufacturing, and marketing constraints have been ignored, few repellents are presently available. Nevertheless, there is a range of substances that could become available if interested developers can be found. These substances include existing insecticides and fungicides, synthetic intermediates for these products, human and animal feed flavorings, and inert substances such as bentonite clays and activated charcoal. Conceivably, expedited registration of biological pesticides and other relatively innocuous substances by environmental regulatory agencies (P. Savarie, Denver Wildlife Research Center, pers. commun.) will encourage industry and bring new bird repellents to consumers. At present, however, few tools are available, and the likelihood that more tools will become available in the next few years appears remote.
**FIGURE 3.** A heuristic model for factors affecting the discovery and development of a repellent strategy. Abbreviations: FIFRA = Federal Insecticide, Fungicide, and Rodenticide Act; ADC = Animal Damage Control.
Two-dimensional wetting on an elastic substrate
I.F. Lyuksyutov
Physics Institute, Academy of Sciences of the Ukrainian SSR
(Submitted 7 May 1987)
Zh. Eksp. Teor. Fiz. 94, 195–202 (August 1988)
The influence of substrate elasticity on the phenomenon of wetting in two-dimensional systems is considered. For the transition of two steps from a bound state to a free state it is shown that the interaction of the steps via elastic deformations of the substrate leads to a logarithmic growth of the activation barrier for the transition as $T \to T_c$. It is found that in a two-dimensional lattice of steps the corresponding phase transition should be first-order. For the problem of the wetting of a step by a two-dimensional phase it is found that the transition should occur discontinuously from incomplete to complete wetting, with a large activation barrier.
1. INTRODUCTION
The problem of wetting has attracted considerable attention in recent years. A review of the numerous papers on this theme can be found in Refs. 1–4. As a rule, the problem of wetting is understood as that of finding how the thickness of an adsorbed film depends on various parameters, e.g., the temperature and chemical potential. The wetting phase transition is understood as the transition from a thin (two-dimensional) to a thick (three-dimensional) adsorbed film. The work done in recent years has demonstrated the decisive influence of long-range interactions, e.g., van der Waals interactions, on the wetting phenomenon (Refs. 4–6).
The problem of wetting can also be formulated in two dimensions. In this case we are concerned with the change in the width of a strip of some two-dimensional phase growing in the neighborhood of a line defect on a surface. In experiment, this phenomenon has been observed repeatedly on electron micrographs (Refs. 7–10). In all the cases investigated the role of the line defects has been played by steps on the surface of a single crystal, while the role of the two-dimensional phase has been played by the adsorbate or the reconstructed surface.
A similar problem is the question of the transition of two or more interacting linear entities from a bound state to a free state (Refs. 11–14). The decay of steps from heights of two or three lattice constants to heights equal to one lattice constant has been observed in the experiments of Refs. 15 and 16.
It is natural to expect that in the two-dimensional case, as in the three-dimensional case, interactions that fall off slowly will determine the behavior of the system in wetting. An example of such an interaction in two-dimensional systems is the elastic interaction via the substrate. Between the steps there is a repulsive interaction that falls off as the inverse square of the distance between them (Ref. 17). The energy of the elastic deformations that arise when a strip of the two-dimensional phase appears increases logarithmically with the width of the strip (Ref. 18). In this paper we shall consider the effect of such elastic deformations on the phenomenon of two-dimensional wetting.
2. TRANSITION OF TWO STEPS FROM A BOUND TO A FREE STATE
We shall describe this transition by a model of two interacting filaments, arranged, in the ground state, along the x axis. The Hamiltonian of the model has the form
$$H = \int dx \left\{ J_1 \left[ (\nabla l_1)^2 + (\nabla l_2)^2 \right] + U(l_1 - l_2) \right\}. \quad (1)$$
Here $l_1$ and $l_2$ are the displacements of the filaments in the direction perpendicular to the filaments. We shall not consider states in which the steps pass through each other, i.e., when “overhangs” are formed on the surface. For definiteness, we shall choose $l_1 > l_2$. The potential of the interaction between two steps at a distance $l$ is conveniently chosen in the form
$$U(l) = \begin{cases} \infty, & l < 0 \\ -U_0, & 0 < l < a, \\ \alpha/l^2, & l > a \end{cases} \quad (2)$$
where $a$ is a distance of the order of several interatomic spacings. The potential (2) describes attraction at short distances ($U_0 > 0$) and repulsion on account of elastic deformations (Ref. 17) at large distances.
The step is displaced as a result of the formation of kinks (a break in the step) with energy $E_k$. Considering the mean square displacements of a filament and a step it is easy to relate $J_1$ to $E_k$:
$$J_1 = T \exp(E_k/T)/b, \quad (3)$$
where $b$ is the lattice constant. Replacing $l_1$ and $l_2$:
$$l_1 = \frac{1}{2}(l + l_0), \quad l_2 = \frac{1}{2}(l_0 - l), \quad (4)$$
we obtain the Hamiltonian describing the relative displacements of the filaments ($J = \frac{1}{2}J_1$):
$$H = \int dx \left\{ \frac{1}{2} J \left( \frac{dl}{dx} \right)^2 + U(l) \right\}. \quad (5)$$
One-dimensional systems with a Hamiltonian of the type (5) are conveniently studied by going over from the one-dimensional statistical problem to the one-dimensional quantum problem (Refs. 19–21). Here the free energy of the system is $F = TE_0$, where $E_0$ is the energy of the ground state of the Schrödinger equation
$$-\frac{T^2}{2J} \frac{d^2\Psi}{dl^2} + U(l)\Psi = E\Psi. \quad (6)$$
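As a concrete illustration (an editorial sketch, not part of the original paper), Eq. (6) with the model potential (2) can be diagonalized by finite differences. The parameter values below are arbitrary illustrative choices in units with $T = J = a = 1$; the well depth $U_0$ is taken large enough that a bound state ($E_0 < 0$) exists.

```python
import math

# Ground state of Eq. (6), -(1/2) psi'' + U(l) psi = E psi, for the potential (2).
# Illustrative units T = J = a = 1; U0 and alpha are arbitrary example values.
U0, alpha = 8.0, 1.0
L_max, N = 20.0, 2000
h = L_max / (N + 1)

def U(l):
    # Potential (2): hard wall at l = 0 (Dirichlet grid), well for l < a, elastic tail beyond
    return -U0 if l < 1.0 else alpha / l**2

# Tridiagonal finite-difference matrix of -(1/2) d^2/dl^2 + U(l)
d = [1.0 / h**2 + U((i + 1) * h) for i in range(N)]
e = -0.5 / h**2                      # constant off-diagonal element

def count_below(x):
    """Number of eigenvalues below x (Sturm sequence / LDL^T sign count)."""
    count, q = 0, 1.0
    for i in range(N):
        q = d[i] - x - (e * e / q if i > 0 else 0.0)
        if q == 0.0:
            q = 1e-300               # guard against exact cancellation
        if q < 0.0:
            count += 1
    return count

# Bisect for the lowest eigenvalue E0 in (-U0, 0)
lo, hi = -U0, 0.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if count_below(mid) >= 1:
        hi = mid
    else:
        lo = mid
E0 = 0.5 * (lo + hi)
print(E0)   # negative: the two steps are bound for this well depth
```

For a shallower well (smaller $U_0$) the same script finds no negative eigenvalue, which is the discrete analogue of crossing the transition point discussed below.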
In the wetting problem it is customary to characterize the behavior of the system by the mean square fluctuation $\xi_\perp^2$ and correlation length $\xi_\parallel$. These are expressed in terms of the wavefunctions $\Psi_n$ as follows:
$$\xi_\perp^2 = \int dl\, \Psi_0^2(l)(l - \bar{l})^2, \quad (7)$$
where \( \bar{l} = \int dl\, l\, \Psi_0^2(l) \);
\[
\langle [l(x) - \bar{l}][l(0) - \bar{l}] \rangle = \sum_n \exp[-(E_n - E_0)|x|] \left( \int dl\, \Psi_0 \Psi_n l \right)^2 \\
\propto \exp[-(E_1 - E_0)|x|] = \exp(-|x|/\xi_\parallel). \quad (8)
\]
In the case of the potential (2) Eq. (6) can be solved exactly. For \( E < 0 \) the substitution \( \Psi = l^{1/2} f(\beta l) \) brings Eq. (6) for \( l > a \) to the form
\[
f'' + \frac{1}{\beta l} f' - \left( \frac{\nu^2}{\beta^2 l^2} + 1 \right) f = 0, \quad (9)
\]
where
\[
\beta^2 = 2J|E|/T^2, \quad \nu = (2J\alpha/T^2 + 1/4)^{1/2}.
\]
The solution of (9) that decreases as \( l \to \infty \) is expressed in terms of the modified Bessel function \( K_\nu \) (Ref. 22). Thus, for \( l > a \) the wavefunction has the form
\[
\Psi = Bl^{1/2} K_\nu(\beta l). \quad (10)
\]
The asymptotic forms of \( \Psi \) are
\[
\Psi \approx B(\pi/2\beta)^{1/2} e^{-\beta l}, \quad \beta l \to \infty, \\
\Psi \approx \frac{1}{2} B \Gamma(\nu)(2/\beta)^\nu l^{1/2-\nu}, \quad \beta l \to 0. \quad (11)
\]
In the region \( 0 < l < a \) the solution of (6) has the form
\[
\Psi = A \sin kl, \quad k = [2J(U_0 - |E|)]^{1/2}/T. \quad (12)
\]
From the joining condition we obtain an equation for the dependence \( E(T) \):
\[
ka \cotg ka = \frac{1}{2} - \nu - \beta a K_{\nu-1}(\beta a)/K_\nu(\beta a). \quad (13)
\]
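As a quick numerical check of the joining condition (13) (an editorial sketch, not part of the original paper), the Bessel-function term can be evaluated from the standard integral representation $K_\nu(z) = \int_0^\infty e^{-z\cosh t}\cosh(\nu t)\,dt$; it vanishes as $\beta a \to 0$, so the right-hand side of (13) tends to $1/2 - \nu$, the limit used below to locate the transition point. The value of $\nu$ is an arbitrary illustrative choice.

```python
import math

def bessel_k(nu, z, tmax=30.0, n=6000):
    """Modified Bessel function K_nu(z) via its integral representation
    K_nu(z) = int_0^inf exp(-z cosh t) cosh(nu t) dt (trapezoid rule)."""
    h = tmax / n
    s = 0.5 * (math.exp(-z) + math.exp(-z * math.cosh(tmax)) * math.cosh(nu * tmax))
    for i in range(1, n):
        t = i * h
        s += math.exp(-z * math.cosh(t)) * math.cosh(nu * t)
    return s * h

def rhs13(nu, beta_a):
    # Right-hand side of the joining condition (13)
    return 0.5 - nu - beta_a * bessel_k(nu - 1, beta_a) / bessel_k(nu, beta_a)

nu = 3.0                                   # illustrative value
for beta_a in (0.1, 0.01, 0.001):
    print(beta_a, rhs13(nu, beta_a) - (0.5 - nu))
# the difference tends to zero as beta*a -> 0
```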
The transition point is determined from the condition \( E_0 = 0 \). In this case the solution of Eq. (6) can be expressed in terms of elementary functions. The solution that decreases as \( l \to \infty \) has the form
\[
\Psi = Bl^{1/2-\nu},
\]
and the equation for the transition temperature is
\[
ka \cotg ka = \frac{1}{2} - \nu
\]
or
\[
z \cotg z = \frac{1}{2} - (z^2\delta + 1/4)^{1/2}, \quad z = (2JU_0)^{1/2}a/T; \quad \delta = \alpha/a^2 U_0. \quad (15)
\]
The solution \( z_c \) of Eq. (15) as a function of \( \delta \) increases monotonically from \( \pi/2 \) to \( \pi \):
\[
z_c = \begin{cases}
\frac{1}{2}\pi(1 + \delta), & \delta \ll 1 \\
\pi(1 - 1/\pi\delta^{1/2}), & \delta \gg 1
\end{cases}.
\]
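These limiting forms of $z_c$ are easy to verify directly by solving Eq. (15) on the interval $(\pi/2, \pi)$ with bisection; the sketch below is an editorial illustration, not part of the original paper.

```python
import math

def f(z, delta):
    # Eq. (15) rearranged to f(z) = z ctg z - 1/2 + (z^2 delta + 1/4)^(1/2)
    return z / math.tan(z) - 0.5 + math.sqrt(z * z * delta + 0.25)

def z_c(delta):
    """Root of Eq. (15) on (pi/2, pi), found by bisection; f > 0 at the left end,
    f -> -inf at the right end, so the sign change brackets the root."""
    lo, hi = math.pi / 2 + 1e-12, math.pi - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid, delta) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Compare with the two asymptotic forms quoted in the text:
print(z_c(0.01), math.pi / 2 * 1.01)                      # delta << 1
print(z_c(1e4), math.pi * (1 - 1 / (math.pi * 1e2)))      # delta >> 1
```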
Taking into account the dependence \( J_1(T) \) given by (3), we obtain an equation for the transition temperature
\[
T_c = E_k/\ln(\pi^2 c^2 b T_c/U_0 a^2),
\]
where \( \frac{1}{2} < c < 1 \) is determined by the quantity \( \delta \). The approximate solution of (17) for \( E_k/T_c \gg 1 \) has the form
\[
T_c \approx \frac{E_k}{\ln(E_k \pi^2 c^2 b/U_0 a^2)}. \quad (18)
\]
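For concreteness, Eq. (17) can be solved by simple fixed-point iteration and compared with this closed-form estimate; the parameter values below are purely illustrative (units with $U_0 = a = 1$), and the closed form is accurate only to logarithmic order, so it underestimates $T_c$ noticeably even for fairly large $E_k$.

```python
import math

# Illustrative parameters (not from the paper): units with U_0 = a = 1; b, c of order unity.
E_k, U0, a, b, c = 200.0, 1.0, 1.0, 1.0, 0.8

def rhs17(T):
    # Right-hand side of Eq. (17): E_k / ln(pi^2 c^2 b T / U_0 a^2)
    return E_k / math.log(math.pi**2 * c**2 * b * T / (U0 * a**2))

T = E_k                       # initial guess
for _ in range(200):
    T = rhs17(T)              # contraction mapping: converges for these parameters

T_closed = E_k / math.log(E_k * math.pi**2 * c**2 * b / (U0 * a**2))  # closed-form estimate
print(T, T_closed)
```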
Using the wavefunctions (10), it is easy to determine the dependence \( \xi_\perp(T - T_c) \) near the transition point:
\[
\xi_\perp^2 \propto 1/|E| \propto 1/|T - T_c|. \quad (19)
\]
For \( T > T_c \) the bound state is separated from the free state, because of the elastic repulsion, by a potential barrier. We shall calculate how the height \( E_a \) of this barrier depends on \( T - T_c \).
The metastable states of the pair of steps in the quantum-mechanical analogy correspond to quasistationary levels with \( E > 0 \), and the probability of formation of a segment with free steps is proportional to the probability of a tunneling transition from the quasistationary to the free state. Therefore, knowing the dependence of the width of a quasistationary level on \( E \), we can determine \( E_a(T - T_c) \). Performing transformations analogous to those in the case \( E < 0 \), we represent the solution of (6) in the region \( l > a \) in the form
\[
\Psi = l^{1/2} \left[ -\exp\left(-i\frac{\pi}{2}\left(\nu + \frac{1}{2}\right)\right) H_\nu^{(2)}(\beta l) \right. \\
\left. + S \exp\left(i\frac{\pi}{2}\left(\nu + \frac{1}{2}\right)\right) H_\nu^{(1)}(\beta l) \right]. \quad (20)
\]
Here \( H_\nu^{(1)} \) and \( H_\nu^{(2)} \) are Hankel functions\(^{22} \) and \( S \) is the scattering matrix. The asymptotic form of the solution (20) as \( l \to \infty \) is
\[
\Psi \approx (2/\pi\beta)^{1/2}(-e^{-i\beta l} + Se^{i\beta l}). \quad (21)
\]
From the joining condition we find \( S \):
\[
S = (Z/Z^*) \exp(-i\pi(\nu + 1/2)), \quad (22)
\]
where
\[
Z = \left[ 1 - \frac{\tg ka}{ka}\left(\frac{1}{2} - \nu\right) - \frac{\beta}{k} \tg ka \frac{Y_{\nu-1}(\beta a)}{Y_\nu(\beta a)} \right] \\
+ i \frac{J_\nu(\beta a)}{Y_\nu(\beta a)} \left[ 1 - \frac{\tg ka}{ka}\left(\nu + \frac{1}{2}\right) + \frac{\beta}{k} \tg ka \frac{J_{\nu+1}(\beta a)}{J_\nu(\beta a)} \right],
\]
and the symbol * denotes complex conjugation. Here \( J_\nu \) and \( Y_\nu \) are Bessel functions of the first and second kind.\(^{22} \) The poles of the scattering matrix (22) determine the energy \( E_0 \) and width \( \gamma \) of the quasistationary states. We shall be interested in the neighborhood of the transition point, i.e., \( E_0 \to 0 \), which corresponds to \( \beta a \ll 1 \). The Bessel functions \( J_\nu \) and \( Y_\nu \) have a simple form near zero\(^{22} \):
\[
J_\nu(z) \approx \left( \frac{z}{2} \right)^\nu / \Gamma(\nu + 1), \quad Y_\nu(z) \approx -\frac{1}{\pi} \Gamma(\nu) \left( \frac{z}{2} \right)^{-\nu}. \quad (23)
\]
The metastability effects will be manifest in the case of a large potential barrier. We shall consider the limiting case \( \alpha/a^2 U_0 \gg 1 \), which corresponds to \( \nu \gg 1 \) for \( T \sim T_c \). In this case, in the scattering matrix (22) we can neglect the terms containing \( Y_{\nu-1}(\beta a)/Y_\nu(\beta a) \) and \( J_{\nu+1}(\beta a)/J_\nu(\beta a) \). The imaginary part of the denominator of (22) decreases like \( E^\nu \) while the real part decreases like \( E \) as \( E \to 0 \). In these conditions the concept of a quasistationary level is fully applicable. It is not difficult to obtain expressions for the dependences \( E_0(T - T_c) \) and \( \gamma(T - T_c) \):
\[
\frac{E_0}{U_0} \approx \frac{T - T_c}{T_c} \left( 1 + \frac{E_k}{T_c} \right), \quad \gamma \approx \left( \frac{E_0}{U_0} \frac{\pi^2 c^2}{16} \right)^\nu \propto \left( \frac{T - T_c}{T_c} \right)^\nu. \quad (24)
\]
Thus, the probability of a transition from a metastable bound state to the free state tends to zero by a power law as $T \to T_c$, corresponding to a logarithmic growth of the activation energy of the process. The exponent in the dependence $\gamma(T - T_c)$ is easily obtained in the quasiclassical approximation:
$$\gamma \propto \exp \left\{ -\frac{2}{T} \int_a^{(\alpha/E)^{1/2}} \left[ 2J \left( \frac{\alpha}{x^2} - E \right) \right]^{1/2} dx \right\}$$
$$= \exp \left\{ -\frac{2(2J\alpha)^{1/2}}{T} \ln \left[ \left( \frac{\alpha}{E} \right)^{1/2} \frac{1}{a} \right] \right\} \propto E^\nu \propto \left( \frac{T - T_c}{T_c} \right)^\nu.$$
(25)
The condition for applicability of the quasiclassical approximation is the requirement $\nu \gg 1$. We note that in the general case of an interaction potential of the form $\alpha/x^n$ between the steps the quasiclassical approximation is applicable only for $n = 1, 2$ (Ref. 23). For $n = 1$ the tunneling probability is proportional to $\exp \left[ -\text{const}/(T - T_c)^{1/2} \right]$. For $n > 2$ the quasiclassical formula gives a finite tunneling probability as $E \to 0$ ($T \to T_c$), in contradiction to the exact solution.\textsuperscript{23}
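The logarithmic form of the action integral in (25) is easy to verify numerically. The sketch below (an editorial illustration in arbitrary units, not part of the original paper) evaluates the integral on a logarithmic grid, which resolves the $1/x$ region near $x = a$, and compares it with the asymptote $(2J\alpha)^{1/2}\ln[(\alpha/E)^{1/2}/a]$.

```python
import math

J, alpha, a = 1.0, 1.0, 0.01   # arbitrary illustrative units

def action(E, n=20000):
    """Midpoint evaluation of int_a^{(alpha/E)^{1/2}} [2J(alpha/x^2 - E)]^{1/2} dx,
    using the substitution x = a e^u (so dx = x du)."""
    U = math.log(math.sqrt(alpha / E) / a)     # x = a e^U is the classical turning point
    h = U / n
    s = 0.0
    for i in range(n):
        x = a * math.exp((i + 0.5) * h)
        s += math.sqrt(2 * J * max(alpha / x**2 - E, 0.0)) * x
    return s * h

for E in (1e-4, 1e-6, 1e-8):
    log_term = math.sqrt(2 * J * alpha) * math.log(math.sqrt(alpha / E) / a)
    print(E, action(E) / log_term)   # ratio approaches 1 from below as E -> 0
```

The correction to the logarithm is an $E$-independent constant, so the ratio creeps toward 1 only logarithmically, which is consistent with keeping just the leading term in (25).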
As $T \to T_c$ the size $L_c$ of a critical nucleus of a state with steps that have dissociated, as follows from (25), has the behavior
L_c \approx (\alpha/E)^{1/2} \propto (T_c/|T - T_c|)^{1/2}.
(26)
The kinetics of the process by which the steps pass from a bound to a free state (and back) is determined by several factors. The results obtained above make it possible to distinguish two factors that lead to a growth of the relaxation time as $T \to T_c$. These are, first, the activation barrier (24), (25), and, second, the growth of the size $L_c$ of a critical nucleus. The second factor is associated with the necessity of diffusion of surface atoms through distances at least of order $L_c$ in the rearrangement of the steps. The probabilities of both processes have a power-law dependence on $T - T_c$ as $T \to T_c$. Therefore, we may expect that the total probability of formation of a nucleus will vary as a power of $T - T_c$ and the corresponding relaxation time will satisfy
$$\tau \propto |(T - T_c)/T_c|^{-\nu_1},$$
where $\nu_1 > \nu$. The exact relationship between $\nu_1$ and $\nu$ can depend on the details of the relaxation mechanism, about which little is known, and can differ, therefore, from the very simple relation $\nu_1 = \nu + 1$.
As noted above, metastable phenomena should be observed only for $\nu \gg 1$. At the same time, this condition should correspond to the parameters that obtain in experiment. In this case, the contribution to the exponent $\nu_1$ from the kinetics of formation of the nucleus—a contribution connected with the fact that $L_c \to \infty$ as $T \to T_c$—will be small in comparison with the contribution from the activation barrier.
This approach does not permit a rigorous investigation of the kinetics of the system, and the accuracy of the estimate of the kinetics corresponds essentially to the accuracy of the quasiclassical approximation (25). However, solving (13) exactly makes it possible, on the one hand, to investigate accurately the thermodynamics of the problem, and, on the other, to justify the results of the quasiclassical approximation.
In experiment the transition of steps from a bound state to a free state has been observed on vicinal faces with angular inclination 5–15° (Ref. 16). In this case the system is two-dimensional and one may speak of a phase transition. We shall consider the question of the order of the phase transition in the case when $L \to \infty$, where $L$ is the period of the lattice of steps in the free state. We shall write a phenomenological expression for the free-energy density of the lattice of steps in the free state:
$$F_1 = A_1/L^3 + B_1 T^2/J_1(T)L^3.$$ \hspace{1cm} (27)
Here $A_1$ and $B_1$ are certain constants. The first term in (27) describes the elastic repulsion, and the second describes the contribution to the free energy from thermal fluctuations of the steps. This term is simply the kinetic energy of a quantum particle in a box of width $L$. In the bound state the free-energy density can be represented in the form
$$F_2 = \frac{A_2}{(2L)^3} + \frac{B_2 T^2}{J_2(T)(2L)^3} + \frac{DT^2}{4J_1 a^2 L} - \frac{U_0}{2L}.$$ \hspace{1cm} (28)
Here $D \approx 1$. The other constants can be related to the constants in (27). For simplicity we shall assume that $A_2 = 4A_1$, since the energy of the elastic interaction of the steps is proportional to the square of the step height,\textsuperscript{17} and that $B_2 = B_1$ and $J_2 = 2J_1$. The third and fourth terms in (28) describe the free energy of the steps in the bound state and correspond, respectively, to the contribution of thermal fluctuations and the binding energy. Equating $F_1$ and $F_2$, we can obtain an equation for $T_c$, analogous to (17):
$$(2J U_0)^{1/4} \frac{a}{T_c} = \left[ \frac{D(1 + A/2U_0 L^2)}{2(1 - 15Ba^2/16DL^2)} \right]^{1/2}.$$ \hspace{1cm} (29)
The temperature dependence of the free energy is connected with the second term for $F_1$ and with the second and third terms for $F_2$. The temperature-dependent part of $F_1$ is smaller than that of $F_2$ by a factor of order $a^2/L^2 \ll 1$. Consequently, the entropies of the two phases at the transition point will differ sharply. This implies that, in the framework of the model considered, the phase transition in the lattice of steps for large values of $L$ will be first-order.
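The first-order character can be made explicit with a short estimate (a sketch; the coefficients $c_1$, $c_2$ are shorthand introduced here, not notation from the paper, and the weak temperature dependence of $J(T)$ is neglected). Writing the fluctuation part of the free-energy density of each phase as $F_i^{\text{fl}} = c_i T^2$, the entropy densities are
$$S_i = -\frac{\partial F_i}{\partial T} \approx -2c_i T,$$
and since the dominant fluctuation term of $F_2$ exceeds that of $F_1$ by a factor of order $L^2/a^2 \gg 1$, the latent heat
$$q = T_c (S_1 - S_2) = 2(c_2 - c_1) T_c^2 \neq 0$$
does not vanish; a nonzero latent heat is precisely the signature of a first-order transition.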
The reasoning given is based implicitly on the assumption of a large barrier between the bound and free states of the steps. Such a barrier, as indicated above, arises for $\nu \gg 1$. In the opposite case of small $\nu$, pronounced fluctuation phenomena, which are not taken into account in the above phenomenological analysis, will be present in the system. In this case it is also possible that the step-decoupling transition is second-order. Unfortunately, for the second-order transition we have not succeeded in identifying the universality class.
For a system of two steps the relaxation time $\tau$ tends to infinity as $E \to 0$ [see (26)]. In a two-dimensional lattice of steps $E$ is bounded by the quantity $\alpha/L^2$. Therefore, the maximum relaxation time attainable in experiment will behave as
$$\tau_{\text{max}} \propto L^{2\nu_1}.$$ \hspace{1cm} (30)
### 3. TWO-DIMENSIONAL WETTING
We turn to the problem of the wetting of steps by a two-dimensional phase. As shown by observations in an electron microscope,\textsuperscript{7–10} in many cases the phase wetting a step forms a strip of width $l$ to one side of the step. We shall consider such a strip on a terrace of width $L$. The Hamiltonian describing the system will also have the form (5), but with a different dependence $U(l)$, which can be chosen in the form
$$U(l) = \begin{cases}
\infty, & l < a, \\
\alpha \ln (l/a) - \mu (l-a) - U_0, & a < l < L
\end{cases}$$ \hspace{1cm} (31)
where $\mu$ is the chemical potential, $\alpha > 0$, and $U_0$ describes the energy gain upon wetting of the steps. The logarithmic term in (31) is connected with the energy of the elastic deformations that arise because of the difference between the surface energy of the phase being formed and the surface energy of the remaining substrate.\textsuperscript{18} Suppose, for definiteness, that we are concerned with the wetting of a step by a commensurate two-dimensional phase. Then the constant $J$ can be represented in the form (3). Unfortunately, it has not been possible to obtain an exact solution in the case of the potential (31). In certain cases, however, one can obtain approximate solutions by using the expansion of the potential (31) about the boundaries of the terrace, and the quasiclassical approximation.
We shall consider the case of a rigid interphase boundary ($J \gg T^2/\alpha a^2$) and a broad terrace ($L \gg a$). We approximate the potential at the right and left boundaries of the terrace by linear potentials ($\mu > 0$):
$$U_l = \begin{cases}
-U_0 + (\alpha/a - \mu)(l-a), & l \geq a \\
\infty, & l < a
\end{cases},$$
$$U_r = \begin{cases}
-U_0 + \alpha \ln (L/a) - \mu (L-a) + (\mu - \alpha/L)(L-l), & l \leq L \\
\infty, & l > L
\end{cases}$$ \hspace{1cm} (32)
Expressions for the energy levels in a field of the form (32) are given in, e.g., Ref. 23. Using them, we obtain for the ground-state energies in the wells
$$E_{0l} = -U_0 + (T^2/2J)^{1/3} |\lambda_1| (\alpha/a - \mu)^{2/3},$$
$$E_{0r} = -U_0 + \alpha \ln (L/a) - \mu (L-a) + (T^2/2J)^{1/3} |\lambda_1| (\mu - \alpha/L)^{2/3}. $$ \hspace{1cm} (33)
Here $\lambda_1 = -2.338$ is the first zero of the Airy function. At the transition point, as will be seen below, the wells are separated by a large potential barrier. Neglecting the tunneling splitting of the levels, we obtain, by equating $E_{0l}$ and $E_{0r}$, an equation for the equilibrium curve $\mu(T)$. We seek the solution in the form
$$\mu = gL^{-1}\alpha \ln (L/a).$$
Neglecting $\mu$ in the second term in the expression for $E_{0l}$ and in the last term in $E_{0r}$, we obtain
$$\left( \frac{T^2}{2J} \right)^{1/3} |\lambda_1| \left( \frac{\alpha}{a} \right)^{2/3} = \alpha \ln \frac{L}{a} - g\alpha \ln \frac{L}{a} + g \frac{\alpha a}{L} \ln \frac{L}{a}. $$ \hspace{1cm} (34)
Neglecting the last term in (34), we obtain an approximate expression for $g$ and $E_0$:
$$g \approx 1 - |\lambda_1| \left[ \frac{T^2}{2J\alpha a^2} \right]^{1/3} \Big/ \ln \left( \frac{L}{a} \right), \quad 1 - g \ll 1,$$ \hspace{1cm} (35)
$$E_0 + U_0 = |\lambda_1| \left( \frac{T^2}{2J\alpha a^2} \right)^{1/3} \alpha \ll \alpha.$$ \hspace{1cm} (36)
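The constant $\lambda_1 = -2.338$ quoted after (33) is the first zero of the Airy function $\mathrm{Ai}$, which fixes the ground-state level in a linear well bounded by a hard wall. A quick numerical check (the function `e0` and its arguments are illustrative shorthand, not notation from the paper):

```python
from scipy.special import ai_zeros

# First zero of Ai(x); ai_zeros(n) returns (zeros of Ai, zeros of Ai',
# values of Ai' at the first, values of Ai at the second).
lambda_1 = ai_zeros(1)[0][0]   # close to -2.33811

def e0(T, J, F):
    # Ground-state energy above the well bottom for a linear potential of
    # slope F with a hard wall, in the form used in Eq. (33).
    return abs(lambda_1) * (T**2 / (2.0 * J))**(1.0 / 3.0) * F**(2.0 / 3.0)

print(round(lambda_1, 5))      # -2.33811
```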
The inequalities in (35) and (36) follow from the condition that the interphase boundary be rigid. The latter inequality confirms the applicability of the linear expansion used for the potential. We now calculate in the quasiclassical approximation the tunnel integral determining the activation energy $E_a$ for the appearance of a region with the two-dimensional phase covering the whole terrace, i.e., for the transition of the interphase boundary from the left well to the right well:
$$E_a = (2J)^{1/2} \int_{a_1}^{L_1} \left[ -\mu(x-a) + \alpha \ln \frac{x}{a} - U_0 - E_0 \right]^{1/2} dx$$
$$= (2J)^{1/2} \int_{a_1}^{L_1} \left[ -\mu(x-a) + \alpha \ln \frac{x}{a} - |\lambda_1| \alpha \left( \frac{T^2}{2J\alpha a^2} \right)^{1/3} \right]^{1/2} dx,$$ \hspace{1cm} (37)
where $a_1$ and $L_1$ are the zeros of the expression under the square root. The integral (37) can be estimated by the method of steepest descent. Finally, we obtain the approximate result
$$E_a \approx (2J\alpha)^{1/2} L \ln^{1/2} \left( \frac{L}{a} \right). $$ \hspace{1cm} (38)
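The $L \ln^{1/2}(L/a)$ scaling of the barrier integral can be checked numerically. The sketch below uses assumed units $\alpha = a = 1$ and an assumed level shift $c = E_0 + U_0 = 0.1$ (placeholder values, not from the paper), with $\mu$ fixed on the transition curve via the relation $g = 1 - (E_0+U_0)/[\alpha\ln(L/a)]$; the $(2J)^{1/2}$ prefactor is omitted since it does not affect the $L$-dependence.

```python
import math
from scipy.integrate import quad
from scipy.optimize import brentq

alpha, a, c = 1.0, 1.0, 0.1   # assumed units and level shift c = E_0 + U_0

def barrier(L):
    g = 1.0 - c / (alpha * math.log(L / a))     # transition-curve relation
    mu = g * alpha * math.log(L / a) / L        # chemical potential on the curve
    f = lambda x: alpha * math.log(x / a) - mu * (x - a) - c
    x1 = brentq(f, a, math.sqrt(a * L))         # left turning point (f(a) = -c < 0)
    # Integrate the square root of the barrier up to the right wall at L.
    val, _ = quad(lambda x: math.sqrt(max(f(x), 0.0)), x1, L, limit=200)
    return val

ratios = [barrier(L) / (L * math.sqrt(math.log(L / a))) for L in (200.0, 800.0)]
print(ratios)   # the two ratios are nearly equal, consistent with Eq. (38)
```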
By virtue of the assumptions made, $E_a \gg T$. This implies that on the transition curve the time of establishment of equilibrium can be too long. In experiment the transition itself will then be observed from a metastable state.
Thus, elastic deformations lead to the result that the width of the strip of adsorbate on the terrace should change discontinuously, from an almost empty to an almost filled terrace. In addition, in experiment pronounced hysteresis phenomena should be observed. When elastic deformations are not taken into account, the width of the strip should grow continuously as a function of $\mu$ (Ref. 14).
The author is grateful to B. Z. Ol'shanskii, O. M. Pchelyakov, and S. I. Stenin for explaining the experimental situation, and to V. L. Pokrovskii for discussion of the results of the work.
\textsuperscript{1}R. Pandit, M. Schick, and M. Wortis, Phys. Rev. B 26, 5112 (1982).
\textsuperscript{2}P. G. de Gennes, Rev. Mod. Phys. 57, 827 (1985).
\textsuperscript{3}M. Bienfait, Surf. Sci. 162, 411 (1985).
\textsuperscript{4}A. D. Migone, J. Krim, J. G. Dash, and J. Suzanne, Phys. Rev. B 31, 7643 (1985).
\textsuperscript{5}S. Dietrich and M. Schick, Phys. Rev. B 31, 4718 (1985).
\textsuperscript{6}C. Ebner, W. F. Saam, and A. K. Sen, Phys. Rev. B 32, 1558 (1985).
\textsuperscript{7}N. Osakabe, Y. Tanishiro, K. Yagi, and G. Honjo, Surf. Sci. 109, 353 (1981).
\textsuperscript{8}Y. Tanishiro, K. Takayanagi, and K. Yagi, Ultramicroscopy 11, 95 (1983).
\textsuperscript{9}W. Telieps and E. Bauer, Surf. Sci. 162, 163 (1985).
\textsuperscript{10}A. L. Aseev, A. V. Latyshev, and S. I. Stenin, in: Problems of Electronic Materials Science [in Russian], Nauka, Novosibirsk (1986), p. 109.
\textsuperscript{11}M. E. Fisher, J. Stat. Phys. 34, 667 (1984).
\textsuperscript{12}G. Uimin and P. Rujan, Phys. Rev. B 34, 3551 (1986).
\textsuperscript{13}S. T. Chui and J. D. Weeks, Phys. Rev. B 23, 2438 (1981).
\textsuperscript{14}R. Lipowsky, Phys. Rev. B 32, 1731 (1985).
\textsuperscript{15}V. I. Mashanov and B. Z. Ol'shanskii, Pis'ma Zh. Eksp. Teor. Fiz. 36, 292 (1982) [JETP Lett. 36, 355 (1982)].
\textsuperscript{16}V. I. Marchenko, B. Z. Ol'shanskii, and S. I. Stenin, Poverkhn. Fiz. Khim. Mekh. 4, 38 (1986) [Phys. Chem. Mech. Surf.].
\textsuperscript{17}V. I. Marchenko and A. Ya. Parshin, Zh. Eksp. Teor. Fiz. 79, 257 (1980) [Sov. Phys. JETP 52, 129 (1980)].
\textsuperscript{18}V. I. Marchenko, Pis'ma Zh. Eksp. Teor. Fiz. 33, 397 (1981) [JETP Lett. 33, 381 (1981)].
\textsuperscript{19}I. M. Gel'fand and A. M. Yaglom, Usp. Mat. Nauk 11, No. 1, 77 (1956).
\textsuperscript{20}R. A. Suris, Zh. Eksp. Teor. Fiz. 47, 1427 (1964) [Sov. Phys. JETP 20, 961 (1965)].
\textsuperscript{21}D. J. Scalapino, M. Sears, and R. A. Ferrell, Phys. Rev. B 6, 3409 (1972).
\textsuperscript{22}M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions, Dover, New York (1965), p. 360.
\textsuperscript{23}V. M. Galitskii, B. M. Karnakov, and V. I. Kogan, Problems in Quantum Mechanics [in Russian], Nauka, Moscow (1981), p. 648.
Translated by P. J. Shepherd
'Democracy' lobby instigates breakup of Russia
The secret network behind George Soros
Belgium stands up against 'pornocracy'
Break the coverup of Bush's drug trafficking
Jail George Bush
a kinder, gentler CRACK DEALER
George Bush and the 'Ibykus' principle, by Lyndon LaRouche
Chapter 1
New revelations tie Palme murder to Bush, Thatcher-linked arms cartel
Chapter 2
John Train: Wall Street's man in Bush's secret government
Chapter 3
The Olof Palme assassination and coverup, revisited
Case studies: The LaRouche case and the Palme assassination. The Club of the Isles and the international weapons cartel. Schalck-Golodkowski and 'destructive engagement.'
Chapter 4
The death toll rises
Uwe Barschel. André Cools and Gerald Bull; Rajiv Gandhi. Yitzhak Rabin. Cyrus Hashemi. Some other strange deaths.
Chapter 5
'Paris Review' goes to Kabul
CHRONOLOGY
Bush-Thatcher 'secret government' operations: 1979-96
ALSO AVAILABLE:
"Would A President Bob Dole Prosecute Drug Super-Kingpin George Bush?"
This EIR Special Report documents the ongoing war between U.S. President Bill Clinton and the Queen's Own Republican Party of 1980s drug super-kingpin George Bush.
88 pages $100
Order #EIR 96-005
126 pages, $100. Order #EIR 96-003
From the Associate Editor
The campaign by EIR and the LaRouche movement to jail George Herbert Walker Bush took a big step forward on Oct. 25, with the release of a new Special Report, “George Bush and the 12333 Serial Murder Ring,” available for $100 from this news service. It is a sequel to our September 1996 report, “Would a President Bob Dole Prosecute Drug Super-Kingpin George Bush?”
Why Bush? Lyndon LaRouche, in his introduction to the new report, spells it out: “Although out of public office, Bush is still a powerful international, and national figure behind the scenes, one of the vilest, meanest, and most corrupt figures in any part of today’s world. Your freedom, and that of our nation, hangs upon our ability to purge our institutions of the evil, bootleg, unconstitutional power, represented by the secret government created for George Bush, beginning 1981, under the title of Executive Order 12333.”
Referring to the recent exposés of Bush’s role as kingpin of the cocaine-trafficking Contras, and in the assassination of Swedish Prime Minister Olof Palme, LaRouche writes: “These presently ongoing, fresh exposures of Thatcher and Bush, are like a Heaven-sent gift. These facts present an opportunity for our government, our citizens, to free themselves from continued bloody abuses, by secret, armed government, operating from within the precincts of our military’s Joint Chiefs of Staff. Sometimes, true facts are the most effective of the weapons by means of which an entire people may regain its lost freedom. This—right now—is such a time.”
EIR this week provides a wealth of supporting analytical and documentary material. St. Petersburg correspondent Roman Bessonov’s Feature shows how the Bush-Thatcher “Project Democracy” networks are financing separatist insurgencies in the former U.S.S.R. that could lead to World War III. In International, we have reports on the demise of Bush’s defense policy for the Americas; the popular revolt in Belgium against perversion and corruption by Bush’s political partners in high places; and the arrests in Italy of Bush-leaguers known as “the new Propaganda-2.” See National for reports on how the crimes of Bush and his family friend William Weld of Massachusetts are coming under increased public scrutiny.
Susan Welsh
Interviews
45 Vazgen Manukian
The leader of Armenia’s National Democratic Union and unified former opposition candidate in the Sept. 22 Presidential elections, discusses the vote fraud that left his country with a “semi-dictatorial regime, with a democratic shell.”
Book Reviews
6 The ‘super-blowout’ in world finance
*Die Globalisierungsfalle* (The Globalization Trap) by Hans-Peter Martin and Harald Schumann.
Photo and graphic credits: Cover, EIRNS/Robert Trout. Pages 11, 55, 65, EIRNS/Stuart Lewis. Page 19, UN Photo. Pages 29, 30, EIRNS/John Sigerson. Page 39, Agence France Presse. Page 43, Schiller Institute.
Investigation
54 The secret financial network behind ‘wizard’ George Soros
From a dossier by our Wiesbaden, Germany bureau, titled “A Profile of Mega-Speculator George Soros.” Beneath the thin veneer of philanthropy, Soros has been called the master of “hit-and-run capitalism.”
61 Soros’s looting of Ibero-America
Case studies of Brazil, Argentina, Mexico, and Peru.
Departments
72 Editorial
A crisis of the institutions.
Economics
4 Zedillo won’t privatize oil; speculators take revenge
Mexico’s President is said to be “retaking a nationalist path,” and as a result, financial warfare against him is escalating.
7 Currency Rates
8 Venezuela’s austerity plan puts it on the road to a ‘Mexican’ explosion
10 How deregulation shot down the U.S. airlines
An EIR Contributing Editor Feature by Lyndon H. LaRouche, Jr.
14 Whole classes of patients are denied treatment with ‘managed care’
Children and adults with mental disabilities are being denied care under these “managed abuse” plans.
16 Business Briefs
Feature
18 Bush’s ‘democracy’ lobby instigates breakup of Russia
Part 4 of Roman Bessonov’s series, “The Anti-Utopia in Power,” shows how the indigenist, separatist, and environmentalist seeds of breakup of the former U.S.S.R. were planted by Gorbachov’s “new thinking,” and are being nurtured today by the friends of George Bush and Margaret Thatcher, including the U.S. National Endowment for Democracy and its affiliated “quangos.”
32 Democracy or destabilization? What NED funds in Eurasia
Follow the money-trail of the National Endowment for Democracy.
International
36 IMF pressure is driving Russia toward civil war
Yeltsin’s firing of Lebed reminded Russians that they didn’t elect a President last summer, but a continuing time of strife, as the International Monetary Fund turns the screws tighter on the ravaged economy.
38 Belgium rocked by protests as people stand up against ‘pornocracy’
40 Italian prosecutors close in on ‘new P-2’
43 Europe’s responsibility in Bosnia-Hercegovina
By Gen. J.A. Graf von Kielmansegg (ret.) of Germany, former chief of NATO’s Northern Command of Europe.
47 ‘Williamsburg II’ flops: Time to dump Bush’s defense policy for the Americas
The second Defense Ministerial of the Americas was a failure, as U.S. Secretary of Defense William Perry adopted bankrupt policies inherited from the Bush administration: destroying nations, under the slogans of “peace” and “democracy.”
50 ‘Democrat’ Sarmiento: an Anglophile racist
The program of Argentina’s President Domingo Faustino Sarmiento in the 19th century was a forerunner of today’s efforts to demilitarize Ibero-America.
52 International Intelligence
National
Demonstrators in Houston, Texas on Oct. 4, 1996.
64 Coverup begins to crack on Bush cocaine ring
Frantic efforts by the Los Angeles Times and Washington Post to cover for drug super-kingpin George Bush are falling apart, as Senate hearings begin into the Contra crack-running operations headed by Bush and his “gopher” Oliver North.
66 ‘Bill Weld blocked our investigation’
67 DOJ’s Bromwich: Some oppose drug probe
68 Battle lines drawn against Social Security privatization
Including the National Association of Manufacturers’ resolution to “piratize” the system.
70 National News
Zedillo won’t privatize oil; speculators take revenge
by Carlos Cota Meza
Mexico is once again being visited by financial instability, triggered by President Ernesto Zedillo’s decision to resist the pressures of the international banking community and not privatize the petrochemical industry, instead applying an investment formula in which the state will retain ownership of 51% of the industry, and national or foreign private investors can participate in the remaining 49%. That decision was welcomed by many inside Mexico’s ruling Institutional Revolutionary Party (PRI), but was also met with a great gnashing of teeth by the gnomes of Wall Street and the City of London.
The illusion of Mexico’s financial stability, so carefully nurtured since the debt bomb blew up in December 1994, dissolved during the third week of October, when Mexico was hit by a familiar pattern of capital flight, a peso slide against the dollar, a skyrocketing of internal interest rates, and a toboggan ride on the stock market. Assurances by Energy Minister Jesús Reyes Heroles to international speculators that suspension of the privatization plan would be but a temporary setback, did little to assuage the wrath of those who had thought that Mexico’s oil was finally within their grasp.
‘The pirates are angry’
Lawmakers from the ruling PRI are saying for the first time that President Zedillo “is retaking a nationalist path,” and that the financial instability “is the backlash of a great conspiracy by all those who find themselves affected.” Some congratulated President Zedillo openly for finally listening to the PRI rank-and-file which elected him. Others are warning that “the pirates are angry,” and that these are the big foreign moneybags who “wanted to grab Mexican stocks at bargain-basement prices.”
On the other hand, the champions of free trade within the opposition National Action Party (PAN), are hysterically accusing the PRI of “tying the hands of the President,” referring to the intense lobbying effort against petrochemicals privatization carried out by the Mexican labor movement and others in the period leading up to Zedillo’s decision. They are the ones “who adhere to the general outlines of the Luis Echeverría and López Portillo Presidencies,” charged PAN Sen. Francisco X. Salazar.
Luis Pazos, a populist spokesman for the extreme neoliberalism (i.e., British free trade ideology) of the Mont Pelerin Society, says that the failure to fully privatize the petrochemical industry “is a victory by the dinosaurs, and by the sectors whose goal is to preserve outmoded statist schemes.” Enraged columnists accused Fidel Velázquez, veteran leader of the Mexican Workers Federation, of having imposed a “suicidal fundamentalism” against the privatization.
The Mexican media have revealed that intense pressure was applied on members of the Zedillo administration, in the form of private meetings between PRI legislators and public officials. Miguel Mancera Aguayo, Mexico’s central bank director, was subjected to one such “private meeting” with members of the finance committees of both houses of Congress, where, at the insistence of PRI deputy Francisco Suárez Dávila, a four-hour discussion was held on the question of central bank autonomy. Congressman Suárez Dávila, who chairs the finance committee, demanded “detailed information” on Mancera’s monetary policy, and warned that, its autonomy notwithstanding, it must “not be forgotten” that the Bank of Mexico “does not enjoy absolute independence from the powers of the state.”
Sen. Carlos Sales (PRI), who chairs the Senate finance committee, pointed out to Mancera that he must recognize that the “people’s patience is running out.”
In the middle of this row, some were asking: “Is there any chance that the non-privatization of the petrochemical industry will resolve the economic crisis?” Others responded, “Has economic liberalism resolved the crisis?” But what none of the parties to the conflict have managed to recognize is that the fundamental instability of Mexico is not due to any of the events in which they are participating.
**Mexico first to go down—again?**
As was admitted in the recent annual meeting of the International Monetary Fund (IMF), the world is going through the worst banking crisis of the century. Mexico’s upheavals are directly related to the jam that the international financial bodies are in as they try to deal with each new crisis erupting anywhere around the globe. Rimmer De Vries, former chief economist at J.P. Morgan bank, during a recent seminar at the IMF’s Institute of International Economics, said that the “next time a new crisis breaks out in the problem debtor countries . . . we will not see the IMF coming in with billions of dollars in aid, but we will see the consequences of the crisis hit the domestic and international markets.”
De Vries points to Turkey and Thailand as countries ready to suffer a financial crisis. De Vries knows Mexico’s financial situation, because J.P. Morgan headed up the creditor syndicate in the Brady Plan negotiations of 1990.
The IMF directors have also been explicit that there will no longer be any financial bailout packages to refloat the “next Mexico.” The only thing being offered to Mexico at present, according to IMF Managing Director Michel Camdessus, is a so-called “Preventive Plan,” which involves “defining the cost of the reimbursement of contributed loans,” in order to confront the most recent problems of the financial emergency in Mexico. The plan would begin to function during the first quarter of 1997, and operate until the end of 1999.
Finance Secretary Guillermo Ortiz claims that the “Preventive Plan” is to deal with more than $10 billion in foreign debt owed to the IMF. In 1995, Mexico paid $41 billion on its foreign debt; this year, payments will be $26 billion, not counting the $14 billion due which were refinanced by new Mexican bonds floated on the international markets.
Adding up the write-offs and bond issues from 1995 to the present, Mexico has recycled some $81 billion, only to enter 1997 with yet another rescue plan to deal with its obligations to the IMF. And, the “Preventive Plan” does not have the support of the entire international financial community.
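The “$81 billion” total follows from the payment figures quoted in the preceding paragraphs; a quick arithmetic check (variable names are ours, figures in billions of dollars as cited):

```python
# Figures cited in the article, in billions of dollars.
paid_1995 = 41          # foreign-debt payments in 1995
paid_1996 = 26          # payments this year, excluding refinancings
refinanced = 14         # debt rolled over into new Mexican bonds

total_recycled = paid_1995 + paid_1996 + refinanced
print(total_recycled)   # 81
```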
The same is occurring with the national banking system, or the “internal markets,” as De Vries calls them. It is well known that the Mexican government has been trying for a year and a half to keep Mexico’s banks from disappearing, but has not succeeded. The Savings Bank Protection Fund (Fobaproa) and the National Commission on Banks and Stocks (CNBV) have taken over ten private banks, and aided another 11. So far, these operations have cost 113 billion pesos (roughly $15 billion).
According to President Zedillo’s second State of the Nation address, these actions have done nothing to stop the current banking disaster. Overdue debts in 1995 grew 64.5% over the previous year, and in only the first four months of 1996, non-performing debt was already 98.5% of the level reached in all of 1995. And these calculations don’t include interest.
In June 1996, private banks had declared assets of 599 billion pesos, while their debts rose to nearly 606 billion pesos, leading to a deficit of 6.8 billion pesos. The profit margin of the banks is less than 7.69%, rates of return on capital are less than 13.29%, and the yields on assets are less than 0.59%. These indicators exclude banks that have been taken over or are in “a special situation,” including Banco Unión, Cremi, Banpaís, Interestatal, Oriente, Obrero, Inverlat, Bancen, Capital, and Sureste.
Traditional banking has disappeared from the national economy. Bank deposits in the first half of the year plunged 23.5%, the loan portfolios of the banks have fallen 56% in the past year and a half, and bank financing has fallen 47.25% in the same period.
With the increase in interest rates and the peso devaluation during the last turbulence in October, it is forecast that arrears will grow, bank income will decline, and bankruptcies in the national banking sector will multiply. Everything now seems to indicate that the IMF, through its famous “Preventive Plan,” seeks to directly administer Mexico from now through the year 2000. That plan includes the explicit “obligation” that Mexico privatize its state oil company, Pemex.
**‘LaRouche plan’ is sole alternative**
It is this last conditionality which appears to have defined a “boundary condition” for the Mexican ruling class, whose members are now faced with the fact that sticking with IMF conditionalities will mean the eventual privatization of Pemex, something which until just a few years ago was rejected out of hand in all public and private discussions. The factional brawl within Mexico’s ruling elites over whether that boundary will be crossed is growing increasingly heated.
Addressing the overall financial crisis, U.S. statesman and economist Lyndon LaRouche offered the following on Oct. 16: “The only alternative is mine. . . . My policy is that the governments must act now, or set the action up now, to be prepared, the moment the public clamors for it, is willing to support it, to put all this financial system into government receivership and reorganization, to prevent a chain-reaction which could lead to social chaos. That is, to protect the people, to protect the economy from the effects of this financial bubble bursting.”
The ‘super-blowout’ in world finance
by Lothar Komp
Die Globalisierungsfalle
by Hans-Peter Martin and Harald Schumann
Rowohlt Verlag, Reinbek bei Hamburg, 1996
350 pages, hardbound, DM 38
Yet another taboo in German politics has fallen. Over a long period of time, despite having been bombarded with documentation, petitions, and pamphlets, the political establishment had hysterically denied Lyndon LaRouche’s analysis of the inevitable financial disintegration. Now, this line is no longer sustainable. Suddenly, Hessen’s minister of the environment, Joschka Fischer, and Prof. Ernst Ulrich von Weizsäcker, director of the environmentalist Wuppertal Institute, are telling everyone within earshot that there is a risk of a “super-blowout in the cyberspace of world finance.” You don’t have to look far to find the source of this sudden insight: a new book, Die Globalisierungsfalle (The Globalization Trap), now well on its way to becoming a best-seller in Germany. Written in the snide tone one would expect of Der Spiegel magazine staff reporters, the book describes the worldwide onslaught of neo-liberalism (free trade) and the manifest destruction said policy has already wreaked.
The message of the “political-economic Holy Word, with which a veritable army of economic advisers has marched into politics,” really means nothing other than “the market is good, state intervention is bad,” the authors write. Starting from the Anglo-Saxon world, most Western governments have elevated this dogma into the guiding light of their policies. Deregulation instead of state supervision, liberalizing trade and capital flows, privatizing state firms—such are the “strategic weapons in the arsenal of those governments which have put their faith in the free market economy and into those economic institutions controlled by said markets: the World Bank, the IMF [International Monetary Fund] and the World Trade Organization.”
Should anyone still entertain illusions about the consequences of neo-liberal rule, let him look at the United States today. That nation, say the authors, has become “the world economy’s biggest cheap labor zone. . . . For more than half of the population, forced-march competitiveness has become the new American nightmare: the ladder straight down to hell. In 1995, four-fifths of all male employees and workers in the U.S.A. earned, in real terms, 11% less than in 1973. In other words, over the last two decades, living standards for the vast majority of the population, have actually fallen. Between 1979 and 1995, some 43 million people lost their jobs. Most quickly found another job, but in three-quarters of all these cases, at a far lower wage, and under far worse working conditions.”
Similarly, say Martin and Schumann, “capitalist counter-reformation” has brought about acute social decay: “Criminality has taken on epidemic proportions. In California, itself the world’s seventh largest economy, expenditure on jails exceeds that for education. Twenty-eight million Americans, over 10% of the population, live entrenched in high-rise buildings or security estates guarded night and day. Americans spend twice as much on private guards as they do on state police.” In all vital questions bearing upon the future, governments have surrendered to the “ineluctable constraints of the transnational economy,” while politics has become nothing but impotent theater. Globalization has, in fact, become a “trap for democracy,” leading to the “20:80 society,” in which 80% of the population are ejected from the economy and have to be kept punch-drunk by mass media entertainment.
The book’s most interesting passages deal with the global financial markets. They contain a remarkable documentation of expert warnings on the coming financial crash, so far only seen in publications associated with LaRouche. We are treated to Wilhelm Noellinger’s proposal that “the financial world needs to be protected from itself,” by taking measures to ward off a “super-blowout in the financial system”; to Jacques Chirac’s expression of “financial AIDS”; to Felix Rohatyn’s warnings on the “deadly potential, which lies in the combination of new financial instruments and overly rarefied trading mechanisms which could touch off a destructive chain reaction; the world financial markets have become a greater threat to stability than atomic weapons.”
In 1994, Horst Koehler, president of the German Association of Savings Banks, warned that the collapse of one financial institution could lead to a worldwide domino effect: “The risk will hit the stock exchanges, next the currency exchanges, and then, the real world.” Therefore, said Koehler, “a blowout on the financial markets is perfectly possible.” Were that to be the case, say Schumann and Martin, trade would screech to a halt, “the entire system will break down and a worldwide crash will become as unavoidable as that which took place on Black Friday in October 1929.”
Indeed, in the past few years there have been several occasions upon which the world financial system stood on the edge of the abyss, or, as IMF Managing Director Michel Camdessus put it, on the edge of “a real world catastrophe.” A slight rise in U.S. interest rates sufficed to unleash, in early 1994, a chain reaction on international bond markets. While $3 billion of Orange County, California’s assets went down the drain, simultaneously, “just overnight more than $3 trillion” of financial paper went up in smoke, catalyzed by the incredibly bloated mass of financial gambling called derivatives trading.
**Camdessus: the puppet on a string**
By December 1994, things had gotten much worse. When the Mexico crisis hit, the world financial system was about to disintegrate. As the authors document, a worst-case scenario was averted only by a desperate, and, in effect, illegal, action by the IMF’s boss: “On a cold Monday evening, Jan. 30, 1995,” push came to shove. “At 9 a.m. Camdessus received a piece of news which made him shudder. He stood quite alone, shouldering all responsibility to prevent the very disaster which he himself had always thought could not possibly occur. Wracked by anxiety, he could not sit still at his desk. He collected his papers, and left his wood-panelled office for the large conference room, where IMF directors normally gather to discuss issuing credit. ‘I was seeking the answer to a question which had never before been posed,’ said Camdessus. Should he put aside all previous IMF rules, and, without conditions, without contract, without even the lenders’ agreement, grant the largest loan in the fifty-year history of the IMF? Camdessus picked up the telephone, and within a few hours, the mighty director of the world’s largest credit institution became but a puppet himself, its strings pulled by people he himself did not even know.” The book describes how, in these dramatic hours, Camdessus had received a barrage of “calls made by leading New York bankers and investment managers,” pressuring him to opt for a gigantic bailout. “Were the Mexican market to collapse, he had heard, nothing could stop hell breaking loose. A chain reaction would be touched off by the mere apprehension of a similar crisis in other Third World countries, which might well lead to a worldwide financial crash.”
Such a crash, according to the authors, is, in effect, much more likely than many of the movers and shapers of global markets are admitting to themselves and their clients. Of course, how such a “financial machine run amok” shall ever be brought back under the control of the governments of nation-states, how social tranquility and a decent living standard shall ever be restored, does not overly preoccupy Martin and Schumann. Vaguely pointing to the “Tobin tax,” and calling for “a vital, vigorous European alternative to destructive Anglo-Saxon market radicalism,” will not do the job. Listening to proposals of a “continent-wide ecological tax reform,” coupled with monetary union, “in order to make the Euro the major currency,” one can only draw the conclusion that the authors cannot distinguish a coffin from a lifeboat.
Venezuela’s austerity plan puts it on the road to a ‘Mexican’ explosion
by David Ramonet
A chill is sweeping over Venezuelans, as they see the government’s “Agenda Venezuela”—the austerity agreement struck with the International Monetary Fund in April—turning more and more into an imitation of the IMF’s “Mexican miracle.” In Mexico, the illusion of imminent prosperity through financial speculation, even as the vast majority of people sank into misery and unemployment, burst in December 1994, when the peso fell through the floor, and investments evaporated overnight.
Six months of the International Monetary Fund’s (IMF) “miracle cure” have apparently changed the opinions of several financial advisers, and even some press commentators, who, during the period in which Venezuela’s economic sovereignty was protected from the assaults of international usury, were the fiercest critics of President Caldera. Now, they can see for themselves the consequences of having ignored the “Ninth Forecast” warning of Lyndon LaRouche, that the international monetary system was heading for a crash (see EIR, June 24, 1994).
Opening the speculative spigot
On April 15 of this year, President Caldera announced a program of economic adjustment based on IMF demands. Within weeks of lifting the exchange controls Caldera had imposed earlier in his administration to defend the financial system from looting, flight capital began pouring back into Venezuela, headed by the funds of speculator George Soros, who has made headlines with his financial backing for campaigns for drug legalization.
Immediately after the recent annual IMF assembly, at which IMF Managing Director Michel Camdessus stressed the “achievements” of Venezuela’s adjustment program, a speculative wave hit the Caracas stock exchange, driving it to over 6,000 points, with a 10% jump in the first week of October.
This, in turn, unleashed euphoria within certain media, with the announcement that bolívar investments in the stock exchange had yielded 150% profit during the first nine months of 1996, and that dollar investments had yielded up to 80% profit. During the last week of September and the first week of October alone, more than $120 million entered the stock market, which, in addition to inflating the prices of existing stocks, helped the Central Bank withdraw from the exchange market, producing a “revaluation” of the bolívar, from 475 to 469 to the dollar.
According to economist Francisco Vivanco, Venezuela has begun to experience “an unconcealable economic bonanza,” given the fact that “inflation is receding, the stock market breaking records, and the national currency appreciating against the dollar.” However, Vivanco warns that “these improvements are not being felt by the vast impoverished majority of the country,” because, even though inflation is receding, it has still been running at 115% for the past 12 months and at 84% for the first nine months of 1996. Vivanco acknowledges that these signs of “improvement” could reverse at any moment, and “provoke an economic collapse.”
Financial adviser Ignacio Oberto comments on the Central Bank’s strategy:
“It is surprising to see how those responsible for financial and monetary policy could end up fooled by this speculative ‘bonanza,’ in which it looks like more flight capital is coming into the country every day, whose only purpose is to obtain disproportionate yields in dollars.” Oberto concludes that if things continue this way, “in the long run we are going to have to swallow a bitter dose of ‘tequila,’ and suffer the effects of a Mexico-style situation, in which reserves dissolved in a split second.”
When President Caldera announced his list of neo-liberal (free trade) economic measures on April 15, El Nuevo País editor and television commentator Rafael Poleo warned that, from that moment onward, “the IMF’s occupation of the Venezuelan economy is a fact.” He said that discussing the benefits of such an occupation would be comparable to “the Czechs and Poles discussing the benefits of Hitler’s invasion in 1940.”
The great reversal
On Oct. 13, President Caldera convened a press conference at La Casona, his official residence, for the ostensible purpose of denying rumors of imminent changes in his economic cabinet, rumors which had caused capital withdrawals
from the stock exchange. "I want to declare categorically that I have not planned, nor am I planning, to make any changes of my economic ministers," he stated. "Agenda Venezuela" is moving forward, he insisted, with "international reserves of $13 billion, the highest figure in recent years; the bolívar is stabilized, and inflationary pressure is lessening, as can be verified month by month."
Finance Minister Luis Raúl Matos Azócar declared at that press conference that various strategies are being considered for changing "the profile" of foreign debt coming due, on the Mexican or Philippine model. In particular, he said that one approach under consideration was that of issuing bonds on the international markets, in order to buy back restructured (Brady Bond) debt, or to pay off internal debt. That is, to convert internal debt, subject to national conditionalities, into foreign debt, subject to foreign conditionalities. According to the minister, this is possible because the country's international creditors and speculators already see Venezuela as another Argentina or Mexico. "It is impressive to note how the spread of the Venezuelan debt has been approaching that of the rest of Latin America, where only two years ago it was the highest of all. . . . At this point, it is comparable to that of Argentina. . . . These are the most important changes that can be presented . . . to demonstrate the return of international confidence," declared Matos Azócar.
That "confidence" can be measured in the unusual flood of speculative dollars into the country, which Matos Azócar disingenuously referred to one week earlier, during a Washington, D.C. press conference with the IMF's Camdessus, as "not flight capital, but investment capital." Perhaps he felt obliged to issue such a clarification, because a year and a half ago, President Caldera had told a forum of international speculators organized by the *Economist*, the British magazine, that foreign investment would be welcome in Venezuela, "but not flight capital, which doesn't come to share the risk with us."
Not present at the press conference was Cordiplan (Planning) Minister Teodoro Petkoff, the Marxist terrorist converted into an existentialist neo-liberal. Petkoff was in Brussels, praising "Agenda Venezuela" at a forum organized by Venezuela's National Council to Promote Investment. Before an audience composed of representatives of Unilever, Heineken, Makro, Philips, and others, Petkoff offered his assurances that the Venezuelan government would stick to its plan for "restructuring" (read, "deconstruction") of the state, starting with a dismantling of the social security system. Petkoff said that "entitlements have become a problem both for workers and for businessmen," because the latter will not raise salaries if workers don't agree to reform the system. "Privatizations constitute the first structural reform of a sluggish, inefficient, costly, and corrupt state," Petkoff intoned. Petkoff also pledged a "reform" of the Venezuelan judicial system, the better to protect the rights of foreign "investors."
**'Worst is still to come'**
Notwithstanding all of the government's assurances, the latest assessment of the *Economist Intelligence Unit* is that President Caldera has yet to pass the test of fire, virtually insisting that he commit hara-kiri to prove his sincerity. Says the *Economist Intelligence Unit*'s report, published in the Caracas daily *El Nacional* of Oct. 13, "The government began its plan with the easiest reforms," such as elimination of price and exchange controls, increasing gasoline prices 500%, and so on. Such measures, says the *Economist Intelligence Unit*, "were accompanied by painful effects on the population," but the population acquiesced. "However, the greatest test [to] which the government must subject itself will be to continue with reform of the public sector. . . . It is critical that the fundamental causes of the permanent fiscal deficit be eliminated: the hypertrophy of the public sector and the costly system of entitlements."
Such reforms will mean, in practice, the reduction of the public payroll from 1.3 million to 500,000 employees; the halving of the Education Ministry's payroll; and the privatization of state industrial complexes of steel and aluminum, at the cost of some 13,000 layoffs. The *Economist Intelligence Unit* warns that such measures will cause a social explosion:
"The population will not likely be so docile with this next phase of the program, when eliminating public jobs and changing labor laws hits directly at the image of the paternalistic state." In particular, it says that there are fears that the organized labor movement, primarily in the public sector, but with the backing of the Venezuelan Labor Federation (CTV), will begin organizing an opposition that could grow rapidly. Although the British bulletin doesn't mention it, the CTV has until now blocked the dismantling by decree of entitlements, as the investors have been demanding.
**Usurers and narco-terrorists join forces**
Strangely enough, the *Economist Intelligence Unit* promotes the Causa R (Radical Cause) party, a member of Fidel Castro's narco-terrorist São Paulo Forum, describing it as the leading opposition movement in the country. In fact, Causa R's leading trade union figure, the former governor of the state of Bolívar, Andrés Velásquez, *has* objected to the government's plan to privatize the Orinoco steel complex (Sidor) as a whole, in order to pull in some $2 billion. But what Velásquez proposes is not a defense of Sidor, but rather to cut it into pieces, so as to garner $7 billion. With such a creative accounting approach, it's no wonder the *Economist* loves Causa R!
The truth is that there is a rush to loot Venezuela while there is yet time. Perhaps most telling is the fact that the investment brokerage house Bear Stearns is currently recommending Venezuela to all and sundry as a place to invest. It was Bear Stearns' David Goldman who had given similar high marks to Mexico, just two weeks before the December 1994 collapse from which that country has yet to emerge.
How deregulation shot down the U.S. airlines
by Lyndon H. LaRouche, Jr.
The Sunday, Oct. 20 New York Times demonstrated, once again, that, often, that newspaper is to science, economics, and English prose style, what inflatable dummies are to love-making. We refer to the leading piece of the "Money & Business" section: Adam Bryant's "U.S. Airlines Finally Reach Cruising Speed."
To appreciate the authority of the Times's opinion on the aerospace investments, one should remember, that it was the same newspaper, which not only warned its readers against replacing gas lamps with Thomas Edison's electric-light bulb, but which ridiculed the Wright brothers' insistence that heavier-than-air flight was possible, and, which assured us, later, that no rocket could ever escape the Earth's atmosphere.¹ Today, unfortunately, the newspaper's views have a perverse kind of newsworthiness; its silliness is tragically consistent with what passes for "mainstream economic thinking" around Wall Street and Washington, D.C. today.
Consider that piece's substitution of myth for the reality of the 1978-1996 airlines crisis. A few excerpts from Bryant's opening paragraphs are sufficient to make that point.
"While other consumer industries went through good times and bad, airlines mostly gyrated between bad and awful."
Prior to 1978, that did not happen: Bryant has concocted his fiction to fit his fantasy. The back files of the Times financial pages, would inform him, that, until the introduction of deregulation of transportation, during the late 1970s, the major airlines were among the leading components of a financial-market investor's preferred mix of holdings.
He continues, in his next sentence, with the following non-sequitur:
"In just the first five years of the 1990s, they [the airlines—LHL] lost $13 billions, more than all the profits accumulated since the Wright brothers made their historic flight at Kitty Hawk in 1903."
A responsible journalist would have contrasted the depleted physical purchasing-power of a highly inflated $13 billions of the 1990s, to the market-basket requirements for operating a safe, technologically progressive airline prior to the fateful years of 1978-1979. Just to show how recklessly ignorant of the subject Bryant is, he has brought up the embarrassing fact which the Times has been trying to cover up for nearly a century: that newspaper's original comment on the 1903 flights of the Wright brothers.
A few more samplings from the opening paragraphs of Bryant's piece:
"The explanation can be summarized in one word: over-expansion. . . . The industry didn't seem able to learn from its mistakes, in part because it was dominated by such big egos. . . . Now, however, the big airlines seem to be mending their ways. Stung by their recent disastrous run and taken over in many cases by a new crop of chief executives more in tune with the sober-minded 1990s. . . . 'It's not a testosterone-driven industry any longer,' said Gordon Bethune, chairman of Continental Airlines. 'Success is making money, not in the size of the airline.'"
"Sober-minded 1990s?!" The financial powers which dominate the Wall Street market today, are frantic madmen, for whom next week is a long-range investment. They are so obsessed with zeal for quick profits from the wildest forms of speculation, that they make the speculators of the Seventeenth-Century Dutch tulip bubble seem sober citizens, by comparison. As the *Times* should know, the airlines’ “sober-minded 1990s” are typified by the fact, that a certain well-known company was run, not for operating profits, but for the anticipated financial capital gains of a highly leveraged, purely speculative price of its traded stocks.

---

¹. The New York Times, on Jan. 6, 1880, wrote that Edison's electric light could never compete with gaslight, and on Jan. 16 quoted a "noted electrician" that "Every claim he makes has been tested and proved impracticable." On Dec. 10, 1903, the Times editorialized against Samuel Langley's experiments in heavier-than-air flight, less than a week before the Wright brothers' success at Kitty Hawk, which the Times blacked out. In a Jan. 13, 1920 editorial, the Times denounced Robert Goddard's experiments in space travel.
What ruined the U.S. and other nations’ major airlines during the past eighteen years,² is a combination of four factors: 1) deregulation;³ 2) the unchecked, 1982-1996 binge of “takeovers” of airlines (and other industries) under the “skull and crossbones” guidon of “shareholder values”;⁴ 3) the impact of the post-1987 transformation of the world’s financial system into a casino economy;⁵ and 4) the net collapse of net physical income of the economy, by about half, as measured in terms of market-baskets of infrastructure, agriculture, industry, and households, per capita of labor-force, and per square kilometer of relevant land-area.⁶ Indeed, nothing has happened to the airlines (and trucking) industry, against which I, and others, at *EIR*, did not warn, in considerable detail, during the period from the 1978 introduction of deregulation under President Carter, through the period of my campaign for the Democratic Party’s 1980 U.S. Presidential nomination.⁷
Granted, there are precedents for the post-1978 records of the airlines from earlier parts of the post-war period.
During the 1966-1973 interval, I was teaching a one-semester introductory course in physical economy on several campuses around the northeastern U.S.A. There were three case-studies of speculative looting of infrastructure and industry, which I chose to emphasize to the students: the early 1950s looting of the New Haven railroad, under the direction of speculative raider Maginnis; *Wall Street Journal* writer Norman C. Miller’s *The Great Salad Oil Swindle*;⁸ and the mid-Fourteenth-Century collapse of the Lombard banking system. The *Times* has flunked that course: the deregulation epidemic of 1978-1996 belongs in the same dock with New Haven raider Maginnis and the “Salad Oil Swindle”’s Anthony de Angelis.
There were such pre-1971 precedents for the kinds of swindles which ruined our major airlines over the course of the 1980s and 1990s.⁹ The difference was, that, back during the 1950s and 1960s, even a *Wall Street Journal* reporter considered de Angelis’ swindle an embarrassment. The fundamental difference, between the ruinous post-deregulation period, from 1978-1979 onward, and the relatively more prosperous 1945-1966 U.S. post-war economy, is that during the earlier time, cases such as the Wall Street looting of the New Haven railroad and the “Salad Oil Swindle” stood in contrast to the prevailing rule in entrepreneurial practices of management of agriculture and most industry. Step by step, over the course of the 1966-1979 changes in policy-making axioms, real economic growth has become a lost art; the endemic tendency toward occasional financial swindles, of the earlier period, has become today’s rule of business and governmental practice.

---

2. The oil-price hoax of the mid-1970s did deliver an economic shock to the airlines, as to the transportation sector generally. However, as long as airline regulation was in force, the oil-price shock could have been absorbed.

3. *EIR*, March 29, 1996, “Case Study No. 1: Lorenzo, Deregulation Decimate the Airlines”; “Case Study No. 2: Destruction of the Rail Grid Leads to Accidents”; and “A History of the Push for Deregulation.”

4. Op. cit., “Daschle Proposes to Bring Back the Entrepreneur.” See also *EIR*, Jan. 1, 1990, “Junk Bond Collapse Triggers Leveraged Blowout of Financial System,” p. 30.

5. *EIR*, Oct. 23, 1992, “Casino Mondiale: A Swindle Runs the Monetary System.” See also *EIR*, Jan. 1, 1990, op. cit.

6. *EIR*, April 14, 1995, “NAM’s ‘Renaissance’ of U.S. Industry: It Never Happened,” by Christopher White; “U.S. Market Basket Is Half What It Was in the 1960s,” *EIR*, Sept. 27, 1996.

7. *EIR*, June 26-July 2, 1979, “Deregulation: The Road to Transport Chaos”; *EIR*, Sept. 15, 1981, “Deregulation Schedules U.S. Airline Service for a Return to the 1930s,” p. 7.

8. Norman C. Miller, *The Great Salad Oil Swindle* (Baltimore, Md.: Penguin Books, 1965).

9. Although the shift in U.S. policy, away from our republic’s traditional emphasis on capital-intensive, energy-intensive investment in scientific and technological progress, to a “post-industrial” utopianism, occurred during the second half of the 1960s, full-scale insanity in U.S. economic policy was not unleashed until the successive blows of the institution of the post-1971 “floating exchange-rate” international monetary system, and the mid-1970s oil-price shock.
Those changes in axioms of general economic policy-shaping, combined with the specifics of the deregulation mania, are what has bankrupted leading, formerly prosperous major airlines, again and again, throughout 1978-1996. There is not a single known case in which a major U.S. airline was thrown into bankruptcy without that airline being the victim of the same kind of monetarist sleight-of-hand common to the three case histories which I recommended, as examples of criminality, to the attention of my students back during 1966-1973.
If the *Times* were a competent financial analyst, it would have warned its readers, that airline deregulation, like the hostile takeovers of the 1980s generally, is a swindle which Vice-President George Bush et al. should not have been permitted to legalize. To parody that fabled New York City entrepreneur of the 1970s, “Crazy Eddie,” the newspaper’s economic “policies are insane.” Like today’s *Wall Street Journal*, the *Times* continues to push a form of economic cannibalism otherwise fairly describable as “shareholders’ socialism.”
**Economic cannibalism**
It may be fairly argued, that the Wall Street Republican “neo-conservatism” of today is the campus socialism of 1968: “Students Destroying Society (SDS).”
According to principles laid down by the leading Bolshevik economist of the 1920s, Yevgeni Preobrazhensky, the practice of 1980s raiders such as Michael Milken and Frank Lorenzo, and of the “derivatives” bandits of today, is a form of “primitive socialist accumulation”: running an industry, even an entire national economy, into the ground, as a source of relatively short-term profit for the speculator. The airline industry has been a victim of approximately eighteen years of the latest fad in slave-owner’s democracy, the “shareholder socialism” of “Contract With America.”
To clarify your understanding of this form of economic cannibalism, turn your attention to the new stage of global economic crisis, now erupting world-wide. Then, consider the mechanisms by which the shared “free trade” ideologies of the *Times* and *Wall Street Journal* made this crisis inevitable.
What the *Times*’s Bryant is defending, is the kind of “socialism” which put the East German government of Erich Honecker into its 1989 bankruptcy. It could happen to Wall Street, and the United States, very soon: whenever the current U.S. stock-market mimicking of the “Weimar 1922-1923 bubble” comes to its inevitable end.
Today, as a cold winter approaches, a menacing, infectious, popular social insurgency against Gingrich-like cutbacks has broken out in western continental Europe. A wave of political strikes is now endemic in Jacques Chirac’s France. A political mass-strike has erupted suddenly in Belgium, triggered by popular rage against a pedophile ring close to NATO circles. Once again, the social ferment in the eastern part of Germany is echoing the rumblings which led to the 1989 collapse of the old East German Communist state. The conditions in western Europe today are comparable to the combined economic and social crisis which led to the break-up of the old Soviet Union over the 1989-1991 interval. The present eruption occurs in the same time-frame that Gingrich-like policies are pushing Russia toward the point of some mighty social and political explosion.
To understand the immediate political implications of the *Times*’s present economic policies, one must recall the Czarist regime of 1916 Russia, or the French monarchy of 1789: the *Times* and *Journal* are sputtering the last, manic gasps of a deluded, and doomed “old regime.” *EIR* has described this process repeatedly before; look at the same process here from the standpoint of the individual industrial enterprise, or particular industry, such as the victim of the *Times* piece, the U.S. airline industry.
There are three “capital factors” which are decisive for determining the relative economic health, or morbidity, of a producer firm or industry. The first is the quality of the labor-force employed: the local communities’ accumulated “capital investment” in the culture, education, skills, health, and household standard of living of the households from which the employed labor-force is recruited. The second is the aging of its capital investment in plant, machinery, tools, and essential inventories. The third is the productive enterprise’s effective command over the relevant factors of technological attrition.
In all three of these areas, the key word is “control.” Does the firm, the industry (or farmer) have effective control over the needed improvements of quality, and availability, in its available labor-force? Does the firm have effective control over the refurbishing of the aging stocks of physical capital which it is depleting? Does the firm have effective control over the urgent refurbishing and advancement of its technological position? To lack that quality of effective control, is to increase the factor of risk accordingly.
Similarly, the firm’s performance also depends upon the quality of development of basic economic infrastructure in the vicinity of its operations: transportation, water and sanitation, power, and so on. Infrastructure is the capital factor of the economic environment; as infrastructure is relatively more poorly developed, costs are higher, performance is poorer, and all relevant factors of risk are greater.
Both of these sets of capital factors, are subject to the
general, physical-economic rule of thumb which this writer and *EIR* have identified in sundry earlier locations.
Take all physical factors of productive output and consumption, plus the factors of education, health, and science and technology services: determine the manner and degree to which a variation in productive potential is effected by increasing, or decreasing, the various elements of this content of the market-basket of consumption (by households, infrastructure, agriculture, industry, and so on). Measure this in terms of per capita of labor-force, per household, and per square kilometer of relevant area. The result is, that for any designated level of productivity, there is a level of market-baskets' contents which is required to ensure continued productive potential at that level. Call this "energy of the system."
Then, all of the output of those market-basket elements which is in excess of the required "energy of the system," may be termed "free energy." The unwasted portion of this excess, is the "net free energy."
Now, however, the normal effect of the investment of the "net free energy" is either to expand the existing productive, and related, operations in scale, or, to increase the capital-intensity of existing work-places. In both cases, the ratio of "energy of the system" per capita is increased. However, it is necessary that the ratio of "net free energy" to "energy of the system," as measured in per-capita of labor-force, and in relevant square kilometer of area, must not decrease, even though the "energy of the system" per capita is increasing.
That principle applies to the individual productive enterprise, to entire industries, and to the economy considered as an integrated whole. The only way in which this requirement can be satisfied, is through investment in scientific and technological progress. Scientific and technological progress is the only source of what might be termed "sustainable profit."
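As a purely illustrative aside (not from the text itself), the rule of thumb just stated can be put in toy numbers. The function name `free_energy_ratio` and all figures below are hypothetical, chosen only to show the stated constraint: as reinvestment raises the "energy of the system" per capita, output must grow enough that the ratio of "net free energy" to "energy of the system" does not fall.

```python
# Illustrative sketch in hypothetical units; "energy of the system" is the
# market-basket level required to sustain the current productive potential.

def free_energy_ratio(total_output: float, energy_of_system: float,
                      waste: float = 0.0) -> float:
    """Ratio of 'net free energy' to 'energy of the system'.

    free energy     = total output - energy of the system
    net free energy = free energy - wasted portion of that excess
    """
    free_energy = total_output - energy_of_system
    net_free_energy = free_energy - waste
    return net_free_energy / energy_of_system

# Period 1: output of 130 units against a required "energy of the system" of 100.
r1 = free_energy_ratio(130.0, 100.0, waste=5.0)   # (130 - 100 - 5) / 100 = 0.25

# Period 2: reinvestment has raised the "energy of the system" to 120 per capita;
# output must rise enough that the ratio does not decrease.
r2 = free_energy_ratio(152.0, 120.0, waste=2.0)   # (152 - 120 - 2) / 120 = 0.25

assert r2 >= r1  # the stated rule: the ratio must not decrease
```

On these toy figures the ratio holds constant at 0.25; had period-2 output grown to only 145 units, the ratio would have fallen to about 0.19, violating the stated requirement.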
Although the pre-1966 professional production manager usually did not understand the scientific principles governing scientific and technological progress, he (or she) understood the importance of such a principle of practice. Such managers understood, at least as rules of thumb, each of the principles of production management we have just summarized. The manager's executives and staff measured these factors in terms of bills of materials and process-sheets, showing the flow of the physical materials and labor activities, the work-centers, and so on, and also noted the prices and related costs of each such factor of the bills of materials and process-sheets. The competent manager also agreed with the trade-union representative that there is a relationship between the standard of community and household life of the available labor-force, and the potential productive powers of labor.
From the standpoint just outlined, a relatively precise definition of "economic cannibalism" can be supplied for purposes of setting broad policy-parameters. In those terms of reference, the accelerating degeneration of the U.S. economy during the recent quarter-century can be summarized as follows:
1. The unique source of macro-economic profit of an economy, its capital-intensive, energy-intensive investment in scientific and technological progress, has been suppressed. Respecting functional content, the requirements for a classical and scientific content of public-school and higher education have been depleted greatly during this period. The increase of class-size in schools, the reduced literacy of teachers, the lowering of standards of pedagogy, increased use of drill and grill, corresponding multiple-choice-questionnaire testing, use of personal computer terminals to replace cognitively essential teacher-student interactions, and increased ratio of class-hours to total hours, are typical of the degeneration of the quality of education, per teacher, and per student, at both the public-school and university levels. Similarly, the course content, in both public and university education, has been collapsed, such that it is not atypical that a secondary-school graduate of thirty years ago had a higher level of cognitive development and general literacy than university graduates today.
2. Where the modal standard of skilled industrial operatives and technicians was formerly the family household organized around a single principal wage-earner, two and three incomes per household are needed now to reach the physical standard of living that such a household formerly enjoyed. The difference in the standard of living of wage-earners is pure economic cannibalism: what Preobrazhensky identified as "primitive accumulation."
3. The non-investment in maintenance of public and private investments in basic economic infrastructure, is another source of economic cannibalism.
4. The physical aging of capital stocks, is a similar form of looting, with potentially catastrophic results.
5. The replacement of high-quality controlled technical and related services and sources of supply, by cheaper, less reliable contracted sources, is also economic cannibalism.
For approximately twenty years, since the oil-price shock of the mid-1970s, but, most emphatically since deregulation and hostile takeovers, the airline industry has been looted savagely by the economic cannibals of Wall Street, the Frank Lorenzos and Carl Icahns.
The aging fleets, and strained maintenance and air-traffic facilities, have been depleted to such a degree, that the economic cannibals now see the virtual elimination of all Federally regulated safety and maintenance standards, as the only way in which the economic cannibals of Wall Street can continue to enjoy a rewarmed meal from this industry.
What is the safety-conscious passenger's alternative to looted airlines? Even walking isn't safe any more.
Whole classes of patients are denied treatment with ‘managed care’
by Marcia M. Baker and Anthony K. Wikrent
Almost every day, you see news coverage of some individual or group in the United States—nurses, doctors, patients, patients’ relatives, etc.—announcing, “Managed care is harming people, but it’s here to stay. Let’s correct the (fill in the blank) abuse, and make it fair.”
But this is impossible.
In fact, the thousands of instances of wrongdoing in the era of “managed” health care, stem not from mere coincidental perpetration of abuses, but, rather, from what is characteristic of the managed care system. Under the managed care principle, medical services are to be limited in a way to maximize profit-taking by designated interests, at the expense of the person, the economy, and the country. In practice, this means that managed care kills.
The way to deal with the rash of managed care “abuses,” is to mobilize to roll back the whole system as soon as possible, in the interests of the public good.
The practices of denying and restricting treatment under managed care are so systematic that they constitute crimes under the Nuremberg Principle, under which the U.S. government tried Nazi officials and doctors in trials beginning in 1945. The Tribunal established the doctrine of "knew, or should have known," governing the culpability of officials whose decisions result in harm and atrocities.
In recent weeks, we have printed biographical case studies of individuals harmed, and brief reports on whole categories of patients harmed, by managed care. We continue that coverage here.
Mental health patients
The limitation, or denial, of care to subgroups of mental health cases has become so widespread that remedial actions have been prompted in several states, and in thousands of court cases. For example, the consumer-affairs agencies of California and Rhode Island have begun investigations of how managed care companies code and handle mental health cases.
From the 1980s to the present, most HMO plans have cut back on the number and type of mental health treatment services formerly covered by fee-for-service, or other means. This was accomplished through outright cuts, and through pressure on the medical staff and facilities involved. As of late 1995, Dr. Russell Newman, who deals with issues connected to clinical practice for the American Psychological Association, said, “We’re starting to see clients up in arms. People are starting to realize there’s a conflict of interest for those who are deciding how much therapy they can get.” Dr. Newman was referring to the HMO profits coming from such practices as limiting sessions with psychotherapists, limiting hospital stays, and so forth.
“It violates the Hippocratic Oath,” stated Dr. Robert Feder, staff psychiatrist and medical director of the partial hospitalization program at Optima Health Catholic Medical Center in Manchester, New Hampshire, in statements given to the Oct. 13 Boston Globe. Dr. Feder said, “That Oath calls for us to do everything possible to help a patient, not everything possible to reduce care for a patient so an insurance company can make bigger profits. . . . Let’s put it this way. Managed care companies seem to be taking more risks with patients’ lives than we as clinicians feel comfortable doing, especially when it comes to length of in-patient stays.”
Clinicians point to patients hospitalized in a suicidal state, who are then ordered by managed care to be discharged only a few hours after being stabilized.
In 1993, the RAND Corporation conducted a study tracking 617 patients for two years, who were treated for depression by different kinds of health insurance plans. In the more serious cases, individuals did worse under health plans that imposed fewer treatment sessions because of cost limits.
Mary Hurtig, policy director for the Southeastern Pennsylvania Mental Health Association, told the Jan. 24, 1996 New York Times, “A major profit center for health plans has been mental health. For example, I know of one large HMO that gets $35 per month for mental health treatment for its members who qualify for Medicaid. But they subcontract their mental health care to a managed-care firm at a rate of $14 per month. The result is that some vulnerable, very ill people are getting badly hurt by arbitrary denial of care.”
Mentally disabled children
One entire group facing cut-off of Social Security benefits is mentally disabled children. There are about 900,000 such children in this category nationwide. The Social Security Administration has announced that it expects 185,000 of these children to be cut off from benefits by next July.
Before a 1990 Supreme Court decision, such benefits, known as the Supplemental Security Income (SSI), and usually amounting to about $400 a month, were not available for children suffering from such childhood conditions as spina bifida, Down's syndrome, and autism. Then this was changed, following a Supreme Court finding that thousands of children had been illegally denied SSI assistance, because their specific disorder had not been included on a list of eligible disabilities. The eligibility requirement was changed, making it contingent on an expert determination of whether a child could function at a level appropriate for his or her age, based on reports from teachers and other child care providers, and by a Social Security physician.
As a result of the broadening of eligibility following the 1990 Supreme Court ruling, the number of children receiving benefits tripled, from 300,000, to over 900,000 presently.
Such rapid growth in an "entitlement" program—though the increase only amounted to less than $3 billion annually—became a target for the Conservative Revolution. In 1994, Rep. Jim McCrery (R-La.) declared in congressional testimony that many of the new beneficiaries had been coached by their parents to *fake* the symptoms needed to become eligible for benefits. Bob Dole, then senator from Kansas, chimed in, declaring that "children's SSI needs a tune-up." A national hot line was set up, with teachers and other child care providers instructed to call to report any children they believed had been coached to feign mental disability.
The minimal hot line results show how venal the accusations were. From September 1994 to July 1995, only 230 calls were made to the hot line. Of those, only about half involved children actually receiving, or applying for, benefits. Of the 115 or so cases thus investigated, the Social Security Administration recommended further investigation in 83 cases. That is one *possible* case of fraud for every 7,228 recipients. One wishes that such a record could be established for Congress!
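The fraud-rate arithmetic above can be checked directly (a quick sketch; all figures come from the text, and the 600,000 new recipients is inferred from the stated growth from 300,000 to over 900,000):

```python
# Hot-line figures cited in the text (September 1994 to July 1995).
calls = 230
relevant = calls // 2        # roughly half involved actual recipients or applicants
flagged = 83                 # cases SSA recommended for further investigation

# New recipients attributed to the post-1990 broadening of eligibility.
new_recipients = 900_000 - 300_000

# One *possible* fraud case per how many new recipients?
recipients_per_case = new_recipients // flagged
print(recipients_per_case)   # 7228, matching the "one per 7,228" figure
```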
Despite the minute amount of possible fraud, Congress pressed ahead, and, in the recently passed welfare "reform" legislation, ordered the Social Security Administration to tighten the eligibility requirements for children's SSI. Under the new law, mentally disabled children are eligible for SSI, only if they suffer from a "medically determinable impairment which results in marked and severe functional limitations" that are potentially fatal, or which last more than one year.
The new law also directs that 300,000 of the nearly 1 million children receiving benefits, be reevaluated. Melinda Bird, managing attorney with Protection and Advocacy Inc., a disability rights law firm in Los Angeles, told the *Los Angeles Times* on Oct. 17, "It's part of the Social Security Administration saying we basically have a goal of eliminating people off of our rolls. It's more a cost goal, than based on any evidence that these people aren't disabled."
**Pregnant immigrants**
The same welfare reform law also prohibits local government assistance, including medical care, for illegal immigrants, unless a state specifically passes a new law providing such aid. In California, Gingrichite Governor Pete Wilson immediately announced that state assistance for prenatal care for illegal immigrants would be cut off, saying the state could not afford the $69 million cost.
A number of medical associations have attacked Wilson's deadly cuts. "Cutting prenatal care for pregnant women will cause unwarranted suffering, avoidable birth complications, smaller babies, and needless disability," the Los Angeles County Medical Association president, Dr. Brian D. Johnson, told the *Los Angeles Times* on Oct. 17.
Dr. Jack Lewin, executive vice president of the California Medical Association, which represents 34,000 physicians in the state of California, said Wilson's cuts "will cause an epidemic of low-birth-weight babies, and expectant mothers presenting late to emergency rooms. This is absurd public policy for the state."
Lisa Kalustian, a spokesman for Wilson, replied, "What we're saying is that people who are in this country illegally, who broke the country's laws, should not have this care paid for by California taxpayers. They should be getting aid in their own countries."
Doctors throughout the state are warning that it is a sick fantasy to believe that pregnant women will go home to seek proper care during their pregnancies. Instead, the women simply will not seek, and will not receive, proper prenatal care. And treatments for infants born with health problems that could have been prevented by prenatal care, easily cost far more than prenatal care.
"We are attacking one of the weakest, but most important, links in our society—that is, the mother," said Fred Quevado, former executive director of the Philippine Medical Society of Southern California.
**'Mercy killing' of the poor and elderly**
A recent study in the *Archives of Internal Medicine* shows that the poor and elderly have well-founded grounds for fearing that they will be targets for 'mercy killing' by the euthanasia movement. A survey was taken by researchers at the Duke University Geriatric Evaluation and Treatment Clinic, in Durham, North Carolina. A group of 168 elderly patients (average age, 76) and their relatives were canvassed on whether they favored physician-assisted suicide for the terminally ill. Less than 40% of the elderly patients at the clinic said that they agreed. But close to 60% of their relatives said they were in favor.
Dr. Harold Koenig, the research director, said, "These findings are provocative and of great concern, because the frail elderly, and poorly educated and demented members of our society, have little power to influence public policy that may affect them."
Petroleum
Iraq, China will develop the al-Ahdab oil field
The *Middle East Economic Survey* reported on Oct. 14 that Iraq and China signed an agreement last August to develop the al-Ahdab oil field in central Iraq. The deal, signed by the Iraqi oil minister and senior officials of the China National Petroleum Corp., will become effective when it receives final approval from Iraq's President Saddam Hussein, the report said. The field's production capacity is preliminarily put at about 80,000 barrels per day.
"This is the first agreement to be initialed by the Iraqi oil authorities, who have been carrying out upstream talks with foreign firms during the past five years," the *Survey* said. "The fact that the Ministry of Oil has decided to propose a production-sharing agreement to the political authorities is a significant breakthrough in the prolonged negotiations with foreign firms and is an important challenge to the UN sanctions regime."
The *Survey* added that "the Iraqi oil authorities have held upstream talks with over a score of European, Asian, Arab, and even some U.S. firms, but no agreements have been concluded yet."
Infrastructure
Mubarak launches great water project in Egypt
On Oct. 15, Egyptian President Hosni Mubarak led an "historic celebration" of the diversion of waters from Lake Nasser into the Toshka overflow canal. As reported in the Oct. 16 London *Financial Times*, in an article entitled "Mubarak's Historic Moment Aims to Make the Desert Bloom," this was the "first time the reservoir behind the Aswan High Dam reached more than 178 meters since its construction in 1964." This was due to heavy rains in the Ethiopian highlands.
The Toshka Depression, 6,000 square kilometers, is 30 miles away from Lake Nasser. It is to drain 4 billion cubic meters per year, which will allow for reclamation of land not now cultivated. Mina Iskandar, chairman of the Aswan High Dam Authority, was quoted saying, "The increase of water level means that Egypt, for the next seven years, will be able to draw its annual share of 55.5 billion cubic meters of water, even if subsequent annual floods are low."
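The scale of the diversion can be cross-checked against Egypt's quota (a rough sketch; the 4-billion-cubic-meter figure is treated here as an annual rate, since a daily draw of that size would dwarf the Nile's entire flow):

```python
# Figures from the text: Egypt's annual Nile share, and the Toshka diversion,
# both in billion cubic meters (bcm).
annual_quota_bcm = 55.5
diversion_bcm_per_year = 4.0

# A literal "4 billion cubic meters per day" would be physically impossible:
if_taken_daily = diversion_bcm_per_year * 365    # 1,460 bcm per year
assert if_taken_daily > annual_quota_bcm         # far exceeds the whole quota

# As an annual figure, the canal draws a modest share of Egypt's allocation.
share = diversion_bcm_per_year / annual_quota_bcm
print(f"{share:.0%}")   # about 7% of the annual quota
```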
According to Egypt's daily *Al Ahram*, Mubarak characterized the project as a road to the 21st century. With the newly available water supply, Egypt will be able to increase the amount of cultivated terrain from 4%, to 25%, or 500,000 feddans (a feddan is slightly larger than an acre). This expansion of agricultural production will require further infrastructure outlays; apparently, the project involves the creation of another fertile valley, in addition to that of the Nile, running through the desert to the west of the river. A rail line is planned to reach the area of the Toshka Depression, where cities will be built. Thus, an entire economic region is being developed.
Mubarak stressed the fact that this infrastructure project could not be built by the private sector, but he welcomed private investment in agricultural programs. The irrigation minister reportedly ridiculed ecological arguments, including about drought, the ozone hole, and global warming. He emphasized that people living in the Nile Valley know that there have been cycles of floods and droughts for thousands of years, but this kind of project shows that the cycle can be broken.
Science
Iron fertilization of the ocean holds promise
A potential manifold increase in the world's fisheries and a transformation of the biosphere is possible, based on the results of the Iron Ex II oceanographic experiment designed to test whether dumping iron into the oceans would increase the amount of plankton, the Oct. 10 issue of *Nature* magazine reported.
The experiment was designed to test whether "global warming" could be ameliorated by reducing CO₂ levels in the atmosphere through an increase in CO₂-absorbing phytoplankton, but what it really demonstrated is the ability of man to increase biological activity of the oceans and transform the biosphere.
The experiment consisted in seeding a 25-square-mile area of the Equatorial Pacific with 1,000 pounds of ferrous sulfate, a compound of iron thought to be most common in wind-borne dust deposited naturally on surface waters. Trace amounts of iron seem to be essential for numerous cell activities, including the manufacture of chlorophyll and the processing of nitrates.
The scientists, representing 13 institutions in the United States, England, and Mexico, chose a patch of ocean about 800 miles west of the Galapagos Islands that is nearly a "desert," in terms of living organisms. The iron "fertilization" led to the growth of more than 2 million pounds of phytoplankton in a week, a 30-fold increase. Kenneth S. Johnson of Moss Landing Marine Laboratories in California reported, "We had an explosion of phytoplankton that's almost biblical in proportions; the water went from clear blue to this green, soupy-looking mess."
South Africa
IMF plan draws attacks during Camdessus visit
International Monetary Fund Managing Director Michel Camdessus praised the macro-economic plan that the South African government adopted on June 14, after a meeting with President Nelson Mandela in October. But Camdessus, on his first official visit to South Africa since Mandela came to power, was greeted with severe criticism. The Congress of South African Trade Unions (Cosatu), allied with the African National Congress-led government, blasted Camdessus's presence. Cosatu Deputy General Secretary Zwelinzima Vavi said, "The IMF is not a friend of the working people or the majority of the South African people... All their recommendations and policies have caused disasters in many developing countries in Africa."
Camdessus attempted to assure a hostile committee of legislators that South Africa, and not the IMF, would design any package agreed to, i.e., they could pick their own poison.
Camdessus claimed that because of IMF interventions, economic growth in Africa now averaged 5%, while South Africa could not hope for more than 3.5% growth this year. The reality is that African nations have been so devastated by IMF policies, that their continued existence is in doubt.
**Trade**
**WTO demands free trade for poorest countries**
The World Trade Organization announced on Oct. 18 that it was calling a meeting of ministers from the world’s 48 poorest countries, to be held in Geneva on Nov. 13-15, apparently to convince them that if they want to partake of the benefits of world trade, they must ease restrictions on foreign direct investment (FDI). The announcement followed a meeting between WTO Director General Renato Ruggiero, and Rubens Ricupero, secretary general of the United Nations Conference on Trade and Development (UNCTAD), which is helping to set up the gathering. UNCTAD has been working increasingly closely with the WTO, which is not formally part of the UN system.
The 48 countries are the Less Developed Countries (LDCs), defined by the UN as countries with per-capita income less than $600. The total LDCs’ population is more than 550 million. In 1995, according to UNCTAD, the LDCs’ share of world trade was less than 0.4%; LDCs received only 2% of the global flow of foreign direct investment.
In the past, these proposals have been criticized by LDCs and other developing countries, which have insisted that they must maintain their powers to steer FDI according to national development policies. Many Asian and African developing countries also argue that discussion of investment issues should be pursued in UNCTAD, and not in the WTO, which sets binding rules for all its members.
Meanwhile, the trade secretaries of about 20 developing countries in Asia and Ibero-America scheduled a three-day planning meeting in New Delhi, in preparation for WTO negotiations, and explicitly excluded Singapore, the Sept. 23 *Asia Times* reported. The immediate issue is Singapore’s “doubtful stance” on one of the most contentious issues, the Western demand for a “Multilateral Agreement on Investment,” which would dictate against any sovereign limits on foreign ownership of companies, and use the WTO to enforce this and similar colonial rights. S.P. Shukla, India’s former ambassador to the General Agreement on Tariffs and Trade, said that adoption of such an agreement would involve loss of sovereignty for national governments.
**Finance**
**Belgium loses money in derivatives speculation**
On Oct. 15, Belgian Finance Minister Philippe Maystadt testified before a Parliamentary commission, to explain how and why the government engaged in international currency speculation over the past five years, which has resulted in unrealized losses of over $1 billion.
An op-ed in the Oct. 15 *Wall Street Journal Europe* said that the operations were part of what the government termed “active management” of the huge Belgian national debt. According to court records, between September 1989 and April 1992, the Belgian Treasury entered swap contracts, borrowing in deutschmarks (a strong currency) and lending in lira (supposedly weak), with the interest-rate difference used to offset the cost of the Belgian debt. But, in September 1992, the lira collapsed after speculative attacks by George Soros and others, falling 30%. As a hedge against such losses, Belgium sold “put” contracts in U.S. and Canadian dollars.
The outcome was that every side of the complex bet went against the Belgian Treasury, resulting not in a nice gain, but in an added $1 billion of public debt. But because it was the government, and it makes its own rules, the losses never showed up on the public budget until they were discovered in October. The case also raises questions of “trading on government privileged information for profit.”
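The mechanics of the losing carry trade described above can be sketched with purely illustrative numbers (the notional amount and interest rates below are hypothetical, not the actual Belgian positions; only the 30% lira fall comes from the text):

```python
# Borrow in a strong currency (DM), lend in a weak one (lira): the interest
# differential is income -- until the weak currency devalues and the lira-side
# assets repay far fewer DM. All figures hypothetical except the 30% fall.
notional_dm = 1_000_000_000
dm_rate_pct, lira_rate_pct = 8, 12    # hypothetical borrowing / lending rates
devaluation_pct = 30                  # the lira's fall after September 1992

carry_gain = notional_dm * (lira_rate_pct - dm_rate_pct) // 100
fx_loss = notional_dm * devaluation_pct // 100
net = carry_gain - fx_loss
print(f"carry gain {carry_gain:,} DM, fx loss {fx_loss:,} DM, net {net:,} DM")
# One year of interest pickup (40 million DM) is wiped out many times over
# by the 300 million DM currency loss, leaving a net loss of 260 million DM.
```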
**Briefly**
**THAILAND** and Myanmar have completed studies to integrate a deep-water seaport with regional industrial development, the Sept. 5 *Bangkok Post* reported. The port, at Tavoy, on Myanmar’s Andaman Sea coast, is also the endpoint of a natural gas pipeline being constructed to Bangkok, and would turn the route into a development corridor.
**CHINA** will build 120,000 homes in Nigeria, the Nigerian state news agency reported Oct. 19. Works Minister Abdulkareem Adisa said, “The agreement . . . is within the framework of technical cooperation among developing countries in the spirit of south-south cooperation.”
**MALAYSIA** launched the $5.5-billion Bakun Dam in Sarawak on Oct. 2. By 2002, the 2,400-megawatt project will provide power for development of the mostly primitive island of Borneo, and send power through the world’s longest submarine cable to peninsular Malaysia.
**BANGLADESH’S** Prime Minister Sheikh Hasina said on Oct. 11 that the nation will tie into the Eurasian rail projects. This is “required for the economic interest of our country,” she said, the Oct. 12 *Daily Star* reported. “We can’t afford to remain disconnected with other countries in this modern world.”
**ASIAN FARMERS** defended agricultural supports against free trade demands, at a forum in the Philippines in October. Mitsugi Kamiya, president of the Food and Agriculture Research Center in Japan, said, “Rural folk . . . cannot survive in a free trade arena if they don’t get enough support from their governments,” especially investment in infrastructure and research.
**EUROPEAN** Commission President Jacques Santer attacked EU finance ministers for “killing” five of 14 Trans-European Net (TEN) projects and threatening the others by refusing to authorize an extra $1.25 billion, at a meeting on Oct. 14.
Bush’s ‘democracy’ lobby instigates breakup of Russia
by Roman Bessonov
Part 4 of a series on “The Anti-Utopia in Power” in Russia. The author subtitled this section, “How to Build a Bomb.” Parts 1-3 appeared in EIR on Sept. 16, Oct. 4, and Oct. 18.
In the late 1970s and early 1980s, the two world superpowers, the United States and the Soviet Union, were economically developed enough to have charted a policy for the whole world, based on the peaceful use of advanced technologies, the joint exploration of space, development of infrastructure, reform of modern education, and overcoming backwardness and poverty in the Third World.
The most popular genre of Soviet literature, in those years, was science fiction that depicted the future world as a community of strong and brave people. The heroes of these novels were neither studying Marx and Engels, nor exporting “proletarian” revolution to Ibero-America. They were building cities on new planets and growing gardens in the Sahara, conquering wild nature and making it serve Man, with a tremendous passion of selfless creativity. One book, perhaps the most popular in my youth, was titled *People Like Gods*. It expressed a view directly opposite to the misanthropic image of “people like animals,” pushed by the House of Windsor through supranational institutions like the United Nations, as well as in the permanent bureaucracies of both the United States and the Soviet Union.
The resonance of Lyndon LaRouche’s International Development Bank proposal (1975) was tremendous within the Non-Aligned Movement and elsewhere in the Third World, because leaders of those nations hoped to enter an era of economic development. But such perspectives collided with the poison of post-industrialism and the “information age,” which had already become a weapon of the transnational cartels that sought total financial control of the world.
Inside the Soviet Union, the interests of international petrochemical giants, for example, matched the corporate appetites of the Soviet petroleum bureaucracy. The resultant shift of the lion’s share of investments into oil and natural gas contributed to the stagnation of the country’s technological development, already in the Brezhnev period (1965-82).
The Soviet economy’s stagnation, as it became dependent on petroleum export revenues, coincided with the end of the fixed-parity currency system in the Western world (1971), and the beginning of the subsequent upsurge of financial speculation, ever more decoupled from the real economy. The Russian side of that global process of sacrificing real economic development to financial priorities, helped set the stage for the final corruption of the Soviet elite and the collapse of the U.S.S.R. (1989-91), but that collapse did not bring freedom to the independent states. They found themselves in another prison, in the deadly grip of the international financial institutions. Today, it is difficult to still be glad about the end of the Cold War, because Russia is totally destabilized, its military technologies in the hands not of space explorers, but of organized crime.
The last stage of the destructive processes which led to the miserable result we can witness today, began under the world’s domination by the “Gang of Three”: Margaret Thatcher, George Bush, and Mikhail Gorbachov, in 1988-91, the period when Gorbachov and Bush proclaimed the anti-nation-state “new world order.”
**Gorbachov’s ‘new thinking’**
The “new thinking” of Mikhail Gorbachov, who took office as General Secretary of the Communist Party of the Soviet Union (CPSU) in 1985, initially consisted of two interconnected parts: “democratic socialist” changes in ideology and economy, rooted in the concepts of old Bolshevik Nikolai Bukharin, and post-industrialist environmentalism, pushed under the cover of “repentance” (for the crimes of the Soviet past) and “humanism.” The latter was a Soviet version of the self-fixation of Baby Boomers in the United States: Gorbachov’s propaganda campaign for the “human factor” in society and the economy, diverted people from thinking about common values, about the goals of the country’s economic development, to concentrate on themselves, their biology, physiology, and physical circumstances.
Criticizing the bureaucracy (in order to initiate purges that improved his position), Gorbachov blamed high state and industrial functionaries for damaging people’s health in heavy industry, with poor environmental protection. But the oil *nomenklatura* retained and enhanced its privileges, according to the Bukharinite formula, “Enrich yourself,” which was applied in such a way as to encourage officials to run semi-legal businesses. The petroleum bureaucracy achieved an advantageous position from which to “privatize,” later becoming a part of the world elite. In the late 1980s, this part of the *nomenklatura* controlled the regions where oil was extracted and refined.
Before becoming a powder keg, the Caucasus, especially Chechnya, was an oil barrel. The oil men in this area were probably the first to realize that the trappings of the state, especially tax obligations, were nothing but an obstacle to their private and clan interests. Outside interests, those centered in London, as well as associated U.S. companies like Amoco, could exploit these private appetites for their own advantage, here and in other regions. As elsewhere in the world, the old instruments came into play: ethnicity, pagan mythologies, and environmentalism.
Not a one of those political and cultural currents failed to receive funding from the U.S. National Endowment for Democracy, and its sub-groups. Over and above those cases in Central Asia, where NED-approved groups are embroiled in the exploding “cockpit of war” around Afghanistan, the association of the NED and its subsidiaries with movements that have contributed to the fragmentation of Russia, fuels hostility toward the United States on the part of many patriotic Russians. A letter published in Nezavisimaya Gazeta on Oct. 19, attributed to “the collective of officers of the General Staff,” gave voice to such passions: Denouncing the “trans-Atlantic sponsors of the Kremlin,” the letter alleged a U.S.-instigated design “to crush the system of military leadership today, [which] means that, tomorrow, impoverished people will, on the pretext of a deterioration of the internal Russian situation, call in NATO forces under the UN flag to come help, and the latter will take control of the administrative centers and all military-strategic facilities.”
The collapse of communism and the inability of the 1989-91 “democratic reformers” to find any formula by which to unite the nation, aside from primitive neo-liberal rhetoric, left the field open for synthetic, as well as spontaneous, particularist ideologies. The soil (especially the soil rich with oil) was well prepared for classic British Intelligence manipulations. Mixed up with human rights rhetoric, and fueled by great sums of money, environmentalism, especially under pretext of the “protection of indigenous populations,” was to play a key role in a multitude of ideological and parareligious left-right games, which promoted a process of destabilization throughout Eurasia.
I. The ‘separatist’ card in Russian politics
Some years before the collapse of the CPSU and the Soviet Union, when Gorbachov transformed the official ideology into a vague mixture of “pink and green” conceptions, he opened the gates to a resurgent Orthodox culture, while permitting all sorts of formerly forbidden samizdat literature to be printed, at first only for the limited readership of the journal Naslediye (Heritage), of the Soviet Culture Fund. It was under the auspices of this state fund, headed by Raisa Gorbachova, that George Soros launched his activity in the U.S.S.R. Soros was able to make friends with leading intellectuals of the “left” and the “right,” such as the historian Prof. Yuri Afanasyev, future head of the “radical liberal” Democratic Russia movement, or the Slavophile writer Valentin Rasputin. Out of this milieu came the separatist card, which was to be played with force in the Russian political battles of the 1990s.
Afanasyev developed Gorbachov’s theme of “repentance,” by insisting that the Soviet republics should not be forced to remain in the U.S.S.R. His motto was, “For your freedom, and ours!” The idea of a Declaration of Independence of Russia itself from the U.S.S.R., meanwhile, came from the Russian nationalist Rasputin, who argued that the other republics were “eating Russia’s bread.” Rasputin especially attacked the peoples of Central Asia and the Caucasus.
Both these lines in public thinking, the “radical liberal” and the “nationalist,” had the backing of high officials in the CPSU ideological apparatus. The support for both sides, resembling a great ideological game, evidently originated with Aleksandr Yakovlev and some younger officials from the “thaw” generation, such as Aleksandr Degtyaryov, deputy head of the Central Committee’s Ideological Department.
The Russian opposition of the early 1990s was not quite fair when it accused Boris Yeltsin of “destroying the Russian state.” Gorbachov pointed the way, with the policies he summarized in his famous, much-ridiculed phrase, “The process has begun.”¹
The power of the central Soviet administrative bureaucracy was significantly undermined by official or semi-official protection for the first generation of cooperative proprietors and other shadow economy operators. The bureaucracy adapted to the new situation, spawning semi-private commercial operations out of the existing management structures; these would later be “institutionalized” by Russian Premiers Ivan Silayev and then, in 1992, Yegor Gaidar.²
As the central economic structures abandoned their management duties in favor of these private economic projects, the leaderships of the Soviet Socialist Republics (S.S.R.), Autonomous Soviet Socialist Republics (A.S.S.R.), and provinces were left with only one weapon for pressuring Moscow. They used the advantages of their respective economic specializations (in the Soviet system, many industries were concentrated in one or a few regions) as leverage for demanding privileges. The famous miners’ strikes of 1989, effectively used by the Russian politician Yeltsin against Soviet President Gorbachov, could only have happened with support from the Ural elites, who were seeking a more privileged position in the country.
The Ural elites have a tradition of regionalist ambitions, reaching back at least two and a half centuries, which was expressed in many plans for a separate Ural Republic, even during periods of strong central leadership in Russia. The famed industrialist Demidov, granted privileges by Peter I in
---
1. Later, when Yeltsin supporters were mocking Gorbachov, one Supreme Soviet deputy completed the phrase: Tualeta ne nashol, a protsess uzhe poshol, which means, “He hasn’t found a toilet, but the process has already begun.”
2. Roman Bessonov, “IRI’s Friends in Russia” (Part 1 of this series), EIR, Sept. 6, 1996, presents the notion of “institutionalization,” developed in Russia by Vitali Naishul, according to which the “informal,” or criminal economy should be promoted to a central role in the national economy—“institutionalized.”
the early eighteenth century, illegally issued his own Siberian currency. There were similar developments in the Civil War period: in 1918, Siberia and the Urals became the headquarters of the White Russian troops opposing the Bolsheviks.
In 1991, one of the regional concerns established in Sverdlovsk-Yekaterinburg³ started issuing "Ural francs." In the summer of 1993, Boris Yeltsin was effectively forced to support the project for creating a Ural Republic; he granted special raw-materials export privileges to the Sverdlovsk clan, which had brought him to power. Yeltsin needed their political and financial support in his drive to crush the Russian parliament, the Supreme Soviet. But when, after Yeltsin prevailed in the October 1993 massacre in Moscow,⁴ the Sverdlovsk provincial soviet dared to adopt a Constitution of the Ural Republic, Yeltsin dissolved it, along with all the other regional legislative bodies in the country. In 1995, the rebellious regional leader Eduard Rossel ran for the Sverdlovsk governorship, and Yeltsin again felt obliged to support him, even though Rossel ran against and defeated the candidate of Our Home Is Russia, the party set up as the "party of power."
In the framework of the Soviet Union, other centers of regionalist ambitions were the Caucasus (Azerbaijan, Georgia, and Chechnya) and the Volga (Tatarstan and Gorky Province, now Nizhny Novgorod). Several powerful elite groups, or economic clans, had their home in Ukraine, concentrated in Dnepropetrovsk, Donetsk, Odessa, and Simferopol. The last all-U.S.S.R. "congress" of organized crime was convened in the south Ukrainian industrial city of Dnepropetrovsk in the late 1970s. Still, the majority of the Russian "thieves-in-law," or "godfathers," originated from the so-called Caucasus criminal brotherhood; their next generation grew up in the Moscow suburbs.
**A desert with casinos: the case of Artyom Tarasov**
The neo-Bukharinist shift in economic policy, introduced in order to boost the "living creativity of the people" (the theme of Gorbachov's speeches in London in December 1984, when he received the accolades of Mrs. Thatcher, just months before his elevation to the post of CPSU General Secretary), included a relaxation of responsibility for economic crimes. The criminal revolution made its first headway under Gorbachov.
The mass media, in those late-1980s days, promoted certain young adventurers as heralds of the "new thinking" in the economic realm.
One such herald was Artyom Tarasov from Lyubertsy, an industrial town in Moscow Province—a place as famous for its organized crime traditions as Dolgoprudny, Balashikha, and Solntsevo. In his interviews, Tarasov emphasized that he was half-Armenian. Tarasov's first co-op, called Tekhnika, was co-founded by a prominent local criminal, Vladimir Ponomaryov, who had made his fortune reselling stolen cars. Tekhnika bought and resold computers. The first criminal investigation of Tarasov, for tax fraud, was halted because a relative of U.S.S.R. General Prosecutor Oleg Soroka was involved in his business. Then Tarasov offered his services to high officials of the Yeltsin leadership in Russia, making friends with Academician Tikhonov, head of the Cooperatives Union. His new structure was called Istok (which means "source," or "outflow").
In 1989, the Russian leadership launched a highly publicized program called Crops-90, under which Russian peasants sold their crops for vouchers, later exchangeable for consumer goods. Some crops were traded for oil (40 million metric tons of it!), to be sold abroad. Tarasov won exclusive rights to handle these transactions. At the time, the state's monopoly on foreign trade had been loosened enough to allow semi-private operations; the thing to do was to found a "foreign-trade economic association." Along with his association, also called Istok and co-founded by the same Ponomaryov, Tarasov established a Russian-British joint venture, with an account in Paribas-Monaco Bank.⁵ The money from the oil sales never returned to Russia.
In the summer of 1990, Gorbachov's police were about to arrest Tarasov. The obstacle was his parliamentary immunity as a deputy of the Russian Supreme Soviet, to which he had been elected earlier that year with assistance from the Washington-based Krieble Institute of the Free Congress Foundation.⁶ Tarasov escaped arrest, and emigrated; he entered Britain on the passport of a citizen of the Dominican Republic. (Such passports could already be purchased from Moscow criminal firms.)
In London, Tarasov set himself up to assist Russian businessmen who had escaped prosecution in Russia, and founded a special institution for harboring flight capital. Evidently, his service to the Yeltsin "reformers" was rather significant, since in November 1993 he easily returned to Russia, on the same passport, won election to the State Duma (parliament), and took a seat on the Duma Commission for the Supervision of Law Enforcement Agencies—still being a citizen of the Dominican Republic! In 1995, he ran for the Duma as a top environmentalist, one of the leaders of the "ecological" election bloc, Kedr.
In summer 1994, Artyom Tarasov gave a remarkable interview to Radio Liberty, on Russian statehood. In his view, Russia consists of a large number of regions with quite differ-
---
3. Yekaterinburg was called Sverdlovsk in the Soviet period. It has reassumed the old name, but the surrounding area is still Sverdlovsk Province.
4. On Sept. 21, 1993, Yeltsin abolished the Russian Constitution and the parliament, the Supreme Soviet. The Supreme Soviet's resistance was ended on Oct. 4, with many casualties, when Army tanks shelled its headquarters.
5. According to an unconfirmed report, published in the not-always-reliable Russian weekly Zavtra in 1994, another of Tarasov's partners in this deal was the Swiss oil magnate (and fugitive from U.S. tax evasion charges) Marc Rich.
6. *EIR*, Oct. 4, 1996. Part 2 of this series reported on the Krieble Institute (p. 55), and its help to Tarasov's campaign (p. 57).
ent specializations; these regions are “self-sustainable” and can function as separate states, which is a “natural way of transformation.” He did not make clear how a future Tyumen Republic, possessing oil, will solve its border questions with some republic of the Far North that has no fuel or food, but a lot of nuclear warheads. Maybe he was just an optimist, but more likely this was the typical thinking of an experienced organized-crime figure, who knows very well what it means to control a territory with all its industries.
The example of the Kalmyk Republic, where a person with a similar career, Kirsan Ilyumzhinov, established a dictatorship, ignoring federal laws and owing trillions to the national budget—shortchanging other regions, as well as his own people—gives an impression of what a Tarasov-headed “independent region” would look like: a desert, with casinos.
**The ‘human rights’ war**
The Artyom Tarasov story is just one example of how shadow economy figures, used by Gorbachov and Yeltsin against each other, were themselves a conveyor belt for the kind of oligarchical thinking, according to which a “confederalist” model for Russia is preferable to the model of a nation-state. In the Tarasov case, we also see that the U.S. Republican neo-conservatives, and people like Tarasov, whom they support in Russia, are pupils of that same school of oligarchic thinking, which is headquartered in London and promotes the “decentralization” of both the United States and Russia. They abhor a strong central system of economic development.
The pro-separatist strategies of the British don’t contradict the option of a monarchical model for Russia, or the ideological instigation of U.S. “hawks” against Russia, for their purpose is not only to undermine the United States and Russia, but to get them into a bitter and disastrous conflict against each other. As we shall see, there are examples of the “peaceful coexistence” of separatist and monarchic models, even inside one conception.
One of the favorite Russian politicians of the National Endowment for Democracy, Galina Starovoitova, may be the best illustration of this “yin-yang” coexistence of pro-separatist and monarchic ideologies in one person. In 1990-91, she was the most passionate supporter of the idea of dividing Russia into dozens of entities. A year later, she offered herself as a candidate for defense minister, made friends with Cossack groups, and spoke (on Artyom Borovik’s “Top Secret” TV program) about the need for a strong, reformed intelligence service on the basis of the former KGB. In 1994, she promoted Marshal Shaposhnikov for the Russian Presidency, and said that Russia would take back the Crimea from Ukraine; in a talk with St. Petersburg Mayor Anatoli Sobchak, she expressed delight with the West European constitutional monarchies. As soon as the new war began in Chechnya, she returned to pro-separatist positions, betraying the President in his most difficult period. In general, her activity results in nothing but destruction, and even Democratic Russia members admit, off the record, that she is responsible for the blood of Armenians, Azerbaijanis, Chechens, and Ingushi, to a greater extent than any of the regional warlords.
But Starovoitova is only part of a task force, formed years before, dating back to the 1960s. Her political mentor is considered to be Viktor Sheinis, a graduate of the Institute of World Economy and International Relations (IMEMO), who was in disgrace after 1956 for protesting against the Soviet invasion of Hungary. In the early 1960s, after the new round of destalinization at the 22nd Party Congress (1961), he was accepted to Leningrad State University (LGU), where Aleksandr Degtyaryov, son of a repressed CPSU official, was head of the Komsomol (Communist Youth League) organization. His wing of the “thaw” generation was the source of the Bukharinist revival within the CPSU, campaigning for “internationalism” as opposed to “imperialism,” and becoming a most useful tool for British subversive operations.
Those Anglo-American strategists who thought in terms of dismembering the Soviet Union, and then Russia, saw a good opportunity when dissident Academician Andrei Sakharov was vilified in 1973 and exiled to Gorky in 1980. The Sakharov Congresses, which began to be held in the United
States when Richard Nixon was President, heavily concentrated on ethnic problems in the U.S.S.R., especially the problems of Caucasus peoples oppressed by Stalin, the Crimean Tartars, and others. After Sakharov died, it became clear that his widow, Yelena Bonner, daughter of a purged Armenian Comintern official named Gevork Alikhanyan, would continue to be active for such causes. Through her and a group of intellectuals in Soviet academic institutions, the issue of Nagorno-Karabakh (a province of Armenia, assigned to Azerbaijan under Soviet rule—a complex ethnic and territorial problem with similarities to the Jewish-Arab problem in Palestine) became an object of political speculation, and the detonator of the late-1980s wave of wars in the Caucasus.
The aged Sakharov, or rather his image, was used as a universal tool for pushing geopolitical games, under the cover of human rights. There were very decent people among the political convicts rehabilitated together with him, but only a tiny group of militant radical liberals, like Sergei Kovalyov and Gleb Yakunin, made careers of it.
At the Second Congress of the U.S.S.R. People’s Deputies in the fall of 1989, Sakharov’s document on the reform of the U.S.S.R. proposed equal status for all the ethnic regions within the Soviet Union. Later, after his death, this naive approach would be exploited by powerful private interests in the clashes between Georgia and Abkhazia, Georgia and South Ossetia, and the Russian Federation with Tatarstan and Chechnya.
Lastly, Sakharov was used to make careers. During the election campaigns in 1989, politicians like Gavriil Popov (famous for legalizing corruption), Sergei Stankevich (now a fugitive), and Konstantin Zatulin (one of the first big Moscow privatizers) were photographed with Sakharov, and thus paved their way to power.
The core group of influentials most active in the Caucasus in 1989-91 included Bonner, Starovoitova, Viktor Sheinis, Anatoly Shabad, Fyodor Shelov-Kovedyaev, and others. In this period, the full-scale Armenian-Azerbaijani war was fueled by multiple ethnic conflicts, started by new leaders who were brought to power with assistance from this group. Their projects were far from what Sakharov had proposed, but exactly replicated British operations of 1917-20. The new “anti-Communist” (and, therefore, regarded as positive) Georgian leader Zviad Gamsakhurdia blew up Georgia by eliminating the autonomous status of Abkhazia and South Ossetia, while in Azerbaijan, Popular Front leader Abulfaz Aliyev (Elchibey) pushed a Greater Azerbaijan project, with support from the Turkish Grey Wolves. The industry, infrastructure, and science of Transcaucasia went to pieces.
The victory of the criminal elites in Transcaucasia, prepared by the decades-old might of the Caucasus criminal brotherhood, was obvious to those who saw the situation from the inside. The unwanted rivals of Gamsakhurdia (Georgia), Ter-Petrossian (Armenia), and Elchibey (Azerbaijan), though belonging to the anti-Communist forces, were physically eliminated. This happened to Merab Kostava in Georgia, Gamsakhurdia’s friend, whose dissident biography, unlike that of Gamsakhurdia, included no episodes of repentance before the authorities. Georgi Chanturia, another prominent Georgian politician who was hard to manipulate, was murdered later.
After Yeltsin came to power in Russia, documents from CPSU archives (those parts that did not “disappear”) exposed the fact that the Popular Fronts, which propelled the careers of such leaders as Elchibey in Azerbaijan, enjoyed direct sponsorship from the CPSU Central Committee. Some Gorbachovists tried to explain this pattern as reflecting an intention to “rotate” corrupt elites in the republics, but eyewitness reports from the bleeding Transcaucasia suggested some alternative explanations.
In Karabakh, one could see such a scene: A Soviet Army commander has an unofficial meeting with an Azerbaijani, who pays for a military operation against Armenian positions. The operation is carried out; several more villages, roads, and bridges are destroyed, and hundreds more inhabitants and soldiers are killed. The next day, an underground Armenian dealer comes to the same commander, and an anti-Azerbaijani attack follows. In both cases, the officer or a group of officers shares the income derived from stolen weapons and equipment (officially listed as “destroyed”) with local criminals. The same picture was seen in the Georgian-Abkhazian conflict in 1992.
The arms trade became a Klondike for organized crime. The shadow elites that started it were born in the Brezhnev era and grew strong in the period of neo-Bukharinist “co-ops.” One of Gorbachov’s orders introduced semi-private structures in every plant, including most of the military industry. The “shop men” (tsekhoviki) needed a market for their products. In 1987-88, they were powerful enough to dictate their conditions to the Soviet leadership. In 1989-90, they were powerful enough to create shortages of basic goods, sabotaging the old state-run retail system.
The human rights milieu was sensitive to unofficial decisions made by the world oligarchy and to the preferences of the criminal community alike. On Aug. 8, 1991, Yelena Bonner and Yuri Afanasyev issued an open letter to Yeltsin, claiming that “Russia does not need two leaderships.” Yeltsin owed his election victories, first as chairman of the Russian Supreme Soviet (1990) and then as President of Russia (June 1991), to the Afanasyev-led Interregional Group in the U.S.S.R. Supreme Soviet. With its constant promotion of a regionalist, even separatist, agenda, however, the Interregional Group was pushing in the direction of the dissolution of the very country of which Yeltsin would be president.
**The Caucasus trap**
On April 16, 1990, the U.S.S.R. Supreme Soviet, under pressure from the Interregional Group, adopted a law declaring all the republics (both the S.S.R. and the A.S.S.R.!) to be “subjects of the Soviet Union.” Gorbachov’s yielding to this
option, by which he hoped still to secure the loyalty of the "autonomies" inside Russia, triggered the process later known as "the parade of sovereignties." The ethnic architecture of the state, previously regarded as the most sensitive problem of domestic policy, was in shambles.
The first autonomies within Russia that hurried to upgrade their status and become "Soviet Socialist Republics" were strategic regions with fuel resources, refining industries, and an ethnic diaspora—citizens from the area living in Moscow, other Russian cities, and abroad, who could serve as lobbyists in those locations. These were Tatarstan and Checheno-Ingushetia (at that time headed by the "pro-Moscow" Doku Zavgayev). Yeltsin answered by carving out Ingushetia, a small part of the former Checheno-Ingush A.S.S.R. (by then an S.S.R.), as a separate entity. This brought two immediate results: a rapid decrease of Yeltsin's popularity in Chechnya, and territorial claims by Ingushetia against the Prigorodny district of the North Ossetian A.S.S.R.; this district, inhabited by both Ossetians and Ingushi, had been part of the Checheno-Ingush republic in the early Khrushchov period.
Naturally, over 90% of Chechnya's and North Ossetia's populations voted against Yeltsin in the June 1991 Presidential election, and during the August 1991 putsch attempt in Moscow, Chechnya's leadership supported the putsch, not Yeltsin's resistance to it. This set the stage for members of the Interregional Group, together with Yeltsin loyalists Mikhail Poltoranin and Gennadi Burbulis, to promote an alternative leadership for the area. Three years later, Poltoranin and Burbulis explained their support for Jokhar Dudayev, the Chechen general who declared the republic independent of Russia, by saying they thought that if they offered one more "star" to a general, they would gain his total loyalty.
The real explanation was more serious: it does not require a great intellect to realize, just by looking at the map, what games a separatist leadership headquartered in Chechnya could play, with encouragement from the imminent new, foreign proprietors of the Baku oil.
Yeltsin failed to learn from the mistakes of Gorbachov. He allowed the same people who had started the Caucasus wars in the 1980s to dominate Caucasus policy again. Moreover, in the autumn of 1991 Gorbachov was still the formal President of the U.S.S.R. The Soviet military leadership still had two supreme institutions (the Soviet Defense Ministry and the General Staff), but there was not yet a Russian Minister of Defense. When Yeltsin, disgusted by Dudayev's declaration of Chechen independence, tried to introduce a state of emergency there at the end of October 1991, his order was disobeyed. Democratic Russia, the movement that had ensured his election in June, turned against him: Bonner and Afanasyev, in October 1991, promulgated the conception that Russia is "united but separable," alluding to Yeltsin's own populist phrase, pronounced in a fit of anti-Gorbachov rhetoric, that "everybody can take as much sovereignty as he can."
Yeltsin was trapped. The Belovezhye agreement ending the U.S.S.R.,\(^{8}\) for which he is now constantly blamed by the Communists, was his attempt to get rid of the "dual power" situation, and solidify his rule in Russia.
**Eighty-nine constitutions\(^{9}\): the regional issue in Yeltsin's clash with the Supreme Soviet**
In late 1991, State Secretary Gennadi Burbulis and the young, radical liberal-privatizer crowd around him pushed a draft law to prohibit all those who had remained CPSU members until the Aug. 19, 1991 putsch attempt from holding positions of power. Had this option been implemented, Yeltsin would have lost his most loyal people from Sverdlovsk, who had at least some experience in management (such as Oleg Lobov and Victor Ilyushin). Burbulis, Ponomaryov, Murashov, and other "photographed-with-Sakharov" people, participants in British Tory seminars and pupils of the Krieble Institute, used all their might to create tensions between Yeltsin and the Supreme Soviet, which they had called "the real democratic power" in 1990, but now regarded as "a remnant of Soviet dictatorship."
The argument that the Supreme Soviet was elected in 1990, when the CPSU still ruled the U.S.S.R., was widely retailed in the Western press, to justify Yeltsin's struggle against it as a crusade for "democratic" values. The fact that the "world progressive opinion," shaped by the mass media, has no historical memory, was well exploited by those British and U.S. manipulators who were on the inside of the process all along, and remembered quite well that Yeltsin, too, was elected when the CPSU still effectively ruled (although Article 6 of the Soviet Constitution, certifying the "leading role" of the CPSU in society, had been eliminated in 1990). They also remembered quite well that Ruslan Khasbulatov was elected chairman of the Supreme Soviet as a candidate of the democratic forces.
Khasbulatov, who comes from Chechnya, also significantly depended on the crew that was playing separatist games in the North Caucasus, which were so profitable for the weapons trade mafia. Together with Burbulis and Starovoitova, he had promoted Dudayev for the Chechen Presidency, and was also involved in projects for a Greater Adygea and a Greater Circassia, in the North Caucasus. He was also somewhat responsible for adoption of the Law on Rehabilitation of Oppressed Peoples, promoted by Bonner and Starovoitova in the autumn of 1991. This law served as an instrument for an armed clash between North Ossetia and Ingushetia, as it legitimized the right of the Ingushi to take back the Prigorodny District of North Ossetia. The efforts of First Deputy Prime Minister Lobov and Security Council Secretary Yuri Skokov managed
---
8. In December 1991, the Presidents of Russia, Ukraine, and Belorussia met at a hunting lodge in the Belorussian forest, and issued a statement that "the U.S.S.R., as a subject of international law and a geopolitical reality, no longer exists."
9. There are 89 "subjects of the Federation"—provinces, cities, and republics—in the Russian Federation.
to avert a full-scale war in the region, despite the November 1992 publication in *Izvestia* of an open letter by Bonner, Afanasyev, and others, calling for carving out a separate Prigorodnaya Republic from North Ossetia. But Ingushetia became a “free economic zone” dominated by British companies.
A burning issue during the closing months of 1991 was who would be the prime minister of the new Russia, the person to preside over economic reform. Yeltsin’s preferred candidate was Oleg Lobov, but though Lobov belonged to the Sverdlovsk clan, it was impossible to choose him: He was under too heavy attack from the Thatcher-Bush lobby, for instance in the publications of the RF-Politika center.\(^{10}\) Finally, Yeltsin chose Gaidar, whose nomination was suggested by Aleksei Golovkov, an “institutionalist” and the former head of the Interregional Group’s staff. Gaidar’s candidacy had the overwhelming support of Anglo-American finance and intelligence circles, who knew him well through the Mont Pelerin Society’s seminars in the late 1980s.
The shock therapy reform, started by Gaidar’s team, seriously affected Russian regional leaders outside the “autonomies.” With central budget subsidies reduced, they envied the tax privileges of the “ethnic” autonomies. When a national payments crisis blew up in May 1992, due to an absolute cash shortage with inflation running at a 2,000% annual rate, some regional barons teamed up with the directors of major plants (who were furious not only because of the collapse of industry, but due to the sharp decline of their own fortunes). Since most of them lacked a “democratic” image, they used the “Sakharov-photographed” Boris Nemtsov, governor of Nizhny Novgorod Province, to wave a regionalist threat. Nemtsov’s economics aide at that time, Grigori Yavlinsky, introduced a separate Nizhny Novgorod currency.
Yeltsin replaced Central Bank head Georgi Matyukhin with Viktor Gerashchenko, thereby effectively authorizing the money printing presses to be turned on. The old directors’ nomenklatura, more broadly, rushed to improve their position by regrouping around the Civic Union of ex-CPSU Secretary Arkadi Volsky, now head of the Union of Industrialists and Businessmen. Its draft program, designed to establish the Civic Union as an alternative to Democratic Russia, contained the inevitable nod in the direction of the regional bosses’ desires: “Each subject of the Federation [i.e., provinces, as well as ‘autonomies’] should have its own constitution and its own parliament.”
Before the Civic Union could consolidate as any kind of effective opposition to the total elimination of industry under “shock therapy,” its leaders, and the leaders of member parties like the Democratic Party of Russia, found themselves being diverted into courtship rituals in London and elsewhere. DPR leader Travkin was invited to an international conference of the Conservative International, returning to announce at the 1992 DPR Congress that his party was now not only “democratic,” but also “conservative.” The Gorbachov Foundation invited DPR activists into its “image training” programs, teaching them to use Orthodox-patriotic rhetoric on the impoverished Russian population.
Finding no effective flag-bearer in Moscow, the regional elites broke loose in a new wave of regionalism, which greatly shaped the course of the showdown between Yeltsin and the Supreme Soviet in 1993. The clash between the Executive and Legislative branches provided new openings for pro-separatist tendencies. Exploiting the confusion in the center, the regional barons reached for as much privilege as they could.
Beginning in March 1993, as tensions rose between Yeltsin and Khasbulatov, Khasbulatov was recognized by the regional barons as an instrument for taking more economic power from Moscow. The first to speak up was the leadership of Chechnya, whose foreign minister, Shamsuddin Yusef, warned that if Yeltsin removed Khasbulatov, the safety of the Russian population in Chechnya could not be guaranteed.
In eastern Siberia, Khasbulatov won the sympathy of the newly formed Siberian Agreement movement, which grouped together several key regions.\(^{11}\) Another interregional coalition, centered in Samara, called itself Greater Volga. A third group of regions convened in the northwest, where the autonomist tendencies were strongest in Vologda Province, which even declared itself a republic.
The most active supporter of the Supreme Soviet was Kalmyk leader Kirsan Ilyumzhinov. Having established a sort of feudal regime in his region, without any legislature, he spoke out among regional leaders, in favor of strong parliamentary power! Other organizers of regionalist congresses were Boris Nemtsov of Nizhny Novgorod, St. Petersburg City Council leader Aleksandr Belyayev (regionalist tendencies were very strong there), and Ingushetia’s President Ruslan Aushev.
Pro-Yeltsin propagandists railed against regional separatism, as a way to attack the Chechnya-born Khasbulatov. From early 1993 on, the Poltoranin-Burbulis-supervised paper *President* served as a mouthpiece for such hysterical support for
---
\(^{10}\) *EIR*, Oct. 4, 1996, p. 59. Part 2 of this series introduces RF-Politika.
\(^{11}\) In the summer of 1994, *Zavtra* published an article called “The Shift to the East,” which fit into *Zavtra’s* brand of pan-Slavonic and Eurasian “continentalist” conspirology. It sheds some light on the background to Khasbulatov’s courtship by these circles. The author, Boris Isakov of the International Slavonic Academy, argued that the so-called “democenter” of Eurasia, i.e., “the heart of the people’s biological field” (*biopole*, a term used by parapsychologists), as well as the epicenter of “ethnic passionarity,” had begun to shift from the Moscow area in the fourteenth century, and had now reached eastern Siberia (Krasnoyarsk Territory), while the “geocenter” of Eurasia, i.e., “the center of the biological field of the flora and fauna,” had reached the Southern Urals (Chelyabinsk Province), and might proceed on to northern Kazakhstan, which would be “very dangerous.” This outstanding “research” was produced by the newly established academy, in collaboration with the Moscow Economic Academy named after Plekhanov. Ruslan Khasbulatov was a professor at the Plekhanov Academy. His most vehement attacks against Yeltsin began in March 1993, after he visited Novosibirsk. After that excursion, Khasbulatov was consistently supported against Yeltsin by the Siberian “regionalist” nomenklatura. During the siege of the Supreme Soviet, there was serious discussion of transferring it, and the status of the legitimate capital of Russia, to Novosibirsk.
Yeltsin, carrying constant crude attacks on the Chechens and other peoples of the Caucasus, and ascribing organized crime exclusively to them. At the same time, associates of Bonner and Afanasyev established their influence in one of the centrist factions of the Supreme Soviet, which advocated more privileges for the regions. This was the Concordance for Progress faction, established by Viktor Sheinis and associated with Grigori Yavlinsky. It was joined by Yuri Nesterov, a close associate of the Starovoitova team (and later a functionary at Interlegal, an NED-sponsored non-governmental organization). Its St. Petersburg branch, headed by Olga Starovoitova, Galina’s sister, later merged with the pro-separatist Confederation of National Associations of Russia (KNOR).
After the Supreme Soviet was besieged at the end of September, the centrist factions did not hurry to leave the building, but attempted to remove Khasbulatov from his post—not in order to help Yeltsin, but to promote a “zero option”\(^{12}\) which would throw both Khasbulatov and Yeltsin out of power, diminish the status of Vice President Rutskoy, and promote a weak figure, Valeri Zorkin, to the Presidency. The behind-the-scenes mover of this operation was Veniamin Sokolov, deputy head of the Supreme Soviet. Some regional bosses coordinated their actions with his group, which included some odd birds like Vladimir Yurovitsky (author of a theory of “informational money”), Yuri Yarmagayev (a regionalization fanatic, linked to Trotskyite groups, who advocated the total elimination of the Executive branch), and Yevgeni Gilbo (a St. Petersburg economist with a “green” bent, sometimes to the left and sometimes to the right). The group of such “theoreticians,” around Sokolov, elaborated a plan for the emission of unlimited quantities of currency, and not only in the capital city, which they claimed was an “anti-monetarist,” “anti-Gaidar” alternative! After the October 1993 suppression of the Supreme Soviet, this group rounded out its ideology by incorporating the idea of a constitutional monarchy, and even located an odd-looking candidate who regarded himself as Nicholas III, the real successor of the Romanov dynasty.
Phantasmagorical, but real. If a criminal kingpin participated in the Constitutional Conference as the representative of some Far East Cossack movement, why not have a Nicholas III ruling with help from the local soviets? If the prayers of Shoko Asahara from the Aum Shinrikyo sect sounded on Russian radio for a whole hour on Oct. 3, 1993, what might happen the next day? Anything.
---
12. The groundwork for this “zero option” was laid by a group of Supreme Soviet deputies, associated with the Shatalin Foundation (of Academician Stanislav Shatalin, supervisor of the 500 Days radical privatization scheme in 1990), which played a very sophisticated power game. The Shatalin Foundation worked to elevate Valeri Zorkin to the post of the head of the Constitutional Court. Zorkin was then to promote the “zero option,” a “draw” between Yeltsin and the Supreme Soviet leadership, followed by simultaneous Presidential and parliamentary elections.
While Burbulis’s radical democrats were loudly agitating for the Supreme Soviet to be dissolved, the centrists were more quietly at work. In early September 1993, when it was still possible to attempt to make peace between Yeltsin and Khasbulatov, Viktor Sheinis drew up a draft new Russian Constitution on behalf of the Constitutional Conference, although that institution had not cleared it. That move triggered a new anti-Yeltsin speech by Khasbulatov, which, in turn, pushed Yeltsin over the edge. On Sept. 21, the President abolished the Supreme Soviet, and the armed denouement followed two weeks later.
In the spring of 1994, at the time of Yeltsin’s first serious illness, Zorkin was again promoted as a key figure in a project for a new Russian leadership—the “Accord in the Name of Russia” initiative, which included ousted Vice President Aleksandr Rutskoy (in jail from the Oct. 4, 1993 showdown until Feb. 26, 1994). A key organizer of the “Accord” was the last of its signatories: Dr. Aleksandr Tsipko, top official of the Gorbachov Foundation, promoter of regional self-determination, and author of articles in the NED’s *Journal of Democracy*.
---
II. Centrifugal forces with an environmentalist spin
Environmentalist propaganda, imported by Gorbachov and his cronies from the United Nations, the Club of Rome, and their affiliates, played a significant role in the degeneration and criminalization of the Soviet central and local elites. It helped set the pattern, by which Communist rule collapsed and the U.S.S.R. broke apart, but it also contributed to a process of Russia’s own disintegration, which appeared as a threat almost immediately after 1991.
The heavy involvement of criminal groups in Russian privatization, along with the dubious state of Russia’s strategic arms arsenal, makes clear that the collapse of this country poses a threat to all mankind. The intentions of the pseudo-scientific public institutions that promote ethnic types of environmentalism, appear to reflect private interests in Russia’s regions, especially those of the oil and metals companies that are violently struggling for market share. The injection of British-cultivated tribal indigenism brings various kinds of neo-paganism, which resemble raw material for misanthropic, neo-fascist conceptions. In the Russian Far North and Siberia, rich oil, gas, and precious metals deposits are adjacent to huge stores of military equipment and nuclear arms. Imagine a pagan tribe, possessing nuclear weapons along with a neo-fascist conception, that might establish itself as a sovereign Kingdom of the Novaya Zemlya Archipelago!
From the very beginning, the green ideological movements in Russia targeted large-scale infrastructure projects. The relevant organizations also attacked nuclear energy, in a fashion that provoked mistrust and tension among republics and regions. This undermined the security of the nuclear industry, rather than improving it.
In the late 1980s, green propaganda fell on sensitive ears in Ukraine and Belarus, which had suffered the most from the Chernobyl accident in 1986. People in those two countries felt like victims of a “Moscow experiment,” at the very time when Gorbachov-promoted greens were denouncing the projects for diversion of part of the flow of Siberian rivers, to irrigate the deserts of Central Asia. Kazakhstan’s delegates nearly fell on their knees at the First Congress of U.S.S.R. People’s Deputies in the summer of 1989, pleading for urgent action to save the Aral Sea. Getting no answer, they could only conclude that they had no hope for their industry, and could save themselves only by the sale of oil, natural gas, and minerals—and they could do that more profitably, if they didn’t have to pay into the U.S.S.R. central budget. Even the thoughtful layers of the Russian opposition, not to mention foreign analysts, pay scant attention to such events, when analyzing the reasons for the dissolution of the Soviet Union.
The green denunciations of the big power projects in Siberia and on the Volga were promoted by the same Western institutions that promoted the disastrous privatization of basic industry, including fuel and energy production. *Novy Mir* editor Sergei Zalygin, who led the campaign against the Siberian river projects, invoked the work of Prof. Douglas R. Weiner from the University of Arizona, whose “Ecology in Soviet Russia. The Archipelago of Liberty: National Parks and Environmental Protection” was sponsored by the Andrew W. Mellon Foundation and the Russian Research Center of Harvard University.
British and American ideological institutions had maintained contacts in the Soviet Union for years, under the cover of environmental science, religion, and anthropology. Thanks to Gorbachov’s close collaborator, longtime Soviet Ambassador to Canada Aleksandr Yakovlev, who now oversaw ideology policy from his seat on the CPSU Politburo, these channels came alive.
Yakovlev’s closest associate, Prof. Aleksandr Degtyaryov, was head of the ideological department of the Leningrad Party Committee when the infamous “Russian nationalist” Pamyat movement launched its rallies in Rumyantsev Square, not far from Leningrad State University (LGU). Along with blatant anti-Semitism, these Leningrad CPSU-approved nationalists proclaimed green views. In the late 1980s, one of Pamyat’s founders, Yuri Riverov, headed up an organization called the Committee to Save Lakes Ladoga and Onega, which campaigned against heavy industry, especially nuclear energy, from an environmentalist standpoint.
Around the same time, a Committee to Save the Volga emerged out of the Russian Union of Writers, which also promoted the “Ladoga” group. The Russian Union of Writers was seeking independence from the U.S.S.R. Union of Writers leadership, but by 1990, it had split into “democratic” and “nationalist” sections, thanks to efforts by Yakovlev’s lobby in the “creative intelligentsia.”
The propaganda campaign against the projects to irrigate the deserts of Central Asia was pushed mostly through the “Russian nationalist” lobby, but radical “westernizers” became even more successful wielders of the environmentalist agenda than the “slavophiles.” A young friend of Academician Sakharov, physicist Boris Nemtsov, launched a campaign against the plans to build a nuclear power plant in the Gorky (Nizhny Novgorod) Region; the project was never carried out. Another young radical democrat, Sergei Belozertsev, was elected to the U.S.S.R. Supreme Soviet by launching an environmentalist movement in Karelia, in northwest Russia near the Finnish border. Its activists later merged into the so-called Republican Union, a group demanding Karelia’s independence from the U.S.S.R. and Russia. (There were probably not enough indigenous Karelians, not to mention a lack of oil and of Caucasus-type temperaments, for that operation to go live!)
Environmentalist ideas also surfaced in Siberia, rich with oil, gas, and precious metals. The intellectual center of such right-left environmentalist operations was Novosibirsk, with its special Siberian branch of the Academy of Sciences. Aurelio Peccei, founder of the Club of Rome, had visited Novosibirsk already in 1967. The adjacent Chelyabinsk region was a playground for anti-industrial propaganda around the Metallurgic Plant and the consequences of a nuclear accident there in the 1950s. Sergei Kostromin, a radical liberal from Chelyabinsk, became a violent anti-Semite in 1992, headed the Party of Russian Nationalists, and demanded a separate South Ural Republic.
Western Siberia, just east of the Urals, is the main oil province of Russia. The richest oil deposits are concentrated in its northern part, which was established as the Khanty-Mansi Autonomous Region in the 1920s. Khanty (Ostyaks) and Mansi (Voguls) are two small ethnic minorities, which had no written culture before the likbez (liquidation of illiteracy) program of the Soviet Russian People’s Commissariat of Education.
These two minorities, which have made up less than 5% of the district’s population since oil extraction began there in the early 1970s, had been an object of study by foreign anthropologists for years before Gorbachov’s perestroika. Beginning in 1975, Marjorie Mandelstam Balzer, then a Harvard anthropologist, conducted “ethno-historical and field research” in West Siberia, with assistance from Leningrad State University. Her reports concerned not only “menstrual taboos and pollution beliefs,” but Shaman rites and other elements of pagan religion. In her 1981 paper, analyzing gender relations among the Khanty and Mansi from a psychoanalytical standpoint, Balzer cited an array of anthropological studies carried out by Oxford, the Finnish Academy of Sciences, as well as Harvard, and expressed special gratitude to Prof. Rudolph F. Its of LGU, who organized her trips to Western Siberia.
Last year, I read the obituary of Rudolf Its, head of the Anthropology Department at LGU, and not in just any publication. It appeared in *Rodnyye Prostory* (Native Expanses), which is published by one of Its’s students—philosopher Victor Bezverkhy, a specialist in “Kantian anthropology,” and one of the most radical neo-pagans of the Nazi sort. The frontispiece of his journal is usually adorned with a swastika. Another one of Bezverkhy’s teachers, the pagan philosopher Yuri Lisovoy, died in London in 1992; he had gone to England at the end of World War II, through the British zone of Germany, lived in Leeds, and had many friends among Oxford specialists.
The ethnically defined entities within Russia, so extensively profiled by foreign, as well as homegrown ethnographers, and susceptible to environmentalist agitation, became tools in the hands of both the ruling circles and the opposition. A society which had lost its identity, could be split more and more. As early as 1991, Gennady Burbulis, who was a co-author of Chechen separatist Jokhar Dudayev’s career, and later eagerly supported the nationalist ambitions of Tatarstan (having enough oil deposits to earn the label of a “New Kuwait”), also backed the idea of separating the Khanty-Mansi national district from the Tyumen Province, on an “indigenist” pretext.
**Shamanism, Islam, and UFOlogy**
The target areas of the World Wide Fund for Nature (WWF, the former World Wildlife Fund) were also concentrated in Siberia: on the Taimyr peninsula, close by the Norilsk Metallurgical Plant (now Norilsky Nickel); in Yakutia, rich with gold and diamonds; and in the Far East, at the Chinese border, where the WWF hires military personnel to protect tigers from poachers. Besides tigers, the WWF is very anxious about the white stork, which lives in the oil-rich Komi Republic and spends its summer migration period in Afghanistan.
Stork-seekers from Britain were followed by oil-seekers from the United States and Scandinavia, who formed the Komi-Pechora oil consortium in the 1990s. That is when the Komi people, who also lacked literacy until the 1920s, found out that they have a long and developed culture, tightly connected with Finno-Ugric civilization.
In the nineteenth century, British Intelligence circles had already circulated the myth of a relationship between the Finnish-Hungarian and Turkic civilizations. In 1990, a tiny group of intellectuals representing the ethnic minorities of the Far North started promoting the “ancient cultural traditions” of their ethnic groups, along with environmentalism. Yuvan Shestalov, an ethnic Mansi with close ties to the Russian nationalist group in the Russian Union of Writers, issued a newspaper called *Shaman*, which revived the pagan traditions of the Finnish-Hungarian minorities, mixed with mysticism and, for some reason, UFOlogy. Other New Age/pagan periodicals published articles by Hungarian scientists, boosting the human rights of the “fraternal peoples”—primarily those ethnic groups (Komi, Khanty, and Mansi), which inhabited the oil-rich areas of the Russian Far North.
Related Finno-Ugric groups inhabiting the Volga valley (Chuvashes, Udmurts, and Mari) were told of their common origin with the Turkic nations. On this basis, the leadership of the Tatar A.S.S.R. (soon to be Tatarstan) planned to form a federation of Volga republics, splitting European Russia right in the middle. Tataria’s Muslim union separated from the all-Russia Muslim Association, DUMES; the splinter structure, DUMRT, controlled the Muslim communities along the whole middle Volga.
---
13. Joseph Brewda, “David Urquhart’s Ottoman Legions,” *EIR*, April 12, 1996.
14. Webster Tarpley, “Palmerston’s London During the 1850s,” *EIR*, April 15, 1994, p. 12, relates how Urquhart “went native,” beginning in Constantinople. The modern case of James Dickey, a.k.a. “Yakup Zaki,” is reported by Gumer Sabirzianov, *Volzhskie Tatary i Russkiye v serkale simpatii i antipatiyi* (*Volga Tartars and Russians in the Mirror of Sympathies and Antipathies*), Kazan, 1993.
leader Kirsan Ilyumzhinov, visiting the Kalmyk capital, Elista, in the company of some Uighur Buddhists. This was not just a coincidence: Kalmyks and Uighurs are a part of a previously numerous ethnos, which used to inhabit the whole territory of today’s Kazakhstan.\(^{15}\) Another target area of the Dalai Lama was the underdeveloped Tuva Republic, which was formally independent of the U.S.S.R. until 1944. The population in Tuva is very poor, but its soil is rich with asbestos and uranium. Kalmykia does not play any strategic role, but under certain conditions it might; during the mostly unofficial discussions of the fate of Baku oil, after Azerbaijan became independent, one option reportedly promoted by then Russian Foreign Minister Andrei Kozyrev was for the Caspian Sea resources to be equally divided among the littoral countries. Approximately one-third of Russia’s Caspian shoreline is in Kalmykia (see Figure 1).
**Pagans, diamonds, and submarines**
In June 1994, the “indigenous peoples” of the Russian Far North were favored by a conference organized by the Cultural Committee of “Barents Region.” The term “Barents Region” is supposed to subsume the Scandinavian countries, plus several regions of northwest Russia—several, but not all of them. Vologda Province, for example, is not involved, while Arkhangelsk, located farther east, is favored and even serves as a center of the Cultural Committee’s activity. There is a curious coincidence in this selection: Unlike Vologda, which is covered with thick, swampy forest, Arkhangelsk Province includes part of the Timano-Pechora oilfield, and has rich diamond deposits.
The “Barents” ideologues’ concern for indigenous peoples is so strong, that it extends from Sweden to eastern Siberia, across thousands of kilometers to Yakutia—this time under the aegis of the Council of the Barents-Euroarctic Region. Yakuts are neither Finno-Ugric, nor even Islamic, and the only thing they have in common with the inhabitants of Arkhangelsk, is the diamond-rich territory on which they happen to live.
The name “Barents Region” originates in Sweden; still, it is attributed not to Swedish officials, but to a frequent guest in Stockholm, former Russian Foreign Minister Andrei Kozyrev.
At a recent pagan meeting in St. Petersburg, a self-styled Russian nationalist from Arkhangelsk Province boasted that he had received New Year’s congratulations from “one of the Volga Presidents,” i.e., the head of one of the Finno-Ugric republics. The sect to which the Arkhangelsk nationalist belongs (he goes by the name of Vladimir Bogumil II) calls itself Yarl-Pomors, and claims to promote the interests of “indigenous” Ingrian (or Ingermanland) Finns. That is on the “right” side; on the “left,” the Ingrian community, representing less than 1% of the population of Leningrad Province, belongs to the “radical liberal” Confederation of Russian National Associations (KNOR), which also includes the Abkhazian and Chechen cultural societies, along with one of the organizations called Friends of Tibet. The first promoter of a separate Ingrian Republic on the territory of Leningrad Province, was radical environmentalist Yuri Shevchuk, currently deputy head of Gorbachov’s Green Cross in St. Petersburg.
The former head of the Ingrian Union (Inkerinliitto), Dr. Aleksandr Kiryanen, also runs the local branch of the Unrepresented Nations and Peoples Organization (UNPO)\(^{16}\) (see Figure 2). The Inkerinliitto headquarters building was a Finnish church before 1917, and later became the House of Nature. Dr. Kiryanen is a cousin of Marina Salye, leader of the Free Democratic Party of Russia, one of the most convinced advocates of “self-determination.” In 1995, Salye became the number-two person in a newly established political party called Preobrazheniya (Transformation), headed by Eduard Rossel, governor of Sverdlovsk Region and the ideologist of an independent “Ural Republic.” Kozyrev was midwife to the new party.
Unlike many Russian political players, who prefer the warm climate and clean air of the North Caucasus, Andrei Kozyrev has gravitated to the cold and damp Russian northwest. Twice he was elected to the Russian Parliament from Murmansk Province, bordering Norway. Kozyrev attracted various United Nations institutions to the region, apparently
---
15. Joseph Brewda, “Pan-Turks Target China’s Xinjiang,” *EIR*, April 12, 1996.
16. Mark Burdman, “UNPO Plays Key Role in Transcaucasus Blowup,” *EIR*, April 12, 1996.
The map shows some of the 50 "peoples" and "nations," which, the Unrepresented Nations and Peoples Organization (UNPO) says, should be independent states. The names of those targeted areas within Russia and other CIS countries, which are mentioned in this article, appear in bold.
1. The Hungarians of Romania
2. Kosova
3. The Greeks of Albania
4. The Ingrian Finns of the St. Petersburg region
5. Chuvash
6. Mari
7. Tatarstan
8. Udmurt
9. Bashkortostan
10. Komi
11. Tuva
12. Buryat
13. Yakutia
14. Crimean Tatars
15. Circassia
16. Abkhazia
17. Ingushetia
18. Chechnya
19. Iraqi Turkoman
20. Assyria
21. Kurdistan
22. "East Turkestan" (Xinjiang, China)
23. Tibet
24. Taiwan
25. Cordillera (Philippines)
26. Mindanao (Philippines)
27. Moluccas (Indonesia)
28. West Papua (Indonesia)
29. East Timor (Indonesia)
30. Aceh (Indonesia)
31. Karenni state (Myanmar)
32. Nagaland (India)
33. Chittagong Hill Tracts (Bangladesh)
for reasons having to do with the problem of nuclear waste.
Nuclear waste pollution troubles Norway, and not only due to the personal views of the country’s former prime minister, the environmentalist Gro Harlem Brundtland. The 1989 sinking of a Soviet submarine in the Norwegian Sea reminded the local population of Chernobyl. Preventing new accidents requires investment in the reprocessing of spent nuclear fuel, and in security at the Kola nuclear power plant in Murmansk Province. Any foreign diplomat knows how Russian officials’ eyes glisten at the words “foreign investments.”
In June 1994, a delegation from British Nuclear Fuels paid a visit to Murmansk. In autumn 1995, the object of British interest, the floating nuclear waste-processing base, was put up for auction. Against expectations that a well-known Russian company would place the winning bid, it went to an Anglo-French consortium.
While the European Union was discussing nuclear security, a group of Russian sailors, led by a captain of second rank, was caught stealing some uranium-containing cylinders. This was in autumn of 1995. Since the used cylinders were hardly a saleable commodity, the theft looked for all the world like a pretext for mass media hysteria. The officer turned out to be a member of a Pentecostal sect with an office in Murmansk, frequented also by Norwegian citizens.
A month later, a new scandal broke out, which is intensively discussed up to the present day. Russian security forces searched the Murmansk office of a Norwegian environmentalist organization called the Bellona Foundation. The whole Russian and international green and human rights beau monde mobilized to denounce the KGB and support the Norwegian institution. Besides Academician Aleksei Yablokov, a fierce opponent of nuclear energy and former member of the Interregional Group of Deputies, and former Soviet Minister of Ecology Nikolai Vorontsov, the outcry came from Greenpeace, the Natural Resources Defense Council, former French Minister of Ecology B. Lalonde, and even the International Fund for Animal Welfare (IFAW)—although Bellona, judging by the results of its own fact-finding mission, was interested less in evidence of environmental pollution around Murmansk, than in the location of the nuclear installations of Russia’s Northern Fleet. (Bellona’s two reports on the Murmansk area, with detailed maps, have been posted on the Internet, where—thanks to the efforts of financier George Soros to expand Internet access in the former Soviet Union—any Russian or Chechen youngster can also find instructions for a “human rights” militant, entitled “How to Make a Bomb.”)
Capt. Aleksandr Nikitin, a Bellona author who was arrested on Feb. 6, 1996 in St. Petersburg, before he could escape to Canada, was sincerely surprised when the Bellona office was searched by Russian intelligence. “Why,” he said, “but for three years nobody interfered with our work! Some officials even praised it. For example, Andrei Kozyrev.” “And Mikhail Gorbachov,” added a Norwegian Bellona member. “We just met him on the plane to Moscow. We gave him our report, and he said we’re doing a very useful work.”
After the 1995 Duma elections, International Republican Institute officials boasted that their greatest image-making success was the victorious campaign of Andrei Kozyrev in Murmansk, coordinated by the Moscow IRI office. Their tune changed after Kozyrev’s resignation and Nikitin’s arrest. “Now we’ll have to quiet down, and cancel public seminars for some time,” one of the same officials said nervously.
He didn’t have to worry. Kozyrev is travelling around the world, saying that if he were the International Monetary Fund, he would do to Russia just what the IMF is doing. He said that as a featured speaker at the IRI’s event, held in San Diego during the Republican convention this past August. Nikitin is in jail, but this fact is a great advantage for the Human Rights Bureau (co-chaired by Yelena Bonner), which was hired by Bellona at $20,000 per half-year, to campaign for his exoneration. Amnesty International has already declared Nikitin the next “prisoner of conscience,” after Sakharov, and uses him in its fundraising material. Mikhail Gorbachov, surrounded by an odd-looking crowd of Buddhists, tries to make Russians fall in love with him again (when he is not making well-paid appearances in California, or Sioux Falls, South Dakota).
They think themselves secure amid the disaster they wrought—although they should heed the warning signals, as George Bush’s role in the world drug trade is discussed in U.S. newspapers.
In late autumn 1995, Russian TV channels broadcast a short report from the town of Khalmer-Yu in the Komi Republic, where coal mining had stopped, due to a complete lack of finances. It was a horrible picture of a deserted town, comparable to Chernobyl. The last inhabitants were leaving. The TV cameraman zeroed in on a reindeer-drawn sledge with two Nenets peasants in it; through a snowstorm, they were gazing at the cross-barred doors of the last shop, already closed. With the miners leaving, the local population, comprised of the ethnic minorities, was left with nothing. Probably the next time the liberal mass media speak of the real problems of the Far North will be when its inhabitants have starved and disappeared.
Gorbachov and Bush care no more for them than for the unfed miners and hungry soldiers. They travel across the world, hold press conferences and order banquets. They are at the feast in the time of the plague.\(^{17}\) But nobody in the world, including them, can really be secure, while the resources vital to feed people, to educate children, to provide high-technology energy sources and infrastructure, are siphoned into the dope and arms trade, to prop up financial speculation, or fund table-turning, Shaman dancing, and environmental spying. If the situation doesn’t change in the near future, the world will be doomed. The feast during the plague doesn’t last an age—not even a decade.
17. “The Feast in the Time of the Plague,” is one of the “little tragedies” of Russia’s poet Aleksandr S. Pushkin (1799-1837). Edgar Allan Poe’s *The Masque of the Red Death* treats the same theme.
Democracy or destabilization? What the NED funds in Eurasia
The published list of current and recent grants by the National Endowment for Democracy (NED), to support organizations and activities in north-central Eurasia, is excerpted below. We provide some annotation in brackets, to assist in comparing this flow of official U.S. funds, with the painful and dangerous fracturing of Russia, chronicled by Roman Bessonov in this issue’s installment of his “The Anti-Utopia in Power” series.
As William Jones reported in his introduction to Bessonov’s series (*EIR*, Aug. 9, 1996), the NED is officially styled as “a nonprofit, bipartisan, grant-making organization,” whose aim is to “strengthen democratic institutions around the world through nongovernmental efforts.” Funded by Congressional appropriation, “the Endowment’s worldwide grants program assists organizations abroad—including political parties, business, labor, civic education, media, human rights and other groups—that are working for democratic goals.”
As government covert operations drew unwelcome attention from Congress during the 1970s, it was deemed desirable to “privatize” many intelligence operations, making them immune to Congressional oversight. The NED was established as a “private” entity in 1983, in order to shield it from the depth of scrutiny a fully public organization would incur.
President Reagan announced the creation of the NED in a 1982 speech delivered at Westminster, in England. In testimony at congressional hearings on the Iran-Contra scandal, Walter Raymond, National Security Council staffer for “Project Democracy” operations in the mid-1980s, revealed that that speech was co-drafted for Reagan by Lawrence Eagleburger, Henry Kissinger’s close associate.
The legislation that established the NED created a new species, the “quasi-autonomous non-governmental organization,” or “quango.” There are four main quangos under the NED: the International Republican Institute (IRI, run by the Republican Party), the National Democratic Institute for International Affairs (NDI, Democratic Party-dominated), the Center for International Private Enterprise for business, and the Free Trade Union Institute for Labor.
**‘Democracy’ in Russia**
John Brademas, chairman of the National Endowment for Democracy, in the NED’s Summer 1996 newsletter:
“To judge from the participation in the parliamentary and presidential contests, Russian citizens are truly politically engaged. On June 16, 70% of the electorate voted; on July 3, 69%! Boris Yeltsin made a remarkable comeback, from leading Communist Party chief Gennadi Zyuganov by 35% to 32% in the first round to achieving a landslide triumph of nearly 54% to 40% in the second. Democratic leaders around the world voiced great relief. . . .
“While Russia’s progress toward democracy has been the historic achievement of the Russian people, I am proud to recognize the important assistance given to Russian democrats by the Endowment and its four core institutes. I want especially to commend the Endowment’s party institutes (the NDI and the IRI) for the training they provided over the past six years to reform candidates and party activists. The modest expenditure of U.S. tax dollars in this effort may be one of the most cost effective investments in peace and security in our nation’s history.”
Lyndon H. LaRouche, Jr., in his October 1996 Presidential campaign paper, *The Blunder in U.S. National Security Policy*:
“A short-lived democracy in Russia was brought to an end by artillery-fire against the parliament, during October 1993. Both the rebellious spirit of that suppressed parliament, and the shelling, were prompted by the pressure of the ‘IMF conditionalities’ introduced in accord with the ‘New Morgenthau Plan’ geopolitics of Prime Minister Thatcher and her familiar, President Bush. Thus, in the hallowed name of ‘democracy’ and ‘market economy,’ a short-lived genuine political democracy was destroyed in Russia, as real democracy is repeatedly destroyed, all in the name of ‘democracy’ and ‘free trade,’ in Central and South America.”
LaRouche’s diagnosis and the fairy tale from Brademas may be usefully contrasted, against the backdrop of Roman Bessonov’s articles, beginning with his report (*EIR*, Sept. 6, 1996) on how organized crime took charge of the Russian economy, on the wings of the “free market” reform. In his *Blunder* paper, LaRouche recalled William F. Buckley, Jr.’s acknowledgement of how hostile to democracy is the Mont Pelerin Society brand of economics, imposed on Russia: “It is possible,” Buckley said, “that Milton Friedman’s policies suffer from the overriding disqualification that they simply cannot get a sufficient exercise in democratic situations.”
**Some 1994 and 1995 NED grants**
The following are excerpts from the published list of NED grants. Our comments are in brackets:
Russia
Former Soviet dissidents Yelena Bonner and Sergei Kovalyov were honored at the NED’s Fifth World Conference on Democracy, May 1-2, 1995, in Washington. Among the other Russians in attendance were Galina Starovoitova, Sergei Grigoryants of the Glasnost Public Foundation, and Andrei Vasilevsky, head of the Panorama Information and Research Center.
Center for International Private Enterprise—$110,025. To help educate Russia’s younger generation in the basics of economics and the free market by developing instructional materials for a high school curriculum entitled “Economics for Young Russians.”
Center for International Private Enterprise—$94,316. To enable the Center for Political Technology to engage in research and advocacy on the role of business associations.
Free Trade Union Institute—$1,046,232. To support democratic worker organizations in Russia, including education and training activities, and continued support to Delo, a Russian newspaper covering developments in the independent trade union movement and industrial relations, as well as the operation of FTUI’s Moscow field office and five small liaison offices located in major Russian industrial centers.
[The FTUI, formed in 1977, continues an earlier U.S. government-funded AFL-CIO project, the Free Trade Union Committee. It is the only one of the core NED quangos that existed prior to the NED. The FTUC had been run by Irving Brown, who was in charge of AFL-CIO operations in Europe in the post-war period, then succeeded Jay Lovestone as head of the AFL-CIO’s international department. His mentor, Lovestone, had led a Bukharinite faction in the Communist Party U.S.A., before becoming a specialist, for U.S. intelligence agencies, in subversive operations under trade union cover.]
Freedom House—$11,000. To enable Panorama, an independent Russian information and analysis group, to produce a study on the various extremist groups in Russia.
[Founded in 1941, Freedom House is the leading private intelligence organization of “social democratic” coloration in the United States. From its inception, Freedom House was closely allied with the networks of Jay Lovestone and Irving Brown. Leo Cherne, who chaired the organization for over 40 years, was vice-chairman of the President’s Foreign Intelligence Advisory Board during the Reagan administration, when the NED was created. A close ally of George Bush, Cherne oversaw much of the private intelligence apparatus used in bankrolling the Afghan mujahideen, and funding the Contras, as well as arming groups in Iran.]
Freedom House—$44,000. To enable Panorama to publish a reference guide to regional politics in Russia, including information about local politicians, administrative structures, political parties, and independent organizations.
[Typical of the guidance provided by Panorama was its position on the Democratic Union, Russia’s first political party to declare itself in opposition to the CPSU (1988). In 1993, an official at the U.S. embassy in Moscow cited the Panorama guide in support of his view that, insofar as leading Democratic Union figures had taken positions opposed to Yeltsin, that group could no longer be considered a “pro-reform,” or “democratic” movement.]
Glasnost Defense Foundation—$44,000. To publish five books on issues of importance to journalists and state officials who deal with journalists, including a book on violations of journalists’ rights during the military campaign in Chechnya, a guidebook of key information for journalists reporting from conflict zones, an annual report of all violations of journalists’ rights throughout the former Soviet Union, an anthology of legislation concerning media issues, and an anthology of proposed codes of ethics for media agencies.
Glasnost Public Foundation—$52,150, $53,170. To review the activities of Russia’s intelligence services; to examine the political influence of the secret service and the relationship between the secret service and Russia’s new commercial structures; and to organize a legal consultation service to review cases involving the violation of human rights by the security services. [In 1995, also] to update its data bank on the KGB and its successor institutions.
Interlegal—$49,278. To enable this Moscow-based group to serve as a clearinghouse for non-governmental organizations’ activity in Russia; provide legal advice to nonprofit organizations; and work with the legislative and executive branches in the drafting of relevant national legislation.
[See previous article, p. 26, on some of the Russian personnel of Interlegal, among whom was Galina Starovoitova’s ally, Yuri Nesterov.]
Jamestown Foundation—$24,590. To enable the Globe Independent Press Syndicate to serve as an information clearinghouse for the various democratic blocs in preparation for the Russian national and local elections.
New Times (Novoye Vremya)—$38,200. To enable this Moscow-based journal to respond to the growing threat to democracy from anti-democratic groups and ideologies by publishing articles on Russian nationalism and other right-wing extremist trends.
Panorama—$40,000. To research, analyze, and publish comprehensive reports on the December 1995 parliamentary and the June 1996 presidential elections.
Partners for Democratic Change—$65,000. To promote the development of public mediation centers in Moscow, Krasnodar, Arkhangelsk, and Khabarovsk which will provide training in negotiation skills, cooperative planning, and public mediation.
**Russian Center for Citizenship Education**—$15,000. To enable this Moscow-based grassroots network to conduct a series of public policy workshops aimed at developing the knowledge, skills, and habits necessary for conducting productive civil discourse, especially among provincial teachers and civic activists.
**St. Petersburg STRATEGY Center**—$40,000. To analyze the third sector in St. Petersburg; and to conduct a series of workshops and a conference on NGO development.
**Azerbaijan**
**Democracy Development Fund of Azerbaijan**—$30,000. To enable this Baku-based organization to conduct educational programs, promote public policy debate, and publish a bulletin aimed at strengthening democratic values and civil society in Azerbaijan.
**Armenia**
**Center for International Private Enterprise**—$91,688. On behalf of Technical Assistance for the Republic of Armenia (TARA), to provide consultations and organizational advice to businesses and associations such as the All Armenia Women’s Union; to mobilize and train advocacy groups; to remove obstacles to private sector development; and to advise small-scale entrepreneurs on business plans.
**Belarus**
**Center for International Private Enterprise**—$73,313. On behalf of the Independent Institute of Socio-Economic and Political Research, to analyze the impact of free-market economic principles in the activities of political and economic leaders, and to commission a national survey of attitudes toward privatization and market reforms.
**Georgia**
**National Peace Foundation**—$55,000. To enable the Caucasian Institute for Peace, Democracy and Development in Tbilisi to conduct a series of programs to help promote democratic and free-market values and consider solutions to the problems of the democratic transition in Georgia and the Caucasus.
**Kazakhstan**
**ARKOR Foundation**—$30,000. To establish an Inter-ethnic Relations Monitoring Center; and to provide an open forum for political parties, government officials, and others to discuss inter-ethnic issues and formulate policy recommendations on how to ameliorate tensions.
**Free Trade Union Institute**—$329,565. To support the Independent Trade Union Center of Kazakhstan and its network in ten oblasts; and to conduct training to strengthen democratic workers’ organizations.
**Kyrgyzstan**
**Center for International Private Enterprise**—$88,152. To provide entrepreneurship development training for Kyrgyz entrepreneurs, managers of entrepreneur associations, journalists writing on economic issues, business school instructors, and government officials of agencies pertinent to small business development.
**Tajikistan**
**Glasnost Defense Fund (*Light of Day* Journal)**—$40,000, $50,000. To support the publication of *Charoghi Ruz* (*Light of Day*), an independent Tajik-language newspaper providing information not available in the state media on current events in Tajikistan, human rights issues, and the positions of a wide spectrum of the country’s political forces.
[According to a Moscow source who worked with *Charoghi Ruz*, the paper served as a mouthpiece for the Tajik opposition group of Akbar Turajonzoda (see *EIR*, April 12, 1996, pp. 44-48, for Ramtanu and Susan Maitra’s dossier on the Tajikistan civil war and its relationship with Afghanistan). Besides this apparent indirect funding from the NED, Tajik opposition figures have been supported by the Washington-based U.S. Institute of Peace, a USAID-funded private organization in the NED orbit, directed by former U.S. Ambassador to Pakistan Robert Oakley. Tajik opposition candidate (in 1991) Davlat Hudonazarov was a USIP “peace fellow” in Washington in 1995. In February of that year, the USIP funded a U.S. tour for Turajonzoda and Muhammadsharif Himmatzoda, of Hudonazarov’s Movement for Islamic Revival. That movement has collaborated with the Taliban in Afghanistan, although Turajonzoda claims to be “strictly observing neutrality” toward the current strife there.]
**Turkmenistan**
**Dashkhovuz Ecological Club**—$15,000. To increase information available on a local level; to establish a system to deliver print media from Russia and other former Soviet republics; and to create a Dashkhovuz Public Information Center to house publications and other materials.
**Ukraine**
**Center for International Private Enterprise**—$153,427. To enable the Association of Entrepreneurs of Ukraine to raise the level of entrepreneurial activity in the country by expanding its database of economic information, identifying barriers to development of key sectors of the economy, and providing training in privatization and business practices.
**Freedom House**—$62,838. To enable the Democratic Initiatives Research and Educational Center to conduct nationwide and regional public opinion polls designed to provide democratically-oriented politicians with reliable, unbiased information on the political attitudes of the Ukrainians.
**International Republican Institute**—$333,210. To conduct a multi-faceted program designed to prepare parties for the 1994 parliamentary and presidential elections in Ukraine, including civic education, poll watcher training, and technical assistance; and to field an election monitoring team for the parliamentary elections.
[The IRI’s standard of “civic education” is suggested by a briefing sheet used in one of its Russian-language youth programs, which advised that before addressing an audience, “It is useful to tell oneself: • I can reduce my viewpoint to one sentence, and my basic thoughts to three; . . . • I have written the main points of my speech in different colors on cards . . . and not forgotten to number the cards, in case I drop them.”]
**International Republican Institute**—$36,925. To enable Democratic Initiatives to compile and publish a “Directory of Elected Officials,” including national parliamentarians as well as members and chairpersons of local councils throughout Ukraine.
**Ukrainian Center for Independent Political Research**—$82,000. To support its public affairs television program P’yatyi Kut (“Fifth Corner”), which focuses on issues relating to democratic and free-market transition; to continue providing information and analysis on the current situation of women; to encourage the formation of new women’s associations; and to publish a yearbook, *Democracy in Ukraine, 1994-1995*, assessing the transition to democracy.
**Uzbekistan**
**Foundation for Eurasia**—$45,000. To enable a group of leading opposition figures and journalists from Uzbekistan to publish and distribute the Forum, a pro-democracy newspaper featuring news, information, and analysis otherwise unavailable in the country.
**National Democratic Institute for International Affairs**—$64,484. To encourage dialogue among the Uzbek democratic opposition, and between the opposition and the government; to bring together political activists in an effort to better coordinate activities and agree on a common strategy and program; and to encourage the government to recognize opposition forces.
**Regional**
**National Democratic Institute for International Affairs**—$142,785. To conduct a joint program with the Turkish Democracy Foundation for participants from Azerbaijan, Georgia, Kazakhstan, Kyrgyzstan, Turkmenistan, and Uzbekistan focusing on the importance of a multipartisan political process in promoting democratic reform.
**Union of Councils**—$75,000. To support the work of its Central Asian Human Rights Information and Monitoring Network, particularly in Tajikistan, Turkmenistan, and Uzbekistan, including publication and distribution of biweekly information bulletins.
---
**LISTEN TO LAROUCHE ON RADIO**
**Frequent Interviews with Lyndon LaRouche on the Weekly Broadcast "EIR Talks"**
**ON SATELLITE**
4 p.m. ET
Galaxy 7 (G-7)
Transponder 14.
7.71 Audio.
91 Degrees West.
**SHORTWAVE RADIO**
Sundays 2100 UTC
(5 p.m. ET)
WWCR 12.160 MHz
**Cassettes Available to Radio Stations**
**Transcripts Available to Print Media**
IMF pressure is driving Russia toward civil war
by Rachel Douglas
The sight of ailing Russian President Boris Yeltsin, appearing on TV on Oct. 17, to fire Gen. Aleksandr Lebed as head of the Security Council, brought home to millions of Russians what happened in last summer’s elections: They elected not a President, but a continuing time of strife, verging on civil war. Yeltsin signed the decree in front of the cameras (lest it be claimed he hadn’t signed it), after days of furious scandal-mongering, counter-accusations, and coup threats among the top figures in Moscow, including the Russian Armed Forces.
An economic crisis, more severe by the day, is driving the political turmoil in Moscow. Lyndon LaRouche reported to a U.S. national TV audience last June 2, after his visit to the Russian capital in April, that “the country is on the verge of an explosion. They have only one option, and that is to get rid of the International Monetary Fund.” But the IMF, which dispatches a delegation to Moscow every month to approve or disapprove the Russian government’s performance and decide whether to release the latest $300 million tranche of its current loan to Russia, is fanning the flames.
The Sept. 28-Oct. 1 IMF annual meeting, where IMF Managing Director Michel Camdessus again admitted that the international financial system is on its last legs, was attended by a hefty contingent of Russian officials: First Vice-Premier Vladimir Potanin, Central Bank chief Sergei Dubinin, Vice-Premier and Finance Minister Aleksandr Livshits, and Minister of Economics Yevgeni Yasin, all close associates of the IMF’s darling, Anatoli Chubais, chief of the Presidential Administration. On Oct. 5, the Moscow daily Nezavisimaya Gazeta ran the headline, “High-Ranking Russian Functionaries Satisfied with Their Visit to Washington; Implementation of the IMF’s Demands, However, Threatens the Existence of Russia.”
The group received only a $159 million World Bank credit (to “develop the securities market”!), but returned to launch a tax-collection assault on Russian industrial firms. On Oct. 11, Yeltsin decreed creation of a new tax collection agency, the Temporary Extraordinary Commission, or Ve-Che-Ka—the original name of the KGB secret police in the Soviet Union, in Lenin’s day!
Our correspondent in Russia asks, “When the IMF comes to dictate conditions to Germany, will they set up a collection commission and call it the Gestapo?”
Thomas Wolf, the IMF’s representative in Moscow, told the press how happy he was with this innovation, and the New York Times editorialized that now the IMF can better “hold Russia to exacting fiscal standards.”
Pyotr Mostovoy, head of Russia’s State Bankruptcy Committee, announced Oct. 15 that bankruptcy proceedings would begin against three oil companies, an aluminum plant, and two auto firms, unless they pay 1.3 trillion rubles ($240 million) of tax arrears within a week. The debtors are the oil firms Tatneft and Purneftegaz, the Krasnodar oil refinery, aluminum producer Achinsky Glinozemny Zavod, and car manufacturers Moskvich and KamAZ (located in Tatarstan). They paid under 20% of their taxes due in the first six months of 1996, Mostovoy said. The 185 largest firms owe 12 trillion rubles ($2.4 billion) to the federal budget and more than 25 trillion rubles ($5 billion) to local budgets.
No economy, no taxes!
Behind the agitation about Russian tax collection is the disintegration of the economy, during nearly five years of deregulation and looting called “reform.” Yes, Yeltsin eased tax demands on regions and firms last winter, as part of his
reelection effort. But, wrote Academician Leonid Abalkin Oct. 8, the search for tax evaders is no “fundamental approach” to lack of revenue. If the military, science, and social services aren’t financed, and there is an overall payments crisis (30% of transactions in Russia are carried out by barter), then companies and institutions don’t pay wages or collect payroll taxes, and can’t pay taxes. There is no revenue base to support the functions of government.
In the draft 1997 budget submitted by Prime Minister Viktor Chernomyrdin’s government (already rejected by the Duma, or parliament), Russia was to borrow $9.2 billion on foreign markets, while spending $9.2 billion to service the foreign debt. “There will not be one spare cent,” wrote Abalkin, “to be invested in the Russian economy.” Russia also owes billions to the Moscow banks of the nouveaux riches, which bought short-term government bonds at triple-digit interest, tiding Yeltsin over before the elections.
Economist Sergei Glazyev, as Lebed’s economics desk chief at the Security Council, made the same point. He proposed radical state intervention, to promote a process of investment in the real economy, decoupled from the speculative markets. This did not accord with the agenda of Chubais or the IMF. Hours after Lebed’s ouster, Glazyev resigned. (For more on the views of Glazyev and Abalkin, see EIR, May 31, 1996, “Russia, the U.S.A., and the Global Financial Crisis,” a roundtable discussion in Moscow between LaRouche and Russian economists; and “Growth in a Transitional Economy,” an analysis by Glazyev.)
The payments crisis persists
But the payments crisis won’t go away. On Oct. 21, workers at two of Russia’s nine atomic power stations, though by law they are forbidden to strike, stopped work for an hour to demand payment of back wages for June through October.
Russian Health Minister Tatyana Dmitriyeva said that, during the first 10 months of the year, the health care sector received only 38% of funds due from the national budget. According to Nikolai Gerasimenko, head of the Duma’s Committee on Health Care, Russian medical institutions are unable to deliver even minimum services.
At an Oct. 23 press conference during his visit to Moscow, Secretary General George Weber of the International Federation of Red Cross and Red Crescent Societies said that “30 to 40 million vulnerable people in Russia need help during the country’s transition.” That is nearly one-third of the population: the elderly, the homeless, and the disabled.
In Chita Province, where pensions have not been paid in over three months, conditions may be called “famine,” said an Oct. 15 article in Moskovsky Komsomolets. It quoted a letter provided by Duma member Sergei Kalashnikov, chairman of the Committee for Labor and Social Policy, from a parent there: “I have five children. For three weeks now we have had nothing to eat at all. We live on oil cake and mixed fodder. The children are fainting from hunger. Please help.”
Hot autumn
At a press conference the night of his dismissal, Lebed charged Chubais with running an illegal regency for the sick Yeltsin, and warned of a “hot autumn.” Asked to elaborate, Lebed cited the most dramatic wage demand of the month, a letter published in Nezavisimaya Gazeta on Oct. 19 and attributed to “the collective of officers of the General Staff” (the coordination body of the Armed Forces).
Whether or not it is authentic, the text expresses the rage of Russian patriots at IMF-induced economic devastation. The officers demand payment of all back wages by Oct. 25, or else. “We command sufficient forces and means, to force the Kremlin gentlemen to drop their plans.” The “plans,” identified as crushing Russia’s military leadership as a prelude to putting the country under UN rule, are attributed to “the trans-Atlantic sponsors of the Kremlin.”
The publicity given this letter by pro-Chubais TV stations, linking it to Lebed, seemed designed to bolster charges from Internal Affairs Minister Anatoli Kulikov and others, that Lebed was planning a coup. But the document gained credibility, when Chief of the General Staff Marshal Mikhail Kolesnikov was shifted to another post on Oct. 20, and there was a shakeup in the command of the Airborne Troops.
Lebed, who ran third in the Presidential elections (after which he got the Security Council post from Yeltsin, who needed his supporters’ votes in the second round), is launching a new political movement.
Yeltsin set up one more structure to run Russia: a Consultative Council, consisting of himself, Chernomyrdin, Federation Council Speaker Yegor Stroyev, and Duma Speaker Gennadi Seleznyov, a Communist Party leader who helped grease the chute for Lebed’s slide from office. Chubais will be attending CC meetings in Yeltsin’s place, until after the latter’s heart bypass surgery, scheduled for mid-November.
As we go to press, however, there are more demands for Yeltsin to step aside. Gen. Aleksandr Korzhakov, ousted as Yeltsin’s chief of security, and lately a political ally of Lebed, told the London Guardian in an Oct. 23 interview, that “Chernomyrdin must take the reins,” as Yeltsin is unable to govern. He charged that Yeltsin was a prisoner of his daughter, Tatiana Dyachenko, the partner of Chubais in the “regency.” “Why is the Chubais regency so dangerous for Russia?” asked Korzhakov. “We have a regent with a President alive, this is extremely dangerous. . . . I wouldn’t like things to get to a level of popular revolt, but events are moving this way by themselves.”
In the Oct. 22 issue of Komsomolskaya Pravda, Korzhakov was seconded by another close Yeltsin aide of the recent period—Nikolai Yegorov, fired as Yeltsin’s chief of staff in July, to make way for Chubais. The President, Yegorov alleged, “is remote from reality.” Like Lebed at his press conference, Yegorov identified Moscow financiers Boris Berezovsky and Vladimir Gusinsky as key backers of Chubais in the current power struggle.
Belgium rocked by protests as people stand up against ‘pornocracy’
by Rosa Tennenbaum
In Belgium, as in other countries, political scandals have become part of everyday politics, but what was revealed to the public in the last few weeks, was just too much: a ring of pedophiles, killers, blackmailers, and thieves, directed by Marc Dutroux, operating with the protection of the highest-level political circles in the country. At least four girls were killed by Dutroux’s gang, including two who were starved to death. Over several years, investigations yielded no results, until investigator Jean-Marc Connerrotte took over the case. He identified Dutroux, and put him behind bars; he rescued two girls out of their hell of captivity; and he even managed to put people such as Alain Van der Biest, a former government minister, into prison. It is no surprise that Connerrotte became a very popular public figure.
Thus, the decision of the highest Belgian court, on Oct. 14, to remove Connerrotte from the case, hit like a bombshell. In the same hour that this decision became known, spontaneous protests started all over the country. Everywhere, workers put down their tools and went into the streets to vent their anger. They were joined by housewives and students; teachers left their classrooms with their students to demonstrate; firemen blew their horns; engineers stopped their trains at noon-time for 30 minutes; bus drivers followed their example; garbage collectors formed their trucks into caravans to protest. In the port of Antwerp, the lock-keepers went on strike; in Namur and Charleroi, the birthplace of Dutroux, bus drivers went on strike for the whole day; workers of the national telephone company walked out; workers at an aeronautics factory blocked the streets to a nearby airport; students occupied the judicial palace in Antwerp and set up a vigil; and the fire brigade in Liège drove their trucks downtown and turned their water cannons against the Justice building to demonstrate that “the whole judicial system needs a good cleanup.” Even in the southern part of the Netherlands, 600 workers at the Nedcar auto factory stopped work for one hour in remembrance of the murdered children. All these protests were spontaneous—no trade union, no social grouping had organized them. And they were only the beginning.
Connerrotte was made a scapegoat by the high court. The investigator who uncovered the biggest judicial scandal in the history of the country was found guilty by the highest judges of having attended a dinner sponsored by the Children Funds, together with State Attorney Bourlet. There, the two little girls whom Connerrotte had freed from Dutroux honored him with a bouquet of flowers and a pen. The Children Funds “donated” to each judge a plate of spaghetti. The organizers of this meeting wanted to raise funds for the mother of a kidnapped girl, who lacked money to pay her lawyer. That, according to the high court, was evidence that Connerrotte was biased; the court removed him from the case, despite the fact that a number of judicial officials had made clear beforehand that the Children Funds were not an interested party in any court case, and that the investigator was therefore only taking a clear position against crime in general, and not for any particular party.
The background
This was not the first time that Connerrotte’s investigations had been sabotaged. In 1992, the high court had dismissed Connerrotte from another hot case: the murder of André Cools, who had been vice prime minister and head of the Socialist Party of Wallonia. Cools was involved in the so-called “Agusta” affair, in which the Italian helicopter manufacturer Agusta bribed Belgian politicians to win the contract to supply the Belgian Army with its helicopters. Shortly before Cools was killed in 1991, he had announced that he would name the names of everybody who was involved in this bribery, which was connected to the “Iran-Contra” scandal.
Already back then, Connerrotte had identified and arrested suspects who were set free again by the court in Liège. Even though this was a clear indication of a judicial coverup, nothing was done. The same was true in the case of the investigating judge in Charleroi, who had put aside dossiers on suspicious activities of certain policemen.
Instead, the single successful investigator, Connerrotte, was now being dismissed for a second time.
These facts nourished the suspicion in the Belgian population that the child abduction case, too, was to be suppressed because of its links to the highest political circles. Such suspicions were compounded by public statements of some of the highest judges—such as the honorary chairman of the highest court, André Mazy, who defamed the two investigators as “cowboys” and “demagogues,” “unreasonably lucky” in their search for the criminals, and who dismissed the mourning parents as “sick people.” In addition, people who pointed out that a coverup was being perpetrated suddenly came under political and physical attack. An activist for children’s rights, Marie France Botte, for example, who expressed her fear that high representatives of the state may have built up a ring of ritual killers, was assaulted in front of her apartment. She escaped her would-be killer, but was seriously injured.
Everything indicates that the population is right to suspect a coverup. Newspapers such as *La Libre Belgique* reported that Connerrotte had successfully tracked down high-level officials involved in the murder case. He managed to confiscate 5,000 videos, in which child abusers were filmed in their orgies; apparently the intent of whoever made the films was to blackmail the abusers afterwards. High officials in politics, law enforcement, the judiciary, the different parties, and so on, are said to have been captured on tape *in flagrante delicto*. The tapes are now being evaluated, under utmost secrecy.
**The ‘White March’**
The population responded to all these disgusting revelations with an unprecedented wave of protests, which reached a high point in the “White March” on Sunday, Oct. 20. The parents of the girls who had been killed by the Dutroux ring called the huge demonstration in Brussels, the capital of Belgium. Out of the nation’s 10 million inhabitants, around 325,000 people came—three times the number that was expected. From grandmothers to babies, everybody participated in the march. The demonstration had to leave its staging area, Gare du Nord, much earlier than planned, just to make room for the thousands more constantly streaming into the city.
It was one of the biggest demonstrations in the history of the country, one which “had a touch of the year of change 1989,” Belgian media observed, referring to the mass rallies in Leipzig, East Germany, that brought down the communist regime. Demonstrators accused the judicial system of complicity with Dutroux, and demanded that the crimes be ruthlessly investigated and solved. They demanded the heads of high-level officials involved in the coverup, and that the judicial apparatus (which is not independent in Belgium, but staffed and controlled by the parties in power) be reformed. Throughout the country, literally every second house is decorated with a white banner or bed sheet, to make known its residents’ solidarity with the movement. White, the symbol of purity and innocence, became the color of this movement.
The protests have already produced results. Right before the march, King Albert II, in a speech to a seminar on child abuse and missing children in Brussels, voiced his concern about the moral condition of the judicial system. “One of the state’s main duties is to ensure the security of all its citizens, and particularly the most vulnerable ones: our children,” the king said. “This drama must be totally clarified, along with its origins and its ramifications.” Such statements clearly exceeded the king’s constitutional status, which obliges him to refrain from interference in political affairs.
The strength of the protests forced Prime Minister Jean-Luc Dehaene to meet with the victims' parents after the demonstration. He promised definite measures to clean out what everybody calls the "pornocracy." The parliament felt compelled to finally agree to a reform of the judicial system, to make it independent.
**Bring morality back into politics**
This scandal hits a people who have already suffered much at the hands of the politicians. The government has imposed draconian austerity measures, and still unemployment is rising. At the beginning of this year, the parliament handed over emergency powers to the government, in a desperate attempt to meet the European Union's Maastricht Treaty criteria, at any price. One newspaper quoted a trade union representative at Volkswagen, Brussels, as saying: "We are totally fed up with people in high places telling us what is good and what is bad for us, whether it be the high court, or the people who tell us our salaries are too high and that we should join the single currency."
People are standing up against these policies, across ethnic and national borders. Three weeks ago, British papers were gloating that this scandal would finally tear apart a country which has always been divided along ethnic lines between Dutch-speaking Flanders and French-speaking Wallonia. Exactly the opposite happened. In their grief and anger, people closed ranks against corruption and political incompetence. "We are one country and are standing up for one cause," was one of the slogans at the march. People are asserting a "common humanity that has risen above the squalid deals of a political class that has failed the nation," the London *Times* admitted on Oct. 21. "Belgian society remains steeped in the precepts of mainstream Roman Catholicism. Last weekend, those moral certainties challenged the political establishment to live up to its responsibilities."
Strikes, demonstrations, and protests continued throughout the week after the march. The scandal around the pedophile ring was just the detonator, which sparked the widespread anger and frustration about the economic and political situation. As the mother of one of the missing children said at the "White March": "We owe a lot to these dead children, for a new force is born, thanks to them."
That force wants to put morality back on the agenda; the movement is carried by moral principles, for humanity and justice. The Belgians are discovering the power of the people, or, as one marcher said: "It is as if we were waking up from a bad dream.... It is urgent to put morality back in the running of this country."
---
**Italian prosecutors close in on 'new P-2'**
by Claudio Celani
The Sept. 15 arrest of Lorenzo Necci, general manager of Italy's national railway company, is threatening Italy with consequences which observers describe as potentially more devastating than the 1993 "Tangentopoli" corruption scandals that rocked the country's post-war political system. The arrest has had the immediate effect of jeopardizing Italy's largest infrastructure project. But there is more to it than that.
Necci was the architect of a 36,000-billion lira plan for high-speed railway construction, which had just started to be implemented. The project consists of east-west routes, from Turin to Venice, which would be connected to the French high-speed network and to the central and eastern European networks through new tunnels under the Alps. A north-south line would stretch from Milan to Naples. Although the project has some critical weaknesses, especially in its financing, Necci worked for five years to rationalize the structure of the national railway company, Ferrovie dello Stato (FS), and the project finally did get under way. Work began on the Rome-Naples, Milan-Bologna, and Florence-Bologna lines, all involving the largest private and state-owned construction companies, such as FIAT, ENI, and IRI.
The whole project is now thrown into doubt, even if the government did quickly replace Necci as president of the FS, and Transport Minister Claudio Burlando stated that the work will not be interrupted. But the prosecutors who arrested Necci and are keeping him in jail, are focussing on a suspected system of illegal bribes which involves all contractors for the Alta Velocità project. Therefore, developments in the investigation could easily block the project.
Also targeted for investigation is Public Works Minister Antonio Di Pietro, the former "Operation Clean Hands" prosecutor, who had just announced a vast program for building and upgrading highways, aqueducts, and roads, especially in southern Italy. Di Pietro was apparently supporting construction of the famous "Messina Bridge," to connect Sicily to the mainland.
All these infrastructural projects had been attacked by radical ecologists, such as Environment Minister Edo Ronchi, and by Treasury Minister Carlo Azeglio Ciampi, who
was unwilling to finance them. Strong opposition came from the European Commission as well, which is pressuring Italy to comply with the Maastricht Treaty’s “criteria” of budget balancing.
All the pressures notwithstanding, the economic and social situation in the country, above all the high unemployment (more than 20% in southern Italy), had pushed the investment plans ahead. Now, the new scandals have changed the picture.
The representative of a parliamentary faction whom this author met at the beginning of October in Rome, described the situation: “Here, everybody is afraid of being arrested the next day.” Real issues, such as the economy or international strategy, are being set aside until the issue of survival is solved. “Our priorities are: first, to avoid getting arrested; second, to implement, quickly, an institutional reform limiting judicial power against political power [again, avoiding getting arrested]; and third, the economy.”
**The institutional conflict**
A number of parliamentary forces believe that the Italian judiciary—a completely independent body, which has no elected representatives—has gone out of control. This faction insists that, if they are not stopped, the “Party of the Prosecutors” (*Partito dei Giudici*) will make the Italian political landscape into scorched earth, to the advantage of such Jacobin forces as the Northern League.
The opposite faction, which ardently desires to put a majority of Parliament in jail, insists that corruption is real and has to be eradicated; that the prosecutors are just doing their job.
Both factions are right, in a way. Therefore, the institutional conflict is getting worse and worse, and apparently the only solution, is for one institutional grouping to destroy the other. The real solution, however, can come only if the fundamental hypothesis underlying the current political process is changed.
For instance, there is no longer a figure around of the stature of Aldo Moro, the Christian Democratic statesman who was kidnapped and killed by the Red Brigades, under orders from Henry Kissinger, in 1978. Moro, before being killed, had successfully defused what he correctly perceived as a scandal campaign engineered by U.S. State Department circles, centered around revelations about bribes coming from the Lockheed Corp., and aimed at destroying Moro’s party. It is not important here to discuss whether Moro or his colleagues took the bribes. What is relevant is that Moro understood that the attack was directed against the political system itself, and reacted with a famous speech in Parliament, in which he declared: “We will not let ourselves be put on trial.” Moro could act that way, not only because of his known moral integrity, but because he represented a “political hypothesis” of national sovereignty, on the basis of which he could rally the support of both the government and the opposition, the famous “national unity” policy of which he was the undisputed leader.
Unfortunately, today there is no Moro around, and the political process is determined by paradigms ensuring the self-destruction of the system, such as world government, globalization, and privatization. In order to defend the political system from disintegration, one has first to believe that a political system is needed. But if you believe in the free market and world government, you do not need a political system; you just need technocrats.
**The higher level**
Yet, the very investigation of Necci supplies a couple of leads that could turn the situation around. What is needed, is that investigators raise their attention to the higher levels, those “above politics”; if followed, such investigations take you outside of Italy. Indeed, the system of political corruption in post-war Italy was “built” by forces centered in the City of London, with the intention of, first, corroding the system from the inside; and second, of easily destroying it, when the time was right, with well-steered scandals. These are the same forces which are today pushing deregulation, the free market, and privatization.
Lorenzo Necci is connected to two key figures: banker Francesco “Chicchi” Pacini Battaglia; and a U.S. citizen named Enzo De Chiara, named by investigators as the head of “the new Propaganda-2” (from the name of the famous secret P-2 masonic lodge discovered in 1981).
“Chicchi” Pacini Battaglia was already identified in 1992 as the person who managed a system of bribes through his Swiss bank, Karfinco. Karfinco was founded in 1980, as a conduit through which money coming from the Italian state-owned company ENI could be used for bribes to political parties, especially the Christian Democrats (DC) and the Socialists (PSI). Strangely, Pacini Battaglia walked free, after being interrogated by prosecutor Antonio Di Pietro, while hundreds of other politicians and business managers who had committed the same crime, were arrested and spent months in prison.
The reason that Pacini Battaglia was not arrested, has to do with the fact that the “Clean Hands” investigation was never supposed to destroy the system of corruption itself, but only certain political parties. The system of corruption had to stay, in order to control the next phase.
Pacini Battaglia inherited a system built by British agent Eugenio Cefis in 1963, after Enrico Mattei, the founder of Italy’s national oil company ENI, was assassinated. Cefis reversed Mattei’s policy of national independence and Third World development, and started to build a network of Swiss subsidiaries of ENI, in partnership with the Union Bank of Switzerland, in order to operate out of the control of Italian authorities.
Cefis left ENI in 1975, but after his retirement, the front line was taken by Florio Fiorini, who was financial director of ENI until 1980, and later by Pacini Battaglia.
**The old P-2**
The system of political corruption was officially elaborated by Licio Gelli, the head of the P-2 lodge, in his "Plan for a Democratic Renewal," found in the possession of Gelli's daughter on July 14, 1981. In January of that year, investigators had seized a partial list of 900 members of the secret freemasonic lodge, which included all the heads of the Armed Forces, police, and secret services, as well as top bankers, businessmen, and politicians from all the "anti-communist" parties. Gelli's Plan describes corruption as a way of steering political parties, trade unions, and the mass media, from the outside. Complementing the Plan is a "Memorandum on the Italian Political Situation," which states bluntly: "It is good to add, in conclusion, that if, to reach our aims, it were necessary to insert oneself...in case we had necessary funds, amounting to 10 billion liras...in the current DC membership system, to buy up the party, it would be necessary to do it without hesitating, with cold Machiavellianism."
Pacini Battaglia, as well as his predecessor Florio Fiorini (who ran ENI's illegal bribe system from 1970 to 1980), have worked in close connection with P-2 members.
The P-2 was officially a "pro-American" organization, but in reality it obeyed the Grand Mother Lodge in London. Two parliamentary investigating committees established a central role of the P-2 organization in all major terrorist and destabilizing events in Italy's recent history. For instance, Moro's assassination was possible, because the P-2 controlled all police and secret services structures, as well as the leadership of the terrorist Red Brigades.
Members of the P-2, like former secret services number-two man Gianadelio Maletti, have been found perpetrating a coverup of the 1980 terrorist bombing of the Bologna train station. Today, Maletti is in South Africa, involved in weapons trafficking.
South Africa is the base of "Operation Longreach" which, according to recent revelations, was used to assassinate Swedish Prime Minister Olof Palme in 1986. One of the operation's leaders, Craig Williamson, was an employee of Giovanni Mario Ricci, connected to the P-2, and a partner of Francesco Pazienza, a P-2 member well introduced into the Bush administration.
The P-2 involvement in the assassination of Palme had already emerged, in a phone call between P-2 head Licio Gelli and Philip Guarino, a top member of the U.S. Republican Party, a few days before the murder. Guarino told Gelli: "The Swedish tree will be felled, tell our good friend Bush" (Palme is referred to as "Palm tree").
Guarino, Gelli and Pazienza were guests at the inauguration of the Reagan-Bush Presidency in 1981. The P-2 organization had played a major role in that election campaign, by engineering the "Billygate" scandal against Jimmy Carter. Eventually, the P-2 network was used as a channel for the arms delivery to Iran, after the Paris negotiations between U.S. National Security Adviser Robert McFarlane and the ayatollahs.
**The new P-2**
The P-2 was officially disbanded in 1981, but, according to investigators, it recycled itself through other freemasonic lodges, and has kept operating. The head of the "new P-2" is, according to Aosta prosecutor Davide Monti, Enzo De Chiara, a U.S. citizen and foreign policy adviser to the Republican Party and a friend of George Bush, Sen. Al D'Amato (R-N.Y.) and Bob Dole. There is an arrest warrant out for De Chiara from Italian authorities, on charges of "conspiracy" and "secret association."
Investigators discovered that in the spring of 1994, De Chiara organized the participation of the separatist Northern League in the government led by Silvio Berlusconi. In a meeting with, among others, De Chiara, Northern League leader Umberto Bossi, and national police chief Vincenzo Parisi, it was decided that the Interior Ministry (which oversees both the police and the internal secret services) be given to Northern League representative Roberto Maroni.
Another participant in that meeting was separatist activist and financial swindler Gianmario Ferramonti, who was arrested last spring because of his involvement in a $13 billion money-laundering ring. The mastermind of the operation, according to investigators, was De Chiara. The ring apparently also involved former U.S. Secretary of State James Baker's law firm in Houston, and a Nicaraguan banker with ties both to Contra and Sandinista circles, Alvaro Robelo.
Robelo, through the Rome branch of his Banco de Centro America e de Italia, issued false certificates which allowed Ferramonti to get credits from Swiss banks. The credits were used, according to investigators, to launder dirty money. The same people tried to sell old German Weimar bonds to the Russian government, at a discount price. Russia, in turn, used them at face value to pay outstanding debt to the German government. The Baker law firm had been contacted to provide expertise in the bonds. The scheme was discovered, before it could be implemented.
With Robelo, the circle of the "new P-2" is closing in on George Bush and Oliver North's "drugs for weapons" operation in Central America. An investigation into De Chiara's friends in Washington could therefore not only support Italian prosecutors in clarifying who is pulling the strings of the Italian destabilization, but could also help uncover who destabilized America through the spreading of crack-cocaine in the streets of Los Angeles.
In both cases, the question "cui bono?" is answered: the London-centered oligarchy, working to destroy nation-states in favor of world government.
**Europe’s responsibility in Bosnia-Hercegovina**
by Gen. J.A. Graf von Kielmansegg (ret.)
Two weeks before the Sept. 14 elections in Bosnia-Hercegovina, a delegation from the Schiller Institute visited the Bosnian capital of Sarajevo. German Gen. J.A. Graf von Kielmansegg (ret.) accompanied the delegation, and submitted the following report, which has been translated by Anita Gallagher.
General Kielmansegg was, until 1993, Chief of NATO Northern Command of Europe. In 1992, he called for NATO air strikes against Greater Serbian targets. After his retirement in early 1993, he spent a week in Sarajevo during heavy fighting, mobilizing public opinion in Germany thereafter to support NATO military action against Serbian aggression. See EIR, April 19, 1996, for excerpts from a 1994 article in which he showed how a military victory could be achieved against Serbia—contrary to the insistence of the UN, NATO, and the European Union mediators, that a military solution was out of the question.
On my last trip, in August of this year, to Bosnia-Hercegovina, which I had already visited several times during the war, I found a country whose ravaged and destroyed appearance had only barely improved a year after the war’s end—in part through Europe’s standing on the sidelines and non-intervention in the war.
I have spoken with politicians, soldiers, church leaders, and the people of the country; the picture was diverse and often perplexing. However, all were agreed that the situation ought not to continue as it is, if the peace and the unity of the country and its future are not to be lost altogether. And again, it appears that the lack of resoluteness on the part of the Western nations, and their political self-interest, unkept promises, and timid wait-and-see attitude, bear a good measure of blame for the current situation. However, it is also the case that, along with Bosnia, the credibility of Europe and of the free world, their last chance to stand as a morally legitimate authority, and a great part of their political freedom of action, would perish.
**No aid, harsh IMF conditions**
I found a country in which the destruction of housing, infrastructure, the energy supply, and manufacturing plants, is unbelievably great. The will to rebuild, and even the capability to do so, do exist in the country. However, the pledged and so urgently needed financial and economic aid, so far, has either still not arrived—is just dragging along and becoming available drop by drop—or is tied, mainly by the World Bank and the International Monetary Fund (IMF), to conditionalities which, for a country so impoverished and destroyed, are completely unattainable. A kind of Marshall Plan for Bosnia-Hercegovina would be necessary, just as, 50 years ago, it helped us Germans to get back on our feet.
It appears almost as though it were in the political interest of some powers, to keep Bosnia-Hercegovina permanently weak and dependent. Moreover, humanitarian aid is no solution; it only humiliates the victims as long as it continues. People must become able to earn their own living and shape their own future.
I found a country in which the conditions of the Dayton Agreement have not been complied with up to now, or are not enforced by the guaranteeing powers. The return of refugees, especially Bosnian Muslims to Serbian-occupied territories, is neither being prepared for, nor enforced. IFOR [the U.S.-led Implementation Force] is far less established in the Serbian part of Bosnia than in the Bosnian-Croatian Federation; within that, Herceg-Bosnia is a virtually lawless, Croatian-run district, in which the mafia of Croatian Defense Minister Susak holds the power.
Freedom of movement in the country, in reality, is only guaranteed, to a certain extent, in territory secured by the Bosnian government’s army. In all other parts, it is either enforced by IFOR, or by international pressure. The result is, now as before, that the only fully permeable border in Bosnia-Hercegovina for Muslims and Croats, is the inner Bosnian border between the Serbian-occupied part of Bosnia and the rest of the country. IFOR, whose military mission up until
now was a success, is becoming so tied up with political handicaps in the enforcement of its other internal state tasks, that its role is dwindling to that of observing, staying on the sidelines, and waiting. Thus, it will assume, over the short or long term, the same disastrous role as Unprofor [UN protection force] did in its time, for the entire country. Therefore, it is not sufficient to prolong its stay, although that is, in any case, necessary. At the same time, its mandate must be changed, in order to carry out its internal state responsibilities as well as the full enforcement of the Dayton Agreement. For, an effective, competent, and generally recognized police force, bound only by the law, has not been established in the entire country by a long shot.
**The West refuses to act**
The fact that the Serbian part of Bosnia-Hercegovina is governed by Europe’s most terrible criminals, and that the free world tolerates it, is a scandal of the first order. IFOR ducks: No one has jurisdiction; beware of violence. How could it be otherwise? Every police force resorts to violence, and runs a risk, if it takes violent criminals into custody. There is no doubt at all, that [Radovan] Karadzic, [Ratko] Mladic, and the others could have been arrested, if there had been a desire to do so. The truth seems to be that there are nations—above all, those which always exercised a pro-Serbian policy—which have no interest in a trial of these people. With this, the impotence and/or indifference of Europe in the pursuit of international criminals becomes obvious. Enforcement of the law becomes a plaything at the caprice of political interests.
I found a country in which all the people, from simple farmers up to the highest political leaders, live not only in want, but under enormous pressure. This has to do with the conditions which I described above, even if only inadequately, as the tip of an iceberg. People are no longer masters of their own lives; they are, and feel, alienated. For the most part, invisible authorities and requirements leave them helpless. That leads, on the one hand, to complete resignation, and, on the other hand, to a relentless internal power struggle. At the brink of the abyss—for which we share the blame—each struggles for himself, and for physical, psychological, and political survival, by every means. No one trusts anyone else any longer. This also manifests itself in the party landscape, in the parties’ conduct in pursuit of their distinctly colored ethnic and religious interests, and in the contradictions and turns of which we often become aware, shaking our heads. These are the struggles and convulsions of a country that we indeed rescued from death, and that we surely do not want to let die completely, but which we are not really helping toward a future worthy of human beings, with all the measures that are necessary and possible for us. No one ought to blame the people in Bosnia-Hercegovina, who, indeed, are victims, for this behavior.
I found a country in which the approaching elections, organized through the OSCE [Organization for Security and Cooperation in Europe] and also in the countries that are hosting Bosnian refugees, have been prepared in an utterly insufficient manner; have been manipulated in advance, through massive abuse of the so-called P2 printed ballots; and, above all, have been rigged in the Serbian part of the country to favor the victors, and thus fraudulently alter the outcome. The elections will be neither free, nor democratic, nor fair. That they will be carried out under these conditions, is a second scandal. The key to it lies with the OSCE and in Washington; however, no one appears to have the courage to tell the truth: that it is not working. The outcome is foreseeable. The results of war, genocide, and expulsion will thus be ratified and legalized.
The division, and with it, the destruction of the multi-ethnic state, will thus be sealed. Perhaps it is so intended. However, no one can say: We did not know.
**The refugee problem**
Under these circumstances, the utmost caution must be used in the return of the refugees, including from Germany, which up to now has helped so much. No forced return should be allowed—not at this point in time, at any rate. There are indeed many districts in Bosnia-Hercegovina which are proportionally intact; however, those are the ones, as a rule, from which people did not flee.
The expelled persons living among us come from the scorched-earth regions or the areas now under Serbian control, in part also Croatian-settled areas, which they were not allowed to enter, and if and when they did, they were without rights, defenseless, without any resources. Unless economic and political conditions are substantially improved, with our help; unless human rights, protection of minorities, and freedom of movement in the entire country are guaranteed and enforced, these people would be sent into a void. If this were done, they would lose the last remnant of hope in a future worthy of man.
It is right that the refugees should return to their own country someday. Only under this condition, could we undertake projects on so grand a scale. Only in this way, will their own country have a future, in which they can and must work together. President [Alija] Izetbegovic has stated clearly enough that Bosnia-Hercegovina needs its people who are now living in Germany: their knowledge, their skills, and their will to rebuild. This process can now also get under way. However, each case must be examined.
The principle of free will is still supposed to be predominant. If the conditions for a life worthy of man are again strengthened in the country, or, at least, appear within reach in the near future—and, on that, above all, effort should be concentrated—then the other expellees could, and surely will, follow.
The securing of Bosnia-Hercegovina’s future is the decisive test, the great challenge for a common European foreign, security, and economic policy guided by justice and freedom. The West ought not once again become guilty in this country—this time, guilty of a lost peace.
**The challenge facing post-election Armenia**
Vazgen Manukian, leader of the National Democratic Union, was the unified opposition candidate in Armenia’s Presidential elections, held Sept. 22. Formerly head of the now-ruling Armenian Pan-National Union, he served as prime minister and defense minister of Armenia. The Armenian Central Electoral Commission certified a 51.75% return for President Levon Ter-Petrosian; even adjusting only for the vote fraud documented by observers from the Organization for Security and Cooperation in Europe (OSCE) would have pushed his total below 50%, requiring him to face Manukian in a run-off. Hovhannes Galajian’s article in EIR on Oct. 18, 1996, reported on the vote fraud and post-election violence. Vazgen Manukian was interviewed for EIR on Oct. 14, answering questions from Karl-Michael Vitt and Hovhannes Galajian.
**EIR:** How do you assess the situation, after the Presidential elections? The OSCE observers established that there were electoral violations on a massive scale.
**Manukian:** The situation in Armenia may be said to feature a semi-dictatorial regime, with a democratic shell. Before now, the world had only seen the democratic shell. But the population of Armenia understands the essence of the regime very well: The population remembers the vote fraud that took place in the [1995] parliamentary elections and the vote on the Constitution, and so the citizens of Armenia realized, that there would be vote fraud, and the use of force, in the Presidential elections. Many parties and politicians wanted to boycott the elections, thinking that the result was a foregone conclusion. I was one of the few who thought that any chance should be utilized, either to achieve some results in the elections, or to show, once and for all, what kind of a regime has been established in Armenia.
Because L. Ter-Petrosian was sure that, with the population intimidated and lacking confidence in the electoral process, the opposition would be unable to run a successful campaign, he allowed fairly decent conditions for election campaigning, of which we immediately took advantage. It was only in the last few days before the elections, that the regime understood that it was losing the campaign, and again unleashed the fraud machine and resorted to violence. This time, however, our people’s dream of having a regime, based on the people’s power, was especially strong, and therefore the elections were followed by a powerful confrontation, and mass protest rallies. Our demand was not that L. Ter-Petrosian not be President, or that he resign; we were simply demanding verification of the election results.
The population was very well informed about the vote fraud. What the OSCE observers confirmed, was only the tip of the iceberg, but the population is familiar with the deeper layers of the fraud.
The situation in Armenia is such that, if there is not now going to be complete openness on the question of the elections—i.e., if there is not a decision to hold new elections, or a second round, or to recognize the legitimate President—the Armenian population will not take part in elections henceforth, and we, as politicians, will also consider that it is useless to participate in any elections.
**EIR:** Armenia’s physical economy has been destroyed, as a result of the extreme liberal reforms. Many Armenian citizens have left the country to work abroad, to be able to feed their families. The infrastructure has been destroyed. What is your analysis of the consequences of this situation, for the people?
**Manukian:** Two aspects of the economic policy, implemented in Armenia, should be delineated. The first is the official economic policy, which will lead to the total destruction of industry in Armenia; a population of only a half-million will be left, working in small-scale handicrafts and primitive agriculture. I do not exclude, that a few major industrial plants will also exist, owned by foreign capital or with mixed ownership, but on the whole, the result of this policy will be Armenia’s transformation into a third-rank country, and the loss of everything we had in the preceding period. The fact
remains, that despite the totalitarian Communist regime, Armenia, in recent decades, became an industrial country with a powerful scientific and technological capability, and there is a danger of all this being destroyed.
The second aspect, is that, besides the official economic policy, there is also an unofficial one, which features clan relations, corruption, and the total suppression of any investors who don’t belong to the five or six clans that have a monopoly in the Armenian economy.
The continuation of both the official and the unofficial economic policies dooms Armenia to the position of a fourth-class country, of no interest not only to surrounding countries, but even to its own people. The greater part of Armenians will be scattered to various countries, and we shall lose all the potential we had. Thus, what was at stake in the Presidential elections was not only to replace the President and establish people’s power, but also a change in economic policy.
**EIR:** President L. Ter-Petrossian, the International Monetary Fund with its conditionalities, and the World Bank are responsible for the present situation. What is your alternative?
**Manukian:** After the destruction of the U.S.S.R., the World Bank ran into a new situation. The World Bank had drafted projects, which were supposed to bring undeveloped African and Asian countries into the mainstream of world economic integration. It had dealt only with countries that lacked the relevant trained personnel and had a very low standard of living and education, but the World Bank attempted to apply the same methods, in the countries formed as a result of the break-up of the U.S.S.R.—where, I am convinced, this policy could not succeed.
Leave aside the circumstance, that the international financial organizations wanted to have levers, by which to exert their own influence in these countries. In and of itself, the implementation of a Keynesian, purely monetarist model does nothing for their development. In Armenia, the monetarist model leads to the destruction of industry, since it is entirely based on the principles of economic Darwinism: strong sectors of the economy develop, while the weak ones perish. Under current conditions of world economic competition, however, a small country like Armenia cannot have strong sectors of industry, so the application of the “natural selection” model means that *all* sectors perish. I do not attribute malicious intent to the international financial organizations, but they do not understand the processes that are under way in the post-Soviet area.
**EIR:** American economist and former Presidential candidate Lyndon LaRouche has proposed to reform the current financial system, including the IMF and the World Bank, and to create a new system, based on the real possibilities for the economic development of nations. He proposes to build a Eurasian land-bridge, with big infrastructure projects, such as railroads, linking Europe and Asia. Armenia, being at the crossroads between East and West, and North and South, would be in the middle of this development corridor. What do you think about these ideas, particularly, the revival of the old Silk Road?
**Manukian:** After the break-up of the U.S.S.R., we are living in a new world; unfortunately, few people understand that. As a result, the philosophy, and the economic and political methods, typical of the Cold War, continue to be applied in practice. The world has changed, which should lead to a change in the principles of international relations, and in political approaches. A time has come for new politicians. In many countries, rather than political players, leaders will come to power, who possess a philosophical way of thinking and a vision of the contours of the future world community.
As for this concrete program, it is one feature of a new world community, which is still hidden in the mist, but its contours will gradually become more defined. Of course, Armenia is interested in these programs—the rebuilding of the Silk Road, the Eurasian bridge, and so forth, but I repeat, that these are only details of the picture that is still only barely seen in the smoke.
**EIR:** Iran, China, and some other countries have already oriented toward this. As President of Armenia, how will you promote the Eurasian development program?
**Manukian:** Of course, we are taking steps in the direction of implementing these programs. I advocate helping politicians, economists, and philosophers to change their approaches to the future. It is necessary to take the right path, but at the same time, to take into account the opinion of the international community, and to take steps to prevent this path from being termed offensive, since that would be a blow against the development of one’s own country.
Armenians can do a lot to help change the world climate, because we are not only citizens of Armenia, but we are also scattered across the whole world, and in some countries, we have considerable weight and influence on public opinion.
**EIR:** After the Presidential elections, there were large protest demonstrations in Yerevan, against the vote fraud. After a provocation, the Army was brought in, many people were wounded, and political activists jailed and beaten. What should the so-called “free world” do, to help democratic forces in Armenia improve the situation?
**Manukian:** The world community has an interest, in each of its members being a normal, democratic state. In this sense, their interest is obvious. It must be taken into account, of course, that specific powers have their own interests in this region, and sometimes these two factors come into conflict.
I think that each people ought to win its own freedom. The help we would expect from the international community, is to understand the situation in Armenia, and refrain from supporting those forces which are pulling the country backwards.
'Williamsburg II' flops: Time to dump Bush's defense policy for the Americas
by Gretchen Small
The second Defense Ministerial of the Americas (DMA), which brought most defense ministers of the Western Hemisphere to Bariloche, Argentina, on Oct. 7-9, reached no conclusions of substance on regional defense strategy or policies, nor did it produce even a semblance of that condition so beloved by diplomats, a "consensus." Mexico, once again, did not send its defense secretary, and empowered its lower-level delegation to participate only as observers. Not even a date or host country for a next defense summit was firmed up, despite U.S. Defense Secretary William Perry's proclamations that the "Williamsburg process"—a term invented out of the first defense ministers summit held in Williamsburg, Virginia, on July 24-26, 1995—has been accepted as an institution by all involved.
Pentagon officials like to spin a different story, but what participation has occurred, has been achieved through diplomatic and economic pressure, and that old Teddy Roosevelt standby, the threat of unilateral military intervention against nations which refuse to go along with the takedown of their military capabilities demanded by the so-called "globalization" of the world economy.
Secretary Perry's "Williamsburg process" is a failure, creating more distrust and hostility in Ibero-America than acceptance, while wreaking havoc on the capabilities of the nations in the region to defeat growing narco-terrorist forces. The insanity of the policy is epitomized by Perry's proclamations in Bariloche that the Williamsburg process has produced "dazzling victories for peace and security in the Americas," and that the Williamsburg principles, which assert that democracy is the number-one security concern, "are now ... promoting stability and security throughout the hemisphere."
There is no "peace and security" anywhere in the Americas today, and, thus, little democracy worthy of the term. The nations of Ibero-America, looted to the bone by International Monetary Fund policies, face imminent disintegration from spreading war, misery, and British-sponsored indigenist separatist movements, while the breakup of Canada, and the United States itself, is being pushed by the agents of the British Crown as well. Colombia faces a nationwide offensive by 10,000-15,000 well-armed narco-terrorists, who are backed from the Presidential Palace by the drug cartel's Samper Pizano government. Colombia's insurgents are but a local branch of a continental force, the São Paulo Forum, operating under the immediate command of the Castro regime in Cuba, whose allies now deploy sufficient force in Brazil, Mexico, and Venezuela, that civil war could begin at any moment in these countries, too.
In a campaign strategy paper, *The Blunder in U.S. National Security Policy*, issued in October 1995, then-U.S. Presidential candidate Lyndon LaRouche, Jr. warned of the catastrophic global consequences for U.S. security which would result from continuing these Bush-era free trade and democracy policies, and laid out the basic hypotheses upon which a competent U.S. national-security doctrine and policy must be premised, if the United States is to survive the ongoing global financial and strategic firestorm.
The failure of Bariloche, coming on the eve of the likely election of a second, strengthened, Clinton administration, opens an opportunity for the United States to dump the whole supranational "Williamsburg process," inherited wholesale from the despised Bush administration, and return to a defense strategy based on those classical principles of military defense of sovereign nation-states outlined by LaRouche, before the utopian ideologues hand over the Western Hemisphere to narco-terrorism and chaos.
A creepy-crawly from under London's rock
Secretary Perry's "Williamsburg process" was, from its start, conceived as a central feature of the drive to create a Western Hemisphere Free Trade Area (WHFTA), the equivalent for the Americas of the Maastricht Treaty, whose implementation in Europe has led to widespread mass protests. The proposal for a WHFTA was drawn up under Sir George Bush's administration, and first presented under the name of the "Enterprise for the Americas." As Sir Henry Kissinger reiterated most recently in his remarks to the Inter-American Press Association on Oct. 9, the concept behind WHFTA is not merely a free trade zone, but "the creation of an economic and political system of the Western Hemisphere" (emphasis added).
The number of "Sirs" promoting this project is no coincidence. As the global financial system disintegrates, the London-centered international financial interests are ever more strident that supranational structures must replace national governments, because even the weakest nation can, at a time of such crisis, become an instrument of its people to defend
national existence. Because of the military’s active role in Ibero-America historically as a central national institution, a project was started back in the mid-1980s, to silence, reduce, and destroy national militaries, as a necessary flank in the war to weaken, reduce, and eventually eliminate nation-states in the region.
London has been overt in its support for the anti-military project. The London *Economist* magazine greeted the Bariloche conference in its Oct. 5 issue, as an opportunity to rewrite the “mechanisms of defense cooperation in the Americas.” Under the headline, “Toys for the chicos?” the *Economist* argued that the expansion of free trade in the region raises the question, “What use are the weapons anyway?” London’s financiers assert that defense needs have changed, and so, “Latin America could use a framework to decide what they are, and how to meet them through military collaboration—under civilian control.” The *Economist* specified that Bariloche should also consider setting regional “guidelines on weapons purchases.”
And as Bariloche concluded on Oct. 9, London’s International Institute of Strategic Studies released its annual global study, *Military Balance*, which likewise argued that Ibero-America’s armed forces must make “uncomfortable readjustments to their political roles, as well as reducing and reorganizing their force strength.”
**The Williamsburg game**
The idea for the “Williamsburg process” itself was drafted by the Inter-American Dialogue before Secretary Perry was ever named to his post. The Dialogue, ever since its founding in 1982, has been the primary Anglophile policy-making body for the region on this side of the Atlantic.
The Dialogue’s December 1992 report, *Convergence and Community: The Americas in 1993*, outlined the policies demanded of the incoming Clinton administration by the London-centered financier interests for which the Dialogue works. The centerpiece of the program was that the Clinton administration expand Bush’s Enterprise for the Americas into a WHFTA, in order to “lock in” Ibero-American nations to free trade policies. International treaty agreement “restricts national sovereignty and . . . constrains national responses to special problems . . . precisely to limit the sovereign choice of the contracting nations,” it wrote.
Should treaties not be sufficient, however, the Dialogue specified that it is crucial that the WHFTA “club of nations” be empowered with political and military powers, including for “multilateral military intervention” should any nation decide to leave their “club.” All this, in the name of “democracy,” of course.
Key to achieving this goal, *Convergence and Community* insisted, is to curtail the influence of the armed forces, and to establish, as law, the principle that the military has no voice in national life or policy-making in the region. To do so, the Dialogue proposed that “the Organization of American States (OAS), its individual member states, and non-governmental organizations should foster national and regional dialogues among civilian and military officials to take a fresh look at their armed forces—their mission, size, weapons, and cost.”
The Dialogue specified:
“The OAS should consider organizing a permanent forum of civilian defense ministers, armed service commanders, and key members of legislatures, to develop regionwide norms of civil-military relations and the evolving missions of armed forces in the Americas. Clearly, such norms would not be immediately adopted by all armies, but they could lead to a growing convergence of attitudes and behavior as has happened on such matters as the conduct of elections and economic management.”
Because the member-states of the OAS have continued to balk at transforming what serves as an inter-American forum, into a supranational institution, with its own powers, advocates of this policy finally proceeded—unilaterally. The “Williamsburg process” initiated by Perry is intended to become the “permanent forum” proposed by the Dialogue, sneaked in through the back door by a crude diplomatic ruse.
Invitations were issued to the Williamsburg summit, with the promise that the intent was simply to exchange views on hemispheric security, without any commitments required. Pentagon officials insisted no joint declaration would come out of the conference, but said that Secretary Perry would issue a “chairman’s declaration” summarizing what he saw as the major themes and “commonalities” discussed.
Lo and behold, however, at the summit’s end, Perry announced that the nations had accepted his chairman’s declaration as their own, and adopted six “Williamsburg principles,” all based on the premise that the number-one security issue before the region was “democracy.” Perry admitted no vote had been held on his principles. “I passed out the draft of these principles this morning,” he told the press at the conclusion. “Each of the delegates had an opportunity to read them, and then I invited comments for changes and amendments.” And because no delegation stomped out, Pentagon officials now repeat, over and over, that every nation in the hemisphere, minus Cuba, adheres to said principles!
**‘Join me, or I’ll shoot you’**
If Williamsburg was a fraud, the final declaration issued from the Bariloche summit was flat-out ridiculous. Signed by no one, it reports that delegates had “discussed a wide range of security concerns and interests of participating states,” during which discussions, the delegates had “recalled . . . the conference held in Williamsburg”; “stressed the necessity of deepening inter-American cooperation”; “considered that the measures to promote mutual confidence are appropriate”; “urged all countries . . . to promote, through an active and voluntary participation [sic], the success of [UN] peacekeeping missions”; and lastly, “took note of the initiatives proposed during this conference.”
"Recalled," "stressed," "considered," "urged," and "took note": verbs appropriate to a modern t-group session, but which do not connote action or agreement, and certainly do not carry diplomatic weight.
Most sharply rejected, were proposals for the creation of a multinational military force for the Americas. The U.S. delegation reportedly "informally" raised the idea of a regional anti-drug military force in various bilateral meetings held during the summit. Proposals viewed as thinly disguised stepping-stones to a multilateral force were presented to the full meeting by Panama—an occupied nation which no longer has a military, since Bush's 1989 invasion—and by Argentina's Williamsburg toadies. Panama proposed a counter-drug center be set up in Panama; Argentina circulated a proposal to establish a regional peacekeeping training school.
Frontal attacks on the principle of sovereignty are still politically explosive in Ibero-America, despite more than 14 years of IMF economic dictates, and proposals to create any regional multinational force have repeatedly been rejected. Even the Inter-American Dialogue cautioned in its 1992 *Convergence* report that, while "many of us believe" that the OAS should establish "a modest security or peacekeeping capacity . . . to respond to actual or threatened breakdowns of democratic order," discussion of creating an inter-American security force "should be deferred . . . [because] a divisive debate on the subject might well weaken the emerging hemispheric commitment to collective action in favor of democracy."
But the worsening global crisis has put the issue of a supranational military force to police the region back at the center of the globalist agenda.
Before the meeting in Bariloche convened, senior State Department policy adviser Luigi Einaudi (who brags that he "came in with Kissinger" at the State Department and stayed to set U.S. policy for Ibero-America for over 20 years) was chosen to deliver that message. Writing in the special package on "The Security of the Americas," published by the U.S. National Defense University's *Joint Force Quarterly* in its Spring 1996 issue, Einaudi threatened that Ibero-America must learn the lessons of Panama and Haiti. Ibero-America had better authorize creation of "a military arm" for the OAS, because if it does not, "armed peacekeeping activities will be left either to the United Nations or to unilateral action by the United States," he wrote.
Einaudi left unsaid that such foreign interventions stripped Panama and Haiti of their militaries altogether.
Equally devoid of subtlety, Perry centered his opening at Bariloche on what he claimed were "dazzling victories" won for the Williamsburg principles over the past year—and they were all supranational gains. The first and most important victory repeatedly cited by Perry was the ouster and jailing of nationalist Gen. Lino Oviedo as Army commander in Paraguay in April 1996. That so-called victory for "democracy" had been accomplished under the explicit threat conveyed to the Paraguayan government and military, that General Oviedo be ousted, or Paraguay would be militarily invaded, by either the United States, or Brazil and Argentina.
Likewise, Perry hailed the proliferation of international peacekeeping exercises in Ibero-America as "dramatic symbols of the change which has swept our hemisphere." Dramatic, indeed. Whereas, before Williamsburg, no multilateral exercises had been held under the rubric of peacekeeping in Ibero-America, since August 1995, five such exercises have been held, most sponsored by the U.S. Southern Command "under the United Nations umbrella—its doctrine, organization, and vision," according to the deputy commander for operations of the U.S. Army South, Col. Alfred Valenzuela.
Perry also claimed "peace is breaking out" in Guatemala, with Williamsburg's help. There, what Perry calls "peace" is a formula for unending ethnic war, as the UN directs the reshaping of Guatemala's Constitution to establish indigenist bantustans, cut the military by one-third in force strength and strip it of all political power, and grant the Guatemalan National Revolutionary Union (URNG), allies of the Colombian narco-terrorists, major political powers.
Formation of a multilateral force, under whatever guise, nonetheless, was rejected. Mexican Foreign Ministry Undersecretary Sergio González Gálvez, who headed their observer delegation, told the press afterwards, "under no circumstances" would Mexico support any multinational force, because it "violates the principle of self-determination." González Gálvez cautioned that Panama's proposal for an international anti-drug air traffic control center in Panama, had been "sponsored" by the United States, and would only "be admissible if it is limited to multilateral information-sharing."
Pentagon Public Affairs spokesman Lt. Col. Arnie Owens assured this reporter on Oct. 18, after checking with higher-ups, that "there are no plans for any sort of international force, peacekeeping, counter-drug, or otherwise." So, what about the *Joint Force Quarterly*'s publication of Einaudi's knuckledragger demand? Owens tried to dismiss the senior State Department official's article as one of "a whole lot of ideas being floated by all sorts of think-tank people. . . . What I've just given you is where we are, officially, on it," he insisted. "It's not anything that's been floated in official channels."
**'Bush manual' made official**
Perry's major initiative in Bariloche was the announcement that the United States will establish an Inter-American Center for Defense Studies (ICDS) in Washington, "to foster a cadre of civilians" who are to run defense in the region, and thus ensure a "commonality of approach [on] . . . military strategy," and "institutionalize civilian direction of the armed forces."
This is a sweeping proposal, indeed. Until now, the globalizers' anti-military project has been run out of non-governmental organizations and "academic centers," working with government officials such as Einaudi, but in a way such that it could be denied that these projects were official U.S. policy.
The most notorious example of this is American University’s Democracy Program, which produced the book *The Military and Democracy: The Future of Civil-Military Relations in Latin America*, which *EIR* made famous in the region as the “Bush manual.”
Now, that whole “Bush manual” project is being made official U.S. policy—with all the resources and power that entails. Perry specified that officials working on defense matters, “mostly civilians drawn from the Defense Ministries, Foreign Ministries, and legislative defense staffs,” will be brought up to Washington to receive “on-the-job training courses” in quick three- or four-week programs. At the same time, “teams of instructors” will go to Ibero-America, “and set up courses there.” Research fellows will also be sponsored throughout the Americas.
This proposal, too, has met strong opposition. A Pentagon official, speaking at a pre-summit briefing, insisted that the Pentagon is “very sensitive to . . . the views of many of the countries in the region” that defense establishments must be built up in each country nationally, “and that it cannot be done in any sense by the United States, or for them, by any outside power.” The final report from the working group where Perry presented the proposal, states that the delegations agreed to “carry out consultations” on how the Center should function.
Consultations may be held, but they are for window-dressing only. According to the Pentagon’s Colonel Owens, preparations for the Center are already well-advanced. The new Center is funded for Fiscal Year 1997 (by reallocating $2 million from the Army budget), and the first class is expected to get under way in June 1997. The National Defense University (which publishes *Joint Force Quarterly*), has been handed control over the project, and is already drawing up the curriculum. Next year, the Pentagon will submit a funding request to cover the full expenses of all the foreign students involved. Ibero-American countries suggested that the Center be placed under OAS control, but, as Owens emphasized, this Center is “a [U.S.] Department of Defense operation.”
**Colombia haunts Williamsburg**
Leading Ibero-American countries object to the blatant attempt to use “the Williamsburg process” to create a de facto permanent regional defense institution. For the second time, Mexico refused to send its defense minister, deploying a diplomatic delegation as observers only. The president of the Defense Commission of Mexico’s Chamber of Deputies, Gen. Luis Garfias Magaña, reiterated on Oct. 15, that that decision had been taken at “the highest levels” of the Mexican government. What generates “distrust” in Ibero-America, is the U.S. intention to head an intercontinental military strategy, he noted.
The Chileans reportedly informed other governments in advance that they do not wish to “institutionalize the Defense Ministerials.” At the Pentagon’s pre-summit briefing, the U.S. official found it necessary to assert, “We have no objective to create an institution of defense ministers, an alliance, [or] anything like that at all.”
Perry, once again, left such niceties aside. “There will be a consensus to have a third meeting,” he declared in Bariloche.
The final communiqué, however, announces no date, and no host country, for the next meeting. Colombia’s defense minister told the press that they had been chosen as the host for the next summit, but, when asked by this reporter whether it would not discredit “the process” if the next host were the narco-terrorist regime of Ernesto Samper Pizano, the Pentagon’s public affairs spokesman insisted no such decision had been taken.
Colombia’s crisis has haunted the Defense Ministerial from its outset, exposing the disaster contained in the "democracy" agenda adopted by its sponsors. Even as the drug scandal grew over the Samper Pizano government, U.S. Vice President Al Gore used his keynote to the Williamsburg summit on July 25, 1995, to attempt to stop the scandal from bringing down the Samper government. Gore stated that "we can applaud the work of those like President Samper and Defense Minister [Fernando] Botero in Colombia, who are standing up to traffickers often at tremendous personal risk, demonstrating tremendous personal courage."
Perry backed up Gore all the way in his defense of the Samper regime, telling a press conference the next day, "I strongly agree with the vice president's statement. In fact, I helped prepare his text in that regard. And it's based on . . . solid information. . . . The relationship between Colombia and the United States is very good . . . at the Presidential/vice-presidential level . . . and exceedingly good at the Defense Ministry level."
Their timing proved exceedingly bad. The day Perry delivered his paean to Samper and Botero, Samper Pizano's Presidential campaign treasurer, Santiago Medina, was arrested by the Colombian Prosecutor General's office, which then, as now, has worked well with U.S. anti-drug officials. Medina turned state's witness, and named Defense Minister Botero as one of the people who had ordered him to meet with the Cali Cartel to arrange campaign financing. On Aug. 1, 1995, an official of the Prosecutor's office requested an investigation begin into Botero's activities, and on Aug. 2, Botero resigned. He is now serving a 63-month sentence for his crimes, and on Oct. 9, the United States cancelled his visa.
---
‘Democrat’ Sarmiento: an Anglophile racist
“[Advances] in civilization, instincts, and ideas, are not carried out by mixing the races. . . . Anyone who carefully studies the instincts, [and] industrial and intellectual abilities of the masses in Argentina, Chile, Venezuela, and elsewhere, has occasion to experience the effects of that inevitable, but damaging amalgam of races [which are] unsuited for civilization. . . .
“All of the colonizations carried out in the last three centuries by European nations, have crushed the savages populating the lands they came to occupy. The British, French, and Dutch in North America established no community whatsoever with the aborigines, and when, over time, their descendants were called upon to form independent states, they were found to be made up of pure European races, with their traditions of Christian and European civilization intact. . . .”
It is impossible to glean anything other than the crudest racism from these words, written by Domingo Faustino Sarmiento, President of Argentina during 1868-74. Yet, on Oct. 8, speaking at the Defense Ministerial of the Americas in San Carlos de Bariloche in Argentina, U.S. Defense Secretary William Perry expressed the desire that a new Inter-American Center for Defense Studies would be “infused” with Sarmiento’s “democratic” spirit.
This goes well beyond racism, however. Perry’s embrace of Sarmiento confirms that the plot to demilitarize Ibero-America is rooted historically in the British colonial doctrine of free trade and destruction of the institution of the nation-state. Sarmiento was an agent for Italian Giuseppe Mazzini's "revolutionary" Young Italy and Young Europe movement, whose agents were deployed throughout Ibero-America to impose these British-dictated policies. The same forces were behind Britain's attempt to dismember the United States during the 1861-65 U.S. Civil War.
In the late 1830s, Sarmiento belonged to Mazzini's Young Argentina lodge, later known as the Association of May, and spent decades trying to achieve Argentina's economic and political submission to British geopolitical goals. From exile in Chile, he collaborated openly with the Anglo-French alliance which tried for almost three decades to overthrow the 1828-52 government of Juan Manuel de Rosas, because of the latter's resistance to free trade. He conspired with the French-speaking literati who operated against Rosas from their bases in Santiago, Chile, and Montevideo, Uruguay.
At the center of Sarmiento's Mazzinian philosophy was the British-created Black Legend, the lie that Ibero-America's economic backwardness is a product both of dirigistic ("authoritarian") state and economic structures set up by Spain in its colonies, and the alleged inferiority of Catholic culture. Argentina could better prosper and industrialize, he asserted, if its people possessed the same qualities as the "pure" Anglo-Saxon race which had populated Britain's North American colonies.
This is the same drivel put out by Lawrence Harrison, one of the chief ideologues of the plot to demilitarize Ibero-America. In his presentation "The Genesis of Latin American Underdevelopment," published in the National Defense University's 1989 book *Security in the Americas*, Harrison bragged that he belonged to a school of thought which "views Latin America's condition as a consequence of traditional Hispanic culture, profoundly influencing a Latin American culture that is anti-democratic, anti-social, anti-entrepreneurial, and anti-work." Canadians and Americans "attach more importance to work—and work harder—than in Latin America," he raved.
On Spain itself, Sarmiento wrote in his essay *Popular Education* that "the South American states belong to a race which is at the tail-end of civilized nations. In the theater of the modern world today, Spain and its descendants are destitute of all those qualities which life in our era demands . . . due to their radical lack of knowledge of natural or physical sciences, which in other countries of Europe have created powerful industry."
Sarmiento is infamous in Argentina, and Ibero-America, for equating "civilization" with free trade, and "barbarism" and "slavery" with economic protectionism. In his 1845 work *Facundo*, he attacked the Rosas government for refusing to grant Britain its chief demand, the right of free navigability of Argentina's rivers. Only Buenos Aires, dominated by British trade interests, is "civilized," Sarmiento argued. Only Buenos Aires, "is in contact with European nations; she alone exploits the advantages of foreign trade; she alone has power and income. In vain, have the provinces been asked to allow a bit of civilization, of industry and European population to enter; [but] a stupid and colonial policy offered deaf ears to the clamor."
Sarmiento also used the demand for "opening up" the economy, the same one wielded by today's globalists, against Paraguay. The Argentine "democrat" labeled Paraguay's rulers as "tyrants" for daring to apply protectionist economic policies to achieve internal industrialization. Sarmiento was President during the last two years of the 1865-70 Triple Alliance War, in which Argentina, Uruguay, and Brazil, under Britain's direction, allied to slaughter Paraguay's population and impose free trade.
—Cynthia R. Rush
Brazilian hero denounces terrorist rehabilitation
Brazilian World War II hero Gen. Carlos Eugenio Moncao returned his medals to the government in protest over Brasilia’s decision to pay reparations to the families of two of the 1970s’ leading terrorists, Carlos Marighella and Carlos Lamarca (see EIR, Oct. 4, p. 43). In a letter to the Secretary General of the Army Ministry, Moncao denounced the logic of a government that indemnifies terrorists, but lets its citizens starve: “It is the National Treasury which will pay the obscene indemnities; that is, the people. This, even as starving children rummage through dung-heaps hunting for food, bereft of any moral solidarity. This is institutionalized subversion, an efficient stimulus for growing criminality.”
He continued that he was returning two medals he had earned on the battlefields of Europe during World War II, when he fought with the Allied Forces in the Brazilian Expeditionary Force, because the medals were granted “during the war against Nazi-fascism, in which Brazil was engaged—which were granted to me on the basis of values which are today no longer believed—and which, therefore, have lost their significance, in the face of the decision by the commission, a governmental one, to indemnify the two killed in a fight, which they themselves had unleashed on ideological grounds.”
Bougainville premier shot as conflict tears island
Bougainville Premier Theodore Miriung was shot dead on Oct. 12. It is believed he was shot from behind by two men as he was eating dinner with his family. The island, which is part of Papua New Guinea, has been torn for a number of years by war between the pro-independence Bougainville Revolutionary Army (BRA) and the Papua New Guinea government.
The conflict flared up again earlier this year, at the same time that Papua New Guinea Prime Minister Sir Julius Chan very publicly expelled the World Bank from his country, after refusing to submit to its austerity demands. *The Australian* of Oct. 14 reported that Miriung was a former National Court judge, who accepted the premiership in early 1995 after “rejecting the BRA’s armed struggle as the means for achieving independence.” It is not known who shot him: The government Defense Force and the BRA are blaming each other for his death. Chan has said it is the work of “ungodly cowards,” and the commander of the P.N.G. Defense Force, Brig. Gen. Jerry Singirok, has blamed “brutal gangsters.” “The premier and I have worked closely since 1994 for peace on Bougainville,” he said in a written statement. “I am at a loss at the news.”
Baroness Cox heats up the war against Sudan
Following a recent, dramatic escalation of operations by the Sudanese opposition’s military wing in northeast Sudan, launched from inside Eritrean territory, the Khartoum government accused the U.S. CIA and Britain’s Baroness Caroline Cox of signing an agreement with the Sudanese National Democratic Alliance (SNDA). The agreement is designed to supply the forces of John Garang’s Sudanese People’s Liberation Army (SPLA) with new weapons and facilities, and to provide logistical aid to move his operations from south Sudan, where the SPLA is losing its bases, to Eritrea in the east.
The Sudanese government claimed that Baroness Cox has recently donated £1 million ($1.5 million) to the SNDA to promote the political activities of the Sudanese opposition, which is currently facing imminent disintegration. The SPLA is being squeezed out of the south of Sudan, as a result of the recent Iran-mediated agreement signed by Uganda and Sudan. This agreement will eliminate all SPLA bases and cut off its logistical supplies from various operations, including UN and non-governmental organization operations based in Uganda. The northern Sudanese opposition, which is based in London and has its major public support base in Cairo, is being discredited by the Egyptian government and political layers. The SNDA and the SPLA have no other choice but to move to Eritrea to reorganize their ranks with the full support of the British and certain dirty elements in the United States.
British harbor hijackers as ‘political refugees’
Reliable sources revealed to *EIR* in September that the Iraqis who hijacked a Sudan Airways plane on Aug. 26, are being protected by the British government. The hijackers, identified publicly only as “Iraqi opposition figures,” threatened to blow up the plane, which was en route from Khartoum to Amman, Jordan, unless the pilot re-routed it to London. Once in London, the hijackers were persuaded to leave the plane and release all hostages.
The Sudanese government has reportedly demanded they be extradited to stand trial in Sudan, in accordance with international law, but the British authorities are refusing, claiming the Iraqis are political refugees. Her Majesty’s government has said, according to British press accounts, that it would examine the case to see if any crime had been committed, before granting asylum.
A Muslim press source in London said the leader of the hijackers is an Iraqi who had been jailed for some time in Sudan, for involvement in smuggling persons. The man is known in Britain, for having arranged for Iraqi dissidents to be smuggled out of Iraq and into the U.K., where they have received asylum.
New terror outbreak wracks Germany
A new rash of terrorist acts by “Autonomous” groups has struck Germany over recent weeks, typified by the Oct. 7 derailing of locomotives and sabotage of railroad property along the transport routes for nuclear waste destined for Gorleben. The Lower Saxony state security office placed the blame for the attacks on the “Autonomous” groups, saying, “The attacks on the railroad are a part of the battle against the state.” According to the Oct. 13 issue of *Bild am Sonntag*, the state governments of Hamburg and Schleswig-Holstein financed “civil disobedience” training by “Autonomous” thugs for high school students.
At the same time, Germany's ZDF television network aired an investigation exposing the training of the terrorist Anti-Imperialist Cells (AIZ) by the Kurdish Workers Party (PKK). This has rung alarms among German anti-terror experts, as the AIZ has carried out numerous bombing and arson attacks against several politicians, and last year, issued a manifesto threatening to "carry the war into the private homes and workplaces of the power elites."
As EIR has stressed for two decades, German terrorism has never been a "sociological phenomenon," but always coincides with British-authored destabilization efforts. The most recent upsurge in terrorism in Germany, coincides with Sir Jimmy Goldsmith and Lord William Rees-Mogg's revival of the line that Germany is attempting to build the "Fourth Reich" out of the Europe of the Maastricht Treaty.
Iran expands diplomacy into Central Asia
Iranian Foreign Minister Ali Akbar Velayati spent a week visiting the Central Asian nations of Kyrgyzstan, Uzbekistan, Tajikistan, and Kazakhstan, in an effort to coordinate policy to prevent the Afghanistan war from exploding throughout the entire region. According to the account in *Ettela'at* of Oct. 17 and 18, Velayati met Kyrgyz President Askar Akayev on Oct. 15 and presented him with a message from Iranian President Hashemi Rafsanjani. Akayev characterized Iran's role in the region as "positive" and stressed there was no military solution to the Afghan crisis.
On Oct. 16, Velayati met Uzbek President Islam Karimov, who also supported Iran's efforts. The two called for continuing bilateral consultation, criticized all foreign intervention into Afghanistan, and urged all Afghan groups to negotiate a peaceful solution. In Dushanbe, Tajikistan, Velayati met with President Imomali Rakhmanov, who expressed "great concern" over the situation in Afghanistan. Rakhmanov "briefed the Iranian minister on last week's summit talks of Central Asian states and Russia on the Afghan crisis." The two also discussed the ongoing talks in Teheran between the Tajik opposition and government representatives, and reviewed progress on the Tajan-Sarakhs-Mashhad railway, a key link in the new Silk Road connecting China to Europe, the Mideast, and Africa.
In Almaty, Kazakhstan, Velayati met with President Nursultan Nazarbayev, and again agreed that the only solution to the Afghan crisis was through talks. Nazarbayev called on all Afghan groups to settle their disputes and called on foreigners not to intervene.
Pakistan's Bhutto now fears for her life, too
Pakistani Prime Minister Benazir Bhutto told the Islamabad daily The News that she does not "feel safe any more" after the political murder of her brother, Murtaza Bhutto, in Karachi on Sept. 20. She added that she did not believe a police report that he was caught in the crossfire of a shootout between police and his guards.
"I don't know who is next," Bhutto told the daily on Oct. 19. "I don't feel safe any more. My husband and my children are not safe... It seems that the Bhuttos are meant to be killed," she said, referring to the 1979 execution of her father, former Prime Minister Zulfiqar Ali Bhutto, after he had been overthrown two years earlier. "I am a politician, an educated person, I know history, and I feel there is more to this," she said. "I don't know whether they are going to hit me next, my husband next, hit my children next, and God forbid they are going to hit my mother, Murtaza's children next, and other members of my family."
Bhutto said that although her brother had formed a rival political group, she always thought that if something happened to her, people would turn to him to carry forward their father's "mission and dream of a Pakistan which is democratic, federal, egalitarian and progressive. After his brutal death, I feel absolutely isolated," she said.
LORD REES-MOGG attacked the social doctrine of Pope Leo XIII as being "the basis of the economic structures of Fascist Italy and Spain," and blamed the main problems afflicting Europe today on his 1891 encyclical, Rerum Novarum. Rees-Mogg, nominally a Catholic, is an unabashed apostle of Adam Smith's hateful doctrines.
SINGAPORE sabotaged a U.S. effort to crack down on seaborne drug and weapons traffic, after the United States had requested that ASEAN countries cooperate on stricter control of the ports. ASEAN was reported to be "decidedly cool" to the idea, and, according to Asia Times, "Singapore in particular was concerned it might lose port traffic if transshipment times were to slow down."
EIGHT VIETNAMESE provinces have been severely hit by the worst flooding of the Mekong River in two decades. Some 2 million people are seeking relief, and up to 350,000 people have been or will have to be evacuated from the inundated delta area, according to the Red Cross/Red Crescent societies.
NIGERIA AND RUSSIA signed an agreement on Oct. 10 on cooperation in science, culture, and education during 1996-98. Dr. Walter Ofonagoro, the minister of information and culture, thanked the Russian Federation for its firm resistance to the imposition of sanctions on Nigeria. Earlier, the Russian ambassador said 50 Nigerians have been offered scholarships to study in Russian universities.
YASSER ARAFAT attacked the Israeli government's latest proposal for Hebron, from which Israel was supposed to withdraw, as "apartheid," which indicates "Israel's aggressive intentions in Hebron and shows complete and abhorrent racism." Arafat was speaking during his visit in Cairo on Oct. 16. The Israeli proposal would have divided the town into Israeli and Palestinian zones, and given Israel the right to pursue Arabs anywhere in the town.
The secret financial network behind ‘wizard’ George Soros
by William Engdahl
The dossier that follows is based upon a report released on Oct. 1 by EIR’s bureau in Wiesbaden, Germany, titled “A Profile of Mega-Speculator George Soros.” Research was contributed by Mark Burdman, Elisabeth Hellenbroich, Paolo Raimondi, and Scott Thompson.
Time magazine has characterized financier George Soros as a “modern day Robin Hood,” who robs from the rich to give to the poor countries of eastern Europe and Russia. It claimed that Soros makes huge financial gains by speculating against western central banks, in order to use his profits to help the emerging post-communist economies of eastern Europe and the former Soviet Union create what he calls an “Open Society.” The Time statement is entirely accurate in the first part, and entirely inaccurate in the second. He robs from rich western countries, and uses his profits to rob even more savagely from the East, under the cloak of “philanthropy.” His goal is to loot wherever and however he can. Soros has been called the master manipulator of “hit-and-run capitalism.”
As we shall see, what Soros means by “open,” is a society that allows him and his financial predator friends to loot the resources and precious assets of former Warsaw Pact economies. By bringing people like Jeffrey Sachs or Sweden’s Anders Åslund and their economic shock therapy into these economies, Soros lays the groundwork for buying up the assets of whole regions of the world at dirt-cheap prices.
**The man who broke the Bank of England?**
An examination of Soros’s secretive financial network is vital to understand the true dimension of the “Soros problem” in eastern Europe and other nations.
Following the crisis of the European Exchange Rate Mechanism of September 1992, when the Bank of England was forced to abandon efforts to stabilize the pound sterling, a little-known financial figure emerged from the shadows, to boast that he had personally made over $1 billion in speculation against the British pound. The speculator was the Hungarian-born George Soros, who spent the war in Hungary under false papers working for the pro-Nazi government, identifying and expropriating the property of wealthy fellow Jews. Soros left Hungary after the war, and established American citizenship after some years in London. Today, Soros is based in New York, but that tells little, if anything, of who and what he is.
Following his impressive claims to possession of a “Midas touch,” Soros has let his name be publicly used in a blatant attempt to influence world financial markets—an out-of-character act for most financial investors, who prefer to take advantage of situations not yet discovered by rivals, and keep them secret. Soros the financier is as much a political animal, as a financial speculator.
Soros proclaimed in March 1993, with great publicity, that the price of gold was about to rise sharply; he said that he had just gotten “inside information” that China was about to buy huge sums of gold for its booming economy. Soros was able to trigger a rush into buying gold, which caused prices to rise more than 20% over four months, to the highest level since 1991. Typically for Soros, once the fools rushed in to push prices higher, Soros and his friend Sir James Goldsmith secretly began selling their gold at a huge profit.
Then, in early June 1993, Soros proclaimed his intent to force a sell-off in German government bonds in favor of the French, in an open letter to London Times Financial Editor Anatole Kaletsky, in which Soros declared, “Down with the D-Mark!” Soros has at various times attacked the currencies of Thailand, Malaysia, Indonesia, and Mexico, coming into newly opened financial markets which have little experience with foreign investors, let alone ones with large funds like Soros. Soros begins buying stocks or bonds in the local market, leading others to naively suppose that he knows something they do not. As with gold, when the smaller investors begin to follow Soros, driving prices of stocks or whatever higher, Soros begins to sell to the eager new buyers, cashing in his 40% or 100% profits, then exiting the market, and often, the entire country, to seek another target for his speculation. This technique gave rise to the term “hit-and-run.” What Soros always leaves behind, is a collapsed local market and financial ruin of national investors.
**The secret of the Quantum Fund NV**
Soros is the visible side of a vast and nasty secret network of private financial interests, controlled by the leading aristocratic and royal families of Europe, centered in the British House of Windsor. This network, called by its members the Club of the Isles, was built upon the wreckage of the British Empire after World War II.
Rather than use the powers of the state to achieve their geopolitical goals, a secret cross-linked holding of private financial interests, tied to the old aristocratic oligarchy of western Europe, was developed. It was in many ways modelled on the 17th-century British and Dutch East India Companies. The heart of this Club of the Isles is the financial center of the old British Empire, the City of London. Soros is one of what in medieval days were called *Hofjuden*, the “Court Jews,” who were deployed by the aristocratic families.
The most important of such “Jews who are not Jews,” are the Rothschilds, who launched Soros’s career. They are members of the Club of the Isles and retainers of the British royal family. This has been true since Amschel Rothschild sold the British Hessian troops to fight against George Washington during the American Revolution.
Soros is American only in his passport. He is a global financial operator, who happens to be in New York, simply because “that’s where the money is,” as the bank robber Willy Sutton once quipped, when asked why he always robbed banks. Soros speculates in world financial markets through his offshore company, Quantum Fund NV, a private investment fund, or “hedge fund.” His hedge fund reportedly manages some $11-14 billion of funds on behalf of its clients, or investors—one of the most prominent of whom is, according to Soros, Britain’s Queen Elizabeth II, the wealthiest person in Europe.
The Quantum Fund is registered in the tax haven of the Netherlands Antilles, in the Caribbean. This is to avoid paying taxes, as well as to hide the true nature of his investors and what he does with their money.
In order to avoid U.S. government supervision of his financial activities, something normal U.S.-based investment funds must by law agree to in order to operate, Soros moved his legal headquarters to the Caribbean tax haven of Curaçao. The Netherlands Antilles has repeatedly been cited by the Task Force on Money Laundering of the Organization for Economic Cooperation and Development (OECD) as one of the world’s most important centers for laundering illegal proceeds of the Latin American cocaine and other drug traffic. It is a possession of the Netherlands.
Soros has taken care that none of the 99 individual investors who participate in his various funds is an American national. By U.S. securities law, a hedge fund is limited to no more than 99 highly wealthy individuals, so-called “sophisticated investors.” By structuring his investment company as an offshore hedge fund, Soros avoids public scrutiny.
Soros himself is not even on the board of Quantum Fund. Instead, for legal reasons, he serves the Quantum Fund as official “investment adviser,” through another company, Soros Fund Management, of New York City. If any demand were to be made of Soros to reveal the details of Quantum Fund’s operations, he is able to claim he is “merely its investment adviser.” Any competent police investigator looking at the complex legal structure of Soros’s businesses would conclude that there is prima facie evidence of either vast money laundering of illicit funds, or massive illegal tax evasion. Both may be true.
To make it impossible for U.S. tax authorities or other officials to look into the financial dealings of his web of businesses, the board of directors of Quantum Fund N.V. also includes no American citizens. His directors are Swiss, Italian, and British financiers.
George Soros is part of a tightly knit financial mafia—“mafia,” in the sense of a closed masonic-like fraternity of families pursuing common aims. Anyone who dares to criticize Soros or any of his associates, is immediately hit with the charge of being “anti-Semitic”—a criticism which often silences or intimidates genuine critics of Soros’s unscrupulous operations. The Anti-Defamation League of B’nai B’rith considers it a top priority to “protect” Soros from the charges of “anti-Semites” in Hungary and elsewhere in Central Europe, according to ADL National Director Abraham Foxman. The ADL’s record of service to the British oligarchy has been amply documented by EIR (e.g., *The Ugly Truth About the Anti-Defamation League* [Washington, D.C.: Executive Intelligence Review, 1992]).
According to knowledgeable U.S. and European investigators, Soros’s circle includes indicted metals and commodity speculator and fugitive Marc Rich of Zug, Switzerland and Tel Aviv; secretive Israeli arms and commodity dealer Shaul Eisenberg, and “Dirty Rafi” Eytan, both linked to the financial side of the Israeli Mossad; and, the family of Jacob Lord Rothschild.
Understandably, Soros and the Rothschild interests prefer to keep their connection hidden far from public view, so as to obscure the well-connected friends Soros enjoys in the City of London, the British Foreign Office, Israel, and the U.S. financial establishment. The myth, therefore, has been created, that Soros is a lone financial investment “genius”, who, through his sheer personal brilliance in detecting shifts in markets, has become one of the world’s most successful speculators. According to those who know him and who have done business with him, Soros never makes a major investment move without sensitive insider information.
On the board of directors of Soros’s Quantum Fund N.V. is Richard Katz, a Rothschild man who is also on the board of the London N.M. Rothschild and Sons merchant bank, and the head of Rothschild Italia S.p.A. of Milan. Another Rothschild family link to Soros’s Quantum Fund is Quantum board member Nils O. Taube, the partner of the London investment group St. James Place Capital, whose major partner is Lord Rothschild. London *Times* columnist Lord William Rees-Mogg is also on the board of Rothschild’s St. James Place Capital.
A frequent business partner of Soros in various speculative deals, including in the 1993 gold manipulation, although not on the Quantum Fund directly, is the Anglo-French speculator Sir James Goldsmith, a cousin of the Rothschild family.
From the very first days when Soros created his own investment fund in 1969, he owed his success to his relation to the Rothschild family banking network. Soros worked in New York in the 1960s for a small private bank close to the Rothschilds, Arnhold & S. Bleichroeder, Inc., a banking family which represented Rothschild interests in Germany during Bismarck’s time. To this day, A. & S. Bleichroeder, Inc. remains the Principal Custodian, along with Citibank, of funds of Soros’s Quantum Fund. George C. Karlweis, of Edmond de Rothschild’s Switzerland-based Banque Privée SA in Lugano, as well as of the scandal-tainted Rothschild Bank AG of Zurich, gave Soros financial backing. Karlweis provided some of the vital initial capital and investors for Soros’s Quantum Fund.
**Union Banque Privée and the ‘Swiss connection’**
Another member of the board of Soros’s Quantum Fund is the head of one of the most controversial Swiss private banks, Edgar de Picciotto, who has been called “one of the cleverest bankers in Geneva”—and is one of the most scandal-tainted. De Picciotto, who was born in Lebanon to an old Portuguese Jewish trading family, is head of the Geneva private bank CBI-TDB Union Bancaire Privée, a major player in the gold and offshore hedge funds business. Hedge funds have been identified by international police agencies as the fastest-growing outlet for illegal money laundering today.
De Picciotto is a longtime friend and business associate of banker Edmond Safra, also born in Lebanon, whose family came from Aleppo, Syria, and who now controls the Republic Bank of New York. Republic Bank has been identified in U.S. investigations into Russian organized crime, as the bank involved in transferring billions of U.S. Federal Reserve notes from New York to organized crime-controlled Moscow banks, on behalf of Russian organized crime figures. Safra is under investigation by U.S. and Swiss authorities for laundering Turkish and Colombian drug money. In 1990, Safra’s Trade Development Bank (TDB) of Geneva was merged with de Picciotto’s CBI to create the CBI-TDB Union Banque Privée. The details of the merger are shrouded in secrecy to this day. As part of the deal, de Picciotto became a board member of American Express Bank (Switzerland) SA of Geneva, and two American Express Bank of New York executives sit on the board of de Picciotto’s Union Banque Privée. Safra had sold his Trade Development Bank to American Express, Inc. in the 1980s. Henry Kissinger sits on the board of American Express, Inc., which has repeatedly been implicated in international money-laundering scandals.
De Picciotto’s start as a Geneva banker came from Nicholas Baring of the London Barings Bank, who tapped de Picciotto to run the bank’s secret Swiss-bank business. Barings has for centuries been private banker to the British royal family, and since the bank’s collapse in March 1995, has been owned by the Dutch ING Bank, which is reported to be a major money-laundering institution.
De Picciotto is also a longtime business partner of Venetian businessman Carlo De Benedetti, who recently was forced to resign as head of Olivetti Corp. Both persons sit on the board of the Société Financière de Genève investment holding company in Geneva. De Benedetti is under investigation in Italy for suspicion of triggering the collapse of Italy’s Banco Ambrosiano in the early 1980s. The head of that bank, Roberto Calvi, was later found hanging from the London Blackfriars’ Bridge, in what police believe was a masonic ritual murder.
De Picciotto and his Union Banque Privée have been implicated in numerous drug and illegal money-laundering operations. In November 1994, U.S. federal agents arrested a senior official of de Picciotto’s Geneva bank, Jean-Jacques Handali, along with two other UBP officials, on charges of leading a multimillion-dollar drug-money-laundering ring. According to the U.S. Attorney’s Office in Miami, Handali and Union Banque Privée were the “Swiss connection” in an international drug-money-laundering ring tied to Colombian and Turkish cocaine and heroin organizations. A close business and political associate of de Picciotto is a mysterious arms dealer, Helmut Raiser, who is linked in business dealings with reputed Russian organized crime kingpin Grigori Luchansky, who controls the Russian and Swiss holding company Nordex Group.
Another director of Soros’s Quantum Fund is Isodoro Albertini, owner of the Milan stock brokerage firm Albertini and Co. Beat Notz of the Geneva Banque Worms is another private banker on the board of Soros’s Quantum Fund, as is Alberto Foglia, who is chief of the Lugano, Switzerland Banca del Ceresio. Lugano, just across the Swiss border from Milan, is notorious as the financial secret bank haven for Italian organized crime families, including the heroin mafia behind the 1980s “Pizza Connection” case. The Banca del Ceresio has been one of the secret Swiss banks identified in the recent Italian political corruption scandals as the repository of bribe funds of several Italian politicians now in prison.
**The sponsorship of the Rothschilds**
Soros’s relation to the Rothschild finance circle represents no ordinary or casual banking connection. It goes a long way to explain the extraordinary success of a mere private speculator, and Soros’s uncanny ability to “gamble right” so many times in such high-risk markets. Soros has access to the “insider track” in some of the most important government and private channels in the world.
Since World War II, the Rothschild family, at the heart of the financial apparatus of the Club of the Isles, has gone to great lengths to create a public myth about its own insignificance. The family has spent significant sums cultivating a public image as a family of wealthy, but quiet, “gentlemen,” some of whom prefer to cultivate fine French wines, some of whom are devoted to charity.
Since British Foreign Secretary Arthur Balfour wrote his famous November 1917 letter to Lord Rothschild, expressing official British government backing for establishment of a Palestinian national home for the Jewish people, the Rothschilds have been intimately involved in the creation of Israel. But behind their public facade of a family donating money for projects such as planting trees in the deserts of Israel, N.M. Rothschild of London is at the center of various intelligence operations, and more than once has been linked to the more unsavory elements of international organized crime. The family prefers to keep such links at arm's length, and away from its London headquarters, via its lesser-known outposts such as their Zurich Rothschild Bank AG and Rothschild Italia of Milan, the bank of Soros partner Richard Katz.
N.M. Rothschilds is considered by City of London sources to be one of the most influential parts of the British intelligence establishment, tied to the Thatcher "free market" wing of the Tory Party. Rothschild and Sons made huge sums managing for Thatcher the privatization of billions of dollars of British state industry holdings during the 1980s, and today, for John Major's government. Rothschilds is also at the very heart of the world gold trade, being the bank at which twice daily the London Gold Fix is struck by a group of the five most influential gold trade banks. Gold constitutes a major part of the economy of drug dealings globally.
N.M. Rothschild and Sons is also implicated in some of the filthiest drugs-for-weapons secret intelligence operations. Because it is connected to the highest levels of the British intelligence establishment, Rothschilds managed to evade any prominent mention of its complicity in one of the more sordid black covert intelligence networks, that of the Bank of Credit and Commerce International (BCCI). Rothschilds was at the center of the international web of money-laundering banks used during the 1970s and 1980s by Britain's MI-6 and the network of Col. Oliver North and George Bush, to finance such projects as the Nicaraguan Contras.
On June 8, 1993, the chairman of the U.S. House of Representatives' Committee on Banking, Rep. Henry Gonzalez (D-Tex.), made a speech charging that the U.S. government, under the previous Bush and Reagan administrations, had systematically refused to prosecute the BCCI, and that the Department of Justice had repeatedly refused to cooperate with Congressional investigations of both the BCCI scandal and what Gonzalez claims is the closely related scandal of the Atlanta, Georgia Banca Nazionale del Lavoro, which was alleged to have secured billions of dollars in loans from the Bush administration to Saddam Hussein, just prior to the Gulf War of 1990-91.
Gonzalez charged that the Bush administration had "a Justice Department that I say, and I repeat, has been the most corrupt, most unbelievably corrupt justice system that I have seen in the 32 years I have been in the Congress."
The BCCI violated countless laws, including laundering drug money, financing illegal arms traffic, and falsifying bank records. In July 1991, New York District Attorney Robert Morgenthau announced a grand jury indictment against BCCI, charging it with having committed "the largest bank fraud in world financial history. BCCI operated as a corrupt criminal organization throughout its entire 19-year history."
The BCCI had links directly into the Bush White House. Saudi Sheikh Kamal Adham, a BCCI director and former head of Saudi Arabian intelligence when George Bush was head of the CIA, was one of the BCCI shareholders indicted in the United States. Days after his indictment, former top Bush White House aide Edward Rogers went to Saudi Arabia as a private citizen to sign a contract to represent Sheikh Adham in the United States.
But, what has never been identified in a single major Western press investigation, was that the Rothschild group was at the heart of the vast illegal web of BCCI. The key figure was Dr. Alfred Hartmann, the managing director of the BCCI Swiss subsidiary, Banque de Commerce et de Placement SA; at the same time, he ran the Zurich Rothschild Bank AG, and sat in London as a member of the board of N.M. Rothschild and Sons. Hartmann was also a business partner of Helmut Raiser, friend of de Picciotto, and linked to Nordex.
Hartmann was also chairman of the Swiss affiliate of the Italian BNL bank, which was implicated in Bush administration illegal transfers to Iraq prior to the 1990 Iraqi invasion of Kuwait. The Atlanta branch of BNL, with the knowledge of George Bush when he was vice president, channeled funds to Helmut Raiser's Zug, Switzerland company, Consen, for development of the Condor II missile program by Iraq, Egypt, and Argentina, during the Iran-Iraq War. Hartmann was vice-chairman of another secretive private Geneva bank, the Bank of NY-Inter-Maritime Bank, a bank whose chairman, Bruce Rappaport, was one of the illegal financial conduits for Col. Oliver North's Contra drugs-for-weapons network during the late 1980s. North also used the BCCI as one of his preferred banks to hide his illegal funds.
**Rich, Reichmann, and Soros's Israeli links**
According to reports of former U.S. State Department intelligence officers familiar with the Soros case, Soros's Quantum Fund amassed a war chest of well over $10 billion, with the help of a powerful group of "silent" investors who let Soros deploy the capital to demolish European monetary stability in September 1992.
Among Soros's silent investors, these sources say, are the fugitive metals and oil trader Marc Rich, based in Zug, Switzerland; and Shaul Eisenberg, a decades-long member of Israeli Mossad intelligence, who functions as a major arms merchant throughout Asia and the Near East. Eisenberg was recently banned from doing business in Uzbekistan, where he had been accused by the government of massive fraud and corruption. A third Soros partner is Israel's "Dirty Rafi" Eytan, who served in London previously as Mossad liaison to British intelligence.
Rich was one of the most active western traders in oil, aluminum, and other commodities in the Soviet Union and
Russia between 1989 and 1993. This, not coincidentally, is just the period when Grigori Luchansky’s Nordex Group became a multibillion-dollar company by selling Russian oil, aluminum, and other commodities.
Canadian real estate entrepreneur Paul Reichmann, formerly of Olympia and York notoriety, a Hungarian-born Jew like Soros, is a business partner in Soros’s Quantum Realty, a $525-million real estate investment fund.
The Reichmann tie links Soros as well with Henry Kissinger and former Tory Foreign Minister Lord Carrington (who is also a member of Kissinger Associates, Inc. of New York). Reichmann sits with both Kissinger and Carrington on the board of the influential British-Canadian publishing group, Hollinger, Inc. Hollinger owns a large number of newspapers in Canada and the United States, the London Daily Telegraph, and the largest English-language daily in Israel, the Jerusalem Post. Hollinger has been attacking President Clinton and the Middle East peace process ever since Clinton’s election in November 1992.
**Soros and geopolitics**
Soros is little more than one of several significant vehicles for economic and financial warfare by the Club of the Isles faction. Because his affiliations to these interests have not previously been spotlighted, he serves extremely useful functions for the oligarchy, as in 1992 and 1993, when he launched his attack on the European Exchange Rate Mechanism (ERM).
Although Soros’s speculation played a role in finally taking the British pound out of the ERM currency group entirely, it would be a mistake to view that action as “anti-British.” Soros has long-standing and strong ties to Britain. In 1947, Soros went for the first time to London, where he studied under Karl Popper and Friedrich von Hayek at the London School of Economics.
Soros’s business ties to Sir James Goldsmith and Lord Rothschild place him in the inner circles of the Thatcher wing of the British establishment. By helping the “anti-Europe” Thatcherites pull Britain out of the ERM in September 1992 (and making more than $1 billion in the process at British taxpayer expense), Soros helped the long-term goal of the Thatcherites in weakening continental Europe’s economic stability. Since 1904, it has been British geopolitical strategy to prevent by all means any successful economic linkage between western continental European economies, especially that of Germany, with Russia and the countries of eastern Europe.
Soros’s personal outlook is consonant with that of the Thatcher wing of the Tory Party, those who three years ago launched the “Germany, the Fourth Reich” hate campaign against unified Germany, comparing Chancellor Helmut Kohl with Adolf Hitler. Soros is personally extremely anti-German. In his 1991 autobiography, *Underwriting Democracy*, Soros warned that a reunited Germany would “upset the balance of Europe. . . . It is easy to see how the interwar scenario could be replayed. A united Germany becomes the strongest economic power and develops Eastern Europe as its *Lebensraum* . . . a potent witches’ brew.” Soros’s recent public attacks on the German economy and the deutschmark are fundamentally motivated by this geopolitical view.
Soros is quite close to the circles of George Bush in the U.S. intelligence community and finance. His principal bank custodian, and reputed major lender in the 1992 assault on Europe’s ERM, is Citicorp NA, the nation’s largest bank. Citicorp is more than a lending institution; it is a core part of the American liberal establishment. In 1989, as it became clear that German unification was a real possibility, a senior official at Citicorp, a former adviser to Michael Dukakis’s Presidential campaign, told a European business associate that “German unity will be a disaster for our interests; we must take measures to ensure a sharp D-Mark collapse on the order of 30%, so that she will not have the capability to reconstruct East Germany into the economic engine of a new Europe.”
While Soros was calling on world investors to pull down the deutschmark in 1993, he had been making a strong play in the French media, since late 1992, to portray himself as a “friend of French interests.” Soros is reported to be close to senior figures of the French establishment, the Treasury, and in particular, Bank of France head Jean-Claude Trichet. In effect, Soros is echoing the old Entente Cordiale alliance against Germany, which helped precipitate World War I.
Soros admits that he survived in Nazi Hungary during the war, as a Jew, by adopting what he calls a double personality. “I have lived with a double personality practically all my life,” Soros recently stated. “It started at age fourteen in Hungary, when I assumed a false identity in order to escape persecution as a Jew.” Soros admitted in a radio interview that his father gave him Nazi credentials in Hungary during the war, and he looted wealthy Jewish estates. Further research showed that this operation was probably run by the SS.
Soros did not leave the country until two years after the war. Though he and his friends in the media are quick to attack any policy opponent of Soros, especially in eastern Europe, as being “anti-Semitic,” Soros’s Jewish identity apparently has only utilitarian value for him, rather than providing moral foundations. In short, the young Soros was a cynical, ambitious person, the ideal recruit for the British postwar intelligence network.
**Soros savages eastern Europe**
Soros has established no fewer than 19 “charitable” foundations across eastern Europe and the former Soviet Union. He has sponsored “peace” concerts in former Yugoslavia with such performers as Joan Baez. He is helping send young east Europeans to Oxford University. A model citizen is the image he broadcasts.
The reality is something else. Soros has been personally responsible for introducing shock therapy into the emerging
economies of eastern Europe since 1989. He has deliberately foisted on fragile new governments in the east the most draconian economic madness, policies which have allowed Soros and his financial predator friends, such as Marc Rich and Shaul Eisenberg, to loot the resources of large parts of eastern Europe at dirt-cheap prices. Here are illustrative case histories of Soros's eastern "charity":
**Poland:** In late 1989, Soros organized a secret meeting between the "reform" communist government of Prime Minister Mieczyslaw Rakowski and the leaders of the then-illegal Solidarnosc trade union organization. According to well-informed Polish sources, at that 1989 meeting, Soros unveiled his "plan" for Poland: The communists must let Solidarnosc take over the government, so as to gain the confidence of the population. Then, said Soros, the state must act to bankrupt its own industrial and agricultural enterprises, using astronomical interest rates, withholding state credits, and burdening firms with unpayable debt. Once this was done, Soros promised that he would encourage his wealthy international business friends to come into Poland, as prospective buyers of the privatized state enterprises. A recent example of this privatization plan is the case of the large steel facility Huta Warsawa. According to steel experts, this modern complex would cost $3-4 billion for a western company to build new. Several months ago, the Polish government agreed to assume the debts of Huta Warsawa, and to sell the debt-free enterprise to a Milan company, Lucchini, for $30 million!
Soros recruited his friend, Harvard University economist Jeffrey Sachs, who had previously advised the Bolivian government on economic policy, leading to the takeover of that nation's economy by the cocaine trade. To further his plan in Poland, Soros set up one of his numerous foundations, the Stefan Batory Foundation, the official sponsor of Sachs's work in Poland in 1989-90.
Soros boasts, "I established close personal contact with Walesa's chief adviser, Bronislaw Geremek. I was also received by [President Gen. Wojciech] Jaruzelski, the head of State, to obtain his blessing for my foundation." He worked closely with the *eminence grise* of Polish shock therapy, Witold Trzeciakowski, a shadow adviser to Finance Minister Leszek Balcerowicz. Soros also cultivated relations with Balcerowicz, the man who would first impose Sachs's shock therapy on Poland. Soros says that when Walesa was elected President, "largely because of western pressure, Walesa retained Balcerowicz as minister." Balcerowicz imposed a freeze on wages while industry was to be bankrupted by a cutoff of state credits. Industrial output fell by more than 30% over two years.
Soros admits he knew in advance that his shock therapy would cause huge unemployment, closing of factories, and social unrest. For this reason, he insisted that Solidarnosc be brought into the government, to help deal with the unrest. Through the Batory Foundation, Soros coopted key media opinion makers such as Adam Michnik, and through cooperation with the U.S. Embassy in Warsaw, imposed a media censorship favorable to Soros's shock therapy, and hostile to all critics.
**Russia and the Community of Independent States (CIS):** Soros headed a delegation to Russia, where he had worked together with Raisa Gorbachova since the late 1980s, to establish the Cultural Initiative Foundation. As with his other "charitable foundations," this was a tax-free vehicle for Soros and his influential Western friends to enter the top policymaking levels of the country, and for tiny sums of scarce hard currency, buy up important political and intellectual figures. After a false start under Mikhail Gorbachov in 1988-91, Soros shifted to the new Yeltsin circle. It was Soros who introduced Jeffrey Sachs and shock therapy into Russia, in late 1991. Soros describes his effort: "I started mobilizing a group of economists to take to the Soviet Union (July 1990). Professor Jeffrey Sachs, with whom I had worked in Poland, was ready and eager to participate. He suggested a number of other participants: Romano Prodi from Italy; David Finch, a retired official from the IMF [International Monetary Fund]. I wanted to include Stanley Fischer and Jacob Frenkel, heads of research of the World Bank and IMF, respectively; Larry Summers from Harvard and Michael Bruno of the Central Bank of Israel."
Since Jan. 2, 1992, shock therapy has introduced chaos and hyperinflation into Russia. Irreplaceable groups from advanced scientific research institutes have fled in pursuit of jobs in the West. Yegor Gaidar and the Yeltsin government imposed draconian cuts in state spending to industry and agriculture, even though the entire economy was state-owned. A goal of a zero deficit budget within three months was announced. Credit to industry was ended, and enterprises piled up astronomical debts, as inflation of the ruble went out of control.
The friends of Soros lost no time in capitalizing on this situation. Marc Rich began buying Russian aluminum at absurdly cheap prices, with his hard currency. Rich then dumped the aluminum onto western industrial markets last year, causing a 30% collapse in the price of the metal, as western industry had no way to compete. There was such an outflow of aluminum last year from Russia, that there were shortages of aluminum for Russian fish canneries. At the same time, Rich reportedly moved in to secure export control over the supply of most West Siberian crude oil to western markets. Rich's companies have been under investigation for fraud in Russia, according to a report in the *Wall Street Journal* of May 13, 1993.
Another Soros silent partner who has moved in to exploit the chaos in the former Soviet Union, is Shaul Eisenberg. Eisenberg, reportedly with a letter of introduction from then-European Bank chief Jacques Attali, managed to secure an exclusive concession for textiles and other trade in Uzbekistan. When Uzbek officials confirmed defrauding of the government by Eisenberg, his concessions were summarily abrogated.
**Hungary:** Soros has extensive influence in Hungary. When nationalist opposition parliamentarian Istvan Csurka tried to protest what was being done to ruin the Hungarian economy, under the policies of Soros and friends, Csurka was labelled an “anti-Semite,” and in June 1993, he was forced out of the governing Democratic Forum, as a result of pressure from Soros-linked circles in Hungary and abroad, including Soros’s close friend, U.S. Rep. Tom Lantos.
**Lighting the Balkan Fuse**
In early 1990, in what was then still Yugoslavia, Soros’s intervention with his shock therapy, in cooperation with the IMF, helped light the economic fuse that led to the outbreak of war in June 1991. Soros boasted at that time, “Yugoslavia is a particularly interesting case. Even as national rivalries have brought the country to the verge of a breakup, a radical monetary stabilization program, which was introduced on the same date as in Poland—January 1, 1990—has begun to change the political landscape. The program is very much along the Polish lines, and it had greater initial success. By the middle of the year, people were beginning to think Yugoslav again.”
Soros is friends with former Deputy Secretary of State Lawrence Eagleburger, the former U.S. ambassador to Belgrade and the patron of Serbian Communist leader Slobodan Milosevic. Eagleburger is a past president of Kissinger Associates, on whose board sits Lord Carrington, whose Balkan mediations supported Serbian aggression into Croatia and Bosnia.
Today, Soros has established his Foundation centers in Bosnia, Croatia, Slovenia, and a Soros Yugoslavia Foundation in Belgrade, Serbia. In Croatia, he has tried to use his foundation monies to woo influential journalists or to slander opponents of his shock therapy, by labelling them variously “anti-Semitic” or “neo-Nazi.” The head of Soros’s Open Society Fund-Croatia, Prof. Zarko Puhovski, is a man who has reportedly made a recent dramatic conversion from orthodox Marxism to Soros’s radical free market. Only seven years ago, according to one of his former students, as professor of philosophy at the University of Zagreb, Puhovski attacked students trying to articulate a critique of communism, by insisting, “It is unprincipled to criticize Marxism from a liberal standpoint.” His work for the Soros Foundation in Zagreb has promoted an anti-nationalist “global culture,” hiring a network of anti-Croatian journalists to propagandize, in effect, for the Serbian cause.
These examples can be elaborated for each of the other 19 locations across eastern Europe where George Soros operates. The political agenda of Soros and this group of financial “globalists” will create the conditions for a new outbreak of war, even world war, if it continues to be tolerated.
---
**Soros’s looting of Ibero-America**
by Scott Thompson
Several Ibero-American countries have recently been invaded by George Soros, who begins with a small beachhead, then ends, as in the case of Argentina, as the country’s largest landholder. As Soros’s tentacles spread through the country, cries of alarm go up. Here are some case studies.
**Brazil**
In 1993, Soros put out the word that he was moving into Brazil. Of assistance to this operation, was the fact that the director general of Soros Fund Management is Arminio Fraga, the former head of foreign functions at Brazil’s central bank.
An executive of the Soros group told Brazilian businessmen that Soros and company are currently “twisting Brazil’s arm ‘to put its house in order.’ ” According to Fraga, the Soros group was counting on then-Economics Minister Fernando Henrique Cardoso—today President of Brazil—to do that job. As Fraga told *Gazeta Mercantil* of June 26, 1993: “Brazil is an important market and deserves our attention. . . . At the moment, it is present in all of our analysis. . . . The presence of Fernando Henrique Cardoso in the Economics Ministry is very good. Anyone who has been in the government knows that he is dealing with the sore spots, doing things which people wanted done but have been unable to do. . . . He has an open and organized mind and knows he has to first put in order public finances.”
Two days before, Brazil’s *O Globo* newspaper had cited an unnamed “Soros executive,” stating that Soros’s group was “tired of speculative investments and want to bet on some projects. He specifically mentioned linking the western rivers of Brazil with the Rio de la Plata.” In other words, Soros was betting on the industrial and agricultural heartland of South America.
**Argentina**
Soros’s involvement in Argentina can best be described by the fact that today, with total holdings of 348,000 hectares, he is the country’s most powerful landowner. This was accomplished through his October 1994 purchase of the Cresud land company, owner of 20,000 hectares, for a total of $64 million. Since then, through purchase of a large number of smaller plots of land, whose owners were driven out of business by the Menem regime’s austerity policies, Soros has been able to expand his holdings. Located in Salta, Catamarca,
Corrientes, and parts of Buenos Aires's fertile *pampa humeda*, his total holdings are larger than those owned by Argentina's Bunge and Born grain cartel, the Perez Companc holding company, and business magnate Amalia Fortabat. Soros reportedly has another $30 million on hand reserved solely for land purchases. He visited Argentina in March 1996, to inspect his properties. During his visit, he lavished praise on Finance Minister Domingo Cavallo, threatening that foreign investors would withdraw their funds from the country, were Cavallo to leave office.
Soros began his operations in the country in 1990, and in 1991 purchased part of the IRSA real estate company, which became his vehicle for buying up undervalued properties, remodeling them, and selling them at a large profit. He eventually increased his holding in IRSA to 38% of the company's outstanding shares, with a market value of $47 million. One of his first projects was conversion of the Chrysler Palace in Buenos Aires, a former Army building, into luxury apartments. Baring Securities, a subsidiary of Britain's old "Opium Wars" bank Baring Brothers, arranged for IRSA shares to be sold on foreign stock markets, and placed 13 million shares among its own clients including Merrill Lynch and Arnhold & S. Bleichroeder.
When the Argentine branch of Citibank sold its shares in Citicorp Equity Investments (CEI) in 1992, Soros bought 2% of the shares through his investment funds. Through CEI, Soros moved into the purchase of privatized companies, including Altos Hornos Zapla, the steel complex formerly owned by the Army; the state-run telephone company; two large gas firms; and many more. He bought up 1 million shares in the state-run oil firm, YPF, when it was privatized in mid-1993, and CEI purchased another 3 million shares in the same company.
The Jan. 15, 1996 edition of *Clarín* reported that IRSA had $80 million available to continue purchasing properties in Argentina. Soros has his eye on the Buenos Aires Airport, lands in Retiro, and some part of the Campo de Mayo Army base, all of which are expected to be privatized soon.
Many Argentine businessmen and legislators are alarmed at Soros's activities in the country. As of mid-1993, Soros's Quantum Fund executives were looking to exploit Argentina's oil reserves, as well as invest in gold mines. Also of concern is Soros's relationship with American fugitive and millionaire businessman Marc Rich. Soros is rumored to be the power behind Rich, who is a big investor in Argentine oil and raw materials, and a partner with Swiss-Argentine business magnate Santiago Soldati in several ventures.
**Mexico**
Mexico has become a major target of Soros in partnership with the Canadian Reichmann brothers, who went bankrupt when their Canary Wharf office building project in London collapsed along with the broader real estate bubble. They had formed a joint venture with Soros in early 1993, to purchase prime real estate at depressed prices in North America. In July 1993, Reichmann International and Soros Realty agreed in principle to develop a $500 million Santa Fe real estate scheme in Mexico City. The joint venture began negotiating property developments that could be worth $500 million in the Alameda district, and up to $300 million for the construction of two tower blocks on Paseo de la Reforma. The July 14, 1993 London *Financial Times* said that the joint venture was looking for other investors, and Soros said the sums mentioned "only represent the total value over a long time and would not represent any specific investment laid out." The total Alameda project cost, with the building of homes, offices, and shopping centers in what had once been a garbage dump and strip mine, would be an estimated $5-10 billion. Soros became upset at the media coverage of the project, and reiterated that he would only be putting in a small part of the total cost, so that it was not the sole project of the $600 million Quantum Realty Trust Fund that Soros set up with Paul Reichmann as manager.
Soros put his trust in then-Mexican President Carlos Salinas de Gortari, now a fugitive, as investigations into his family's corruption have expanded. In late June 1993, just 48 hours before the closing of the regular sessions of the Mexican Congress, the House of Deputies approved a Presidential bill, submitted by Salinas. It completely deregulated the real estate market, and reformed the Civil Code, the Procedural Rules Code, and the Federal Consumer Protection Law to open the way for big real estate investors like Soros. The same reforms paved the way for what is known as the New Rental Law, which in one fell swoop stripped away all manner of protections for Mexico's renters. Now, with the slightest pretext, renters can be expelled onto the streets, without protection of the law.
**Peru**
On July 23, 1993, Soros announced that he "considers that the present conditions have improved in recent years, especially in the application of liberal reforms, both economic and institutional." So, he said that the Soros Fund Management would expand its investments in Peru. Soros flew to Peru to make this announcement, adding: "What is important about the investment we are about to make in Peru is that our group characteristically invests in highly profitable activities, which we find emerging in markets like Peru." Soros was said to be prepared to invest in a brewery, and in mining and pension funds. After his $387 million purchase of a 12% share of Newmont gold mine from Sir James Goldsmith, it is notable that Soros was also said to be interested in development of a gold mine in Cajamarca Province, together with the Peruvian Buenaventura company.
According to the Sept. 7, 1993 daily *El Peruano*, fugitive Marc Rich's company, which is an ally of Soros, was one of 24 interested in purchasing the large Centromin Peru mining company, which was about to be privatized. The announcement was made by Alberto Benavides de la Quintana, president of the committee on privatization of state companies. Benavides is also the owner of Buenaventura company, which a few months ago entered into association with Newmont Mining to develop a gold mine in Cajamarca Province. Benavides also sold Marc Rich, in association with the Brazilian company Paraibuna Metals, the zinc deposits located at Isacaycruz in Peru.
Soros, while praising Peruvian President Alberto Fujimori for his free market reform policies, which made possible Soros’s looting of privatized industry, at the same time demanded a cessation of military action against the narco-terrorist “Shining Path.” Thus, on July 22, 1993, George Soros’s brother Paul, who runs an engineering company, travelled to Peru with Pedro Pablo Kuczynski, a banker with First Boston-Crédit Suisse and a Peruvian national, to announce the expansion of Soros’s investment in Peru. However, Kuczynski, who is also a member of the bankers’ drug-legalization front, the Inter-American Dialogue, was also sent to call for eliminating the military role against Shining Path. Human Rights Watch/Americas, which is funded by George Soros, had already campaigned to end Peruvian sovereignty by stopping military action against the narco-terrorists and cutting back the Peruvian military itself.
And, Paul Soros, in a full-page ad in the *New York Times* on Sept. 28, 1993, emphasized that there was “a lot of wealth in Peru,” before issuing an ultimatum: “[Only] when you can be sure that military influence in the government is firmly finished can the value of any investment be secured. . . . In Latin America, whenever the army, as an institution, is part of the country’s power structure, all investments are discounted, because that introduces an element of instability. As an investor, one likes stability. . . . When you can be sure that [military influence in the government] is really firmly finished, the value of any investment goes up 30, 40, even 50%.” The ad was co-signed by Gerard Manolovici, managing director at the time of Soros Fund Management. Together, they threatened that foreign investment would be cut by as much as 50% if the armed forces were not eliminated.
The Oct. 8, 1993 issue of the intelligence weekly *El Informador Público* published a press release by *EIR*, warning that Soros’s entry into Peru under conditions of dismantling the military would lead to a resurgence of Shining Path narco-terrorism. The release was entitled, “Soros and Company Support President Gonzalo for President of Peru.” “President Gonzalo” is the nickname for the chief of the Shining Path.
Soros continued his investments in Peru. In October 1993, he bought a large share in the national telephone company, as well as several textile companies.
On April 19, 1994, *La Mañana* cited *EIR* to attack Soros, under the headline, “Investment Funds, the Big Hole in the U.S. Economy? George Soros Called before the U.S. Congress to Explain the Financial Disasters Resulting from Speculative Maneuvers with These Funds.” It continued: “For many, this personage is one of the untouchables in the U.S. According to the magazine *EIR*, the secret of his power is not only based on the unlimited credits which he receives from the large banks in the world, but in other types of maneuvers . . . which cohere with his objectives, such as financing the humanitarian organization Americas Watch, whose reports . . . have been able to debase Peru’s image.”
By May 1994, Soros began pulling out of Peru. The weekly *El Mundo* leaked on May 7: “Wall Street’s most important investor and speculator is considering the possibility of withdrawing at any moment from the Peruvian stock market, where he has invested more than $60 million.” The reason was that Peru had begun to stabilize, which meant it was no longer as attractive for exploitative investments. And, Soros, who had been linked to promoting the Presidential candidacy of former UN Secretary General Javier Pérez de Cuellar, was worried about the outcome of the election. Pérez de Cuellar was the co-chairman of the Inter-American Dialogue, and had called for cutting back the military to please foreign investors.
By January 1995, after Fujimori won the Presidential election, it was announced that Newmont Mining had sold off 41% of its stock in Peru’s largest mining company, Southern Peru Copper Company. Newmont’s spokesman at First Boston-Crédit Suisse, David Mulford, who had been deputy treasury secretary in the Bush administration, said that Newmont intended eventually to sell off all its shares.
**Coverup begins to crack on Bush cocaine ring**
by Edward Spannaus
In the days leading up to a three-hour Senate hearing on the "CIA" crack-cocaine allegations, the leading establishment news media in the United States launched a frantic effort to discredit the San Jose Mercury News story, which had triggered the current national uproar over the U.S. government's role in drug-trafficking.
The CIA is not the intended beneficiary of the coverup sought by the news media; the agency would have little to fear even if it were the primary target. The purpose of the coverup is to protect George Bush and his "secret government" killer apparatus, which was consolidated in the early 1980s using Executive Order 12333.
Nevertheless—and perhaps despite the intent of its organizers—testimony at the Senate hearing on Oct. 23 took matters well beyond the CIA, pointing to the White House/National Security Council operation which Oliver North has come to symbolize. While this gets the investigation out of the dead end of scapegoating the CIA, it does not yet hit the nail smack on the head by identifying the real command structure which ran the 1980s guns-for-drugs operation.
**Discredit where credit is due**
The Los Angeles Times, which had been generally silent since the publication of the San Jose Mercury News series, initiated a three-part series on Oct. 20 attempting to discredit the Mercury News stories. The first part opens, "The crack epidemic in Los Angeles followed no blueprint or master plan. It was not orchestrated by the Contras or the CIA or any single drug ring. No one trafficker, even the kingpins who sold thousands of kilos and pocketed millions of dollars, ever came close to monopolizing the trade."
The Los Angeles Times's line of argument was very similar to the "analysis" published Oct. 4 by the Washington Post, in which the Post conceded that, yes, there were Contras smuggling drugs, and, yes, the CIA was involved with some of them, but "Freeway" Ricky Ross didn't sell that much crack, and, hey, what's a little drugs in the African-American community anyway?
The Los Angeles Times's third installment was an outright racist diatribe against the black community, quoting quackademic scholarship to prove that African-Americans "are particularly susceptible to conspiracy theories." Washington Post columnist Richard Cohen, who is linked to the Anti-Defamation League of B'nai B'rith (ADL), spewed out a typically racist piece of venom on Oct. 24, attacking Rep. Maxine Waters (D-Calif.), because she "virtually accepted the Mercury News story as gospel and demanded investigations. . . . When it comes to sheer gullibility—or is it mere political opportunism—Waters is in a class of her own."
On Oct. 21, the New York Times also weighed in, with an article headlined "With Little Evidence to Back It, Tale of CIA-Drug Link Has Life of Its Own." The article is replete with references to "scant evidence," and makes strenuous efforts to ridicule the black community for being so susceptible to such wild conspiracy theories.
Break the coverup!
At the Senate Intelligence Committee hearing on Oct. 23, the lead-off witness was Jack Blum, former special counsel to the Sen. John Kerry (D-Mass.) subcommittee of the Senate Foreign Relations Committee, who, in 1986-88, conducted the most thorough investigation of Contra drug-running to date. While Intelligence Committee Chairman Arlen Specter (R-Pa.) tried to narrow the focus of his hearing to the crack epidemic in Los Angeles and the CIA itself, Blum emphasized that the responsibility for drugs coming into the country should be put on the "policymakers," and that the CIA was just an "implementing agency." Blum's testimony, as well as
that of Justice Department Inspector General Michael Bromwich, focused heavily on the White House/NSC operation around Ollie North (who was emphatically not an official of the CIA).
Blum’s remarks point in the right direction, but that is not sufficient. If any investigation of Contra drug-running is to break the fog of obfuscation which has been firmly in place since the mid-1980s, it must expose the official structure under which these and other covert operations were run in the 1981-92 period. That structure was created under the implied authority of Executive Order 12333 and certain National Security Decision Directives (NSDDs) which accompanied it, and it was built around the office and the person of George Herbert Walker Bush.
To ignore this, in favor of hitting the “easier” or “more acceptable” target, the CIA, is to be complicit in perpetuating the coverup which has allowed so much death and destruction to go unpunished to date.
**How the 12333 ‘secret government’ worked**
*EIR* has compiled the most comprehensive picture of how the “secret government” apparatus of the 1980s was created, and has shown that it functioned under the direct control of Vice President George Bush, operating through the NSC—and not the CIA (see *EIR Special Reports*, “Would a President Bob Dole Prosecute Drug Super-Kingpin George Bush,” September 1996, and “George Bush and the 12333 Serial Murder Ring,” October 1996). Following is a summary adapted from the two *EIR Special Reports*.
“Crisis management” is the key to understanding how George Bush became the covert operations “tsar” of the Reagan administration. Step by step, it worked like this:
1. In the early months of the Reagan-Bush administration in 1981, there was a brawl between George Bush and Secretary of State Al “I’m-in-charge-here” Haig over the control of crisis management. On March 22, 1981, a leak to the *Washington Post*, headlined “Bush to Head Crisis Management,” said that Vice President Bush would be placed in charge of a new crisis management structure, amounting to “an unprecedented role for a vice president.” Haig protested, but Bush won out. The article noted that Bush “was chosen to chair meetings in the Situation Room in times of crisis,” although it also noted that the Presidential directive formalizing this had not yet been written. This was a reference to the Special Situation Group (SSG), the status of which was only formalized in December of that year.
2. On Dec. 4, 1981, President Reagan signed Executive Order 12333, which designated the NSC as “the highest Executive branch entity” for review, guidance, and direction of all foreign intelligence, counterintelligence, and “special activities” (i.e., covert operations). This effectively put the NSC in charge of the CIA, military intelligence, special operations, etc. A little-noticed provision of E.O. 12333 gave the CIA the exclusive conduct of “special activities,” “unless the President determines that another agency is more likely to achieve a particular objective.” This officially opened the door for assigning covert operations to the NSC staff.
Most important, Section 2.7 of E.O. 12333 permitted U.S. intelligence agencies to enter into secret contracts for services with “private companies or institutions.” This was the Magna Carta of Bush’s “secret government.”
3. On Jan. 12, 1982, National Security Decision Directive Number 2 (NSDD-2) was issued, which formalized the NSC structure. It confirmed the existence of a series of Senior Interagency Groups (SIGs) for foreign policy, defense policy, and intelligence—thus reducing the power of the secretary of state and other department heads.
4. A month earlier, on Dec. 14, 1981, NSDD-3 had already been signed. Entitled “Crisis Management,” it affirmed the existence of the Special Situation Group (SSG) to be “chaired by the vice president,” and assigned to the SSG responsibility for crisis management. “Crisis Management” was defined as encompassing “a national security matter for which Presidential decisions and implementing instructions are required more rapidly than routine interdepartmental NSC staff support provides.” This formalized George Bush’s control over intelligence and covert operations.
5. On May 14, 1982, the first phase of the Bush takeover was completed, with the issuance of an extraordinary memorandum entitled “Crisis Pre-Planning,” by the national security adviser. Citing the authority of NSDD-3, this memorandum established an interagency, standing Crisis Pre-Planning Group (CPPG) subordinate to the SSG. The CPPG was created as a standing body, which would meet regularly, and develop plans and policies for the SSG. The SSG-CPPG, under the direct control of the vice president, was given control over any area in which a potential crisis could emerge, and was to develop preemptive policy options for dealing with it. “Crisis management” was no longer just for crises.
This SSG-CPPG structure, according to a chart later thrown at Secretary of State George Shultz in 1983, operated on the same level as the National Security Council, and was above the secretary of state. In reality, it superseded the NSC.
Shultz vigorously opposed the creation of a “Public Diplomacy” unit in the State Department which would report to the NSC instead of him. He asked Reagan for a structure in which he would be the President’s “sole delegate in carrying out your policies.” What he got back was a memorandum in the name of the President which stated: “Success in Central America will require the cooperative effort of several Departments and agencies. No single agency can do it alone nor should it.” Attached was the chart entitled “NSDD-2 Structure for Central America” putting Bush’s SSG-CPPG on the level of the NSC, in between the President and the secretary of state.
This is how, during the Reagan administration, intelligence and foreign policy “crisis management” was consolidated under the operational control of the Vice President of the United States, George Bush.
‘Bill Weld blocked our investigation’
Below are excerpts from the Oct. 23 hearings of the Senate Intelligence Committee, chaired by Arlen Specter (R-Pa.), during the testimony of Jack Blum, formerly the special counsel to the 1986-88 Senate Foreign Relations subcommittee on terrorism and narcotics (the “Kerry Committee”). Blum’s references are to William F. Weld, who was at the time the U.S. Assistant Attorney General in charge of the Criminal Division. Weld, now governor of Massachusetts, is the Republican candidate for U.S. Senate against incumbent Democrat John Kerry, who chaired the Kerry Committee.
Mr. Blum: Now, you might ask, why did the hearings we ran in ’88 and the report we released in 1989 not get more attention? And the answer is, we were subject to a systematic campaign to discredit everything we did. Every night after there was a public hearing, Justice Department people, administration people would get on the phone and call the press and say the witnesses were all liars, they were talking to us to get a better deal, that we were on a political vendetta, that none of it was to be believed, and please don’t cover it.
Senator Specter: But let me ask you, on a question relevant here, did you ever see any of that interference by U.S. intelligence, CIA or otherwise, of any prosecutions against cocaine in Los Angeles?
Mr. Blum: We did not focus on Los Angeles and Los Angeles prosecutions. I can tell you there were cases in Miami, and there were other cases in other parts of the country.
Senator Specter: Now did those cases permit cocaine dealers to continue to operate?
Mr. Blum: One had the sense they did, but—when we got into this area, we confronted an absolute stone wall. Bill Weld, who was then the head of the [Justice Department] Criminal Division, put a very serious block on any effort we made to get information. There were stalls. There were refusals to talk to us, refusals to turn over data. An Assistant U.S. attorney who gave us some information was reprimanded and disciplined, even though it had nothing to do with the case. . . . We had a series of situations where Justice Department people were told that if they told us anything about what was going on, they would be subject to very severe discipline.
Sen. Bob Kerrey (D-Neb.): Mr. Blum, when you talked to me, you said there was a systematic effort to discredit the work of the subcommittee. . . . How would you define “systematic”?
Mr. Blum: An organized effort from the top—
Senator Kerrey: Who was in charge of it?
Mr. Blum: As best I could tell, it was coming from the top of the Criminal Division.
Senator Kerrey: Who was at the top of the Criminal Division?
Mr. Blum: Bill Weld.
Senator Kerrey: And when you say, the effort was made, what would they do? Would they call—
Mr. Blum: They would tell U.S. Attorneys, systematically: “You can’t talk to them. Don’t give them paper. Don’t cooperate. Don’t let them have access to people who you have in your control.” And we had a very tough time finding things out.
DOJ's Bromwich: Some oppose drug probe
by S. K. Rose and E. Spannaus
In marked contrast to the attitude of the CIA Inspector General, who has already concluded that there is no merit to the San Jose Mercury News allegations—even before he completes his investigation—the Justice Department Inspector General, Michael Bromwich, told the Senate Intelligence Committee on Oct. 23 that he is treating the allegations and his investigation very seriously.
"I've reviewed the articles in the San Jose Mercury [sic], and it seemed to me that there were enough troubling questions about the points of contact between individuals employed by different components of the Justice Department, and the allegations that drew together the CIA and the Contras in the introduction of crack cocaine into South-Central Los Angeles, that I thought it was very important to launch an investigation," Bromwich testified. "I did so on my own without being directed by anyone, either inside the [Justice] Department or outside the department."
Meetings in Los Angeles
Bromwich said that he has already made two trips to Los Angeles, over the opposition of some inside the Justice Department. On the day that he decided to open the investigation, Bromwich met with Rep. Maxine Waters (D-Calif.), who has spearheaded the drive for investigations of the San Jose, California daily’s allegations. "I subsequently met with her again," Bromwich said. "She facilitated an introduction to me to Gary Webb, the author of the San Jose Mercury articles, and I have talked with Mr. Webb on subsequent occasions." He then met with many of the other members of the Congressional Black Caucus who have called for an investigation. Then he went back to Los Angeles again:
"I just returned last night, Senators, from what for me was an extraordinary trip to South-Central Los Angeles. I was invited to do so by Congresswoman Millender-McDonald, to meet with some community leaders so that they would have a chance to meet first-hand with the person who is going to be conducting one of the investigations that touches on these issues. I won't say that that trip was roundly endorsed by others in the department."
At that point, hearing chairman Arlen Specter pressed Bromwich on this point; Bromwich said that, "I prefer not to talk about that in public session," but, he added, "It was opposed by some."
Although Bromwich declined to identify who opposed his efforts, it is a safe bet that the list would include Jack Keeney and Mark Richard, the two longtime "career professionals" at the top of the Criminal Division. Richard was singled out in the 1988-89 "Kerry Report" as having obstructed the Senate investigation of Contra drug-trafficking.
The DOJ opposition team
At the opening of his testimony, Bromwich told the committee that, "for better or for worse, I'm not a stranger either to issues of narcotics distribution nor to issues relating to Iran-Contra." He was a federal narcotics prosecutor for four years in New York City, focusing on high-level narcotics trafficking, and then he went to work for the Iran-Contra independent counsel, Lawrence Walsh. There, he first obtained guilty pleas from Carl "Spitz" Channell and Richard Miller, for illegal fund-raising on behalf of Oliver North and the Contras. He was part of the team which prosecuted Ollie North. And he headed a team investigating illegalities in the Contra resupply effort, which led to the indictment of Joseph Fernandez, the CIA station chief in Costa Rica. "That case, as you know, was aborted subsequently in the latter stages of 1989 because the Department of Justice . . . refused to release the documents and declassify the documents that our office needed in order to pursue that matter."
"Were you dissatisfied with that?" Specter asked. "Very much so," was Bromwich's reply. Specter asked if Bromwich now has the power "to get into the inside of that." Bromwich said he does.
Bromwich has his work cut out for him. He will no doubt face formidable opposition within his own department, as he goes back into the issues of Justice Department obstruction of the Iran-Contra and drug-trafficking investigations. The Fernandez case is indicative.
According to the Final Report of Independent Counsel Lawrence Walsh, the classified information at issue in the Fernandez case involved the location of two CIA stations in Central America, which were already publicly known. Walsh said that the actions by Bush's Attorney General, Dick Thornburgh, "were an unprecedented and unwarranted intrusion into a prosecution of a case conducted by an Independent Counsel."
While Bush and Thornburgh may be gone, a number of those who handled the Fernandez matter are still in the Justice Department; this includes Jack Keeney and Mark Richard, who supervised the DOJ Internal Security Section which was designated as the section with which Walsh was to deal directly. Two other officials who were directly involved with the Fernandez case are also still in the Criminal Division; these are John Martin of the Internal Security Section, and James S. Reynolds, now of the Terrorist and Violent Crimes Section.
Battle lines drawn against Social Security privatization
by Marianna Wertz and Richard Freeman
Just weeks before the Nov. 5 election, both sides in the debate over the privatization of Social Security called press conferences in Washington, D.C., to set forth their agendas for the incoming administration and Congress. On Oct. 15, the National Association of Manufacturers, representing the banking elite, Wall Street, and such leading mouthpieces for privatization as the Cato Institute and the Heritage Foundation, launched what they called a “national campaign calling for enactment of Social Security reform and a schedule for its implementation before the end of the 106th Congress, as a key focus of its agenda to promote higher economic growth.” (Calling their plan “reform” is like calling murder “assisted suicide,” as we shall see.)
Two days later, a press conference jointly sponsored by the AFL-CIO and the Campaign for America’s Future blasted the proposed “privatization,” with CAF spokesman, former U.S. Sen. Howard Metzenbaum (D-Ohio), calling it “an insidious effort of the investment banking community to get their hands on the funds. To say that privatizing will be helpful brings to mind Orange County,” Metzenbaum said. Orange County, California went bankrupt in December 1994 as a result of misinvestment of public funds in derivatives swindles. Metzenbaum urged Americans to “speak out now before Clinton takes a position.”
Under the various plans to privatize Social Security—which would be better called “piratization”—already drafted into legislation and awaiting the return of the Congress in January, the annual Social Security tax, or its equivalent, would be diverted into individual worker “private retirement accounts” (PRAs), to be managed by Wall Street sharks. *EIR* estimates that the diverted flows could total as much as $9-10 trillion over the next 15 years (see *EIR*, Oct. 11, 1996, “The Plan to Privatize Social Security: A $10 Trillion Bankers’ Rip-Off”). The financial sharks see the funds as one of the last sources of revenue to shore up the collapsing worldwide financial bubble. They also stand to make as much as $200 billion from fees and use of the money.
But when the markets blow, and tens of millions of elderly need the money, it won’t be there.
Dole and Clinton have both refused to be pinned down on the issue prior to the election, for obvious, pragmatic reasons: The vote of elderly Americans is key to determining who will be elected. During the second debate, on Oct. 16 in San Diego, both candidates avoided a direct answer by calling for a bipartisan commission to study the problem and come up with a solution. “Take it out of politics!” both candidates screamed.
The American Association of Retired Persons (AARP), the lead lobbying organization for the nation’s senior citizens, tried, unsuccessfully, to pin the two down in a survey printed in the October 1996 *AARP Bulletin*. “Asked whether fundamental changes would be needed to stabilize Social Security in the next century, Clinton once again was cautious. . . . He also was guarded about ideas emerging from the Social Security Advisory Council that the system be partially privatized—specifically, that some contributions paid into the trust funds be invested in the stock market.
“If ‘the market is a better deal than government securities,’ he said, that’s ‘worth a careful study and maybe some sort of experimentation. I don’t feel that I personally have the level of expertise to say . . . that is a good idea.’ ”
Dole told AARP, “We have to shore [Social Security] up again.” But, he “was cautious,” said the *Bulletin*, “about ideas the Advisory Council on Social Security was floating to partially privatize the system. . . . ‘That may have some appeal,’ Dole said, ‘but there are some policy questions you have to address. Do you want the U.S. government owning corporations, or part of corporations?’ ”
LaRouche Democrat leading the fight
There is one candidate, however, who is boldly opposing the privatizers’ plan, naming the names, and organizing the population to defeat them. She is María Elena Milton, the Democratic candidate in Arizona’s 4th Congressional District, who is challenging incumbent Republican and leading Gingrichite John Shadegg. Milton, a political associate of *EIR* founder Lyndon LaRouche, has made Shadegg’s secret support for Social Security privatization a lead issue in her campaign, forcing Shadegg out on the issue in one of the nation’s most heavily senior-dominated districts.
In an hour-long debate which was broadcast live on cable TV in Arizona on Oct. 16, Milton exposed Shadegg’s support for the “murderous” privatization policy. In reply, all Shadegg could do was lie, claiming, “The system is bankrupt, I’m just trying to save it.” In fact, as Milton has revealed, Shadegg is working closely with the Public Pension Reform Caucus in the House, a group of 40 to 50 congressmen committed to privatizing Social Security, which is led by his fellow Arizona Republican, Jim Kolbe.
Milton’s 24-page campaign brochure includes a feature titled “The Plot to Privatize, or ‘Piratize,’ Social Security.” Milton explains that the plot to privatize will result in “handing over trillions of dollars from the Social Security Trust Fund, into the hands of financial sharks and speculators.”
Milton then takes on the lies of NAM and similar privatizers: “There have been many scare stories about how Social Security is going to run out of money. The reasons given for this scare are that the U.S. population is aging too fast, and that seniors are too greedy. The real reason that there is any danger of a Social Security shortfall, is that the productive workforce of the U.S.A. is too small, and is being denied the ability to be productive.”
‘No to Wall Street’s greed’
At the Oct. 17 press conference of the Campaign for America’s Future, one of the speakers was Richard Trumka, the AFL-CIO secretary-treasurer and former president of the United Mine Workers. Trumka said he was speaking on behalf of 13.1 million AFL-CIO members, an equal number of union retirees, and millions more Americans. “We say no to privatizing and to Wall Street’s greed,” Trumka said.
In a debate on Oct. 18 on Washington, D.C.’s Fox Morning News broadcast with NAM President Jerry Jasinowski, Trumka continued the fight. He first made clear that NAM’s doomsday propaganda about Social Security going bankrupt is just that. “Actually, the 2030 year is predicated on an unrealistically low-growth expectation. Those figures are on a 1.5% growth rate. If the economy grows more than that—which it has for the last four years, which we anticipate it will, and which Jerry wants it to grow at twice that up to the year 2000, there is no problem.”
Trumka then concluded, “Privatization is a $60 billion a year get-rich-quick scheme by the mutual funds industry. Look, Social Security was designed to give everybody a social safety net, a minimum level of retirement income. Once you take that out, that guarantee out, and put it into the private market, markets go up, and markets go down. They will go down, and you could end up at retirement age with no pension. The worker assumes all the risks.”
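Trumka's growth-rate point lends itself to a quick compound-growth check. The figures below are purely illustrative (a 34-year horizon to 2030, and the 1.5% vs. roughly 3% rates mentioned in the debate); they are not drawn from the trustees' actual actuarial model:

```python
# Illustrative only: compare the size of the economy (and hence the payroll-tax
# base) in 2030 under an assumed 1.5% annual growth projection vs. the roughly
# 3% growth of the preceding four years. Assumed figures, not actuarial data.
YEARS = 2030 - 1996

low_growth = 1.015 ** YEARS   # the pessimistic projection Trumka criticizes
high_growth = 1.03 ** YEARS   # roughly the recent historical rate

print(f"2030 economy at 1.5% growth: {low_growth:.2f}x the 1996 level")
print(f"2030 economy at 3.0% growth: {high_growth:.2f}x the 1996 level")
print(f"Additional tax base from the higher rate: {high_growth / low_growth - 1:.0%}")
```

On these assumed numbers, sustained 3% growth would leave the tax base roughly two-thirds larger by 2030 than the low-growth projection allows, which is the substance of Trumka's objection to the "bankruptcy" scare.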
‘Piratizers’ show their concern
The National Association of Manufacturers released the following “Resolution on Social Security Reform” at its Oct. 15 press conference, which had been approved on Sept. 21 by its board of directors. While lying about the degree of crisis confronting the Social Security system, the resolution completely ignores the only real solution to the crisis—put forward by economist Lyndon LaRouche—which is to put the economy through an orderly bankruptcy procedure, drying out the speculative bubble which NAM and others are seeking to prop up, and to revive real economic growth through a directed credit system. Such a program would create the level of productive employment that would easily guarantee the tax base to make the Social Security fund solvent well into the 21st century.
Whereas, Social Security is of enormous concern to Americans, because of both the reliance by millions of individuals on the system as a retirement program, and the corresponding magnitude of liabilities assumed by the federal government and, ultimately, by the taxpayers;
Whereas, an apparent consensus among economists indicates that the Social Security system, as currently structured, will eventually prove unable to satisfy liabilities for benefits promised to a significant portion of the American workforce, a situation grossly unfair to individuals who have relied in good faith upon the promises of the federal government;
Whereas, any remedy for the financial problems of the Social Security system through greater taxes would increase the costs of labor and capital, raise unemployment, lower productivity, reduce the ability of American businesses to compete in domestic and foreign markets, and ultimately, undermine the vigor of the U.S. economy; and,
Whereas, the only apparent solution to the financial problems facing the Social Security system—not involving a significant increase in taxes and/or a significant reduction in benefits—is reform of the program in favor of a system that requires individual savings rather than collective entitlement, and leads to increased economic growth and prosperity for employees and employers alike;
Therefore, we are resolved,
• first, to educate members of the NAM, and the public more generally, on the urgency of problems facing the Social Security system;
• second, to urge that the Congress respond by enacting legislation that retains a safety net for the truly needy while transforming a portion of the program to a system for retirement savings by individuals, funded with a choice of investment opportunities, separated from the assets and liabilities of the federal government, with transition provisions to protect employees from hardship in adjusting to the reformed system and to prevent increased financial exposure to employers; and
• third, to urge that such a reform program and a schedule for its implementation be enacted before the end of the 106th Congress.
Backlash against GOP kingmaker reported
Arthur J. Finkelstein, whom the Republican Party has paid $500,000 as a “senior political consultant” to a number of its candidates in the November elections, may have overplayed his hand in the Minnesota Senate race.
According to a poll in the Oct. 22 Washington Post, many of the voters who had seen Finkelstein’s campaign ads said they were now more inclined to vote for incumbent Paul Wellstone (D), and against Republican Rudy Boschwitz, whose campaign is being guided by Finkelstein. The Boschwitz campaign has depicted Wellstone as soft on welfare recipients, murderers, drug-pushers, and terrorists. One ad showed a bearded hippie inducting Wellstone into the “1967 Liberal Hall of Fame.”
Finkelstein is an avowed homosexual who played an important role in the Bush-league National Conservative Political Action Committee—infamous for its Iran-Contra intrigues and the White House “call-boy” scandals. Finkelstein’s secret role as Benjamin Netanyahu’s campaign manager was revealed after the Israeli elections. There, Finkelstein’s television ads had linked together terrorist-bombed houses, Arafat, and Netanyahu’s opponent Shimon Peres, as “a dangerous combination for Israel.”
After 20 years, America prepares return to Mars
NASA Administrator Dan Goldin, opening a series of press briefings Oct. 16, issued a stirring declaration: “After a hiatus of 20 years, America returns to Mars.” Goldin was referring to two American spacecraft scheduled for Mars launchings over the next two months. Responding to a question about manned missions to Mars, Goldin replied: “I think we could be on Mars in the second decade of the next century. If the nation has the will to do it, America could do anything it wants to do.”
The Mars Global Surveyor, which will place a spacecraft in Mars orbit for geological and climate mapping for a full Martian year (687 Earth days), is scheduled for launch on Nov. 6. On Dec. 2, the Mars Pathfinder lander heads for the planet, carrying a micro-rover named Sojourner, which will be the first rover on Mars. Between these two launches, on Nov. 16, the Russian Mars ‘96 mission is scheduled to launch, carrying an orbiter, two landers, and—for the first time—two penetrators to probe under the surface of the planet.
Goldin also announced that the photographs and data transmitted from the Mars missions will be made available on the Internet. “Every day on the Internet, we’re going to post the weather report on Mars,” Goldin said. Internet users will also be able to see what the Pathfinder lander and rover “see.” The rover, which Goldin describes as a “22-pound geologist,” has cameras which will provide close-up views of the rocks it has been deployed to examine.
Pathfinder will be landing in the Ares Vallis region of Mars, which is an ancient flood plain. Dr. Michael Carr of the U.S. Geological Survey explained that a channel in the targeted area is up to one kilometer deep, and was created by a catastrophic water event. It is estimated that the equivalent of the water in America’s Great Lakes was discharged into this plain in the space of two weeks.
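The scale of that flood estimate can be sanity-checked with back-of-the-envelope arithmetic. The Great Lakes volume used below (about 22,700 cubic kilometers) and the Amazon comparison are assumed reference figures, not values from the NASA briefing:

```python
# Back-of-the-envelope check on the Ares Vallis flood estimate.
# Assumed figures: Great Lakes volume ~22,700 km^3; two-week discharge;
# Amazon mean discharge ~210,000 m^3/s for comparison.
GREAT_LAKES_M3 = 22_700 * 1.0e9       # km^3 converted to m^3
TWO_WEEKS_S = 14 * 24 * 3600          # seconds in two weeks
AMAZON_M3_S = 2.1e5                   # mean Amazon River discharge

rate = GREAT_LAKES_M3 / TWO_WEEKS_S   # implied mean discharge, m^3/s

print(f"Implied discharge: {rate:.2e} m^3/s")
print(f"Roughly {rate / AMAZON_M3_S:.0f} times the Amazon's mean flow")
```

On these assumptions the implied flow is nearly 90 times the mean discharge of the Amazon, which gives some sense of the catastrophe that carved the kilometer-deep channel.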
St. Louis aldermen cite Bush’s role in dope ring
On Oct. 18, the St. Louis Board of Aldermen became the first elected body in the country to pass a resolution naming George Bush in connection with recent exposés of secret government drug-trafficking. The resolution, passed unanimously by the 29-member board, declares in part:
“Whereas, the San Jose Mercury News has in a three-part series, alleged the role of the U.S. Intelligence Agencies in financing covert operations through the sale of drugs, specifically crack cocaine into neighborhoods throughout Los Angeles to finance Iran-Contra operations; and
“Whereas, these alleged activities were under the aegis of then Vice-President George Bush in his capacity as National Security Director, and that the Kerry Committee elicited testimony to the Congress on Feb. 11, 1987, that the Contras moved drugs . . . ‘Not by the bag, but by the ton, by the cargo plane load.’ . . .
“Now therefore be it resolved, that the St. Louis Board of Aldermen endorses the call by Congressman Waters, Senators Feinstein and Boxer, the Los Angeles City Council, and others for the investigation of these allegations as they will serve the best interests of the citizens of the United States.”
FBI official charged in Ruby Ridge coverup
Federal prosecutors filed charges Oct. 22 against an FBI official for obstruction of justice in the murder trial of Randy Weaver, following the 1992 shoot-out instigated by federal agents at Ruby Ridge, Idaho. Justice Department sources said E. Michael Kahoe, who was chief of the FBI’s violent crimes section during the incident, has agreed to plead guilty to the charges, and to cooperate in the long-running probe.
U.S. Attorney Michael Stiles of Philadelphia, who was specially appointed to conduct the investigation, said Kahoe participated in the concealment and destruction of a document from FBI headquarters, sought by federal prosecutors in Idaho preparing for Randy Weaver’s trial. According to the new charges, Kahoe was ordered by his superiors to prepare an “after-action” critique of the FBI’s conduct in the shoot-out. Kahoe allegedly received a copy of a letter from the prosecutors in Idaho requesting any documents about the incident.
When the Justice Department eventually ordered that all of the FBI documents be given to the prosecutors, Kahoe allegedly withheld the after-action report from the documents to be delivered, then destroyed all his copies, and ordered a subordinate to make it appear as if “it never existed.”
The FBI’s destruction and concealment of documents in the Ruby Ridge case, was cited in affidavits submitted earlier this year by the plaintiffs in the long-pending *LaRouche v. Webster* civil rights case against the FBI, filed in federal court in New York in 1975. In the LaRouche case, the FBI had argued that the absence of any records showing illegal activity, could be taken as conclusive proof that no such activity had ever taken place!
On Sept. 2, 1996, Federal Judge Mary Johnson Lowe issued a landmark ruling in New York’s Southern District Court, vacating key sections of the FBI affidavit in *LaRouche v. Webster*, which was used as the basis for the Justice Department’s 1985 motion to dismiss the LaRouche suit. One part of the FBI affidavit struck by Judge Lowe’s ruling, was its statement that the FBI’s Headquarters file comprised “a complete record” of its investigation of the LaRouche organization. Judge Lowe cited the plaintiffs’ allegations that the FBI had destroyed records—a practice again confirmed in the Ruby Ridge case.
**Postal workers campaign against privatization**
The American Postal Workers Union is waging a campaign against the plan by Congressional Gingrichites to privatize the postal system. Postal workers are distributing a flyer nationwide with the title “Postal Customer: What Postal Privatization Would Mean for You.” It is a devastating exposé of the privatizers’ scheme to loot the nation on a vast scale.
The leaflet details the huge stakes involved: If the U.S. Postal Service were privatized, four of its six major product lines would qualify as Fortune 500 companies. The Postal Service owns 6,865 buildings with a total of 168 million square feet, and leases 27,437 buildings with 89 million square feet. If the Postal Service were a private company, it would be the 12th largest business in the United States and the 33rd largest business in the world. Its 1995 revenues of $54 billion exceeded those of Coca-Cola, Xerox, and Eastman Kodak combined.
The postal workers charge that privatization would increase the cost of mailing, cut rural delivery, and reduce security of the mail. Another major attraction to the privateers, not mentioned in the flyer, is the fact that the Postal Service has the largest and most sophisticated cash transaction system in the world.
**Suits filed against immigrant aid cuts**
The first suits filed to stop the cut-off of aid to illegal immigrants, under the provisions of the new federal welfare reform and immigration bills, were filed Oct. 11 in New York and Oct. 15 in California—the states with the largest immigrant populations.
Mayor Rudolph Giuliani (R) sued on behalf of New York City in Manhattan federal court, contending that provisions allowing city employees to turn in illegal immigrants, who seek services such as police protection, hospital care, and public education, were unconstitutional. Provisions in both of the new federal bills overturned a 1985 New York City executive order, which forbade city employees from reporting illegal immigrants, with the exception of criminal suspects.
In California, the American Civil Liberties Union joined with a coalition of immigrant rights groups, seeking a court injunction against a federal Welfare Reform Act provision, which allows California to withhold prenatal care from illegal immigrant women. They brought their suit to U.S. District Judge Mariana Pfaelzer—the same judge who issued the injunction against the anti-immigrant Proposition 187 in 1994, and the 1995 summary judgment blocking any implementation of the ballot initiative.
According to the Oct. 16 *Washington Post*, California Gov. Pete Wilson’s press secretary, Sean Walsh, denounced the suit as “madness, lawyers run wild even before any services have been eliminated.” Seventy thousand illegal immigrants currently receive prenatal care in California, according to Susan Drake, executive director of the National Immigration Law Center.
**BUCKINGHAM PALACE** “would not normally expect to figure in an American election campaign, but the Queen popped up unexpectedly in Oklahoma the other day, courtesy of George Bush,” the London *Sunday Times* reported Oct. 21. At a GOP event, “Bush embarked on an anecdote about his visit to the palace, to be invested as an honorary Knight Grand Commander of the Order of the Bath. ‘I was made a real live knight,’ he told his tittering audience.”
**WILLIAM WELD** has been caught in another lie, according to the Oct. 22 *Boston Herald*. In his Massachusetts U.S. Senate campaign, Weld has intoned that “the only cure for pedophilia is prison.” Seven child pornographers he prosecuted on felony charges during his stint as U.S. Attorney, however, were released on probation without serving any time.
**SERIAL KILLER** Jack Kevorkian delivered the corpse of Mrs. Nancy DeSoto Oct. 17 to a hospital in Royal Oak, Michigan. At the same hour, his lawyer was telling a press conference that Mrs. DeSoto had not come to Michigan to commit suicide, but only to talk to Kevorkian. Attorney Geoffrey Fieger claimed he did not know where DeSoto was. “Dr. Death’s” latest victim, afflicted with multiple sclerosis, was 55 years old.
**DRUG LEGALIZATION** advocates still have not scored with the electorate, and face another defeat this year, according to a recent survey. The Community Anti-Drug Coalitions of America released a poll Oct. 23 of over 1,000 candidates running for office in November, showing that 86% of them have no confidence in legalization as an effective means of dealing with the current drug crisis.
**A NEW STUDY** released by the American Medical Association, claims that 45% of uninsured adults report having difficulty in obtaining medical care, and that 70% of them were unable to get treatment when their symptoms were either “very serious” or “somewhat serious.”
A crisis of the institutions
Nearly six years after the breakdown of the Soviet system, a crisis of confidence in the institutions of government is sweeping western Europe as well.
When the Berlin Wall fell in November 1989, a mood of optimism swept through the world. Not only was an end to the Cold War in sight, but there was the opportunity for a surge in economic growth in the West and the East alike, which would be fuelled by massive infrastructure investment in the former East bloc. Unfortunately, such was not to be the case.
Lyndon LaRouche’s proposal for the integration of what he called the European “Productive Triangle,” as a motor for global economic development, centered in the industrial areas of France and Germany, was blocked. Instead, the British Crown prevailed. “Shock therapy” epitomized the looting policies that were imposed.
Six years later, the situation in Russia is catastrophic, and it is little better in western Europe and the world in general. We are now faced in the immediate months ahead, with an economic, social, and political crisis which is reminiscent of that which brought down the Wall in Germany and drove Mikhail Gorbachov from power.
The mass strike process which has been unleashed in France and Belgium is a lawful response to this situation. Unless the policies put forward by LaRouche prevail, the crisis will deepen. Only a showdown with the British Empire and the oligarchy which it represents, can save the day.
In Belgium, the demonstration by more than 300,000 people against the corruption of Belgian judicial institutions must be seen, on the one hand, in the context of the growing dissatisfaction of the population against the increasingly brutal austerity being imposed on Europeans through the mediation of the International Monetary Fund. Equally significant, is the fact that the pedophile ring exposed in Belgium, is directly connected to George Bush and his friends.
When the Belgian government tried to suppress the revelations, a mass protest erupted. An equally explosive environment is building in the United States, as a result of the revelations of the role of the George Bush-deployed networks in pushing crack-cocaine in U.S. inner-cities.
The deepening of the economic crisis throughout the world, is reflected in this social process. This is precisely what occurred in the last days of the Soviet Union. It is in periods such as this that people lose confidence in the institutions of government, and broad, uncontrolled popular revolts are unleashed.
Only by an uncompromising exposure of corruption within and outside governments, can trust in the ruling institutions be restored. It is not a matter only of prosecuting the relatively small fry; it is imperative that people on the level of George Bush and Margaret Thatcher be indicted and convicted for their crimes. Which means also that Lyndon LaRouche and his associates, who were unjustly persecuted by these high-level criminal networks, must be exonerated.
It is the global network which Bush and Thatcher have created at the behest of the British Crown, which has successfully imposed financial dictatorships, and administrative tyranny, on nations throughout the West—making “politics as usual” virtually useless. That individuals who are willing to subject whole populations to genocide, as in Africa, should also inflict sexual abuse on young children, is not surprising, but symptomatic of the moral corruption rife in our culture.
The kind of mass strike movement which we now see emerging in Europe, and the initial signs of the same ferment in the United States, are a first step in transforming the situation. But, without the leadership of the LaRouche movement, such popular outrage can also fuel fascism. Which way the situation will go, depends on the rapidity with which the LaRouche movement is able to grow; but one thing is clear: The status quo cannot persist.
In 1989, on the occasion of the fall of the Berlin Wall, LaRouche wrote in EIR: “Thus we must choose: Do we want an oligarchical society, or do we want a republican society?” The choice is still before us.
SEE LAROUCHE ON CABLE TV
All programs are *The LaRouche Connection* unless otherwise noted.
**ALASKA**
- ANCHORAGE—ACTV Ch. 44
Wednesdays—9 p.m.
**ARIZONA**
- PHOENIX—Dimension Ch. 22
Sundays—1 p.m.
**CALIFORNIA**
- E. SAN FERNANDO—Ch. 25
Saturdays—8:30 p.m.
- LANC./PALMDALE—Ch. 3
Sundays—6 p.m.
- MARIN COUNTY—Ch. 31
Tuesdays—5 p.m.
- MODESTO—Access Ch. 5
Mondays—2:30 p.m.
- ORANGE COUNTY—Ch. 3
Fridays—7:30 p.m.
- PASADENA—Ch. 56
Tuesdays—2 & 6 p.m.
- SACRAMENTO—Ch. 18
2nd & 4th Weds.—10 p.m.
- SAN DIEGO—Cox Cable North County—Ch. 15
Wednesdays—6:00 p.m.
Greater San Diego—Ch. 24
Wednesdays—4:30 p.m.
- SAN FRANCISCO—Ch. 53
2nd & 4th Tues.—5 p.m.
- SANTA ANA—Ch. 53
Tuesdays—6:30 p.m.
- STA. BARBARA/TUNGA King Video/Cable—Ch. 20
Wednesdays—7:30 p.m.
- W. SAN FERNANDO—Ch. 27
Wednesdays—6:30 p.m.
**COLORADO**
- DENVER—DCTV Ch. 57
Saturdays—1 p.m.
**CONNECTICUT**
- BETHEL/DANBURY/RIDGEFIELD Comm. Ch. 23, Wed.—10 p.m.
- BRANFORD—TCI Ch. 21
Weds., 10 a.m. & 7:30 p.m.
- NEWTOWN/NEW MILFORD Charter—Ch. 21
Thursdays—9:30 p.m.
**DISTRICT OF COLUMBIA**
- WASHINGTON—DCTV Ch. 25
Sundays—12 Noon
**IDAHO**
- MOSCOW—Ch. 37
(Check Readerboard)
**ILLINOIS**
- CHICAGO—CAN Ch. 21
Schiller Hotline 21; Fri.—5 p.m.
**INDIANA**
- SOUTH BEND—Ch. 31
Thursdays—10 p.m.
**KANSAS**
- SALINA—CATV Ch. 6
(call station for times)
**KENTUCKY**
- LOUISVILLE—TKR Ch. 18
Wednesdays—5 p.m.
**LOUISIANA**
- NEW ORLEANS—Cox Ch. 8
Mondays—11 p.m.
**MARYLAND**
- BALTIMORE—BCAC Ch. 42
Mondays—9 p.m.
- BALTIMORE COUNTY—Ch. 2
2nd Tues., monthly—9 p.m.
- MONTGOMERY—MCTV Ch. 49
Weds.—1 p.m.; Fri.—8:30 p.m.
- P.G. COUNTY—Ch. 15
Thursdays—10:30 a.m.
- W. HOWARD COUNTY—Ch. 6
Daily—10:30 a.m. & 4:30 p.m.
**MASSACHUSETTS**
- BOSTON—BNN Ch. 3
Saturdays—12 Noon
**MICHIGAN**
- TRENTON—TCI Ch. 44
Wednesdays—2:30 p.m.
**MINNESOTA**
- EDEN PRAIRIE—Ch. 33
Weds.—5:30 pm; Sun.—3:30 pm
- MINNEAPOLIS—MTN Ch. 32
Fridays—7:30 p.m.
- MINNEAPOLIS (NW Suburbs) Northwest Comm. TV—Ch. 33
Mon.—7 p.m.; Tues.—7 a.m. & 2 pm
- ST. LOUIS PARK—Ch. 33
Friday through Monday
3 p.m. & 11 p.m.
- ST. PAUL—Ch. 33: Mon.—8 p.m.
- ST. PAUL (NE Suburbs) Suburban Community—Ch. 15
Wednesdays—12 Midnight
**MISSOURI**
- ST. LOUIS—Ch. 22
Wednesdays—5 p.m.
**NEW JERSEY**
- STATEWIDE—CTN
Sundays—5:30 a.m.
**NEW YORK**
- ALBANY—Ch. 18; Tues.—5 p.m.
- BRONX—BronxNet Ch. 70
Saturdays—6 p.m.
- BROOKHAVEN (E. Suffolk)
TCI—Ch. 1 or Ch. 99
Wednesdays—5 p.m.
- BROOKLYN Cablevision (BCAT)—Ch. 67
Time Warner B/Q—Ch. 34
(call station for times)
- BUFFALO—BCAM Ch. 18
Tuesdays—11 p.m.
- HUDSON VALLEY—Ch. 6
2nd Sun, monthly—1:30 p.m.
- ILION—TIW Ch. 10
Fridays—8:30 p.m.
- ITHACA—Pegasys—Ch. 57
Mon. & Thurs.—8:05 p.m.
Saturdays—4:35 p.m.
- JOHNSTOWN—Empire Ch. 7
Tuesdays—4 p.m.
- MANHATTAN—MNN Ch. 34
Sundays—12 & 24—9 a.m.
Sun., Dec. 8 & 22—9 a.m.
- MONTVALE/MAHWAH—Ch. 14
Wednesdays—5:30 p.m.
- NASSAU—Ch. 25
Last Fri., monthly—4:00 p.m.
- OSSINING—Ch. 10—Continental Southern Westchester Ch. 19
Rockland County Ch. 26
1st & 3rd Sundays—4 p.m.
- POUGHKEEPSIE—Ch. 28
1st & 2nd Fridays—4 p.m.
- QUEENS—QPTV Ch. 57
Wednesdays—10 p.m.
- RIVERHEAD Peconic Bay TV—Ch. 27
Thursdays—12 Midnight
1st & 2nd Fridays—4 p.m.
- ROCHESTER—GRC Ch. 15
Fri.—11 p.m.; Sun.—11 a.m.
- ROCKLAND—TCI Ch. 27
Wednesdays—5:30 p.m.
- SCHENECTADY—PA Ch. 11
Mondays—10 p.m.
- STATEN ISL.—CTV Ch. 24
Wed.—11 p.m.; Thu.—4:30 a.m.
Saturdays—8 a.m.
- SUFFOLK—Ch. 25
2nd & 4th Mondays—10 p.m.
- SYRACUSE—Adelphia Ch. 3
Fridays—4 p.m.
- SYRACUSE (Suburbs) Time-Warner Cable—Ch. 12
Saturdays—9 p.m.
- TCI—Harbor Ch. 3
Thursdays—10:30 p.m.
- WEBSTER—GRC Ch. 12
Wednesdays—9:30 p.m.
- YONKERS—Ch. 37
Fridays—4 p.m.
- YORKTOWN—Ch. 34
Thursdays—3 p.m.
**OREGON**
- PORTLAND—Access
Tuesdays—6 p.m. (Ch. 27)
Thursdays—3 p.m. (Ch. 33)
**TEXAS**
- AUSTIN—ACTV Ch. 10 & 16
(call station for times)
- DALLAS—Access Ch. 23-B
Sun.—8 p.m.; Thurs.—9 p.m.
- EL PASO—Paragon Ch. 15
Thursdays—10:30 p.m.
- HOUSTON—Access Houston
Mondays—5 p.m.
**VIRGINIA**
- ARLINGTON—ACT Ch. 33
Sun.—1 p.m.; Mon.—6:30 pm
Tuesdays—12 Midnight
Wednesdays—12 Noon
- CHESTERFIELD COUNTY—Comcast—Ch. 38
Tuesdays—4 p.m.
- FAIRFAX—FCAC Ch. 10
Tuesdays—12 Noon
Thurs.—7 p.m.; Sat.—10 a.m.
- LOUDOUN COUNTY—Ch. 59
Thurs.—10:30 a.m.; 12:30 p.m.;
2:30 p.m.; 4:30 p.m.; 7:30 p.m.;
10:30 p.m.
- MANASSAS—Jones Ch. 64
Saturdays—6 p.m.
- RICHMOND—Conti Ch. 38
(call station for times)
- ROANOKE—Cox Ch. 9
Wednesdays—2 p.m.
- WOODBRIDGE—Ch. 3
Saturdays—6 p.m.
- YORKTOWN—Conti Ch. 38
Mondays—4 p.m.
**WASHINGTON**
- KING COUNTY—TCI Ch. 29
(call station for times)
- SNOHOMISH COUNTY
Local Cable Ch. 29
(call station for times)
- SPOKANE—Cox Ch. 25
Tuesdays—6 p.m.
- TRI-CITIES—TCI Ch. 13
Mon.—12 Noon; Weds.—6 pm
Thursdays—8:30 pm
**WISCONSIN**
- WAUSAU—Ch. 10
(call station for times)
If you would like to get *The LaRouche Connection* on your local cable TV station, please call Charles Notley at 703-777-9451, Ext. 322. For more information, visit our Internet HomePage at http://www.axsamer.org/~larouche
---
Executive Intelligence Review
U.S., Canada and Mexico only
| 1 year | $396 |
| 6 months | $225 |
| 3 months | $125 |
Foreign Rates
| 1 year | $490 |
| 6 months | $265 |
| 3 months | $145 |
I would like to subscribe to *Executive Intelligence Review* for
- [ ] 1 year
- [ ] 6 months
- [ ] 3 months
I enclose $________ check or money order
Please charge my
- [ ] MasterCard
- [ ] Visa
Card No. ________________________ Exp date ____________
Signature ___________________________________________
Name _______________________________________________
Company _____________________________________________
Phone ( ) _________________________________________
Address _____________________________________________
City __________________ State _____ Zip _________
Make checks payable to EIR News Service Inc.,
P.O. Box 17390, Washington, D.C. 20041-0390.
Exclusive, up-to-the-minute stories from our correspondents around the world
EIR CONFIDENTIAL ALERT
EIR Alert brings you concise news and background items on crucial economic and strategic developments, twice a week, by first-class mail—or by fax (at no extra charge).
Annual subscription (United States) $3,500
Make checks payable to:
EIR News Service
P.O. Box 17390 Washington, D.C. 20041-0390
Table of Contents for the issue of October 22
- Historic phase change in Russia
- Chubais forms a ‘regency’
- The IMF’s role in Russian crisis
- The Belgian tinderbox is a harbinger
- Goldsmith active on several fronts
- Friedrich List enters economic debate
- Bush’s link to Contra cocaine targeted
The Nature of Power Saturation in Traveling Wave Tubes
By C. C. CUTLER
(Manuscript received February 2, 1956)
The non-linear operating characteristics of a traveling wave tube have been studied using a tube scaled to low frequency and large size. Measurements of electron beam velocity and current as a function of RF phase and amplitude show the mechanism of power saturation.
The most important conclusions are:
I. There is an optimum set of parameters (\(QC = 0.2\) and \(\gamma r_0 = 0.5\)) giving the greatest efficiency.
II. There is a best value of the gain parameter "C," which leads to a maximum efficiency of about 38 per cent.
III. A picture of the actual spent beam modulation is now available which shows the factors contributing to traveling wave tube power saturation.
**INTRODUCTION**
The highest possible efficiency of the traveling wave tube has been estimated from many different points of view. In his first paper on the subject\(^1\) J. R. Pierce showed that according to small signal theory, when the dc beam current reaches 100 per cent modulation an efficiency of
\[
\eta = \frac{C}{2}
\]
(1)
is indicated,* and thus the actual efficiency might be limited to something like this value. Upon later consideration\(^2\) he concluded that the ac convection current could be twice the dc current and that one might expect an efficiency of
\[
\eta = 2C
\]
(2)
* Symbols are consistent with Reference 2 and are listed at the end of this paper.

He also considered the effects of space charge, and concluded on the same basis that under high space charge and elevated voltage conditions, efficiencies might be as high as
\[ \eta = 8C \]
(3)
J. C. Slater\(^3\) on the other hand considered the motion of electrons in a traveling wave and concluded that the maximum possible reduction in beam velocity would also indicate a limiting efficiency of \(2C\). Taking a more realistic account of the electron velocity, Pierce\(^2\) showed that these considerations lead to a value of
\[ \eta = -4y_1C \]
(4)
which, since \(y_1\) ranges between \(-\frac{1}{2}\) and \(-2\), leads to the same range of values as the other predictions.
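As a quick check on the range just stated, the four small-signal estimates above (η = C/2, 2C, 8C, and −4y₁C) can be evaluated side by side. The following sketch is an editorial illustration, not part of the original paper; the values of C and y₁ used are arbitrary representative numbers.

```python
def efficiency_estimates(C, y1):
    """Small-signal limiting-efficiency predictions quoted above, for
    gain parameter C and wave root y1 (Pierce's notation)."""
    return {
        "current_modulation": C / 2.0,       # eta = C/2: 100% current modulation
        "ac_current":         2.0 * C,       # eta = 2C: ac current twice the dc
        "space_charge":       8.0 * C,       # eta = 8C: high space charge, raised voltage
        "velocity":          -4.0 * y1 * C,  # eta = -4*y1*C: realistic electron velocity
    }

# y1 ranges between -1/2 and -2, so the velocity-based estimate spans
# 2C to 8C -- the same range as the other predictions.
lo = efficiency_estimates(0.1, -0.5)["velocity"]   # 2C with C = 0.1
hi = efficiency_estimates(0.1, -2.0)["velocity"]   # 8C with C = 0.1
print(lo, hi)
```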
None of these papers purport to give a physical picture of the overloading phenomenon, but only specify clear limitations to the linear theory. L. Brillouin\(^4\) on the other hand found a stable solution for the flow of electrons bunched in the troughs of a traveling wave. This he supposed to represent the limiting high level condition of traveling wave tube operation. His results give an efficiency of
\[ \eta = 2bC \]
(5)
In the first numerical computations of the actual electron motion in a traveling wave tube in the nonlinear region of operation, Nordsieck\(^5\) predicted efficiencies ranging between 2.5 and 7 times \(C\) and showed that there would be a considerable reduction in efficiency for large diameter beams, due to the non-uniformity of circuit field across the beam diameter. He also gave some indication of the electron dynamics involved. Improving on this line of attack, Poulter\(^6\) calculated some cases including the effect of space charge and large values of \(C\).
Tien, Walker and Wolontis\(^7\) carried computations still further for small values of \(C\) by including the effect of small beam radii upon the space charge terms, and showed that space charge and finite (small) beam radii result in much smaller efficiencies than were previously predicted. J. E. Rowe\(^8\) got similar results and gave more information on the effects of finite values of \(C\). Computations for large values of \(C\) by Tien\(^9\) showed that a serious departure from the small \(C\) conditions takes place above values of \(C = 0.1\) if space charge is small (i.e., below \(QC = 0.1\)) and above \(C = 0.05\) for larger values of space charge. They indicated that a maximum value of efficiency as high as 40 per cent should be possible using \(C = 0.15\), \(QC = 0.1\) and elevated beam voltages.
These five papers give some insight into the electron dynamics of power saturation, but still involve questionable approximations which make it desirable to compare predictions with the actual situation.
Theoretical considerations of the effects of attenuation upon efficiency have not led to conclusions coming even close to the observed results. Measured characteristics\textsuperscript{10, 11} show that the effect of attenuation is very large, but that attenuation may be appropriately distributed to attain stability and isolation between input and output of the tube without degrading the output power.
There are also several papers in the French and German periodicals which deal with the question of traveling wave tube efficiency. Some of these are listed in References 12 through 20.
This paper describes measurements of efficiency and of beam modulation made on a traveling wave tube scaled to large size,* and low frequencies. The construction of the tube, shown in Fig. 1, and the measurement of its parameters were much more accurate than is usual in the design of such tubes. The results are believed to be generally applicable to tubes having similar values of the normalized parameters.
Fig. 1 — The scale model traveling wave tube. The tube is 10 feet long with a copper helix supported by notched glass tubing from an aluminum cylinder overwound with a focusing solenoid. It is continuously pumped and readily demountable.
* See Appendix.
Two kinds of measurements are described. First, the efficiency and power output are determined for various conditions of operation, and second the spent beam ac velocity and current are measured. The principal results are shown in Figs. 2 to 4 which give the obtainable efficiencies, and in Figs. 7 to 10 which show some of the factors which contribute to power saturation. These figures are discussed in detail later. The most significant phenomenon is the early formation of an out-of-phase bunch of electrons which have been violently thrown back from the initial bunch, absorbing energy from the circuit wave, and inhibiting its growth. The final velocity of most of the electrons is near to that of the circuit wave which would lead to a value of
\[
\text{limiting efficiency } \eta = -2y_1C
\]
(6)
if the wave velocity maintained its small signal value. Actually the wave slows down, under the most favorable conditions giving rise to a somewhat higher efficiency. For other conditions, space charge, excess electron velocity, or nonuniformity of the circuit field enter in various ways to prevent the desired grouping of electrons and result in lower efficiencies.
The observed efficiencies are a rather complicated function of \(QC\), \(\gamma r_0\) and \(C\). To compare with efficiencies obtained from practical tubes one must account for circuit attenuation and be sure that some uncontrolled factor such as helix non-uniformity and secondary emission is not seriously affecting the tubes' performance. Measured efficiencies of several carefully designed tubes have been assembled and are compared with the results of this paper in Table I.
The results of these measurements compare favorably with the computations of Tien, Walker and Wolontis\(^7\), and of Tien\(^9\). There are, however, some important differences which are discussed in a later section.
**TRAVELING WAVE TUBE EFFICIENCY MEASUREMENTS**
Reasoning from low level theory, efficiency should be a function of the gain parameter, "C," the space charge parameter "QC," the circuit attenuation, and (for large beam sizes), the relative beam radius "\(\gamma r_0\)." It was soon found that efficiency is a much more complicated function of \(\gamma r_0\) than expected. The initial objective was to determine the effect of \(QC\), \(C\), and \(\gamma r_0\) separately on efficiency, but it was necessary to give a much more general coverage of these parameters, not assuming any of them to be small.
Most of the measurements have been made with small values of loss
## Table I
| Laboratory | Freq. mc. | $QC$ | $\gamma r_0$ | $C$ | $\eta$ measured | $\eta$ (from Fig. 3) | $\eta$ (From Fig. 3 with allowance for circuit attenuation$^{10}$) |
|-----------------------------|-----------|-------|--------------|-------|-----------------|----------------------|------------------------------------------------------------------|
| McDowell* | 4,000 | 0.27 | 0.62 | 0.078 | 19.5 | 26 | 21.6 |
| | 6,000 | 0.29 | 0.8 | 0.058 | 13.2 | 16.2 | 12.5 |
| Brangaccio and Cutler† | 4,000 | 0.61 | 0.87 | 0.041 | 11 | 6 | 6 |
| Danielson and Watson* | 11,000 | 0.35 | 1.2 | 0.05 | 6.6 | 7 | 4.8 |
| R. R. Warnecke$^{16, 17, 18}$ | 870 | 0.32 | 0.3 | 0.125 | 27 | 33 | 33 |
| W. Kleen and W. Friz$^{15}$ | 4,000 | 0.5 | 0.43 | 0.05 | 7.8 | 11.5 | 5.7 |
| W. Kleen† | 4,000 | 0.2 | 0.94 | 0.1 | 20 | 26 | 22 |
| L. Brück§ | 3,500 | 0.19 | 0.6 | 0.065 | 15 | 23 | 18.5 |
| Hughes Aircraft Co. | 3,240 | 0.19 | 0.94 | 0.12 | 39 | 31 | 29 |
| | 9,000 | 0.15 | 1.3 | 0.11 | 25 | 15.5 | 12.7 |
* At Bell Telephone Laboratories.
† Reference 10 (a slight beam misalignment could account for most of this difference).
‡ Siemens & Halske, Munich, Germany.
§ Telefunken, Ulm, Germany.
and of the gain parameter, where efficiency is proportional to $C$, as expected from small-signal small-$C$ predictions. This reduces the problem to a determination of $\eta/C$ versus $QC$ and $\gamma r_0$.
Many measurements of this kind have been made, and the data are summarized in Figs. 2 and 3, with efficiency shown as a function of $QC$ and $\gamma r_0$. In Fig. 2 we have the efficiency when the beam voltage is that which gives maximum low-level gain. Fig. 3 shows the efficiency obtained when the beam potential is raised to optimize the power output, and contours of constant efficiency have been sketched in. There is significantly higher efficiency than before in the region of maximum efficiency, but not much more elsewhere.
Fig. 4 shows how efficiency varies with $C$ for a small value of $QC$, a representative value of $\gamma r_0$, and with beam voltage increased to maximize the output. This indicates a maximum of about 38 per cent at $C = 0.14$.
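In the η/C normalization used for Figs. 2 to 4, the measured optimum corresponds to η/C of roughly 2.7. A one-line arithmetic check (editor's illustration, not from the paper):

```python
# Editor's arithmetic check: the Fig. 4 optimum, eta of about 38 per
# cent at C = 0.14, expressed in the eta/C normalization of Figs. 2-4.
eta, C = 0.38, 0.14
print(round(eta / C, 1))  # 2.7
```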
Some of the computed results of Tien, Walker and Wolontis,$^7$ and of Tien$^9$ are also indicated in the figures. Their results generally indicate somewhat greater efficiencies than were observed, but in the most significant region the comparison is not too bad as will be seen in a later section.
The measurements are for conditions having negligible circuit loss near the tube output. There are no new data on the effect of loss, but earlier results$^{10}$ have been verified by measurements at Stanford University$^{11}$ and are still believed to be a satisfactory guide in tube design.
Fig. 2 — Values of efficiency/C as a function of QC and γr₀ at the voltage giving maximum gain per unit length. The shaded contours and triangular points are from the computations of Tien, Walker and Wolontis. The circled points are from the measurements and the line contours are estimated lines of constant efficiency. The most significant difference is for large beam radii, where the RF field varies over the beam radius in a way not accounted for in the computations.
**SPENT BEAM CHARACTERISTICS**
The scale model traveling wave tube was followed by a velocity analyzer as sketched in Fig. 5 and described in the Appendix. A sample of the beam at the output end of the helix is passed through a sweep circuit to separate electrons according to phase, and crossed electric and magnetic fields to sort them according to velocity. The resulting beam draws a pattern on a fluorescent screen as shown in Fig. 6 from which charge density and velocity can be measured as a function of signal phase. The velocity coordinate is determined by photographing the ellipse with several different beam potentials, as in Fig. 6(a), and the phase coordinate is measured along the ellipse. From pictures like this a complete determination of electron behavior is obtained from the linear region up to and above the saturation level.
The results of such a run are plotted in Fig. 7. The upper lefthand
Fig. 3 — Values of efficiency/C as a function of QC and γr₀ at elevated beam voltage. Raising the beam voltage has little effect at large QC and small γr₀, and less than expected anywhere. Again the triangular points are from Tien, Walker and Wolontis, and the line contours are estimated from the measured data.
Fig. 4 — Efficiency/C for large values of C and with elevated beam voltage. Efficiency seriously departs from proportionality to C at C = 0.14, where a maximum efficiency of about 38 per cent is measured.
Fig. 5 — The velocity analyzer. A sample of the spent electron beam is accelerated to a high potential, swept transversely with a synchronous voltage, sorted with crossed electric and magnetic fields, and focused onto a fluorescent screen.
pattern, Fig. 7(a), is representative of the low level (linear) conditions (22 db below the drive for saturation output). The dashed curve represents the voltage on the circuit, inverted so that electrons can be visualized as rolling downhill on the curve. The phase of this voltage relative to the electron ac velocity is computed from small signal theory, but everything else in Fig. 7, including subsequent variations of phase, is measured. The solid line patterns represent the ac velocity, and the shaded area, the charge density corresponding to that velocity. Thus in each pattern we have a complete story of (fundamental) circuit voltage, electron velocity and current density as a function of phase, for a particular signal input level. The velocity and current modulations at small signal levels check calculated values well, and it is not difficult to visualize the dynamics giving this pattern.
Consider first the situation in the tube at small signal amplitudes. At the input an unmodulated electron beam enters the field of an electromagnetic wave moving with approximately the same velocity as the electrons. The electrons are accelerated or decelerated depending upon their phase relative to the wave, and soon are modulated in velocity. The velocity modulation causes a bunching of the electrons near the potential maxima (i.e., the valleys in the inverted potential wave shown) and these bunches in turn induce a new electromagnetic wave component onto the circuit roughly in quadrature following the initial wave. The addition of this component gives a net field somewhat retarded from the initial wave and larger in amplitude. Continuation of this process
Fig. 6 — Velocity analyzer patterns. The beam sample is made to traverse an ellipse at \( \frac{1}{3} \) the signal frequency. Current density modulation appears as intensity variation, and velocity variation as vertical deflection from the ellipse.
Fig. 7 — Curves of current and velocity as a function of phase for various input levels. The velocity becomes multivalued at a very low level, a tail forming a nucleus for a second electron bunch which eventually caused saturation in the output. For this run $C = 0.1$, $QC = 0.06$, $\gamma r_0 = 0.4$ and $b = 0.26$.
(Fig. 7 ordinates: relative electron velocity change and relative circuit voltage; abscissa: relative phase in degrees. Input levels are referenced to the drive for maximum power, 0 db.)
may be seen to give a resultant increasing wave traveling somewhat slower than the initial wave, and thus slower than the electron velocity. Returning to Fig. 7 we see that electrons in the decelerating field [from +30 to +210° in Fig. 7(a)] have been slowed down, and because of their initial velocity being faster than the wave velocity, have moved forward in the wave giving a region of minimum velocity somewhat in advance of the point of maximum retarding field (greatest negative slope in the wave potential). Also, bunching due to acceleration and deceleration of electrons has produced a maximum of electron current density which, because of the initial excess electron velocity, is somewhat to the right of the potential maximum (downward).
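The velocity-modulation-to-bunching mechanism described above can be illustrated with a crude one-dimensional kinematic sketch (an editorial illustration, not the paper's nonlinear theory): electrons entering at uniform phase receive a sinusoidal velocity modulation and then drift ballistically, and the exit-phase density develops a bunch. The modulation depth and transit angle below are arbitrary illustrative numbers, and space charge and circuit reaction are neglected.

```python
import math

def drift_phases(n=360, depth=0.2, transit_angle=2.0):
    """Exit phases of n electrons after sinusoidal velocity modulation
    followed by a field-free ballistic drift (kinematic sketch only)."""
    phases = []
    for k in range(n):
        phi0 = 2.0 * math.pi * k / n         # uniform entrance phase
        v = 1.0 + depth * math.sin(phi0)     # velocity modulation
        # a faster electron covers the drift in less RF phase
        phases.append(phi0 - transit_angle * (v - 1.0) / v)
    return phases

def count_near(phases, center, width=0.5):
    """Electrons whose exit phase lies within +/- width of center (mod 2*pi)."""
    two_pi = 2.0 * math.pi
    def dist(p):
        d = (p - center) % two_pi
        return min(d, two_pi - d)
    return sum(1 for p in phases if dist(p) < width)

phases = drift_phases()
# Density piles up where decelerated electrons are overtaken by
# accelerated ones behind them, and thins out half a cycle away.
print(count_near(phases, 0.0), count_near(phases, math.pi))
```

In the real tube the bunch then reacts back on the circuit and the space charge field intervenes, which is exactly where this ballistic picture breaks down and the measured patterns of Fig. 7 take over.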
As the level is increased the modulation increases and at 17 db below saturation drive, Fig. 7(b), some nonlinearity is evident. The velocity and current are no longer sinusoidal, but show the beginnings of a cusp in the velocity curve and a definite non-sinusoidal bunching of the electrons in the retarding field region (between +30 and 210°).
In the next pattern, Fig. 7(c), at 14 db below saturation a definite cusp has formed with a very sharp concentration of electrons extending significantly below the velocities of the other electrons. We already have a wide range of velocities in the vicinity of the cusp, and at this level the single valued velocity picture of the traveling wave tube breaks down. Although it cannot be distinctly resolved, the study of many such pictures leaves little doubt that the cusp and its later development is really a folding of the velocity line.
The next pattern at 12 db below saturation drive, Fig. 7(d), shows a greater development of the spur and a somewhat greater consolidation of current in the main bunch between +60° and +180°. It is interesting that the velocity in this region has not changed significantly. In order for this to be true the space charge field must just compensate for the circuit field. In the vicinity of the 60° point the space charge field obviously must reverse, accounting for the very sharp deceleration evident in the very rapid development of the low velocity spur. The decelerating field must be far from that of the wave, inasmuch as the electrons just behind the cusp are much more sharply decelerated than those preceding the cusp. We conclude that there are very sharply defined space charge fields much stronger than the helix field. At this relatively low drive, the velocity spread has already achieved its maximum peak value.
The succeeding three patterns show a continuing growth of the spur, a continued bleeding of electrons from the higher velocity regions, and a consolidation of the main bunch just in advance of the spur. Presumably the increased concentration of space charge in the bunch has kept
pace with the increasing helix field, so that the net decelerating field still balances to nearly zero. At 4 db below the saturation drive, Fig. 7(h), the spur has moved well into the accelerating region, and has been speeded up. The main bunch of electrons is still to the right of the spur, and has been consolidated into a $60^\circ$ interval. The few electrons in advance of this region evidently no longer find the space charge field sufficient to balance the circuit field, and are being decelerated into a second low velocity loop.
The next three patterns show a continued growth of this second low velocity loop, further consolidation of the 'main bunch', and the rapid formation of a second bunch in the accelerating field at the end of the spur. It is interesting that at saturation drive, Fig. 7(k), the two bunches are very nearly equal, and in equal and opposite circuit fields, nearly $180^\circ$ apart. The reason for the saturation is that while the main bunch is still giving up energy to the wave, the new one is absorbing energy at an equal rate. The fundamental component of electron current is evidently small, and is in quadrature with the circuit field. The current density in the dashed regions is less than 1 per cent of that in the bunches, and probably more than 95 per cent of the electrons are in the two bunches. Two new effects are observable at this level. The second electron bunch has begun to come apart, presumably because of strong localized space charge forces. These forces are also evident in the kink in the velocity pattern drawn by the fast electrons at the same phase as the second bunch.
Since the majority of the current is in the two bunches at a reduced velocity of
$$\frac{\Delta V}{2V_0C} = -1.1$$
one would expect an output efficiency of
$$\frac{\Delta V}{V_0} = 2.2C$$
The actual measured efficiency
$$\frac{\text{RF power output}}{\text{DC power input}}$$
was $2.0 C$. Under the conditions described, (6) would give $1.4 C$.
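The arithmetic of this estimate can be restated as a one-line check (the function name is invented; $C$ is Pierce's gain parameter):

```python
def efficiency_from_velocity_drop(dv_norm, C):
    """Output efficiency implied by the normalized velocity reduction
    dv_norm = Delta-V/(2*V0*C) of the electron bunches: since
    Delta-V/V0 = 2*C*dv_norm, eta is about 2*C*|dv_norm|."""
    return 2.0 * C * abs(dv_norm)

# The bunches sit at Delta-V/(2*V0*C) = -1.1; passing C = 1 factors
# C out and returns eta/C directly:
eta_over_C = efficiency_from_velocity_drop(-1.1, C=1.0)
```

With the quoted value of $-1.1$ this gives $\eta = 2.2C$, to be compared with the measured $2.0C$.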
At still higher drive levels the pattern continues to develop, electrons from the first bunch falling back into the second, which in turn continues to divide, one part accelerating ahead into a new spur, and the other
slowing down and falling further back in phase. At 9 db above saturation, Fig. 7(o), the pattern is quite complex, and at still higher levels it is utterly indescribable.
It is interesting that the velocity gives a line pattern, even though a multivalued one. It is reasonable to suppose that the development of the spur is really a folding of the velocity line so that the spur is really a double line. Thus, at the 9 db level, and at $0^\circ$ phase, for instance, there must be electrons originating from five different parts of the initial distribution. In an attempt to verify this the resolution of the velocity analyzer was adjusted so that a difference in velocity of 2 per cent of the overall spread could be observed, but there was no positive indication of more than one velocity associated with any line shown.
There has been a long-standing debate as to whether electrons are trapped in the circuit field or continue to override the wave at large amplitudes. The observations indicate that with low values of space charge and near synchronous voltage the electrons are effectively trapped in the wave until well above saturation amplitude. In other circumstances this is not the case, as we shall see.
**SPACE CHARGE EFFECTS**
The data of Fig. 7 were taken with a very small value of the space charge parameter $QC$, so small in fact as to be almost negligible as far as low level operation is concerned. Yet the space charge forces evidently played a very strong role in the development of the velocity and current patterns. It is doubtful that space charge would ever be negligible in this respect, because if the space charge parameter were smaller, the bunching would be more complete, and the electron density in the bunch would be greater, limited only by the balance of space charge field and circuit field in the bunch. The effect of decreasing $QC$ further is therefore a greater localization of the space charge forces, rather than a reduction of their magnitude, at least until the bunch becomes short compared to the beam radius.
Increasing the value of the space charge parameter has quite the opposite effect. In Fig. 8 are shown three velocity-current distributions at the saturation level, for different values of $QC$. It can be seen that a result of increased space charge is a greater spread of velocities, and a wider phase distribution of current.
With the introduction of space charge, the velocity difference between the electrons and the circuit wave at low levels is increased. Consequently electrons spend a longer time in the decelerating field before being thrown back in the low velocity spur, and thus lose more energy. The
Fig. 8 — A comparison showing the effect of the space charge parameter $QC$ on the velocity and current at overload. The points represent the disc electrons of the computations\textsuperscript{7} of Tien, Walker and Wolontis. For this run $\gamma r_0 = 0.4$ and $b$ is chosen for maximum $x_1$.
greater reduction of velocity results in a faster and farther retarding of the current in the spur before the retarded electrons recover velocity in the accelerating region. Also the larger space charge forces prevent as tight bunching of the electrons anywhere, so that at overload they are spread over a much wider phase interval (about $360^\circ$ for $QC = 0.5$). Space charge also prevents electrons from the forward part of the bunch from being trapped so that more electrons escape ahead of the decelerating field and more current is found in the upper half of the velocity curve. This very likely is the reason that efficiency decreases when $QC$ is increased above about 0.3.
**EFFECT OF BEAM SIZE**
In small signal operation, decreasing the beam radius below that which assures a constant circuit field throughout the beam has no effect except that accounted for by its effect on $QC$. Fig. 9 shows that for large signals, however, it has a pronounced effect. When the beam is made smaller (with $QC$ maintained by changing frequency and beam current), the slowed up tail is formed at a much lower signal level (not shown), by a very few electrons which begin to collect in the accelerating region before the beam is strongly modulated. As the level is increased, the current is redistributed, more going into the tail without much alteration in the shape of the velocity pattern, and with no strong bunching at any part of the curve. This result is exaggerated in Fig. 9(c) by measuring with a
Fig. 9 — Curves of current and velocity as influenced by $\gamma r_0$. Space charge becomes a very potent factor near overload, especially when the beam is small. For this run $QC = 0.34$ and $b = 1.0$.
ridiculously small beam. By comparison with curves taken for larger beams, the tail is diminutive, electrons are much more uniformly distributed over all velocities and phases, and a peculiar splitting of velocities in the main bunch is found. The latter indicates that electrons entering from the higher velocity region move forward in the bunch, and the rest gradually retard. The smaller reduction in velocities, and the spread of electrons into the higher velocity regions is consistent with the lower efficiency measured (Fig. 2).
To explain the observed difference in high level performance of tubes with different size beams we must consider the character of the ac longitudinal space charge field. The coulomb field from an elemental length of an electron beam is inversely proportional to the square of the distance from the element
\[
E = \text{Const} \frac{q \Delta z}{(z - z_1)^2}
\]
provided \( (z - z_1) \gg r_0 \) and \( (z - z_1) \ll a \).
For \( (z - z_1) \) not small compared to \( a \), (i.e., circuit radius not awfully large) the field would drop even faster with \( (z - z_1) \) due to the shielding effect of the circuit. On the other hand, very near to the beam element \( (z - z_1 \ll r_0) \), the field is approximately that of a disc, which is nearly independent of \( z \), i.e.,
\[
E = \text{Const} \cdot \frac{q \Delta z}{\pi r_0^2}
\]
independent of \( z \) for \( (z - z_1) \ll r_0 \).
Thus to a fair approximation the space charge field may be considered to be uniform for an axial distance of the order of a half a beam radius, and to drop rapidly at greater distances. For a given current element, a small diameter beam has an intense field extending only a short distance, while an equal charge element in a larger beam has a weaker longitudinal field extending to a greater distance.
At low amplitudes the extent of the forces makes no difference in operation, for a sinusoidal current gives a sinusoidal space charge field in either case. However, at large amplitudes, a sharp change in current density has a very high short range space charge field if the beam is small, or a much smaller smoothed out long range field if the beam is large. For \( \gamma r_0 = 0.5 \) which appears to be an optimum compromise between the effects of space charge and field non-uniformity, the space charge field could scarcely be confined closer than about \( \pm 30^\circ \) in phase. On the other hand, a sharp bunching of electrons in a beam having
$\gamma r_0 = .05$ would have 100 times the space charge field, extending however only one tenth as far from the current discontinuity.
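This two-regime picture can be sketched as a toy model: a nearly constant disc-like field within about half a beam radius of the charge element, matched onto a Coulomb $1/z^2$ fall-off beyond. The constant $k$ and the matching point are assumptions made for illustration, not from the paper.

```python
import math

def sc_field(z, q_dz, r0, k=1.0):
    """Toy longitudinal space-charge field of a thin charge element
    q_dz in a beam of radius r0.  Within about r0/2 of the element the
    field is disc-like, ~ q_dz/(pi*r0^2); farther away it falls off as
    the Coulomb 1/z^2.  k stands in for 1/(4*pi*eps0) and geometry."""
    z = abs(z)
    near = k * q_dz / (math.pi * r0 ** 2)
    if z <= r0 / 2.0:
        return near
    return near * (r0 / 2.0) ** 2 / z ** 2   # matched at z = r0/2

# A beam ten times smaller concentrates ~100x the peak field, but the
# constant-field zone (the field's range) is ten times shorter:
peak_ratio = sc_field(0.0, 1.0, 0.05) / sc_field(0.0, 1.0, 0.5)
```

The factor of 100 in peak field for a tenfold reduction in radius reproduces the comparison of $\gamma r_0 = 0.5$ and $\gamma r_0 = 0.05$ made above.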
Returning to Fig. 9 we can see how these considerations enter into the development of the beam modulation. In the case of the small beam, Fig. 9(c), at the very beginning of the formation of a cusp, the strong highly concentrated space charge force causes a rapid deceleration of nearby electrons, resulting in the relatively early formation of a diminutive tail. The very high localized space charge force also prevents as tight bunching of electrons, forcing some to move forward and continuously repopulate the accelerating part of the wave. The relatively early falling apart of the initial bunch and the greater acceleration of the overriding electrons evidently give the latter enough velocity to penetrate the main bunch of electrons and form the second class of electrons in the main bunch, $90^\circ - 150^\circ$ in Fig. 9(c). Thus the net result of reducing the beam size is a severe aggravation of space charge debunching effects, with a consequent reduction in efficiency. To get high efficiency, we conclude, the beam should not be small. It should not be larger than $\gamma r_0 = 0.7$ however, for then the circuit field is not uniform enough over the beam cross-section to excite it properly, resulting in a loss in efficiency as is evident in Figs. 2 and 3.
**EFFECT OF INCREASED BEAM VOLTAGE**
It is common practice in the operation of traveling wave tubes to elevate the beam voltage, taking a sacrifice in gain in order to obtain increased power output. The effects on the beam modulation are shown in Fig. 10. In Fig. 10(a), the voltage is somewhat below that giving maximum gain. The curve is characteristic of what we have already seen but the bunching is less pronounced and the velocities are less reduced. In Fig. 10(b) the voltage is somewhat above that giving maximum gain and the curve is much like that of Fig. 8 except that the decelerated electrons are slowed by a greater amount, consistent with the increased separation of electron and wave velocity, and also with the measured increase in power output.
Increasing the beam voltage still further gives only a slight increase in efficiency. Fig. 10(c) shows that even though electrons are slowed to still lower velocities, and the velocity spread is increased, many more electrons override the circuit wave and are accelerated, thereby offsetting the greater contribution of the slower electrons. This is much like what was seen with increasing space charge ($QC$) and indeed the effects are almost equivalent. As one would expect therefore, little is gained by elevating the beam voltage if the space charge is large, the
Fig. 10 — The influence of beam velocity on ac velocity and current. When the velocity is raised too high, the electrons are not effectively trapped by the wave, and override into the accelerating field. With large $QC$ and/or small $\gamma r_0$ the electrons override in any case, and little is gained by increasing $b$. For this case $QC = 0.13$ and $\gamma r_0 = 0.21$.
main effect being to push more electrons forward into the accelerating region.
**ELECTRIC FIELD IN THE BEAM**
Besides telling a clear story of the non-linear dynamics of the traveling wave tube, the foregoing curves contain a lot of information about average current and velocity distributions. From the current or velocity curves we can in turn deduce the distribution of longitudinal electric field in the beam. Figs. 11(a) and (b) show the instantaneous current as a function of phase, taken from the curves of Figs. 8(a) and (b). The infinite slope of the velocity curve necessarily gives a pole in the charge density (at about $88^\circ$). The total charge in the vicinity of the
Fig. 11 — AC current and electric field in the beam. The upper curve comes directly from Fig. 8(a). The lower curve is deduced by an approximate method from the velocity curve of Fig. 8(a). The double value below $90^\circ$ is partly due to inconsistency between the two parts of the velocity curve, and partly due to the nature of the approximation.
pole, and the range of the space charge force (dependent upon $QC$ and $\gamma r_0$) determines its effect upon the electron dynamics.
Most of the current is incorporated in the two bunches nearly $180^\circ$ apart, as we have seen, each bunch having a current density many times the average.
We might obtain the space charge fields from the current density, but this would require a rather definite knowledge of the characteristic space charge field versus distance as influenced by beam diameter. It would also be pushing the accuracy of charge density measurement, which is crude at best. A better way is to compute the electron acceleration from the velocity curves. This may be done by taking two velocity patterns at slightly different signal levels, and tracing electrons from one to the next, using the measured velocity to determine the relative phase shift of any electron.
In the appendix it is shown that a close approximation to this is
$$E_\Phi = 2\beta C^2 V_0 \left[ \frac{(V_0 - V_w) + \Delta V}{2V_0C} \right] \frac{d}{d\Phi} \left( \frac{\Delta V}{2V_0C} \right)$$ \hspace{1cm} (10)
where the parameters are all obtained from a single velocity curve, and
\[ E_\Phi \] is field strength in volts per meter at phase \( \Phi \)
\[ \frac{\Delta V}{2V_0C} \] is the value of the ordinate of the velocity characteristic of interest (Figs. 7 to 10), and
\[ \left( \frac{V_0 - V_w}{2V_0C} \right) \] is the value of the ordinate corresponding to the wave velocity (to be precise, the wave velocity at the associated output level, but to a reasonable approximation that at low levels; this value is indicated by \( V_w \) in the velocity curves).
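A sketch of how eq. (10) might be evaluated numerically from a sampled velocity curve, assuming central differences for the derivative; the function name and sampling are invented for illustration.

```python
import math

def field_from_velocity_curve(phi_deg, y, y_w, beta, C, V0):
    """Numerical evaluation of eq. (10):
        E(phi) = 2*beta*C^2*V0 * [y_w + y(phi)] * dy/dphi,
    where y = Delta-V/(2*V0*C) is the measured velocity ordinate and
    y_w = (V0 - Vw)/(2*V0*C) locates the wave-velocity line.
    phi_deg is in degrees; the derivative is by central differences."""
    E = []
    for i in range(len(y)):
        j0, j1 = max(i - 1, 0), min(i + 1, len(y) - 1)
        dphi = math.radians(phi_deg[j1] - phi_deg[j0])
        dy_dphi = (y[j1] - y[j0]) / dphi
        E.append(2.0 * beta * C ** 2 * V0 * (y_w + y[i]) * dy_dphi)
    return E
```

A flat velocity curve gives zero field everywhere, as it should, since the electrons are then unaccelerated.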
The total electric field has been computed for the case of Figs. 8(a) and (b) and is given in Figs. 11(b) and 12(b) together with the circuit field calculated for the associated power level and plotted with an arbitrarily chosen phase. In each case it is seen that the space charge field is comparable in magnitude to the circuit field, is far from sinusoidal, and

**Fig. 12** — AC current and electric field in the beam deduced from Fig. 8(b). The greater space charge results in a less defined bunch, and smoother space charge field than in Fig. 11.
Fig. 13 — Curves of output level, Fourier component amplitudes of beam current, and peak velocity as a function of input level for low space charge. These curves were deduced from Fig. 8(a).
Fig. 14 — Maximum velocity reduction as a function of space charge (from Fig. 8). The velocity reduction is about $3.5 y_1$.
agrees qualitatively with what would be expected from the associated curve of beam current.
To determine the curves of Figs. 11 and 12 is rather stretching the accuracy of the measurements as can be seen by the large discrepancy in the field calculated from the two parts of the velocity curve which of course should be identical. The figures do give an interesting qualitative picture of traveling wave tube behavior however, and are included here for that reason.
**OVERALL VELOCITY SPREAD**
Of more practical importance is the overall velocity spread in the spent beam. It is often desirable to reduce the power dissipation in a traveling wave tube by operating the collector at a potential below that of the electron beam, and it is interesting to see how far one might go. Fig. 13 shows how the velocity reduction of the slowest electron, together with the output level and Fourier components of beam current, vary with input level. For small amplitudes, the low level theory accurately predicts the velocity, but near overload, as we have seen, the minimum velocity drops sharply to a value several times lower than that projected from small signal theory.
The maximum velocity spread dependence upon the space charge parameter $QC$ is shown in Fig. 14. Similar data for values of the other parameters may be obtained from the velocity diagrams.
From the foregoing data, one can deduce the amount of reduction of collector potential that should be theoretically possible without turning back any electrons. An idealized unipotential anode could collect all the current at a potential $\Delta V$ (in the foregoing figures) above the cathode, decreasing the dissipated power by a factor of $\Delta V/V_0$ below the dc beam power.
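The dissipation estimate can be sketched as follows, under the idealizations stated above (no RF output or beam interception accounted for; argument names are invented):

```python
def collector_dissipation(I0_beam, V0, dV):
    """Power dissipated at an ideal unipotential collector held dV
    volts above the cathode.  Each electron gives back I0*dV of its
    dc energy, so dissipation falls from the beam power I0*V0 by the
    factor dV/V0; dV must not exceed the slowest electron's remaining
    potential or current is turned back."""
    assert 0.0 <= dV <= V0
    return I0_beam * (V0 - dV)

# e.g. a 400-volt, 50-mA beam whose slowest electrons retain 100 volts:
# depressing the collector to 100 V cuts dissipation by 25 per cent.
saved_fraction = 100.0 / 400.0
```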
**STOPPING POTENTIAL MEASUREMENTS**
Information on spent beam velocity has also been obtained by a stopping-potential measurement at the collector of a more conventional 4,000-mc traveling wave tube.* Two fine mesh grids were closely spaced to a flat collecting plate, and collector current was measured as a function of the potential of the second grid. The first grid was very dense, to prevent reflected electrons from returning into the helix. One curve taken with this arrangement is shown in Fig. 15 and for comparison we have
* Similar measurements have been reported by Atsumi Kondo, Improvement of the Efficiency of the Traveling Wave Tube, at the I.R.E. Annual Conference on Electron Tube Research, Stanford University, June 18, 1953.
plotted the distribution predicted from Fig. 9(b). The RF losses in the 4,000-mc tube were not negligible, and probably account for slightly smaller power output and greater proportion of higher velocity electrons.
**COMPARISON WITH COMPUTED CURVES**
Non-linear calculations of traveling wave tube behavior have been made by Tien, Walker and Wolontis\textsuperscript{7} and by Tien\textsuperscript{9} covering the same region of parameter values as is reported here. In Figs. 2, 3, 4 and 9 are shown some of their data on our coordinates. The similarity of the results over much of the range is rather reassuring. It is interesting that in order to make the computations it was necessary to assume two space charge factors, just as was found experimentally. There are, however, some significant differences:
1. In general, the computed values give a higher value of efficiency than is measured, by about 25 per cent. Thus, the computations indicate

**Fig. 15** — Collected current versus stopping potential. The oscilloscope curve is for a 4,000-mc tube, and the other that predicted from the scale model measurements. By integrating current as a function of velocity for Figs. 7–10 stopping potential distributions can be deduced for other conditions.
Fig. 16 — Efficiency versus $\gamma r_0$ for small $QC$. The dashed curve is proportional to the square of the fraction of the beam current lying in a circuit field of at least 85 per cent of the strength at the edge of the beam. This illustrates the fact that for large beams only the edge of the beam is effective.
that with the reasonable values of $QC = .25$ and $\gamma r_0 = 0.8$ ($k_T = 2.5$), the efficiency would be about $3.8C$, whereas the measured value is $3.1C$.
2. The largest discrepancy in the measured and computed value of $\eta/C$ is for large values of $\gamma r_0$ (small $k_T$), where the computations show a steady increase in efficiency instead of a sharp decrease. This arises because the computational model assumed the electric field to be uniform across the beam, whereas in the actual tube it varies as $I_0(\gamma r)$, and for large values of $\gamma r_0$ the field is weak near the beam axis. This effect is shown in Fig. 16 where $\eta/C$ is plotted versus $\gamma r_0$ for small values of $QC$, on the same scale with a curve proportional to the square of the fraction of the beam within a cylindrical shell such that
$$1 - \frac{I_0(\gamma r_1)}{I_0(\gamma r_0)} = 0.85$$ \hspace{1cm} (11)
where $r_1$ is the inside radius, and $r_0$ the outside beam radius (i.e., the fraction of the beam in a field greater than 85 per cent of that at the beam edge).
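A sketch of solving eq. (11) for the shell radius, using the power series for $I_0$ and bisection; this is an assumed numerical approach, not taken from the paper, and a solution exists only for beams large enough that $I_0(\gamma r_0) \ge 1/0.15$.

```python
import math

def bessel_I0(x, terms=40):
    """Modified Bessel function of the first kind, order zero, from
    its power series I0(x) = sum_k (x/2)^(2k) / (k!)^2."""
    return sum((x / 2.0) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def shell_inner_radius(gamma_r0, drop=0.85, tol=1e-10):
    """Solve eq. (11), 1 - I0(x)/I0(gamma_r0) = drop, for x = gamma*r1
    by bisection on [0, gamma_r0]."""
    f = lambda x: 1.0 - bessel_I0(x) / bessel_I0(gamma_r0) - drop
    lo, hi = 0.0, gamma_r0
    if f(lo) <= 0.0:
        raise ValueError("eq. (11) has no solution for this gamma_r0")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

With the shell radius in hand, the fraction of a uniform beam inside the shell is simply $(r_0^2 - r_1^2)/r_0^2$.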
No serious studies of velocity were made for large beams, but on cursory examination it was evident that the beam modulation varied considerably over the cross section when the beam was very large, and scarcely at all when it was smaller than around $\gamma r_0 = 0.8$.
3. The observed effect of small beam radius upon efficiency is not as pronounced as was found in the computations. The reason is not known but may be due to modulation of the beam diameter at large signal levels. This effect would be negligible with the larger $\gamma r_0$'s, due to the focusing fields being relatively much larger.
4. The computations, and also those of Nordsieck, Poulter and Rowe indicate a much higher efficiency than has been observed at elevated beam voltages and small $C$ and $QC$. The reason for this may be that the limited number of "electrons" used in the computational models fail to adequately account for the very sharp space charge cusp that forms under low $QC$ conditions, or that interpolation between their points should not be linear, as assumed in making the comparison. On the other hand it would be difficult to be sure that nonuniformities in electron emission were not influencing the measurements in the case of the large beams by giving a larger $QC$ than calculated.
5. The increase in efficiency to be had by elevation of beam voltage is much smaller than is indicated by the computations. This may be a real difference, or it may be that at elevated voltages, the measurements are beginning to feel the influence of overloading in the attenuator. The margin of safety on attenuator overloading is not as great as one would like at the higher frequencies.
6. The velocity curves, Fig. 8, compare the computed and measured data on three runs. For small $QC$, Fig. 8(a), the agreement is remarkably good considering the fact that in the computation only 24 "electrons" were used to describe a rather complicated function. The effect of the lumping of space charge in the artificial 'disc' electrons causes a scatter of points which is different from that in an actual tube, as is especially apparent in Fig. 8(c). In spite of this the computational results indicate a velocity spread and current distribution not greatly different from that observed.
**CONCLUSIONS**
The large scale model traveling wave tube is a means for the determination of non-linear behavior, and has been valuable in determining relationships and limitations important to efficient operation of such tubes. It has shown that there is a broad optimum in tube parameters around $C = 0.14$, $QC = 0.2$ and $\gamma r_0 = 0.5$ for which values it is possible to obtain efficiencies well above 30 per cent. The measured ac beam velocity and current near overload show that it is unlikely that significant increase in efficiency can be obtained by any simple expedients such
as operations on the helix pitch alone, or the use of an auxiliary output circuit.
The results, being in normalized form, are believed to be generally applicable to conventional traveling wave tube design. With determination of an equivalence in beams, they should even be a useful guide in the design of tubes using hollow beams or other configurations.
The work described could not have been done without the able assistance of G. J. Stiles and L. J. Heilos and the helpful counsel of many of my colleagues at Bell Telephone Laboratories.
**Appendix**
**Scale Model Tube Design**
There were a large number of factors to be accounted for in the design of this tube. Its proportions should be such as to make it representative of the usual design of traveling wave tube. Its size should be such as to make it easy to define the electron beam boundary, and to dissect the beam. The size should also be such that the electron beam velocity analysis could be done before the beam character would be changed either by space charge, or its velocity spread. The voltage should be low so that further acceleration in the velocity analyzer would not lead to an inconveniently high voltage. Finally, the availability of suitable measuring gear over a 3-to-1 frequency range, and the size of the laboratory must be considered. All of these factors led to low frequency operation, limited principally by the laboratory size and the mechanics of construction.
A moderate perveance of around $0.2 \times 10^{-6}$ was taken, with a $\gamma a$ of 1.2 and $\gamma r_0$ of 0.8 in a representative helix with small impedance reduction due to dielectric and space harmonic loading. This is representative of practical tube design in the microwave range and is centered on the parameter values of most general interest. At a frequency of 100 mc and a beam potential of 400 volts this resulted in a helix 10 feet long and $1\frac{1}{2}$ inches in diameter, with an electron beam 1 inch in diameter. The choice of frequency was finally determined by the availability of measuring equipment, and the voltage was selected to give a convenient size for dissection of the electron beam.
By changing frequency, beam current, and beam diameter it was possible to cover a reasonable range of $\gamma r_0$, and $QC$, and to make some observations into the region of large $C$ operation.
In all of the measurements described, a very strong uniform magnetic field was used to confine the beam, and therefore scaling of the magnetic
focusing field need not be considered. The electron beam was produced in a gridded gun and is thus near to the ideal confined flow, which is the only focusing arrangement which is known to determine a reasonably uniform boundary to the beam. The beam size and straightness were checked using a fluorescent screen at the collector end.
**NORMALIZING FACTORS**
The measurements described are expressed relative to the linear theory, in Pierce's notation, which is generally used in the design of traveling wave tubes. Thus, instead of being presented in the terms of measurement or simply normalized to efficiency, perveance, impedance, etc., they are expressed in terms of $C$, $QC$, $\gamma r_0$, etc., with normalized fields, currents and velocities. In this way the results become adjuncts to the linear theory and are more easily applied to tube design. Electron velocity is plotted on the same scale as the relative velocity parameters $b$ and $y_1$ used in low level theory (i.e., normalized to $\Delta V/2V_0C$). Efficiency is normalized as $\eta/C$, which for $C$ less than 0.1 is relatively independent of $C$. Field strength in the linear region is proportional to
$$\sqrt{\frac{\eta'}{C}}$$
($\eta'$ being efficiency measured at the appropriate signal level). Solving the equation for $C^3$,
$$C^3 = \frac{E^2}{2\beta^2P} \frac{I_0}{2V_0}$$ \hspace{1cm} (12)
gives us
$$\sqrt{\frac{\eta'}{C}} \approx \frac{E}{\beta C^2V_0}$$ \hspace{1cm} (13)
which we use as the normalizing parameter for electric field. Circuit potential is the integral of circuit field over a quarter period, giving a normalized parameter $V/V_0C^2$. For convenience in the use of common coordinates, circuit potential was plotted as $-V/2V_0C^2$ in Figure 7.
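Taking eq. (12) at face value, the gain parameter could be computed as follows; the function name and the numbers fed in are invented for illustration.

```python
def gain_parameter_C(E, beta, P, I0_beam, V0):
    """Pierce gain parameter C from eq. (12) as written above:
        C^3 = (E^2 / (2*beta^2*P)) * (I0 / (2*V0)),
    with E the circuit field amplitude (V/m), beta the phase constant
    (rad/m), P the circuit power (W), I0_beam the dc beam current (A),
    and V0 the beam voltage (V)."""
    return ((E ** 2 / (2.0 * beta ** 2 * P))
            * (I0_beam / (2.0 * V0))) ** (1.0 / 3.0)
```

The first factor is the circuit impedance $E^2/2\beta^2 P$, so doubling the beam current raises $C$ only by the cube root of two, consistent with the weak dependence of $\eta/C$ on $C$ noted above.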
The other curves are plotted as values relative to dc quantities or to saturation level.
Strictly speaking, the results hold only for tubes having the same proportions as the model. Practically, however, as long as the helix impedance and radius ($ka$ or $\gamma a$) are not different by orders of magnitude from the values used, and as long as the perveance is low (below $2 \times 10^{-6}$ for
instance), the results are believed to be significant for tubes having the indicated values of $\gamma r_0$ and $QC$.
**HELIX IMPEDANCE**
It is important to the measurements to have an accurate evaluation of the helix impedance. Several methods of measurement have been discussed in the literature.\textsuperscript{21, 22} That described by R. Kompfner was selected, wherein the circuit impedance is correlated with the beam current and voltage which gives a null in the output signal. When the beam voltage and current are adjusted to give zero transmission for a lossless section of helix (neglecting space charge) $CN = 0.314$ and $\delta V/V_0 \cong 1/N$. Using the measured length of the helix, and measuring the voltage and current giving the null in signal transmission, we can compute $C$, and thus the impedance and velocity (synchronous voltage) of the helix.
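A minimal sketch of the Kompfner-dip reduction, assuming the relation $CN = 0.314$ quoted above and the impedance definition implicit in eq. (12), $K = E^2/2\beta^2 P = 2V_0C^3/I_0$; the function and argument names are invented.

```python
def kompfner_dip(N, V_null, I_null):
    """Kompfner-dip evaluation: at the transmission null of a lossless
    helix section (space charge neglected) C*N = 0.314, so the
    measured electrical length N in circuit wavelengths gives C, and
    eq. (12) then gives the impedance K = 2*V0*C^3/I0 from the beam
    voltage and current at the null.  Returns (C, K)."""
    C = 0.314 / N
    K = 2.0 * V_null * C ** 3 / I_null
    return C, K
```

The synchronous (helix) voltage follows from the same null condition via $\delta V/V_0 \cong 1/N$, as stated in the text.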
The impedance was calculated by P. K. Tien,\textsuperscript{23} and the results are compared in Fig. 17. The measured impedance at the high frequency end was much too low until space charge in the beam was accounted for in interpreting the measurements. Fortunately, in the absence of attenuation, the conditions for start of oscillation in a backward wave oscillator are the same as for the output null in a traveling wave tube. Space charge was first accounted for using the results of H. Heffner,\textsuperscript{24, 25} giving an excellent check between predicted and computed helix impedance. Later C. F. Quate\textsuperscript{26} showed that the same measurement could be used to determine the space charge parameter $QC$ as well as the helix impedance. Since thermal velocity effects and the uncertainty of some of the assumptions used in evaluating the small signal effects of space charge cast some doubt on the proper evaluation of this term, further measurements were made on this factor, and a satisfactory correlation between the observed value of $QC$ and that computed from the Fletcher\textsuperscript{27} curves was obtained.

**Fig. 17** — Helix impedance as a function of frequency. The impedance was calculated taking into account dielectric loading and wire size. It was measured using the Kompfner dip method, taking account of space charge.
**TOTAL ACCELERATING FIELDS**
From the velocity characteristics shown in Figs. 7 through 10, we can deduce the electron accelerations, and thus the electric fields at any point. While the curves are actually diagrams of velocity as a function of phase, they closely correspond to the velocity-time or distance distribution of the electrons in the traveling wave tube. Knowing these characteristics we can deduce the motion of any element of charge, and thus the force under which it moves. It is observed that over most of the curve the shape of the velocity pattern does not change nearly so rapidly as the redistribution of electrons within the pattern. Thus, we can approximate the situation at any amplitude by assuming the velocity pattern to be constant, and that electrons move within the pattern according to simple particle dynamics. This is a good approximation except where the acceleration is high (i.e., vertical crossings of the wave velocity line).
Consider then an element of the velocity pattern at phase $\Phi_1$ and velocity $(u_0 + \Delta u)$. In an interval $dt$ this element will move a distance
$$ (u_0 + \Delta u) \ dt $$
(14)
and will change velocity by
$$ du = E \frac{e}{m} \ dt $$
(15)
At the same time the wave will have moved a distance $v \ dt$, resulting in a relative change in phase between wave and current element of
$$ d\Phi = \beta(u_0 - v + \Delta u) \ dt $$
(16)
In terms of equivalent differences the term in parentheses can be written
$$ (u_0 - v + \Delta u) = \sqrt{2 \frac{e}{m} V_0} C \left( \frac{V_0 - V_w + \Delta V}{2V_0C} \right) $$
(17)
from (16) and (17) we can write:
\[
\frac{du}{dt} = \frac{du}{d\Phi} \frac{d\Phi}{dt}
\]
\[
= \frac{d}{d\Phi} \left( \frac{\Delta V}{2V_0C} \sqrt{2 \frac{e}{m} V_0}\, C \right) \left[ \beta \sqrt{2 \frac{e}{m} V_0}\, C \left( \frac{V_0 - V_w + \Delta V}{2V_0C} \right) \right]
\]
(18)
giving (from 15)
\[
\frac{E}{\beta V_0 C^2} = 2 \left[ \left( \frac{V_0 - V_w}{2V_0C} \right) + \left( \frac{\Delta V}{2V_0C} \right) \right] \frac{d}{d\Phi} \left( \frac{\Delta V}{2V_0C} \right)
\]
(19)
\( \beta \), \( V_0 \) and \( C \) are constants of the tube; the first inner parenthesis may be calculated from the tube constants and is shown in the curves. \( \Delta V/2V_0C \) and its differential are the value and the slope of the velocity curve in question.
The important approximations here are that the velocity-phase curves are representative of velocity-distance characteristics, which is true for small values of \( C \), and that the electrons move roughly tangent to the given velocity pattern. By comparing several patterns at different signal levels it is observed that this is true to a fair accuracy over most of the curve. Also it is assumed that the wave velocity at large amplitudes is the same as that for small signals, which is not quite true. The resulting curves give at least a qualitative picture of the field distribution within a traveling wave tube, and serve to emphasize the importance of space charge fields in determining the non-linear characteristics.
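The slope-times-offset evaluation described above is easy to mechanize. The sketch below is our illustration, not the paper's: it takes the sampled velocity pattern $y = \Delta V/2V_0C$ versus phase together with the constant first parenthesis $b = (V_0 - V_w)/2V_0C$, and evaluates $E/\beta V_0C^2 = 2(b + y)\,dy/d\Phi$, with a finite-difference slope standing in for the graphical one.

```python
import numpy as np

def normalized_field(phase, y, b):
    """Evaluate E/(beta*V0*C^2) = 2*(b + y)*dy/dPhi along a sampled
    velocity pattern.  y = DeltaV/(2*V0*C) versus phase Phi, and
    b = (V0 - Vw)/(2*V0*C) is the constant first parenthesis."""
    dydphi = np.gradient(y, phase)   # finite-difference slope of the pattern
    return 2.0 * (b + y) * dydphi

# a sinusoidal small-signal pattern as a check case (invented amplitude):
phi = np.linspace(0.0, 2.0 * np.pi, 361)
field = normalized_field(phi, 0.5 * np.sin(phi), b=1.0)
```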
**ELECTRIC FIELD OF THE HELIX WAVE**
In order to see what part of the field is due to space charge we must evaluate the corresponding helix fields. A value for this can be derived from the basic traveling wave tube equations assuming the helix fields to be sinusoidal and not seriously affected in impedance by the beam (small \( C \) again). By definition
\[
\frac{E^2}{2\beta^2P} \frac{I_0}{4V_0} = C^3
\]
and
\[
\frac{\eta'}{C} = \frac{P}{I_0 V_0 C}
\]
where \( \eta' \) is normalized power level, i.e., efficiency corresponding to the signal level \( E \) of interest. From this we deduce for the normalized circuit
field
\[
\frac{E}{\beta V_0 C^2} = 2\sqrt{2} \sqrt{\frac{\eta'}{C}}
\]
(20)
which integrates to give a normalized ac circuit voltage
\[
\frac{V}{2\sqrt{2} V_0 C^2} = \sqrt{\frac{\eta'}{C}}
\]
(21)
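Equations (20) and (21) reduce to a single square root. A two-line sketch (our code, with invented names):

```python
from math import sqrt

def circuit_field_and_voltage(eta_prime, C):
    """Normalized circuit field E/(beta*V0*C^2) from Eq. (20) and the
    normalized ac circuit voltage V/(2*sqrt(2)*V0*C^2) from Eq. (21)."""
    r = sqrt(eta_prime / C)
    return 2.0 * sqrt(2.0) * r, r
```

The normalized field is always $2\sqrt{2}$ times the normalized voltage, reflecting the quarter-period integration.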
**RELATIVE PHASE BETWEEN WAVE, VELOCITY AND CURRENT**
The velocity analyzer provides no convenient measure of relative phase between the helix wave and the beam modulation. Therefore we compute the relation of helix field and beam modulation for a small signal, and for large amplitudes measure the phase of each relative to that at small amplitudes.
Pierce gives the relationship \(^2\)
\[
v = \frac{-\eta \Gamma V}{u_0 (j \beta_e - \Gamma)}
\]
(22)
which using (9) and the fact that
\[
\beta_e C \delta = j \beta_e - \Gamma
\]
(23)
gives for the small signal beam modulation
\[
\frac{\Delta V}{2 V_0 C} = -j \frac{\sqrt{2}}{\delta} \sqrt{\frac{\eta}{C}} = \frac{\sqrt{2}}{|\delta|} \sqrt{\frac{\eta'}{C}} \;\angle\!\left( \tan^{-1} \frac{y_1}{x_1} - \frac{\pi}{2} \right)
\]
(24)
Similarly we have for the small signal current modulation
\[
i = I_0 \sqrt{2 \frac{\eta}{C}} \cdot \delta^2 = I_0 \sqrt{2 \frac{\eta}{C}} \, |\delta|^2 \;\angle\, 2 \tan^{-1} \frac{x_1}{y_1}
\]
(25)
The value of \( \delta (= x_1 + j y_1) \) is given in Fig. 18, drawn from data supplied by P. K. Tien, from Pierce \(^2\) and from Birdsall and Brewer. \(^{28}\) This figure was also used as a basis for determining the values of \( y_1 \) and \( b \) used in several of the curves.
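With $\delta = x_1 + jy_1$ read from Fig. 18, the complex arithmetic of Eq. (24) is conveniently done numerically. A sketch using Python's complex type (ours, not the paper's; the resulting phase convention should be checked against the sign conventions adopted for $\delta$):

```python
import cmath
from math import sqrt

def velocity_modulation(x1, y1, eta_prime, C):
    """Small-signal velocity modulation DeltaV/(2*V0*C) per Eq. (24):
    -j*sqrt(2)/delta * sqrt(eta'/C), with delta = x1 + j*y1 the
    increasing-wave root.  Returns (magnitude, phase in radians)."""
    delta = complex(x1, y1)
    phasor = -1j * sqrt(2.0) * sqrt(eta_prime / C) / delta
    return abs(phasor), cmath.phase(phasor)
```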
**MEASUREMENT OF POWER**
The output power and relative output phase were measured using a micro-oscilloscope.\textsuperscript{29} The subharmonic of the signal was used for a sweep voltage, and phase was measured from the shape of the observed Lissajous figures. The oscilloscope deflection was compared with the dc deflection from a battery standard, and checked on occasion with a bolometer power meter at the operating frequency.
**THE VELOCITY ANALYZER**
There are many ways in which one may separate velocities in an electron stream. Crossed electric and magnetic fields were used in this experiment because a simple control of sensitivity was important in order to study velocity differences ranging from 1 per cent up to as much as 100 per cent of the dc beam velocity.

**Fig. 18** — Increasing wave propagation factors used in interpreting the measurements. These are the maximum value of $x_1$ and the corresponding value of $b$ and $y_1$ for given values of $QC$.
The velocity analyzer is sketched in Fig. 5. It consists of an aperture which transmits only a few microamperes of the electron stream; a magnetic pole piece (not shown) terminating the focusing field; a pair of horizontal deflection plates; an electrostatic lens system; pole pieces and deflection plate to provide a region with crossed electric and magnetic fields; and finally a drift tube, a post deflection acceleration electrode and fluorescent screen. The whole assembly is raised 1,000 volts above the helix potential and the 0.001" aperture is very close to the end of the helix, so that the electrons are very quickly accelerated to a high voltage. By this means, the region of debunching outside of the helix field is kept below 1.4 radians transit angle and the velocity spread within the analyzer is reduced by a factor of four. Space charge within the analyzer is entirely negligible because of the small current transmitted.
In order to discriminate in phase before the electrons are scrambled
due to their spread in velocity, the horizontal sweeping plates are mounted just as close to the aperture as is deemed practical. The observed velocity spreads in the beam were such as to give less than 0.2 radians error in phase under the worst conditions.
The horizontal deflecting plates were driven synchronously with a sub-harmonic of the RF input to the helix, and the resulting deflection served to separate electrons according to phase in the final display.
Placing the focusing lens after the deflection plates results in a considerable reduction in deflection sensitivity. However, undesirable magnification of the pinhole aperture dictated that the lens could not be close to it, and it was important to initiate the deflection as early as possible. The lens consists of three discs, the center one being biased to about 800 volts above the mean voltage of the rest of the system.
Immediately after the lens there are two iron pole pieces and two insulated electric deflection plates which extend parallel to the beam for $1\frac{1}{4}$ inches. The pole pieces provide a dc magnetic field up to about 20 gausses induced by small coils outside of the envelope, and the electric deflection plates are biased with up to a corresponding 50 volts dc polarized to oppose the magnetic deflection of the beam. The electric and magnetic fields are adjusted so that the normal unmodulated electron beam traverses the region with no deflection and strikes the center of the fluorescent screen. In the crossed field region
$$\frac{E}{B} = \sqrt{2 \frac{e}{m} V_0}. \quad (26)$$
Electrons having greater or lesser velocity are deflected parallel to the electric field, and give a corresponding deflection from the center of the fluorescent screen.
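The balance condition (26) fixes the ratio of the two deflecting fields. A short sketch (our code, invented numbers) gives the electric field required to null the magnetic deflection for a given beam voltage:

```python
from math import sqrt

ETA = 1.758820e11  # electron charge-to-mass ratio e/m, C/kg

def balance_field(V0, B):
    """Electric field (V/m) satisfying Eq. (26), E/B = sqrt(2*(e/m)*V0),
    so that electrons at dc voltage V0 cross the region undeflected."""
    return B * sqrt(2.0 * ETA * V0)

# e.g. at 2,500 volts and 20 gauss (2 mT) -- an invented operating point:
# E = balance_field(2500.0, 2.0e-3)
```

Electrons faster or slower than the balance velocity see a net force parallel to the electric field, which is the basis of the velocity display.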
To get a display in which the various elements are not hopelessly entangled, it was necessary to sweep the trace in an initial ellipse at a subharmonic rate. The sweep voltage was applied to the horizontal deflection plates, with just a little applied to the vertical plates through a phase shifter. The relative phase of any part of the trace was measured from the ellipse, and the velocity sensitivity was calibrated by observing the ellipse deflection as a function of the dc beam potential, as shown in Fig. 6(a). There is a small error due to the sensitivity of deflection to velocity, and due to distortion of the ellipse by fringing fields.
In order to measure velocity and current density in the displayed pattern, the fluorescent screen was photographed, and the negative projected in a microcomparator. It was assumed that with the small currents used, the light intensity was proportional to current, and the film linearity was calibrated by making exposures of several different durations. The trace density was measured with a densitometer, sweeping
over the trace width to account for variations in focus for different parts of the pattern. Admittedly, the process is not very accurate, but it does give a rough measure of current density and helps considerably in interpreting the observed velocity patterns.
**NOMENCLATURE**
| Symbol | Description |
|--------|-------------|
| \(a\) | Circuit radius |
| \(b\) | Parameter relating the electron velocity to that of the cold circuit wave, \(b = (u_0 - v_1)/u_0C = (V_0 - V_w)/2V_0C\) |
| \(B\) | Magnetic field |
| \(\beta\) | the axial phase constant \(\omega/v_1\) |
| \(C\) | The gain parameter, \(C^3 = (E^2/2\beta^2P)(I_0/4V_0)\) |
| \(\gamma\) | Radial phase constant \(\cong \beta = \omega/v_1\) |
| \(\delta_1\) | Complex propagation constant for the increasing wave |
| \(E\) | Electric field |
| \(E_\Phi\) | Electric field at phase \(\Phi\) |
| \(e/m\) | Charge to mass ratio of the electron |
| \(I_0\) | Beam current in amperes |
| \(I_0(\ )\) | Modified Bessel function |
| \(k_T\) | Tien’s constant \(k_T = 2/\gamma r_0\) |
| \(ka\) | Circuit circumference measured in (air) wavelengths |
| \(N\) | Number of wavelengths |
| \(\eta\) | Maximum efficiency |
| \(\eta'\) | Efficiency at an intermediate power level |
| \(P\) | RF power obtainable from the circuit |
| \(QC\) | Space charge parameter |
| \(q\) | Charge per unit length in the electron beam |
| \(r\) | Radial distance from the axis |
| \(r_0\) | Beam radius |
| \(t\) | Time variable |
| \(u\) | Electron velocity |
| \(u_0\) | DC beam velocity |
| \(v\) | AC velocity of the electron beam |
| \(v_1\) | Wave velocity |
| \(V_0\) | DC beam voltage |
| \(V_w\) | Voltage corresponding to the wave velocity |
| \(\Delta V\) | Voltage difference corresponding to the difference in velocity of an electron and the dc beam velocity |
| \(\delta V\) | Difference between synchronous voltage and that giving the Kompfner dip |
| \(\Phi\) | Relative phase |
| \(z\) | Distance measured along the beam |
REFERENCES
1. Pierce, J. R., Theory of the Beam Type Traveling Wave Tube, Proc. I.R.E., 35, pp. 111–123, Feb., 1947.
2. Pierce, J. R., Traveling Wave Tubes, D. VanNostrand Co., Chapter XII.
3. Slater, J. C., Microwave Electronics, D. VanNostrand Co., 1950, pp. 298.
4. Brillouin, L., The Traveling Wave Tube (Discussion of Waves of Large Amplitudes), J. Appl. Phys., 20, p. 1197, Dec., 1949.
5. Nordsieck, A., Theory of the Large Signal Behavior of Traveling Wave Amplifier, Proc. I.R.E., 41, pp. 630–647, May, 1953.
6. Poulter, H. C., Large Signal Theory of the Traveling Wave Tube, Tech. Report No. 73 Electronics Research Laboratory, Stanford University, Stanford, California, Jan., 1954.
7. Tien, P. K., Walker, L. R., and Wolontis, V. M., A Large Signal Theory of Traveling Wave Amplifiers, Proc. I.R.E., 43, pp. 260–277, Mar. 1955.
8. Rowe, J. E., A Large Signal Analysis of the Traveling Wave Amplifier, Technical Report No. 19, Electron Tube Laboratory, University of Michigan.
9. Tien, P. K., A Large Signal Theory of Traveling Wave Amplifiers Including the Effects of Space Charge and Finite C, B.S.T.J., 34, Mar., 1956.
10. Brangaccio, D. J., and Cutler, C. C., Factors Affecting Traveling Wave Tube Power Capacity, Trans. I.R.E. Professional Group of Electron Devices, PGED 3, June, 1953.
11. Grumly, C. B., Quarterly Status Progress Report No. 26, Electronics Research Laboratory, Stanford University, Stanford, California, pp. 10–12.
12. Doehler, O., et Kleen, W., Phénomènes non Linéaires dans les Tubes à Propagation d'Onde, Annales de Radioélectricité (Paris), 3, pp. 124–143, 1948.
13. Doehler, O., et Kleen, W., Sur le Rendement du Tube à Propagation d'Onde, Annales de Radioélectricité, Tome IV, No. 17, Juillet, 1949, pp. 216–221.
14. Berteriotière, R., et Convert, G., Sur Certains Effets de la Charge D’espace dans les Tubes a Propagation D’onde, Annales de Radioélectricité, Tome V, No. 21, Juillet, 1950.
15. Klein, W., und Friz, W., Beitrag zum Verhalten von Wanderfeldröhren bei hohen Eingangspegeln, F.T.Z., pp. 349–357, July, 1954.
16. Warnecke, R. R., L'évolution des Principes des Tubes Électroniques Modernes pour Micro-ondes, Convegno di Elettronica e Televisione, Milano, pp. 12–17, Aprile, 1954.
17. Warnecke, R. R., Sur Quelques Résultats Récemment Obtenus dans le Domaine des Tubes Electroniques pour Hyperfréquences, Annales de Radioélectricité, Tome IX, No. 36, Avril, 1954.
18. Warnecke, R., Guenard, P., and Doehler, O., Phénomènes fondamentaux dans les Tubes à onde Progressive, Onde Electrique, France, 34, No. 325, p. 323–338, 1954.
19. Brück, L., und Lauer, R., Die Telefunken Wanderfeldröhre TL6, Die Telefunken-Röhre Heft 32, pp. 1–21, Februar, 1955.
20. Brück, L., Vergleich der Verschiedenen Formeln für den Wirkungsgrad einer Wanderfeldröhre, Die Telefunken-Röhre Heft 32, pp. 23–37, Februar, 1955.
21. Cutler, C. C., Experimental Determination of Helical Wave Properties, Proc. I.R.E., 36, pp. 230–233, Feb., 1948.
22. Kompfner, R., On the Operation of the Traveling Wave Tube at Low Level, Journal British I.R.E., 10, p. 283, Aug.–Sept., 1950.
23. Tien, P. K., Traveling-Wave Tube Helix Impedance, Proc. I.R.E., 41, pp. 1617–1623, Nov., 1953.
24. Heffner, H., Analysis of the Backward-Wave Traveling-Wave Tube, Proc. I.R.E., 42, pp. 930–937, June, 1954.
25. Johnson, H. R., Kompfner Dip Conditions, Proc. I.R.E., 43, p. 874, July, 1955.
26. Quate, C. F., Power Series Solution and Measurement of Effective QC in Traveling-Wave Tubes, Oral presentation at Conference on Electron Tube Research, University of Maine, June, 1954.
27. Fletcher, R. C., Helix Parameters in Traveling Wave Tube Theory, Proc. I.R.E., 38, pp. 413–417, Apr., 1950.
28. Birdsall, C. K., and Brewer, G. R., Traveling Wave Tube Characteristics for Finite Values of C, Trans. I.R.E., PGED-1, pp. 1–11, Aug., 1954.
29. Pierce, J. R., Traveling Wave Oscilloscope, Electronics, 22, Nov., 1949.
Church of St. Gertrude
Forest City, Minnesota
150 Years
1857-2007
A Reproduction
of the
Centennial Celebration Booklet
1857-1957
- and -
The Next Fifty Years
1957-2007
Centennial Celebration
1857 - 1957
St. Gertrude's Church
Forest City, Minnesota
A History
of
St. Gertrude's Parish
in
FOREST CITY, MINNESOTA
1857-1957
TO THE REVEREND PASTOR AND THE FAITHFUL
OF SAINT GERTRUDE'S PARISH AT FOREST CITY:
The Archbishop has already joined with you in a day of thanksgiving to commemorate the first offering, now one hundred years ago, of Holy Mass in the present parish of Saint Gertrude.
In commemoration of this event, some historical records will be printed and these will also serve to keep alive, for the Catholics of the future, the sense of importance which so great an event always spells in the life of the Catholic Church.
On the occasion of the publication of this booklet, the Archbishop renews his congratulations on the centenary that has already been attained and begs God's best blessings on the Parish at Forest City as it begins a second century near the altar of God.
With every blessing, I remain,
Devotedly in Christ,
William O. Brady
Archbishop of Saint Paul
In Loving Memory of
ARCHBISHOP JOHN GREGORY MURRAY
Late Archbishop of Saint Paul
under whom the Church of Saint Gertrude became an independent parish in 1945
Born February 26, 1877
Consecrated Bishop April 28, 1920
Archbishop of Saint Paul, October 29, 1931
Died October 11, 1956
May he rest in peace.
St. Gertrude's Church, Rectory and Cemetery
As you turn off U. S. 12 at the north end of Litchfield onto State Highway 24, there is no sign to inform you in this year 1957 that the village of Forest City lies just six miles ahead. Yet its place is secure in Minnesota history, for one hundred years ago it was the first white settlement in Meeker County, the site of the offering of the first Mass and the first Catholic church in that territory. Its name soon became familiar to Irish immigrants in cities of the Midwest and the distant eastern seaboard. In fact it is quite reasonable to suppose that within a few years prospective emigrants in far off Ireland were gathering around their hearths in County Tyrone and Kerry to read and discuss the latest letter from one of their relatives in the new frontier settlement. Here was located the first county seat, the first Post Office and Federal Land Office, the first newspaper published in Meeker County. Everything pointed to a prosperous and stable future for the pioneer village on the banks of the Crow River. But the promise was never fulfilled as frequently happens with men's dreams and plans.
The first Catholic settlers who arrived in the spring of 1856 were John Flynn and John Whalen, and they were soon followed by others, among them Thomas O'Dougherty with his family and brother John in the summer of the same year, and in the following year Patrick Finnegan, Andrew Sullivan, Bryan McNulty, Edward Campbell, John and Michael Murray and Patrick Casey. The early predominance of Gaelic settlers was the natural aftermath of the great wave of Irish immigration which had its peak year in 1851 and was still rolling across the western prairies.
Here then was the nucleus of a Catholic community and in a short time its existence became known to the Benedictine monks who had just established themselves at what later became St. John's Abbey, Collegeville. One of their number, Father Alexius Roetzer, traveled the forty miles to offer the first Mass in Meeker County sometime in 1857 at the log cabin of John Flynn. Undoubtedly he must have spent several days instructing the young and old and administering the sacraments before continuing on his far-flung mission circuit which then covered at least four counties. Under such circumstances it is not surprising that in the beginning a priest could come to the settlement only once a year. Father Alexius continued his visitations but by 1860 the names of two other Benedictines, Fr. Whitcomb and Fr. Scherer, are mentioned as attending the pioneer congregation. Then from 1862 to 1866 Father
Meinulph Steukemper of St. John's was in charge of the growing Catholic population of Meeker County, still saying Mass in private homes, administering the sacraments and instructing the children wherever a group could be gathered together.
But the faith and zeal of these pioneers was not satisfied with such makeshift arrangements and in 1866 they bravely started making plans to build a parish church. By February 13, 1867, the site had been secured as a donation from the city, the deed filed, and actual construction soon begun. With the generous help of many non-Catholic neighbors the men of the parish began hauling lumber from Gillman's Mill near St. Cloud and the work went ahead steadily throughout the summer and fall of 1867. Sometime during this period, Father Meinulph was succeeded by another Benedictine, Father Augustine Burns, who directed the final stages of the project. To him fell the honor of offering the first Mass in the new church (dedicated to St. Gertrude) on Christmas Day, 1867, and on that same day he officiated at the first wedding in the church, uniting John Fitzgerald and Louisa Campbell in the holy bonds of matrimony. During the previous October the first mission in the new parish was preached by a Holy Cross Father, Rev. Paul Gillen, even before the building was completed. At this time the congregation numbered about fifty families who under the leadership of Father Burns and the trustees, M. J. Flynn, John Dougherty, Sr., Daniel Dougherty and Patrick Casey had contributed the sum of three thousand dollars to erect this first Catholic church in Meeker County.
Almost inevitably in such a pioneer parish, because of its missionary status and the frequent changing of priests to other areas, the records of the first baptisms, marriages and funerals are scattered, confused or lost, never now to be recovered. One of the earliest known instances of the administration of the last sacraments was at the death of the pioneer John Flynn in 1859, who was buried on the site of the present parochial cemetery. The land did not become official church property until 1884 when the owner, John Dougherty, presented the deed to the trustees of the parish which had already been incorporated on March 26, 1878.
Prior to this, however, the village of Forest City and St. Gertrude's Church had successfully weathered the Panic of 1857, the Indian outbreak of 1862 and the prolonged absence of some of its ablest men in military service during the Civil War. Thereafter progress and growth continued steadily with the constant arrival of new families seeking homesteads in the area. Indicative of this growth was the memorable visit on September 16, 1868 of the Bishop of St. Paul, Most Reverend Thomas Grace when he conferred the sacrament of Confirmation for the first time in Meeker County. Later that same year on Dec. 6, the church was formally dedicated to St. Gertrude, the famous Benedictine abbess of the seventh century. In the spring of 1869, a class of twenty-five children received first communion from Father Burns, although the parish still numbered only about eighty families. Yet Forest City and St. Gertrude's seemed destined to continue as the thriving center of all the surrounding territory and predictions as to its future greatness could not seem extravagant even to the more conservative citizens.
All these grandiose dreams were shattered in the summer of 1869 by the decision of the officials of the St. Paul & Pacific Railroad to plot the route of their trains several miles to the south of Forest City, and to erect there a station which later developed into the town of Litchfield. While this soon became the commercial and political center of the county, and several families from St. Gertrude's built homes there, future pastors with so many distant missions under their care saw the advantage of living close to such modern means of transportation. So in twelve short years this pioneer church had fulfilled its destiny as the cradle of Catholicism on a wild frontier and had achieved its greatest expansion, at least externally as men measure such things. The full effect of these events was not apparent for several years, and until the church at Darwin was built in 1878, and that of St. Philip in 1882, a large congregation of the faithful attended Mass every Sunday in St. Gertrude's. During this time, Father Burns, the last Benedictine pastor, was succeeded by Father Michael Hurley, the first diocesan priest, in 1871, who continued as pastor until relieved by Father Cahill in 1873. The latter remained in charge until 1876 when Father John McDermott was appointed pastor and it was under him that the parish was legally incorporated. When Bishop John Ireland visited the parish in October of 1876, the number of families had increased to two hundred and there were over three hundred candidates for the sacrament of Confirmation. Actually a large proportion of these families were soon to be assigned to either of the new parishes then being formed in Litchfield and Darwin. At an even earlier date two daughter parishes of Forest
City had been founded, one at Greenleaf with a church built in 1870, and the other at Manannah with a church under construction in 1876. After the establishment of these parishes, especially the one at Litchfield, the Catholics of Forest City had to be content to become a mission, first of St. Philip’s and later of St. John’s, Darwin. The little church of St. Gertrude had been built, well furnished and paid for, a cemetery acquired and laid out, and now the days of its preeminence in the county were coming to an end. Its history from now on will be the uneventful chronicle of Sunday masses being regularly offered week after week, of sacraments being received, infants baptized, first communion and confirmation classes, young couples being married, the sick being visited and anointed, the dead being laid to rest in the shadow of the church. Each year there was a fair, or fund-raising festival and other social events in accordance with the custom of the times. The priests who had charge of the parish after Father McDermott left in 1882 were Fr. Patrick Kenny of Litchfield until the fall of 1883, when Father McDermott returned to become pastor of Darwin and Forest City until his death in 1887. During the next six years the pastoral obligations were fulfilled at various times by Father Joseph Tracy, Father Hugh McDevitt and Father Edward Lee until the
Interior of Church in 1957
Fr. John McDermott
Fr. Joseph Tracy
Fr. P. J. McCabe
Fr. Edward Lee
arrival of Father P. J. McCabe on September 21, 1893. He remained in charge for nine years and in 1901, towards the end of his term, had the foresight to direct the moving of the church building to its present, more imposing site, where it was enlarged and remodeled.
As the church remains today substantially the same as it was at the turn of the century, it is fitting to pause here and to list for the information of the present generation the names of those who donated much of the sacred furnishings we gaze on each Sunday. The bell in the tower is the gift of Mr. and Mrs. Edmund Burke in memory of their son John. St. Gertrude's window in the sanctuary was donated by Mr. and Mrs. Denis McCarthy in memory of their son John; St. Joseph's window by Mr. and Mrs. John Burns in memory of their two sons, Thomas and John; Mrs. Michael Murphy donated the double window, St. Luke and St. John in memory of her husband; Frank and Mary McIntyre gave the double window, St. Matthew and St. Mark as a memorial. The Circle window was donated by the Altar Society.
Windows in the main part of the church were donated by Chris Baden in memory of his wife Mary; James Fitzgerald in memory of his parents, Patrick and Joanna Fitzgerald; Mary Whalen in memory of her husband John Whalen; William and T. J. Murphy in memory of their brother Joseph; Matthew, Patrick and John Flynn in memory of their parents, Patrick and Ann Flynn; Susan Burns in memory of her mother, Ann Burns. Mr. and Mrs. P. J. McIntyre also donated a memorial window.
Mrs. Mary Pike donated the main altar in memory of her mother, Mrs. Burke. The side altars were donated by the Altar Society. Mrs. Michael Murphy and Mrs. Denis McCarthy, Sr., donated the sanctuary lamp, and the vestments were the gift of Catherine Murphy.
William O'Keefe, Sr., by the terms of his will, bequeathed four hundred dollars to the church to be used by Father Dobbins to provide furnishings and decorations for the altar. In addition the unceasing generosity of the whole parish was demonstrated time and time again through the years by their loyal support of their own church and their liberal contributions to other causes at the request of Diocesan authorities.
As far back as 1868, for example, a grand total of eighty dollars had been sent out from St. Gertrude's Parish for the St. Vincent de Paul Society, for the Seminary Fund established by Bishop Grace and for the Holy Father, Pope Pius IX. Similar instances of generous giving could be recorded of every succeeding generation of Catholics in Forest City right down to this centennial year.
Father William Dobbin who succeeded Father McCabe as pastor in September, 1902, served the congregations of Forest City and Darwin with great zeal for fourteen years until August 15, 1916 when Father Lawrence Cosgrove was placed in charge of both parishes. He it was who arranged for the celebration of the fiftieth anniversary in 1917 of the building of St. Gertrude's Church and for the same occasion published the first history of the parish in a booklet which also contained an account of the parish in Darwin. While on a visit to his native Ireland, he died there on April 18, 1920.
He in turn was succeeded by Father M. M. Ryan who despite a great deal of ill health continued as pastor until his death on July 17, 1936. Thereafter the parish was vacant for a short while.
until Father Donovan was appointed in October, 1936 as pastor of Darwin with Forest City as a mission. Nine years later, in the summer of 1945, when he was succeeded by Father Jerome Campbell, the late Archbishop Murray gave the faithful of Forest City their first resident pastor in many years when he appointed Father Michael Lawler to the post.
The new pastor's first task was to draw up plans and start the construction of a rectory. Before it was finished, however, Father Leo Howley was assigned as pastor in December, 1945. Under his direction the house was completed in 1946, and in the following year he saw to the completion of another valuable addition to the parish facilities in the excavation of a basement under the church and the furnishing of it as a center for social activities. Men of the parish donated much of the labor and equipment for this project and thus kept the actual financial cost down to a modest figure.
When Father Howley was transferred to St. Dominic's Church in Northfield in April, 1948, Father John McGuire came to stay for a few months and he was succeeded by Father Francis Fairley in September, 1948. He remained in charge of St. Gertrude's until replaced in March, 1952 by Father Francis Welch who was succeeded in December of that same year by Father Thomas McNamara. Despite poor health that required extended hospital care several times, Father McNamara continued as pastor until his rather sudden, though not entirely unexpected death on October 15, 1956. He was stricken while preaching at mass on Sunday morning, and without being able to even finish the mass had to be helped to the sacristy and then carried over to the house. Later in the day he was taken to the Meeker County Hospital in Litchfield where after receiving the last sacraments, he died on Monday morning. Archbishop Brady offered the Pontifical Requiem Mass for him here in St. Gertrude's Church the day after the funeral of Archbishop Murray, and he was buried as he had requested, in the parish cemetery behind the church. The bereaved parishioners soon established a fund for the purchase of a beautiful stone cross to mark the grave of Father McNamara and to perpetuate their grateful affection for him.
Corpus Christi procession in 1947 with Fr. Leo Howley Carrying the Blessed Sacrament.
After his death Forest City was made a mission of St. Philip's in Litchfield and was faithfully served by the pastor, Father Clarence Foley and his assistant, Father William Bullock. They began making plans to celebrate in the following August, the one hundredth anniversary of the first Mass offered in the parish in 1857, but in June St. Gertrude's again became a mission of Darwin under the care of Father John Fleming, the pastor there. A month later Forest City once more had a resident priest when Father Vincent Hope was appointed administrator on July 18, 1957.
To commemorate the completion of one hundred years as a Catholic community and its honorable primacy as the pioneer parish and mission center in Meeker County, Archbishop Brady came on Sunday, August 18, 1957 to offer a Pontifical low mass and to preach the jubilee sermon in the ninety-year-old church which was filled to overflowing. After mass he conferred the Sacrament of Confirmation on a class of forty boys and girls and two adults. Present for the ceremonies and guests afterwards at a dinner served in the rectory by the Altar and Rosary Society were the following priests: Fathers Foley, Bullock, Fleming, John Ward of Manannah, Frederick Barthelme of Maple Lake, Bernard Schreiner of Eden Valley, Leo Howley of Hopkins, John Gleason of Regal and Jerome Dougherty O. S. B. of St. John's Abbey, in addition to the Archbishop and Fr. Hope. A general celebration for all present and former members of the parish and their friends was scheduled to coincide with the annual Fall Festival and Dinner on Sunday, September 29th.
This brings to a close an altogether too brief chronicle of St. Gertrude's Parish in Forest City. To people obsessed with the modern craze for statistics and the mania for gauging greatness by bigness it may not seem a very impressive story. But as Archbishop Brady said in his sermon at the centenary, the destiny of the parish in God's providence was not to expand materially but spiritually and having once been the Mother Church with numerous offspring on a large frontier, to continue quietly as a dwelling place of God among a smaller group of His children, who chose the more wholesome life on the land.
Obviously there is nothing very spectacular or what the Journalists call newsworthy about the place or its inhabitants and their activities. As the motorist speeds along northeastward on State Highway 24, the road curls and winds through a pleasant rolling countryside, until at the crest of a little hill the tall slender spire of Saint Gertrude's Church can be seen cleaving the horizon on the eastern edge of Forest City. For a moment the gleaming white walls of the church and rectory with the well kept graveyard in the foreground, form an interesting picture for the hurrying traveler and then they are left behind. Such was the role, too.
Fr. Albert (Joseph) Hannan, OSB
Grave of Fr. McNamara in St. Gertrude's Cemetery.
of this locality in early Minnesota history. But what a rich pageant of pioneer Catholic life once unfolded here and left its indelible imprint on the future course of events in all the surrounding territory. The hustle and rawness of frontier life were refined by the gentle influence of Christ's presence in the mass and the sacraments; the crude monotony relieved by the annual cycle of the church's feasts and festivals, its picnics and bazaars, as well as by the regular visits of the intrepid Benedictines from St. John's Abbey. All these things belong to the past now, but they also go to make up much of the living tradition of this little rural parish which continues into its second century the God given task of bringing Christ to its members in order that they may become more Christlike.
Despite its status as a mission and the lack of a full time resident pastor most of the time, the deep faith and piety of the parishioners has been clearly expressed in many different ways. Not the least in importance is the fact that two of its men have been ordained to the priesthood and two of its daughters have entered the religious life as sisters. The priests are Father Chrysostom Schreiner of the Order of St. Benedict, ordained at St. John's Abbey in Collegeville, Minnesota on June 29, 1884, who died on January 3, 1928; and Father Joseph Hannan, also a Benedictine who was ordained at the St. Paul Seminary in June, 1913 and is now stationed at Assumption Abbey in Richardton, North Dakota. The nuns are: Sister M. Lucida, Miss Margaret Jackman, who entered the Congregation of St. Joseph in 1913 and died in 1935; Sister M. Bruce, Miss Marie Flynn, daughter of Mr. and Mrs. Pat Flynn, who entered the Order of St. Benedict in 1952 and is now stationed at a Nursing Home in Staples, Minnesota.
The following men have sacrificed their time and efforts for the welfare of the parish since it was first organized, in serving as trustees: John Dougherty, Sr., Patrick Casey, Sr., M. J. Flynn, Dennis McCarthy, Chris Baden, John Sullivan, Michael Murphy.
Philip Turck, T. J. Murphy, Patrick Flynn, Thomas Farley, E. J. Boullion, Matthew Flynn, James McCarthy, Edward Boullion, and the present incumbents, Ray Koenig and Matt Flynn. The officers of the Parish Cemetery Association are: Ray Arens, James McCusker and Fabian McCarthy.
According to the most recent figures available there are at present fifty-three children of pre-school age, seventy-two in grade school, twenty-three in high school and ten young adults out of high school. Unfortunately there is no complete list at present of the young men and women of the parish who are serving in the armed forces of our country.
Since the Men's Club disbanded a few years ago, there is no formal organization of the men of the parish at the present time. Nevertheless, they have in the past always been quick to volunteer their services in performing many tasks for the general welfare of the parish and continue to do so now. With their cheerful and generous cooperation they have considerably lightened the load of responsibility of the priests in the maintenance of the parish buildings, as well as in many other worthwhile projects.
The women's Altar & Rosary Society has been active for many years in the parish and even a partial list of their praiseworthy accomplishments would take up far more space than we have at our disposal. Besides putting on the Annual Parish Dinner and other Social Activities, which raise a considerable sum every year to help keep parish finances out of the red, they assume responsibility for the care and furnishing of the church, altar, sanctuary and sacristy as well as of the parish house. All the members may well be proud of what their Society has accomplished both in the past as well as in the present.
Our Parish Choir has a long history and the present group maintains the high standards set by those of past generations. Under the capable guidance of Mrs. Raymond Koenig, organist and director, they contribute that indispensable element of sacred music to the worship of God, at the Sunday High Mass, at the great festivals like Christmas, Holy Week and Easter, as well as at Lenten devotions, Forty Hours devotion, Wedding masses and Funerals. The members of the choir are: Mrs. Gerald Kiely, Mrs. Wilfred Rick, Misses Patricia Shaw, Joan Rosenow, Elaine Nothnagel, Ellen McCann, Darlene Rohrbeck, Janet Crusoe, Patricia Valiant and Sylvia Theis.
In concluding this brief sketch of the history of St. Gertrude's Parish in Forest City, the compiler is well aware that mistakes and omissions will come to light very quickly, perhaps before the book is off the presses. All that can be said is that they were not intentional, least of all any personal slight to anyone living or dead, and the only excuse is the lack of more complete records and the shortness of the time allotted for the task. Although no one else can be blamed for any errors discovered in our story, credit for inaugurating the work and collecting most of the material used must be given to many others, including Father Clarence Foley of St. Philip's Church, Litchfield; Mrs. Joe Flynn and her daughter, Dorothy, now Mrs. Stanley Tacheny of Litchfield. In addition to the personal recollections of some of the older parishioners, the sources of much of the information set down in the previous pages are: Father Lawrence Cosgrove's Golden Jubilee Parish Book, 1917, Lamson's Meeker County History, 1937, Father Jerome Quinn's Dissertation on the History of St. Philip's Parish, Litchfield, Minnesota, 1951, and The History of the Church of St. John of Darwin, 1953.
September 21, 1957
THE END
MEMBERS OF ST. GERTRUDE'S PARISH IN 1957
Mr. & Mrs. Nick Arens
Mr. & Mrs. Ray Arens
Mr. & Mrs. Donald Asfeld
Mr. & Mrs. Tom Baden
Mr. & Mrs. Milton Baker
Mr. & Mrs. Jacob Becker
Mr. & Mrs. Nick Becker
Mr. & Mrs. Ronald Becker
Mr. & Mrs. John Bollin
Mr. & Mrs. Ed Boullion
Mr. & Mrs. John Boyer
Mrs. Ambrose Brutger
Mr. & Mrs. Dan Christle
Mr. & Mrs. Harold Crusoe
Mr. & Mrs. John Crusoe
Mr. & Mrs. Andrew Ertle
Mr. & Mrs. Ed Farley
Mr. Richard Farley
Mr. & Mrs. Frank Fisher
Mr. & Mrs. Joe Flynn
Mr. Donald Flynn
Mr. Matt Flynn
Mrs. Pat Flynn
Mr. & Mrs. J. W. Frederick
Mr. & Mrs. Hilary Garding
Mrs. Mary Harbinson
Mr. Tom Harbinson
Mr. & Mrs. George Hartneck
Mrs. Ray Hennessey
Mrs. August Holmgren
Mr. & Mrs. G. A. Kiilty
Mr. & Mrs. Tom Kiilty
Mrs. Wilfred Knutson
Mr. & Mrs. Ray Koenig
Mr. & Mrs. Jerome Loch
Mr. & Mrs. Tom McCann
Mr. James McCann
Mr. & Mrs. Fabian McCarthy
Mr. & Mrs. Jim McCusker
Mr. Alvin McCusker
Mr. & Mrs. Edmund Meyers
Mrs. Matt Moser
Mr. Joseph Moser
Miss Dolores Moser
Mr. & Mrs. Calvin Moyer
Mr. & Mrs. Andrew Nothnagel
Mr. Lawrence Nothnagel
Mr. & Mrs. Lawrence Nohner
Miss Shirley Nothnagel
Miss Ann O'Keefe
Miss Kate O'Keefe
Mr. Tom O'Keefe
Mr. & Mrs. William O'Keefe
Mr. Ray O'Keefe
Mr. Francis Pennertz
Mrs. Howard Pennertz
Mr. & Mrs. Peter Pennertz
Mr. & Mrs. Sylvester Quast
Mr. & Mrs. John Ranthum
Mr. & Mrs. Walter Ranthum
Mr. & Mrs. Wilfred Rick
Mr. & Mrs. Ed Rohrbeck
Mr. & Mrs. Walter Rohrbeck
Mr. & Mrs. Rudy Rosenow
Miss Frances Rosenow
Mrs. Kenneth Scholtes
Mrs. Mary Scholtes
Mr. Leo Scholtes
Mr. & Mrs. Henry Schoolmeesters
Mr. & Mrs. Leander Schoolmeesters
Mr. & Mrs. Norbert Schoolmeesters
Mr. & Mrs. Clarence Schreiner
Mrs. Sophie Schreiner
Mr. & Mrs. Milton Shoutz
Mrs. Lloyd Shoutz
Miss Loretta Schulte
Mrs. Anna Shaw
Miss Patricia Shaw
Mr. Vernon Watkins
Mrs. Anna Weiss
Mr. George Weber
Miss Theresa Weber
Mr. & Mrs. Al Westrup
Mr. & Mrs. Clem Wimmer
Mr. & Mrs. Louis Wimmer
Mr. Lawrence Wimmer
Mr. & Mrs. John Wimmer
Miss Rose Wimmer
Mr. & Mrs. Ernest Walters
Mr. Joe Young
Mr. & Mrs. Leo Young
The Next Fifty Years
1957-2007
Written by Gloriette Wimmer
Church of St. Gertrude
Mission Statement
We, St. Gertrude’s Catholic Community, are proud of our history of reaching out and caring for the people of rural Forest City for 150 years.
We seek to form ourselves in the faith, to celebrate God’s presence in our lives, to reach out to those in need, to build God’s Kingdom of peace and justice; and to be the “roots” people look to when death calls upon their family.
We pray that we, in union with our neighboring churches, may honor our heritage and continue to grow and flourish as the concrete, real, and recognizable presence of Christ in our corner of the world.
BISHOP JOHN C. NIENSTADT
Bishop of New Ulm
Born March 18, 1947
Consecrated Bishop June 12, 2001
March 2007 - Pope Benedict XVI appointed Bishop Nienstadt as the coadjutor Archbishop of the Archdiocese of St. Paul and Minneapolis and he will minister as apostolic administrator of the New Ulm Diocese until a new bishop is appointed.
The special term “coadjutor” indicates that upon the retirement of Archbishop Harry Flynn, Archbishop Nienstadt will automatically become the Archbishop of the Archdiocese of St. Paul and Minneapolis.
In loving memory of
BISHOP ALPHONSE J. SCHLADWEILER
Born July 18, 1902
Consecrated Bishop January 29, 1958
Died April 3, 1996
MAY HE REST IN PEACE
The Church of St. Gertrude has continued to be a thriving, faith-filled center in the community for 150 years and became part of the New Ulm Diocese November 18, 1957, when Pope Pius XII established the new diocese which consisted of the western part of the Archdiocese of Minneapolis and St. Paul. Monsignor Alphonse Schladweiler was appointed the first bishop.
Father Vincent Hope, having served the parish for almost four years, June 1957-February, 1961, was succeeded by Father Germain Rademacher until September 1962.
Father Rademacher directed the redecorating of the church in 1962. The upper section of the altar was removed, the circular window in the sanctuary was closed up; the wall behind and above the altar was beautifully decorated by DeNardo Decorating Studio of St. Paul.
A large cross, donated by James and Harriet McCusker and Mrs. Raymond McMullen in memory of Mary Kielty, was placed on the panel behind the altar.
Men and boys of the parish shortened the pews and kneelers to provide an aisle on each side of the church.
Each envelope holder was asked to contribute $30.00 for the redecorating project.
Father Edward Clemens and Father John Brunner of St. Anthony Church in Watkins were pastors of St. Gertrude’s September 1962 to September 1964, when Father Frederick Fink was appointed pastor and was also in charge of St. Columban Church, rural Greenleaf, until it closed its doors in 1971.
A parish renewal program had begun April 1963 in the diocese, with diocesan-wide distribution of Liturgy Constitution to all the priests.
September 1964 instruction of the Sacred Congregation Rites stated in Article 91: “It is proper that the main altar be constructed separately from the wall so that one may go around it with ease so that celebration of Mass may take place facing the people.”
The centuries-long custom of saying Mass entirely in Latin was abrogated. V-Day: the first Sunday of Advent, November 29, 1964, was the day in the United States that the vernacular was introduced into the Eucharistic Sacrifice. (Vernacular Replaces Latin)
Laymen began an active role in Mass as commentators and lectors. The first commentators and lectors at St. Gertrude’s in 1964 were James Shea, Ed Meyer, Leander Schoolmeesters, Wilfred Rick and Patrick Finnegan.
The U.S. Roman Catholic Bishops did away with the rule of not eating meat on Friday, November 18, 1966.
A member of St. Gertrude’s, Sister Kathryn Mary Schoolmeesters, daughter of Leander and Ann (Nistler) Schoolmeesters, made her religious profession as a School Sister of Notre Dame on July 17, 1967, at Provincial House Chapel on Good Counsel Hill, Mankato, MN. She took her final vows on August 4, 1973.
Church of St. Gertrude elected their first parish pastoral council in 1967.
Under the direction of Father Fink, the church interior underwent extensive renovation in 1968. The remaining part of the old altar was removed. An oak wall was built behind the altar, the two stained glass windows in the sanctuary were closed up, communion railing removed, pews refinished, confessional, sacristy and choir loft were relocated.
Father Fink served the parish for seven years, 1964-1971, and was succeeded by Father Harry Tasto.
Father Tasto, an experienced carpenter, exposed the beautiful circular window in the sanctuary, which had been closed up for fifteen years. Father Tasto was the last resident pastor at St. Gertrude’s, having served the parish for nine years, 1971-1980. He was succeeded by Father Robert Merth. Father Merth was appointed pastor at St. Gertrude’s and Church of Our Lady in Manannah until June 1985, when Father Paul Schumacher was appointed pastor at St. Gertrude’s and Church of St. John the Baptist in Darwin.
In the early 1980’s, parishioners in the RENEW Program organized the Youth Group, the future leaders of the church. To fully involve the youth in the church and community they participated in projects and various other activities, serving their Lord and growing in their Christian faith.
The first Annual Ecumenical Thanksgiving Service was on November 27, 1985, at Ostmark Lutheran Church, rural Watkins, led by Pastor Timothy Thoresen of Ostmark, Pastor Lois Vetvick of St. Matthew’s United Church of Christ, and Father Paul Schumacher of St. Gertrude’s, both in Forest City. The service is followed by refreshments and fellowship. The service is hosted each year by one of the three churches on Thanksgiving Eve.
Bishop Raymond Lucker, second bishop of the New Ulm Diocese, was the first bishop in the nation to appoint pastoral administrators as leaders of parishes in March 1981. St. Gertrude’s was blessed with its first pastoral administrator when Sister Kathleen Fernholz was appointed to St. Gertrude’s in September 1987. She moved into the rectory, which had been unoccupied since 1980.
Father Paul and Sister Kay were responsible for the following ministries:
Fr. Paul—Baptisms, weddings, funerals, religious education K-through adult in both parishes, Liturgy Group at St. John’s.
Sr. Kay—Baptism preparation, Social Concerns, Agenda and Enrichment in both parishes, Liturgy Group at St. Gertrude’s.
Father Schumacher served the parish for three years, 1985-1988. Father Fred Fink returned as Sacramental Minister at St. Gertrude’s and Pastor at St. John’s in Darwin.
The last Saturday night Mass at St. Gertrude’s was September 3, 1988. Beginning Sunday, September 11, there is only one Mass, on Sunday morning.
The first Finance Council was appointed in 1989—Wilfred Knutson, Reid Van Brunt and Steve Becker. Also in 1989, the Altar and Rosary Society’s name was changed to Council of Catholic Women—CCW.
St. Gertrude’s first parish pictorial directory was published in 1990.
December 1990, used room dividers were installed in the church hall for religious education classes. The dividers were donated by the Church of St. Willibord in Gibbon, following a fire in their church on Good Friday.
Sister Kay Fernholz resigned in March 1991. Tom Johnson came on as pastoral minister in July 1991. Tom, wife Jeanne, Matthew 10, Benjamin 7 and Rachel 5, moved into the rectory. Tom was installed Pastoral Administrator February 3, 1993.
St. Gertrude’s celebrated its 135th anniversary in the 125-year-old church November 15, 1992, with 1:00pm Mass, followed by potluck dinner.
The first annual “Good Friday Way of the Cross Walk” took place April 9, 1993. Three area churches participated: Ostmark Lutheran, St. Matthew’s United Church of Christ, and St. Gertrude’s Catholic. Men carry the homemade wooden cross and the group of followers stop at various sites along the way around Forest City. Scripture is read, with prayers and singing. The group then drives their vehicles to Ostmark Lutheran Cemetery for scripture, prayer and singing. The one-hour service concludes at the host church where a soup supper is served.
During the summer of 1994, the attached single garage on the rectory was dismantled and replaced with a larger garage and office.
The Council of Catholic Women, CCW, disbanded December 1995. Committees were formed: Worship and Spirituality, Education, and Social Concerns.
Father Paul Schumacher returned as Sacramental Minister at St. Gertrude’s and Pastor at St. John’s in Darwin, effective June 1999 until June 2000, when Father Patrick Luke Casey was assigned as Sacramental Minister at St. Gertrude’s and Pastor at St. John’s in Darwin. Father Pat is a son of the late Patrick Joseph “Joe” and Florence (Sexton) Casey of Litchfield, and great-grandson of Patrick Casey (1816-1894), Darwin Township. Patrick Casey was a trustee of St. Gertrude’s parish when the present, historic yet sturdy, beautiful church building was erected in 1867, 140 years ago (ref: page 10).
The early pioneers often walked from their homes to Forest City to get instructions in the Catholic faith, at times walking barefoot and putting their shoes on before entering the church. The Caseys were members of St. Gertrude’s until the Church of St. John the Baptist in Darwin was built in 1878.
In compliance with regulations, the worship area of St. Gertrude’s church became handicapped accessible in September 2001.
Tom Johnson resigned as Pastoral Administrator effective July 31, 2002. The Johnson family moved to Vernon Center, MN.
Kathleen Landreville arrived January 2003 as a Pastoral Administrator Intern, and moved into the rectory. A security system was installed in the rectory and the church.
Kathleen was installed Pastoral Administrator July 17, 2005. She resigned October 8, 2006 and lost her three and a half year battle with cancer on December 13, 2006. Mass of Christian burial was at the Church of the Risen Savior in Burnsville, MN. Burial was in her hometown of Hermansville, Michigan. She is survived by her children: Rick (Mary) Schroeder, Michael (Suzanne) Schroeder, Dan (Kristal) Schroeder, and Mary Jane Kernosky, and seven grandchildren. May she rest in peace.
Since 1957, the following parishioners have willingly and faithfully served as trustees of the church: Matt Flynn, Ray Koenig, Ray O’Keefe, Ed Meyer, Ray Arens, Mabel Ziegler, James Shea, Wilfred Knutson, Reid Van Brunt, Patricia Mares, James Ziegler, Jane Fisher. Present incumbents: Janet Hendrickson and Bruce Kiehn.
The beautiful music for Masses, weddings, funerals, etc. has been provided by organists: Ella Koenig, Mary (Young) Bacon, Maureen (McCarthy) Mahoney, Margaret Schoolmeesters, Betty (Kuechle) Carlo, and present organist, Cindy Knutson, also leading the choir of: Jane Fisher, Don and Juanita Arens, Lou & Carole Huber, James and Marcia Ziegler, Nancy Moyer, Karen Marsh, Mary Knisley and Patricia Munson.
With the cooperation of the entire parish, several fundraisers are held annually: Men’s Pancake, Sausage and Potato Salad Supper, St. Gertrude’s Fish Fry, Razzle Dazzle Raffle (which replaces the fall festival), Ladies Night Out Christmas Dinner, and Bake Sales.
The women of the parish volunteer their services by serving a meal for the local Bloodmobile workers and by serving birthday cake and coffee at the local nursing homes.
Parishioners donate to the local food shelf and to the white sock and t-shirt project, which benefits local charities.
The 139-year-old church’s worship area underwent extensive renovation April-June 2006. The ceiling was repaired; walls were insulated and painted. New oak window trim and new carpet were installed.
Many parishioners have served on committees, councils, etc. Serving in these ministries in 2007:
Parish Pastoral Council:
Father Casey – pastor; Carole Huber – chairperson; Donald Arens, Joan Turck, Darala Loch, Patricia Mares, Steve Hendrickson, Patricia Munson; Janet Hendrickson – trustee.
Finance Council:
Father Casey – pastor; Howard Pennertz - chairman; Peg Shaw – bookkeeper; Karen Crusoe, Kevin Root, Steve Turck, Cindy Warren, Jim Ziegler; Bruce Kiehn – trustee.
Eucharistic Ministers:
Lisa Cox, Jane Fisher, Kathy Hansen, Nancy Moyer, Marg Theis, Bruce Kiehn, Duane Mares, Bob Markgraf, Chuck Schoolmeesters.
Lectors:
Kim Renner, Melissa Sackett, Judy Barka, Judy Markgraf, Mike Kramer, Howard Pennertz.
Cantors:
Mary Knisley, Patricia Munson, Juanita Arens, Nancy Moyer.
Altar Servers:
Alan Barka, David Barka, Tori Barka, Katlyn Kiehn, Dillon Lucken, Krista Miller, Kim Miller, Joe Valiant, Daniel Cox, David Cox, Adam Schoolmeesters, Donald Arens, Bruce Kiehn.
Sacristans:
Donna Becker, Jane Fisher, Nancy Moyer, Patricia Mares, Jo Valiant.
Ushers:
Jon Barka, Luverne Kuechle, Adrian Meyer, Art Rick, Maynard Theis, Steve Turck, Chuck Rick, Phillip Valiant, Jim Ziegler.
Religious Education Teachers:
Kim Renner, Jill Root, Cindy Knutson, Sarah Marsh, Darala Loch, Lisa Cox.
Confirmation Instructors:
Bruce Kiehn, Judy Barka
Office:
Carole Huber, Millie Turck.
Worship and Spirituality Committee:
Jane Fisher, Kathy Hansen, Cindy Knutson, Joan Turck, Donna Schreiner, Mary Knisley, Juanita Arens.
Social Concerns Committee:
Carole Huber, Marg Theis, Gen Rick, Marge Rick, Rose Ruprecht.
Education Committee:
Lisa Cox, Kim Renner, Darala Loch, Jill Root.
Greeters:
Al and Pam Miller family, Chuck and Linda Schoolmeesters family, Ken and Kim Renner family, Lou and Carole Huber, Bob and Donna Schreiner, Howard and Millie Turck, Jim and Cindy Theis, Chuck and Marge Rick, Mike and Audre Kramer, Don and Juanita Arens, Art and Gen Rick.
A lifelong member, Howard Turck, was baptized September 9, 1923, and was also confirmed at St. Gertrude’s. Howard, an active parishioner, has served on parish council and various committees in the church.
Cemetery Board:
Phillip Valiant, Art Rick, Chuck Schoolmeesters, Chuck Rick.
Quilting Club:
Patricia Mares, Ardes Shea, Rose Stewart, Camilla Katlack, Florence Worden, Ann Schoolmeesters.
This concludes the history of St. Gertrude’s Church, compiled and edited by Gloriette Wimmer, Ann Schoolmeesters, Camilla Katlack, Howard and Millie Turck.
150th ANNIVERSARY
A 150th Anniversary celebration is planned for August 26, 2007, at the church, beginning with Mass, with Bishop Nienstadt as celebrant, followed by a catered noon meal, an afternoon of entertainment, a program, music, games, and reminiscing.
We ask God’s blessing and guidance as we continue our faithful journey in the coming years.
150th Anniversary Committee:
Jane Fisher, Marg Theis, Carole Huber, Lisa Cox, Gloriette Wimmer, Ann Zillmer, Camilla Katlack, Ann Schoolmeesters, Patricia Mares, Howard and Millie Turck.
2007 Membership
Ahlbrecht, Jean
Anderson, Mike & Lynn
Arens, Don & Juanita
Arens, Mike & MaryLou
Bacon, (Robert) & Mary
Barka, Randy & Judy
Becker, Gerald & Carol
Becker, Robert & Carole Lou
Becker, Roxanne
Becker, Steve & Donna
Berg, (Vernon) & Ellen
Bridge, Kerry & Mary
Cox, (Tom) & Lisa
Crusoe, David & Karen
Duclos, Mike & Kathy
Ertl, Dianna
Ertl, Richard & Kim
Felling, Bernard & Gladys
Ficker, Chad & Lori
Ficker, Conrad & Jane
Fisher, Richard & Jane
Garding, Chris & Brigitte
Garding, Deb
Garding, Jerry & Betty
Garding, LeRoy & Melody
Geislinger, Elroy & Lori
Gollier, Eric
Hagedorn, Sylvia
Hanley, Mike & Elly
Hansen, Bob & Kathy
Hendrickson, Steve & Janet
Herker, Tom & Laurie
Herker, Tony
Holker, Albert & Erin
Holmgren, (Travis) & Sara
Hornburg, (Greg) & Aimee
Huber, Lou & Carole
Jacques, Stan & Sue
Jaquith (Jared) & Melissa
Katlack, Camilla
Kiehn, Bruce & Joyce
Kielty, (Gerry) & Breckett
Kielty, Marna
Knisley, Bob & Maxine
Knisley, Mark & Mary
Knutson, Wilfred & Mary
Knutson, Dave & Cindy
Kramer, Mike & Audre
Kuechle, Luverne & Joan
Kuechle, (David) & Pam
Kuseske, Jeff
Laney, Jon & Gloria
Lindberg, Matthew & Mary
Loch, Dale & Mary
Loch, Darren & Darala
Loch, Gail
Loch, Steve & Julie
Lucken, Thomas & Stephanie
Mares, Duane & Patricia
Markgraf, Bob & Judy
Marsh, Harry & Karen
Marsh, Sarah
McCarthy, Jim & (Karen)
McCarthy, Steve & (Becky)
Meyer, Adrian & (Suzanne)
Mies, Ivan & Donna
Miller, Al & Pam
Movrich, Michael & Heather
Moyer, Don & Nancy
Moyer, Larry & Cathy
Moyer, Kyle
Moyer, Thomas & Marie
Munson, Roger & Patricia
Niewind, Sandry
O'Donaghue, Phil and Susannah
Pennertz, Howard & Patricia
Pennertz, Dorothy
Pennertz, Joseph & Jennifer
Quinn, Tom & Donna
Ramthun, Richard
Randt, Jessica
Rathbun, Mike & Julie
Renner, Ken & Kim
Rick, Art & Gen
Rick, Brenda
Rick, Chuck & Marge
Rick, Larry & (Karla)
Root, Kevin & Jill
Ruprecht, Jerry & Rose
Sackett, (Paul) & Melissa
Scholtes, Kenny & Betty
Schoolmeesters, Ann
Schoolmeesters, Chuck & Linda
Schreiner, Bob & Donna
Shaw, Chuck & Peg
Shea, (Ardes)
Shnyder, Anthony & (Michele)
Shoutz, Lloyd & Sharon
Shoutz, Lloyd Jr. & Irene
Slater, (Steve) & Melissa
Smith, (Troy) & Janice
Stewart, Rose
Stinson, (Eric) & Shanda
Theis, Jim & Cindy
Theis, Maynard & Marg
Turck, Howard & Millie
Turck, Steve & Joan
Valiant, Larry & Debbie
Valiant, Lee & Holly
Valiant, Lee Jr. (Butch)
Valiant, Phillip & Jo
Valiant, Sandra
Warren, (Mike) & Cindy
Wimmer, Clem & Gloriette
Wimmer, John & Joan
Wimmer, John Jr.
Wimmer, Tom
Winter, Harry
Worden, Florence
Yanish, Paul & Debra
Ziegler, Jim & (Marcia)
Ziegler, Joseph & (Kristi)
Zillmer, (Rick) & Ann
Side altar 1958
Sanctuary 1978
Sanctuary 1962
Sanctuary 2007
Fr. Germain Rademacher
Fr. Edward Clemens
Fr. Harry Tasto
Fr. Robert Merth
Fr. John Brunner
Fr. Frederick Fink
Fr. Paul Schumacher
Fr. Patrick Casey
Sr. Kay Fernholz
Mr. Tom Johnson
Kathleen Landreville
Sr. Kathryn Schoolmeesters |
1 Introduction
Matroids are a generalization of the notion of independence. They encode structure about how sets of objects relate to one another, and they appear in a variety of areas of mathematics, from combinatorics to geometry. Objects with many equivalent formulations across different specialties are of great interest, since they provide tools for linking different concepts and performing many types of analysis in different contexts. This worksheet presents a few such formulations, far from an exhaustive list, and explores some of their properties through examples.
2 Matroids
The notion of a matroid can be given in several equivalent ways. Here we will present two such formulations. Before doing so, we will motivate the notion of a matroid.
For a vector space $V$ (think of $\mathbb{R}^2$ the plane or $\mathbb{R}^3$ three-space) and a field $F$ (think $\mathbb{Q}, \mathbb{R}, \mathbb{C}$) and non-zero vectors $v_1, \ldots, v_k \in V$, we say these vectors are linearly dependent if there exist coefficients $a_1, \ldots, a_k \in F$ not all zero such that $a_1 v_1 + a_2 v_2 + \cdots + a_k v_k = 0$, where here 0 means the zero vector. If no such collection of coefficients exists, that is, if $a_1 v_1 + a_2 v_2 + \cdots + a_k v_k = 0$ happens only when all the coefficients equal zero, then we say the collection is linearly independent.
Consider the following matrix, with real coefficients.
$$A = \begin{pmatrix} 1 & 3 & 5 & 6 \\ 2 & 6 & 6 & 8 \\ 1 & 3 & 2 & 3 \end{pmatrix}$$
Denote the columns by 1,2,3,4. Let $E$ be the set of column labels $E = \{1,2,3,4\}$ and denote by $\mathcal{I}$ the collection of sets of column indices corresponding to sets of columns of $A$ that are linearly independent. Note that $\mathcal{I}$ is not empty. Also, any subset of a set of linearly independent columns is again linearly independent. Finally, for any sets of linearly independent columns $X$ and $Y$ such that $|X| = |Y| + 1$, there is an index $i \in X \setminus Y$ such that $Y \cup \{i\}$ is also a linearly independent set of columns.
**Exercise 2.1** In the matrix above, determine the possible sets of linearly independent columns and verify the properties above.
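One way to check an answer to Exercise 2.1 is computationally: a set of columns is linearly independent exactly when the submatrix they form has rank equal to the number of columns. The sketch below (an illustration, not part of the worksheet) computes exact ranks over the rationals with Gaussian elimination, so no floating-point tolerance is needed.

```python
from fractions import Fraction
from itertools import combinations

def rank(rows):
    """Exact rank via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 3, 5, 6],
     [2, 6, 6, 8],
     [1, 3, 2, 3]]

E = [1, 2, 3, 4]
independent = []
for k in range(len(E) + 1):
    for S in combinations(E, k):
        # columns S are independent iff the submatrix they form has rank |S|
        cols = [[row[j - 1] for j in S] for row in A]
        if rank(cols) == len(S):
            independent.append(set(S))

print(independent)
```

Running this lists ten independent sets: the empty set, the four singletons, and every pair except $\{1,2\}$, since column 2 is three times column 1 and column 4 is the sum of columns 1 and 3.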
Let $E$ be a non-empty, finite set of elements. Denote by $\mathcal{I}$ a collection of subsets of $E$ called independent sets that satisfy the following properties:
- **i)** $\mathcal{I}$ is non-empty.
- **ii)** Every subset of a set in $\mathcal{I}$ is also in $\mathcal{I}$.
- **iii)** If $X$ and $Y$ are in $\mathcal{I}$ and $|X| = |Y| + 1$, then there is an element $x \in X \setminus Y$ such that $Y \cup \{x\}$ is in $\mathcal{I}$.
Then, we say $M = (E, \mathcal{I})$ satisfying the above properties is a matroid.
From the example with the columns from the matrix $A$ above, we see that there exists a matroid structure. The following example again illustrates a matroid.
Let $E = \{1,2\}$ and set $\mathcal{I} = \{\emptyset, \{1\}, \{2\}\}$. We observe that $\mathcal{I} \neq \emptyset$, so axiom one is satisfied. The subsets of $\{1\}$ are $\{1\}$ and $\emptyset$, and similarly for $\{2\}$ are $\{2\}$ and $\emptyset$. Moreover, the only subset of $\emptyset$ is $\emptyset$. We see all of these subsets are included in $\mathcal{I}$ so axiom two is satisfied.
Finally, \(|\emptyset| = 0\) and \(|\{1\}| = |\{2\}| = 1\). Since \(1 \in \{1\} \setminus \emptyset\) and \(\emptyset \cup \{1\} = \{1\} \in \mathcal{I}\), and similarly \(2 \in \{2\} \setminus \emptyset\) with \(\emptyset \cup \{2\} = \{2\} \in \mathcal{I}\), axiom three is satisfied. Hence, \(M = (E, \mathcal{I})\) determines a matroid.
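The verification above can be mechanized. A short Python sketch (the function name `is_matroid` is ours) that checks the three axioms for any finite family of frozensets:

```python
def is_matroid(I):
    """Check the three independence axioms for a family I of frozensets.
    Downward closure (axiom ii) is checked one element at a time,
    which suffices by induction."""
    if not I:                                            # axiom i
        return False
    if any(X - {x} not in I for X in I for x in X):      # axiom ii
        return False
    for X in I:                                          # axiom iii
        for Y in I:
            if len(X) == len(Y) + 1 and \
               not any(Y | {x} in I for x in X - Y):
                return False
    return True

# The example from the text: E = {1, 2} with I = {emptyset, {1}, {2}}
I = {frozenset(), frozenset({1}), frozenset({2})}
```

A family such as $\{\emptyset, \{1\}, \{2\}, \{3\}, \{2,3\}\}$ fails axiom iii (take $X = \{2,3\}$ and $Y = \{1\}$), and the checker reports this.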
**Exercise 2.2** Let \(E = \{1, 2, 3, 4\}\). Suppose that \(\{1, 2\}\) and \(\{3, 4\}\) are known to be in \(\mathcal{I}\) and that \(\mathcal{I}\) contains no subsets of \(E\) of cardinality three or greater. Determine what other sets must be in the collection of independent sets \(\mathcal{I}\) to make \((E, \mathcal{I})\) a matroid. Use the above axioms to determine the sets of size 0, 1, and 2 that must be in \(\mathcal{I}\). Note that there may be several different ways to extend this to a matroid, although some subsets will be common to all of them.
**Exercise 2.3** From the previous exercise, examine all the independent sets that are maximal (not contained in any other independent set). What do you notice? Try to conjecture a statement about the size of the maximal independent sets and prove it.
An alternative formulation for a matroid is the following.
Let \(E\) be a non-empty finite set. A collection of bases, \(\mathcal{B}\), is a collection of subsets of \(E\). A matroid \(M\) is a pair \(M = (E, \mathcal{B})\) where \(\mathcal{B}\) is a collection of bases satisfying the following properties:
- **i)** No \(X \in \mathcal{B}\) properly contains any \(Y \in \mathcal{B}\).
- **ii)** If \(B_1\) and \(B_2\) are bases and \(e \in B_1 \setminus B_2\), then there is an \(f \in B_2 \setminus B_1\) such that \((B_1 \setminus \{e\}) \cup \{f\}\) is also a base.
The following exercise will relate this notion of a matroid to the previous one.
**Exercise 2.4** Verify that for \(E = \{1, 2, 3, 4\}\), that \(\mathcal{B} = \{\{1, 2, 3\}, \{1, 2, 4\}, \{1, 3, 4\}, \{2, 3, 4\}\}\) is a collection of bases satisfying the required properties.
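Exercise 2.4 can also be checked mechanically. A Python sketch (the helper name `is_basis_family` is ours) of the two basis axioms, with the exchange axiom in its standard form \(e \in B_1 \setminus B_2\), \(f \in B_2 \setminus B_1\):

```python
from itertools import combinations

def is_basis_family(B):
    """Check the two basis axioms for a collection B of frozensets."""
    # i) no basis properly contains another
    if any(X < Y for X in B for Y in B):
        return False
    # ii) exchange: removing e from B1 can be repaired by some f from B2
    for B1 in B:
        for B2 in B:
            for e in B1 - B2:
                if not any((B1 - {e}) | {f} in B for f in B2 - B1):
                    return False
    return True

# Exercise 2.4: all 3-element subsets of {1, 2, 3, 4}
B = {frozenset(s) for s in combinations((1, 2, 3, 4), 3)}
```

For contrast, the collection \(\{\{1,2\}, \{3,4\}\}\) violates exchange: removing 1 from \(\{1,2\}\) cannot be repaired by any element of \(\{3,4\}\).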
**Exercise 2.5** From the previous exercise, can you notice a property about the sizes of the bases defining the matroid? Try to prove your claim. For this, property ii is very useful.
**Exercise 2.6** Try to relate the independent sets from exercise 2.2 with the bases from exercise 2.4. What do you observe? Try to build a connection between the two formulations of matroids based on the properties of independent sets and bases, and on the individual independent sets and bases themselves.
A bijection of sets \(A\) and \(B\) is a function \(f : A \to B\) such that:
- **i)** If \(b \in B\) then there is an \(a \in A\) such that \(f(a) = b\);
- **ii)** For \(x, y \in A\) if \(f(x) = f(y)\) then \(x = y\).
Note that if a bijection exists, then there is an inverse function $g : B \to A$, defined by $g(b) = a$ iff $f(a) = b$, that has the same properties as $f$, with the roles of $A$ and $B$ switched.
We say two matroids $M_1 = (E_1, \mathcal{I}_1)$ and $M_2 = (E_2, \mathcal{I}_2)$ are isomorphic if there is a bijection between $E_1$ and $E_2$ such that independent sets are mapped to independent sets by $f$ and by $g$.
**Exercise 2.7** For $E = \{1, 2\}$ and matroids $M_1$ defined by $\mathcal{I}_1 = \{\emptyset\}$, $M_2$ defined by $\mathcal{I}_2 = \{\emptyset, \{1\}\}$, $M_3$ defined by $\mathcal{I}_3 = \{\emptyset, \{2\}\}$, $M_4$ defined by $\mathcal{I}_4 = \{\emptyset, \{1\}, \{2\}\}$ and $M_5$ defined by $\mathcal{I}_5 = \{\emptyset, \{1\}, \{2\}, \{1, 2\}\}$, which matroids are isomorphic?
**Exercise 2.8** Determine how many non-isomorphic matroids on the set $E = \{1, 2, 3\}$ exist. To do this, first determine which collections $\mathcal{I}$ of independent sets form matroids with the set $E$. Then, determine which ones are isomorphic.
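Exercise 2.8 is small enough to brute-force. A Python sketch (all helper names are ours) that enumerates every family of subsets of $E = \{1,2,3\}$ satisfying the three axioms, then groups them by relabelling:

```python
from itertools import combinations, permutations

E = (1, 2, 3)
subsets = [frozenset(s) for k in range(4) for s in combinations(E, k)]

def is_matroid(I):
    """The three independence axioms (downward closure checked stepwise)."""
    return bool(I) and \
        all(X - {x} in I for X in I for x in X) and \
        all(any(Y | {x} in I for x in X - Y)
            for X in I for Y in I if len(X) == len(Y) + 1)

# Every family of subsets of E that forms a matroid (labelled count).
matroids = [I for r in range(len(subsets) + 1)
            for I in map(frozenset, combinations(subsets, r))
            if is_matroid(I)]

def canon(I):
    """A canonical form under relabelling of E, for isomorphism testing."""
    images = []
    for p in permutations(E):
        relabel = dict(zip(E, p))
        images.append(frozenset(frozenset(relabel[x] for x in S)
                                for S in I))
    return min(images, key=lambda F: sorted(tuple(sorted(S)) for S in F))

classes = {canon(I) for I in matroids}
```

The search runs over all $2^8 = 256$ candidate families, so it finishes instantly; comparing `len(matroids)` with `len(classes)` separates the labelled count from the count up to isomorphism.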
Recall the example of the matroid determined by the sets of linearly independent columns of the matrix $A$, which we will denote by $M[A]$. A matroid that is isomorphic to such a matroid for a given matrix $A$ over a field $F$ is said to be $F$–representable, and $A$ is an $F$–representation. It is a fact that not all matroids are $F$–representable over every field $F$.
3 Graphs and matroids
A simple graph $G = (V, E)$ is a set $V$ of distinct vertices and a set $E$ of distinct edges with certain restrictions. More precisely, we will look at finite graphs for which $V$ and $E$ are finite. Associated to each edge $e \in E$ is an unordered pair of distinct vertices $\{v_i, v_j\}$; we do not care about the direction the edge travels, just the vertices it connects. We also will only consider graphs where any pair of vertices is connected by at most one edge. A path from $v_i$ to $v_j \neq v_i$ in $G$ is given by a sequence of edges such that each pair of adjacent edges shares a common vertex, with the first edge having $v_i$ as a vertex and the last edge having $v_j$ as a vertex. A graph $G$ is connected if for every pair of vertices, there exists a path between them. A cycle is a non-trivial path that starts and ends at the same vertex.
A tree is a connected graph that contains no cycles. If a tree has $n$ vertices, one can show that it has $n - 1$ edges. A spanning tree for a graph $G$ is a tree that uses the edges of $G$ and includes every vertex of $G$. Every connected graph has a spanning tree, but there may be many such spanning trees for a given graph.
Consider the complete graph on $n$ vertices. This is a graph with $n$ vertices and an edge between each pair of distinct vertices.
Exercise 3.1 List out the spanning trees for $n = 3$. How many are there? How about for $n = 4$?
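Exercise 3.1 can be checked by enumeration. A Python sketch (helper names ours) that lists the spanning trees of the complete graph on $n$ vertices by testing each set of $n-1$ edges for acyclicity with a union-find:

```python
from itertools import combinations

def connected_acyclic(n, edge_set):
    """True iff the edges form a spanning tree of the complete graph on
    vertices 1..n: n-1 edges with no cycle, checked via union-find."""
    parent = list(range(n + 1))
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    merges = 0
    for u, v in edge_set:
        ru, rv = find(u), find(v)
        if ru == rv:          # this edge would close a cycle
            return False
        parent[ru] = rv
        merges += 1
    return merges == n - 1    # n-1 cycle-free edges connect all n vertices

def spanning_trees(n):
    edges = list(combinations(range(1, n + 1), 2))
    return [t for t in combinations(edges, n - 1)
            if connected_acyclic(n, t)]
```

The counts agree with Cayley's formula $n^{n-2}$: three spanning trees for $n = 3$ and sixteen for $n = 4$.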
A forest is a graph that contains no cycles. A tree is a connected forest.
Exercise 3.2 Determine the possible forests in the complete graph on 3 vertices.
We can define a matroid on a graph $G$ by letting $E$ be the set of edges and the set of independent sets $\mathcal{I}$ be the edge sets of forests in the graph $G$. Such a matroid is said to be a cycle matroid, denoted by $M(G)$.
Exercise 3.3 Verify that the collection of edge sets of forests on the complete graph on 3 vertices satisfies the properties of a collection of independent sets.
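Exercise 3.3 can be verified directly. A Python sketch (helper names ours) that enumerates the forests of the complete graph on 3 vertices by edge labels and then tests the three independence axioms on the resulting family:

```python
from itertools import combinations

# K3 with edges labelled 1, 2, 3
edges = {1: (1, 2), 2: (1, 3), 3: (2, 3)}

def is_forest(labels):
    """True iff the chosen edges contain no cycle (union-find)."""
    parent = {v: v for v in (1, 2, 3)}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for lbl in labels:
        a, b = (find(u) for u in edges[lbl])
        if a == b:            # endpoints already connected: cycle closes
            return False
        parent[a] = b
    return True

# Edge sets of forests = independent sets of the cycle matroid M(K3)
I = {frozenset(s) for k in range(4)
     for s in combinations(edges, k) if is_forest(s)}

# The three independence axioms, checked directly on I
axioms_ok = bool(I) \
    and all(X - {x} in I for X in I for x in X) \
    and all(any(Y | {x} in I for x in X - Y)
            for X in I for Y in I if len(X) == len(Y) + 1)
```

Every proper subset of the three edges is a forest and only the full edge set contains a cycle, so there are seven independent sets.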
A matroid $M$ is graphic if it is isomorphic to the cycle matroid of some graph.
We may define a matroid in a third formulation as follows. Let $E$ be a finite non-empty set and $\mathcal{C}$ a non-empty collection of subsets of $E$, called cycles, satisfying the following two properties:
i) No cycle properly contains another cycle.
ii) For two distinct cycles $C_1$ and $C_2$ such that $e \in C_1 \cap C_2$, there is a cycle contained in $(C_1 \cup C_2) \setminus \{e\}$.
Exercise 3.4 For the complete graph on three vertices, list out the cycles. Do the same for the complete graph on four vertices.
Exercise 3.5 Examine the formulation of a matroid using cycles and look at the collection of cycles you created from the previous exercise. What similarities do you see? Also, examine the formulation of a matroid in terms of independent sets. How does this formulation relate the independent sets to the cycles you found in the previous exercise?
4 Operations with matroids
Now that we have seen several different formulations of matroids and examples of these, one may ask if there are ways of building new matroids from existing matroids.
We introduce the direct sum of two matroids as follows:
Let \( M_1 = (E_1, \mathcal{I}_1) \) and \( M_2 = (E_2, \mathcal{I}_2) \) be two matroids given by collections of independent sets where \( E_1 \) and \( E_2 \) are disjoint (\( E_1 \cap E_2 = \emptyset \)). Then
\[
M_1 \oplus M_2 = (E_1 \cup E_2, \{ I_1 \cup I_2 : I_1 \in \mathcal{I}_1, I_2 \in \mathcal{I}_2 \})
\]
**Exercise 4.1** Verify that the three properties that independent sets must satisfy to form a matroid are in fact satisfied in this construction. Hint: If \( |I_1 \cup I_2| = |J_1 \cup J_2| + 1 \), it must follow that either \( |I_1| > |J_1| \) or that \( |I_2| > |J_2| \). Use this and property 2 to verify property 3.
Observe that if \( G_1 \) and \( G_2 \) are disjoint graphs, then the cycle matroids determined by these graphs can be combined using the direct sum process.
**Exercise 4.2** Let \( G_1 \) and \( G_2 \) be two disjoint complete graphs on 3 vertices, and \( M(G_1) \) and \( M(G_2) \) be the corresponding cycle matroids. Write out the independent sets of \( M(G_1) \oplus M(G_2) \). It may be helpful to label the vertices and edges of \( G_1 \) as \( v_i, e_i \) and the vertices and edges of \( G_2 \) as \( w_i, f_i \).
The direct sum of two \( F \)-representations is also defined. To see this, suppose that \( M_1 = M[A_1] \) and \( M_2 = M[A_2] \). Then, consider the matrix
\[
A_3 = \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix}
\]
**Exercise 4.3** Let
\[
A_1 = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix} \quad \text{and} \quad A_2 = \begin{pmatrix} 2 & 1 & 0 \\ 2 & 1 & 1 \end{pmatrix}.
\]
Determine the matroids \( M[A_1] \) and \( M[A_2] \). Next, determine \( M[A_1] \oplus M[A_2] \) using the definition of the direct sum of matroids. Then, verify that this is isomorphic to \( M[A_3] \), where
\[
A_3 = \begin{pmatrix} 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2 & 1 & 0 \\ 0 & 0 & 0 & 2 & 1 & 1 \end{pmatrix}.
\]
It may be easier to keep track of everything by labelling the columns of \( A_1 \) by \( \{1, 2, 3\} \), the columns of \( A_2 \) by \( \{4, 5, 6\} \), and the columns of \( A_3 \) by \( \{1, 2, 3, 4, 5, 6\} \).
*Solution:* \( M[A_1] \) is given by \( E_1 = \{1, 2, 3\} \) and \( \mathcal{I}_1 = \{\emptyset, \{1\}, \{2\}, \{3\}, \{1, 2\}, \{1, 3\}, \{2, 3\}\} \). \( M[A_2] \) is given by \( E_2 = \{4, 5, 6\} \) and \( \mathcal{I}_2 = \{\emptyset, \{4\}, \{5\}, \{6\}, \{4, 6\}, \{5, 6\}\} \). One can easily compute the direct sum from this and verify it equals \( M[A_3] \), though this is tedious (there are 42 independent sets).
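The tedious part of the solution can be delegated to a short Python sketch (helper names ours): build the direct sum from the listed \( \mathcal{I}_1 \) and \( \mathcal{I}_2 \), then recompute the independent column sets of \( A_3 \) with exact rational rank and compare.

```python
from fractions import Fraction
from itertools import combinations

def rank(mat):
    """Exact matrix rank via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Independent sets of M[A1] and M[A2] as listed in the solution
I1 = {frozenset(s) for s in
      [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3)]}
I2 = {frozenset(s) for s in
      [(), (4,), (5,), (6,), (4, 6), (5, 6)]}

# Direct sum: unions of one set from each family (ground sets disjoint)
direct_sum = {i1 | i2 for i1 in I1 for i2 in I2}

A3 = [[1, 0, 1, 0, 0, 0],
      [0, 1, 1, 0, 0, 0],
      [0, 0, 0, 2, 1, 0],
      [0, 0, 0, 2, 1, 1]]

def independent(cols):
    sub = [[row[j - 1] for j in cols] for row in A3]
    return rank(sub) == len(cols)

I3 = {frozenset(s) for k in range(7)
      for s in combinations((1, 2, 3, 4, 5, 6), k) if independent(s)}
```

Because \( A_3 \) is block diagonal, a set of its columns is independent exactly when its parts in \(\{1,2,3\}\) and \(\{4,5,6\}\) are independent in \( A_1 \) and \( A_2 \) respectively, which is precisely the direct-sum family of \( 7 \times 6 = 42 \) sets.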
Recommended Citation
M. Benacquista, et. al., (2003) Gravitational radiation from pulsating white dwarfs. Astrophysical Journal 596:2L223. DOI: http://doi.org/10.1086/379532
GRAVITATIONAL RADIATION FROM PULSATING WHITE DWARFS
M. BENACQUISTA,¹ D. M. SEDRAKIAN,² M. V. HAIRAPETYAN,² K. M. SHAHABASYAN,³ AND A. A. SADOYAN³
Received 2003 May 2; accepted 2003 September 4; published 2003 September 29
ABSTRACT
Rotating white dwarfs undergoing quasi-radial oscillations can emit gravitational radiation in a frequency range from 0.1 to 0.3 Hz. Assuming that the energy source for the gravitational radiation comes from the oblateness of the white dwarf induced by the rotation, the strain amplitude is found to be $\sim 10^{-27}$ for a white dwarf at $\sim 50$ pc. The Galactic population of these sources is estimated to be $\sim 10^7$ and may produce a confusion-limited foreground for proposed advanced detectors in the frequency band between space-based and ground-based interferometers. Nearby oscillating white dwarfs may provide a clear enough signal to investigate white dwarf interiors through gravitational wave asteroseismology.
Subject headings: gravitational waves — stars: oscillations — white dwarfs
1. INTRODUCTION
There are a number of gravitational radiation detectors, planned, under construction, and operational, covering a wide frequency spectrum from $\sim 10^{-9}$ Hz all the way up to $\sim 10^4$ Hz. Expected sources of gravitational radiation include numerous astrophysical sources such as compact binaries, supermassive black holes, and binary coalescence. In addition, there is an expected cosmological background of gravitational radiation arising from the very earliest times of the universe. The coverage of the spectrum is not complete, and the gap between space-based interferometers (such as the Laser Interferometer Space Antenna) and ground-based interferometers (such as the Laser Interferometer Gravitational-Wave Observatory and Virgo) has been proposed as a possible “clean window,” devoid of continuous foreground sources, through which the cosmological background of gravitational radiation could be seen (Seto, Kawamura, & Nakamura 2001). Most binary white dwarf systems will coalesce before their gravitational wave frequency rises above 0.1 Hz, while more massive binaries such as double neutron star or black hole binaries will be sweeping through this band on the way to their eventual coalescence in the ground-based frequency band. However, rotating white dwarfs undergoing quasi-radial oscillations will emit gravitational radiation in this frequency band. These sources will be essentially monochromatic and long-lived. Given the number of white dwarfs in the Galaxy, it is quite possible that this population will produce a confusion-limited foreground of sources in this frequency band that will mask the cosmological background. We propose a possible energy source for these oscillations and attempt to estimate the average signal strength.
2. QUASI-RADIAL OSCILLATIONS
Quasi-radial oscillations of rotating white dwarfs were investigated in the early 1970s (Papoyan, Sedrakian, & Chubaryan 1972; Haroutyunyan, Sedrakian, & Chubaryan 1972), when the frequency spectrum of the fundamental oscillation mode for maximally rotating white dwarfs was determined. These stars are oblate because of their rotation, and consequently, they have a nonzero quadrupole moment. The oscillations add a time dependence to the quadrupole moment (Vartanian & Hajian 1977). The oscillation is described by assigning each mass element a time-dependent coordinate given by $x_\alpha = x_\alpha^0 (1 + \eta \sin \omega t)$, where $\eta \ll 1$ is a constant. Thus, the reduced quadrupole moment is given by
$$Q_{\alpha\beta} = \int \rho(x_\alpha x_\beta - \frac{1}{3} \delta_{\alpha\beta} x^2) \, d^3 x$$
$$= Q_{\alpha\beta}^0 (1 + 2\eta \sin \omega t),$$
(1)
where $Q_{\alpha\beta}^0$ are the components of the quadrupole moment of the rotating oblate white dwarf in equilibrium and we have neglected terms of order $\eta^2$. Taking the axis of rotation to lie along the $z$-axis, the nonzero components of the quadrupole moment obey
$$Q^0 = -Q^0_{zz} = 2Q^0_{xx} = 2Q^0_{yy}.$$
(2)
The power emitted in gravitational radiation is given by
$$J = \frac{G}{5c^5} \left| \frac{d^3}{dt^3} Q_{\alpha\beta} \right|^2,$$
(3)
and consequently, one obtains
$$J = \frac{6G}{5c^5} \eta^2 \omega^6 |Q^0|^2 \cos^2 \omega t' = J_0 \cos^2 \omega t',$$
(4)
where the retarded time is $t' = t - r/c$ for a source at distance $r$.
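For completeness, the step from equations (1)–(3) to equation (4) can be made explicit. Differentiating equation (1) three times and summing the squares of the nonzero components from equation (2) gives

```latex
\frac{d^3}{dt^3} Q_{\alpha\beta} = -2\eta\,\omega^{3}\, Q^{0}_{\alpha\beta} \cos \omega t ,
\qquad
\sum_{\alpha\beta} \bigl(Q^{0}_{\alpha\beta}\bigr)^{2}
  = \bigl(Q^{0}\bigr)^{2} + 2\Bigl(\tfrac{1}{2} Q^{0}\Bigr)^{2}
  = \tfrac{3}{2}\bigl(Q^{0}\bigr)^{2},
```

so that $J = \frac{G}{5c^5} \cdot 4\eta^2 \omega^6 \cos^2 \omega t' \cdot \frac{3}{2} |Q^0|^2 = \frac{6G}{5c^5} \eta^2 \omega^6 |Q^0|^2 \cos^2 \omega t'$, in agreement with equation (4).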
To determine the waveform and the angular distribution of the radiation, we rotate to coordinates in which the wavevector lies along the $z$-axis and use the transverse-traceless gauge. Consequently,
$$h_+ = \frac{1}{2}(h_{xx} - h_{yy}) = \frac{3GQ^0 \eta \omega^2}{c^4 r} \sin^2 \theta \sin \omega t',$$
(5)
$$h_\times = 0,$$
(6)
where $\theta$ is the angle between the wavevector and the axis of rotation of the white dwarf. We can express the strain amplitude
in terms of the power by combining equation (4) with equation (5) to obtain
\[ h_+ = \sqrt{\frac{15GJ_0}{2c^3}} \frac{1}{r\omega} \sin^2 \theta \sin \omega t'. \]
(7)
If an energy source can be found to drive the pulsations, the rate at which power is put into the vibrations can be combined with the lifetime of the energy source to estimate the strain amplitude from an individual white dwarf and thus from the Galactic population as a whole. We discuss a possible mechanism in the next section.
3. ENERGY SOURCE
If there is no permanent source of energy to feed the gravitational radiation, the oscillation energy will quickly radiate away in about 1000 yr (Vartanian & Hajian 1977). Since the ultimate source of the gravitational radiation from the white dwarf is the oblateness arising from the rotation, we propose that the deformation energy of the white dwarf provides the energy to drive the oscillations. In this scenario, as the white dwarf spins down, it will transition from oblate to spherical. This transition will trigger starquakes that will feed the oscillations that drive the gravitational radiation. We assume that a part of the deformation energy will be converted into gravitational radiation while the remaining part will dissipate through thermal, electromagnetic, and other channels. We use the technique of Sahakian, Sedrakian, & Chubarian (1972) to calculate the deformation energy.
From numerical results, the masses of the rotating and nonrotating configurations are found to depend linearly on the baryon number, so \( M = kN \) and \( M_0 = k_0 N \), where the subscript “0” indicates the nonrotating configuration. The mass difference between the rotating and nonrotating configurations with the same baryon number is a result of the additional energy of rotation \([W_r(\Omega) = \frac{1}{2}I\Omega^2]\) as well as the potential energy due to crustal deformation \([W_c(\Omega)]\); thus,
\[ W_c(\Omega) = \Delta Mc^2 - W_r(\Omega), \]
(8)
where \( \Omega \) is the angular velocity of the white dwarf, \( I \) is its moment of inertia, and the mass difference in grams is given by
\[ \Delta M = (k - k_0)N = 8.96 \times 10^{-29}N. \]
(9)
The appropriate parameters for maximally rotating white dwarfs were calculated in Haroutyunian, Sedrakian, & Chubaryan (1971) and Sahakian et al. (1972). These parameters are presented in Table 1. To obtain the deformation energy for rotation rates less than \( \Omega_{\text{max}} \), we use the fact that the results of Haroutyunian et al. (1971) were obtained using a linear expansion in the small dimensionless parameter \( \Omega^2/8\pi G\rho_c \), where \( \rho_c \) is the nonrotating central density. Consequently, we can write
\[ W_c(\Omega) = \left( \frac{\Omega}{\Omega_{\text{max}}} \right)^2 W_c(\Omega_{\text{max}}). \]
(10)
It now remains to determine the timescale, \( \tau \), for a spin-down mechanism so that we can relate the power in gravitational radiation to the decrease in deformation energy by
\[ J_0 = \beta \frac{W_c}{\tau}, \]
(11)
where \( \beta \ll 1 \) is a “branching ratio” that quantifies the fraction of deformation energy that goes into gravitational radiation from the fundamental mode.
One possible mechanism for the spin-down of a rotating white dwarf is magnetodipole radiation torque, which occurs if the magnetic field is oblique (Schmidt et al. 2001). Observational data for 65 isolated white dwarfs indicates the magnetic field strength on the surface of these stars lies in the range \( \sim 3 \times 10^4 \) to \( \sim 10^9 \) G (Wickramasinghe & Ferrario 2000). If we define \( \alpha \) to be the angle between the magnetic and rotation axes, the spin-down rate of the white dwarf is given by
\[ \dot{\Omega} = -\frac{2\mu^2\Omega^3}{3Ic^3} \sin^2 \alpha, \]
(12)
where \( \mu = BR^3 \) is the magnetic moment, \( B \) is the magnetic field strength, and \( R \) is the radius of the white dwarf. The characteristic timescale is then
\[ \tau = \frac{\Omega}{2 |\dot{\Omega}|}. \]
(13)
We neglect the influence of gravitational radiation on the spin evolution because the ratio of the energy lost through gravitational radiation to that of the energy lost through the magnetic-dipole spin-down mechanism is about 3.6\( \beta \). With this information, we can now estimate the gravitational radiation luminosity and strain amplitude for several white dwarfs.
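As an illustration of equations (12) and (13), here is a rough numerical sketch in cgs units, loosely modelled on EUVE J0317−855 (the field strength and rotation period are taken from Table 2; the radius and the uniform-sphere moment of inertia are our own illustrative assumptions, not values from the paper):

```python
import math

c = 2.998e10                  # speed of light, cm/s
M_sun = 1.989e33              # solar mass, g

# B and P from Table 2; R, M-to-I conversion are assumed for illustration
B = 4.5e8                     # surface field, G (450 MG)
P = 725.0                     # rotation period, s
M = 1.35 * M_sun              # mass, g
R = 4.5e8                     # radius, cm (~4500 km, assumed)
I = 0.4 * M * R**2            # uniform-sphere moment of inertia, g cm^2
sin2_alpha = 0.5              # <sin^2 alpha>, as chosen in the text

Omega = 2.0 * math.pi / P
mu = B * R**3                                   # magnetic moment
# Magnetodipole spin-down rate, eq. (12)
Omega_dot = 2.0 * mu**2 * Omega**3 * sin2_alpha / (3.0 * I * c**3)
# Characteristic spin-down timescale, eq. (13)
tau = Omega / (2.0 * Omega_dot)
tau_gyr = tau / 3.156e16                        # seconds -> Gyr
```

With these assumed parameters the timescale comes out at a few Gyr, the same order as the 1.7 Gyr listed in Table 3 for this star; the exact value depends on the assumed radius and moment of inertia.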
4. STRAIN AMPLITUDES AND LUMINOSITIES
The rotation rates for white dwarfs are difficult to measure, since these objects have few or no surface blemishes and gravitational broadening of their spectral lines overwhelms the expected rotational broadening (Kawaler 2003). Fortunately, some magnetic white dwarfs show time variability in their magnetic features that allows their rotation rates to be inferred (Wickramasinghe & Ferrario 2000; Kawaler 2003). Magnetic white dwarfs are thought to make up roughly 1% of the white dwarf population.
### Table 2
**Isolated Magnetic White Dwarfs**
| Name | $r$ (pc) | $B$ (MG) | $P_{\text{rot}}$ | $M/M_\odot$ |
|-----------------------|----------|----------|------------------|-------------|
| PG 1031+234 | 142$^a$ | 500$^b$ | 3.4 hr | 0.93$^c$ |
| EUVE J0317−855 | 35$^d$ | 450 | 725 s | 1.35 |
| PG 1015+015 | 66$^e$ | 90 | 99 minutes | 1.15$^c$ |
| Feige 7 | 49 | 35 | 2.2 hr | 0.6 |
| G99-47 | 8$^f$ | 25 | 1 hr? | 0.71$^c$ |
| KPD 0253+5052 | 81$^d$ | 17 | 3.79 hr | ... |
| PG 1312+098 | ... | 10 | 5.43 hr | ... |
| G217-037 | 11$^c$ | $\leq 0.2$ | 2−20? hr$^c$ | 0.89 |
$^a$ Heyl 2000.
$^b$ Ranges from 500 to 1000.
$^c$ Liebert, Bergeron, & Holberg 2003.
$^d$ Downes 1986.
$^e$ $P_{\text{rot}} = 2$ hr was used for the calculation.
Assuming a Galactic distribution of white dwarfs to follow the disk population, we assign a density distribution of
$$\rho = \rho_0 e^{-r/R} e^{-|z|/h}$$
in Galactocentric cylindrical coordinates, with $R = 2.5$ kpc and $h = 200$ pc. Taking the solar location as $r_s = 8.5$ kpc and $z_s = 0$, we obtain $\rho_0 = 5.5 \times 10^{-4}$ pc$^{-3}$ and a total number of $N = 8.6 \times 10^6$ in the Galaxy.
Expected strain amplitudes, $\tau$, and $\eta$ for the eight isolated magnetic white dwarfs are given in Table 3. These are calculated by first determining $\tau$ from equations (12) and (13), where we have chosen $\sin^2 \alpha = \frac{1}{2}$. This is a reasonable assumption, since a distribution in $\alpha$ that is either uniform or spiked at $\alpha = \pi/4$ is supported by observation (Schmidt & Norsworthy 1991). Choosing a value for $\beta$ with which to calculate the luminosity is somewhat problematic because of the dearth of observational data for pulsations in this frequency range. Most observations of white dwarf pulsations are in the millihertz range, and consequently, any pulsations in the decihertz range will be averaged out in an observation. We choose an upper bound to $\beta$ by requiring that the largest Doppler broadening of spectral lines due to pulsations be less than thermal Doppler broadening. The most stringent constraint comes from PG 1031+234 and yields $\beta = 10^{-4}$. We note that the resulting pulsational Doppler broadening in the remaining seven white dwarfs is at least an order of magnitude below the thermal Doppler broadening. Thus, the luminosity is calculated from equation (11) and used with equations (14) and (7) to determine the strain amplitude where we have averaged over all orientations and taken $\beta = 10^{-4}$. We have used the average mass of 0.95 $M_\odot$ whenever the mass was undetermined, and we have used the average distance of 46 pc whenever the distance was undetermined. The expected energy flux on Earth (in units of ergs s$^{-1}$ cm$^{-2}$) for a population made entirely of each type of white dwarf is also shown in Table 3. The flux, $F$, is calculated using
$$F = \frac{4\pi\rho_0 h f(h) J_0}{4\pi(3 \times 10^{18})^2},$$
where $h = 200$ pc and $f(h) = 6.15$ is calculated using Appendix A of Hils, Bender, & Webbink (1990). Finally, we note that a simple average of the strain amplitudes in Table 3 gives $h_\perp = 5.9 \times 10^{-28}$ and an average flux of $F = 9.21 \times 10^{-16}$ ergs s$^{-1}$ cm$^{-2}$. The flux is spread out over a frequency band of $\nu_1 = 0.12$ to $\nu_2 = 0.32$ Hz, and we can estimate an average strain amplitude for the Galactic population of pulsating white dwarfs by using the angle and polarization averaged expression
### Table 3
**Strain Amplitudes and Fluxes for Isolated White Dwarfs with $\beta = 10^{-4}$**
| Name | $h_\perp$ | $F$ | $\tau$ (Gyr) | $\eta$ |
|-----------------------|-----------|--------------|--------------|--------------|
| PG 1031+234 | 6.0 $\times 10^{-29}$ | 6.1 $\times 10^{-17}$ | 11 | 1.0 $\times 10^{-2}$ |
| EUVE J0317−855 | 1.0 $\times 10^{-27}$ | 6.7 $\times 10^{-15}$ | 1.7 | 4.0 $\times 10^{-3}$ |
| PG 1015+015 | 9.3 $\times 10^{-30}$ | 1.1 $\times 10^{-18}$ | 571 | 7.1 $\times 10^{-4}$ |
| Feige 7 | 1.6 $\times 10^{-28}$ | 4.9 $\times 10^{-17}$ | 125 | 5.2 $\times 10^{-4}$ |
| G99-47 | 3.5 $\times 10^{-27}$ | 5.9 $\times 10^{-16}$ | 50.6 | 3.7 $\times 10^{-4}$ |
| KPD 0253+5052 | 2.9 $\times 10^{-30}$ | 4.6 $\times 10^{-20}$ | 11845 | 3.5 $\times 10^{-4}$ |
| PG 1312+098 | 1.5 $\times 10^{-30}$ | 3.8 $\times 10^{-21}$ | 70266 | 2.0 $\times 10^{-4}$ |
| G217-037 | 9.0 $\times 10^{-31}$ | 8.2 $\times 10^{-23}$ | 2.4 $\times 10^{7}$ | 4.1 $\times 10^{-6}$ |
of Douglas & Braginsky (1979) and averaging over the frequency range $\Delta \nu = \nu_2 - \nu_1$ to obtain
$$h_{+,\text{ave}} = \frac{\ln \nu_2 / \nu_1}{\Delta \nu} \sqrt{\frac{4GF}{\pi c^3}},$$ \hspace{1cm} (17)
which gives $h_{+,\text{ave}} = 8.35 \times 10^{-27}$.
5. CONCLUSIONS
We have shown that the Galactic population of white dwarfs can produce a background of gravitational radiation in the frequency range of 0.12–0.32 Hz through quasi-radial pulsations. The source of energy to drive these pulsations is found in the deformation energy of the white dwarf because of its rotation. This energy can be extracted from the white dwarf as it spins down. We have proposed that a population of isolated magnetic white dwarfs that are the remnants of merged double-degenerate binaries can be sources of this gravitational radiation. Estimates of the signal strength over the frequency band of interest indicate that this population may be comparable in strength to the level of the stochastic cosmological background of gravitational radiation predicted by standard inflationary models. In addition, some white dwarfs may be near enough that their signals will stand above the background and their individual parameters can be measured. If the frequency and amplitude of the gravitational radiation from one of these nearby single white dwarfs can be measured, then it will provide an opportunity to test models of white dwarf interiors and give a unique opportunity to check the main parameters for white dwarfs and to understand their dissipation mechanisms.
This work is supported by CRDF award AP2-3207 and 12006/NFSAT PH067-02. M. B. is also supported by NASA Cooperative Agreement NCC5-579.
REFERENCES
Douglas, D. H., & Braginsky, V. G. 1979, in General Relativity: An Einstein Centenary Survey, ed. S. W. Hawking & W. Israel (Cambridge: Cambridge Univ. Press), 30
Downes, R. A. 1986, ApJS, 61, 569
Dupuis, J., Vennes, S., & Charvet, P. 2002, ApJ, 580, 1091
Ferrario, L., Vennes, S., Wickramasinghe, D. T., Bailey, J. A., & Christian, D. J. 1997, MNRAS, 292, 205
Haroutyunian, G. G., Sedrakian, D. M., & Chubaryan, E. V. 1971, Soviet Astrophys., 7, 467
———. 1972, AZh, 49, 1216
Heyl, J. S. 2000, MNRAS, 317, 310
Hils, D., Bender, P. L., & Webbink, R. F. 1990, ApJ, 360, 75
Kawaler, S. D. 2003, in IAU Symp. 215, Stellar Rotation, ed. A. Maeder & P. Eenens (San Francisco: ASP), in press (astro-ph/0301539)
Liebert, J., Bergeron, P., & Holberg, J. B. 2003, AJ, 125, 348
Liebert, J., Dahn, C. C., & Monet, D. G. 1988, ApJ, 332, 891
Papoyan, V. V., Sedrakian, D. M., & Chubaryan, E. V. 1972, AZh, 49, 750
Sahakian, G. S., Sedrakian, D. M., & Chubarian, E. V. 1972, Soviet Astron., 8, 541
Schmidt, G. D., & Norsworthy, J. E. 1991, ApJ, 366, 270
Schmidt, G. D., Vennes, S., Wickramasinghe, D. T., & Ferrario, L. 2001, MNRAS, 328, 203
Seto, N., Kawamura, S., & Nakamura, T. 2001, Phys. Rev. Lett., 87, 221103
Vartanian, Yu. L., & Hajian, G. S. 1977, AZh, 54, 1047
Wickramasinghe, D. T., & Ferrario, L. 2000, PASP, 112, 873 |
A university stands for humanism, for tolerance, for reason, for the adventure of ideas and for the search of truth. It stands for the onward march of the human race towards ever higher objectives. If the Universities discharge their duties adequately, then it is well with the nation and the people.
The symbol is a graphic statement standing for international academic exchange and the onward search for knowledge for the betterment of human beings.
The overlapping circular segments of the design denote global interaction, creating a flame emitting enlightenment. This flame emerges out of the traditional Indian ‘diya’ (lamp), a source of Light, Understanding and Brotherhood.
The design is also representative of the rose-bud closely associated with the name of Pt. Jawaharlal Nehru.
JNU News is a bimonthly journal of Jawaharlal Nehru University. It serves to bridge the information gap and tries to initiate constant dialogue between various constituents of the University community as well as with the rest of the academic world. Views expressed are those of the contributors and not necessarily of JNU News. All articles and reports published in it may be freely reproduced with acknowledgment.
A Conversation with Professor Didier Girard, Professor at the University of Tours, France, and General Coordinator of several Erasmus Mundus multi-centre postgraduate programmes, including the Erasmus Mundus Joint Doctorate in “Cultural Studies in Literary Interzones”, of which JNU is a degree-awarding partner.
Q: You visited JNU just a few years back as the General Coordinator of the prestigious Erasmus Mundus Joint Doctorate programme, of which JNU is a degree-awarding partner. What is your impression of the place?
Prof. Girard: The campus is very impressive for anybody coming from Europe as one enters a whole universe, with beautiful grounds, luxuriant vegetation and buildings of many different shapes and sizes scattered all over the place, but with specially-dedicated areas for different departments. But what I was most impressed by was the wide range of social events (and not only academic) that take place on a daily basis. This makes it very easy to meet and socialize with all sorts of people even if you are not a member of this or that faculty or research centre, and that makes JNU quite unique in my experience.
Q: Your academic pursuits, and your spearheading numerous international academic collaborations have taken you to different universities in the world. What are some of the areas where JNU can improve and learn from top universities in the world, especially in the domain of international collaboration?
Prof. Girard: This is obviously a far-reaching question! I have been working with two dozen universities on 5 continents!!! There are so many different cultural traditions, economic and political parameters sometimes, and social textures that may differ drastically from one place to another. But I’ll try to answer your question anyway, as best as I can, focusing on “best examples” (though I could equally list the hindrances and difficulties encountered in the different countries, if you insist!)
In Western Europe, universities are usually not far from city centres (and that helps international students and academics to interact more easily and more frequently with the local culture, not only the academic milieu). Obviously, studying and living in Bergamo, Paris, Rome, Naples, Tübingen, Barcelona,
Stockholm, Madrid, Lisbon, Santiago de Compostela, Berlin, St Andrews, Krakow, Oxford, or in Tours in the Loire Valley offers cultural feasts, as historical artefacts, libraries, and museums abound, sometimes literally next door. Some of these universities (not all of them, far from it!) also have a long institutional experience with international collaborations and large pools of highly skilled civil servants with strong expertise in such matters (I am thinking, for instance, of Santiago de Compostela, Bergamo, or Tübingen). Working with such universities teaches you a lot in technical and administrative matters (degree-awarding and recognition policies, international accountancy, Quality Enhancement, etc.). In Eastern Europe (the Western Balkans, Russia, and the Mitteleuropa countries), the range of courses is tantalizing, with subjects that have disappeared from standardized/globalized syllabuses, and in itself, this can be very exciting for the curious international scholar. Pastoral care and supervision are also unrivalled. In Australia (as in Great Britain), digital and library resources are a real goldmine and campus life is just great (the real inconvenience is that international students often have difficulty making ends meet, as daily life is quite expensive compared to other parts of the world). South American universities also have their own charm, with fantastic staff (usually of multifarious ethnic origins) and an exciting choice of original academic topics. The Iberoamerican University in Mexico City is a case in point when it comes to international collaborations, because it is also equipped with a very successful fundraising foundation that helps when it comes to elite programmes that are not run with the same standard national procedures. They are in fact extremely flexible when it comes to experimental educational ventures.
In North American universities of the highest quality, graduate centres provide post-graduate researchers of all levels with state-of-the-art materials and teaching staff, and spending one or two semesters there is of course a unique opportunity for anybody, as standards are very high and stimulating.
Having said that, I should maybe add that Indian universities (and J.N.U. more than any other in the sub-continent) combine many of these advantages for international collaborations. The only drawback I can fathom is a sometimes overly rigid administration and a difficulty in adapting to more challenging and demanding international situations that require experimental policies.
Q: As a global academic who has been at the spearhead of numerous pathbreaking international collaborations, what are your expectations from a partner university?
Prof. Girard: When one really goes for the genuinely global option (I mean not only collaborations based on bilateral, time-limited exchanges, but programmes with one model imposed on partner institutions), when one works hand in hand with an international team of collaborators who collegially design a programme, a partner university must be ready to embrace the unknown and evolve creatively beyond its own rigidities to accommodate the collective requirements of the collaborative programme.
Q: What role do you think the University plays in today’s changing world order? As a university professor of humanities, and a major administrator of international collaborations, what are the challenges that you face in the current academic and political scenario? How do you react to these challenges, and what do you do to negotiate or overcome them?
Prof. Girard: I think one should always remember Derrida’s final words on what a university is:
“The university is not simply a place to play, not just a playground. But in at least some of the places, within the university – the university is not a homogeneous field –, you can study without expecting any efficient or immediate result. You may search, just for the sake of searching, and try for the sake of trying. So, there is a possibility of what I would call playing. It is perhaps the only place within society where play is possible to such an extent.”
One must remain optimistic, whatever restrictions and budget cuts are imposed on scientists and researchers everywhere. The enemy is usually within: the tendency among today’s intellectuals working in universities to surrender to bean-counters’ demands rather than face these issues by asserting a vision of what knowledge is at a given time, and how it could expand and evolve. So let’s take the power and make administrators understand that ideas are the ingredients of the best recipes in further education; that is what professors are best at, isn’t it? Of course, this means that one has to forget one’s books, one’s own career and one’s own publications for a while to give future generations the means to renew the possibilities of spreading knowledge and a critical spirit on the world as we know it.
There is nothing new to it, but we must upgrade the Humanities, again and again.
Q: What vision do you have for the prospects of international collaboration in academics in a world that seems to be sinking daily, as it seems, into parochialisms and sectarianisms of various sorts?
Prof. Girard: Yes, “sinking” is the right and fatal word! Nobody could say it better than you do!
We have been speaking for too long about comparative literatures and post-colonial studies. Too much theory. The time has come for action! Such international collaborations are direct, existential, ontological (sometimes!) ventures into the unknown, or what some still want to call “otherness”. Reach out for your neighbours, reach out for strangers, and through transdisciplinarity and multiversity, intolerance, parochialism and obscurantism will vanish into thin air, naturally! Confronting one’s way of thinking with others’ does have an immediate impact on what you produce; otherwise you don’t even want to join the game!
I do not have a particular vision for the future. I simply want to keep my eyes open! Education at a global level can also be an act of resistance, and that’s where you get the kick out of such intellectual challenges.
Q: What is your advice to students of JNU in the current context? What academic dreams should they cherish? What goals should they aspire to?
Prof. Girard: My piece of advice to young people? I think you (and everybody, old and young) should google these words and add “W. Burroughs”. His advice to young people is sheer wisdom, and fun!
I am not young anymore, so I would not risk any “piece of advice” to young people. But I certainly believe we should all take the time to think about our physical and aesthetic relations to our immediate environment (natural, political, social, you name it). Being global does not mean being everywhere or anywhere, as in some sort of existential competition to “conquer” the whole world, but rather trying and testing various environments for a while and then seeing what happens. This is obviously even truer when it comes to the professional and intellectual environment. If knowledge does not make you happier, if it does not have any impact on your life, it means that what you have learnt is not what you aspire to! Change, accept and embrace changes around and in you. Big Data in the post-digital world takes care of the rest.
New Appointments
- Prof. Arun Sidram Kharat, School of Life Sciences as Acting Director, Internal Quality Assurance Cell.
- Prof. Ajay Kumar Dubey, Centre for African Studies, School of International Studies as Director, Energy Studies Programme, School of International Studies for a period of two years.
- Prof. Sudheer Pratap Singh, Centre for Indian Language, School of Language, Literature and Culture Studies as Advisor, Equal Opportunity Office, for a period of two years.
- The term of Dr. Mahesh Ranjan Debata, Centre for Inner Asian Studies, School of International Studies, as Proctor of the University has been extended for another two years.
Retirements/Resignations/Voluntary Retirements
- Prof. Ravi S. Srivastava, Centre for the Study of Regional Development, School of Social Sciences.
- Prof. Mohan Rao, Centre for Social Medicine and Community Health, School of Social Sciences.
- Prof. Ranabir Chakravarti, Centre for Historical Studies, School of Social Sciences.
- Prof. Anuradha M. Chenoy, Centre for Russian and Central Asian Studies, School of International Studies.
- Shri Jagdish Chander, Assistant Registrar, School of Computer and Systems Sciences.
- Shri Jagdish Kumar, Private Secretary, School of Physical Sciences.
- Shri Tawar Singh, Professional Assistant, Dr. B. R. Ambedkar Central Library.
- Shri Siddhu Prasad, Security Guard, Security Office.
- Shri Jagbir Singh, Security Guard, Security Office.
- Smt. Maya Devi, Safaikarmchari, Jhelum Hostel.
The following faculty members have been appointed as Wardens in hostels for a period of two years, effective from the date of joining.
| S.No | Name & Designation | School/Centre | Hostel | Period |
|------|------------------------------------|---------------|------------|---------------------------------------------|
| 1 | Dr. Nagendra Shreenivas | CRS/SLL&CS | Narmada Hostel | For 2 years vice Dr. Bimol Akojam |
| 2 | Dr. Suresh R. | CSRD/SSS | Jhelum Hostel | For 2 years vice Dr. Himanshu |
| 3 | Dr. Pravesh Kumar | CCP&PT/SIS | Narmada Hostel | For 2 years vice Dr. G. Srinivasan |
| 4 | Dr. Amit Kumar Mishra | SES | Jhelum Hostel | For 2 years vice Dr. N. Chandrasegaran |
| 5 | Dr. Gopal Ram | CL/SLL&CS | Kaveri Hostel | For 2 years vice Prof. Sujoy Chakravarty |
| 6 | Dr. Ramavtar Meena | SES | Sabarmati Hostel | For 2 years vice Prof. Arun Kumar Mohanty |
| 7 | Dr. Sanjeev Sharma | CSRD/SSS | Brahmaputra Hostel | For 2 years vice Dr. Kaustav Banerjee |
| 8 | Dr. Nemthiannagi Guite | CSM&CH/SSS | Ganga Hostel | For 2 years vice Dr. Mallarika Sinha Roy |
| 9 | Dr. Garima Dalal | LEC | Ganga Hostel | For 2 years vice Dr. Rita Sharma |
| 10 | Dr. Manuradha Chaudhary | CRS/SLL&CS | Tapti Hostel | For 2 years vice Dr. Rosina Nasir |
| 11 | Dr. Anuja | CSDE/SSS | Yamuna Hostel | For 2 years vice Dr. Archana Upadhyayay |
| 12 | Dr. Reeta Sony | CSSP/SSS | Sabarmati Hostel | For 2 years vice Dr. Sangeeta Dasgupta |
The QS World University Rankings 2018
The QS World University Rankings 2018 has ranked our university at 228 in the world in Arts and Humanities (the highest rank within India).
Dr. Rakesh Arya, CSRD, SSS has been awarded the Best Coordinator National Award for conducting e-learning courses of IIRS-ISRO for the year 2017, by the Indian Institute of Remote Sensing and the Indian Space Research Organisation, Govt. of India.
JNU Sports Office
Nidhi Mishra (PhD, CHS/SSS) participated in the 8th International Open Para Athletics Meeting held in the UAE from 18–20 March, 2018. She won two Bronze Medals, in the 100 m and shot put events. Our office provided her with all possible logistical and technical support. We are proud of you, Nidhi.
Jeetu Kanwar (1st Year PhD, CSMCH/SSS) has won three gold medals, in the 100 m, 200 m and long jump events, at the 14th National Para Athletics Championship for Cerebral Palsy, held at Patna on 22–23 March, 2018 and organized by the Indian Sports Federation of Cerebral Palsy. The JNU Sports Office has been providing her with all possible logistical and technical support. We are proud of you, Jeetu.
Campus Activities
CSSP, JNU ranks 11th amongst the Top S&T Think Tanks in the world
The Centre for Studies in Science Policy (CSSP) in Jawaharlal Nehru University ranks amongst the Top Science and Technology Think Tanks for 2017 around the world, according to an annual report compiled by the Think Tanks and Civil Societies Program (TTCSP) of the University of Pennsylvania, USA. The TTCSP records the role and evolution of public policy think tanks in governments and civil society across the world, with the objective of better integrating academic knowledge and think-tank institutions and creating a bridge between them. CSSP undertakes interdisciplinary academic research in science, technology and innovation studies (STIS) within the broad field of science policy studies.
**The TTCSP Report 2017**
Released on 31 January, 2018, the report assesses the performance of global think tanks based on a range of criteria, including outreach, academic rigour, management, and impact on public policymaking. The rankings are compiled using a participatory approach that calls upon over 6,000 think tanks across the world, together with an expert panel of judges, to submit nominations.
CSSP of JNU ranks 11th among 68 think tanks in the Top Science and Technology Think Tanks category (Table 24). CSSP is top-ranked in South Asia in this category, and has the best ranking among all Indian think tanks across the categories. CSSP has also found a place amongst the Best University Affiliated Think Tanks (Table 39) and Best Use of Social Media and Networks (Table 40) in the TTCSP Report. The rankings, in categories spread over 52 tables, include about 30 other Indian organizations, including The Energy and Resources Institute (TERI), Centre for Science and Environment (CSE), Institute for Defence Studies and Analyses (IDSA), Observer Research Foundation (ORF), Indian Council for Research on International Economic Relations (ICRIER), and Centre for Internet and Society (CIS). The report, released annually for over a decade, is part of a campaign by the TTCSP to increase the public visibility and performance of public policy think tanks globally and strengthen cross-regional collaboration. The TTCSP Report is freely available online at [http://bit.ly/2s5nHyq](http://bit.ly/2s5nHyq).
*Anup Kumar Das, Documentation Officer
Centre for Studies in Science Policy, SSS*
**Photo Exhibition**
On 5 February, 2018, the JNU Nature & Wildlife Club, in collaboration with the JNU Photography Club, successfully organised a photo exhibition, JNU Melange 2018, wherein the Nature & Wildlife Club displayed photographs of different events held in the previous semester, i.e. the Monsoon semester, 2017. A nature walk, a nature trek and a documentary screening were some of the major events, with a huge number of participants. The objective of organising such an event is to brief the student community about the Club’s activities and to promote the cultural clubs. The exhibition also had fascinating photographs of JNU’s flora and fauna, contributed by the emerging photographers of the JNU Photography Club. The exhibition drew many students as well as outsiders who had come especially to explore Melange. It was officially inaugurated in the presence of Rector-3, Prof. Rana Pratap Singh, accompanied by the Dean of Students, the Cultural Activities Coordinator and other officials. The exhibition attracted many students to join various clubs as per their areas of interest.
*Meeta Narain, Chairperson
Centre of Russian Studies, SLL&CS*
**A musical evening with Bombay Jayashree in JNU (Organized by JNU Cultural Committee & Spic Macay)**
JNU had the pleasure of hosting Bombay Jayashree, in association with the JNU Spic Macay chapter, on 24 February, 2018. Her magical voice was a real boon to hear, and the musical evening drew a huge crowd. The Academy Award-nominated vocalist made us realize how soothing a voice can be.
“Bombay” Jayashree Ramnath is an Academy Award-nominated Indian Carnatic music vocalist and music composer. She is a disciple of violin maestro Lalgudi Jayaraman. She has performed at various festivals and venues all across India and in over twenty different countries. Jayashree worked with Ang Lee on his motion picture, *Life of Pi*, singing “Pi’s Lullaby”, which was nominated for the 2012 Oscars in the Best Original Song category. She has also composed music for actor Revathi’s films Verukku Neer and Kerala Cafe. In 2004, Jayashree composed music for Silappadhikaaram, a dance drama commissioned by the Cleveland Cultural Alliance.
Another dimension of music on which Jayashree is focused is exploring the therapeutic and healing value that music can generate. She has been working closely with institutions like Kilikili and Sampoorna in Karnataka and Sankalp in Tamil Nadu, which care for autistic children. This domain is a matter of serious engagement for Jayashree and her students. Some other institutions that Jayashree has worked with include The Banyan, Chennai (rehabilitation of homeless/mentally challenged women), the Vasantha Memorial Trust (cancer patients), Stepping Stones Orphanage Home, Malaysia, the Multiple Sclerosis Society of India, Bangalore, and more.
We are very thankful to Spic Macay for providing us with the opportunity to host such an eminent artist. Many students and faculty, including the Vice-Chancellor, attended the programme, and the artists were felicitated by the VC. After the programme, there was a small question-and-answer session for the music enthusiasts, and a great interaction between the students and the artists delighted the audience further. Bombay Jayashree spoke about her training and experiences.
The programme was a grand success with the entire cultural team of JNU working under the guidance of the Cultural Activities Coordinator, Prof. Meeta Narain and the faculty members Dr. Bhaswati Sarkar, Dr. Sudesh Yadav, Dr. Arihant Kumar, and the student coordinators, Anupriya, Apoorva, Prashant, LakhyaJit, Rohit and Vipin.
We all had a great time working as a team and listening to Bombay Jayashree, and look forward to many more programmes in the future.
Anupriya
Convenor, Fine Arts Club
International Water Day Poster Making
On 22 March, 2018 the JNU Nature & Wildlife Club, in association with IHA, successfully organised a poster-making competition to observe World Water Day. The main objective of the event was to raise awareness of environmental protection, especially of water, among university students and to seek their viewpoints. Students from different schools of the university participated and made beautiful posters. After the competition, the Professor-in-charge of the Club, Dr. Sudesh Yadav, delivered a speech on World Water Day. This was followed by prize distribution by Prof. Meeta Narain, Cultural Activities Coordinator, and Dr. Sudesh Yadav. The JNU Nature & Wildlife Club would like to sincerely thank IHA and the Cultural Club coordinators for their cooperation and support, which made the event a success.
Meeta Narain, Chairperson
Centre of Russian Studies, SLL&CS
Summary Report of a Public Lecture organised by JNU Literary Club
The JNU Literary Club successfully held its proposed ‘Public Lecture’ in two sessions, titled ‘Human Trafficking in India & State-Civil Society Interface’ and ‘Political Activities and Associated Violation of Human Rights’, on 19 March, 2018 in association with the National Human Rights & Crime Control Bureau.
The panel in the first session had representation from the State, Civil Society and Academia. Whereas Prof Mondira threw light upon the theoretical aspects of human trafficking, Dr Meeran Borwankar spoke of the real-life hurdles that she had to face as a law enforcer. In the second session, Shri Rakesh Sinha spoke of the necessity of decolonizing the Indian mind, and of micro-narratives and inclusiveness.
**Meeta Narain, Chairperson Centre of Russian Studies, SLL&CS**
**Summary report of the event ‘जीवन का गणित: परिच्छेद और विमोचन’ organised by JNU Literary Club**
The JNU Literary Club, in association with the Indic Academy, organized a book launch event titled ‘जीवन का गणित: परिच्छेद और विमोचन’, for 'JIWAN KA GANIT', a poetry collection by Anikul, on 25 March, 2018. The Indic Academy seeks to build an intellectual, cultural and spiritual renaissance based on Indic civilizational thought. Smt. Meenakshi Lekhi, Hon’ble MP from New Delhi, graced the event as Chief Guest. The guests of honour included Prof A Ranganathan, novelist and Consulting Editor, Swarajya; Shri Surajit Dasgupta, former Editor, National Affairs, Swarajya; the author Lt. Col. (Retd.) Manish Jaitly; and Prof Rajesh Paswan from CIL, JNU. Meenakshi Lekhi recited the poem ‘Naxalwaad’ from the book and congratulated the author, Anikul.
**Meeta Narain, Chairperson Centre of Russian Studies, SLL&CS**
**JNU celebrated 127th Birth Anniversary of Dr BR Ambedkar**
JNU celebrated the 127th Birth Anniversary of Dr BR Ambedkar at the Convention Centre on 14 April, 2018 which was attended by a huge audience. The speakers on the occasion spoke of Dr Ambedkar’s unparalleled service to modern India.
Prof. RP Singh, Rector-III, JNU, initiated the discussion by saying that the lasting memorial to Baba Saheb Ambedkar would be a focus on educational reform and expansion in India. The Vice-Chancellor, Prof M Jagadesh Kumar, sat through the entire event and listened to the experts on Dr Ambedkar. Prof RK Kale, former VC of the Central University of Gujarat, said in his keynote address that, to fully realise Dr Ambedkar’s dream, each university in India should have an Engineering School. The JNU VC responded to this suggestion by informing the audience that JNU has started an Engineering School and that the first batch of students will join soon. He further said that Dr Ambedkar stands tall in JNU, in both senses of the term, because the University Library, the tallest building on the campus, was named after him last year, and because Dr Ambedkar placed great importance on books and libraries.
Prof Vivek Kumar, Ambedkar Chair at JNU, said Dr Ambedkar needed to be decoded in 21st-century India, because he was an inspiration for all sections of Indian society. The Chief Guest for the occasion, the famous agricultural scientist Dr RS Kureel, emphasised Indian modes of learning and thinking as the way to usher in real social change in our country.
The VC assured the audience that this event would be made a regular feature of JNU’s calendar.
**R.P. Singh, Rector III**
**Introduction of E-Rickshaw Service**
JNU initiated a landmark green project on campus on 17 April, 2018, introducing an E-Rickshaw Service to facilitate eco-friendly mobility for commuters. The service was flagged off by the Vice-Chancellor, Prof. M. Jagadesh Kumar. It will not only reduce pollution and congestion on the campus but also discourage people from using motorized vehicles in JNU.
E-Rickshaws will ply on three routes on the campus, connecting all residential areas, hostels, shopping complexes, the library, academic buildings and all the exit gates. For this facility the service provider will charge a low fare of Rs. 10 per ride, per person. To promote women’s entrepreneurship and a gender-friendly ambience on the campus, the Vice-Chancellor said that in the coming months all E-Rickshaws will be driven by female drivers.
**R.P. Singh, Rector III**
Third Annual Christopher Freeman Memorial Lecture on “Evolution from the Economics of Innovation to Economic Development”
The research students of the Centre for Studies in Science Policy (CSSP), in association with the 4th India LICS Conference and Training Workshop 2017, organized the Third Annual Christopher Freeman Memorial Lecture at the JNU Convention Centre on 5 November, 2017. A special lecture titled “Evolution from the Economics of Innovation to Economic Development” was delivered by Prof. Smita Srinivas, who is the Founder Director of the research platform Technological Change Lab (TCLab) and currently an Honorary Professor at the Indian Council for Research on International Economic Relations (ICRIER). The Chair of the session, Prof. Prajit K. Basu of the University of Hyderabad, introduced the speaker and briefly discussed the thematic area of the Lecture.
In the talk, Srinivas elaborated on her research interests in institutional explanations and plans for economic transformation and governance. Her recent work has analyzed gaps and tensions between the institutional and behavioural assumptions of evolutionary economics and those of ‘late’ industrial political economy and development economics. Her wider research interests include comparative development data, social policy, skills, moral philosophy, and value preferences in economics and governance. She described her experience in higher education reform initiatives in economics and policy-focused professional schools in the US, India, and East Africa. Srinivas has strong interests in problem-framing and problem-solving and in the use of heuristics from economic theory in realistic development plans and policy design. She further discussed how TCLab, which she founded, deploys a three-way research focus on economic theory, policy design, and realistic development plans; much work on economic development has tended to exclude one or more of these elements.
The Lecture attracted enthusiastic commentaries from the learned audience. Prof. Mammo Muchie of the Tshwane University of Technology in South Africa described how he benefitted personally and academically from Christopher Freeman, his doctoral supervisor and academic mentor. Christopher Freeman was instrumental in the formation of Globelics, a global research network for scholars of innovation studies. There were other discussants, such as Prof. Sujit Bhattacharya of CSIR-NISTADS and Prof. Rajeswari S. Raina of Shiv Nadar University. The participants of the India LICS Training Workshop also interacted with the speaker in this session to broaden their research perspectives and agendas. The 3rd Christopher Freeman Lecture concluded with a vote of thanks by Dr. Saradindu Bhaduri, Chairperson, CSSP. He thanked the participants and resource persons for the successful conclusion of the IndiaLICS Training Workshop 2017, which was jointly organized by CSSP, Jawaharlal Nehru University, and CSIR-NISTADS.
Anup Kumar Das, Documentation Officer
Centre for Studies in Science Policy, SSS
GIAN programme on “Cloud Data Center Service Provisioning: Theoretical and Practical Approaches”
The School of Computer and Systems Sciences organized a two-week Global Initiative of Academic Networks (GIAN) programme on “Cloud Data Center Service Provisioning: Theoretical and Practical Approaches” from 29 January to 9 February, 2018 in the seminar hall of the School.
The Resource Person for the course was Prof. Jemal H. Abawajy, Director and full Professor at the Distributed Systems and Security Cluster, Faculty of Science, Engineering and Built Environment, Deakin University, Australia. He is a Senior Member of the IEEE, the IEEE Technical Committee on Scalable Computing, the IEEE Technical Committee on Dependable Computing and Fault Tolerance, and the IEEE Communication Society. His leadership experience is extensive, spanning industrial, academic and professional areas (e.g., Academic Board, Faculty Board and Research Integrity Advisory Group). He has been actively involved in the organization of more than 300 national and international conferences in various capacities, including chair, general co-chair, vice-chair, best paper award chair, publication chair, session chair, and program committee member. Professor Abawajy has served on the editorial boards of numerous international journals and currently serves as associate editor of IEEE Transactions on Cloud Computing, the International Journal of Big Data Intelligence, and the International Journal of Parallel, Emergent, and Distributed Systems. He has also guest-edited many special issues of journals. He is the author/co-author of five books and more than 250 papers in conferences, book chapters, and journals such as IEEE Transactions on Computers and IEEE Transactions on Fuzzy Systems. Professor Abawajy has delivered numerous keynote addresses, invited seminars, and media briefings (e.g., on Voice of America’s English Radio).
The objective of the course was to enable the participants to have a thorough knowledge and practical experience in advanced resource management solutions and scheduling techniques for Cloud computing. The course examined, in depth, the critical resource management and scheduling features such as efficiency, scalability, reliability, energy conservation, and environmental impact as well as mechanisms for the implementation of resource management and scheduling policies for Cloud. Thus, this course enhanced the participants’ knowledge of the key functions in Cloud computing resource management and scheduling as well as equipped the participants with the skills and knowledge required for a career in Cloud computing or Cloud computing research and development. The knowledge gained in this course will enable attendees to perform deep and wide analysis of issues related to virtualized resources optimization, implement appropriate resource provisioning techniques, and perform high-quality research in the area.
D.P. Vidyarthi, Zahid Raza
Course Coordinators
“Interdisciplinary exchange on Russian culture”: an SAA and CRS collaboration
An event on the rich cultural heritage of Russia and India titled “Interdisciplinary exchange on aspects of Russian
Culture: an SAA and CRS Collaboration” was held on 10 February, 2017 in the SAA auditorium. The event was organized jointly by the Centre of Russian Studies, School of Language, Literature and Culture Studies and The School of Arts and Aesthetics. There were presentations and papers on cinema, folklore and popular culture of Russia and India.
The event was attended by a large group of students of graduate courses. The presentations were divided into three categories: Cinema, Folklore and Popular Culture of India and Russia.
Scholars participated actively to explore the different aspects of Russian cinema, be it the representation of communist/post-Soviet society, the depiction of women soldiers, or traditional folk heroes. There were papers on the comparative study of beliefs and superstitions as well as folk tales, and on Russian cuisine. An interesting presentation traced leftist ideologies within the popular visual culture of Bengal. The speakers were introduced by Dr. Suryanandini Narain, SAA. The presentations were as follows:
1. Dr. Sonu Saini: Comparing the Folktale ‘The fox and the crane’ through Russian and Indian Animation
2. Dr. Manuradha Chaudhary: Similarities and Differences in North Indian and Russian superstitions or beliefs
3. Konstantin: Cinema of Sergey Loban and Yuriy Bykov as a representation of the ideological and imaginary “cul-de-sac” of post-Soviet/communist Russian society in the 21st century
4. Sunita Acharya: Russian Cinema and Women Soldiers: An analysis of the contemporary film “Battalion”
5. Krishanu Nath: From an Artist’s Diary to the Red Dome of the Minar: Tracing Leftist Ideologies within the Popular Visual Culture of Bengal
6. Mona Agnihotri: Nasruddin in Cinema: Journey from a Folk Hero to a Film Hero!
7. Dr. Anju Mehta: A Russian Meal
The guests and audience members were welcomed by Dr. Richa Sawant, CRS. The event was inaugurated by the Dean of SLL&CS, Prof. Rekha Rajan; present on the occasion were the Dean of SAA, Prof. Bishnupriya Dutt, and the Chairperson of CRS, Prof. Meeta Narain. Addressing the gathering, Prof. Rekha Rajan said that such an event was the first of its kind. She emphasized the commonalities between the two schools and said that these should be explored further. She especially encouraged research scholars to continue their studies in interdisciplinary areas.
Speaking at the event, Prof. Bishnupriya Dutt reminded the audience of how Russian theatre has, over the decades, brought together the cultures of Russia and India. This is also reflected in the ties between CRS and SAA, which go back a long time, she added. Chairperson CRS Prof. Meeta Narain said that such an event provides a platform for the exchange of ideas, especially for research scholars: it encourages in-house potential and gives scholars an opportunity to express ideas, which, at times, is not possible in large conferences.
After the presentations there was a cultural programme organized by members of the cultural committee. The students performed folk dances of Russia and sang popular songs from Russian films. Postgraduate students from the Institute of Asian and African Studies, Moscow State University, who were in JNU on an exchange programme, also participated in the event, reading poems and singing songs in Hindi.
Richa Sawant, CRS & Suryanandini Narain, SAA, Conveners
National Science Day 2018
National Science Day was organized in JNU, jointly with the Department of Science and Technology, on 28 February, 2018 to commemorate the discovery of the Raman Effect by the great Indian physicist Sir C. V. Raman. The theme for National Science Day 2018 was “Science and Technology for Sustainable Future”. The event was inaugurated by Prof. M. Jagadesh Kumar, Vice-Chancellor, JNU, with a scientific talk on “Technology for a sustainable future”. The occasion was graced by the presence of Prof. Ashutosh Sharma, Secretary, DST, as Chief Guest, and Sh. Anil Manekar, Director General, National Council of Science Museums, as keynote speaker. Prof. Ashutosh Sharma stressed the need for and importance of science and technology for a sustainable future. Shri Anil Manekar emphasized the significance of effective science communication in India. Dr. Chander Mohan, Scientist G, DST, also graced the occasion with his presence. The inaugural session was followed by sessions of scientific talks by speakers from 8 different science schools and centres.
Session I, chaired by Prof. S.C. Garkoti, Rector II, JNU,
included Dr. Ved Prakash Gupta from the School of Physical Sciences on "Matrices: From finite to infinite"; Dr. Satyendra Sharma from the Special Centre of Nanosciences on "Ferroelectric materials for future electrocaloric refrigeration applications"; Prof. AL Ramanathan from the School of Environmental Sciences on "Glacier and water resource management in Himalayas, India"; and Prof. S. Balasundram from the School of Computer and System Sciences on "On robust support vector regression with asymmetric Huber loss". Session II, chaired by Prof. P.K. Dhar, School of Biotechnology, comprised scientific presentations by Dr. Shailja Singh, Special Centre of Molecular Medicine, on "Cellular and molecular basis of host pathogen interactions: Unfolding new definitions towards therapeutic intervention strategies"; Dr. Ranjana Arya, School of Biotechnology, on "Understanding pathomechanism of rare genetic disorder altering sialic acid levels"; Dr. Karunakar Kar, School of Life Sciences, on "Targeting protein aggregation through nanoparticle-based inhibitors"; and Dr. Arnab Bhattacharjee, School of Computer and Integrative Sciences, on "Dancing on DNA: How proteins scan their target sites".
More than 200 posters by students and faculty from across the science schools and centres were displayed in the poster session. The posters were judged by a panel of judges, and best poster awards were presented. Almost 950 participants were registered from JNU and other universities across the Delhi/NCR region. The valedictory session was conducted by Prof. Rana P. Singh, Rector-III, with the distribution of the poster awards.
The event was partially sponsored by Vigyan Prasar, DST, in addition to DST-PURSE II and the Office of Research & Development, JNU.
Ranjana Arya, Co-ordinator National Science Day 2018
Fifth Gunakar Muley Memorial Lecture Organized
The fifth Gunakar Muley Memorial Lecture was organized in the University on 8 March 2018 in Lecture Hall-3 of the Convention Centre. Prof. Chandan Choubey, University of Delhi, graced the occasion as Chief Guest, and Prof. Nand Kishore Pandey, Director, Central Institute of Hindi, Agra, as Guest of Honour. As every year, Smt. Shanti Muley, wife of the late Gunakar Muley, also graced the programme with her presence.
The programme was anchored by Dr. Poonam Kumari, Associate Professor, and the welcome address was delivered by the Registrar of JNU, Dr. Pramod Kumar. In his address, Prof. Chandan Choubey emphasized the changing form of Hindi and its importance in changing times, and through several anecdotes discussed the growing reach of modern Hindi. In his insightful address, Prof. Nand Kishore Pandey not only elaborated on Hindi's role as the official language but also described its wider dimensions through its linguistic, literary and cultural aspects. He observed that learning English or any other foreign language is a good thing, but abandoning one's own language to extol an outside language is not. Presenting a comparative account of students who take examinations in Indian languages and those who take them in English, he highlighted the importance of Indian languages and of Hindi.
At the end of the programme, the vote of thanks was delivered by Prof. Sudheer Pratap Singh, who expressed gratitude to all those who had contributed, directly or indirectly, to the success of the programme. The programme concluded with refreshments.
Sumer Singh, Assistant Director
(Official Language), JNU
ICSSR-Harvard University-Indian Oil Corporation Ltd.-NIDM Collaborative Two Week Workshop on 'Social Sciences Approach and Institutional Decision Making in Disaster Research'
The transdisciplinary Special Centre for Disaster Research held its first specialized long-duration capacity-building workshop (5–18 February) for Assistant and Associate Professors drawn from across the country, in collaboration with and with the support of the ICSSR. The workshop began by building an understanding of the need for transdisciplinarity in the social sciences, so that research could be more holistic, broad-based and reflective of the real needs of people and institutions. The sessions focused on the various aspects embedded within the “Social Science Approach to Disaster Research”, such as human geography, geo-information systems, institutional ability to absorb and create resonance with people's requirements, the economics of disasters, and institutional and leadership effectiveness. The role of youth during the stages of preparedness and rescue was much discussed and debated, with strong demands for sensitizing educational curricula and teaching methods. The last three days lifted the discussion from the threshold achieved through the social sciences approach to its translation into “Reinforcing Institutional Decision Making in Disaster Preparedness and Mitigation”. The international segment was designed and structured in collaboration with experts from the Crisis Management Programme of the Kennedy School of Government at Harvard University. Experts from many other countries, such as Bangladesh, Sri Lanka, the Philippines and Japan, also participated to share their work in the field. The workshop also generated arguments and concerns to be taken to the forthcoming experts' meeting of the International Network of Disaster Studies at Iwate University in Morioka, Japan, in July. The discussion ignited innovative ideas which strengthened the base for the Centre's forthcoming MA and Ph.D. programmes. The first segment of the workshop was coordinated by Prof. Milap Punia (Director) and Prof. B.S. Waghmare (Co-Director); the second, international segment was coordinated by Dr. Sunita Reddy (Director). The overall coordination and theme setting was done by Prof. Amita Singh (Chairperson, SCDR) with her extremely vibrant and passionate group of research students drawn from across the university.
The workshop had a threefold focus:
(1) Decision Making, Law and Institutional Capacity
(2) Vulnerable Communities, Livelihood and Role of Industry
(3) GIS, ICT and Artificial Intelligence
**Fourfold Objective of the Workshop:**
Many issues emerged from the workshop, and greater clarity ensued on the direction of disaster studies. Some of the major directions which became visible during the two-week-long discussion were:
1. Improve and strengthen the Disaster Management Act 2005, legal framework of accountability and institutional functioning. A detailed report is being published separately.
2. Longitudinal ethnographic studies, need assessment, monitoring and evaluation of existing welfare programmes. A closer look at District- and Panchayat-level work should be undertaken on the one hand, and intensive collaboration with UN agencies is required on the other.
3. Making disaster economics the base, compensation on measurement of damages and losses be linked to human rights. A textbook on ‘Disaster Economics’ is being published for students and researchers by Palgrave-Springer.
4. New and innovative appropriate methodological tools and techniques to be taught to stakeholders in administration and community institutions to reduce losses and damages. A theme focused three day workshop will be organized in the coming month in collaboration with the IIT Delhi and SDMAs of some states.
5. Ethics of decision making, going beyond sensitive and compassionate administration, in which the justice administration and police ought to be trained. It was decided to write to institutions and, in doing so, to collaborate with the ICMR on ethical guidelines for social-science-based field surveys in disaster research.
6. Hands-on skills in quantitative and qualitative analytical tools and packages like SPSS and N-Vivo. Tools like remote sensing, GIS mapping, disaster warning systems and disaster-preventive engineering retrofitting methods are to be understood. A holistic collaboration with the Ministry of Earth Sciences and remote sensing institutions would be the answer to sustainable policies.
**Workshop inauguration:**
The workshop was inaugurated on 5 February at 10 am with a lamp-lighting ceremony. The inaugural speech was delivered by the Vice Chancellor of JNU, Prof. M. Jagadesh Kumar, highlighting the need for institutional collaboration, mutual learning and the hand-holding of grassroots communities. He also emphasized that transdisciplinarity requires new bridges of understanding to be built between science, social science and the humanities. Given the speed with which the world is changing, he expressed hope that a proper stitch in time can save higher education from repetitive and overlapping research. ICSSR Member Secretary Prof. V.K. Malhotra interacted with the participants and generated a lively debate on many issues concerning disaster studies, new research areas and proposal evaluation at the ICSSR. It was delightful news from him that the ICSSR list of areas for research funding already includes disaster studies in the social sciences. After him, the Rector, Prof. Chintamani Mahapatra, presented and shared data on disasters and the concomitant complexity of disaster mitigation, besides an overlapping commonality across the world. He compared human and governmental responses to some of the major disasters across the world and the yet-to-be-explored connection between development and disasters. The Vice Chancellor also released a 12-minute documentary film on the Special Centre for Disaster Research highlighting the focus and content of the new centre.
**Participants & Experts:**
The workshop drew 20 committed scholars from the Asia Pacific Governance Network, NAPSIPAG, located in Sri Lanka.
The workshop also brought together 36 participants from across the country. Around 20 participants represented institutions in remote and fragile geographical regions in Jammu & Kashmir, Kerala, Tamil Nadu, Andhra Pradesh, Odisha, Bihar, Puducherry, Maharashtra, Himachal Pradesh, Uttar Pradesh, Rajasthan, Haryana and Uttarakhand.
**Special sessions:**
The workshop had two special and unique features. The first was an open-house discussion on the rights of nonhumans in the Disaster Management Act 2005, coordinated by former NDMA Member Shri K.M. Singh, IPS (Retd.), and Mr. Gajendra Sharma (World Animal Protection). The award-winning film by Louie Psihoyos, ‘Racing Extinction’, was screened. The film educated the audience on the need to recognize the rights of nonhuman species for two reasons: first, they share this earth with the stronger species, humans, as their intrinsic right; second, by protecting them we protect ourselves from destruction, as the film documents how they conserve forests above the ground and on the ocean bed, which together provide us more than 80% of our oxygen. The film also showed how changes in lifestyle may stave off the calamities of climate change. However, several hurdles have to be crossed in this debate, such as human intransigence, greed, ignorance and insensitivity. A profound neglect of nonhuman rights in the DMA 2005 prevents a holistic policy for protecting life during disasters.
The second unique session was a visit to the urbanized villages in Hauz Khas, followed by a daylong walk through Chandni Chowk. The group was divided into three sub-groups exploring the heritage, food market and craft market communities, after which a charged discussion followed on how one can plan resilience during disasters in such a highly vulnerable area of crumbling heritage buildings, with no option of displacement, where the density and diversity of habitation are the highest in the city. There was concern, but also a strongly felt need for electricity wiring, drainage and ambulance pathways as pre-emptive preparedness measures for disasters.
A quiz and round-table discussions were also held, with Prof. Amita Singh (Chairperson, SCDR) as coordinator.
**Collaboration and Networking Meet on 16th evening:**
Since no transdisciplinary research is possible without discovering willing and equal collaborators, the Indian Oil Corporation Ltd., sharing sponsorship with the ICSSR, enabled a widely attended session on ‘Culture, Industry and Disasters’ at the Ashoka Hotel. The session started with a panorama of Indian cultural diversity, followed by group meetings, dinner-table discussions and information sharing.
**Outcomes of the Workshop:**
1. An archive of information, housed in a kiosk/dashboard, to be constantly fed through field research rather than replicating information already available on other government sites.
2. Course curriculum design which can now set priorities and linkages of teaching various aspects of disasters.
3. A strategy for transdisciplinary collaborations and working together.
4. A scheme of amendment to the Disaster Management Act of 2005.
5. Trained leaders for starting courses/centres and new papers on the ‘social science perspective of disaster management’ at their respective institutions.
---
**Conference organized by the School of Physical Sciences: SPS March Meeting on “Responsive Molecules and Materials”**
The School of Physical Sciences, JNU has a long-standing tradition of organizing a conference every year, known as the March Meeting. This year, the meeting was held on 16–17 March 2018 on the theme of “Responsive Molecules and Materials”. Responsive molecules and materials offer innumerable potential applications in fields such as energy, catalysis and medicine. We widened the scope of the conference by including responsive materials covering various fields in chemistry, such as physical and biophysical chemistry, crystal engineering, newer synthetic methodology, materials and polymer chemistry, computational chemistry, inorganic and bio-inorganic chemistry, supramolecular chemistry, biomaterials and medicinal chemistry. Experts from prestigious institutes across the country were invited to deliver talks at this meeting. The aim of the conference was to motivate the M.Sc. and Ph.D. students of SPS and enable them to learn about new science from experts in their field. The conference provided a great opportunity for SPS students and faculty to interact with eminent scientists in these areas and to understand recent developments in responsive molecules and materials. A popular-level talk was also delivered on the use of responsive molecules in medicinal chemistry and of biomaterials in drug delivery. The invited speakers at this year’s March Meeting were: T. P. Radhakrishnan (University of Hyderabad), C. Malla Reddy (IISER Kolkata), Debabrata Maiti (IIT Bombay), Tushar Jana (University of Hyderabad), Sasanka Deka (University of Delhi), Biswarup Pathak (IIT Indore), G. Mugesh (IISc Bangalore), Joyanta Choudhury (IISER Bhopal), Gouriprasanna Roy (Shiv Nadar University), V. G. Anand (IISER Pune) and Amit Dinda (AIIMS New Delhi). In addition, Kedar Singh, Supriya Sabbani and Manoj Munde from the School also presented talks. The students of SPS presented their research work in posters.
A poster competition was also organized for the students of SPS and four prizes were awarded for the best poster presentation.
---
**Amita Singh, Chairperson**
**Special Centre for Disaster Research**
**Dinabandhu Das, Pijus Kumar Sasmal,**
**School of Physical Sciences, Coordinators**
Based on a conversation with Varsha Sharma, Technical Officer, School of Life Sciences
Question – When did you come to JNU?
Smt. Sharma – I came to JNU in 1981, on a post-doctoral (CSIR) position. I then worked as a Research Associate and on a Women Scientist position, working in research as a neuroscientist. In 2006 I served as Warden, and in 2008 I took up the post of Technical Officer. I retired in 2017.
Question – What was JNU like in your time?
Smt. Sharma – It was very good. Everyone was very helpful and never let you face any difficulty; everyone lived with a feeling of family. All of JNU's Vice Chancellors helped me greatly. From Prof. Rameshwar Singh, my supervisor, I received full support and freedom, and Prof. Kale and Prof. Rajendra Prasad helped me like elder brothers.
Question – What kind of change do you see in JNU today?
Smt. Sharma – With time, much has changed. Where studies are concerned, some students are falling prey to politics, while others are still working to raise JNU's standing, and some remain devoted, body, mind and soul, to shaping students' lives.
Question – What impact has JNU had on your life?
Smt. Sharma – JNU changed the very meaning of my life. Through good company it taught me lessons of humanity and compassion, and I received the affection and protection of students and teachers. It taught me always to consider the other side's point of view before taking any decision, and it taught me tact, the meaning of social relationships, and how to reason and debate with discernment.
Question – Is there any experience connected with JNU that you would like to share with us?
Smt. Sharma – There are many experiences, but on this occasion my only prayer is this: follow your own dharma, do not play wrong politics, do not trap anyone in false accusations, and do not inflict mental harassment with baseless and false charges. Implement what the staff policy says, make staff families happy and prosperous, and inspire them to move ahead.
Question – Any message for the readers of JNU News?
Smt. Sharma – My request to readers is not to be swayed by others but to decide right and wrong with their own judgement. Observe carefully the events and incidents around you, and help JNU move forward. Do whatever you can according to your circumstances and capacity. Jai Hind!
Sports in JNU (1986)
A JNU girl student doing a long sling on an overhanging rock
Trekking in Jammu & Kashmir
Intra Hostel Chess Championship (Kaveri)
JNU Open Table Tennis Championship
Winners of Broad Jump
Winners of High Jump
Our Publications
“Non-discrimination and Equality in India: Contesting Boundaries of Social Justice” by Prof. Vidhu Verma, Centre for Political Studies, SSS. Published by Routledge, Contemporary South Asia Series. ISBN No. 9780415677752.
“Indian philosophy in a nutshell”, by Prof. C. Upender Rao, School of Sanskrit and Indic Studies.
“Buddha Stutih”, by Prof. C. Upender Rao, School of Sanskrit and Indic Studies. Published in Phnom Penh, Cambodia.
“Unequal Worlds Discrimination and Social Inequality in Modern India” by Prof. Vidhu Verma, Centre for Political Studies, SSS. Published by Oxford University Press. ISBN No. 9780199453283
“Srivijaya Kavirajamargam: The Way of the King of Poets”, by Prof. R.V.S. Sundaram, The Kannada Language Chair, Centre of Indian languages, SLL&CS and Dr. Deven M. Patel, South Asia Studies – Published by Manohar. ISBN No. 9789350981764
Obituary
Kedarnath Singh, Professor Emeritus at JNU and Senior Hindi Poet, Passes Away
Kedarnath Singh, a leading figure of contemporary Hindi poetry and a prominent poet of the ‘Tisra Saptak’ edited by Agyeya, passed away at the All India Institute of Medical Sciences on 19 March 2018. He was 84, and had been unwell for a long time. Born on 1 July 1934 in Chakia village of Ballia district, Uttar Pradesh, Kedarnath Singh completed his MA in Hindi from Banaras Hindu University in 1956 and obtained his doctorate in 1964. He served as Professor, Chairperson and Professor Emeritus at the Centre of Indian Languages, Jawaharlal Nehru University.
In 2013 he was conferred the 49th Jnanpith Award by the Bharatiya Jnanpith, the tenth Hindi writer to receive it. He was also honoured with the Maithilisharan Gupt Samman, the Kumaran Asan Award, the Jeevan Bharati Samman, the Dinkar Puraskar, the Sahitya Akademi Award and the Vyas Samman, among others.
Prof. Arun Kumar Mohanty, Centre for Russian and Central Asian Studies, School of International Studies, passed away on Saturday, 3 February, 2018.
Dr. Navneet Sethi, Associate Professor, Centre for English Studies, School of Language, Literature and Culture Studies, passed away on 31 March 2018. A Memorial Meeting was held at the Centre on 6 April 2018, where her current and former students, her family members, her colleagues at the Centre and from other schools of the University, as well as members of the differently-abled community of the University paid emotional tribute to her departed soul.
Shri Pawan Kumar Shrivastava, DG, Bhopal
Aparaajita: How would you describe your association with JNU and your first impressions of the campus?
Shri Pawan Shrivastava: I arrived at JNU in the summer of 1990. Having spent close to six years in Allahabad, coming from a middle-class background, I was familiar with the caste system and the social and cultural hierarchy prevalent in UP, where even a student one year senior was a ‘senior’ and a ‘sir’. Mostly, male students were friends with males and female students with females (especially as the girl students were housed in two distant and well-guarded hostels). Coming to JNU was a shock. I had to share my room with a rank youngster (a BA 1st year student). Neighbours were not automatically friends, and girls and boys lived in different hostels but within the same campus and were free to roam at will. The mess bearers who served food to us would share the same table with us at the end of dining time. Community living in a hostel generated a very strong bond among the student fraternity of the same hostel; student life was quite egalitarian, yet aloof. I met old student friends from Allahabad and made many new ones. The campus was huge, the buildings were all alike, there was a strong students' union, and the bougainvillea was in full bloom. I started liking JNU.
Aparaajita: Why did you choose JNU for higher studies?
Shri Pawan Shrivastava: I was at Allahabad and wanted to pursue research in Sedimentology, a subject that was not available at Allahabad. I already had a UGC-CSIR scholarship and learned that Prof Asthana teaches sedimentology in SES, JNU. I applied and secured admission in M.Phil. in Environmental Sciences.
Aparaajita: What has been the trajectory of your career since you left JNU?
Shri Pawan Shrivastava: Even though, after my M.Phil., one of my professors offered to arrange a Ph.D. for me at a US university, and despite the fact that I had already been selected for the Geological Survey of India and was a CSIR scholar, I plunged into the fascination of the Civil Services and was appointed to the Indian Police Service in 1992. I have been in the same profession ever since, and on a path of progression.
Aparaajita: How has JNU impacted your career?
Shri Pawan Shrivastava: The biggest impact JNU made was exposure to the big wide world. Moving from a very old small-town university to a new but major university in the capital was a huge change. JNU taught me extreme independence, and it gave me friends who are similarly placed. It has a strong, ever-helpful alumni network, which certainly makes the job easier.
Aparaajita: What has been the most challenging part of your work and why?
Shri Pawan Shrivastava: In a career now spanning 25 years, there have been extreme challenges. During the initial years I was posted in a naxalite area, where keeping the morale of the police force high and combating the naxals was the greatest challenge. Police work throws up challenges on a regular basis, whether simple or grievous crimes, a sudden law-and-order problem, traffic, or daily dealings with the public. You realize that there is so much inequality in society and so many grievances, and I take immense satisfaction in attempting to resolve them. I also had the opportunity to contemplate, create and complete two major projects. The biggest compliment I received was from a senior IAS officer who observed that I could execute what he could only think of.
Aparaajita: Would you like to share any special memories that you associate with your time in JNU?
Shri Pawan Shrivastava: We had our share of sorrows and laughter while we were at JNU. I vividly recall a friend, Devesh, a short-framed bodybuilder, who came running to me and Ashutosh at Kaveri hostel one day, saying that a film shooting was going on at Godavari Dhaba and urging us to come and watch. Ashutosh had a different idea and said, “शूटिंग चल रही है तो हमऊँ एक्टिंग करवा” (“If a shooting is going on, then get me some acting too”). He was wearing jeans and a kurta with black shades. Impromptu, we created an act in which he became a don and we his sidekicks, and at Godavari Dhaba we cornered the director into giving him a role. The act worked, and the director did shoot a scene with him. Whether it was aired or not we do not know, nor did we bother to find out.
Aparaajita: What would your message be to the readers of JNU NEWS and the students of JNU?
Shri Pawan Shrivastava: JNU is a great leveller amongst the student fraternity. It gives tremendous exposure to them at all levels, especially in academics. If students seriously resolve to achieve anything in their life, JNU is the most apt springboard for them.
A university stands for humanism, for tolerance, for reason, for the adventure of ideas and for the search of truth. It stands for the onward march of the human race towards ever higher objectives. If the universities discharge their duties adequately, then it is well with the Nation and the People.
Jawaharlal Nehru
E-Rickshaw in Campus
Editorial Board
Chairperson: Prof. Saugata Bhaduri, CES/SLL&CS, Members: Dr. Shobha Sivasankaran, CF&FS/SLL&CS, Dr. Rohini Muthuswami, SLS, Dr. Asutosh Srivastava, SC&SS, Dr. Priti Singh, CCUS & LAS/SIS, Dr. Amit Thorat, CSRD/SSS, Dr. Ganga Sahay Meena, CIL/SLL&CS, Ms. Ritu Nidhi, CIS,
Member Secretary: Ms. Poonam S. Kudaisya, PRO, Photos by: Sh. Vakil Ahmad
Published by: POONAM S. KUDAISYA, Public Relations Officer for and on behalf of the Jawaharlal Nehru University, New Delhi-110067, Tel.: 26742601, 26704046, 26704017, JNU Website: http://www.jnu.ac.in
Printed by: MAXCOMM INDIA PVT LTD, C-17, Patparganj Industrial Area, Delhi-110092, Phone: 011-43631789, Mob: 8750454545
A narrative of the death of Captain James Cook. To which are added some particulars, concerning his life and character, and observations respecting the introduction of the venereal disease into the Sandwich Islands / By David Samwell, surgeon of the Discovery.
Contributors
Samwell, David, -1799.
Robinson, George, 1736-1801.
Publication/Creation
London : G.G.J. and J. Robinson, 1786.
Persistent URL
https://wellcomecollection.org/works/kqx629p6
License and attribution
This work has been identified as being free of known restrictions under copyright law, including all related and neighbouring rights and is being made available under the Creative Commons, Public Domain Mark.
You can copy, modify, distribute and perform the work, even for commercial purposes, without asking permission.
Digitized by the Internet Archive in 2017 with funding from Wellcome Library
https://archive.org/details/b28754955
A NARRATIVE OF THE DEATH OF CAPTAIN JAMES COOK.
TO WHICH ARE ADDED SOME PARTICULARS, CONCERNING HIS LIFE AND CHARACTER, AND OBSERVATIONS RESPECTING THE INTRODUCTION OF THE VENEREAL DISEASE INTO THE SANDWICH ISLANDS.
BY DAVID SAMWELL, SURGEON OF THE DISCOVERY.
LONDON:
PRINTED FOR G. G. J. AND J. ROBINSON, PATER-NOSTER-ROW. MDCCLXXXVI.
TO those who have perused the account of the last voyage to the Pacific Ocean, the following sheets may, at first sight, appear superfluous. The author, however, being of opinion, that the event of Captain Cook's death has not yet been so explicitly related as the importance of it required, trusts that this Narrative will not be found altogether a repetition of what is already known. At the same time, he wishes to add his humble testimony to the merit of the account given of this transaction by Captain King. Its brevity alone can afford an excuse for this publication, the object of which is to give a more particular relation of that unfortunate affair, which he finds is in general but imperfectly understood. He thinks himself warranted in saying this, from having frequently observed, that the public opinion seemed to attribute the loss of Captain Cook's life, in some measure, to rashness or too much confidence on his side; whereas nothing can be more ill-founded or unjust. It is, therefore, a duty which his friends owe to his character, to have the whole affair candidly and fully related, whatever facts it may involve, that may appear of a disagreeable nature to individuals.
The author is confident, that if Captain King could have foreseen, that any wrong opinion respecting Captain Cook, would have been the consequence of omitting some circumstances relative to his death; the good-natured motive that induced him to be silent, would not have stood a moment in competition with the superior call of justice to the memory of his friend. This publication, he is satisfied, would not have been disapproved of by Captain King, for whose memory he has the highest esteem, and to whose friendship he is under many obligations. He is sanguine enough to believe, that it will serve to remove a supposition, in this single instance, injurious to the memory of Captain Cook, who was no less distinguished for his caution and prudence, than for his eminent abilities and undaunted resolution.
The late appearance of this Narrative has been owing to the peculiar situation of the writer, whose domestic residence is at a great distance from the metropolis, and whose duty frequently calls him from home for several months together. He has the pleasure of adding, that, in publishing the following account of Captain Cook's death, he acts in concurrence with the opinion of some very respectable persons.
NARRATIVE
OF THE
DEATH
OF
CAPTAIN COOK.
IN the month of January, 1779, the Resolution and Discovery lay about a fortnight at anchor in the bay of Kerag,e,goo,ah *, in the Island of Ou,why,ee. During that time, the ships were most plentifully supplied with provisions by the natives, with whom we lived on the most friendly terms. We were universally treated by them with kind attention and hospitality; but the respect they paid to Captain Cook, was little short of adoration. It was, therefore, with sentiments of the most perfect good-will
* I take it for granted, that most of those into whose hands these pages may fall, have perused Captain Cook's last Voyage, and therefore, I have all along mentioned the names of the principal actors in this account, as people with whom they are already acquainted. But as I differ so much in the orthography of the language of the Sandwich Islands from that used in the printed Voyage, it becomes necessary for me to explain the names I use in this narrative, by those already known. It may appear strange, how we should differ so much; but so it is:—which is the most accurate, some future visitor may determine.
Karakakooa I call Ke,rag,e,goo,ah,
Terreeoboo — Kariopoo,
Kowrowa — Kavaroah,
Kaneecabareea — Kaneckapo,herei,
Maiha Maiha — Ka,mea,mea.
towards the inhabitants, that we left the harbour, on the fourth of February. It was Captain Cook's intention to visit the other islands to leeward, and we stood to the westward, towards Mowee, attended by several canoes full of people, who were willing to accompany us as far as they could, before they bade us a final adieu.
On the sixth, we were overtaken by a gale of wind; and the next night, the Resolution had the misfortune of springing the head of her foremast, in such a dangerous manner, that Captain Cook was obliged to return to Keragegooah, in order to have it repaired; for we could find no other convenient harbour on the island. The same gale had occasioned much distress among some canoes, that had paid us a visit from the shore. One of them, with two men and a child on board, was picked up by the Resolution, and rescued from destruction: the men, having toiled hard all night, in attempting to reach the land, were so much exhausted, that they could hardly mount the ship's side. When they got upon the quarter-deck, they burst into tears, and seemed much affected with the dangerous situation from which they had escaped; but the little child appeared lively and cheerful. One of the Resolution's boats was also so fortunate as to save a man and two women, whose canoe had been upset by the violence of the waves. They were brought on board, and, with the others, partook of the kindness and humanity of Captain Cook.
On the morning of Wednesday, the tenth, we were within a few miles of the harbour; and were soon joined by several canoes, in which appeared many of our old acquaintance, who seemed to have come to welcome us back.
back. Among them was Coo,aha, a priest: he had brought a small pig, and some cocoa nuts in his hand, which, after having chaunted a few sentences, he presented to Captain Clerke. He then left us, and hastened on board the Resolution, to perform the same friendly ceremony before Captain Cook. Having but light winds all that day, we could not gain the harbour. In the afternoon, a chief of the first rank, and nearly related to Kariopoo, paid us a visit on board the Discovery. His name was Ka,mea,mea: he was dressed in a very rich feathered cloke, which he seemed to have brought for sale, but would part with it for nothing except iron daggers. These, the chiefs, some time before our departure, had preferred to every other article; for having received a plentiful supply of hatchets and other tools, they began to collect a store of warlike instruments. Kameamea procured nine daggers for his cloke, and being pleased with his reception, he and his attendants slept on board that night.
In the morning of the eleventh of February, the ships anchored again in Keragegooah bay, and preparation was immediately made for landing the Resolution's foremast. We were visited but by few of the Indians, because there were but few in the bay. On our departure, those belonging to other parts, had repaired to their several habitations, and were again to collect from various quarters, before we could expect to be surrounded by such multitudes as we had once seen in that harbour. In the afternoon, I walked about a mile into the country, to visit an Indian friend, who had, a few days before, come near twenty miles, in a small canoe, to see me, while the ship lay becalmed.
As the canoe had not left us long before a gale of wind came on, I was alarmed for the consequence: however, I had the pleasure to find that my friend had escaped unhurt, though not without some difficulties. I take notice of this short excursion, merely because it afforded me an opportunity of observing, that there appeared no change in the disposition or behaviour of the inhabitants. I saw nothing that could induce me to think, that they were displeased with our return, or jealous of the intention of our second visit. On the contrary, that abundant good nature which had always characterised them, seemed still to glow in every bosom, and to animate every countenance.
The next day, February the twelfth, the ships were put under a taboo, by the chiefs, a solemnity, it seems, that was requisite to be observed before Kariopoo, the king, paid his first visit to Captain Cook, after his return. He waited upon him the same day, on board the Resolution, attended by a large train, some of which bore the presents designed for Captain Cook, who received him in his usual friendly manner, and gave him several articles in return. This amicable ceremony being settled, the taboo was dissolved, matters went on in the usual train, and the next day, February the thirteenth, we were visited by the natives in great numbers; the Resolution's mast was landed; and the astronomical observatories erected on their former situation. I landed, with another gentleman, at the town of Kavaroah, where we found a great number of canoes, just arrived from different parts of the island, and the Indians busy in constructing temporary huts on the beach, for their residence during the stay of the ships. On our return on board the Discovery, we learned, that an Indian had
been detected in stealing the armourer's tongs from the forge, for which he received a pretty severe flogging, and was sent out of the ship. Notwithstanding the example made of this man, in the afternoon another had the audacity to snatch the tongs and a chisel from the same place, with which he jumped overboard, and swam for the shore. The master and a midshipman were instantly dispatched after him, in the small cutter. The Indian seeing himself pursued, made for a canoe; his countrymen took him on board, and paddled as swift as they could towards the shore; we fired several muskets at them, but to no effect, for they soon got out of the reach of our shot. Pareah, one of the chiefs, who was at that time on board the Discovery, understanding what had happened, immediately went ashore, promising to bring back the stolen goods. Our boat was so far distanced, in chasing the canoe which had taken the thief on board, that he had time to make his escape into the country. Captain Cook, who was then on shore, endeavoured to intercept his landing; but, it seems, that he was led out of the way by some of the natives, who had officiously intruded themselves as guides. As the master was approaching near the landing-place, he was met by some of the Indians in a canoe: they had brought back the tongs and chisel, together with another article, that we had not missed, which happened to be the lid of the water-cask. Having recovered these things, he was returning on board, when he was met by the Resolution's pinnace, with five men in her, who, without any orders, had come from the observatories to his assistance. Being thus unexpectedly reinforced, he thought himself strong enough to insist upon having the thief, or the canoe
which took him in, delivered up as reprizals. With that view he turned back; and having found the canoe on the beach, he was preparing to launch it into the water, when Pareah made his appearance, and insisted upon his not taking it away, as it was his property. The officer not regarding him, the chief seized upon him, pinioned his arms behind, and held him by the hair of his head; on which, one of the sailors struck him with an oar: Pareah instantly quitted the officer, snatched the oar out of the man's hand, and snapped it in two across his knee. At length, the multitude began to attack our people with stones. They made some resistance; but were soon overpowered, and obliged to swim for safety to the small cutter, which lay farther out than the pinnace. The officers, not being expert swimmers, retreated to a small rock in the water, where they were closely pursued by the Indians. One man darted a broken oar at the master; but his foot slipping at the time, he missed him, which fortunately saved that officer's life. At last, Pareah interfered, and put an end to their violence. The Gentlemen, knowing that his presence was their only defence against the fury of the natives, entreated him to stay with them, till they could get off in the boats; but that he refused, and left them. The master went to seek assistance from the party at the observatories; but the midshipman chose to remain in the pinnace. He was very rudely treated by the mob, who plundered the boat of every thing that was loose on board, and then began to knock her to pieces, for the sake of the iron-work; but Pareah fortunately returned in time to prevent her destruction. He had met the other gentleman on his way to the observatories, and suspecting his
errand, had forced him to return. He dispersed the crowd again, and desired the gentlemen to return on board: they represented, that all the oars had been taken out of the boat; on which he brought some of them back, and the gentlemen were glad to get off, without farther molestation. They had not proceeded far, before they were overtaken by Pareah, in a canoe: he delivered the midshipman's cap, which had been taken from him in the scuffle, joined noses with them, in token of reconciliation, and was anxious to know, if Captain Cook would kill him for what had happened. They assured him of the contrary, and made signs of friendship to him in return. He then left them, and paddled over to the town of Kavaroah, and that was the last time we ever saw him. Captain Cook returned on board soon after, much displeased with the whole of this disagreeable business; and the same night, sent a lieutenant on board the Discovery, to learn the particulars of it, as it had originated in that ship.
It was remarkable, that in the midst of the hurry and confusion attending this affair, Kanynah (a chief who had always been on terms particularly friendly with us) came from the spot where it happened, with a hog to sell on board the Discovery: it was of an extraordinary large size, and he demanded for it a pahowa, or dagger, of an unusual length. He pointed to us, that it must be as long as his arm. Captain Clerke not having one of that length, told him, he would get one made for him by the morning; with which being satisfied, he left the hog, and went ashore without making any stay with us. It will not be altogether foreign to the subject, to mention a circumstance, that happened to-day on board the Resolution.
An Indian chief asked Captain Cook at his table, if he was a Tata Toa; which means a fighting man, or a soldier. Being answered in the affirmative, he desired to see his wounds: Captain Cook held out his right-hand, which had a scar upon it, dividing the thumb from the finger, the whole length of the metacarpal bones. The Indian, being thus convinced of his being a Toa, put the same question to another gentleman present, but he happened to have none of those distinguishing marks: the chief then said, that he himself was a Toa, and shewed the scars of some wounds he had received in battle. Those who were on duty at the observatories, were disturbed during the night, with shrill and melancholy sounds, issuing from the adjacent villages, which they took to be the lamentations of the women. Perhaps the quarrel between us, might have filled their minds with apprehensions for the safety of their husbands: but, be that as it may, their mournful cries struck the sentinels with unusual awe and terror.
To widen the breach between us, some of the Indians, in the night, took away the Discovery's large cutter, which lay swamped at the buoy of one of her anchors: they had carried her off so quietly, that we did not miss her till the morning, Sunday, February the fourteenth. Captain Clerke lost no time in waiting upon Captain Cook, to acquaint him with the accident: he returned on board, with orders for the launch and small cutter to go, under the command of the second lieutenant, and lie off the east point of the bay, in order to intercept all canoes that might attempt to get out; and, if he found it necessary, to fire upon them. At the same time, the third lieutenant of the Resolution, with the launch and small cutter, was sent on the
same service, to the opposite point of the bay; and the master was dispatched in the large cutter, in pursuit of a double canoe, already under sail, making the best of her way out of the harbour. He soon came up with her, and by firing a few muskets, drove her on shore, and the Indians left her: this happened to be the canoe of Omea, a man who bore the title of Orono. He was on board himself, and it would have been fortunate, if our people had secured him, for his person was held as sacred as that of the king. During this time, Captain Cook was preparing to go ashore himself, at the town of Kavaroah, in order to secure the person of Kariopoo, before he should have time to withdraw himself to another part of the island, out of our reach. This appeared the most effectual step that could be taken on the present occasion, for the recovery of the boat. It was the measure he had invariably pursued, in similar cases, at other islands in these seas, and it had always been attended with the desired success: in fact, it would be difficult to point out any other mode of proceeding on these emergencies, likely to attain the object in view. We had reason to suppose, that the king and his attendants had fled when the alarm was first given: in that case, it was Captain Cook's intention to secure the large canoes which were hauled up on the beach. He left the ship about seven o'clock, attended by the lieutenant of marines, a sergeant, corporal, and seven private men: the pinnace's crew were also armed, and under the command of Mr. Roberts. As they rowed towards the shore, Captain Cook ordered the launch to leave her station at the west point of the bay, in order to assist his own boat. This is a circumstance worthy of notice; for it clearly shews,
that he was not unapprehensive of meeting with resistance from the natives, or unmindful of the necessary preparation for the safety of himself and his people. I will venture to say, that from the appearance of things just at that time, there was not one, beside himself, who judged that such precaution was absolutely requisite: so little did his conduct on the occasion, bear the marks of rashness, or a precipitate self-confidence! He landed, with the marines, at the upper end of the town of Kavaroah: the Indians immediately flocked round, as usual, and shewed him the customary marks of respect, by prostrating themselves before him. There were no signs of hostilities, or much alarm among them. Captain Cook, however, did not seem willing to trust to appearances; but was particularly attentive to the disposition of the marines, and to have them kept clear of the crowd. He first enquired for the king's sons, two youths who were much attached to him, and generally his companions on board. Messengers being sent for them, they soon came to him, and informing him that their father was asleep, at a house not far from them, he accompanied them thither, and took the marines along with them. As he passed along, the natives everywhere prostrated themselves before him, and seemed to have lost no part of that respect they had always shewn to his person. He was joined by several chiefs, among whom was Kanynah, and his brother Koohowrooah. They kept the crowd in order, according to their usual custom; and being ignorant of his intention in coming on shore, frequently asked him, if he wanted any hogs, or other provisions: he told them that he did not, and that his business was to see the king. When he arrived at the house, he ordered some of the
Indians to go in, and inform Kariopoo, that he waited without to speak with him. They came out two or three times, and instead of returning any answer from the king, presented some pieces of red cloth to him, which made Captain Cook suspect that he was not in the house; he therefore desired the lieutenant of marines to go in. The lieutenant found the old man just awaked from sleep, and seemingly alarmed at the message; but he came out without hesitation. Captain Cook took him by the hand, and in a friendly manner, asked him to go on board, to which he very readily consented. Thus far matters appeared in a favourable train, and the natives did not seem much alarmed or apprehensive of hostility on our side; at which Captain Cook expressed himself a little surprized, saying, that as the inhabitants of that town appeared innocent of stealing the cutter, he should not molest them, but that he must get the king on board. Kariopoo sat down before his door, and was surrounded by a great crowd: Kanynah and his brother were both very active in keeping order among them. In a little time, however, the Indians were observed arming themselves with long spears, clubs, and daggers, and putting on thick mats, which they use as armour. This hostile appearance increased, and became more alarming, on the arrival of two men in a canoe from the opposite side of the bay, with the news of a chief, called Kareemoo, having been killed by one of the Discovery's boats, in their passage across: they had also delivered this account to each of the ships. Upon that information, the women, who were sitting upon the beach at their breakfasts, and conversing familiarly with our people in the boats, retired, and a confused murmur spread through
the crowd. An old priest came to Captain Cook, with a cocoa nut in his hand, which he held out to him as a present, at the same time singing very loud. He was often desired to be silent, but in vain: he continued importunate and troublesome, and there was no such thing as getting rid of him or his noise: it seemed, as if he meant to divert their attention from his countrymen, who were growing more tumultuous, and arming themselves in every quarter. Captain Cook, being at the same time surrounded by a great crowd, thought his situation rather hazardous: he therefore ordered the lieutenant of marines to march his small party to the water-side, where the boats lay within a few yards of the shore: the Indians readily made a lane for them to pass, and did not offer to interrupt them. The distance they had to go might be about fifty or sixty yards; Captain Cook followed, having hold of Kariopoo's hand, who accompanied him very willingly: he was attended by his wife, two sons, and several chiefs. The troublesome old priest followed, making the same savage noise. Keowa, the younger son, went directly into the pinnace, expecting his father to follow; but just as he arrived at the water-side, his wife threw her arms about his neck, and, with the assistance of two chiefs forced him to sit down by the side of a double canoe. Captain Cook expostulated with them, but to no purpose: they would not suffer the king to proceed, telling him, that he would be put to death if he went on board the ship. Kariopoo, whose conduct seemed entirely resigned to the will of others, hung down his head, and appeared much distressed.
While the king was in this situation, a chief, well known to us, of the name of Coho, was observed lurking near, with an iron dagger, partly concealed under his cloke, seemingly, with the intention of stabbing Captain Cook, or the lieutenant of marines. The latter proposed to fire at him, but Captain Cook would not permit it. Coho closing upon them, obliged the officer to strike him with his piece, which made him retire. Another Indian laid hold of the serjeant's musket, and endeavoured to wrench it from him, but was prevented by the lieutenant's making a blow at him. Captain Cook, seeing the tumult increase, and the Indians growing more daring and resolute, observed, that if he were to take the king off by force, he could not do it without sacrificing the lives of many of his people. He then paused a little, and was on the point of giving his orders to reembark, when a man threw a stone at him; which he returned with a discharge of small shot, (with which one barrel of his double piece was loaded). The man, having a thick mat before him, received little or no hurt: he brandished his spear, and threatened to dart it at Captain Cook, who being still unwilling to take away his life, instead of firing with ball, knocked him down with his musket. He expostulated strongly with the most forward of the crowd, upon their turbulent behaviour. He had given up all thoughts of getting the king on board, as it appeared impracticable; and his care was then only to act on the defensive, and to secure a safe embarkation for his small party, which was closely pressed by a body of several thousand people. Keowa, the king's son, who was in the pinnace, being alarmed on hearing the first firing, was, at his own entreaty, put on shore again; for even
even at that time, Mr. Roberts, who commanded her, did not apprehend that Captain Cook's person was in any danger: otherwise he would have detained the prince, which, no doubt, would have been a great check on the Indians. One man was observed, behind a double canoe, in the action of darting his spear at Captain Cook, who was forced to fire at him in his own defence, but happened to kill another close to him, equally forward in the tumult: the serjeant observing that he had missed the man he aimed at, received orders to fire at him, which he did, and killed him. By this time, the impetuosity of the Indians was somewhat repressed; they fell back in a body, and seemed staggered: but being pushed on by those behind, they returned to the charge, and poured a volley of stones among the marines, who, without waiting for orders, returned it with a general discharge of musketry, which was instantly followed by a fire from the boats. At this Captain Cook was heard to express his astonishment: he waved his hand to the boats, called to them to cease firing, and to come nearer in to receive the marines. Mr. Roberts immediately brought the pinnace as close to the shore as he could, without grounding, notwithstanding the showers of stones that fell among the people: but Mr. John Williamson, the lieutenant, who commanded in the launch, instead of pulling in to the assistance of Captain Cook, withdrew his boat further off, at the moment that everything seems to have depended upon the timely exertions of those in the boats. By his own account, he mistook the signal: but be that as it may, this circumstance appears to me, to have decided the fatal turn of the affair, and to have removed every chance which remained
with Captain Cook, of escaping with his life. The business of saving the marines out of the water, in consequence of that, fell altogether upon the pinnace; which thereby became so much crowded, that the crew were, in a great measure, prevented from using their fire-arms, or giving what assistance they otherwise might have done, to Captain Cook; so that he seems, at the most critical point of time, to have wanted the assistance of both boats, owing to the removal of the launch. For notwithstanding that they kept up a fire on the crowd from the situation to which they removed in that boat, the fatal confusion which ensued on her being withdrawn, to say the least of it, must have prevented the full effect, that the prompt co-operation of the two boats, according to Captain Cook's orders, must have had, towards the preservation of himself and his people. At that time, it was to the boats alone, that Captain Cook had to look for his safety; for when the marines had fired, the Indians rushed among them, and forced them into the water, where four of them were killed: their lieutenant was wounded, but fortunately escaped, and was taken up by the pinnace. Captain Cook was then the only one remaining on the rock: he was observed making for the pinnace, holding his left-hand against the back of his head, to guard it from the stones, and carrying his musket under the other arm. An Indian was seen following him, but with caution and timidity; for he stopped once or twice, as if undetermined to proceed. At last he advanced upon him unawares, and with a large club*,
* I have heard one of the gentlemen who were present say, that the first injury he received was from a dagger, as it is represented in the Voyage; but, from the account
or common stake, gave him a blow on the back of the head, and then precipitately retreated. The stroke seemed to have stunned Captain Cook: he staggered a few paces, then fell on his hand and one knee, and dropped his musket. As he was rising, and before he could recover his feet, another Indian stabbed him in the back of the neck with an iron dagger. He then fell into a bite of water about knee deep, where others crowded upon him, and endeavoured to keep him under: but struggling very strongly with them, he got his head up, and casting his look towards the pinnace, seemed to solicit assistance. Though the boat was not above five or six yards distant from him, yet from the crowded and confused state of the crew, it seems, it was not in their power to save him. The Indians got him under again, but in deeper water: he was, however, able to get his head up once more, and being almost spent in the struggle, he naturally turned to the rock, and was endeavouring to support himself by it, when a savage gave him a blow with a club, and he was seen alive no more. They hauled him up lifeless on the rocks, where they seemed to take a savage pleasure in using every barbarity to his dead body, snatching the daggers out of each other's hands, to have the horrid satisfaction of piercing the fallen victim of their barbarous rage.
of many others, who were also eye-witnesses, I am confident, in saying, that he was first struck with a club. I was afterwards confirmed in this, by Kaireekea, the priest, who particularly mentioned the name of the man who gave him the blow, as well as that of the chief who afterwards struck him with the dagger. This is a point not worth disputing about: I mention it, as being solicitous to be accurate in this account, even in circumstances, of themselves, not very material.
I need make no reflection on the great loss we suffered on this occasion, or attempt to describe what we felt. It is enough to say, that no man was ever more beloved or admired: and it is truly painful to reflect, that he seems to have fallen a sacrifice merely for want of being properly supported; a fate, singularly to be lamented, as having fallen to his lot, who had ever been conspicuous for his care of those under his command, and who seemed, to the last, to pay as much attention to their preservation, as to that of his own life.
If any thing could have added to the shame and indignation universally felt on the occasion, it was to find, that his remains had been deserted, and left exposed on the beach, although they might have been brought off. It appears, from the information of four or five midshipmen, who arrived on the spot at the conclusion of the fatal business, that the beach was then almost entirely deserted by the Indians, who at length had given way to the fire of the boats, and dispersed through the town: so that there seemed no great obstacle to prevent the recovery of Captain Cook's body; but the lieutenant returned on board without making the attempt. It is unnecessary to dwell longer on this painful subject, and to relate the complaints and censures that fell on the conduct of the lieutenant. It will be sufficient to observe, that they were so loud, as to oblige Captain Clerke publickly to notice them, and to take the depositions of his accusers down in writing. The Captain's bad state of health and approaching dissolution, it is supposed, induced him to destroy these papers a short time before his death.
It is a painful task, to be obliged to notice circumstances, which seem to reflect upon the character of any man. A strict regard to truth, however, compelled me to the insertion of these facts, which I have offered merely as facts, without presuming to connect with them any comment of my own: esteeming it the part of a faithful historian, "to extenuate nothing, nor set down ought in malice."
The fatal accident happened at eight o'clock in the morning, about an hour after Captain Cook landed. It did not seem, that the king, or his sons, were witnesses to it; but it is supposed that they withdrew in the midst of the tumult. The principal actors were the other chiefs, many of them the king's relations and attendants: the man who stabbed him with the dagger was called Nooāh. I happened to be the only one who recollected his person, from having on a former occasion mentioned his name in the journal I kept. I was induced to take particular notice of him, more from his personal appearance than any other consideration, though he was of high rank, and a near relation of the king: he was stout and tall, with a fierce look and demeanour, and one who united in his figure the two qualities of strength and agility, in a greater degree, than ever I remembered to have seen before in any other man. His age might be about thirty, and by the white scurf on his skin, and his sore eyes, he appeared to be a hard drinker of Kava. He was a constant companion of the king, with whom I first saw him, when he paid a visit to Captain Clerke. The chief who first struck Captain Cook with the club, was called Karimano, oraha, but I did not know him by his name. These circumstances I learnt of honest Kaireekea, the priest; who added, that they
were both held in great esteem on account of that action: neither of them came near us afterwards. When the boats left the shore, the Indians carried away the dead body of Captain Cook and those of the marines, to the rising ground, at the back of the town, where we could plainly see them with our glasses from the ships.
This most melancholy accident, appears to have been altogether unexpected and unforeseen, as well on the part of the natives as ourselves. I never saw sufficient reason to induce me to believe, that there was any thing of design, or a pre-concerted plan on their side, or that they purposely sought to quarrel with us: thieving, which gave rise to the whole, they were equally guilty of, in our first and second visits. It was the cause of every misunderstanding that happened between us: their petty thefts were generally overlooked, but sometimes slightly punished: the boat, which they at last ventured to take away, was an object of no small magnitude to people in our situation, who could not possibly replace her, and therefore not slightly to be given up. We had no other chance of recovering her, but by getting the person of the king into our possession: on our attempting to do that, the natives became alarmed for his safety, and naturally opposed those whom they deemed his enemies. In the sudden conflict that ensued, we had the unspeakable misfortune of losing our excellent Commander, in the manner already related. It is in this light the affair has always appeared to me, as entirely accidental, and not in the least owing to any previous offence received, or jealousy of our second visit entertained by the natives.
Pareah seems to have been the principal instrument in bringing about this fatal disaster. We learnt afterwards, that it was he who had employed some people to steal the boat: the king did not seem to be privy to it, or even apprized of what had happened, till Captain Cook landed.
It was generally remarked, that at first, the Indians shewed great resolution in facing our fire-arms; but it was entirely owing to ignorance of their effect. They thought that their thick mats would defend them from a ball, as well as from a stone; but being soon convinced of their error, yet still at a loss to account how such execution was done among them, they had recourse to a stratagem, which, though it answered no other purpose, served to shew their ingenuity and quickness of invention. Observing the flashes of the muskets, they naturally concluded, that water would counteract their effect, and therefore, very sagaciously, dipped their mats, or armour in the sea, just as they came on to face our people: but finding this last resource to fail them, they soon dispersed, and left the beach entirely clear. It was an object they never neglected, even at the greatest hazard, to carry off their slain; a custom, probably, owing to the barbarity with which they treat the dead body of an enemy, and the trophies they make of his bones*.
* A remarkable instance of this I met with at Atowai. Tamataherei, the queen of that island, paid us a visit one day on board the Discovery, accompanied by her husband Taoh, and one of her daughters by her former husband Oteeha. The young princess, whose name was Ore, reemo, horanee, carried in her hand a very elegant fly-flap, of a curious construction: the upper part of it was variegated with alternate rings of tortoise-shell and human bone, and the handle, which was well polished, consisted of the greater part of the os humeri of a chief, called Mahowra. He had belonged to the neighbouring island of Oahoo, and, in a hostile descent he made upon this coast, had been killed by Oteeha, who was then sovereign of Atowai. And thus we found Orereemohoranee carrying his bones about, as trophies of her father's victory. The queen set a great value upon it, and was not willing to part with it for any of our iron ware; but happening to cast her eyes upon a wash-hand basin of mine, it struck her fancy, and she offered to exchange; I accepted of her proposal, and the bones of the unfortunate Mahowra came at last into my possession.
SOME PARTICULARS,
CONCERNING THE LIFE AND CHARACTER OF CAPTAIN COOK.
CAPTAIN COOK was born at Marton, in Cleveland, in the county of York, a small village, distant five miles south-east from Stockton. His name is found in the parish register in the year 1729 (so that Captain King was mistaken, in placing the time of his birth in the year 1727). The cottage in which his father formerly lived, is now decayed, but the spot where it stood is still shewn to strangers. A gentleman is now living in that neighbourhood, with whom the old man formerly worked as a common day-labourer in the fields. However, though placed in this humble station, he gave his son a common school education, and at an early age, placed him apprentice with one Mr. Saunderson, a shopkeeper at Staith, (always pronounced Steers) a small fishing-town on the Yorkshire coast, about nine miles to the northward of Whitby. The business is now carried on by the son of Mr. Saunderson, in the same shop, which I had the curiosity to visit about a year and half ago. In that situation young Cook did not continue long, before he quitted it in disgust, and, as often happens in the like cases, betook himself to the sea. Whitby being a neighbouring sea-port, readily offered him an opportunity to pursue his inclination; and there we find he bound himself apprentice, for nine years, in the coal trade, to one Mr. John Walker, now living in South Whitby. In his employ, he afterwards became mate of a ship; in which station having continued some time, he had the offer of being master, which he refused, as it seems he had at that time turned his thoughts towards the navy. Accordingly, at the breaking out of the war in 1755, he entered on board the Eagle, of sixty-four guns, and in a short time after, Sir Hugh Palliser was appointed to the command of that ship, a circumstance that must not be passed unnoticed, as it proved the foundation of the future fame and fortune of Captain Cook. His uncommon merit did not long escape the observation of that discerning officer, who promoted him to the quarter-deck, and ever after patronized him with such zeal and attention, as must reflect the highest honour upon his character. To Sir Hugh Palliser is the world indebted, for having first noticed in an obscure situation, and afterwards brought forward in life, the greatest nautical genius that
ever any age or country has produced. In the year 1758, we find him master of the Northumberland, then in America, under the command of Lord Colville. It was there, he has been heard to say, that during a hard winter he first read Euclid, and applied himself to the study of astronomy and the mathematics, in which he made no inconsiderable progress, assisted only by his own ingenuity and industry. At the time he thus found means to cultivate and improve his mind, and to supply the deficiency of an early education, he was constantly engaged in the most busy and active scenes of the war in America. At the siege of Quebec, Sir Hugh Palliser made him known to Sir Charles Saunders, who committed to his charge the conducting of the boats to the attack of mount Morenci, and the embarkation that scaled the heights of Abraham. He was also employed to examine the passage of the river St. Laurence, and to lay buoys for the direction of the men of war. In short, in whatever related to the reduction of that place in the naval department, he had a principal share, and conducted himself so well throughout the whole, as to recommend himself to the commander in chief. At the conclusion of the war, Sir Hugh Palliser having the command on the Newfoundland station, he appointed him to survey that Island and the coast of Labrador, and gave him the Grenville brig for that purpose. How well he performed that service, the charts he has published afford a sufficient testimony. In that employment he continued till the year 1767, when the well known voyage to the South Sea, for observing the transit of Venus, and making discoveries in that vast ocean was planned. Lord Hawke, who then presided
at the Admiralty, was strongly solicited to give the command of that expedition to Mr. Alexander Dalrymple; but through the interest of his friend Sir Hugh Palliser, Captain Cook obtained the appointment, together with the rank of lieutenant. It was stipulated, that on his return, he should, if he chose it, again hold the place of surveyor of Newfoundland, and that his family should be provided for, in case of any accident to himself.
He sailed from England in the Endeavour, in the year 1768, accompanied by Mr. Banks and Dr. Solander, and returned in 1771; after having circumnavigated the globe, made several important discoveries in the South Sea, and explored the islands of New Zealand, and great part of the coast of New Holland. The skill and ability with which he conducted that expedition, ranked his name high as a navigator, and could not fail of recommending him to that great patron of naval merit, the Earl of Sandwich, who then presided at the board of Admiralty. He was promoted to the rank of master and commander, and a short time afterwards, appointed to conduct another expedition to the Pacific Ocean, in search of the supposed southern continent. In this second voyage he circumnavigated the globe, determined the non-existence of a southern continent, and added many valuable discoveries to those he had before made in the South Sea. His own account of it is before the public, and he is no less admired for the accuracy and extensive knowledge which he has displayed in that work, than for his skill and intrepidity in conducting the expedition. On his return, he was promoted to the rank of post-captain, and appointed one of the captains of Greenwich hospital. In that Retirement
he did not continue long: for an active life best suiting his disposition, he offered his services to conduct a third expedition to the South Sea, which was then in agitation, in order to explore a northern passage from Europe to Asia: in this he unfortunately lost his life, but not till he had fully accomplished the object of the voyage.
The character of Captain Cook will be best exemplified by the services he has performed, which are universally known, and have ranked his name above that of any navigator of ancient or of modern times. Nature had endowed him with a mind vigorous and comprehensive, which in his riper years he had cultivated with care and industry. His general knowledge was extensive and various: in that of his own profession he was unequalled. With a clear judgment, strong masculine sense, and the most determined resolution; with a genius peculiarly turned for enterprise, he pursued his object with unshaken perseverance:—vigilant and active in an eminent degree:—cool and intrepid among dangers; patient and firm under difficulties and distress; fertile in expedients; great and original in all his designs; active and resolved in carrying them into execution. These qualities rendered him the animating spirit of the expedition: in every situation, he stood unrivalled and alone; on him all eyes were turned; he was our leading-star, which at its setting, left us involved in darkness and despair.
His constitution was strong, his mode of living temperate: why Captain King should not suppose temperance as great a virtue in him as in any other man, I am unable to guess. He had no repugnance to good living; he always kept a good table, though he could bear the reverse without murmuring. He was a modest man, and rather bashful;
of an agreeable lively conversation, sensible and intelligent. In his temper he was somewhat hasty, but of a disposition the most friendly, benevolent, and humane. His person was above six feet high, and though a good-looking man, he was plain both in address and appearance. His head was small, his hair, which was a dark brown, he wore tied behind. His face was full of expression, his nose exceedingly well-shaped, his eyes, which were small and of a brown cast, were quick and piercing: his eyebrows prominent, which gave his countenance altogether an air of austerity.
He was beloved by his people, who looked up to him as to a father, and obeyed his commands with alacrity. The confidence we placed in him was unremitting; our admiration of his great talents unbounded; our esteem for his good qualities affectionate and sincere.
In exploring unknown countries, the dangers he had to encounter were various and uncommon. On such occasions, he always displayed great presence of mind, and a steady perseverance in pursuit of his object. The acquisition he has made to our knowledge of the globe is immense, besides improving the art of navigation, and enriching the science of natural philosophy.
He was remarkably distinguished for the activity of his mind: it was that which enabled him to pay an unwearied attention to every object of the service. The strict economy he observed in the expenditure of the ship's stores, and the unremitting care he employed for the preservation of the health of his people, were the causes that enabled him to prosecute discoveries in remote parts of the globe, for such a length of time as had been deemed impracticable by former navigators. The method he discovered for preserving the health of seamen in long voyages, will transmit his name to posterity as the friend and benefactor of mankind: the success which attended it, afforded this truly great man more satisfaction, than the distinguished fame that attended his discoveries.
England has been unanimous in her tribute of applause to his virtues, and all Europe has borne testimony to his merit. There is hardly a corner of the earth, however remote and savage, that will not long remember his benevolence and humanity. The grateful Indian, in time to come, pointing to the herds grazing his fertile plains, will relate to his children how the first stock of them was introduced into the country; and the name of Cook will be remembered among those benign spirits, whom they worship as the source of every good, and the fountain of every blessing.
It may not be amiss to observe, that the plate engraved by Sherwin, after a painting by Dance, is a most excellent likeness of Captain Cook; and more to be valued, as it is the only one I have seen that bears any resemblance to him.
OBSERVATIONS,
RESPECTING THE
INTRODUCTION
OF THE
VENEREAL DISEASE
INTO THE
SANDWICH ISLANDS.
THIS publication affording a convenient opportunity, I embrace it, to offer a few remarks upon a subject in some degree affecting the reputation of the late voyages to the South Sea Islands. If we for a moment suppose, that they have been the means of disseminating the venereal disease among the inhabitants, the evil is of such a magnitude, that we are induced to wish they had never been undertaken. For who would not sooner remain ignorant of the interesting discoveries which have been made, than bear the reflection of their having been attended with such an irreparable injury to a happy and uncontaminated race of people!
It is a point of dispute between Captain Wallis and Monsr. Bougainville, which of their ships it was, that introduced the disease to Otaheite. And we find, that Captain Cook was apprehensive of his people having left it at the Friendly Islands. Without enquiring into the grounds of conviction they had in former voyages, I am strongly inclined to believe, from my observations in the last, that it is a subject about which they are very liable to be deceived; and that what is laid down as positive fact, could be no more than matter of opinion.
In the last voyage, both Captains Cook and King were of opinion, that the inhabitants of Sandwich Islands received that distemper from our people. The great deference I pay to their judgment on every occasion, will hardly allow me to dissent from it in the present instance; and yet I must be allowed to say, that the same evidence which proved convincing to them in this case, did by no means appear so to me, and I will endeavour to assign my reasons. When we first discovered Sandwich Islands, in the month of January, 1778, the ships anchored at two of them (viz. Atowai and Neehaw) where parties were sent ashore for water, and to purchase provisions of the natives. On this occasion, I must bear my testimony (for I was then in the Resolution) to the very particular care taken by Captain Cook, to prevent any of his people who were not in perfect health, from having communication with the shore, and also to prevent women from coming on board the ships. That this humane precaution answered the intended purpose, we had great reason to believe; for not one of those who did go on shore was afterwards
in the surgeon's list, or known to have any complaint; which was the most convincing proof we could have, of their being well at the time. We therefore were under no apprehensions on this head, when we visited these islands a second time, about eleven months from our first discovering them. We then fell in with two islands, (viz. Mowee and Ouwhyee) belonging to the group, which we had not seen before; and very soon found that the venereal disease was not unknown to the natives. This excited no little concern and astonishment among us, and made us anxious to learn whether or no, so dreadful a calamity had been left at Atowai by our ships, and so propagated to these islands. But the scanty knowledge we had of their language, made this a matter of great difficulty, and rendered the best intelligence we could get but vague and uncertain. While we were cruising off Ouwhyee, I was told, that some Indians had visited the Resolution with that complaint upon them, and that they seemed to intimate, that our ships had left it at Atowai; whence it had found its way to this island.
This account, I confess, appeared at once very improbable to me, and rendered me very desirous of an opportunity to examine some of them myself: for I found the above story gaining universal belief, and felt somewhat hurt, that we should take to ourselves the ignominy of such an imputation, without sufficient proof of its being just. During our stay at Keragegooah bay, where we had constant opportunities of directing our enquiries to the most intelligent of the natives, I met with none who could give me any information on the subject, nor could I learn
that they had the least idea of our having left it at Atowai, or that it was a new thing amongst them. This circumstance, added to the very slight reliance, which experience had taught me to place in any intelligence obtained from the Indians, through the medium of their language, confirmed me in the opinion I had entertained from the first, that the meaning of those Indians had been misunderstood on board the Resolution. An instance happened soon afterwards which convinced me, that no credit whatsoever is to be given to such information. We had not been long arrived at Atowai a second time, before an Indian came on board the Discovery, who appeared to the gentleman who first spoke to him, clearly to charge us with having left the disease at that island, on our former visit. As I was known to be an unbeliever, the man was at last referred to me; and, I confess, I was a little staggered at first with the answers he gave me: but presently, suspecting from his manner, that he would answer every question proposed to him in the affirmative, I asked him, if they did not receive the disease first from Oahoo; a neighbouring island, which we had not touched at, when we were in these parts before: the man directly answered, that they had; and strenuously persisted in the same, every time the question was put to him, either by myself, or the gentleman who had first examined him. Such contradictory accounts as these, prove nothing, but our ignorance of their language, and consequently, how apt we are to be misled in enquiries of this sort. I never put any confidence in them myself, and have often been surprised to see others put so much. Yet those who have maintained
that we left the disease at Sandwich Islands, have no better foundation than this, to rest their opinion upon. Whether it be sufficient to support such an accusation, I will leave others to judge, after what I have related above; and proceed to point out such other circumstances as tend to prove, that the disease was not left at these islands by our ships. From every thing we could learn, it appeared, that there is but little intercourse between Atowai and the islands to windward, especially Ouwhyee, which is about fifty leagues distant: and the nearest to Atowai, which is Oahoo, is five and twenty leagues. There is generally some misunderstanding between them, and, excepting for hostile purposes, the inhabitants rarely visit each other. But were we even to allow, that there is a frequent intercourse between them, which from the distance alone is highly improbable, yet it is hardly possible, that the disease should have spread so far, and so universally, as we found it at Ouwhyee, in the short space of time which intervened between our first and second visit to the Sandwich Islands. On the same supposition, it will appear very extraordinary, that we should have found it more common by far at Ouwhyee than at Atowai, the place where we are supposed to have first left it. That this was the case, however, from my situation at that time, as surgeon of the Discovery, I am able to pronounce with some certainty. The priests pretended to be expert at curing it, and seemed to have an established mode of treatment; which by no means implied, that it was a recent complaint among them, much less that it was introduced only a few months before.
Whence, or at what time, the inhabitants of these islands received the disease, or whether or not it be indigenous among them, is what I do not pretend even to guess: but from the circumstances above-mentioned, I think myself warranted in saying, that there are by no means sufficient proofs of our having first introduced it; but that, on the contrary, there is every reason to believe, that they were afflicted with it before we discovered those islands.
THE END.
Elastic anomalies near a phase transition point
I. F. Lyuksyutov
L. D. Landau Institute of Theoretical Physics, USSR Academy of Sciences
(Submitted March 22, 1977)
Zh. Eksp. Teor. Fiz. 73, 732–739 (August 1977)
The problem of a phase transition in an anisotropic compressible lattice is considered. Solutions to the "fast" parquet equations are found. Anomalies in the elastic constants at the transition point are computed.
PACS numbers: 62.20.Dc, 64.60.-i
1. INTRODUCTION
Near a second-order phase transition point the fluctuations exert a decisive influence on the behavior of the system. In this case it may turn out that a Hamiltonian which is stable in the self-consistent field approximation loses its stability when allowance is made for the fluctuations. Such an instability obtains in the case when there is a strictive coupling between the order parameter of the transition and the elastic degrees of freedom. This question has been considered by various authors.\(^{[1–3]}\) In the case of an elastically isotropic crystal, the problem is rigorously soluble, and, as has been shown by Larkin and Pikin,\(^{[1]}\) the transition is of first order when a nonzero shear modulus is present and the specific heat \(C_v\) diverges. In the case of anisotropic elastic properties Khmel'nitskii and Shneerson\(^{[2]}\) have shown that the renormalization-group equations describing the transition do not possess stable solutions. The physical causes of the two instabilities are different. In the present paper we obtain the general form of the solutions to the "fast" parquet equations that have been investigated for stability by Khmel'nitskii and Shneerson,\(^{[2]}\) when the equations depend not only on a slow logarithmic variable, but also on fast angular variables. This allowed the computation of those anomalies in the elastic properties that arise as a result of the fluctuations in the order parameter. Since in the case under consideration the transition is of first order (though close to a second-order transition), as the transition point is approached from above, the elastic constants (the stiffness) first decrease according to the specific-heat law and then undergo a finite jump downwards.
In the paper we compute the values of the elastic constants at the transition point both before and after the jump. The obtained values are determined by the various angle-averaged bare values of the elastic and striction constants, as well as by the fourth-order constants in the expansion of the thermodynamic potential. Relations between the anomalies of the various constants are also obtained. Approximate computations are carried out for the specific example of the uniaxial ferroelectric: triglycine sulfate (TGS).
2. THE RENORMALIZATION-GROUP EQUATIONS
Let us consider the simplest example of a phase transition with a scalar order parameter in a compressible anisotropic lattice. The phase transitions in the uniaxial ferroelectrics can serve as an example. The Hamiltonian for the transition has the form
\[
H = \sum_q \frac{1}{2} (\tau + s_{im} q_i q_m) P(q) P(-q) \\
+ \lambda \sum_{q_1 + q_2 + q_3 + q_4 = 0} P(q_1) P(q_2) P(q_3) P(q_4) \\
+ \sum_{q_1 + q_2 + q_3 = 0} \alpha_{im} u_{im}(q_1) P(q_2) P(q_3) \\
+ \sum_q \frac{1}{2} c_{iklm} u_{ik}(q) u_{lm}(-q).
\]
Here the \(s_{im}\) characterize the anisotropy of the gradient term; the \(\alpha_{im}\), the striction constants. In them only two indices have been written out since the other two are determined by the orientation of the order parameter in the lattice; the
\[
u_{ik} = u_{ik}^{(0)} + \frac{1}{2} \left( \frac{\partial u_i}{\partial x_k} + \frac{\partial u_k}{\partial x_i} \right)
\]
are the elastic deformations and \(P\) is the order parameter.
In the approximation under consideration the elastic-deformation Hamiltonian is quadratic; therefore, the corresponding functional integral is Gaussian, and can be integrated over all the elastic variables. As a result, there arise two different interactions of the fluctuations via the phonons: via the homogeneous deformations, i.e., with momentum \(q\) equal to zero, this interaction being isotropic; and via the inhomogeneous deformations. The latter depends on the angles in the case of an anisotropic lattice.
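The origin of the induced interaction can be indicated schematically (a simplified one-component sketch, not the paper's full anisotropic calculation). Completing the square in the Gaussian integral over an elastic mode coupled linearly to the composite variable \(\rho(q) = \sum_{q_1 + q_2 = q} P(q_1) P(q_2)\) gives

\[
\int \mathcal{D}u \, \exp\left\{ -\alpha \sum_q u(q) \rho(-q) - \frac{c}{2} \sum_q u(q) u(-q) \right\}
\propto \exp\left\{ \frac{\alpha^2}{2c} \sum_q \rho(q) \rho(-q) \right\},
\]

i.e., each elastic mode integrated out shifts the effective four-point interaction of the \(P\) field by \(-\alpha^2/2c\). For the homogeneous (\(q = 0\)) deformations this shift is isotropic; for the inhomogeneous modes of an anisotropic lattice the combination of striction and elastic constants playing the role of \(\alpha^2/2c\) depends on the direction of \(q\), which is the source of the angular dependence noted above.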
Let us now analyze the behavior of the system in the presence of such interactions within the framework of the parquet approximation in the four-dimensional model.\(^{[4]}\) The results can be extended to the three-dimensional situation by Wilson's \(\varepsilon\)-expansion method.\(^{[5]}\) In addition, below we shall analyze the case of uniaxial ferroelectric crystals, which are effectively four-dimensional.\(^{[4]}\)
It is convenient to write out the parquet equations for each of the interactions separately. Let \(\lambda\) be the vertex corresponding to the point interaction, \(\nu\) the vertex corresponding to the interaction via the homogeneous deformations, and \(\mu\) the vertex corresponding to the interaction via the inhomogeneous deformations. The equations are represented graphically in Fig. 1. The heavy
lines correspond to the $P$ field; the dashed lines, to the homogeneous deformations; and the wavy lines, to the inhomogeneous deformations. Thus, the analytic expression for the equations has the form
$$-d\lambda/d\xi = 36\lambda^2 \langle g^2 \rangle - 48\lambda \langle \mu g^2 \rangle + 16 \langle \mu^2 g^2 \rangle,$$
$$-d\mu/d\xi = 24\lambda \mu \langle g^2 \rangle - 16\mu \langle \mu g^2 \rangle - 4\mu^2 \langle g^2 \rangle,$$
$$-d\nu/d\xi = 24\lambda \nu \langle g^2 \rangle - 16\nu \langle \mu g^2 \rangle - 4\nu^2 \langle g^2 \rangle,$$
$$\frac{\partial G^{-1}}{\partial q} = \frac{\partial G_0^{-1}}{\partial q} - 8 \int \frac{d^d p}{(2\pi)^d}\, \mu(\xi_p)\, G^2(p)\, p.$$
Here $G(q) = g/q^2$, $\xi = \ln[1/\max(\tau, q^2)]$, $q$ is the momentum, $\lambda(\xi = 0) = \lambda_0$, $\mu(\xi = 0) = \mu_0$, $\nu(\xi = 0) = \nu_0$, and the symbol $\langle \ \rangle$ denotes averaging over the angles.
As is well known, in the theory of phase transitions the Green function is usually renormalized only in the second approximation. The presence of the long-range interaction connected with the acoustic phonons leads to a situation in which the renormalization appears already in the first approximation.\textsuperscript{1} However, Eqs. (2)–(5) have been constructed such that this circumstance will not play any role below, although Eq. (5) must be taken into account when carrying out the averaging in specific calculations. Let us first consider Eqs. (2) and (3). Dividing both sides of Eq. (3) by $\mu^2$, we obtain
$$\frac{d}{d\xi} \frac{1}{\mu} = 24\lambda \frac{1}{\mu} \langle g^2 \rangle - 16 \frac{1}{\mu} \langle \mu g^2 \rangle - 4 \langle g^2 \rangle.$$
Let us now average (6) over the angles and subtract the averaged equation from (6). We have
$$\frac{d}{d\xi} \left( \frac{1}{\mu} - \left\langle \frac{1}{\mu} \right\rangle \right) = (24\lambda \langle g^2 \rangle - 16 \langle \mu g^2 \rangle) \left( \frac{1}{\mu} - \left\langle \frac{1}{\mu} \right\rangle \right).$$
It is clear from (7) that $1/\mu - \langle 1/\mu \rangle = \varphi \eta$, where $\varphi$ depends only on the angles and does not depend on $\xi$, while $\eta$ does not depend on the angles.
Thus, the angular and logarithmic variables separate. This allows us to rewrite the Eqs. (2)–(5) in the following form:
$$-d\lambda/d\xi = 36\lambda^2 \langle g^2 \rangle - 48\lambda \langle \mu g^2 \rangle + 16 \langle \mu^2 g^2 \rangle,$$
$$-d\nu/d\xi = 24\lambda \nu \langle g^2 \rangle - 16\nu \langle \mu g^2 \rangle - 4\nu^2 \langle g^2 \rangle,$$
$$dz/d\xi = 4(1-z)v \langle g^2 \rangle,$$
$$-dv/d\xi = 24\lambda v \langle g^2 \rangle - 16v \langle \mu g^2 \rangle - 4v^2 \langle g^2 \rangle,$$
$$\frac{\partial G^{-1}}{\partial q} = \frac{\partial G_0^{-1}}{\partial q} - 8 \int d\xi\, v \int \frac{dQ}{(2\pi)^d} \frac{(1-z)\gamma}{1-z\gamma}\, p.$$ Here $\mu = v\gamma(1-z)/(1-z\gamma)$; $z$ and $v$ do not depend on the angles; $z(\xi = 0) = 0$; and $\gamma$ characterizes the dependence on the angles: $\gamma = \mu(\xi = 0)/v(\xi = 0)$, where $v(\xi = 0) = \mu_{\max}(\xi = 0)$. The angular factor $\gamma$ is normalized such that $\gamma_{\max} = 1$; the factor $(1-z)$ in the numerator then eliminates the singularity in the denominator at $z\gamma_{\max} = 1$. Since it follows from Eq. (10) that $z < 1$, pole singularities will not arise in the angular integrals.
It is convenient to go over to the variables $x = \lambda/v$ and $z$. As a result, from Eqs. (8), (10), (11), and (12) we have
$$(z-1) \frac{dx}{dz} = 3x^2 - x \left\{ 8 \frac{\langle \mu g^2 \rangle}{v \langle g^2 \rangle} - 1 \right\} + 4 \frac{\langle \mu^2 g^2 \rangle}{v^2 \langle g^2 \rangle},$$
$$\frac{\partial G^{-1}}{\partial q} = \frac{\partial G_0^{-1}}{\partial q} - 2 \int \frac{dz}{\langle g^2 \rangle} \int \frac{dQ}{(2\pi)^d} \frac{\gamma g^2}{1-z\gamma}\, p.$$ Notice that the angular integrals can be assumed to be known functions of the parameters $z$ and $g$, since the angular dependence of $\gamma$ is determined by the bare striction and elastic constants, while the angular dependence of $g$ is determined by the point symmetry of the crystal. Therefore, the system (13)–(14) can easily be integrated numerically. The behavior of the vertex $\nu$ is completely determined by the variable $z$. Let us introduce $y = \nu/v$. Then for $y$ we have the equation
$$dy/d\xi = 4y(y-1)v \langle g^2 \rangle.$$ Now, using (15) and (10), we obtain
$$y = y_0(1-z)/(1-y_0z).$$ In Khmel'nitskii and Shneerson's paper\textsuperscript{[3]} it is shown that the system (8)–(12) does not have stable solutions if $\gamma$ depends on the angles. The question of the behavior of the solutions is closely tied to the question of the stability of the thermodynamic potential.
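Indeed, dividing (15) by (10) eliminates $\xi$, and (16) follows by a single quadrature:
$$\frac{dy}{dz} = \frac{y(y-1)}{1-z}, \qquad \int \frac{dy}{y(y-1)} = \int \frac{dz}{1-z} \;\Rightarrow\; \frac{y-1}{y} = \frac{y_0 - 1}{y_0}\,\frac{1}{1-z},$$
where the integration constant is fixed by $y(z=0) = y_0$; solving for $y$ gives (16).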
It is customarily assumed that stability is lost when the effective interaction between the fluctuations changes sign and becomes negative. If we are discussing a transition to a homogeneous state and leave aside the question of the influence of the boundary conditions, then the quantity $\lambda - \nu$ can be taken as the effective interaction, since the vertex $\mu$ corresponds to a nonzero momentum transfer. The Larkin–Pikin effect\textsuperscript{[2]} corresponds, in the formulation under consideration, to the growth of the vertex $\nu = vy_0(1-z)/(1-zy_0)$, since $y_0 > 1$ ($y_0 = (K_0 + \frac{4}{3}L_0)/K_0$ in the isotropic case, where $K_0$ and $L_0$ are the bulk and shear moduli). There is therefore a pole at $z = y_0^{-1}$, which leads to the loss of stability.
In the general case the system (13)–(16) is solved numerically. It makes sense to analyze such a solution for a specific experimental situation. However, if $\langle \gamma \rangle$ and $\langle \gamma^2 \rangle$ vary slowly right up to the transition point, then we can obtain an analytic solution approximating the solution of the system (13)–(16). If the coupling between the fluctuations and the acoustic phonons is strong, then stability is lost even in the case of weak renormalization of the interaction, and therefore $\langle \gamma \rangle$ and $\langle \gamma^2 \rangle$ remain almost unchanged. Apparently, it is precisely such a situation that is realized in the uniaxial ferroelectrics triglycine sulfate and triglycine selenate (TGS and TGSe). Assuming $\langle \gamma \rangle$ and $\langle \gamma^2 \rangle$ to be constants, we obtain from Eq. (13)
$$z = 1 - \exp \left\{ -\frac{2}{A} \operatorname{arctg} \frac{6A(x_0-x)}{A^2 + (6x - 8\langle \gamma \rangle + 1)(6x_0 - 8\langle \gamma \rangle + 1)} \right\},$$
$$A = (48\langle \gamma^2 \rangle - (8\langle \gamma \rangle - 1)^2)^{1/2}.$$ The solution (17) has meaning only when \( 48\langle \gamma^2 \rangle > (8\langle \gamma \rangle - 1)^2 \), i.e., under conditions of sufficiently strong anisotropy.
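Since (13) with $\langle \gamma \rangle$ and $\langle \gamma^2 \rangle$ frozen is an ordinary differential equation, the closed form (17) can be checked directly against a numerical integration. The sketch below (plain Python; the input values are merely of the order of the TGS estimates of Sec. 4, not fitted numbers) does this with a fourth-order Runge–Kutta step:

```python
import math

# Frozen angular averages <gamma>, <gamma^2>; illustrative values only,
# of the order of the TGS estimates quoted in Sec. 4.
G1, G2 = 0.53, 0.35
X0 = 2.0                  # bare value x_0 = lambda_0 / v_0
B = 8.0 * G1 - 1.0        # the combination 8<gamma> - 1

def dxdz(z, x):
    # Flow equation (z - 1) dx/dz = 3x^2 - (8<g> - 1)x + 4<g^2>,
    # rewritten as dx/dz.
    return (3.0 * x * x - B * x + 4.0 * G2) / (z - 1.0)

def integrate_x(z_end, n=1000):
    # Fourth-order Runge-Kutta from z = 0, x = X0 up to z = z_end.
    z, x, h = 0.0, X0, z_end / n
    for _ in range(n):
        k1 = dxdz(z, x)
        k2 = dxdz(z + h / 2, x + h * k1 / 2)
        k3 = dxdz(z + h / 2, x + h * k2 / 2)
        k4 = dxdz(z + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        z += h
    return x

def z_closed(x):
    # Closed-form solution (17) of the same flow,
    # with A = (48<g^2> - (8<g> - 1)^2)^(1/2).
    a = math.sqrt(48.0 * G2 - B * B)
    num = 6.0 * a * (X0 - x)
    den = a * a + (6.0 * x - B) * (6.0 * X0 - B)
    return 1.0 - math.exp(-(2.0 / a) * math.atan(num / den))

x_num = integrate_x(0.05)   # x decreases from 2 toward ~1.7 as z grows to 0.05
```

The closed form inverts the numerical trajectory: `z_closed(integrate_x(0.05))` returns 0.05 up to integration error.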
3. COMPUTATION OF THE ELASTIC-CONSTANT ANOMALIES
Let us now consider the question of the elastic-constant anomalies. Figure 2 shows a typical behavior of the stiffness. The behavior of the stiffness along the segment 0–1 corresponds to the temperature dependence of the specific heat, since, owing to the strictive coupling, the equations for the fluctuation anomalies of the stiffness and for the specific heat coincide up to a constant factor. At the point 1 there occurs a first-order transition, and the stiffness decreases discontinuously to the point 2. The values \( c_1 \) and \( c_2 \) of the stiffness at the points 1 and 2 can be computed in terms of the bare values of the elastic and striction constants and of the fourth-order constant in the expansion of the thermodynamic potential.
Let us consider as an example the elastic modulus \( c_{xxxx} \). Because of the striction, the fluctuations in \( P \) will make a contribution to \( c_{xxxx} \). A calculation shows that the fluctuation contribution is determined only by the striction constant \( \alpha_{xx} \). The equations for the quantities \( \nu_x = \frac{1}{2} \alpha_{xx}^2 / c_{xxxx} \) and \( c_x = c_{xxxx} \) are represented graphically in Fig. 1c and Fig. 3. Their analytic expression corresponds to the equations
\[
-d\nu_x/d\xi = 24\lambda \nu_x \langle g^2 \rangle - 16\nu_x \langle \mu g^2 \rangle - 4\nu_x^2 \langle g^2 \rangle,
\]
\[
-d\ln c_x/d\xi = 4\nu_x \langle g^2 \rangle.
\]
Using Eqs. (9), (10), (18), and (19), we easily obtain
\[
c_x = c_{0x}(1 - z\nu_{0x}/v_0).
\]
Here \( c_{0x} \) is the bare value of \( c_x \) and \( \nu_{0x} \) is the bare value of \( \nu_x \).
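One way to arrive at (20), assuming (as the graphical equations of Figs. 1c and 3 suggest) that \( \nu_x \) obeys the same flow as \( \nu \), so that \( y_x = \nu_x/v \) satisfies an equation of the form (15): combining (19) with (10),
\[
d\ln c_x = -4\nu_x \langle g^2 \rangle\, d\xi = -\frac{y_x\, dz}{1-z} = -\frac{y_{x0}\, dz}{1 - y_{x0} z},
\qquad y_{x0} = \nu_{0x}/v_0,
\]
so that \( \ln(c_x/c_{0x}) = \ln(1 - y_{x0} z) \), which reproduces (20).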
It can be seen from (20) that, first, to determine the elastic-modulus anomalies at the point 1, we need to know only the parameter \( z \). Secondly, we have the following invariant for the anomalies of the various elastic moduli:
\[
(c_{0x} - c_x)/c_{0x}\nu_{0x} = \text{const}.
\]
Since the dimensionality of the space in no way enters explicitly into (21), we should expect that such a simple relation will obtain also in three-dimensional space.
The computation of the magnitudes of the elastic constants after the jump is closely tied with the problem of the computation of the condensate that separates out during the fluctuation instability of a second-order transition. The instability is connected with the fact that the coupling constant attached to the fourth power of the parameter in the expansion of the thermodynamic potential becomes negative. In the case of the self-consistent field theory a sixth-order positive constant \( \Gamma \) was usually added, which restored the stability at large values of the condensate. However, near a second-order phase transition point the behavior of this constant is determined by the behavior of the fourth-order constant and, as it turns out, \( \Gamma \) decreases rapidly in the case of a second-order transition.\(^{[5]}\)
In the case of an unstable behavior of the fourth-order constant, this constant can also change sign. Therefore, the question arises of the stabilization of the thermodynamic potential at large values of the condensate. It has been shown\(^{[6]}\) that of greatest importance are the anharmonicities represented by the ring diagrams constructed from the fourth-order vertices. Some of the diagrams making contributions to the sixth-order vertex are shown in Fig. 4. Summing the diagrams of all orders higher than the fifth, we easily obtain the result for the additional contribution to the potential from the ring diagrams:
\[
\Delta \Phi = \frac{1}{16\pi^4} \left\langle \Lambda^4 \ln \frac{12\Lambda P^4}{\tau} \right\rangle P^4, \quad \Lambda = \lambda - \frac{1}{3} \nu - \frac{2}{3} \mu.
\]
Here \( \mu \) depends on the angles and the symbol \( \langle \ \rangle \) denotes averaging over the angles. Let us show that the quantity \( \Lambda = \lambda - \frac{1}{3} \nu - \frac{2}{3} \mu \) arising here is positive. As was noted above, the transition occurs at \( \lambda \sim \nu \); setting \( \lambda = \nu \), i.e., \( x = y = y_0(1-z)/(1-zy_0) \), we have \( \Lambda = \frac{2}{3}(\nu - \mu) \), and substitution of the solutions for \( \nu \) and \( \mu \) gives
\[
\Lambda = \frac{2}{3} v \frac{(1-z)(y_0 - \gamma)}{(1 - z\gamma)(1 - zy_0)}.
\]
Since \( \gamma_{\max} = 1 \) and \( y_0 > 1 \), the quantity \( \Lambda \) is always positive.
Equating the thermodynamic potential and its derivative at the transition point to zero, we can obtain two equations from which we can determine the value of \( x \) at the transition point, as well as the magnitude of the condensate:
\[
P^4 = 8\pi^4 v/\langle \Lambda^4 \rangle,
\]
\[
x = \frac{y_0(1 - z)}{1 - zy_0} - \frac{1}{16\pi^4} \frac{1}{v} \left[ \langle \Lambda^4 \rangle + \left\langle \Lambda^4 \ln \frac{96\pi^4 \Lambda v}{\langle \Lambda^4 \rangle \tau} \right\rangle \right].
\]
It is clear that, to find the transition point, it is necessary to simultaneously use the relations (22) and the solutions to the system (13)–(16). Using the obtained results, we can compute the jumps in the elastic constants due to the condensate. Thus, for the stiffness \( c_{xxxx} = c_x \) we have
\[
\frac{1}{c_{2x}} = \frac{1}{c_{1x}} \left[ 1 + \frac{8\pi^4 \nu_x}{\langle \Lambda^4 \rangle} \right].
\]
Here the indices 1 and 2 respectively correspond to the points 1 and 2 in Fig. 2. Thus, we can derive simple expressions for the jumps in the elastic constants. The most attractive for comparison with experiment is the relation (21), since for its verification it is necessary to know only the elastic moduli and the striction constants. This relation is not trivial and arises from the renormalization-group equations. In contrast to this, the analogous expression obtained from the formula (23) will express a trivial fact that follows from the presence of the strictive coupling. Besides the indicated verification of the relation (21), we can carry out calculations directly for the magnitudes of the jumps, but this requires either a numerical, or an approximate, integration of the system (13)–(16).
4. ELASTIC ANOMALIES IN TGS
As is well known, an effectively four-dimensional situation obtains in uniaxial ferroelectrics owing to the dipole–dipole interaction.\(^{[4]}\) Therefore, the theoretical ideas considered here can be verified on these materials. The most suitable objects for comparison are TGS and its analog TGSe. According to the experimental results obtained in Ref. 7, the fluctuation correction to the elastic constants in TGS behaves like \(\ln \tau\), which should testify to the effectively four-dimensional nature of the fluctuations (strictly speaking, the dependence should have the form \(\ln^{1/3} \tau\),\(^{[4]}\) but this is difficult to detect experimentally). The elastic and striction constants have also been measured\(^{[8,9]}\) below the transition point. An estimate of the magnitude of the effective constant \(\lambda - \nu\) in the ferroelectric phase is given in Jona and Shirane's book.\(^{[10]}\)
This allows us to estimate the fluctuation corrections at the transition point. Naturally, since we use, instead of data above the transition point, data measured in the ferroelectric phase, and moreover far from the transition point, we can obtain only order-of-magnitude estimates. An important source of error here is the inaccuracy in the determination of the elastic moduli. Thus, for the moduli \(c_{15}, c_{25}, c_{35},\) and \(c_{44}\) the error can be of the order of the constants themselves. The same is true of the striction constants, since they are computed using the elastic moduli.
Therefore, it is reasonable to perform only approximate computations. As is well known, TGS belongs to the monoclinic system, and the spontaneous moment in it is directed along the Y axis. Therefore, a logarithmic divergence will occur for momenta lying in the XZ plane. As noted above, the averaging over the angles should be done with a weight connected with the anisotropy of the Green function. Since the present calculations are approximate, it is reasonable to take the Green function to be isotropic. Let us now compute the bare values \(\mu_0\) and \(\nu_0\). The computations are carried out by an elementary Gaussian integration over the inhomogeneous and homogeneous deformations, respectively. Unfortunately, the explicit expression, especially for \(\mu_0\), is too unwieldy to give here; in any case, the integration is elementary. Numerically, these quantities have the following form:
\[
\mu_0 = \frac{46q_x^4 + 48q_y^4 + 90q_z^4 + 76q_x^2q_y^2 + 28q_x^2q_z^2 + 67q_y^2q_z^2 - 144q_x^3q_z}{3.0q_x^4 + 2.5q_y^4 + 7.3q_z^4 + 5.4q_x^2q_y^2 + 1.6q_x^2q_z^2 - 2.3q_y^2q_z^2 + 0.2q_x^3q_z}.
\]
The quantity \(\mu_0\) has been expressed in units of \(10^{-11}\) cm\(^2\)/dyn; \(\mu_{0\max} \sim 15 \times 10^{-11}\) cm\(^2\)/dyn and \(\nu_0 \sim 25 \times 10^{-11}\) cm\(^2\)/dyn. The computation of the mean values of \(\mu_0\) is also performed numerically and yields \(\langle \mu_0 \rangle \sim 8 \times 10^{-11}\) cm\(^2\)/dyn, \(\langle \gamma \rangle \sim 0.53\), \(\langle \gamma^2 \rangle \sim 0.35\). The value of the effective constant is \(\lambda_0 - \nu_0 \sim 5 \times 10^{-11}\) cm\(^2\)/dyn, so that \(\lambda_0 \sim 30 \times 10^{-11}\) cm\(^2\)/dyn. Hence we obtain \(x_0 = \lambda_0/v_0 \sim 2\) and \(y_0 = \nu_0/v_0 \sim \frac{5}{3}\).
Thus, the situation in TGS corresponds to the case of strong coupling. Therefore, we can assume that \(x\) at the transition point differs little from \(x_0\) and that \(z\) at the transition point will be small. In that case \(\langle \gamma \rangle\) and \(\langle \gamma^2 \rangle\) will be almost constant, and we can use the approximate solution (17). The transition point is determined by solving (17) jointly with the stability condition \(\lambda = \nu\), i.e.,
\[
x = \frac{y_0(1-z)}{1 - y_0 z},
\]
with the values of \(x_0\), \(\langle \gamma \rangle\), and \(\langle \gamma^2 \rangle\) found above.
From here we find \(z_n \sim 0.05\) and \(x_n \sim 1.7\). The correction to \(\langle \gamma \rangle\), \(\Delta \langle \gamma \rangle = z_n (\langle \gamma \rangle - \langle \gamma^2 \rangle) \approx 0.01\), so that the initial assumptions are correct.
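The joint solution can be checked numerically. A small pure-Python fixed-point iteration (with the order-of-magnitude TGS inputs quoted above, and the angular averages assumed frozen) reproduces the quoted values \(z_n \sim 0.05\), \(x_n \sim 1.7\):

```python
import math

# Self-consistent transition point: the flow solution z(x) of the frozen-
# coefficient equation together with the stability condition lambda = nu.
# The input numbers are the order-of-magnitude TGS estimates, not fitted values.
G1, G2 = 0.53, 0.35           # <gamma>, <gamma^2>
X0, Y0 = 2.0, 5.0 / 3.0       # bare x_0 = lambda_0/v_0 and y_0 = nu_0/v_0
B = 8.0 * G1 - 1.0
A = math.sqrt(48.0 * G2 - B * B)

def z_of_x(x):
    # z along the flow as a function of x (closed-form solution).
    num = 6.0 * A * (X0 - x)
    den = A * A + (6.0 * x - B) * (6.0 * X0 - B)
    return 1.0 - math.exp(-(2.0 / A) * math.atan(num / den))

def x_of_z(z):
    # Stability condition lambda = nu, i.e. x = y = y0(1-z)/(1 - y0 z).
    return Y0 * (1.0 - z) / (1.0 - Y0 * z)

# Damped-oscillation fixed-point iteration converges in a few dozen steps.
z = 0.0
for _ in range(100):
    z = z_of_x(x_of_z(z))
x_n, z_n = x_of_z(z), z
```

The iteration settles near \(z_n \approx 0.05\), \(x_n \approx 1.7\), consistent with the estimates in the text.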
Knowing \(z_n\), we can estimate the fluctuation corrections to the elastic constants. We have \(\Delta c_{11}/c_{11} \sim 0.005\), \(\Delta c_{22}/c_{22} \sim 0.01\), and \(\Delta c_{33}/c_{33} \sim 0.003\). The experimental value \(\Delta c_{33}/c_{33} \sim 0.05\), determined from the \(c_{11}\) and \(c_{22}\) anomalies, is considerably higher. In comparing with experiment, we should take into account that, for the reasons indicated above, the relations between the striction constants used here may be quite unreliable, although their order of magnitude should be close to the measured one. Therefore, the greatest experimental value should be compared with the greatest computed value, i.e., the experimental \(\Delta c_{33}/c_{33}\) should be compared with the computed \(\Delta c_{22}/c_{22}\). On the whole, for the above-indicated reasons, the question of comparison with experiment remains open.
The author is deeply grateful to V. L. Pokrovskii for supervising the work, to D. E. Khmel'nitskii for constant attention to the work and for numerous discussions, to K. A. Minaeva for elucidating the experimental situation, and to A. I. Shapiro for initial support.
---
\(^1\)This circumstance was pointed out to the author by D. E. Khmel'nitskii.
\(^2\)A. I. Larkin and S. A. Pikin, Zh. Eksp. Teor. Fiz. 56, 1664 (1969) [Sov. Phys. JETP 29, 891 (1969)].
\(^3\)D. E. Khmel'nitskii and V. L. Shneerson, Zh. Eksp. Teor. Fiz. 69, 1100 (1975) [Sov. Phys. JETP 42, 560 (1975)].
\(^4\)D. J. Bergman and B. I. Halperin, Phys. Rev. B 13, 2145 (1976); A. Aharony, M. A. de Moura, Y. Imry, and T. C. Lubensky, Phys. Rev. B 13, 2176 (1976).
\(^5\)A. I. Larkin and D. E. Khmel'nitskii, Zh. Eksp. Teor. Fiz. 56, 2087 (1969) [Sov. Phys. JETP 29, 1123 (1969)].
\(^6\)K. G. Wilson and J. Kogut, Phys. Rep. 12C, 75 (1974).
\(^7\)I. F. Lyuksyutov and V. L. Pokrovskii, Pis'ma Zh. Eksp.
Amorphization of a Heisenberg ferromagnet with anisotropically distributed exchange couplings
E. V. Kuz'min and G. A. Petrakovskii
Physics Institute, USSR Academy of Sciences (Siberian Branch)
(Submitted March 24, 1977)
Zh. Eksp. Teor. Fiz. 73, 740–752 (August 1977)
The amorphization of a crystalline ferromagnet with anisotropically distributed exchange parameters is investigated. The amorphous ferromagnet is treated in the framework of a lattice model with fluctuating exchange couplings. With the use of the single-site approximation in the coherent-potential method, equations are found for the parameters of the coherent exchange matrix by means of which the magnon states of the amorphized ferromagnet are described on the average. The case of the amorphization of a quasi-two-dimensional ferromagnet with intraplanar ($J_0$) and interplanar ($K_0$) exchange parameters when the exchange interactions become isotropic is investigated. The coherent exchange parameter and the modified density of magnon states are found by using a distribution function corresponding to the mixing of the $J_0$ and $K_0$ couplings on amorphization. It is shown that the Curie temperature increases substantially on amorphization of a quasi-two-dimensional crystal.
PACS numbers: 75.50.Kj, 75.30.Et
1. INTRODUCTION
The problem of magnetic order in amorphous materials was raised by Gubanov\textsuperscript{[1]} and has undergone considerable development since then. Important results have been obtained in the papers of Handrich,\textsuperscript{[2]} Montgomery \textit{et al.},\textsuperscript{[3]} Foo and Bose,\textsuperscript{[4]} Gubernatis and Taylor,\textsuperscript{[5]} and others. A characteristic feature of these theoretical papers is that they treat magnetically and structurally stable systems of the cubic-ferromagnet type. In the crystalline state, such substances are characterized by only one exchange-coupling parameter, the magnitude of which is fixed over the whole crystal. The amorphization of such crystals is accompanied by the appearance of fluctuating exchange. Therefore, the results of the aforementioned papers reduce principally to a decrease of the magnetization and Curie temperature $T_C$ of the ferromagnets as they become amorphous. An important aspect is that the ferromagnetism can disappear completely when the exchange fluctuations reach a certain critical size.\textsuperscript{[4]} For this class of substances the existing experiments basically confirm the theoretical ideas.\textsuperscript{[5–8]}
It has been postulated\textsuperscript{[9]} that the strongest effects will arise in the amorphization of magnetic crystals whose magnetic structure is determined in an essential way by the geometry of the distribution of exchange couplings. Such a situation obtains, e.g., in quasi-low-dimensional magnets. The description of such magnets requires the introduction of at least two different exchange parameters. The type of magnetic order and the temperature of the magnetic phase transition in quasi-low-dimensional magnets are determined by the weak exchange that couples the magnetic chains or layers.\textsuperscript{[10]} At the same time, the same characteristics of the amorphized substance are more likely to be determined by a certain averaged exchange. Consequently, it is reasonable to expect that the amorphization of quasi-low-dimensional systems can lead to both a change in the type of magnetic order and a sharp increase in the temperature of the magnetic phase transition. Of course, the traditional effects of amorphization will remain,\textsuperscript{[11]} but in a number of cases their role becomes secondary.
The most important consequence of the amorphization of a substance is the disappearance of the periodic crystalline structure. This is the reason why the theoretical description of the magnets encounters great difficulties of a fundamental character. Because of the absence of translational invariance in the amorphous substance, the traditional methods developed in the theory of solids for perfect crystals do not work.
For an approximate description of the properties of an amorphous substance we can start from the assumption that, after averaging over all possible realizations, translational invariance is re-established on the average. The substance can then be described in terms of a certain ideal crystal with certain effective parameters. The procedure for averaging over the realizations should, in the general case, take into account fluctuations of the
CASE STUDY - DAVIS SQUARE, SOMERVILLE, MASSACHUSETTS
This Case Study appears in *Transit-Friendly Streets: Design and Traffic Management Strategies to Support Livable Communities* published by the Transportation Research Board, TCRP Report 33 [http://onlinepubs.trb.org/onlinepubs/tcrp/tcrp_rpt_33.pdf](http://onlinepubs.trb.org/onlinepubs/tcrp/tcrp_rpt_33.pdf)
*It's hip to be in Davis Square.*—Boston Globe, January 26, 1997
Somerville, Massachusetts, the most densely populated streetcar suburb in New England, is home to 76,000 people. In 1973, Davis Square, one of the city’s largest central squares and a traditional commercial center, was selected as the location for a new station on the Red Line T (subway), using a former freight rail line that bisected the community.
While the station was being planned, the city and the community developed a visionary strategy to radically transform the streets and pedestrian access to the square, provide additional on-street parking, improve its visual appearance, and create opportunities for new development. At the heart of Davis Square is a complex six-point intersection, consisting of four major collector roadways and two smaller roadways (Figure 3-19). Until their reconfiguration in the 1970s, two major, pedestrian-unfriendly streets bisected the square, and several freight trains ran right through the square each day on the Boston and Maine Railroad, forcing traffic to back up for long periods of time. While the Massachusetts Bay Transportation Authority (MBTA) was building the station and a new plaza, the city of Somerville set to work on construction of pedestrian-oriented streetscape and landscape improvements, facade renovations, and a redevelopment plan to attract new businesses.
Once a gritty, down-at-the-heels intersection, Davis Square is now a vibrant nightspot and popular shopping district. New restaurants and nightclubs attract a young crowd from all over the Boston area to what is billed as an alternative to Harvard Square in Cambridge. There are also many new professional offices and neighborhood-oriented services. Yet, the square retains its residential character and ably serves the needs of a diverse mix of residents. What has been achieved goes far beyond what the farsighted community envisioned when it began to plan the square’s revival in the mid-1970s.
**Project Goals**
The 20-year revitalization of Davis Square has occurred, not as the result of one plan or initiative, but as a series of plans that have evolved over time as the needs of the area have changed. The square’s success is attributable to the city’s sustained commitments coupled with a very involved and energetic residential community. These parties wielded significant political influence in the city and were able to develop a long-term vision at a time when the area was suffering from the urban decay and disinvestment faced by many 19th century industrial, working-class neighborhoods.
The primary goal set forth in the Davis Square Action Plan adopted in 1982 was to use the new Red Line Station as a cornerstone for redevelopment, strengthening Davis Square as a viable shopping district while preserving the residential character of the neighborhood. After convincing the MBTA to route its Red Line extension through Davis Square, the city of Somerville then set out to improve access to Davis Square for pedestrians, cars, buses, and bicycles.
In a planning study prepared by the city’s Office of Planning and Community Development (OPCD) and its consultants in 1980 (Working Paper No. 5 of the Davis Square Planning Study), these goals were explicitly stated:
- Ensure pedestrian safety and convenience throughout Davis Square while minimizing traffic congestion;
- Design traffic patterns that promote the aims of the Davis Square business community;
- Provide safe and convenient pedestrian, car, and bus access to the (new MBTA Red Line) station; and
- Work with the MBTA and other agencies to appropriately reuse the railroad right-of-way.
Davis Square residents, in an effort to retain the residential character of their neighborhood, decided early on that the new Red Line Station should not provide any parking facilities, which they feared would destroy the character and human scale of their neighborhood. A 1989 Davis Square Parking Study further reinforced this basic commitment to reducing the number of vehicular trips to Davis Square as a way of reducing parking demand and relieving congestion.
**Design and Planning Process**
The OPCD and the Metropolitan Area Planning Council put together the first Davis Square urban design and business study in 1977, while the Red Line extension was in the planning stage. That same year, the Davis Square Task Force was formed; it was composed of members of the Ward Six Civic Association, local business owners, and local officials and was to act as an advisory committee. The task force provided input about the revitalization plans, addressed issues related to construction of the Red Line extension, and helped determine the character of new development in the square. The OPCD hired consultants to study potential land uses, including office and retail, traffic, parking, and other issues. The studies were generated as a series of working papers, termed the “Davis Square Planning Study.” The findings of these studies, along with input from the task force, were synthesized for the Davis Square Action Plan, which was adopted in 1982.
To help resolve the traffic issues within Davis Square, a 1976 federally subsidized TOPICS (Traffic Operations Program to Increase Capacity and Safety) study was conducted. This study recommended that two of the square’s major two-way streets (Highland Avenue and Elm Street) be converted to one-way and that intersection signalization be simplified. This led to 5 years of fast-moving traffic careening through the square and to what many residents decried as a very dangerous situation for pedestrians. “I can’t cross the street” and “you take your life into your hands” were among the complaints that the city heard from Davis Square residents. While planning the square’s revitalization, the city and the Davis Square Task Force decided that, along with the major MBTA improvements to the square, measures had to be taken to mitigate the pedestrian-hostile effects wrought by the TOPICS program.
Finally, the city of Somerville and the Task Force initiated many other projects to accompany the Red Line extension and Davis Square improvements. Property redevelopment activities included a storefront and facade improvement grant program, financing for building renovations, and designation of a portion of Davis Square as an urban renewal district. Property acquisition, clearance, infrastructure upgrades, and development took place within the boundaries of this district. The district was later developed as a 100,000-ft² office and retail complex, including public open space and a parking garage that serves patrons and employees of local businesses.
**Design Strategies and Features**
The Davis Square intersection was radically reconfigured, with a plaza added between the subway station entrances as well as other pedestrian enhancements. Sidewalks, which average about 10 ft in width, were widened at intersections and at other strategic spots, particularly crosswalks, to enhance pedestrian capacity, circulation, and safety. The sidewalks, many of the crosswalks, and the pedestrian islands are brick paved and the crosswalks are clearly marked (Figure 3-20). Safety islands are provided at some intersections. On Elm Street and Highland Avenue, crosswalks and neckdowns are provided at midblock locations (to reduce walking distance between intersections for pedestrians) and large signs advise vehicles to stop for pedestrians in the unsignaled crosswalks.
The four collector streets—Holland Street, Elm Street, Highland Avenue, and College Avenue—average about 40 ft in width, with two travel lanes and parallel parking with 1-hour meters on both sides. Holland Street and College Avenue are two-way, whereas Elm Street and Highland Avenue are one-way within the square. Several smaller one-way streets also connect to the square.

The MBTA developed a central plaza linking the two station entrance buildings built on an old railroad right-of-way. This plaza replaced a poorly defined open area containing at-grade parking spaces and debris. The plaza is designed to serve as the center of Davis Square, a gathering place and center for activities and outdoor entertainment. The plaza and the station were both eligible for state percent-for-art moneys. One percent of the cost of constructing the new station entrances was used to commission several figurative sculptures, some of which represent local citizens, which are set within the plaza. In addition, tiles designed by neighborhood children were installed in the station and a large sculpture was commissioned to hang over the tracks. The public art projects fit in with the city’s goal of creating a community place—a place where residents can feel a sense of ownership.

The old Boston and Maine Railroad right-of-way has become a bike path and greenway and, for part of its length, a designated bus way leading to the subway station. The bus way functions as a passenger stop for the College Avenue station entrance building, also called the head house, and there are other designated bus stops along Holland Street. MBTA buses and Tufts University’s van services connect travelers to the subway line at these points.
Most of the remaining railroad right-of-way between Davis Square and the Alewife MBTA station in Cambridge (the Red Line’s northern terminus) was redeveloped and landscaped as a linear park or bicycle/pedestrian pathway. A public park was constructed directly behind the Holland Street MBTA head house as part of a later project. The linear park connects at Alewife MBTA station with the Minuteman Trail, a 13-mile bicycle path traversing the towns of Arlington, Lexington, and Bedford. On the east side of Davis Square, an additional portion of the old right-of-way was redeveloped as a bike path. Known as the Grove-Cedar Streets segment, this facility was constructed in 1994 and is being upgraded with new lighting and grading, with portions used as a community garden.
**Impacts and Assessment**
The MBTA station and associated improvements have significantly transformed Davis Square, efficiently balancing a significant amount of vehicular traffic with an active pedestrian environment. The streetscape improvements surrounding Davis Square station enhance pedestrian access to the station and local businesses while slowing traffic. These improvements also give the commercial area a more coherent appearance.
**Transit Impacts**
Undoubtedly the transit improvements have contributed significantly to the square’s overall health, and the square’s physical environment enhances public access and desire to use transit. Locating the subway station in Davis Square significantly decreased travel time into Boston for Somerville residents, making the area an attractive residential alternative for Boston-bound commuters. The presence of mass transit also makes the square’s commercial district more attractive for office development, which contributes to the area’s daytime activity.
The careful planning of Davis Square and its transit station has made it possible for people to reach the square without bringing more cars into this densely settled area. In fact, a before-and-after study completed in 1987 found that most subway riders boarding at Davis Square walk to the station (66 percent) and many others take a bus (21 percent). The station entrance buildings serve as bus shelters for passengers transferring between buses and between bus and subway. This multimodal function is particularly well planned at the College Avenue head house, where use of the former right-of-way as a bus way gives priority to buses, reducing the overall traffic moving through the intersection as well as bus travel times (Figure 3-21).
**Traffic Impacts**
The TOPICS changes made in the late 1970s were clearly designed to move automobile and truck traffic rapidly through the square. The result ignored the needs of pedestrians and would probably have led to further deterioration of the square had not the construction of the subway
extension played a major role in reversing that trend. Now, the wider sidewalks, neckdowns, on-street parking, and irregular street configuration help to slow the traffic entering the square. At the same time, the pedestrian islands, medians, and center island serve to channel and calm the movement of cars within the intersection.
The street flow patterns do offer motorists numerous alternatives and turning options: two concurrent rotary (roundabout) patterns are formed by the one-way traffic on Highland Avenue and Elm Street in conjunction with two other adjoining streets. This gives motorists looking for on-street parking the option of re-circulating through the square. A traffic and circulation study conducted by the city in 1981 showed traffic volumes through the Davis Square area to be over 79,000 cars in a 24-hour period. Although recent traffic counts are not available, traffic has undoubtedly increased since that time as the popularity of Davis Square as a destination has grown. Traffic generally moves smoothly but slowly, and during evening rush hour all cars make it through the intersection in two cycles, if not one. The slower movement of traffic has made the square safer and easier for pedestrians to cross, although it has also encouraged jaywalking.
Although local businesses pushed for an increase in parking, residents thought more parking would lead to disintegration of the neighborhood’s urban fabric. “Park and ride” facilities were completely ruled out and even “kiss and ride” dropoffs were discouraged. As a result, no facilities for commuter parking are provided in the square.
**Pedestrian Impacts**
The new brick and granite paving, upgraded lighting, and facade improvements have given the plaza and surrounding streets a fresh, well-maintained appearance, in marked contrast to the square’s previous unkempt and deteriorated state. The plaza is principally used as a central square by residents who sit, watch, rest from shopping or exercise, or wait for the next bus. The plaza also functions as a resting place for cyclists who use the adjacent bike paths and as a meeting place or “front yard” for adjacent businesses. Annual community events such as ArtBeat, sponsored by the city and the Somerville Arts Council, are staged there. Periodically, the plaza is used for public speaking events as well. Because of the density of Somerville and the proximity of residential neighborhoods to the Davis Square Station (as well as the carefully controlled parking supply), the number of people who walk to the station is high. The transformation of Davis Square from a pedestrian hazard to a pedestrian attraction has offered an added incentive for walkers. By making crossings safer, by making the sidewalks more comfortable and better lighted, and by offering a diverse mix of retail activity along the streets, the square has become extremely friendly to pedestrians.
Efforts have recently been made to further improve pedestrian use of the square. Pedestrian crossing signal cycles, however, may need to be lengthened to reduce the amount of jaywalking that takes place. The pedestrian-only cycle, with a “chirping” walk signal, occurs about every 2½ min (slightly shorter when the pedestrian call button is pushed, in which case it cycles every 45 sec to 1 min). The pedestrian signal lasts a brief 8 sec, followed by 12 sec of a flashing “don’t walk.” There are, however, pauses in traffic flow long enough for pedestrians to cross against the “don’t walk” signal, which they do quite often. For example, the light on College Avenue is red for 40 sec, which gives pedestrians coming out of the College Avenue entrance time to cross; however, cars turning during the 45-sec, right-turn green arrow from Highland onto College often conflict with these pedestrians (Figure 3-21).
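A back-of-the-envelope check makes the timing problem concrete. The sketch below uses only the figures cited in the text and ignores the shorter cycle that follows a push of the pedestrian call button:

```python
# Pedestrian signal timing at Davis Square, per the figures in the text.
cycle_s = 150          # pedestrian-only phase recurs about every 2.5 min
walk_s = 8             # solid "walk" indication
flashing_s = 12        # flashing "don't walk" clearance interval

# Total window per cycle in which a crossing can legally start or finish.
crossing_window = walk_s + flashing_s
share = crossing_window / cycle_s

print(f"Legal crossing window: {crossing_window} s out of each {cycle_s} s cycle")
print(f"Share of the cycle available to pedestrians: {share:.0%}")
```

Only about 13 percent of each cycle accommodates crossing, which helps explain why pedestrians so often cross against the signal during gaps in traffic.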
**Economic Impacts**
When the new transit station opened in 1984, business around Davis Square did not immediately thrive. The number of retail stores in the area declined from 68 in 1977 to 56 in 1987. However, many nonretail uses, such as beauty salons and real estate offices, had already begun to fill the empty retail spaces. With the Boston area’s emergence from its long recession, the area truly began to revive. The completion in 1993 of a new 100,000-ft² office and retail development on an
urban renewal site adjacent to the Holland Avenue head house also may have helped spur investment. Many other smaller properties have subsequently been redeveloped. Clearly, the community’s vision of a rebirth of commercial and retail activity has, in the past few years, been fully realized. The square’s businesses all benefit from their proximity to the MBTA station, which enables them to reach a wider patronage while serving local residents well. Retail vacancy rates around the square are close to zero.
**Costs**
The total cost of the Davis Square portion of the MBTA Red Line extension project was approximately $29 million. Urban Systems Program funding totaling $1.2 million was received from MASS DPW to fund streetscape improvements; $100,000 in CDBG moneys was received in 1982 to provide special paving, landscaping, and street furniture. HUD money was also used for streetscape materials; Congestion Mitigation Air Quality (CMAQ) money is being used to fund implementation of a new bus shuttle; and the Massachusetts Highway Department is paying for an expansion of the bike path. Somerset Bank funded a $1 million storefront rehabilitation program. In 1981, Davis Square won Commercial Area Revitalization District (CARD) designation, making it eligible for low-interest industrial revenue bond financing for business expansions and new construction.
**Conclusions**
The city and community leaders agree that the residents’ intensive involvement throughout the planning process helped to set the direction and has led to the success of Davis Square. Many regional planners and even some city officials believed 20 years ago that the only way a new transit station could succeed was as a commuter “park and ride” facility. The city and residents have proven that they were right in fighting to preserve the residential character of their community and to create a setting for transit based on a comfortable balance between pedestrians and vehicles—instead of an automobile-dominated “park and ride” serving commuters from distant suburbs.
**Next Steps**
Today, 13 years after the opening of the Davis Square Red Line station in 1984, the city is continuing to implement the vision set out in the mid-1970s with continued pedestrian and transit-related enhancements: refurbishing of the central plaza and introduction of a new shuttle bus system that will provide even better access to the station. In addition, the Massachusetts Highway Department will be conducting a bike path enhancement project, which will revisit the existing bike path spaces, replace fences along the bike path with friendlier gates, and add street lighting to facilitate nighttime riding. The city is also working to redevelop some of the other squares in the city, most notably Union Square where, among other initiatives, they are exploring the feasibility of locating a commuter rail stop. With moneys from CMAQ and a matching grant from the city, Somerville also has plans to develop a cross-town shuttle to link each of the city’s numerous squares to each other, to key commercial areas, and to the Red Line, thereby enhancing north-south transit service and eliminating half the bus transfers in the city.
**Lessons Learned**
- There needs to be an overall shared vision and consensus about what ought to be done. In Somerville, this vision was developed nearly 25 years ago and is being faithfully and incrementally implemented through careful planning.
- Incremental changes were considered positive measures because they allowed for evaluation and corrections of what had been accomplished.
- Taking the steps necessary to create a walkable neighborhood will encourage people to walk to a transit station.
- An intersection that is hostile to pedestrians and friendly to vehicles can be reconfigured so that it is friendly to both.
Figure 3-19. Davis Square, Somerville, Massachusetts.
Figure 3-20. In Davis Square, ample paved pedestrian crosswalks and refuges are provided to make navigating between the Red Line T (subway) station, bus stops, and this seven-point intersection easier.
Figure 3-21. A dedicated bus way serves the College Avenue T station entrance.
The “Matthew Effect” and Market Concentration: Search Complementarities and Monopsony Power
Jesús Fernández-Villaverde, Federico S. Mandelman, Yang Yu, and Francesco Zanetti
Working Paper 2021-4
January 2021
Abstract: This paper develops a dynamic general equilibrium model with heterogeneous firms that face search complementarities in the formation of vendor contracts. Search complementarities amplify small differences in productivity among firms. Market concentration fosters monopsony power in the labor market, magnifying profits and further enhancing the output share of high-productivity firms. The combination of search complementarities and monopsony power induces a strong “Matthew effect” that endogenously generates superstar firms out of uniform idiosyncratic productivity distributions. Reductions in search costs increase market concentration, lower the labor income share, and increase wage inequality. The model also transforms short-lived negative aggregate shocks into persistent recessions that heighten market concentration.
JEL classification: C63, C68, E32, E37, E44, G12
Key words: market concentration, superstar firms, search complementarities, monopsony power in the labor market
https://doi.org/10.29338/wp2021-04
The authors gratefully acknowledge Michael Peters for an outstanding discussion and most helpful suggestions to simplify our analysis, as well as Luis Garicano and Gustavo Ventura for insightful comments. Ryan Zalla provided outstanding research assistance. Zanetti gratefully acknowledges financial support from the British Academy. The views expressed in this paper are solely the responsibility of the authors and should not be interpreted as reflecting the views of the Federal Reserve Bank of Atlanta or the Federal Reserve System. Any remaining errors are the authors’ responsibility.
Please address questions regarding content to Jesús Fernández-Villaverde, University of Pennsylvania, 133 South 36th Street, Philadelphia, PA 19104, firstname.lastname@example.org; Federico Mandelman, Federal Reserve Bank of Atlanta, 1000 Peachtree Street NE, Atlanta, GA, 30309, email@example.com; Yang Yu, Shanghai University of Finance and Economics, 318 Wuchuan Road, Shanghai, China, firstname.lastname@example.org; or Francesco Zanetti, University of Oxford, Manor Road, Oxford, OX1 3UQ, United Kingdom, email@example.com.
Federal Reserve Bank of Atlanta working papers, including revised versions, are available on the Atlanta Fed’s website at www.frbatlanta.org. Click “Publications” and then “Working Papers.” To receive e-mail notifications about new papers, use frbatlanta.org/forms/subscribe.
1 Introduction
Merton (1968) famously identified the “Matthew effect”: *For whoever has will be given more, and they will have an abundance. Whoever does not have, even what they have will be taken from them*. Merton’s insight was straightforward: small exogenous differences get amplified, often by orders of magnitude, by the endogenous responses of agents to those small differences. For instance, in Merton’s original analysis, small differences in scientific productivity are magnified by the extreme inequality in the allocation of limited resources (grant money, graduate students, journal pages). Imagine a national research agency that only has money to finance one research lab, but can correctly identify ex-ante differences in scientific productivity among professors. Even if professor A is just 1% more productive than professor B, professor A will get the funds to run a lab and become famous and influential. In contrast, professor B will linger in obscurity.
In this paper, we argue that a “Matthew effect” drives the high levels of market concentration observed in the data, with a few superstar firms and many small firms, even when the differences in productivity among firms are minor. The “Matthew effect” operates through strategic complementarities under direct search and monopsony power in the labor market.
Let us unpack these mechanisms. Firms need to sign vendor contracts with their suppliers before producing. This process involves costly search. If you are going to operate an ice cream truck company, you need to find a supplier of milk, a supplier of waffle cones, a supplier of toppings, a supplier of ice cream mixers, a supplier of trucks, etc. This search is costly in time and resources.\(^1\)
Intermediate-goods suppliers search with higher effort when they are more productive because the potential profit from a vendor match is larger. For example, a high-productivity waffle cone manufacturer will pay the costs in time and resources of attending a trade fair for the restaurant industry, but a low-productivity manufacturer will not. Conversely, final-goods producers send more buying agents when they know that the intermediate-goods suppliers are searching for buyers. This decision is particularly salient with directed search: i.e., the ice cream company receives a directory of the booths in the trade fair and, upon seeing that a high-productivity waffle cone producer is attending the fair, sends an agent to visit that booth right away.
\(^1\)This example is taken from the fascinating tale of how Mister Softee tried and failed to establish an ice cream business in Suzhou in the 2000s. See [https://supchina.com/podcast/the-rise-and-fall-of-a-suzhou-soft-serve-baron/](https://supchina.com/podcast/the-rise-and-fall-of-a-suzhou-soft-serve-baron/).
Hence, high-productivity intermediate-goods vendors will form more matches with final-goods firms. In the terminology of Bulow et al. (1985), a strategic complementarity appears because the stronger the search of the intermediate-goods producers, the stronger the directed search of the final-goods firms and vice versa.
The search complementarity mechanism will induce a highly concentrated distribution of firms’ size, vacancies, and output. In particular, small differences in productivities among intermediate-goods firms will result in large differences in firms’ size and output. Interestingly, this “Matthew effect” transforms a uniform distribution of idiosyncratic productivity into a highly skewed firm distribution characterized by the presence of superstar firms. In contrast with most of the literature, we do not need fat tails in the distribution of firms’ idiosyncratic characteristics (e.g., productivity, demand shifters, etc.) to generate this result.
The process, however, does not end here. In our model, there is a second mechanism reinforcing the “Matthew effect”: the labor market power of large firms. If firm A is the dominant waffle cone producer in a region, it has market power when hiring bakers. This market power creates two effects. First, more productive firms will pay higher wages (as we observe in the data): the surplus of a labor match is larger and the worker will receive part of it. However, conditional on productivity, wages will be a lower share of the surplus. That is, large firms will have a lower labor income share (again, as we see in the data). The higher the market power, the stronger these two effects will be.
This labor market power will also have a consequence for search under strategic complementarities. Since larger firms will keep a higher share of the surplus of a labor match, larger firms will also have a stronger incentive to search with more intensity (beyond the direct effect of higher productivity). That higher search intensity will be reinforced by the response of final-good producers. That is, search complementarities transform labor market power into significant differences in the market structure and firms’ sizes and output.
The mechanisms outlined above also have significant business cycle implications. After a negative aggregate productivity shock, firms will decrease their search effort. This fall in search effort will amplify the original shock and make it more persistent over time. Furthermore, the reduction in aggregate productivity will affect low-productivity firms disproportionately because their profit margins are smaller. Thus, low-productivity firms will reduce their search effort more than high-productivity firms, leading to more market concentration.
To explore these mechanisms formally, we first develop a simple model that isolates the effects of search complementarities and monopsony power on the distribution of firms, market concentration, and the response to aggregate shocks. The model is not designed for quantitative work, but it illustrates all our ideas transparently.
Next, we build a dynamic general equilibrium model with heterogeneous firms and frictional labor markets. At the core of our model, we embed complex production processes that require long-lasting vendor relations among different intermediate- and final-goods firms. This assumption is motivated by the strong empirical evidence on the existence of sophisticated multi-firm value chains. In the model, the intermediate-goods firms manage a continuum of product lines and search for buyers of those goods. Final-goods producers assign buying agents to find those product lines and sign contracts with them. The two-sided search among firms leads to strategic complementarities: intermediate-goods firms’ optimal search effort increases with the visits of final-goods producers’ buying agents, and vice versa.
Our model is enriched with monopsony power in the labor market by adding search and matching frictions that allow firms to set wages below the marginal product of labor. To do so, we consider a matching technology in the labor market à la Butters (1977), which allows multiple workers to apply to a single vacancy randomly. This environment provides the firm with the power to select one worker among multiple job applicants. A firm operating in concentrated markets can induce workers to accept a low wage, since it can threaten to withhold future job offers from a worker who declines the current one. Intuitively, if just a limited number of firms dominate a segment of the labor market, a prospective employee who rejects a job offer may be excluded *de facto* from future consideration, since the firm will prefer other job applicants. Top firms exploit their market power to offer low wages, which raises their profits and, in turn, encourages them to search more actively and attract more visits from potential partners. Labor market power thus enhances search complementarities in the goods market and is a critical force in generating market concentration.
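To see, in a back-of-the-envelope way, why the threat of exclusion depresses wages, consider the following stylized calculation. It is an illustration of ours, not the model’s actual wage equation (which comes out of the Butters-style matching environment): a worker can accept a permanent wage offer $w$ today or reject it, collect an outside flow benefit $b$ during $T$ periods of exclusion, and then re-enter the market at the going wage $\bar{w}$. With discount factor $\beta$, the firm can push its offer down to the reservation wage $w^{r}$ that leaves the worker indifferent:

$$\frac{w^{r}}{1-\beta} \;=\; \frac{\left(1-\beta^{T}\right) b}{1-\beta} \;+\; \frac{\beta^{T}\,\bar{w}}{1-\beta} \quad\Longrightarrow\quad w^{r} \;=\; \left(1-\beta^{T}\right) b + \beta^{T}\,\bar{w}.$$

With a quarterly $\beta = 0.99$ and a six-month exclusion ($T = 2$), $w^{r} \approx 0.02\,b + 0.98\,\bar{w}$: a modest discount relative to the market wage, consistent with the mild degree of monopsony power that the calibration in Section 4 backs out.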
We calibrate the model to match quarterly U.S. data and then use it as a measurement device. Given that we want to be consistent with the differences in measured total factor productivity across manufacturing plants, the rate of factory idleness, and labor market observations, how much monopsony power do we need to account for market concentration by the top 10% of firms? From this exercise, we back out what we judge to be a mild form of monopsony power: the equivalent of firms being able to punish workers who reject an offer by turning down their future applications for around six months. This reasonable degree of monopsony power enhances our trust in the model as a quantitative laboratory for further exercises.
How can we use our model to think about the recent experience of advanced economies? Several studies have documented a steady increase in market concentration over the past three decades. For example, Autor et al. (2020) show that, starting in the early 1980s, sales moved towards the most productive firms across U.S. industries. At the same time, labor markets became increasingly dominated by fewer players, lowering the bargaining power of workers and deepening income inequality (Wu, 2019). Monopsony power in labor markets has also boosted firm profits and market concentration (Hershbein et al., 2020). Furthermore, the early 1980s witnessed the outset of the Great Moderation, a sustained period of low aggregate volatility.
Our model presents a simple mechanism to jointly account for all these observations: a fall in the costs of signing vendor contracts. While all firms increase their search effort when the costs of signing vendor contracts fall, high-productivity firms make a disproportionate gain out of this change, as their search effort decisions are much more non-linear in search costs than those of low-productivity firms. Thus, in our model, lower search costs lead to i) higher market concentration, ii) lower labor income shares, iii) more labor market power, and iv) lower responses of output to aggregate shocks. We observe the same four facts in U.S. data.
What do we have in mind in terms of lower search costs? Improvements in information technology. The internet has made it much easier to identify vendors and suppliers, to manage them, to run logistics and inventories, and to handle sophisticated value-added chains.
While the market structure changes the response of the economy to aggregate shocks, the effect also works in the opposite direction: the market structure is endogenously determined by the realization of aggregate shocks. The persistently low search effort and output of low-productivity firms after a large negative aggregate shock result in increasing market exit among the product lines owned by these firms. As a result, deep slumps render the market structure increasingly concentrated.
This behavior also matches the empirical evidence. Using U.S. Census firm-level panel data, Salgado et al. (2019) show that business cycles are skewed. That is, during recessions, a subset of firms significantly underperforms, leading to a large fat left tail in the production distribution. The process is reversed in expansions, when the right tail becomes fatter.
The market concentration effect of negative aggregate shocks also appears in the customer base literature. Chevalier and Scharfstein (1996) find that, during recessions, small (and liquidity constrained) firms invest less in expanding their customer base and raise prices to boost their liquidity positions. Bigger firms expand their customer base in recessions, which renders the market more concentrated afterward. Investment in customer base resembles network formation in the context of our model.
Our paper connects with many other different areas of research. First, and most importantly, there is a tradition of papers exploring the firm’s size distribution that goes back to the span-of-control model by Lucas (1978). One can think about our theory as an endogenous determinant of the span-of-control: production links must be formed either within firms or between firms. Directed search and strategic complementarities determine how many of these links are created in our model.
Our theory has two advantages with respect to a simple span-of-control model. First, we can generate a larger dispersion in firm size, compatible with observed differences in measured total factor productivity across plants. Second, our model gives us a simple margin to account for the simultaneous increase in market concentration and fall of the labor income share: the reduction in direct search costs, which we link with observed improvements in IT. In a simple span-of-control model, one would need to resort to either production functions getting closer to linear or a change in the underlying distribution of managerial talent to generate similar outcomes (see Garicano and Rossi-Hansberg, 2006, however, for more flexible versions of the span-of-control model).
Linked with the Lucas tradition, there is much recent research focused on growing market concentration. Aghion et al. (2019) find that IT explains the lower cost of production for bigger firms, with newer (or less efficient) firms finding it increasingly difficult to contest them. Similarly, Akerman et al. (2013), Bessen (2017), and Unger (2019) attribute this winner-takes-all mechanism to economies of scale arising from intangible capital and advances in information technology, which greatly improve product and inventory logistics.
A second strand of the literature has been devoted to understanding the recent decline in the labor share of output. See, among many others, Elsby et al. (2013), Karabarbounis and Neiman (2014), and references therein. De Loecker and Eeckhout (2018) attribute this phenomenon to a rise in weighted average firm markups, with Gutiérrez and Philippon (2018) emphasizing the role of weakening U.S. antitrust enforcement. Closer to our work is Autor et al. (2020), who argue that the decline in the labor share should be attributed to the reallocation of market share towards “superstar” firms with higher markups. Consistent with this hypothesis, Peters (2020) finds that markups vary systematically across firms, with incumbents investing to increase productivity growth (further raising markups). However, a creative destruction mechanism also exists in this last paper, as new and more efficient firms displace incumbents. Higher entry costs or frictions may thus deter this key pro-competitive mechanism.
Third, our paper also contributes to the growing theoretical literature on monopsony in labor markets, which is generated through diverse mechanisms. Examples include Ashenfelter et al. (2010), Berger et al. (2019), Manning (2011), Card et al. (2018), and Lamadon et al. (2019). In turn, empirical papers finding substantial market power in the labor market include Azar et al. (2019), Staiger et al. (2010), Falch (2010), Ransom and Sims (2010), and Matsudaira (2014).
Fourth, starting with the seminal contributions of Diamond (1982) and Weitzman (1982), several papers have linked strategic complementarities to aggregate fluctuations. See, without being exhaustive, Diamond and Fudenberg (1989), Huo and Ríos-Rull (2013), Kaplan and Menzio (2016), and Taschereau-Dumouchel and Schaal (2015). We depart from those papers, though, in our focus on how strategic complementarities and monopsony power create a “Matthew effect” on market concentration and in our analysis of how those mechanisms interact with aggregate shocks.
Finally, in Fernández-Villaverde et al. (2019), we explore how fiscal policy and strategic complementarities interplay to explain the persistence of both the business cycle and the unemployment rate. Our previous work abstracts, however, from firm heterogeneity, market concentration, and monopsonistic labor markets. It focuses, instead, on the possibility of multiple equilibria, which do not play any role in the current paper.
The remainder of the paper is structured as follows. Section 2 develops a simple model to outline the main ideas in our paper. Section 3 extends the simple model to a more fleshed-out dynamic general equilibrium model. Section 4 calibrates the model to U.S. data and uses it to measure monopsony power in the labor market. Section 5 presents our quantitative findings. Section 6 concludes.
2 A simple model
We start our analysis by presenting a simple model, with a closed-form solution, that embodies the central mechanisms we want to explore. The model incorporates an interplay between directed search and endogenous search effort that begets search complementarities. We will extend this simple model along two dimensions. First, we will incorporate endogenous variations in market concentration through the entry and exit of product lines. Our second extension will introduce monopsony power in the labor market to evaluate how such a power interrelates with market concentration.
While neither the simple model nor its two extensions are designed for quantitative work (we will impose restrictive functional forms and parametric choices), the mechanisms that drive our results are transparent. In Section 3, we will present an extended model that gives us quantitative predictions.
2.1 Environment
Time is discrete and infinite. The economy is composed of $J + 1$ islands. Each island $j \in \{1, 2, ..., J\}$ hosts an intermediate-goods producer ($I$), such as General Mills or Kellogg’s. Each intermediate-goods producer operates a unitary measure of product lines, such as the many food brands manufactured by General Mills, and has an idiosyncratic productivity shock $x_j$. The central island $J + 1$ hosts a representative household and a final-goods producer ($F$), such as Walmart, that purchases food items from General Mills.
The intermediate-goods producers and the final-goods producer must form a vendor relation before starting production, e.g., General Mills will not produce breakfast cereals if it does not have access to a supermarket to sell them. The intermediate-goods producer either does not have the technology to reach consumers directly or finds it too costly to do so. In fact, General Mills and similar firms do not sell to final consumers.
The process of search to form vendor relations is directed. At the start of each period $t$, the firm $F$ decides how many buying agents to send to each island to maximize its total profits. The firm $F$ can pick any positive real number of buying agents.
Production begins when a buying agent from firm $F$ signs a contract with a single product line in the firm $I$. In our example, Walmart decides how many buying agents to send to General
Mills. Each Walmart buying agent will work with a General Mills brand manager to reach a vendor contract for that brand. The more buying agents Walmart sends, the more contracts can be signed.\footnote{This buying agent, in real life, is a bundle of different workers (from logistics, legal, marketing). For our purposes, we can ignore that margin since all we care about is Walmart’s total buying cost.} The number of active product lines is equal to the number of buying agents that sign a contract. The total output for each signed contract is $2z_t x_j$, where $z_t$ is an aggregate productivity shock in period $t$. Output is equally split between firm $F$ and $I$.
Buying agents who fail to sign a contract with a product line in firm $I$ withdraw from the island, while the unmatched product line of firm $I$ stays idle for the period. A law of large numbers holds in the economy and, thus, probabilities equate realized shares. That is, if the equilibrium implies a 0.32 probability of meeting in any island $j$, a match occurs in 32% of product lines in this island.
The representative household owns all the firms in the economy, receives the aggregate net profits from them, and consumes them. Since our focus is on the consequences of firm heterogeneity, the representative household assumption simplifies our analysis.
At the end of each period $t$, all the vendor matches are dissolved, buying agents from firm $F$ return to their headquarters, and the searching process restarts \textit{ex novo} in period $t + 1$. This assumption transforms the dynamic programming problem of the firms into a sequence of static optimization problems. Figure 1 summarizes the structure of the economy.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{economy_structure.png}
\caption{Structure of the economy}
\end{figure}
A matching function determines the probability of meeting and signing a vendor contract. The likelihood of matching in each island $j$ depends on the measure of buying agents from firm $F$, $n_j^F$, the measure of product lines owned by firm $I$, $n_j^I$, and the effort firm $I$ exerts in finding a buying agent from firm $F$, $\sigma_j^I \in [0, 1]$ (to save on notation, we will only use a subindex $t$ for a variable when needed to avoid confusion). More precisely, the measure of newly formed matches is established by a matching function that is affine on $\sigma_j^I$ and Cobb-Douglas between $n_j^F$ and $n_j^I$:
$$M(\sigma_j^I, n_j^I, n_j^F) = \phi \sigma_j^I (n_j^F)^{\frac{1}{2}} (n_j^I)^{\frac{1}{2}}.$$
Since we assume that $n_j^I = 1$, the matching probabilities for firm $I$ and firm $F$ are $\pi^I(\sigma_j^I, n_j^F) = \phi \sigma_j^I (n_j^F)^{\frac{1}{2}}$ and $\pi^F(\sigma_j^I, n_j^F) = \phi \sigma_j^I (n_j^F)^{-\frac{1}{2}}$, respectively. Equivalently, the matching probability for each product line of firm $I$ is $M(\sigma_j^I, n_j^I, n_j^F)/n_j^I$ and for each buying agent $M(\sigma_j^I, n_j^I, n_j^F)/n_j^F$.
Output in each island $j$ is:
$$y_j = 2\phi \sigma_j^I (n_j^F)^{\frac{1}{2}} z_t x_j.$$
(1)
The cost of search effort for firm $I$ in island $j \in \{1, 2, ..., J\}$ is:
$$c(\sigma_j^I) = \frac{(\sigma_j^I)^3}{3}.$$
(2)
We pick a power of 3 in the function above for algebraic convenience, but all we need is convexity of the search cost.
The firm $F$ pays a unit cost of sending buying agents equal to $\kappa$, which we normalize to $\kappa = \phi/2$. Thus, the consumption that the representative household gets from island $j$ is:
$$c_j = 2\phi \sigma_j^I (n_j^F)^{\frac{1}{2}} z_t x_j - \frac{(\sigma_j^I)^3}{3} - \kappa n_j^F.$$
2.2 Nash equilibria
To find the Nash equilibria in our model, we consider the problem of firm $I$ in island $j$ that takes the measure of buyers from sector $F$ on its island, $n_j^F$, as given. The profit function for firm $I$ is:
$$J(\sigma_j^I, n_j^F | x_j, z_t) = \phi \sigma_j^I (n_j^F)^{1/2} z_t x_j - \frac{(\sigma_j^I)^3}{3}.$$
(3)
Maximizing \( J \left( \sigma_j^I, n_j^F \mid x_j, z_t \right) \) with respect to \( \sigma_j^I \), we obtain the best response function for firm \( I \) in island \( j \):
\[
\sigma_{j,t}^I = \sqrt{\phi \hat{n}_j^F z_t x_j}
\]
(4)
where, to simplify notation, we have defined \( \hat{n}_j^F \equiv (n_j^F)^{1/2} \).
Let us consider now the problem of firm \( F \). Since the search process in the intermediate-goods market is directed, the firm \( F \) sends enough buyers to island \( j \) to exploit all profit opportunities. Hence, the firm \( F \)'s marginal income from sending an additional buying agent to an island (the marginal increase in the measure of signed contracts times the revenue per contract) equals the unit cost of sending the agent, \( \kappa = \phi / 2 \):
\[
\hat{n}_j^F = \sigma_j^I z_t x_j.
\]
(5)
Equations (4) and (5) show why we have (strategic) search complementarities in the sense of Bulow et al. (1985): firm \( I \)'s search effort is (weakly) increasing in firm \( F \)'s number of buying agents (equation 4) and firm \( F \)'s number of buying agents is an affine function of firm \( I \)'s search effort (equation 5). An increase in search effort from firm \( I \) on island \( j \) increases the profits for firm \( F \) and, thus, attracts a larger measure of buying agents on the island, raising the profits for firm \( I \) and further stimulating search effort.
Directed search is at the core of this result: firm \( F \)'s decision depends on firm \( I \) in island \( j \)'s search effort because firm \( F \) can direct its buying agents to island \( j \). With random search, an increment in the search effort of firm \( I \) in island \( j \) would only affect firm \( F \)'s decision by changing the revenue of an additional contract in island \( j \) times the probability that the additional buying agent would arrive at the island. When \( J \) is large, the effect would be negligible.
A (within period and island) pure strategy Nash equilibrium is a tuple \( \{ \sigma_j^I, \hat{n}_j^F \} \) that is a fixed point of (4) and (5). The system has two Nash equilibria in pure strategies. One Nash equilibrium, \( \{ \sigma_j^I, \hat{n}_j^F \} = \{0, 0\} \), is not very interesting and we will ignore it. Also, at the cost of some extra notation, we could assume that a minimum number of matches occur even when \( \sigma_j^I = 0 \) and this equilibrium would disappear.
The other equilibrium is \( \{ \sigma_j^I, \hat{n}_j^F \} = \{ \phi z_t^2 x_j^2, \phi z_t^3 x_j^3 \} \). Then, equation (1) implies that the output in island \( j \) is \( 2\phi^3 z_t^6 x_j^6 \), with the firm \( I \)'s search cost being \( \frac{1}{3} (\sigma_j^I)^3 = \frac{1}{3} \phi^3 z_t^6 x_j^6 \) and the firm \( F \)'s search cost \( n_j^F \kappa = \frac{1}{2} \phi^3 z_t^6 x_j^6 \). Thus, consumption, \( c_j \), after the search costs, is \( \frac{7}{6} \phi^3 z_t^6 x_j^6 \).
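As a sanity check (ours, not part of the paper), the closed-form equilibrium can be verified numerically by iterating the two best responses, (4) and (5), to their fixed point; the parameter values below are illustrative:

```python
import math

# Illustrative parameters, not a calibration from the paper
phi, z, x = 0.6, 1.1, 0.9

# Iterate the best responses (4) and (5) to a fixed point
sigma, n_hat = 1.0, 1.0
for _ in range(200):
    sigma = math.sqrt(phi * n_hat * z * x)   # equation (4)
    n_hat = sigma * z * x                    # equation (5)

# Closed-form Nash equilibrium
assert abs(sigma - phi * (z * x) ** 2) < 1e-9
assert abs(n_hat - phi * (z * x) ** 3) < 1e-9

# Output, search costs, and consumption in the island
y = 2 * phi * sigma * n_hat * z * x          # equation (1), with n_hat = (n^F)^(1/2)
cost_I = sigma ** 3 / 3                      # firm I's search cost
cost_F = (phi / 2) * n_hat ** 2              # kappa * n^F with kappa = phi/2
c = y - cost_I - cost_F
assert abs(c - (7 / 6) * phi ** 3 * (z * x) ** 6) < 1e-9
```

The final assertion confirms the $\frac{7}{6} \phi^3 z_t^6 x_j^6$ consumption expression.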
By summing over the islands, we get the aggregate output $y_t$:
$$y_t = 2\phi^3 z_t^6 \sum_{j=1}^{J} x_j^6,$$
(6)
and aggregate consumption $c_t = \frac{7}{6}\phi^3 z_t^6 \sum_{j=1}^{J} x_j^6$.
Equation (6) reveals how a proportional difference $\Delta$ in productivity (either at the island or the aggregate level) leads to a $\Delta^6$ proportional difference in output. The degree of amplification, 6, is determined by the curvature of the search cost function (equation 2). We can increase or decrease the amplification effect by adjusting that curvature.
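To make the dependence on curvature explicit, consider a sketch under our own generalization (not in the paper's text): with search cost $(\sigma_j^I)^p/p$ for a generic $p > 2$, repeating the steps behind equations (4) and (5) yields
\[
\sigma_j^I = \phi^{\frac{1}{p-2}} (z_t x_j)^{\frac{2}{p-2}}, \qquad \hat{n}_j^F = \phi^{\frac{1}{p-2}} (z_t x_j)^{\frac{p}{p-2}}, \qquad y_j = 2 \phi^{\frac{p}{p-2}} (z_t x_j)^{\frac{2p}{p-2}},
\]
so the amplification exponent is $2p/(p-2)$: it equals 6 at $p = 3$, grows without bound as $p \to 2$ from above, and falls toward 2 as $p \to \infty$.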
To illustrate these derivations, we fix the number of islands $J$ to 3 for the rest of this section. We set $\phi = 0.5^{1/3}$, which implies that, when $z_t x_j = 1$, the matching probability for firm $I$ is 0.5. For the moment, $z_t = 1$. With this choice of parameter values, output in island $j$ is $x_j^6$. Just for simplicity, we assume that productivity across islands is $x_1 = 0.95$, $x_2 = 1$, and $x_3 = 1.05$.

Figure 2 plots the best response function of firm $I$ in each island (continuous blue line) and the optimality condition of firm $F$ regarding the number of buying agents sent to the island (discontinuous red line). In the left panel, we plot the functions for island 1; in the center panel, we plot the functions for island 2; and in the right panel, we plot the functions for island 3. The circle markers plot the Nash equilibria, $\{\sigma_j^I, \hat{n}_j^F\}$, for each island.
As implied by equations (4) and (5), a higher productivity triggers strong strategic complementarities and a “Matthew Effect” of degree 6. While island 3 is only 10.5% more productive than island 1, it exerts 22% more search effort and attracts 35% more visits from firm $F$ than
island 1, which generates an output 82% larger. Specifically, \((\sigma_1^I, \hat{n}_1^F, y_1) = (0.72, 0.68, 0.74)\), in comparison with \((\sigma_3^I, \hat{n}_3^F, y_3) = (0.88, 0.92, 1.34)\).
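The figures quoted above can be reproduced directly from the closed-form equilibrium (a quick numerical check we add here; it is not the paper's code):

```python
# Reproduce the equilibrium values quoted for islands 1 and 3
phi = 0.5 ** (1 / 3)   # so that output in island j is x_j**6 when z = 1
z = 1.0

def equilibrium(x):
    sigma = phi * (z * x) ** 2       # search effort at the fixed point of (4)-(5)
    n_hat = phi * (z * x) ** 3       # square root of the measure of buying agents
    y = 2 * phi ** 3 * (z * x) ** 6  # island output, equation (1)
    return sigma, n_hat, y

s1, n1, y1 = equilibrium(0.95)
s3, n3, y3 = equilibrium(1.05)
print(round(s1, 2), round(n1, 2), round(y1, 2))  # 0.72 0.68 0.74
print(round(s3, 2), round(n3, 2), round(y3, 2))  # 0.88 0.92 1.34
```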
A similar amplification phenomenon appears after an aggregate productivity shock. The left panel of Figure 3 plots a one-period aggregate productivity shock that decreases \(z_t\) from its original value of 1 to 0.95 in the second period and fully recovers in the third period. The right panel of Figure 3 plots the impulse-response function (IRF) of output to the shock in the left panel in each of our three islands. Again, we can see the amplification: a reduction of 5% of aggregate productivity results in a 26% fall in output. Given the strong parametric assumptions we have made to get closed-form solutions, the reduction of output is uniform across islands. This uniformity is easy to break by introducing, for instance, fixed costs.
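The 26% figure follows mechanically from the degree-6 amplification: a one-period 5% drop in $z_t$ scales every island's output by $0.95^6$ (our arithmetic check):

```python
# Output falls by the factor 0.95**6 after a 5% aggregate productivity drop
drop = 1 - 0.95 ** 6
print(round(100 * drop))  # 26 (percent fall in output)
```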

Figures 2 and 3 take the distribution of island size as exogenous. Next, we endogenize market concentration by allowing for entry and exit of product lines and show that the “Matthew Effect” becomes even more potent.
2.3 Endogenous market concentration
Now, we enrich our simple model by introducing entry and exit of product lines for intermediate-goods firms \(I\). The entry and exit margin will deliver three new results: i) the “Matthew effect” becomes even more prominent than before; ii) market concentration will depend on the cost of signing a vendor contract; and iii) aggregate productivity shocks change the distribution of vendor contracts on each island (i.e., the size distribution of firms $I$) and make the effects of short-lived aggregate shocks highly persistent.
We assume that unmatched product lines of the firm in sector $I$ in each island $j$ become obsolete and exit the economy with probability $\chi$. Conversely, new product lines are created at the constant rate $n$ in each period $t$. This assumption can be micro-founded with a fixed operating cost plus a cash-on-hand constraint: in the absence of a positive cash flow, the product line is forced to close. To simplify, we assume that firms decide on search effort without accounting for the fact that forgoing a match may make them obsolete in the next period (we will remove this simplification in the extended model of Section 3). For simplicity, the entry rate is exogenous. Our results hold, with heavier notation, if entry is endogenous.
The measure of product lines in each island $j$ follows the law of motion:
$$n_{j,t+1}^I = n_{j,t}^I - \chi \cdot \left[1 - \pi^I \left(\sigma_{j,t}^I, \hat{n}_{j,t}^F\right)\right] n_{j,t}^I + \underbrace{n}_{\text{Entry}}$$
(7)
where $\chi \cdot \left[1 - \pi^I \left(\sigma_{j,t}^I, \hat{n}_{j,t}^F\right)\right]$ is the fraction of unmatched product lines that exit island $j$, and $n$ is the measure of newly entering product lines. The measure $n_{j,t+1}^I$ increases in the matching probability $\pi^I \left(\sigma_{j,t}^I, \hat{n}_{j,t}^F\right)$. Thus, the exit rate for product lines is lower in an island with a higher probability of establishing a vendor contract with firm $F$, leading to a subsequently higher measure of active product lines in the island. Equation (7) implies that the steady-state measure of product lines is:
$$n_j^I = \frac{n}{\chi \cdot \left(1 - \pi_j^I\right)}$$
(8)
We set $\chi = 0.282$ to generate a steady-state measure of product lines in island 3 equal to 1, consistent with our previous subsection (the steady-state measures of product lines in islands 1 and 2 are then 0.58 and 0.72, respectively). Figure 4 shows that the steady-state output shares in islands 1, 2, and 3 are equal to 0.17, 0.29, and 0.54, respectively. While island 3 is still only 10.5% more productive than island 1 (as in the case without entry-exit), island 3’s output is now 209% larger than island 1’s output, instead of 82% as without entry-exit. Equation (8) tells us why. Due to its higher productivity, island 3 searches more actively, attracts more buying agents, and accumulates more product lines. As $\pi_j^I$ gets close to one, this mechanism becomes arbitrarily large. That is, entry-exit generates an even stronger “Matthew effect.”
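A minimal sketch of the entry-exit dynamics: iterating the law of motion (7) under a fixed matching probability confirms convergence to the steady state in (8). The values of $\pi$ and the entry measure below are illustrative, not the paper's calibration:

```python
# Iterate the law of motion (7) for the measure of product lines, holding
# the matching probability pi fixed (illustrative values, not the paper's)
chi, pi, n_entry = 0.282, 0.5, 0.1

n_I = 1.0
for _ in range(500):
    n_I = n_I - chi * (1 - pi) * n_I + n_entry   # equation (7)

steady_state = n_entry / (chi * (1 - pi))        # equation (8)
assert abs(n_I - steady_state) < 1e-9
print(round(steady_state, 3))  # 0.709
```

A higher $\pi$ shrinks the exit term and raises the steady-state measure, which is the channel behind island 3's accumulation of product lines.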
But market concentration also depends on the cost of signing a vendor contract. For example, imagine that due to the enhancements in search technology (e.g., better logistics software), it becomes cheaper for firm $F$ to send buying agents to each island. Technically, we let the unit cost of visiting each island, $\kappa$, decrease at a constant one-percent rate per period (i.e., $\kappa_t = 0.99^{t-1} \cdot \phi/2$).
Figure 5 plots the unit cost of visiting each island (left panel), the measure of product lines of firm $I$ (central panel), and the final output share (right panel) for each island. The decline in the unit search cost attracts more buying agents from firm $F$ to all islands and, thus, increases the probability of forming a vendor relation and the number of active product lines ($n^I_{j,t}$, middle panel). While all three islands have more active product lines, search complementarities make the increase in $n^I_{j,t}$ proportional to each island’s productivity. Therefore, island 3 benefits the most from the decline in $\kappa$ and the output shares of islands 1 and 2 fall over time. In comparison, in the model without entry and exit, the output in all three islands grows at the same rate, and market concentration remains unchanged. That is, we need both search complementarities \textit{and} entry-exit to transform reductions in search costs into changes in market concentration.
Our result is consistent with the finding in Aghion et al. (2019), who show that the increasing share of output for high productivity firms is mostly accounted for by a decreasing cost of expanding new businesses. Consider the following example. Historically, each Whole Foods store sourced its products with independent local suppliers (or “local foragers”). Following the Amazon-Whole Foods merger, Amazon took advantage of its leadership in logistics software to revamp the existing Whole Foods vendor contract arrangements and started prioritizing contracts with national, higher-productivity suppliers at the expense of local foragers.

Figure 6 shows the IRFs of the measure $n^I_{j,t}$ and output $y_{j,t}$ in the three islands (right panel) to a one-period decrease in aggregate productivity ($z_t$) from 1 to 0.95 (left panel). Output (aggregate and in each island) falls by 26%, as in the case without entry and exit: both versions of the model behave in the same way at impact. The difference with respect to Figure 3 is that now, through entry and exit, we have i) persistence of the output fall (even if the productivity shock only lasts for one period) and ii) persistence that is asymmetric across islands. In period 10, output is still 0.55% below its initial level in island 1, 0.95% in island 2, and 1.72% in island 3. In this version of our simple model, negative productivity shocks lower firm concentration.
2.4 Monopsony power in the labor market
Our final step in this section is to show how search complementarities interact with monopsony power in the labor market.\footnote{Berger et al. (2019), Hershbein et al. (2020), and Manning (2020), among others, have shown evidence regarding the effect of monopsony power in the labor market and market concentration.} In particular, we will show five additional results: i) monopsony power lowers wages ceteris paribus; ii) wages grow with the productivity of the firm; iii) monopsony power reduces the marginal effect of the firm’s productivity on wages; iv) monopsony power strengthens the “Matthew effect” of productivity differences even further and increases wage inequality; and v) reductions in the cost of signing a vendor contract lower the labor income share but redistribute labor toward higher-productivity jobs.
Before we can discuss the role of monopsony power in the labor market, we need to specify labor supply and demand. To keep the model as transparent as possible, we assume that, right after the vendor-relation formation, a measure $u_t$ of workers from the representative household are randomly matched to active product lines. The labor match lasts for one period and separates at the end of it. The measure of labor market meetings is determined by the Leontief function:
$$M_t = \min \left( u_t, \sum_j \pi^I_{j,t} n^I_{j,t} \right),$$ \hspace{1cm} (9)
where $\sum_j \pi^I_{j,t} n^I_{j,t}$ is the total measure of active product lines.
For simplicity, we assume that $u_t = \sum_j \pi^I_{j,t} n^I_{j,t}$, so the meeting probability is equal to one for both sides of the match.\footnote{This assumption eliminates the need to keep track of the percentage of workers or product lines not matched. We can justify the number of workers being a function of the active product lines with the representative household’s preferences without wealth effects.} A worker’s probability of meeting with an active product line in island $j$ is $s_{j,t} = \pi^I_{j,t} n^I_{j,t} / \sum_k \pi^I_{k,t} n^I_{k,t}$, the share of active product lines in island $j$.
The wage in island $j$, $w_{j,t}$, is determined by Nash bargaining between the worker and an active product line. If the worker rejects the wage offer, she becomes unemployed in this period and the active product line receives a zero profit.
To introduce monopsony power in the labor market, we assume that active product lines in the same island negotiate wages collectively: if a worker rejects an offer from an active product line in island $j$, all other active product lines in island $j$ would “punish” the worker by refusing to match with her with probability $\lambda$ in the next period.\footnote{For simplicity, we assume that firms have exogenous commitment to this negotiation rule.} Then, if a worker declines a
wage offer from an active product line, she forgoes \( w_{j,t} + \lambda s_{j,t+1} \cdot w_{j,t+1} \): the lost wage today plus the expected wage loss tomorrow, which is proportional to the island’s labor market share, \( s_{j,t+1} \). Firms will optimally take advantage of this forgone income to increase their profits.
To see this, notice that the total surplus of a labor market match is \( LTS_{j,t} = (2z_t x_j - w_{j,t}) + (w_{j,t} + \lambda s_{j,t+1} w_{j,t+1}) \), where \( (2z_t x_j - w_{j,t}) \) and \( (w_{j,t} + \lambda s_{j,t+1} \cdot w_{j,t+1}) \) are the surplus of the active product line and the worker’s payoff from the labor market match, respectively (here we implicitly assume linear preferences on income for the worker). Nash bargaining implies that \( 2z_t x_j - w_{j,t} = \tau \cdot LTS_{j,t} \) and \( w_{j,t} + \lambda s_{j,t+1} \cdot w_{j,t+1} = (1 - \tau) \cdot LTS_{j,t} \), where \( \tau \) and \( (1 - \tau) \) are the bargaining shares of the active product line and the worker, respectively.
Suppose, first, that labor market punishment is forbidden, i.e., \( \lambda = 0 \). In this case, the wage, \( w^*_j = (1 - \tau) \cdot 2z_t x_j \), is a fraction \( 1 - \tau \) of output. The derivative of the wage with respect to the island productivity \( x_j \) is \( (1 - \tau) \cdot 2z_t \).
When \( \lambda > 0 \), we have instead:
\[
w_j = \frac{(1 - \tau) 2z_t x_j}{1 + \tau \lambda s_j} = \frac{1}{1 + \tau \lambda s_j} w^*_j < w^*_j,
\]
(10)
where we can see the monopsony wedge \( \frac{1}{1 + \tau \lambda s_j} < 1 \).
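For completeness, this expression follows from the worker's bargaining condition in a steady state with \( w_{j,t+1} = w_{j,t} = w_j \) and \( s_{j,t+1} = s_j \) (our sketch of the algebra):
\begin{align*}
w_j + \lambda s_j w_j &= (1 - \tau) \, LTS_j = (1 - \tau) \left( 2 z_t x_j + \lambda s_j w_j \right) \\
\Rightarrow \quad w_j &= (1 - \tau) \, 2 z_t x_j - \tau \lambda s_j w_j
\quad \Rightarrow \quad w_j = \frac{(1 - \tau) \, 2 z_t x_j}{1 + \tau \lambda s_j}.
\end{align*}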
From this expression, we have:
\[
\frac{dw_j}{dx_j} = \frac{(1 - \tau) 2z_t}{1 + \tau \lambda s_j} - \frac{(1 - \tau) 2z_t x_j \tau \lambda}{(1 + \tau \lambda s_j)^2} \frac{\partial s_j}{\partial x_j} < (1 - \tau) \cdot 2z_t,
\]
(11)
since higher-productivity islands have a larger share of active product lines, everything else equal \( \left( \frac{\partial s_j}{\partial x_j} > 0 \right) \).
Equations (10) and (11) teach us three lessons. First, the monopsony wedge lowers island $j$'s wage with respect to the case without monopsony power. Second, \( w_j \) increases with the island’s productivity, but decreases with the island’s share of active product lines. The latter change is a general equilibrium effect: the island’s share depends on its productivity but also on the productivity of all the other firms in the economy. That is, if firms in other islands are more productive, they will decrease the number of workers in the current island and, therefore, suppress its search effort and wages. Third, wages grow more slowly than productivity in the firms’ cross-section.
Figure 7 illustrates these three lessons by plotting the wage distribution in the steady state of the economy ($z_t = z = 1$) with no monopsony power ($\lambda = 0$) and with monopsony power ($\lambda = 0.1$). Since we calibrate $\tau = 0.5$, we have $(1 - \tau)2z_t = 1$. To make our exercise comparable with the previous subsections, we reset $x_1 = 1.9$, $x_2 = 2$, and $x_3 = 2.1$. Then, when labor market punishment is forbidden, firms’ profits and the Nash equilibrium are the same as in Subsection 2.3.

**Figure 7:** Wage with different $\lambda$
Figure 7 shows how, when $\lambda = 0$, wages grow one-to-one with productivity: $w_1 = 1.9$, $w_2 = 2$, and $w_3 = 2.1$. However, under monopsony power, wages i) are lower and ii) grow more slowly than productivity: $w_1 = 1.89$, $w_2 = 1.98$, and $w_3 = 2.04$. The wedge between wages and productivity is increasing in the island’s output share.
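These wage levels can be backed out from the wedge in equation (10). The island shares under $\lambda = 0.1$ are not all reported at this point, so the vector below is our assumption (roughly the monopsony steady-state shares, summing to one):

```python
tau, lam, z = 0.5, 0.1, 1.0
x = [1.9, 2.0, 2.1]          # island productivities
s = [0.14, 0.24, 0.62]       # assumed labor market shares under monopsony

# Wage per island from the monopsony wedge, equation (10)
wages = [(1 - tau) * 2 * z * xi / (1 + tau * lam * si) for xi, si in zip(x, s)]
print([round(w, 2) for w in wages])  # [1.89, 1.98, 2.04]
```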
We move now to analyze the effects of monopsony power on market concentration. As before, we assume that firm $F$ and firm $I$ evenly split their joint surplus $(2z_t x_j - w_{j,t})$. Equations (4) and (5) become:
$$\sigma^I_{j,t} = \sqrt{\phi \hat{n}^F_{j,t} \left(z_t x_j - w_{j,t}/2\right)}$$
(12)
and
$$\hat{n}^F_{j,t} = \sigma^I_{j,t} \left(z_t x_j - w_{j,t}/2\right).$$
(13)
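As a small consistency check (ours, not the paper's), the fixed point of these best responses mirrors that of (4) and (5) with $z_t x_j$ replaced by the per-side surplus $z_t x_j - w_{j,t}/2$; the values below are illustrative:

```python
import math

# Fixed point of the monopsony best responses, mirroring (4)-(5) with
# z*x replaced by the per-side surplus z*x - w/2 (illustrative values)
phi, z, x, w = 0.6, 1.0, 2.0, 1.9

surplus = z * x - w / 2
sigma = phi * surplus ** 2   # closed-form search effort
n_hat = phi * surplus ** 3   # closed-form (n^F)^(1/2)

# Verify both best responses hold at this point
assert abs(sigma - math.sqrt(phi * n_hat * surplus)) < 1e-12
assert abs(n_hat - sigma * surplus) < 1e-12
```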
We just saw that, with monopsony power, firms pay a lower wage and achieve a higher profit. This higher profit gives firms a stronger incentive to search. Figure 8 documents this result by plotting the steady-state output share for each island. In the left panel, we plot the distribution of output shares when $\lambda = 0$, which is the same as in Figure 4. In the right panel, we plot the distribution of output shares when $\lambda = 0.1$. The incremental incentive to search is highest for island 3, which has the greatest effective labor market power due to its size, and lowest for island 1. As a result, labor market power makes the market structure more concentrated. Island 3’s share of output grows from 0.54 to 0.62 and island 1’s share falls from 0.17 to 0.14.
This additional strengthening of the “Matthew effect” stands in contrast with the results from a classic model of monopsony in the labor market. In such a classic model, monopsony leads to a smaller firm, since the monopsonist wants to equate marginal revenue product of labor to the marginal cost of labor by reducing labor hired. In our model, the monopsonist wants to hire more workers, because larger size allows it to keep more of the total surplus.
Another way to think about this mechanism is that a higher $\lambda$ leads to a lower labor income share: firms that keep a larger share of the labor surplus grow more in size. When $\lambda = 0$, the labor income share is 0.5 (the Nash bargaining parameter). When $\lambda = 0.1$, the labor income share is 0.49. But, although the share of labor income is lower, total labor income is 33% higher. The labor income share falls because, when $\lambda = 0.1$, we are providing the incentives for higher-productivity firms to scale up and reallocate more workers from the low-wage jobs in islands 1 and 2 to the highest-wage jobs in island 3.
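The 0.49 figure can be approximated from equation (10) with an assumed vector of island shares (our assumption, treated here as shares of labor matches, which is only an approximation to the model's full accounting):

```python
tau, lam, z = 0.5, 0.1, 1.0
x = [1.9, 2.0, 2.1]          # island productivities
s = [0.14, 0.24, 0.62]       # assumed shares of labor matches under monopsony

# Wages from the monopsony wedge, equation (10)
wages = [(1 - tau) * 2 * z * xi / (1 + tau * lam * si) for xi, si in zip(x, s)]

labor_income = sum(si * wi for si, wi in zip(s, wages))
output = sum(si * 2 * z * xi for si, xi in zip(s, x))
print(round(labor_income / output, 2))  # 0.49
```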
We should be careful mapping our results to findings from cross-sectional regression of wages on labor market power such as those reported in Marinescu et al. (2020). In our model, all firms have the same monopsony power. Thus, our model’s predictions are about two economies with different monopsony power in the labor market (e.g., the U.S. vs. France), not about two
firms within the same economy. To think about the latter case, we would need to consider some dimension along which firms diverge, possibly by producing a differentiated good.
Figure 9 displays the aggregate labor income share for different values of $\kappa$ and $\lambda$. As we move over the $\lambda$-axis, we see the labor share reduction that we described above. But, interestingly, Figure 9 shows that our model has another mechanism to account for the recent reduction of the labor share in the U.S. economy: a fall in $\kappa$. Since a fall in the cost of signing vendor-contract relations leads to higher market concentration, it will also lead to higher market power for firms. As we move over the $\kappa$-axis, the labor income share falls but output and productivity increase.
Thus, our model predicts that changes such as better software and other technologies to manage vendors and suppliers deliver i) more market concentration and ii) lower labor income share but also iii) higher average wages and iv) higher productivity. More pointedly, our model also suggests that the differences observed between the U.S. and Europe over the last decades in terms of market concentration, labor income shares, and wage and productivity growth\footnote{See, for some empirical documentation of these differences, \cite{cette2019}, and \cite{covarrubias2019}.} may be due to differences in the speed of adopting information technologies that allow for a cheaper scale-up of businesses on each side of the Atlantic. Also, European labor market regulations might limit the extent to which European firms can exert their monopsony power in the labor market, limiting their ability to scale up production.
Figure 10, which displays the wage distribution for different values of $\kappa$ and $\lambda$, documents these effects. In each plot, the vertical top-circled line presents workers’ density for each wage, and the vertical discontinuous line, the average wage. Either a higher $\lambda$ or a lower $\kappa$ makes the
market structure more concentrated and, therefore, allocate more workers to more productive firms (i.e., an increase in the height of the vertical line at the right). However, $\lambda$ and $\kappa$ have different effects on the level of wages. A higher $\lambda$ generally decreases the wage for every worker (i.e., shifts all the vertical lines to the left). In contrast, a lower $\kappa$ reduces the highest wage but increases the medium and the lowest wages. On the other hand, a higher $\lambda$ also increases wage inequality: more workers move to higher-wage jobs.

We finish this subsection with Figure 11, the analog of Figure 6 but with monopsony power. Aggregate output falls 27% at impact and, as before, the IRFs show persistence. As in Figure 6, island 3 experiences the largest output fall over time. Notice, however, that our simple model ignores an important factor in wage bargaining. The stronger market power of high-productivity firms can increase the outside option of low-productivity firms’ employees by making it easier to find high-paying jobs, which lowers low-productivity firms’ profit margins. In the extended model, we will see how this missing factor can make low-productivity firms more responsive to productivity shocks.
2.5 Taking stock
We can now summarize the eight main takeaways from our simple model:
1. Search complementarities, under directed search, result in a “Matthew effect” that transforms small differences in productivity into large output differences. This effect appears in the cross-section (among different firms) and the time-series (for temporary aggregate productivity shocks).
2. Entry and exit make the “Matthew effect” even more prominent.
3. Entry and exit make the degree of market concentration depend on the cost of signing a vendor contract. This observation gives us a theory of why market concentration has been growing in the U.S. economy over time: the fall in search costs related to business relations.
4. Monopsony power in the labor market strengthens the “Matthew effect” across firms further. In our model, search complementarities imply that firms want to get bigger and hire more workers to keep more of the surplus. This result stands in stark contrast with the classic monopsony model, in which firms want to reduce the amount of labor they hire, which leads to relatively smaller firms.
5. Monopsony power lowers wages for a given level of productivity, and the percentage reduction grows with firm size.
6. Higher monopsony power increases wage inequality by redistributing workers toward firms with larger market shares, which are also higher-productivity firms.
7. Reductions in the cost of signing a vendor contract lower the labor income share but shift the distribution of workers toward high-wage jobs.
8. With entry and exit and monopsony power, aggregate productivity shocks change market concentration, generating a long persistence of the effects of short-lived aggregate shocks.
Let us analyze how these takeaways appear in our extended, quantitative model.
3 Extended model
In this section, we extend our simple model along several important dimensions. First, we broaden the analysis to a general equilibrium framework by including utility-maximizing households that choose consumption and labor supply and allowing for a richer heterogeneity in intermediate goods producers. Second, we introduce persistence in vendor relations. Third, we flesh out the monopsony power in the labor market to be consistent with the granular search theory in Jarosch et al. (2019).
3.1 The representative household
The economy is populated by a continuum of households of size one. Each household has preferences represented by:
$$\mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \left[ \log (C_t) + \xi (1 - h_t) \right],$$ \hspace{1cm} (14)
where $\mathbb{E}_0$ is the conditional expectations operator at time 0, $\beta \in (0, 1)$ is the discount factor, $\xi \geq 0$ is the marginal disutility of labor, $C_t$ is consumption of final goods, and $h_t$ is total hours worked in the household (defined below). The time constraint is normalized to one. Total hours worked equal $h_t = \sum_j n_{j,t} h_{j,t}$, where $n_{j,t}$ represents the fraction of household members working in a $j$-type product line. The household’s budget constraint is $C_t = \sum_j n_{j,t} w_{j,t} h_{j,t} + \Pi_t$, where $w_{j,t}$ and $h_{j,t}$ are the wage rate and the labor supply in a $j$-type product line, respectively, and $\Pi_t$ is the per-capita profit from the ownership of firms. Wages differ across product lines because of search and matching frictions.
3.2 The labor market and the goods market: an overview
There are $j = 1, 2, ..., J$ types of firms in the intermediate-goods sector $I$, and each $j$-type firm manufactures identical intermediate goods using a technology with different productivity. We denote the idiosyncratic productivity for firm $I$ of type $j$ as $x_j$. Without loss of generality, we assume strictly increasing idiosyncratic productivity in the index of firm type (i.e., $x_1 < x_2 < ... < x_J$). Each firm $I$ manages a positive measure of product lines, which we interpret as firm size. The distribution of firm size is endogenously determined by search-matching and entry-exit processes, as we describe below. A law of large numbers holds in this economy, equating individual probabilities with realized shares.
To manufacture goods, a product line must first form a vendor relationship with a final goods producer (firm $F$) and match with a worker. Firms in the final-goods sector $F$ have the same productivity. Each firm sends buying agents to form vendor relationships with product lines that supply intermediate goods to them. Search is directed, and each firm in sector $F$ optimally chooses the $j$-type firm in sector $I$ to visit. Since $J$ types of firm $I$ exist, there are $J$ segmented inter-firm sub-markets, indexed by $j$. Sending a buying agent to submarket $j$ incurs the unit cost $\kappa$. Each firm $I$ in submarket $j$ chooses the costly search effort, denoted by $\sigma^I_{j,t}$, to maximize expected profits. Variable search effort and directed search generate strategic complementarities since the degree of optimal search effort exerted by firm $I$ is increasing in the measure of buying agents sent by firm $F$. Similarly, the optimal measure of buying agents sent by firm $F$ will also be increasing in the search effort exerted by firm $I$.
After a vendor relation is formed, each vendor relation without a worker posts one vacancy (at no cost) in the labor market and stays idle. At the end of each period, vendor relations and labor market matches separate exogenously with probability $\hat{\delta}$ and $\delta$, respectively, and in either case, workers become unemployed.
Figure 12 summarizes the timeline for firm $I$. At the beginning of each period, product lines search for buying agents to establish a vendor relation. Next, vendor relations search for workers. If successfully matched with a worker, the vendor relation enters the production stage; otherwise it stays idle. Vendor relations and labor market matches separate randomly at the end of each period.
3.3 The labor market: search frictions and monopsony power
3.3.1 Matching function
We assume a frictional labor market. We depart from the DMP framework by allowing multiple workers to apply randomly to a single vacancy. This simple variation on the standard model gives firms monopsony power in the labor market because they can threaten to preclude workers who decline a wage offer from future job offers. Therefore, the bargained wage may be below the marginal product of labor.
The matching technology is formulated by the process of randomly placing balls in urns as in Butters (1977). Product lines play the role of urns and workers the role of balls. An urn becomes “productive” when it has a ball in it. Even with exactly the same number of urns and balls, a random placing of the balls in the urns will not match all the pairs exactly because of a coordination failure by those placing the balls in the urns. Some urns will end up with more than one ball and some with none. In the context of the labor market, if only one worker could occupy each job, an uncoordinated application process by workers will lead to overcrowding in some jobs and to no applications in others. As illustrated by Petrongolo and Pissarides (2001), the imperfection that leads to unemployment in this environment is the lack of information about other workers’ actions.
In the simplest version of this process, we assume that workers and vendor relations are discrete. There are $u_t$ unemployed workers who know the location of $v_t$ unmatched product lines. If a product line receives one or more job applications, it selects one applicant and forms a match (the selection criterion is specified below), while the other applicants remain unemployed in the current period $t$.
Since each worker applies to a given product line with probability $1/v_t$, and there are $u_t$ applicants, with probability $(1 - 1/v_t)^{u_t}$ a given product line will receive no applications. Thus, the number of labor market matches formed in each period is:
$$E_t = v_t \left[1 - (1 - 1/v_t)^{u_t}\right]$$
We let the measure of each product line and worker be infinitely small, such that $v_t$ and $u_t$ tend to infinity, in which case we have that $\lim_{v_t,u_t\to\infty} E_t = v_t \left(1 - e^{-u_t/v_t}\right)$. Then, the vacancy filling rate is:
$$p^n_t = \frac{E_t}{v_t} = 1 - e^{-u_t/v_t},$$
and the job finding rate is:
$$p^u_t = \frac{E_t}{u_t} = v_t/u_t \cdot \left(1 - e^{-u_t/v_t}\right).$$
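The urn-ball probabilities above are easy to verify by simulation. The following sketch, with hypothetical market sizes chosen so that $u/v = 1.3$, compares the finite-economy match count with its large-market limit:

```python
import math
import random

def urn_ball_matches(v, u, seed=0):
    """Simulate u workers each applying uniformly at random to one of v
    vacancies; a vacancy is filled if it receives at least one application."""
    rng = random.Random(seed)
    filled = {rng.randrange(v) for _ in range(u)}
    return len(filled)

# Hypothetical market sizes with u/v = 1.3
v, u = 10_000, 13_000
E_exact = v * (1 - (1 - 1 / v) ** u)       # finite-economy match count
E_limit = v * (1 - math.exp(-u / v))       # large-market limit v(1 - e^{-u/v})

p_fill = E_limit / v                       # vacancy filling rate p^n
p_find = E_limit / u                       # job finding rate p^u

sim_fill = urn_ball_matches(v, u) / v
print(p_fill, p_find, sim_fill)            # simulated fill rate is close to p^n
```

The simulated fill rate converges to $1 - e^{-u/v}$ as the market grows, which is the coordination-failure result discussed in the text.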
To introduce labor market power, we adopt a “granular search” approach proposed by Jarosch et al. (2019), who show that large firms hold a strong bargaining power by threatening workers with future job refusals since workers can hardly avoid large employers and they are likely to re-apply to job-openings from the same firms in the future. We denote the measure of vendor relations managed by the type-$j$ firms that are not matched with a worker as $v_{j,t}$, which we interpret as a proxy for the size of the labor market. Thus, the number of unmatched product lines is $v_t = \sum_{j=1}^{J} v_{j,t}$. We define the relative labor market size, $s_{j,t}$, as the fraction of unfilled product lines owned by type-$j$ firms with $s_{j,t} = v_{j,t}/v_t$ and $\sum_{j=1}^{J} s_{j,t} = 1$, which is endogenously determined. In general, a more productive firm and a firm that searches more actively gain a higher $s_{j,t}$ that generates stronger labor market power.
The dynamics in the model depend on three important probability functions. First, since meetings between workers and job openings are i.i.d. across workers, the conditional probability $p^n_t$ that a worker who meets a product line is not the only applicant to the job opening. Second, the probability $s_{j,t} \cdot p^u_t$ that a worker meets a product line owned by a type-$j$ firm (which possesses the fraction $s_{j,t}$ of unmatched product lines in the economy). Lastly, the probability $s_{j,t} \cdot p^u_t \cdot p^n_t$ that a worker contacts a product line owned by that firm and that the product line has more than one job applicant.
Labor market matches separate exogenously with probability \( \delta \). In addition, if a product line becomes obsolete (with probability \( \hat{\delta} \)), the labor market match terminates.
### 3.3.2 Monopsony power and value functions
The wage is determined by Nash bargaining. The bargaining set is within the output of the product line \( y_{j,t} \) and the disutility of working \( \xi h_{j,t} \). When multiple homogeneous workers apply to a single vacancy, the product line offers a wage contract to one candidate. The product line exerts its monopsony power by threatening to exclude the worker from future hiring if the current offer is rejected. This threat is particularly powerful when the product line belongs to a large firm, since job applicants are likely to re-encounter the same firm in the future with a probability proportional to its relative labor market size (\( s_j \)). Thus, more productive (and therefore larger) firms retain a stronger threatening power.
The firm that operates the product line precludes workers who reject a current job offer from future hiring. Each period, the firm withdraws the punishment with probability \( \tilde{\delta} \), such that the expected duration of the punishment is \( 1/\tilde{\delta} \) periods. To rule out the complicated case of everlasting punishments and the possibility that a worker is punished by multiple firms, we assume that firms withdraw the punishment once the worker is hired by another product line.
We now define the Bellman equations that determine the value of an unemployed worker without punishment \( U_t \), of an unemployed worker punished by type-\( j \) firm \( \tilde{U}_{j,t} \), of an employed worker in a type-\( j \) firm \( W_{j,t} \), of a product line owned by type-\( j \) firm that is matched with a worker \( J_{j,t} \), and of a product line that is not matched with a worker \( X_{j,t} \).
The value of an unemployed worker without punishment is:
\[
U_t = \xi + \mathbb{E}_t \left\{ \beta \left( \frac{C_t}{C_{t+1}} \right) \left[ p^u_t \sum_k s_{k,t} W_{k,t+1} + (1 - p^u_t) U_{t+1} \right] \right\}, \tag{15}
\]
where \( \xi \) is the flow of utility from being unemployed at period \( t \). In the next period \( t + 1 \), the worker finds a job in a \( k \)-type product line with probability \( p^u_t \cdot s_{k,t} \) and becomes employed, or s/he remains unemployed with probability \( 1 - p^u_t \). The continuation value is discounted by the stochastic discount factor, \( \beta (C_t/C_{t+1}) \).
The value of an unemployed worker under punishment by type-$j$ firm is:
\begin{align*}
\widetilde{U}_{j,t} = \widetilde{\delta}\, U_t + (1 - \widetilde{\delta}) \Bigg\{ \xi + \mathbb{E}_t \bigg[ \beta \left( \frac{C_t}{C_{t+1}} \right) \bigg\{ & \; p^u_t \Big[ \sum_{k \neq j} s_{k,t} W_{k,t+1} + s_{j,t} (1 - p^n_t) W_{j,t+1} \Big] \\
& + \left( 1 - p^u_t + p^u_t\, s_{j,t}\, p^n_t \right) \widetilde{U}_{j,t+1} \bigg\} \bigg] \Bigg\}, \tag{16}
\end{align*}
where with probability $\widetilde{\delta}$, the punishment to the worker is forgiven and the value of unemployment becomes $U_t$. Otherwise, the worker remains under punishment. The first two terms in the inner curly bracket show that the worker finds a job in a type-$k$ product line ($k \neq j$) with probability $p^u_t \cdot s_{k,t}$, which brings value $W_{k,t+1}$, while with probability $p^u_t \cdot s_{j,t}$ the worker’s job application reaches a type-$j$ product line, where the worker is hired only if the punishing firm has no other applicants, which occurs with probability $1 - p^n_t$. The last term represents the expected value of remaining unemployed in the next period $t + 1$, either because the worker fails to meet any vacancy (with probability $1 - p^u_t$) or because the worker meets a type-$j$ product line that has alternative applicants (with probability $p^u_t s_{j,t} p^n_t$) and, thus, rejects the worker.
Combining equations (15) and (16), we obtain the loss of value associated with labor market punishment:
\begin{equation}
U_t - \widetilde{U}_{j,t} = \left( 1 - \widetilde{\delta} \right) \mathbb{E}_t \left\{ \beta \left( \frac{C_t}{C_{t+1}} \right) \left[ s_{j,t}\, p^u_t\, p^n_t \left( W_{j,t+1} - U_{t+1} \right) + \left( 1 - p^u_t + s_{j,t}\, p^u_t\, p^n_t \right) \left( U_{t+1} - \widetilde{U}_{j,t+1} \right) \right] \right\}. \tag{17}
\end{equation}
In the deterministic steady state, equation (17) reduces to:
\begin{equation}
U - \widetilde{U}_j = \frac{\left( 1 - \widetilde{\delta} \right) \beta\, s_j\, p^u p^n \left( W_j - U \right)}{1 - \beta \left( 1 - p^u + s_j\, p^u p^n \right) \left( 1 - \widetilde{\delta} \right)}. \tag{18}
\end{equation}
Equation (18) shows that if $\widetilde{\delta} = 1$, there is no labor market punishment and $U = \widetilde{U}_j$. If $\widetilde{\delta} < 1$, however, equation (18) implies that $U > \widetilde{U}_j$; that is, labor market punishment generates a loss to the worker since s/he prefers working to being unemployed (i.e., $W_j > U$). Moreover, equation (18) shows that the loss of value due to labor market punishment strictly increases with the firm’s relative labor market size ($s_j$), and strictly decreases with the probability of forgiving ($\widetilde{\delta}$). When the firm’s labor market size is zero, we have $U = \widetilde{U}_j$.
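These comparative statics are easy to check numerically. The sketch below evaluates the steady-state loss in equation (18); the surplus $W_j - U$ and the rates $p^u$ and $p^n$ are hypothetical placeholders, not model-consistent equilibrium objects:

```python
def punishment_loss(s_j, delta_tilde, beta=0.987, p_u=0.7, p_n=0.54, surplus=1.0):
    """Steady-state loss U - U~_j from equation (18):
    (1-d)*b*s_j*p_u*p_n*(W_j - U) / [1 - b*(1 - p_u + s_j*p_u*p_n)*(1-d)],
    where `surplus` stands in for the employment surplus W_j - U."""
    num = (1 - delta_tilde) * beta * s_j * p_u * p_n * surplus
    den = 1 - beta * (1 - p_u + s_j * p_u * p_n) * (1 - delta_tilde)
    return num / den

# The loss vanishes under full forgiveness or a zero labor market share,
# rises with the vacancy share s_j, and falls with forgiveness.
print(punishment_loss(0.3, 1.0))
print(punishment_loss(0.1, 0.5), punishment_loss(0.4, 0.5))
print(punishment_loss(0.3, 0.3), punishment_loss(0.3, 0.9))
```

Both forces work in the same direction: a larger $s_j$ raises the numerator and shrinks the denominator, so the loss is strictly increasing in the firm’s labor market size.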
The value of an employed worker in a vendor-relation is:
$$W_{j,t} = w_{j,t} + \mathbb{E}_t \left\{ \beta \left( \frac{C_t}{C_{t+1}} \right) \left[ (1 - \delta - \widehat{\delta}) W_{j,t+1} + (\delta + \widehat{\delta}) U_{t+1} \right] \right\}, \tag{19}$$
where the first term on the right-hand side (RHS) of equation (19) is the current period wage $w_{j,t}$, to be determined by Nash bargaining. The job relationship terminates randomly, either because the match separates with probability $\delta$ or because the vendor-relation dissolves with probability $\widehat{\delta}$; in both instances the worker becomes unemployed and gains value $U_{t+1}$. Otherwise, the worker continues the job relation and earns value $W_{j,t+1}$.
Similarly, the value of firms in a vendor-relation with a worker is:
$$J^k_{j,t} = \Pi^k_{j,t} + \mathbb{E}_t \left\{ \beta \left( \frac{C_t}{C_{t+1}} \right) \left[ (1 - \delta - \widehat{\delta}) J^k_{j,t+1} + \delta X^k_{j,t+1} + \widehat{\delta} \widetilde{J}^k_{j,t+1} \right] \right\}, \ k \in \{I, F\}, \tag{20}$$
where the first term on the RHS of equation (20) is current profit, and the second term is the continuation value in the next period $t+1$, in which the job separates with probability $\delta$ and the idle vendor-relation gets $X^k_{j,t+1}$ (defined below), or the vendor-relation dissolves with probability $\widehat{\delta}$ and each firm gets $\widetilde{J}^k_{j,t+1}$.
The value of an idle product line without a worker is:
$$X^k_{j,t} = \mathbb{E}_t \left\{ \beta \left( \frac{C_t}{C_{t+1}} \right) \left[ p^n_t J^k_{j,t+1} + (1 - p^n_t) X^k_{j,t+1} \right] \right\}, \ k \in \{I, F\}. \tag{21}$$
Equation (21) shows that an idle product line produces zero profits in period $t$, but by hiring a worker with probability $p^n_t$ it receives the value $J^k_{j,t+1}$. Otherwise, with probability $(1 - p^n_t)$, the product line remains unmatched and earns $X^k_{j,t+1}$ in the next period $t+1$.
The value of a product line without a vendor-relation is:
$$\widetilde{J}^I_{j,t} = -c(\sigma_{j,t}) + \mathbb{E}_t \left\{ \beta \left( \frac{C_t}{C_{t+1}} \right) \left[ \pi^I_{j,t} X^I_{j,t+1} + (1 - \pi^I_{j,t}) (1 - \chi) \widetilde{J}^I_{j,t+1} \right] \right\}. \tag{22}$$
Equation (22) shows that a product line without a vendor relation exerts search effort in period $t$, and in the next period $t+1$, finds a firm in sector $F$ with probability $\pi^I_{j,t}$, which yields a value $X^I_{j,t+1}$. Otherwise, if it survives obsolescence with probability $(1 - \chi)$, it remains without a vendor relation and yields a value of $\widetilde{J}_{j,t+1}^I$.
Lastly, when a vendor-relation terminates, the buying agent of firm $F$ returns to the central island and receives zero value:
$$\widetilde{J}_{j,t}^F = 0 \quad (23)$$
Firms in the intermediate- and final-goods sectors, $I$ and $F$, split the joint profit from the match by Nash bargaining, which yields:
$$\frac{X_{j,t}^I - \widetilde{J}_{j,t}^I}{\tau} = \frac{X_{j,t}^F - \widetilde{J}_{j,t}^F}{1 - \tau} \quad (24)$$
where $X_{j,t}^k - \widetilde{J}_{j,t}^k$ is the capital gain by signing a vendor-contract. The parameter $\tau$ is the bargaining share of firm $I$.
### 3.3.3 Wage determination
The wage is negotiated between the worker and the vendor-relation by Nash bargaining. The total surplus from forming a match in the labor market ($LTS_{j,t}$) is equal to:
$$LTS_{j,t} = (J_{j,t} - X_{j,t}) + \left(W_{j,t} - \widetilde{U}_{j,t}\right), \quad (25)$$
where $J_{j,t}$ and $X_{j,t}$ are joint-values of vendor-relation with $J_{j,t} = J_{j,t}^I + J_{j,t}^F$, and $X_{j,t} = X_{j,t}^I + X_{j,t}^F$. Equation (25) departs from the standard bargaining protocols because the worker surplus depends on $\widetilde{U}_{j,t}$ rather than $U_t$, and the additional surplus $(U_t - \widetilde{U}_{j,t})$ arises from the firm’s credible threat of future rejection.
Thus, the bargained wage ($w_{j,t}$) satisfies:
$$W_{j,t} - \widetilde{U}_{j,t} = (1 - \widetilde{\tau}) \ LTS_{j,t}, \quad (26)$$
and
$$J_{j,t} - X_{j,t} = \widetilde{\tau} \ LTS_{j,t}, \quad (27)$$
where $\widetilde{\tau}$ is the vendor-relation’s bargaining share.
In the online appendix, we prove the following proposition.
Proposition 1. In the steady state, ceteris paribus, the wage decreases with the firm’s vacancy share \((s_j)\) and increases with the probability of forgiveness \((\tilde{\delta})\).
Proposition 1 shows that, conditional on a level of productivity, greater market power (either because a firm represents a larger share in the labor market or because it has a lower probability of forgiveness) implies a lower wage.
### 3.4 The goods market: vendor contract formation
As in the simple model, the matching process in each submarket is governed by a technology with variable search intensity. Following Burdett and Mortensen (1980), the number of newly formed vendor relations in market \(j\) is:
\[
M\left(\tilde{n}^F_{j,t}, \tilde{n}^I_{j,t}, \sigma^I_{j,t}\right) = \psi \sigma^I_{j,t} H\left(\tilde{n}^F_{j,t}, \tilde{n}^I_{j,t}\right),
\]
where \(\sigma^I_{j,t}\) is the firm \(I\)’s variable search effort, \(\tilde{n}^F_{j,t}\) is the measure of firm \(F\)’s buying agents, and \(\tilde{n}^I_{j,t}\) is the measure of product lines owned by type-\(j\) firm \(I\). The parameter \(\psi\) controls the efficiency in matching. The function \(H(\cdot)\) has constant returns to scale and it is strictly increasing in both arguments.
Each submarket \(j\) has a tightness ratio \(\theta_{j,t}\), defined as \(\theta_{j,t} = \tilde{n}^F_{j,t}/\tilde{n}^I_{j,t}\). The probability that a product line forms a vendor relation with a firm in sector \(F\) is:
\[
\pi^I_{j,t} = \frac{M\left(\tilde{n}^F_{j,t}, \tilde{n}^I_{j,t}, \sigma^I_{j,t}\right)}{\tilde{n}^I_{j,t}} = \psi \sigma^I_{j,t} \mu\left(\theta_{j,t}\right),
\]
and the probability that a firm in sector \(F\) forms a vendor relation with firm-type \(j\) in sector \(I\) is:
\[
\pi^F_{j,t} = \frac{M\left(\tilde{n}^F_{j,t}, \tilde{n}^I_{j,t}, \sigma^I_{j,t}\right)}{\tilde{n}^F_{j,t}} = \psi \sigma^I_{j,t} q\left(\theta_{j,t}\right),
\]
where \(\mu\left(\theta_{j,t}\right) = H\left(\theta_{j,t}, 1\right)\) and \(q\left(\theta_{j,t}\right) = H\left(1, 1/\theta_{j,t}\right)\). Then, \(\mu'\left(\theta_{j,t}\right) > 0\) and \(q'\left(\theta_{j,t}\right) < 0\).
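As a concrete instance, assume the Cobb-Douglas aggregator $H(a, b) = a^{\alpha} b^{1-\alpha}$ used in the calibration below; the values of $\psi$, $\alpha$, $\theta$, and $\sigma$ here are illustrative. Constant returns to scale deliver the accounting identity $\pi^I_{j,t} = \theta_{j,t}\, \pi^F_{j,t}$:

```python
alpha, psi = 0.5, 0.54          # illustrative matching elasticity and efficiency

def H(a, b):
    """Cobb-Douglas matching aggregator with constant returns to scale."""
    return a ** alpha * b ** (1 - alpha)

def mu(theta):
    return H(theta, 1.0)        # meeting rate per product line (up to psi*sigma)

def q(theta):
    return H(1.0, 1.0 / theta)  # meeting rate per buying agent (up to psi*sigma)

theta, sigma = 1.2, 0.8         # illustrative tightness and search effort
pi_I = psi * sigma * mu(theta)  # probability a product line meets a buyer
pi_F = psi * sigma * q(theta)   # probability a buying agent meets a supplier
print(pi_I, pi_F, theta * pi_F) # CRS implies pi_I == theta * pi_F
```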
Each firm in sector \(I\) faces the cost of searching with intensity \(\sigma^I_{j,t}\) equal to:
\[
c\left(\sigma^I_{j,t}\right) = \frac{\left(\sigma^I_{j,t}\right)^{1+\nu}}{1+\nu}, \quad j \in \{1, 2, ..., J\}.
\]
3.4.1 Production technology
A product line manufactures intermediate goods according to the production technology:
\[
\tilde{y}_{j,t} = x_j h_{j,t},
\]
where \(\tilde{y}_{j,t}\) is the output for firms in the intermediate-goods sector (a tilde indicates intermediate-goods sector variables), and \(x_j\) is the idiosyncratic productivity for type-\(j\) intermediate goods producer. Each product line matches with one worker and hours are fixed to one (i.e., \(h_{j,t} = 1\)).
Final goods producers transform the intermediate goods into the final goods \(y_{j,t}\) with the linear production technology:
\[
y_{j,t} = z_t \tilde{y}_{j,t} = z_t x_j,
\]
where the aggregate productivity \(z_t\) follows \(\log(z_t) = \rho_z \log(z_{t-1}) + \sigma_z \epsilon_t\), where \(\epsilon_t \sim \mathcal{N}(0, 1)\).
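A minimal simulation of this AR(1) process, with hypothetical values for $\rho_z$ and $\sigma_z$ (they are not calibrated in this section), confirms the stationary variance $\sigma_z^2/(1-\rho_z^2)$ of $\log(z_t)$:

```python
import math
import random

rho_z, sigma_z = 0.95, 0.01     # hypothetical persistence and shock volatility
rng = random.Random(1)

def simulate_log_z(T):
    """Simulate log(z_t) = rho_z * log(z_{t-1}) + sigma_z * eps_t, eps ~ N(0,1)."""
    log_z, path = 0.0, []
    for _ in range(T):
        log_z = rho_z * log_z + sigma_z * rng.gauss(0.0, 1.0)
        path.append(log_z)
    return path

path = simulate_log_z(50_000)
var_sample = sum(x * x for x in path) / len(path)
var_theory = sigma_z ** 2 / (1 - rho_z ** 2)   # stationary variance of log(z)
print(var_sample, var_theory)
```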
Total output is split among the worker, the product line, and the final goods producer, such that \(w_{j,t} + \Pi^I_{j,t} + \Pi^F_{j,t} = y_{j,t}\), where \(w_{j,t}\), \(\Pi^I_{j,t}\), and \(\Pi^F_{j,t}\) are the wage of the worker and the profits of the product line and final goods producer (conditional on vendor relation formation and labor market matching), respectively, which are determined by Nash bargaining.
3.4.2 Optimal search effort for intermediate goods producers
The product line chooses the optimal search effort by maximizing the value \(\tilde{J}_{j,t}\):
\[
\max_{\sigma^I_{j,t} \geq 0} -c(\sigma_{j,t}) + \mathbb{E}_t \left\{ \beta \left( \frac{C_t}{C_{t+1}} \right) \left[ \pi^I_{j,t} X^I_{j,t+1} + (1 - \pi^I_{j,t}) (1 - \chi) \tilde{J}^I_{j,t+1} \right] \right\},
\]
where \(\pi^I_{j,t}\) is the probability of vendor relation formation, and \(X^I_{j,t+1}\) and \((1 - \chi)\tilde{J}^I_{j,t+1}\) are the continuation values conditional on success and failure of the vendor contract, respectively. The interior solution to the maximization problem in equation (30) is:
\[
(\sigma^I_{j,t})^\nu = \mathbb{E}_t \left[ \beta \left( \frac{C_t}{C_{t+1}} \right) \psi \mu(\theta_{j,t}) \Delta J^I_{j,t+1} \right],
\]
where \(\Delta J^I_{j,t}\) is the capital gain due to the establishment of a vendor contract:
\[
\Delta J^I_{j,t+1} = X^I_{j,t+1} - (1 - \chi) \tilde{J}^I_{j,t+1},
\]
which includes the capital gain \( X^I_{j,t+1} - \widetilde{J}^I_{j,t+1} \) and the gain \( \chi \widetilde{J}^I_{j,t+1} \), which captures the fact that a product line with a vendor contract does not become obsolete.
The left-hand side (LHS) of equation (31) is the marginal cost of exerting search effort to form a vendor relation for \( j \)-type firm in sector \( I \), and the RHS of the equation is the expected benefit of signing a vendor contract, which increases in tightness \( \theta_{j,t} \) (since \( \mu'(\theta_{j,t}) > 0 \)) and in the capital gain from forming a vendor relation.
The solution to the optimization problem is:
\[
\sigma^I_{j,t} = \left\{ \mathbb{E}_t \left[ \beta \left( \frac{C_t}{C_{t+1}} \right) \psi \mu(\theta_{j,t}) \Delta J^I_{j,t+1} \right] \right\}^{\frac{1}{\nu}}.
\]
(33)
Since \( \nu > 1 \) and \( \mu(\cdot) \) is an increasing function, equation (33) shows that the optimal search intensity \( \sigma^I_{j,t} \) is strictly positive and increases with the tightness ratio \( \theta_{j,t} \).
In the online appendix, we show the following result.
**Proposition 2.** In the steady state, ceteris paribus, firm \( I \)'s search effort increases with the firm’s vacancy share \( s_j \), and it decreases with the probability of forgiveness \( \tilde{\delta} \).
Proposition 2 establishes that strong market power (either because a firm owns a larger share in the labor market, or because it exercises a lower probability of forgiveness) implies a greater search effort, conditional on a level of productivity and on the number of visits from sector \( F \). Intuitively, strong labor market power enables firms to offer a lower wage to the worker, which expands the firm’s profit for every signed vendor contract, which in turn stimulates more active search. As we will show later, a critical implication of Proposition 2 is that labor market power entails a more concentrated market structure.
### 3.4.3 Buying agents and search complementarity
The expected value of sending a buying agent for a firm in sector \( F \) is:
\[
V^F_t = \max_j \left\{ -\kappa + \mathbb{E}_t \left[ \beta \left( \frac{C_t}{C_{t+1}} \right) \pi^F_{j,t} \left( X^F_{j,t+1} - \widetilde{J}^F_{j,t+1} \right) \right] \right\}.
\]
(34)
Equation (34) shows that each firm in sector \( F \) pays a unit cost \( \kappa \) for each agent who visits submarket \( j \) that may establish a vendor relation with probability \( \pi^F_{j,t} = \psi \sigma^I_{j,t} q(\theta_{j,t}) \), and brings a capital gain \( X^F_{j,t+1} - \widetilde{J}^F_{j,t+1} \).
Firms in sector $F$ send buying agents to visit prospective intermediate goods suppliers at the optimal submarkets until the value of forming a vendor relation collapses to zero (recall that a law of large numbers holds in the economy and thus, conditional on the aggregate states, expected and realized profits are equated): $V_t^F = 0$.
Substituting this last condition into equation (34), we get:
$$\max_j \left\{ -\kappa + \mathbb{E}_t \left[ \beta \left( \frac{C_t}{C_{t+1}} \right) \psi \sigma_{j,t}^I q(\theta_{j,t}) \left( X_{j,t+1}^F - \bar{J}_{j,t+1}^F \right) \right] \right\} = 0,$$
such that the expected capital gain in each submarket $j$ is equal to the cost $\kappa$:
$$q(\theta_{j,t}) \sigma_{j,t}^I \mathbb{E}_t \left[ \beta \left( \frac{C_t}{C_{t+1}} \right) \psi \cdot \left( X_{j,t+1}^F - \bar{J}_{j,t+1}^F \right) \right] = \kappa,$$
(35)
and consequently the submarkets with a higher capital gain, $X_{j,t+1}^F - \bar{J}_{j,t+1}^F$, attract more buying agents. The inflow of buying agents increases the tightness ratio in each submarket, which decreases the matching probability for those buying agents. In equilibrium, the tightness ratio adjusts to make the expected gain from entering each submarket equal to the cost $\kappa$.
Equation (35) implies that, because $q(\cdot)$ is a decreasing function, the tightness ratio $\theta_{j,t}$ increases with intermediate goods producers’ search effort $\sigma_{j,t}^I$:
$$\theta_{j,t} = q^{-1} \left[ \frac{\kappa}{\sigma_{j,t}^I \mathbb{E}_t \left[ \beta \left( \frac{C_t}{C_{t+1}} \right) \psi \cdot \left( X_{j,t+1}^F - \bar{J}_{j,t+1}^F \right) \right]} \right].$$
(36)
As in our simple model, directed search is key to generate search complementarity in the formation of vendor relations.
### 3.5 Period equilibrium
Given the aggregate state of the economy, the period equilibrium of submarket $j$ is a tuple $\{\sigma_{j,t}^I, \theta_{j,t}\}$ that is a fixed point of the system formed by the best response function (33) and the free-entry condition (36). As before, we ignore the trivial equilibrium with zero output. The whole dynamic equilibrium of the economy is a repetition of these period equilibria, linked by the value functions outlined above.
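A sketch of this fixed point, assuming the Cobb-Douglas forms $\mu(\theta)=\theta^{\alpha}$ and $q(\theta)=\theta^{\alpha-1}$ and replacing the expected discounted capital gains $\Delta J^I$ and $X^F - \widetilde{J}^F$ with fixed placeholder constants (in the model they are equilibrium objects), can be computed by damped iteration on (33) and (36):

```python
beta, psi, nu, kappa, alpha = 0.987, 0.54, 3.0, 1.0, 0.5
dJ_I, dJ_F = 2.0, 3.0   # placeholder discounted capital gains for sectors I and F

def best_response_sigma(theta):
    # Equation (33): sigma^nu = beta * psi * mu(theta) * dJ_I, with mu(theta) = theta^alpha
    return (beta * psi * theta ** alpha * dJ_I) ** (1.0 / nu)

def free_entry_theta(sigma):
    # Equation (36): q(theta) = kappa / (sigma * beta * psi * dJ_F), with q(theta) = theta^(alpha - 1)
    return (kappa / (sigma * beta * psi * dJ_F)) ** (1.0 / (alpha - 1.0))

theta = 1.0
for _ in range(200):                  # damped fixed-point iteration
    sigma = best_response_sigma(theta)
    theta = 0.5 * theta + 0.5 * free_entry_theta(sigma)

print(sigma, theta)                   # interior period equilibrium {sigma, theta}
```

In this sketch, raising either placeholder gain increases both $\sigma$ and $\theta$, illustrating the strategic complementarity between search effort and buying-agent entry.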
To determine the measure of firms and aggregate output, we assume that new product lines are created at the constant rate $n$ in each period $t$. The measure of product lines that remain unmatched with a final goods producer in the next period $t+1$ ($\tilde{n}_{j,t+1}^I$) is equal to those lines that fail to sign a vendor contract and do not become obsolete ($(1 - \pi_{j,t}^I)(1 - \chi)\tilde{n}_{j,t}^I$), plus those that have recently separated from a vendor relation ($\hat{\delta}n_{j,t}$), plus new product lines ($n$), such that:
$$\tilde{n}_{j,t+1}^I = (1 - \pi_{j,t}^I)(1 - \chi)\tilde{n}_{j,t}^I + \hat{\delta}n_{j,t} + n$$ \hspace{1cm} (37)
Using the definition of the tightness ratio $\theta_{j,t}$, the measure of buying agents sent to submarket $j$ is $\tilde{n}_{j,t}^F = \tilde{n}_{j,t}^I\theta_{j,t}$, and the measure of vendor relations ($n_{j,t+1}$) comprises those that survive separation ($(1 - \hat{\delta})n_{j,t}$) plus newly formed vendor relations ($\pi_{j,t}^I\tilde{n}_{j,t}^I$), such that:
$$n_{j,t+1} = (1 - \hat{\delta})n_{j,t} + \pi_{j,t}^I\tilde{n}_{j,t}^I.$$ \hspace{1cm} (38)
The measure of vendor relations matched with a worker ($\hat{n}_{j,t+1}$) comprises those that neither separate from the worker nor dissolve ($(1 - \delta - \hat{\delta})\hat{n}_{j,t}$) plus the new labor market matches ($p_t^nv_{j,t}$):
$$\hat{n}_{j,t+1} = (1 - \delta - \hat{\delta})\hat{n}_{j,t} + p_t^nv_{j,t}.$$ \hspace{1cm} (39)
The measure of vendor relations that are unmatched with workers is $v_{j,t} = n_{j,t} - \hat{n}_{j,t}$, and the measure of vacancies corresponds to the total measure of vendor relations that are unmatched with workers $v_t = \sum v_{j,t}$.
Unemployment evolves as $u_{t+1} = (1 - p_t^u)u_t + (\delta + \hat{\delta})\sum \hat{n}_{j,t}$, where the first term on the RHS reflects the unemployment outflow induced by job creation ($p_t^uu_t$), and the second term is the unemployment inflow from random job separation and vendor-relation dissolution.
Aggregate output is a weighted sum of final goods produced across submarkets $Y_t = \sum_{j=1}^{J}\hat{n}_{j,t}y_{j,t}$, where $\hat{n}_{j,t}$ is the measure of vendor relations matched with worker, determined by equation (39), and $y_{j,t}$ is the final output of vendor relations, determined by equations (28) and (29), respectively. Aggregate output is used for aggregate consumption, $C_t$, search costs, and entry costs, which yields:
$$Y_t = C_t + \sum_{j=1}^{J}\tilde{n}_{j,t}^I\,\frac{\left(\sigma_{j,t}^I\right)^{1+\nu}}{1+\nu} + \kappa\sum_{j=1}^{J}\tilde{n}_{j,t}^F.$$
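The laws of motion (37)–(39) can be iterated to a steady state for one submarket. In the sketch below the transition probabilities are held constant at illustrative values (in the model, $\pi^I_{j,t}$ and $p^n_t$ are equilibrium objects):

```python
# Illustrative constant transition rates for a single submarket
pi_I, chi = 0.25, 0.14            # vendor-contract formation, obsolescence
delta, delta_hat = 0.022, 1 / 16  # job separation, vendor-relation dissolution
p_n, n_new = 0.54, 0.0017         # vacancy filling rate, product-line inflow

n_tilde, n, n_hat = 0.1, 0.1, 0.05   # unmatched lines, relations, active relations
for _ in range(5_000):
    v = n - n_hat                     # vendor relations posting vacancies
    n_tilde, n, n_hat = (
        (1 - pi_I) * (1 - chi) * n_tilde + delta_hat * n + n_new,  # eq. (37)
        (1 - delta_hat) * n + pi_I * n_tilde,                      # eq. (38)
        (1 - delta - delta_hat) * n_hat + p_n * v,                 # eq. (39)
    )

print(n_tilde, n, n_hat, n - n_hat)   # steady-state stocks and vacancies
```

The tuple assignment updates all three stocks simultaneously, so every right-hand side uses period-$t$ values, matching the timing of equations (37)–(39).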
4 Calibration and measurement
We calibrate our model by matching its deterministic steady state (DSS) to post-WWII U.S. data at a quarterly frequency. A discount factor $\beta$ of 0.987 (equivalent to 0.95 at a yearly frequency) replicates an average annual interest rate of 5% over the sample period.
We pick $J = 20$ productivity types, such that each type of firm $I$ corresponds to a vigintile of the productivity distribution. Hence, type-1 firms are the bottom 5% of the productivity distribution and type-20 firms the top 5%. In our model, the measured total-factor productivity (mTFP) of firms $I$ results from the combination of the exogenous productivity, $x_j$, and the endogenous product line utilization rate, $n_j / \left( n_j + \tilde{n}_j^I \right)$. Thus, we calibrate the dispersion of $x_j$ to match the observation by Syverson (2011) that the average ratio of mTFP between plants at the 90th and 10th percentiles of the productivity distribution, using four-digit SIC industries in the U.S. manufacturing sector, is 1.92. We match this ratio by assuming that $\log(x_j)$ is uniformly distributed between $-0.12$ and $0.12$.
With respect to the search cost function, we set $\nu = 3$, implying that the marginal search cost is a cubic function of the search effort. We normalize the cost of sending a buying agent to be equal to the average productivity of vendor relations, i.e., $\kappa = 1$ (the parameter $\psi$, to be calibrated below, varies to compensate for this normalization).
We calibrate $\hat{\delta} = 1/16$ to replicate the average duration of 4 years of vendor relations in the Compustat Customer Segment data (which report the major customers for a subset of U.S. listed companies on a yearly basis). For the $H(\cdot)$ function, we assume a Cobb-Douglas form, $\left( \tilde{n}_j^F \right)^\alpha \left( \tilde{n}_j^I \right)^{1-\alpha}$, where $\alpha = 0.5$ imposes symmetry. By setting $\psi = 0.54$, we get that 88% of product lines for the median firm are active in the DSS, matching the observed 12% average rate of idleness in the U.S. non-manufacturing and manufacturing sectors before the Great Recession (Michaillat and Saez, 2015, and Ghassibe and Zanetti, 2020).
Following Shimer (2005) and Thomas and Zanetti (2009), the flow value of unemployment $\xi$ (the marginal value of leisure in our model) is set to 40% of the mean labor productivity. The worker’s bargaining share $\tilde{\tau}$ is set to 0.65, such that the labor income share of output is equal to 0.66, consistent with the long-run average of labor share in the U.S. economy. With $\tau = 0.5$, the remaining 34% of total income is evenly distributed between firms $I$ and $F$.
We normalize population to one. Following Shimer (2005), we target the quarterly job finding rate $p^u = 0.7$, an unemployment rate $u = 0.055$, and labor market tightness $v/u = 1.3$. These targets imply that the probability of filling a vacancy, equal for all firms, is $p^n = 1 - e^{-u/v} = 0.54$, the employment-to-unemployment (EU) transition rate is $0.041$ (since $0.041/(0.041 + 0.7) = 0.055$), and the EU transition probability from vendor-contract dissolutions is $(1 - p^n)\hat{\delta} = 0.019$. Thus $\delta$, the exogenous job separation rate, is $0.041 - 0.019 = 0.022$. We set the creation rate of new product lines, $\hat{n}$, equal to 0.0017 to be consistent with this calibration.
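The arithmetic behind these targets can be checked directly; the sketch below reproduces the implied vacancy filling rate, the steady-state unemployment decomposition, and the exogenous separation rate:

```python
import math

p_u, u_rate, tightness = 0.7, 0.055, 1.3   # targeted p^u, u, and v/u

# Vacancy filling rate implied by the urn-ball matching function,
# p^n = 1 - e^(-u/v), with u/v = 1/1.3
p_n = 1 - math.exp(-1 / tightness)
print(round(p_n, 2))                        # ~0.54

# Steady-state unemployment requires EU / (EU + p^u) = u
EU = 0.041
print(round(EU / (EU + p_u), 3))            # ~0.055

# Exogenous job separation: total EU rate net of the vendor-dissolution part
delta = EU - 0.019
print(round(delta, 3))                      # 0.022
```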
In our model, the rate of obsolescence of a product line can be interpreted as the rate of plant exit. Lee and Mukoyama (2015) estimate the average exit rate of manufacturing plants to be 5.5% on a yearly basis (1.4% on a quarterly basis) using the Longitudinal Research Database (LRD) from the U.S. Census Bureau (see Hamano and Zanetti, 2017, for a discussion of the empirical estimates of plant entry and exit rates). Hence, we set the rate of product line obsolescence to $\chi = 0.14$, which yields an average obsolescence rate equal to 1.4%.
Finally, we use the model to measure the probability of labor market forgiveness, $\tilde{\delta}$, equal to 0.51, that matches the output share of top 10% firms of 0.64 reported by Autor et al. (2020). A value $\tilde{\delta} = 0.51$ means that firms forgive workers on average after 2 periods (i.e., after six months). In our numerical analysis below, we will vary $\tilde{\delta}$ to assess the non-linear effect of monopsony power on market concentration. Table 1 summarizes our model’s calibration.
| Description | Parameter | Value |
|--------------------------------------------------|-----------|---------|
| Discount factor | $\beta$ | 0.987 |
| Number of firm types                             | $J$       | 20      |
| Productivity | $\log(x_j)$ | $\mathcal{U}[-0.12,0.12]$ |
| Search cost function, curvature | $\nu$ | 3 |
| Cost of sending a buying agent | $\kappa$ | 1 |
| Vendor contract expiration rate | $\hat{\delta}$ | $1/16$ |
| Matching elasticity | $\alpha$ | 0.5 |
| Matching efficiency | $\psi$ | 0.54 |
| Flow value of unemployment | $\xi$ | 0.4 |
| Worker’s bargaining share | $\tilde{\tau}$ | 0.65 |
| Final goods firm’s bargaining share | $\tau$ | 0.5 |
| Exogenous job separation rate | $\delta$ | 0.022 |
| Inflow of product line | $\hat{n}$ | 0.0017 |
| Rate of product line obsolescence | $\chi$ | 0.14 |
| Probability of labor market forgiveness | $\tilde{\delta}$ | 0.51 |
**Table 1:** Calibration
5 Quantitative results
In this section, we report the main quantitative findings from our extended model:
1. Search effort is increasing with the level of productivity.
2. Search complementarities, in the presence of direct search, induce market concentration in terms of firms’ size, vacancies, and output.
3. Monopsony power in the labor market reinforces the role of search complementarities: high-productivity firms get bigger. Monopsony power lowers wages and the labor income share, but it also moves workers toward high-wage jobs and increases wage inequality.
4. Monopsony power in the labor market, in the absence of strategic complementarities, has a limited effect on market concentration.
5. Lower search costs increase market concentration.
6. Search complementarities amplify the effect of negative aggregate productivity shocks and make them much more persistent. These shocks also increase market concentration because they disproportionally affect the output of low-productivity firms; since larger firms react less, higher concentration in turn lowers the volatility of the economy in response to negative aggregate shocks.
Notice how these findings mirror our simple model’s main takeaways in Section 2. Let us review each of these quantitative findings in more detail.
5.1 Search effort is increasing with the level of productivity
Figure 13 plots, for each productivity level \((j)\), the search efforts \(\{\sigma_j^I\}_{j=1}^J\) (top panel) of firms \(I\), the measure of buying agents sent by firm \(F\) to each island \(\{\bar{n}_j^F\}_{j=1}^J\) (middle panel), and the probability for product-lines to form vendor relations \(\{\pi_j^f\}_{j=1}^J\) (bottom panel), all at the DSS. Higher productivity intermediate-goods producers search more intensively, attract more buying agents, and enjoy a higher matching probability.
5.2 Search complementarities induce market concentration
We turn now to the distribution of firm size (i.e., the measure of product lines), vacancies, and output. Equations (37) and (38) give us the measures of product lines that are unmatched:
\[
\tilde{n}_j^I = \frac{\hat{n}}{(1 - \pi_j^I) \chi}
\]
and matched in the DSS:
\[
n_j = \frac{\hat{n} \pi_j^I}{\hat{\delta} (1 - \pi_j^I) \chi}.
\]
Therefore, firm size is:
\[
n_j + \tilde{n}_j^I = \frac{\hat{n} \left(1 + \pi_j^I / \hat{\delta}\right)}{(1 - \pi_j^I) \chi},
\]
which is strictly increasing in the probability of vendor relation formation \( \pi_j^I \), and strictly decreasing in the rate of product line obsolescence \( \chi \).
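These comparative statics can be checked numerically with the calibrated $\hat{n}$, $\hat{\delta}$, and $\chi$ from Table 1 (a sketch; the values of $\pi_j^I$ are illustrative):

```python
# Firm size n_j + n~_j = n_hat * (1 + pi / delta_hat) / ((1 - pi) * chi),
# with n_hat, delta_hat, chi from Table 1 and illustrative pi values.
n_hat, delta_hat = 0.0017, 1 / 16

def firm_size(pi, chi):
    return n_hat * (1 + pi / delta_hat) / ((1 - pi) * chi)

sizes = [firm_size(pi, 0.14) for pi in (0.5, 0.7, 0.9)]
assert sizes[0] < sizes[1] < sizes[2]              # increasing in pi
assert firm_size(0.7, 0.14) > firm_size(0.7, 0.2)  # decreasing in chi
```

Note how the size blows up as $\pi_j^I \to 1$, the nonlinearity emphasized in the discussion below.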
The upper panel of Figure 14 plots the distribution of the measure of product lines matched with a worker. The measure of product lines is increasing in the firm’s productivity and highly concentrated among the “superstar firms”: the top 5% own 48% of product lines at the DSS and the next 5% own an additional 14.5%.
The high concentration of product lines appears despite the moderate productivity dispersion in our calibration. The top 5% of firms own around three times as many product lines as the next 5% of firms, although the former are only 1.3% more productive than the latter. Why do we have such large differences? Because equations (40)-(41) imply that the measure of product lines is nonlinear in $\pi_j^I$. As shown in the bottom panel of Figure 13, the nonlinearity becomes stronger as $\pi_j^I$ gets close to one, which is the case for the most productive firms.
With respect to the distribution of vacancies (bottom panel of Figure 14), since all firms $I$ face the same vacancy-filling rate, vacancies are a measure of effective labor demand. The number of vacancies posted by a firm with $j$-type productivity:
$$v_j = \frac{n_j}{1 + p^n / (\delta + \hat{\delta})} = \frac{(\delta + \hat{\delta})\, \hat{n}\, \pi_j^I}{\hat{\delta} (1 - \pi_j^I) \chi (p^n + \delta + \hat{\delta})},$$
is strictly increasing in the probability of forming a vendor relation $\pi_j^I$, but strictly decreasing in the rate of product line obsolescence $\chi$, for a given probability of matching with a worker $p^n$. Intuitively, a higher $\pi_j^I$ or a lower $\chi$ reduces the effective rate of product line obsolescence and, hence, raises the DSS measure of product lines ($n_j + \tilde{n}_j$). As a firm gains more product lines, it also needs to expand hiring ($v_j$) to staff a larger number of production lines. At the DSS, the top 5% of firms post 47% of all vacancies and the next 5% post an additional 14.3%.
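The same calibration lets us check the vacancy formula numerically (a sketch; the two $\pi_j^I$ values are illustrative):

```python
# Vacancies v_j = n_j * (delta + delta_hat) / (p_n + delta + delta_hat),
# with n_j the DSS measure of matched product lines. Rates from Table 1.
delta, delta_hat, p_n = 0.022, 1 / 16, 0.54
chi, n_hat = 0.14, 0.0017

def vacancies(pi):
    n_j = n_hat * pi / (delta_hat * (1 - pi) * chi)  # matched product lines
    return n_j * (delta + delta_hat) / (p_n + delta + delta_hat)

# A higher probability of forming a vendor relation means more hiring:
assert vacancies(0.9) > vacancies(0.5)
```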
With respect to the distribution of output, a type-$j$ firm produces, at the DSS:
\[
y_j = \underbrace{z x_j}_{\text{output per active prod. line}} \cdot \underbrace{\frac{n_j}{n_j + \tilde{n}_j^I}}_{\text{prod. line util. rate}} \cdot \underbrace{\left(n_j + \tilde{n}_j^I\right)}_{\text{measure of prod. lines}}
\] (42)
Equation (42) embodies the four complementary channels that generate market concentration in our model. First, high-productivity firms produce more output per active product line (the first term on the RHS of equation (42)). While this channel is present in most models with heterogeneous firms, in our model it explains only a small fraction of industry concentration given the small calibrated productivity dispersion. Second, high-productivity firms search more actively for potential partners, and, due to directed search and search complementarities, potential partners send more buying agents to them. Consequently, high-productivity firms have a higher product line utilization rate (the second term on the RHS of equation (42)). Third, since the product lines of high-productivity firms are active more frequently, fewer of them become obsolete, which increases the measure of high-productivity product lines (the third term on the RHS of equation (42)). Fourth, firms have monopsony power (i.e., $\tilde{\delta} < 1$). This monopsony power increases the profit share of vendor matches and gives intermediate goods producers an incentive to exert a higher search effort and expand their active product lines. By gaining market share, high-productivity firms can extract a larger share of the labor surplus. This effect, however, is somewhat hidden in equation (42) because it affects $\tilde{n}_j^I$ and $n_j$ nonlinearly. Thus, we return to these nonlinear relations in the next subsection.
Like channels two and three, channel four is a novel and powerful mechanism to generate market concentration. The key point is that the effect of monopsony power in the labor market is not uniform across firms. Its effect is minimal for low-productivity firms, since their labor market power is limited by their small labor market size. However, labor market punishment increases the output share of high-productivity firms ($s_j$), which in turn increases their labor market power. In other words, market concentration and monopsony power reinforce each other.
Figure 15 displays the quantitative implication of these channels. The top panel shows the distribution of the rate of utilization of product lines at the DSS. High-productivity firms form vendor relations more quickly and, thus, utilize their product lines more efficiently. While the bottom 5% of firms have 54% of their product lines active, the top 5% operate 95% of their
product lines. The bottom panel shows the distribution of $I$-firms’ output share ($y_j/Y$). The most productive firms account for a disproportionate share of output: the top 5% produce 49% of output and the next 5% produce an additional 15%, while the bottom 5% only generate 0.13% of output. These numbers are in line with the empirical observations documented by Autor et al. (2020).

The top panels of Figures 14 and 15 show that these substantial differences in output arise mainly because high-productivity firms have more product lines and because a larger share of those lines are active. The difference in exogenous idiosyncratic productivity between the top 5% of “superstar firms” and the bottom 5% of “lightweight firms” is 24%. Yet, our model generates a ratio of 372 between the outputs of the top and bottom firms.
We can compare this result with a simple span-of-control model à la Lucas. With a production function $xl^\gamma$, where $x$ is managerial talent and $l$ is hours worked, the output ratio in such a model between two firms with $x = e^{-0.12}$ and $x = e^{0.12}$ (the same dispersion in managerial abilities as the dispersion in productivities in our model) is $e^{0.24/(1-\gamma)}$. To replicate an output ratio of 372, we would need $\gamma = 0.96$, which is much higher than existing estimates of returns to scale. For example, Atkeson and Kehoe (2005) argue that, in a span-of-control model, we should calibrate $\gamma = 0.85$, while Guner et al. (2018) estimate an even lower $\gamma = 0.77$. An alternative way to think about this is that a span-of-control model with $\gamma = 0.96$ would generate differences in mTFP much larger than the ratio of 1.92 documented by Syverson (2011).
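The required $\gamma$ can be backed out directly from the output-ratio formula (a quick check of the arithmetic in this paragraph):

```python
import math

# Span-of-control output ratio: exp(0.24 / (1 - gamma)) = 372.
# Solving for gamma:
gamma = 1 - 0.24 / math.log(372)
print(round(gamma, 2))  # → 0.96

# With the gamma = 0.85 of Atkeson and Kehoe (2005), the same
# productivity dispersion yields a far smaller output ratio:
print(round(math.exp(0.24 / (1 - 0.85)), 1))  # → 5.0
```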
Our results also challenge the prediction of the classic model of market power regarding excess capacity (an idea that goes back to Wicksell, 1934): firms with market power operate under excess capacity in equilibrium. Figures 14 and 15 show that our model delivers the opposite result: top firms operate at a higher utilization rate, eliminating a key source of inefficiency in the economy. The more concentrated the market, the greater the rate of utilization of product lines.
5.3 Monopsony power: market structure and wages
We mentioned before that the role of monopsony power in driving market concentration is hard to gauge directly from equation (42) because it affects $\tilde{n}_j^I$ and $n_j$ nonlinearly. Thus, Figure 16 shows the market structure at the DSS for three alternative values of $\tilde{\delta}$: 0.51 (our benchmark calibration), 0.75, and 1.

The top panel of Figure 16 is the same as the bottom panel of Figure 15: the top 10% of firms produce 64.1% of output, our calibration target. The middle panel of Figure 16 documents that, as we increase $\tilde{\delta}$ to 0.75 (equivalent to an average forgiveness time of 1.3 periods), the share of output of the top 10% of firms falls to 56.9%. When we completely eliminate monopsony power (i.e., $\tilde{\delta} = 1$), the share of output of the top 10% of firms becomes 53.1%.
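The durations quoted here are simply the expected length $1/\tilde{\delta}$ of the punishment spell (a one-line check):

```python
# Expected punishment duration (in periods) for each forgiveness probability.
for delta_tilde in (0.51, 0.75, 1.0):
    print(delta_tilde, round(1 / delta_tilde, 2))
# 0.51 -> about 2 periods (six months); 0.75 -> about 1.3 periods;
# 1.0  -> no punishment beyond the current period.
```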
---
\footnote{Figure 16 also illustrates Proposition 2: firms with a larger market share search more intensely, but this intensity decreases with $\tilde{\delta}$.}
Figure 16 justifies our assertion in Section 4 that we can think about our model as a measurement device: the model tells us how much monopsony power we need to account for a market concentration that is consistent with mTFP, rate of idleness, and labor market observations. We find it intriguing that our model measures a moderate amount of monopsony power (a punishment that lasts only six months on average), yet such monopsony power can increase the output share of the top 10% of firms from 53.1% to 64.1%.

**Figure 17:** Wage with different labor market power
We move now to wages. In our model, monopsony power affects the wage distribution through two channels. In the first channel, the equilibrium wage decreases with the threatening power (i.e., increases with $\tilde{\delta}$). We can see this effect in Figure 17, which plots the distribution of wages for $j$-type firms with high labor market power (dark-green histograms), medium labor market power (green histograms), and no labor market power (light-green histograms). As stated in Proposition 1, wages are increasing in the productivity of the firm, but decreasing in monopsony power. Moreover, the wage differential within each $j$-type firm is nonlinear: the more productive the firm, the stronger its threatening power and, therefore, the larger the share of the surplus kept by the firm.
Our result corroborates a large empirical literature on the negative effect of market concentration on wage compensation; see, for example, Dube et al. (2016), Benmelech et al. (2018), Qiu and Sojourner (2019), and Naidu et al. (2018). Also, Berger et al. (2019) find that markdowns, the ratio between the marginal revenue product of labor and its wage, are increasing in firm size. Jarosch et al. (2019) show that employer market power is also boosted by search and matching frictions. Finally, in Peters (2020), firms’ market power is endogenous and the distribution of markups emerges as an equilibrium outcome.
The second channel through which monopsony power affects the wage distribution is that it reallocates workers toward high-productivity firms, which have lower labor income shares. This last point is a standard feature of search and matching models. Intuitively, when a firm’s productivity is sufficiently low, the firm still needs to compensate workers for their outside option of finding another job, and it earns close to zero profit. In this case, the labor income share is close to one. As the firm’s productivity increases, the outside option becomes less binding and the labor income share decreases. In addition, by reallocating workers toward high-productivity firms, monopsony power increases wage inequality.
These results agree with the empirical evidence. De Loecker et al. (2020) find that the decline in the economy-wide labor share is predominantly driven by large, high markup firms that have individually low labor shares. Similar findings are reported by Autor et al. (2020) and Kehrig and Vincent (2017).
But the labor income share also falls when $\tilde{\delta}$ decreases (i.e., when we assume a higher threatening power). In our benchmark calibration ($\tilde{\delta} = 0.51$), the labor income share is 0.66; it increases to 0.663 when $\tilde{\delta} = 0.75$ and to 0.667 when $\tilde{\delta} = 1$. While this effect is modest, it would be substantially larger if we assumed (as is likely to be the case in the real world) that high-productivity firms also have a lower $\tilde{\delta}$ (for example, through better HR processes to “punish” workers that do not accept low wage offers).
5.4 Monopsony power without search complementarities
We just saw how monopsony power in the labor market amplifies the effect of search complementarities on market concentration. But, does monopsony power generate market concentration in the absence of search complementarities? Yes, but the effect is mild.
To see this, we compute the market structure at the DSS of the model without search complementarities. To make our analysis comparable to our benchmark results, we fix all tightness ratios $\theta_{j,t}$ at their values in the benchmark DSS, but let the search effort ($\sigma_j^I$) vary. Thus, when $\tilde{\delta} = 0.51$, the distribution of output shares across firms would be the same with or without search complementarities.
Figure 18 shows the output shares for the same three levels of monopsony power as in Figure 16: 0.51 (our benchmark calibration), 0.75, and 1. By construction, the top panel of Figure 18 is identical to the top panel of Figure 16. The middle and bottom panels of Figure 18 show that the role of monopsony power in market concentration becomes milder as the incentives to scale up production become smaller. For example, the top 5% of firms decrease their output share only from 49.5% to 46.2% when $\tilde{\delta}$ increases from 0.51 to 0.75.
5.5 Lower search costs increase market concentration
We study the dependence of market structure on the cost of signing a vendor contract by considering a 2% permanent decline in the unit cost of visiting each submarket, $\kappa$, from 1 to 0.98. The bottom panel in Figure 19 shows our benchmark case of $\kappa = 1$, while the top panel shows the firms’ output when $\kappa = 0.98$. A lower unit search cost induces all firms to search more actively, attracting a larger number of buying agents from sector $F$, increasing the probability of forming a vendor relation, and raising the number of product lines at the new DSS. However, the top 5% of “superstar firms” benefit the most from the reduction in $\kappa$: their output share grows from 49.3% to 65.6%.
As in the case of the basic model, we interpret these results as suggesting that improvements in IT over the last few decades (or, more generally, in the ability to scale up production) have been a critical factor behind the recent increase in market concentration documented by Autor et al. (2020) and others.
5.6 Response of output to aggregate shocks
Finally, we explore how the model responds to an aggregate shock. To do this, we implement a negative aggregate productivity shock, which reduces all firms’ log-productivity. To reduce the computational burden of keeping track of 20 different types of firms, we slightly simplify our problem by assuming, for this subsection only, that the utility function of the household is linear in consumption.
Figure 20 shows the IRFs of the output of type-1 firms (the bottom 5%; continuous blue line), type-10 firms (the median firms; discontinuous black line), and type-20 firms (the top 5%; discontinuous red line) to an aggregate shock that reduces log aggregate productivity by 10%. Aggregate productivity then reverts to the steady state with a persistence of 0.95. We express the IRFs in percentage deviations with respect to the DSS to allow for easier comparison.
At impact, all firms’ output drops by 9.52%. The recovery after this drop is slow because
the productivity shock reduces firms’ incentive to search. Thus, more product lines remain idle and become obsolete at a higher rate. This protracted destruction of product lines induces substantial endogenous persistence in output.
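The 9.52% figure is simply the level effect of the 10% drop in log productivity, and the shock's own half-life follows from the persistence of 0.95 (a quick check of the arithmetic):

```python
import math

# Level drop at impact from a 10% fall in log productivity:
print(round(100 * (1 - math.exp(-0.10)), 2))  # → 9.52

# Half-life (in periods) of the exogenous shock, with persistence 0.95:
print(round(math.log(0.5) / math.log(0.95), 1))  # → 13.5
```

The endogenous dynamics described above make the model's output IRFs far more persistent than this exogenous half-life alone would suggest.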

**Figure 20:** Response of output to 10% TFP shock
Interestingly, the recovery process is uneven across firms and increases market concentration. Specifically, it takes low-productivity firms longer to recover. Intuitively, low-productivity firms’ profit share is lower because of the workers’ outside option of finding higher-paying jobs. Consequently, low-productivity firms’ search effort is more sensitive to productivity shocks, and these firms lose more product lines in relative terms. In fact, for a few periods after the shock, the higher obsolescence rate of the non-active product lines of low-productivity firms is such a powerful mechanism that their output continues dropping even as aggregate productivity reverts to its mean. This mechanism accounts for the U-shaped IRF of low-productivity firms. In comparison, high-productivity firms recover faster, and market concentration increases.
Our findings agree with Şahin et al. (2011), who documented, for example, that between December 2007 and December 2009, jobs declined 10.4% in small firms (those with fewer than fifty employees), compared with 7.5% in large ones. These differences are close in magnitude to what we see in Figure 20.
Figure 20 also links our model with the Great Moderation of the U.S. economy after 1984.
Since larger firms react less to negative aggregate shocks, the fall in search costs that, as we argued above, helps explain the growth in the size of superstar firms also helps account for the lower aggregate volatility of the economy.
6 Conclusion
Search complementarities have enormous consequences for market structure and the firm’s size distribution. Through a “Matthew effect,” small differences in productivity are transformed into large differences in firms’ size, vacancies, and output. The key to this “Matthew effect” lies in the endogenous search decisions of intermediate- and final-goods producers under directed search: higher productivity leads to higher search effort by intermediate-goods producers and more buying agents sent by final-goods producers. The presence of monopsony power in the labor market reinforces the process further. These forces combine to generate superstar firms with output shares that match empirical observations.
Our model also suggests that a reduction in search costs (which can be understood more generally as a fall in the cost of scaling a business up) leads to i) higher market concentration; ii) lower labor income shares; and iii) more monopsony power for firms. We interpret the IT revolution since the 1980s in the U.S. and other advanced economies as a reduction in search costs (better logistics software, better inventory control, easier database management, etc.). Thus, our model offers a simple and parsimonious explanation of several important aspects of the data.
There is much scope for further investigation. We want to look at microdata to cross-validate the forces we highlight in our theoretical and quantitative analysis. We want to incorporate a firm’s life-cycle. We want to think about innovation and technological adoption within the context of strategic complementarities. We want to think more about heterogeneity among different industry sectors. Are search costs as relevant in heavy manufacturing as in consumer services? Do the differences among industries in terms of market structure and the firm’s size distribution align with our model? Finally, we also want to think about the policy implications of our model. We hope to explore some of these avenues of research shortly.
References
Aghion, P., Bergeaud, A., Boppart, T., Klenow, P. J., and Li, H. (2019). A theory of falling growth and rising rents. Working Paper 26448, National Bureau of Economic Research.
Akerman, A., Helpman, E., Itskhoki, O., Muendler, M.-A., and Redding, S. (2013). Sources of wage inequality. *American Economic Review*, 103(3):214–219.
Ashenfelter, O., Farber, H., and Ransom, M. (2010). Labor market monopsony. *Journal of Labor Economics*, 28(2).
Atkeson, A. and Kehoe, P. (2005). Modeling and measuring organization capital. *Journal of Political Economy*, 113(5):1026–1053.
Autor, D., Dorn, D., Katz, L. F., Patterson, C., and Van Reenen, J. (2020). The fall of the labor share and the rise of superstar firms. *Quarterly Journal of Economics*, 135:645–709.
Azar, J., Huet-Vaughn, E., Marinescu, I. E., Taska, B., and Von Wachter, T. (2019). Minimum wage employment effects and labor market concentration. SSRN Scholarly Paper, Social Science Research Network, Rochester, NY.
Benmelech, E., Bergman, N., and Kim, H. (2018). Strong employers and weak employees: How does employer concentration affect wages? Working Paper 24307, National Bureau of Economic Research.
Berger, D. W., Herkenhoff, K. F., and Mongey, S. (2019). Labor market power. Working Paper 25719, National Bureau of Economic Research.
Bessen, J. E. (2017). Industry concentration and information technology. SSRN Scholarly Paper ID 3044730, Social Science Research Network, Rochester, NY.
Bulow, J. I., Geanakoplos, J. D., and Klemperer, P. D. (1985). Multimarket Oligopoly: Strategic Substitutes and Complements. *Journal of Political Economy*, 93(3):488–511.
Burdett, K. and Mortensen, D. T. (1980). Search, layoffs, and labor market equilibrium. *Journal of Political Economy*, 88(4):652–672.
Butters, G. R. (1977). Equilibrium distributions of sales and advertising prices. *Review of Economic Studies*, 44(3):465–491.
Card, D., Cardoso, A. R., Heining, J., and Kline, P. (2018). Firms and labor market inequality: Evidence and some theory. *Journal of Labor Economics*, 36(S1):S13 – S70.
Cette, G., Koehl, L., and Philippon, T. (2019). Labor shares in some advanced economies. Working Paper 26136, National Bureau of Economic Research.
Chevalier, J. A. and Scharfstein, D. S. (1996). Capital-market imperfections and countercyclical markups: Theory and evidence. *American Economic Review*, 86(4):703–725.
Covarrubias, M., Gutiérrez, G., and Philippon, T. (2019). From good to bad concentration? U.S. industries over the past 30 years. Working Paper 25983, National Bureau of Economic Research.
De Loecker, J. and Eeckhout, J. (2018). Global market power. Working Paper 24768, National Bureau of Economic Research.
De Loecker, J., Eeckhout, J., and Unger, G. (2020). The rise of market power and the macroeconomic implications. *Quarterly Journal of Economics*, 135(2):561–644.
Diamond, P. (1982). Aggregate demand management in search equilibrium. *Journal of Political Economy*, 90(5):881–894.
Diamond, P. and Fudenberg, D. (1989). Rational expectations business cycles in search equilibrium. *Journal of Political Economy*, 97(3):606–619.
Dube, A., Lester, T. W., and Reich, M. (2016). Minimum wage shocks, employment flows, and labor market frictions. *Journal of Labor Economics*, 34(3):663–704.
Elsby, M., Hobijn, B., and Sahin, A. (2013). The decline of the U.S. labor share. *Brookings Papers on Economic Activity*, 44(2 (Fall)):1–63.
Falch, T. (2010). The elasticity of labor supply at the establishment level. *Journal of Labor Economics*, 28(2):237–266.
Fernández-Villaverde, J., Mandelman, F., Yu, Y., and Zanetti, F. (2019). Search complementarities, aggregate fluctuations, and fiscal policy. Working Paper 26210, National Bureau of Economic Research.
Garicano, L. and Rossi-Hansberg, E. (2006). Organization and inequality in a knowledge economy. *The Quarterly Journal of Economics*, 121(4):1383–1435.
Ghassibe, M. and Zanetti, F. (2020). State dependence of fiscal multipliers: the source of fluctuations matters. Mimeo, University of Oxford.
Guner, N., Parkhomenko, A., and Ventura, G. (2018). Managers and Productivity Differences. *Review of Economic Dynamics*, 29:256–282.
Gutiérrez, G. and Philippon, T. (2018). Ownership, concentration, and investment. *AEA Papers and Proceedings*, 108:432–437.
Hamano, M. and Zanetti, F. (2017). Endogenous turnover and macroeconomic dynamics. *Review of Economic Dynamics*, 26:263–279.
Hershbein, B., Macaluso, C., and Yeh, C. (2020). Monopsony in the U.S. labor market. Technical report, Working Paper.
Huo, Z. and Ríos-Rull, J.-V. (2013). Paradox of thrift recessions. Working Paper 19443, National Bureau of Economic Research.
Jarosch, G., Nimczik, J. S., and Sorkin, I. (2019). Granular search, market structure, and wages. Working Paper 26239, National Bureau of Economic Research.
Kaplan, G. and Menzio, G. (2016). Shopping externalities and self-fulfilling unemployment fluctuations. *Journal of Political Economy*, 124(3):771–825.
Karabarbounis, L. and Neiman, B. (2014). The global decline of the labor share. *Quarterly Journal of Economics*, 129(1):61–103.
Kehrig, M. and Vincent, N. (2017). Growing Productivity without Growing Wages: The Micro-Level Anatomy of the Aggregate Labor Share Decline. CESifo Working Paper Series 6454, CESifo.
Lamadon, T., Mogstad, M., and Setzler, B. (2019). Imperfect competition, compensating differentials and rent sharing in the U.S. labor market. Working Paper 25954, National Bureau of Economic Research.
Lee, Y. and Mukoyama, T. (2015). Entry and exit of manufacturing plants over the business cycle. *European Economic Review*, 77:20–27.
Lucas, R. (1978). On the size distribution of business firms. *Bell Journal of Economics*, 9(2):508–523.
Manning, A. (2011). Imperfect competition in the labor market. In Ashenfelter, O. and Card, D., editors, *Handbook of Labor Economics*, volume 4B, chapter 11, pages 973–1041. Elsevier, 1 edition.
Manning, A. (2020). Monopsony in labor markets: A review. *ILR Review*.
Marinescu, I., Ouss, I., and Pape, L.-D. (2020). Wages, hires, and labor market concentration. Working Paper 28084, National Bureau of Economic Research.
Matsudaira, J. D. (2014). Monopsony in the low-wage labor market? Evidence from minimum nurse staffing regulations. *Review of Economics and Statistics*, 96(1):92–102.
Merton, R. K. (1968). The Matthew effect in science. *Science*, 159(3810):56–63.
Michaillat, P. and Saez, E. (2015). Aggregate demand, idle time, and unemployment. *Quarterly Journal of Economics*, 130(2):507–569.
Naidu, S., Posner, E. A., and Weyl, G. (2018). Antitrust remedies for labor market power. *Harvard Law Review*, 132:536.
Peters, M. (2020). Heterogeneous markups, growth, and endogenous misallocation. *Econometrica*, 88(5):2037–2073.
Petrongolo, B. and Pissarides, C. A. (2001). Looking into the black box: A survey of the matching function. *Journal of Economic Literature*, 39(2):390–431.
Qiu, Y. and Sojourner, A. (2019). Labor-market concentration and labor compensation. *Available at SSRN 3312197*.
Ransom, M. and Sims, D. (2010). Estimating the firm’s labor supply curve in a “new monopsony” framework: School teachers in Missouri. *Journal of Labor Economics*, 28:331–355.
Şahin, A., Kitao, S., Cororaton, A., and Laiu, S. (2011). Why small businesses were hit harder by the recent recession. *Current Issues in Economics and Finance*, 17(4):1–7.
Salgado, S., Guvenen, F., and Bloom, N. (2019). Skewed business cycles. Working Paper 26565, National Bureau of Economic Research.
Shimer, R. (2005). The cyclical behavior of equilibrium unemployment and vacancies. *American Economic Review*, 95:25–49.
Syverson, C. (2011). What determines productivity? *Journal of Economic Literature*, 49(2):326–65.
Taschereau-Dumouchel, M. and Schaal, E. (2015). Coordinating business cycles. 2015 meeting papers 178, Society for Economic Dynamics.
Thomas, C. and Zanetti, F. (2009). Labor market reform and price stability: An application to the Euro area. *Journal of Monetary Economics*, 56(6):885–899.
Unger, R. M. (2019). *The Knowledge Economy*. Verso Books.
Weitzman, M. (1982). Increasing returns and the foundations of unemployment theory. *Economic Journal*, 92(368):787–804.
Wicksell, K. (1934). *Lectures on Political Economy*. Macmillan Company.
Wu, L. (2019). Partially directed search in the labor market. *University of Chicago, mimeo*.
Appendix
Proof of Proposition 1
In the deterministic steady state (DSS), *ceteris paribus*, the wage decreases with the firm’s vacancy share ($s_j$) and increases with the probability of forgiveness ($\tilde{\delta}$).
Proof. We begin our proof by showing that the ex-ante value of employment \(W_j\) decreases with the firm’s vacancy share, \(s_j\), and it increases with the probability of forgiveness, \(\tilde{\delta}\). We denote the total surplus in a labor market without labor market power as:
\[LTS^*_j = W_j - U + J_j - X_j,\]
so that the following equality holds:
\[LTS_j = LTS^*_j + U - \widetilde{U}_j.\]
Equation (26) implies that:
\[W_j = \widetilde{U}_j + (1 - \tilde{\tau}) \, LTS_j,\]
or, equivalently,
\[W_j - U = (1 - \tilde{\tau}) \, LTS^*_j - \tilde{\tau} \left( U - \widetilde{U}_j \right).\]
(43)
Equation (18) entails that:
\[U - \widetilde{U}_j = \Gamma \left( s_j, \tilde{\delta} \right) (W_j - U)\]
(44)
with
\[\Gamma \left( s_j, \tilde{\delta} \right) = \frac{\left( 1 - \tilde{\delta} \right) \beta s_j p^u p^n}{1 - \beta (1 - p^u + s_j p^u p^n) \left( 1 - \tilde{\delta} \right)}.\]
(45)
Notice that \(\partial \Gamma / \partial s_j > 0\) and \(\partial \Gamma / \partial \tilde{\delta} < 0\).
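These signs can be confirmed numerically, using the calibrated $\beta$, $p^u$, and $p^n$ and treating $\Gamma$ as a function of $s_j$ and $\tilde{\delta}$ alone (the values of $s_j$ are illustrative):

```python
# Sign check for the partial derivatives of Gamma(s_j, delta_tilde).
beta, p_u, p_n = 0.987, 0.7, 0.54

def Gamma(s_j, delta_tilde):
    num = (1 - delta_tilde) * beta * s_j * p_u * p_n
    den = 1 - beta * (1 - p_u + s_j * p_u * p_n) * (1 - delta_tilde)
    return num / den

assert Gamma(0.2, 0.51) > Gamma(0.1, 0.51)   # dGamma/ds_j > 0
assert Gamma(0.1, 0.75) < Gamma(0.1, 0.51)   # dGamma/ddelta_tilde < 0
```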
Substituting equation (44) into equation (43) yields the following value of employment:
\[W_j = U + \frac{1 - \tilde{\tau}}{1 + \tilde{\tau} \Gamma \left( s_j, \tilde{\delta} \right)} \, LTS^*_j,\]
(46)
which implies that \(W_j\) decreases with \(s_j\), and it increases with \(\tilde{\delta}\). Since changes in \(s_j\) or \(\tilde{\delta}\) determine the split of the total surplus between firms and workers, they involve a variation in \(\Gamma(s_j, \tilde{\delta})\), and do not have a first-order effect on the value of \(U\) and \(LTS^*_j\).
Next, we show that the current period wage, \(w_j\), decreases with \(s_j\) and increases with \(\tilde{\delta}\). Equation (19) implies that:
\[W_j = w_j + \beta \left[ (1 - \delta - (1 - \pi_j) \chi) \, W_j + (\delta + (1 - \pi_j) \chi) \, U \right],\]
or:
\[ w_j = (1 - \beta) W_j + \beta (\delta + (1 - \pi_j) \chi) (W_j - U), \]
(47)
which shows that \( w_j \) strictly increases with \( W_j \). Therefore, \( w_j \) decreases with \( s_j \) and increases with \( \tilde{\delta} \).
\[\square\]
**Proof of Proposition 2**
In the DSS, *ceteris paribus*, firm \( I \)'s search effort increases with the firm’s vacancy share \( s_j \), and it decreases with the probability of forgiveness \( \tilde{\delta} \).
*Proof.* We begin our proof by showing that the value of a firm matched to a worker (\( J_j \)) increases with \( s_j \) and decreases with \( \tilde{\delta} \).
Equation (21) implies:
\[ X_j = \alpha_{XJ} J_j, \]
(48)
where \( \alpha_{XJ} = \frac{\beta p^n}{1 - \beta (1 - p^n - \chi)} < 1 \). We rewrite equation (27) as:
\[ (1 - \alpha_{XJ}) J_j = \tilde{\tau} LTS_j, \]
or, equivalently:
\[ J_j = \frac{\tilde{\tau}}{1 - \alpha_{XJ}} \left( LTS_j^* + U - \tilde{U}_j \right). \]
(49)
Substituting equations (44) and (46) into equation (49) yields:
\[ J_j = \frac{\tilde{\tau}}{1 - \alpha_{XJ}} \cdot \frac{1 + \Gamma \left( s_j, \tilde{\delta} \right)}{1 + \tilde{\tau} \Gamma \left( s_j, \tilde{\delta} \right)} \cdot LTS_j^*, \]
(50)
where \( \Gamma(s_j, \tilde{\delta}) \) is defined by equation (45). Equation (50) implies that \( J_j \) increases with \( \Gamma(\cdot) \). Since \( \partial \Gamma / \partial s_j > 0 \) and \( \partial \Gamma / \partial \tilde{\delta} < 0 \), \( J_j \) increases with \( s_j \), and decreases with \( \tilde{\delta} \). Consequently, equation (48) implies that \( X_j \) increases with \( s_j \) and decreases with \( \tilde{\delta} \). From equations (23) and (24), it is straightforward to show that \( X_j^I = X_j / 2 + \tilde{J}_j^I / 2 \), which implies that \( X_j^I \) increases with \( X_j \), and it thus increases with \( s_j \) and decreases with \( \tilde{\delta} \).
Next, we show that \( \Delta J_j^I = X_j^I - (1 - \chi) \tilde{J}_j^I \) increases with \( X_j^I \), and, thus, it increases with \( s_j \) and decreases with \( \tilde{\delta} \). We prove \( d \Delta J_j^I / d X_j^I > 0 \) in two steps.
In the first step, we show that \( \tilde{J}_j^I \) increases with \( X_j^I \). Specifically, denoting the optimal search effort by \( \sigma^* \), and expressing \( \tilde{J}_j^I \) and \( \sigma^* \) as functions of \( X_j^I \), we re-write equation (30) in the DSS as:
\[ \tilde{J}_j^I (X_j^I) = -c \left( \sigma^* (X_j^I) \right) + \beta \left[ \pi_j^I \left( \sigma^* (X_j^I) \right) \cdot X_j^I + \left( 1 - \pi_j^I \left( \sigma^* (X_j^I) \right) \right) \cdot \tilde{J}_j^I (X_j^I) \right], \]
(51)
which we solve explicitly for $\tilde{J}_j^I(X_j^I)$:
\[
\tilde{J}_j^I(X_j^I) = \frac{\beta \pi_j^I(\sigma^*(X_j^I)) \cdot X_j^I - c(\sigma^*(X_j^I))}{1 - \beta(1 - \pi_j^I(\sigma^*(X_j^I)))}.
\] (52)
Increasing $X_j^I$ by $\Delta$ gives:
\[
\tilde{J}_j^I(X_j^I + \Delta) = -c(\sigma^*(X_j^I + \Delta)) +
\beta \left[ \pi_j^I(\sigma^*(X_j^I + \Delta)) \cdot (X_j^I + \Delta) + (1 - \pi_j^I(\sigma^*(X_j^I + \Delta))) \cdot \tilde{J}_j^I(X_j^I + \Delta) \right]
> -c(\sigma^*(X_j^I)) + \beta \left[ \pi_j^I(\sigma^*(X_j^I)) \cdot (X_j^I + \Delta) + (1 - \pi_j^I(\sigma^*(X_j^I))) \tilde{J}_j^I(X_j^I + \Delta) \right],
\] (53)
where the inequality holds because $\sigma^*(X_j^I)$ is feasible but generally suboptimal at $X_j^I + \Delta$. Rearranging implies:
\[
\tilde{J}_j^I(X_j^I + \Delta) > \frac{\beta \pi_j^I(\sigma^*(X_j^I)) \cdot (X_j^I + \Delta) - c(\sigma^*(X_j^I))}{1 - \beta(1 - \pi_j^I(\sigma^*(X_j^I)))}.
\] (54)
Comparing equation (54) with equation (52) shows that:
\[
\tilde{J}_j^I(X_j^I + \Delta) > \tilde{J}_j^I(X_j^I),
\]
clearly implying that $\tilde{J}_j^I$ increases with $X_j^I$.
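The monotonicity argument in equations (51)–(54) can be illustrated with a small numerical sketch. The functional forms below (quadratic cost, exponential matching probability) and all parameter values are assumptions for illustration only, not the model’s specification:

```python
import math

# Sketch of the fixed-point argument in (51)-(54): the value
#   Jt(X) = max_sigma { -c(sigma) + beta * [ pi(sigma)*X + (1 - pi(sigma))*Jt(X) ] }
# is increasing in the match value X.
beta = 0.96

def cost(sig):
    return 0.5 * sig ** 2          # assumed convex search cost

def prob(sig):
    return 1.0 - math.exp(-sig)    # assumed matching probability in [0, 1)

def J_tilde(X, n_grid=301, iters=500):
    """Solve the Bellman fixed point by value iteration over a sigma grid."""
    sigmas = [3.0 * i / (n_grid - 1) for i in range(n_grid)]
    J = 0.0
    for _ in range(iters):
        J = max(-cost(s) + beta * (prob(s) * X + (1 - prob(s)) * J)
                for s in sigmas)
    return J

# A more valuable match raises the value of searching for it, as in (54).
assert J_tilde(2.0) > J_tilde(1.0) > J_tilde(0.5) > 0
```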
In the second step, we show that $\Delta J_j^I$ increases with $X_j^I$. From equation (30), we have that:
\[
\tilde{J}_j^I = \frac{\beta \pi(\sigma_j) \Delta J_j^I - c(\sigma_j)}{1 - \beta(1 - \chi)}.
\] (55)
We denote the numerator of equation (55) by $G(X_j^I) = \beta \pi(\sigma_j(X_j^I)) \Delta J_j^I(X_j^I) - c(\sigma_j(X_j^I))$, treating $\sigma_j$ and $\Delta J_j^I$ as functions of $X_j^I$. Since $\partial \tilde{J}_j^I / \partial X_j^I > 0$ and the denominator of equation (55) does not depend on $X_j^I$, the numerator must be increasing, that is:
\[
G'(X_j^I) = \beta \pi' \frac{d \sigma_j}{d X_j^I} \Delta J_j^I + \beta \pi(\sigma_j(X_j^I)) \frac{d \Delta J_j^I}{d X_j^I} - c' \frac{d \sigma_j}{d X_j^I} > 0.
\] (56)
The optimality condition for firm $I$’s problem (equation (30)) implies that:
\[
\beta \pi' \Delta J_j^I - c'(\sigma_j) = 0,
\] (57)
and substituting equation (57) into inequality (56), the terms multiplying $d\sigma_j / dX_j^I$ cancel, which yields:
\[
\frac{d \Delta J_j^I}{d X_j^I} > 0,
\] (58)
which shows that $\Delta J_j^I$ increases with $X_j^I$, and consequently increases with $s_j$ and decreases with $\tilde{\delta}$. Using these findings in equation (33), firm $I$’s search effort increases with the firm’s vacancy share $s_j$ and decreases with the probability of forgiveness $\tilde{\delta}$. □
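As an illustration of the comparative static delivered by the first-order condition (57), the sketch below solves $\beta \pi'(\sigma) \Delta J_j^I = c'(\sigma)$ under assumed functional forms (placeholders, not the paper’s) and confirms that optimal effort rises with the continuation gain:

```python
import math

# Comparative static from the first-order condition (57):
#   beta * pi'(sigma) * DeltaJ = c'(sigma).
# With assumed forms pi(sigma) = 1 - exp(-sigma) and c(sigma) = sigma^2 / 2,
# the FOC becomes beta * exp(-sigma) * DeltaJ = sigma, with a unique root.
beta = 0.96

def sigma_star(delta_J, tol=1e-12):
    """Solve beta * exp(-sigma) * delta_J = sigma by bisection."""
    lo, hi = 0.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if beta * math.exp(-mid) * delta_J > mid:
            lo = mid  # marginal benefit still exceeds marginal cost
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A larger continuation gain (e.g. from a larger vacancy share s_j) induces
# more search effort; a higher forgiveness probability, by lowering the
# continuation gain, induces less.
assert sigma_star(2.0) > sigma_star(1.0) > sigma_star(0.5)
```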
THE 2006-8 NATIONAL REPORTS ON STRATEGIES FOR SOCIAL PROTECTION AND SOCIAL INCLUSION
WHAT DO THEY DELIVER FOR PEOPLE IN POVERTY?
REPORT AND KEY MESSAGES BY THE EUROPEAN ANTI-POVERTY NETWORK
December 2006
# TABLE OF CONTENTS
INTRODUCTION
CHAPTER 1: THE POLICY CONTENT OF THE NATIONAL ACTION PLANS: AN EAPN ASSESSMENT
1.1 What priorities are most frequently identified?
1.2 Will the measures have a major impact on eradicating poverty?
1.3 Other challenges and policy measures that need to be addressed
CHAPTER 2: DEVELOPMENTS IN GOVERNANCE
2.1 Co-ordination and implementation mechanisms and tools
2.2 Governance and the role of stakeholders
2.3 Impact of engagement in the process – the experience of EAPN networks
CHAPTER 3: EAPN’S KEY MESSAGES AND PROPOSALS ON THE OMC ON SOCIAL PROTECTION AND SOCIAL INCLUSION
Key messages
EAPN proposals to contribute to the success of policies to fight poverty in the EU
The content of this report was developed thanks to the contribution of the members of the Review Group of EAPN, and in particular Sergio Aires, Werner Binnenstein-Bachstein, Viktorija Daugytė, Katherine Duffy, Liz Gosme, Bruno Grouès, Gunvi Haggren, Robin Hanan, Ludo Horemans, Saskia Jung, Maciej Kucharczyk, Slavka Macakova, Kamila Plowiec, Riitta Särkelä, Alida Smeekes, Vito Telesca, Robert Urbé, Sarah Welford, Dag Westerheim.
EAPN would like to thank Katherine Duffy for her work in writing this report.
INTRODUCTION
From its launch in 2000, the EU social inclusion strategy has been a key concern of the European Anti Poverty Network (EAPN). EAPN members at national and European level have been active in supporting and working within the context of the strategy. EAPN has produced key reports evaluating the National Action Plans on social inclusion (NAPs/incl)\(^1\).
After five years (2001-2006) the strategy has been ‘streamlined’ at European level, so that the social inclusion, pensions and health strategies are now integrated as three parts of a single National Strategy Report on Social Protection and Social Inclusion (hereafter referred to as the National Reports). Streamlined reports were submitted this year for the period 2006-2008. At national level some Member States have retained distinct NAPs/incl and some have not. In either case, national EAPN networks have actively participated in the national processes.
This report presents EAPN’s impressions of the impact of the streamlined process on the fight against poverty. The central focus of this report is on the NAPs/incl integrated in the National Reports. EAPN’s report is based on:
- A brief review of the Member States’ National Reports for 2006-8, and in particular the NAPs/incl therein.
- EAPN national networks’ and member European organisations’ responses to an EAPN questionnaire. Responses covered the content of the inclusion chapters of the National Reports, the engagement in the national inclusion process of social Non-Governmental Organisations (NGOs) and people experiencing poverty and the impact of streamlining with regard to the overall Lisbon strategy on the social inclusion content of the National Reports.
- Discussions in a meeting of the EAPN Executive Committee (5 September) and in an EAPN Round Table on social inclusion organised by the EAPN Social Inclusion Review Group (Vilnius, 25 November).
The report has three parts:
1. EAPN’s views on the policy content of the Inclusion chapters of the National Reports.
2. EAPN’s views on developments in governance.
3. EAPN’s key messages and proposals for strengthening the streamlined Open Method of Coordination (OMC) that is the framework for the National Reports.
---
\(^1\) Duffy, K (2003) ‘National Action plans on inclusion 2003-5: where is the political energy? EAPN response to the second round of National Action Plans’, Brussels, EAPN.
Duffy, K and Jeliazkova, M (2005) ‘Back to the Future: the Implementation Reports on the National Action Plans on Social Inclusion – an EAPN assessment’, Brussels, EAPN, October.
CHAPTER 1: THE POLICY CONTENT OF THE NATIONAL ACTION PLANS: AN EAPN ASSESSMENT
The Open Method of Coordination on social protection and social inclusion, through the development of the National Plans in particular, has allowed for a sense of continuity in the EU’s struggle to combat poverty.
The new European guidelines for the National Reports, adopted at the beginning of 2006, have not only set a framework for the drafting of the reports; they also point to some key challenges which Member States are recommended to address, and as a result the National Reports often reflect these priorities. This chapter looks at the priorities identified by Member States and analyses their relevance and adequacy in addressing the overall picture of poverty today.
EAPN argues that although these priorities should indeed be urgently addressed, with emphasis placed on the implementation of concrete measures, they should not hinder developments with regard to other concerns, be they broader or more targeted, nor should they reduce the fight against poverty to a limited field of action.
Despite the continuity provided by the OMC, trends in inequality, poverty and exclusion at EU level show relatively little change over recent years. Some countries indicate declines (Cyprus in inequality, Lithuania in poverty), and the UK indicates that child poverty has fallen to the EU average. However, in many states there has been little change overall, or even an increase in the risk of poverty, especially for children, the long-term unemployed, and migrants and ethnic minorities. The recognition of insufficient impact on the figures and, most importantly, on tangible change in the lives of people experiencing poverty should be the primary concern and the driving force of the Open Method of Coordination.
1.1 What priorities are most frequently identified?
Member States were asked to focus their NAPs/incl on a limited set of priorities only, and also to take into account the priorities previously identified at European level. Caritas Europa, also a member organisation of EAPN, analysed the priorities at headline level in the social inclusion chapters of the National Reports.\(^2\) The analysis is based on all 27 reports. Grouped as below, the themes and challenges at headline level are ranked from most to least common in the reports. It should be noted, first, that reference to groups may be made below headline level; for example, measures for people with a disability are commonly referred to in the reports, often below headline level. Second, headline reference does not necessarily correspond with the weight or extent of the measures.
\(^2\) Preliminary report by Caritas Europa on the National Reports on SPSI sent to the Commission 1 December.
**Unemployment/ labour market integration (24)**
Long term unemployment and / or inactivity and low skill are often addressed. The cause of unemployment is usually identified as structural mismatch between labour market demand for high skills and the supply of low skilled labour market participants. There is nonetheless little attention to adequate replacement income to support people in this position.
**Better governance, participation (18)**
This can be seen as an acknowledgement of developments in governance through the past experience of the OMC particularly on social inclusion, and a need to continue and deepen this approach, as well as better implement the new streamlined method.
**Child poverty/ families/ intergenerational poverty (17)**
Child poverty is commonly reported as higher than adult poverty. This higher risk may suggest that the presence of children results in poverty in households that would not otherwise be poor – implying therefore that the additional costs of rearing children are not fully covered either by salaries or by benefit incomes. Low birth rates (e.g. Germany, Italy, Estonia, Poland) and even negative population growth (in some new Member States) are identified as a problem. However, the link is not made to child or family poverty or to the costs (or lost employment opportunities) of child rearing. Also, there is no specific focus on large families and single-parent families.
**Education, vocational training (15)**
The main concern expressed in the National Reports is school drop-out and its impact on labour market position.
**Integration of migrants, minority ethnic groups, trafficking (11)**
In some reports it is not clear whether the term ‘migrant’ is used also to cover minority ethnic groups (Roma, for example) who are nationals of the state they live in. Poverty is usually identified as due to weak access to the labour market and to support services, either by law or because of language barriers. There seems to be a policy gap between addressing migrant integration into the labour market and combating poverty among migrants and promoting social integration. The situation of asylum seekers and refugees is little addressed, despite severe risks of poverty and the inadmissibility of legal employment for many of them. Where data are collected separately by ethnic origin, it is clear that some minorities are at much greater risk of poverty than the majority ethnic group. In the UK, Pakistani/ Bangladeshi origin households with someone in employment are poorer than white households with no-one in employment. Roma, numbering 8 million in the EU, are identified as in severe poverty in some new Member States. The Estonian national report mentions the severe concentration amongst ‘non-Estonians’ of unemployment and regionally concentrated poverty.
**Access to/ equality of services (11)**
As part of the ‘active inclusion’ approach, access to and delivery of services are addressed in the Reports.
**Elderly, dependency (7)**
This priority, raised in the NAPs/incl, also draws attention to the fact that the Pensions sections in the Reports focus mainly on the financial sustainability of the system and reforms to the retirement age.
**Social housing, homelessness (6)**
EAPN member FEANTSA\(^3\) in its report on the 2006-2008 National Reports noted that 15 Member States identified homelessness and housing exclusion as a priority in their NAP/incl, as
---
\(^3\) FEANTSA (2006). *Homelessness and housing exclusion in the National Strategy Reports on Social Protection and Social Inclusion (NSI)*. FEANTSA Evaluation and Recommendations; Brussels, FEANTSA, November.
well as a smaller number which treat it also in their pensions or health reports. There appears to be more policy attention, and more reporting than previously, on homelessness and housing exclusion.
**Participation of people with disabilities (4)**
Almost all Member States’ Reports refer to people with a disability, most often in reference to weak position on the labour market.
Whatever the trends in poverty, the groups identified as at additional risk, and therefore included in the ‘priority’ list, are more or less the same in every Member State. The main driver identified is the same as in each of the previous rounds of the NAP process: a weak position in relation to the labour market. These groups face additional barriers to insertion, such as the decline in the number of jobs requiring few or no qualifications, discrimination, caring responsibilities, location and language barriers.
EAPN networks broadly agree with the identified priorities, but feel there are some major gaps in most reports. Key groups often in severe poverty and relatively neglected in many National Reports are black and minority ethnic groups, vulnerable adults of working age without children and asylum seekers.
In addition, particular national networks identified specific concerns. For example, the Austrian network regrets the lack of focus on asylum seekers and migrants and also lone parents. In the case of Portugal, the network regrets the lack of reference to ethnic minorities and particularly the Roma. The Luxembourg network sees too much focus on labour market risk and believes that for large families and single-parent families there are other risks that are not addressed. The Polish network feels there is not sufficient focus on specific target groups such as large families and young people. The UK network notes the absence of asylum seekers from the groups at risk, despite the severe poverty especially of those whose claims have been rejected. The same is true in Malta, where mental health and disability are also absent from the report.
Overall, national networks are more satisfied with the analysis of groups at risk than with the priorities and measures taken to address their situation.
EAPN networks are concerned with the thinking behind the measures for the priorities selected. In particular, EAPN wishes to highlight the lack of a clear focus on poverty as such, as opposed to unemployment or poverty of certain specific groups only. An analysis of some of the key priorities and of the areas lacking attention is given below.
### 1.1.1 The top priority in the National Reports: structural unemployment, low skill and the ‘active inclusion’ solution
**Structural unemployment**
Most National Reports admit severe structural problems in matching supply and demand in the labour market. The way this problem manifests itself as unemployment and/or inactivity varies with the Member State and its labour market regulation. Bulgaria, Estonia and Germany are examples of Member States which identify high unemployment as the key problem. Latvia, Hungary and the UK have lower registered unemployment but relatively high levels of inactivity.
Globalisation and its impact on the size of the labour pool of low-skilled people is identified as a key driver of risk of poverty in the EU. Yet the solution offered for every group at risk of poverty, except small children and older retired people, is labour market insertion or pre-insertion measures. The Hungarian National Report states that ‘Actions… mainly focus on target groups who… do not have a chance to join the competition’ (p16-17).
The majority of major measures identified in the social Inclusion chapter are in support of labour market integration – it is the main focus of reforms to education and welfare as well as training measures. For example, the Luxembourg National Report states that schools are ‘shifting from paradigm of reproduction of rote learning to acquisition of skills’ … ‘Core competences will be decided for various levels’ (p21).
Government focus on upgrading skills is necessary but not sufficient: lower-skilled jobs have been hardest hit by global competition, but many higher-level skills can be substituted by lower-cost labour too. Governments will have to run a hard policy race just to stand still, with a risk of heavy negative impact on the quality of life.
The consistency between the NRPs and the National Reports is likely to be seen as positive by governments and by the European Commission. It is evident that the social processes are now inside, and subservient to, the orbit of the economic processes – presumably the real aim of the Lisbon reform. For example, many National Reports refer in their opening statements to the goal of greater competitiveness – an aim without apparent limits – but cohesion is expressed as a qualified aim: the Danish national report, for example, aims at ‘not too much’ lack of cohesion (p 7). There is a risk of a substitution of means for ends that can undermine the values that frame the European approach to social inclusion.
With regard to specific labour market integration measures, there is a widespread concern to prevent school drop-out, but there is less emphasis on life long learning to support job change through the anticipated long working life.
The Structural Funds are commonly referred to as being used to support integration measures. It is not clear from the method of reporting whether the resources are adequate to the challenge. This is a charge that can be applied throughout most of the National Reports. Moreover networks are concerned that the link between the social inclusion strategy and the new period of Structural Funds is not made.
The German National Report is one that addresses the EAPN networks’ comment that jobs are not the only route to integration. It suggests a more multi-dimensional approach to integration, at least for young people, referring to ‘expertise agencies’ for work and social integration of youth in deprived areas (p24).
There is a disappointing lack of measures to support the social economy, which is referred to in very few reports.
**Labour market activation as the route out of poverty?**
EAPN networks agree that *quality* jobs are essential to combating poverty. But they are concerned at the dominance of supply side labour market measures in the content of the National Reports and fear that streamlining is emptying the content out of the social inclusion strand. EAPN’s concerns are expressed in the following comments by members:
- Labour market measures are not the only way to get people into jobs
- Jobs are not the only route out of poverty and not a guarantee against poverty
- Other measures for other people are required
- What does it mean that the language has moved from social inclusion to active inclusion?
EAPN is concerned with the shift in language from poverty to ‘active inclusion’. Fighting poverty should not be reduced to this narrow interpretation which does not fit all situations.
A combination of the threat to jobs arising from globalisation and the potential for ‘moral hazard’ from living mainly on social benefits are the key arguments made by governments for their central focus on activation measures. The main measures focus on strengthening work incentives in the benefits system through decreasing the amount of some benefits (e.g. Germany) and increasing elements of compulsion for risk groups (such as people with a disability and single parents) in those countries which have formerly relied more on voluntary approaches to employment integration. The Danish National Report refers to the incentive of public debt remission for people who get a job (p 21).
Yet, what kind of labour market is on offer for those ‘activated’? Conditions for those at the bottom end of the labour market seem to be deteriorating everywhere. Networks in countries as otherwise different as Estonia, Portugal and the Netherlands report that ‘wages are flat’ and prices of basic goods and utilities are rising. But in-work poverty is not well addressed. Some National Reports refer to minimum wages – but do not address whether the level, or the increase, is adequate to prevent poverty or keep a family. The Slovakian network reports very poor employer practices by direct foreign investors from the west European car and supermarket sectors. For example, one very well known supermarket chain is reported to allow just three ten-minute breaks in eight hours. Regression in working conditions is widely reported by networks. This is particularly concerning at a time when many Member States’ reports indicate an increasing trend to compulsion in the activation system. The Finnish National Report is one of the few to state that ‘good working conditions improve productivity’ (p27).
**Will Member States put a floor under the labour market?**
EAPN networks see little evidence that activation measures increase the total quantity of good jobs. In countries as diverse as Ireland and Slovakia, GDP is rising, but as the examples above indicate, some of those in work are not sharing in the wealth for which they are paying.
Member States are commonly concerned about the poverty risks in large family and lone-parent households. A rise in women’s labour market participation is seen as the main way to prevent poverty in these households. However, it should be noted that there are few differences in the participation rates of women and men in Lithuania, and indeed women display higher average levels of education, yet women experience a greater risk of poverty than men, for instance by still earning less than men on average. If one salary cannot keep a family out of poverty, what will happen when two salaries cannot? The challenge raised in this question from the UK network is not addressed in the National Reports.
**Do governments have a strategy for people for whom the open labour market is not a realistic option?**
The lack of absorptive capacity identified by the Finnish national report is not addressed. Those who are last in the queue may be long term on the margins of the labour market – or repeatedly churned in and out of it as the government and employers shuffle the queue through training and reduced commitment to long-term relationships with employees. Groups who are not a priority for governments’ labour market participation targets risk relative neglect. These include poor retired people and many vulnerable adults of working age, particularly those with lower professional or educational skills. A hierarchy of poverty may be reinforced – but everyone gets only one life and has an equal right to live it well. In these circumstances, societies’ collective responsibility to ensure opportunities for a decent life and social participation must include recognition of the dignity of all workers whatever their jobs, expansion of the social economy as a key sector for social inclusion, the creation of new labour market opportunities, and a rethink of the money value of minimum wages and income support.
### 1.1.2 Child poverty
Child poverty is the second of the two most common key priorities addressed in the National Reports. In many Member States this is a new policy focus (as distinct from family policy). A focus on child poverty – which in almost all Member States is higher than adult poverty – has the potential to enable us to judge the real impact of policies to combat poverty. EAPN networks support strong action to combat child poverty, but they are concerned about the direction of the approach in the National Reports, and particularly about losing the focus on the overall objective of the eradication of poverty. There is the potential to narrow the concept of poverty in ways actually unhelpful to combating child poverty. Their concerns about the approach taken can be summarised in the following comments from EAPN members:
- Agree with the topic but not the tools.
- Does it undermine the universal welfare state?
- You don’t get many rich children in poor families
- Is the implication that children deserve more but their parents do not?
- Children’s behaviour is not the cause of poverty
- There is no such thing as a ‘genetic poverty’
First, many networks are disappointed at the modest ambition of the targets. For example, the Austrian network notes that ‘For the first time at least a target was set – to reduce child poverty from 15% to 10% in 10 years – so not by 2010!’ The UK network notes that the government has made significant steps, but admits it has narrowly missed its child poverty target. In many Member States adult poverty seems to be addressed simply as an issue of unemployment and low skills and an offer of opportunities. Parental labour market integration can protect some children in work-poor families, but poor children need to be guaranteed outcomes regardless of parental position on the labour market. Moreover, the German network notes that the lack of measures such as adequacy of
minimum income and poverty-proof wages in Germany reflect overarching challenges about the operation of the labour market that are not addressed.
**A children’s rights based approach would ensure that prevention of poverty was central to measures.** Even where Member States have a strong preventative approach there are gaps which mean there is not a guarantee of access to adequate income and health and other services. For example, children of some migrant groups, ethnic minorities, refugees and asylum seekers do not have equal access to these services.
Overall, EAPN networks welcome the increased concern with combating child poverty but there is a certain distrust of the thinking behind this priority. There are obvious risks of stigmatisation and failure from uncoordinated interventions based on a weak evidential base and a neglect of environmental factors especially income inequality and housing segregation. **Networks want to see more attention paid to financial support, good quality affordable services and children’s rights.**
**Childcare or child welfare?**
The main instrument for increasing women’s labour market participation is an expansion of childcare. Many states are still a very long way from universal provision of quality affordable childcare.
Most states aim to increase access to day-care and to make it more flexible. What concerns EAPN is the lack of focus on the experience of the children of poor parents integrated into a ‘flexible’ labour market. For many such parents, hours will be long and variable, work hard and insecure and pay low and variable even where there are government top-ups. Despite the consequences for children, both of their parents’ employment conditions and their access to good quality affordable childcare, the National Reports do not focus on this dimension of child poverty. Whatever governments’ intentions or practice, none deem it sufficiently important to report an assessment of the potential impact on children’s health and well-being and very few indicate a fully elaborated strategy to ensure quality of care and child development.
Further, the wider social consequences of adapting family life to working culture through changes to childcare provision – including long hours and variable hours - have not been fully taken into account.
Higher child poverty and pensioner poverty indicate that labour market participation during working life is not a guarantee against poverty. It appears that open markets have increased labour market competition and are pushing wages below the costs of family reproduction over the lifecycle. There are major social cohesion implications from this trend and the problem of family formation and stability must be addressed now.
**Inter-generational poverty**
An emerging trend in the National Reports is a strong emphasis on ‘breaking the intergenerational cycle of poverty’. Measures most commonly involve early education and social work intervention in ‘problem’ families. EAPN networks welcome the additional focus on the most disadvantaged children and families. But they are concerned that, as for adults, poverty is becoming problematised in behavioural terms and that measures reflect this conception of poverty. The Danish report refers to a new law on parental responsibility and parent programmes for *insecure and resource weak parents* (p 9-) as well as a new plan to target 10-14 year old offenders. A pilot health-led intervention targeted on very young children is planned in the UK. The UK network is highly concerned that old arguments about genetic risks
are underpinning emerging approaches to the poorest households, many of them multiply deprived. The Maltese National Report is one of the few to state that it is ‘aiming to introduce measures for youth at risk… while recognising and taking the necessary measures to exclude harmful, indiscriminate practices of early identification programmes’ (p22). Protecting ‘looked after’ children (children living in social care homes) from the severe risk of poverty and mental ill-health they face in many Member States would be much easier if the strategy and expertise of the best performing countries were transferred to the worst, but such children receive little attention in the National Reports.
**Financial support for families**
A number of Member States have increased family focused benefits. In the UK a substantial shift of income to families has focused on ‘tax credit’ top-ups to low pay. However this approach has benefited most those families closer to median income. Workless households and those on the lowest incomes have not gained. Some new Member States such as Latvia and Estonia are reforming family benefits, but from a low base not only in terms of level but coverage. In Estonia some payments for children stop when children are three years old and this is reflected in differential poverty rates for families with children of different ages. Further, the Estonian network is one of many who feel support is too focused on subsidy for ‘childcare’ rather than child welfare. Other networks, including those in Poland and Germany, believe that additional family benefits are more focused on incentives to increase birth rates than on combating family poverty.
### 1.1.3 Access to services
Public services support people to change and during change. High quality accessible services are central to social cohesion. However, there are particular challenges for the poorer Member States. In Bulgaria, for example, the National Report notes that access to services is guaranteed by law but that implementation for the most vulnerable groups is difficult. There is a lack of community-based services and networks – and therefore the state cannot deliver services.
**Geographic disparity in access to services**
Many National Reports identify problems of equal access to quality services – for poor and rural people, minorities and people with a disability. The most common reference to access problems concerns combating regional disparities in services, especially those between rural and urban and areas of industrial decline. Regional differences in services and problems of access for the vulnerable are identified in both richer Member States such as Finland and poorer Member States such as Lithuania. However the scale and depth of the problem are clearly very different. In Lithuania ‘Services, especially social services, intended for the most vulnerable population groups, are underdeveloped in Lithuania so far….social services in Lithuania may only be provided to 50 persons per 10000 inhabitants’. (p14)
Networks report that the problem of geographic disparity in service provision (known in the UK as the ‘post-code lottery’) seems to be getting worse. In Portugal the increased centralisation of social services runs the risk of promoting more exclusion and inequalities. Further, increasing inequality in the labour market and therefore in financial resources, combined with
user charges, inhibit access to services for poor people. Uncoordinated and producer-oriented services are difficult for the vulnerable to navigate. Trends towards pluralism in providers of services may increase not reduce inequality in service provision. Yet, the Inclusion chapters in the National Reports do not address the potential impact on area-based disparity of the proposed Services Directive and the current ‘breaking up’ of public services in some Member States. That the National Reports do not address this potential impact of planned changes in the ownership and delivery mechanisms for services indicates that they are not being used as a planning tool, as was their intended role.
**Service delivery**
The relationship between central government control and service delivery is being reformed in many Member States, but funding at local level remains a problem for service delivery everywhere but especially in new Member States.
In employment services and in social services, there is a pronounced emphasis on individual ‘case management’ approaches for effective service delivery, especially for child poverty and labour market activation. There is also a multi-agency approach identified in some reports. However the extent of this approach is variable depending on the risk group – for example in the UK there is much more development of a multi-agency approach for children at risk than there is for multiply deprived and vulnerable adults.
**The OMC and in particular the inclusion process could do much more to embrace the expertise of local government and NGOs concerning service offer and delivery.** Instead, at this moment, the handling of the Services Directive has disrupted the relationship building that would assist in promoting best practice in service provision.
Concerning services for specific groups, people with a disability are those for whom specific measures are most often identified. The main measures concern labour market integration and access to social and health services.
Improving access to housing and housing services are widely reported and also service developments for homeless people, for example in Finland and Poland.
There are references, for example in the Finnish and Maltese national reports, to services for other specific target groups, such as alcohol and substance misusers, but in general there are few other services for specific target groups which are reported.
Finally, there is often a lack of focus on the specific aspects and dimensions of the strategy and services for vulnerable and poor people. However, the German National Report refers to a preventative approach that has halved the number of homeless people.
However, absence of reporting does not mean absence of measures. But it is not clear whether absence reflects lack of new measures, lack of priority, or the constraints of the streamlined reporting process in terms of the restricted number of priorities that may be chosen, or the restricted space to report them. This is one reason it is difficult to establish how far the social inclusion chapters of the National Reports reflect the national realities.
### 1.1.4 Mainstreaming of measures for specific risk groups
The risk groups earlier identified are essentially those for whom governments have targets to raise their labour force participation as part of the strategy to achieve the 70% labour force participation rate. Measures reflect this priority. **Overall, EAPN networks do not see consistent transversal approaches for groups at greater risk**, and indeed there seems to be a loss of focus on target groups and multidimensional approaches in the 2006-8 National Reports.
The gender dimension of poverty is commonly noted in the national reports but Ireland is one of the very few to consistently address gender for each policy area. In general, gender is not systematically addressed and, where it is, the focus is on labour market integration of women. Moreover, although pensioner poverty has a strong gender dimension, EAPN member organisation AGE\(^4\) notes that this issue is not sufficiently addressed.
Measures for people with a disability are identified in almost all of the National Reports, with a focus on labour market integration. There is recognition in some reports that some people will not be able to participate in the open labour market. Some of the poorer Member States are concerned that there are inadequate resources for support and sheltered employment. Some reports refer to employer subsidies to encourage employers to hire people with a disability. The quality of the labour market integration open to people with a disability is not commonly addressed.
For migrants, the main policy focus is on language support – there is much less focus on anti-discrimination law or equality of access to services. Ireland is an example of a more holistic approach including cultural adaptation of services.
Minority groups are not commonly addressed, although examples are the Netherlands, the UK and Malta. Despite the poverty of black minorities in Portugal, the network states that they are absent from the National Report. For Roma, there are multi-dimensional measures reported - for example in Hungary and Bulgaria - and these show the positive influence of the JIMs (Joint Inclusion Memoranda) period. The Bulgarian National Report identifies a need for professional training especially for a multi ethnic environment (p17). However, in some Member States measures are only at a very early stage of implementation or resourcing (e.g. Bulgaria). Further, the Slovakian network reported that ‘the situation of Roma is very difficult - the media and the notion of poverty in the media is very bad. The middle class do not think there is poverty in Slovakia.’
## 1.2 Will the measures have a major impact on eradicating poverty?
### 1.2.1 A lack of focus on combating poverty
In general, EAPN feels that the commitment made at the highest political level to make an impact on poverty eradication has somehow slipped off the agenda as such. The change in language – from poverty to active inclusion – is not a detail, and EAPN warns against this tendency. Prioritisation can help address the implementation gap, but it has also created a situation where being poor is not sufficient to benefit from a national strategy against poverty.
Moreover, some measures mentioned remain very broad in scope and do not clearly target people in poverty. The German network is one of several that feel that measures are either insufficiently focused on poor families (education and family policy) or too limited an
\(^4\) AGE, the European Older People’s Platform (2006) Draft ‘Assessment Paper on the National Reports on Strategies on Social Protection and Social Inclusion 2006-2008 (pension report)’, Brussels, November.
approach (for example a focus on language support for migrants but insufficient measures more generally to support social integration). The French network believes that the Housing Commitment Act of 2000 is disappointing – again because it is insufficiently focused on building houses for people in poverty and in providing shelters for homeless people. The Lithuanian network refers to measures to promote higher and professional education and little focus on basic skills.
### 1.2.2 A limited analysis of the causes of poverty underlies the National Reports
It is the networks’ view that the analysis of poverty underpinning the direction of the measures focuses too much on individual behaviour and too little on structures, and structural causes of poverty. It results in an unjust division of rights and responsibilities as between the poor, the not-poor and government. There is a need to rethink the approach in a way that will look at the structural issues influencing the poverty situation today in the EU, such as adequacy of income, income inequality, the key role of social protection systems, family policies and above all access to rights and dignity for all.
### 1.2.3 The impact of the political environment
The political environment is a factor that may inhibit the implementation of the measures in some Member States – not only the political complexion of government, but the precise political situation. For example, the Belgian network suggests that measures taken up were limited to those for which there is political agreement. In a similar vein, the Finnish network suggests the impending change in government limited the measures proposed. The Swedish network says that their national strategy report is not fully adopted yet. It has good content and the NGOs are relatively satisfied with the strategic approach to poverty. However, they do not know if the new government will stand by what has been done. The Irish network reports that their government says that the ‘real’ NAP will not appear until next January: ‘everything to date is lost – and there is no discussion about what will go in next January’. The Portuguese network reports that ‘it seems we are always starting from the beginning and that previous compromises never existed’.
### 1.2.4 The adequacy of the measures to the challenge of poverty and the political good faith of governments
The majority of EAPN networks are not satisfied that measures are sufficient to the nature, scale and depth of the problems identified. However, the Swedish, Portuguese and Maltese networks are by and large satisfied with their government’s approach to the priorities, although some, notably the Portuguese network, are not satisfied with the risk groups identified. The French network feels that the measures are going in the right direction, particularly for labour market integration. While there are no real changes to measures from 2004-6, the Austrian network notes that for the first time there are concrete targets. Networks in a number of states agree with the Swedish network’s view that the current balance between general preventative solutions vis-à-vis special measures and projects is not effective in combating poverty. The Swedish network believes the measures are ‘relevant but
not sufficient’ - they wait to see concrete jobs and concrete houses. The Luxembourg network too thinks it is too early to say whether measures will be sufficient until they see the real extent of the budget. EAPN networks commonly referred to lack of transparency over concrete resources as a deficiency in the National Reports.
The Dutch and Polish networks think the measures look sufficient on paper but not in practice. In the Polish case ‘everything is on paper and in practice is not executed’. The Italian network has only just seen a draft – their National Report was very delayed. The network’s view is that ‘in Italy there are some good ideas mentioned in the report but they will never be implemented’.
These remarks recall the EAPN report on the NAP implementation reports, which referred to ‘national theatre’ rather than national action (Duffy and Jeliazkova 2005).
### 1.2.5 The risk of a narrow concept of the key priorities – active inclusion and child poverty
As the French network asked ‘why the change in the language – from social inclusion to active inclusion?’ Networks are concerned about the loss of the language of ‘eradicating poverty’ and ‘social inclusion’ – and indeed the invisibility of the 2010 goal of ‘making a decisive impact on poverty’. EAPN is concerned that the change in language – including the shift from poverty to child poverty, and from social inclusion to active inclusion, reflects both a narrowing of the concept of poverty and a shift away from a universal and preventative approach to combating poverty and promoting integration.
Networks can see a reporting advantage from a narrower focus on a small number of priorities. However, they are concerned that the encouragement to select 3 or 4 priorities, combined with the restricted number of pages, has constrained what governments can write about, so that it is difficult to know what the realities ‘on the ground’ are. There is a loss here of potential cross-national learning. They are further concerned that the narrowing of priorities will lead to a grouping together of measures, which could prove unhelpful.
### 1.2.6 Resources for effective implementation
Many networks are critical of the short term nature of initiatives and are concerned about a lack of sufficient resources and skilled staff for delivery – this latter was a concern expressed also in some national reports including Malta and many new Member States. The Finnish national report is one of the few that expresses an aim of ensuring sufficient fully qualified staff for high and equal service delivery.
The Lithuanian and Portuguese networks are concerned by the reliance on Structural Funds and NGOs to deliver services, particularly in the future absence of European initiatives. Networks are getting tired of being told there is not enough money to end poverty – even in the wealthiest countries, whether in the European Union or not. Norway is one of the richest countries in Europe – but EAPN’s Norwegian network reports that the government will spend only 90m euros more for target groups in the 2007 budget year, of which 25m is meant for increased expenditure on social benefits – just 2000-3000 people can be helped but 9-10% of the population are at risk. Just 350m euros per year would eradicate income poverty – much
less than the interest on the billions of euros in the oil fund. But the network reports that the government has said it will not spend it because it would damage work incentives. Instead, welfare reform emphasising activation will be launched in spring 2007. Clearly it is theory, not money, that constrains the fight against poverty.
## 1.3 Other challenges and policy measures that need to be addressed
The exercise of prioritization of measures has perhaps helped focus the reports, but has not helped in ensuring that the ultimate result is an integrated and comprehensive strategic document on national anti-poverty policies. Specific issues which have a clear impact on poverty are still not being addressed. This section provides EAPN’s view of some of the most obvious omissions in the Plans.
### 1.3.1 The capacity of labour markets to absorb all who want to work
EAPN networks are concerned about the coherence of the approach to labour market integration. Pension chapters of the National Reports refer to increasing retirement age to increase the financial ‘sustainability’ of the system and increase pension income. However, as the Austrian network asked: ‘If the raised retirement age keeps people in work to support their low pensions, how will the new ones get in?’
Some governments are confronting increased pressure from business to deal with low birth rates and skills mismatch by increasing migration. For example, the Estonian network reported a low birth rate and unemployment but at the same time lack of skills. Employers want to utilise a global labour market to hire less qualified specialists from Russia and higher qualified specialists from India. The network reported that government has a policy to avoid it – but that there is strong pressure from companies to change the law.
The Estonian experience of business pressure to encourage targeted migration is not unusual. It points to the growing need at European level to close the gap between the economic attitude to migrants and the social situation that confronts them in Member States. The Latvian National Report clearly states that ‘employers demonstrate no interest in unqualified workforce’ (p12). Despite a common acknowledgement of this situation, few National Reports face up to the size of the poverty impact for the labour market disadvantaged of the weak absorptive capacity of the open labour market.
Training opportunities are the main instruments offered to prevent poverty caused by the unwillingness of employers to hire disadvantaged groups. The risks of poverty from relying on supply side measures to combat poverty are recognised in the Finnish national report. It is one of the very few to state openly that the problem of poverty from unemployment and labour market disadvantage is long term and beyond the capacity of the individual to influence because there will be an ‘insufficient number of jobs suitable for the structurally unemployable even in the next decade’ (p19). The Finnish National Report states that better income support will be necessary, as well as social insertion measures. Interestingly, it states also that security of income for unemployed people is vital to raising employment (p27) and refers to
the concept of the ‘Interval labour market’, in which employee income is composed of a flexible combination of the employee’s work contribution and various forms of assistance. If the position of the Finnish labour market is thought to be specific to its high-skill labour requirements, this suggests that if other Member States achieve their goal of moving further towards a high-skill, high-value economy, they will confront the long-term unemployment now facing the Finns.
EAPN networks are increasingly concerned that European economic policy is forcing Member States into a ‘one-club’ approach to combating poverty through supply side ‘activation’ – a poverty that is in itself reinforced by the macro-economic policy environment.
### 1.3.2 Social protection and adequate income
Finland’s report is one of the very few that refer explicitly to prevention and to ‘good social protection as the cornerstone of society… It increases social stability and cushions the impact of social change’ (p 14).
Very few reports refer to better income support – with the exception of the Baltic states: for example, there are widespread rises in benefits in Estonia, but from a position of currently limited coverage and very low rates – often insufficient to prevent severe poverty. Wealthy countries have more limited and targeted approaches – increasing some benefits (for example to support family-building) and reducing others, to stimulate work incentives and cut costs. The Finnish National Report states that security of income for the unemployed is vital to raising employment (p27). However, the Finnish network is one of many that feel the key missing measure for really cutting poverty is adequate benefits – benefits are both too low and too rigid. The UK network is particularly concerned about benefits for single adults without children – many recipients are multiply deprived, and stress and other mental health problems are common. Benefits for adults without children have declined markedly in real terms relative to those of other groups.
The Luxembourg network voices the concern of many networks in stating that one of the key issues is not just the existence of minimum income but the level - and the level necessary is connected to affordable and accessible services.
In this respect, networks are very disappointed in the response by governments to the ‘Active Inclusion’ communication of 2006. This may be explained by the fact that governments have very different views on each dimension included (labour market integration, minimum income and services) and have indicated that they are satisfied with the OMC process, but many social NGOs would like stronger measures on minimum income. They fear further deterioration in labour market conditions and neglect of the gross poverty experienced by some disadvantaged people in and out of the labour market – and the exploitative conditions and severe poverty experienced by many migrants, ethnic minorities (Roma), refugees and asylum seekers – even in the richest countries. The National Reports were an opportunity to redress this which was not taken.
EAPN believes that an adequate income for a life in dignity should be a guaranteed right and that the OMC should help highlight this need and steer political thinking towards a recognition that steps have to be taken to make this a reality throughout the EU.
### 1.3.3 The missing power – legal rights and legal redress
As mentioned above, networks believe that legal minimum incomes set at a level and with a mechanism to prevent poverty and enable everyone to share in rising wealth are an essential tool in combating poverty and supporting human dignity. Given the great differences between Member States it may be necessary first for measures to be developed nationally, but an argument about subsidiarity is not an argument for doing nothing. The Charter of Fundamental Rights provides a clear basis for taking this debate further.
Networks believe also that enforcement of existing law - for example on discrimination and on employment rights – should be a stronger part of labour market integration and access to services.
### 1.3.4 Income inequality
Income inequality is referred to in some reports – e.g. Finland – as a driver of relative poverty, and in Cyprus, referring to improvements in the distribution of income. In the Belgian report, in the pension chapter, a remarkable measure is introduced: the guaranteed income for older people will be increased to the level of the poverty threshold. This is an important precedent. By referring to the poverty threshold for the minimum income of older people, one could argue for a similar application to all other groups with a minimum allowance below the poverty threshold. It could be considered as a good practice for other Member States.
However, EAPN networks are concerned that the Lisbon strategy is accelerating inequality in most Member States. The French EAPN network notes that the mix of tax cuts for the better off and benefit cuts for poorer people is reinforcing inequality. Despite the costs to taxpayers in direct subsidy to employers and indirect subsidy through the tax and benefit system, and all the associated costs arising from the impact on social cohesion, the primary inequality generated by the open labour market is not presented as a problem that can or should be tackled at source. EAPN networks ask – has anyone counted the costs?
### 1.3.5 Macro-economic policy
As the French network said, ‘What is competitiveness? - According to the 4th Cohesion report of the Commission it’s when people have a better life – but quality of life is falling’. EAPN networks express frustration and even anger about the dominance of the Lisbon reform agenda, its rightness treated almost as an article of religious faith. EAPN networks believe that tax cuts and privatisation programmes raise income inequality, cut resources for welfare and raise costs to poor people – user charges, co-payments, unequal access to services.
The Slovak network was one that referred to the impact of the Stability and Growth Pact and the constraints it imposes on spending on social welfare – the vulnerable are paying for the government’s aim to enter the single currency mechanism.
Networks have long felt that European monetary policy was too restrictive of the demand side of the economy, leading to unemployment and cuts in social services. However, what is perhaps surprising is the strength of feeling about the consequences of the single currency. A member of the Portuguese network said that ‘the Euro - and the stability pact - is becoming
one of the most important reasons for poverty. Salaries are the same but the cost of living is higher’. The Italian network mentioned ‘not to blame the Euro but people speculating and making a profit’.
### 1.3.6 The responsibilities of employers
In recent years, in many Member States, employers have been given greatly increased rights to manage labour markets in their own interests. At the same time there has been some concomitant increase in their responsibilities to the public interest. There have been improvements in legislation on equal opportunities to access employment and there has been the introduction of the minimum wage in those countries that did not already have it.
But the quality of work and working conditions has not been a big concern to most governments. The potential impact of long hours or insecure work on children was earlier discussed but it has also an impact on elder care.
The main role for employers in combating poverty seems to be as a recipient of public subsidy to support training and employment of risk groups and in some countries sheltered employment and training programmes. Employer subsidies are widespread. Measures may be direct or indirect and vary widely and include employer insurance discounts to take on disadvantaged groups in Hungary, and low wage top ups (Tax Credits) in UK. The cost of low wages to the taxpayer (on childcare subsidies and wage and pension subsidies) is not discussed.
Concerning low pay, minimum wages are sometimes mentioned as an anti-poverty measure – but for example in the UK and Cyprus the level is set low and covers mainly female occupations, contributing to cutting household poverty in dual-earner and, to a much lesser extent, female-headed working households. The extent of in-work poverty in the UK indicates the minimum wage is not set at a level where one wage will keep a family out of poverty.
Employer responsibilities in return for the greater freedom they have been given to manage the labour market must be set out clearly. They have to include issues about security, progression, work culture and hours of work.
### 1.3.7 Access to health and housing
Many networks believe that the poverty caused by the relationship between low incomes and health inequality is not addressed. Health inequality features in health chapters of the National Reports, but the poverty dimension is rarely dealt with in the inclusion chapter. The German network is disappointed by the lack of measures on access to health care for people with a disability. The lack of reporting of health measures hides caps (ceilings on resources) and cuts in some countries such as France, where older people have been particularly affected by caps and migrants have been affected by regression in access to health services.
Access to affordable housing is a problem identified in many National Reports, including France. The Luxembourg report identifies measures to ensure up to 10% of rental housing in new developments and increased public construction of affordable housing. Hungary has a programme to cut regional housing inequality and a programme to target poor housing in villages and remote rural areas, supported by structural funds. The Polish report refers to measures to increase emergency accommodation including night shelters. The Finnish report
identifies a range of strong measures to deal with homelessness and an acceptance that sufficient income is a necessary preventative measure. However, networks are concerned that income inequality, rising prices for property and discrimination are hardening housing segregation and problems of accessing secure housing in reasonable condition. But major social housing programmes are not reported.
### 1.3.8 Public and media awareness and understanding of poverty
The networks are generally concerned about the hardening of the conception of deserving and undeserving poor and the failure of governments to address public understanding and awareness of poverty. The Lithuanian network suggested that there is a very narrow conception of poverty and public awareness of it – ‘only begging is poverty’. The Portuguese network states that if the fight against poverty is not a public opinion issue then it will be quite difficult to have coherent policies in this field. Governments may await public ‘permission’ to redistribute income, but the public understanding of poverty is not addressed in the National Reports. The Bulgarian report is an exception: it refers on p16 to ‘Measures for increasing the public awareness … and overcoming some prejudices towards ethnic minorities’ and on p21 to ‘increasing the public awareness for the conducted measures …’; but the specific measures are not discussed.
2.1 Co-ordination and implementation mechanisms and tools
2.1.1 The impact of the OMC in driving policy priorities and measures
It is clear that anti-poverty priorities are being driven by European processes and this is a major impact of the OMC. The European processes and funding have had broad impact. The policy priorities and measures evident in most reports (child poverty, active labour markets, access to services and integration of migrants) appear to be a consequence of European policy priorities and exchange. Policy measures for certain risk groups – e.g. the long-term unemployed and Roma – are clearly driven also by the European Social Fund and, in the case of Roma, by the impact of the JIMs process. The JIMs have evidently had a major impact in new Member States and are the basis of much that has moved into the NAPs/incl. Promotion of social and civil dialogue has clearly been an impact of European funding more broadly – for example, in Bulgaria.
EAPN believes that the EU lever on Member States’ commitment to fight poverty is crucial. At the same time, both the Commission and the Council should be mindful of the messages they send and of the extent to which the frameworks set and the broader agenda followed steer the inclusion and poverty agenda. This should not reduce the scope of the effort to eradicate poverty, but rather maximize it.
2.1.2 Streamlining of the social processes
The European objectives and structures for reporting on policies have changed half way through the ‘Lisbon decade’. With the 2010 objectives still in mind, the introduction of the new streamlined approach (social inclusion, health and pensions covered by a single OMC) risks reducing the clarity and precision of focus on combating poverty unless there are robust mechanisms to mainstream combating poverty throughout the National Reports (on Social Protection and Social Inclusion) and in the National Reform Programmes.
The European Commission encouraged consultation between ministries, but the outcome is uneven. The streamlining process has the potential to promote better governance and to address the multi-dimensional causes of poverty. EAPN nonetheless feels that streamlining has made it more, not less, likely that the national activity is merely a report to the Commission – over such a big area and in such a new mechanism, the strategic task is not achievable. Nor is it evident that the status of the Joint Report on Social Protection and Social Inclusion is higher now than previously.
Is the 2010 ‘decisive impact on poverty’ a real target? It is a pity that nearly seven years on from the launch of the social inclusion strategy we are asking this question. The view of the networks is that the positive potential of streamlining to integrate dimensions of the social agenda and raise the political profile of combating poverty and promoting inclusion has failed. This is not least because of the way it was introduced and the lack of active management of its implementation as more than an administrative convenience. Networks are sure that there are some governments that agree with them on this point.
2.1.3 The impact of the NAPs/incl in promoting institutional development
All Member States have established most of the following: national government departmentally based co-ordinating units, inter-departmental committees and stakeholder mechanisms. What is not clear is the formal or constitutional status of these mechanisms and their link to the formal policy process.
In states where delivery responsibility takes place at regional level, some Member States have invested in regional mechanisms, especially in new Member States, which seem to be making determined efforts to build consultation mechanisms and deliberative policy processes. For example, Hungary undertakes regional consultations and local and county round tables (a legal obligation) in order to guarantee the multi-sector nature of local social policy planning. The full social inclusion structure includes inter-departmental round tables, a Social Policy Council of stakeholders and a Committee Against Social Exclusion which elaborates the national reports. There are also Councils for various disadvantaged groups including Roma. There are still some issues of planning coherence in those countries such as Ireland and Germany with pre-existing national reports on poverty/wealth. The German National Report refers to a ‘parallel process’ (p28).
Overall, it seems clear that many governments are doing things they would not otherwise do, or would do less of, in terms of inter-departmental co-ordination and policy consultation. In this respect, the National Action Plans have driven better co-ordination of anti-poverty strategy. This has been slow to develop, but that is typical of institutional change.
However, ‘streamlining’ seems to have interfered with the ‘bedding in’ process and has taken some of the drive and ambition out of the process. This is evident in the absence in some Member States of a distinct national inclusion process and in the limited content of the reports. Some Member States have sought to keep the strength of the NAPs/incl and have chosen to prepare much larger reports; others have also decided to retain a focus on all seven priorities in the 2006 Joint Inclusion Report.
2.1.4 Unbalanced feeding in and feeding out with the National Reform Programmes (NRP)
It is clear that the economic and employment processes, whether ‘wrapped’ in the NRP or not, dominate and constrain the social processes (feeding in). However, Cyprus is an example where it is stated that service priorities are evaluated in the context of NRP and NAP priorities. The Hungarian report states that it tries to ensure consistency of the NAP with the
convergence programme and NRP, but does not say how. Finland’s national report states that ‘broad based preparation of the NRP guarantees that prevention of poverty and social exclusion is addressed in economic and employment policy’ (p40) but no more information is given.
There is therefore little evidence of poverty proofing of strategies or measures in other dimensions of the Lisbon process (feeding out), which suggests limited impact of key messages from the NAPs/incl. The impact seems all in the other direction. The UK NRP has one reference in it to child poverty – but no reference to the government’s child poverty target. There are two references to the Joint Social Protection/ Social Inclusion report – but no specific reference to the NAPs/incl.
Networks have found the NRP process difficult to access and many have found the culture of the relevant Ministries closed and uninterested in poverty. EAPN will conduct a separate analysis of how the social inclusion priorities are taken up in the NRP.
Amongst other disadvantages, the dominance of the economic agenda over the social approach is inhibiting innovative policy development in response to new challenges. This barrier is not only financial, but intellectual and emotional. While a supply-side approach to the labour market – such as ‘flexicurity’ as currently understood – can insert more people into the labour market, it is not at all clear that it can keep them there, or keep people out of poverty. Member States should ‘cut some slack’ to deal with the multi-dimensional nature of poverty – asserted and accepted by almost all actors but not truly evident in the elaboration of strategy and policy.
2.1.5 The Structural Funds
The impact of the Structural Funds’ priorities on the prevalence of particular measures is evident in most countries. The Lithuanian report refers to the Single Programming Document as ‘one of the most important documents of Lithuania’ (p 29).
They are the major European financial instrument in the social field and their role is to close inequality gaps - yet most networks believe that they are hardly focused on combating poverty and exclusion.
Further, networks can find no link between the new Structural Funds and the NAPs/incl – even where there was one before, as in Portugal. Another network said that the timing of the new period of programming and preparation of the NAPs/incl would have required co-ordination of decision making nationally that was not possible.
2.1.6 Mainstreaming
A few National Reports refer to mainstreaming, but do not discuss the mechanism. Cyprus refers to mainstreaming policy through the NRP and the Single Programming Document of the Structural Funds.
Latvia intends to draw upon European best practice with a view to establishing mainstreaming in 2007.
In the National Reports, ‘mainstreaming’ in strategies for particular groups at risk was most likely to be illustrated for women and to a smaller extent for people with a disability and Roma – for example, Hungary has desk officers for Roma in all major Ministries and in Portugal the
NR refers to the creation of ‘social inclusion units’ in the different ministries. However, most National Reports say little on mainstreaming.
In the case of France, as indicated earlier, stronger reference could be made to the use of the ‘transversal document’ (cross-cutting policy tools). No other network found any statement of clear intention to progress mainstreaming of anti-poverty strategy in national policy making. In the UK, there are the beginnings of Cabinet Office led inter-departmental discussion on mainstreaming social exclusion through the mechanism of the Comprehensive Spending Review. However, the complexity of effective mainstreaming is illustrated in the different scope of the Department for Work and Pensions, the Cabinet Office exclusion agenda and the child poverty strategy, which is the government priority and subject to a clear target. The revision of the Public Service Agreement targets, if they become cross-departmental for some key issues, may provide a means of mainstreaming, but it is too early for this to enter the National Report.
EAPN networks believe that the OMC processes have increased inter-departmental contact, but are aware of the practical difficulties and time required to institutionalise contact and achieve mainstreaming. Streamlining did not advance this process, but rather the reverse.
2.1.7 Tools and delivery
Data
It is clear from their own statements that some poorer Member States such as Latvia and Bulgaria do not have the capacity in all policy areas to set targets based on meaningful data and to monitor them.
EU SILC data problems were raised in the Irish national report as a reason for not yet undertaking measures in certain policy areas.
Target setting
A positive impact of the NAP process has been to spread good practice in data development and target setting. Cyprus had no targets in the previous NAP. However, many states set targets only for particular priorities or measures, for example Estonia has no targets for childcare and Bulgaria has few targets overall.
The style of target setting differs between Member States; more importantly, target setting is inconsistently expressed within Member States. It is hard to tell whether targets are ambitious or not (e.g. Cyprus) and how resources relate to them (e.g. Bulgaria).
Indicators and monitoring
Only some reports refer in the body of the text to the Laeken indicators and use them to bench-mark their performance – and then not consistently. Overall there is no consistency within or between National Reports in the use of indicators or how these are reported.
Resources, transparency and coherence
The Maltese network, for example, is concerned that the planned projects listed in the annex to its National Report show no clear links to the priorities on children and youth. In most National Reports resources are not reported in a consistent manner and are not clearly linked to the scale of challenges and to targets.
The French network feels that more reference could have been made to the new cross-cutting policy tools, which were the subject of a peer review in mid-2006.
Networks missed consistent reporting of timetables. Very few reports distinguish clearly between existing and new measures or indicate whether new measures are resourced and ready to go or merely foreseen.
Monitoring and evaluation
These remain as underdeveloped as ever. Monitoring activities are much more rudimentary than consultation on drafting. However, Austria intends to engage independent experts in producing material for the next Plan and to have stakeholder involvement in monitoring and implementation.
In most Member States, monitoring and implementation is rudimentary because the NAPs/incl do not drive policy but report it. Nevertheless, the consultation on drafts does allow a kind of evaluation of existing strategy and policies.
2.2 Governance and the role of stakeholders
2.2.1 National governments
At national level, governments are doing more co-ordination and more consultation, including recognising the importance of experiential data in formulating policy strategies. Further, networks report commitment of many of those ministers and officials who have been directly engaged and recognition by them of the value of cross-national learning and of new voices in the policy process.
Cross-nationally, networks are sure that peer exchange between governments has been a positive learning opportunity for most of them - but remain concerned about the lack of access to influence and benefit from that learning for other actors who must be part of successful strategy and policy.
On the negative side, networks are not convinced that there is institutionalised involvement in the inclusion process, of ‘non-NAPs’ departments or even other teams in the same department. Some networks believe that small teams write the reports for European Commission consumption only.
The Swedish network was one of many who made the point that a weakness of the NAP is that it is not a national planning tool as foreseen. ‘The difficulty is NAP is not an instrument. It is just a report of some officers. If you want to influence poverty then you do not do it through the NAP’. Nevertheless, stakeholder models are spreading and these may be seen as ‘proto-institutions’.
Overall, the OMC process – despite being defended by national governments when presented with any alternative – is not being embedded as it could be. If for now governments will not do more, they could at least fully implement the OMC.
2.2.2 The role of the European Commission
The Commission has played a very positive role in promoting the four original objectives of the NAPs/incl and in assessing the strengths and weaknesses of Member States’ approach and achievements. The Laeken indicators have been valuable in enabling common benchmarks to be used.
However, recent developments, including streamlining and the launch of the National Reform Strategy, have disappointed the social NGOs, not least because of the limited consultation and the feeling that NGO concerns have not been heard. At a moment when NGOs and national governments are being exhorted to take a more inclusive and participatory approach to policy
development, there is scope for the governance process at European level to be more open, more transparent and more accessible.
Social NGOs are also beginning to have concerns about the European Commission’s own strength of purpose regarding the future of the European strategy to combat poverty and social exclusion. Networks would like to see more high-level support from the European Commission in helping Member States to drive forward the poverty agenda.
2.2.3 Parliaments
Members of Parliament seem not to be involved even when invited – perhaps because NAPs are usually a survey of measures already taken or planned, so there is little political interest. The involvement of the European Parliament is also limited.
Given the aim of influencing policy and promoting new institutional mechanisms there is a democratic deficit arising from the lack of engagement of national Parliaments and the European Parliament. It is disappointing that, to date, the OMC process has not managed to enhance such engagement.
2.2.4 Non-Governmental Organisations
Some national reports refer specifically to strengthening the role of NGOs mainly related to consultation on the inclusion strand and service delivery in a national context. However, there are no resources identified to improve advocacy.
The overall picture appears to have improved, and there have been developments in countries that did not have NGO consultation in the last NAPs/incl (as in Portugal, with the direct involvement of the Non Governmental Forum for Social Inclusion) – although for some, there has been less involvement than previously. In Estonia, NGOs participated for the first time in the preparation of the 2006 Plans. In Lithuania, the network was also involved as a key stakeholder in the consultation.
Across the Member States, Plan preparation usually involved 2-3 consultation meetings (though this varied from 0 to 5). In a minority of countries the consultation meetings are open to social partners and NGOs together; in other countries there are separate meetings with NGOs. In one new Member State the national government appointed an NGO to act as an NGO umbrella and interlocutor. There is a concern with NGO registration in new Member States. Overall, Belgium and the UK had fairly robust consultation mechanisms. In Belgium an interesting process has been set up involving many actors: employers are not much engaged, but other actors worked together for more than one year, with government organising the meetings. The process is felt to be a big improvement over previous rounds. The themes in the NAP on social exclusion were discussed beforehand and chosen by the actors.
The UK anti-poverty NGOs have regular dialogue with the social exclusion team responsible for the European social inclusion agenda (in the Department for Work and Pensions). Meetings are now well established and civil servants from other Departments and from the new Social Exclusion Task Force are invited to meetings. A follow-up Awareness bid has been successful for the UK network; the first bid financed the ‘Get Heard’ process of 147 workshops in which people from grass-roots organisations discussed national policy to input into the 2006-8 NAPs/incl. In spring 2006 a stakeholder group was officially launched. It contains different
departments, representatives of devolved government and municipalities and NGOs. Significant activities are planned for 2007, including seminars to develop 2008 themes and a conference of people experiencing poverty.
It seems that social NGOs are gaining more legitimacy for their advocacy role, but not more resource. New stakeholder ‘expert networks’ - including the participation of people in poverty as ‘experiential experts’, provide an opportunity to establish policy development processes with a new dynamism. Without such strong – and well resourced - deliberative networks, governments may lack the will to address the weak public understanding of poverty and to seek ‘permission’ for better measures to combat poverty.
2.2.5 The role of the community sector and of people experiencing poverty
Scandinavian countries refer to user councils at local level but do not say precisely how these work with the process for the NAPs/incl.
Overall, limited participation of people experiencing poverty in national preparation is reported, especially on a regular basis; exceptions include Belgium and, to a lesser extent, the UK. In Luxembourg, a ‘round table’ organised by EAPN brought in the views of people in poverty. In Malta, there was a consultation questionnaire, all stakeholders were specifically invited to a consultation and events were advertised in all the Sunday papers. EAPN Malta organised a consultation with service users: they used the 5th People Experiencing Poverty questionnaire and interviewed 90 service users (p33).
A few Member States have gained much from the specific European conference of People Experiencing Poverty and are developing national models. It is important to retain this distinct European occasion as a model of good practice and to promote its take-up more at national level.
2.2.6 Regional government and local municipalities
Some Member States (e.g. Finland, Germany) are making efforts to strengthen the central–local relationship and to reform the organisation of local government (e.g. Denmark) to improve equality of access to services.
In some new Member States, for example Latvia and Hungary, there are models of broad consultation between the centre and the regions, specifically on the inclusion strategy. Policy co-ordination effort is evident in Hungary, which states that social services have to provide two-year updates of planning strategies.
However, as indicated in EAPN’s report on the NAPs/incl ’05 the process exposes the difficulties of national co-ordination between levels of government and this remains the case.
There are practical difficulties still in actually producing a coherent national plan, because the institutional mechanisms - inter-departmental and between national government and other levels of government - and civil society are often too weak. Central government drivers through budget control and performance targets do not necessarily improve the lived experience of poor people.
Involvement of the local level of government – the main implementers of anti-poverty strategy - has to be taken upstream in a consistent way so that their experience better informs policy development. However, in the context of the OMC process, at present the incentives are insufficient for either central government or local government to make determined efforts to engage together in the NAPs/incl. This has been said by local government from the start of the NAP Inclusion process, but not a lot has changed.
2.3 Impact of engagement in the process – the experience of EAPN networks
Many networks have made the NAPs/incl a priority in their work programmes and it has absorbed a lot of time and resources. The payoff is that most say they are taken more seriously in terms of consultation, accepted expertise and role in bringing forward the voice of people experiencing poverty.
While EAPN networks are broadly satisfied that access to National Action Plan drafting is improving for NGOs they are less satisfied about access for people experiencing poverty. They are less satisfied also about mechanisms for engagement in follow-up.
Networks feel that there has been very limited involvement of social NGOs in other parts of the National Reports or in the NRP – which does not appear to be a ‘process’ from the point of view of stakeholder involvement. Streamlining has not always helped: it has inhibited the incremental increase in stakeholder involvement. Networks have noted that, regarding simultaneous engagement in the single social process, there is no desire on one side and no capacity on the other.
The OMC process has given anti-poverty networks a certain legitimacy to lobby concerning the measures taken. Regarding the impact of the OMC on poverty measures, networks have seen evidence of cross-national learning by governments, but few innovative positive measures.
Networks vary in their assessment of the added value of participating in the process. One network (Netherlands) has not changed its focus and another that expended a lot of effort on the NAPs/incl is rethinking the value of this (Ireland), as they did not see any output from the consultation in the Plan content. One network (Lithuania) saw their proposals fully included in the Plan, but it is not clear if these were policy proposals about poverty or about the role of anti-poverty NGOs. The UK network has succeeded in getting the issue of in-work poverty up the government agenda and there are some indications that measures may eventually emerge, but these are not proposed in the Inclusion chapter of the National Report.
Although many networks have engaged in the process of the OMC, it is clear that this has been done with few extra resources available. This does not reflect the continued commitment at European level to good governance and participation, and governments and the EU level should acknowledge this. Investment at European and national level in those actors committed to the process would embed these stronger institutional mechanisms for 2010 – a positive legacy for the NAPs/incl and for Member State and EU capacity to combat poverty and social exclusion.
CHAPTER 3: EAPN’S KEY MESSAGES AND PROPOSALS ON THE OMC ON SOCIAL PROTECTION AND SOCIAL INCLUSION
Key messages
The eradication of poverty as a key objective is losing ground
Poverty is no longer mentioned as a challenge in its own right. It is often restricted to addressing the needs of certain groups or to limited approaches such as child poverty and active inclusion, which are important in their own right but do not have the ambition of ‘making a decisive impact on poverty’ as agreed in Lisbon in 2000. EAPN is concerned that the change in language – including the shift from poverty to child poverty and from poverty to ‘active inclusion’ – reflects both a narrowing of the concept of poverty and a shift away from a universal and preventative approach to combating poverty and promoting integration. Some measures remain broad and do not really tackle the concerns of people in poverty. Ultimately, EAPN feels that the measures are not sufficient to address the nature, scale and depth of the problems identified.
Keeping focus on poverty: surviving in a harsh political context
The revised Lisbon agenda has set the dominant political drive at EU level and this seems to steer all policies, including anti-poverty policies, in the direction of competitiveness and jobs, meeting the stability pact and monetary union criteria. There is clearly a superseding agenda which EAPN feels should be debated in the context of its implications for the fight against poverty and exclusion.
The OMC on Social protection and social inclusion (SPSI) does have an impact on the national policy environment for combating poverty and promoting inclusion
The European external driver does matter. The OMC SPSI has had an impact on governance and cross-state learning as much as on analysis of the challenges faced, and this is a positive development. It has also had a small impact on national strategic policy formulation, and many networks welcome the priorities identified. Structural Funds’ programming has a big influence on employment and training measures but is little used directly to combat poverty.
At the same time, there is a concern around the strong drive in limited priority setting from the EU level, which has sometimes superseded other existing or wider concerns at national level and which is often driven by the EU’s push for ‘jobs and growth’ first. EAPN raises concerns about this approach, which at national level seems to allow fewer opportunities to develop a more holistic approach to poverty eradication.
Streamlining: a mixed message
Streamlining was introduced without stakeholder consultation and without meeting the concerns of some Member States committed to the distinct process for the NAPs/incl. The streamlined strategy required a powerful relaunch of the OMC which it did not get. EAPN
feels that the social inclusion strand is losing focus and content in some National Reports, perhaps linked to the brevity and the limited scope for priority setting, and that the streamlined process as a whole is not sufficiently poverty-proofed. To date, its impact on addressing the multi-dimensional nature of poverty is not entirely satisfactory. The recent change in the process may mean that it is too soon to evaluate its impact, yet the commitment to poverty eradication is long-standing, and there is little evidence of improvement at EU level. The streamlined process was also intended to give prominence to an independent and equally strong social pillar within the Lisbon strategy. EAPN sees little progress in redressing the imbalance between economic, social and employment policies. Indeed, ‘feeding in’ has been pursued to the point where social processes seem to sit within the orbit of, and subservient to, economic processes. ‘Feeding out’ in terms of addressing social inclusion concerns in the employment and economic policy formulation of Member States is far from being achieved. EU processes in this respect have not provided the necessary lever to ensure this takes place.
**Risk groups: addressing key issues in a holistic manner**
*Child poverty* is a key concern in Europe today and should be dealt with urgently: the OMC rightly highlights this as one of its main priorities. Nonetheless, EAPN wishes to insist on the fact that although this is an issue to be dealt with as such it cannot entirely be separated from an approach which looks at poverty of families. Addressing this challenge requires tackling the structural causes of poverty, addressing issues of rights and not adopting a purely behavioural approach.
EAPN would like to see the development of a strategy which addresses the needs of all groups at severe risk, with more attention paid to the problems of ethnic minorities, asylum seekers and refugees. At the same time, targeted measures remain out of balance with a more holistic approach that makes a difference to the lives of *all* people experiencing poverty.
**Active inclusion and eradication of poverty are not the same concept**
EAPN is concerned at the dominance of supply-side labour market solutions in the content of the National Reports. Jobs are not the only way out of poverty, and sometimes they are not sufficient to get people out of poverty effectively. Not enough emphasis has been placed on the quality of work, the absorption capacity of the labour market, the *adequacy of income* from benefits or wages, and access to services.
**Current policy tools are too limited to deliver on poverty**
In order for the OMC to deliver on poverty, stronger links need to be made to other policy tools and processes: economic policy (monetary policy, tax cuts, the single market…), social rights, adequate income both on and off the labour market, enforcement of measures to fight discrimination, a stronger emphasis on multidimensional policy approaches (e.g. family policy), and access to services free at the point of need (co-payments and user charges prevent service use). Media and public awareness of, and support for, the OMC and the Joint Report are also still not sufficient.
**Governance in the OMC**
Although there is not sufficient effort put into promoting participation in the other strands of the streamlined European process, developments in *governance* in the NAP/Incl have been more
positive and there is more legitimacy for social NGOs and people experiencing poverty to have their voice heard. At the same time, there is a feeling that participation is mainly encouraged in the analysis of challenges rather than in the priority setting and definition of measures. Cross-government cooperation has been launched in many countries, and NGOs expect much of this. Less satisfactory is the level of engagement of sub-national level and of Parliaments.
EAPN proposals to contribute to the success of policies to fight poverty in the EU
Put poverty back on the EU agenda!
- The Spring Summit 2007, and therefore the Joint Report on strategies for social protection and social inclusion, should acknowledge the danger of moving away from a focus on poverty and should restate the need to make a decisive impact on poverty eradication.
- The language of ‘eradication of poverty’ should not be lost, and in any case should not be replaced with references to ‘active inclusion’. Although this approach is welcome in terms of challenging existing approaches to activation, it is not the same as having a clear policy priority on social inclusion and poverty eradication.
- Refocus on poverty by underlining the distinctiveness of the NAPs/incl, within the streamlined OMC and more broadly the Lisbon agenda
- Introduce more effective measures to poverty-proof policies across the board
- Increase learning about poverty and support new research on structural causes of poverty
- Transfer best practice more effectively: learn most from the countries with the least poverty
- Ensure a balance at national and European level between holistic and targeted approaches
- Where governments won’t do more, do better (implement existing tools in full)
**Strengthen the streamlined OMC as an effective strategic tool**
- At European level, evaluate the impact of streamlining on the attention to poverty.
- Ensure that key institutional actors (SPC, Commission) act as real ‘guardians of social inclusion’ in overall EU policy-making.
- Introduce a more structured, cross cutting working group on poverty within the Commission, in which NGOs could play a role.
- Refresh the NAPs/incl as a national planning tool
- Raise the status and ensure consultation in the preparation of the **Joint Report** on Social Protection and Social Inclusion. Ensure that it contains clear messages and **recommendations** on how to ensure National Reports better meet the challenges highlighted and how they respond to the objectives set at EU level. At European level, **stakeholders** should be involved in the preparation of this report.
- Do not disregard the other **priorities** mentioned in previous Joint Reports and EPSCO Conclusions (including access to housing, homelessness, quality services, discrimination, ethnic minorities and migrants…), which are still pressing concerns. Only by keeping these high on the list of EU priorities, alongside a holistic and multi-dimensional approach, can we achieve a balanced social inclusion agenda for the EU.
- Ensure all National Reports identify the measures they will take to evaluate the **impact** of the strategies.
- Reinforce governance and mainstreaming of poverty and social inclusion concerns in the **health and pensions** strands of the streamlined OMC.
- Strengthen institutional mechanisms for engagement of **Parliaments** and all relevant stakeholders and people experiencing poverty, at national as much as at European level.
- Launch **local** action plans and peer reviews as an established part of the OMC SPSI.
- Launch a NAPs/incl ‘legacy planning’ conference for 2008.
**Strengthen processes and measures that can impact on poverty**
- Provide stronger **mechanisms** to link the OMC Inclusion process to the other social, economic and financial processes (joint meetings on key reports, clear coordinated timetables).
- Revise the **Lisbon** strategy process to ensure two-way input, redressing the imbalance between feeding in and feeding out on poverty and social exclusion through improved institutional, reporting and evaluation mechanisms.
- Mainstream **stakeholder** involvement in all stages of development of the OMC and Lisbon processes at national and sub-national level as well as European level.
- Develop **binding** commitments and measures to support the social inclusion process. In this context, particular attention should be given to horizontal frameworks to guarantee social standards, particularly in the field of adequate income for a dignified life and to ensure equality, affordability and access to quality services, particularly social services and services of general interest.
- Ensure that the focus of Structural Fund spending and the new PROGRESS programme clearly address issues of poverty and social exclusion, and not just from an angle of ‘feeding in’ to the ‘growth and jobs’ agenda.
**Strengthen communication and visibility of the OMC in the social field**
- Establish a European strategy to promote public understanding/sharing of knowledge on poverty
- Establish social information bureaux in member states – hosted by social NGOs engaged in the OMC in the social field
- Introduce a headline-friendly ‘poverty tracker’ for media and communication
- Deepen the European Meeting of People experiencing poverty – make it a mechanism and build in national people experiencing poverty conferences
- Refocus the European Round Table on Poverty on assessment of the NAPs/incl and forward planning, and link the Round Table to the policy processes.
Building a Housing Justice Framework
Bill Pitkin
URBAN INSTITUTE
Katharine Elder
URBAN INSTITUTE
Danielle DeRuiter-Williams
URBAN INSTITUTE
August 2022
ABOUT THE URBAN INSTITUTE
The nonprofit Urban Institute is a leading research organization dedicated to developing evidence-based insights that improve people’s lives and strengthen communities. For 50 years, Urban has been the trusted source for rigorous analysis of complex social and economic issues; strategic advice to policymakers, philanthropists, and practitioners; and new, promising ideas that expand opportunities for all. Our work inspires effective decisions that advance fairness and enhance the well-being of people and places.
Copyright © August 2022. Urban Institute. Permission is granted for reproduction of this file, with attribution to the Urban Institute. Cover image by GRAPHEK.
## Contents
**Acknowledgments**
**Building a Housing Justice Framework**
- Background
- Pervasiveness of Housing Insecurity
- Racial Injustice and Oppression
- The Shift from a “Reform” Mindset toward One of Lasting Structural Change
- Fragmented Housing Policy and Program Approaches
- Why the Concept of “Housing Justice” Matters
- Histories of Policy-Driven Exclusion
- Predatory Inclusion
- Housing Justice Practices
- What is Housing Justice?
- Moving toward Housing Justice
- Land Use and Zoning
- Rental Subsidies/Vouchers
- Fair Tenant Screening Practices
- Reparations
- Urban’s Role in Housing Justice
**Notes**
**References**
**About the Authors**
**Statement of Independence**
Acknowledgments
This report was funded by the Conrad N. Hilton Foundation. We are grateful to them and to all our funders, who make it possible for Urban to advance its mission.
The views expressed are those of the authors and should not be attributed to the Urban Institute, its trustees, or its funders. Funders do not determine research findings or the insights and recommendations of Urban experts. Further information on the Urban Institute’s funding principles is available at urban.org/fundingprinciples.
We appreciate feedback on this report from Mary Cunningham, Sam Batko, Yonah Freemark, and Lauren Lastowka.
Building a Housing Justice Framework
“Justice in housing is everyone realizing the fundamental truth—housing is a human right.”
—US Department of Housing and Urban Development Secretary Marcia L. Fudge at the March 22, 2022, National Low Income Housing Coalition policy forum
Background
Having a safe, affordable, and quality place to call home is fundamental to individual, family, and community life. Across the US, however, people and communities experience high rates of housing insecurity, a reality fueled by historical and ongoing discriminatory practices and racist housing policies.\(^1\) These challenges are particularly stark for Black, Indigenous, and Latinx people, who experience higher levels of housing instability and lower levels of wealth (Cai, Fremstad, and Kalkat 2021; Massey and Rugh 2018). To remedy these and other inequities, a growing number of advocates, organizers, policymakers, and researchers are calling for a structural overhaul of the country’s housing system. They aim to dismantle the factors that contribute to housing instability, so that everyone—regardless of their race, income, gender identity, disability, and/or sexuality—can live in a safe, affordable home.
The concept of “housing justice” as a framework for advancing this structural approach to housing insecurity has become more prevalent in recent years due to several factors, each of which we examine in more detail below: (1) pervasiveness of housing insecurity, (2) racial injustice and oppression, (3) the shift from a “reform” mindset toward one of lasting structural change, and (4) fragmented housing policy and approaches (see figure 1).
Pervasiveness of Housing Insecurity
Housing has long been tied to the so-called American Dream, but in recent years, housing challenges have risen to become a top concern of community residents across the country. A national poll from August 2021 revealed that 2 out of 3 residents are "extremely/very concerned" about homelessness and the high cost of housing, while 1 out of 4 residents are "somewhat concerned." A key factor contributing to these concerns is that new housing supply has not kept up with demand, leading to a nationwide gap of nearly 4 million homes—up from 2.5 million in 2018, according to Freddie Mac. For renters, finding an affordable place to live is particularly challenging. About 46 percent of renters spend more than a third of their income on housing and 22 percent of renters spend more than half of their income on housing (see figure 2). The National Low Income Housing Coalition estimates that there is a shortage of 7 million rental units for extremely low-income renters (NLIHC 2022). After declining from 2010 to 2016, the number of people experiencing homelessness across the US on a single night rose from 2016 to 2020, driven by increases in unsheltered homelessness.
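The cost-burden thresholds cited above (more than a third of income for rent, and more than half for severe burden) can be expressed as a small helper function. This is an illustrative sketch based on the thresholds in the text, not part of the report:

```python
def rent_burden(monthly_rent: float, monthly_income: float) -> str:
    """Classify a renter household using the thresholds cited in the text:
    'severely burdened' if rent exceeds half of income, 'burdened' if it
    exceeds one-third, otherwise 'not burdened'."""
    share = monthly_rent / monthly_income
    if share > 0.5:
        return "severely burdened"
    if share > 1 / 3:
        return "burdened"
    return "not burdened"

# A household paying $1,200 rent on $3,000 monthly income spends 40% on rent.
print(rent_burden(1200, 3000))  # burdened
```

By this definition, the report's figures mean roughly 46 percent of renter households would fall into the "burdened" or "severely burdened" categories.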
Racial Injustice and Oppression
There is a long history of housing discrimination in the US, going back to land theft from Indigenous and Black people and to what Ta-Nehisi Coates has called the “Quiet Plunder,” when federal policies crafted during the New Deal created government-backed credit for white homeowners while “Blacks were herded into the sights of unscrupulous lenders who took them for money and for sport.” This discrimination has occurred through systemic racist policies such as redlining, restrictive covenants, and public housing policies that created residential segregation (Rothstein 2018), as well as organically in the housing market, through such practices as landlord and real estate agent discrimination (Tighe, Hatch, and Mead 2017; Langowski et al. 2020), predatory lending (Immergluck 2015), and discriminatory appraisal practices (Korver-Glenn 2018). All of these forms of discrimination hit Black, Indigenous, and Latinx residents particularly hard (see figures 2 and 3). Racism in other systems and markets also contributes to racial inequities in housing (Korver-Glenn 2018). For example, racism in the criminal
legal system and in mass incarceration policies creates a cycle between housing insecurity and incarceration.\(^8\) Discrimination in the educational system and in employment practices leads to disparate outcomes for educational attainment and income, which contributes to higher housing instability for Black, Indigenous, and Latinx residents (Winkler 1993). These historical disparities have been exacerbated by the COVID-19 pandemic and will continue to grow unless there is concerted action to eliminate them.\(^9\)
**FIGURE 3**
*Homelessness in the US by Race/Ethnicity*
| Race/ethnicity | People experiencing homelessness | General population |
|---------------------------------|----------------------------------|--------------------|
| Asian | 1.3% | 5.9% |
| Native Hawaiian or Pacific Islander | 1.5% | 0.2% |
| American Indian or Alaska Native | 3% | 1% |
| Multiple races | 6% | 3% |
| Latinx | 23% | 19% |
| Black | 39% | 13% |
| White | 48% | 76% |
**URBAN INSTITUTE**
Sources: 2020 Point-in-Time Count Estimates from US Department of Housing and Urban Development; 2019 American Community Survey from US Census Bureau
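The disparities in the table can be summarized as overrepresentation ratios: a group's share of people experiencing homelessness divided by its share of the general population, where a ratio above 1 indicates overrepresentation. The following sketch, using selected figures from the table above, is illustrative only:

```python
# Percentage shares taken from the figure 3 table:
# (share of people experiencing homelessness, share of general population).
shares = {
    "Black": (39, 13),
    "Latinx": (23, 19),
    "American Indian or Alaska Native": (3, 1),
    "White": (48, 76),
}

# A ratio > 1 means the group is overrepresented among people
# experiencing homelessness relative to its population share.
for group, (homeless_pct, population_pct) in shares.items():
    ratio = homeless_pct / population_pct
    print(f"{group}: {ratio:.1f}x")
```

For example, Black people make up 39 percent of people experiencing homelessness but 13 percent of the general population, a threefold overrepresentation.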
**The Shift from a “Reform” Mindset toward One of Lasting Structural Change**
In recent years, the growing recognition and acknowledgement of persistent racial injustice—led in many cases by the Movement for Black Lives and amplified by demonstrations across the country and the globe in the wake of police brutality in 2020—have brought a renewed focus on the need for structural change rather than a “reform” of policing and policies that do not sufficiently address racial inequities (Dunivin et al. 2022). Recent research shows that although the majority of people in the US
favor “individualism” (the view that individuals’ circumstances result primarily from their own choices) over “systemic thinking” (the view that a person’s circumstances result primarily from how our society and economy are organized), the understanding of the role of structural forces has risen and persisted since mid-2020 (FrameWorks Institute 2022). This growing understanding has been leveraged to bolster support and investment in equity interventions.\textsuperscript{10} There has also been growing interest in implementing place-based efforts such as special purpose credit programs that seek to rectify the effects of generations of economic exclusion.\textsuperscript{11} For some housing experts, policy should take a “restorative” approach, identifying and healing the harms created by discriminatory practices.\textsuperscript{12}
**Fragmented Housing Policy and Program Approaches**
A challenge in adequately addressing the need for stable and affordable housing is that program and policy approaches tend to be fragmented across various sectors of housing interests. One sector, which includes both nonprofit and for-profit entities, works to increase housing supply through new development and housing finance. In another sector, some advocates focus on preserving current housing and protecting tenants from evictions and displacement, while others work explicitly on fair housing to address discrimination in housing. A third sector focuses on the homelessness response system, which provides services, shelter, and housing to people experiencing homelessness. All three of these areas of the housing field are important aspects of ensuring that people have a safe, stable place to live, but too often they are approached separately in “silos.” As shown in figure 4, these various sectors are usually independent of one another, but can also overlap (e.g., there are housing developers who provide housing for people who are homeless).
Housing justice provides a comprehensive framework that sits at the intersection of three sectors of housing-policy advocacy: tenant rights and fair housing, homelessness, and housing supply (see figure 4). This framework addresses the historical and systemic factors that have created housing insecurity while also advancing a forward-looking mindset of community-led and policy-centered change in addressing housing needs.
With rising interest in housing justice, the Urban Institute launched the Housing Justice Hub\textsuperscript{13} to better understand and advance this growing field. This report provides an overview of what we have learned so far and explains the housing justice framework that guides our work and that will continue to ground us moving forward.
**Why the Concept of “Housing Justice” Matters**
Advocates argue that providing more attention to—and funding for—housing in an inequitable system does not necessarily lead to better housing-stability outcomes, particularly for people of color. As witnessed in the federal programs that prevented foreclosures during the Great Recession and that supported businesses during the COVID-19 pandemic, funding can reinforce, rather than remedy, structural inequities.\textsuperscript{14,15} In order to meet the needs of all residents, policymakers and stakeholders must address the historical and systemic factors that have created housing insecurity and that continue to drive inflow into homelessness. A housing justice approach addresses the structural barriers and the intersectional issues that lead to inequitable housing outcomes.
Existing scholarship and policy analyses offer many examples of housing justice in action. Urban Institute’s Housing Justice Library is one entry-point into the ongoing conversation around why housing justice matters, how it is being practiced, and how a history of state-led discrimination continues to threaten it.\textsuperscript{16} Through the library, Urban is cataloging a growing body of research and resources related to housing justice. This catalog brings together diverse voices and media around the concept of housing justice and its policy intersections: education, criminal justice, climate, labor, and other policy areas. The resources in the library showcase different perspectives and tools, all united by a race-conscious, structural lens through which to view housing insecurity and the possible solutions for addressing it.
Altogether, this collection of resources highlights the multidimensional nature of housing injustice as a social problem that interacts with all systems and that is rooted in both historical and present day racism. These resources ultimately helped shape Urban’s definition of housing justice. Collectively, they communicate why housing justice matters in theory, policy, and practice.
Below is a summary of selected resources from the Housing Justice Resource Library that have helped advance our understanding of housing justice, spanning three themes: (1) histories of policy-driven exclusion, (2) predatory inclusion, and (3) housing justice practices.
Histories of Policy-Driven Exclusion
Summarizing key points from Richard Rothstein’s *The Color of Law* (2018), the animated short film “Segregated by Design” charts the racist policies and state-sanctioned discrimination that fueled housing segregation and the economic exclusion of Black households in the twentieth century, including redlining, racial steering, and exclusionary zoning laws.\(^{17}\) The video engages a wide audience through its visual storytelling, and links government-led racism to the inequities we see today in housing, wealth, jobs, and educational opportunity. In this way, the short highlights a key tenet of the housing justice framework: that housing inequality is rooted in longstanding policy and requires an equally holistic and institutionally grounded approach to remedy it.
Predatory Inclusion
Building on that historical analysis, Keeanga-Yamahtta Taylor’s *Race for Profit: How Banks and the Real Estate Industry Undermined Black Homeownership* (2019) examines how federal policy not only has trafficked in displacement, exclusion, and neglect but also has actively exploited Black households through what Taylor calls “predatory inclusion.” With the legal end to redlining in the mid-twentieth century, federal homeownership programs continued to sap Black wealth by enabling private market actors to peddle risky, low-interest loans to African American households, making them vulnerable to economic insecurity and eventual foreclosure. Here, Taylor foregrounds the broad overlap between discrimination and private-market interests in creating and reproducing housing insecurity across generations. This commodification of housing as a private good—not an inherent right—remains a key barrier to achieving housing justice at a structural level.
Housing Justice Practices
Against this historical backdrop, we also see living examples of advocacy and policy groups who are calling for and practicing housing justice in their own communities. Using a housing justice lens, United Way of Massachusetts Bay and Merrimack Valley (United Way Mass Bay) has grounded its vision for housing equity during the COVID-19 pandemic across three key policy areas: (1) creating deeply affordable housing and expanding supportive services, (2) ensuring equitable representation across sectors, and (3) grounding policy solutions in lived experience. The organization’s agenda aims to break down institutional silos in the housing/homelessness sector while centering lived experience in policy design and implementation—strategies that align with Urban’s housing justice framework. Recognizing housing justice as a collective movement is key to achieving meaningful impact. The United Way Mass
Bay campaign “shares decision-making power with the community, bringing together players from across different sectors to streamline efforts and create change that lasts.”\textsuperscript{18}
Supporting Partnerships for Anti-Racist Communities (SPARC) is a research initiative launched by the Center for Social Innovation. In 2016, SPARC led an ambitious, multi-method study across eight cities in the US to better understand the intersections between racism and homelessness and their implications for policy and racial equity (Olivet et al. 2018). Through quantitative analysis of Homeless Management Information System (HMIS) data across these communities, the authors found that people of color are overrepresented in the homelessness response system and remain at the greatest risk of exiting back into homelessness, relative to their white counterparts. The study also aimed to better understand the web of life experiences and perceptions around homelessness. To that end, the team conducted a series of interviews and focus groups with people of color who had experienced or were currently experiencing homelessness. These conversations identified a wide slate of barriers to housing stability, such as low-paying jobs, interpersonal and institutional discrimination, and a general lack of dialogue and service integration between the homelessness sector and related fields (e.g., behavioral health and criminal legal systems).
What is Housing Justice?
Our working definition of housing justice and related principles were developed in conversation with Urban colleagues through the Office of Race and Equity Research and Equity Scholars Program. To further pressure-test the framework, we also consulted with external working groups of advocates, scholars, and practitioners, all with deep roots in the housing and equity landscape.\textsuperscript{19} These conversations were rigorous and informative, helping us refine and strengthen the housing justice framework. Even so, our definition of housing justice is not fixed, as we hope ongoing conversations with stakeholders and people with lived experience will continue to inform, shape, and guide what housing justice means at the Urban Institute.
Urban Institute’s working definition of housing justice is: “Increasing access to safe, affordable housing and promoting wealth-building by confronting historical and ongoing harms and disparities caused by structural racism” (figure 5).
The term “housing justice” is used by several advocacy organizations and has recently shown up in academic literature. Key groups advancing housing justice as a framework for research and action include the National Coalition for Housing Justice, Alliance for Housing Justice, and UCLA Luskin Institute on Inequality and Democracy.\(^{20}\)
“Increasing access to safe, affordable housing and promoting wealth-building by confronting historical and ongoing harms and disparities caused by structural racism.”
– Urban Institute’s working definition of housing justice
Based on a review of the literature and existing evidence, the key principles that guide work around housing justice include the following (see figure 5):
- **Housing as a human right**: More than a commodity, housing is a basic need that should be guaranteed for individuals and families.
- **Primacy of lived experience**: People closest to the problem of housing insecurity have expertise central to identifying and implementing solutions.
- **Anti-racism and racial equity**: We must create reparative policies that account for the ongoing harm caused by past racist policies, including those related to land use, housing, and the criminal legal system.
- **Social and economic equity and justice**: No one should be poor due to their housing status, and housing stability should create economic opportunity.
- **Accessibility and inclusion**: Housing needs to be available and accessible for all people, especially those who have experienced discrimination in the housing market for far too long, including people of color, people with incarceration histories, single mothers with young children, and those with physical or other disabilities.
- **Wealth-building and ownership**: All people should have the opportunity to create wealth as part of having access to stable housing.
- **Choice and agency**: Residents should be empowered to choose where and how they live and to exercise power and autonomy.
- **Community and well-being**: A stable, quality home is vital for health and well-being, provides a sense of belonging, and helps connect people to opportunities to learn and earn.
Moving toward Housing Justice
The housing justice framework is comprehensive, encompassing a range of policy and programmatic areas, as outlined in table 1. In some cases, these policy aims have been only weakly enforced or are aspirational in nature. In other instances, the designs of policies and programs have reproduced housing inequality.
Housing justice is achieved when housing policies and programs at the federal, state, and local (city/county) levels are designed and implemented to incorporate the relevant principles outlined above. To illustrate this, below are four examples of how a housing justice approach could play out in a policy area by incorporating select housing justice principles. In the first three examples (land use and zoning, rental subsidies/vouchers, and fair tenant screening practices), the policy areas have the potential to contribute to housing justice but instead often are implemented in ways that contribute to or reinforce disparities. Applying the housing justice principles could bring these examples closer to housing justice. The final example (reparations) represents a newer policy area that, if implemented broadly, could be transformative in achieving housing justice.
**Land Use and Zoning**
Local governments in the US have substantial control over housing and other development through land use and building regulations such as zoning.\(^{21}\) There is a long history of localities using policies such as height limits and minimum lot size requirements to prioritize certain types of development—primarily single-family housing and commercial properties—to the exclusion of others, such as multifamily and affordable housing (Metcalf et al. 2021). The vast majority of land in many areas of the US is zoned to allow only single-family homes to be constructed.\(^{22}\) "Exclusionary zoning" continues to
perpetuate residential segregation along racial lines and reduces affordability by restricting available housing stock. In recent years, there have been growing calls to end single-family zoning (Manville, Monkkonen, and Lens 2020). There have been efforts in Louisville, Boston, Seattle, and other cities to remedy past exclusionary zoning policies by creating land-use policies that promote racial equity and equitable development.\textsuperscript{23}
**APPLYING THE HOUSING JUSTICE PRINCIPLES TO LAND USE AND ZONING**
- **Housing as a human right:** Land-use policies have long been designed to limit the development of sufficient housing and create residential segregation. Reforming land-use policies to promote equitable development of housing and prevent displacement would be consistent with treating housing as a human right rather than a privilege.\textsuperscript{24}
- **Anti-racism and racial equity:** To correct the legacy of racist land-use policies, cities can implement assessments and reviews of current policies and add equity and fair housing goals to their zoning codes and practices.\textsuperscript{25}
- **Social and economic equity and justice:** Land-use reforms that remove barriers to allow for increased density can increase housing supply,\textsuperscript{26} but localities should include policies to address risks of increased development costs and/or displacement of current residents.\textsuperscript{27}
- **Choice and agency:** Increasing density through land-use tools such as “upzoning” can create more residential choices for residents, especially if coupled with policies to support local residents and prevent displacement, such as rent stabilization and community benefit agreements.\textsuperscript{28}
- **Community and well-being:** Land-use reforms such as eliminating parking requirements and increasing density near public transit can create more walkable, environmentally sustainable neighborhoods and create opportunities for recreation and community connection to support health equity (Fedorowicz et al. 2020).\textsuperscript{29}
**Rental Subsidies/Vouchers**
Rental subsidies provide support to low-income tenants for paying rent. The Housing Choice Voucher program is probably the best-known federal subsidy program, and there are many other programs at the federal and local levels. About 2 million households across the US receive Housing Choice Vouchers, although over 8 million more are eligible but do not receive vouchers due to underfunding.\textsuperscript{30} Expanding access to Housing Choice Vouchers and other rental subsidies for all
people who need them would substantially improve housing affordability and address housing instability.
APPLYING THE HOUSING JUSTICE PRINCIPLES TO RENTAL SUBSIDIES/VOUCHERS
- **Housing as a human right**: Universal vouchers—providing subsidies for all who need them—would ensure that all people who need help paying rent receive assistance.
- **Primacy of lived experience**: Because there are often challenges with using vouchers, administrators should engage with residents to understand the barriers and solutions to overcoming them.
- **Anti-racism and racial equity**: Research shows that universal housing vouchers would lower poverty for people of color and reduce racial disparities in housing cost burden.\(^{31}\)
- **Social and economic equity and justice**: Subsidies and vouchers can provide financial stability for individuals and families, and stable housing provides a foundation for employment. Improving the ability of voucher holders to move to “high opportunity” neighborhoods has demonstrated strong positive effects on future earnings for young children (Chetty, Hendren, and Katz 2016).
- **Accessibility and inclusion**: To prevent discrimination against voucher holders, an increasing number of states and localities have enacted source-of-income protection laws, which have improved the programs’ effectiveness (Bell, Sard, and Koepnick 2018).
- **Choice and agency**: In theory, tenant-based vouchers allow renters to live where they choose. Unfortunately, many voucher holders are unable to find landlords who will take their voucher, and vouchers tend to be concentrated in high-poverty, “minority-concentrated” neighborhoods. Changes to federal programs could allow more voucher holders to live in “high opportunity” neighborhoods (Sard et al. 2018).
**Fair Tenant Screening Practices**
The Fair Housing Act (FHA) of 1968 outlaws housing discrimination on the basis of certain protected characteristics, including race, color, religion, national origin, sex, disability, and familial status.\(^{32}\) For renters specifically, the FHA prohibits the denial of a rental unit to any member of a protected class through unjust screening criteria or landlord discrimination. Beyond protecting the privately held right to housing, the FHA also charges cities and counties with a proactive duty to affirmatively further fair housing in their communities, which has major implications for access to fair and affordable rental
units at the local level (Steil et al. 2021). Despite this federal mandate, private and institutional discrimination continues to impede equitable access to housing (Massey 2015). For example, individuals with criminal backgrounds—particularly people of color—are often screened out of the rental application process because of required reporting around prior histories of incarceration (Schneider 2018). This increases the risk of housing instability and recidivism for these individuals (Jacobs and Gottlieb 2020). Strengthening fair housing guidance and its enforcement and reducing the screening criteria for safe and affordable housing are key components of the housing justice movement.
APPLYING THE HOUSING JUSTICE PRINCIPLES TO FAIR TENANT SCREENING PRACTICES
- **Housing as a human right**: Fair housing law is rooted in the basic belief that everyone should have equal and unobstructed access to housing, regardless of identity. That belief will only be meaningfully realized for renters when fair housing regulations are rigorously enforced and the power imbalance that favors landlord discretion in tenant selection is corrected.
- **Primacy of lived experience**: Local assessments to affirmatively further fair housing have a community engagement component that allows local residents, particularly those “historically excluded because of characteristics protected by the Fair Housing Act,” to give input on fair housing issues and goals. Strengthening oversight around the criteria for and consistency of community engagement at every step of the planning process is key to properly giving voice to lived experience (Allen 2018).
- **Anti-racism and racial equity**: Racial stereotypes regularly influence landlord decision-making during the tenant screening process. These judgments include overt acts of prejudice against applicants of color as well as “race-blind” screening algorithms that weight eviction history, credit scores, and criminal legal system involvement—factors that are all highly correlated with race and disproportionately affect Black people. Housing justice demands an expansive understanding of discrimination in order to advance racial equity, moving beyond intentional acts of discrimination to include disparate impact as well (Bhatia 2020).
- **Accessibility and inclusion**: Expanding fair housing protections to include other historically marginalized groups, such as LGBTQ+ populations and people with criminal backgrounds, will broaden the pathway to meaningful inclusion. Several communities across the country have implemented “Ban the Box” measures to prevent landlords from screening out applicants with criminal records (Poulos 2020). Continuing to expand these protections nationally will help promote the proactive inclusion and reintegration of justice-involved populations.
- **Community and well-being**: Fair housing protections are just one piece of a broader, community-based toolkit to promote tenant rights, which can include building more subsidized housing and conducting landlord outreach and education on fair housing obligations through local planning under the FHA’s affirmatively furthering fair housing mandate (Bostic and Acolin 2018).
**Reparations**
A reparation is the making of amends for a wrongdoing by paying money to or helping those who have been wronged. In the context of this framework, housing justice can play a role in the practical implementation of reparations for American chattel slavery. Black Americans are disproportionately impacted by housing inequities, many of which have their roots in post-WWII redlining policies. But prior to redlining, the ownership and exploitation of Black Americans under slavery and subsequent Jim Crow laws prevented Black Americans from entering and remaining in the housing market and experiencing housing justice. In recent years there has been growing momentum to make reparations, including the return of land and cash endowments.\(^{36}\)
The City of Evanston has developed and implemented a local reparations program focused on achieving the following aims: (1) Revitalize, preserve, and stabilize Black owner-occupied homes in Evanston; (2) increase homeownership and build the wealth of Black residents; (3) build intergenerational equity among Black residents; and (4) improve the retention rate of Black homeowners in the City of Evanston.\(^{37}\)
APPLYING THE HOUSING JUSTICE PRINCIPLES TO REPARATIONS
- **Housing as a human right:** Reversing centuries-long practices that dehumanized Black Americans and excluded them from civil rights and human rights is at the heart of applying reparations policy in the context of housing justice.
- **Wealth-building and ownership:** The wealth gap between Black Americans and their white counterparts began during enslavement, was perpetuated during Jim Crow, intensified during the post-WWII era of redlining, and further intensified during the early 2000s window of predatory lending and the subsequent housing bubble burst (Weller and Roberts 2021). Offering reparations with the aim of housing justice provides an opportunity to narrow the wealth gap between Black Americans and their white counterparts.
- **Choice and agency**: During enslavement and Jim Crow, and prior to the passage of the Fair Housing Act, Black Americans had little choice and agency about what kind of housing they could access, where they could live, and how they could secure housing. This resulted in significant residential segregation across the country, a reality that persists to this day.\(^{38}\) Programs incorporating a reparations approach should ensure that a central tenet is the ability of recipients to self-determine. This may create challenges in the context of place-based investments that focus on historically redlined communities,\(^{39}\) but it should be considered nonetheless.
- **Community and well-being**: Because of the intersecting and perpetuating nature of racial inequities, reparations can be a tool to increase community and well-being, especially when grounded in an analysis of the barriers to housing justice that fall outside of lending and supply, which are most commonly emphasized. This can include investing in interventions that increase access to mental health services; that remediate the health impacts of poor air quality from proximity to environmental hazards (Millas Kaiman 2016); or that support equitable K–12 education funding (Harris et al. 2021), guaranteed basic income,\(^{40}\) free college,\(^{41}\) and universal childcare.\(^{42}\)
The examples outlined above provide an overview of how the housing justice framework can be applied to existing policies and programs. By working to align these policies with housing justice principles, policy makers and advocates can increase access to safe, affordable housing and promote wealth-building by confronting historical and ongoing harms and disparities caused by structural racism. As the Housing Justice project moves forward, Urban will explore additional policy areas and approaches and analyze their fit within the housing justice principles and framework.
Urban’s Role in Housing Justice
Through the Housing Justice Hub, Urban Institute is working to develop an evidence-informed approach to thinking about and achieving housing justice in three primary ways (figure 6).
First, we will convene and engage stakeholders to empower, equip, and support advocacy groups and policy leaders who have long been on the frontlines of the housing justice movement. Urban plans to provide a space for ongoing peer learning—by engaging governmental, nonprofit, community-based, and advocacy partners in a shared conversation around housing justice and the tools available to advance it. A key dimension of this conversation is to elevate lived experience through community-engaged research methods and the recruitment of a community advisory board to help steward the project.
Second, Urban will amplify and build knowledge in the growing field of housing justice by drawing on its deep expertise in housing research and policy, racial equity analytics, and strategic advising on cross-sector housing solutions. Data and research play a critical role in the housing justice movement. For example, data on the disproportional number of Black and Indigenous residents among people experiencing homelessness and housing insecurity has clarified how structural racism affects how and where people live. Going forward, data and tools that track whether policies lead to equitable outcomes will be vital in realizing housing justice.
Finally, we will work to advance housing justice by creating and sharing data tools and analyses to inspire research, policy solutions, and advocacy. Beyond simply documenting disparities, our work
aims to address the “how” and “why” of housing inequality by examining how the structural factors contributing to racial inequity are related to housing outcomes. Through data analysis, our tools will be able to forecast the impact of housing policies on target populations to better understand, for instance, how housing instability is affected by housing supply. Equipped with these evidence-informed insights, policymakers and community partners can strengthen how they design, implement, and monitor policies and programs to achieve housing justice for all.
Notes
1 Margery Turner and Solomon Green, "Causes and Consequences of Separate and Unequal Neighborhoods," Structural Racism Explainer Collection, Urban Institute, https://www.urban.org/racial-equity-analytics-lab/structural-racism-explainer-collection/causes-and-consequences-separate-and-unequal-neighborhoods
2 Brian H. Robb, “Homeownership and the American Dream,” Forbes, September 28, 2021, https://www.forbes.com/sites/forbesrealestatecouncil/2021/09/28/homeownership-and-the-american-dream/?sh=6b4504b423b5
3 Jerusalem Demsas, “The Housing Crisis Is the Top Concern for Urban Residents,” Vox, September 16, 2021, https://www.vox.com/2021/9/16/22674410/housing-crisis-homelessness-poll.
4 Freddie Mac Economic & Housing Research Group, “Housing Supply: A Growing Deficit,” May 2021, Freddie Mac, http://www.freddiemac.com/fmac-resources/research/pdf/202105-Note-Housing_Supply-08.pdf
5 Jared Bernstein, Jeffery Zhang, Ryan Cummings, and Matthew Maury, “Alleviating Supply Constraints in the Housing Market,” White House (blog), September 1, 2021, https://www.whitehouse.gov/cea/written-materials/2021/09/01/alleviating-supply-constraints-in-the-housing-market/
6 “HUD Releases 2020 Annual Homeless Assessment Report Part 1: Homelessness Increasing Even Prior to COVID-19 Pandemic,” US Department of Housing and Urban Development, March 18, 2021, https://www.hud.gov/press/press_releases_media_advisories/hud_no_21_041
7 Ta-Nehisi Coates, “The Case for Reparations,” The Atlantic, June 15, 2014, https://www.theatlantic.com/magazine/archive/2014/06/the-case-for-reparations/361631/
8 Sarah Gillespie and Samantha Batko, “Five Charts That Explain the Homelessness-Jail Cycle—and How to Break It,” Urban Institute (feature), September 15, 2020, https://www.urban.org/features/five-charts-explain-homelessness-jail-cycle-and-how-break-it.
9 Kilolo Kijakazi, Jonathan Schwabish, and Margaret Simms, “Racial Inequities Will Grow Unless We Consciously Work to Eliminate Them,” Urban Wire (blog), Urban Institute, July 1, 2020, https://www.urban.org/urban-wire/racial-inequities-will-grow-unless-we-consciously-work-eliminate-them
10 “Funding for Racial Equity,” Candid, https://candid.org/explore-issues/racial-equity. For example, as of July 13, 2022, Candid estimates that philanthropic donors have provided $15 billion toward racial equity since 2020.
11 Jung Hyun Choi, Liam Reynolds, and Vanessa G. Perry, “A five-point strategy for reducing the black homeownership gap,” Urban Wire (blog), Urban Institute, February 1, 2022, https://www.urban.org/urban-wire/how-place-based-special-purpose-credit-programs-can-reduce-racial-homeownership-gap
12 See, for example: Rick Jacobus, “Restorative Housing Policy: Can We Heal the Wounds of Redlining and Urban Renewal?” Shelterforce, May 31, 2022, https://shelterforce.org/2022/05/31/restorative-housing-policy-can-we-heal-the-wounds-of-redlining-and-urban-renewal/
13 Urban Institute, “Housing Justice Hub,” https://www.urban.org/projects/housing-justice-hub
14 Silvia R. González, Rodrigo Dominguez-Villegas, and Kassandra Hernández, “Disparities in the Distribution of Paycheck Protection Program Funds Between Majority-White Neighborhoods and Neighborhoods of Color in California,” UCLA Latino Policy and Politics Institute, December 17, 2020, https://latino.ucla.edu/research/disparities-ppp-neighborhoods-california/
15 Carolina Reid, "Crisis, Response, and Recovery: The Federal Government and the Black/White Homeownership Gap," Terner Center for Housing Innovation, March 31, 2021, https://ternercenter.berkeley.edu/research-and-policy/crisis-response-and-recovery-the-federal-government-and-the-black-white-homeownership-gap/
16 Urban Institute, "Housing Justice Library." https://www.urban.org/projects/housing-justice-hub/housing-justice-library
17 Mark Lopez and Richard Rothstein, "Segregated By Design," Austin, TX, Silkworm Studio, https://www.segregatedbydesign.com/
18 Alex Dalby, Christi Staples, Brigid Boyd, Sarah Bartley, Sam Zito, and Joyce Tavon, "3 Steps to Achieve Housing Justice for All," United Way Massachusetts Bay and Merrimack Valley (blog), March 31, 2022, https://unitedwaymassbay.org/blog/3-steps-to-achieve-housing-justice/
19 Specifically, we consulted with the National Race Equity Working Group and the Homelessness Policy Research Institute’s Race Equity Committee.
20 See the following webpages (all accessed July 15, 2022): National Coalition for Housing Justice, https://nchj.org/, Alliance for Housing Justice, https://www.allianceforhousingjustice.org/, UCLA Luskin Institute on Inequality and Democracy, https://challengeinequality.luskin.ucla.edu/housing-justice/
21 Yonah Freemark, Lydia Lo, Eleanor Noble, and Ananya Hariharan, “Cracking the Zoning Code,” Urban Institute, May 2022, https://apps.urban.org/features/advancing-equity-affordability-through-zoning/
22 Research from the Othering & Belonging Institute found that in the San Francisco Bay Area, for example, 85 percent of land is single-family zoned (https://belonging.berkeley.edu/single-family-zoning-san-francisco-bay-area) and in the Los Angeles region, 78 percent of land is single-family zoned (https://belonging.berkeley.edu/single-family-zoning-greater-los-angeles). Outside of California, 72 percent of land in Minneapolis (Twin Cities) and 70 percent in Connecticut is zoned for single-family homes.
23 See this section of Urban Institute’s “Cracking the Zoning Code” feature for an overview: https://apps.urban.org/features/advancing-equity-affordability-through-zoning/#equity
24 See this letter from a broad coalition of civil rights, community organizing, and affordable housing advocacy organizations to the Biden administration in June 2021, in response to proposed incentives for cities to change local zoning and land-use policies: https://ourfinancialsecurity.org/2021/06/letters-principles-for-equitable-zoning-reform/
25 See, for example, efforts in Louisville to diagnose the land development code and create an equitable development plan for the city: https://louisvilleky.gov/government/planning-design/land-development-code-reform
26 Jared Bernstein, Jeffery Zhang, Ryan Cummings, and Matthew Maury, “Alleviating Supply Constraints in the Housing Market,” White House (blog), September 1, 2021, https://www.whitehouse.gov/cea/written-materials/2021/09/01/alleviating-supply-constraints-in-the-housing-market/.
27 See this letter from a broad coalition of civil rights, community organizing, and affordable housing advocacy organizations to the Biden administration in June 2021 in response to proposed incentives for cities to change local zoning and land-use policies: https://ourfinancialsecurity.org/2021/06/letters-principles-for-equitable-zoning-reform/
28 See Urban Institute’s “Cracking the Zoning Code” for a list of potential reforms to prevent displacement while increasing housing supply: https://apps.urban.org/features/advancing-equity-affordability-through-zoning/#supply
29 "The Uncomfortable Facts about Biking and Minorities," Smart Cities Dive, accessed July 15, 2022, https://www.smartcitiesdive.com/ex/sustainablecitiescollective/uncomfortable-facts-about-biking-and-minorities/316886/
30 Mary K. Cunningham, “It’s Time to Reinforce the Housing Safety Net by Adopting Universal Vouchers for Low-Income Renters,” Urban Wire (blog), Urban Institute, April 7, 2020, https://www.urban.org/urban-wire/its-time-reinforce-housing-safety-net-adopting-universal-vouchers-low-income-renters
31 Alicia Mazzara, “Expanding Housing Vouchers Would Cut Poverty and Reduce Racial Disparities,” Center on Budget and Policy Priorities, May 11, 2021, https://www.cbpp.org/blog/expanding-housing-vouchers-would-cut-poverty-and-reduce-racial-disparities
32 “Housing Discrimination Under the Fair Housing Act,” US Department of Housing and Urban Development, n.d., https://www.hud.gov/program_offices/fair_housing_equal_opp/fair_housing_act_overview
33 “Affirmatively Furthering Fair Housing (AFFH),” US Department of Housing and Urban Development, https://www.hud.gov/AFFH
34 Affirmatively Furthering Fair Housing, 80 Fed. Reg. 42272 (July 16, 2015), https://www.federalregister.gov/documents/2015/07/16/2015-17032/affirmatively-furthering-fair-housing
35 National Low Income Housing Coalition, “Racial Stereotypes Pervade Tenant Screening Processes,” Memo to Members, October 4, 2021, https://nlihc.org/resource/racial-stereotypes-pervade-tenant-screening-processes.
36 An example of return of land is Los Angeles County and the State of California approving the return of beachfront property in Manhattan Beach to the descendants of Willa and Charles Bruce: https://www.gov.ca.gov/2021/09/30/moving-to-right-historical-wrong-governor-newsom-signs-legislation-to-return-bruces-beach-to-black-descendants/. An example of cash endowments is Harvard University’s cash endowments for slavery reparations: https://www.reuters.com/world/us/harvard-sets-up-100-million-endowment-fund-slavery-reparations-2022-04-26/
37 For more information on Evanston’s Local Reparations Restorative Housing Program, see: https://www.cityofevanston.org/government/city-council/reparations
38 Rashawn Ray, Andre M. Perry, David Harshbarger, Samantha Elizondo, and Alexandra Gibbons, “Homeownership, Racial Segregation, and Policy Solutions to Racial Wealth Equity,” Brookings Institution, September 1, 2021, https://www.brookings.edu/essay/homeownership-racial-segregation-and-policies-for-racial-wealth-equity/.
39 Andre M. Perry and David Harshbarger, “America’s Formerly Redlined Neighborhoods Have Changed, and So Must Solutions to Rectify Them,” Brookings Institution, October 14, 2019, https://www.brookings.edu/research/americas-formerly-redlines-areas-changed-so-must-solutions/
40 Kiara Alfonseca, “Guaranteed Income Experiment for Black Women Aims to Tackle Racial Wealth Gap,” ABC News, January 12, 2022, https://abcnews.go.com/US/guaranteed-income-experiment-black-women-aims-tackle-racial/story?id=82073348
41 Jeff Green and Ella Ceron, “California Panel Proposes Reparations, Free College for Black Students,” Bloomberg, June 1, 2022, https://www.bloomberg.com/news/articles/2022-06-01/california-panel-proposes-reparations-free-college-for-black-students
42 Rasheed Malik and Jamal Hagler, “Black Families Work More, Earn Less, and Face Difficult Child Care Choices,” Center for American Progress, August 5, 2016, https://www.americanprogress.org/article/black-families-work-more-earn-less-and-face-difficult-child-care-choices/
Allen, Michael. 2018. “Speaking Truth to Power: Enhancing Community Engagement in the Assessment of Fair Housing Process.” In *A Shared Future: Fostering Communities of Inclusion in an Era of Inequality*, edited by Jonathan Spader, Shannon Rieger, Christopher Herbert, and Jennifer Molinsky, 252–266. Cambridge, MA: Harvard Joint Center for Housing Studies.
Bell, Allison, Barbara Sard, and Becky Koepnick. 2018. “*Prohibiting Discrimination Against Renters Using Housing Vouchers Improves Results.*” Washington, DC: Center on Budget and Policy Priorities.
Bhatia, Shivangi. 2020. “‘To Otherwise Make Unavailable’: Tenant Screening Companies’ Liability Under the Fair Housing Act’s Disparate Impact Theory.” *Fordham Law Review* 88: 2551–2583.
Bostic, Raphael, and Arthur Acolin. 2018. “The Potential for HUD’s Affirmatively Furthering Fair Housing Rule to Meaningfully Increase Inclusion.” In *A Shared Future: Fostering Communities of Inclusion in an Era of Inequality*, edited by Jonathan Spader, Shannon Rieger, Christopher Herbert, and Jennifer Molinsky, 236–251. Cambridge, MA: Harvard Joint Center for Housing Studies.
Chetty, Raj, Nathaniel Hendren, and Lawrence Katz. 2016. “The Effects of Exposure to Better Neighborhoods on Children: New Evidence from the Moving to Opportunity Experiment.” *American Economic Review* 106 (4): 855–902.
Dunivin, Zachary, Harry Yan, Jelani Ince, and Fabio Rojas. 2022. “Black Lives Matter protests shift public discourse.” *Proceedings of the National Academy of Sciences of the United States of America* 119 (10).
Fedorowicz, Martha, Joe Schilling, and Emily Bramhall with Brian Bieretz, Yipeng Su, and K. Steven Brown. 2020. *Leveraging the Built Environment for Health Equity*. Washington, DC: Urban Institute.
FrameWorks Institute. 2022. How Is Culture Changing in This Time of Social Upheaval? Findings from the Culture Change Project. Washington, DC: FrameWorks Institute.
Harris, Khalilah, Arshi Pathak, Marshall Anthony Jr., Jessica Yin, Laura Dallas McSorley, and Jill Rosenthal. 2021. *Budget Reconciliation Must Support a Quality Education for All Students*. Washington, DC: Center for American Progress.
Immergluck, Dan. 2015. “A Look Back: What We Now Know About the Causes of the US Mortgage Crisis.” *International Journal of Urban Sciences* 19 (3), 269–285.
Jacobs, Leah A., and Aaron Gottlieb. 2020. “The Effect of Housing Circumstances on Recidivism.” *Criminal Justice and Behavior* 47(9), 1097–1115.
Korver-Glenn, Elizabeth. 2018. “Compounding Inequalities: How Racial Stereotypes and Discrimination Accumulate across the Stages of Housing Exchange.” *American Sociological Review* 83 (4): 627–656.
Langowski, Jamie, William Berman, Grace Brittan, Catherine LaRaia, Jee-Yeon Lehmann, and Judson Woods. 2020. *Qualified Renters Need Not Apply: Race and Voucher Discrimination in the Metro Boston Rental Housing Market*. Boston, MA: The Boston Foundation.
Manville, Michael, Paavo Monkkonen, and Michael Lens. 2020. “It’s Time to End Single-Family Zoning.” *Journal of the American Planning Association* 86 (1): 106–112.
Massey, Douglas S. 2015. “The Legacy of the 1968 Fair Housing Act.” *Sociological Forum* 30 (S1): 571–588.
Massey, Douglas S., and Jacob S. Rugh. 2018. “The Great Recession and the Destruction of Minority Wealth.” *Current History* 117(802): 298–303.
Metcalf, Ben, David Garcia, Ian Carlton, and Kate Macfarlane. 2021. “Will Allowing Duplexes and Lot Splits on Parcels Zoned for Single-Family Create New Homes?” Berkeley, CA: Terner Center for Housing Innovations.
Millas Kaiman, Catherine. 2016. “Environmental Justice and Community-Based Reparations.” *Seattle University Law Review* 39(4): 1327–1374.
NLIHC (National Low Income Housing Coalition). 2022. “The Gap: A Shortage of Affordable Rental Homes.” Washington, DC: NLIHC.
Olivet, Jeffrey, Marc Dones, Molly Richard, Catriona Wilkey, Svetlana Yampolskaya, Maya Beit-Arie, and Lunise Joseph. 2018. “Supporting Partnerships for Anti-Racist Communities: Phase One Study Findings.” Needham, MA: Center for Social Innovation.
Poulos, Christopher. 2020. “Criminal Record Based Housing Discrimination Harms Public Safety.” *Seattle Journal for Social Justice* 19: 399–407.
Rothstein, Richard. 2018. *The Color of Law: A Forgotten History of How Our Government Segregated America*. New York: Liveright Publishing Corporation.
Sard, Barbara, Douglas Rice, Allison Bell, and Alicia Mazzara. 2018. “Federal Policy Changes Can Help More Families with Housing Vouchers Live in Higher-Opportunity Areas.” Washington, DC: Center on Budget and Policy Priorities.
Schneider, Valerie. 2018. “The Prison to Homelessness Pipeline: Criminal Record Checks, Race, and Disparate Impact.” *Indiana Law Journal* 93(2): 421–455.
Steil, Justin P., Nicholas F. Kelly, Lawrence J. Vale, and Maia S. Woluchem, eds. 2021. *Furthering Fair Housing: Prospects for Racial Justice in America’s Neighborhoods*. Philadelphia: Temple University Press.
Taylor, Keeanga-Yamahtta. 2019. *Race for Profit: How Banks and the Real Estate Industry Undermined Black Homeownership*. Chapel Hill, NC: University of North Carolina Press.
Tighe, J. R., Megan Hatch, and Joseph Mead. 2017. “Source of Income Discrimination and Fair Housing Policy.” *Journal of Planning Literature* 32(1), 3–15.
Weller, Christian E. and Lily Roberts. 2021. *Eliminating the Black-White Wealth Gap Is a Generational Challenge*. Washington, DC: Center for American Progress.
Winkler, Anne E. 1993. “The Living Arrangements of Single Mothers with Dependent Children: An Added Perspective.” *The American Journal of Economics and Sociology* 52(1): 1–18.
Yixia Cai, Julie, Shawn Fremstad, and Simran Kalkat. 2021. *Housing Insecurity by Race and Place During the Pandemic*. Washington, DC: Center for Economic and Policy Research.
Bill Pitkin is a senior policy fellow in the Research to Action Lab and the Metropolitan Housing and Communities Policy Center at the Urban Institute. He leads work on housing justice and racial equity, upward mobility, and affordable housing.
Katharine Elder is a research analyst in the Metropolitan Housing and Communities Policy Center at the Urban Institute. Her main research interests include fair housing reform, homelessness, and rental assistance.
Danielle DeRuiter-Williams is the associate director of racial equity for the Research to Action Lab, where she stewards the Lab’s racial equity and diversity, equity, and inclusion efforts and collaborates on projects at the Lab and the Metropolitan Housing and Communities Policy Center. DeRuiter-Williams also contributes to the Office of Racial Equity Research to manage funder relationships and support implementation activities across several projects.
STATEMENT OF INDEPENDENCE
The Urban Institute strives to meet the highest standards of integrity and quality in its research and analyses and in the evidence-based policy recommendations offered by its researchers and experts. We believe that operating consistent with the values of independence, rigor, and transparency is essential to maintaining those standards. As an organization, the Urban Institute does not take positions on issues, but it does empower and support its experts in sharing their own evidence-based views and policy recommendations that have been shaped by scholarship. Funders do not determine our research findings or the insights and recommendations of our experts. Urban scholars and experts are expected to be objective and follow the evidence wherever it may lead.
500 L'Enfant Plaza SW
Washington, DC 20024
www.urban.org
Numerical Solution of the Isaacs Equation for Differential Games with State Constraints*
E. Cristiani * M. Falcone **
* ENSTA, 32 Blvd Victor, 75739, Paris, FRANCE (e-mail: firstname.lastname@example.org)
** Dipartimento di Matematica, SAPIENZA – Università di Roma, P.le A. Moro, 2 - 00185 Roma (e-mail: email@example.com)
Abstract: We present a numerical approximation for differential games with state constraints. The scheme is based on dynamic programming and on the discretization of the Isaacs equation which describes the value function of the game. Once the approximate value function has been computed we can construct a numerical synthesis of feedback controls in order to reconstruct the corresponding optimal trajectories. Some numerical tests are presented and discussed.
1. INTRODUCTION
In this paper we present a numerical approximation scheme for general differential games with state constraints. In fact, we want to extend our approach for 2-player pursuit-evasion games with state constraints presented in [13] to more general situations where the dynamics is coupled and the effect of the strategies chosen by every player affects all the components (a more precise description of the dynamics will be presented in Section 2). The scheme is based on the dynamic programming approach and derives from a natural generalization of the unconstrained approximation scheme (see the survey papers [5, 14] for a general introduction). Unfortunately, we are not able to give a proof of convergence for our algorithm because a precise definition of viscosity solution for the general constrained case is still missing. Our contribution here is mainly at the experimental and numerical level. However, we will show (in Section 3) some interesting examples where our algorithm is able to build a solution of the Isaacs equation and the corresponding optimal trajectories for the two players. The qualitative behaviour of the optimal strategies looks rather accurate, and this motivates an additional effort to analyze the problem and prove a convergence theorem. To put our paper into perspective, note that in [5] the convergence of the fully-discrete solution to the solution of the continuous problem was proved in the free (i.e. unconstrained) case, but this result cannot be directly extended to the constrained case. In [8] a convergence result is proved for constrained control problems, but it strictly relies on the fact that the time-discrete value function is continuous, so we cannot apply the same ideas here.
To deal with generalized differential games, we adapt to the discrete problem the definitions of admissible controls presented in [17], since they are not restricted to pursuit-evasion games, i.e. games where each player controls only his own dynamics. It should be noted that very few results on constrained differential games are available, although several interesting problems with state constraints have been studied in the literature by Isaacs [16] and Breakwell [7]. The aim of those contributions is mainly to compute the optimal trajectories without solving the Isaacs equation. The main theoretical contributions to the characterization of the value function for state constrained problems are, to our knowledge, the papers by Alziary de Roquefort [1], Bardi et al. [6], and Cardaliaguet, Quincampoix, and Saint-Pierre [9]. From the numerical point of view the list of contributions is even shorter. The first examples of computed optimal trajectories for pursuit-evasion games appeared in the work by Alziary de Roquefort [2]. In Bardi et al. [5] there are some interesting tests in $\Omega \subset \mathbb{R}^2$ with state constraints and discontinuous value function. In [3] the effect of the boundary conditions for the free problem in $\mathbb{R}^4$ is studied. In Cardaliaguet, Quincampoix, and Saint-Pierre [10] a convergence result is presented for a modified viability kernel algorithm (see [11] for more details on this approach). Finally, in [13] we have shown convergence of our algorithm for pursuit-evasion games.
2. THEORETICAL BACKGROUND AND NOTATIONS
Let us start by introducing the problem and our notation. A target set $T \subset \mathbb{R}^n$ is given and it is assumed to be closed. The system describing the dynamics is
$$\begin{cases}
\dot{y}(t) = f(y(t), a(t), b(t)), & t > 0 \\
y(0) = x
\end{cases}$$
where $y(t)$ is the state of the system, $a(\cdot) \in A$ and $b(\cdot) \in B$ are respectively the controls of the first and the second player, $A$ and $B$ being the sets of admissible controls defined as
$$A = \{a(\cdot): [0, +\infty) \to A, \text{ measurable}\},$$
$$B = \{b(\cdot): [0, +\infty) \to B, \text{ measurable}\},$$
and $A$ and $B$ are given compact sets of $\mathbb{R}^m$. We will always assume that $f : \mathbb{R}^n \times A \times B \to \mathbb{R}^n$ is continuous in all three variables and that there exists $L > 0$ such that
$$|f(y_1, a, b) - f(y_2, a, b)| \leq L|y_1 - y_2| \quad \text{for all } y_1, y_2 \in \mathbb{R}^n,\ a \in A,\ b \in B. \tag{2}$$
We will denote the solution of (1) by \( y_x(t; a(\cdot), b(\cdot)) \). In our generalized Pursuit-Evasion game the first player, called the *Pursuer* and denoted by \( P \), wants to drive the system to \( T \). The second player, called the *Evader* and denoted by \( E \), wants to drive the system away.
Note that in [13] we have proposed an algorithm for the special case where \( y = (y_P, y_E) \) and \( f(y, a, b) = (f_P(y_P, a), f_E(y_E, b)) \). We deal with the natural extension of the minimum time problem, so we define the *payoff* of the game as the first time of arrival \( T(x) \) (if any) on the target \( T \) for the solution trajectory of (1) starting at \( x \).
Note that, as usual, we set \( T(x) = +\infty \) if the trajectory never reaches the target.
As we said in the introduction, we want to construct a numerical approximation for *differential games with state constraints*. This means that both players have to keep the system in a given bounded domain \( Q \) and satisfy additional state constraints (if any) described by a set \( C \). We will denote by \( \overline{\Omega} \subset \mathbb{R}^n \) the set describing *all* the constraints (i.e. \( \overline{\Omega} \equiv Q \cap C \)). The analysis of the continuous model with state constraints via dynamic programming techniques, which is the basis for our approximation, can be found in [17, 6]. In order to simplify the presentation, we will assume in the sequel that \( T \subset Q \) (this is not restrictive since we can always redefine the target to be \( T \cap Q \)).
Let us now give the time-discrete and the corresponding fully-discrete versions of the differential game with state constraints.
### 3. THE FULLY-DISCRETE APPROXIMATION SCHEME
We will consider a discrete version of the dynamics based on the Euler scheme, namely
\[
\begin{cases}
y_{n+1} = y_n + hf(y_n, a_n, b_n) \\
y_0 = x
\end{cases}
\]
We denote by \( y(n; x, \{a_n\}, \{b_n\}) \) its solution at time \( nh \). The state constraints require that \( y(n; x, \{a_n\}, \{b_n\}) \in \overline{\Omega} \) for all \( n \in \mathbb{N} \).
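For concreteness, the constrained Euler dynamics above can be sketched in C++ (the language of the code used for the tests in Section 4). This is a minimal illustration under names of our choosing, not the paper's implementation: it advances the state with constant controls and flags the pair of controls as inadmissible as soon as the trajectory leaves $\overline{\Omega}$.

```cpp
#include <array>
#include <functional>
#include <vector>

// Illustrative types for a two-dimensional state; the scheme is the same in R^n.
using State = std::array<double, 2>;
using Control = std::array<double, 2>;
using Dynamics = std::function<State(const State&, const Control&, const Control&)>;
using ConstraintSet = std::function<bool(const State&)>;  // true iff y is in Omega

// One Euler step: y_{n+1} = y_n + h f(y_n, a_n, b_n).
State euler_step(const Dynamics& f, const State& y,
                 const Control& a, const Control& b, double h) {
    State fy = f(y, a, b);
    return {y[0] + h * fy[0], y[1] + h * fy[1]};
}

// Run n steps with constant controls for simplicity; stop (and report failure)
// as soon as the trajectory leaves Omega, i.e. the controls are not admissible.
bool run_constrained(const Dynamics& f, const ConstraintSet& omega,
                     State y, const Control& a, const Control& b,
                     double h, int n, std::vector<State>* traj) {
    traj->push_back(y);
    for (int k = 0; k < n; ++k) {
        y = euler_step(f, y, a, b, h);
        if (!omega(y)) return false;  // state constraint violated at step k+1
        traj->push_back(y);
    }
    return true;
}
```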
Let us define
\[
\begin{align*}
A^h &:= \{ \{a_n\} : a_n \in A, \text{ for all } n \} \\
B^h &:= \{ \{b_n\} : b_n \in B, \text{ for all } n \}.
\end{align*}
\]
Adapting the definitions in [17] to the discrete case, we define the set of *admissible pairs* of controls at \( x \in \overline{\Omega} \)
\[
AP(x) = \{ (\{a_n\}, \{b_n\}) \in A^h \times B^h : y(n; x, \{a_n\}, \{b_n\}) \in \overline{\Omega} \text{ for all } n \}
\]
and then the sets of *admissible controls* for each player
\[
\begin{align*}
A^h_x &\equiv \{ \{a_n\} \in A^h : \exists \{b_n\} \in B^h \text{ such that } (\{a_n\}, \{b_n\}) \in AP(x) \} \\
B^h_x &\equiv \{ \{b_n\} \in B^h : \exists \{a_n\} \in A^h \text{ such that } (\{a_n\}, \{b_n\}) \in AP(x) \}.
\end{align*}
\]
We will always assume that \( A^h_x \neq \emptyset \) (or equivalently \( B^h_x \neq \emptyset \)) for all \( x \in \overline{\Omega} \).
Let us also define the following subsets of \( A \) and \( B \):
\[
\begin{align*}
A_h(x, b) &:= \{ a \in A : x + hf(x, a, b) \in \overline{\Omega} \}, \quad x \in \overline{\Omega} \\
B_h(x, a) &:= \{ b \in B : x + hf(x, a, b) \in \overline{\Omega} \}, \quad x \in \overline{\Omega}.
\end{align*}
\]
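When $A$ and $B$ are discretized into finite sets of control values, as in the numerical tests of Section 4, the set $A_h(x, b)$ can be computed by direct filtering. A sketch with illustrative types of our choosing ($B_h(x, a)$ is obtained symmetrically):

```cpp
#include <array>
#include <functional>
#include <vector>

using State = std::array<double, 2>;
using Control = std::array<double, 2>;
using Dynamics = std::function<State(const State&, const Control&, const Control&)>;
using InOmega = std::function<bool(const State&)>;  // true iff the point is in Omega

// A_h(x, b): controls a of the first player such that the Euler step
// x + h f(x, a, b) stays inside the constraint set Omega.
std::vector<Control> admissible_a(const std::vector<Control>& A,
                                  const Dynamics& f, const InOmega& omega,
                                  const State& x, const Control& b, double h) {
    std::vector<Control> out;
    for (const Control& a : A) {
        State fx = f(x, a, b);
        State next{x[0] + h * fx[0], x[1] + h * fx[1]};
        if (omega(next)) out.push_back(a);  // keep only admissible controls
    }
    return out;
}
```

Assumption (3) below states precisely that, for $h$ small enough, this filtered set is never empty.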
We will also assume that
\[
\begin{cases}
\exists h_0 > 0 : A_h(x, b) \neq \emptyset \text{ and } B_h(x, a) \neq \emptyset \\
\forall (h, x) \in (0, h_0] \times \overline{\Omega}, \ a \in A, \ b \in B
\end{cases}
\] (3)
**Definition 1.**
A *strategy* for the first player is a map \( \alpha_x : B^h_x \to A^h_x \). It is *nonanticipating* if \( \alpha_x \in \Gamma^h_x \), where
\[
\begin{align*}
\Gamma^h_x &:= \{ \alpha_x : B^h_x \to A^h_x : b_n = \tilde{b}_n \text{ for all } n \leq n' \\
&\quad \text{implies } \alpha_x[\{b_k\}]_n = \alpha_x[\{\tilde{b}_k\}]_n \text{ for all } n \leq n' \}.
\end{align*}
\] (4)
Let us define the reachable set as the set of starting points from which the system can be driven to the target
\[
\mathcal{R}^h := \left\{ x \in \mathbb{R}^n : \forall \{b_n\} \in B^h \ \exists \alpha \in \Gamma^h \text{ and } \bar{n} \in \mathbb{N} \text{ s.t. } y(\bar{n}; x, \alpha[\{b_n\}], \{b_n\}) \in T \right\}. \] (5)
Then, we define for \( x \in \mathcal{R}^h \)
\[
n_{min}(x, \{a_n\}, \{b_n\}) \equiv \min \{ n \in \mathbb{N} : y(n; x, \{a_n\}, \{b_n\}) \in T \}
\]
and
\[
n_h(x, \{a_n\}, \{b_n\}) \equiv \begin{cases}
n_{min}(x, \{a_n\}, \{b_n\}) & \text{for } x \in \mathcal{R}^h \\
+\infty & \text{for } x \notin \mathcal{R}^h
\end{cases}
\]
We will consider for our approximation the discrete lower value of the game, which is
\[
T_h(x) := \inf_{\alpha_x \in \Gamma^h_x} \sup_{\{b_n\} \in B^h_x} h n_h(x, \alpha_x[\{b_n\}], \{b_n\})
\]
and its Kruzkov transform
\[
v_h(x) := 1 - e^{-T_h(x)}, \quad x \in \overline{\Omega}. \] (6)
Note that a similar construction can be done for the upper value of the game. The Dynamic Programming Principle (DPP) for differential games with state constraints (under rather restrictive assumptions) is proved in [17], which also gives a characterization of the lower and upper values of the game in terms of the Isaacs equation. The discrete version of the DPP should lead to the following characterization of the time-discrete value function \( v_h \). For every \( x \in \overline{\Omega} \setminus T \)
\[
v_h(x) = \max_{b \in B} \min_{a \in A_h(x, b)} \{ \beta v_h(x + hf(x, a, b)) \} + 1 - \beta \] (7)
whereas
\[
v_h(x) = 0 \text{ for } x \in T \] (8)
where \( \beta \equiv e^{-h} \). Unfortunately, the resulting Hamilton-Jacobi-Isaacs equation (7)-(8) is not very general and does not include simple games like the pursuit-evasion games we studied, for example, in [13]. In fact, in [17] it is assumed that the second player can choose his control in \( B \) without any restriction due to the state constraints, and then only the first player has the responsibility to maintain the state of the system in \( \overline{\Omega} \). Although we cannot prove at this stage a more general DPP, we try to solve numerically the Hamilton-Jacobi-Isaacs equation in a more general framework in which every player must consider the choice of the other player in order to choose an admissible pair of controls \((a, b)\). So we substitute (7) by the following equation
\[
v_h(x) = \max_{b \in B_h(x)} \min_{a \in A_h(x, b)} \{ \beta v_h(x + hf(x, a, b)) \} + 1 - \beta \] (9)
where
\[
B_h(x) := \{ b \in B : A_h(x, b) \neq \emptyset \}
\]
for all \( x \in \overline{\Omega} \). This choice seems reasonable and appears to be the right one for solving differential games with coupled dynamics, as we will see in the next section.
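For finite control sets, one application of the right-hand side of (9) at a point $x$ is a direct max-min loop. The sketch below uses illustrative names of our choosing; `value` stands for whatever local reconstruction of $v_h$ is available at the arrival point:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <functional>
#include <limits>
#include <vector>

using State = std::array<double, 2>;
using Control = std::array<double, 2>;
using Dynamics = std::function<State(const State&, const Control&, const Control&)>;
using InOmega = std::function<bool(const State&)>;
using ValueFn = std::function<double(const State&)>;

// One application of the right-hand side of (9) at x:
//   max_{b in B_h(x)} min_{a in A_h(x,b)} beta * v_h(x + h f(x,a,b)) + 1 - beta
double update(const State& x, const std::vector<Control>& A,
              const std::vector<Control>& B, const Dynamics& f,
              const InOmega& omega, const ValueFn& value, double h) {
    const double beta = std::exp(-h);  // beta = e^{-h}
    double best_b = -std::numeric_limits<double>::infinity();
    for (const Control& b : B) {
        double best_a = std::numeric_limits<double>::infinity();
        for (const Control& a : A) {
            State fx = f(x, a, b);
            State next{x[0] + h * fx[0], x[1] + h * fx[1]};
            if (!omega(next)) continue;           // a is not in A_h(x, b)
            best_a = std::min(best_a, beta * value(next));
        }
        if (std::isinf(best_a)) continue;         // A_h(x,b) empty: b not in B_h(x)
        best_b = std::max(best_b, best_a);
    }
    return best_b + 1.0 - beta;
}
```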
In order to obtain the fully-discrete equation we build a regular triangulation of $\overline{\Omega}$, denoting by $X$ the set of its nodes $x_i$, $i = 1, \ldots, N$, and by $S$ the set of simplices $S_j$, $j \in J \equiv \{1, \ldots, L\}$. $V(S)$ will denote the set of vertices of a simplex $S$, and the space discretization step will be denoted by $k$, where $k := \max_j \{\text{diam}(S_j)\}$.
The *fully-discrete approximation scheme* is, for $x_i \in (\overline{\Omega} \setminus T) \cap X$,
$$v_h^k(x_i) = \max_{b \in B_h(x_i)} \min_{a \in A_h(x_i, b)} \left\{ \beta v_h^k(x_i + hf(x_i, a, b)) \right\} + 1 - \beta$$
(10)
whereas the homogeneous Dirichlet boundary condition (8) becomes
$$v_h^k(x_i) = 0, \quad x_i \in T \cap X.$$
(11)
The local reconstruction of the term $v_h^k(x_i + hf(x_i, a, b))$ is obtained by linear interpolation, i.e.
$$v_h^k(x) = \sum_j \lambda_j(x)v_h^k(x_j), \text{ where}$$
$$0 \leq \lambda_j(x) \leq 1, \quad \sum_j \lambda_j(x) = 1 \quad x \in \overline{\Omega}.$$
(12)
As in the unconstrained problem, the choice of linear interpolation is not mandatory; we made it here just to simplify the presentation.
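On a uniform one-dimensional grid, for example, the weights $\lambda_j(x)$ in (12) reduce to the two-point linear interpolation coefficients; a minimal sketch (the names are ours):

```cpp
#include <cmath>
#include <vector>

// Piecewise-linear reconstruction on a uniform 1D grid over [x0, x0 + (N-1)*dx].
// The weights (1 - t, t) are nonnegative and sum to one, as required by (12).
double interpolate(const std::vector<double>& v, double x0, double dx, double x) {
    double s = (x - x0) / dx;
    int i = static_cast<int>(std::floor(s));
    if (i < 0) i = 0;                                    // clamp to the grid
    if (i > static_cast<int>(v.size()) - 2) i = static_cast<int>(v.size()) - 2;
    double t = s - i;
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    return (1.0 - t) * v[i] + t * v[i + 1];
}
```

On a triangulation of $\mathbb{R}^n$ the same idea applies with the barycentric coordinates of $x$ in the simplex containing it.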
Let us denote by $W^k$ the set
$$W^k := \{ w \in C(\overline{\Omega}) : \nabla w \text{ is constant on } S_j, \ j \in J \}.$$
The proof of the following theorem can be obtained with simple adaptations of the standard proof for the free fully-discrete scheme (see e.g. [5]).
**Theorem 1.** The problem (10), (11) has a unique solution $v_h^k \in W^k$ such that $v_h^k : \overline{\Omega} \to [0, 1]$.
Finally, we note that the theorem in [13] on the convergence of $v_h^k$ to $v_h$ as $k$ tends to 0 can easily be adapted to equation (10), although it was first stated in the particular case of pursuit-evasion games.
### 4. NUMERICAL EXPERIMENTS
In this section we present some numerical experiments for pursuit-evasion games as well as for general differential games. The code is written in C++ using OpenMP directives. The algorithm ran on an IBM System p5 575 equipped with eight 1.9-GHz Power5 processors and 32 GB of RAM, located at CASPUR ([www.caspur.it](http://www.caspur.it)).
We denote by $N$ the number of nodes in each dimension. In every case the controls $a$ and $b$ are chosen on the boundary of the two-dimensional unit ball $B(0, 1)$, plus the central point $(0, 0)$. We denote by $N_c$ the number of admissible directions/controls for each player.
We always solve the problem on a structured grid with four-dimensional cells of volume $\Delta x_1 \Delta x_2 \Delta x_3 \Delta x_4$ and we choose the (fictitious) time step $h$ such that
$$\| h f(x, a, b) \| \leq \min\{ \Delta x_1, \Delta x_2, \Delta x_3, \Delta x_4 \}$$
for all $x, a, b$ (so that the interpolation is made in the neighboring cells of the considered point). We adopt
$$\| V^{(p+1)} - V^{(p)} \|_\infty \leq \varepsilon, \quad \varepsilon > 0$$
as the stopping criterion for the fixed point iteration $V^{(p+1)} = F(V^{(p)})$ (where $V_i = v_h^k(x_i)$). We denote by $v(x)$ the approximate value function and by $T(x) = -\ln(1 - v(x))$ the time needed to reach the target. In the following, "CPU time" denotes the sum of the times taken by the CPUs and "wallclock time" the elapsed time.
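Putting the pieces together, the fixed-point iteration with the sup-norm stopping test can be sketched on a toy one-dimensional problem (single player, $f = a$ with $a \in \{-1, 0, 1\}$, target at the right endpoint, state constraint: stay on the grid over $[0, 1]$), recovering the minimum time through $T(x) = -\ln(1 - v(x))$ at the end. This is an illustration of the iteration only, not the parallel C++ code used for the tests:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Toy 1D instance of the fixed-point iteration V^{p+1} = F(V^p) with the
// sup-norm stopping test. The grid step equals h, so each Euler step lands
// exactly on a neighboring node and no interpolation is needed.
std::vector<double> solve_1d(int N, double eps) {
    double h = 1.0 / (N - 1);
    double beta = std::exp(-h);
    std::vector<double> v(N, 1.0);   // initial guess v = 1, i.e. T = +infinity
    v[N - 1] = 0.0;                  // homogeneous Dirichlet condition on the target
    double diff = 1.0;
    while (diff > eps) {             // stop when ||V^{p+1} - V^p||_inf <= eps
        diff = 0.0;
        for (int i = 0; i < N - 1; ++i) {
            double best = std::numeric_limits<double>::infinity();
            for (int a = -1; a <= 1; ++a) {
                int j = i + a;
                if (j < 0 || j >= N) continue;   // only admissible controls
                best = std::min(best, beta * v[j] + 1.0 - beta);
            }
            diff = std::max(diff, std::abs(best - v[i]));
            v[i] = best;
        }
    }
    return v;   // minimum time: T(x_i) = -log(1 - v[i])
}
```

On this toy problem the scheme reproduces the exact minimum time $T(x) = 1 - x$, since the target sits at $x = 1$ and the optimal control is $a = +1$ at unit speed.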
**Test 1 (Tag–Chase game)**
In this test we consider two boys $P$ and $E$ running one after the other in the same two-dimensional domain. The real game is played in a square $[-2, 2]^2$ so the problem is set in $Q = [-2, 2]^4$. The coordinates $(x_1, x_2)$ represent the position of the Pursuer and $(x_3, x_4)$ represent the position of the Evader. The Pursuer's dynamics is
$$\begin{cases}
f_1(x, a, b) = 2a_1 \\
f_2(x, a, b) = 2a_2
\end{cases}$$
(13)
and the Evader's dynamics is
$$\begin{cases}
f_3(x, a, b) = b_1 \\
f_4(x, a, b) = b_2 \quad \text{if } b_2 \geq 0
\end{cases}$$
(14)
$$\begin{cases}
f_3(x, a, b) = 3b_1 \\
f_4(x, a, b) = 3b_2 \quad \text{otherwise}
\end{cases}$$
(15)
so the Evader can run faster than the Pursuer when he moves down. We consider the state constraints due to the boundary of the square (players cannot exit the admissible domain $Q$) and, in addition, another constraint $C \equiv \{ x \in \mathbb{R}^4 : x_4 > x_2 \}$, so that the Evader must remain above the Pursuer. Although the dynamics is split, in the sense that the choice of a player does not affect the position of the other, the state constraints are coupled and depend on the global state of the system.
The numerical target is $T = \{(i, j, k, l) \in \{1, \ldots, N\}^4 : |i-k| \leq 1 \text{ and } |j-l| \leq 1 \}$, so the target is reached when capture occurs. We plot flags on the approximate optimal trajectories every few time steps, which makes it possible to follow the position of one player relative to the other during the game.
We choose $\varepsilon = 10^{-3}$, $N = 50$ and $N_c = 32 + 1$. Convergence was reached in 137 iterations. The CPU time was 1d 01h 26m, the wallclock time 3h 44m. Fig. 1 shows the optimal trajectory corresponding to the starting point $P = (-1.8, -1.9)$, $E = (-1.5, 1.5)$. We compare this solution with the solution of the problem in which we removed the constraint $x_4 > x_2$ (see Fig. 2). It is immediately seen that the behavior is completely different.
If the Evader is constrained to stay above the Pursuer, he moves down a little just to increase his velocity, but after a while he is pushed to the north boundary by the Pursuer.
In the absence of that constraint, the Evader waits until the Pursuer approaches the north boundary and then moves down faster than the Pursuer, so he is captured only when he touches the south boundary.
**Test 2**
In this test we consider a completely coupled dynamics. A ball is free to move in the square $[-5, 5]^2$. We indicate the position of the ball by $(x_1, x_2)$ and its velocity by $(x_3, x_4)$. The two players can move the ball by applying a force which depends on $(x_1, x_2)$. The first player wants to steer the ball to the target $T = \{(x_1, x_2) \mid x_1 \geq 4, x_2 \geq 0\}$, while the second player wants to steer the ball away. The dynamics is
$$
\begin{cases}
f_1(x, a, b) = x_3 \\
f_2(x, a, b) = x_4
\end{cases}
$$
(16)
$$
f_3(x, a, b) = \begin{cases}
4a_1 + b_1 & x_1 \leq 0 \\
2a_1 + 3b_1 & x_1 > 0
\end{cases}
$$
(17)
$$
f_4(x, a, b) = \begin{cases}
4a_2 + b_2 & x_1 \leq 0 \\
2a_2 + 3b_2 & x_1 > 0
\end{cases}
$$
(18)
This means that the first player can completely control the ball in the left side of the domain but not in the right side. We choose $\varepsilon = 10^{-3}$, $N = 40$ and $N_c = 24 + 1$. Convergence was reached in 132 iterations. The CPU time was 6h 14m, the wallclock time 52m. Fig. 3 shows the value function $T(x_1, x_2, 0, 0)$ (we fix the initial velocity at $(0, 0)$). It is immediately seen that if the ball starts from the right-hand side, the first player cannot steer it to the target, so the optimal time $T$ is $+\infty$. Fig. 4 shows an optimal trajectory corresponding to the starting point $(0, 0, 0, 0)$. We can see that at the beginning the first player moves the ball toward the left side of the domain, so that he can control and accelerate it. After that, he pushes the ball toward the target. When the ball enters the right side of the domain the second player tries to slow it down and move it away, but at this point the velocity of the ball is too high, so it reaches the target despite the second player's efforts.
**Test 3**
In this test we consider again a completely coupled dynamics. The aim is to stabilize a dynamical system. The dynamics is
$$
\begin{cases}
f_1(x, a, b) = (-3 + a_1)x_1 \\
f_2(x, a, b) = (-3 + a_2)x_2 \\
f_3(x, a, b) = (-3 + a_1 - 2b_1)x_3 \\
f_4(x, a, b) = (-3 + a_2 - 2b_2)x_4
\end{cases}
$$
(19)
We choose $\varepsilon = 10^{-3}$, $N = 38$ and $N_c = 2$ ($a_i, b_i = \pm 1$). Convergence was reached in 36 iterations. Fig. 5 shows an optimal trajectory corresponding to the starting point $(2, 2, -2, -2)$. We plotted the coordinates $(x_1, x_2)$ and $(x_3, x_4)$ separately for the reader's convenience (the first is plotted with circles, the second with squares). We can see that the two curves approach the origin in different times due to the action of the second player, which slows down the evolution of the system.
### 5. CONCLUSION
We have proposed an approximation scheme for general differential games with state constraints. According to the numerical tests, the approximation gives an appropriate qualitative description of the value function and of the corresponding optimal trajectories. These results motivate further analysis aimed at proving that the approximate solution computed by the algorithm converges, as $h$ and $k$ tend to 0, to the viscosity solution of the Isaacs equation.
### REFERENCES
[1] B. Alziary de Roquefort, *Jeux différentiels et approximation numérique de fonctions valeur, 1re partie: étude théorique*, RAIRO Modél. Math. Anal. Numér., **25** (1991), 517-533.
[2] B. Alziary de Roquefort, *Jeux différentiels et approximation numérique de fonctions valeur, 2e partie: étude numérique*, RAIRO Modél. Math. Anal. Numér., **25** (1991), 535-560.
[3] M. Bardi, S. Bottacin, M. Falcone, *Convergence of Discrete Schemes for Discontinuous Value Functions of Pursuit-Evasion Games*, in G. J. Olsder (ed.), "New Trends in Dynamic Games and Applications", Annals of the International Society of Dynamic Games, **3**, 273-304, Birkhäuser, 1995.
[4] M. Bardi, I. Capuzzo Dolcetta, *Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations*, Birkhäuser, 1997.
[5] M. Bardi, M. Falcone, P. Soravia, *Fully discrete schemes for the value function of pursuit-evasion games*, in T. Başar and A. Haurie (eds.), "Advances in Dynamic Games and Application", Annals of the International Society of Dynamic Games **1**, 89-105, Birkhäuser, 1994.
[6] M. Bardi, S. Koike, P. Soravia, *Pursuit-evasion games with state constraints: dynamic programming and discrete-time approximations*, Discrete Contin. Dynam. Systems, **6** (2000), 361-380.
[7] J. V. Breakwell, *Time-optimal pursuit inside a circle*, Differential Games and Applications (Sophia-Antipolis, 1988), Lecture Notes in Control and Inform. Sci. **119**, 72-85, Springer, Berlin, 1989.
[8] F. Camilli, M. Falcone, *Approximation of optimal control problems with state constraints: estimates and applications*, in B. S. Mordukhovich, H. J. Sussmann (eds.), "Nonsmooth analysis and geometric methods in deterministic optimal control", IMA Volumes in Applied Mathematics **78**, 23-57, Springer Verlag, 1996.
[9] P. Cardaliaguet, M. Quincampoix, P. Saint-Pierre, *Differential games with state-constraints*, in Proceedings of the Conference of the International Society of Dynamic Games, St. Petersburg, 2002.
[10] P. Cardaliaguet, M. Quincampoix, P. Saint-Pierre, *Pursuit differential games with state constraints*, SIAM J. Control Optim., **39** (2001), 1615-1632.
[11] P. Cardaliaguet, M. Quincampoix, P. Saint-Pierre, *Set valued numerical analysis for optimal control and differential games*, in M. Bardi, T. Parthasarathy and T. E. S. Raghavan (editors) "Stochastic and differential games: theory and numerical methods", Annals of the International Society of Dynamic Games **4**, 177-247, Birkhäuser, 1999.
[12] E. Cristiani, *Fast Marching and semi-Lagrangian methods for Hamilton-Jacobi equations with applications*, Ph.D. Thesis, Dipartimento di Metodi e Modelli Matematici per le Scienze Applicate, Università "La Sapienza", Rome, Italy, 2006.
[13] E. Cristiani, M. Falcone, *Fully-discrete schemes for the value function of Pursuit-Evasion games with state constraints*, to appear in Proceedings of the "12th International Symposium on Dynamic Games", Sophia Antipolis, France, 2006. Annals of the International Society of Dynamic Games. Preprint server: http://cpde.iac.rm.cnr.it/.
[14] M. Falcone, *Numerical methods for differential games based on partial differential equations*, International Game Theory Review, **8** (2006), 231-272.
[15] M. Falcone, *Some remarks on the synthesis of feedback controls via numerical methods*, in J. L. Menaldi, E. Rofman, A. Sulem (eds.), "Optimal Control and Partial Differential Equations", IOS Press, 2001, 456-465.
[16] R. Isaacs, *Differential Games*, John Wiley and Sons, 1965.
[17] S. Koike, *On the state constraint problem for differential games*, Indiana University Mathematics Journal, **44** (1995), 467-487.
[18] P. Soravia, *Estimates of convergence of fully discrete schemes for the Isaacs equation of pursuit-evasion differential games via maximum principle*, SIAM J. Control Optim., **36** (1998), 1-11. |
Temperature: Diet Interactions Affect Survival through Foraging Behavior in a Bromeliad-Dwelling Predator
Olivier Dézerald, Régis Céréghino, Bruno Corbara, Alain Dejean, Céline Leroy
To cite this version:
Olivier Dézerald, Régis Céréghino, Bruno Corbara, Alain Dejean, Céline Leroy. Temperature: Diet Interactions Affect Survival through Foraging Behavior in a Bromeliad-Dwelling Predator. Biotropica, 2015, 47 (5), pp. 569-578. 10.1111/btp.12249. hal-02084254
Olivier Dézerald\textsuperscript{1,7}, Régis Céréghino\textsuperscript{2,3}, Bruno Corbara\textsuperscript{4,5}, Alain Dejean\textsuperscript{1,2}, and Céline Leroy\textsuperscript{6}
\textsuperscript{1} Ecologie des Forêts de Guyane (UMR-CNRS 8172), CNRS, Campus Agronomique, F-97379, Kourou Cedex, France
\textsuperscript{2} UPS Laboratoire Ecologie Fonctionnelle et Environnement (ECOLAB), Université de Toulouse, INP, 31062, Toulouse, France
\textsuperscript{3} ECOLAB (UMR-CNRS 5245), CNRS, 118 Route de Narbonne, 31062, Toulouse, France
\textsuperscript{4} Clermont Université, Université Blaise Pascal, BP 10448, 63000, Clermont-Ferrand, France
\textsuperscript{5} Laboratoire Microorganismes: Génome et Environnement (UMR-CNRS 6023), CNRS, 63177, Aubière, France
\textsuperscript{6} UMR AMAP (botAnique et Modélisation de l'Architecture des Plantes et des végétations), IRD, Boulevard de la Lironde, TA A-51/PS2, 34398, Montpellier Cedex 5, France
ABSTRACT
Temperature, food quantity and quality play important roles in insect growth and survival, influencing population dynamics as well as interactions with other community members. However, the interaction between temperature and diet and its ecological consequences have been poorly documented. \textit{Toxorhynchites} are well-known biocontrol agents for container-inhabiting mosquito larvae. We found that \textit{Toxorhynchites haemorrhoidalis} larvae (Diptera: Culicidae) inhabiting water-filled rosettes of tank bromeliads catch and eat prey of both aquatic (mosquito larvae) and terrestrial origin (ants), using distinct predatory methods. They carried out frontal attacks on ants, but ambushed mosquito larvae. In choice tests, \textit{T. haemorrhoidalis} favored terrestrial prey. Temperature had a significant effect on predator development and survival through its interaction with diet, but did not alter the preference for ants. \textit{T. haemorrhoidalis} larvae emerged quickly when fed only mosquito larvae, whereas all individuals died before pupation when fed only ants. We conclude that behavioral factors (\textit{i.e.}, attraction to ants that disturb the surface of the water) overtake physiological factors (\textit{i.e.}, the adverse outcome of elevated temperature and an ant-based diet) in determining a predator’s response to temperature:diet interactions. Finally, because \textit{T. haemorrhoidalis} larvae preferentially feed on terrestrial insects in tank bromeliads, mosquito larvae may indirectly benefit from predation release.
Abstract in French is available with online material.
Key words: biocontrol agent; development; French Guiana; selective feeding behavior; tank bromeliad; \textit{Toxorhynchites haemorrhoidalis}.
Sub-optimal food conditions are particularly stressful for insects that must store sufficient resources during larval feeding stages to support the development, dispersal, and reproduction of adults, non-feeding adults in particular. The effects of temperature or food fluctuations on individual physiology and behavior are well-studied in herbivorous species (Behmer 2009). However, there have been few attempts to disentangle such effects in predaceous insects (but see Traniello et al. 1984).
*Toxorhynchites* culicids are well-known biocontrol agents against container-inhabiting mosquito larvae, although their effectiveness has been questioned because their biology and behavior are insufficiently characterized (Collins & Blackwell 2000, Focks 2007). Some predatory larvae of *Toxorhynchites* sp. inhabiting the water-filled rosettes of tank bromeliads (Bromeliaceae) forage at the water–air interface, where they prey on aquatic and terrestrial invertebrates (mosquito larvae and ants, respectively) throughout their larval life span (Linley 1995, Campos & Lounibos 2000). For instance, *T. haemorrhoidalis* (Fabricius) is common in bromeliad axils as well as *Heliconia* flower bracts in northern South America (Lounibos et al. 1987). Owing to their small catchment and high terrestrial:aquatic surface ratio, bromeliad pools contain suitable model organisms to assess if temperature-induced changes in metabolic demands alter predatory behavior as well as aquatic versus terrestrial prey selection in top predators. Assuming that (1) the metabolic demands of individuals increase with increasing temperature (Trpis 1972), and (2) *T. haemorrhoidalis* selectively feeds on the species (*i.e.*, either mosquito larvae or ants) that provides the highest energy intake under ambient conditions, we tested the hypothesis that the preference for a prey species would remain unchanged with experimental warming, despite an increase in prey consumption.
**METHODS**
**STUDY SPECIES AND SAMPLE COLLECTION.**—In French Guiana, the larvae of *T. haemorrhoidalis* are among the largest and most numerically dominant predators (including odonate and tabanid larvae) in the aquatic communities dwelling in tank bromeliads. These larvae grow through four instars and are considered generalist predators that can exhibit cannibalistic behavior.
To test how temperature:diet interactions affect *T. haemorrhoidalis* survival, we conducted experiments in the laboratory in Kourou, French Guiana, from March to July 2013. We sampled all of the aquatic insects (*i.e.*, *T. haemorrhoidalis* and other Culicidae larvae) from two tank bromeliad species. We sampled *Aechmea mertensii* Schult.f. (Bromeliaceae), which obligatorily grows on ant gardens (AGs, Benzing 2000) inhabited by either the ants *Camponotus femoratus* Fabr. (Formicidae) or *Neoponera goeldii* Forel (Ponerinae), near the Petit-Saut dam (05°03’30.0” N; 52°58’34.6” W). We sampled *Aechmea aquilega* Griseb. (Bromeliaceae) near the city of Sinnamary (05°22’42.9” N; 52°57’11.9” W). Contrary to *A. mertensii*, *A. aquilega* is facultatively associated with ants, which build their nests within the leaf rosette. To collect aquatic invertebrates from the tanks, we carefully emptied the wells in each plant by sucking out the water using 10-ml and 50-ml pipettes with the end trimmed to widen the orifice (Jabiol et al. 2009, Jocque et al. 2010). We pooled all of the invertebrate samples regardless of origin.
**GUT CONTENTS.**—We used gut contents to quantitate the relative importance of the various prey items. After collection in the field, we preserved 30 third/fourth instar larvae in formalin (4%) for subsequent dissection (*N* = 17 from *A. mertensii* and *N* = 13 from *A. aquilega*). We collected late instar larvae because, within invertebrate food webs, the largest individuals within a species have the greatest effect on energy flows (Céréghino 2006). We determined the diet of *T. haemorrhoidalis* larvae by dissecting the entire gut, and analyzing it with a microscope (Optiphot-2 Nikon®, Garden City, NY, US). Most prey items could be identified and enumerated by comparing chitinous fragments (*e.g.*, head capsules or the legs of insects and the setae of Oligochaeta) with specimens of tank bromeliad invertebrates archived in our collection (University of Toulouse 3, France).
**RESOURCE PREFERENCE AND PREDATION BEHAVIOR.**—Examination of the gut contents suggested that ants constituted a substantial fraction of *T. haemorrhoidalis*’ diet, but mosquito larvae are the most abundant prey species at our study site (Dézerald et al. 2014). Therefore, we conducted two-way choice tests on 30 third and fourth instar *T. haemorrhoidalis* larvae (*N* = 9 from *A. mertensii* and *N* = 21 from *A. aquilega*; body size = 8.29 ± 0.11, *N* = 30) by offering them *Crematogaster levior* ants (taken from AGs; body size = 1.72 ± 0.03, *N* = 30) and *Wyeomyia pertinans* mosquito larvae (body size = 3.91 ± 0.17, *N* = 30). We placed *T. haemorrhoidalis* larvae into separate plastic tubes (diameter = 3 cm; height = 7 cm; water volume = 40 ml) behind a rigid plastic strip (width = 3 cm; length = 8 cm) at ambient temperature (water temperature = 25 ± 0.5°C). On the other side of this strip, we placed one mosquito larva in the water column and one ant on the surface of the water. The plastic strips prevented premature attacks while the prey were being added. After 10 sec, we pulled the strip out of each tube and recorded the predator’s vertical position in the water column, which prey species was attacked first, the total number of attacks per prey item, and which prey was consumed. After a deadly attack, the other prey was removed. If no attack occurred after 15 min, we removed both prey. Finally, we repeated the two-way choice tests for each individual predator at 3-day intervals. We did not feed the larvae between tests.
A total of 90 tests (30 tested individuals across three successive tests) were validated. Before testing for a potential prey preference, we verified whether *T. haemorrhoidalis* individuals displayed learning in their predatory behavior. We used an extension of generalized linear models (GLMs), generalized estimating equations (GEEs), because GEEs accommodate repeated observations on the same individual (Liang & Zeger 1986). In GEEs, an association structure between subsequent observations or measures from the same individual must be specified. We recorded behavioral observations every 3 days (time-ordered dataset), so we selected an auto-regressive correlation structure (Zuur et al. 2009). We tested the null hypothesis that the number of attacks
toward ants and mosquito larvae was the same in the three successive tests. Conversely, if attacks increased or decreased across successive tests, it would suggest that learning had occurred. In these models, the response variables were the species of prey that was attacked first, the total number of attacks directed toward ants or mosquitoes, and the prey species that received deadly attacks. For each response variable, two separate models were used for ants and mosquitoes. Finally, we carried out proportion tests with Yates’ continuity correction for one sample on the total number of first attacks, the total number of attacks, and the total number of deadly attacks directed toward ants and mosquitoes. We conducted these proportion tests on either all tests pooled or each successive test separately, depending on the GEE results. The latter analyses allowed us to assess whether predatory larvae preferred either ants or mosquitoes, while taking potential learning into account.
**Effect of temperature and diet on prey consumption and development.**—We collected 63 first instar *T. haemorrhoidalis* from the field (*N* = 11 from *A. mertensii* and *N* = 52 from *A. aquilega*) and placed them into separate plastic tubes (diameter = 3 cm; height = 7 cm) in the laboratory at ambient temperature (25 ± 0.5°C). We fed them mosquito larvae (*W. pertinans*) until they reached the third instar. Then, we randomly conditioned the larvae at three different water temperatures. We selected the experimental temperatures according to a pilot study, in which we placed small data loggers (iButtons®; Maxim Corporation, Dallas, Texas, U.S.A.) in the central reservoir of two bromeliads located in forested and open areas for 2 weeks during the dry season, monitoring temperatures every hour. Temperatures oscillated between 22 and 33°C, with means of 24.6 ± 0.06°C and 28.3 ± 0.08°C (±SE, *N* = 425) in the forested and open areas, respectively. Therefore, we placed the tubes in large plastic trays filled with water at 25 ± 0.5°C, 29 ± 0.5°C and 33 ± 0.5°C (21 tubes per tray, one *T. haemorrhoidalis* per tube). We set the water temperature in the trays using 50-watt electric immersion heaters for aquariums (Visitherm Eco®, http://www.aquariumsystems.eu/). Finally, we provided different food items to fourth instar larvae (Fig. 1). For each temperature treatment, we fed seven larvae *ad libitum* with either mosquito larvae or ants, and provided the remaining seven individuals with ants and mosquito larvae in equal proportions. In the latter treatment, if the *T. haemorrhoidalis* ate all of the mosquito larvae or ants, we added more individuals from both taxa. To prevent the ants from escaping and for the sake of consistency among treatments, we put the tube caps on top of all tubes (unscrewed), although the ants were rarely able to leave the water surface and climb the tube walls.
We recorded elements of the predator’s development every 3 days for the rest of their aquatic cycle, namely: days spent in the trays, if the larvae pupated or died, number and type of prey consumed, and mean consumption rate. We calculated the latter variable as the number of prey available for predatory larvae minus the number of live prey in the tubes after 3 days, divided by the total number of days spent in the trays. Since prey occasionally died due to unknown causes (no apparent signs of ‘wasteful’ killing behavior by predatory larvae), we replaced both eaten and dead prey. At the end of the experiment, we collected the *T. haemorrhoidalis* pupae, and oven-dried them at 60°C for 48 h to obtain dry mass.
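As a minimal illustration (our names, not the authors' code), the mean consumption rate described above could be computed as:

```cpp
#include <vector>

// Mean consumption rate as defined in the text: for each 3-day interval,
// (prey available - prey alive at the end of the interval), summed over all
// intervals and divided by the total number of days spent in the trays.
double mean_consumption_rate(const std::vector<int>& available,
                             const std::vector<int>& alive_after,
                             int total_days) {
    int eaten = 0;
    for (int k = 0; k < static_cast<int>(available.size()); ++k)
        eaten += available[k] - alive_after[k];   // prey removed in interval k
    return static_cast<double>(eaten) / total_days;
}
```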
---

**FIGURE 1.** Schematic representation of the experimental design of the temperature:diet interactions. On the left is the number of *Toxorhynchites haemorrhoidalis* individuals within each larval stage (from one to four instars) according to the different temperature and food treatments. *T. haemorrhoidalis* larvae were provided with either mosquito larvae (*i.e.*, *Wyeomyia* sp.), ants (*Crematogaster levior*), or both (M, A, M+A) in three temperature treatments (25, 29, and 33°C).

---

**Data analysis.**—To determine the overall effect of diet and temperature on *T. haemorrhoidalis*, we used GLMs with the time needed to reach the final state (*i.e.*, dead or pupa) and the final state itself as response variables. Because the time spent in the trays could partially confound negative or positive effects on survival, we fitted two distinct models, *i.e.*, one for development time and one for final state. The explanatory variables were diet, temperature, and their interaction (two factorial variables with three levels). Since the time needed to pupate is expressed in days (count data) and the final state is a binomial variable, we fit the models with Poisson and binomial families, respectively. We conducted an Akaike information criterion (AIC)-based selection on the GLMs and graphically assessed the validity of the final models. To test the hypothesis that increasing temperature positively affects consumption rates, we conducted Kruskal–Wallis tests with the mean consumption rates of mosquito larvae or ants as response variables and temperature as the explanatory variable. To further evaluate the effects of both diet and temperature on predator development, we assessed changes in consumption habits over time. For each *T. haemorrhoidalis* larva, we regressed the number of prey consumed every 3 days against time, using GEEs with an auto-regressive correlation structure as described for the choice tests. Moreover, for *T. haemorrhoidalis* raised with both mosquitoes and ants, we used two separate models. We assessed the temperature effect on slope estimates within a given diet treatment using Kruskal–Wallis tests, and compared the consumption of either mosquitoes or ants when the larvae were provided with one or both prey species using Wilcoxon rank-sum tests. We evaluated all statistical analyses at the 95% confidence level using R v. 2.15.2 (R Development Core Team 2012) with the add-on package geepack v. 1.1-6 for GEE analysis (http://cran.r-project.org/doc/packages/). We graphically assessed model validation (GLMs and GEEs), evaluated the normality of residuals using Shapiro tests, and performed additional chi-square tests on the deviance and residual degrees of freedom to check the goodness-of-fit of the GLMs. We present the results as means ± SE throughout.
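For readers who want to sanity-check the Kruskal–Wallis comparisons described above, the H statistic can be computed by hand. The sketch below is ours (pure Python, tie correction omitted for brevity, data hypothetical), not the study's R code:

```python
def kruskal_wallis_H(*groups):
    # Pool and sort all observations, assign average ranks to tied values,
    # then H = 12/(N(N+1)) * sum_i(R_i^2 / n_i) - 3(N+1),
    # where R_i is the rank sum of group i and N the pooled sample size.
    pooled = sorted(x for g in groups for x in g)
    N = len(pooled)
    rank_of = {}
    i = 0
    while i < N:
        j = i
        while j < N and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2  # mean of 1-based ranks i+1..j
        i = j
    H = sum(sum(rank_of[x] for x in g) ** 2 / len(g) for g in groups)
    return 12.0 / (N * (N + 1)) * H - 3 * (N + 1)

# Hypothetical mean daily consumption rates at three temperatures
H = kruskal_wallis_H([1.1, 1.3, 1.2], [2.0, 2.4, 2.2], [3.1, 2.9, 3.3])
```

Under the null hypothesis of identical distributions, H is compared against a chi-square distribution with (number of groups − 1) degrees of freedom, here df = 2 as in the tables below.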
**RESULTS**
**Gut contents.**—Overall, 20 of the 30 dissected *T. haemorrhoidalis* had identifiable prey fragments in their gut. Ants contributed on average 46.7 ± 0.12% of the prey items. Other frequent prey were *Wyeomyia* spp. (Culicidae; 13.3 ± 0.07%) and Tanypodinae larvae (Chironomidae; 13.3 ± 0.1%). Less frequent prey were *Tanytarsus* (Chironomidae), *Bezzia* sp., and Forcipomyiinae (Ceratopogonidae) (6.7 ± 0.07% each), as well as *Culex* spp. (Culicidae; 3.3 ± 0.03%) and *Telmatocoris* sp. (Psychodidae; 3.3 ± 0.03%).
**Resource preference and predation behavior.**—*Toxorhynchites haemorrhoidalis* larvae spend most of their time resting and breathing at the water–air interface. A gentle tap on their plastic tube makes them swim downwards. Doing so after the strips were removed positioned the predatory larvae at the bottom of their tubes. The mosquito larvae behaved similarly when on the surface of the water, whereas the ants moved frantically on the surface of the water trying to reach the tube walls. *Toxorhynchites haemorrhoidalis* larvae responded quickly to the presence of ants by swimming toward them in a series of undulating backward movements. They then angled themselves at about 45° to the water’s surface, and progressively adjusted their lateral position to face the ants. Once within striking distance, the predatory larvae curled up, swam vertically, seized the ants in their mandibles, and drowned them. Due to the ants’ frantic movements and the air bubbles trapped by the ant setae that made them float, the predators struggled to maintain their position in the water column while breathing through their siphons. By contrast, when preying on mosquito larvae, *T. haemorrhoidalis* larvae acted as ambush predators. The predatory larvae remained motionless at the bottom of the tubes until the mosquito larvae swam close by. Then, they launched a lateral strike, grasped their aquatic prey, and swallowed the mosquito larvae within a few minutes.
The number of first attacks directed toward ants or mosquito larvae did not differ significantly across the three successive choice tests according to the GEEs (Table 1; ants: Wald = 0.07, $P = 0.79$; mosquito larvae: Wald = 0.8, $P = 0.371$). Overall, ants and mosquito larvae received 53 and 23 first attacks, respectively, and the proportion test indicates that *T. haemorrhoidalis* larvae were significantly more attracted by ants than by mosquito larvae (Fig. 2A; Pearson $\chi^2 = 11.1$, $P = 0.0009$). However, whereas ants suffered a significantly higher total number of attacks in the first choice test compared to mosquito larvae (Pearson $\chi^2 = 60.1$, $P < 0.0001$), this number decreased
---
**TABLE 1. Results of the generalized estimation equations (GEEs).** Models were generated for three response variables: the prey species that was first attacked, the total number of attacks, and the prey species that succumbed to a deadly attack. The prey were either mosquito larvae or ants.
| Response variable | Models | Estimate ± SE | Wald | $P$ |
|-------------------|--------------|---------------|------|-------|
| First attack | Ants | | | |
| | Intercept | 0.49 ± 0.38 | 1.66 | <0.0001 |
| | Second test | 4.72e-16 ± 0.54 | 0.00 | 1.00 |
| | Third test | −0.01 ± 0.54 | 0.07 | 0.79 |
| | Mosquitoes | | | |
| | Intercept | −1.34 ± 0.46 | 8.59 | 0.003 |
| | Second test | 0.38 ± 0.62 | 0.37 | 0.541 |
| | Third test | 0.55 ± 0.61 | 0.8 | 0.371 |
| Number of attacks | Ants | | | |
| | Intercept | 1.05 ± 0.21 | 25.27| <0.0001 |
| | Second test | 0.02 ± 0.36 | 0.00 | 0.947 |
| | Third test | −0.66 ± 0.3 | 4.76 | 0.029 |
| | Mosquitoes | | | |
| | Intercept | −1.28 ± 0.35 | 13.62| 0.0002 |
| | Second test | 0.32 ± 0.46 | 0.48 | 0.488 |
| | Third test | 0.49 ± 0.43 | 1.25 | 0.263 |
| Deadly attacks | Ants | | | |
| | Intercept | −1.34 ± 0.46 | 8.59 | 0.003 |
| | Second test | 0.85 ± 0.60 | 2.03 | 0.154 |
| | Third test | 1.14 ± 0.59 | 3.69 | 0.055 |
| | Mosquitoes | | | |
| | Intercept | −1.34 ± 0.46 | 8.59 | 0.003 |
| | Second test | 0.12 ± 0.63 | 0.1 | 0.753 |
| | Third test | 0.38 ± 0.62 | 0.37 | 0.541 |
significantly upon the third test (Table 1; Wald = 4.76, $P = 0.029$). For the third test, the total number of attacks did not differ significantly between the ants and mosquito larvae (Fig. 2B; Pearson $\chi^2 = 0.78$, $P = 0.377$). These results indicate that the predatory larvae did not favor or reject a given prey after being presented with the other prey in earlier events, but *T. haemorrhoidalis* was more effective at capturing ants during the third test compared to the first one. Finally, *T. haemorrhoidalis* did not significantly increase the number of deadly attacks toward ants compared to those directed toward mosquito larvae (Table 1; ants: Wald = 3.69, $P = 0.055$; mosquito larvae: Wald = 0.37, $P = 0.54$). However, in all choice tests the numbers of deadly attacks were significantly higher toward ants than mosquitoes (Pearson $\chi^2 = 7.9$, $P = 0.005$). Together, these results suggest that *T. haemorrhoidalis* larvae were significantly more attracted by ants at first sight, that over time they learned to better manipulate ants, and that the number of deadly attacks was significantly higher for ants than for mosquitoes.
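Assuming the proportion test on first attacks is R's default continuity-corrected chi-square (an assumption on our part; the source only says "proportion test"), the reported $\chi^2 = 11.1$ for the 53 vs. 23 split can be reproduced by hand:

```python
def yates_chi_square(observed, expected):
    # Pearson chi-square with Yates' continuity correction:
    # sum over cells of (|O - E| - 0.5)^2 / E.
    return sum((abs(o - e) - 0.5) ** 2 / e for o, e in zip(observed, expected))

# 76 first attacks in total; no preference between prey types
# would predict an even 38/38 split between ants and mosquito larvae.
chi2 = yates_chi_square([53, 23], [38, 38])  # ≈ 11.07, matching the reported 11.1
```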
**Effect of temperature and diet on prey consumption and development.**—The relationship between the number of days spent in the trays and diet varied significantly with respect to temperature (Table 2; $P < 0.0001$). *Toxorhynchites haemorrhoidalis* larvae spent less time in the trays when raised with mosquito larvae as food than when provided with ants only ($-0.66 \pm 0.08$, $z = -8.08$, $P < 0.0001$), and they spent less time at higher temperatures ($-0.7 \pm 0.08$, $z = -8.46$, $P < 0.0001$). For instance, predatory larvae spent on average $32.7 \pm 2.3$ and
---
**TABLE 2. Results of the generalized linear models (GLMs) testing the relationship between the number of days spent in trays (Days), and whether the *Toxorhynchites haemorrhoidalis* larvae pupated or died (Final State), as a function of diet and temperature. Both explanatory variables are factors with three levels. Diet (M): *T. haemorrhoidalis* larvae raised with mosquito larvae only; Diet (M-A): larvae raised with both mosquito larvae and ants; Temp (29): larvae raised at 29°C; Temp (33): larvae raised at 33°C. Only the final models are represented, but Akaike information criterion (AIC) values are provided for both the final and full models. Dev/rDev = deviance and residual deviance.**
| | Estimate ± SE | Z | $P$ | df | Dev/rDev | $\chi^2$ | AIC |
|--------------------------|---------------|------|-------|-----|----------|---------|---------|
| Days | | | | | | | 949.2 |
| Intercept | 4.14 ± 0.05 | 87.152 | <0.0001 | | | | |
| Diet (M) | −0.66 ± 0.08 | −8.08 | <0.0001 | 2 | 113/683 | <0.0001 | |
| Diet (M-A) | −0.11 ± 0.07 | −1.55 | 0.12 | | | | |
| Temp (29) | −0.12 ± 0.07 | −1.77 | 0.08 | 2 | 34/650 | <0.0001 | |
| Temp (33) | −0.7 ± 0.08 | −8.46 | <0.0001 | | | | |
| Diet (M): Temp (29) | 0.05 ± 0.12 | 0.47 | 0.641 | 4 | 67/583 | <0.0001 | |
| Diet (M-A): Temp (29) | 0.18 ± 0.1 | 1.84 | 0.07 | | | | |
| Diet (M): Temp (33) | 0.88 ± 0.12 | 7.2 | <0.0001 | | | | |
| Diet (M-A): Temp (33) | 0.53 ± 0.11 | 4.82 | <0.0001 | | | | |
| Final State | | | | | | | 53.32 |
| Intercept | 0.92 ± 0.59 | 1.55 | 0.12 | | | | |
| Temp (29) | 0.88 ± 0.97 | 0.91 | 0.36 | 2 | 6/47 | 0.047 | |
| Temp (33) | −1.2 ± 0.8 | −1.5 | 0.13 | | | | |
39.1 ± 4.8 days in the trays at 33 and 25°C, respectively, in the mosquito larvae treatment, whereas, when provided only with ants, predaceous larvae stayed twice as long at the lower temperature, spending on average 31.4 ± 7.5 and 63.1 ± 13.2 days in the trays at 33 and 25°C, respectively. The effect of temperature on the time spent in trays was less marked in the mosquito and in the mosquito–ant diet treatments than in the ant-based diet (Table 2).
All individuals died at the larval stage when fed only ants. This weakened our statistical analyses, so we ran subsequent GLMs without this factor level (the diet variable thus had two levels: mosquitoes, and mosquitoes plus ants). Finally, we detected a marginal but significant effect of temperature on mortality rates (Table 2; $\chi^2$, $P = 0.047$). At the high temperature, eight of 14 individuals died (57%), whereas only four died at the low temperature (29%). In summary, increasing temperatures significantly reduced the time spent in trays and the survival of *T. haemorrhoidalis*, and this effect was exacerbated by an ant-based diet. At low temperatures, all *T. haemorrhoidalis* larvae (except one that died) developed over a short period of time before emerging when raised only with mosquito larvae as food, whereas they lived twice as long in the trays but all died before pupation when provided only with ants. At the highest temperature, three individuals died at the larval stage and four were able to emerge when raised only with mosquitoes, while all larvae died over the same time span when fed only with ants.
Temperature had a significant influence on the daily average consumption of mosquito larvae (Kruskal–Wallis test, $\chi^2 = 8.16$, $P = 0.017$), but not on that of ants (Kruskal–Wallis test, $\chi^2 = 3.04$, $P = 0.22$). A single fourth instar predatory larva could eat up to 339 and 167 third/fourth culicid instars at 33 and 25°C, respectively (in 67 and 34 d, respectively). By contrast, a single predatory larva could eat up to 41 and 119 ants at 33 and 25°C, respectively (Fig. 3).
Temperature did not significantly affect the pattern of mean daily prey consumption. Indeed, within a given diet treatment (raised with either a single prey species or both), slopes in the various temperature treatments were not significantly different (Table 3; Fig. 4; Kruskal–Wallis tests, $0.07 < \chi^2 < 2.82$, $0.244 < P < 0.965$). By contrast, diet significantly changed consumption rates. For instance, when fed only with mosquito larvae, *T. haemorrhoidalis* larvae greatly increased their prey consumption over time throughout the fourth instar stage (average slope estimate in this treatment = 1.53 ± 0.33 SE), eating up to 10 mosquito larvae per day for several days before pupation or death. However, the average slope of mosquito larva consumption dropped to 0.3 ± 0.07 when the predatory larvae were raised with both mosquito larvae and ants, representing around three mosquito larvae per day before pupation or death, and the slopes are significantly different (Table 3; Fig. 4A; Wilcoxon rank-sum test, W = 409, $P < 0.0001$). Contrastingly, when provided only with ants, consumption was negatively correlated with time ($-0.42 \pm 0.1$); *i.e.*, <1 ant per day on average before dying. Nonetheless, when provided with both mosquito larvae and ants, the consumption of ants was less negatively correlated ($-0.1 \pm 0.02$); thus, predatory larvae ate more than one ant per day before pupation or death. The slopes for the consumption of ants in the different treatments were significantly different (Table 3; Fig. 4B; Wilcoxon rank-sum test, W = 98.5, $P = 0.012$). Finally, regardless of temperature, the dry mass of *T. haemorrhoidalis* pupae fed with both mosquito larvae and ants (3.17 ± 0.9 mg; $N = 13$) was significantly lower than the dry mass of pupae fed with mosquito larvae only (4.48 ± 1.1 mg; $N = 16$; W = 161, $P = 0.01$).
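The Wilcoxon rank-sum W statistics above follow R's convention: the rank sum of the first sample minus its minimum possible value (equivalently, the Mann–Whitney U for that sample). A pure-Python sketch with hypothetical slope values, not the study's data:

```python
def wilcoxon_W(x, y):
    # Rank the pooled sample (average ranks for ties), sum the ranks
    # of x, then subtract n_x(n_x + 1)/2; this is the W statistic
    # reported by R's wilcox.test (the Mann-Whitney U for x).
    pooled = sorted(list(x) + list(y))
    n = len(pooled)
    rank_of = {}
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2  # mean of 1-based ranks i+1..j
        i = j
    rank_sum_x = sum(rank_of[v] for v in x)
    return rank_sum_x - len(x) * (len(x) + 1) / 2

# Hypothetical slope estimates: mosquito-only vs. mixed-diet larvae.
# When every x exceeds every y, W reaches its maximum n_x * n_y.
W = wilcoxon_W([1.5, 1.8, 1.2], [0.3, 0.2, 0.4])  # W = 9 (= 3 * 3)
```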
**DISCUSSION**
Gut contents indicate that *T. haemorrhoidalis* larvae living in tank bromeliads in the wild prey upon both small ant species (*i.e.*, *Crematogaster* spp.) and aquatic mosquito larvae. Paine (1934) was the first to observe that *T. inornatus* is attracted by any disturbance generated on the surface of the water by aerial insects. Subsequently, Breland (1949) suggested that terrestrial insects may be an important food source for *Toxorhynchites* larvae when other prey are unavailable. Our choice tests demonstrated that *T. haemorrhoidalis* larvae preferentially selected terrestrial prey and
---
**FIGURE 3.** Mean number of mosquito larvae (A) and ants (B) consumed *per* day at 25, 29, and 33°C. Temperature had a significant effect on the mean daily consumption of mosquito larvae (A) but did not influence ant consumption (B) (Kruskal–Wallis test, $\chi^2 = 8.16$, $P = 0.017$ and $\chi^2 = 3.04$, $P = 0.22$ for A and B, respectively).
TABLE 3. Average slope estimates and standard errors calculated from models where the number of prey consumed by *Toxorhynchites haemorrhoidalis* larvae was regressed against time. The slopes are distributed according to nine treatments: three different temperatures (average temperature in Celsius) × three different diets. *Toxorhynchites haemorrhoidalis* larvae were provided with either mosquito larvae or ants alone, or with both mosquito larvae and ants. When *T. haemorrhoidalis* larvae were provided with both mosquito larvae and ants, two slopes were estimated for each prey consumed. The results of Kruskal–Wallis tests (K–W) and Wilcoxon rank sum tests (W) are presented.
| | Estimates ± SE | Temperature effect (K–W) | df | $P$ | Diet effect (W) | $P$ |
|--------------------------|----------------|------|----|-------|------|---------|
| Mosquito larvae consumed | | 1.69 | 2 | 0.429 | 409 | <0.0001 |
| Temperature (25) | 1.49 ± 0.57 | | | | | |
| Temperature (29) | 1.67 ± 0.65 | | | | | |
| Temperature (33) | 1.41 ± 0.53 | | | | | |
| Mosquito larvae consumed in M-A treatments* | | 0.831 | 2 | 0.67 | | |
| Temperature (25) | 0.3 ± 0.11 | | | | | |
| Temperature (29) | 0.19 ± 0.07 | | | | | |
| Temperature (33) | 0.43 ± 0.18 | | | | | |
| Ants consumed in M-A treatments* | | 2.82 | 2 | 0.244 | 98.5 | 0.012 |
| Temperature (25) | −0.25 ± 0.1 | | | | | |
| Temperature (29) | 0.02 ± 0.006 | | | | | |
| Temperature (33) | −0.06 ± 0.02 | | | | | |
| Ants consumed | | 0.07 | 2 | 0.965 | | |
| Temperature (25) | −0.44 ± 0.19 | | | | | |
| Temperature (29) | −0.37 ± 0.14 | | | | | |
| Temperature (33) | −0.46 ± 0.18 | | | | | |
*M-A represents treatments where predatory larvae were provided with both mosquito larvae and ants.*
FIGURE 4. Mean slope estimates of the number of mosquito larvae (A) and ants (B) consumed over time according to three temperature and diet treatments. Increasing slope thickness represents an increase in temperature treatment (25, 29, and 33°C). M-A represents treatments where *Toxorhynchites haemorrhoidalis* larvae were provided with both mosquito larvae and ants (see Table 3 for SE). Note that observations (x axis) were made every 3 days. To obtain daily consumption one needs to divide the consumption by three.
As insects are ectotherms, their metabolic activity generally increases with temperature, and they are capable of adjusting their consumption habits accordingly (Ward & Stanford 1982). Here, *T. haemorrhoidalis* showed a significant increase in the daily consumption of mosquito larvae in relation to the temperature gradient generated (25, 29, and 33°C). Regardless of the time spent in the trays, these predators ate on average 1.6-times more mosquito larvae at 33°C than at 25°C. These observations are in line with
previous studies about consumption rates by *Toxorhynchites* spp. (Trpis 1972, Steffan & Evenhuis 1981, Lounibos *et al.* 1998) and other culicid species (Lounibos *et al.* 2002, Reiskind & Zarrabi 2012). By contrast, *T. haemorrhoidalis* larvae ate on average 2.7-times more ants at 25°C than at 33°C. Indeed, these predatory larvae spent more time in trays at 25°C than at 33°C, although their mean daily consumption of ants was not significantly affected by temperature. These data represent the first reported consumption rates of terrestrial prey by aquatic invertebrate predators in relation to water temperature. Temperature significantly influenced the survival of the late instar larvae through its interaction with diet. Indeed, when fourth instar *T. haemorrhoidalis* larvae were fed only with ants, the individuals died after 31 days on average at 33°C, compared to 63 days at the lower temperature. The larval life span of this genus varies from 10 to 91 days depending on the species, water temperature, and prey density (Steffan & Evenhuis 1981). Although mortality among predatory larvae was high at 33°C, suggesting that this temperature is at the edge of the thermal tolerance range for *T. haemorrhoidalis*, the adults oviposit in both forested and sun-exposed areas in French Guiana (pers. obs., see also Jabiol *et al.* 2009); thus, the larvae are naturally exposed to extreme temperatures.
Our results also showed that the adverse effect of temperature on the metabolic demands of *T. haemorrhoidalis* was exacerbated by consuming ants. Pupation was never achieved at any temperature on an ant-only diet. It is possible that the ants provided few nutritional rewards compared to the energetic cost of manipulating and digesting them, and/or that they did not provide chemical compounds required to trigger pupation. Assuming that *T. haemorrhoidalis* is well-adapted to preying upon terrestrial insects and that this behavior has not been counter selected, there must be a threshold of toxicity (*e.g.*, the digestive enzymes and alkaloid compounds of ants’ venom) beyond which predators cannot survive. The effect of food toxicity has been well-studied in herbivorous species but less so in predators (Gutierrez-Ibanez *et al.* 2007, Behmer 2009, Jensen *et al.* 2011).
Generalist predators are believed to feed on a wide variety of resources to obtain a nutritional balance (Behmer 2009). In this study, regardless of temperature, the dry mass of *T. haemorrhoidalis* pupae fed with both mosquito larvae and ants was significantly lower than the dry mass of pupae fed with mosquitoes only. For many holometabolous insects, reproduction is closely linked to the amount of resources accumulated during the larval stages (Boggs & Freeman 2005). However, the extent to which the morphological and physiological characteristics (*e.g.*, the dry mass of pupae and adults, or wing length; see Reiskind & Zarrabi 2012) of pupae are related to adult fitness in *Toxorhynchites* spp. deserves further attention. Learning to distinguish suitable from unsuitable prey, coupled with effective foraging techniques, can greatly improve fitness (Cunningham *et al.* 1998, Ishii & Shimada 2010). Learning has been reported in several insect taxa, and can even continue after metamorphosis in holometabolous insects (Dukas 2008, Kawecki 2010). Here, we report that ants suffered a higher total number of attacks in the first choice test compared to mosquito larvae, and that this number decreased significantly in the third test. These results suggest that *T. haemorrhoidalis* individuals were more effective at capturing ants on the third day than on the first one, and we cautiously posit that learning may improve foraging efficiency in predatory larvae. In the presence of both mosquito larvae and ants, fourth instar *T. haemorrhoidalis* decreased their consumption of mosquitoes and shifted to ants, regardless of temperature (see Fig. 4). We thus suggest that throughout its fourth instar stage (long-term basis) and regardless of thermal conditions, *T. haemorrhoidalis* cannot distinguish energetically suitable (mosquito larvae) from unsuitable (ants) prey.
We conclude that the stimulus produced by ants on the surface of the water influenced the predator more than the adverse outcome of an ant-based diet. This study provides further evidence that prey activity and/or detectability is one of the main drivers of diet in aquatic invertebrate predators rather than a predator’s active choice (Peckarsky & Penton 1989, Sih 1993). Other unexpected consumption habits have been reported by Eggert and Wallace (2007), who showed that some aquatic detritivores preferentially fed upon leaf detritus although the surface biofilm of microbes was more nutritionally suitable. The prevalence of such unexpected behaviors in nature therefore requires greater attention given their importance in helping to predict the effects of disturbances on communities *via* species’ responses.
The sophistication of *T. haemorrhoidalis* foraging strategies also indicates that it is well-adapted to prey on terrestrial insects. It may be that under natural conditions (*i.e.*, in the water-filled rosettes of the bromeliads) *T. haemorrhoidalis* preferentially consumes terrestrial prey more nutritious than ants, so that the observed hunting strategy could increase growth. Field experiments manipulating terrestrial invertebrate inputs could test the preference for ants versus other terrestrial species in nature. A related question concerns the frequency at which terrestrial resources enter the aquatic food web. Nevertheless, this study suggests that predatory larvae in bromeliad reservoirs are frequently exposed to ants and preferentially feed on them, despite the higher abundance and constant availability of aquatic prey. The trophic level at which allochthonous resources enter the system is also of great importance as it may enhance either the top-down or bottom-up effects that pervade the entire food web (Jeffries 2000). In tank bromeliads that host *T. haemorrhoidalis* larvae which preferentially feed on terrestrial insects, aquatic invertebrates (notably mosquito larvae) may indirectly benefit from predation release. In conclusion, higher temperatures negatively affect the survival of *T. haemorrhoidalis* through interaction with diet, but do not change *T. haemorrhoidalis* preference for terrestrial prey despite their adverse influence on survival. The potentially synergistic effects of biotic and abiotic stressors (*e.g.*, sub-optimal diet and thermal conditions) on species-specific behavioral traits may hamper our ability to predict community-wide responses to environmental changes.
**ACKNOWLEDGMENTS**
We are grateful to Frederic Petitclerc and Stanislas Talaga for technical help, Andrea Yockey-Dejean for proofreading the manuscript, and the *Laboratoire Environnement de Petit Saut* for providing logistical assistance. This study benefited from an ‘*Investissement d’Avenir*’ grant managed by the *Agence Nationale de la Recherche* (CEBA, ref. ANR-10-LABX-0025). OD’s financial support was provided by a PhD fellowship from the French CNRS and the *Fond Social Européen*. We also wish to thank two anonymous reviewers for providing insightful comments on the manuscript.
**LITERATURE CITED**
**Acheampong**, E., I. **Hense**, and M. A. **St. John**. 2014. A model for the description of feeding regulation by mesozooplankton under different conditions of temperature and prey nutritional status. *Ecol. Model.* 272: 84–97.
**Behmer**, S. T. 2009. Insect herbivore nutrient regulation. *Annu. Rev. Entomol.* 54: 165–187.
**Benzing**, D. H. 2000. *Bromeliaceae: Profile of an adaptative radiation*. Cambridge University Press, Cambridge, UK.
**Boggs**, C. L., and K. D. **Freeman**. 2005. Larval food limitation in butterflies: Effects on adult resource allocation and fitness. *Oecologia* 144: 353–361.
**Breland**, O. P. 1949. The biology and the immature stages of the mosquito, *Megarhinus septentrionalis* Dyar & Knab. *Ann. Entomol. Soc. Am.* 42: 38–47.
**Brown**, J. H., J. F. **Gillooly**, A. P. **Allen**, V. M. **Savage**, and G. B. **West**. 2004. Toward a metabolic theory of ecology. *Ecology* 85: 1771–1789.
**Campos**, R. E., and L. P. **Lounibos**. 2000. Natural prey and digestion times of *Toxorhynchites rutilus* (Diptera: Culicidae) in southern Florida. *Ann. Entomol. Soc. Am.* 93: 1280–1287.
**Céréghino**, R. 2006. Ontogenetic diet shifts and their incidence on ecological processes: A case study using two morphologically similar stoneflies (*Plecoptera*). *Acta Oecol.* 33: 33–38.
**Collins**, L. E., and A. **Blackwell**. 2000. The biology of *Toxorhynchites* mosquitoes and their potential as biocontrol agents. *Biocontr. News Inform.* 21: 105N–116N.
**Cunningham**, J. P., S. A. **West**, and D. J. **Wright**. 1998. Learning in the nectar foraging behaviour of *Helicoverpa armigera*. *Ecol. Entomol.* 23: 363–369.
**Dell**, A. I., S. **Pawar**, and V. M. **Savage**. 2014. Temperature dependence of trophic interactions are driven by asymmetry of species responses and foraging strategy. *J. Anim. Ecol.* 83: 70–84.
**Dézerald**, O., S. **Talaga**, C. **Leroy**, J.-F. **Carrias**, B. **Corbara**, A. **Dejean**, and R. **Céréghino**. 2014. Environmental determinants of macroinvertebrate diversity in small water bodies: Insights from tank-bromeliads. *Hydrobiologia* 723: 77–86.
**Dukas**, R. 2008. Evolutionary biology of insect learning. *Annu. Rev. Entomol.* 53: 145–160.
**Eggert**, S. L., and J. B. **Wallace**. 2007. Wood biofilm as a food resource for stream detritivores. *Limnol. Oceanogr.* 52: 1239–1245.
**Emlen**, J. M. 1966. The role of time and energy in food preference. *Am. Nat.* 100: 611–617.
**Focks**, D. A. 2007. *Toxorhynchites* as biocontrol agents. *J. Am. Mosq. Control Assoc.* 23: 118–127.
**Gutiérrez-Ibáñez**, C., C. A. **Villagra**, and H. M. **Niemeyer**. 2007. Pre-pupation behaviour of the aphid parasitoid *Aphidius ervi* (Haliday) and its consequences for pre-imaginal learning. *Naturwissenschaften* 94: 595–600.
**Ishii**, Y., and M. **Shimada**. 2010. The effect of learning and search images on predator-prey interactions. *Popul. Ecol.* 52: 27–35.
**Jabiol**, J., B. **Corbara**, A. **Dejean**, and R. **Céréghino**. 2009. Structure of aquatic insect communities in tank-bromeliads in an East-Amazonian rainforest in French Guiana. *For. Ecol. Manage.* 257: 351–360.
**Jeffries**, R. L. 2000. Allochthonous inputs: Integrating population changes and food-web dynamics. *Trends Ecol. Evol.* 15: 19–22.
**Jensen**, K., D. **Mayntz**, S. **Toft**, D. **Raubenheimer**, and S. J. **Simpson**. 2011. Nutrient regulation in a predator, the wolf spider *Pardosa prativaga*. *Anim. Behav.* 81: 993–999.
**Jocque**, M., A. **Kernahan**, A. **Nobes**, C. **Williams**, and R. **Field**. 2010. How effective are non-destructive sampling methods to assess aquatic invertebrate diversity in bromeliads? *Hydrobiologia* 649: 293–300.
**Kawecki**, T. J. 2010. Evolutionary ecology of learning: Insights from fruit flies. *Popul. Ecol.* 52: 15–25.
**Kondoh**, M. 2003. Foraging adaptation and the relationship between food-web complexity and stability. *Science* 299: 1388–1391.
**Liang**, K. Y., and S. L. **Zeger**. 1986. Longitudinal data-analysis using generalized linear-models. *Biometrika* 73: 13–22.
**Linley**, J. R. 1995. Behavior on approach to surface prey by larvae of *Toxorhynchites amboinensis* and *Toxorhynchites brevipalpis* (Diptera: Culicidae). *J. Med. Entomol.* 32: 53–65.
**Logan**, J. D., W. **Wolesensky**, and A. **Joern**. 2006. Temperature-dependent phenology and predation in arthropod systems. *Ecol. Model.* 196: 471–482.
**Lounibos**, L. P., J. H. **Frank**, C. E. **Machado-Allison**, P. **Ocanto**, and J. C. **Navarro**. 1987. Survival, development and predatory effects of mosquito larvae in Venezuelan phytotelmata. *J. Trop. Ecol.* 3: 221–242.
**Lounibos**, L. P., E. A. **Martin**, D. **Duzak**, and R. L. **Escher**. 1998. Daylength and temperature control of predation, body size, and rate of increase in *Toxorhynchites rutilus* (Diptera: Culicidae). *Ann. Entomol. Soc. Am.* 91: 308–314.
**Lounibos**, L. P., S. **Suarez**, Z. **Menendez**, N. **Nishimura**, R. L. **Escher**, S. M. **O’Connell**, and J. R. **Rey**. 2002. Does temperature affect the outcome of larval competition between *Aedes aegypti* and *Aedes albopictus*? *J. Vector Ecol.* 27: 86–95.
**MacArthur**, R. H., and E. R. **Pianka**. 1966. On the optimal use of patchy environment. *Am. Nat.* 100: 603–609.
**Mitra**, A., and K. J. **Flynn**. 2005. Predator-prey interactions: Is ‘ecological stoichiometry’ sufficient when good food goes bad? *J. Plankton Res.* 27: 393–399.
**Paine**, R. W. 1934. The introduction of *Megarhinus* mosquitoes into Fiji. *Bull. Entomol. Res.* 25: 1–31.
**Peckarsky**, B. L., and M. A. **Penton**. 1989. Mechanisms of prey selection by stream-dwelling stoneflies. *Ecology* 70: 1203–1218.
**Petchey**, O. L., U. **Brose**, and B. C. **Rall**. 2010. Predicting the effects of temperature on food web connectance. *Philos. Trans. R. Soc. B Biol. Sci.* 365: 2081–2091.
**Petchey**, O. L., P. T. **McPhearson**, T. M. **Casey**, and P. J. **Morin**. 1999. Environmental warming alters food-web structure and ecosystem function. *Nature* 402: 69–72.
**R Development Core Team** (2012). *R*: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org/.
**Regnière**, J., J. **Powell**, B. **Bentz**, and V. **Nealis**. 2012. Effects of temperature on development, survival and reproduction of insects: Experimental design, data analysis and modeling. *J. Insect Physiol.* 58: 634–647.
**Reiskind**, M. H., and A. A. **Zarrabi**. 2012. Is bigger really bigger? Differential responses to temperature in measures of body size of the mosquito, *Aedes albopictus*. *J. Insect Physiol.* 58: 911–917.
**Sih**, A. 1993. Effects of ecological interactions on forager diets: Competition, predation risk, parasitism and prey behaviour. Diet selection: An interdisciplinary approach to foraging behaviour. In R. N. Hughes (Ed.). Diet selection: An interdisciplinary approach to foraging behaviour, pp. 183–211. Blackwell Scientific Publications, Cambridge University Press, Cambridge, UK.
**Steffan**, W. A., and N. L. **Evenhuis**. 1981. Biology of *Taxonhynchites*. *Annu. Rev. Entomol.* 26: 159–181.
**Thierry**, A., A. P. **Beckerman**, P. H. **Warren**, R. J. **Williams**, A. J. **Cole**, and O. L. **Petchey**. 2011. Adaptive foraging and the rewiring of size-structured food webs following extinctions. *Basic Appl. Ecol.* 12: 562–570.
Traniello, J. F. A., M. S. Fujita, and R. V. Bowen. 1984. Ant foraging behavior: Ambient temperature influences prey selection. Behav. Ecol. Sociobiol. 15: 65–68.
Tripis, M. 1972. Development and predatory behavior of *Toxorhynchites brevipalpis* (Diptera: Culicidae) in relation to temperature. Environ. Entomol. 1: 537–546.
Visser, M. E., and C. Both. 2005. Shifts in phenology due to global climate change: The need for a yardstick. Proc. R. Soc. B Biol. Sci. 272: 2561–2569.
Ward, J. V., and J. A. Stanford. 1982. Thermal responses in the evolutionary ecology of aquatic insects. Annu. Rev. Entomol. 27: 97–117.
Woodward, G., D. M. Perkins, and L. E. Brown. 2010. Climate change and freshwater ecosystems: Impacts across multiple levels of organization. Philos. Trans. R. Soc. B Biol. Sci. 365: 2093–2106.
Yvon-Durocher, G., J. M. Montoya, M. Trimmer, and G. Woodward. 2011. Warming alters the size spectrum and shifts the distribution of biomass in freshwater ecosystems. Glob. Change Biol. 17: 1681–1694.
Zuur, A. F., E. N. Ieno, N. J. Walker, A. A. Saveliev, and G. M. Smith. 2009. Mixed effects models and extensions in ecology with R. Mixed effects models and extensions in ecology with R, p. 574. |
Address Rebellion with Love
After a summer of “freedom” and relaxed rules, September’s return to school and schedules often poses a challenge for children. On top of that, 2020 has been filled with new restrictions and limits that are tough for everyone to comply with and comprehend.
As the pandemic drags on, kids are likely to have more questions about why they need to stay home or wear masks, and why they can’t enjoy certain activities or venues that are temporarily shut down.
Although rules are set for our own good, following them isn’t always fun or easy. Because of sin, all humans rebel against authorities and against God. Children are no exception, as new parents quickly discover. From a surprisingly young age, little ones begin asserting their independence by pushing back against limits and saying “no!” Just as God deals with our rebellion out of love, he instructs parents to raise and discipline children lovingly. That approach molds them into followers of Jesus who strive to obey God and respect other people.
Rebellion takes different forms as children grow (see the next page), so you’ll need to adapt your approach to rule-setting and discipline. No matter your children’s age, however, one of the most important things you can do is pray for them—and for yourself in the vital role of parent. Thank Jesus for each of your children by name, and ask Jesus to work in their hearts and lives, giving them a strong desire to always follow God faithfully.
Staying on the Right Path
Use these strategies for dealing with rebellion as children grow:
**Birth to 2 years:** Accept that God gives even infants and toddlers unique temperaments. Provide lots of comfort, physical touch, and warmth.
**3 to 4 years:** Listen carefully, and respond to physical and emotional needs. Explore what upsets children. Model Jesus’ love through affection.
**5 to 7 years:** Offer choices and clear consequences for disobedience. Balance your growing demands with warmth and reason.
**8 to 12 years:** Be consistent. Express trust, and praise kids for jobs well done. When kids fall short, ask what they can do differently next time.
**Commandments 2.0** As a family, work to reword each of God’s Ten Commandments as a loving rule with positive purposes; for example, “Because I want you to be protected from religions that would mislead you, don’t worship any other god besides me.”
**Walking with God** Either trace one another’s feet on paper or make footprint impressions with plaster of Paris. As prints dry, read Joshua 22:5 (NIV) and discuss what it means to “walk in obedience” with God. Also talk about what it’s like to veer from God’s path—and how he brings us back to his ways.
**Grace Abounds** During family devotions about people in the Bible who rebelled, address not only the consequences each person faced but also God’s abundant grace. For example, King David paid a hefty price for sinning, but he asked for—and received—forgiveness.
**Going God’s Way** Beforehand, use tape to mark start and finish lines at opposite ends of a room. Share times you’ve done the opposite of what you should have done. Gather on the starting line and say: “See how fast you can get to the finish line—but you must crawl or crabwalk backward. Go!” Read John 1:35-40. Ask: “How was this game like or unlike following Jesus? When it’s tempting to do the opposite, how can we live God’s way?”
**Rules Roulette** Search online for outdated rules that were in communities and schools years ago—or that might still remain today. Share some funny rules with family. Then read Luke 16:17. Ask: “Why do rules exist? Why do they sometimes need to change? How does it feel to know that God’s rules will never change?”
**Starting Over** As a family, choose an item to draw on an Etch-A-Sketch. Every 60 seconds, pass the toy to someone else. When it returns to you, start drawing and say, “Oops, I messed up!” Shake the toy. Say: “Sometimes we mess up by not following God’s rules, but he lets us start over.” Read Luke 15:11-32. Discuss how the prodigal son rebelled against his father but got a second chance.
**Map Treks** Hand out paper and pencils. Say: “Keeping your eyes closed, draw a map from our house to school, church, or a friend’s house.” After comparing maps, read Psalm 119:10. Ask: “How is the Bible like a map for our lives? How can the Bible keep us from wandering away from God?”
“So why do you keep calling me, ‘Lord, Lord,’ when you don’t do what I say?” —Luke 6:46
**MOVIE**
**Title:** The One and Only Ivan
**Genre:** Animation, Adventure, Comedy
**Rating:** PG
**Cast:** Sam Rockwell, Angelina Jolie, Phillipa Soo, Bryan Cranston
**Synopsis:** In this CGI/live-action film streaming on Disney+, Ivan the gorilla ponders life in captivity. He and other animals kept at an Atlanta mall form unexpected friendships while plotting an escape plan. The movie is based on Katherine Applegate’s middle-grade novel, which won the Newbery Medal.
**Our Take:** *Ivan* explores themes such as hope, freedom, one’s sense of home, and new perspectives. It also can spark conversations about when to question or challenge one’s situation. Because the movie is inspired by a true story, the concept of animal abuse could upset some viewers.
---
**MUSIC**
**Title:** Smile
**Artist:** Katy Perry
**Synopsis:** Perry’s fifth album—and her first since 2017—coincides with the birth of her first child. Though the title track has an upbeat sound and lyrics, the pop singer says it emerged from “one of the darkest periods” of her life, when she’d lost her smile. “This whole album is my journey toward the light,” she says, “with stories of resilience, hope, and love.”
**Our Take:** “Daisies,” the lead single, talks about overcoming odds and defying expectations. On “Smile,” Perry expresses gratitude for renewed happiness, noting that “every tear has been a lesson.” Perry has been vocal about bouts with depression. Be aware: Some lyrics contain profanity or suggestive phrases.
---
**Games, Podcasts & Apps**
**Jump Rope Challenge**
This simple Nintendo Switch game, inspired by pandemic-related lockdowns, is free on the eShop until Sept 30. Using the Joy-Con, players make a bunny jump in time with them. Jump Rope Challenge makes a noble yet limited attempt to get gamers moving.
**Music Box**
Using interactivity, this music-education podcast teaches children about fundamentals in fun ways. Episodes explore concepts such as meter and pitch, songwriting basics, and what different instruments sound like. Host Faith Murphy and special guests introduce listeners to a wide range of music.
**WideOpenSchool**
From the nonprofit Common Sense Media, this app offers a vast, organized collection of educational resources. Parents new to online learning or home-schooling will appreciate all the subject matter—for pre-K through grade 12—from sources such as Scholastic, National Geographic, and PBS.
---
**CULTURE & TRENDS**
**Back to School?** Experts predict a sharp uptick in homeschooling this fall, with many parents leery of health risks or unhappy with hybrid-learning options. Only 3% of kids were homeschooled in 2016, but that’s expected to rise significantly—and possibly become a lasting trend. (*Washington Post*)
**At-Risk Educators** Kids who *do* return to classrooms should be prepared to see new people at the helm. Almost one-third of U.S. teachers are 50 or older, putting them at higher risk for Covid-19. Those with pre-existing conditions or at-risk family members may sit out this year, and more than usual are expected to retire. (*various sources*)
---
**QUICK STATS**
**On the Move** More than one-fifth (22%) of Americans moved or know someone who moved due to the pandemic. Of the people who have moved, three-fifths (61%) relocated to a family member’s home. (*Pew Research*)
**COVID Conundrum** Though 59% of parents worry that their kids are falling behind academically during the pandemic, only 44% of adults with school-age children are willing to send them back to school this fall. (*ABC News/Ipsos*)
| SUNDAY | MONDAY | TUESDAY | WEDNESDAY | THURSDAY | FRIDAY | SATURDAY |
|--------|--------|---------|-----------|----------|--------|----------|
| | | 1 Happy Birthday, Melissa Ziegler! | 2 | 3 | 4 | 5 |
| 6 | 7 Labor Day | 8 | 9 Happy Birthday, Charlotte Decker! | 10 | 11 Patriot Day | 12 |
| 13 Happy Birthday, Jean Heatley! Grandparents Day! | 14 Happy Birthday, Shelly Libby! | 15 | 16 Don’t Forget! NEXT STOP DRIVE-THRU 4-6pm | 17 | 18 | 19 City Serve Begins |
| 20 | 21 | 22 | 23 See You @ the Pole! | 24 | 25 | 26 City Serve Ends |
| 27 | 28 | 29 Happy Birthday, Bill Carle! | 30 | | | |
WHAT JESUS IS SAYING TO THE CHURCH
REVELATION 1:3
BEGINS SEPTEMBER 13
MAKING DISCIPLES WHO MAKE DISCIPLES
SHORT-TERM LIFE GROUP CLASSES
EQUIPPING YOU FOR LIFE AND SERVICE
FOR MORE INFO VISIT LIFESPRINGCHURCH.COM/ADULTS
The Lord blesses His people with peace. Psalm 29:11
| SUNDAY | MONDAY | TUESDAY | WEDNESDAY | THURSDAY | FRIDAY | SATURDAY |
|--------|--------|---------|-----------|----------|--------|----------|
| | | 1 What does peace mean to you? Can you answer with a drawing? | 2 Do some yard work as a family. | 3 Make a collage of all your dreams and goals. | 4 Try a new food. | 5 Bake cookies. |
| 6 READ A BOOK DAY | 7 Where do pencils go on vacation? | 8 Try Go Karting. | 9 What does it mean to have peace with God? Romans 5:1 | 10 Go somewhere you have never been. | 11 Tell me about the kids in your class. Who do you enjoy playing with the most? | 12 Go roller skating. |
| 13 What one prayer request does each person have for the week ahead? | 14 Play a game of 20 QUESTIONS. | 15 Write a story together. | 16 COLLECT ROCKS DAY | 17 When do you feel most at peace? When do you not feel at peace? | 18 Make homemade pizzas. | 19 Create a time capsule and bury it in the backyard. |
| 20 One of Jesus’ names is Prince of Peace. What do you think it means? | 21 MINIATURE GOLF DAY | 22 What has been your favorite part of school so far? | 23 Make a family music video. | 24 What did zero say to eight? | 25 Go through the house and gather up the “junk.” Sell or donate some; throw the rest away. | 26 LOVE NOTE DAY |
| 27 Read Mark 4:35-41. How can you have peace in the midst of a storm? | 28 GOOD NEIGHBOR DAY | 29 Create a family mission statement. | 30 When did you feel peace even in a difficult time? | 31 Create a family tree. | | |
SUNDAY SCHOOL ONLINE
- Go to LifeSpringChurch.com
- Connect
- Children’s Ministry
- Parent Resources
- Sunday School Online
Once there, you will find this week’s lesson, including the Bible Story Video and the graphics that go with it (Bible story picture, Story Point, Big Picture Q & A, Key Verse, Activity Pages & Coloring Sheet).
In agreement with our curriculum company, the lessons will be available from Saturday through Tuesday at noon.
THE GOSPEL PROJECT
September Sunday School Lessons
Jesus the Healer
| Date | Lesson Title | Scripture |
|------------|-------------------------------------|-------------|
| September 6| Jesus Healed Ten Men | Luke 17 |
| September 13| Jesus Healed a Woman and a Girl | Mark 5 |
| September 20| Jesus Healed a Man Who Was Lame | John 5 |
| September 27| Jesus Healed a Man Who Was Blind | John 9 |
Scripture
“Yet he himself bore our sicknesses, and he carried our pains; but we in turn regarded him stricken, struck down by God, and afflicted. But he was pierced because of our rebellion, crushed because of our iniquities; punishment for our peace was on him, and we are healed by his wounds.”
Isaiah 53:4-5
Big Picture Question & Answer
Why did God create people?
God created people to worship Him, love Him, and show His glory.
Recipe of the Month
Apple Cinnamon Pancakes
Ingredients
- 3/4 cup milk
- 1 1/2 tablespoons vinegar
- 1 cup flour
- 3 tablespoons sugar
- 1 teaspoon cinnamon (if you really love cinnamon you can add another 1/2 teaspoon)
- 1 teaspoon baking powder
- 1/2 teaspoon salt
- 1 egg
- 2 tablespoons oil
Apple Topping
- 2 tablespoons butter
- 2 apples, peeled, cored, and diced
- 2 tablespoons brown sugar
- 1/2 teaspoon cinnamon
- 1/3 cup maple syrup
Instructions
1. Preheat a skillet or griddle to medium heat (about 275 degrees).
2. Whisk together the milk and vinegar and allow to rest for 5 minutes.
3. While the milk is curdling, whisk together the flour, sugar, cinnamon, baking powder, and salt in a large bowl.
4. Whisk the egg and oil into the milk. Add the wet ingredients to the dry ingredients and stir until just combined (don’t over-mix; the batter should still have some lumps).
5. Spray skillet with cooking spray. Use 1/4 cup measuring cup to pour batter onto skillet. Cook about 2 minutes until bubbles form and the edges start to look “dry”. Use a spatula to flip the pancake and cook another 1-2 minutes on the other side. Set pancakes aside and repeat with remaining batter.
6. Add butter, apples, brown sugar, and cinnamon to a medium sauce pan. Stir over medium heat 3-5 minutes until apples are very tender. Stir in syrup. Serve apple topping over warm pancakes.
Recipe from: www.lecremedelacrumb.com
Students (K-college) from across the nation will celebrate religious freedom and share God’s love with their friends.
This event is sponsored by Focus on the Family. See their website for more ideas: focusonthefamily.com
It is designed to empower students to express their belief in the truth of God’s Word—and to do so in a respectful way that demonstrates the love of Christ.
Christian students can be a powerful voice of hope at their school!
LifeSpring Church Facebook Challenge for Bring Your Bible to School Day!
On October 1:
- Take a picture of your child with their Bible as they head out to school.
- Tag “Real LifeKids LifeSpring” Facebook page
You will have a chance to win a Gift Bundle!
Thank you LifeSpring Church Family for the absolute privilege of serving you for the last 20 years! These 20 years have flown by as I have lived out my calling by God to invest in the spiritual lives of children and families! It’s because it is something I love and feel passionate about.
I look back on these 20 years and recognize how blessed I am to have had this opportunity to lead and serve the families of LifeSpring. What a joy to see kids journey with Jesus, grow in their knowledge and faith, start their careers and families, and continue to serve the Lord!
My commitment to each of you is to continue to serve you and minister to you as the Lord leads, guides, and ordains in the days and years to come. Please know that I am here to come alongside you in whatever way I can. Feel free to contact me by phone (402-292-4546) or email email@example.com
I love you all and miss you terribly. Hope to see your faces in the near future.
Kelly Wallace
Children’s Minister & Office Administrator
You have approximately 936 weeks from the time your child is born until he or she graduates from high school. It goes by fast, and we want to make sure resources are quickly within your reach as you navigate how to raise a child in faith. We pray this list will be a blessing to your family as you each take your next steps toward Jesus!
**KID BIBLES**

**0-2**
- Say and Pray Devotions – Diane Stortz
- The Big Book of Bible Stories for Toddlers
- The Beginner’s Bible for Toddlers
- Baby’s First Bible Box Set (board books)
- Baby’s First Bible Stories (board book)

**3-5**
- The Jesus Storybook Bible
- The Big Picture Interactive Storybook Bible
- Jesus Calling Bible Storybook
- My Awesome God Storybook
- The Story for Children
- Peanut Butter and Jelly Prayers
- The Beginner’s Bible
- Laugh and Learn Bible for Little Ones
- My Learn to Read Bible

**6-12**
- Adventure Bible
- The Action Bible (Comic Version)
- Laugh and Grow Bible for Kids
- The Big Picture Interactive Bible
- Day by Day Kid’s Bible
- Hands-On Bible
- Connect Bible

**KID DEVOTIONS**

**2-5**
- God and Me!
- Minno – Stories Kids Love
- Jesus Calling for Little Ones
- Age-Appropriate Bibles
- First Bible Basics

**6-12**
- God and Me!
- Indescribable: Devotions about God & Science
- Superbook Videos
- Day by Day Devotions – Karyn Henley
- Jesus Calling for Kids
- Grace for the Moment: 365 Devotions for Kids
- Foundations for Kids: Bible Reading Plan
- The Purpose Driven Life Devotional for Kids
- Notes from Jesus

**For Girls**
- Mighty God Girls
- For Girls Only
- 3-Minute Devotions for Girls
- You’re God’s Girl!
- Lies Girls Believe

**For Boys**
- Gotta Have God
- The Ultimate Boys’ Book of Devotions
- 3-Minute Devotions for Boys
- A Boy After God’s Own Heart
FAMILY DEVOTIONS 6-12
God’s Story (great for mixed ages of little and up)
One Year of Dinner Table Devotions
Jesus Calling, Family Devotional
PrayerWorks: Prayer Strategy and Training for Kids
The Very Best, Hands-On Kinda Dangerous Family Devotions
Faith Family Talks: 100 Discipleship Activities and Conversation starters
Cornerstones: 200 Questions and Answers to Teach Truth
Step Into the Bible: 100 Family Devotions to Help Grow Your Child’s Faith
The One-Year Book of Family Devotions
Sticky Situations: 365 Devotions for Kids and Families
KID BOOK RESOURCES
STORYBOOKS
- Maybe God Is Like That Too – Jennifer Grant
- What Is Heaven Like? – Beverly Lewis
- What is God Like Series – Dr. Craig
- Just in Case You Ever Wonder – Max Lucado
- You Are Special – Max Lucado
- The Tallest of Smalls – Max Lucado
- When God Made You – Matthew Paul Turner
- Hermie – Max Lucado
- How Full Is Your Bucket? For Kids – Rath/Reckmeyer/Manning
- Giraffes Can’t Dance – Andreae/Parker-Rees
- God Made You Special – Big Idea Books
- Only One You – Linda Kranz
- It’s Not Easy Being a Bunny – Marilyn Sadler
- Wonderfully Made – Joyce Meyer
- God Gave Us You – Lisa Tawn Bergren

ELEMENTARY
- Left Behind: Kids Series – Tim LaHaye & Jerry B. Jenkins
- The Sugar Creek Gang Series – Paul Hutchens
- The Prince Warriors Series – Priscilla Shirer
- Theodore Boone Series – John Grisham

DEATH AND GRIEF
- God Gave Us Heaven – Lisa Tawn Bergren
- The Memory Box – Joanna Rowland
- Sam’s Dad Died – Margaret Holmes
- Molly’s Mom Died – Margaret Holmes
- Water Bugs and Dragonflies – Doris Stickney
- Someday Heaven – Larry Libby

SEXUALITY
- How and When to Tell Your Kids About Sex (parent book) – Stanton and Brenna Jones
- Good Pictures, Bad Pictures
**You're All My Favorites**
- Sam McBratney
**I'm Gonna Like Me**
- Jamie Lee Curtis
**Pete the Cat Too Cool for School**
- James Dean
**I Like Myself**
- Karen Beaumont
**I Am a Rainbow**
- Dolly Parton
**Jesus and His White Horse**
- Jake McCandless
**God's Design for Sex Series:**
- Book 1 - The Story of Me (Ages 3-5)
- Book 2 - Before I Was Born (Ages 5-8)
- Book 3 - What’s the Big Deal? (Ages 8-11)
- Book 4 - Facing the Facts (Ages 11-14)
**God Made All of Me**
(sexual abuse prevention)
- Justin and Lindsey Holcomb
**It’s Great to be a Girl**
- Dannah Gresh
**It’s Great to be a Guy**
- Jarod Sechler
---
**PARENT RESOURCES**
**BOOKS**
**Praying Circles Around Your Children**
- Mark Batterson
**5 Love Languages of Children**
- Dr. Gary Chapman
(Help your kids take the 5 Love Languages test for kids!)
**Parenting with Love and Logic**
- Foster Cline
**Talk Now and Later: How to lead kids through life’s tough topics**
- Brian Dollar
6 Ways to Keep the 'Little' in Your Girl
- Dannah Gresh
Spiritual Parenting: An awakening for today’s families
– Michelle Anthony
Grace Based Parenting: Set your family free
- Tim Kimmel
Screamfree Parenting: How to raise amazing adults by learning to pause more and react less
– Hal Runkel
Sticky Faith:
Everyday ideas to build lasting faith in your kids
– Kara Powell and Chap Clark
Faithful Families: Creating sacred moments at home
- Traci Smith
Give Them Grace:
Dazzling your kids with the love of Jesus
- Elyse Fitzpatrick
The Tech-Wise Family: Everyday steps for putting technology in its proper place
- Andy Crouch
Parenting Beyond Your Capacity
- Reggie Joiner & Carey Nieuwhof
Parenting is Wonder-Full
- Sue Miller & Holly Delich (great for new parents!)
Intentional Parenting: 10 Ways to be an exceptional parent in a quick-fix world - Doug & Cathy Fields
Don’t Miss It: Parent every week like it counts - Reggie Joiner & Kristen Ivy
PARENT RESOURCES
WEBSITES
Rightnowmedia.com
Over 20,000 Biblically-based videos on topics like marriage, parenting, youth, recovery, leadership, finances and much more.
Phaseguides.com
An 18-part series of concise and interactive journal-style books that simplify what parents need to know about each phase of a kid’s life and give them the opportunity to discover more about their children—so they can make the most of every phase.
PursueGod.org/family
Hundreds of family topics you may be thirsting for.
D6family.com
Devotional magazines for every member of your family. Blogs on all parenting topics. Book recommendations.
FocusOnTheFamily.com
Great podcasts and articles on many topics. Family activities through Clubhouse and Clubhouse Jr. Also, check out the PluggedIn site which is full of movies, tv, music, games, books and streaming!
FaithAtHome.com
Resources and devotions for everyone in the family!
SeedsFamilyWorship.com
Resources for kids and family; songs, videos, devotions, scripture challenges, and more!
Parenting-101.org
Life can be complicated and distracting, but if you can focus some time every day to connect your child to your heart, they will grow up emotionally healthy and will be able to trust you, trust God and have great relationships throughout their life. You can do it, we can help!
ParentingForFaith.org
Equipping parents with tools and podcasts.
PODCASTS
God Centered Mom
The Compassionate Caregiver
The Confident Parent
Daily Radio Bible for Kids
Mindful Parenting in a Messy World
Parenting without Power Struggles
The TiLT Parenting Podcast
Focus on Parenting
Parenting for Faith
Psychology In Action
Talking, discussing psychology, is after all, only talk. Pleasant talk, perhaps, but talk. How much chemistry would you know if you never entered the laboratory? How much biology could you learn if you never squinted thru a microscope? It is a pathetic delusion to think that we can study human nature, either by dealing with it in business or politics, or domestic life, where prejudice and self-interest fog the picture; or in the classroom, where it is words, words, words!
One good place to study human nature is in the psychological clinic, but we have no clinic at Bowling Green. We have no place here to carry our tangled complexes, our incipient insanities, or even our acute "how to study" perplexities, our stuttering bashfulness, or our over-bearing conceit, to have them coolly analysed and as coolly removed. Perhaps all that will come some day.
But another place, interesting tho not quite so effective as a clinic, we must admit, is a laboratory for testing and experimenting. We are beginning to get that. Let me speak, this time, about the testing aspect of a psychological laboratory.
We are very greatly interested in ourselves. Are we normal, like other people, or are we different? How different? Suppose we consider "nervousness" (tremor, unsteadiness, tiring quickly, irritability). Some people don't know whether they are nervous, or whether they are more so than the general run of people. We have some measures of those things for you, and shall be able to tell you how you compare with others. Then, how about your eyes? Quite a number of people here at Bowling Green are color-blind, but probably don't know it. We can tell you if you are. Also, whether you are near-sighted, far-sighted, or astigmatic. Which is your favorite eye? We can determine, too, the possible range of your attention, the number of things you can perceive and understand in a short time. Just how much more or less suggestible are you, in regard to things that you are not sure of, than other people? Can you be led to believe things that are clearly untrue, when carefully examined? We have a test for that.
How good is your memory for material that has no "meaning"—just pure, brute rote memory? How good for the "meaningful"? How much more vivid is your imagination than that of a hundred other people, when you all are put in the same situations?
How fast can you learn? How do you learn best,—thru the eye, or ear, or thru the muscle sense of what you do? What does punishment do to your learning? People differ on that. Is it easy or hard to jolt you out of your old habits?
Are your habits of thinking chiefly scientific, or moralistic, or esthetic, or "practical", or what? We have a pretty good test for that. When you reason or try to, do you generalize more or less quickly than others? You ought to know. Can you apply rules easily; are you a good deductive thinker? What is your intelligence level? We can test that with "verbal" or "talking" tests, and with "non-verbal" tests, as well.
And so on. That isn't all, tho our program is incomplete. But our ideal, for the years to come, is to give all Bowling Green students a chance to come into a well-equipped, carefully directed laboratory where, under controlled conditions, they can get a fund of information about themselves.
CLYDE W. GLEASON,
Department of Psychology.
Key payments due Wed.
BOX CANDIES
Dressed in Gay Christmas wrappings. Various sizes and various prices. 25c - 39c - 50c - 59c - 70c - $1 and up Remember mother, the girl friend and the kid sister with one of these boxes for Christmas.
LABEY'S SWEET SHOP
OWED TO MATHIAS
Mr. Mathias is my teacher. I shall not pass. He maketh me to prove dense propositions; He leadeth me to expose my ignorance before the class. He maketh me to draw figures on the board for my grade's sake. Yea though I study till midnight I shall not cram Geometry. The propositions bother me and the originals trouble me, He prepareth puzzles for me in the presence of mine enemies, He giveth me a low grade and my work fanneth over. Surely zero and this condition shall follow me all the days of my life, and I shall dwell in the class of Trigonometry forever. —Der Frosh.
Christmas Gifts DeLuxe
Unusual, Clever Gifts collected from the four corners of the earth are here for your selection.
VISITORS WELCOME
PICTURE FRAME & GIFT SHOP
180 South Main St.
STRAWSER & CO.
Jewelers and Licensed Optometrists
Orders Taken Here For GRADUATE'S CLASS RINGS
The Jewelry Store with the large window
115 N. Main St.
GIFTS for EVERYONE at WARDS
MONTGOMERY WARD & Co.
Bowling Green, Ohio
Debaters Clash, In Argument With B-W
Thursday afternoon another battle of wits was staged. Christie, Linsenmayer, and Egbert, Bowling Green aces, challenged and re-challenged the men from Baldwin-Wallace. Our men apparently were more experienced and brought to mind several inconsistencies and fallacies in their opponents' arguments for what they called "Americanized Socialism."
The B-W men were nearly smothered in a verbal hay mow of syllables and syllogisms. This is all written in due respect for the attitudes and men from B-W. They confessed after the battle that, down in their hearts, they believe just such a program as they proposed is inevitable for America. And we say, if the benefits they claim would result, let 'er come! No more war, no more famine, and no more starving.
Forensic activities have quieted down somewhat, until after the holidays. Two of the younger teams are going to Pemberville tonight to get some experience, some eats, and to give some information and entertainment to the listeners.
---
Student Opinion
The recent Austin case offers the most striking miscarriage of justice I have seen in some time. Whether it is due simply to politics, or fear of scandal or pure ignorance, I don't know. But certainly it can be safely branded as an outrage on the three luckless students who were caught and a ludicrous joke on the judicial court of our city.
To make a long story short: why were not the women, who were riding in the car at the time the misdemeanor was committed, apprehended just as were the three men? There can be only one answer—their fathers are too prominent in the town's upper circle, both legally and financially. The fact remains that if the two men not driving were accomplices and equally as guilty as the driver, then certainly the women were quite as guilty as the others.
In this day of equal rights between man and woman, it strikes me that the above is too often the fallacy in modern justice. Women holler about equal rights on smoking and social privileges but when a case comes in which they are equal, they gracefully droop an eyelid, and fade out of the picture.
---
Make . .
UHLMAN'S CLOTHING STORE
Your Gift
CHRISTMAS
HEADQUARTERS
---
CLA-ZEL THEATRE
TUES. and WED., Dec. 15-16
"FRANKENSTEIN"
with MAE CLARKE and JOHN BOLES
SUN. and MON., Dec. 20-21
JOE E. BROWN with DOROTHY SEE in
"LOCAL BOY MAKES GOOD"
---
RAPPAPORT'S
"For Everything"
GREETING CARDS — CANDY
Gifts for Every Purpose
"Santa Claus Headquarters"
---
Comments of A Freshman
As I sit in chapel I observe the most dignified upper classmen. It gives me great pleasure and encouragement. I often wonder where and how they find so many things to do every week during this hour.
You always hear a lot about the manners of the Freshmen but never anything about the upperclassmen. I often wonder who reared these dignified people who enjoy annoying other people during this hour. How well do they influence the Freshmen to do better? We are supposed to follow the upperclassmen, but I think that in a good many cases the Freshmen could teach the upperclassmen some manners.
Let me illustrate what I mean. The week that Rev. Seibens spoke to us there was a big lobster sitting right behind me in chapel. He was leafing through a magazine. Not only did he read it this way but he had to accompany this noise with humming. He is, I think, a junior.
Not only the boys of the dignified classes but also the girls need a little manner training. The girl on my right was getting her shorthand lesson for the next hour. Behind me there were three girls who came from the same town. They were very loudly planning what they would do over their Christmas vacation. They, I think, are sophomores.
These are not the only examples I could give; all over the room the upperclassmen are doing something. If they are not talking or reading, they are sleeping. Actually, I heard one person snore out loud about two weeks ago.
Now, do you think, upperclassmen, that you are setting a good example for the freshmen? If you do not endeavor to do your part, do you think the freshmen will ever be able to do theirs when they become upperclassmen? Did you ever stop to think? The man who is here giving his time and effort has spent hours on his speech. Dr. Williams does not bring a single person here who does not talk about something that every young man and young woman should be interested in.
So, upperclassmen, if you must be dignified, and if you will not recognize freshmen, you might set an example for them to follow, at least.
—Freshman 1931
---
Something to Gossip About!
Buzz! Buzz! What a hustle! Yet, it's easy to guess what it's all about! These thrift-conscious Co-eds have been a-shopping and are comparing notes... each convinced that she has captured the laurels in the pursuit of alluring values.
All agree, however, that for dresses, lingerie, hose, shoes and other items in the attire of the smart undergraduate, no store offers more for less than Penney's!
---
Make This a . .
Watch - Diamond Jewelry Christmas
All Watches, Diamonds, Jewelry
ONE HALF PRICE
ALEX KLEVER
JEWELER
121 N. Main St.
Sing Sing Kicks Pigskin
Sing Sing has gone in for football. It gives the convicts a chance to build character.
Although they don't wear stripes, their mascot is a Zebra (synthetic, however). At their last game, in which they defeated Ossining Naval Militia 33 to 0, Miss Lawes, the 10-year-old daughter of the warden, rode this inanimate beast in an inside-the-walls parade.
Sing Sing, on the Hudson, boasts a large alumni body, but somehow they don't come back for the games, if they can help it.
Coach Red Hope, who has a 30-year contract with the University, says that in their system they play a strictly home schedule, thus eliminating the need for the players to become acclimated to new fields. "This," he goes on, "puts us in the same football class with Army." Although the coach loses one of his guards next season, his backfield will remain intact for the next five years.
As yet Sing Sing has no paid athletes, but she does provide free board and room.
What they need now is a good Alma Mater song and some good snappy cheers. The boys object to the sound of "Hold 'em, Sing Sing!"
Teachers Entertain
(Continued from page 1, column 3)
games.
As the hands of the clock crawled toward a late hour, refreshments consisting of hot wassail, decorated Christmas cake, and candy, were served. Each plate was attractively adorned with a small Christmas tree as a favor.
The decorations of the gym helped materially in furthering the spirit of the occasion. The arches formed a background for huge lighted Christmas candles. The same motif was used on the curtains, while an Eastern Star shone brightly overhead.
The invited Faculty guests were: Pres. and Mrs. H. B. Williams, Dr. Clyde and Mrs. Hissong, Dr. W. C. and Mrs. Hoppes, Mr. and Mrs. A. B. Conklin, Miss Harriet Hayward, Miss Alice Roth, Miss Lena Mills, Mrs. Maude Sharp.
The success of this happy union of all the training forces of all the elementary schools was due largely to the splendid cooperation of all the following committees:
Invitation Com., Miss Winkles.
Favor Com., Miss Simmons.
Refreshment Com., Miss Doane.
Reception Com., Mr. Hoppes.
Entertainment Com., Miss Duncan, Miss Paxton.
Decoration Com., Miss Lorenz.
The general chairman of this well-planned and delightful evening was Miss Beattie.
In closing we are looking forward to "bigger and better" reunions of the elementary training staff and students.
From Christmas Headquarters comes this warning. Shop now! Avoid the rush and probable disappointment.
Appropriate gifts in all departments for all members of the family; also for the home.
A. Droney Co. |
DATA SYSTEMS—"DATA-PHONE" SERVICE
AND DATA ACCESS ARRANGEMENTS ON
DIRECT DISTANCE DIALING NETWORK
OVERALL DATA TRANSMISSION TEST REQUIREMENTS
CONTENTS PAGE
1. GENERAL 1
2. OPERATION OF THE DDD NETWORK 2
3. TRANSMISSION ASPECTS OF DATA SERVICES 3
4. MAXIMUM TRANSMITTING LEVEL AND OVERALL CIRCUIT LOSS 4
5. ATTENUATION FREQUENCY DISTORTION 4
6. RETURN LOSS REQUIREMENTS 5
7. MESSAGE CIRCUIT NOISE 5
8. IMPULSE NOISE 6
9. ENVELOPE DELAY DISTORTION 6
10. FREQUENCY SHIFT 9
11. EVALUATING DATA TRANSMISSION AND TROUBLE INVESTIGATION 9
12. REFERENCES 11
1. GENERAL
1.01 This section describes the overall transmission considerations and test requirements involved in providing data transmission over the switched telecommunications network (DDD) using loops, trunks, and switching equipment as used in voice service. This section applies equally to DATA-PHONE service and Data Access Arrangements (DAA) unless otherwise specified. There are no specific requirements for inductively or acoustically coupled DAA.
1.02 This section is reissued for the following reasons:
- To include DAA information throughout this section
- To update BSP references in 7.01
- To include reference to 914-type data test set in 11.03
- To include a reference section
- To change the upper test frequency to 2800 Hz
- To change the holding tone level to $-13$ dBm0.
1.03 In general, data transmission calls are handled the same as voice telephone calls. The calling party dials the desired number and the called party answers. When the parties are ready to send or receive data, both parties change their mode of operation from voice to data by the operation of pushbuttons or keys either built into or associated with the data set. It is necessary that the data sets on either end of the connection be of the same type and be compatible in bit rate, frequency, etc. Upon completion of the data transmission, both parties (by previous agreement) either hang up or return to the voice mode. There are exceptions to this procedure; e.g., the called station may be unattended. If the called station is unattended, the calling party receives a tone indicating that the distant end data set has answered and is ready to receive (or send) data. At the end of the call, the distant end will disconnect under the control of the far-end business machine equipment. Another exception is in the use of automatic calling units. These units permit a computer or other similar business machine to "dial" the desired number. These systems are usually associated with the unattended service feature described above, and therefore no person is involved at any time during the sequence of operations.
1.04 In order to test Bell System DATA-PHONE services, a number of 904-type data test centers (DTC) have been installed in various locations in the Bell System. The data test centers are used in conjunction with local and toll test centers. The two types of voiceband data test centers in operation are the 904A or C and the 904B or D. The 904A and C data test centers are designed for local testing and are capable of testing data sets which have a remote test feature. The remote test feature allows the data set to be tested from a DTC through the operation of a test key on the data set by the attendant. This permits a data set to be tested without telephone company personnel at the station. The 904B or D data test center must always be associated with a 904A or 904C DTC. The 904B or D DTC contains several types of data test sets and other test equipment, which enable it to make dynamic tests (end-to-end error tests) of data sets. (In other words, the DTC is a "presumed good" data set.) It is most useful for testing the interface of the data set (which is not tested by the 904A-type tests) and for quick demonstrations to the customer and/or business machine personnel that the data set is operational. However, since the DTC is somewhere in the middle of the network and may not be in the routing taken by the customer's call, sending data to a DTC is not always a conclusive test. If the results are good, no information is gained as to whether data service is satisfactory to the particular location the customer is calling. If the tests are bad, the fault may be due either to facilities between the DTC and the customer or to the data sets, indicating that further analysis is needed. Section 314-205-300 provides additional information on the overall transmission maintenance procedures.
1.05 The Data Access Arrangement provides a service through which a customer may connect his data set (modem) to the switched telecommunications network. Since a non-Bell System modem is used with DAA, the error rate performance cannot be specified. The DAA consists of a Bell System data coupler and, when necessary, a telephone. This arrangement provides signal level limiting, loop isolation, a loop-holding path for dc supervision, and hazardous voltage protection. The Bell System retains the responsibility for network control signaling features. However, with automatic DAAs, the customer's business machine may generate tone address signals or control the generation of dc dial pulses.
2. OPERATION OF THE DDD NETWORK
2.01 The DDD network consists of a large number of trunks which interconnect long distance switching offices. This network serves, with a few exceptions, all of the telephones in the United States and Canada. Since data calls are routed from city to city via the DDD network, it may be helpful to review briefly the general structure of the DDD network.
2.02 Central offices where the customer data lines are terminated for the purposes of interconnection to other offices are called end offices and are designated class 5 offices. The class 5 offices are connected by trunking facilities to higher ranking offices (lower class number). The class 5 office does not necessarily have to terminate in a class 4 office. Depending upon location, it may home on any higher ranking office (any lower class number) from a class 4 to a class 1. High-usage trunks may be provided between offices of any class. The needs of direct distance dialing are met by switching and trunking arrangements that employ the principle of Automatic Alternate Routing to provide rapid and accurate connections while making efficient use of the telephone plant. With Automatic Alternate Routing, a data call which encounters an "all trunks busy" condition on the first high-usage route tested is automatically and rapidly "route advanced" and offered in sequence to one or more alternate routes for completion. The overall tests of a data service should be made during the normal working hours in order to determine if there are any variations in error rate or in general performance under alternate routing conditions.
2.03 During the busy-hour period, the overflow traffic is more likely to be routed through alternate routes. For each call, there is a network of final routes which are last choice routes and are engineered on a low-delay basis. On the average, no more than a small fraction of the calls offered to this final trunk group during the busy-hour period will find all trunks busy. Within the United States, the maximum number of trunks in tandem will not exceed a total of nine, i.e., seven trunks from a class 4 office to a class 4 office, plus one trunk on each end to a class 5 office. The probability of a call traversing all nine final trunk routes is estimated to be only a few calls out of millions. Most calls are completed on direct or first alternate trunk routes between offices; relatively few switch through more than two intermediate (intertoll) trunks in tandem.
2.04 Part of the DDD network is operated on a 4-wire basis and the remainder on a 2-wire basis. It would be advantageous to operate all trunks in the DDD network at a zero loss, making the total transmission loss independent of the number of trunks used in a connection between two stations. If the whole system operated on a 4-wire basis subset to subset, it would be possible to keep the losses close to zero. However, with the interconnection of 2- and 4-wire facilities, problems of balance, echo, singing, noise, and crosstalk require circuits to be operated at definite minimum losses. The application of the above considerations to an individual trunk depends upon the facilities involved, length of the circuit, and accuracy with which the various adjustments at the terminals and intermediate points have been made and held. An important feature of analog transmission systems is that the adjustment of a component made at any one point will have a reaction upon adjustments made at other points. Therefore, it is important in clearing transmission difficulties to correct the basic cause of the trouble rather than to make terminal adjustments to eliminate the symptoms.
2.05 Most of the trunks on the DDD network are designed to operate on a via net loss (VNL) basis. VNL is defined as the lowest loss in dB at which it is possible to operate an intermediate trunk facility in a multitrunk DDD connection, considering limitations of echo, crosstalk, noise, singing, and office balance on the overall connection. VNL design provides the lowest practical loss at which a trunk can be operated regardless of the number of trunks in tandem in the connection. More information about VNL is contained in Section 851-300-100 entitled Transmission Design Consideration and Objectives, Switched Special Services and PBX Services.
3. TRANSMISSION ASPECTS OF DATA SERVICE
3.01 The data subscriber line should meet the objectives shown in Section 314-205-501 (Data Systems—DATA-PHONE® Service and Data Access Arrangements on Direct Distance Dialing Network—Test Requirements for Subscriber, Foreign Exchange, and Remote Exchange Lines).
3.02 Voice transmission and data transmission, while they are similar in basic elements such as means of switching and circuit design, differ somewhat in transmission requirements. There are a number of transmission considerations which may affect data transmission over the DDD network. They are as follows:
(a) Maximum transmitting level and overall circuit loss
(b) Attenuation frequency distortion
(c) Return loss
(d) Message circuit noise
(e) Impulse noise
(f) Envelope delay distortion
(g) Frequency shift
(h) Nonlinearities
(i) Phase jitter (incidental FM)
(j) Hits—amplitude and phase
(k) Dropouts—microwave fading.
These items are covered in more detail in Parts 4 through 11 of this section.
3.03 Several of the parameters listed above are primarily controlled by voice requirements. These include overall circuit loss, return loss, and message circuit noise. Of the remaining parameters, data requirements are usually controlling. In general, voice telephone service can tolerate greater transmission impairments than data service. For example, if the customers have difficulty with transmission during a telephone conversation, they will either compensate for the difficulty by talking louder or repeat the part of the conversation that has been missed. Under the same conditions, the data set is at a disadvantage since it can only transmit at a predetermined level and frequency. The data set has no way of determining if errors or reduction of signal level have occurred during transmission. (Of course, error detection capability may be provided in some instances.) Impulse noise, except in extreme conditions, has little effect upon voice transmission since the duration of the impulse noise peaks involved is often too short to be recognized by ear. Impulse noise is a serious problem in voice-frequency data transmission, in which the data signals are measured in milliseconds or less. Envelope delay does not have a serious effect upon voice transmission because the human ear is relatively insensitive to differences in delay at different frequencies. Modern carrier and loaded facilities have better envelope delay performance than the older types. Carrier frequency error does not seriously affect voice transmission in most instances. For data transmission, more than a 10-Hz deviation from the normal carrier frequency may degrade a data circuit to the point at which the data being sent is unintelligible.
3.04 The performance of a data set generally deteriorates after the operating limits are exceeded. For example, after the attenuation distortion (frequency response) limit has been reached in data set 202C, a small change may degrade the performance from good to intolerable. Maintenance and adjustment of data transmission equipment and facilities should be as close to the optimum point as possible. Prudent and careful application of adjustments to each section in the overall connection will increase the reliability of the service.
4. MAXIMUM TRANSMITTING LEVEL AND OVERALL CIRCUIT LOSS
4.01 The maximum practical transmitting level of data sets is limited by crosstalk in multipair cable facilities and by the maximum level of a steady tone, or combination of tones, that may be applied to a carrier terminal unit without overloading it. Most DATA-PHONE data sets have been designed so that the maximum transmitting level will not exceed one milliwatt in 900 ohms. In DAA, the couplers are designed to limit, when necessary, the signal power delivered by the customer-provided data modem. In connection with initial installation tests (see Section 314-205-501), the loop insertion loss is measured and recorded for the data loop. The maximum sending level for the telephone company-provided data set involved should be set so as not to exceed $-12$ dBm at the main frame appearance of the subscriber line at the class 5 office furnishing dial tone to this line. This will correspond to a maximum data signaling power of $-13$ dBm0 on toll carrier facilities. The transmit level will be selected by the engineering department and the proper option shown on the circuit layout card or line card. This can be verified by dialing the milliwatt supply from the customer's premises and subtracting the transmission measuring set reading from $-12$ dBm. In regard to some types of DATA-PHONE data sets such as the 400 series, more than one tone is sent simultaneously. The $-12$ dBm figure represents the total power of all tones transmitted simultaneously. In order to keep receiving levels within requirements, the data set level should be set so it will be received at the serving central office as close to the $-12$ dBm level as possible without exceeding it. Information on the transmit level options is found in the BSP installation section of that particular data set.
4.02 The maximum permissible overall circuit loss between data sets depends upon the DDD connection and the type, sensitivity, and operating frequencies of the data set. The receive level of data sets ranges from +2 dBm to $-53$ dBm. The exact maximum loss that a particular data set can tolerate may be calculated by subtracting the maximum transmitting level from the minimum receive level. At this time, all DATA-PHONE data sets can tolerate an overall loss of 36 dB at 1000 Hz and 48 dB at 2800 Hz. Although the slope requirements are based on loss measurements at frequencies of 1000 and 2800 Hz, the test oscillator should be offset by about 4 Hz to obtain stable measurements over T carrier.
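The dB bookkeeping in 4.01 and 4.02 can be sketched in a few lines. The following Python fragment is illustrative only: the function names and the worked example are this sketch's own, not part of the practice, and it assumes the milliwatt supply transmits at exactly 0 dBm.

```python
# Sketch of the level and loss arithmetic of 4.01-4.02
# (illustrative helper names, not Bell System nomenclature).

def loop_loss_from_milliwatt(tms_reading_dbm):
    """Loop loss inferred by dialing the 0 dBm milliwatt supply and
    reading a transmission measuring set at the customer premises."""
    return 0.0 - tms_reading_dbm

def transmit_option(loop_loss_db, co_limit_dbm=-12.0, set_max_dbm=0.0):
    """Highest transmit level that still arrives at the class 5 office
    at or below the -12 dBm limit, capped at the data set's
    one-milliwatt (0 dBm) maximum."""
    return min(co_limit_dbm + loop_loss_db, set_max_dbm)

def max_tolerable_loss(max_tx_dbm, min_rx_dbm):
    """Maximum overall circuit loss a data set can tolerate: maximum
    transmitting level minus minimum receive level (4.02)."""
    return max_tx_dbm - min_rx_dbm

# A TMS reading of -4 dBm implies a 4 dB loop; the set would then be
# optioned to transmit at -8 dBm so it reaches the office at -12 dBm.
print(loop_loss_from_milliwatt(-4.0))   # 4.0
print(transmit_option(4.0))             # -8.0
print(max_tolerable_loss(0.0, -53.0))   # 53.0
```

On a very long loop the cap dominates: `transmit_option(15.0)` returns 0 dBm, the one-milliwatt ceiling, and the signal simply arrives at the office below the limit.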
4.03 With the many improvements in DDD network losses over the past few years, overall loss is not considered to be a major cause of trouble. Under present design, the maximum overall circuit 1000-Hz loss should not exceed 37 dB. This includes the local loops and toll connecting trunks at each end plus seven intertoll trunks in the connection. If problems arise, they usually can be traced to improperly lined-up trunks in the network.
5. ATTENUATION FREQUENCY DISTORTION
5.01 Excessive attenuation frequency distortion (also called slope) on voice-frequency data transmission increases the error rate by degrading the signal as it traverses the facility. Some DATA-PHONE data sets that operate in the higher bit range [above 300 bits per second (bps)] can tolerate more attenuation frequency distortion by the use of the compromise equalizer. With low-speed (under 300 bps) narrow band data sets, such as the 100-type data set, attenuation frequency distortion is less limiting because of the narrow bandwidth.
5.02 For the entire station-to-station connection through the DDD network, the attenuation frequency distortion (slope) should not exceed 15 dB between 1000 Hz and 2800 Hz for satisfactory operation of data sets. For high-speed data transmission, the loop between the serving central office and the customer location should measure no more than 3.0-dB maximum difference between the 1000-Hz loss and the 2800-Hz loss. The maximum difference on the connection over the DDD network, end office to end office, should not exceed 9 dB between 1000 Hz and 2800 Hz.
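As a worked illustration of these limits, the sketch below (function and key names are hypothetical, not from the practice; the absolute difference is used to implement "maximum difference") compares a measured slope against the 3, 9, and 15 dB objectives:

```python
# Slope objectives of 5.02, in dB (illustrative dictionary keys).
LIMITS_DB = {
    "loop": 3.0,                      # serving CO to customer location
    "end_office_to_end_office": 9.0,  # across the DDD network
    "station_to_station": 15.0,       # entire connection
}

def slope_db(loss_1000_db, loss_2800_db):
    """Attenuation frequency distortion (slope): magnitude of the
    difference between the 2800-Hz loss and the 1000-Hz loss."""
    return abs(loss_2800_db - loss_1000_db)

def slope_ok(segment, loss_1000_db, loss_2800_db):
    """True if the measured slope meets the objective for the segment."""
    return slope_db(loss_1000_db, loss_2800_db) <= LIMITS_DB[segment]

# A loop measuring 5 dB at 1000 Hz and 7.5 dB at 2800 Hz has a
# 2.5 dB slope and meets the 3 dB loop objective.
print(slope_ok("loop", 5.0, 7.5))   # True
```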
5.03 The attenuation frequency characteristic of connections on the message network varies from call to call. On a built-up connection, the facility is affected at higher frequencies due to the effect of capacitance in office wiring. When tests of a data service reveal instances of high distortion measurements accompanied by an excessive number of errors, overall loss-frequency measurements should be made station-to-station. (The connection should be "held" in order to make the measurements.) If these measurements indicate that there is an attenuation difference, the circuit should be measured at each loop. Both loops should meet the maximum objective and if this objective cannot be met, each link should be tested to determine the source of the trouble. See Section 660-405-300 for additional information on trouble sectionalization and clearing methods to be applied to trunks in the DDD network when used for data services. Section 660-101-305 provides information on the local testroom procedures followed by the plant service center when handling data service complaints. It also contains the overall maintenance plan for DDD data service.
5.04 The attenuation frequency characteristic also defines the bandwidth of the transmission facilities. DATA-PHONE and DAA services are operated on frequencies as low as approximately 300 Hz and as high as approximately 3000 Hz. Modern transmission facilities provide sufficient bandwidth to accommodate these frequencies; however, obsolete types of facilities (such as H-1/2 loading) may prevent satisfactory data transmission and a substitute must be provided.
6. RETURN LOSS REQUIREMENTS
6.01 Return loss requirements for data sets are determined by listener echo (echo heard by the listener). Listener echo is limiting for data sets because the receiving data set on a connection will interpret the data received through the echo path as interference. Most of the data sets in use at this time will not tolerate listener echo delayed more than one-third the baud interval and at a power closer than 12 dB to the received signal level. Return losses at each 2-wire to 4-wire point in the DDD network will affect listener echo. At 2-wire switching points, the return losses, in turn, are affected by office balance. Voice requirements for return loss and echo on the DDD network provide an adequate margin for data service.
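The listener-echo tolerance in 6.01 can be restated numerically. The sketch below simply encodes the one-third baud interval and 12 dB figures quoted above; the function names and the pass/fail framing are this sketch's own, not the practice's.

```python
# Listener-echo tolerance of 6.01 (illustrative naming).

def max_echo_delay_s(baud_rate):
    """One-third of the baud (symbol) interval, in seconds."""
    return (1.0 / baud_rate) / 3.0

def echo_tolerable(delay_s, echo_dbm, signal_dbm, baud_rate):
    """An echo is treated as tolerable if it arrives within one-third
    of a baud interval, or if it is at least 12 dB below the
    received signal level."""
    within_delay = delay_s <= max_echo_delay_s(baud_rate)
    enough_margin = (signal_dbm - echo_dbm) >= 12.0
    return within_delay or enough_margin

# At 1200 baud the symbol interval is ~833 us, so echoes delayed more
# than ~278 us need at least 12 dB of margin below the signal.
print(round(max_echo_delay_s(1200) * 1e6))  # 278
```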
6.02 Bell Telephone Laboratories studies on return loss requirements indicate that the 12-dB first listener echo requirement for data sets is valid but can be met without special loop treatment. Therefore, there is no longer a specific return loss requirement for data service loops. Return loss is an important parameter, especially to high-speed data transmission, but the troubles are usually isolated to trunks, improperly installed E-type repeaters, or poorly balanced hybrids rather than the loop facilities.
7. MESSAGE CIRCUIT NOISE
7.01 Message circuit noise is the noise on a channel in the absence of a signal. Message circuit noise is of lesser importance in data service than in voice service. If normal voice circuit noise objectives are met, then data transmission noise objectives will be automatically met. Message circuit noise objectives for voice may be found in the following sections:
- 311-100-500—Circuit Order and Trunk Order Transmission Tests—PBX Central Office Trunks, Off-Premises Station Lines and Tie Trunks Having Access to the Direct Distance Dialing Network
- 311-100-501—1000 Hz and Noise Tests—PBX Central Office Trunks, Off-Premises Station Lines and Tie Trunks Having Access to the Direct Distance Dialing Network
- 660-403-500—Message Circuit Noise Measurements on Message Trunks—Requirements
- 660-500-500—Transmission Testing of Message Trunks at Locations Other Than Testboards—General Information.
7.02 Message circuit noise will not be a problem if a 24-dB signal-to-C-notched noise ratio is maintained throughout the connection.
8. IMPULSE NOISE
8.01 Impulse noise hits are a primary source of errors in data transmission. If impulse noise hits of sufficient magnitude occur during data transmission, they can seriously degrade the error rate of the data transmission system. *Impulse noise measurements should be made on every loop that is to be used for the transmission of data signals at 300 bps or greater.* A 6-type impulse noise counter is used to measure impulse noise on a facility. Information involving the use of the 6-type impulse counter can be found in the 103-6YY-ZZZ series of practices. The magnitude and frequency of the occurrence of the impulse noise voltages are used to specify the impulse noise objective. The objective is expressed as a threshold (referred to the zero transmission level point) which will be exceeded no more than a specified number of times per 15 minutes for individual circuit measurements.
8.02 Impulse noise exhibits some level variation with the time of day. It was previously believed this variation was great enough to warrant measurements only during the busy hour. Recent studies show this effect to be less severe, and it is now permissible to make these measurements any time during the normal business day. Measurements resulting from data transmission service trouble reports should be made during periods when the customer is experiencing trouble, if possible.
8.03 Previously, it was recommended that transmission level corrections be made at 1700 Hz, which was selected because it is near the center of the spectrum of high-speed voiceband data transmission. This is no longer considered necessary on trunks, where the difference between the 1000-Hz and 1700-Hz losses is small, but the correction must still be taken into account on loops. On loops, an average of the 1000- and 2800-Hz losses is used, since nonloaded facilities are encountered.
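As a minimal sketch of that loop correction (the function name is this sketch's own, not from the practice):

```python
def loop_reference_loss_db(loss_1000_db, loss_2800_db):
    """Reference loss used when referring loop impulse-noise
    thresholds to the zero transmission level point: the average of
    the 1000-Hz and 2800-Hz losses (8.03)."""
    return (loss_1000_db + loss_2800_db) / 2.0

# A loop with 4 dB loss at 1000 Hz and 8 dB at 2800 Hz is treated as
# a 6 dB loop for impulse-noise level correction.
print(loop_reference_loss_db(4.0, 8.0))  # 6.0
```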
8.04 The impulse noise objectives for trunks and facilities are given in Table A. The objectives given in this table are average levels where one-half of the trunks in the trunk group or one-half of the facilities in a facility group exhibit five or fewer counts in 5 minutes.
8.05 A trunk group is defined as all of the trunks between two offices, A and B, for any given purpose and under the same maintenance control. A facility group is defined as all of the facilities in a given routing with common design. For example, the 12 channels in an N1 system would be included in a group. Where specific trouble investigation is in process, only those facilities under investigation are included in the facility group. For example, if seven of the 24 channels in an ON1 system are used in a trunk group A-B where high-impulse levels have been noted, only those seven channels enter into the computations.
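The group objective in 8.04, applied to the group definitions in 8.05, can be sketched as follows (illustrative naming; the "counts" are 6-type impulse counter readings over 5 minutes):

```python
def group_meets_objective(counts_per_member):
    """8.04 group test: at least one-half of the trunks or facilities
    in the group must exhibit five or fewer counts in 5 minutes at
    the Table A threshold."""
    passing = sum(1 for counts in counts_per_member if counts <= 5)
    return passing >= len(counts_per_member) / 2.0

# Two of four members pass (3 and 4 counts), so the group meets
# the objective.
print(group_meets_objective([3, 4, 6, 7]))  # True
```

Per 8.05, only the facilities actually under investigation enter the list; e.g., seven suspect channels of an ON1 system would form a seven-element group.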
8.06 Where compandored facilities are encountered, a $-13$ dBm0 holding tone is used in setting the objectives. This stabilizes the expandor loss at 9.0 dB.
8.07 Impulse noise objectives will be met if, throughout the connection, fewer than 15 counts in 15 minutes occur at a threshold 5 dB below the data signal.
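The overall objective in 8.07 reduces to two small checks, sketched here with names of this sketch's own choosing:

```python
def impulse_threshold_dbm0(data_signal_dbm0, margin_db=5.0):
    """Counting threshold: 5 dB below the data signal level (8.07)."""
    return data_signal_dbm0 - margin_db

def meets_impulse_objective(counts_in_15_min):
    """True if fewer than 15 counts occurred in 15 minutes at the
    threshold above."""
    return counts_in_15_min < 15

# With a -13 dBm0 data signal, impulses exceeding -18 dBm0 are counted.
print(impulse_threshold_dbm0(-13.0))   # -18.0
print(meets_impulse_objective(7))      # True
```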
9. ENVELOPE DELAY DISTORTION
9.01 Envelope delay distortion can seriously affect data transmission on the DDD network. Different frequencies undergo different amounts of delay as they are transmitted over the message network, which will cause the data signal to be distorted. Voice transmission performance is not affected to the same degree by envelope delay distortion as data transmission. The amount of envelope delay distortion that will be found on a voice-frequency facility depends upon the type and, in the case of cable, the length of the facility. Carrier system distortions are affected by the type of carrier and the multiplex arrangement encountered.
9.02 Envelope delay distortion (EDD) is usually expressed as the maximum variation of the envelope delay characteristic within a particular frequency band.
### TABLE A
**TRUNK AND FACILITY IMPULSE NOISE OBJECTIVES**
**TOLL CONNECTING TRUNKS AND INTERTOLL TRUNKS (TYPE TRUNK)**

| LENGTH (MILES) | NOTE (1) | NOTE (2) | NOTE (3) |
|----------------|----------|----------|----------|
| 0 through 60 | 54 dBm<sub>0</sub>* | 66 dBm<sub>0</sub>* | 58 dBm<sub>0</sub>* |
| 61 through 125 | 54 dBm<sub>0</sub>* | 66 dBm<sub>0</sub>* | 58 dBm<sub>0</sub>* |
| 126 through 250 | 54 dBm<sub>0</sub>* | 66 dBm<sub>0</sub>* | 59 dBm<sub>0</sub>* |
| 251 through 500 | 66 dBm<sub>0</sub>* | 66 dBm<sub>0</sub>* | 59 dBm<sub>0</sub>* |
| 501 through 1000 | 66 dBm<sub>0</sub>* | 66 dBm<sub>0</sub>* | 61 dBm<sub>0</sub>* |
| 1001 through 2000 | 66 dBm<sub>0</sub>* | 66 dBm<sub>0</sub>* | 64 dBm<sub>0</sub>* |
| Over 2000 | 66 dBm<sub>0</sub>* | | |
**TOLL CONNECTING FACILITIES AND INTERTOLL FACILITIES (TYPE FACILITY)**

| LENGTH (MILES) | NOTE (1) | NOTE (2) | NOTE (3) |
|----------------|----------|----------|----------|
| 0 through 60 | 51 dBm<sub>0</sub>* | 64 dBm<sub>0</sub>* | 55 dBm<sub>0</sub>* |
| 61 through 125 | 51 dBm<sub>0</sub>* | 64 dBm<sub>0</sub>* | 55 dBm<sub>0</sub>* |
| 126 through 250 | 51 dBm<sub>0</sub>* | 64 dBm<sub>0</sub>* | 56 dBm<sub>0</sub>* |
| 251 through 500 | 64 dBm<sub>0</sub>* | 64 dBm<sub>0</sub>* | 56 dBm<sub>0</sub>* |
| 501 through 1000 | 64 dBm<sub>0</sub>* | 64 dBm<sub>0</sub>* | 56 dBm<sub>0</sub>* |
| 1001 through 2000 | 64 dBm<sub>0</sub>* | 64 dBm<sub>0</sub>* | 58 dBm<sub>0</sub>* |
| Over 2000 | 64 dBm<sub>0</sub>* | | 61 dBm<sub>0</sub>* |
**Note (1):** Voice frequency only.
**Note (2):** Compandored carrier or mixed compandored and noncompandored facilities with -13 dBm<sub>0</sub> holding tone.
**Note (3):** Noncompandored carrier.
* Limits are given for measurements made with instruments equipped with "C" Message (C) weighting filter. If measurements are made with instruments equipped with voiceband (VB) weighting filter, add one dB to the objective.
9.04 Envelope delay distortion is expressed as microseconds over the band of interest. Data sets vary in their tolerance to envelope delay distortion, depending upon the type of modulation and the bit rate. Low-speed DATA-PHONE data sets can tolerate a greater amount of delay distortion than the higher speed data sets. With data service, envelope delay distortion should be suspected if high error rates are encountered which cannot be attributed to message noise, impulse noise, overall loss, or attenuation frequency distortion. The P/AR (peak to average ratio) meter (Section 103-110-110) is useful in determining the condition of a data transmission connection. P/AR measurements are primarily sensitive to EDD, but attenuation distortion and noise may also have a strong effect on P/AR readings. Table B shows the expected readings for acceptable and unacceptable conditions. If envelope delay measuring equipment is available at the station ends of the overall connection suspected of having excessive envelope delay distortion, direct measurements should be made. If this equipment is not available, consult the Data Technical Support personnel through normal lines of organization for advice. The end-to-end envelope delay distortion should be compared with the requirements of the data set involved. The maximum overall envelope delay distortion requirements for satisfactory error performance for data sets 201A and 202-type are shown in Table C of this section. Lists 3 and 4 of the 203-type data set are designed for DDD operation. Error performance is not specified, but some insight into the performance that can be expected, based on a field trial (see Technical Reference, Data Set 203-Type, June 1970), is as follows. The performance was equal to or better than $10^{-5}$ errors per bit for 95 percent of the calls at 1800 bps (2-level). An error rate of $10^{-4}$ or better was obtained on 84 percent of all calls, and an error rate better than $10^{-6}$ errors per bit on 62 percent of all calls, at 3600 bps (4-level). Error performance at 4800 bps is approximately equal to that at 3600 bps. In the case of a customer-provided modem, the requirements should be the same as for an equivalent bit rate Bell System data set.
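The field-trial error rates quoted above translate directly into expected error counts for a transmission of a given length. A minimal sketch of that arithmetic (Python used for illustration; the function name is an assumption, not part of this practice):

```python
def expected_bit_errors(error_rate, bit_rate_bps, seconds):
    """Expected number of bit errors over an interval, given a
    per-bit error rate (e.g. 1e-5 from the 203-type field trial)."""
    return error_rate * bit_rate_bps * seconds

# At the 1e-5 errors/bit achieved on 95 percent of 1800 bps (2-level)
# calls, a one-minute transmission averages about one bit error.
print(expected_bit_errors(1e-5, 1800, 60))
```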
### TABLE B
**P/AR READINGS**
| CONNECTION IS | P/AR READING |
|---------------|-------------|
| Acceptable | Above 50 |
| Unacceptable | Below 50 |
### TABLE C
**MAXIMUM ENVELOPE DELAY DISTORTION REQUIRED FOR SATISFACTORY ERROR PERFORMANCE**
| DATA SET (SEE NOTE) | FREQUENCY | MAX EDD |
|---------------------|--------------------|-----------|
| 201A | 1150-2300 Hz | 500 μs |
| | 1000-2500 Hz | 900 μs |
| | 800-2700 Hz | 1750 μs |
| 202-Type | 1200-2200 Hz | 1050 μs |
| | 1000-2500 Hz | 1500 μs |
| | 800-2600 Hz | 2000 μs |
*Note:* All data sets used in connection with DATA-PHONE service should have the compromise equalization option connected (see 9.03).
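The band-by-band limits of Table C amount to a simple lookup against measured EDD. A minimal sketch, assuming measurements keyed by frequency band (the function name and data layout are illustrative, not part of this practice):

```python
# Maximum envelope delay distortion (microseconds) per Table C,
# keyed by data set type and frequency band (Hz).
EDD_LIMITS_US = {
    "201A": [((1150, 2300), 500), ((1000, 2500), 900), ((800, 2700), 1750)],
    "202":  [((1200, 2200), 1050), ((1000, 2500), 1500), ((800, 2600), 2000)],
}

def edd_within_limits(data_set, measurements):
    """Check measured EDD against Table C.

    `measurements` maps (low_hz, high_hz) bands to measured EDD in
    microseconds; returns True only if every measured band is within
    its Table C maximum.
    """
    limits = dict(EDD_LIMITS_US[data_set])
    return all(measurements.get(band, 0) <= max_us
               for band, max_us in limits.items())

# A 201A set measuring 450 us over 1150-2300 Hz is acceptable;
# 600 us over the same band exceeds the 500 us maximum.
print(edd_within_limits("201A", {(1150, 2300): 450}))  # True
print(edd_within_limits("201A", {(1150, 2300): 600}))  # False
```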
9.05 On occasion, envelope delay distortion will be too high within the DDD network for data transmission operation between two particular points on the network. Information about the situation should be forwarded through the lines of organization for reassignment or further investigation. It may be necessary to provide additional equalization at the data set location or to install a remote exchange (RX) line to bypass part of the network until better facilities can be provided. The RX line will have to meet the requirements per Section 314-205-501.
9.06 When remote exchange lines of any type, including wide area telephone service (WATS) lines, are used for data service, their design should first be reviewed by personnel responsible for circuit design to ensure that the envelope delay distortion will not exceed the limits for the type of data sets involved. RX lines to class 4 or higher offices will include the distortion of a toll connecting trunk in computing objectives. If the envelope delay distortion exceeds 300 microseconds between 1000 and 2400 Hz, the line should be delay-equalized to meet the 300-microsecond objective. WATS design will be identical to RX design if other than the local office is used as a serving office. If the local office is used, the normal loop objectives apply. Information about delay equalizers may be found in Sections 314-820-100, -103, and -104.
10. FREQUENCY SHIFT
10.01 Frequency shift (sometimes called frequency offset) beyond the capabilities of the data set will result in high error rate. If the symptoms occur and the cause cannot be readily attributed to loss, attenuation frequency distortion, steady or impulse noise, or envelope delay, the possibility of frequency shift should be investigated.
10.02 Under normal circumstances, frequency shift will have little effect upon voice transmission. With data service, deviations in frequency of more than ±10 Hz may cause distortion of a data signal. The modulated data signal of the DATA-PHONE data set is transmitted as a tone or combination of tones which have been calibrated to precise frequencies. At the receiving end of the facility, the signal is demodulated by the receiving data set in order to recover the data. If the frequencies of the transmitted tones are changed as they traverse the facility, the frequency-sensitive circuits in the receiving data sets will not receive the tones at the optimum points, thus resulting in a distortion of the data signal and an increase in the number of errors received. On carrier systems used in connection with data services, the overall carrier frequency error should be kept to ±5 Hz or less. Individual carrier facility sections should have carrier frequency errors of no more than ±2 Hz.
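The two objectives in 10.02 (no more than ±2 Hz of error per carrier facility section, ±5 Hz or less overall) can be checked against per-section measurements. A minimal sketch, assuming the signed section errors add algebraically through tandem sections (the function name is illustrative):

```python
def carrier_frequency_ok(section_errors_hz,
                         per_section_limit=2.0, overall_limit=5.0):
    """Apply the 10.02 objectives: each carrier facility section
    should contribute no more than +/-2 Hz of frequency error, and
    the overall error should stay within +/-5 Hz."""
    if any(abs(e) > per_section_limit for e in section_errors_hz):
        return False
    # Signed errors are assumed to add algebraically end to end.
    return abs(sum(section_errors_hz)) <= overall_limit

# Three sections at +1.5, -0.5, +1.0 Hz: each within 2 Hz, sum 2.0 Hz.
print(carrier_frequency_ok([1.5, -0.5, 1.0]))   # True
# One section at +2.5 Hz violates the per-section objective.
print(carrier_frequency_ok([2.5, -0.5, 1.0]))   # False
```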
10.03 There will not be a frequency error problem on the "transmitted carrier" type of carrier systems, such as the Western Electric "N" (only even numbered channels with N3), "O", and "ON". With this type of carrier system, the carrier signal that is used for modulation is transmitted directly to the distant terminal for demodulation. Western Electric "J", "K", and "L" systems are of the suppressed carrier type, in which the carrier is suppressed at the transmitting terminal and resupplied at the receiving terminal. When this function is accomplished by the use of a generator that is held in synchronization with the generator at the transmitting end, frequency shift will be at a minimum and should not cause data distortion.
10.04 Frequency shift exists primarily in suppressed carrier systems where there has been no provision for synchronizing the carrier terminals at the ends of the system. Nonsynchronized Western Electric type "J", "K", "L", and "C" systems use carrier supply generators with long-term stability. These systems should not present any frequency shift problems provided they are adequately aligned and maintained at the intervals specified in the practices. Western Electric type "C" (vacuum tube modulator type) and "H" carrier systems may present more serious problems, depending upon operational environment and the maintenance routines.
10.05 Carrier systems that are not supplied by the Western Electric Company can be roughly classified in the same way as the Western Electric systems. Actual frequency shift performance of any system in the questionable category should be determined prior to the start of data service over that system and corrective action instituted if necessary.
10.06 In the event that all specified requirements have been met and unsatisfactory service is experienced, the trouble may be caused by either phase jitter, harmonic distortion, or single-tone interference. Normally, it is not expected that the plant department will be required to make these tests. However, if advised by Data Technical Support personnel, these parameters should be checked.
11. EVALUATING DATA TRANSMISSION AND TROUBLE INVESTIGATION
11.01 In all instances, facilities used for data transmission should meet normal voice-frequency objectives prior to their consideration for use on data services. The additional requirements described in Parts 4 through 10 of this section should then be applied to the facilities, as required, in order to accommodate the more stringent objectives of data transmission.
11.02 On a connection over the DDD network, the effects of such items as overall loss, attenuation frequency distortion, envelope delay, etc., are cumulative as the length of the circuit and the number of links involved increase. All types of switched facilities are subject to some interruptions, which may be due to equipment failures, facility failures, or human errors. The object of maintenance testing for data services is to determine the location of troubles which can cause actual failures in data transmission over the message network. The malfunction may be of very short duration, measured in microseconds; a fade or dropout extending for seconds or minutes; or an actual facility failure which interrupts service for a considerable length of time. It is important to note that service should be restored as quickly as possible. For example, a data service operating at 1200 bps is capable of transmitting or receiving over four million bits of information in one hour. An outage of one hour due to a facility malfunction can cost the customer a considerable amount of money in lost "computer time," obsolescence of information, and extra time consumed in storing and recovering data which has accumulated during the disruption of service. The duration of intermittent interruptions is an important factor in the detection of trouble, since complete failures are more readily found than momentary troubles. The message network is so arranged that a complete failure of a cable, carrier channel, or central office terminal equipment will usually be detected by means of automatic alarm systems. With interruptions of shorter duration, the shorter the time interval, the more difficult the problem of detection. The line evaluation test covering the particular data set under test is described in the installation performance procedures of the 590 series of practices. Errors received and peak distortion determine the quality of the circuit under test.
11.03 Analysis of station record cards may give an indication as to the source of repeated data troubles. When it is possible, the circuit or connection should be "held" at the serving office and the call traced and tested through its various links in order to detect the malfunction. (See Section 314-205-300 and 590-010-300 for procedures.) Since it is not always possible to continue to "hold" the suspected circuit for immediate testing, a record should be made of the links involved and arrangements made to test the circuit at the first appropriate opportunity. A line evaluation test should be made from the "sending end" data set location. Use the suspected circuit for the test. Both locations should be equipped with 901-, 902-, and 903-type data test sets or a 914-type data test set.
11.04 An analysis of possible results of the circuit evaluation test is shown in Table D. The result of the tests may be used as a guide for locating transmission difficulties encountered with data services.
### TABLE D
**CIRCUIT EVALUATION TEST—RESULTS USING 900-TYPE DATA TEST SETS—RECEIVING END OF CIRCUIT**
| TROUBLE CONDITION | REMARKS | TRANSMISSION IMPAIRMENTS (CHECK ITEMS IN SEQUENCE AS SHOWN) |
|-------------------|---------|--------------------------------------------------------------|
| High distortion and high error rate | Distortion reading is high and steady. | Attenuation Frequency Distortion (Part 5)<br>Message Circuit Noise (Part 7)<br>Return Loss (Part 6)<br>Envelope Delay Distortion (Part 9) |
| High distortion and high error rate | Distortion reading is high and unsteady. | Overall Circuit Loss (Part 4) |
| High error rate and normal distortion | Distortion reading shows frequent peaks. | Impulse Noise (Part 8)<br>Message Circuit Noise (Part 7) |
| High distortion and high error rate | Distortion reading may shift gradually. | Frequency Shift (Part 10) |
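Table D is in effect a symptom-to-checklist mapping, and can be expressed as a lookup keyed on the observed distortion behavior. A minimal sketch (the key strings and function name are illustrative paraphrases of the table, not part of this practice):

```python
# Table D as a lookup: (trouble condition, distortion behavior) ->
# impairments to check, in the sequence shown.
TABLE_D = {
    ("high distortion, high errors", "high and steady"):
        ["Attenuation Frequency Distortion (Part 5)",
         "Message Circuit Noise (Part 7)",
         "Return Loss (Part 6)",
         "Envelope Delay Distortion (Part 9)"],
    ("high distortion, high errors", "high and unsteady"):
        ["Overall Circuit Loss (Part 4)"],
    ("high errors, normal distortion", "frequent peaks"):
        ["Impulse Noise (Part 8)", "Message Circuit Noise (Part 7)"],
    ("high distortion, high errors", "shifts gradually"):
        ["Frequency Shift (Part 10)"],
}

def checks_for(condition, behavior):
    """Return the Table D check sequence for the observed symptoms,
    or an empty list if the combination is not tabulated."""
    return TABLE_D.get((condition, behavior), [])

# Frequent distortion peaks with a normal average reading point
# first at impulse noise, then message circuit noise.
print(checks_for("high errors, normal distortion", "frequent peaks"))
```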
12. REFERENCES
12.01 Bell System Practices mentioned in this section which cover various equipment are listed as follows:
| SECTION | TITLE | SECTION | TITLE |
|-----------|----------------------------------------------------------------------|-----------|----------------------------------------------------------------------|
| 010-521-100 | Data Technical (DATEC) Support | 314-820-100 | Dialing Network—Test Requirements for Subscribers, Foreign Exchange, and Remote Exchange Lines |
| 103-110-110 | J94027A and B Par Meter Generator and Receiver, Description, Operation, and Maintenance | 314-820-103 | Envelope Delay Characteristics of 200-Type Delay Equalizers |
| 107-100-100 | 901A and 901B Data Test Sets—Identification and Operation | 314-820-104 | Envelope Delay Characteristics of 366- and 367-Type Equalizers |
| 107-101-100 | 914-Type Data Test Sets, Description and Operation | 590-010-300 | Data Systems—DATA-PHONE® Service on Direct Distance Dialing Network—Overall Field Force Maintenance Procedures |
| 107-200-100 | 903A and 903B Data Test Sets, Description and Operation | 660-101-305 | Data Systems—DATA-PHONE® Service on Direct Distance Dialing Network—Plant Service Center Handling Customer Trouble Reports |
| 107-300-100 | 902A and 902B Data Test Sets, Identification and Operation | 660-405-300 | Data Systems—DATA-PHONE® Service and Data Access Arrangements Using the Switched Telecommunication Network, Toll Testroom Trouble Clearing Procedures |
| 314-205-300 | Data Systems—DATA-PHONE® Service on Direct Distance Dialing Network, Overall Transmission Maintenance Procedures | 851-300-100 | Transmission Design Consideration and Objectives, Switched Special Services and PBX Services. |
Page 11
11 Pages