The following statement was issued by the Labor Fraction of Workers World Party.

On April 6, President Donald Trump ordered a blatant act of war against the Syrian people. He did it without a shred of evidence that the Syrian government was involved in an alleged poison gas attack that killed civilians. What advantage would Syria gain strategically by targeting noncombatants, when the Bashar al-Assad government has been winning the war against U.S.-backed terrorist groups — the Islamic State group (IS), al-Qaida, the Free Syrian Army and others? In fact, there is evidence that the gas attack was actually carried out by so-called rebels.

The attack on Syria was followed by the bombing of Afghanistan, employing the most powerful non-nuclear bomb ever dropped. A military assault on the Democratic People's Republic of Korea could be imminent.

Working people, organized and unorganized, have no reason to support yet another war. In fact, the chatter in the workplace is that people are outraged that Trump has dragged us into what could be a major war — or wars. There will be mass suffering as a result of billions of dollars of cuts in human services carried out to shift money to the Pentagon war machine. The suffering of the Syrian people, including the nearly five million made refugees by this U.S.-sponsored war, will only worsen.

Organized labor should be outraged! Yet its leaders, AFL-CIO President Rich Trumka and Change to Win President James Hoffa Jr., have said nothing. Silence is the voice of complicity. Their collusion with Trump — and with both Democrats and Republicans — when faced with Pentagon aggression is just their latest betrayal of our class interests as workers since the November election. These are the same leaders who sold out the heroic water defenders at Standing Rock and the migrant communities who are being rounded up, detained and deported en masse.

Labor lieutenants of capitalism — a shameful tradition

On Feb. 25, 1967, the Rev. Dr. Martin Luther King Jr. said "the bombs in Vietnam explode at home." (tinyurl.com/aertr2e) While this towering figure of the Civil Rights movement was risking his life by positioning himself against the Vietnam War, rare were the voices from the labor movement striking a similar chord. In its silence, organized labor was complicit in a criminal war of aggression against the people of Vietnam, funded by the poor, the workers and the oppressed people of the U.S.

Fifty years have passed. Yet we see the leaders of the AFL-CIO and Change to Win continuing to act as cheerleaders for imperialist war. They know full well that the union movement has enough muscle to bring the war machine to a halt. Even the ten percent of the working class in unions have a lot of power: power to stop the transport of goods by truck, rail and air; power to shut down telecommunications and government; power to shut down the military-industrial complex at the point of production.

We believe in the old union maxim that "an injury to one is an injury to all." This principle has no borders. When our human family in other countries is subjected to imperialist violence, carried out in our name, labor in the U.S. has no choice but to join the chorus of voices saying, "Hands off Syria, Afghanistan, Korea and the world!" This message must be heard around the world on May Day, International Workers Day, and beyond.
Twelve teams have signed up for the first Open Series, and amongst them are fan-favourites such as … and …. The first Open Series will be a best-of-one single elimination contest, with the format expanding to best-of-three clashes for the Semi and Grand Finals. A seeding system has been touted to ensure that heavyweight teams avoid each other in the early rounds, but it has not yet been confirmed. The winners will receive $150 and lay claim to the first spot in the Invitational Series, where they will eventually go head-to-head with the winners of the seven other Open Series qualifiers for the $10,000 grand prize. The action gets underway this Saturday, June 2nd at 13:00 Eastern Standard Time (19:00 CET), with FireFlash set to shoutcast all of it live, along with … and, potentially, our very own … too. Sign-up information is available below.
Description:

CLAIM OF PRIORITY

This application claims the benefit of U.S. Provisional Patent Application 61/837,354, entitled A COGNITIVE ARCHITECTURE AND MARKETPLACE FOR DYNAMICALLY EVOLVING SYSTEMS, by Bastea-Forte, et al., filed Jun. 20, 2013; U.S. Provisional Patent Application 61/888,907, entitled INTERACTIVE COMPONENTS OF A COGNITIVE ARCHITECTURE FOR DYNAMICALLY EVOLVING SYSTEMS, by Bastea-Forte, et al., filed Oct. 9, 2013; and U.S. Provisional Patent Application 61/917,541, entitled QUALITY AND MARKETPLACE MECHANISMS FOR A COGNITIVE ARCHITECTURE FOR DYNAMICALLY EVOLVING SYSTEMS, by Bastea-Forte, et al., filed Dec. 18, 2013, the entire contents of which are all incorporated herein by reference.

BACKGROUND

Some consumers and enterprises may desire functionality that is the result of combinations of services available on the World Wide Web or "in the cloud." Some applications on mobile devices and/or web sites offer combinations of third-party services to end users so that an end user's needs may be met by a combination of many services, thereby providing a unified experience that offers ease of use and highly variable functionality. Most of these software services are built with a specific purpose in mind. For example, an enterprise's product manager studies a target audience, formulates a set of use cases, and then works with a software engineering group to code logic and implement a service for the specified use cases. The enterprise pushes the resulting code package to a server where it remains unchanged until the next software release, serving up the designed functionality to its end user population.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates a block diagram of an example plan created by a dynamically evolving cognitive architecture system based on third-party developers, under an embodiment; FIG. 2 illustrates a block diagram of an example dynamically evolving cognitive architecture system based on third-party developers, under an embodiment; FIG. 3 is a flowchart that illustrates a method for a dynamically evolving cognitive architecture system based on third-party developers, under an embodiment; FIG. 4 illustrates a block diagram of an example plan for a dynamically evolving cognitive architecture system, under an embodiment; FIG. 5 illustrates a block diagram of another example plan for a dynamically evolving cognitive architecture system, under an embodiment; FIG. 6 illustrates a block diagram of yet another example plan for a dynamically evolving cognitive architecture system, under an embodiment; FIG. 7 illustrates a block diagram of an example of abstract representations of a small concept action network for a dynamically evolving cognitive architecture system based on third-party developers, under an embodiment; FIG. 8 illustrates a block diagram of example object representations for a dynamically evolving cognitive architecture system based on third-party developers, under an embodiment; FIG. 9 illustrates a block diagram of example dialog templates for a dynamically evolving cognitive architecture system based on third-party developers, under an embodiment; FIG. 10 illustrates a block diagram of an example description of an equivalence policy for a dynamically evolving cognitive architecture system based on third-party developers, under an embodiment; FIG. 11 illustrates a block diagram of example concept action network nodes and edges for a dynamically evolving cognitive architecture system based on third-party developers, under an embodiment; FIG.
12 illustrates a block diagram of an example plan for a dynamically evolving cognitive architecture system based on third-party developers, under an embodiment; FIG. 13 illustrates a block diagram of another example plan for a dynamically evolving cognitive architecture system based on third-party developers, under an embodiment; FIG. 14 illustrates a block diagram of an example user interface for a dynamically evolving cognitive architecture system based on third-party developers, under an embodiment; and FIG. 15 is a block diagram illustrating an example hardware device in which the subject matter may be implemented.

DETAILED DESCRIPTION

Embodiments herein provide dynamically evolving cognitive architecture systems based on third-party developers. At a minimum, the system functions with two action objects and three concept objects. For example, the system forms an intent based on a user input and creates a plan based on that intent. The plan includes a first action object that transforms a first concept object associated with the intent into a second concept object. The plan further includes a second action object that transforms the second concept object into a third concept object associated with a goal of the intent. The first action object and the second action object are selected from multiple action objects. The system executes the plan, and outputs a value associated with the third concept object. FIG. 1 illustrates a block diagram of an example plan 100 created by a dynamically evolving cognitive architecture system based on third-party developers, in which action objects are represented by rectangles and concept objects are represented by ovals, under an embodiment. User input 102 indicates that a user inputs "I want to buy a good bottle of wine that goes well with chicken parmesan" to the system. The system forms the intent of the user as seeking a wine recommendation based on a concept object 104 for a menu item, chicken parmesan. Since no single service provider offers such a use case, the system creates a plan based on the user's intent by selecting multiple action objects that may be executed sequentially to provide such a specific recommendation service. Action object 106 transforms the concept object 104 for a specific menu item, such as chicken parmesan, into a concept object 108 for a list of ingredients, such as chicken, cheese, and tomato sauce. Action object 110 transforms the list of ingredients concept object 108 into a concept object 112 for a food category, such as chicken-based pasta dishes. Action object 114 transforms the food category concept object 112 into a concept object 116 for a wine recommendation, such as a specific red wine, which the system outputs as a recommendation for pairing with chicken parmesan. Even though the system has not been intentionally designed to create wine recommendations based on the name of a menu item, the system is able to intelligently synthesize a way of creating such a recommendation based on the system's concept objects and action objects. Although FIG. 1 illustrates an example of a system creating a single plan with a linear sequence that includes three action objects and four concept objects, the system creates multiple plans each of which may include any combination of linear sequences, splits, joins, and iterative sorting loops, and any number of action objects and concept objects. Descriptions below of FIGS.
4, 5, and 6 offer examples of multiple non-linear plans with splits, joins, and other numbers of action objects and concept objects. In a dynamically evolving cognitive architecture system based on third-party developers, the full functionality is not known in advance and is not designed by any one developer of the system. While some use cases are actively intended by developers of the system, many other use cases are fulfilled by the system itself in response to novel user requests. In essence, the system effectively writes a program to solve an end user request. The system is continually taught by the world via third-party developers, the system knows more than it is taught, and the system learns autonomously every day by evaluating system behavior and observing usage patterns. Unlike traditionally deployed systems, which are fixed in functionality, a dynamically evolving cognitive architecture system based on third-party developers is continually changed at runtime by a distributed set of third-party developers from self-interested enterprises around the globe. A third-party developer is a software developer entity that is independent of the dynamically evolving cognitive architecture system, independent of the end users of the dynamically evolving cognitive architecture system, and independent of other third-party developers. Third-party developers provide the system with many types of objects through a set of tools, editors, and other mechanisms. These objects include concept objects that are structural definitions representing entities in the world. These objects also include action objects, which are similar to Application Programming Interfaces (APIs) or web service interfaces that define a set of concept object input dependencies, perform some computation or transaction, and return a set of zero or more resulting concept object values. These objects also include functions, which define specific logic that implements an action object interface created by a self-interested party, and monitors, which are specific types of action objects and associated functions that allow external services to keep track of the world, looking for certain conditions. Once the conditions become true, associated action objects are injected into the system for execution. These objects additionally include tasks, for which a third-party developer specifies groupings of particular inference chains of action objects that make up an action object in a hierarchical way, and data, which provides instantiations of concept objects, such as product catalogs, business listings, contact records, and so forth. The objects further include linguistic data because there are many ways to interact with the system. Third-party developers may add new vocabulary, synonyms, and linguistic structures to the system that the system maps to concept objects and action objects to support the use case where natural language input is involved. The objects additionally include dialog and dialog templates provided by third-party developers, which contain all output strings and logic the system requires to communicate ideas back to the end user, either through visual interfaces or through eyes-free interfaces, and layout templates provided by third-party developers, which describe visually how the system presents information on a variety of devices.
The objects may also include delight nuggets, which are domain oriented logic that enables the system to respond to situations in a way that surprises and delights an end user, providing additional information or suggestions that please and help the end user. Third-party developers provide these new concepts, actions, data, monitors, and so forth to the system, in a self-interested way, with the intent of making available certain new capabilities with which an end user may interact. As each new capability is added to the system, an end user may access the new functionality and may do more than the end user was capable of doing before. The system knows more than it is taught, meaning that if a third-party developer adds ten new capabilities, the system will, through dynamic combinations of services, be able to do far more than ten new things. Given a request from an end user, the system, in a sense, writes automatic integration code that links individual capabilities into new dynamic plans that provide value for the end user. FIG. 2 illustrates a block diagram of a dynamically evolving cognitive architecture system based on third-party developers 200, under an embodiment. As shown in FIG. 2, the system 200 may illustrate a cloud computing environment in which data, applications, services, and other resources are stored and delivered through shared data-centers and appear as a single point of access for the end users. The system 200 may also represent any other type of distributed computer network environment in which servers control the storage and distribution of resources and services for different client users. In an embodiment, the system 200 represents a cloud computing system that includes a first client 202, a second client 204, and a first server 206 and a second server 208 that may be provided by a hosting company. The clients 202-204 and the servers 206-208 communicate via a network 210. The first server 206 includes components 212-254 in an embodiment. Although FIG. 2 depicts the system 200 with two clients 202-204, two servers 206-208, and one network 210, the system 200 may include any number of clients 202-204, any number of servers 206-208, and/or any number of networks 210. The clients 202-204 and the servers 206-208 may each be substantially similar to the system 1500 depicted in FIG. 15 and described below. FIG. 2 depicts the system components 212-254 residing completely on the first server 206, but the system components 212-254 may reside completely on the first server 206, completely on the second server 208, completely on the clients 202-204, completely on another server that is not depicted in FIG. 2, or in any combination of partially on the servers 206-208, partially on the clients 202-204, and partially on the other server. One of the server components may include a concept action network 212. A concept action network 212 is the schema for the present capabilities and knowledge of the system 200, and a structured collection of known types fortified with atomic actions on those types. The concept action network 212 organizes and facilitates the interoperating execution of Internet enabled services, and may be represented as a mathematical graph with constraints defining its structure. Third-party developers may interact with the concept action network 212 by extending the concept action network 212 with new concept objects, new action objects, and new implemented services. End users may interact with the concept action network 212 to accomplish end user tasks. 
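As a rough illustration of the concept action network just described, the following Python sketch models a miniature network as a typed graph, with concept types as nodes and each action object as an edge from its declared input types to its output type. The sketch is hypothetical: the class names, the registration interface, and the three contributed capabilities are invented for the example and are not the patent's actual schema.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ConceptType:
        # A node modeling a real-world entity, e.g. food.MenuItem.
        name: str

    @dataclass
    class ActionObject:
        # A node modeling an atomic unit of work: declared concept object
        # inputs and one predetermined concept object output type.
        name: str
        inputs: tuple
        output: ConceptType

    class ConceptActionNetwork:
        # A shared registry that third-party developers extend at runtime
        # with new concept objects and action objects.
        def __init__(self):
            self.concepts = {}
            self.actions = []

        def add_concept(self, name):
            return self.concepts.setdefault(name, ConceptType(name))

        def add_action(self, name, inputs, output):
            action = ActionObject(name,
                                  tuple(self.add_concept(i) for i in inputs),
                                  self.add_concept(output))
            self.actions.append(action)
            return action

    # Three developers, acting independently, contribute the FIG. 1 capabilities:
    can = ConceptActionNetwork()
    can.add_action("FindIngredients", ["food.MenuItem"], "food.IngredientList")
    can.add_action("ClassifyDish", ["food.IngredientList"], "food.Category")
    can.add_action("RecommendWine", ["food.Category"], "wine.Recommendation")

Because every action object declares typed inputs and a typed output, such a graph can be searched mechanically, which is what makes it possible to synthesize plans across capabilities that were contributed independently.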
An Internet enabled service is a collection of functional interfaces to data retrievals, such as a local business search or querying a shopping cart, nontrivial computations, such as computing a symbolic integral, and real world actions, such as booking a reservation at a hotel or turning on a light in a smart enabled home. These functional interfaces are exposed to the public Internet via well-defined interfaces using standard protocols. When depicted as a mathematical graph, the concept action network 212 consists of nodes and edges. The nodes in a concept action network 212 include concept objects and action objects. A concept object is a model of a real world entity, such as a restaurant, or a coupling thereof, such as a reservation with a restaurant and a time. An action object is a model of an atomic unit of work that declares its external dependencies as input concept objects and produces a predetermined type of output concept object. The concept action network 212 may catalog similar Internet enabled services under a common schema, providing interoperability. The concept action network 212 may be depicted as a well-defined, strongly-typed mathematical graph structure that defines precisely a space of known capabilities. The server 206 may also include a planner 214 component. When provided with an intent, which is a collection of input signals and a goal representing the semantics of an end user's desired task or step, the planner 214 produces a static plan of execution. A plan is a directed and acyclic coupling of concept action network nodes. Being directed and acyclic ensures that the plan is executable and that every step in the plan makes progress to the goal. Plans may include multiple instances of concept action network nodes, such as two distinct businesses in the case that one task includes, as a component, another task of finding the nearest coffee shop to the nearest movie theater. The planner 214 also revises plans when dynamic execution deems necessary. The server 206 may include several registry components. A function registry 216 maps function values to action objects. Function values bundle declarative metadata about some action implementation with an invokable endpoint. A strategy registry 218 is a registry of selection strategies and instantiation strategies, both of which are used to satisfy the cardinality constraints of action inputs without bothering the end user. Strategies are keyed off the execution context in which they apply. A dialog registry 220 is a registry of dialog templates, keyed off the execution context in which they apply and guarded by additional dynamic context triggers. A follow up registry 222 is a registry of follow up plan intents/goals, used to suggest follow up actions to an end user under specific situations. Entries in the follow up registry 222 are also keyed off the execution context in which they apply and guarded by additional dynamic context triggers. A layout registry 223 stores third-party developer layout descriptions which the system 200 uses for rendering outputs based on concept object values, such as the wine recommendation described in FIG. 1. An end user data store 224 is an end user specific storage of preferences and instrumented usage data, used to store both the raw data about decisions an end user makes and official/explicit preferences.
A global data store 226 is a cross-user storage of default preferences and aggregate usage data that is updated in batches offline from end user specific data. A service scheduler 228 determines the order in which services will be called for a particular action invocation. The service scheduler 228 balances the cost and quality of each service to maximize precision and recall. A session state 230 is the state for a specific session of execution. A short term end user memory 232 is made up of recently completed plans and currently interrupted plans that are pending additional input. An execution session 234 is a place for usually ephemeral data that an execution engine 252 uses. For example, as a plan executes the wine recommendation example in FIG. 1, the execution engine 252 stores the intermediate food classification concept object values in the execution session 234. An end user interface 236 is the user's view into the system 200 and associates an end user with an execution session. The end user interface 236 enables the end user's intent to be elicited at each step of interaction. A metrics store 238 is a data store housing all the raw, end user agnostic runtime data, such as service invocation attempts, successes, failures, latency, overhead, dialog selection counts and rendering overhead, end user request counts and overhead, and strategy selection counts and overhead, etc. The server 206 will also include developer tools 240-250 in an embodiment. Developer tools 240-250 are a set of editors, debuggers, etc. that enable creation and updating of the data supporting the runtime environment. A modeler 240 creates and updates concept objects, such as updating primitive and structured types, and action objects, such as updating input/output/metadata schema definitions. A function editor 242 creates and updates provider specific implementations of action objects, which may involve writing some code in a sandboxed scripting language that may be partially generated and validated against action objects. A dialog editor 244 creates and updates dialog scripts that specify output messaging and logic for various aspects of the system 200, which, in an embodiment, likely involves a simple templating language with conditional code, variables, etc. An analytics viewer 246 provides insight into the data stored in the metrics store and generates reports, which may include things like performance time of various components over time, domain distribution of end user requests, and speed and success performance analytics for service providers, etc. A follow up editor 248 associates follow up goals with a contextual trigger in which the follow up goals should become active and recommended to an end user. A follow up trigger may evaluate the execution context that led to the current goal, user preferences, or environmental conditions. A strategy editor 250 writes instantiation strategies and selection strategies in a sandboxed scripting language and registers those strategies with the appropriate context in which they should be triggered. In an embodiment, the server 206 will include the execution engine 252 that interacts with nearly all components of the dynamically evolving cognitive architecture system based on third-party developers 200. For example, the execution engine 252 weaves together the end user intent with the planner 214, strategy registry 218, dialog registry 220, end user data store 224, function registry 216, and session state 230 to set up and complete tasks.
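As a deliberately simplified sketch of how an execution engine might weave these pieces together, the following Python fragment executes a linear plan against a function registry, keeping intermediate concept values in an execution session. It is hypothetical: real plans are directed acyclic graphs with splits, joins, interruptions, and service scheduling, none of which are modeled here.

    # Hypothetical function registry (cf. the function registry 216): maps an
    # action object name to an invokable endpoint.
    function_registry = {
        "FindIngredients": lambda menu_item: ["chicken", "cheese", "tomato sauce"],
        "ClassifyDish":    lambda ingredients: "chicken-based pasta dishes",
        "RecommendWine":   lambda category: "a specific red wine",
    }

    def execute_plan(plan, session):
        # Walk a topologically ordered plan, invoking each action's registered
        # function and storing intermediate concept values in the execution
        # session (cf. the execution session 234). Each step is a triple:
        # (action name, input concept, output concept).
        result = None
        for action_name, input_concept, output_concept in plan:
            function = function_registry[action_name]
            result = function(session[input_concept])
            session[output_concept] = result  # kept for later steps
        return result

    session = {"food.MenuItem": "chicken parmesan"}
    plan = [("FindIngredients", "food.MenuItem", "food.IngredientList"),
            ("ClassifyDish", "food.IngredientList", "food.Category"),
            ("RecommendWine", "food.Category", "wine.Recommendation")]
    print(execute_plan(plan, session))  # -> a specific red wine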
The execution engine 252 also handles interrupted tasks and resumes interruptions when more data is elicited. The execution engine 252 is instrumented, which allows the execution engine 252 to collect dynamic data like end user preferences and the success rates of using particular services. When action object preconditions are not met, the execution engine 252 may dynamically adapt and/or interactively elicit feedback from an end user in order to continue with new information. Furthermore, the execution engine 252 intelligently schedules evaluation of services within the execution order semantics. When parallel or alternative paths exist in an executable plan, the execution engine 252 dynamically determines whether to proceed along one or more paths or whether to prompt for additional end user input before proceeding. These determinations are made from a variety of sources, including past result precision, recall, performance, and both global and local user feedback. A natural language intent interpreter 254 provides a flexible platform for inferring intent structures from natural language queries. The natural language intent interpreter 254 allows the consideration of multiple sources of data, including, but not limited to, modeled vocabulary via exact and approximate language-agnostic matching, implicitly gathered usage data, such as popularity measurement, explicitly annotated training data via machine learning, and contextual data, for example an end user's current location. Additionally, the natural language intent interpreter 254 is dynamically reactive to both the upstream producers, such as speech recognizers, and downstream consumers, such as planners and executors, of its data. Furthermore, the natural language intent interpreter 254 is a flexible framework for handling a deep vertical integration between the concept action network 212 and all producers and interpreters of natural language. Also, the natural language intent interpreter 254 acts as a conduit through which, for example, a normally "black box" speech recognizer may access concept action network level usage data or relationships to function more accurately. Similarly, the natural language intent interpreter 254 leverages concept action network level information through its clients, such as the planner 214, a downstream consumer of the natural language intent interpreter 254, to function more quickly and accurately. The planner 214, in turn, may access internal metadata from either the natural language intent interpreter 254 itself or its upstream producers, such as a speech recognizer. Speech recognition is facilitated by concept action network specific natural language models, which are in turn bolstered with data generated from concept action network specific planning algorithms, which are tuned and guided by dynamic execution data. FIG. 3 is a flowchart that illustrates a method for a dynamically evolving cognitive architecture system based on third-party developers, under an embodiment. Flowchart 300 illustrates method acts illustrated as flowchart blocks for certain steps involved in and/or between the clients 202-204 and/or the servers 206-208 of FIG. 2. An intent is formed based on a user input, block 302.
For example and without limitation, this may include the natural language intent interpreter 254 responding to a user saying "I want to buy a good bottle of wine that goes well with chicken parmesan," by forming an intent as a wine recommendation based on the concept object 104 for a menu item, chicken parmesan. The concept action network 212 provides the ability to represent an end user query, or task specification, in a format amenable to automated reasoning and automated satisfaction/servicing. The concept action network 212 enables queries and tasks from potentially many input sources to be represented in a single mathematical structure that does not contain natural language or other potentially ambiguous constructs. Below is an example of an unambiguous intent expressed in terms of a concept action network 212.

    intent {
      goal: phone.PhoneCall
      value: biz.BusinessCategory(Pharmacy)
      value: biz.BusinessName(CVS)
      value: geo.PostalCode(95112)
    }

The system 200 forms intents from concept action network elements, such as concept objects and action objects, based on their significance to the task at hand, and these objects may be instantiated with known data values that may aid in accomplishing the task. The system 200 annotates intents as source signals and a goal, the collection of which form an intent. Signals are a formalization of "what user data does the user provide," and a goal is likewise a formalization of "what does the user want to accomplish." An intent is an unambiguous, mathematical representation of these formalizations. Forming the intent may include outputting dialog that requests an additional user input. For example, the system 200 may provide dialog to ask the user if the requested wine recommendation is for a wine that the user wants to drink after the wine is ordered and subsequently delivered or if the requested wine recommendation is for a wine that the user wants to purchase from a local supplier within a short driving distance and then drink the same day. Although this example describes the natural language intent interpreter 254 forming an intent based on a user input provided via speaking, the user input may not be based on natural language and the user input may be provided via any of multiple modalities, such as typed entry of text via a real or virtual keyboard, or similar substitutions, touch and mouse gestures, speech, and combinations of the above. Given a concept action network 212 and an intent, the planner 214 may automatically reason about the existence of a sequence of concept action network prescribed steps that may service an intent. These sequences of steps produced by planning are denoted as plans, or programs for the concept action network 212 that, when executed with respect to the execution semantics, satisfy the goal within an end user's intent. A first plan is created based on an intent, wherein the first plan includes a first action object that transforms a first concept object associated with the intent into a second concept object and also includes a second action object that transforms the second concept object into a third concept object associated with a goal of the intent. The first action object and the second action object are selected from multiple action objects, block 304. By way of example and without limitation, this may include the planner 214 creating a plan based on the intent by selecting the action objects 106, 110, and 114 from multiple action objects in the concept action network 212.
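To make the planning step concrete, the following hypothetical Python sketch, which is not the patent's planner 214, mirrors an intent as source signals plus a goal and searches for a chain of single-input action objects from a signal type to the goal type. Breadth-first search guarantees that the first chain found has the fewest steps, anticipating the "simplest plan" objective discussed below.

    from collections import deque

    # A simplified intent mirroring the structure of the example above.
    intent = {
        "signals": {"food.MenuItem": "chicken parmesan"},
        "goal": "wine.Recommendation",
    }

    # Single-input action objects as (name, input type, output type) triples.
    actions = [
        ("FindIngredients", "food.MenuItem", "food.IngredientList"),
        ("ClassifyDish", "food.IngredientList", "food.Category"),
        ("RecommendWine", "food.Category", "wine.Recommendation"),
        ("FindNearbyStores", "geo.PostalCode", "biz.BusinessList"),
    ]

    def simplest_plan(signal_type, goal_type):
        # Breadth-first search over action objects: the first chain that
        # reaches the goal type is, by construction, a fewest-steps chain.
        queue = deque([(signal_type, [])])
        seen = {signal_type}
        while queue:
            concept, steps = queue.popleft()
            if concept == goal_type:
                return steps
            for name, input_type, output_type in actions:
                if input_type == concept and output_type not in seen:
                    seen.add(output_type)
                    queue.append((output_type, steps + [name]))
        return None  # no plan exists; elicit more input from the end user

    print(simplest_plan("food.MenuItem", intent["goal"]))
    # -> ['FindIngredients', 'ClassifyDish', 'RecommendWine']

A real planner must additionally handle multi-input action objects, splits, joins, and objective functions richer than step count, as described elsewhere in this application.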
The action object 106 transforms the concept object 104 for a specific menu item, such as chicken parmesan, into the concept object 108 for a list of ingredients, such as chicken, cheese, and tomato sauce. The action object 110 transforms the list of ingredients concept object 108 into the concept object 112 for a food category, such as chicken-based pasta dishes. The action object 114 transforms the food category concept object 112 into a concept object 116 for a wine recommendation, such as a specific red wine. The concept object 104 may include data which provides instantiations of a concept object for a specific menu item, such as chicken parmesan, the concept object 108 may include data which provides instantiations of a concept object for a list of ingredients, such as chicken, cheese, and tomato sauce, and the concept object 112 may include data which provides instantiations of a concept object for a food category, such as chicken-based pasta dishes. Forming the intent may associate user data in the user input with a concept object, such as associating the user saying "chicken parmesan" with the concept object 104 for a specific menu item, such as chicken parmesan. Different third-party developers may have provided each of the concept objects 104, 108, 112, and 116, and the action objects 106, 110, and 114 to the concept action network 212 because the system 200 provides interoperability between the objects 104-116. A second plan is optionally created based on an intent, wherein the second plan includes a third action object that transforms a first concept object associated with an intent into a fourth concept object and also includes a fourth action object that transforms the fourth concept object into the third concept object associated with a goal of the intent, wherein the third action object and the fourth action object are selected from multiple action objects, block 306. In embodiments, this may include the planner 214 creating another plan based on the same intent, wherein the other plan includes action objects selected from the multiple action objects in the concept action network 212 to sequentially transform the concept object 104 for a specific menu item, such as chicken parmesan, eventually into the concept object 116 for a wine recommendation, such as a specific red wine. Given the likely case of the existence of an exponentially large number of feasible plans, the planner 214 may automatically identify the most efficient or desirable plan. The planner 214 may optimize plans using independently configurable metrics, including, for example, plan size and plan execution cost, where cost may include notions of time, actual money required to invoke a service step, or fit with end user preference models. The system 200 may determine the simplest plan given an intent. The planner 214 efficiently enumerates the possible plans that satisfy an intent, defined as "includes steps that connect all signals to the given goal," and selects which plan best satisfies some criteria, defined as a mathematical objective function over plans. The definition of the objective function is independent of the planner 214. One instantiation of this objective function is "simplest plan", in which the planner 214 finds the plan with the fewest number of steps. A first plan is optionally selected for execution based on comparison of a first plan to a second plan based on an action object cost, an action object quality, and/or a number of planned action objects, block 308.
For example and without limitation, this may include the planner 214 selecting the plan that executes the action objects 106, 110, and 114 because that plan requires three planned action objects while the other plan requires five planned action objects. Given the likely case of the existence of an exponentially large number of these plans, the planner 214 identifies the most efficient or desirable plan. A first plan is executed, block 310. By way of example and without limitation, this may include the execution engine 252 executing the plan to execute the action objects 106, 110, and 114 for recommending a wine for pairing with chicken parmesan, using the additional user input to identify a local supplier of the specific red wine. The execution engine 252 may execute a plan for recommending a wine for pairing with chicken parmesan based on an input parameter of an action object mapped to a web service parameter and a web service result mapped to an output value of the corresponding action object. Executing a plan may include using a user decision, a user preference, and/or user application contextual information to transform a concept object into another concept object. For example, the system 200 may identify a supplier of the specific red wine that is located geographically second closest to the user's current location as a favorite supplier of wine for the user based on previous purchases. A value associated with a third concept object is output, block 312. In embodiments, this may include the system 200 outputting the name of a specific red wine as a recommendation for pairing with chicken parmesan through a visual interface or through an eyes-free interface. The system 200 may select another action object from the concept action network 212 and execute the other action object to transform the concept object associated with the goal of the intent into another concept object. For example, the system 200 may also recommend purchasing the specific red wine from a local supplier that is the third closest geographically to the user because the third closest supplier is selling the specific red wine at a lower sales price than the sales price of the specific red wine at the suppliers that are closer geographically to the user. Another third-party developer may provide another action object after the system 200 forms the intent based on the user input and before the system 200 outputs the value associated with the third concept object, as the system 200 and the concept action network 212 evolve dynamically, without the need to stop providing services at runtime while being updated with additional service capabilities during the dynamic evolution. Although FIG. 3 depicts the blocks 302-312 occurring in a specific order, the blocks 302-312 may occur in another order. In other implementations, each of the blocks 302-312 may also be executed in combination with other blocks and/or some blocks may be divided into a different set of blocks. FIG. 4 illustrates a block diagram of an example plan for a dynamically evolving cognitive architecture system based on third-party developers, under an embodiment. In this example, the system 200 responds to a user saying "What time is it in Japan?" by creating the plan 400. The plan 400 includes a left branch 402, a right branch 404, and a central branch 406.
The plan 400 represents an ambiguity based on the assumption that a third-party developer has taught the system 200 that "Japan" could be both the name of a country and the name of a city, which is called a locality in the general geographic model. Therefore, the planner 214 begins with two given source signals, both with concrete values of "Japan," but with two different types, city name and country name. The left branch 402 and the right branch 404 represent the resolution of the respective city and country source signals to a common resolved form, an AdministrativeDivision. The system 200 knows how to get a time zone from an AdministrativeDivision, from which the system 200 can query the current time. The static plan 400 represents an effort at unifying the source signals under a coherent plan that will achieve the goal. At runtime, the system 200 executes both the left branch 402 and the right branch 404, either serially or in parallel. When the values "join" at the AdministrativeDivision node 408 labeled "choice," the following three cases may occur. First, "Japan" is a city, and not a country, such that the system 200 selects the locality value without prompting the user and returns the time. Second, "Japan" is a country, and not a city, such that the system 200 selects the country value without prompting the user and returns the time. Third, "Japan" is either both a city and a country, or more than one of either, such that the system 200 prompts the user to clarify. This process is subject to dynamic learning, whereby the system 200 "learns every day." As the system 200 is used, users will respond to prompts like this to inform the system 200, and the third-party developers by proxy, that "Japan" is not a city, or is rarely a city, and the system 200 subsequently adjusts its behavior. Although FIG. 4 illustrates an example of the system 200 creating a single plan with a joining sequence that includes a limited number of action objects and concept objects, the system 200 creates multiple plans each of which may include any combination of linear sequences, splits, joins, and iterative sorting loops, and any number of action objects and concept objects. FIG. 5 illustrates a block diagram of another example plan for a dynamically evolving cognitive architecture system based on third-party developers, under an embodiment. In this example, the system 200 responds to a user saying "Find Southwest flight status," by creating the plan 500. The plan 500 includes an action object 502, a right branch 504, a central branch 506, and an object 508. A third-party developer models the "FindFlightStatus" action object 502 to accept both a "flightHandle," which consists of a required FlightNumber and an optional carrier, and a carrier. The third-party developer indicates that the action object 502 can handle queries like "status of flight 501" and "status of united 501" without interrupting the user. However, the "Find Southwest flight status" query does not contain enough information because there are too many flights to reasonably query or present the results to the user, such that the system 200 must query the user for clarification. The right branch 504 involves a resolution to a carrier given its name, such as "southwest." Assuming that the right branch 504 succeeds, the system 200 uses a "split" with the carrier identification to both initiate the construction of a flight handle, in the central branch 506, and pass directly to the FindFlightStatus action object 502.
The construction of the flightHandle follows what the third-party developer has prescribed, that it must contain a FlightNumber. When the system 200 cannot find a flight number, the system 200 inserts a placeholder in the "Required: air.FlightNumber" object 508, which will later induce the system 200 to prompt the user with, for example, "Which southwest airlines flight(s) would you like to check?" Although FIG. 5 illustrates an example of the system 200 creating a single plan with a join and a split, which includes a limited number of action objects and concept objects, the system 200 creates multiple plans each of which may include any combination of linear sequences, splits, joins, and iterative sorting loops, and any number of action objects and concept objects. FIG. 6 illustrates a block diagram of an example plan for a dynamically evolving cognitive architecture system based on third-party developers, under an embodiment. In this example, the system 200 responds to a user saying "Show the highly rated restaurants," by creating the plan 600. This example assumes a set of restaurants is available, perhaps from a prior result. The system 200 may cache user input data and system output data from a previous user request, and use the cached data as context for a subsequent user request. For example, the system 200 may cache user input data and system output data from a previous user request to find restaurants within a proximity of a shopping area that the user plans on visiting, and use the cached data as context for the subsequent user request for the highest rated of the identified restaurants. The system 200 transforms the user's intent of "highly rated" into a reference to the "rating.Rating" concept 602, with special platform-provided instructions to "sort by this." Although FIG. 6 illustrates an example of the system 200 creating a single plan with an iterative sorting loop that includes a limited number of action objects and concept objects, the system 200 creates multiple plans each of which may include any combination of linear sequences, splits, joins, and iterative sorting loops, and any number of action objects and concept objects. FIG. 7 illustrates a block diagram of an example of abstract representations of a small concept action network for a dynamically evolving cognitive architecture system based on third-party developers, under an embodiment. Although the abstract representation 700 of a small concept action network includes about 300 objects, a real-life concept action network could include thousands or millions of objects. The detailed slice 702 of abstract representations of a small concept action network includes labels on concepts and actions and their relationships. An extension is a strong relationship between concept objects corresponding to the classic "is a" relationship in computing and philosophy. Concept objects related by extension are expected to be substitutable. For example, if a restaurant extends a business, a restaurant is expected to have all of the components of a business and is expected to be able to be used anywhere a business is expected. Concept objects may extend more than one other concept object, as the concept action network 212 supports multiple inheritance. A property is a strong relationship between concept objects that corresponds to the "has a" or containment relation. For example, a business (Business) has a property for its phone number (PhoneNumber).
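The "is a" and "has a" relationships just described map naturally onto familiar programming constructs. The following hypothetical Python sketch, with invented types, shows extension as inheritance, including the substitutability the text requires, and a property as a typed component:

    from dataclasses import dataclass

    @dataclass
    class PhoneNumber:
        digits: str

    @dataclass
    class Business:
        name: str
        phone: PhoneNumber       # property: a business "has a" phone number

    @dataclass
    class Restaurant(Business):  # extension: a restaurant "is a" business
        cuisine: str = "unknown"

    def call(business: Business) -> str:
        # Selecting the PhoneNumber component of a Business is a property
        # projection, one of the transformations described below.
        return "dialing " + business.phone.digits

    # Substitutability: a Restaurant may be used anywhere a Business is expected.
    spot = Restaurant(name="Trattoria", phone=PhoneNumber("555-0100"),
                      cuisine="Italian")
    print(call(spot))  # -> dialing 555-0100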
Properties may represent a one-to-many relationship, such as a business having multiple phone numbers, and these properties may carry cardinality restrictions. Action-connection edges include inputs and outputs. Inputs connect concept objects, such as a "restaurant," to action object inputs, such as "BookReservation." Action object inputs are models of what an action object requires in order to execute properly. Action object outputs connect corresponding action objects to the concept objects corresponding to their output type, such as "reservation." Outputs represent what an action object produces when it executes as expected. The precise structure of the concept action network 212 acts as the central implementation point for many components of the system 200. In some situations, the system 200 enables concept objects to be directly transformed into other concept objects without action objects. For example, if a "call" action object needs a PhoneNumber, and the planner 214 selects a business concept object, the planner 214 separates or selects the phone number component of the business concept object and feeds the phone number component to the "call" action object. The resulting sequence for this part of the plan is: beginning concept object, concept object component, action object, and resulting concept object; concretely: Business concept object, PhoneNumber concept object, Call action object, and InProgressCall concept object. There are three main cases of concept object to concept object transformations without action objects: property projections, extensions, and contextualizations. Property projections include copying, or selecting, one piece of an existing concept object as another concept object, such as selecting a PhoneNumber concept object from a Business concept object. Extensions include treating a specific concept object as its more general form, such as treating a Restaurant concept object as a Business concept object. Contextualization includes treating a general concept object as a more specific form of concept object, such as assigning the role of ArrivalAirport to a generic instance of Airport. None of these transformations actually involve manipulation of data; they only prescribe viewing the concept object from a different perspective. The property, extension, and contextualization relationships are parts of the declarative declaration of a concept object, such that they are third-party contributions. FIG. 8 illustrates a block diagram of example object representations for a dynamically evolving cognitive architecture system based on third-party developers according to an embodiment. Each of the objects in the concept action network 212 may be represented in a format using domain specific languages. The format may be declarative, rather than imperative, such as is typical with many programming languages. Third-party developers specify objects and contribute the objects to the shared concept action network 212. Each object may extend or reference other objects defined by the third-party developer community. Some examples of these formats include a type system for concept objects that allows a variety of aspects of a concept object to be declared, including type extension, properties, enumerations, etc., and a format for action objects that allows declaration of inputs and outputs and other aspects of an action object.
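Purely as a hypothetical stand-in for the declarative formats whose concrete syntax FIG. 8 illustrates, the same information content might be sketched as plain data, with type extension and properties declared for concept objects, and typed inputs and an output declared for action objects; all of the names below are invented:

    # Hypothetical concept object declarations: extensions and properties.
    concept_declarations = {
        "biz.Business": {
            "extends": [],
            "properties": {"name": "core.Text", "phone": "biz.PhoneNumber"},
        },
        "biz.Restaurant": {
            "extends": ["biz.Business"],   # type extension: "is a" Business
            "properties": {"cuisine": "food.CuisineStyle"},
        },
    }

    # Hypothetical action object declaration: typed inputs, one output type.
    action_declarations = {
        "biz.BookReservation": {
            "inputs": {"restaurant": "biz.Restaurant", "time": "core.DateTime"},
            "output": "biz.Reservation",
        },
    }

Keeping contributions declarative in this way helps make them amenable to validation and to the automated reasoning that planning requires.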
Some other examples of these formats include a language for specifying formatting and rendering of data for display to an end user, a language for implementation of functions, and a language for describing executions that may occur based on input to achieve the output. A third-party developer may edit these objects using conventional developer tools, such as code editors, or dedicated tools specifically built for editing the concept action network 212. Third-party developers may contribute code to a versioned object storage system that optionally supports collaborative development and may allow third-party developers to track versions of their code, as well as fork or merge changes from one set of objects to another, much as with standard revision control systems. The object representations 800 show possible syntax for describing a few concept objects, which include primitive and structure types, with optional extensions and properties. The object representations 800 also show a sample action object, including inputs, input types, input constraints, and outputs. FIG. 9 illustrates a block diagram of example dialog templates for a dynamically evolving cognitive architecture system based on third-party developers according to an embodiment. Another example of the formats using domain specific languages is a templating language for specification of language dialog that will be shown to an end user. The example dialog templates 900 and 902 include patterns that indicate applicability of dialog expressions in different situations. FIG. 10 illustrates a block diagram of an example description of an equivalence policy 1000 for a dynamically evolving cognitive architecture system based on third-party developers according to an embodiment. Yet another example of the formats using domain specific languages includes an equivalence specification language that allows declaration of when different concept values are equivalent. For example, two restaurants may be considered equivalent if their phone numbers are the same, so the language allows description of the identifying fields that determine equality or inequality. The example description of an equivalence policy 1000 indicates when businesses, restaurants, or geographic points may be considered equal, based on structural, string, or numeric equality. FIG. 11 illustrates a block diagram of example concept action network nodes and edges 1100 for a dynamically evolving cognitive architecture system based on third-party developers according to an embodiment. The elements, such as nodes and edges, in the concept action network 212 map to well-defined semantics that allow an end user to use them. The process by which a node, such as an action object, is executed or evaluated corresponds to the invocation of a provider. For example, the execution semantics may prescribe: 1) the invocation of one or more Internet enabled services; 2) the manipulation of data returned by a service in a well-defined way; 3) the dynamic disambiguation of information, both implicitly from intermediate results and explicitly through end user input prompting; 4) the elicitation of additional information, such as credentials for a service; and 5) the interactive rendering of results produced at one or more nodes, along with starting and termination conditions and a well-defined execution order. An example of an element of these semantics is the evaluation of a property edge. Property edges exist between concept objects and are interpreted as selective forms of data copying.
The execution of a property edge involves selecting a component, or piece, of one concept object and copying it into another concept object. To execute a property edge between a concept object A and a concept object B, the execution engine 252 copies the component of the concept object A corresponding to the property associated with the edge from within the concept object A and instantiates the component in the slot reserved by the concept object B. The execution engine 252 may implement these semantics as server side software. The example concept action network nodes and edges 1100 are depicted during the process of execution by the execution engine 252. The execution engine 252 implements execution of action objects via functions, which are also contributed by third-party developers. Functions are represented in a programming language or declarative form that enables a third-party developer to fully specify how an action object is implemented in terms of data manipulations, external web service calls, and so on. In the case where functions are implemented in a traditional imperative or functional programming language, concept action network functions may correspond to methods or functions in the programming language. Concept objects may be mapped to values within the programming language. The programming environment may also offer additional features to facilitate use of web services, threading, error handling, and returning of output values as concept object values and indications of concept object types via metadata, where resource management may be facilitated by the execution engine 252. In other cases, function executable code may be synthesized by a declarative description of the function's operation, such as the mapping of input parameters to web service parameters, and the mapping of web service results to output values. Based on this declarative description, the function may be run via an interpreter or compiled into executable code. When data values are vended by multiple functions, declaratively modeled hierarchical equivalence policies may analyze values pairwise to determine whether the data values are equivalent, are not equivalent, or are of unknown equivalence. These equivalence policies may delegate to sub-policies or use a set of predefined predicates for primitive value comparisons. During the course of execution, the execution engine 252 may annotate data sources with metadata to indicate their source. For example, provenance may include an end user who entered the data, the name of a service, foreign keys on a remote system, and the copyright data associated with a piece of information. As data flows throughout nodes during execution, the execution engine 252 tracks the provenance of the data so that the ultimate result contains representations or links to the full, combined set of sources that contributed to a result. This information may be made available to an end user in some user interfaces. The system 200 may also use the provenance data stylistically when rendering, and to indicate follow up actions. In an embodiment, a preference library collects two types of preference data, end user explicit and end usage implicit. 
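A minimal sketch of such a preference library, with invented names and thresholds, might separate the two kinds of data like this:

    from collections import Counter

    class PreferenceLibrary:
        # Hypothetical store for the two kinds of preference data described
        # above: explicit end user statements, and implicit signals mined
        # from instrumented usage.
        def __init__(self):
            self.explicit = {}       # category -> set of stated favorites
            self.usage = Counter()   # (category, item) -> selection count

        def record_explicit(self, category, item):
            self.explicit.setdefault(category, set()).add(item)

        def record_usage(self, category, item):
            self.usage[(category, item)] += 1

        def implicit_favorites(self, category, threshold=3):
            # Any item selected often enough becomes an implicit favorite.
            return [item for (cat, item), count in self.usage.items()
                    if cat == category and count >= threshold]

    prefs = PreferenceLibrary()
    prefs.record_explicit("food", "lasagna")
    for _ in range(4):
        prefs.record_usage("flight.departure", "morning")
    print(prefs.implicit_favorites("flight.departure"))  # -> ['morning']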
An example of end user explicit data is quick completion of regular order preferences, such as when an end user starts to order a sandwich and immediately sees the autocomplete showing the exact type and condiments from previous orders, such that the end user has a quick option to complete a full order as a shortcut for repeating the last order. Another example of end user explicit data is the recommendation of restaurants based on known food type preferences, such as when an end user either tags foods that the end user likes in the interface in the same way a "like" button works for social networks, or explicitly tells the system 200 about specific favorite food dishes so that the system 200 may use this information to locate restaurants serving variants of this food that are known either by menu data or mentions from reviews. End user explicit data may also include "things to do recommendations," such as when an end user clicks on a quick menu of options for favorite social, cultural or category based things the end user likes to do, and the system 200 then uses this data to recommend a set of preference matched events, local attractions or other candidate geographically relevant activities with a single click of a button. A further example of end user explicit data is travel preferences, such as when the system 200 collects all travel preference data and applies the data to relevant planning and booking, such as frequent flyer information, seat preferences, and hotel amenities, such as extra pillows, ocean views or rooms with entertainment systems with kids' games, as well as general preferences, such as "hotels with a spa," hotels "on the beach," and so on. This may include the system 200 prompting the user to determine the type of trip being planned, such as individual travel, for which the system 200 uses personal preferences, or a family based trip, such as when the kids are going, when it is a romantic trip, or when it is an adventure trip. In an embodiment, end usage implicit data may include any items ever selected via a generic menu of options becoming an implicit favorite, any specifically requested item categorized and assigned as a favorite within that category, and any ordered item in understood categories considered a favorite, such as when an end user orders pizza, as this data implies that the end user "likes" pizza. Another example of usage implicit data may be that if an end user frequently reserves flights that leave in the morning hours during weekdays, the system 200 understands that the end user prefers morning flights during the week. Likewise, if an end user reserves the same restaurant over and over, the system 200 assumes that the end user "likes" this restaurant and subsequently recommends restaurants similar to this restaurant when the end user is in unfamiliar geographies. Similarly, if an end user is at a certain location for four nights in a row at 2:00 AM, the system 200 infers that the end user lives at that location, and if an end user travels from point A in the morning to point B and back along the same route in the evening many times, the system 200 infers that the end user works at point B. Global learning is the confirmation of hypotheses by contextual user trends. The system 200 prompts an end user for a direction when an end user input may have multiple meanings. The system 200 reviews those disambiguation samples, examines the context, and learns what most people choose in order to avoid asking next time for similar inputs. FIG.
FIG. 12 illustrates a block diagram of example plan 1200 for a dynamically evolving cognitive architecture system based on third-party developers, under an embodiment. The planner 214 may start with a null plan, a disconnected graph consisting solely of the signals and the goal, and grow the null plan into a full executable plan. The planner 214 incrementally connects nodes in the null plan, the intentional nodes, pairwise with paths. The planner 214 may define these paths in advance, such as inferred from usage data or pre-computed via a shortest/simplest heuristic, or the planner 214 may learn a path online through traversal of the graph structure of the concept action network 212. The planner 214 adds and removes paths as defined by a set of search parameters, including, for example, a limit on the total amount of computation performed. The addition of paths to a plan and the removal of paths from a plan effectively induce a search over a diverse sequence of plans, each of which the planner 214 evaluates for fitness via a configurable objective function. The planner 214 stores the current best plan. Should no one plan emerge as a clear optimum, the planner 214 stores a set of the current best plans and carries the set forward to the next step of the search. The example plan 1200 is the simplest plan that satisfies the previously formed intent. FIG. 13 illustrates a block diagram of example plan 1300 for a dynamically evolving cognitive architecture system based on third-party developers according to an embodiment. The system 200 may determine the family of the N simplest plans, a generalization of the above. The system 200 provides alternative execution paths as contingency plans, and finds and encodes alternate interpretations, or multiple hypotheses, of an otherwise unambiguous intent structure. The example plan 1300 is a version of the plan 1200 fortified with automatically generated contingencies and alternate interpretations. The system 200 may start with a known plan as an initial state and then, using, for example, a similar search procedure as before, connect the nodes in the plan with additional alternative paths until some totality condition is reached, such that all possible alternative routes have been added. FIG. 14 illustrates a block diagram of example Explorer user interface 1400 for a dynamically evolving cognitive architecture system based on third-party developers according to an embodiment. The Explorer uses the concept action network 212 and the end user interface 236 to interactively elicit intent from an end user based on an action object graph. Since the system 200 dynamically extends the concept action network 212 at runtime, what an end user may say and do changes over time. The Explorer and the end user interface 236 enable an end user to form any intent representable by the concept action network 212 at the current time, and they form the intent in a way that enables rapid construction of goals. The system 200 shows not only obvious follow-up possibilities, but longer-tail inputs that enable a rapid plan sketch to be entered, allowing the planner 214 to fill in all of the missing steps to the end goal. For example, when an end user selects “phone call” as the first step, the planner 214 suggests “phone number” as a closely associated input possibility via the end user interface 236, which enables the end user to discover suggestions such as “menu item.” These suggestions enable an end user to enter the plan sketch “lasagna—phone call” via the end user interface 236, and the planner 214 writes a sequence of steps that amount to “find someone who sells/has lasagna, and call that someone.”
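The plan search described above might be sketched as follows. This is a simplified illustration that assumes plan objects expose hypothetical with_path, without_path, paths, and is_fully_connected helpers; it is not the planner 214's actual implementation:

```python
# Simplified sketch of the planner's search: grow a null plan by adding and
# removing paths, scoring each candidate with an objective function, and
# keeping the best plan(s) found within a computation budget.
def search_plans(null_plan, candidate_paths, score, budget=1000, keep=3):
    frontier = [null_plan]          # current best plans carried forward
    best = null_plan
    for _ in range(budget):         # limit on total computation performed
        next_candidates = []
        for plan in frontier:
            for path in candidate_paths(plan):
                next_candidates.append(plan.with_path(path))
            for path in plan.paths:
                next_candidates.append(plan.without_path(path))
        if not next_candidates:
            break
        next_candidates.sort(key=score, reverse=True)
        frontier = next_candidates[:keep]   # set of current best plans
        if score(frontier[0]) > score(best):
            best = frontier[0]
        if best.is_fully_connected():       # executable plan reached
            break
    return best
```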
The Explorer UI elicits a goal from an end user, such as by sorting suggested goals by relevance and prioritizing the output of actions. The Explorer UI may elicit a sub-goal, a property of the originally requested goal, such as the name of a director for a movie, from a user, or continue with the original goal. The Explorer UI suggests signals by walking the concept action network graph from the goal via extensions and action objects and finding primitive inputs, without suggesting inputs that have already been selected and are not multi-cardinal. The Explorer UI repeats suggesting signals and finding primitive signals until an end user indicates a selection or until there are no more available signals. After an end user indicates a selection, the execution engine 252 executes the plan using the inputs and the goal. If there is an interruption, the Explorer UI prompts for the interruption if the interrupted concept object is a primitive; otherwise, the Explorer UI sets the goal to the interrupted concept object and begins suggesting signals and finding primitive signals.
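A sketch of that suggestion walk, assuming simple hypothetical neighbors and is_primitive helpers over the graph (not the Explorer's actual interfaces):

```python
# Illustrative sketch: walk the concept action network from the goal,
# collecting primitive inputs to suggest as signals. Skips inputs that
# were already selected and are not multi-cardinal.
def suggest_signals(goal, neighbors, is_primitive, selected, multi_cardinal):
    suggestions, seen, stack = [], {goal}, [goal]
    while stack:
        node = stack.pop()
        for nxt in neighbors(node):     # extensions and action objects
            if nxt in seen:
                continue
            seen.add(nxt)
            if is_primitive(nxt):
                if nxt not in selected or nxt in multi_cardinal:
                    suggestions.append(nxt)
            else:
                stack.append(nxt)
    return suggestions
```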
The example user interface 1400 elicits an intent structure centered around locating a movie. Intent is not only possible from explicit indications, but may be inferred via integration with other mobile, touch, or window/desktop applications. All user interaction may be via multiple modalities, such as typed entry of text via a real or virtual keyboard or similar substitutions, touch and mouse gestures, speech, and combinations of the above. Any entity within an end user application that is selected or represented may be a starting point for interactions that involve a set of concept objects and action objects in the concept action network 212. Selection of pieces of information via an indication such as typing in a text box, having keyboard focus on a window or an object on screen, a mouse or touch gesture on a displayed object, or a natural language reference to an object may be used to select concept object values. An end user application may also represent contextual information, such as a document that is currently being edited, a geospatial location, contact information such as a name, address, or phone number, or any other piece of information offered to, stored, or elicited from an end user by an end user application. Such pieces of information may be referred to as cues. Given a set of cues from an end user's use of an end user application, at any given point, the system 200 may link cues to corresponding concept action network objects or to intents in several ways. The system 200 may link cues or sets of cues to: 1) corresponding concept objects, action objects, renderings, or other information within the concept action network 212; 2) formal descriptions of intents; 3) natural language hints that may be used to describe intents; and 4) combinations of the above, such as a formally represented intent combined with additional hints or inputs in natural language, plus several additional concept objects corresponding to some of the cues. For example, within any end user application that shows business listings, such as a touch-based map application, a web-based or mobile phone restaurant review portal, or a search results page, an end user may select a business using the appropriate modality and then see business details. This selection allows integration with concept action network-based follow-ups. In another example, while using a mapping application, an end user may ask “what are the hours of that African restaurant in Adams Morgan”; the end user application, based on the context of the user looking at a map of that part of Washington, D.C., provides neighborhood restrictions on the lookup of restaurants, and the system 200 infers intent and provides execution. In addition, the mapping application may maintain references to concept object values for all objects on display and provide those directly as cues for concept action network-based follow-ups. In yet another example, on any representation of an object within an end user application, the end user application may offer contextual follow-ups, such as menus, based on actions that correspond to actions and follow-ups within the concept action network 212. Illustrating this example, an end user clicks on a calendar item and sees a list or menu of additional actions for that calendar item, such as “invite others,” “create social network invitation,” etc. The execution engine 252 may interact with an end user through dialog. Dialog is modeled declaratively and may consist of: a string template of dialog content, possibly including dependent references to other dialog declarations or runtime values; the general phase of execution in which the template applies, such as before an action evaluation, accompanying a selection prompt, or at a successful result view; the specific execution context in which the template applies, such as a restaurant, the PhoneNumber projected from an EventVenue, or the GeoRegion constraint to the FindBusiness action; and zero or more contextual conditions, such as input/output modality, time of day, location, user preferences, or previous usage history.
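As an illustration only, with field names assumed rather than taken from the system's actual schema, such a declarative dialog declaration might look like this:

```python
# Hypothetical declarative dialog declaration, mirroring the four parts
# described above: template, execution phase, execution context, conditions.
dialog_declaration = {
    "template": "Calling {business.name} at {phone_number}...",
    "phase": "before_action_evaluation",   # when the template applies
    "context": {                           # where the template applies
        "concept": "PhoneNumber",
        "projected_from": "EventVenue",
        "action": "FindBusiness",
    },
    "conditions": [                        # zero or more contextual conditions
        {"modality": "voice"},
        {"time_of_day": "evening"},
    ],
}

def applies(decl, phase, context, facts):
    """Check whether a dialog declaration applies in the current situation."""
    return (decl["phase"] == phase
            and all(context.get(k) == v for k, v in decl["context"].items())
            and all(all(facts.get(k) == v for k, v in cond.items())
                    for cond in decl["conditions"]))
```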
The system 200 abstracts the details of selection and presentation from end users and third-party developers, taking into account past renderings, the active output modality, user preferences, and information coverage/gain, among other things. The system 200 automatically renders concept object values, often taking the form of query results, with respect to declarative specifications. This automatic rendering is beneficial because it allows for different modalities, requires third-party developers to think about the data model in a multimodal-compatible manner, and requires third-party developers to be explicit about relationships between data. The system 200 may mix and match different pieces of concept objects from different sources, such as by injecting personalized capabilities into a layout and adapting presentation to mode, situation, and/or context. Automatically rendering concept object values with respect to declarative specifications enables the intelligent summarization of results, such as removing repeated data and presenting the most relevant fragments of data, and enables intelligent, graceful degradation in the presence of bad or incomplete data to highlight contextual relevance. The system 200 may intelligently highlight results based on what an end user requested, such as highlighting restaurants in a selected pizza category, and enables provenance-aware rendering, such as highlighting branded or merged data. Fully modeling the layout provides essential advantages: the system 200 structures data in a more linguistic manner, and different representations of the same content support multiple platforms and form factors. The system 200 renders data based on statically typed structural data, such as concept objects, from the concept action network 212, as well as contextual information, such as the rendering modality and environment, user preferences, and modeling details, including structural data about the concept objects, relative placement constraints, hints about the importance of displaying different pieces of content or properties within concept objects, and the set of available templates or forms and other rendering data. The goal includes a plan for what to render and how to render it for a given modality. During a planning phase, the system 200 performs optimization over possible renderings to best fit a desired set of goals, which may be implemented by optimizing an objective function, and renders the goals based on constraints, relative placement, and/or templates. Rendering layout may be performed server side and optimized for lower latency, or for higher quality of service in interactive use. The system 200 may minimize the amount of data sent to the clients 202-204 while still maintaining the original data structure on the first server 206 by pre-computing what data is shown to an end user in each frame. Interactive components may trigger a roundtrip to the first server 206, with the option of prefetching and pipelining the interactive responses. The system 200 implements learning-based prefetching based on an interactive user interface. By analyzing user interaction usage, the system 200 determines which interactive elements, or types of interactive elements, should be pre-fetched/pipelined to the clients 202-204 and in what order, which allows for an optimal balance between responsiveness and resource use. In an embodiment, the layout may be hierarchical, automatic, and template based. A set of templates may be designed to lay out images, text, and buttons on a screen. These templates may have various priorities and hints assigned to text/button/image regions. The system 200 automatically lays out concept objects without explicit layout information on the concept object itself by matching the appropriate concept priorities/hints to template priorities and hints.
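A toy version of that matching, with a scoring scheme invented for illustration, might score each template against a concept object's hints and pick the best fit:

```python
# Toy sketch of template-based layout: pick the template whose region
# hints best match the hints carried by the concept object's properties.
def pick_template(concept_hints, templates):
    """concept_hints: {property: {"kind": "text|image|button", "priority": int}}
    templates: list of {"name": str, "regions": {kind: capacity}} dicts."""
    def score(template):
        total = 0
        capacity = dict(template["regions"])   # e.g. {"text": 2, "image": 1}
        # Try to place high-priority properties first.
        for _name, hint in sorted(concept_hints.items(),
                                  key=lambda kv: -kv[1]["priority"]):
            if capacity.get(hint["kind"], 0) > 0:
                capacity[hint["kind"]] -= 1
                total += hint["priority"]      # reward placing important content
        return total
    return max(templates, key=score)

restaurant = {"name": {"kind": "text", "priority": 5},
              "photo": {"kind": "image", "priority": 4},
              "call": {"kind": "button", "priority": 3}}
templates = [{"name": "text_only", "regions": {"text": 3}},
             {"name": "card", "regions": {"text": 1, "image": 1, "button": 1}}]
assert pick_template(restaurant, templates)["name"] == "card"
```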
In addition to displaying results in dedicated applications, such as a dedicated interactive user interface, the system 200 may embed results, dialog, and interactions with concept action network execution within end user applications wherever doing so may be useful for an end user. An interaction that begins from within an end user application may also display its results there. For example, the system 200 may overlay results on, combine results with, or interleave results with objects displayed in an existing end user application. The system 200 may display dialog or textual interactions within the same interaction patterns of an end user application. Examples include forms, dialog boxes, touch, keyboard or mouse-oriented menus, graphical placements of objects in visual positions, such as maps or charts, and stylistic elements such as making a contact or address appear in a certain format. Since individual services are typically built by different third-party developers, a key challenge is to reconcile three goals: the easy integration of third-party services into the system 200 by third-party developers, a high level of interoperability between these services, and a high level of quality of services offered to end users. Historically, most approaches to such a challenge offer a platform where third-party developers contribute their services, and interoperability is possible via the platform. However, such platforms for integrating third-party services may only be successful when all stakeholders have incentives to use the platform cooperatively, so that each participant receives the desired benefits: end users have a rewarding experience, making use of the best service for each situation; third-party developers are compensated for the value they offer end users or other parties; other contributors, such as data providers and end users who edit or contribute content, are incentivized to help improve the user experience; and advertisers may reach appropriate audiences effectively. Mechanisms for building a marketplace of data and services are described in the context of a platform that supports the marketplace. For example, the platform may be the dynamically evolving cognitive architecture system based on third-party developers 200 described above, or any other software framework that allows contributions of services and interoperability between these contributions. The platform offers a collaboratively extensible environment for the description of data and interoperable services, built from objects and relations between objects, and uses services to handle requests. A platform may include: software services hosted by third parties, which are not part of the platform; objects, which include data types passed to and from services; operations that may be performed by the platform; user interface and dialog descriptions; cues for natural language processing; functions, which are executable or declarative software code that implement operations and that may access data or other services; and data, which may be any information stored by the platform and accessed by functions. A platform may also include developer tools, such as editors for objects and mechanisms for data ingestion or upload, that allow contributors to offer new functionality, and a shared, visible repository for the declarations of these objects. This may be a centralized or distributed storage system, such as a database. Contributors are people or organizations offering data, services, and/or objects for use in a platform. Advertisers are a type of contributor that may offer content for delivery to end users in exchange for compensation. Compensation to contributors may take many forms, including real or virtual currency and/or other benefits, such as public recognition and/or increased opportunities for use of a platform. Invocation may be a single use of a function on behalf of an end user.
For example, a platform runs executable software code on a specific input, possibly via remote services, such as looking up a city name from a postal code via a geocoding service. A request from an end user may be expressed as an intent to reach a desired outcome that may be achieved by a combination of invocations. An object makes a contribution to the handling of a request if it is a function and it is invoked, or if it is another object and its definition is used to service a request. A visit is a view of a web page by an end user, or another form of digitally mediated user attention, such as an end user impression of an advertisement or an interaction with a widget or game. Traffic is quantitatively measured visits or contributions to services. Measurements may be in aggregate numbers of visits, level of engagement by an end user, or other more complex numeric representations of total contributions and visits. The marketplace for services is a set of processes and technical mechanisms to encourage effective use of the platform. The processes and mechanisms are designed to achieve the goals of high quality of individual services, in terms of data quality and completeness, features, and any other aspects that affect the end user experience. Another marketplace goal is interoperability with other services, so that contributors may derive benefits from others' contributed objects and data, both via explicit dependencies and via automated means supported by a platform. Other marketplace goals include software code reuse and consistency, so that contributors may build with less software engineering effort; accurate indications of suitability, via metadata and dynamic measurements, so that a platform may accurately determine when services are suitable for a request; and performance, including low latency, low cost to serve requests, and other metrics. The parties within a marketplace are the end users, a platform operator, and contributors of several types. The contributors may play several roles in the marketplace. Content application program interface providers desire branding, to sell advertising, and/or to sell access to restricted content. Data providers and data curators want recognition, payment for all content, and/or payment for enhanced or premium content. Transaction providers desire branding and transactions via the selling of some good or service. Advertisers desire traffic from qualified end users. A single person or organization may play more than one of these roles. A platform may offer technical mechanisms for handling an end user request and invoking and combining services to respond to it. A challenge of a marketplace is to select and prioritize the services that are used, so that the goals of the different parties are met. Selection relies on accurate accounting of service usage and contributions. A platform may be instrumented to maintain current information, such as contributions per contributor, per object, and per group of objects, including invocation contexts, number of invocations, implicitly and explicitly expressed end user experience metrics, and performance metrics. Traffic management may include desired limits on whether a service or object may handle a request. For example, restrictions may be expressed by number of requests, by type of request, or by rate, such as a number of requests per minute. In addition, these quotas may be expressed individually per end user, or for sets of end users.
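A minimal sketch of enforcing such a per-user quota, assuming a fixed-window counter (one of several possible mechanisms, chosen here for illustration; the platform does not prescribe this one):

```python
# Illustrative fixed-window rate limiter: at most `limit` requests per user
# per `window` seconds. Real platforms may use token buckets, sliding
# windows, or distributed counters instead.
import time
from collections import defaultdict
from typing import Optional

class RequestQuota:
    def __init__(self, limit: int, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.counts = defaultdict(lambda: (0.0, 0))  # user -> (window_start, count)

    def allow(self, user: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        start, count = self.counts[user]
        if now - start >= self.window:        # new window: reset the counter
            start, count = now, 0
        if count >= self.limit:
            return False                      # quota exhausted for this window
        self.counts[user] = (start, count + 1)
        return True

quota = RequestQuota(limit=2, window=60.0)
assert quota.allow("alice", now=0.0)
assert quota.allow("alice", now=1.0)
assert not quota.allow("alice", now=2.0)     # third request in the window
assert quota.allow("alice", now=61.0)        # new window begins
```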
A traffic quota for an object is a representation of such desired traffic constraints for contributions from an object or service. A platform may provide mechanisms for the enforcement of traffic quotas. In many situations a platform may choose services to meet explicitly known constraints. These may include contractual goals on service use, in which specific contributors may have traffic- or data-driven constraints, such as a number of requests per hour, or requests containing a specific keyword or involving a certain geographic region. A platform may use standard mechanisms to ensure execution meets specific contractual needs, such as using certain services, white labeling, avoiding certain services, and packaging of dependent services. End user expressed approvals are approvals made by an end user, either in response to a request, via a previous selection of a service through existing phone/social network applications, or via explicit preference over services or categories of services. Contributed services may be reviewed by a single reviewing authority, such as the platform operator, to determine whether they meet desired goals for authority-based approvals. Services may have provisional approval for specific traffic levels or for specific periods of time, or be unconditionally approved for use at any level. A platform may directly use traffic management facilities to ensure these goals are met for explicit selection mechanisms. Assuming a service meets explicitly specified restrictions, a platform may control traffic via implicit means, through a continuous process that begins with the assignment of initial traffic quotas via a policy. The automatic traffic control mechanism may maintain a set of current quotas, which are enforced by the platform. Handling of requests may result in new analytics data, which the platform may use to update the set of current quotas. The initial quotas for services or objects may involve the speculative assignment of traffic based on initial indicators. A platform may dynamically rank objects and services according to the analytics provided by the platform, and dynamically adjust traffic quotas. Analytics signals that may contribute to traffic quota assignment include: performance, including latency; automatically measured response quality, such as via known sentinel queries or contributed test cases from contributors or users; precision/recall based on implicit user feedback, such as the frequency of follow-up queries; precision/recall based on explicit user feedback, such as a thumbs up or thumbs down selected by an end user in a user interface after receiving a response; evaluations from human evaluators, such as paid evaluators or other third-party services; and proxy rankings off other indicators, such as a contributor's web domain ranking, or the ranking of that contributor's applications in a publicly browsable app store. A traffic assignment policy, whereby quotas are determined from these signals, may be a fixed set of rules, or determined via more complex algorithms, including machine learning based approaches. A few other processes may supplement the processes described above, such as the automatic reporting of analytics and ranking data in a forum for third-party developers, end users, and the public to peruse, and to offer recognition for exceptional contributions. Another process may be the curation of services and objects based on reviews/approvals for categories or services, and peer review/curation.
Yet another process may include service tiers, in which a platform maintains metadata on all services and objects so that different levels of stability are simultaneously available, such as bleeding edge, beta, and stable. End users may opt into the tier of their choice. Further processes may include the promotion and discovery of services, such as end user facing features for the discovery of available services based on suitability, intent elicitation from end users based on available services, and prioritization based on the payment category of a service, such as free, paid, freemium, etc. A marketplace may support accounting and controls on all contributions from services and objects, enabling parties in the marketplace to enter into a variety of transactions: end users may pay to use services or objects, contributors may pay other contributors on which they depend, contributors may pay end users or other curators for help improving their services, contributors may pay the platform operator for the operation of their services, and advertisers may pay the platform operator to obtain traffic or visits. In each of these cases, payment may be any form of compensation, immediate or in the form of an agreement. Examples of end user transactions include free but limited in quantity or via promotion; purchase per request or by subscription; and freemium, for which limited features are free and premium features require a fee. The platform may charge contributors based on a variety of metrics, such as the number of objects contributed, the number of objects making contributions to end user requests, traffic levels, and the amount of data stored. A platform operator may adjust traffic quotas based on a variety of compensation from advertisers. A key approach may be via bid and auction mechanisms using real or virtual currency. A platform may select bids via an auction mechanism, which may include ranking based on a variety of factors, including bid price; contributor, object, or group scores; user preferences; the current situation, such as time of day, geographic location, or current or upcoming calendar events; and known preference history based on specific attributes and preferred services. Advertisers may bid for traffic that captures contextual moments falling outside of traditional keyword-matched sponsored links, such as hotels bidding to be the first-choice offer for airline weather delays near airports, bars bidding to offer drink specials to 21-35 year olds in the vicinity with a Klout score over 55, or restaurants bidding to offer drink/dinner specials to sports fans in the time and location vicinity of large games or events. In another example, the platform may use a trusted personality algorithm to promote timely sponsored service suggestions based not only on intent inference but also on known preference history based on specific attributes and preferred services, and on context information such as time of day and location. Offers may be filtered through probability-of-attractiveness filters and delivered via proactive suggestions from the assistant via a dialog alert.
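Purely as an illustration of the bid ranking described above, with weights invented for the example rather than specified by the platform, an auction might combine bid price with quality and contextual relevance:

```python
# Invented example of ranking advertiser bids by price, quality, and
# contextual relevance; weights are arbitrary for illustration.
def rank_bids(bids, context, w_price=1.0, w_quality=2.0, w_context=3.0):
    """bids: list of {"advertiser": str, "price": float, "quality": float,
    "targets": dict} where targets are contextual attributes to match."""
    def relevance(targets):
        matched = sum(1 for k, v in targets.items() if context.get(k) == v)
        return matched / len(targets) if targets else 0.0
    return sorted(
        bids,
        key=lambda b: (w_price * b["price"]
                       + w_quality * b["quality"]
                       + w_context * relevance(b["targets"])),
        reverse=True,
    )

context = {"situation": "flight_delay", "near": "airport"}
bids = [
    {"advertiser": "hotel", "price": 2.0, "quality": 0.8,
     "targets": {"situation": "flight_delay", "near": "airport"}},
    {"advertiser": "bar", "price": 3.0, "quality": 0.6,
     "targets": {"situation": "game_day"}},
]
# The hotel's contextual match outweighs the bar's higher bid here.
assert rank_bids(bids, context)[0]["advertiser"] == "hotel"
```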
An exemplary hardware device in which the subject matter may be implemented shall be described. Those of ordinary skill in the art will appreciate that the elements illustrated in FIG. 15 may vary depending on the system implementation. With reference to FIG. 15, an exemplary system for implementing the subject matter disclosed herein includes a hardware device 1500, including a processing unit 1502, a memory 1504, a storage 1506, a data entry module 1508, a display adapter 1510, a communication interface 1512, and a bus 1514 that couples elements 1504-1512 to the processing unit 1502. The bus 1514 may comprise any type of bus architecture. Examples include a memory bus, a peripheral bus, a local bus, etc. The processing unit 1502 is an instruction execution machine, apparatus, or device and may comprise a microprocessor, a digital signal processor, a graphics processing unit, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), etc. The processing unit 1502 may be configured to execute program instructions stored in the memory 1504 and/or the storage 1506 and/or received via the data entry module 1508. The memory 1504 may include a read only memory (ROM) 1516 and a random access memory (RAM) 1518. The memory 1504 may be configured to store program instructions and data during operation of the device 1500. In various embodiments, the memory 1504 may include any of a variety of memory technologies such as static random access memory (SRAM) or dynamic RAM (DRAM), including variants such as double data rate synchronous DRAM (DDR SDRAM), error correcting code synchronous DRAM (ECC SDRAM), or RAMBUS DRAM (RDRAM), for example. The memory 1504 may also include nonvolatile memory technologies such as nonvolatile flash RAM (NVRAM) or ROM. In some embodiments, it is contemplated that the memory 1504 may include a combination of technologies such as the foregoing, as well as other technologies not specifically mentioned. When the subject matter is implemented in a computer system, a basic input/output system (BIOS) 1520, containing the basic routines that help to transfer information between elements within the computer system, such as during start-up, is stored in the ROM 1516. The storage 1506 may include a flash memory data storage device for reading from and writing to flash memory, a hard disk drive for reading from and writing to a hard disk, a magnetic disk drive for reading from or writing to a removable magnetic disk, and/or an optical disk drive for reading from or writing to a removable optical disk such as a CD ROM, DVD or other optical media. The drives and their associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for the hardware device 1500. It is noted that the methods described herein may be embodied in executable instructions stored in a computer readable medium for use by or in connection with an instruction execution machine, apparatus, or device, such as a computer-based or processor-containing machine, apparatus, or device. It will be appreciated by those skilled in the art that, for some embodiments, other types of computer readable media that can store data accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, RAM, ROM, and the like, may also be used in the exemplary operating environment.
As used here, a “computer-readable medium” may include one or more of any suitable media for storing the executable instructions of a computer program in one or more of an electronic, magnetic, optical, and electromagnetic format, such that the instruction execution machine, system, apparatus, or device may read (or fetch) the instructions from the computer readable medium and execute the instructions for carrying out the described methods. A non-exhaustive list of conventional exemplary computer readable media includes: a portable computer diskette; a RAM; a ROM; an erasable programmable read only memory (EPROM or flash memory); optical storage devices, including a portable compact disc (CD), a portable digital video disc (DVD), a high definition DVD (HD-DVD™), and a BLU-RAY disc; and the like. A number of program modules may be stored on the storage 1506, the ROM 1516 or the RAM 1518, including an operating system 1522, one or more application programs 1524, program data 1526, and other program modules 1528. A user may enter commands and information into the hardware device 1500 through the data entry module 1508. The data entry module 1508 may include mechanisms such as a keyboard, a touch screen, a pointing device, etc. Other external input devices (not shown) are connected to the hardware device 1500 via an external data entry interface 1530. By way of example and not limitation, external input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like. In some embodiments, external input devices may include video or audio input devices such as a video camera, a still camera, etc. The data entry module 1508 may be configured to receive input from one or more users of the device 1500 and to deliver such input to the processing unit 1502 and/or the memory 1504 via the bus 1514. A display 1532 is also connected to the bus 1514 via the display adapter 1510. The display 1532 may be configured to display output of the device 1500 to one or more users. In some embodiments, a given device such as a touch screen, for example, may function as both the data entry module 1508 and the display 1532. External display devices may also be connected to the bus 1514 via the external display interface 1534. Other peripheral output devices, not shown, such as speakers and printers, may be connected to the hardware device 1500. The hardware device 1500 may operate in a networked environment using logical connections to one or more remote nodes (not shown) via the communication interface 1512. The remote node may be another computer, a server, a router, a peer device or other common network node, and typically includes many or all of the elements described above relative to the hardware device 1500. The communication interface 1512 may interface with a wireless network and/or a wired network. Examples of wireless networks include a BLUETOOTH network, a wireless personal area network, a wireless 802.11 local area network (LAN), and/or a wireless telephony network (e.g., a cellular, PCS, or GSM network). Examples of wired networks include a LAN, a fiber optic network, a wired personal area network, a telephony network, and/or a wide area network (WAN). Such networking environments are commonplace in intranets, the Internet, offices, enterprise-wide computer networks and the like. In some embodiments, the communication interface 1512 may include logic configured to support direct memory access (DMA) transfers between the memory 1504 and other devices.
In a networked environment, program modules depicted relative to the hardware device 1500, or portions thereof, may be stored in a remote storage device, such as, for example, on a server. It will be appreciated that other hardware and/or software to establish a communications link between the hardware device 1500 and other devices may be used. It should be understood that the arrangement of the hardware device 1500 illustrated in FIG. 15 is but one possible implementation and that other arrangements are possible. It should also be understood that the various system components (and means) defined by the claims, described below, and illustrated in the various block diagrams represent logical components that are configured to perform the functionality described herein. For example, one or more of these system components (and means) may be realized, in whole or in part, by at least some of the components illustrated in the arrangement of the hardware device 1500. In addition, while at least one of these components is implemented at least partially as an electronic hardware component, and therefore constitutes a machine, the other components may be implemented in software, hardware, or a combination of software and hardware. More particularly, at least one component defined by the claims is implemented at least partially as an electronic hardware component, such as an instruction execution machine (e.g., a processor-based or processor-containing machine) and/or as specialized circuits or circuitry (e.g., discrete logic gates interconnected to perform a specialized function), such as those illustrated in FIG. 15. Other components may be implemented in software, hardware, or a combination of software and hardware. Moreover, some or all of these other components may be combined, some may be omitted altogether, and additional components may be added while still achieving the functionality described herein. Thus, the subject matter described herein may be embodied in many different variations, and all such variations are contemplated to be within the scope of what is claimed. In the descriptions above, the subject matter is described with reference to acts and symbolic representations of operations that are performed by one or more devices, unless indicated otherwise. As such, it is understood that such acts and operations, which are at times referred to as being computer-executed, include the manipulation by the processing unit of data in a structured form. This manipulation transforms the data or maintains it at locations in the memory system of the computer, which reconfigures or otherwise alters the operation of the device in a manner well understood by those skilled in the art. The data structures where data is maintained are physical locations of the memory that have particular properties defined by the format of the data. However, while the subject matter is described in such a context, it is not meant to be limiting, as those of skill in the art will appreciate that various of the acts and operations described herein may also be implemented in hardware. To facilitate an understanding of the subject matter described above, many aspects are described in terms of sequences of actions. At least one of these aspects defined by the claims is performed by an electronic hardware component. For example, it will be recognized that the various actions may be performed by specialized circuits or circuitry, by program instructions being executed by one or more processors, or by a combination of both.
The description herein of any sequence of actions is not intended to imply that the specific order described for performing that sequence must be followed. All methods described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. While one or more implementations have been described by way of example and in terms of the specific embodiments, it is to be understood that one or more implementations are not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications and similar arrangements as would be apparent to those skilled in the art. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
Last weekend I released a version of Coverage.py for Python 3.x. Getting to that point took a while because 3.x was new to me, and, it seems, everyone is still figuring out how to support it. I experimented with using 2to3 to create my 3.x code from my 2.x code base, and that worked really well; see Coverage.py on Python 3.x for some details. For a while, I developed like this, with 3.x code translated by 2to3 so that I could run the tests under Python 3.1. But then I had to figure out how to package it. I didn’t want to have to create a separate package in PyPI for the 3.x support. I tried for a while to make one source package with two distinct trees of code in it, but I never got setup.py to be comfortable with that. Setup.py is run during kitting, and building, and installation, and the logic to get it to pick the right tree at all times became twisted and confusing. (As an aside, setuptools has forked to become Distribute, and they’ve just released their Python 3 support, which includes being able to run 2to3 as part of build and install. That may have been a way to go, but I didn’t know it at the time.) Something, I forget what, made me consider having one source tree that ran on both Python 2 and Python 3. When I looked at the changes 2to3 was making, it seemed doable. I adapted my code to a 2-and-3 idiomatic style, and now the source runs on both. Changes I had to make:

¶ I already had a file called backward.py that defined 2.5 stuff for 2.3; now I also used it to deal with import differences between 2 and 3. For example:

```
try:
    from cStringIO import StringIO
except ImportError:
    from io import StringIO
```

and then in another file:

```
from backward import StringIO
```

¶ exec changed from a statement to a function. Syntax changes like this are the hardest to deal with because code won’t even compile if the syntax is wrong. For the exec issue, I used this (perhaps too) clever conditional code:

```
# Exec is a statement in Py2, a function in Py3
if sys.hexversion > 0x03000000:
    def exec_function(source, filename, global_map):
        """A wrapper around exec()."""
        exec(compile(source, filename, "exec"), global_map)
else:
    # OK, this is pretty gross. In Py2, exec was a statement, but that will
    # be a syntax error if we try to put it in a Py3 file, even if it isn't
    # executed. So hide it inside an evaluated string literal instead.
    eval(compile("""\
def exec_function(source, filename, global_map):
    exec compile(source, filename, "exec") in global_map
""", "<exec_function>", "exec"))
```

¶ All print statements have to adopt the ambiguous print(s) syntax. The string to be printed has to be a single string, so some comma-separated lists turned into formatted strings.

¶ 2to3 is obsessive about converting any d.keys() use into list(d.keys()), since keys() returns a dictionary view object. If the dict isn’t being modified, you can just loop over it without the list(), but in a few places I really was returning a list, so I included the list() call.

¶ A few 2to3 changes are fine to run on both, so these:

```
d.has_key(k)
d.itervalues()
callable(o)
xrange(limit)
```

became:

```
k in d
d.values()
hasattr(o, '__call__')
range(limit)
```

¶ Exception handling has changed when you want to get a reference to the exception. This is one of those syntax differences, and it’s structural, so a tricky function definition isn’t going to bridge the gap. Where Python 2 had this:

```
try:
    # .. blah blah ..
except SomeErrorClass, err:
    # use err
```

Python 3 wants:

```
try:
    # .. blah blah ..
except SomeErrorClass as err:
    # use err
```

The only way to make both versions of Python happy is to use the more cumbersome:

```
try:
    # .. blah blah ..
except SomeErrorClass:
    _, err, _ = sys.exc_info()
    # use err
```

This is uglier, but there were only a few places I needed it, so it’s not too bad.

¶ Simple imports are relative or absolute in Python 2, but only absolute in Python 3. The new relative import syntax in Python 3 won’t compile in Python 2, so I can’t use it. I was only using relative imports in my test modules, so I used this hack to make them work:

```
sys.path.insert(0, os.path.split(__file__)[0])  # Force relative import
from myotherfile import MyClass
```

By explicitly adding the current directory to the path, Python 3’s absolute-only importer would find the file alongside this one in the current directory.

¶ One area that still tangles me up is str/unicode and bytes/str. Python 3 is making a good change here, but it feels like we’re still in transition. The docs aren’t always clear on what will be returned, and trying to get the same code to do the right thing under both versions still seems to require experiments with decode and encode.

After making all of these changes, I had a single code base that ran on both Python versions, without too much strangeness. It’s way better than having to maintain two packages at PyPI, or trying to trick setup.py into installing different code on different versions. Others have written about the same challenge: Stephan Deibel supports 2.0 through 3.1! Ryan Kelly has some useful tips on the string issues. Fabio Zadrozny has more specifics.
FIA World Rally Championship star Andreas Mikkelsen and reigning Red Bull Global Rallycross champion Scott Speed will be joining the regular competitors for the first round of the 2016 Audi Sport TT Cup at the Hockenheimring this weekend. Sixteen permanent entrants will battle it out with Mikkelsen and Speed over the weekend around the 4.574km circuit. “We have a very good mix of rookies and seasoned campaigners who competed in the Audi Sport TT Cup last year,” says Project Leader Philipp Mondelaers. New for 2016 is the Rookie class, which will see ten new faces joining the line-up and is sure to shake things up for the regular competitors. “Following the impressions from the tests, I’m expecting to see very close races and a lot of variety on the podiums,” says Chris Reinke, Head of Audi Sport customer racing. Competitors will get a practice session on Friday followed by a 30-minute qualifying session on Saturday morning. Race action will be at 12:10 Saturday and 16:20 Sunday local time. Both rounds will be broadcast by live streaming at www.audimedia.tv.

2016 Audi Sport TT Cup permanent entrants with car numbers and car colors

#2 Strohschänk, Kevin (D, *May 24, 1989), Rookie – Green
#3 Rdest, Gosia (PL, *January 14, 1993) – Blue
#4 Lappalainen, Joonas (FIN, *March 1, 1998) – Gray
#5 Nielsen, Nicklas (DK, *February 6, 1997), Rookie – Yellow
#6 Lefterov, Pavel (BG, *November 12, 1997), Rookie – Orange
#7 Hofbauer, Christoph (D, *July 15, 1991) – Green
#11 Hofer, Max (A, *May 23, 1999), Rookie – Gray
#12 Larsson, Simon (S, *May 13, 1997), Rookie – Gray
#14 Caygill, Josh (GB, *June 22, 1989) – Yellow
#23 Ellis, Philip (GB, *October 9, 1992), Rookie – Gray
#27 Marschall, Dennis (D, *August 15, 1996) – Yellow
#31 van der Linde, Sheldon (ZA, *May 13, 1999), Rookie – Blue
#33 Lindholm, Emil (FIN, *July 19, 1996) – Blue
#42 Egsgaard, Patrick (DK, *December 15, 1994), Rookie – Yellow
#76 Holton, Paul (USA, *October 11, 1996), Rookie – Orange
#91 Meyer, Yves (CH, *June 12, 1991), Rookie – Green
We’re one of Northern Colorado’s newest game and comic stores, located in Windsor, CO! Heroes and Horrors Games & Comics carries fine quality board and card games for all ages. We sell items for collectible card games, role playing games, and Warhammer. We are proud to have one of the largest collections of Magic the Gathering card singles for sale in Northern Colorado. In addition to all those games, Heroes and Horrors carries a huge variety of comics, manga, graphic novels, and collectible figures. We sell used sci-fi and fantasy books for your reading enjoyment. The store has a growing selection of used video games as well. Come Play with Us! Participate in one of our weekly game events or try out a new board game during open play time. Our game room has seating for 24 and a growing game library of over 160 card and board games. Currently the game room is available for open play:

Tuesday through Friday from 2:30 to 6:00 p.m.
Thursday from 6:00 to 9:30 p.m.
Saturday from noon to 5:00 p.m.
Sunday from 5:30 to 9:30 p.m.

These times may change as new groups form or others change days. Feel free to give us a call, (970) 833-5128, with any questions. See our Calendar for the listing of current organized play events. We regularly have Magic the Gathering tournaments, Warhammer 40K, Dungeons & Dragons, Pathfinder, and board game night.
This article was updated on May 19 to include World Cup Qualification matches for Guatemala, St. Kitts and Nevis, and Canada. Fletcher Whiteley contributed to this article. Today marks the first day FC Dallas is training without what will become a good number of players absent from Oscar Pareja’s squad on international duty, as Kellyn Acosta is in Australia in preparation for the US U-20s' run in the FIFA U-20 World Cup. A number of players are soon to follow. This is a quick breakdown of what we expect to see.

JeVaughn Watson – Jamaica – Copa America – probably leaves after the May 29 match against Sporting Kansas City. Jamaica also plays in the Gold Cup, but we think there is a chance FC Dallas asked Jamaica to only take Watson for one of those tournaments.

Blas Perez – Panama – Gold Cup – probably leaves after the June 19 match against the Colorado Rapids or the June 26 match against the Houston Dynamo.

Kyle Bekker – Canada – Gold Cup – probably leaves after the June 19 match against the Colorado Rapids or the June 26 match against the Houston Dynamo. Canada also has World Cup Qualifiers on June 11 and June 16.

Matt Hedges – USA – Gold Cup – might leave after the June 19 match against the Colorado Rapids or the June 26 match against the Houston Dynamo, if he makes the team.

Moises Hernandez – Guatemala – Gold Cup – might leave after the June 19 match against the Colorado Rapids or the June 26 match against the Houston Dynamo, if he makes the team. Guatemala also has World Cup Qualifiers on June 12 and June 15.

Tesho Akindele – Canada/USA – Gold Cup – might leave after the June 19 match against the Colorado Rapids or the June 26 match against the Houston Dynamo, if he makes the team. Canada also has World Cup Qualifiers on June 11 and June 16.

Atiba Harris – St. Kitts and Nevis – World Cup Qualification – probably leaves after the May 29 match, but possibly after the June 7 match at San Jose, for games on June 11 and June 16.

Walker Zimmerman – USA U23 – Toulon Tournament and Olympic Qualifying – The Toulon Tournament is May 27-June 7. If he makes the squad, he will miss some key games. The Olympic Qualifying Tournament is in the first two weeks of October, and the team has games in September leading into that tournament.

Jesse Gonzalez – Mexico U20 – Same schedule as Acosta; he has been with Mexico for a few weeks now.

Alejandro Zendejas – USA U17 – The U17 FIFA World Cup is in the second half of October. Expect him to leave at the start of that month.
WASHINGTON -- Scott Gillis sat in the House balcony on Thursday, watching a speech by Rep. Tammy Duckworth (D-Ill.). The last time Gillis had seen her, she was unconscious, bleeding profusely and in need of life-saving surgery. Gillis was one of the medics who helped save Duckworth's life in 2004. Back then, he was an Army sergeant stationed in Iraq, and Duckworth was rushed into Gillis' hospital tent after her helicopter was hit by a rocket-propelled grenade. Before passing out, Duckworth, an Army pilot, somehow managed to help land the helicopter, despite injuries that led to both of her legs being amputated and one of her arms being damaged. Gillis was there to receive her, helping to stabilize her before she was flown to Germany for emergency surgery. He never saw her after that. Despite having treated more than 2,000 soldiers during the war, Gillis said he always remembered Duckworth. There were a couple of things about her that stood out. "The grotesqueness of her injuries and that she was a woman," said Gillis, now 40 and living in Northern Virginia. "We also got briefed as to what happened to our patients when we got them." Eight years went by before he thought of her again. In August 2012, he watched his TV with disbelief as someone named Tammy Duckworth walked out on stage at the Democratic National Convention with prosthetic limbs, talking about her military experience and her run for Congress. That's when it began to register that not only had Duckworth survived, but she had gone on to make something of herself. It was surprisingly painful for Gillis. "When I finally made the connection as to who she was, I wasn't right for three days," he said. "I hadn't realized that I never found out what had become of any of my patients that lived. She was my first one. You would think it would have had a happier effect on me, but I think it opened a box I didn't know existed." He added, “People died. You kind of know how it went for them.” This Huffington Post reporter, who is friends with Gillis' wife Melissa, heard Gillis' story over dinner one night in the spring of 2013. After a series of messages between Gillis and Duckworth's congressional office, a meeting was finally set for Oct. 3 in a quiet room in the Capitol. The two sat together for more than half an hour and shared stories. When HuffPost wandered in, it was obvious both were stunned at what was happening. By the end, they were laughing like old friends. "I don't remember anything," Duckworth said of being brought into Gillis' tent, joking about intubation being more painful than her legs being amputated. "I might have an old complaint card if you want to fill that out," Gillis said. "Oh yeah, would you? That'd be great," she replied. "He said I was unconscious, but later on ..." Duckworth said, before Gillis jumped in: "She was ignoring me." Duckworth replied, "I was pretending to be dead, hoping he would go away." Duckworth eventually had to head to the House floor, but their time together didn't end there. Gillis sat with Duckworth's staff in the balcony and watched as she spoke below about the need to ensure veterans are taken care of during the government shutdown. Two days later, Gillis was Duckworth's guest at a dinner for wounded warriors. He said he couldn't get over how nice she was to him. "She was just so familiar. The first thing she did was give me a hug. I was so blown away," Gillis said afterward. "I must have told her 25 times, 'I just can't believe I'm sitting here with you.' 
She would be like, 'Alright, I need another big hug.' I'm like, 'Oh my god, you're a real person. This job didn't somehow fuck your head up.'" Gillis says he doesn't know when he may talk to Duckworth again, but the fact that he got to meet her at all has had a huge effect on him -- and his process of healing from the horrors he endured in Iraq.
The Liga leaders could lose a host of their young talents, with Julio Pleguezuelo set to join Arsenal, while Sergi Canos is attracting interest from Liverpool and Tottenham

By Duncan Castles

Chelsea have agreed a deal to sign 16-year-old Barcelona starlet Josimar Quintero at the end of the season, with the Liga leaders braced to lose several promising talents to the Premier League. The Ecuadorian attacking midfielder has told the Spanish club that he will not be taking up their offer of a senior contract, which will enable him to join Roman Abramovich's outfit this summer for a nominal fee. A fast and direct creator of chances from wide or central positions, Josimar joins defender Julio Pleguezuelo in the latest exodus of Barcelona's youngsters. Pleguezuelo, a player in the mould of Carles Puyol, has already told staff at Barca that he is leaving for Arsenal in the summer, while 16-year-old forward Sergi Canos is actively encouraging offers from English sides. Canos, one of three Barca players called up by Spain for an international tournament against France, Italy and the Czech Republic in April, has been pursued by Liverpool and Tottenham. Like Josimar and Pleguezuelo, the striker has made it clear to his English suitors that financial terms significantly superior to those on offer at Camp Nou will convince him to move abroad. The players' first professional contract offers a window for Premier League sides to take advantage of Barcelona's much-admired academy programme. Fifa rules prevent individuals under the age of 16 from moving overseas except under special circumstances, but also bar academies from signing their best graduates to long-term professional contracts until they have reached that age. Should a club from another European league wish to offer them an alternative deal at 16, they must only pay the developing side Fifa-mandated compensation in lieu of a transfer fee. The maximum sum due for a player who has spent four years in a “Category 1” club such as Barcelona is €360,000. Scouting then poaching youngsters from Barca's Under-16s has become a common strategy for the Premier League's more affluent clubs. Arsenal targeted Cesc Fabregas in 2003, rapidly turning him into a regular first-team player and eventually making him their captain before he forced a Camp Nou return for €41 million eight years later. Gerard Pique joined Manchester United in 2004, returning to Catalunya four seasons later for €6m.
Guerrilla Games has ditched experience point gain for Killzone Shadow Fall's multiplayer. The Dutch developer has instead opted for a challenge-based system for the PlayStation 4 exclusive first-person shooter. Completing a challenge adds a point to your rank, Guerrilla explained in a blog post, which in turn rewards weapon attachments and ability enhancements. Guerrilla said there will be over 1500 challenges in the game when it launches alongside the PlayStation 4 this November. An example of a simple challenge is "Destroy 1 Turret". An example of a more complex challenge is "Kill 1 enemy player with Laser Tripmine while they are carrying a Beacon". Elsewhere, you can trigger temporary bonuses, called Combat Honours, by earning enough points during a round. These last only as long as your play session, so when you quit a match you lose them. In-match, they're stackable, and you can unlock as many as you put points into. As for the weapons, Guerrilla said they're all skill based, and promised Shadow Fall won't include recoil-less or auto-aiming guns. You can augment the weapons, though, with up to two different attachments at once. And here's something a bit different: the Spotlight System. At the end of each round players see a brief scene featuring the three top players from the winning side and the top player from the losing side. One of the winning players gets to select a Spotlight move to celebrate his or her victory over the losing players. If you fail to execute the move in time, the losing side gets the chance to perform a countermove. Shadow Fall ships with a wide selection of Spotlight moves, and more will be added post-launch. As previously revealed, Shadow Fall lets you add online and offline bots to multiplayer. You can add bots to online matches, with human players replacing them during the game. You can set up a co-op Warzone of sorts by adding bots to only one faction. In offline mode you can create Warzones and play against bots, but in so doing you can't complete Challenges. And finally, Guerrilla confirmed Shadow Fall does not contain Exoskeletons or Jetpacks - at launch. "Their inclusion would introduce a wide number of new variables and exceptions to account for," the developer explained, "and we want to focus on offering fair, reliable and consistent Warzone customizability."
The Arizona Republic has found a large cohort of elderly and retired people who claim to have been abused by TSA staff at Phoenix's Sky Harbor airport. The passengers claim that they were required to remove their prostheses (particularly prosthetic breasts worn by cancer survivors), and that their objections were met with threats and hostility. One woman wrote that an agent ordered a pat down of her prosthetic breast and refused to conduct the search in private, before a flight in May 2012. "She made me pull it out in front of the world. When I got upset I was told to shut up. I have never been so humiliated in my life," the woman wrote. "The TSA has overstepped their bounds and ruined my vacation." Two weeks earlier, another passenger wrote that TSA agents had patted down her breast twice in as many weeks. "Since this has occurred at two different checkpoints on two different dates, TSA clearly must have a procedure in place (that) requires that women with breast prosthesis to be singled out and treated in this cruel and humiliating manner," the woman wrote. There is no record that the TSA responded to the second woman. When the agency does respond, it is usually with a form letter.
'Britain First is not welcome here' - that is the clear and frank message from the Mayor of Ramsgate Trevor Shonk after the group announced plans to hold a demonstration in the town. The far-right activists revealed they would be holding a rally in Ramsgate in relation to the conviction of four men who gang-raped a teenager. Britain First and other far-right groups have been actively campaigning around the trial since the men were arrested after the horrific crime which took place above 555 Pizza and Kebab, in Northwood Road, in September last year. But Cllr Shonk, UKIP, has slammed their latest plans to hold a rally in Ramsgate and said they would not be welcome. Cllr Shonk said: "This is the problem with this country, people forget that the law is the law and it is final. "Hopefully the law is upheld again and sentencing is significant in this situation. "But people shouldn't be taking these things into their own hands. "We have all read all the awful stories about that case, it is terrible but the law should be left to take its course without all this far right stuff." Britain First announced plans to hold a gathering outside Ramsgate railway station on October 14, with a number of key figures from the group tipped to appear, including leader Paul Golding and deputy leader Jayda Fransen. The leader and deputy leader were arrested earlier this year for "inciting religious hatred" during the trial, and are currently understood to still be on police bail as they await their trial date. Just weeks ago Ms Fransen posted a live video from a police station in Kent, claiming that Mr Golding had been "taken into custody" again after they were called back by Kent Police as part of their bail terms. A flyer announcing the Ramsgate protest reads: "Paul Golding, Jayda Fransen and Steve Lewis are being persecuted for exposing the Ramsgate migrant rapists." The exact details of the gathering are not yet known, but the group is known for holding public rallies and marches – though they are often shut down by police. The group has protested outside the takeaway before, banging on the windows and shouting at shop staff. But Cllr Shonk said their protests do more damage to the country than good. 'Britain First is not welcome' He added: "I'm not against protests, you can campaign in a nice way but Britain First don't do that. "We seem to be a very divided country at the moment and this does not help anyone. "I am not supportive of this protest in any way. The law of the country comes first. "My doors are open to help anyone from all over. We are diverse here in Ramsgate and I'm proud of that. "It is a wonderful place and the negativity of groups like Britain First is not welcome."
Chinese barber shop on East Pender after the riots. The wives of the tong elders had told her the history of white brutes in 1907 yanking the braided queues of the first elders and kicking them down Hastings Street, their white hands bashing Chinese heads and tearing down the shops and laundries of Chinatown. – from All That Matters, p. 33, by Wayson Choy. In 1907, an anti-immigration rally exploded into violence and vandalism in both Chinatown and Japantown in Vancouver. What began in Bellingham as a movement to drive Punjabi Sikhs out of the lumber industry eventually spread to white supremacist marches on Vancouver, with demands for a "White Canada." The riots were not only a landmark in the rise of racism in Canada, they signified the commencement of systematic federal intervention to prohibit Asian immigration to Canada through the imposition of quotas on Japanese emigration, continuous voyage regulations for those from India, and the enforcement of laws against the Chinese. The rally was advertised in news reports, and by the time the parade arrived at city hall, a huge crowd had gathered. Crowd estimates vary between four thousand and eight thousand people. As rioters attacked Chinatown, the angry mob eventually turned toward Japantown, or Nihonmachi, around the Powell Street grounds in what is now Oppenheimer Park. Although news of the riot reached different corners of the world, appearing on front pages in Ottawa, New York, and London, only three people were charged and only one person convicted of any offence. Newspapers openly mocked the efforts of the court and police, and few injuries were reported. All levels of government in Canada made vague apologies.
Sleek, elegant, X. Welcome, Kickstarter. Taking the same professional process we have used for the past 25 years in designing and manufacturing cinematic features, we created, tested, and modified multiple prototypes. These cases achieve previously unattainable ergonomics and functionality in an iPhone case. Our patent-pending X design is guaranteed to go on your phone one time and never come off. Existing mobile and iPhone accessory cases are bulky or have limited lens attachment capabilities. Most exhibit poor functionality and offer little protection. This case has been certified by design to withstand U.S. MIL STD 810G-516.6 shocks. What makes Xase best in market?

X frame design: The aluminum is minimalistic and secure, providing professional functional elegance.
Rubberized Grip: The grip has very simple clip technology so it is natural feeling; you will love it.
Breast Cancer Awareness Pink: Together we can raise awareness, and to show our support we designed a pink Xase.
Super Cinematic HD Lens: Premium glass combined with the camera on iPhone 6s/6s+ makes for a beautiful cinematic picture.
Kickstarter Exclusive! Leather Strap: Made of leather to look good and feel better on your wrist.
"Xower Up" Battery Pack Grip: A 2200mAh battery grip to extend the life of your iPhone by 2 full charges.

The Team: The design came to the team on a long pursuit to find an iPhone case that served their diverse needs and offered it in professional elegance. The team wanted to film on the go with confidence. Whether it be in the midst of an action scene on set, or receiving an award, the team designed one case for every setting. We couldn't be happier with the design team on Xase. Many long days and nights were spent designing, fabricating, testing, and modifying Xase, but only a few prototypes have lived up to our standards. So it was only natural to build such a brilliant Xase. We know you will like it and be coming back for more ;)
BAGHDAD — Naji Abdulamir used to run all over Baghdad but preferred the farmlands along the banks of the Tigris. Karim Aboud once ran for Saddam Hussein and has the gold watch and the newspaper clippings to prove it. Falih Naji ran for Iraq at a competition in India in 1982, and recalled, “There was no higher honor than representing your country.” The three men — now in their 60s, elder statesmen of Iraq’s tiny running community — were awash in memories last week ahead of the Baghdad International Marathon, the first in as long as anyone could remember. In fact, the race was not a marathon at all. Rather than the standard 26.2 miles, Baghdad’s version was a road race that allowed participants their choice of a lesser distance: two, four, eight or 10 kilometers. That mattered little. “I feel like a kid on Eid,” said Mr. Abdulamir, a running coach and former star of the Iraqi national team.
Federal law enforcement agencies in the U.S. and Europe have shut down more than 400 Web sites using .onion addresses and made arrests of those who run them, which calls into question whether the anonymizing The Onion Router (Tor) network itself is still secure. The Web sites - which authorities say sold a range of illegal wares including drugs, firearms with the serial numbers filed off, phony credit cards, fake IDs and counterfeit money – have been taken down by seizing the servers that host them. Seizing the servers and the arrests indicate that law enforcement agencies have found a way to trace the physical locations of devices connected to Tor and to track down the individuals responsible for them – two things Tor was designed to prevent. Even the name of the coordinated effort - Operation Onymous – indicates that the agencies involved undermined the anonymity component of Tor, which they refer to as the Darknet. "[T]his time we have … hit services on the Darknet using Tor where, for a long time, criminals have considered themselves beyond reach. We can now show that they are neither invisible nor untouchable," says Troels Oerting, head of the European Cybercrime Center, in a press release from the agency. Law enforcement officials didn't say how they had found the physical locations of devices and their owners, and Oerting says they're not going to. "This is something we want to keep for ourselves," he told Wired. "The way we do this, we can't share with the whole world, because we want to do it again and again and again." This makes it unclear whether these authorities have broken Tor to the point that it can no longer mask the location of its infrastructure or whether they found the sites using other intelligence. Tor relies on volunteers who host nodes of the network. Traffic bounces around within Tor in order to disguise where it comes from, but exit nodes and entrance nodes would yield the most useful information about actual IP addresses connecting to Tor. "Law enforcement could try to get in that first layer and see the sources and therefore try to reduce the anonymity as much as possible," says Ben Johnson, chief evangelist at Bit9+Carbon Black. "Combine this with some older versions of the Tor software having some vulnerabilities and this could be how some of these users and sites are tracked down. "It will be interesting to see how quickly Tor becomes a bunch of systems that are actually owned by intelligence services, much like double agents, or something along those lines." But because of its popularity and churn among those who set up nodes, he says he thinks the service will remain reliably secure. "I believe enough people use and support Tor that new nodes (both relays and bridges) will spawn and continue to make Tor a viable anonymity service," he says. The U.S. 
Department of Justice detailed some of the sites taken down as follows:

- "Pandora" (pandora3uym4z42b.onion), "Blue Sky" (blueskyplzv4fsti.onion), "Hydra" (hydrampvvnunildl.onion), and "Cloud Nine" (xvqrvtnn4pbcnxwt.onion), all of which were dark markets similar to Silk Road 2.0, offering an extensive range of illegal goods and services for sale, including drugs, stolen credit card data, counterfeit currency, and fake identity documents.
- "Executive Outcomes" (http://iczyaan7hzkyjown.onion), which specialized in firearms trafficking, with offerings including assault rifles, automatic weapons, and sound suppressors. The site stated that it used "secure drop ship locations" throughout the world so that "anonymity [was] ensured" throughout the shipping process, and that all serial numbers from the weapons it sold were "remove[d] . . . and refill[ed] with metal."
- "Fake Real Plastic" (http://igvmwp3544wpnd6u.onion), which offered to sell counterfeit credit cards, encoded with "stolen credit card data" and "printed to look just like real VISA and Mastercards." The cards were "[g]uaranteed to have at least $2500 left on [the] credit card limit" and could be embossed with "any name you want on the card."
- "Fake ID" (http://23swqgocas65z7xz.onion), which offered fake passports from a number of countries, advertised as "high quality" and having "all security features" of original documents. The site further advertised the ability to "affix almost all kind of stamps into the passports."
- "Fast Cash!" (http://5oulvdsnka55buw6.onion) and "Super Notes Counter" (http://67yjqewxrd2ewbtp.onion), which offered to sell counterfeit Euros and U.S. dollars in exchange for Bitcoin.

"This action constitutes the largest law enforcement action to date against criminal websites operating on the 'Tor' network," according to a press release from the DoJ.
Karl Rove, President Bush's ex-aide, has refused to attend a congressional hearing on allegations that he helped politicise the US Justice Department. Mr Rove has been accused of attempting to influence the prosecution of a former Democratic governor of Alabama. He is also said to have been involved in the firing of several US attorneys, allegedly for political reasons. Lawmakers subpoenaed Mr Rove to attend the hearing, but he has refused, citing executive privilege. The president and those who work for him are allowed by law to resist certain attempts by the judicial and legislative branches of government to force them to co-operate with inquiries - this right is known as "executive privilege". But the House Judiciary sub-committee, which is conducting the investigation into the Justice Department and issued Mr Rove with a subpoena, has rejected Mr Rove's claim of immunity. Despite Mr Rove's absence, the committee set up a chair for him, and marked his place with a name-tag. The committee could decide to seek Mr Rove's prosecution for contempt. Two other officials - White House Chief of Staff Josh Bolten and former White House counsel Harriet Miers - have already been held in contempt by the committee for failure to testify. Republicans in the US House of Representatives have dismissed the committee's actions as a stunt, and have called on its members to accept an offer made by Mr Rove to speak to them informally in private. The committee rejected Mr Rove's offer, because they wanted his remarks to be made under oath. The inquiry stems from the firing in 2006-07 of a number of US attorneys. Critics say the attorneys were sacked for political reasons, and that the Department of Justice subsequently attempted to mislead the public about the reasons for the dismissals. The committee's investigation has also been extended to cover the prosecution of former Democratic governor of Alabama Don Siegelman. Siegelman was convicted last year of accepting and concealing a contribution to his campaign to start a state education lottery, in exchange for appointing a hospital executive to a regulatory board. He was sentenced last year to more than seven years in prison but was released in March when an appeals court ruled that he had raised "substantial questions of fact and law" in his appeal. Siegelman has alleged that his prosecution was pushed by Republican officials, including Mr Rove. A former Republican campaign volunteer told congressional attorneys last year that she had overheard discussions suggesting that Mr Rove had put pressure on officials from the Justice Department to prosecute Siegelman. Departmental officials have disputed the claims, arguing that Siegelman was convicted by a jury, and that the case had been handled by career professionals, not political appointees.
Vols Film Study: Breaking Down the End of the Game What a play. Nearly everyone had written off the Vols after Jacob Eason’s touchdown pass put the Georgia Bulldogs up 31-28 with ten seconds left, but Josh Dobbs and Jauan Jennings wouldn’t be denied. Coach Butch Jones called the Hail Mary with four seconds remaining, and the Vols executed the play perfectly. Dobbs, the senior quarterback from Alpharetta, Georgia, will finish his career 2-0 as the starter against his hometown Bulldogs. Jennings’ spectacular catch keeps the Vols’ undefeated season alive, and he will go down in Volunteer history forever. But before we look at the play that changed the Vols’ season, let’s look at how Tennessee set up the Hail Mary. After Georgia’s touchdown, defensive back Rico McGraw ran on the field without a helmet in celebration. By rule, that is unsportsmanlike conduct, and Georgia was penalized 15 yards on the ensuing kickoff. Now, instead of kicking off from the 35, Georgia would kick off from their own 20. Tennessee lined up in a rather unconventional kick return formation. Rather than line up in a traditional formation, the Vols lined up with 10 players on Georgia’s side of midfield, with only Evan Berry deep. (Not pictured below are Jauan Jennings and Josh Malone, just off the screen to the near side, and Berry, back deep.) The formation resembled what you would expect if you were facing an onside kick, and the personnel matched what you would expect on the hands team. Berry was joined by receivers Malone and Jennings, tight ends Jason Croom and Ethan Wolf, running backs Jalen Hurd and Alvin Kamara, safety Todd Kelly Jr., and linebackers Colton Jumper and Cortez McDowell. (I’ve yet to see a shot with an angle to I.D. the eleventh player on the field). However, Butch Jones clarified that the Vols were not expecting an onside kick. “It was not our hands team. It was comprised of a lot of skill players on the field, but it was not a hands team… This was a different type of return anticipating different types of kicks so you can adapt and adjust to it at the end of the game.” So what is the benefit of lining up in this unorthodox formation with skill players on the field at the end of the game? First, the Volunteers’ formation effectively kept Georgia from squib kicking. Kelly Jr. and Jumper lined up at the 35 yard line in the middle of the field. Had Georgia attempted a low squib kick, either of them could’ve grabbed the ball and taken off running up the middle with outstanding field position. By putting so many skill players on the field, the Vols were well equipped to handle a squib. After a squib, the next best option in an end-of-game situation is a sky kick. Sure enough, Georgia head coach Kirby Smart and special teams coordinator Shane Beamer called for a sky kick in an effort to, as Smart said after the game, “get it away from the best returner in the country.” Unfortunately for the Bulldogs, the Vols’ formation meant that the ball would be headed right towards the All-American returner. Berry lined up at the 15 yard line, ready to sprint up and field the ball, no matter which direction it was kicked. With Berry as the only Vol past midfield, there would be no upbacks who might try to field the ball. By sending Berry back by himself, the Vols could ensure that the ball would end up in the hands of the best kick returner in America. The only way Georgia could’ve avoided kicking to Berry would be to boot it out of bounds or kick it very short. 
The Vols had their top two receivers, Malone and Jennings, lined up between the 45-50 yard lines to the left side of the field, in position to field a short kick and give Tennessee outstanding field position. Georgia kicked the ball towards the left sideline, and Berry came sprinting up to field the ball on the run at the 32. The Vols appeared to be setting up a wall return back to the right. The blocking wasn’t great, but Berry ran through a few tackles on his way to the Georgia 48 yard line. Going with this return was a smart play for the Vols. In a regular formation, Berry likely would not have been in position to field the ball. Georgia would’ve simply kicked to an upback, likely a tight end or linebacker, who almost certainly would not have been able to return the ball to midfield. By leaving Berry as the only man deep and taking away the squib kick by alignment, the Vols forced Georgia to either kick the ball out of bounds or kick to the All-American. Doing so enabled a big return that put the Vols in range for Dobbs to throw to the end zone. After the game, Butch Jones said, “That was a kickoff return that we put in (for) the end of (a) game, and we’ve had it in for three years and never used it. To our kids’ credit, we rep it every Thursday, and we were able to get it (today), and it put us at midfield range where you could throw a Hail Mary.” Coach Jones also gave assistant coach Robert Gillespie a lot of credit, saying he was instrumental in making sure the team was prepared for the return. That is a great example of situational football by Jones and the Vols. For years, they had been preparing for an end-of-game kick return situation and had a play already planned out. When the time came, Jones knew exactly what play to go to, and the team was ready to execute. To add insult to injury for the Bulldogs, Georgia was offsides on the kick. Tennessee was able to add the penalty to the end of the return, moving the ball from the Georgia 48 to the 43. Now, with only four seconds left, Tennessee was down to one final snap. There would be no time for a quick pass to get the ball in field goal range. The only viable option would be to throw to the end zone. Originally, the Vols lined up in a trips left formation. Josh Malone, Ethan Wolf, and Josh Smith were split wide to the left, and Jennings was the single receiver to the right. Alvin Kamara was beside Dobbs in the backfield. Tennessee saw the look Georgia was in and used their first timeout. In the huddle, Coach Jones kept the play call the same. The Vols would still throw a Hail Mary from their trips formation. However, wide receivers coach/passing game coordinator Zach Azzanni decided to make some personnel changes. Azzanni flipped Jennings and Malone, putting Jennings to the strongside of the formation. Azzanni also removed Wolf from the game, putting the 6’5” Jason Croom in his spot. Once the Vols broke the huddle and returned to the field, they saw how the Bulldogs were planning to defend the play. Georgia lined up with three defensive linemen in the game set to rush Dobbs. Four defensive backs lined up close to the line of scrimmage, clearly playing man coverage on the four Tennessee receivers. Three defenders were in the end zone, evenly spaced across the field. Finally, Lorenzo Carter (#7), Georgia’s star 6’6” linebacker, was playing the “jumper” role. Carter (yellow circle below) is the key player for the defense. 
He is an “extra” defender whose only assignment is to track the ball and jump up to bat it down at its highest point. Carter aligned at roughly a 7 yard depth in the end zone. For Georgia, the outside corner to the strong side is playing with outside leverage, while the slot corners play with inside leverage. The hope is to bunch all of the Vols’ receivers together. Georgia wants the end zone to become a cluttered mess with all the receivers in the same spot. That plays to their advantage because they will have more defenders (eight) than the Vols will have receivers (four). The backside safety stayed over the top of Malone, ready to help out should the Vols decide to throw to the single receiver side. The other two safeties, Quincy Mauger (#20) and Dominick Sanders (#24), are funneling the ball on the strong side, each one attempting to keep the ball inside of them. Carter’s job is to simply go to the ball. As the tallest defender on the field, he has to be able to go up over everyone and get his hands on the ball. When you consider the two safeties plus Carter, Georgia has three defenders in the end zone funneling the ball. Add three players in man coverage, and the Bulldogs outnumber the Vols six-to-three on the strongside. The Vols had actually designed the play to go to Croom. He was their “jumper” or “middle man.” Dobbs is aiming to throw the ball to Croom, whose job is to go up and catch the ball at its highest point. Jennings is the “back man,” designated to get ahead of Croom and be ready to make a play on a tip. Josh Smith’s assignment was to trail behind Jennings and Croom, also looking for the tip. Josh Malone was to cross the field from the weakside and be in position to catch the ball off a deflection towards the middle of the field. For the Vols, the first aspect of the play is the pass protection. The offensive line must give Dobbs a clean pocket and time to throw. The line, which has been maligned at times this year, came up big when it mattered most. With only three pass rushers, the Vols’ five linemen, plus Kamara, were able to double team each defender. None of the rushers even came close to Dobbs, giving him time to step into his throw. This is the most underrated aspect of the play. Dobbs, with a clean pocket, was able to throw a beautiful ball. Far too often, games end with receivers never getting a chance to make a play on the ball because the pass is inaccurate. Here, Dobbs threw a nice, high spiral and gave his receivers a chance to make a play. He also timed his throw perfectly. The ball arrived in the end zone just after the receivers did, giving the Vols the best chance to make a play. You can see here how the Vols had their receivers aligned in the end zone. Jennings was the deepest receiver, Croom was the “middle man,” and Smith was trailing behind, but all three receivers had their eyes on the ball. Carter is, once again, circled in yellow. Remember how he lined up seven yards deep in the end zone? So far back, he never had a chance to knock the ball down. Had he lined up five yards closer to the line of scrimmage, he very well might’ve knocked the ball down. Instead, he got caught behind the play and never had a chance. This ends up being the key to the play. The man Georgia assigned to track the ball and knock it down never made it to the ball because he lined up too deep in the end zone and didn’t move up quickly enough. Cornerback Deandre Baker (#18), the defender assigned to Jennings, ended up well behind the play after getting caught up in traffic. 
The only two defenders that had a chance at the ball were the two safeties, Mauger and Sanders. Both defenders are listed at 6’0″, and both were overmatched versus the bigger, stronger Jennings. Even though he wasn’t the designated receiver, Jennings saw the ball coming his way and went up for the catch. Mauger and Sanders leaped from either side, but Carter and Baker never came close to the ball. Jennings had the best position out of any player on either team. You can also see that Smith is in a pretty good position as the trailer to respond to the ball should it be deflected towards him. If it had been tipped to Smith’s right, he would’ve at least had a chance. Jennings showed off his athleticism by going up and getting the ball at its highest point, and he showed off his strength by securing it in midair. What a catch. At the end of the day, football comes down to execution. Players win games. And in this case, Jennings went out on the field and won the game. Butch Jones and Kirby Smart can draw up this play on the sideline all day long, but one of the players, either a receiver or defender, has to go up and win. And on this snap, Jennings wasn’t going to be denied. And his reward? A 5-0 record for the Vols. Editor’s Note: Seth Price writes Vols Film Study weekly for FOX Sports Knoxville. You can see more of his work at Football Concepts. He is also the author of Fast and Furious: Butch Jones and the Tennessee Volunteer’s Offense, which is available on Amazon.com.
Introduction For close to thirty years, desktop computing experiences have centered around a keyboard and a mouse or trackpad as our main user input devices. Over the last decade, however, smartphones and tablets have brought a new interaction paradigm: touch. With the introduction of touch-enabled Windows 8 machines, and now with the release of the awesome touch-enabled Chromebook Pixel, touch is now becoming part of the expected desktop experience. One of the biggest challenges is building experiences that work not only on touch devices and mouse devices, but also on these devices where the user will use both input methods - sometimes simultaneously! This article will help you understand how touch capabilities are built into the browser, how you can integrate this new interface mechanism into your existing apps and how touch can play nicely with mouse input. The State of Touch in the Web Platform The iPhone was the first popular platform to have dedicated touch APIs built in to the web browser. Several other browser vendors have created similar API interfaces built to be compatible with the iOS implementation, which is now described by the "Touch Events version 1" specification. Touch events are supported by Chrome and Firefox on desktop, and by Safari on iOS and Chrome and the Android browser on Android, as well as other mobile browsers like the Blackberry browser. My colleague Boris Smus wrote a great HTML5Rocks tutorial on Touch events that is still a good way to get started if you haven’t looked at Touch events before. In fact, if you haven’t worked with touch events before, go read that article now, before you continue. Go on, I’ll wait. All done? Now that you have a basic grounding in touch events, the challenge with writing touch-enabled interactions is that the touch interactions can be quite a bit different from mouse (and mouse-emulating trackpad and trackball) events - and although touch interfaces typically try to emulate mice, that emulation isn’t perfect or complete; you really need to work through both interaction styles, and may have to support each interface independently. Most Importantly: The User May Have Touch And a Mouse. Many developers have built sites that statically detect whether an environment supports touch events, and then make the assumption that they only need to support touch (and not mouse) events. This is now a faulty assumption - instead, just because touch events are present does not mean the user is primarily using that touch input device. Devices such as the Chromebook Pixel and some Windows 8 laptops now support BOTH Mouse and Touch input methods, and more will in the near future. On these devices, it is quite natural for users to use both the mouse and the touch screen to interact with applications, so "supports touch" is not the same as "doesn’t need mouse support." You can’t think of the problem as "I have to write two different interaction styles and switch between them," you need to think through how both interactions will work together as well as independently. On my Chromebook Pixel, I frequently use the trackpad, but I also reach up and touch the screen - on the same application or page, I do whatever feels most natural at the moment. On the other hand, some touchscreen laptop users will rarely if ever use the touchscreen at all - so the presence of touch input shouldn’t disable or hinder mouse control. 
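For reference, the static capability check that such sites typically rely on is a one-liner along these lines - a minimal sketch of the common pattern (the variable name is illustrative), not a recommendation, since a true result only tells you touch is available, not that the mouse can be ignored:

// Common static detection of touch support - fine for deciding to ADD
// touch handlers, but never use it to remove mouse/keyboard support.
// (maxTouchPoints covers browsers exposing Pointer Events data;
// where it's undefined, 'undefined > 0' is false, so the check degrades safely.)
var hasTouch = ('ontouchstart' in window) ||
               (window.navigator.maxTouchPoints > 0);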
Unfortunately, it can be hard to know if a user’s browser environment supports touch input or not; ideally, a browser on a desktop machine would always indicate support for touch events so a touchscreen display could be attached at any time (e.g. if a touchscreen attached through a KVM becomes available). For all these reasons, your applications shouldn’t attempt to switch between touch and mouse - just support both! Supporting Mouse and Touch Together #1 - Clicking and Tapping - the "Natural" Order of Things The first problem is that touch interfaces typically try to emulate mouse clicks - obviously, since touch interfaces need to work on applications that have only interacted with mouse events before! You can use this as a shortcut - because "click" events will continue to be fired, whether the user clicked with a mouse or tapped their finger on the screen. However, there are a couple of problems with this shortcut. First, you have to be careful when designing more advanced touch interactions: when the user uses a mouse it will respond via a click event, but when the user touches the screen both touch and click events will occur. For a single click the order of events is:

1. touchstart
2. touchmove
3. touchend
4. mouseover
5. mousemove
6. mousedown
7. mouseup
8. click

This, of course, means that if you are processing touch events like touchstart, you need to make sure that you don’t process the corresponding mousedown and/or click event as well. If you can cancel the touch events (call preventDefault() inside the event handler), then no mouse events will get generated for touch. One of the most important rules of touch handlers is: Use preventDefault() inside touch event handlers, so the default mouse-emulation handling doesn’t occur. However, this also prevents other default browser behavior (like scrolling) - although usually you’re handling the touch event entirely in your handler, and you will WANT to disable the default actions. In general, you’ll either want to handle and cancel all touch events, or avoid having a handler for that event. Secondly, when a user taps on an element in a web page on a mobile device, pages that haven’t been designed for mobile interaction have a delay of at least 300 milliseconds between the touchstart event and the processing of mouse events (mousedown). If you have a touch device, you can check out this example - or, using Chrome, you can turn on "Emulate touch events" in Chrome Developer Tools to help you test touch interfaces on a non-touch system! This delay is to allow the browser time to determine if the user is performing another gesture - in particular, double-tap zooming. Obviously, this can be problematic in cases where you want to have instantaneous response to a finger touch. There is ongoing work to try to limit the scenarios in which this delay occurs automatically.

                         Chrome for Android   Android Browser   Opera Mobile for Android   Firefox for Android   Safari iOS
Non-scalable viewport    No delay             300ms             300ms                      No delay              300ms
No viewport              300ms                300ms             300ms                      300ms                 300ms

The first and easiest way to avoid this delay is to "tell" the mobile browser that your page is not going to need zooming - which can be done using a fixed viewport, e.g. 
by inserting into your page: <meta name="viewport" content="width=device-width,user-scalable=no"> This isn’t always appropriate, of course - this disables pinch-zooming, which may be required for accessibility reasons, so use it sparingly if at all (if you do disable user scaling, you may want to provide some other way to increase text readability in your application). Also, for Chrome on desktop class devices that support touch, and other browsers on mobile platforms when the page has viewports that are not scalable, this delay does not apply. #2: Mousemove Events Aren’t Fired by Touch It’s important to note at this point that the emulation of mouse events in a touch interface does not typically extend to emulating mousemove events - so if you build a beautiful mouse-driven control that uses mousemove events, it probably won’t work with a touch device unless you specifically add touchmove handlers too. Browsers typically automatically implement the appropriate interaction for touch interactions on the HTML controls - so, for example, HTML5 Range controls will just work when you use touch interactions. However, if you’ve implemented your own controls, they will likely not work on click-and-drag type interactions; in fact, some commonly used libraries (like jQueryUI) do not yet natively support touch interactions in this way (although for jQueryUI, there are several monkey-patch fixes to this issue). This was one of the first problems I ran into when upgrading my Web Audio Playground application to work with touch - the sliders were jQueryUI-based, so they did not work with click-and-drag interactions. I changed over to HTML5 Range controls, and they worked. Alternately, of course, I could have simply added touchmove handlers to update the sliders, but there’s one problem with that... #3: Touchmove and MouseMove Aren’t the Same Thing A pitfall I've seen a few developers fall into is having touchmove and mousemove handlers call into the same codepaths. The behavior of these events is very close, but subtly different - in particular, touch events always target the element where that touch STARTED, while mouse events target the element currently under the mouse cursor. This is why we have mouseover and mouseout events, but there are no corresponding touchover and touchout events - only touchend. The most common way this can bite you is if you happen to remove (or relocate) the element that the user started touching. For example, imagine an image carousel with a touch handler on the entire carousel to support custom scrolling behavior. As available images change, you remove some elements and add others. If the user happens to start touching on one of those images and then you remove it, your handler (which is on an ancestor of the img element) will just stop receiving touch events (because they’re being dispatched to a target that’s no longer in the tree) - it'll look like the user is holding their finger in one place even though they may have moved and eventually removed it. You can of course avoid this problem by avoiding removing elements that have (or have ancestors that have) touch handlers while a touch is active. Alternately, the best guidance is rather than register static touchend/touchmove handlers, wait until you get a touchstart event and then add touchmove/touchend/touchcancel handlers to the target of the touchstart event (and remove them on end/cancel). This way you'll continue to receive events for the touch even if the target element is moved/removed. 
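A minimal sketch of that last pattern, assuming a carousel-like container (the element id and handler names are illustrative, not from the article):

// Register only touchstart statically; hang move/end/cancel handlers off
// the touch's target so events keep arriving even if that node is later
// detached from the DOM mid-gesture.
var carousel = document.getElementById('carousel'); // hypothetical element

carousel.addEventListener('touchstart', function (e) {
  var target = e.target; // touch events will keep firing on this node

  function onMove(ev) {
    ev.preventDefault(); // suppress mouse emulation (and default scrolling)
    // ... custom scrolling logic goes here ...
  }

  function onEnd(ev) {
    target.removeEventListener('touchmove', onMove);
    target.removeEventListener('touchend', onEnd);
    target.removeEventListener('touchcancel', onEnd);
  }

  target.addEventListener('touchmove', onMove);
  target.addEventListener('touchend', onEnd);
  target.addEventListener('touchcancel', onEnd);
});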
You can play with this a little here - touch the red box and while holding hit escape to remove it from the DOM. #4: Touch and :Hover The mouse pointer metaphor separated cursor position from actively selecting, and this allowed developers to use hover states to hide and show information that might be pertinent to the users. However, most touch interfaces right now do not detect a finger "hovering" over a target - so providing semantically important information (e.g. a "what is this control?" popup) based on hovering is a no-no, unless you also give a touch-friendly way to access this information. You need to be careful about how you use hovering to relay information to users. Interestingly enough, though, the CSS :hover pseudoclass CAN be triggered by touch interfaces in some cases - tapping an element makes it :active while the finger is down, and it also acquires the :hover state. (With Internet Explorer, the :hover is only in effect while the user’s finger is down - other browsers keep the :hover in effect until the next tap or mouse move.) This is a good approach to making pop-out menus work on touch interfaces - a side effect of making an element active is that the :hover state is also applied. For example: <style> img ~ .content { display:none; } img:hover ~ .content { display:block; } </style> <img src="/awesome.png"> <div class="content">This is an awesome picture of me</div> Once another element is tapped the element is no longer active, and the hover state disappears, just as if the user was using a mouse pointer and moved it off the element. You may wish to wrap the content in an <a> element in order to make it a tabstop as well - that way the user can toggle the extra information on a mouse hover or click, a touch tap, or a keypress, with no JavaScript required. I was pleasantly surprised, as I began working to make my Web Audio Playground work well with touch interfaces, to find that my pop-out menus already worked well on touch, because I’d used this kind of structure! The above method works well for mouse pointer based interfaces, as well as for touch interfaces. This is in contrast to using "title" attributes on hover, which will NOT show up when the element is activated: <img src="/awesome.png" title="this doesn’t show up in touch"> #5: Touch vs. Mouse Precision While mice have a conceptual disassociation from reality, it turns out that they are extremely accurate, as the underlying operating system generally tracks exact pixel precision for the cursor. Mobile developers on the other hand have learned that finger touches on a touch screen are not as accurate, mostly because of the size of the surface area of the finger when in contact with the screen (and partly because your fingers obstruct the screen). Many individuals and companies have done extensive user research on how to design applications and sites that are accommodating of finger based interaction, and many books have been written on the topic. The basic advice is to increase the size of the touch targets by increasing the padding, and reduce the likelihood of incorrect taps by increasing the margin between elements. (Margins are not included in the hit detection handling of touch and click events, while padding is.) One of the primary fixes I had to make to the Web Audio Playground was to increase the sizes of the connection points so they were more easily touched accurately. 
Many browser vendors who are handling touch based interfaces have also introduced logic into the browser to help target the correct element when a user touches the screen and reduce the likelihood of incorrect clicks - although this usually only corrects click events, not moves (although Internet Explorer appears to modify mousedown/mousemove/mouseup events as well). #6: Keep Touch Handlers Contained, or They’ll Jank Your Scroll It’s also important to keep touch handlers confined only to the elements where you need them; touch events can be very high-bandwidth, so it’s important to avoid touch handlers on scrolling elements (as your processing may interfere with browser optimizations for fast jank-free touch scrolling - modern browsers try to scroll on a GPU thread, but this is impossible if they have to check with JavaScript first to see if each touch event is going to be handled by the app). You can check out an example of this behavior. One piece of guidance to follow to avoid this problem is to make sure that if you are only handling touch events in a small portion of your UI, you only attach touch handlers there (not, e.g., on the <body> of the page); in short, limit the scope of your touch handlers as much as possible. #7: Multi-touch The final interesting challenge is that although we’ve been referring to it as "Touch" user interface, nearly universally the support is actually for Multi-touch - that is, the APIs provide more than one touch input at a time. As you begin to support touch in your applications, you should consider how multiple touches might affect your application. If you have been building apps primarily driven by mouse, then you are used to building with at most one cursor point - systems don’t typically support multiple mice cursors. For many applications, you will be just mapping touch events to a single cursor interface, but most of the hardware that we have seen for desktop touch input can handle at least 2 simultaneous inputs, and most new hardware appears to support at least 5 simultaneous inputs. For developing an onscreen piano keyboard, of course, you would want to be able to support multiple simultaneous touch inputs. The currently implemented W3C Touch APIs have no API to determine how many touch points the hardware supports, so you’ll have to use your best estimation for how many touch points your users will want - or, of course, pay attention to how many touch points you see in practice and adapt. For example, in a piano application, if you never see more than two touch points you may want to add some "chords" UI. The PointerEvents API does have an API to determine the capabilities of the device. Touching Up Hopefully this article has given you some guidance on common challenges in implementing touch alongside mouse interactions. More important than any other advice, of course, is that you need to test your app on mobile, tablet, and combined mouse-and-touch desktop environments. If you don’t have touch+mouse hardware, use Chrome’s "Emulate touch events" to help you test the different scenarios. It’s not only possible, but relatively easy, following these pieces of guidance, to build engaging interactive experiences that work well with touch input, mouse input, and even both styles of interaction at the same time.
FOXBOROUGH, MA—In a savage and gruesome turn of events, Patriots head coach Bill Belichick reportedly slaughtered a half-dozen dogs adopted from the humane society Friday, sewing together the dismembered body parts to construct a new, horrific tight end. “They were cute dogs at first, but then I figured out that if you rip them apart, they could be really useful,” said Belichick, watching the fur-covered abomination lumber across the field, producing the blood-curdling sound of splintering bone and ripping flesh with every step. “While tinkering around in my workshop, I started out stitching a few dog legs together, combining ribcages and whatnot, and soon I was reanimating the dead tissue with a portable generator. In almost no time at all, I had a viable red zone target.” Belichick confirmed that the grotesque tight end had a far better understanding of the offense and was considerably more intelligent than Rob Gronkowski.
Quarterbacks being taken first and second in the draft is nothing new. It’s happened three times in the past five years alone, most recently with Jared Goff and Carson Wentz starting off the 2016 NFL Draft with a bang. The Rams and Eagles both made blockbuster trades to land their respective franchise quarterbacks, and each has already paid dividends after less than two full seasons. On Sunday, the two will square off for the first time ever when Wentz’s Eagles pay Goff’s Rams a visit. While they’ll never be on the field at the same time, all eyes will be on the star-studded matchup at quarterback. And although this Week 14 battle has huge playoff implications, with the winner moving one step closer to clinching a first-round bye, Goff and Wentz are the headliners of this show. There’s no reason they shouldn’t be, either. Goff and Wentz are the primary reasons their teams are 9-3 and 10-2, respectively, and their individual performances make them worthy of the MVP award. It’s unusual for two second-year quarterbacks to battle for the most coveted individual award in the NFL, but it’s undoubtedly the truth with regard to Goff and Wentz. They have a chance to be everything we wanted from Jameis Winston and Marcus Mariota, or Andrew Luck and RGIII, or even Peyton Manning and Ryan Leaf. But who’s been the better quarterback? That depends on a variety of things, but looking at each player’s stat line, Wentz appears to be having the superior season.

Wentz: 242-for-399 (60.7 percent), 3,005 yards, 29 TDs and 6 INTs (102.0 passer rating)
Goff: 244-for-392 (62.2 percent), 3,184 yards, 20 TDs and 6 INTs (98.4 passer rating)

If you go beyond the basic stats, the race becomes much closer. Goff certainly has the edge in some categories, particularly yards per attempt and big plays downfield. The following stats show where Goff and Wentz rank in the NFL in each category. We’ll let the numbers speak for themselves.

Big plays (25-plus yards)
4. Goff: 29
17. Wentz: 17

Third-down conversions (pass plays)
2. Wentz: 50 percent
9. Goff: 43.2 percent

Yards after catch by receivers
3. Goff: 1,661
18. Wentz: 1,142

Fourth-quarter passer rating
18. Wentz: 91.3
31. Goff: 75.4

Yards per attempt
3. Goff: 8.12
10. Wentz: 7.53

Aggressiveness (throws into tight windows)
1. Wentz: 24.8 percent
36. Goff: 13.8 percent

Red zone passer rating
2. Wentz: 118.8
7. Goff: 106.3
As we approach the end of this project's run, I would just like to sum up the most critical bits of information in all the previous updates, as well as release one final update for the model. Firstly, I have thought about some of the comments that point out that the nose is a little blocky while the visor is a little unrealistic, and agree that these are valid points. As such, I have revised the design for the nose, adding more cheese slopes and replacing the two 1 x 2 tiles (with grille) with two trans-black 1 x 2 tiles. Check out the new (final) model! LDD file available here* **

*Note: Some of you may notice that there is a slight disparity between my part count (1141 bricks, see below) and the LDD count (1111 bricks). This is because the file only includes the two extra tails (without part optimization) and not the extra white bricks required to remove the blue stripe along the fuselage.

**This LDD file is for personal use only. Please give credit where it is due. Thanks!

Now, to sum up:

- The final part count of Concorde and the Bristol Olympus is 889 bricks (Concorde=868, Olympus & trolley=21). This will be the base model, as proposed in the original draft, should Lego decide that the part count is too high to add any of the other accessories.
- Mobile staircases are each made up of 63 bricks, meaning that if two of them are added, the total part count will be brought up to 1015 bricks.
- Microfigures would be nice, but these are not so critical to the main idea, so I won't be pushing for them.
- Lastly, the liveries: to add the option for all three liveries, British Airways, Air France and British Airways (Classic), 126 bricks (possibly less if optimized further) are needed, so this would bring the part count up to 1015 bricks without the mobile staircases and 1141 bricks with the staircases.

A final word: Thanks everyone for such a tremendously fun journey! When I first started out on Lego IDEAS my best projects merely had LDD screenshots for illustrations, but throughout this whole process I've had the refreshing, mind-blowing, brain-racking opportunity to learn POV-Ray, simple code and how to run a social media campaign. It has been my pleasure to receive your delighted comments upon the release of a major update or the passing of a significant milestone and to see my model grow with your constructive criticism and input. Even if this project doesn't pass the review, I'm thankful to have had the opportunity to develop this project with you and experience what it's like to be a Lego designer, to build and to propose a new set. Thank you, thank you all and have a magnificent New Year! ABStract (Ethan Low)
I wish I hadn’t waited so long to read this paper. It’s got three of my favorite things in a graphics paper: it’s image-space, it’s got a proof rather than hand waving, and it’s clever! It contains some damned pretty images too. The setup is pretty much the same as every image-space rendering algorithm. You render a few buffers (3D position and normal) of the refractive object from the view of the light. Like most image-space algorithms, you want to consider each texel in these buffers to be a small surface. To calculate the caustics on the scene, you want to know where the light refracted through each of these surfaces ends up in the 3D scene. This is the hard part about rendering caustics, because you need to know where the refracted ray intersects the scene. Ray-scene intersection on the GPU = impractical (though obviously not impossible). This is the problem that this paper takes a hack at. By additionally rendering the 3D positions of the scene (sans refractive object) from the view of the light, this algorithm totally bypasses explicit ray-scene intersection. The paper outlines an iterative method which moves toward the correct distance along the refracted ray at which it intersects a scene surface, in image-space. The algorithm is as follows:

Assume an initial distance d along the refracted ray, then for some number of iterations:
1. Compute the 3D position P1 at distance d along the ray and backproject it into the image space of the light’s camera.
2. Use the calculated image-space position to look up the 3D scene position P2 from the scene position buffer.
3. Use the distance between the refraction point and P2 as the new estimate for d.

The paper offers a proof of convergence based on Newton’s method. By repeating this process for each texel in the refractive object buffer and splatting the refracted flux onto the scene at the 3D position calculated by the iterative process above, voila: caustics. One problem with a naive implementation is that the amount of flux splatted onto the scene is dependent on the number of texels covered by the refractive object in the refractive object buffers. The paper states that the flux contribution of each texel is the flux at the surface (N dot L) multiplied by the reciprocal of the projected area of the object in the refracted object buffer. The paper doesn’t state this, but I believe that this should additionally be multiplied by a Fresnel term. The projected area is calculated by performing an occlusion query from the view of the light. I think the method in this paper is very practical. The frame rates are fantastic. They suggest two buffers for the refractive object information, but it could be stored in one RGBA texture by storing depth in one channel and backprojecting/unprojecting the depth to 3D position. This will save a bit on bandwidth and utilize the ALU a bit more. Musawir A. Shah, Jaakko Konttinen, Sumanta Pattanaik. “Caustics Mapping: An Image-space Technique for Real-time Caustics.” To appear in IEEE Transactions on Visualization and Computer Graphics (TVCG). paper – project page
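To make the iteration concrete, here is a minimal CPU-side sketch in JavaScript (the real thing runs per-texel in a fragment shader; the two buffer-lookup callbacks are hypothetical stand-ins for the light-view projection and the scene position buffer read):

// Iteratively estimate where a refracted ray hits the scene using only
// image-space lookups, per the caustics-mapping iteration described above.
// P0: 3D refraction point on the refractive surface, as [x, y, z]
// T:  normalized refracted ray direction
// projectToLightView(p):  hypothetical - projects a 3D point into the
//                         light camera's image space, returns tex coords
// sampleScenePosition(uv): hypothetical - reads the 3D scene position
//                          stored in the scene position buffer at uv
function estimateIntersection(P0, T, projectToLightView, sampleScenePosition, iterations) {
  function add(a, b)   { return [a[0] + b[0], a[1] + b[1], a[2] + b[2]]; }
  function scale(v, s) { return [v[0] * s, v[1] * s, v[2] * s]; }
  function dist(a, b) {
    var dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    return Math.sqrt(dx * dx + dy * dy + dz * dz);
  }

  var d = 1.0; // initial guess for the distance along the refracted ray
  var P2 = P0;
  for (var i = 0; i < iterations; i++) {
    var P1 = add(P0, scale(T, d));   // point at distance d along the ray
    var uv = projectToLightView(P1); // back-project into the light's view
    P2 = sampleScenePosition(uv);    // scene position stored at that texel
    d = dist(P0, P2);                // new distance estimate
  }
  return P2; // estimated ray-scene intersection; splat refracted flux here
}

The iteration count is a quality/speed trade-off; given the paper's Newton-style convergence argument, a handful of iterations should get close to the fixed point where d equals the distance to the looked-up scene point.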
In May 2012, researchers observed a pod of killer whales attacking a gray whale and its calf in Monterey Bay, California. After a struggle, the calf was killed. What happened next defies easy explanation. Two humpback whales were already on the scene as the killer whales, or orcas, attacked the grays. But after the calf had been killed, about 14 more humpbacks arrived—seemingly to prevent the orcas from eating the calf. “One specific humpback whale appeared to station itself next to that calf carcass, head pointed toward it, staying within a body length away, loudly vocalizing and tail slashing every time a killer whale came over to feed,” says Alisa Schulman-Janiger, a whale researcher with the California Killer Whale Project. For six and a half hours, the humpbacks slashed at the killer whales with their flippers and tails. And despite thick swarms of krill spotted nearby—a favorite food for humpbacks—the giants did not abandon their vigil. It’s not clear why the humpbacks would risk injury and waste so much energy protecting an entirely different species. What is clear is that this was not an isolated incident. In the last 62 years, there have been 115 interactions recorded between humpback whales and killer whales, according to a study published in July in the journal Marine Mammal Science. “This humpback whale behavior continues to happen in multiple areas throughout the world,” says Schulman-Janiger, who coauthored the study. “I have witnessed several encounters, but nothing as dramatic as [the May 2012 event],” she says. It remains the longest humpback-to-killer whale interaction known to date. What Is Going on Here? The most logical biological explanation for the humpbacks’ vigilante-like behavior is that the whales receive some sort of benefit from interfering with orca hunts. For instance, orcas are known to attack humpbacks, and the whales are most vulnerable when they are young. Once fully grown, though, a single humpback is large enough to take on an entire pod of killer whales. So perhaps the “rescuing” behavior has evolved as a way to help the species get through its weakest life stage, with humpbacks charging in when they think a young whale is at risk. There’s also a good chance that the calf under attack is related to the whales coming to its rescue. “Because humpback calves tend to return to the feeding and breeding grounds of their mothers, humpbacks in a given area tend to be more related to neighboring humpbacks than to the population as a whole,” says study leader Robert Pitman, a NOAA marine ecologist and National Geographic Society grant recipient. But there’s a wrinkle in this explanation. Of all the incidents the scientists investigated over the last five decades, killer whales targeted humpbacks just 11 percent of the time. The other 89 percent involved orcas hunting seals, sea lions, porpoises, and other marine mammals. There’s even one incident in which humpbacks apparently tried to save a pair of ocean sunfish from becoming orca hors d'oeuvres. Perhaps it’s personal. Schulman-Janiger notes that not all humpbacks interfere with orca hunts, and many that do bear scars from being attacked by orcas earlier in their lives, perhaps as calves. Therefore, it’s possible that personal history drives humpbacks to respond to orca hunts. The study also notes that it’s possible the humpbacks are responding to auditory calls made by the killer whales rather than the animals they are hunting. 
This would mean that the humpbacks don’t know what species is being attacked until they have already invested energy in swimming to the battle. Such a behavior could persist in the population because it would occasionally benefit humpbacks—apparently enough to justify benefiting other species the majority of the time. A Weddell seal rests on the chest of a humpback whale, safe for the time being from attacking killer whales. Photograph by Robert L. Pitman. All for One, and One for All? Other whale experts see a dose of something even more complex: altruism. “Although this behavior is very interesting, I don’t find it completely surprising that a cetacean would intervene to help a member of another species,” says Lori Marino, an expert in cetacean intelligence and president of the Whale Sanctuary Project. Humpbacks are capable of sophisticated thinking, decision-making, problem-solving, and communication, says Marino, who is also the executive director of the Kimmela Center for Animal Advocacy. “So, taken altogether, these attributes are those of a species with a highly developed degree of general intelligence capable of empathic responses.” Furthermore, humpbacks are not the only animals that seem to display some sort of regard for another species. Dolphins have been famously depicted as “aiding” dogs, whales, and perhaps even humans—though it should be noted that onlookers, not animal experts, often report such events, and it can be easy to misinterpret animal behavior. Whether humpbacks are truly performing what amounts to a good deed or are benefiting from the process, it’s clear that we still have much to learn about the minds and motivations of the animals around us. For the most part, Pitman says animals tend to do what is in their own best interest—even if the motivations themselves aren’t entirely clear to us. “As biologists,” he says, “that is where we should start our search for explanations.”
Image caption: Carwyn Jones and Nicola Sturgeon want to prevent any loss of power devolved to Wales and Scotland

A proposal that could have given the Welsh Assembly power to veto key Brexit legislation has been rejected by MPs. On the first day of detailed scrutiny of the EU Withdrawal Bill in the House of Commons, Plaid Cymru's planned amendment lost by 318 votes to 52. It had the support of SNP MPs, but Labour said before the vote they could not support the proposed change. As it stands, Westminster wants support from the devolved nations but can push the planned law through without it. The Welsh and Scottish governments have called the bill a "power grab" which undermines devolution. Prime Minister Theresa May has warned she will not "tolerate" any attempt to block the legislation. With Plaid only having four MPs, it needed the support of politicians from other parties for the amendments to the bill to pass. It is one of more than 350 tabled changes to the wording of the bill, which has passed its second reading and was scrutinised by MPs in the Commons on Tuesday. The European Union (Withdrawal) Bill intends to convert all existing EU laws into UK law, to ensure there are no gaps in legislation on Brexit day. But the Welsh and Scottish governments have objected to the idea that EU responsibilities in devolved policy areas such as agriculture should first be held at Westminster, pending longer-term decisions. While the administrations have no formal power in the Commons, they have suggested a list of amendments which have been tabled by SNP and Welsh Labour MPs to be discussed at committee stage this month.

Image caption: Plaid Cymru MP Hywel Williams said the amendments would make sure all of the UK had a say in the final Brexit deal

Speaking ahead of the debate, Plaid's Brexit spokesman Hywel Williams MP said it would be "irresponsible and dangerous" if the prime minister were to "bulldoze her disastrous Brexit mirage through, against the will of three of the four members [of the UK]". Mr Williams added: "This is not an attempt to derail Brexit - it is an attempt to make sure Wales and the other UK member countries have a say. "Every national parliament should be involved in this process, not just Westminster." A Welsh Government spokesman said the administration had "grave concerns" about the proposed legislation. "There are still serious questions of how Westminster plans to honour the referendum while safeguarding the economy and respecting devolution and the established powers of the Welsh Government," he said. The UK government has been asked to comment.
Posted 6 years ago on Dec. 18, 2012, 12:23 p.m. EST by OccupyWallSt Tags: honolulu

On Wednesday, December 12th, members of (de)Occupy Honolulu filed a lawsuit against the City & County of Honolulu, Wesley Chun (Director & Chief Engineer of Department of Facilities Maintenance), Trish Morikawa (County Housing Coordinator), and Sergeant Larry Santos (Honolulu Police Department), over deprivation of civil rights during raids on the encampment, in the U.S. District Court for the District of Hawai`i. On Monday, December 17th, a Temporary Restraining Order was issued, in effect until the Preliminary Injunction hearing in a month, dealing with raids on Thomas Square. All defendants have either quit their jobs or retired since the last raid at Thomas Square, the day before Thanksgiving. The lawsuit focuses on the city & county’s abuse of Ordinance 10-29 (AKA Bill 39), which limits the use of sidewalks after pushing (de)Occupy to the sidewalk, and Ordinance 11-029 (AKA Bill 54), which allows the Department of Facility Maintenance, Housing, Parks, and HPD to traumatize, steal from, and brutalize the vulnerable houseless population. Since the (de)Occupy camp was established on November 5, 2011, the movement has been fighting against Ordinance 11-029, which was used as a tool to repress freedom of speech within hours of being signed into law. City ordinances like Bill 39 and Bill 54 criminalize the houseless. The U.S. 9th Circuit Court of Appeals stated in Tony Lavan v. City of Los Angeles, “For many of us, the loss of our personal effects may pose a minor inconvenience. However, . . . the loss can be devastating for the homeless.” “Houseless rights are human rights. We have been standing vigil 24/7 for over a year. During that time the city has repeatedly stolen and destroyed our collective and personal property, including car registrations, medications, and bedding of protesters and the houseless alike,” says Sugar Russell, plaintiff. “The city has humiliated people using intimidation and violence. This is what the government does to people who are willing and able to stand up and document abuse and inequality.” “The fight is not over until the peoples’ voice means more than corporate money! (de)Occupy Honolulu is determined to shut down the unconstitutional ordinances of Bill 39 and Bill 54 throughout the County of Honolulu. Prioritizing programs like job placement, rehabilitation, and housing first will show a better return in value for both the community and the thousands of houseless on the island,” says plaintiff Christopher Nova Smith. “Restructuring the assistance housing funds to mirror Hawaii County’s plan could offset the financial strain on the community. By investing in the value of people, the City and County of Honolulu can save taxpayers millions of dollars while promoting equal civil rights and community sustainability.” DeOccupy Honolulu // www.DeOccupyHonolulu.org Facebook: OccupyHonolulu // Twitter: #OHNL
A homeless man in Los Angeles has gotten quite innovative with turning a freeway underpass into his personal compound. Ceola Waddell Jr, 59, furnished an underpass in L.A.’s 110 freeway near Coliseum, complete with a makeshift jacuzzi, two porcelain toilets, couches, discarded refrigerators, and a four-poster bed, the Daily Mail reported. Waddell began living in the underpass six months ago, but has since decorated the underpass into his personal paradise. The man has become a viral sensation, thanks to a Facebook video he took of his living space. City officials, however, are not so pleased with his decision to deck out the freeway underpass. They say the site is dangerous and that Waddell has turned down their offers for temporary housing and homeless services. Workers removed a refrigerator with an “abundance of rotting food,” “explosive materials,” and other unsanitary items, according to the Los Angeles Times. In the Facebook live video tour he gave of his living space, he described it as “Paradise Lane.” “You have now entered Paradise Lane. Let me give you a little tour,” he said. “This here is my jacuzzi. It holds ten gallons of water. All it is is a refrigerator on its bottom. I’m being innovative.” “Come on down. You have now entered the man cave. This is where my quarters are,” he continued, introducing his bedroom. He has another bed, “jacuzzi,” and toilet in his guest room. Waddell rents out his spare room, which has a tent, to other homeless people in the area for $25 a week. “I decided I wanted to live like everybody else, make me something nice that I wanted to come home to,” he said. “If I was in the Arctic, I’d make me an igloo.” “I refuse to let the city beat me down to what they think a homeless person’s profile is, living on cardboard,” he added. City officials say they have received reports about the site and have had sanitation crews dismantle it twice.
Up is down, down is up, the bull is a bear, the bear is a bull and an economy in recovery is really in recession. Such is the current state of the markets, according to Morgan Stanley strategists, who see a "Bizarro World" where nothing makes sense and it's getting tougher and tougher to make a buck. "Everything seems backwards," Adam Parker, the firm's chief U.S. equity strategist, said in a note to clients. "Sell winners, buy losers, own staples in both up and down markets. Just do the opposite of what makes sense." The "Bizarro" reference is familiar to Superman fans for a world where the Man of Steel is really a bad guy and everything else is upside down as well. But for Morgan Stanley, it's been no comic book but rather stark reality. The firm's investment portfolio registered its worst month in more than five years — 61 months, to be exact — as the stock market got off to one of its worst starts ever this year.
The first official commercial resupply mission to the International Space Station (ISS) successfully passed the critical phase of its arrival at the orbital outpost, as SpaceX’s CRS-1 Dragon was “tamed” by the Space Station Remote Manipulator System (SSRMS) ahead of schedule, at 6:56am Eastern. The vehicle was then successfully berthed to the Harmony module. Dragon Berthing (milestones will be updated during the events): Advancing from its tasks during the C2+ test objectives – that involved Dragon undertaking a lap around the Station to test communication assets – the private spacecraft was now qualified to arrive at the ISS on Flight Day 3 of its mission. The spacecraft made a series of thruster burns, each taking it closer to the station; holding at distances of 2,500, 1,200, 250, 30 and 10 metres (2,735, 1,310, 273, 33 and 11 yards), before finally being grappled by the Canadarm2 Remote Manipulator System, and attached to the nadir port of the Harmony module. Akihiko Hoshide operated Canadarm2 during capture, while Sunita Williams later used it to berth the Dragon. However, as always, Dragon was required to pass a series of “Go/No-Go” points during its rendezvous, ensuring it posed no risk to the $100 billion Station. The series of finite maneuvers began as Dragon caught up to the Station via the Height Adjustment (HA) and Co-Elliptical (CE) burn points, bringing Dragon 2.5 km below ISS. A Go/No-Go was performed for the HA3/CE2 burn pair bringing Dragon to 1.2 km below ISS. The HA3/CE3 burn pair, using RGPS and configured with the ISS’ own GPS system, were then conducted, followed by the HA4 (Ai) burn, taking Dragon inside the corridor where the crew began to monitor the spacecraft’s approach. With both SpaceX mission control in California and NASA’s ISS Flight Control Room (FCR) in Houston monitoring, Dragon arrived and held at 250 meters distance from the Station, where checks of Dragon’s LIDAR system were successfully conducted, a key element of hardware that has a heritage of testing via the Space Shuttle Discovery during her STS-133 mission. With all parties satisfied with Dragon’s performance – and ability to abort if required – Dragon was given a “Go” to approach to 30 meters distance from the Station, where it automatically paused. At all points, the call to abort could be made by controllers on the ground, the Dragon itself and the Expedition 33 crew – via the Commercial Orbital Transportation Services (COTS) Ultra High Frequency (UHF) Communication Unit, or CUCU, which rode in the middeck stowage locker on Atlantis during STS-129 late in 2009, before being handed over to ISS crewmembers ahead of the demonstration flights. The CUCU provides a bi-directional, half-duplex communications link between Dragon and ISS using existing ISS UHF Space to Space Station Radio (SSSR) antennas, which provides a communication path between MCCX (SpaceX) and Dragon during proximity operations and command security between ISS and Dragon. “SpaceX1 Capture Preparations: The crew and ground controllers performed a checkout of the CUCU, the RWS (Robotic Work Station), and the SSRMS, in preparation for SpaceX1 capture. All checkouts went well and there were no problems,” noted L2 CRS-1 Status – LINK – in the days prior to Dragon’s launch. Proceeding from 30m to the Capture Point at 10 meters out, Dragon automatically held position again, allowing the ISS’ robotic assets – already translated to the pre-capture position – to make the move towards the Dragon via controls in the Cupola RWS.
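Stripped of the acronyms, the approach profile is a gated sequence: translate to a hold point, poll every party for a "Go", and abort if anyone balks. The sketch below is purely illustrative of that structure; the distances come from the article, while the function names and control logic are invented for clarity and bear no relation to actual SpaceX or NASA software.

```python
# Illustrative only: a gated approach sequence with Go/No-Go checks at each
# hold point. Distances are from the article; the callbacks are hypothetical.
HOLD_POINTS_M = [2500, 1200, 250, 30, 10]

def approach(advance, go_poll, abort):
    """advance(d) translates the vehicle to the hold point d meters out;
    go_poll(d) asks all parties (MCC-X, Houston, crew) for a 'Go' there;
    abort() backs the vehicle safely away."""
    for d in HOLD_POINTS_M:
        advance(d)           # burn/translate to the next hold point and pause
        if not go_poll(d):   # any party can call an abort at any gate
            abort()
            return False
    return True              # holding at the 10 m capture point, ready for grapple
```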
“MT Translation: The MT translated from worksite 5 to worksite 2, in preparation for SpaceX1 arrival. All MSS hardware is repowered, without any significant issues,” added the Status reports, showing the translation of the large ISS robotic assets for the grapple of Dragon upon arrival. Upon receiving the “Go for Capture” call from Houston, the ISS crew armed the SSRMS capture command and began tracking the vehicle through the camera on the Latching End Effector (LEE) of the SSRMS, noted an overview presentation (L2 – Link). With the ISS’ thrusters inhibited and Dragon confirmed to be in free drift, the arm’s LEE maneuvered over the Grapple Fixture (GF) pin on Dragon to trigger the capture sequence ahead of pre-berthing maneuvers. With the addition of several holds into the timeline, capture was confirmed at 6:56am Eastern. The Dragon, secured by the SSRMS, was then carefully translated to the pre-install set-up position, 3.5 meters away from the Station’s module, allowing the crew to take camcorder and camera footage of the vehicle through the Node 2 windows. This footage will be downlinked to the ground for engineers to evaluate the condition of the Dragon spacecraft (See raw download collection from C2+ mission in L2). The SSRMS then maneuvered Dragon to the second pre-install position, at a distance of 1.5 meters out. Desats were inhibited prior to the maneuver of the Dragon into the Common Berthing Mechanism (CBM) interface to begin the securing of the spacecraft to the ISS. A “Go” at this point was marked by all four Ready To Latch (RTL) indicators providing confirmation on the RWS panel. As has been seen with other arrivals – and indeed new additions to the Station itself – Dragon was put through first stage capture tasks, allowing the SSRMS to go limp, ahead of second stage capture, officially marking Dragon’s berthing with the ISS. With all of the ISS berthing milestones ahead of the pre-planned schedule, the ISS crew decided to open the hatch to the Dragon a day ahead of the timeline. The Dragon spacecraft is carrying 905 kilograms (1995 lb) of cargo to the space station, consisting of 461.5 kilograms (1015 lb) of usable items. The cargo includes 118 kilograms (260 lb) of supplies for the crew, including food and clothing; 117 kilograms (390 lb) of scientific equipment for NASA, the US National Laboratory, ESA and JAXA; 102 kilograms (225 lb) of spares and other station hardware; and 3.2 kilograms (7 lb) of computer equipment, mostly spare hard drives and CD cases. The US scientific payloads include a General Laboratory Active Cryogenic ISS Experiment Refrigerator, or GLACIER, cryogenic experiment; the Fluids Integrated Rack (FIR); Commercial Generic Bioprocessing Apparatus Micro-6 (CGBA/Micro-6), a commercial biosciences payload studying fungi in the space environment; and Capillary Flow Experiments 2 (CFE-2), a fluid dynamics experiment. Hardware for the Alpha Magnetic Spectrometer (AMS), retrieval equipment for the MISSE-8 exposed experiment, and refrigeration bags are also being carried. The European Space Agency’s BioLab and Energy experiments are also being delivered by the SpaceX CRS-1 mission. BioLab will be used to perform biological research in the Columbus module, while Energy will study the energy balance of the station’s crew. The Japan Aerospace Exploration Agency has included its Education Payload Operations 10 (EPO-10) payload, which will be used to record video of experiments aboard the station for educational purposes.
Dragon is also carrying an experiment to study plant microtubules and an ammonia test kit. The Dragon spacecraft is scheduled to depart the International Space Station on 28 October, when the spacecraft will be unberthed and released via Canadarm2. The spacecraft will perform a burn to depart the vicinity of the ISS, before closing its GNC bay and beginning its deorbit burn. Once the deorbit is complete, the trunk module will be jettisoned, and the spacecraft will reenter the atmosphere. The trunk section is expected to disintegrate, while the capsule descends under parachute for a landing in the Pacific Ocean. For its return to Earth, the capsule will be loaded with completed experiments and equipment no longer needed aboard the space station, including 74 kilograms (163 lb) of crew equipment, 393 kilograms (866 lb) of scientific equipment and samples, 253 kilograms (518 lb) of station hardware, 5 kilograms (11 lb) of computer equipment, 33 kilograms (68 lb) of spacesuit parts used by previous crews, and a 20 kilogram (44 lb) payload for Roskosmos. (Images: via L2’s SpaceX Dragon Mission Special Section – Containing presentations, videos, images (Over 2,000MB in size), space industry member discussion and more. Additional imagery via SpaceX and NASA). (Click here: http://www.nasaspaceflight.com/l2/ – to view how you can support NSF and access the best space flight content on the entire internet).
Jonathan Ferguson

One of the most persistent firearm myths is that American soldiers fighting in the Second World War (or later, in the Korean War) were at substantial risk of being identified and engaged by the enemy because of the distinctive ‘ping’ sound made by ejection of clips from their issued rifles. The M1 ‘Garand’ was ahead of its time as a military self-loading rifle, but unlike modern rifles it did not feature detachable box magazines. Instead it was loaded with eight-round metal en bloc clips. These were inserted into the open action from the top and retained inside the weapon until the last round was fired, at which point the clip would eject (along with the final fired cartridge case) with a distinctive ‘ping’ sound (you can clearly hear this in the movie ‘Saving Private Ryan’, for example, and see it in slow motion in this Forgotten Weapons video). The notion of this ‘ping’ being a fatal flaw is a myth, in that there’s no evidence that it endangered infantrymen. However, there’s a bit more to it than that… A lot of ink and pixels have been expended arguing the ‘M1 ping’ myth back and forth, and some have even tried to practically demonstrate why it’s a silly idea. Tactical trainer Larry Vickers recreated a scenario for his ‘TAC TV’ series, and more recently YouTuber ‘Bloke on the Range’ has tackled the myth. The Bloke shows just how difficult it would be to even hear the ‘ping’ amid the various other loud noises associated with battle. Soldiers have only recently begun to wear any kind of hearing protection at all, and ears already deadened by unprotected exposure to gunfire would have found such a noise even more difficult to pick out. Not to mention the obvious fact that soldiers rarely fight alone. Even if a German or Japanese soldier did manage to take advantage of the ‘ping’ window of opportunity, he’s likely to get shot by another GI. More importantly, the Bloke shows how easily and quickly one could reload following the ‘ping’. At all but the closest ranges, this really is a myth and a total non-issue. As Bloke points out, there is no actual historical evidence for this ever having happened, and for every claim that a veteran experienced it, there is an ‘equal and opposite veteran’ making a claim to the contrary. This is typified by an exchange in ‘American Rifleman’ magazine in 2011/12 (reproduced here). It’s almost impossible to find a first-hand account either; it’s always a relative, a friend, or a friend-of-a-friend, with the story being told and retold decades after the fact. At this point, one would normally call ‘case closed’ as Garand expert Bruce N. Canfield has done online, in no uncertain terms. However, this situation is more complicated than just the bare facts. Sometimes, myths intrude into reality by being thoroughly embedded in thought and practice. There is no doubt whatever that whether this ever happened or not, quite a lot of soldiers in the ‘40s and ‘50s clearly did believe that this quirk of their rifle posed a real threat. This is proven by a fascinating document uploaded by the Garand Collector’s Association. A 1952 Technical Memorandum (ORO-T-18 (FEC)), entitled ‘Use of Infantry Weapons and Equipment in Korea’, was written by G.N. Donovan of ‘Project Doughboy’. This was an effort by the Operations Research Office of the Johns Hopkins University to gather feedback on the practical usage of US military weapons in the then-current Korean War.
On page five we read the conclusion that: “The noise caused by ejection of the empty clip from the M1, despite the fact that at close range it could be heard by the enemy, was considered valuable by the rifleman as a signal to reload.” And on page eighteen: “One other complaint about the M1 was the noise made by the safety. Half the men had a nagging fear that some day the noise made in releasing the safety would reveal their positions to the enemy, yet only one-fourth objected to the distinctive noise the empty clip made when ejected. They were quite willing to retain the noise of the clip even though the enemy might be able to use it to advantage, because they found it a very useful signal to reload.” However, the question that prompted this response was rather a leading one (p. 51):

“Interviews Conducted on Noise of the Rifle
Is the sound of the clip being ejected of possible help to the enemy or is it helpful to you as an indication of when to reload, or is it of no importance? [Answers are followed by the number of men responding in the affirmative]
Helpful to the enemy – 85
Helpful to know when to reload, therefore retain – 187
Of no importance – 43
[Total responders –] 315”

But the answers speak for themselves. Of those soldiers surveyed, twice as many believed that the noise was helpful to the enemy as thought it unimportant. Many more men thought it was actually a useful audible indication of an empty weapon, bearing out the Bloke’s results: yes, you can hear the ping if you’re close enough, but no, you probably can’t successfully rush a man before he can get another clip into his rifle. In defence of their findings, the researchers commented thusly: “Results of these interviews show that there is great uniformity in responses to questions asked, and all numerical estimates of such items as range of firing, load carried, etcetera, have been found to cluster around a central point with comparatively little scattering. Thus it is felt that the results are reliable and can be fairly said to represent what the infantryman believed he did. The fact that these were group interviews further increased the reliability of the results, since any apparent exaggeration by one man was quickly picked up and questioned by others. In this way the men themselves provided a check on the accuracy of their answers.”
Realising his weapon was empty, the attacking officer opted to use his bayonet (and the element of surprise) rather than take time to reload, and killed the wounded enemy. If we imagine a similar engagement where one party is armed with a Garand, it may well be possible to hear the final shot and the clip go ‘ping’, close the distance, and kill the unfortunate combatant. There are many other scenarios in which this could happen, but all would involve a lull in firing, being isolated from one’s squadmates (or at least being in their firing line, preventing them from shooting past you), running out of ammunition at just the wrong moment, and a certain amount of bravery and/or luck on the part of the attacker. It may have happened, it may never have happened; on that question the balance of the evidence suggests that it did not. However, and this is an important caveat, it is important not to insist that this claim is a total myth as Canfield has done, stating that it is ‘…so silly as to not be worthy of serious discussion’. The implication is that no-one with any knowledge of the subject would make this claim, but we now know that many veteran combatants who fought with this rifle did, in fact, believe it. They simply believed that the minor risk posed by the noise was outweighed by the benefit of an audible cue to reload the weapon. Remember, all arms and munitions are dangerous. Treat all firearms as if they are loaded, and all munitions as if they are live, until you have personally confirmed otherwise. If you do not have specialist knowledge, never assume that arms or munitions are safe to handle until they have been inspected by a subject matter specialist. You should not approach, handle, move, operate, or modify arms and munitions unless explicitly trained to do so. If you encounter any unexploded ordnance (UXO) or explosive remnants of war (ERW), always remember the ‘ARMS’ acronym:
AVOID the area
RECORD all relevant information
MARK the area from a safe distance to warn others
SEEK assistance from the relevant authorities
That's what Hanna Rosin's young son, Jacob, asks her every day about her recent book The End of Men: And the Rise of Women. I am reading Rosin's book for additional research for my forthcoming book on why men are going on strike in marriage, fatherhood and in the culture. One reason I suggest in my book for men's negative attitudes towards marriage, women and society is the denigrating and damaging way that boys and young men are treated in our culture, and a book with a title like this sure doesn't help. To her credit, Rosin at least offers up a lame explanation to Jacob that "I want to convince people that some men out there need our help, since it's not so easy for them to ask for it." "He doesn't quite believe me yet, but maybe one day he will." Yet as I read the pages of her book, I am not sure what type of help she thinks men need, and as Christina Hoff Sommers said to me about men's centers that try to convince men to be more like women: "I don't think that's the kind of help men want." Rosin points out the ways in which girls are seen as better than boys. Rosin describes a shift in the US whereby parents, both men and women, prefer a girl when asked. It took two hypothetical daughters for people to say they would prefer the third child to be a boy. Rosin goes on to say: Women are not just catching up anymore; they are becoming the standard by which success is measured. "Why can't you be more like your sister?" is a phrase that resonates with many parents of school-age sons and daughters, even if they don't always say it out loud. As parents imagine the pride of watching a child grow and develop and succeed as an adult, it is more often a girl than a boy that they see in their mind's eye. Did Rosin ever stop to think that men are just responding to the culture around them? "The End of Men," "boys are stupid, throw rocks at them," Girl Power, and parents who want them to be girls--these are damaging messages to send to our young men. Boys and men keep many of their thoughts and feelings to themselves, but don't think that they won't hear and respond to what is happening around them. Books declaring "the end of men" are contributing to the problem, not trying to find a solution to the reasons that boys and men are not faring as well in our society.
As we reported on Friday, a critical bill that was unable to pass this past week was the extension of unemployment benefits to millions of Americans currently collecting a $1,200 average monthly stipend from the US government for sitting on their couch and not paying their mortgage. As a result of this huge hit to endless governmental spending of future unearned money, the WSJ reports that "a total of 1.3 million unemployed Americans will have lost their assistance by the end of this week." Furthermore, the cumulative number of people whose extended benefits are set to run out absent this extension will reach 2 million in two weeks, and continue rising: as a reminder, the DOL reported over 5.2 million Americans currently on Extended Benefits and EUC (Tier 1-4). The net result is yet another hit to the US ledger, as soon 2 million Americans will no longer recycle $1,200 per month into the economy. In other words, beginning in July, there will be $2.4 billion less spent each month by America's jobless on such necessities as LCD TVs (that critical 4th one for the shoe closet), iPads and cool looking iPhones that have cool gizmos but refuse to hold a conversation the second the phone is touched the "wrong" way. As the number of jobless whose benefits expire grows, the full impact of the lost money will progressively increase, and absent some last-minute compromise, the loss will promptly hit $5 billion per month. Annualized, this is a $60 billion hit to "consumption"; it represents roughly 120 million iPads not purchased, and about half a percentage point of GDP (ignoring various downstream multiplier effects). Worst of all, as these people surge back into the labor force, the unemployment rate is about to spike by nearly 1%, up to 10.5%. From the WSJ: On Thursday, Senate Democrats failed to secure the 60 votes needed to break off a GOP-led filibuster. Sen. Ben Nelson (D., Neb.) voted with Republicans in a 57-41 roll call. Senate Majority Leader Harry Reid (D., Nev.) said this third vote on the matter would be the last, allowing the Senate to move on to modest legislation cutting taxes for small businesses. The collapse of the wide-ranging legislation means that a total of 1.3 million unemployed Americans will have lost their assistance by the end of this week. It will also leave a number of states with large budget holes they had expected to fill with federal cash to help with Medicaid costs. Up in the air are other provisions that were to be included in the legislation, including some $50 billion in new taxes designed to help offset its cost. They included an increase in levies paid by private investment groups, including hedge-fund firms and real-estate partnerships, a provision long sought by some Democrats that will likely return another day. Under a program initially enacted last year—which expired June 2—jobless workers could receive up to 99 weeks of aid, including 26 weeks of basic assistance provided by states plus longer-term federal payments. The Labor Department estimates that the long-term unemployed, meaning those out of a job for at least six months, make up 46% of all jobless workers in the U.S. And like every other stimulus program, there are those who focus on the possible cons of the program's end... There are economic risks in ending benefits. Workers receiving them tend to funnel money back into the economy immediately, helping prop up demand and jobs.
In addition, said Harvard economist Lawrence Katz, if workers are unable to find work and are no longer eligible for unemployment benefits, some will turn to other government programs, such as disability and Social Security. "If you're really concerned about the long-term deficit, you should be really concerned about the long-term unemployed," Mr. Katz said. and pros... Other economists argue that extended benefits have played a part in keeping people out of the labor force. "There's a very large body of research that says that more generous benefits and benefits that last longer…encourage people to stay out of work longer," said Bruce Meyer, an economist and public policy professor at the University of Chicago. James Sherk, a labor economics analyst at the conservative Heritage Foundation think tank, said that while it could be argued that the benefits made available last year were too extensive, cutting off workers who expected to receive the full 99 weeks of benefits isn't ideal either. "You don't sort of pull the rug out from someone halfway through," he said. In our view, what will happen is that the 1.3 million who had gotten used to receiving benefits (and for whom we certainly feel sorry, as once again expectations and reality under the current administration diverge in a dramatic fashion) and had no desire to look for work will immediately flood back into the labor force to find some job, any job, that pays even remotely as well as what the government did. What this means is that the total labor force (which incidentally dropped by 322,000 from April to May) of 154.393 million is about to grow by at least 1.3 million, and as much as 2 million, in July. And since census employment has peaked and the number of employed will stay flat (at best) at 139.420 million, the expansion in the total labor force will increase the unemployment rate by almost 1% in just a month, growing from 9.7% in May to 10.5% in July. That number will be reported in late August. But by then the sequel to the Great Depression v2 movie will be playing in every theater across the land, and this number will be the least of our worries. Appendix A: average monthly benefits check as per the Daily Treasury Statement and the DOL's weekly claims report. Appendix B: For an extended discussion of jobless benefits, how they work, and how their expiration will adversely impact the economy, read As Extended And Emergency Unemployment Benefits Finally Begin Expiring, A Much Different Employment Picture Emerges
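For readers who want to check the arithmetic, the projection above reproduces in a few lines (all inputs are the article's own figures, not independently sourced):

```python
# Reproducing the article's back-of-the-envelope math; every input is the
# article's own figure.
labor_force_m = 154.393   # total labor force, millions (May)
employed_m    = 139.420   # employed, millions (assumed flat at best)
reentrants_m  = 1.3       # expired-benefit workers rejoining the labor force

rate_now = (labor_force_m - employed_m) / labor_force_m
rate_jul = (labor_force_m + reentrants_m - employed_m) / (labor_force_m + reentrants_m)
print(f"current: {rate_now:.1%}, projected July: {rate_jul:.1%}")  # ~9.7% -> ~10.5%

# Spending side: 2 million recipients x $1,200/month = $2.4 billion/month.
monthly_loss = 2.0e6 * 1200
print(f"monthly spending hit once 2 million roll off: ${monthly_loss / 1e9:.1f}B")
```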
KIEV (Reuters) - Britain continues to support keeping sanctions on Russia, its foreign minister Boris Johnson said during a visit to Ukraine on Wednesday, adding that London’s position was unchanged by June’s vote to leave the European Union. Britain's Foreign Secretary Boris Johnson (L) shakes hands with Ukraine's Foreign Minister Pavlo Klimkin during a meeting in Kiev, Ukraine, September 14, 2016. REUTERS/Valentyn Ogirenko On his first visit to Kiev since taking office, Johnson said it was primarily up to the Kremlin to make progress towards peace in eastern Ukraine, where fighting between Ukrainian troops and separatist rebels has killed more than 9,500 people. He said he supported the efforts of the foreign ministers of Germany and France, who were also separately in Kiev for talks, to achieve a lasting ceasefire and a roadmap for peace under the so-called Minsk agreement. There were fears in Kiev ahead of the June 23 EU referendum that a Brexit vote would weaken the EU’s support for Ukraine and undermine its resolve to stand up to Russia. Some EU states want sanctions against Russia lifted. In the run-up to the vote, Johnson, who championed leaving the EU, also publicly linked the Ukraine crisis to what he called the “EU’s pretensions to running a defence policy”. “Whatever you want to say about the EU’s handling of the issues, the crucial thing now is that we maintain sanctions,” Johnson said on Wednesday, while fielding a question that referred to his earlier remarks. “Brexit or not, it makes no difference to us,” he said. “We continue to be a major player, as I’ve always said, in common foreign and security policy. It’s inconceivable that the UK would not be involved in that kind of conversation about sanctions.” Johnson said that the Minsk process was progressing at a “snail’s pace” but remained the only way of resolving the conflict in the Donbass region, which erupted after Russia’s annexation of Crimea in 2014. “Clearly it’s up to the Russians primarily to make progress on the security side,” he said, speaking to reporters alongside his Ukrainian counterpart Pavlo Klimkin. “But it’s up to all sides I think in this conversation to make progress together.” Moscow denies accusations by Ukraine and NATO that it helps the separatists with troops and arms. A ceasefire in Donbass was launched to coincide with the start of the school year on Sept. 1. It failed to stop all fighting but the German and French foreign ministers said on Wednesday an attempt to revive a ceasefire in eastern Ukraine from midnight could set the scene for agreement next week on further peace moves. German Foreign Minister Frank-Walter Steinmeier said Ukraine had agreed to abide by a new seven-day truce proposed by Russian-backed separatists and explicitly backed by Moscow.
This is the third of three stories in a mini-series on how artificial intelligence is affecting the work that agencies do. Read the previous stories about Xaxis and Publicis.Sapient. After years of double-digit growth, global lingerie brand Cosabella suddenly lost momentum in 2016. “We decided we needed to cut ties [with our agency] and change something up,” said Courtney Connell, marketing director at Cosabella. Cosabella had three options: hire another agency, hire more in-house marketers or adopt an artificial intelligence platform that can handle marketing and media buying autonomously. In October, after assessing multiple vendors, Cosabella chose Adgorithms’ AI engine, Albert. Albert’s machine learning powers marketing in email, mobile, search, social and display. Marketers enter high-level parameters like geos, channels and target audiences and set a budget and KPIs around return on ad spend. “After that, [Albert] makes every single decision,” Connell said, including identifying targets and keywords, moving budgets between channels, identifying fraud, controlling bids and executing buys. After three months with Albert, Cosabella saw a 336% increase in return on ad spend. In Q4, revenues increased 155% and the brand saw 1,500 more transactions year over year, 30% of which came from new customers. In Albert’s first month, Cosabella decreased costs by 12% while increasing returns by 50%. On Facebook, return on ad spend was up 565% within Albert’s first month. By the end of month three, Albert had increased conversions on Facebook by 2,000%. Cosabella didn’t have to hire any new talent to bring marketing in-house with Albert. Its 10-person marketing department does creative production in-house, feeding Albert images and copy to serve dynamically. “He can mix and match [creative] however he pleases,” Connell said. “He might start with an ad and if he sees that getting fatigued, he might roll out a new combination.” Connell checks the Albert dashboard every morning, but because the tool self-optimizes, her team need only check on campaigns once or twice a week. It takes them less than an hour to produce graphics and copy and other materials for a campaign. “All of the big idea, strategy and campaign creative is happening in-house,” Connell said. As Albert buys and optimizes media, it makes suggestions. For example, it told Cosabella that creative featuring people performed 50% better than ads featuring just the product. “The beauty of Albert is we don’t have to optimize campaigns,” she said. “He’ll make suggestions on budget or different microsegments he’s seeing movement on.” While systems integration can be bumpy for clients, Connell described ramping up on Albert as “painless” and requiring “no technical investment whatsoever.” All a marketer has to do is link their accounts, like Google AdWords and Facebook, to Albert. It ingests and optimizes ongoing campaigns for about two to three weeks before deploying its own. “Ever since then it’s been very easy,” Connell said. “We just give him the creative concept to make sure he has enough fresh content.” Albert only takes a matter of days to weeks to set up because it can work with just pieces of a marketer’s data, said Or Shani, CEO of Adgorithms. “We really didn’t want to be in the position of the marketing clouds where it takes six months to a year to onboard,” he said.
“We want to get started very fast and show value even if we don’t have all the information.” Eventually, Cosabella will get Albert talking to the other AI vendors it’s onboarded since cutting ties with its agency, including Emarsys for email marketing and Sentient for customer acquisition and real-time merchandizing. Albert can ingest customer lists for lookalike targeting, but Cosabella wants to hook it up to the company’s CRM system to keep that information flowing in constantly. Connell estimates the development work will take up to three hours. “We want to hook Emarsys up to Albert so he can model high- or low-value customers and adjust his budgets to spend more to attain a certain customer,” she said. “We all want them talking to each other.” If an email company creates a higher lifetime-value customer, for example, Albert can change its calculations to spend more to target that customer. “At the end of the day, Albert is just as good as the data you hook it up to,” Shani said. “The more information he can get, the more accurate he can execute.” Before technologies like Albert, mid-size companies like Cosabella, which has 100 employees, didn’t have many options to leverage AI, Connell said. “A lot of companies have proprietary technology, but that’s just not reachable for small to mid-size companies,” she said. “Now that’s totally changed.” Connell doesn’t miss working with an agency at all. Bringing marketing in-house has allowed Cosabella to communicate better and work more efficiently on marketing without the agency as a middleman, she said. “There’s nothing I miss about advertising agencies,” she said. When it comes to measurement, attribution and reporting, Albert is more accurate than Cosabella’s agency ever was. “He knows that he’s shown someone a Facebook ad or if they click on a search ad and make a purchase,” she said. “He gives you reports on assists versus actual sales so you can see how the channels work together to get the customer to make a purchase.” This isn’t the first time Albert has snagged an agency’s business. A large CPG brand recently fired its agency after piloting Albert for four months, Shani claimed. “Everybody tells us we’re the agency killer,” he said. “We are a threat to them, there’s no way around that, because we’re trying to take the pieces of the puzzle they make a lot of money on and that’s a risk.” Connell doesn’t foresee a time when Cosabella will return to working with an ad agency. There’s just too much data for a human to process and make real-time decisions on. “I would never have a human do this type of work ever again,” she said. “Albert is looking at very small, subtle patterns 24/7. When everybody else should be sleeping, he’s out there making decisions.” She leaves marketers with a word of advice. “If you’re not up to speed, get up to speed and get a tool like this,” she said. “If you want to survive, that’s what you need to do.”
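Adgorithms doesn't publish how Albert weighs these decisions, so any code can only gesture at the idea. The sketch below shows one naive way a KPI-driven system could rebalance spend toward higher-performing channels, as the article describes Albert doing; the channel names, numbers, and the proportional-to-ROAS rule itself are illustrative assumptions, not Albert's algorithm.

```python
# Illustrative sketch of KPI-driven budget reallocation across channels.
# The proportional-to-ROAS rule is an assumption, not Adgorithms' method.
def reallocate(budget, roas_by_channel, floor=0.05):
    """Shift budget toward channels with higher return on ad spend (ROAS),
    keeping a small exploration floor in every channel."""
    total = sum(roas_by_channel.values())
    shares = {ch: max(roas / total, floor) for ch, roas in roas_by_channel.items()}
    norm = sum(shares.values())
    return {ch: budget * s / norm for ch, s in shares.items()}

# Example: $10,000 monthly budget across three hypothetical channels.
print(reallocate(10_000, {"facebook": 5.6, "search": 3.1, "display": 0.9}))
```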
WILSON COUNTY, Tenn. (WKRN) - Add Wilson County State Senator Mae Beavers' name to the growing list of those who said they may run for governor. The long-time lawmaker told News 2 that after word Republican Montgomery County State Senator Mark Green might be under consideration for Secretary of the Army, she "got calls for two days looking for a conservative candidate." She also told News 2 in an on-camera interview that some people from Mountain City to Memphis and "all over the state" have been asking her to run for six months. "I am exploring the idea," she said. The lawmaker said she "did not consider it" while Senator Green was in the race. Senator Beavers said, "He will probably be picked for Secretary of the Army and they were asking me to get in the race." When asked when she might make a decision, Beavers said, "we'll see how long it takes for Senator Green to get his background check and his decision and we will go from there." When asked if she thought it was a "done deal" that Green will become Secretary of the Army, Beavers replied, "I do. There is no reason that he won't pass a background check." While serving in Iraq, Sen. Green, who is also a physician, was one of the first to examine Saddam Hussein after the former Iraqi leader's capture. In addition to Sen. Green, who has extensive military service, U.S. House Representatives Diane Black and Marsha Blackburn have been among those mentioned as potential Republican candidates, along with Tennessee House Speaker Beth Harwell. On the Democratic side, former Nashville Mayor Karl Dean has officially announced, while Tennessee House Democratic leader Craig Fitzhugh is among those considering a run for governor.
WASHINGTON (CNN) -- The Supreme Court's conservative majority expressed varying degrees of concern Wednesday over a civil rights case brought by 20 firefighters, most of them white, who claim reverse discrimination in promotions. The suit was filed in response to New Haven, Connecticut, officials' decision to throw out results of promotional exams that they said left too few minorities qualified. At issue is whether the city intentionally discriminated, in violation of both federal law and the Constitution's equal protection clause. The high court is being asked to decide whether there is a continued need for special treatment for minorities, or whether enough progress has been made to make existing laws obsolete, especially in a political atmosphere where an African-American occupies the White House. As is true in many hot-button social issues, Wednesday's arguments fell along familiar ideological lines, with most justices expressing clear views on when race considerations are proper to ensure a diverse workplace. "It looked at the results and classified successful and unsuccessful applicants by race," said Justice Anthony Kennedy, head in hand. "And you (the city) want us to say this isn't race? I have trouble with this argument." But a ruling against the city could leave New Haven officials stuck in a "damned-if-you-do, damned-if-you-don't" situation, subject to lawsuits from both minority and majority employees, said Justice David Souter. It is a situation business groups and municipalities have long expressed concern about. Key plaintiff Frank Ricci and others took promotional exams in 2003 for lieutenant and captain positions that had become available in New Haven, Connecticut's second-largest city. The personnel department contracted with a private firm to design an oral and written exam. When the results came back, city lawyers expressed concern about the results because none of the black firefighters and only one Latino who took the exam would have been promoted. The New Haven corporation counsel refused to certify the test and no promotions were given. The record does not indicate how many firefighters took the two tests for promotion to captain and lieutenant. The city said that under a federal civil rights law known as Title VII, employers must ban actions such as promotion tests that would have a "disparate impact" on a protected class, such as a specified race or gender. But a group of firefighters sued, calling themselves the "New Haven 20." The plaintiffs, wearing their dress blue uniforms, posed on the high court steps after the 75-minute argument. Nineteen identify themselves as white while one says he is Hispanic-white. Inside, conservatives on the high court questioned whether the city could throw out the results of the promotion exams after they were already given. "You had some applicants who were winners and their promotion was set aside," said Justice Antonin Scalia. Chief Justice John Roberts offered a hypothetical: "Jones, you don't get the promotion because you're white ... and they go down the list and throw out everybody who took the test. That would be all right," he asked. "They get do-overs until it comes out right. Or throw out this test, they do another test. Oh, it's just as bad, throw that one out."
Christopher Meade, arguing for New Haven, said, "The city has a duty to ensure that its process is fair for all applicants, both black and white." Justices on the left seemed to support that view. Justice John Paul Stevens suggested that if there were a choice of two tests, one of which had a lesser "disparate impact" on minorities, "they could take that test, even though its sole purpose was to achieve racial proportionality in candidates selected." The firefighters' attorney, Greg Coleman, countered by saying the city's action in this instance "violates the principle of individual dignity." Kennedy's views could prove key. He appeared to oppose the city's dismissal of the test results and has traditionally been skeptical of many race-based decisions in education and the workplace. But his more moderate views could blunt the impact of any ruling by his more right-leaning colleagues. In a key ruling three years ago that tossed out racial diversity plans in two public school districts, Kennedy took a more centrist view that held open the limited use of skin color when trying to achieve diversity in the classroom. The Obama administration also has taken a nuanced position on the appeal. A Justice Department lawyer told the high court that while the federal government supports the city's discretion to nullify the test results, it believes the lawsuit should be allowed to proceed on a limited basis. The case has attracted a broad range of interest from a variety of advocacy and business groups concerned about how far the high court would go to allow race to be used at all by government and the private sector in such areas as affirmative action, education, and contracting. The case is Ricci v. DeStefano (07-1428). A ruling is expected in about two months.
Some people lack self-control. A habit of saying the wrong thing at the wrong time is one example. But now, scientists have developed a way of improving a person's self-control through electrical brain stimulation. This is according to a study published in The Journal of Neuroscience. Researchers from the University of Texas Health Science Center (UTHealth) at Houston and the University of California, San Diego, say their findings could be useful for future treatments of attention deficit hyperactivity disorder (ADHD) and Tourette's syndrome, among other self-control disorders. To reach their findings, the investigators analyzed four study participants with epilepsy who were required to perform a series of behavioral tasks that involved the "braking" of brain activity. Could you resist? Scientists say they have discovered a way to enhance a person's self-control through the use of electrical brain stimulation. The researchers found that the area in which the brain-slowing activity occurred was the prefrontal cortex of each participant. Using brief electrical stimulation through electrodes implanted directly on the brain surface, a computer increased activity in the prefrontal cortex of each patient at the point when their behavioral brain activity slowed. The researchers note that this was a double-blind study, so both the participants and investigators did not know when or where the electrical charges were triggered. 'Self-control enhanced' with stimulation of braking system They found that the electrical stimulation in the prefrontal cortex of the brain enhanced the slowing of behavioral activity, leading to an enhanced form of self-control. However, when electrical stimulation was administered outside the prefrontal cortex, the participants showed no change in behavior. The researchers say this suggests that the effects of electrical stimulation are specific to the prefrontal cortex. Commenting on the findings, Nitin Tandon, of the Vivian L. Smith Department of Neurosurgery at the UTHealth Medical School and senior author of the study, says: "Our daily life is full of occasions when one must inhibit responses. For example, one must stop speaking when it's inappropriate to the social context and stop oneself from reaching for extra candy. There is a circuit in the brain for inhibiting or braking responses. We believe we are the first to show that we can enhance this braking system with brain stimulation." The investigators point out that although their findings are promising, they do not yet provide evidence that direct electrical stimulation is effective for treating self-control disorders, such as borderline personality disorder, obsessive-compulsive disorder (OCD), and Tourette's syndrome. But they say their proof-of-principle study may be useful one day when it comes to treating these types of disorders. Medical News Today recently reported on a study detailing the potential for mind control, after scientists discovered that a brain region activated when people work out mathematical calculations is also activated when people say quantitative terms.
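Mechanically, what the researchers describe is a closed control loop: monitor an activity signal and deliver a brief pulse the moment slowing is detected. The toy sketch below illustrates only that trigger logic; the signal values, threshold, and stimulate() callback are hypothetical stand-ins, not the study's actual apparatus or parameters.

```python
# Toy closed-loop trigger: deliver a brief pulse when the monitored activity
# slows below a threshold. All names and numbers are hypothetical.
def closed_loop(samples, threshold, stimulate):
    """samples yields activity readings in order; stimulate(t) delivers
    a brief pulse at sample index t."""
    for t, activity in enumerate(samples):
        if activity < threshold:   # behavioral "braking" detected
            stimulate(t)           # brief stimulation of the target site

# Example run with made-up readings: fires once, at the third sample.
closed_loop([0.9, 0.8, 0.3, 0.7], threshold=0.5,
            stimulate=lambda t: print(f"pulse at sample {t}"))
```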
Ask any landscape photographer when their favorite time to shoot is, and they’ll most likely say Golden Hour. Golden Hour light is unparalleled in that it’s soft yet vibrant, warm yet even. And man, does it do wonders for showing off the features of a landscape! But taking a high-quality landscape photo during Golden Hour requires more than just pointing the camera toward the sunset and pressing the shutter. I’ve put together a collection of nine spectacular sunset shots to inspire your work, and I’ve included a few tips and tricks along with them. Let’s see just what you need to do to improve your Golden Hour photos in order to create masterpieces like the photos below. When shooting a sunset, many photographers will make the sky the focal point of the image. And while that works in many instances due to the gorgeous coloring of the sky, sometimes the image works better if the sunset takes a backseat to another element. In this shot, Kevin McNeal uses the beautiful waterfall as a focal point, which plays perfectly into the photo. The bright white of the water really pops while the visual weight of the waterfall balances out the layering of the cliffs that dominate the right side of the image. Using water in the foreground of a sunset shot is a great way to improve the exposure. By reflecting the brightness of the sky in the water, the photographer, Steve Kossack, was able to open up the foreground and help prevent it from becoming a dark, shadowy weight that brings the photo down. Better still, the water reflects the gorgeous colors of the sunset, adding additional pop to an already gorgeous image. In this image, Rick Sandford highlights the value of having a perfectly level horizon. Though it’s easy to be taken in by the beauty of the scene, it’s imperative that when you’re photographing a sunset with a clearly defined landscape like this one, you pay close attention to the framing of the shot. If the horizon is off just a little, it will be plainly visible and ruin an otherwise breathtaking photo. In this gorgeous Golden Hour photo, Loscar Numael captures golden tones as the sun illuminates the distant mountains. When photographing sunsets, you can enhance these golden tones by switching your white balance to the “shade” setting. Because the shade setting is intended to warm up bluish tones, it adds red and orange tones to the shot. That means that when it’s used on golden tones, it makes those red and orange tones that much more prevalent. When photographing a sunset, don’t be afraid to extend the length of the exposure to show the movement of the clouds. As we can see in this shot by Jason Odell, the clouds take on a dreamy quality with the blurriness that’s induced by the long shutter speed. Paired with the explosion of color from the setting sun, the movement of the clouds gives this image a greater level of depth and interest. Even though Golden Hour is a prime time to show off the sky, again, we see the value of incorporating foreground interest into the shot. In this case, Joe Rossbach has framed the shot perfectly with the rocks in the foreground serving as an anchor for the photo. The rocks also give our eye something to follow deeper into the scene, giving it a greater sense of dimension. The cool tones of the rocks act as a nice balancing point to the bright, warm colors of the sunset as well. In another beautiful example of a long exposure, Edwin Martinez pairs the harsh lines and shapes of the rocks in the foreground with the soft, smooth surface of the water.
In the background, the fog acts like a giant diffuser, helping to spread out the rays of the setting sun in a way that makes the fog glow. In looking at the differences in the colors between the foreground and background, we also see how you can use color to help achieve improved visual balance from front to back.

Gary Hart demonstrates with his photo above how not all sunset photos have to be taken after the sun actually sets. When photographing a landscape before the sun has hit the horizon, it's a good idea to work in aperture priority mode so you can make quick changes to the exposure settings as they rapidly change with the movement of the sun. Then, once the sun dips below the horizon, switch to manual mode so you have greater creative control over your camera's settings.

Not all sunset photos have to include the sun or even need to be taken in the direction of the sunset. In this image by Steve Kossack, the setting sun bathes the rock formations in an orange glow, giving us a better sense of the texture of the rocks and the size and scale of the landscape. The lesson here is the value in turning around. Often, as spectacular as the setting sun might be, there very well could be an equally beautiful view behind you!

These photos show what's possible when you put in the time and effort to find an interesting vantage point, incorporate foreground interest, use the appropriate camera settings, and so forth. Another crucial aspect of creating images like those seen above is to use high-quality filters that boost colors, help you blur movement, and control the dynamic range of the scene at sunset.

Singh-Ray makes some of the best filters around, including polarizers, solid neutral density filters, and graduated neutral density filters. They even have reverse neutral density filters that are perfect for sunrise and sunset shooting because they are darkest in the center of the filter, with clear glass on the bottom and a dark-to-light graduation above the horizon line. That's beneficial because at sunset, the brightest part of the sky is along the horizon. By filtering out some of that light, the reverse neutral density filter helps you control the dynamic range of the shot, keeping the foreground bright relative to the sky while gradually taming the brightness along the horizon. The benefit to you is that you get all those effects with a single filter. No more long post-processing sessions! I shoot with Singh-Ray filters because I feel they are the best on the market. If you want to take your photos to the next level, I suggest you check them out.
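To make the geometry of a reverse graduated ND filter concrete, here is a toy numpy sketch of that dark-at-the-horizon falloff. This is my own illustration under assumed numbers: the two-stop strength, the half-strength top edge, and the horizon position are invented for the demo, not any real filter's specification.

```python
# Toy model of a reverse graduated ND filter as per-row light transmission:
# clear below the horizon, darkest at the horizon, grading lighter toward
# the top of the frame. Strength numbers are invented for illustration.
import numpy as np

def reverse_gnd_mask(height, horizon_row, max_stops=2.0):
    """Return a (height,) array of transmission factors (1.0 = clear glass)."""
    mask = np.ones(height)
    for row in range(horizon_row + 1):              # from top of frame to horizon
        fraction = row / max(horizon_row, 1)        # 0.0 at top, 1.0 at horizon
        stops = max_stops * (0.5 + 0.5 * fraction)  # half strength at top, full at horizon
        mask[row] = 2.0 ** -stops                   # each stop halves the light
    return mask

# Apply the mask to an RGB frame stored as a (height, width, 3) float array:
image = np.full((400, 600, 3), 0.8)                 # stand-in image data
filtered = image * reverse_gnd_mask(400, horizon_row=200)[:, None, None]
```

Rows below horizon_row keep a transmission of 1.0, which corresponds to the clear glass on the bottom of the filter, while the darkest row sits right at the horizon where the sunset sky is brightest.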
Scotland Yard has raided five squats in London 24 hours before the royal wedding – a week after promising pre-emptive action to ensure the day is trouble free.

Three squats in Camberwell, south London, were raided on Thursday morning along with a community at Heathrow, known as Transition Heathrow and set up in opposition to the building of a third runway. The fifth squat raided was Offmarket in Hackney, north-east London. Scotland Yard said 14 people were arrested in Camberwell. A spokesman denied the raids had anything to do with the royal wedding.

Police said the raid at a squat known as Ratstar in Camberwell was carried out under a section 18 warrant to search for stolen goods. A spokesman said once officers arrived at the address they found that those inside were bypassing the electricity meter and were arrested for "electricity abstraction". "It's business as usual," a spokesman said. "This is nothing to do with the royal wedding."

On Wednesday police officers from the Metropolitan and Sussex police forces raided a squat in Brighton and arrested seven people. Sussex police confirmed it had been assisting the Met in executing three warrants in the city in response to the trouble at last month's anti-cuts protest in London, which followed the TUC march in the capital.

Scotland Yard's denial that the raids were in any way linked to the police preparations for the royal wedding came a week after senior officers made clear they would be taking pre-emptive action, including raiding squats and making arrests, in advance of the wedding to ensure that no criminal activity took place on the day. To arrest people under conspiracy laws the police would have to have evidence of a plot. But the raids on Thursday all appeared to have been carried out under warrants either to search for stolen goods or to look for evidence relating to the disorder at the TUC march on 26 March and the anti-cuts demonstration in London last December.

The warrant issued in Hackney stated the raid was taking place under "section 8 of the Police and Criminal Evidence Act" to look for "documentation ... material ... mobile phones, cameras and correspondence ... that can be linked to events and suspects of serious disorder at the TUC rally on 26 March 2011".

One legal source pointed out that it was highly unusual for so many raids on squats to be carried out in one day. "The Heathrow squat has been there peacefully, supported by many people in the community, for more than a year," said the legal source.

John McDonnell, the MP for Hayes and Harlington, which covers the Heathrow squat, accused the police of "harassing environmental campaigners". McDonnell raised a point of order in the Commons to question the timing of the incidents, saying they appeared to be "some form of pre-emptive strike before the royal wedding". He added: "I believe this disproportionate use of force is unacceptable and I would urge that a minister comes to this house from the Home Office to explain what is exactly happening today, what are the grounds for that action and also to contact the Metropolitan police commissioner to explain that many of us feel that this is disproportionate and no way to celebrate this joyous wedding."
Action Bronson and Party Supplies are returning on November 1st with the second installment of Blue Chips. Today they've released the cover art for the mixtape. Action was supposed to have a very legendary surprise guest on the project, which, it turns out, was supposed to be fellow New Yorker Cam'ron, but as Action told XXL, that fell through (and it was more of a "pipe dream"). "Unfortunately, I wanted to get Cam'ron in," Bronsolino said. "He was going to be the guest, but we kind of jumped the gun by writing that." Bronson instead decided to write himself in as the (legendary) featured artist. Who would like to hear an Action x Cam'ron collaboration? Peep the artwork in the gallery above, and watch the teaser for the project below.
This post was updated Sept. 1 at 9:25 a.m.

UCLA officials announced Wednesday a partnership with the Los Angeles Lakers basketball team that covers in-game health care for players and naming rights to the Lakers' new training facility. UCLA Health will serve as the exclusive in-game health provider for Lakers players. The team's new training facility and offices in El Segundo, California, will also be named the UCLA Health Training Center.

Tami Dennis, a UCLA Health spokesperson, said in an email statement that existing physicians and staff would care for the players, with the potential to hire additional staff. "As we expand our efforts in musculoskeletal activities, including orthopedics, sports medicine and rehabilitation, we anticipate expanding our staff as well," Dennis said.

Construction of the new training center began in September 2015 and is expected to be completed by the fall of 2017.

Jeanie Buss, president and co-owner of the Lakers, said in a statement that she is extremely pleased with the partnership. "Their innovative, forward-thinking, research-oriented medical team and facilities are unsurpassed," Buss said. "Their focus on not only treating, but preventing, injuries will translate into the ultimate goal of helping our players perform better on-court and to prolonging their professional careers."

John Mazziotta, vice chancellor of UCLA Health Sciences and CEO of UCLA Health, said in a statement he thinks the partnership with the Lakers will allow both organizations to promote health and fitness to a diverse audience. "UCLA Health is committed to improving the health of our community, both for individuals and populationwide," he said.

The 120,000-square-foot facility will be the new home of the Lakers and will contain two basketball courts, a public sponsors' gallery for UCLA Health, separate office and gameday entries, separate and secure player parking and entry, and an employee hub/internet cafe.

Dennis said UCLA Health will pay the Lakers $4 million per year over the next five years as part of the partnership. "The partnership includes the naming rights to the UCLA Health Training Center, a strong advertising presence at Staples Center through the placement of our brand and information about our services and enhanced visibility of UCLA Health via broadcast and social media channels," Dennis said. Dennis added UCLA Health expects the partnership to last 15 years. UCLA Health committed to the first five years financially, with the option to extend in five-year increments.
According to reliable sources within the IBJJF itself who have contacted BJJEE.com, the leading federation of Brazilian Jiu-Jitsu, the IBJJF, which is run by Carlos Gracie Jr., is planning some major changes in black belt registration for IBJJF members. The IBJJF is a private for-profit company, and even though it is regarded as the leading and most prestigious BJJ federation (followed by the UAEJJF), it is not an official governing body of the sport. For BJJ black belts to get certified by the IBJJF, they need to pay hundreds of dollars to "certify" their black belt and "degrees".

This is a conversation that was sent to BJJEE. Portions have been redacted, including the European country in question, to protect the source and the investigator.

In these files you see a couple of issues regarding the black belt degrees. We see the minimum rank required to promote new black belts shifting from second degree, as it has been until now, to third degree. Deemed "quality control" or "tradition", this means that owning a fully functional, independent BJJ academy with the proper certifications now takes an additional three years. And if your BJJ academy was independent at second degree, you're now BACK on the hook because the requirements shifted. Three years of additional affiliation fees can range into the thousands of dollars, and you will not be able to compete under your own flag during that time.

At present, only a second degree black belt can promote a brown belt to black belt in BJJ. The minimum required time to get a first degree in BJJ is three years, plus another three years to get a second degree. These changes haven't been officially confirmed by the IBJJF as of yet but are rumoured to be put into effect in early 2018.

Clips from the interview with the high-ranking IBJJF official detailing the IBJJF's control policies in Europe:

What does this all mean? For some prominent members of the community this is good news, because it means that the federation wants only mature instructors (at least nine active years at black belt) to be able to promote new black belts. They would want to avoid a 25-year-old black belt (the youngest possible age for a 2nd degree black belt) promoting another black belt. This is also quality control, so that BJJ doesn't end up as diluted as Taekwondo. For others, this is hypocrisy because of the actions of some highly ranked and legitimate black belts who have promoted what some feel are 'undeserving' members of the BJJ community, or even worse, martial artists who haven't put in the time to train in the gi or in some cases haven't trained BJJ at all...
WATCH: Donald Trump signs his first official act as president, the waiver for retired Gen. James Mattis to serve as defense secretary. pic.twitter.com/MoL3m235cx — Washington Examiner (@dcexaminer) January 20, 2017 As one of his first actions as President, Donald Trump signed a waiver that will allow retired Marine General James Mattis to serve as Trump’s Defense Secretary. He still needs to be confirmed, but the waiver is a necessity because of a restriction under federal law that prevents Mattis from getting the position. Federal law says that in order to serve as Secretary of Defense, a person has to be a civilian, and “may not be appointed as Secretary of Defense within seven years after relief from active duty.” Since Mattis only retired three years ago, he wouldn’t be qualified under the statute. The House approved the waiver on Friday in a vote of 268-151. Only 36 Democrats supported the bill. The Senate easily approved the waiver a day earlier. However, some Democrats opposed the bill fearing that it was too vague because it didn’t mention Mattis by name. “If we don’t stand up for ourselves now, we’re going to be rolled over countlessly,” Rep. Adam Smith of Washington state, the ranking Democrat on the House Armed Services Committee said according to Politico. The waiver for Gen. Mattis is just another example of Donald Trump thinking the rules that his predecessors followed shouldn’t apply to him. pic.twitter.com/S9NfpM49OK — Ruben Gallego (@RepRubenGallego) January 13, 2017 This isn’t the first time this has happened. General George C. Marshall became Secretary of Defense in 1950, despite having been Army Chief of Staff until 1945. Congress allowed Marshall to take the job, but they made it clear that they didn’t want to make a habit of doing this. When Congress passed legislation that let Marshall become Secretary, they specifically said, “the authority granted by this Act is not to be construed as approval by the Congress of continuing appointments of military men in the office of Secretary of Defense in the future.” Ronn Blitzer contributed to this report.
Harvard scholar Christopher Jencks reviews an edited volume, Legacies of the War on Poverty, where some of the best economists weigh in on what worked and what didn’t. Superb review. I learned a lot. The review was in two parts, and here’s the conclusion: On the one hand, there have clearly been more successes than today’s Republicans acknowledge, at least in public. Raising Social Security benefits played a major part in cutting poverty among the elderly. The Earned Income Tax Credit cut poverty among single mothers. Food stamps improve living standards for most poor families. Medicaid also improves the lives of the poor. Even Section 8 rent subsidies, which I have not discussed, improve living standards among the poor families lucky enough to get one, although the money might do more good if it were distributed in a less random way. Head Start also turns out to help poor children stay on track for somewhat better lives than their parents had. On the other hand, Republican claims that antipoverty programs were ineffective and wasteful also appear to have been well founded in many cases. Title I spending on elementary and secondary education has had few identifiable benefits, although the design of the program would make it hard to identify such benefits even if they existed. Relying on student loans rather than grants to finance the early years of higher education has discouraged an unknown number of low-income students from entering college, because of the fear that they will not be able to pay the loans back if they do not graduate. Job-training programs for the least employable have also yielded modest benefits. The community action programs that challenged the authority of elected local officials during the 1960s might have been a fine idea if they had been privately funded, but using federal money to pay for attacks on elected officials was a political disaster. The fact that the War on Poverty included some unsuccessful programs is not an indictment of the overall effort. Failures are an inevitable part of any program that requires experimentation. The problem is that most of these programs still exist. Job-training programs that don’t work still pop up and disappear. Title I of the Elementary and Secondary Education Act still pushes money into the hands of educators who do not raise poor children’s test scores. It has had little in the way of tangible results. Increasingly large student loans still allow colleges to raise tuition faster than family incomes rise, and rising costs still discourage many poor students from attending or completing college. It takes time to produce disinterested assessments of political programs. The Government Accountability Office has done good assessments of some narrowly defined programs, but assessing strategic choices about how best to fight poverty has been left largely to journalists, university scholars, and organizations like the Russell Sage Foundation, which paid for Legacies. Scholars are not completely disinterested either, but in this case we can be grateful that a small group has helped us reach a more balanced judgment about a noble experiment. We did not lose the War on Poverty. We gained some ground. Quite a lot of ground.
Jennifer Hale asks a few questions about the character. That's less than 30 seconds after she walks in the room. Her arrival prompts the usual bit of Hollywood hug and kiss, and some hello-how-are-yous, but then it's down to business. Who is this character? What is she doing? Why is she here? And then, just a minute later, she's in the booth and she's nailing it.

If you've played a video game in the past several years, you'll probably recognize her voice. Her list of credits is enormous. Baldur's Gate, Planescape: Torment, Eternal Darkness: Sanity's Requiem, Metroid Prime, Knights of the Old Republic, Mercenaries, the Metal Gear Solid series, Mass Effect 2 and 3, Gears of War 3, Diablo 3, Halo 4 and 5, Call of Duty: Black Ops, BioShock Infinite, The Last of Us and Broken Age are not even half of the games she's contributed to. And now, added to that list, is Defense Grid 2.

Executive Producer Jeff Pobst and Script Co-Writer Sam Ernst are directing, sitting in chairs with wood frames and slung canvas, like you'd expect. Each is holding a script the size of a small phone book. The engineer signals he's ready to begin. And Hale begins.

Standing in the recording booth, she runs over a couple of lines, trying out different voices. She's playing a new character, a former scientist who is now part of a computer, and who will help the player. Hale asks if it's all right if the character is well-traveled. "She's moved around a lot, but spent a lot of time in Australia?" she suggests. Pobst says, "Sure." And then Hale drops it, and it's perfect. A fully realized character, pulled out of the bag like it's nothing. The result of (and perhaps the cause for) over two decades of successful work in the video game industry.

For every game you play, this scene will repeat multiple times. Each character, each voice, each barely noticeable grunt or scream is created by a person in a booth. Not always by someone as innately talented as Hale, but someone. Somewhere.

After 10 minutes of working with Hale, Ernst and Pobst noticeably relax. It's working. The new character, voiced by the veteran Hale, sounds better than they'd hoped. Stitched together after the fact with the voices recorded by the other actors, it will somehow feel perfectly in place. Even though Hale had never heard those voices, and hadn't read the script until today. Ernst and Pobst celebrate with pastries and warm smiles, while Hale continues to rocket through the script, laying down lines, adding life to the game that's still being made hundreds of miles away, in a completely different state. This is game development.

The writer

Ernst and writing partner Jim Dunn wrote the script for the original Defense Grid. That was back before they'd really made it in Hollywood. Since then, they've written and produced two television series, Haven and Crisis, as well as some episodes of Stephen King's Dead Zone. They're the type of Hollywood writers whose names aren't yet well-known, but they're doing it — making money, making shows. "Journeymen" wouldn't be an unfair characterization. Neither would "pros."

When the time came to write the script for Defense Grid 2, Pobst went back to the well, to the now more experienced team. Ernst and Dunn flew to Seattle in spring of 2013 to begin discussing with Hidden Path's executives what to do with the script for the game. Their first suggestion: Kill off main character Fletcher. The Hidden Path execs lost their minds.
"I wasn't saying we should kill Fletcher permanently," Ernst tells me in LA, over sushi. "It's not that I wanted to get rid of him. ... It was really just the shock value of getting to say that to a bunch of people." The original Defense Grid script had been a fixer-upper project for Ernst and Dunn. A version had already been written, but those writers were not going to see the project through. When Ernst saw what had been created, his first instinct was to jettison it and start over. Ernst and Dunn are one of the rare professional writing teams brought in to help make games. But the practice is becoming more common as game developers try to break out of the traditional enthusiast fan base and into more mainstream markets. Ernst and Dunn's first game project was for one of the Shrek games. Ernst says the process initially was painful. A script had been written, but it was overly complex — and terrible. Jeff Pobst (middle) and Sam Ernst (right) "The idea of bringing in writers was slow," Ernst says. "These ... designers, they thought they could write, and they sucked. They sucked like — I don't know if you read any fan fiction. On Haven, we would have all this fan fiction, and I couldn't read much of it, because a lot of it was sexy. And I was like, 'What are you doing with my characters, man?' And of course I named one of the characters after my daughter, so it was really awkward. But [Shrek] was really on that level. "The reason we got hired on Shrek was to go in because they were in a bad spot. Where the game was, where the story was — the problem was they would focus all on story. That's what they think is important, is story, and there are very few good TV shows or movies where the story is insanely complicated. If anything, you spend all your time trying to simplify a cool story and make it emotionally complex, the characters. When you run into a building and kill zombies, it's hard to focus on character. That's changed, as you know. But that was the big challenge for video games." For Defense Grid that process of focusing on character would be tricky. The Defense Grid series is tower defense. The player doesn't portray a character so much as they manipulate the objects on the screen. They are spoken to by characters in the game, but the characters are never shown. They're just voices. So any dialogue or dramatic interaction is between characters that aren't directly driving the action on the screen, and are also invisible. It's a unique challenge for the creative team. "So the big question is, how much dialogue can you get away with when you can't see the characters?" says Ernst. "With DG1, I don't think anybody knew. We were trying to tell a real story, and there was a lot of back-and-forth on how much dialogue we can do." Ernst and Dunn's first attempt was, in effect, a two-character story, involving dialogue between the main character Fletcher, and the player playing the game. Only the player's lines were not spoken, but typed on the screen while the game played. "We thought it would be awesome, and we put it in and it was horrible," says Pobst. "Because you were playing the game and reading it, and it didn't work." Pobst called the writers and told them the game would have to change. The dialogue would have to all become monologue. "Silence on the other end of the line, for way too long," says Pobst. "I was like, 'They don't want to work with this anymore.'" "The only emotion they want is like a Steven Seagal movie. Someone kills Steven Seagal's family, now he's got emotion." 
The writers recovered, and gave it a shot. The result turned out better than they hoped, and shipped with Defense Grid. Now, almost eight years later, the cast of characters in the Defense Grid universe has expanded. The Containment expansion introduced Simon, voiced by Alan Tudyk, and Cai, voiced by Ming-Na Wen. Defense Grid 2 will add even more characters including, for the first time in the series, a villain.

Ernst sees his work as tapping more fully into emotion in whatever medium he's working in. He's critical of video games and movies that attempt to portray heroes as badass, but skimp on their feelings. "You have to be willing to talk about emotions," he says, while describing how he approached the rewrites on the Shrek game. "I felt like [action game] designers, they [are] afraid of emotion, because it was sappy and stupid and nobody cares about that. The only emotion they want is like a Steven Seagal movie. Someone kills Steven Seagal's family, now he's got emotion. ... He's sad for a scene, he puts a flower on the grave, and then he stands up and his eyes have changed and now he's gonna go be a man. That's Steven Seagal. Or George W. Bush. These are very one-dimensional ideas."

For Defense Grid 2, the characters introduced in the first game and subsequent expansions are evolving, and there's drama to unfold with new characters in an entirely new world. How well that all works is only partly up to Ernst and Dunn. The game itself will have to carry some of that weight, but most of the emotion of the characters will either come through or not in the performances of the actors. And those performances are happening in Burbank, Calif.

The recording

The process of turning words on the page into performances you can hear starts in unexpected places. An entire industry exists to service the effort, mostly in Los Angeles. For Defense Grid 2, Hidden Path works with The Voicecaster in Burbank. It's billed as the longest-serving VO agency in Hollywood, and it provides a "full-service solution," including casting and studio rentals.

We arrive at Voicecaster on a Thursday morning. It's not a glamorous place, but it is home to four separate recording studios and has produced VO for hundreds of productions — television, film, commercials and video games. Walking inside the place, it's like the 1970s never ended. Deep pile carpets, dark wood paneling. The recording equipment and computer look out of place amidst the retro decor. The Keurig machine, sitting on a table, looks like an alien artifact.

Working with Voicecaster, Pobst scheduled returning actors Jim Ward and Alan Tudyk, and found an actress to take over the Cai role from Ming-Na Wen, as well as actors to handle most of the new characters. Most of the actors are booked in hour-long blocks throughout the day.

Juan Carlos Bagnell arrives to engineer the sessions. He's a veteran, and a longtime Voicecaster employee. But like most people working in show business in LA who haven't become huge stars (and some who have), Bagnell works a variety of jobs. He's known as "Some Gadget Guy" on YouTube, where he reviews technology and gadgets. That's his passion, but the VO work pays the bills. Sometimes Bagnell directs. Today he's only engineering.

The computer and sound decks are turned on and waiting for Bagnell when he arrives. He checks to make sure everything is how he wants it, walks into the sound booth to adjust the mic, and then waits. All told, he's good to go in under five minutes. The first session of the day is for the character Cai.
Ellen Dubin replaces Ming-Na Wen, who's now appearing in Marvel's Agents of S.H.I.E.L.D. "Ming-Na got new agents, and the new agents basically said, 'Oh, well, you know, now that she's a big TV star there's no way we would do what we did 12-13 months ago, for that price,'" Pobst says. "As much as I liked her ... it seemed like that was the place to make a change."

Dubin is new to the production, and takes a few line reads to find her footing. Bagnell plays Ming-Na's performance so Dubin can understand the character, but also so that she won't accidentally imitate the well-known actress' performance. There's a fine line between remaining true to the character and delivering a straight-up imitation. One would be good; the other might turn players off. Like the "uncanny valley" effect of seeing a robot or a game character that's almost lifelike, but not quite. Pobst wants Cai to sound similar to the original performance, but different enough that it sounds like an obviously different actress.

After several tries it's still not quite there. Ernst gives the note "more balls." Dubin acknowledges "more balls." And then she delivers it. The performance unmistakably has "more balls." And it's perfect. The character has been found.

And then, less than an hour later, it's Jen Hale's turn. After she finds her slightly Australian-tinted character, the next 35 minutes go by in a blur. Hale, whose session started late because the previous session also started late, performs hundreds of pages of dialogue at a rate of approximately one line every five seconds. Occasionally she does one twice, but mostly she nails it on the first take. At one point, while being directed, she cuts off Pobst. She likes to read the line as soon as she hears it in her head, she says. It saves time. Pobst agrees.

When she's finished, a quick exchange of thank yous, a pose for a picture, and then she's done. Jennifer Hale, the consummate professional, whisks herself out the door in a swirl of bewildered appreciation and smiles, presumably on her way to do it again someplace else, for some other video game or television show. And then, for the DG2 crew, it's on to the next character, which just happens to be the most important character in the game.

Magic on demand

After Hale takes her leave, it's Jim Ward's turn. Ward plays the iconic character Fletcher, the original AI with whom the player interacted in the first Defense Grid. Fletcher is a martial character, regal and dignified. He is an ancient leader from the world where the first Defense Grid game took place, who has now become a voice inside of a computer.

When Ward arrives, he looks anything but regal and dignified. Wearing a leather jacket, jeans and a T-shirt, Ward looks like he's either just woken up or come home from a wild night. He walks timidly. He admits he's not feeling well, and his voice reflects that. It's rough, soft and hoarse. Ward takes some time to compose himself. He gets a drink of water. A stool is located for him to sit on in the booth because his back is failing him. Worried looks are exchanged. Ward is responsible for almost half of the lines in the script. His character, by far, carries the game. If he can't deliver, half of an entire day will have been wasted.
He's booked for another four-hour block the next day, but with so many lines to cover, both days were planned to be full. Pobst and Bagnell play it cool, saying kind things and trying not to let their worry show, but the tension rises. There is concern.

Ward settles in the booth, on his stool. Bagnell signals he's ready, and plays some old Fletcher lines to refresh Ward's memory. And then it's time. Ward clears his throat, takes a deep breath and speaks ... and he's Fletcher. In an instant, Jim Ward is gone, and the person sitting on that stool, in that booth is inhabiting an almost extra-dimensional plane, channeling the character of Fletcher. Ward perfectly re-creates the character in line after line, one after the other, barely pausing for breath. He's reading one line every few seconds, tearing through the script at an astonishing pace.

At one point, as Ward is ripping through one-line "barks" that will play after certain actions are performed in the game, Pobst leans over and whispers, "It's exactly the same as he performed it eight years ago." And sure enough, after I return home from Burbank I compare the barks recorded in 2014 with the same barks recorded in 2007 for the original game, holding my voice recorder up to the computer. They sound absolutely identical.

At the end of the day, Ward has finished reading the final Fletcher line in the script, plus some extra "available now!" advertising blurbs. He's banged out a total of over 900 lines in just under four hours. The second half-day session won't be required. Ward has landed it.

By the end of the next day, almost every single line of dialogue for Defense Grid 2 will have been recorded, except for those of Firefly and Serenity star Alan Tudyk, who was in Australia. He'll have to record another day. Pobst will fly home from LA having completed just another recording session for just another game. Magic, on demand.

Meanwhile, back in Bellevue, Wash. at Hidden Path, the game isn't finished. The voice-overs are just one more piece of the puzzle. Months of writing, weeks of planning and casting, hours of recording, hours yet to be performed of editing and mixing, and in the end it will be just one part of what you see and hear when you play the game. That's voice-over recording, and this is game development.

This story is part of a series covering the development of Defense Grid 2. To read previous installments, please visit the Making of Defense Grid 2 page. This series will continue into mid-2014, the projected launch date for Defense Grid 2. Images: Hidden Path Entertainment, Polygon
The following is an excerpt from The Sync Book: Myths, Media, Magic and Mindscapes, an anthology that came out on 9/11/11 in which 26 bloggers/writers/artists share their perspective on the strange and beautiful universe in which we live. Available here.

Sync

Generally speaking, when one seemingly significant thing or event echoes another, in a way that can't be understood causally, some call the phenomena synchronicity. As the connection between these thing-events(1), and what makes them significant, is up to the interpretation of the individual, this kind of sync is relative. The decision that the relationship between one thing and another is entirely understood (being then normal cause and effect), or not (being then sync), also rests with personal interpretation. It is then an opinion to say "this" is connected to "that," or not. The word coincidence is generally associated with thing-events that echo one another through chance.(2) The decision that something is chance or more mysterious also lies in the murky phantom world of opinion. Synchronicity and coincidence are the same thing; only the individual bias over whether the relationship is meaningful or arbitrary changes. Two people will witness the same events and one will call it coincidence and the other sync. This is an issue of individual temperament, and does not change whether the thing-event took place or not.

All is sync or nothing is sync

Everything is connected to everything else,(3) sync being the cases when our current perspective allows us to see these thing-events as related (or as not divided). Often the seemingly random and meaningless will later be understood as perfectly "in sync," when the perspective shifts appropriately. In this view of sync, we now move away from seeing synchronicity as the connections deemed special and move into a knowing of that property which makes one connection meaningful or not: a property that is always present, because it is ourselves. Syncs offer endless insight into the world, but their most essential quality is in vivifying what and who really allows for them.

If we reflect on one of those amazing animations where we zoom in (or out) infinitely on a fractal Mandelbrot set, we notice that, at points, the entire shape comes into focus and then recedes into strange and dazzling complexity. This happens repeatedly, regardless of where we travel on the infinite complexity of the fractal, as it always resolves into the same familiar shape eventually. Our macro/microcosm has the same property. We have all seen films where the camera moves rapidly between vast amounts of space containing familiar forms like atoms, molecules and cells, eventuating into a creature or plant that would be part of our normal experience or magnification. Further "up," we have the same dynamic: the clutter of streets and houses resolving into patterned cities; later becoming the planet; moving through the vastness of empty space; resolving into the solar system . . . much more space and we see the galaxy, etc.

Sync compares, in the fractal example, to the points at which we see a whole familiar shape after lots of complexity, and, in the micro/macro images, to the moments when we recognize a familiar form resolving after much empty space. The sync is always part of the whole, made up of the interconnected infinite thing-events, but stands out as where we, owing to our unique perspective, resolve this complexity into something conceivable and as associated to other parts of our experience.
All understanding is sync, which is simply association, relationship or connections between thing-events. We derive meaning from putting things and events together to tell ourselves a story. Without constantly making associations we would be lost in perplexing noise and chaos, all the information coming at us making no sense at all. What we generally call synchronicity is the fringe of this everyday activity. Creating unusual associations (that eventually become accepted as universal) is how we expand the parameters of the consensus association narrative (our reality), adding depth and texture to the experience of experience itself. The idea that the sun is the center of the local spheres was once an obscene sync, now included in the common story about the reality we all share. The connection between the movement of the earth relative to the sun and the seasons it causes is a commonly accepted association between thing-events (or syncs), while the connection between 9/11 and 2001: A Space Odyssey is a more occult (out of the ordinary) relationship.

On 9/11/2001, the World Trade Center collapsed right beside "The Millennium Hilton Hotel," a building designed to resemble the black Monolith from the film 2001: A Space Odyssey. The building was practically flush with the towers; it was damaged by falling debris and was captured in many of the iconic images from 9/11. Both the real-life event and the film associate to the year 2001, as well as sharing the presence of the Monolith. The film depicts key evolutionary phases in mankind's history overseen and influenced by the Monolith. 9/11 can also be viewed as a key evolutionary point in our history, and occurred in the presence of the same object as in the film 2001. Through sync association, the context of the film (evolution involving Jupiter) becomes applicable to the real life it reflects. Realizing 911 is also the emergency call service number in America, the vivid synchronicities involved in 9/11 act as an emergency wake-up call into a higher state of consciousness, where we become aware of our inseparability from thing-events. I call this process the "9/11 Mega Ritual."

The boundaries between things and events are collectively and individually agreed upon conventions. We use them in order to make sense of our environment and navigate our everyday lives. The categories we impose on the world are what allow for definition and the creation of all symbols;(4) these are necessary and helpful tools, but not the nature of the actual reality they point towards. The culture(5) of a certain time and place decides the general agreed-upon lines of division we draw between thing-events and how we interpret them. Even the divide between "myself" and "other" is a convention or phantom of this process: the labeling of thing-events that creates our understanding of reality. It is conducive to health and well-being to notice these conventions as shadows or phantoms. That way, these playful apparitions will not be frightening or confuse our activities.

Thing-events are not only seamlessly interpenetrating, but are also boundless in their depth and have infinite complexity. Sync is the experience of the relaxation and dissolution of the boundaries between ourselves and the thing-events of the world. The more we allow ourselves to notice sync, the fewer barriers there are and the more our depth of perception will grow, allowing us to penetrate deeper into the world — a world which starts to feel inseparable from what we consider ourselves.
Sync is Sign

People of many indigenous cultures are known for seeing "signs" in their surroundings. Spirits(6) of ancestors and plants communicate through signs in the natural environment with individuals, mystics and shamans sensitive to such matters. These signs help heal and guide the culture. Sync is the modern equivalent, now recognized, as the false distinction between the natural and human-created world exits. All thing-events (whether trees, cars or movies) arise from the great non-local mystery of existence. The entire interrelated process of creation shapes the forest as much as it does the city. Seeing signs in the urban landscape is sync. I go on regular vision quests or "sync walks" through the concrete jungle with my Ayahuasquero friend,(7) seeing syncs and signs in number plates, T-shirts, billboards and trash. Usually a sync walk will end at the movie theatre, the temple of the city where communion with nature reaches its peak amongst the "stars." Just like signs have helped align the healthy smaller cultures of earth with Creator's plan, sync attunes the modern person to the unfathomable will of the Self realizing itSelf.

How do we know we are understanding the messages from these syncs or signs? Because the ultimate ever-present reality is total perfection,(8) the greater the amount of joy one obtains from their interpretations of sync, the more accurate, successful and aligned the reading of its meaning is. The universe is created in all places and times non-locally, right here and now, the home of consciousness. Thankfully (and mercifully) the intensity of this gnosis varies, allowing for the daily ebb and flow of life. We view this bizarre happening of creation sequentially, doing our best to make sense of it. Sync is the new level of understanding (a collective allowing) about the nature of the universe, now making its way into everyday consensus awareness.

When noticing the non-local hand during a prerecorded movie, the mind-blowing elegance hinted at by the new "in-sync" perspective starts to become tangible. I will often sit down in a theatre or at home and watch something, noticing how the situations, objects and words align with things I have been meditating on, realizing the movie echoes activities and conversations I've been engaged in recently, with uncanny perfection. On occasion, watching media during moments where the context has already allowed for particularly heightened states of awareness, the reactive mirroring between the screen and myself is immediate.

As a teenager, glimpsing this reality with a head full of conspiracies and misperceptions about power and influence, I perceived syncs in movies as hints of a nefarious agenda. Now I realize this entrainment, between my personal experiences and films made independently from me, is the emergent awareness of myself as a non-local organism-environment.(9) I bleed into everything I perceive, my boundary or what I consider as "myself" and the "world" is a flexible convenience, and my perception detaches from the individual focal point. That a prerecorded film is alive and interactive — reacting to my specific context at a given moment — shows that what creates both myself and the film is ever-present, interrelated and, by implication, non-local. The intensity of holy communion with a film (as with life in general) varies appropriately with our degree of presence. I can enjoy watching TV with my mom, noticing a few light entrainments here and there.
Or, I can do Ayahuasca with my friend Jim Sanders, the next day stepping out to the cinema, shaking in my seat with vivid, seemingly impossible boundary loss. Entrainment hints at a property inherent in sync associations that draws them together. The concepts of sync and entrainment are so interrelated that the words are regularly interchanged. Often a word will arise in the mind(10) of the organism and, at the same moment, consciousness will notice this word somewhere in the environment, perhaps on a passing T-shirt or in a song playing in the background. Old models try to say: "either the T-shirt was seen first, prompting my mind to issue the same word" or "perhaps my mind thought the word, then, out of all the clutter in my surroundings, was primed to notice the chance occurrence of the same word." Indeed they are coincident (in the sense of happening at the same time) and this is entrainment between myself and environment. Both myself (which thinks the word) and the environment (which contains the T-shirt) share the same source that is primary to both.

These examples of entrainment, those happening at the same instance or perhaps very close to each other, will help us make the leap to the more unusual ultimate reality of sync. As all is sync (or nothing is sync), regardless of the length of time between events, they are always entrainment, as the non-local self is present in all instances, never mind the justification put forward by the inherently limited mind. All the associations we make are portals opening between ourselves, across the barriers of timespace, organism-environment and thing-event. The most recognized of such portals is love. When we meet a future lover, and our upcoming associations are to be moments of intense shared awareness, we might even recognize it from the first glance.

The sensation that things are specifically being orchestrated for (or even by) yourself is a normal result of the emerging sync consciousness. Depending on the temperament and makeup of the specific belief system, the interpretation of sync can be nefarious or benign. If concepts of influence over what happens in spacetime are associated to pyramid-type hierarchies, the syncs can form a reality of peculiar control networks, manipulating the texture of the moment.(11) To satisfy the ever-increasing depth of the experience, the limited self (ego) needs increasingly powerful and subtle top-down organizations able to manipulate the environment, sometimes extending "them" beyond the physical (into the spiritual, inter-dimensional and godlike spheres). If the external control concepts are replaced by the personal and internal, the individual experiencing sync can start thinking him or herself God. Many undergoing this particular process (awakening as a united organism-environment field via synchronicity) will have a hodge-podge of these symptoms. The faulty idea that the universe is a hierarchy of power and influence, introduced at a young age, is responsible for these phantom menaces. This is not to undermine the models of external higher power and internal personal divinity, but to help put them in the most helpful context we can imagine. The nature of reality clearly corresponds in some areas to the two seemingly opposing ideas. This vivifies the continued joke being played on us as we try and impose concepts and symbols over the (thankfully) indefinable transcendent nature of what ultimately Is.
If we could fit the nature of our Selves and God into a conceptual map or box, it would imply that It is finite, and not bottomless eternal perfection. Imagine a ladder extending infinitely up and down, that all possible beings are climbing. The ladder represents our progress and success as entities in the universal enterprise. As this ladder extends eternally higher and lower, regardless how far apart Gods are from humans, or humans are from ants, we are all still in the center. Only relatively speaking are there higher and lower beings. All are in the center from which consciousness emanates. From a profile view, power represents the top of the pyramid. When we continue up a dimension, and see the pyramid from the overhead perspective, we notice the tip is the center and balanced point. Real power comes from being in harmony with the totality and filled with a joy ultimately free from concept. If the ultimate controlling and manipulating force were not inside of us right now (and in everything else) it would not be ever-present, and not the true eternal Master.

Investigating sync leads one, not only to relaxing the boundaries between things and events, but also to letting go of the rigid classifications of these thing-events. Noticing that one thing-event (situation, object or symbol) has an affinity to another often means being "loose" with our associations. In a sense, to see and understand sync we must become fools: free from the limitations, but not the benefits, of mind. For example, a few years ago I was struggling with actress Robin Tunney climbing K2 (the mountain) in the film Vertical Limit and also passing signs reading "2K" (in reference to the year 2000 Millennium) in another film, End of Days. K2 is obviously 2K backwards, but my mind was resistant to accepting this as a clear association, which it now plainly recognizes. The old model of perception was concerned with the simple relaxation of associating something backwards (letting the box that fits K2 now also accept 2K). This shows how we can be confronted with sync and miss it, owing to how we discriminate between what associates and what we think is not associated. Sync is realizing all associations are agreed upon conventions, and we are free to re-appropriate the process.

The symbol representative of the planet Jupiter(12) clearly looks like an amalgamation of 2 and 4. In synchromysticism(13) it has come to symbolize "42." 42 is "The Answer to Life, the Universe, and Everything" in the popular book series and major motion picture The Hitchhiker's Guide to the Galaxy. 42 degrees is also the angle at which sunlight, refracted inside raindrops, returns to the eye as the colors of the rainbow. Now when I see the number 42, concepts like "Jupiter" and "rainbow" are evoked, perfectly reasonable given the above context. In the same fashion, any symbols or words or concepts are up for re-association, unlocking their infinite potential. We saw Jupiter associating to the sync-starting "9/11 Mega Ritual" and here it is freeing our symbols. Jupiter is the source of words like "jovial" and "joy", and we know that the ultimate reality is perfect Joy. The evolutionary jump we are undergoing, assisted by sync, is overseen by Jupiter.

Sync is time travel

The one unambiguous ever-present factor in any and all synchronicity is the witness or consciousness.(14) Consciousness itself is the ultimate sync, present in all sync. All syncs point towards consciousness and arise from it.
Like water is to the fish, consciousness is so central and ubiquitous to sync (and all else) that we tend to miss it. Consciousness is present whenever you see a 42 or any other sync. The next time you witness a 42, there will be a non-local bridge between the past and present self. This is part of the exciting charge of seeing syncs. The past, present and future selves, all created now, witnessing resonant events and becoming aware of each other beyond time. It is also why significant events (like 9/11) are encoded heavily in the sync architecture of the present. Syncs are signs of our potently aware non-local selves bearing witness to thing-events.

Often syncs are interpreted (2012 phenomena included) as pointing towards an imminent massive collective spiritual experience. This event — collective consciousness realization — does, by its very nature, singularly stand out in the fabric of all we perceive. The great perceiver, perceiving itself through us on the largest planetary scale in known history. Individuals are becoming aware of themselves across spacetime. At the same moment, many different individuals, who are noticing the same syncs, start perceiving their shared transpersonal essence. Real-time synchronicity sharing (what all communication/association essentially boils down to) across the internet and on networks like Twitter is the result of this emergent process (of the collective awaking into a unified greater Self), as well as facilitating its increased realization. The ultimate sync is consciousness in this moment, and we have it already. This does not take away from the delight of processes and play in the world around us. Knowing you are the Self makes you the best player you can possibly be.

Movie stars are "as below" resonators of the skies above

Even though our current general associations are agreed upon conventions (and we are free to make new ones), they still arise from the ultimate mind that orchestrates all thing-events and are not arbitrary. The map is not the territory, but is part of the territory, and both share the same fountainhead. A wonderful example is how we use the word "star" for celebrities. Our oldest myths and stories are associated to the heavenly spheres. They have been personified, deified and their movements become elaborate dramas. Kings, Queens and other big players in our cultures have been connected to these bodies and vice versa. The stories about the stars and humankind are inseparably intertwined and are reflections of each other. Our cinema is a recent incarnation of this dynamic, the obvious giveaway being the word "star." The mystery and beauty of the stars (that would pull man deeper into the void) had to be increasingly turned away from at the onset of "mind." Playing right into Creator's hands, as he/she had put them into the silver screen to keep telling us stories, and continue our process of becoming stars ourselves. Astrology studies the relationships between us and the heavenly spheres. Synchromysticism realizes the same dynamic exists between our world and celebrity stars. Meditating on the film strip ladder — passing at 24 frames a second — we climb the stairway to heaven.

Film and pop culture offer a rich and immediate portal into the collective awakening psyche when treated as creative flux emanating from the heart of all Being. The artist is acknowledged as shaping his/her medium, yet channeling the energy to create, directly from the greater Self.
The ever-changing patterns and themes surrounding us (via our omnidirectionally mediated and increasingly mercurial environments) are taken as the externalized collective Body. We can investigate this dynamic entity, disguised as the forms and context of the environment (that we ourselves comprise) — by treating all that arises in our awareness as sacred and ready to be re-contextualized — with joy as our guide and the sky as the limit. The mainstream culture, no longer a toxic river, is the major artery from which all other strange subcultures branch and receive nourishment. The original collective context of our media and forms are not lost, but celebrated and elaborated upon. The world remains cohesive and integrated (and its individuals sane), yet open-ended for infinite exploration, as depth and meaning penetrate everything continually. The pool of pop and meaning we collectively swim in anchors our experience and supplies a shared framework from which we can launch our experimental, far out and freaky, sync depth charges. Which of these will explode . . . ? New connections that are caught up and pulled in by the gravity of others' interests, only to eventually go "pop" themselves and enrich the sync whole.

Notes:

(1) Using "thing-events" helps highlight how things and events are not ultimately separable or essentially different. I could say, "the tree I am looking at right now is . . ." a thing, or an event, if I realize all the infinite processes that are involved in there being a tree. "Tree" as a "thing" is a convenient label for all the processes that comprise the limitless mysterious reality of the object I am seeing. Syncs are values of mundane versus profound we place upon one thing-event associated to another.

(2) Chance is a label for the occurrences between the initiation and outcome of thing-events far too subtle for current perspective to understand. Chance happenings (coincidences) versus meaningful synchronicity are different models mapping the same phenomena beyond the scope of both. All models are inherently beyond the reach of our labels of the unlimited thing-events. If a map had all the detail of the area it represented it would be identical to it, and useless as a representation.

(3) Reality isn't ultimately connected or not connected (one or many); it is something beyond understandable categories. The ultimate nature of sync goes to the indefinable core of all that exists, and our words about it will always only be relatively true. The words can guide towards transcendental understanding of sync, as they (and the reader) arise from the same source.

(4) To "symbolize" is to give a finite shape and definition to a thing-event (ultimately indefinable) for the purpose of creating associations, which in turn create our perception of the world. The letters and words you are now reading are part of just such a system, symbols creating associations and giving rise to this story of sync. Symbols are in flux, new interpretations are constantly being associated to them, and new symbols are being added to our collection; the story of reality. A collection of symbols, like "letters," can make up another symbol, such as a "word." Our entire perception of reality is a big symbol for the indefinable mystery it represents.

(5) CULTure is the reigning collection of sync associations that make up the current reality of entities (ranging in size from the entire planet down to a single individual). Large cultures are the familiar ones, like countries or regions.
Smaller groups, like synchromystics, share many associations with symbols only agreed upon by the group itself. Cults of only one person exist, and they are often labeled eccentrics or madmen/madwomen. Smaller cults are the breeding grounds for associations that could go viral and become accepted by the bigger groups of association networks (a.k.a. realities). The more joy and harmony emanating from your cult, whether a large or small group, the more it resonates with the ultimate purpose of the totality.
(6) The essences of things are infinite and boundless. When we draw a perimeter around a thing, by the necessity of conceptualization, we place a border we can recognize around part of it. The parts not contained, yet still perceived by those sensitive to such matters, are often called spirit. Other spirits are things so subtle that no model has yet been created to contain them. The spirit world is the realm of thing-events beyond the event horizon of human conceptualization. There is a continual process of evolution in which the human nervous system grows to acknowledge new facets of this realm and include them in the consensus. What was once spiritual is now science. What is now sync will soon be regular association.
(7) In the forests of South America, certain shamans have cultivated a profound relationship of feedback between themselves and the environment. They have a partnership with a conscious plant brew, Ayahuasca. The feedback between the individual and the environment (through the plant) is a singularly powerful portal for what-creates-both to come into this world. The practice has spread to new cultures all over the earth as the living biosphere self-organizes for the next dramatic evolutionary leap. In my home of Winnipeg, I have been fortunate enough (clearly sync entrainment) to stumble upon just such a movement, headed by shaman and man of sync, Jim Sanders.
(8) The more sensitive and free from imperfection the film stock, the more clarity and depth we can capture in the picture. Considering the infinite nature up and down every point of the macro- and microcosmic spacetime manifold, we realize that only the most ultimate and perfect substrate could allow for it. That we can never grasp It is even clearer proof of Its existence, for to be graspable It would have to be containable, and thus finite. This perfection, and the joy that is every moment, are beyond the understanding of the limited mind, which is itself born from them. Often the mind is perplexed at how the events of our lives and societies reflect this ultimate of realities. The total, eternal and ever-present perfection, regardless of context, is self-evident to the ultimate Self.
(9) Using "organism-environment" helps highlight how the perceiving entity and what is being experienced are interrelated and ultimately united (or not divided). Both the organism and its environment are thing-events that arise in primary consciousness. When I notice myself (organism) and what I am seeing (environment), the real I is the witness of both. In any spacetime dynamic there is an organism-environment-thing-event.
(10) The mind is a localized collection of symbols, forming a map or model through which the ultimate consciousness perceives the world. As a collection of limitations, the mind and all it generates (including the persona) are never to be mistaken for the realities they perceive. Again, the mind is not the territory, but it is part of the territory, and both share the same fountainhead.
(11) My reality tunnel at this stage resembled The Truman Show, The Matrix, The Illuminatus! Trilogy and 1984, all put in a blender.
(12) The Jupiter symbol as 42, and its association with rainbows, were explained to me by Jim Sanders.
(13) Synchromysticism is a name I started using in 2006 to describe thoughts about synchronicity: "The art of realizing meaningful coincidence in the seemingly mundane with mystical or esoteric significance." I used this summary on an old blog site to try to encapsulate synchromysticism, and I still see it used on new websites today. Nothing is essentially wrong with it, though it is perhaps too slippery on the brain and open to misinterpretation. That is likely also its strength as a catchphrase. I hope this current document clarifies, or at least confuses in the right direction, the continuing process of creating a better understanding of the phenomenon of synchronicity. On the Internet, an ever-changing community of people have come and gone, associating themselves, willingly or otherwise, with the word.
(14) Consciousness is the mysterious property, always out of reach, that knows. Consciousness can refer to this essence in individuals or in groups, depending on the context in which it is used. There is an ultimate consciousness underlying everything, directly connected and primary to all other levels of consciousness. The unknowable I that knows it knows.
Tokyo is to foot a $3.1 billion bill, part of the cost of relocating American troops from Okinawa. For the first time, it will also host US long-range surveillance drones, which would help monitor disputed islands in the East China Sea. The cost-sharing agreement for the troop transfer and the planned deployment of drones by next spring are both part of an effort to update the US-Japan military and diplomatic alliance. The pledge to modernize the alliance for the first time in 16 years was made in a joint declaration during the visit of US Secretary of State John Kerry and US Defense Secretary Chuck Hagel, who met their Japanese counterparts, Foreign Minister Fumio Kishida and Defense Minister Itsunori Onodera. Japan hosts some 50,000 American soldiers and officers, particularly in Okinawa. Their presence is a constant source of tension with the local population due to crimes committed by servicemen, disruptions caused by military flights and land use by the US military. Last year the countries announced a plan to relocate about 9,000 US Marines from Okinawa to other locations. Some 5,000 of them will go to Guam, while others will be stationed elsewhere. The estimated cost of the relocation is about $8.6 billion, of which Japan will cover $3.1 billion, officials from the two countries announced on Thursday. The cost includes the development of new facilities in Guam and the Northern Mariana Islands. As the foot soldiers leave, US Global Hawk unmanned aircraft will be arriving, marking the first time that American drones will be stationed on Japanese soil on a permanent basis. Two or three long-range spy drones will be placed at a US base to help monitor Japan’s territory. While the disputed Senkaku Islands in the East China Sea were not mentioned in the documents, the islands, contested by Japan, China and Taiwan, were a prevailing topic in public speeches after the signing. Hagel said the US reiterated that it recognizes Japan's administration of the islands and has responsibilities to protect Japanese territory under a mutual defense treaty. "We strongly oppose any unilateral or coercive action that seeks to undermine Japan's administrative control," he said. The US will also deploy a second X-band early warning radar in Japan. Officials were careful to stress that it will be directed against North Korea rather than China, and will help track and potentially intercept missiles coming from the defiant state. The plan to deploy the radar was first announced by then-Defense Secretary Leon Panetta about a year ago. The new system will be placed at the Kyogamisaki air base in Kyoto prefecture in western Japan to complement the existing radar in the northern part of Japan. Beijing, however, may not be convinced by the assurances. It has criticized the installation of the first radar, saying it could disrupt the strategic military balance in the region and destabilize the situation. Other plans to boost the US military presence in Japan include the possible deployment of F-35 jet fighters around 2017, a top US official told AP. There is also a plan to send Navy P-8 anti-submarine aircraft later this year, the first deployment of the sub-hunters outside the US. The upgrade also expands cooperation in areas like counterterrorism and cyber warfare. The Internet threat is one that Japan sometimes cannot defend itself against with the systems it currently has, Kazunori Kimura, the Defense Ministry's director of cyber-defense planning, told Reuters.
Cox’s Bazar and Chittagong airports to remain closed amid Cyclone Mora
News Hour: Cox’s Bazar and Chittagong airports will remain closed from early Tuesday morning till midday due to Cyclone Mora. The Civil Aviation Authority of Bangladesh (CAAB) suspended domestic and international flight operations on these routes. CAAB officials said they suspended air traffic movements to and from Cox’s Bazar Airport until further notice, while flight operations at Chittagong Shah Amanat International Airport will be suspended from midnight tonight until at least 2pm tomorrow. Wing Commander Reazul Kabir, the manager of the airport, said the suspension could be extended or lifted early depending on the weather. The cyclonic storm ‘Mora’, over the north Bay of Bengal and the adjoining east-central Bay, is feared to intensify further, move in a northerly direction and cross the Chittagong–Cox’s Bazar coast by tomorrow morning, the latest weather bulletin said this evening.
With reporting by Hotnews.ro
Russian Deputy Prime Minister Dmitry Rogozin has accused Romania of planning to annex Moldova by supporting Chisinau's efforts toward European integration. Rogozin, in a message on October 2, said Romanian President Traian Basescu, whom he called the "Traian Horse," has a two-step plan: first, Moldova's "association with the EU, then its Anschluss with Romania" (a reference to Nazi Germany's annexation of Austria in 1938). In response, Basescu told a news conference in Bucharest that Romania did not intend to annex any country but would continue to support Moldova's European aspirations. Basescu said Rogozin was used to "making a statement about Romania every morning before having tea." Moldova hopes to sign an Association Agreement with the European Union at an Eastern Partnership summit in November. Russia recently banned Moldovan wine imports, a move widely seen as a warning to Chisinau.
“This has become a country where people are not just killed, they are tortured, mutilated, burned and dismembered … Children have been decapitated, and we know of at least four cases where the killers have eaten the flesh of their victims.” Navi Pillay, UN High Commissioner for Human Rights, was describing conditions in the Central African Republic (CAR), where the country’s entire minority Muslim population faces either death or expulsion at the hands of the Christian majority. According to Amnesty International, the troubles in CAR began when the Muslim “Seleka” militia started a murderous rampage in the northeast of the country. It then seized the capital, Bangui, in March 2013, ousting president Francois Bozize, a Christian, and replacing him with a Muslim, Michel Djotodia. Amnesty reports that over the next 10 months, the mainly Muslim militia killed countless Christian civilians, burned numerous villages and looted thousands of homes. To fight the Seleka troops, the Christian population organized its own militias and started carrying out reprisal raids on Muslim neighbourhoods. By November 2013, violence had reached such levels that the UN warned CAR was at risk of spiraling into genocide. Unable to hold on to power, Djotodia resigned and fled the country on January 10 of this year. With him gone, the tide turned against the Muslim militia, resulting in an unprecedented orgy of revenge attacks on Muslims across the country that have not stopped. The Associated Press reported that thousands of Muslims who tried to flee the sectarian violence in Bangui were turned back by peacekeepers as crowds of angry Christians shouted, “we’re going to kill you all.” As Muslims tried to flee to neighbouring Chad and Cameroon, Reuters reported that Christian militias were blocking the main roads used by Muslim civilians and attacking the refugees. According to Amnesty, the stated goal of the Christian militias is now to rid the country of Muslims forever. And as the world looks on, the killing continues. At this rate, CAR will be Muslim-free in a matter of months. The killings come as the world observes the twentieth anniversary of the Rwandan Genocide. Canadian Sen. Romeo Dallaire, who witnessed the 1994 genocide in Rwanda as a UN peacekeeper, says, “Let’s not divorce what’s happening in the Central African Republic with what happened 20 years ago in Rwanda.” In an interview with the Guardian, he said, “We’ve actually established a damn pecking order and the sub-Saharan black African — yes we’re interested but it just doesn’t count enough to spill our blood, to get embroiled in something complex that will need longer-term stability and influence.” For Muslims around the world, witnessing yet another Muslim community suffer mass murder, so soon after the killings in Myanmar, is traumatic. It is easy to start believing in conspiracy theories and to become addicted to victimhood. However, as a Muslim I believe this latest tragedy sends us a message we must not ignore, whether we are in Africa, Asia or here in North America. The problems we Muslims face are the making of our poor political and religious leadership. If we wish to join the rest of humanity, we need to get rid of those who now hold our reins. We need to free ourselves from bondage. If we don’t, the Central African Republic will not be the last tragedy we face.