Our Future: Wunderlist joins Microsoft. Today marks a momentous day for me and the entire Wunderlist family. I am incredibly excited to share with you that we are joining Microsoft. When we launched Wunderlist almost five years ago, we set out on a mission to reinvent productivity software. Our goal was to build the most delightful, simple and elegant product to help people manage their daily personal and professional to-dos. Seeing Wunderlist grow to what it is today – 13+ million users who have collectively created more than 1 billion to-dos – blows my mind. Yet it's only the beginning. Our aspirations are much bigger. Joining Microsoft gives us access to a massive wealth of expertise, technology and people that a small company like ours could only dream of amassing on its own. So what will change for you? Nothing right now. Our team in Berlin will continue to build and deliver Wunderlist, Wunderlist Pro and Wunderlist for Business across all platforms – iPhone, iPad, Apple Watch, Mac, Android, Windows Phone, Windows and the Web. I will continue to lead the team and product strategy, because that’s what I love the most – building great products that help individuals and businesses get stuff done in the simplest and most intuitive way possible. Over the next few months, as Wunderlist becomes part of the Microsoft family, we’ll introduce a host of new features, continue growing the ecosystem of partner integrations and make progress in delivering Wunderlist to billions of people. We are excited and can’t wait to share with you what we have been working on – watch this space! On a personal note, I couldn’t be more proud of my team. I started this company with a dream. My team turned it into a reality. Together we have created an award-winning, beautiful and simple to-do list that is powered by a real-time sync architecture unlike any other. We have scaled to new markets, won loyal fans (and awards) along the way and had a lot of fun. 
It’s been an incredible journey and has provided such a valuable learning experience for all of us. Thank you, team! I also want to say a huge thanks to you – our users, supporters and investors. And thank you to Microsoft! We are beyond thrilled to be continuing our journey with you. To sign off this chapter in our history, I've handpicked some photos that document our story from the very beginning to now. I hope you enjoy them. Here’s to the future of Wunderlist! Best wishes from Berlin, Christian Reber, Founder & CEO. If you have any questions, feel free to ask us anything via Email, Twitter, Facebook, Google+ or in the comments below.
|
I thought it would be fun to set up a prop on my porch for Halloween that did more than just sit there. Of course, I came up with the idea very late, so this whole thing was cobbled together in just an hour or two. Looking around my house at the gadgets I had, I hit on the idea of putting a jack-o'-lantern on the porch whose light I could control. Searching the internet for scary Halloween sounds caused things to quickly evolve: I added motion sensors, control of a couple of lights, and sound, and put the pumpkin in the lap of a headless scarecrow. I put a motion sensor at the end of the walk facing the house, so that when someone approaches the porch it triggers the computer-controlled sequence. First the porch light shuts off, then a scary sound starts playing through a speaker under the scarecrow. The light in the jack-o'-lantern slowly brightens just as a "Help me" emanates from the speaker. Then it dims back down, the sound stops, and the porch light turns back on. I also have a blacklight on the porch that stays on, to make sure there is still some light there and to add some atmosphere. The scarecrow is just some old clothes stuffed with lots and lots of plastic shopping bags; I used a tan-colored bag up at the neck as a sort of flesh tone. The whole thing is controlled by a Raspberry Pi with the standard Raspbian setup. The sound is played out the analog jack to a powered speaker that sits under the chair, and the lights are controlled with several X10 modules. To control the X10 modules and receive the signal from the motion sensor, I use Heyu and a CM11A module connected to a USB serial adapter. Heyu doesn't come as a package under Debian, so you'll need to compile and install it yourself. 
It's pretty straightforward, but you'll need to adjust the localstatedir during configure:

$ ./configure --localstatedir=/var

After it compiles and installs, create /usr/local/etc/heyu/x10.conf by copying the x10config.example and setting the serial port to /dev/ttyUSB0. Also, down where the script section is, add the following lines:

ALIAS treater B8
SCRIPT treater on :: /usr/local/bin/treater.sh

Create the /usr/local/bin/treater.sh file as below (or download the attached file) and put the sound file in /usr/local/lib. In rc.local, add a line to automatically start the heyu engine. Test it out by sending the ON code that the motion sensor sends. Once the animated sequence is working, put it out on your porch and see how many kids you scare!

#!/bin/sh
PATH=/usr/local/bin:/usr/bin:/bin
export PATH

# X10 unit codes for the three controlled lights
PORCH=B3
PUMPKIN=B2
WINDOW=B4

# Bail out if a sequence is already running
if lockfile -! -r 0 /tmp/treater.lock ; then
    echo locked
    exit 0
fi

# Scary sound in the background while the lights run
play -q /usr/local/lib/way_scaryR.wav &
heyu off ${PORCH}
#heyu on ${WINDOW}
heyu bright ${PUMPKIN} 22
sleep 2
heyu dim ${PUMPKIN} 22
heyu on ${PORCH}
#heyu off ${WINDOW}
rm -f /tmp/treater.lock
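A motion sensor will happily fire again while the sequence is still playing, which is why treater.sh guards itself with procmail's lockfile. If lockfile isn't installed on your Pi, here is a rough, portable sketch of the same one-shot guard built from mkdir, which either creates the directory or fails atomically (the lock path and echo messages are my own choices, and the actual play/heyu commands are omitted):

```shell
#!/bin/sh
# Portable re-trigger guard, sketched with mkdir(1) instead of procmail's
# lockfile(1). mkdir either creates the directory or fails atomically,
# so it doubles as a mutex for the animation script.
LOCK="${TMPDIR:-/tmp}/treater.lock.d"

if ! mkdir "$LOCK" 2>/dev/null; then
    # A previous sequence is still running; ignore this trigger.
    echo "locked"
    exit 0
fi

# Release the lock however the script exits, so a crash can't
# leave the prop wedged for the rest of the night.
trap 'rmdir "$LOCK"' EXIT

echo "sequence started"
# ... play / heyu commands from treater.sh would go here ...
```

The lockfile call in the original listing does the same job if procmail is installed; the mkdir form just drops that dependency.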
|
YARMOUTH, MA — A Medford man is lucky to be alive after his boat capsized in frigid Cape Cod waters Monday night. The 36-year-old spent more than 13 hours clinging to the overturned boat — and he was unable to call for help after his phone sank, according to police. "He lost his cell phone in the water and did the best he could to keep his life jacket on and cling to the boat and wait for help," police said in a statement. The man's girlfriend notified authorities when he failed to contact her after launching his 14-foot aluminum boat in Yarmouth. A Coast Guard helicopter crew spotted the man at about 4 a.m., and he was rescued. He was transported to Cape Cod Hospital and was expected to be released later Tuesday. This story will be updated. Below is Coast Guard video of the rescue. Photos: Police said a man spent a night clinging to a 14-foot aluminum boat that overturned near Yarmouth on February 20, 2017. (Credit: Yarmouth Police Department)
|
WHO? Ramadevi T.S. WHAT? Space Electronics Engineer WHERE? Vikram Sarabhai Space Center, Trivandrum Reported by Nandita Jayaraj “Have you ever watched a rocket launch?” “Ethrayo kandittund [So many I’ve seen]. It was part of my job, Nandita.” The 1960s, when Ramadevi was growing up, was an eventful decade for space science. Russia had just launched the first ever artificial satellite, Sputnik-1, into space, sparking off the famous space race with the USA. Yuri Gagarin became the first human to travel to space in 1961, and Valentina Tereshkova became the first woman to do so in 1963. Just a few months later, India’s first rocket was launched from Thumba, a fishing village near Trivandrum. The story of the rocket (actually just a part of the rocket) being transported on a bicycle lives on. In 1969, the freshly graduated engineer Ramadevi read the news of Neil Armstrong walking on the moon. It’s tempting to romanticise the circumstances and suggest that they played a role in shaping Ramadevi’s illustrious career with India’s space agency ISRO, but the now-retired scientist insists that she had no special fascination for space growing up. However, she does humbly recall the sight of grey buses carrying Thumba Equatorial Rocket Launching Station (TERLS) employees whooshing past Trivandrum Engineering College, where Ramadevi was pursuing her degree in electronics and communications. “At 8 o’clock every morning, very punctually, the buses used to go. Looking at that, I thought it may be nice to work there, but I actually didn’t know much about space science.” As it turned out, after college Ramadevi accepted a job at ISRO. She joined ISRO’s only center at the time, the Vikram Sarabhai Space Centre (VSSC), as a technical assistant in 1970. Forty years later, in 2010, she left as Deputy Director. This made her one of the few women who have made it to the top positions at the national space agency. 
What R&D at ISRO is like When she joined, Ramadevi was in the R&D section, which developed on-board electronics for ISRO’s rockets. So what exactly did “research and development” involve? I asked her. Ramadevi looked around for an example – “See this recorder,” she said, pointing at my dictaphone, “suppose you have been asked to make one. You need to know what it means, what is inside it, how it operates and the state-of-the-art technologies that it uses.” This is the research part. Then comes the development. “Now you have to design the device – the circuits, calculate the audio output, the amount of amplification needed, etc.” A report is then made – in those days on paper – and sometimes a prototype is made and taken to experts for a review. The experts give feedback – “they may say the bass is not right or the treble is not right and then ask you to return with a modified design.” After the design is finally approved, the first ‘qualification model’ is made. Then the most crucial part starts – the environmental tests. Devices that are part of the spacecraft need to tolerate many extreme environments on their way to and while in space, surely a lot more than my recorder is subjected to. “The temperature can reach 125 degrees during burnout (when the fuel is consumed fully), then vacuum, negative temperatures – all that had to be tested.” Ramadevi’s very first project at VSSC was not space-related, but it acquainted her well with the extent of this testing phase – “We were asked to develop walkie-talkies for the Border Security Force (BSF) soldiers.” Back then, in 1970-72, the term ‘walkie-talkie’ had not yet come into use. The device was called a ‘high-frequency transmitter-receiver’ and it was big enough to mount on the back like a backpack. The model Ramadevi and her team developed had to be tested against vibration (in case it was in a vehicle), high temperatures (in deserts) and low temperatures (in the mountains). 
Some members of her team even travelled to Ladakh for testing purposes – “I would have liked to go but because I was a woman – just 22 years old – the management would not allow that.” Witnessing an electronic revolution In those early years, Ramadevi found herself in the thick of back-to-back technological transformations. “In just 10 years, electronics advanced so much. From tube-based systems, they became transistor-based, then microprocessor-based, then microcomputer-based. These things were not planned, but we could witness them all in our lifetime from 1970-95.” Four years after her team developed the backpack-sized walkie-talkie device, a portable model was developed. Ramadevi was an electronics engineer in the right place at the right time. But after 25 years in the R&D division, Ramadevi itched for something new. “The thing with being on the technical side is that you’ll be stuck doing the same thing – I was making power modules for satellites. I was asked to make it smaller and smaller – after a while, I had enough,” she laughed. “I didn’t find any challenge there anymore, so I decided I wanted to come out of it.” Keeping this in mind, Ramadevi began to attend project management training programmes. “This way I impressed upon my superiors that I want to work on slightly non-technical management areas too,” she said. An unconventional move Naturally, when an opportunity came to do something different with ISRO’s Inertial Systems Unit, Ramadevi snatched it. It was a ‘head’ post, a designation to which “women were not selected very easily”. In retrospect, she realises that this move was her big break, but at the time, her decision to move from R&D to production was very unconventional. Production is the process that starts after a product is designed and passes testing. “The product is given to you and you just have to produce multiple units of the same,” she explained. 
This side of things is viewed as somewhat inferior in technical fields, especially at ISRO, Ramadevi informed me – “they see it as making dosa or idly – the recipe is existing; you just need to send out multiple copies.” As a result, she found herself on the receiving end of several raised eyebrows, including from her friends and family. “People would say aiyyo how boring and all that. But for me it was okay. In management, there’s a saying: ‘sitting at all desks makes a good manager’.” At her new desk, Ramadevi found that she was actually pretty good at production. “It was very easy for me,” she giggled, “my workload became a little less and I was able to pursue my personal interests in reading on the side.” On the other hand, Ramadevi was travelling for conferences and getting training in new technologies like never before. “People must have thought ‘what is wrong with her – at this age why learn all this’. They looked down on me a bit,” she said solemnly. “These conferences did not usually have many ladies,” she added. By this time, the number of launches at ISRO had increased. Ramadevi’s challenge was to reduce the time needed to finish the production of each system. She was able to do this, she says, perhaps because she managed people well. Some of the work was outsourced to a small-scale industry led by handicapped women. “ISRO trained the women – they had problems with their legs – to do critical tasks like integration and soldering for the onboard electronics,” she said. Ramadevi used to spend considerable time inspecting these activities and ensuring quality – “you see, one bad joint can cause the whole rocket to fail”. It took about five years for her to streamline these processes. “Production stabilised and I started getting official recognition. That’s when I completely went to the management side.” Ramadevi then returned to VSSC (from the ISRO Inertial Systems Unit) as Deputy Director. This promotion meant a lot to her. 
“Many people who had been with me on the development side did not get such a position,” she said. As Deputy Director, she was in charge of all of VSSC’s programme planning, evaluation, budgeting and reviewing. Ramadevi’s worries about having only two years of service left were put to rest when the Director extended her tenure by a total of three years. This was somewhat unprecedented. “Till then these kinds of extensions were given mostly to men – and those in academics, not to management people like me. That was a nice thing…” D-Day anxieties So what was it like backstage on the day of a rocket launch? “It was very tense, those times. We have to watch as the trajectory goes up, observe each burnout stage… we can’t take it for granted!” she stressed. Ramadevi recalled the shock of witnessing a GSLV (Geosynchronous Satellite Launch Vehicle) launch failure at Sriharikota in 2006. “I was on the ground with some guests when the explosion happened, about 50-56 km above the ground in the upper atmosphere. There were shards that landed close to me. When I looked around, I realised everyone around me had run away. Where to, I don’t know,” she laughed. “Luckily, it didn’t fall on the island but into the sea.” Such events are not new to any space agency. Great efforts are made to investigate the cause of failure. “Right after the heat on the launch pad goes down, inspection begins. Until the cause is found out, we don’t rest. Failure analysis takes a lot of time.” Below is the video of a similar instance in 2010, when a GSLV launch failed. A space for women Growing up in a village called Annamanada in Kerala, Ramadevi was always given freedom by her parents. “I had no inhibitions,” she says. Nevertheless, the statistical odds were against her and other female engineers of the 1960s. In her college, out of around 1,000 students, there were just about 50 women. This increased to 200 at one point but never beyond that. 
Electronics and avionics had high numbers of women, but in the mechanical stream there were just 2 to 4 girls. Ramadevi was surprised to note that these trends persisted decades later when her son was studying mechanical engineering. The gender ratios in mechanical engineering classrooms remain woeful. When she joined ISRO, many of Ramadevi’s male colleagues did not mix much with the women. Didn’t that irritate her? She shrugged, “Those who talked to me, I talked to. Those who didn’t, I didn’t.” Though some reports say that even today only 10% of ISRO’s technical workforce comprises women, things appear to be looking up. There is now a women’s forum that gives employees a platform to voice their issues. This led to the establishment of a creche a few years back. A daycare system is significant progress for engineers who have to work till late evening, confirmed Ramadevi, who did not have this option in her time. Indeed, the scenario for women does seem a lot brighter today. “Now there are engineering colleges just for women. Those days the male domination was severe.”
|
Pills made from baby flesh / Yonhap By Bahk Eun-ji More than 8,500 pills made from baby flesh have been found over the past three years, a main opposition party lawmaker said Tuesday. The pills, allegedly made from the flesh of aborted or stillborn babies, were mostly smuggled from China in the false belief that they are good for health. According to data obtained by Rep. Park Myung-jae of the main opposition Liberty Korea Party, 8,511 capsules were found from 2014 to June this year. In 2014, 6,694 pills were found, followed by 251 in 2015 and 476 last year. Rep. Park said most of the pills were smuggled by international mail. More than 900 pills were smuggled by mail in 2015, but this year they were mainly carried by travelers due to tougher checks. "We have to root out the smuggling of human flesh pills, and make people aware it is a crime against humanity," Park said. "It is a misconception that human flesh enhances stamina; it is actually dangerous to health."
|
Slovenian President Borut Pahor attends a meeting with his Russian counterpart Vladimir Putin at the Kremlin in Moscow, Russia, February 10, 2017. REUTERS/Alexander Zemlianichenko/Pool LJUBLJANA (Reuters) - Slovenia’s President Borut Pahor is the front-runner in presidential elections due this month, according to a newspaper poll published on Monday. The survey of 500 people conducted by the Delo newspaper is the first since the election commission announced the list of nine election candidates last week. The poll predicts that Pahor, a former prime minister, would win 34.8 percent of the vote in the first round, followed by Kamnik city mayor Marjan Sarec, who is running as an independent candidate, with 22.7 percent. The two would then compete in the second round where Pahor would win a majority, according to the survey. Pahor is also running as an independent candidate although he enjoys the support of the Social Democrats, of which he once was the leader. The SD are junior coalition partner to the centre-left Party of Modern Centre. Opposition candidates Ljudmila Novak and Romana Tomc would get 10.6 and 9.9 percent of votes in the first round, while Education Minister Maja Makovec Brencic, the Modern Centre candidate, would win 2 percent of the vote, the poll said. Although the presidency is mostly ceremonial, the president leads the army and nominates several top officials, including the central bank governor. The election is seen as a possible indicator of parties’ support before general elections are held in June or July next year. Pahor was prime minister from 2008 to 2012.
|
I had been pondering adding Needtobreathe as my new artist for 2014. Now I realize there was really no need to. These guys are awesome. Their music, I'd say, is a little Jars of Clay, Switchfoot and definitely Third Day-like in some songs. I can't really get all their albums right now...but I settled on The Heat...this one, Rivers in the Wasteland...and The Outsiders, which I haven't listened to yet but know it'll be good judging from the reviews I've read on that album. Ever since hearing Washed by the Water on K-LOVE, I started getting interested off and on in their music. I'd say this will be a 2014 standout....Multiplied, as one other reviewer put it, is really awesome! Probably one of the best worship songs I've heard in a long time....certainly what I needed after a difficult day, but it would be perfect for good times as well. This group helps me remember a creativity lost in most Christian music. Yes, it's called diversity, and this album has that. I just found out that Third Day will be releasing an album later in the year...but I tell you what, this album will become a benchmark for expectations. I find some of the albums that probably won't get the most radio play end up being my favorite kind of albums...and that's the way this one is. If you can...buy it!
|
Pearl Harbor Attack: Lieutenant Lawrence Ruff Survived the Attack Aboard the USS Nevada By Mark J. Perry Beached and burning after being hit by Japanese bombs and torpedoes, the Nevada would be rebuilt and modernized, serving as a fire-support ship in the invasions of Normandy, Southern France, Iwo Jima, and Okinawa. (National Archives) Lieutenant Lawrence Ruff, USS Nevada‘s communications officer, rose early that Sunday. He had turned in after the ship’s movie the night before, planning to attend church services on the hospital ship Solace. Since his transfer to Nevada, he had lived on board as a ‘geographical bachelor,’ leaving his wife back on the West Coast. They had both decided that life in the islands, while idyllic, was too uncertain and potentially dangerous for a family household. Emerging on deck, Ruff stepped into another day in paradise. High clouds lingered over the Koolau Mountain Range to the east, but the sun had already burned off most of the early morning overcast. Lieutenant Ruff joined Father Drinnan in the boat headed for Solace. Chugging in leisurely fashion across Pearl Harbor, the launch deposited the two officers at Solace‘s accommodation ladder shortly before 7 a.m. Ruff waited in the officers’ lounge while Father Drinnan assisted in the preparation for services. Admiral Husband E. Kimmel, commander in chief, U.S. Pacific Fleet (CINCPAC), had most of his ships in port that Sunday. While his aircraft carriers were at sea delivering planes to some of America’s outlying Pacific islands, he felt it would be prudent to keep his remaining ships under the protective cover of land-based aircraft. Nests of destroyers bobbed together, tethered to mooring buoys about the harbor. 
The larger cruisers and auxiliaries rode alone or occupied the limited berthing space at the naval station. The heart of the fleet, seven battleships, rode at their moorings east of Ford Island. An eighth battleship, Pennsylvania, rested on blocks in dry dock No. 1. While the smaller ships swayed gently in the wind, the broad-beamed, immense battleships were unaffected by the lapping water. In the atmosphere of rising tensions with Japan, Admiral Kimmel wanted to keep his fleet concentrated for any eventuality. For the officers and men, Sunday in port meant holiday routine, with liberty for most of the men and reduced work schedules for those standing watch. As the tropical heat rose and the clouds retreated, December 7, 1941, promised to be an excellent day for relaxation. Nevada occupied berth Fox 8 alone at the northeast end of the line of battleships. At 583 feet long and 29,000 tons, Nevada and her sister ship Oklahoma were the smallest and oldest. Nevertheless, each possessed a powerful main battery of 10 14-inch guns. Twelve 5-inch guns, four 6-pounder anti-aircraft guns and eight .50-caliber machine guns provided anti-aircraft protection. Six Bureau Express oil-fired boilers powered a pair of Parsons turbines generating 25,000 shaft horsepower for a top speed of 20.5 knots. While Lieutenant Ruff waited for services to start, the reveille watch on Nevada polished brass, piped away breakfast and woke the forenoon watch. The assistant quartermaster of the watch woke Ensign Joseph K. Taussig, Jr., the forenoon officer of the deck, at 7 a.m. Taussig was the junior gunnery officer in charge of the starboard anti-aircraft batteries. He did not have to relieve the watch until 7:45 and had ample time to dress and eat breakfast. Ensign Taussig was descended from a proud naval family. His father and namesake had led the first American warships to Europe in World War I. 
Destroyer Squadron 8’s six ships had barely arrived in Ireland following a rough North Atlantic passage when British Vice Adm. Sir Lewis Bayly asked when they would be available. Commander Taussig answered confidently, ‘We are ready now, sir.’ Truly a fine example for the young Taussig to live up to. Taussig relieved the watch promptly at 7:45. His first duty of the day was to execute colors at 8 a.m. A 23-member band and color guard, with proper holiday colors for Sunday, stood ready. Taussig had to precisely follow the lead of the senior officer present afloat, Rear Adm. William R. Furlong on the minesweeper Oglala. At the proper signal, they would raise the national ensign aft and the blue, white-starred jack forward and play the national anthem, simultaneously. Taussig was determined to execute this ceremony in precise military fashion. The rest of the watch was easy in comparison. First call to colors sounded at 7:55. Few on deck noticed the planes buzzing around the harbor. The watch piped colors at 8 a.m., the flags went up and the band played. Only what they thought to be an inconsiderate Army aviator roaring low over Battleship Row marred the ceremony. But this was no ill-timed Army drill. At 7:40 a.m. Japanese naval aircraft, led by Commander Mitsuo Fuchida, approached Kahuku Point, the northernmost tip of the island of Oahu. There, the main force broke into smaller attack groups, each proceeding to its primary target. Fuchida, in a Nakajima B5N torpedo bomber, accompanied the high-level bombers. Nevada was his plane’s target. Torpedo bombers, dive bombers and high-level bombers formed up northwest of Kaena Point at 7:50. Five minutes later, the first bombs began to fall on both ships and Oahu’s shore installations. Midway through the ‘Star-Spangled Banner’ on Nevada, the first bomb exploded on Ford Island’s seaplane ramp. Hard on the heels of the first blast came several more. A torpedo struck USS Arizona, just ahead of Nevada. 
As the B5N torpedo bomber (later given the Allied code name of Kate) pulled up over Nevada, its rear gunner sprayed the fantail, shredding the flag but, amazingly, missing the tight ranks of bandsmen. Through shock, discipline or habit, the band members finished the anthem before rushing to their battle stations. Ships’ klaxons sounded all over the harbor, mixed with the wail of air-raid sirens from the nearby airfields. Smoke from fires and spray from near-misses obscured the sights of gunners bringing their mounts into action. Ensign Taussig rushed through the press of men to his battle station in the starboard anti-aircraft director. From there, he took charge of Nevada‘s defensive fire. The regularly manned fore and aft .50-caliber machine guns chattered, and a single 5-inch gun barked. Taussig plugged his sound-powered phones into the net, linking him with the other anti-aircraft stations. He found many of them already on the line. One 5-inch mount had been manned at the beginning of the raid for its daily systems check. Taussig calmly passed orders while guiding his director from target to target, but the system was inadequate to handle so many attackers. Surprised men scrambled up from below, struggling into their clothes. Shortly after 8 o’clock, most of the guns were manned and firing but lacked good overall coordination. Despite the confusion, Nevada‘s gunners had already claimed a couple of enemy planes shot down, including a torpedo bomber off the port quarter. Marine Private Peyton McDaniel paused to watch a torpedo bear down on the ship. Though he expected it to break the ship in two, Nevada only shuddered and listed somewhat to port. Then a projectile crashed into Taussig’s gun director, passed through his thigh and smashed the ballistics computer. In shock, the ensign felt no pain. His leg was shattered, and his left foot was lodged up under his armpit. 
Taussig commented absently, ‘That’s a hell of a place for a foot to be.’ Ignoring his injury and refusing evacuation, Taussig tried to regain control of the gun mounts. While the guns could still fire in local control, Taussig knew that they would be much more effective in directed mode. Most of the connections between his director and the starboard guns were cut, but the wounded ensign continued to give visual spotting reports over his sound-powered phones. Far above, Commander Fuchida guided his bombers down Battleship Row. Although anti-aircraft fire increased steadily, most of the shells burst well below his planes. The gunfire and lingering high clouds frustrated the attackers, and Fuchida’s bombardier reported that he could not see Nevada. Other planes reported similar difficulties, though some managed to drop their bombs. With resistance still largely ineffective, Fuchida did not want to rush the attacks, so he led his charges in a wide circle over Honolulu to make another run. This took only a few minutes, but on the second pass the northern end of Battleship Row was still obscured, this time by the blaze and thick, oily smoke from Arizona. Despairing of a clear shot at Nevada, Fuchida directed his pilot to try for another ship. Lieutenant Ruff remembered saying to himself, ‘Uh oh, some fool pilot has gone wild,’ as he heard the first explosion from Solace. A short time later, he heard a roar and rushed to the starboard porthole in time to see Arizona erupt in a ball of flame. Leaving Father Drinnan behind, he commandeered one of Solace‘s launches, directing the coxswain back to Nevada. The small boat labored across the smoky harbor, strafed but unhit. Shouting above the din, Ruff guided the coxswain under Nevada‘s stern for protection from low-flying attackers. Moments later, he scrambled up the accommodation ladder to the quarterdeck. Ruff found himself in the midst of a full-blown shooting war. 
Minutes after Arizona had been torpedoed, a speeding Kate launched a torpedo into Nevada, tearing a 45-by-30-foot gash in her bow. The gunners labored to maintain a high volume of fire, but the Japanese aircraft seemed to attack with impunity. Fuses set for too low an altitude caused 5-inch shells to explode below many of the attackers. Lack of coordination reduced overall effectiveness. Ruff saw only a glimpse of this as he headed below to his general quarters station in radio central. On the way he passed Ensign ‘Pops’ Jenkins at his damage control station near the galley, but they exchanged little more than a glance. Ruff trotted down the passageway, ducking through watertight doors. He reasoned that with Captain Francis Scanland and the executive officer ashore, Lt. Cmdr. Francis Thomas, the command duty officer, would need all the help he could get. Though unsure of Thomas’ location, Ruff realized that radio central would not play much of a role under the current circumstances. He changed direction and headed up to the navigation bridge. There, higher and more exposed, Ruff could feel the intense heat and smoke from Arizona. Upon reaching the bridge, Ruff found Quartermaster Chief Robert Sedberry on station. When the attack began, Chief Sedberry, on his own initiative, had ordered engineering to prepare to get underway. Since Nevada always kept one boiler steaming, she could sortie when most of the other large ships were resting at ‘cold iron’ and could not. Ruff joined Sedberry in preparing the bridge, laying out charts and identifying navigable landmarks for a run to sea. Admiral Furlong had already signaled the fleet to sortie as soon as possible. None of the larger ships had yet attempted to do so. Establishing communications with Commander Thomas in Nevada‘s internal control station, deep in the bowels of the ship, Ruff detailed the conditions topside. He filled Thomas in on the sortie signal and his readiness on the bridge. 
Thomas had his hands full below, counterflooding to correct Nevada’s port list, dispatching firefighting teams around the ship and supervising engineering’s preparations to get underway. Ruff suggested that Thomas handle things belowdecks while he handled topside. Battling damage and a shortage of manpower, Thomas readily agreed. Time was running out for a sortie. A sheet of flames from Arizona rode a slick of fuel oil toward Nevada’s bow. Despite the spirited defense organized by Taussig, assisted by Ensign T.H. Taylor in the port director, two or three bombs struck Nevada around 8:25. Inside the bridge, Lieutenant Ruff heard a faint voice calling, ‘Let me in, let me in.’ Ruff opened the hatch leading to the bridge wing but found no one. Returning puzzled, he heard the voice again. After casting about for the location of the voice, Ruff and Sedberry traced it to the deck. They lifted the deck gratings and opened the access hatch–and found Thomas, who had climbed the 80-foot access trunk from his control station. Mounting damage had convinced him that Nevada must attempt the sortie soon or be pounded under the water. Thomas had stabilized the ship’s damage to the best extent possible, so it was now or never. Ruff and Sedberry quickly briefed him, and within 15 minutes Nevada pulled away from Fox 8. By sheer luck, Thomas timed his departure perfectly. Between 8:25 and 8:40 there was a lull between the first and second strikes. With steam to the engines and the steering tested, Thomas directed that Nevada get underway. Chief Boatswain Edwin Hill led a few sailors to the moorings ashore to cast off the lines. Although hindered by Arizona’s spreading fire, strafing planes and spent anti-aircraft shells falling around them, Chief Hill and his party quickly freed Nevada. They then dove into the treacherous waters and swam back to the ship. 
Thomas, Ruff and Sedberry now began the difficult maneuvers involved in getting the 29,000-ton battleship out of Pearl Harbor unassisted. As Ruff remembered, it usually took two hours to build steam in all boilers, and required several tugs, a civilian harbor pilot, the navigator and the captain to get underway. The three of them would attempt the channel passage alone, under attack, their ship damaged by both flooding and fires. Ruff found the prospect daunting. With Thomas conning, Ruff navigating and Sedberry manning the helm, Nevada eased back from her berth. Ruff aligned his landmarks on Ford Island and fed Thomas positions and recommended courses to steer. As Nevada headed fair into the South Channel, Ruff gazed in shock at the destruction of Battleship Row. Arizona blazed fiercely, forcing Nevada’s sailors manning the starboard anti-aircraft batteries to shield the shells from the heat with their bodies. The deck crew still managed to throw a line to three sailors in the water. Wet and oily, they promptly joined the crew of the nearest 5-inch battery. Several of Ruff’s U.S. Naval Academy classmates had been serving on Arizona, and he could only wonder if any had survived her destruction. West Virginia came into sight next. She had taken several torpedo hits, and she was settling into the mud on an even keel, thanks to rapid counterflooding. Oklahoma had turned turtle, trapping many sailors inside. Tennessee and Maryland were moored inboard and had escaped torpedo damage. Still, smoke rose from both of them. Finally, Nevada steamed past California, the flagship of the battle force. Flames surrounded her and she, too, was settling on an even keel. Nevada cleared the end of Battleship Row just before 9 a.m. Ahead lay the dredge Turbine and its pipeline attached to Ford Island. Maneuvering through the narrow space between the dredge and 1010 Dock would be challenging on a normal day. 
Now time was running out; the second wave of Japanese planes began to arrive in force. Attacks on Nevada intensified, and Chief Sedberry did ‘some real twisting and turning’ to make Nevada a difficult target and avoid the dredge. Planes destined for Pennsylvania dove on Nevada instead. If they could sink her, they could bottle up the South Channel or, better yet, the main channel off Hospital Point, for months. Nevada’s gun crews threw up the stiffest barrage they could, but Aichi D3A1 dive bombers scored numerous hits and near-misses. Casualties mounted in the gun crews. Flying splinters raked the decks, and fires set off ready ammunition. Boatswain’s Mate A. Solar, who had taken charge of his mount until its officers arrived, fell to shrapnel. Seaman 1st Class W. F. Neundorf, gun captain of No. 6 gun, also died at his post. Most of the bombs struck forward, making a shambles of the forecastle. Ruff, Thomas and Sedberry hung on. ‘Their bombs jolted all Hell out of the ship,’ Ruff remembered. ‘My legs were literally black and blue from being knocked around by the explosions.’ Still, the officers on the bridge hoped that they might make it to open water. Then, a signal from Vice Adm. W.S. Pye, the battle force commander, ordered Nevada not to exit the harbor because of reported enemy submarines. Committed to their present course and continuing to absorb heavy punishment, Thomas and Ruff decided to nose her into the mud off Hospital Point so that she would not be sunk in the channel. Hits to the forecastle had wrecked the anchor windlass and killed many in the deck crew, including Chief Hill, who was blown over the side. Once aground, securing the ship there might prove impossible. Fortunately, Ruff could still talk to the boatswain’s mate standing by the stern anchor on the fantail. Fires raged around the conning tower, threatening to cut him off, so Ruff relayed the plan as quickly as possible. 
Heedless of the danger on the open fantail, the young sailor promised to wait for Ruff to wave his hat, the signal to let go the anchor. Passing out of the channel between buoy No. 24 and floating dry dock YFD-2, Ruff backed the engines full, then hastened to the bridge wing, waving his hat out over the side. With a clatter and a cloud of rust, the stern anchor plunged into the water and took hold. At 9:10, Nevada came to rest at Hospital Point. Thomas then turned his full attention to damage control, while Ruff headed aft to assess conditions topside. Five minutes later, he met Captain Scanland boarding at the quarterdeck. The captain had left his home in Honolulu as the first bombs fell, fighting his way through the chaos in the streets to commandeer a launch and chase down his command. With the second-wave attacks nearly spent, firefighting and flood control became paramount. Tugboats sent by Admiral Furlong arrived alongside, bringing their hoses into action against the fires that raged from stem to almost amidships. For a time, only the tugs could fight the fires because most of Nevada’s fire mains had been ruptured. Thomas directed his damage-control parties to splice or patch the critical ones forward. After directing Ruff to report Nevada’s status to Admiral Kimmel, Scanland headed forward to find Thomas, and Ruff boarded the launch that had brought Scanland. As the coxswain picked his way through smoking debris, Ruff saw Arizona, still blazing as fiercely as when they had passed her half an hour before. California also burned steadily. Shaw, the destroyer perched in YFD-2, added to the pall. Her forward magazine had exploded shortly after Nevada had grounded. Finally, great columns of smoke billowed skyward from the major airfields surrounding Pearl. Even from lowly sea level, the destruction appeared complete. Back on Nevada, as the attacks ceased, the gun crews joined in the battle to save the ship. 
Sweating, smoke-grimed sailors gradually gained the upper hand over the fires. Individually, officers and sailors secured their immediate areas. Ensign Taylor climbed down from his gun director to lead the firefighting on the port gun deck. Hindered by shattered eardrums, Taylor directed hose teams to spray red-hot ready ammunition boxes before they exploded. Escape proved considerably more difficult for Taussig. His men finally convinced him to relinquish his post, where he had fought on despite his serious wounds. Now fires licked up and around the upper works, blocking the ladders to the starboard director. Eager sailors rigged a line to lower Taussig’s stretcher directly to the deck. The young ensign remained conscious and coherent as pharmacist’s mates worked to stabilize his injuries. With no bow anchors to hold her fast, Nevada might still slide back and block the South Channel. At 10:35, with the damage situation under control, Scanland prepared to move Nevada to a safer haven well clear of the shipping channels. Two tugs pushed her stern around until her bow slid free, then accompanied her across the channel to Waipio Point, where she grounded herself stern first at 10:45. Nevada rested there until February 1942, when she was floated for repairs. She later returned to service. Meanwhile, Ruff had arrived at CINCPAC headquarters to find a somber staff sorting out the details of the attack and grasping for some means of retaliation. Admiral Kimmel questioned Ruff personally, his calm demeanor barely masking the anguish he obviously felt. Ruff had hardly returned to Nevada when Scanland sent him back to report the grim initial damage assessment. At least one torpedo and five bombs had hit Nevada, mostly forward. Numerous near-misses had added to the hull damage. Engineering was flooded, salting the boilers and much of the steam piping. Though she had sortied, Nevada was now neither battle-worthy nor seaworthy. 
Some stubborn fires burned on and would not be completely extinguished until 6:30 p.m. Ruff made several more trips between headquarters and Nevada. He acted as Captain Scanland’s point man ashore, organizing necessary services for the ship and crew. Most important, the crew needed shelter and sustenance. The wounded received top priority and were evacuated to Solace or the base hospital. Ensign Taussig was on one of the first boats. He would lose his left leg and spend the remainder of the war in the hospital. With the ship in such bad shape, Ruff arranged shore billeting for the crew in the base’s open-air theater. Captain Scanland left a skeleton crew aboard to serve as a reflash watch and to perform critical repairs to keep the ship defensible. Thomas remained aboard, directing much of that work. In fact, Scanland’s after-action report offered high praise for Thomas, a naval reservist, not only for his skillful handling of the ship during the attack but also for his dogged repair efforts. Two days after the attack, Thomas was on the verge of collapse from almost continuous work with no sleep. As darkness fell, Lieutenant Ruff bedded down with the crew at the theater. Exhausted, he could only gaze into the night sky, pondering the few short hours that had shattered this tropical paradise. Friends had died, Nevada lay aground, and the war he and his wife had feared was upon them with stormlike fury. Reeking, oily smoke hung over Pearl, and the glow of fires was still visible all around. In the darkness, the desperate day finally ended. Author Mark J. Perry has conducted extensive research on the Pearl Harbor attack and its aftermath. For further reading, try: At Dawn We Slept, by Gordon W. Prange; and Day of Infamy, by Walter Lord. This article originally appeared in the January ’98 issue of World War II.
According to Tokyo Electric Power Company (Tepco), the radiation level in the containment vessel of reactor 2 at the damaged Fukushima No. 1 nuclear plant has reached a maximum of 530 sieverts per hour, the highest recorded level since the triple core meltdown in March 2011. The reading means a person could die from even brief exposure, highlighting the difficulties ahead as the government and Tepco grope their way toward dismantling all three reactors that melted down in the March 2011 nuclear disaster, the Kyodo News reported. Tepco announced the finding on Thursday, February 2, 2017, adding that, based on an image analysis, a 1 m2 (10 ft2) hole has been found in a metal grating beneath the reactor pressure vessel. The iron scaffolding has a melting point of 1 500 °C (2 732 °F), the company said; there is a possibility that fuel debris fell onto the grating and burned the hole through it. Such fuel debris has been discovered on equipment at the bottom of the pressure vessel just above the hole. The presence of dangerously high radiation will complicate efforts to safely dismantle the plant, but the radiation is not leaking outside the reactor, Tepco said. In a news conference, the Minister of Economy, Trade and Industry said that confirming the conditions inside the reactor is the first step toward decommissioning: “While difficult tasks and unexpected matters may arise, we will mobilize all of Japan’s technological capabilities to steadily implement decommissioning work and rebuild Fukushima.” The searing radiation level, described by some experts as “unimaginable,” far exceeds the previous high of 73 sieverts per hour at the reactor, according to The Japan Times. 
The National Institute of Radiological Sciences said medical professionals have never considered dealing with this level of radiation in their work. A dose of 4 sieverts would kill 1 in 2 people, while 1 sievert could lead to infertility, hair loss and cataracts. A dose of about 8 sieverts is considered incurable and fatal. Tepco said it calculated the figure of 530 sieverts per hour by analyzing the electronic noise in the camera images caused by the radiation. This estimation method has a margin of error of +/- 30%, it said. Featured image credit: TEPCO via KYODO
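Putting those numbers together: at a constant 530 sieverts per hour, the dose thresholds cited above would be reached in well under a minute. A rough back-of-the-envelope sketch (assuming simple linear dose accumulation, which ignores the complexities of real radiation biology):

```python
DOSE_RATE_SV_PER_H = 530  # Tepco's estimated reading inside reactor 2

def seconds_to_dose(dose_sv: float, rate_sv_per_h: float = DOSE_RATE_SV_PER_H) -> float:
    """Seconds of exposure needed to accumulate dose_sv at a constant dose rate."""
    return dose_sv / rate_sv_per_h * 3600

print(f"1 Sv (hair loss, cataracts): {seconds_to_dose(1):.0f} s")  # ~7 s
print(f"4 Sv (lethal to ~1 in 2):    {seconds_to_dose(4):.0f} s")  # ~27 s
print(f"8 Sv (considered fatal):     {seconds_to_dose(8):.0f} s")  # ~54 s

# Tepco's +/- 30% margin of error puts the rate roughly between 371 and 689 Sv/h.
```

Even at the low end of Tepco's stated error margin (about 371 Sv/h), a fatal 8 Sv dose would accumulate in under 80 seconds.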
Mr. Page and Mr. Brin have largely avoided public events, with Mr. Page having a medical ailment that has made public speaking difficult and Mr. Brin focusing on the company’s more experimental projects. Mr. Pichai has tended to speak mostly about products and services, instead of policy. Through a Google spokesman, Mr. Schmidt declined to comment. Mr. Schmidt has been marginalized over time at Alphabet through a combination of its changing leadership, the shifting political environment in the United States, and his own personal gaffes, according to people familiar with the company, who spoke on the condition of anonymity because they were not authorized to speak publicly. Since 2011, Mr. Schmidt has been a go-between for the company in Washington — some people internally referred to his role as Google’s secretary of state. During President Barack Obama’s administration, Mr. Schmidt, who has supported many Democratic politicians, prominently represented Google on policy matters. He also gave money and technical assistance to Hillary Clinton’s campaign team during the 2016 presidential race. But since President Trump came to office, Mr. Schmidt’s standing has changed. He has been eclipsed in Washington by others at Google, including Susan Molinari, a former Republican Congresswoman from New York, said some of the people familiar with the company. Google also has new Washington staff members such as Max Pappas, a longtime political operative who has a relationship with Charles and David Koch, the billionaire brothers who support conservative causes. In the meantime, Google is under fire, along with other tech giants, as legislators seek to deal with the perceived monopolies these companies have. In a time of heightened scrutiny on workplace behavior and sexual harassment, Mr. Schmidt’s personal life has also attracted attention. While he is married, he has brought a series of girlfriends to corporate events over the years. Though Mr. 
Schmidt’s relationships were outside the office, the fact that they were carried out publicly and that the women attended professional events with him set the tone for other executives, several former Google executives said. Some gaffes by Mr. Schmidt over the years have also received attention. In a 2009 interview on CNBC, for instance, Mr. Schmidt said something about Google users’ concerns about privacy that still haunts the company today: “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” Google has also had to tamp down reports that Mr. Schmidt sought to remove personal information, including about political donations, from the search engine, which he has denied. In August, Mr. Schmidt thrust Google into a negative spotlight when Barry C. Lynn, a scholar at the New America Foundation, claimed that the executive had forced him out for applauding the European Union’s decision to levy a record $2.7 billion fine against the company. Google has denied the claim.
Democrats are angry with a recent shakeup of the University of Louisville (U of L) Board of Trustees that strips away the board’s Democratic majority and increases the diversity of its members. Kentucky Governor Matt Bevin (R) announced an executive order on June 17 to restructure the U of L Board in the wake of a number of scandals, including allegations that the men’s basketball team had provided strippers and prostitutes to recruits. Governor Bevin’s order wiped the Board of its members and decreased the number of governor-appointed Board members from 17 to 10. On June 29, Gov. Bevin named the ten new appointees, according to WDRB. While the former Board was almost entirely white and Democrat, the new Board has a near-equal split of political affiliations and is much more ethnically diverse. Gov. Bevin’s appointed Board consists of four registered Democrats, four Republicans, and two independents. Four Board members are non-white. State laws suggest the pre-Bevin Board may have actually been illegal. Not only is the Board required to have equal representation of both political parties, but at least three of the 17 appointed members have to be racial minorities. The former Board only had one racial minority, which led to Gov. Bevin settling a lawsuit on behalf of his predecessor, former Governor Steve Beshear (D). Even though Bevin’s Board is more politically diverse on the surface, it still leans Democrat in terms of political donations. Members have donated $18,000 to Beshear’s campaigns for governor, and only $3,000 to Bevin’s 2015 campaign. Nonetheless, Democrats have criticized Gov. Bevin’s executive orders, alleging they have no legal standing because they exceeded the authority of his office. 
Attorney General Andy Beshear, Steve Beshear’s son, has filed a legal challenge to Bevin's reorganization of the Board. "Our legal challenge is not about who will or will not serve on a Board of Trustees," Beshear claimed Wednesday. “It is to prevent Gov. Matt Bevin from asserting 'absolute authority' to control the Board and the university by simply dissolving the Board any time he disagrees with it.” Bevin, however, contends that he does have the legal standing to change the U of L Board. "I have absolute authority—both constitutionally and legislatively, statutorily—to disband any Board in this state," Bevin asserted on Tuesday. "It has been done time, and time, and time, and time, and time again by every governor that has ever preceded me." Kentucky State Senator Morgan McGarvey (D) noted that the Board members could not be fired without cause, and that the Board is required to have 17 appointed members. "I applaud the reform efforts of the governor, but the statutes say you can only remove these members for cause," McGarvey said. "I do believe there has been some dysfunction on the board, but you don't throw out the good with the bad." While the Board faces legal challenges, Bevin feels his move is the best way for the University of Louisville to get a fresh start after scandal and dysfunction. Bevin says the Board members "embody the professional experience, leadership skills and core values needed to efficiently and effectively oversee, govern, and manage the affairs of the University...I am confident that they will build upon the University's many successes by governing with the utmost integrity and transparency." In addition to the Board reorganization, James Ramsey, the current university president, plans to resign no later than May.
Minor Victories is the supergroup of Slowdive's Rachel Goswell, Editors' Justin Lockey, Mogwai's Stuart Braithwaite, and James Lockey of Hand Held Cine Club. Their debut album is in the works, and today, they've previewed it with an excellent short film, Minor Victories - Film One, featuring what appear to be snippets from two new tracks. Watch it below via Pitchfork.tv. James Lockey wrote the following description: This is Minor Victories - film one. You could call it a teaser trailer? Album trailer? album trailer teaser? Stripteaser trailer teaser? Super internet trailer of tease based on an album? Long player teaser moviefilm? The prequel to the sequel to the adaptation based on the remake of the original teaser...trailer...? It's not really any of the above... Just the First hello.
It’s interesting to consider what makes one framework or library more popular than another. While I don’t have a definitive answer to that question, I do have a bit of data that provides a snapshot of JavaScript framework usage over the past year. It’s worth asking if the popularity of a framework or library matters. For me, the answer is a qualified yes. It does matter, but how much it matters depends on the project, the team and the organization making the decision. It’s one factor among many, with popular frameworks and libraries providing big communities that can help speed development, among other benefits. Now, about the numbers and the graph. They’re from npm-stat and you can go there and play around with the chart generator yourself. The graph shows monthly downloads over the past year, and rather than take these numbers as absolute truth, I think it’s best to use them for relative comparison. For reference, see what npm has to say about their download counts. Angular, the black line, was obviously being used heavily prior to late March of 2017. It had a namespace change, which accounts for it showing up where it does. Before we talk about what these data might mean, let’s pull a few numbers out to help us see things a bit more clearly. Note: the data for Angular begins in April 2017, the first full month of its availability.

Framework     Nov. 2016    Oct. 2017    Percent change
React         2,564,601    7,040,410    174.5%
Angular       1,289,953    2,168,899    68.1%
Backbone        663,610      837,372    31.6%
Angular.js      616,135    1,081,796    75.6%
Vue               6,231      874,424    13,933.4%

The first thing that sticks out to me is that React absolutely dominates. It’s by far the most downloaded according to these numbers, and if you work in this space, your daily experience probably backs that up. We also see a spike towards the end, which is no doubt React 16 making a big impact. 
Another thing that I see is that all of the frameworks — even the “old” ones — have increasing monthly downloads. That really surprised me. Angular.js and Backbone have surprising endurance. Also, check out the explosive growth of Vue! This is the year that Vue arrived as one of the top frameworks. Even so, its current usage is similar to the venerable Backbone and far behind both React and Angular. Interesting stuff. It will be fun to check these numbers in a few months to see how things have developed.
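As a quick sanity check on those growth figures, the percent-change column is just (new − old) / old × 100. A minimal sketch using the React and Vue rows (the download counts are the npm-stat figures quoted earlier):

```python
def percent_change(old: int, new: int) -> float:
    """Relative growth between two monthly download counts, in percent."""
    return (new - old) / old * 100

# Monthly npm downloads, Nov. 2016 vs. Oct. 2017 (from the npm-stat data)
react_growth = percent_change(2_564_601, 7_040_410)
vue_growth = percent_change(6_231, 874_424)

print(f"React: {react_growth:.1f}%")  # -> React: 174.5%
print(f"Vue:   {vue_growth:.1f}%")    # -> Vue:   13933.4%
```

The same formula applied to Vue's tiny Nov. 2016 baseline is why its growth figure looks so extreme: percent change is very sensitive to a small starting value.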
Spiritual Journey through Alchemy of Love Also see - Using Poetry as Spiritual Tool We intuitively use Poetry to Heal or express Joy! The Spiritual Journey is the most amazing journey one could ever embark on. It never ends and it is always successful, because an invisible driver (our soul) directs the journey. Our commitment to grow as spiritual beings is always beautifully rewarded. Spiritual Journey: Understand Your Body, Mind and Soul Our body is a chariot, the horses are thoughts and emotions, and the charioteer is the soul. The chariot, horses and charioteer (body, mind, and soul) all play a role in the speed of our movement towards enlightenment and the state of happiness we are able to achieve. If the horses (thoughts and emotions) lead your self development journey, the chariot (body) might never move, or it might move too fast or take the opposite direction. There are two mistakes we could make: if the horses (thoughts and emotions) are strained or abused, they might die of exhaustion before reaching the destination; or if the horses (thoughts and emotions) are let loose, they will stop to do whatever they please, they will fight, and they will not care about the charioteer (the soul). And if the chariot (our body) is not in good condition, it will break during the self development journey. If the chariot is not oiled properly, not taken care of, it will get damaged or fall apart as soon as there is an obstacle, a stone in the way, or a bit of rough road. The body has to be respected. On the spiritual path we become aware of our imperfections, our thoughts and feelings, and we start our self development journey of discoveries. We purify ourselves, removing biological, emotional and mental blockages, to become healthier and happier. This process of spiritual discoveries requires commitment and strong willpower. 
Spiritual Work Poem by Nuit Taking steps in trust, child-full playfulness and joyful peace Chanting, drumming, humming, dancing at the most Sacred Palace Resonating with the Song of Life We touched the Spiral where Forest meets Rivers Ascending the deepest caves where Stalactites and Stalagmites form Castles Of Eternal Lovers meditating within the Earth’s Belly Mirroring what is Below is Above We were 12 Honourable Guests witnessing the Sacred Marriage Mother of Form unite with Father of Consciousness Sages, Saints give our Initiatives Fruitfulness Angels protect our Fire of Life Ancestors guard our Wisdom Scientists guide us towards the Land of Plenty Holding each other’s space, souls’ expression, laughter We honoured the 4 elements letting the Equinox Flow through the Spiral of our DNAs Worshiping the Central Force that unites us with the Universe Becoming a Vessel of Love that is Life We merged with Tao Stay Inspired! We have created a set of amazing transformation tools and structured self development techniques to help you in your self development journey. To learn more about our Self-Development Course please check our: This is an exciting journey full of self-development tools and techniques Join us in this journey of Alchemical discoveries Mindful Being towards Mindful Living available through Amazon Kindle Intro to the Alchemy of Love Courses If you would like to experiment, and experience all of the Self Development Life Coaching Tools and Techniques: we will be happy to help you examine your mind (thinking and feeling), your goals and your dreams in our 12 WEEKS SELF DEVELOPMENT COURSE.
Winnipeg has a tenth candidate in the running to be mayor. Buck Duchene is a display advertising specialist and a landlord in Winnipeg. He said he is running because Winnipeg is stuck dealing with the same issues over and over, such as infrastructure or snow removal. He is opposed to taxes being hiked to fund what he says should be sustainable expenses, such as rapid transportation. Duchene is bluntly honest about what kind of campaign he plans to run. "I absolutely do not have the funding compared to some people," he said. "I know there's a lot of business owners they've got dressed up vehicles, [they're] running commercials and stuff like that. My funding is extremely low. A lot of people get caught up with the flash and pizzazz. My entire campaign is going to lack a lot of pizzazz. But it'll have a lot of form and function to it." He said he does have one thing on his side. "I do have a moderate bonus and I know it's more of a comical one: my first name being Buck," he said, laughing. "It's an easy name to remember. I suspect that my chances on a grand scale of things I mean are one tenth of whatever's out there." But he does hope his ideas will be part of the debate on how the city should be run.
“Beavis and Butt-head” — the show that celebrated the slacker way of life and helped make MTV into a network that did more than just play music videos — is coming back. The move to resurrect the hugely popular 1990s animated anti-heroes has been rumored for several days. But yesterday, sources at MTV confirmed that a new batch of “Beavis and Butt-head” episodes is in the works. The new series would keep Beavis and Butt-head in their perpetual high-school state, but it would be updated so that the pals — who obsessively watch music videos on a battered TV set — could lob their snarky comments at more current targets like Lady Gaga. The show’s minimalist animated style is also expected to remain intact. The return of “Beavis and Butt-head” will be a backdoor means for MTV to return to showing music videos — something the network was founded upon but abandoned in the last decade to make room for popular reality shows like “Laguna Beach,” “The Hills” and “Jersey Shore.” “Beavis and Butt-head,” which premiered in 1993, began as an animated short called “Frog Baseball,” which aired on MTV’s “Liquid Television.” The basic plotline revolved around two shorts-wearing, spectacularly immature teenage pals whose banter was delivered against the backbeat of their constant idiotic laughter. Series creator Mike Judge, who’s also creating the new episodes, voiced both characters. The guys worked at a fast-food joint and were always out to “score” with “chicks” when they weren’t sitting on a ratty couch watching music videos. Beavis, the blond half, usually wore a Metallica T-shirt and would morph into his crazed, gibberish-spewing alter-ego, “Cornholio,” when he ingested too much sugar. Butt-head was the “cooler” of the two. 
He usually wore an AC/DC T-shirt and often picked on Beavis in much the same way Moe would slap around Curly, Larry and Shemp on “The Three Stooges.” The duo was so successful they were spun off into a 1996 big-screen movie, “Beavis and Butt-head Do America” and a marketing juggernaut of T-shirts and character trinkets. A recurring character on the show, high-school classmate Daria (whom they called “Diarrhea”), eventually got her own MTV series. After MTV canceled “Beavis and Butt-head” in 1997, Judge went on to create “King of the Hill” for Fox. He also wrote the cult-classic movie comedy “Office Space” and last year’s big-screen movie “Extract.” MTV officials had no comment yesterday. Judge is “not commenting at this time,” his publicist said.
|
Hey everybody, It is with great reluctance that I have to announce Ranked Warzones are not going live with Game Update 1.2: Legacy. We had a fantastic run of testing on the Public Test Server and I can’t begin to thank the community enough for their tireless advocacy for this feature and the time people took to file bugs and voice their concerns. After careful consideration, it is clear that to make Ranked Warzones the feature we all want it to be is going to take a bit more time. I apologize for any disappointment this may cause and ask for your patience as we work to make sure the Ranked Warzone Preseason launch is polished and fun. With Legacy, the new Warzone, Operation, and Flashpoint, as well as many other amazing features of Game Update 1.2 ready to go, we're very excited for tomorrow's additions. In the future we'll be rolling the Ranked Warzone Preseason out in phases, listening carefully to player feedback and making adjustments as we go. The first phase will be full team, eight-player queuing only and from there we'll look at next steps as Preseason progresses. In the meantime we have provided an alternate purchase route for War Hero gear so you can start building your set while figuring out your teams. Daniel Erickson Lead Game Designer
|
The reality star says the talk show host didn't do much to help. Caitlyn Jenner is blaming Ellen DeGeneres for alienating her from the LGBTQ community. Caitlyn, 67, appeared on The Ellen DeGeneres Show in 2015, telling the talk show host that she really still hadn't accepted gay marriage. Considering the circumstances, Ellen later said she found Caitlyn’s opinion “confusing,” during an interview with Howard Stern. Now, in her new memoir, “The Secrets of My Life,” Caitlyn explains that Ellen, 59, asked in a “friendly voice” to discuss how her views on marriage equality had progressed over the years. “I am for it. I did not initially understand why marriage was so important, influenced no doubt by my own personal experience. Now I do, and it's a wonderful thing to see,” Caitlyn said at the time. The I Am Cait star upset Ellen when she called herself a “traditionalist” who never used to approve of same-sex marriage. “I have to admit that I remember 15 years ago, 20 years ago, whenever it was that the whole gay marriage issue came up, I was not for it. I am a traditionalist. I mean, I'm older than most people in the audience. I like tradition and it's always been between a man and a woman and I'm thinking I don't quite get it,” Caitlyn said. But she writes in her book that she was hurt by Ellen. “This discussion further alienated me from members of the LGBTQ community. Ellen's appearance on The Howard Stern Show, where in my mind she even more emphatically took what I said out of context, made it go viral.”
|
Members of The Church of Jesus Christ of Latter-day Saints unequivocally affirm themselves to be Christians. They worship God the Eternal Father in the name of Jesus Christ. When asked what the Latter-day Saints believe, Joseph Smith put Christ at the center: “The fundamental principles of our religion is the testimony of the apostles and prophets concerning Jesus Christ, ‘that he died, was buried, and rose again the third day, and ascended up into heaven;’ and all other things are only appendages to these, which pertain to our religion.”1 The modern-day Quorum of the Twelve Apostles reaffirmed that testimony when they proclaimed, “Jesus is the Living Christ, the immortal Son of God. … His way is the path that leads to happiness in this life and eternal life in the world to come.”2 In recent decades, however, some have claimed that The Church of Jesus Christ of Latter-day Saints is not a Christian church. The most oft-used reasons are the following: Latter-day Saints do not accept the creeds, confessions, and formulations of post–New Testament Christianity. The Church of Jesus Christ of Latter-day Saints does not descend through the historical line of traditional Christianity. That is, Latter-day Saints are not Roman Catholic, Eastern Orthodox, or Protestant. Latter-day Saints do not believe scripture consists of the Holy Bible alone but have an expanded canon of scripture that includes the Book of Mormon, the Doctrine and Covenants, and the Pearl of Great Price. Each of these is examined below. Latter-day Saints Do Not Accept the Creeds of Post–New Testament Christianity Scholars have long acknowledged that the view of God held by the earliest Christians changed dramatically over the course of centuries. Early Christian views of God were more personal, more anthropomorphic, and less abstract than those that emerged later from the creeds written over the next several hundred years.
The key ideological shift that began in the second century A.D., after the loss of apostolic authority, resulted from a conceptual merger of Christian doctrine with Greek philosophy.3 Latter-day Saints believe the melding of early Christian theology with Greek philosophy was a grave error. Chief among the doctrines lost in this process was the nature of the Godhead. The true nature of God the Father, His Son, Jesus Christ, and the Holy Ghost was restored through the Prophet Joseph Smith. As a consequence, Latter-day Saints hold that God the Father is an embodied being, a belief consistent with the attributes ascribed to God by many early Christians.4 This Latter-day Saint belief differs from the post–New Testament creeds. Whatever the doctrinal differences that exist between the Latter-day Saints and members of other Christian religions, the roles Latter-day Saints ascribe to members of the Godhead largely correspond with the views of others in the Christian world. Latter-day Saints believe that God is omnipotent, omniscient, and all-loving, and they pray to Him in the name of Jesus Christ. They acknowledge the Father as the ultimate object of their worship, the Son as Lord and Redeemer, and the Holy Spirit as the messenger and revealer of the Father and the Son. In short, Latter-day Saints do not accept the post–New Testament creeds yet rely deeply on each member of the Godhead in their daily religious devotion and worship, as did the early Christians. Latter-day Saints Believe in a Restored Christianity Another premise used in arguing that Latter-day Saints are not Christians is that The Church of Jesus Christ of Latter-day Saints does not descend from the traditional line of today’s Christian churches: Latter-day Saints are not Catholic, Eastern Orthodox, or Protestant. Latter-day Saints believe that by the ministering of angels to Joseph Smith, priesthood authority to act in God's name was returned or brought back to earth.
This is the “restored,” not a “reformed,” church of Jesus Christ. The Latter-day Saint belief in a restored Christianity helps explain why so many Latter-day Saints, from the 1830s to the present, have converted from other Christian denominations. These converts did not, and do not, perceive themselves as leaving the Christian fold; they are simply grateful to learn about and become part of the restored Church of Jesus Christ, which they believe offers the fulness of the Lord’s gospel, a more complete and rich Christian church—spiritually, organizationally, and doctrinally. Members of creedal churches often mistakenly assume that all Christians have always agreed and must agree on a historically static, monolithic collection of beliefs. As many scholars have acknowledged, however, Christians have vigorously disagreed about virtually every issue of theology and practice through the centuries, leading to the creation of a multitude of Christian denominations.5 Although the doctrine of The Church of Jesus Christ of Latter-day Saints differs from that of the many creedal Christian churches, it is consistent with early Christianity. One who sincerely loves, worships, and follows Christ should be free to claim his or her understanding of the doctrine according to the dictates of his or her conscience without being branded as non-Christian. Latter-day Saints Believe in an Open Canon A third justification argued to label Latter-day Saints as non-Christian has to do with their belief in an open scriptural canon. For those making this argument, to be a Christian means to assent to the principle of sola scriptura, or the self-sufficiency of the Bible. But to claim that the Bible is the sole and final word of God—more specifically, the final written word of God—is to claim more for the Bible than it claims for itself. 
Nowhere does the Bible proclaim that all revelations from God would be gathered into a single volume to be forever closed and that no further scriptural revelation could be received.6 Moreover, not all Christian churches are certain that Christianity must be defined by commitment to a closed canon.7 In truth, the argument for exclusion by closed canon appears to be used selectively to exclude the Latter-day Saints from being called Christian. No branch of Christianity limits itself entirely to the biblical text in making doctrinal decisions and in applying biblical principles. Roman Catholics, for example, turn to church tradition and the magisterium (meaning teachers, including popes and councils) for answers. Protestants, particularly evangelicals, turn to linguists and scripture scholars for their answers, as well as to post–New Testament church councils and creeds. For many Christians, these councils and creeds are every bit as canonical as the Bible itself. To establish doctrine and to understand the biblical text, Latter-day Saints turn to living prophets and to additional books of scripture—the Book of Mormon, Doctrine and Covenants, and Pearl of Great Price. Together with the Old and New Testaments, the Book of Mormon supports an unequivocal testimony of Jesus Christ. One passage says that the Book of Mormon “shall establish the truth” of the Bible “and shall make known to all kindreds, tongues, and people, that the Lamb of God is the Son of the Eternal Father, and the Savior of the world; and that all men must come unto him, or they cannot be saved.”8 In its more than six thousand verses, the Book of Mormon refers to Jesus Christ almost four thousand times and by over one hundred different names: “Jehovah,” “Immanuel,” “Holy Messiah,” “Lamb of God,” “Redeemer of Israel,” and so on.9 The Book of Mormon is indeed “Another Testament of Jesus Christ,” as its title page proclaims. 
Conclusion Converts across the world continue to join The Church of Jesus Christ of Latter-day Saints in part because of its doctrinal and spiritual distinctiveness. That distinctiveness flows from the knowledge restored to this earth, together with the power of the Holy Ghost present in the Church because of restored priesthood authority, keys, ordinances, and the fulness of the gospel of Jesus Christ. The fruits of the restored gospel are evident in the lives of its faithful members. While members of The Church of Jesus Christ of Latter-day Saints have no desire to compromise the distinctiveness of the restored Church of Jesus Christ, they wish to work together with other Christians—and people of all faiths—to recognize and remedy many of the moral and family issues faced by society. The Christian conversation is richer for what the Latter-day Saints bring to the table. There is no good reason for Christian faiths to ostracize each other when there has never been more urgent need for unity in proclaiming the divinity and teachings of Jesus Christ. Resources The Church acknowledges the contribution of scholars to the historical content presented in this article; their work is used with permission. Originally published November 2013.
|
When Luke Johnson says he "doesn't like to be idle", the serial entrepreneur certainly means it. When you glance at his CV you wonder how he has the time to fit in all his commitments. Mr Johnson, who is estimated to be worth £150m, is the chairman of UK private equity business Risk Capital Partners. His current hands-on investments range from restaurant chains including Patisserie Valerie to a car park company, a cruise holiday business and even greyhound racing tracks - not to forget a bank, a research business, a marketing agency and theatre production companies. Meanwhile, his wide-ranging charity work includes his chairmanship of the Institute of Cancer Research. And he writes business advice books. And if that weren't all more than enough, Mr Johnson, 51, recently launched a new think tank, the Centre for Entrepreneurs, which aims to promote "the role of entrepreneurs in creating economic growth and social well-being". Mr Johnson says: "Boredom is a great enemy in my life. I like to avoid getting bored. "I have a restless personality, like other entrepreneurs, that keeps me going. Business is what I do, and I believe that all people need work of a sort, they need to be proactive." 'The perfect business' To go back in time, Mr Johnson made his first fortune - and his reputation as a shrewd businessman - in the world of pizza. It was 1993, and Mr Johnson was just 30 years old. He and Hugh Osmond, his business partner at the time, decided to take a share in a small London-based restaurant chain called Pizza Express. [Image caption: Luke Johnson made his name with Pizza Express] With the help of their overdrafts and savings, the two budding young entrepreneurs managed to cobble together the funds, and persuade Pizza Express's founder that they were the right men to lead the company's expansion.
Pizza Express had just 12 branches at the time, but within six years this had increased to more than 250 across the UK. At the same time, its share price rose from 40p to more than £9. Mr Johnson has since described Pizza Express as the "perfect business" because of pizza's enduring popularity, and its high profit margins. But keen for a fresh challenge, Mr Johnson sold up in 1999. He then went on to own famous London restaurants such as The Ivy and Le Caprice, while launching the Strada Italian restaurant chain, which were sold in 2005 for a total of more than £90m. Mr Johnson subsequently grew the Giraffe restaurant chain, which was sold this year to supermarket group Tesco for £50m. He says that for someone with wide-ranging restaurant industry experience it is not too difficult to establish a winning formula. "There are a host of things you are looking for," he says. "Try to find great partners and managers who you want to work with, people who are confident, knowledgeable and motivated. "And find projects that are established, but not yet scaled up. I like to think that what I bring to the party is the capital, knowledge and connections that allow the business to expand significantly." At the actual restaurant level, he says: "The properties are very important, the branch management is very important, and of course the menu and good service." 'Tough lessons' For all Mr Johnson's success in business, which also includes six years as chairman of UK broadcaster Channel 4 from 2004 to 2010, it all could have turned out very differently. When he first went to Oxford University as an 18-year-old, he was meant to go on to become a doctor. All that changed, however, when he and a friend started to run a nightclub evening.
[Image caption: Buying the Borders bookshop chain was not a success for Mr Johnson] "We had the bright idea to charge our friends entry, and that was it for me. I knew there and then that this was what I wanted to do with my life. I wanted to ultimately run my own companies." Mr Johnson decided against continuing his medical degree and instead graduated in physiological sciences. He then worked as a stockbroker for a number of years before ultimately taking the plunge into the world of entrepreneurship at the age of 27. Now in his early 50s, Mr Johnson is keen to pass on his knowledge to young entrepreneurs just starting out in their business life. Recently addressing the launch of Global Entrepreneurship Week, which in the UK is organised by the Prince of Wales's start-up support charity Youth Business International, Mr Johnson was keen to stress the importance of coping with failure. Ill-fated purchase Highlighting two of his own, he said "everything went wrong" when he tried to open a branch of Belgo, the mussels and frites restaurant, in New York. "The US partner went bust, we chose the wrong location, and we neglected to use unionised labour, which you have to do in the US... Come opening night the construction union protested outside with a 25ft [7.5m] tall inflatable rat." Mr Johnson is also happy to discuss his ill-fated purchase of the UK branches of the bookstore chain Borders, which was ultimately liquidated in 2011. "I appeared to be able to buy it very cheaply, but my plan was doomed from the start," he says. "It was a painful experience and I lost all my money. I thought I could turn it around, but I didn't realise the extent to which Amazon, e-books and supermarkets were killing big-scale book stores." In both cases, Mr Johnson says they are "tough lessons", and that young entrepreneurs should remember that "failures are rarely fatal". He adds: "I would advise any young person thinking of starting their own business to take the plunge.
Life is not a rehearsal, don't be the person who spends his life thinking, 'I wish I had done that'."
|
[Image caption: Jessica Raine, Bryony Hannah and Helen George play the non-nun midwives at Nonnatus House] The Call the Midwife Christmas special is already being dubbed a perennial favourite in the TV listings - up there with EastEnders and Doctor Who - so it may be surprising to find this will be only its second festive outing. Based on Jennifer Worth's memoirs of her time as a nurse in 1950s London, the show - which is about to head into its third series - centres on life at the Nonnatus House order and the nuns and midwives who work side-by-side. Having already exhausted all of the late author's original stories, the drama is now being penned by series creator Heidi Thomas. The Christmas episode promises to be a tearful affair but avoids the shock value of last year's Downton Abbey Christmas special, which saw the death of one of its main characters, and led to accusations of sensationalism. "It's just a beautifully crafted story," says Helen George, who plays the glamorous yet feisty Trixie. "There is the emphasis on birth and birthing, but it is about relationships of the families and the community as well, which is so touching and emotive in its own essence there's no need to shove a death in for shock value." 'Wiped out' While the series has moved into the tail-end of the 1950s, the drama is played out to the backdrop of a country still recovering from World War Two, particularly in the Christmas episode. [Image caption: Miranda Hart was nominated for a Bafta for playing midwife Chummy] Jessica Raine, who plays lead character Jenny, says: "I think it is necessary as people tend to forget how long it took to get over the war, especially in the East End, because a lot of it was wiped out. It had such an effect on everyone's lives still." George adds: "Going into the next series, there are a few reflections of the war still so it still feels like a community and a world repairing itself, which we haven't really covered before.
It's always spoken about but the emotional depth of it isn't really explored, but in the new series it is." Pam Ferris, who plays the no-nonsense Sister Evangelina, agrees the war still had a profound effect long after it ended. "For those of us naive enough to think the war ended in 1945, it's a lesson for all historians and people who don't know the war - its repercussions go on and fade slowly." The after-effects of war are also felt in other, more dramatic ways in the Christmas special, as the East End midwives come face-to-face with an unexploded World War Two bomb. The episode also tackles how the experience of women and birth began to change. "The fashion surrounding birth has already changed in the series. At first we were still saying things like 'Get into the correct position for birthing', now even men are creeping into the room," says Ferris, who played the matriarch in the early '90s comedy drama Darling Buds of May. "Of course I can imagine Sister Evangelina is going to fight a heavy rearguard action over that!" 'Absolutely filthy' Jenny Agutter, probably best known for her role in The Railway Children and who plays Sister Julienne, the sister in charge at Nonnatus House, says: "In the Christmas one you see the change in attitudes towards men coming in. "When we are filming I'm so fascinated when we talk about childbirth experiences - the men in the crew always want to talk about it more because they are there and present and part of it. What we are still showing is men not being part of this intimate women's world." And Ferris feels the show has a message the younger generation should take note of.
"I hope the younger fans appreciate how things have moved on for women - that within living memory things were tougher, and to not throw away that liberty and freedom the vanguard of feminism and change have brought them," she says. Despite its gentle pace, the birthing scenes can be quite realistic. "We end up absolutely filthy. We use sticky blood that smells like Marmite," George says. Raine adds: "They use a condom filled with blood so whenever you see a gush of water coming out of the woman you have a real midwife standing above with a condom and she just releases it, so there are a lot of condoms lying around the set!" [Image caption: Pam Ferris (left) and Jenny Agutter play the Nonnatus nuns, a real order now based in Birmingham] Miranda-factor Call the Midwife often has that feel-good Sunday evening tone to it, drawing in mums and dads, along with grans and granddads. But George insists they have a lot of younger fans. "It's usually 14-year-old girls that come up to us. I think the Miranda [Hart] fan base has spilled over into Call the Midwife." And like ITV's hugely successful period drama Downton Abbey, Call the Midwife is hugely popular in the US. "The last time we were in New York we had a Q&A with lots of fans of the show who were just crazy midwife fans. We felt like One Direction," says George. The cast know they will all be watching on Christmas Day, as their extended families will insist on it. "This is the first show I've been in [that] my husband has told me not to tell him what happens," says Ferris. Call The Midwife is on Christmas Day at 18:15 GMT.
|
Lake Anna is one of the largest freshwater inland reservoirs in Virginia, covering an area of 13,000 acres (53 km2), and located 72 miles (116 km) south of Washington, D.C. in Louisa and Spotsylvania counties (and partially in Orange County at the northern tips). The lake is easily accessible from Fredericksburg, Richmond and Charlottesville, and is one of the most popular recreational lakes in the state. History The reservoir is formed by the North Anna Dam on the North Anna River. In 1968, Virginia Electric and Power Company (now Dominion) purchased 18,000 acres (73 km2) of farmlands in three counties along the North Anna and Pamunkey rivers to provide clean, fresh water to cool the nuclear power generating plants at the North Anna Nuclear Generating Station adjacent to the lake.[1] By 1972 the lake bottom was cleared of all timber and the dam was nearing completion. It was projected to take three years to completely fill the lake, but with the additional rainfall from Hurricane Agnes, the lake was full in only 18 months. The first communities broke ground at about that same time and now some 120 different communities dot the shores of the lake. In June 1978, the first of the two reactors went into commercial operation. The second unit followed in December 1980. Description Lake Anna is approximately 17 miles (27 km) long from tip to tip, with 200 miles (320 km) of shoreline.[2] The lake is divided into two sides: the public side (also known as the "cold" side) and the private side, working as a cooling pond (also known as the "hot" side). The public side is roughly 9,000 acres (36 km2), while the private side is roughly 4,000 acres (16 km2). The private side is formed of three main bodies of water, connected by navigable canals. The public and private sides are divided by three stone dikes.
The private side has no marinas or public access ramps; only property owners and North Anna Power Station employees have access to the waters of the private side. The public side has several marinas and boat launches, including a boat ramp at the state park. The public side sees significantly higher boat traffic than the private side, especially on summer weekends. The public side is known as the "cold" side because it provides water to cool the generators at the power plant; the private or "hot" side receives warm water discharge from the power plant. The private side can be substantially warmer than the public side, especially near the discharge point, where it can be too hot for swimming. The private side has an extended water sports season. Some water circulates back out of the private side into the public side through underground channels; consequently, the public side is warmer in the southern area near the dam. In the winter, some fish migrate to these warmer waters. Preliminary steps toward the addition of a third reactor have raised protests from environmentalists and property owners, who fear an increase in the water temperature and a decrease in the water level, particularly on the private side. According to Dominion, the water discharged from the plant is usually about 14 °F (7.8 °C) warmer than the intake water.[2] North Anna Dam The dam creating the lake, North Anna Dam, is a 5,000-foot-long (1,524 m) and 90-foot-high (27 m) earthen embankment dam. It is 30 feet (9 m) wide at its crest which sits at an elevation of 265 feet (81 m) above sea level. The dam's spillway is located in the center of its body and is 200 feet (61 m) wide, containing three main 40-foot-wide (12 m) and 30-foot-high (9 m) radial gates. Two smaller 8.5-foot (3 m) wide and tall gates on the outer edges of the spillway section maintain normal discharges.
Normal elevation for the reservoir is 250 feet (76 m).[3] The dam's hydroelectric power plant is located on the west side of the spillway and is supplied with water via a 5-foot-diameter (2 m) penstock. The plant consists of two small open runner turbine-generators, the larger with a 775 kW capacity and the smaller rated at 225 kW for a combined installed capacity of 1 megawatt.[4][5] Use and recreation Lake Anna State Park, offering picnic areas and boat launching ramps, is located directly on the lake's public side eastern shore. The park has a maintained beach area with snack bar, docks, an exhibit center and several miles of hiking, horse trails and tours including visits to the remains of Goodwin gold mine and gold panning. The state park offers rental cabins for overnight lodging.[6] Events Lake Anna is the site of several major water-related events including: Rumpus in Bumpass, an annual spring triathlon festival, with an International/Olympic-distance race and a sprint-distance race held on consecutive days.[7] The Kinetic Race weekend in May with a half-distance race on Saturday and a sprint-distance race on Sunday. The Giant Acorn triathlon weekend in the fall, featuring an International/Olympic-distance triathlon followed by a sprint-distance triathlon on the next day.[8]
|
It will come as a surprise to precisely no one that Trump has a 'slightly frayed' relationship with the intelligence community. As you'll undoubtedly recall, Trump engaged in a protracted, public dispute with Obama's former CIA Chief John Brennan over his assessment of 'Russian meddling' in the 2016 election... .@FoxNews "Outgoing CIA Chief, John Brennan, blasts Pres-Elect Trump on Russia threat. Does not fully understand." Oh really, couldn't do... — Donald J. Trump (@realDonaldTrump) January 16, 2017 much worse - just look at Syria (red line), Crimea, Ukraine and the build-up of Russian nukes. Not good! Was this the leaker of Fake News? — Donald J. Trump (@realDonaldTrump) January 16, 2017 ...and his ongoing battle with the FBI seems to just now be getting warmed up... Tainted (no, very dishonest?) FBI “agent’s role in Clinton probe under review.” Led Clinton Email probe. @foxandfriends Clinton money going to wife of another FBI agent in charge. — Donald J. Trump (@realDonaldTrump) December 3, 2017 After years of Comey, with the phony and dishonest Clinton investigation (and more), running the FBI, its reputation is in Tatters - worst in History! But fear not, we will bring it back to greatness. — Donald J. Trump (@realDonaldTrump) December 3, 2017 So what do you do when you've grown convinced that your own intelligence agencies loathe your presence in the White House to such an extent that they would prioritize your demise over pretty much any other objective? Well, according to reporting from The Intercept, you just might take it upon yourself to start a new, privately-funded spy network to compete with the official agencies that increasingly look to be politically compromised. 
The Trump administration is considering a set of proposals developed by Blackwater founder Erik Prince and a retired CIA officer — with assistance from Oliver North, a key figure in the Iran-Contra scandal — to provide CIA Director Mike Pompeo and the White House with a global, private spy network that would circumvent official U.S. intelligence agencies, according to several current and former U.S. intelligence officials and others familiar with the proposals. The sources say the plans have been pitched to the White House as a means of countering “deep state” enemies in the intelligence community seeking to undermine Trump’s presidency. The creation of such a program raises the possibility that the effort would be used to create an intelligence apparatus to justify the Trump administration’s political agenda. “Pompeo can’t trust the CIA bureaucracy, so we need to create this thing that reports just directly to him,” said a former senior U.S. intelligence official with firsthand knowledge of the proposals, in describing White House discussions. “It is a direct-action arm, totally off the books,” this person said, meaning the intelligence collected would not be shared with the rest of the CIA or the larger intelligence community. “The whole point is this is supposed to report to the president and Pompeo directly.” And while The Intercept notes that meetings have already been conducted to raise private funding for such an initiative, official sources from within the intelligence community have understandably denied all such rumors. Some of the individuals involved with the proposals secretly met with major Trump donors asking them to help finance operations before any official contracts were signed. The proposals would utilize an army of spies with no official cover in several countries deemed “denied areas” for current American intelligence personnel, including North Korea and Iran. 
The White House has also considered creating a new global rendition unit meant to capture terrorist suspects around the world, as well as a propaganda campaign in the Middle East and Europe to combat Islamic extremism and Iran. "I can find no evidence that this ever came to the attention of anyone at the NSC or [White House] at all,” wrote Michael N. Anton, a spokesperson for the National Security Council, in an email. “The White House does not and would not support such a proposal.” But a current U.S. intelligence official appeared to contradict that assertion, stating that the various proposals were first pitched at the White House before being delivered to the CIA. The Intercept reached out to several senior officials that sources said had been briefed on the plans by Prince, including Vice President Mike Pence. His spokesperson wrote there was “no record of [Prince] ever having met with or briefed the VP.” Oliver North did not respond to a request for comment. According to two former senior intelligence officials, Pompeo has embraced the plan and has lobbied the White House to approve the contract. Asked for comment, a CIA spokesperson said, “You have been provided wildly inaccurate information by people peddling an agenda.” And here are the familiar names/companies behind the effort... At the heart of the scheme being considered by the White House are Blackwater founder Erik Prince and his longtime associate, CIA veteran John R. Maguire, who currently works for the intelligence contractor Amyntor Group. Maguire also served on Trump’s transition team. Amyntor’s role was first reported by Buzzfeed News. Michael Barry, who was recently named the National Security Council’s Senior Director for Intelligence Programs, worked closely with Erik Prince on a CIA assassination program during the Bush administration. Prince and Maguire deny they are working together. Those assertions, however, are challenged by current and former U.S. 
officials and by Trump donors who say the two men were collaborating. As with many arrangements in the world of CIA contracting and clandestine operations, details of who is in charge of various proposals are murky by design and change depending on which players are speaking. An Amyntor official said Prince was not “formally linked to any contract proposal by Amyntor.” In an email, Prince rejected the suggestion that he was involved with the proposals. When asked if he has knowledge of this project, Prince replied: “I was/am not part of any of those alleged efforts.” Prince has strong ties to the Trump administration: His sister Betsy DeVos is secretary of education, he was a major donor to the Trump election campaign, and he advised the transition team on intelligence and defense appointments, as The Intercept has previously reported. Prince has also contributed to Vice President Mike Pence’s campaigns. Of course, while a privately-funded spy network might help offset the problem of a politically compromised intelligence community, in the end these efforts will certainly prove to just substitute the political ambitions of corrupt public servants for those of corrupt private citizens who will undoubtedly expect a "return" on their invested capital.
|
The big landmark season 30 is now in the books for Survivor, and Worlds Apart has left a somewhat strange taste in the mouth. Billed by Jeff and others at CBS as one of the best seasons ever, it was an unusual season, with some great moments, a cast with massive potential, and a very popular winner. Yet it was a strangely personal season, and downright uncomfortable to watch at times. Contrast that with San Juan del Sur - a season seemingly unloved by production and downplayed right from the start. The cast was widely panned and the season written off by many. However, in retrospect, it was a season that built to a great ending and had solid amounts of strategy throughout. Some of the characters were a bit forgettable, but there were a surprising number of game players as well. It may be a big call but I’m calling San Juan del Sur as the better season of the two, and here are ten reasons why. 10. Better Challenges Not normally something I am overly worried about, but the challenges are becoming a bigger and bigger part of the show, so it's noticeable when they are good, and maybe not so good. Both seasons had some good challenges and some not so good challenges. Worlds Apart started well with a challenge that offered options on how to finish it, but petered out into barrel rolling and giant slide challenges. An epic build doesn’t necessarily make for a good challenge. Part of the reason may be that most of the challenges in Worlds Apart were not that close, in either the pre- or post-merge game, whereas there was real tension in San Juan del Sur, and a variety of different skills were required to win. There were some great come-from-behind wins (Jaclyn’s last win was a highlight), as well as Natalie’s win to keep Jon away from the immunity necklace. San Juan del Sur also seemed to have more original ideas for challenges, and even managed to break Missy’s leg with one of them.
As challenges can get pretty boring if they are repeated too often, I always like to see innovative new ideas to keep things fresh. No, they won’t always work out but it’s good to see new ideas being used. 9. A Point of Difference Whilst many felt that the blood vs. water theme didn’t work so well with all new players, you have to applaud the producers for giving it a go, and San Juan del Sur is memorable in that it at least tried something different. They also got rid of Redemption Island, which is good because even if it “works better in a blood vs. water season”, Redemption Island is a massive problem and goes against everything Survivor is meant to be. The loved ones duels that replaced it were a bit hit and miss – the duels themselves were actually pretty good, but the return of Exile Island didn’t really work. Again, you have to praise producers for trying different things instead of playing it safe. Playing it safe is exactly what they did with Worlds Apart – really taking no risks, and re-running the “Brains vs. Brawn vs. Beauty” theme, just with different names. The result was a pretty straightforward season and one that doesn’t really stand out as being good or bad. So even if you don’t really like SJDS, you have to admire it as a season with a point of difference and one that wasn’t afraid to play with the format a bit. Worlds Apart played it safe and therefore ended up a pretty bland season. 8. Better Editing I understand that editing a show like Survivor must be one hell of a tough job, so I give them a lot of credit, but I do feel the ball was dropped somewhat with Worlds Apart.
Specifically, Mike’s win was signalled through the editing weeks, if not months, in advance, and the efforts to throw viewers off by showing scenes of him being bad or clumsy were offset against much more meaningful scenes of him finding idols (complete with scenes of him in tears and emotional music in the background), so it was hard to miss that his win was being set up in the same obvious way that Tyson, Cochran and Kim were before him. This just doesn’t make for a satisfying viewing experience. To be fair, there have been times when it is very hard not to show the winner in this way because their game was so dominant – think Boston Rob in Redemption Island. However, editors could have really played up the chances of Carolyn or even Rodney winning in order to give viewers options on who would win. In fact, this is exactly what happened on San Juan del Sur, where Natalie’s win was hidden behind the downfall stories of major alpha males like Josh, Jeremy and Jon. Natalie’s win was not a massive surprise, but it never felt like the season was playing out as a coronation for her. And the story is better for it – you get to see the possibilities of other people winning for extended periods of the season, and it keeps viewers guessing. Surely it isn’t hard to do more of this and less of the “Mike wins” story. 7. Lower Expectations and not Overshadowed by the Next Season They say the secret to happiness in life is low expectations, and San Juan del Sur is a great example of this. Jeff really played down the season, and therefore most of us didn’t think we would get much. And to be fair, the season is a slow burn – the best drama and gameplay comes at the end as it should, and it's a slow build to this endgame. It’s great because we never really expected too much. Compare that to Worlds Apart – the promotion of which took up considerable time in the SJDS reunion show.
Jeff and others talked of this being the best season ever, the best cast ever, the best premiere episode ever… it wasn’t, and the expectation probably killed a lot of enjoyment we would have got if Jeff had just shut up and let the season speak for itself. But Worlds Apart did have one major problem, and that was that its final few episodes were completely overshadowed by the Second Chance vote. I have no complaints – it’s a great idea and this was probably the best way to create the most hype, but Worlds Apart was completely lost under it. So the 10-minute sequence of Worlds Apart in the SJDS reunion? Nothing compared to the two-week-long hype machine for what could be one of the best seasons in a while in Second Chances. All this didn’t help Worlds Apart stand as its own season. 6. More Comedy San Juan del Sur is unquestionably a funnier season than Worlds Apart. This may not be a major selling point to many people, but a totally serious season is never that much fun. And comedy is always a great tool to fall back on when the strategic action isn’t very interesting. Seasons like Fiji, Exile Island and Vanuatu used a lot of humour and all these seasons were better because of it. What is funny is all down to personal taste, but SJDS offers a little of everything. Particular favourites for me include the Drew Christy downfall arc, where he throws a challenge to get rid of one of the girls, then gets voted out himself. Brother Alec (aka the “meat collector”) was also funny with his open-mouthed, slack-jawed expressions from the jury. There were some other great characters when it came to humour, with the Nale men being front and centre. Keith was often doing or saying something funny, and Wes’ whole subplot of overeating was actually his major storyline. In terms of loveable dummies, we also had Jon, who was great at putting his foot in his mouth, and I had to like how Natalie made fun of him basically all season.
Contrast this to Worlds Apart, where Rodney was really the only source of comedy, giving (admittedly fantastic) impressions of his tribemates. Jenn certainly had her moments, but this was a dour season for the most part and really didn’t have many lighter moments at all. 5. San Juan Del Sur Will Improve on a Re-watch It is still early days, but most obsessive Survivor fans like to re-watch a season, to see how the story plays out with the hindsight of knowing the end result. You can also pick up on other things you may have missed in the first viewing. I haven’t had time to fully re-watch these seasons yet, but I feel pretty confident that San Juan del Sur will only improve on a re-watch. Knowing Natalie wins, you can see her game in retrospect and how well she plays. You can also go back and look at the downfall of Missy and Baylor, and the faults in Josh, Reed and Jeremy’s games. The pre-merge is also more fun than you probably remembered, especially prior to the swap. The Drew downfall episode and the fall of Nadiya and Val, along with John Rocker, make for an explosive start to the season that is great to watch back. However, the issue with re-watching Worlds Apart is that there really isn’t much to re-analyse with Mike’s game. He basically won through challenges, and there isn’t really much to watch in terms of strategy. In fact the strategy seems completely meaningless when you know the season is just won by a challenge beast. As much as people will say ideas like the return of Exile Island didn’t work, neither did the second vote that Dan had, and it just really wasn’t that interesting the first time around. The nastiness and personal attacks in Worlds Apart will not be any lighter or more interesting on a second viewing. 4. Bigger, Better Blindsides If big tribal councils and blindsides are your thing, you will find much more to enjoy in San Juan del Sur. This goes back a little bit to my concerns with the editing of the seasons.
Worlds Apart had plenty of votes that surprised the players, but as viewers, we were rarely in doubt of the outcome. I would say that only the eliminations of Vince, Joaquin and Tyler were really massive surprises in Worlds Apart. Compare that to SJDS, where several votes, especially late in the game, came as a massive surprise. With idol plays and hidden plans aplenty, the game really came alive and things got shaken up several times. Natalie urging Jon to play his idol, “Stick to the Plan” and Jon’s blindside were massive moments. Jeremy’s blindside took many viewers by complete surprise, and even Julie’s quit changed the direction of the game, with Jon and Jaclyn seemingly ready to get rid of Jeremy before switching to Josh following her departure. Even the finale has the idol play and blindside of Baylor. SJDS is a great season in that it gets better almost every week, unlike Worlds Apart, which pretty much flatlines after Kelly gets idoled out of the game. For all the talk of how many big players there were in season 30, it’s actually the gameplay of the season 29 contestants that is the most intriguing. It's easy to write off SJDS as being full of “dumb” or naive players like Keith, Alec or Jon, but in reality there are some big gamers like Josh, Reed, Jeremy, Natalie and Missy, but also players like Jaclyn and Baylor who grow into pretty savvy strategists as well. If strategy is your thing, you won’t find as much of it as you have probably been led to believe in Worlds Apart. Look to season 29 for more complex and interesting strategy. 3. Better Heroes and Villains Any great season of Survivor has both a collection of players to love and hate. Some who you want to see win, others you can’t wait to see get their due. It may be simplistic to set up players as either good or bad, but to be honest, it works, and there is a reason that Survivor is often set up as being about Heroes or Villains.
Because it is fresher in the memory, it’s easy to think that Worlds Apart has better heroes, namely Mike and Shirin, and better villains in Dan and Will, but overall these players never really had much depth. Fans may not want to admit it, but Missy was in fact one of the best villains the show has had in a while. She was really disliked, to the point that when she sustained her injury near the end of the season, there was a decent chunk of fans who were outraged that she wasn’t medically evacuated. This was largely because Missy was developed as a character with varying sides – as a mother, she felt the need to protect her daughter in the game but also make big moves. She was accused of covering for Baylor to allow her to not do work, but she also created strong bonds when she needed to (and knew how to break them). She was a villain with many sides to her personality – but they all led to her being someone audiences loved to hate. On the other side were a variety of heroes – regardless of what you liked in the game, you had someone to support and want to win – be it Natalie, with her very good strategic play and revenge narrative, Keith, who was a loveable idiot who won challenges and accidentally ruined the plans of many around him, or Jaclyn, who would put Jon in his place and was feisty enough to not just be cast as a token good-looking girl. Worlds Apart had arguably a much better cast on paper, but they just never really worked in the game. Characters like Carolyn could have been much bigger heroes, but they just never got the edit to support it. Shirin’s edit turned her into someone to pity, which I’m sure is the last thing she wanted. Joe won challenges but was never built up as someone worth supporting. And Will and Dan, the supposed “villains”, were actually given a one-note bully edit which didn’t build any depth. All in all, Worlds Apart was frustrating in this way – the cast seemed to have so much potential, but just never really “popped”.
SJDS looked ordinary on paper, but grew into something pretty great. 2. More Fun There really is no way around the fact that Worlds Apart is a dour season. It’s too personal, too serious and just lacking in any kind of fun. I don’t for one moment think producers should paint over the Shirin, Dan and Will episodes – I like that they show big issues and don’t shy from them. But it doesn’t change the fact that it makes for an awkward, uncomfortable and pretty miserable viewing experience. Whilst some would argue that Dan and Will are villains, I see them more as buzzkills, who provide no entertainment and are just terrible to watch. This isn’t a personal comment, but more about the way they were edited into one-dimensional “evil” people. The more fun characters on the season, like Jenn and Joe, leave too early; Rodney is great fun but doesn’t get nearly the camera time he deserves. There’s no hiding from the fact that the dirty thirty are just not much fun to watch. I think it also hurts that so many of them are game-focussed, so fun character moments are something we rarely get. Again, a lack of expectations helps the San Juan del Sur cast, who are all sorts of entertaining. They are just the right amount of game savvy and unbelievably goofy and dumb to make for fantastic TV. The “stick to the plan” moment is so much fun, as is Natalie giving Jon a hard time for being a wine snob. Even minor characters like Dale bring some fun, with fake hidden immunity idols, as does Drew in his one-episode downfall. Nadiya and Val also keep things interesting in their short stays. Yes, there are some down moments, and the three-episode run from Kelley’s boot to Julie’s quit is pretty slow (Dale and his fake idol aside), but it quickly picks up and never really slows again. San Juan del Sur is again going to deliver so much more on a second or third viewing, with lots of fun character moments that make the season more than the sum of its parts. 1.
More Satisfying Winner and Ending Let me be quite clear – winners deserve to win, and this point is not a hit on Mike as a winner at all. My point is that purely from a viewer’s perspective, the story of Natalie’s win is just so much more entertaining. Her story is about overcoming the vote-offs of her sister and then her closest ally to ensure she eventually won the game. Natalie’s win had a little of everything – great strategy, subtle yet smart social play, challenge wins and idol finds. Natalie understood the need to make a couple of showy moves in front of the jury (the idol play that led to Baylor’s blindside), as well as laying low and letting the bigger, more egocentric players knock each other out. She managed to keep Keith quiet on her plan to get rid of Jon, and even recovered from Jaclyn’s anger at this. And although she did win a couple of critical challenges, she never really needed them to keep going in the game. She was able to navigate a game full of loved ones paired up after her loved one went first, and survive. And she did all of this without really being an avid fan or viewer of Survivor before playing. Not since Earl Cole has someone known so little of the game yet played it so well. I truly believe that in the fullness of time, her game will come to be viewed as one of the best ever. Mike, on the other hand, did one thing very well, and that was win challenges. And ultimately, that’s all he needed. Full credit to him, but it just doesn’t make for a very satisfying or entertaining season. Ultimately all the strategy that happens from about Joe’s exit onwards doesn’t matter, because Mike will be safe either through immunity or the idol all the way to the end. Some will say that this is just as good as playing a great strategy; others will say a good strategist never needs immunity to be safe in the first place.
Wherever you draw your line, no re-watch of Worlds Apart will make you understand Mike’s eventual win better – he played one aspect of the game amazingly, and he won’t care how that million dollars got into his bank account. But from a purely viewer’s perspective, it’s just not that interesting. I had always wondered what it would have been like if Terry, Brett or Ozzy won the last challenge in seasons where they played pretty poor strategic and social games – and now I know. And it just doesn’t make for good TV. What do you think of the top 10? Do you agree? Disagree? Is it in the wrong order or are there ones that didn’t make the top 10 that you feel should’ve? Leave a comment below to let us know your thoughts! ALL IMAGES USED IN THIS ARTICLE ARE COPYRIGHT CBS. IF YOU WISH TO READ OUR DISCLAIMER IN REGARDS TO THE USE OF IMAGES PLEASE CLICK HERE
|
Fort Tilden's secluded beauty remains marred by the remnants of Hurricane Sandy. Federal officials have determined that the broken metal and concrete debris from the ruined Shore Road make the area too hazardous for beachgoers. Visiting the area last month with my friend, a Rockaway native, we didn't see another person all day. A small cluster of multi-use facilities sits at the entrance of the park, including repurposed barracks housing the Rockaway Artists Alliance. Most of the buildings are unused, but two host a summer day camp called Camp Kidsmart and a local theater company. A domed wrought iron gate divides the semi-active buildings from the abandoned ones. My friend tells me a few years ago a local girl found a dead man hanging from it (certainly the park is not unfamiliar with death). Before Sandy, there was an open mic every Thursday night, with locals pouring out of the building into the back garden area. One of the long-unused warehouses had a library of paperbacks with a secret garden still full of mosaics and shrines of curios, toilet bowls full of weeds, and plants strung together and hung from trees. Now the wooden structures are battered and damp, the remains of the library balled in wet clumps of paper on the floor, the garden fallow in the brush. They've been untouched by cleanup efforts, local haunts not easily restored by out-of-borough volunteers. As we approached the main fortress, the scent of fresh paint intensified. The entire barrack had recently been given a coat of dark blue, a real pity, because that building used to be covered in runes and graffiti dating as far back as the First World War. Soldiers etched their names into the stone next to long strips of symbols that could be mistaken for hieroglyphics. Isolated on the Atlantic shore, Fort Tilden now begs protection from the rising tides and escalating storms, but once it protected New York.
Established in 1917 and named for one-term governor and failed 1876 presidential candidate Samuel J. Tilden, the fort was built during World War I as a fortification of the Rockaway Peninsula. Constructed to defend the New York Harbor from sea or air attack, it consisted of two batteries, both of which remain today. It was the base of Naval Air Station Rockaway, and the departure point for the first transatlantic flight. After the war, a small caretaker dispatch was left to maintain the buildings until June 1, 1941, when the battlements were flooded once more by 1,000 men; more barracks and support buildings had to be constructed to accommodate the troops. During the Second World War, Battery Harris, the largest structure at Fort Tilden, was casemated to protect it from aerial bombings, and an overhead trolley system was outfitted on the ceiling to transport heavy artillery shells to the guns. The end of World War II saw the abolishment of the Coast Artillery Corps, and so in 1946, 46 of Fort Tilden's barracks were converted into 350 apartments for veterans and their families. During the Cold War in 1951, the barracks saw action once again, becoming the home of the 69th Anti-Aircraft Artillery Battalion, bringing in more men and a lot of guns. When anti-aircraft guns became outdated, missiles were brought in: first Nike Ajax, then Nike Hercules, which defended the city's skies from 1959 to 1974. With the end of Nike Hercules came the end of Fort Tilden's military career; the base was deactivated and became a part of the Gateway National Recreation Area in 1974. Today, there's an observation deck on top of Battery Harris East, a historic gun site that used to house 70-foot cannons shooting 2,300-pound shells 25 miles out to sea. It is a popular destination for bird watchers. Many of the barracks, bunkers, and underground artillery shelters still dot the grounds, albeit gutted and covered in decades' worth of graffiti.
These smaller outbuildings can be difficult to find, hidden off the main path and lost to vegetation (in the winter months, the spray paint on their facades makes them easier to spot in the dead brush). There are no directional signs or plaques explaining the place's history, nothing to remind visitors that this was once home to the 187th infantry brigades and the largest cannon ever employed (a 16-inch bore); only occasional reports of unexploded ordnance found on the grounds hint at the fort's military past. Hannah Frishberg is a 5th generation Brooklynite. Earlier this month she wrote about Red Hook's giant grain elevator.
|
WSGI on Python 3 Yesterday after my talk about WSGI on Python 3 I announced an OpenSpace about WSGI. However, only two people showed up, which was quite disappointing. On the bright side however: it ran in parallel to some interesting lightning talks, and I did not explain too well what the purpose of the OpenSpace was. In order to do better this time around, I want to summarize the current situation of WSGI on Python 3, what the options are, and why I'm at the moment thinking of going back to an earlier proposal that was already dismissed. So here we go again: Language Changes There are a couple of changes in the Python language that are relevant to WSGI because they make certain things harder to implement and others easier. In Python 2.x, bytestrings and unicode strings shared many methods, and Python would do a lot to make it easy for you to implicitly switch between the two types. The unicode decode and unicode encode errors everybody knows in Python are often caused by this implicit conversion. In Python 3 the whole thing looks a lot different: there are only unicode strings now, and bytestrings got replaced by things that are more like arrays than strings.
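To make the new split concrete, here is a small sketch of the Python 3 rules described above (plain standard library behaviour, nothing WSGI-specific):

```python
# In Python 3 every conversion between text and bytes is explicit,
# and a bytes object behaves like an array of integers, not a string.
text = 'foo'                            # a unicode string
data = text.encode('utf-8')             # bytes: b'foo'

assert data.upper() == b'FOO'           # string-like methods still exist ...
assert list(data) == [102, 111, 111]    # ... but iterating yields integers
assert data.decode('utf-8') == 'foo'    # back to a unicode string
```

The point is that code which previously mixed the two types freely now has to decide, at every step, which of the two it is working with.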
Take this Python 2 example:

>>> 'foo' + u'bar'
u'foobar'
>>> 'foo %s' % 42
'foo 42'
>>> print 'foo'
foo
>>> list('foo')
['f', 'o', 'o']

Now compare that to the very same example on Python 3, just with the syntax adjusted to the new rules:

>>> b'foo' + 'bar'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't concat bytes to str
>>> b'foo %s' % 42
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for %: 'bytes' and 'int'
>>> print(b'foo')
b'foo'
>>> list(b'foo')
[102, 111, 111]

There are ways to convert these bytes to unicode strings and the other way round; there are also string methods like title() and upper() and everything you know from a string, but it still does not behave like a string. Keep this in mind when reading the rest of this article, because that explains why the straightforward approach does not work out too well at the moment. Something about Protocols WSGI, like HTTP or URIs, is based on ASCII or an encoding like latin1, or even a mix of encodings. But none of those are based on a single encoding that represents unicode. In Python 2 the unicode situation for web applications was fixed pretty quickly by all frameworks in the same way: you as the framework/application know the encoding, so decode incoming request data from the given charset and operate on unicode internally. If you go to the database, back to HTTP or something else that does not operate on unicode, encode to the target encoding, which you know. This is painless; some libraries like Django make it even less painful by having special helpers that can convert between utf-8 encoded strings and actual unicode objects at any point.
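The decode-at-the-boundary pattern the frameworks settled on can be sketched in a few lines. The helper names below are illustrative, not any real framework's API (Django's conversion helpers work along these lines), and the key point is that the charset is the framework's knowledge, not WSGI's:

```python
# Hypothetical framework helpers. The framework, unlike WSGI itself,
# knows the charset: it is whatever the application sends out in its
# own pages and forms.

def to_unicode(value, charset='utf-8'):
    """Decode incoming request data at the framework boundary."""
    if isinstance(value, bytes):
        return value.decode(charset)
    return value

def to_bytes(value, charset='utf-8'):
    """Encode again when leaving unicode-land (HTTP, database, ...)."""
    if isinstance(value, str):
        return value.encode(charset)
    return value

name = to_unicode(b'J\xc3\xbcrgen')      # utf-8 bytes from a form post
assert name == 'J\u00fcrgen'             # unicode inside the framework
assert to_bytes(name) == b'J\xc3\xbcrgen'
```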
Here is a list of web-related libraries operating on unicode (just a small pick): Django, Pylons, TurboGears 2, WebOb, Werkzeug, Jinja, SQLAlchemy, Genshi, simplejson, feedparser, and the list goes on. What these libraries have, and what a protocol like WSGI does not, is knowledge of the encoding used. Why? Because in practice (not on paper) encodings on the web are very simple and driven by the application: the encoding the application sends out is the encoding that comes back. It's as simple as that. WSGI however does not have that knowledge, because how would you tell WSGI what encoding to assume? There is no configuration for WSGI, so the only thing we could do is force a specific charset for WSGI applications on Python 3 if we want to get unicode onto that layer: say, utf-8 for everything except headers, which should be latin1 for RFC compliance. Byte Based WSGI On Python 2, WSGI is based on bytes. If we went with bytes on Python 3 as well, the specification for Python 3 would look like this:

- WSGI environ keys are unicode
- WSGI environ values that contain incoming request data are bytes
- headers, chunks in the response iterable, as well as the status code, are bytes as well

If we ignore everything else that makes this approach hard on Python 3 and only look at the bytes object, which just does not behave like a standard string any more, a WSGI library based on the standard library's functions and the bytes type is quite complex compared to the Python 2 counterpart.
Take the very simple code commonly used to reproduce a URL from the WSGI environment on Python 2:

def get_host(environ):
    if 'HTTP_HOST' in environ:
        return environ['HTTP_HOST']
    result = environ['SERVER_NAME']
    if (environ['wsgi.url_scheme'], environ['SERVER_PORT']) not \
       in (('https', '443'), ('http', '80')):
        result += ':' + environ['SERVER_PORT']
    return result

def get_current_url(environ):
    rv = '%s://%s/%s%s' % (
        environ['wsgi.url_scheme'],
        get_host(environ),
        urllib.quote(environ.get('SCRIPT_NAME', '').strip('/')),
        urllib.quote('/' + environ.get('PATH_INFO', '').lstrip('/'))
    )
    qs = environ.get('QUERY_STRING')
    if qs:
        rv += '?' + qs
    return rv

This depends on many string operations and is entirely based on bytes (like URLs are). So what has to be changed to make this code work on Python 3? Here is an untested version of the same code adapted to theoretically run on a byte-based WSGI implementation for Python 3. The get_host() function is easy to port because it only concatenates bytes. This works exactly the same on Python 3, although we could theoretically improve on it by switching to bytearrays, which are mutable bytes objects and in theory give us better memory management. But here is the straightforward port:

def get_host(environ):
    if 'HTTP_HOST' in environ:
        return environ['HTTP_HOST']
    result = environ['SERVER_NAME']
    if (environ['wsgi.url_scheme'], environ['SERVER_PORT']) not \
       in ((b'https', b'443'), (b'http', b'80')):
        result += b':' + environ['SERVER_PORT']
    return result

The port of the actual get_current_url() function is a little different because the string formatting feature used in the Python 2 implementation is no longer available:

def get_current_url(environ):
    rv = (
        environ['wsgi.url_scheme'] + b'://' +
        get_host(environ) + b'/' +
        urllib.quote(environ.get('SCRIPT_NAME', b'').strip(b'/')) +
        urllib.quote(b'/' + environ.get('PATH_INFO', b'').lstrip(b'/'))
    )
    qs = environ.get('QUERY_STRING')
    if qs:
        rv += b'?' + qs
    return rv

The example did not necessarily become harder, but it did become a little more low level. When the developers of the standard library ported over some of the functions and classes related to web development, they decided to introduce unicode in places where it does not really belong. It's an understandable decision based on how byte strings work on Python 3, but it does cause some problems. Here is a list of places where we now have unicode, where we previously did not. Not judging here on whether the decision to introduce unicode there was right or wrong, just that it happened:

- All the HTTP functions and servers in the standard library now operate on latin1 encoded headers. The header parsing functions will assume latin1 and pass unicode to you. Unfortunately right now, Python 3 does not support non-ASCII headers at all, which I think is a bug in the implementation.
- The FieldStorage object assumes a utf-8 encoded input stream as far as I understand, which currently breaks binary file uploads. This apparently is also an issue with the email package, which internally is based on a common mime parsing library.
- urllib also got unicode forcibly integrated. It assumes utf-8 encoded strings in many places and does not support other encodings for some functions, which is something that can be fixed. Ideally it would also support operations on bytes, which is currently only the case for unquoting but none of the more complex operations.

There are some other places as well where unicode appeared, but these are the ones causing the most trouble, besides the bytes-not-being-a-string thing. The about-to-be Spec
Now, what most of WEB-SIG later agreed with, and what Graham ultimately implemented for mod_wsgi, is a fake-unicode approach. What does this mean? All the information is stored as unicode, but not decoded with the proper encoding (which WSGI could not know); latin1 is simply assumed. If latin1 is not what the application expected, the application can encode the value back to latin1 and decode it from utf-8. (As far as I know, this is loss-less.) Here is what the current specification looks like, which is about to be crafted into a PEP:

- The application is passed an instance of a Python dictionary containing what is referred to as the WSGI environment. All keys in this dictionary are native strings. For CGI variables, all names are going to be ISO-8859-1, and so where native strings are unicode strings, that encoding is used for the names of CGI variables.
- For the WSGI variable 'wsgi.url_scheme' contained in the WSGI environment, the value of the variable should be a native string.
- For the CGI variables contained in the WSGI environment, the values of the variables are native strings. Where native strings are unicode strings, ISO-8859-1 encoding would be used such that the original character data is preserved and, as necessary, the unicode string can be converted back to bytes and thence decoded to unicode again using a different encoding.
- The WSGI input stream 'wsgi.input' contained in the WSGI environment, and from which request content is read, should yield byte strings.
- The status line specified by the WSGI application should be a byte string. Where native strings are unicode strings, the native string type can also be returned, in which case it would be encoded as ISO-8859-1.
- The list of response headers specified by the WSGI application should contain tuples consisting of two values, where each value is a byte string. Where native strings are unicode strings, the native string type can also be returned, in which case it would be encoded as ISO-8859-1.
- The iterable returned by the application, and from which response content is derived, should yield byte strings. Where native strings are unicode strings, the native string type can also be returned, in which case it would be encoded as ISO-8859-1.
- The value passed to the 'write()' callback returned by 'start_response()' should be a byte string. Where native strings are unicode strings, a native string type can also be supplied, in which case it would be encoded as ISO-8859-1.
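The encode-back-and-decode-again dance that the spec describes can be sketched in a few lines of Python. The helper name here is hypothetical, not part of the spec:

```python
def reinterpret(native_str, charset='utf-8'):
    # The server decoded the raw request bytes as ISO-8859-1 (latin1) to
    # produce a native string; encoding back to latin1 recovers those bytes
    # losslessly, after which we can decode with the charset we actually expect.
    return native_str.encode('latin1').decode(charset)

# Raw path bytes containing utf-8 encoded umlauts, as sent on the wire:
raw = b'/gr\xc3\xbc\xc3\x9fe'
environ_value = raw.decode('latin1')   # what a latin1-based WSGI would store
print(reinterpret(environ_value))      # '/grüße'
```

The latin1 step is what makes the scheme loss-less: every byte value 0-255 maps to exactly one code point, so no information is destroyed before the application picks the real charset.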
|
This post may contain affiliate links. See my policy page for more info. El Chorro, Spain Balancing precariously on a rusty steel beam, I slowly hike across the Caminito del Rey, trying not to glance down at the treacherous river hundreds of feet below me. UPDATE: After my hike along the Caminito del Rey in 2014, the route has since been completely restored by the government, open and safe for all tourists to visit. I’d traveled to this remote corner of Andalucia in the South of Spain to hike the Caminito del Rey. This path is famous around the world among rock climbers and adrenaline junkies due to its shocking state of disrepair. Just looking up at the hazardous path, full of holes and missing sections, sent a shiver of fear down my spine. It barely clings to the vertical canyon walls it’s attached to, ready to crumble at any moment. Known as Spain’s most dangerous path, or the most dangerous walkway in the world, the Caminito del Rey (The King’s Little Pathway) is over 100 years old and 100 meters (350 feet) high. The perilous concrete trail winds through steep limestone cliffs near the small village of El Chorro and into a hidden valley. Would I really go through with this risky journey? By myself? I was starting to have second thoughts… Hiking The Caminito Del Rey Walking the entire length of the 3 kilometer Caminito (sometimes called the Camino del Rey) has become an exclusive adventure sport for people crazy enough to attempt it. There are numerous sections where the concrete has collapsed, creating large open-air gaps that are bridged by very narrow steel beams, themselves often rotting away. A via ferrata cable runs the length of the path though, allowing hikers to clip in with a harness. You need to bring your own gear or rent it from a climbing shop. Or you can make your own Swiss Seat (like I did) with some webbing, a climbing rope, and a few carabiners!
However, the integrity of the safety cable running the length of the path is unknown, as it’s not officially maintained by anyone. So you must rely on it at your own risk. Armed with my trusty Luna Sandals made for trail running, and a backpack loaded with gear, I spent 4 days hiking the walkway over 8 times, filming video with my GoPro camera along the way. [Photo: The Path is 350 Feet High in Some Places] [Photo: The Caminito del Rey in El Chorro Canyon] Dangerous & Beautiful On the hike itself, the wind whips through the narrow canyon, testing my nerves as I carefully place one foot in front of the other, hoping my next step isn’t my last. I’m not the only one attempting to conquer my fears though; there are other adventurous hikers up here flirting with death. Sometimes we must pass each other, which can be complicated on a 1-meter wide path full of holes. In many places the entire path has completely fallen away, leaving just a three-inch wide steel beam to balance on. Other sections don’t even have beams — forcing you to cling to the face of the rock. The Caminito del Rey is made up of two different sections. They each traverse a narrow area of the Gaitanes Gorge, with a stunning hidden valley located between them. “The Valley of the Orange” is completely surrounded by mountains, with orange trees growing near the Guadalhorce river as it flows through the middle. There’s even an old ruined house at the bottom. Fewer hikers attempt (or know about) the second part of the walkway. Much of it has no safety line, and a few very sketchy sections require some rock climbing skills to pass. After about 3 hours I finished this wild adventure at the far end of the valley. Luckily in one piece. History Of The Path The walkway was completed in 1905 after 4 years of construction so workers could move materials back and forth between the two hydroelectric power plants at Chorro Falls and Gaitanejo Falls on either end of the canyon.
A water canal also weaves its way through tunnels in the mountains. The suspended catwalk allowed easy access to this canal for inspections and maintenance work, controlling the flow of water when necessary using a series of steel doors lowered into the canal with gears. Spanish King Alfonso XIII inaugurated the pathway in 1921, which is why it’s now called “The King’s Little Pathway”. The King himself walked the length of it to marvel at the beautiful & scenic landscape. Deaths On The Caminito There have been at least 5 deaths on the Caminito del Rey, the most recent few occurring in 2000, and many more accidents over the years. The path hasn’t been maintained since the 1920s — rust eats away at many of the metal support beams. Large gaping holes in the concrete are common. Sometimes whole sections of the treacherous walkway are completely missing after they’ve crashed down to the bottom of the canyon 100 meters (350 feet) below. If you’re afraid of heights, it’s the stuff of nightmares. New Path Restoration Work just finished on a €3.12 million restoration program in 2015 that transformed the entire walkway into a much safer route, opening up the path to more people and regular tourism. The aging concrete was replaced with wooden slats and glass panels with a handrail. While more people will now get to enjoy the views of this magnificent canyon, sadly the adventurous spirit of the Camino has changed now that it’s fixed up. I’m very lucky I was able to hike it when I did! Hiking the Caminito del Rey ranks right up there with camping on an erupting volcano and cageless scuba diving with bull sharks as one of the craziest adventure travel experiences I’ve ever had. Watch Video: The Caminito Del Rey in Spain Subscribe to my YouTube Channel for new Adventure Travel Videos!
How do you feel about the Caminito del Rey getting fixed? Would you have hiked this route? Share your thoughts in the comments below!
|
Ben Shpigel and David Waldstein provided updates and analysis during Game 5 of the Dodgers-Phillies National League Championship Series. PHILADELPHIA — Unsatisfied by their last championship and determined to repeat recent history, the Philadelphia Phillies took another step in their quest to win the World Series again. The Phillies have been in existence since 1883, the longest tenure of any professional sports team in one city, with one name. But it took them until Wednesday night to earn a rare distinction. With an emphatic 10-4 victory against the Los Angeles Dodgers in Game 5 of the National League Championship Series, the Phillies have now won consecutive National League pennants for the first time in their 126-year history, and their seventh over all. It is also the second year in a row the Phillies beat the Dodgers in five games. Their next appointment is with the winner of the American League Championship Series between the Los Angeles Angels and the Yankees, who lead that series three games to one. The Phillies relied on the long ball to win this game. Jayson Werth hit two of them, a three-run homer in the first inning and a solo shot in the seventh. Shane Victorino added a two-run shot in the sixth, and Pedro Feliz hit a solo home run for the Phillies, who needed only eight hits to score their runs against shaky Dodgers pitching. The Dodgers issued four walks and hit three batters, and four of those free passes were converted into runs. The Phillies scored their final run on a wild pitch by Ronald Belisario. Phillies closer Brad Lidge got the final out as Ronnie Belliard flied out to center fielder Shane Victorino, igniting a celebration both on the field and in the stands, where 46,214 deliriously happy fans, most of whom were wearing red, waved white handkerchiefs and cheered their team’s return to the World Series as the Dodgers filed quietly into the clubhouse. 
The Dodgers had the best record in the National League, but the season came to a disappointing end, particularly for Manager Joe Torre, who has led two teams to 14 consecutive playoff appearances, but hasn’t won the World Series since 2000 with the Yankees. — DW Victorino just barely missed his second home run. The ball was ruled a double because of fan interference. Victorino seemed to have slightly injured something — perhaps his hand — on the swing, but he’s still in the game. Rollins just scored the 10th run on a wild pitch. The Phillies have two more runs than hits. The entire left-field bleachers were chanting “Take a shower” at Ramirez, but surprisingly, he seemed completely oblivious. — DW Ryan Madson gets out of the eighth. That’s clutch pitching, because three different Dodgers came to the plate with a chance to draw the Dodgers to a run behind with one swing off the bat. But Loney fouled out, Martin struck out swinging, and Blake grounded into a fielder’s choice to end the threat. The Phillies need only three more outs and Brad Lidge is warming up hard in the bullpen. It’s not a save situation, but Manuel would love to have Lidge on the mound for the final out and the celebration. It would be another confidence boost in a nice postseason for the closer, who came into the postseason as a rather large question mark. — DW How about this? The potential tying run is on deck. Just minutes after we declared this game over, the Dodgers have the bases loaded with nobody out in the eighth. Oops, one out as Loney fouls out. It’s 9-4 and Russell Martin is up with Casey Blake on deck. Some fans in front of the press box were chanting “Take a shower” at Manny Ramirez, who came to the plate with two runners on base in the eighth. Ramirez walked and Matt Kemp hit an R.B.I. single to center. — DW Werth goes deep again. This time he straightened it out and hit it to center off Hong-Chih Kuo. It is the seventh home run of the game and only the first to center. 
This may not be the most provocative thought written in this space tonight, but suffice it to say this game is over. The Phillie Phanatic seems to be very confident right now. It’s 9-3 as we enter the eighth. — DW Get those showers ready. Shane Victorino just launched the first pitch he saw from Clayton Kershaw in the sixth inning to reawaken the crowd and give the Phillies an 8-3 lead. With two outs, Kershaw hit Jimmy Rollins with a pitch — the third Phillie to be hit — and Victorino made him pay. That’s the sixth home run of the game. The first four were to right field. The last two have been to left. If the Yankees are indeed the team the Phillies play in the World Series — barring a nearly impossible comeback by the Dodgers, of course — it will be scary to see what the Yankees sluggers can do in this ballpark. Amazingly, the Phillies have eight runs on only five hits. Chan Ho Park is pitching for the Phillies in the seventh. — DW The pace of this game has slowed. They’ve been playing for almost two and a half hours and it’s only the top of the sixth. As a result, the atmosphere in the park has quieted considerably, even with the Phillies poised to win the pennant. Fans are sitting in their seats waiting for things to happen. They showed a little life just now as Chad Durbin set the Dodgers down in order. Only nine outs to go. — DW Ramirez just hit a weak roller off the handle of his bat and was thrown out at first without much effort. So the rally dies a weak death. Ramirez, who took a foul ball off his left shin on the second pitch of the at-bat and was in considerable pain, initially seemed to think the ball he hit fair was in fact foul. When he realized it was fair, he ran three-quarters speed three quarters of the way down the line as Durbin fielded the ball, then jogged the rest of the way to first. Ramirez may have been in pain, but it doesn’t look good in the face of elimination. Phillies still lead, 6-3. — DW A huge at-bat coming up. 
Manny Ramirez is coming to the plate as the potential tying run, with Furcal on second and Ronnie Belliard on first with two outs. Manuel took out the left-hander J.A. Happ and is bringing in the right-hander Chad Durbin. Manny is 2 for 8 with a home run and three walks against Durbin. — DW Interesting move by Charlie Manuel here: Not that he’s pulled Hamels — who showed poor body language after giving up the homer to Orlando Hudson and the double to Rafael Furcal — but that he chose the left-hander J.A. Happ to face a right-handed-heavy top of the Dodgers’ lineup. Manuel clearly feels more comfortable with Happ, a rookie, than with the other potential long man, the right-hander Chad Durbin. — BS Orlando Hudson, pinch-hitting for Sherrill, just answered the previous question. If his last at-bat is any indication, the Dodgers are going to battle. Hudson, whose optimism and positive outlook are generally uncontainable, just curled a 2-1 pitch around the foul pole in left. The Dodgers close the gap to 6-3. — DW George Sherrill hit Victorino with a pitch on a 3-2 count to force in a run. It’s now 6-2, Phillies, heading into the fifth. Padilla was charged with all six runs in just 55 pitches over three-plus innings. That’s a run every 9.17 pitches. Not an enviable ratio. The question now is, do the Dodgers keep battling, or do they start just hacking away with one foot on that plane back to L.A.? — DW Not only is Hamels pitching decently, he’s also handling the bat well here in the fourth. He just laid down a perfect sacrifice on a 3-2 pitch to push Ibanez and Ruiz over to second and third for the next batter, Rollins. On a 3-1 pitch, Hamels squared and then pulled back the bat for a called strike on a slightly high pitch. He told the plate umpire, Tom Hallion, that he pulled it back, and when he was told that it was a called strike, he didn’t argue. Hamels wants those pitches called strikes for him, too, especially since the Phillies already have their runs. 
Troncoso just hit Jimmy Rollins with a pitch, so the bases are loaded. Torre’s bringing in the left-hander George Sherrill. If Shane Victorino gets a hit, Manny is going straight to the showers. — DW The Phillies are piling on. Raul Ibanez just ripped a double to the gap in right-center, scoring Werth, who had singled to left, to make the score 5-2, and that’s all for Padilla. The former Phillie just didn’t have it tonight, and the Dodgers are going to the right-hander Ramon Troncoso. Padilla received a huge sarcastic cheer as he walked off. Phillies fans can smell another pennant. — DW Clayton Kershaw, the presumed Game 6 starter for the Dodgers, was loosening in the bullpen. Perhaps that was the motivation Padilla needed because he just recorded his first 1-2-3 inning. In fact, it was his first inning without a home run. Kershaw is sitting down again. Torre said Tuesday it would be “all hands on deck,” baseball’s terminology for any pitcher being available to pitch out of the bullpen. Ramirez is leading off the top of the fourth inning. He looks very clean from here. — DW Padilla did it again. Pedro Feliz just hit the fourth home run of the game, also to right field. This came after the Dodgers had cut the deficit to one on Loney’s homer. Not surprisingly, the Dodgers already have action in their bullpen. 4-2, Phillies. — DW The Dodgers got one back as James Loney led off the second inning with a solo home run to right, where all three homers have landed. Funny, no discernible wind in that — or any — direction. So the Dodgers have two home runs to the Phillies’ one, but Philadelphia leads, 3-2. There’s a perfect demonstration of why walks hurt so much. When Werth homered, Torre sat motionless in the Dodgers dugout with a very sour expression. I’ve definitely seen that look on his face recently. Oh yes, it was during batting practice, when he was talking to the former Yankees reliever Jeff Nelson. Nelson always had that effect on Torre. 
— DW What’s the worst thing a pitcher can do after his team stakes him to a lead? Give it right back. Vicente Padilla did that and then some, surrendering a three-run homer to Jayson Werth with two outs in the bottom of the first to make the score 3-1. Padilla got the first two outs without much problem but then completely lost the strike zone. He walked Chase Utley and Ryan Howard, and then fell behind on Werth, 3-0, bringing the fans to their feet waving their hankies and chanting “Beat L.A.” as if it were the 1984 N.B.A. finals in Boston. Padilla regained some of his control to get it to a full count, but Werth then sent a 3-2 pitch into the stands in right field. The crowd went crazy. — DW The Dodgers look as if they came to play. Andre Ethier just clubbed a solo home run to give them a 1-0 lead and silence the crowd. Ramirez followed with a single to right, but Hamels struck out Matt Kemp to end the inning. For Hamels, it was the fourth home run he’s allowed in the postseason. — DW First pitch was 8:07 on the dot and the temperature is a balmy 63 degrees. Wow, it’s actually baseball weather. Hamels is off to a good start, striking out Rafael Furcal. Ronnie Belliard is up, with Andre Ethier on deck. Manny Ramirez is taking a bath. — DW Preview PHILADELPHIA — The Dodgers and the Phillies are about to take the field for Game 5 of the National League Championship Series at Citizens Bank Park, where the pennant could be decided in the next few hours. There are no surprises in the lineup, as the Dodgers send out the former Phillies right-hander Vicente Padilla, who is looking to forestall a Philadelphia celebration and send the series back to Los Angeles for a Game 6. Cole Hamels, the Phillies’ struggling left-hander, goes for the home team. Video The national anthem has been sung, the infield is groomed and ready, and Hamels will throw the first pitch in a matter of moments. Dodgers Manager Joe Torre is relaxed and ready for the game. 
Torre, who once played for The Boss, went to see another Boss, Bruce Springsteen, here in Philadelphia on Tuesday night. Torre was asked if Springsteen played his song “No Surrender.” “He did sing ‘No Surrender,'” Torre said, adding: “‘Glory Days,’ too. Don’t forget that.” — David Waldstein Here are the lineups for Game 5: Dodgers 1. Rafael Furcal, SS 2. Ronnie Belliard, 2B 3. Andre Ethier, RF 4. Manny Ramirez, LF 5. Matt Kemp, CF 6. James Loney, 1B 7. Russell Martin, C 8. Casey Blake, 3B 9. Vicente Padilla, RHP Phillies 1. Jimmy Rollins, SS 2. Shane Victorino, CF 3. Chase Utley, 2B 4. Ryan Howard, 1B 5. Jayson Werth, RF 6. Raul Ibanez, LF 7. Pedro Feliz, 3B 8. Carlos Ruiz, C 9. Cole Hamels, LHP
|
Signup to receive a daily roundup of the top LGBT+ news stories from around the world The head of Lucasfilm has confirmed that discussions have taken place about a potential same-sex romance in Star Wars. 2015’s Star Wars: The Force Awakens featured a close bond between Resistance pilot Poe Dameron (Oscar Isaac) and stormtrooper Finn (John Boyega) – with rumours spreading that Disney is setting up a gay romance between two of the film’s main characters. Isaac added fuel to the fire by hinting at an undisclosed romance of some kind, adding: “I think it’s very subtle romance that’s happening. You know, you have to just look very close… you have to watch it a few times to see the little hints. But there was… I was playing a romance.” Boyega initially shut down reports, but later hinted he was open to the idea, adding: “Mark Hamill didn’t know that Darth Vader was Luke’s father: you never know what they’re going to pull.” In an interview with eCartelera, Lucasfilm head Kathleen Kennedy said: “We’ve talked about it, but I think you’re not going to see it in [upcoming film] The Last Jedi.” She added: “After 40 years of adventures people have a lot of information and a lot of theories about the way these stories can take, and sometimes those theories that come up are new ideas for us to listen to, read and pay attention to. “[It’s] clear that the fans are as much masters of this franchise as we are.” Beyond Finn and Poe, Star Wars fans have also suggested that Rogue One: A Star Wars Story characters Chirrut Îmwe (Donnie Yen) and Baze Malbus (Jiang Wen) are in a same-sex relationship, which the film’s director Gareth Edwards refused to rule out. Meanwhile, Mark Hamill continues to insist that the sexuality of his iconic Star Wars character, Luke Skywalker, is open to interpretation. He said: ” Fans are writing and ask all these questions, ‘I’m bullied in school… I’m afraid to come out’. 
They say to me, ‘Could Luke be gay?’ I’d say it is meant to be interpreted by the viewer… If you think Luke is gay, of course he is. “You should not be ashamed of it. Judge Luke by his character, not by who he loves.” Last year, Star Wars novel writer Chuck Wendig issued an epic response to critics who were unhappy with the introduction of a major new gay character into the Star Wars book universe. Wendig had introduced the character in his novel Star Wars: Aftermath. Lesbian Jedi Juhani featured in the 2003 game Star Wars: Knights of the Old Republic – making her technically the first gay Star Wars hero. The new films, however, are set in a different universe to both the books and the game. (It’s complicated.)
|
HARTFORD, Ill. (AP) – A state historic site is hosting a day of baseball next month when games will be played by 1860s rules. The Lewis and Clark State Historic Site in Hartford will host the games on May 6 at 10 a.m. Visitors are invited to bring lawn chairs or blankets for the free event. In the 1860s, baseball was pitched underhand and the distance from the pitcher to home base was 45 feet, not the 60 feet 6 inches it is today. The competitors will be the Vandalia Old Capitals, Springfield Long Nines, Alton Baseball Club and the Belleville Stags. William Clark arrived at the site that became Camp River Dubois in December 1803. That spring the party made preparations for the expedition, which formally began the next year.
|
[Image caption: President Santos has vowed to continue a tough stance against armed groups] Colombian troops have been given the order to destroy houses used by rebels to attack civilians or government forces, President Juan Manuel Santos says. He was speaking after left-wing Farc guerrillas attacked several towns in the west of the country, killing at least three people and injuring dozens. Mr Santos says the attacks showed the rebels' "desperation" as security forces closed in on their leaders. In recent months, guerrillas have stepped up hit-and-run attacks. President Santos announced a range of measures in the wake of the violence in Cauca department. Security would be stepped up, he said, and another mountain battalion would be created in Tacueyo, to operate "in a zone traditionally used by guerrillas as a corridor and a sanctuary". "We've taken the decision that from now on, security personnel will destroy any house being used by terrorists to attack government forces or civilians. No more using houses to shoot at security forces or at civilians," Mr Santos said, after holding a security meeting. Over the weekend, Farc rebels staged a series of attacks in Cauca, long plagued by violence from armed groups. Worst hit was Toribio, where rebels drove a small bus laden with explosives into the local police station. [Image caption: Toribio was holding its Saturday market when the rebels struck] Several houses were also destroyed. Two civilians and a policeman are reported to have died as rebels and officers exchanged fire. Toribio has repeatedly been targeted by the Farc, Colombia's largest left-wing rebel group. The town is located in a mountainous area where the Farc's Sixth Division is active. Farc rebels also targeted the towns of Corinto, Caldono, Jambalo and Santander. President Santos said the attacks were a cowardly response to the successes of the security forces in the area.
"We know exactly what they're thinking: they're doing their best to distract the security forces because we're taking away their sanctuaries and lairs," he added. President Santos was referring to a recent raid by the security forces in which, he said, they came close to catching Farc leader Alfonso Cano. According to Mr Santos, Mr Cano left the camp only 12 hours before soldiers moved in.
|
Gatestone Institute reported: A new report by the Robert Koch Institute (RKI), the federal government’s central institution for monitoring and preventing diseases, confirms an across-the-board increase in disease since 2015, when Germany took in an unprecedented number of migrants. The Infectious Disease Epidemiology Annual Report — which was published on July 12, 2017 and provides data on the status of more than 50 infectious diseases in Germany during 2016 — offers the first glimpse into the public health consequences of the massive influx of migrants in late 2015. The report shows increased incidences in Germany of adenoviral conjunctivitis, botulism, chicken pox, cholera, cryptosporidiosis, dengue fever, echinococcosis, enterohemorrhagic E. coli, giardiasis, haemophilus influenza, Hantavirus, hepatitis, hemorrhagic fever, HIV/AIDS, leprosy, louse-borne relapsing fever, malaria, measles, meningococcal disease, meningoencephalitis, mumps, paratyphoid, rubella, shigellosis, syphilis, toxoplasmosis, trichinellosis, tuberculosis, tularemia, typhus and whooping cough. Germany has — so far at least — escaped the worst-case scenario: most of the tropical and exotic diseases brought into the country by migrants have been contained; there have been no mass outbreaks among the general population. More common diseases, however, many of which are directly or indirectly linked to mass migration, are on the rise, according to the report. The incidence of Hepatitis B, for example, has increased by 300% during the last three years, according to the RKI. The number of reported cases in Germany was 3,006 in 2016, up from 755 cases in 2014. Most of the cases are said to involve unvaccinated migrants from Afghanistan, Iraq and Syria. The incidence of measles in Germany jumped by more than 450% between 2014 and 2015, while the number of cases of chicken pox, meningitis, mumps, rubella and whooping cough was also up.
Migrants also accounted for at least 40% of the new cases of HIV/AIDS identified in Germany since 2015, according to a separate RKI report.
|
Vulva, plural vulvae, the external female genitalia that surround the opening to the vagina; collectively these consist of the labia majora, the labia minora, clitoris, vestibule of the vagina, bulb of the vestibule, and the glands of Bartholin. All of these organs are located in front of the anus and below the mons pubis (the pad of fatty tissue at the forward junction of the pelvic bones). The labia majora are two thick folds of skin running from the mons pubis to the anus. The outer sides of the labia are covered with pigmented skin, sebaceous (oil-secreting) glands, and after puberty, coarse hair. The inner sides are smooth and hairless, with some sweat glands. Beneath the skin layer, there is mostly fatty tissue with some ligaments, smooth muscle fibres, nerves, and blood and lymphatic vessels. The labia majora correspond to the scrotum in the male. Directly beneath the mons pubis and between the labia majora is a small structure of erectile tissue known as the clitoris. It is capable of some enlargement caused by increased blood pressure during sexual excitement and is considered homologous (comparable in structure) to the male penis, only on a much smaller scale. Unlike the penis, the clitoris does not contain the urethra for excretion of urine; it does have a rounded elevation of tissue at the tip known as the glans clitoridis. Surrounding the glans clitoridis on two sides are the beginning folds of the labia minora. These folds are known as the prepuce (or foreskin) of the clitoris. Like the glans penis, the glans clitoridis contains nerve endings and is highly sensitive to tactile stimulation.
The labia minora, two smaller folds of skin between the labia majora, surround the vestibule of the vagina; they have neither fat nor hairs. The skin is smooth, moist, and pink and has sebaceous and sweat glands. The vestibule of the vagina begins below the clitoris and contains the openings of the urethra, the vagina, and the ducts of the two glands of Bartholin. The urethral opening is a small slit located closest to the clitoris; through this opening urine is excreted. Below the urethral opening is the larger vaginal orifice. The two Bartholin ducts open on each side of the vaginal orifice; these glands secrete mucus (a thick protein compound) and frequently are sites of infection. Each gland is about 1 cm (0.4 inch) in diameter; after the 30th year, they gradually diminish in size. The vaginal orifice is surrounded or somewhat covered by a membranous fold of skin known as the hymen; any of a variety of activities can cause the hymen to stretch or tear. Running along the sides of the vestibule are two elongated bodies of erectile tissue known as the bulb of the vestibule. Many mucous glands are also present in the vestibular region. Both the labia minora and labia majora tend to cover the vestibule.
|
You have a strong cast of actors like Michael Fassbender, Rebecca Ferguson and J.K. Simmons. You have well-regarded source material from popular writer Jo Nesbo. Most importantly though, you have director Tomas Alfredson coming off two stone-cold classics with “Let the Right One In” and “Tinker Tailor Soldier Spy”. So why in the world has this week’s “The Snowman” been thoroughly panned by critics? Scoring just 34/100 on Metacritic and a 26% (3.8/10) score on Rotten Tomatoes, the film is without a doubt one of the worst reviewed films out there in cinemas now. In recent weeks, there have also been rumors about it being a heavily compromised production. Now, in an interview with Norwegian Broadcasting Corporation NRK (via Indiewire), Alfredson explains why the movie ended up being a misfire, with a lot of it having to do with him being brought onto the project late in the game and the film itself being rushed too fast into production. Martin Scorsese was originally slated to direct the film but stepped back into an executive producing role. Though it had been in development for some time, once Alfredson came on board things suddenly started moving too fast, with the director left scrambling to catch up: “It happened very abruptly. Suddenly we got notice that we had the money and could start the shoot in London. Our shoot time in Norway was way too short, we didn’t get the whole story with us and when we started cutting we discovered that a lot was missing.” He goes on to say around 10-15% of the movie’s screenplay was not filmed during production, leaving huge story gaps that needed to be filled in the editing room: “It’s like when you’re making a big jigsaw puzzle and a few pieces are missing so you don’t see the whole picture.” “The Snowman” opens in U.S. theaters this Friday.
|
C# (or maybe C++) compiler by Anton Golov (3809277) Frivolous changes to existing code: * Removed unused keywords/types, including try/catch, floating point types, and array support in the grammar. Core features implemented (1-6): * Boolean and character constants are handled and have the correct type. This makes the latter effectively useless, as they can only be stored and printed; arithmetic on chars is not implemented. * The operators have the correct priorities. All operators except the assignment and comparison operators are left-associative. Assignment is right-associative, and comparison is non-associative. * Static method calls work fine. At the moment, only static methods of the current class can be called. The caller is responsible for cleanup. * The print syntax works. For efficiency reasons, the arguments are evaluated right-to-left. * Methods can have a result. It is returned via the result register. * Local variables can be used. Bonus features implemented (7, 9, 11, 12, 13, 14): * Comments of both types are discarded by the lexer. * The for statement is implemented in the parser, and internally treated as a while statement. * The code generator evaluates both logical operators lazily. * Static member functions and variables may be defined, and are accessed using the C++ syntax of ClassName::FunctionName(args) or ClassName::VariableName. An unqualified function call is assumed to be a static function call to a function of the current class. All other kinds of member access must be qualified. Static members are placed near the vtable, and lead to "code modified" warnings. Please ignore. Derived classes have their own instances of statics. The current implementation of statics is suboptimal, as all but the first static member requires an addition to access. This could be optimised out by generating a label for every static member or allowing the assembler to handle statements like LDC label+3. * The following error messages are produced: * Undeclared local. 
* Undeclared static function. * Undeclared static member. * Undeclared non-static function. * Undeclared non-static member. * Nonexistent type. * Usage of rvalues as lvalues. * Type mismatch. The error-reporting itself is lacking; lack of position information and crude checks means the messages aren't particularly useful. Oh, and they only come one at a time, and stop all further compilation. * Objects can be created using the new syntax. New requires parentheses but does not allow for passing arguments (constructors are not implemented). Objects are stored on a heap, the size of which can be adjusted with the --heapsize flag. Every allocated object has an overhead of 1 byte. Objects can be destroyed using delete, as in C++. Any kind of silly usage of the heap is undefined behaviour, no diagnostic required. :) Bonus bonus features implemented: * Local variables may be initialised at the point of declaration. * The type of such variables may be left for the compiler to deduce using "var". * A number of optimisations are applied. * Classes may inherit from other classes using the standard C# syntax. The derived class receives all members of the base class, and a reference to a derived class object can be converted to a reference to a base class object. Deletion via base class objects works correctly. Virtual function access suffers the same problem as static member access. * All non-static functions are by default polymorphic, and can be overridden. * C++-style lambda functions are supported. The syntax is [captures](parameters) -> return_type { body }, where captures is a comma-separated list of locals to capture. The return type may be omitted, together with the arrow, in which case void is assumed. All captures happen by-value. Captures are treated as locals within the lambda body; they may be assigned to, but this will not affect the value in subsequent calls. A lambda expression allocates memory, which can be reclaimed by deleting the resulting function reference. 
* Lambdas have the type @rt(args), where rt is the return type, and the arguments are not given names. Lambdas may be passed around as any object. * Conversions of function types take into account conversions of the parameter and return types; if D is derived from B, @D(B) will convert into a @B(D) just fine (but not the other way around!). * Converting member functions to a lambda type will fail because the name won't be found. :) (Ran out of time before I could implement this.) * A lambda x can be called using the syntax %x(args). Some comments on the architecture: * Due to the overwhelming complexity, the algebra has been split into three pieces: * The first phase (CSharpSTBuild) builds the symbol table and does some basic sanity checking at class-level. * The second phase (CSharpTypeCheck) does type checking and extracts all information for generating expressions from the global environment. * The third phase generates the actual code. * The second algebra transforms all expressions and some statements into a slightly different form. All references to types are removed, and instead any global level information is filled directly in. This allows for a much simpler implementation of the code generator. * There are a number of optimisers. These are implemented as parsers that run over the SSM code, matching anything they can optimise and replacing it with the updated version, while keeping anything not matched the same. * The heap is implemented as a circular singly linked list. Only free blocks are linked; they contain their size and a pointer to the next block. Allocation involves splitting off the front of a free block and removing it from the linked list. Both allocation and deallocation are linear in the size of the list in the worst case.
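The heap scheme above can be modelled in a few lines. The sketch below is a Python toy, not the compiler's actual code: the names (Block, Heap, alloc, delete) and the cell-granular addressing are assumptions for illustration. Only free blocks are linked, circularly; allocation is first-fit and splits the requested amount off the front of a free block; every object pays a one-cell header (the 1-byte overhead mentioned above); freed blocks are simply pushed back onto the list, with no coalescing.

```python
class Block:
    """A free block: its address, its size, and the next free block."""
    def __init__(self, addr, size):
        self.addr, self.size, self.next = addr, size, self  # circular

class Heap:
    def __init__(self, size):
        self.free = Block(0, size)   # one free block spans the whole heap
        self.headers = {}            # payload addr -> block size (the header cell)

    def _free_blocks(self):
        out, b = [], self.free
        while b is not None:
            out.append(b)
            b = b.next
            if b is self.free:       # wrapped around the circle
                break
        return out

    def alloc(self, n):
        """First fit: split n + 1 cells off the front of a free block."""
        need = n + 1                          # one header cell of overhead
        blocks = self._free_blocks()
        for i, b in enumerate(blocks):        # linear in the list length
            if b.size < need:
                continue
            addr = b.addr
            if b.size > need:                 # split: shrink the block in place
                b.addr += need
                b.size -= need
            else:                             # exact fit: unlink the block
                prev = blocks[i - 1]          # circular predecessor
                if prev is b:
                    self.free = None          # it was the last free block
                else:
                    prev.next = b.next
                    if self.free is b:
                        self.free = prev
            self.headers[addr + 1] = need
            return addr + 1                   # address just past the header
        raise MemoryError("heap exhausted")

    def delete(self, addr):
        """Return a block to the free list (no coalescing in this sketch)."""
        b = Block(addr - 1, self.headers.pop(addr))
        if self.free is None:
            self.free = b                     # b.next already points to b
        else:
            b.next, self.free.next = self.free.next, b
```

Because only the free blocks are threaded, both alloc and delete touch at most the free list, giving the linear worst case noted above.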
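The optimisers described above run as pattern-match-and-replace passes over the SSM code. A minimal sketch of that style in Python follows; the single rewrite rule (folding LDC a; LDC b; ADD into LDC (a+b)) and the tuple encoding of instructions are illustrative assumptions, not the actual rule set.

```python
def fold_constants(code):
    """One pass: rewrite every matched window, copy everything else through."""
    out, i = [], 0
    while i < len(code):
        # Match the window: LDC a; LDC b; ADD  =>  LDC (a + b)
        if (i + 2 < len(code)
                and code[i][0] == "LDC"
                and code[i + 1][0] == "LDC"
                and code[i + 2] == ("ADD",)):
            out.append(("LDC", code[i][1] + code[i + 1][1]))
            i += 3
        else:
            out.append(code[i])  # anything not matched is kept the same
            i += 1
    return out

def optimise(code):
    """Iterate to a fixed point, since one rewrite can expose another."""
    prev = None
    while prev != code:
        prev, code = code, fold_constants(code)
    return code
```

For example, LDC 1; LDC 2; ADD; LDC 4; ADD folds to LDC 3; LDC 4; ADD on the first pass and to LDC 7 on the second.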
|
The Belgian rookie made Q3 for the fourth time this year, putting himself into the top 10 shoot-out with a last-second improvement in the second segment on a wet track. But Vandoorne could do no better than 3.6s off the pace in Q3 and ended up last in the segment, although he is set to start eighth due to penalties for the Red Bull cars ahead. "I think, for us, to be in Q3 in Monza has been a pretty good afternoon, I think the conditions were definitely not easy," Vandoorne said. "A little bit of a shame, we didn't really have a proper shot in Q3 because we had a loss of engine power, which hurt our performance. "We didn't really have a chance to get a good lap, because that's really the session where you try to find the proper limit, you take some extra risks and fully know the conditions. "At the moment we don't know what the issue is but tomorrow we're still in a reasonable position." Teammate Fernando Alonso was resigned to starting at the back of the grid heading into qualifying, courtesy of a 35-place grid penalty. He progressed from Q1 but only did four laps in Q2 and didn't make the final segment, finishing seven tenths off Vandoorne. "We didn't want to pass my teammate because he was 10th [at the time] and if I had been in Q3, I would have kicked him out," Alonso said. "Since I'm starting last at least we wanted a car in Q3, so we went out with the engine in practice mode, with used tyres from Q1. "Seeing the times I think it would have been possible to be in the top five with the rain. "Let's hope we have more rain when we have no penalties - or on Sunday instead of Saturday." Despite the power-demanding nature of the Monza circuit, Vandoorne believes he will have a prime opportunity to add to his sole point in 2017 on Sunday even if the track stays dry, provided he makes a good start. "We've been pretty strong in the dry all weekend so I'm really optimistic, hopefully we can have a good start and then kind of hang on there with the cars in front. 
"The cars in front, they're pretty quick, they're fast cars in qualifying trim, but maybe in the race we can have a good go."
|
A pigeon sits on a wire during a sandstorm in Beirut, on Sept. 8. (Alia Haju/Reuters) Last month, a thick yellow cloud of dust blanketed parts of the Middle East and extended all the way to Cyprus. Tens of thousands of Syrian refugees were forced to scurry for shelter from the choking plume, while Israelis were instructed to stay indoors and ports in Egypt were shut. Health officials in Damascus, the Syrian capital, said more than 1,200 people, including 100 children, were hospitalized with breathing difficulties; in Lebanon, two women died as a result of the dust storm. It was an unusual, unseasonal event, as my colleague Hugh Naylor reported. And, according to a team of Israeli scientists, it may have been the consequence of extreme, man-made conditions in Syria and Iraq right now. As Israeli newspaper Haaretz notes, researchers at Ben-Gurion University's Institute for Desert Research scrutinized the storm, the likes of which are usually seen in the spring. They found that the particles of dust kicked up were larger than anything their instruments had previously recorded (since becoming operational in 1995) and that the dust traveled at a rather low level. This satellite image, posted online by NASA on Sept. 9, shows a sand storm blanketing the Middle East. (NASA via AP) Haaretz's report suggests this is tied to a dramatic change in the Syrian environment, where large-scale agricultural work has ceased and war has literally wracked the land: According to initial results of the study, the researchers attribute the storm and its intensity to two main factors. The first is a sharp decline in the amount of farm activity in northern Syria, largely caused by removal of dams along the Euphrates River by Turkey. 
“The process began during the previous decade,” said [Professor Arnon] Karnieli, adding, “The analysis shows stark differences between the Turkish side of the border and the Syrian side in terms of vegetation.” The other factor is the military activity, which has caused harm to the soil crust in Syria. It's just another sign of the impact of the country's hideous, grinding conflict, which is now in its fifth year. More than 300,000 Syrians have perished since the civil war began and more than half the country's population, nearly 12 million people, have been forced to flee their homes. The storm provided one bit of respite for opposition fighters combating the Syrian regime: It blinded the Syrian government air force for a time and helped shield rebel troop movements. But that moment has since passed, and with the recent intervention of the Russian military on behalf of the regime, the war looks to be getting only more entrenched.
|
Japanese Prime Minister Shinzo Abe, right, and Russian President Vladimir Putin speak to each other during their meeting at a hot springs resort in Nagato, western Japan, Thursday, Dec. 15, 2016. Despite continued sanctions on Russia, Abe is eager to make progress on a 70-year-old territorial dispute that has kept their countries from signing a peace treaty formally ending World War II. (AP Photo/Alexander Zemlianichenko, Pool) NAGATO, Japan (AP) — The leaders of Russia and Japan move to Tokyo on Friday to wrap up a two-day summit on an economic cooperation agreement and a protracted territorial dispute that has prevented their countries from signing a peace treaty to end World War II. Japanese Prime Minister Shinzo Abe and Russian President Vladimir Putin spent much of their first round at a hot springs resort in western Japan on Thursday discussing small steps to move forward on the dispute over four small islands. “We had in-depth discussions on a peace treaty,” Abe told reporters afterward. He said they discussed possible joint economic projects on the disputed islands. A sticking point: Japan says they should be operated under a special legal status that does not raise sovereignty issues. Russia, which governs the islands, wants them to be run under its law. Japanese media reports say Japan and Russia may ink a broader economic cooperation agreement Friday that the two sides have been negotiating for several months. For Putin, the summit meeting marks his first official visit to a G-7 country since Russia annexed Crimea in 2014. Abe invited Putin even though the G-7 nations, including Japan, still have sanctions on Russia. Abe said the two leaders talked for three hours Thursday, spending about half of the time on the dispute over the southern Kuril islands seized by the former Soviet Union in the closing days of World War II, and a peace treaty. Japan calls the islands the Northern Territories. 
Japan says the Soviet Union took the islands illegally, expelling 17,000 Japanese to nearby Hokkaido, the northernmost of Japan’s four main islands. Putin expressed concern about the deployment of U.S. missile defense systems in Japan, calling them an overreaction to North Korea’s missile program, Japanese media reported. Abe assured him that they are limited to self-defense and do not pose a threat to neighboring countries, while stressing the importance of discussing defense issues amid growing security concerns in the region, they said. To that end, the two leaders agreed to resume “2+2” talks among the countries’ foreign and defense ministers, Russian Foreign Minister Sergey Lavrov said. Lavrov, who is accompanying Putin, attended the first and last “2+2” meeting three years ago. Russia wants to attract Japanese investment, particularly to its far east. Japan hopes that stronger ties through joint economic projects will help resolve the thorny territorial issue over time. ___ Yamaguchi reported from Tokyo. Associated Press writer Nataliya Vasilyeva in Moscow and videojournalist Kaori Hitomi in Tokyo contributed to this story.
|
Liberal bloggers love to mock the news judgment of Fox News, but CNN and MSNBC were truly displaying their affinity for Hillary Clinton on Friday by giving more than a half-hour of live coverage to her commencement speech at Wellesley College, her alma mater. In the previous hour, Vice President Mike Pence addressed graduates of the U.S. Naval Academy in Annapolis, but neither network noticed that. The presidential loser was far more newsworthy than the sitting Vice President. There were no snarky CNN or MSNBC chyrons as Mrs. Clinton attacked the president for assaulting "truth" and openness, stating "when people in power invent their own facts and attack those who question them, it can mark the beginning of the end of a free society." CNN anchor John King called Hillary's harsh attacks merely "full-throated" and "aggressive." JOHN KING: That was remarkable....This was just full-throated. A lot of people think, what is Hillary Clinton going to do in her political retirement. That's not retirement. As she spoke at this commencement, a budget of unimaginable cruelty. A trillion dollar mathematical lie, she said, wrapped into that budget. She mocked the president for talking about crowd sizes, alternative facts. And, she said, in her view, he's a threat to society....Now, if you're a Trump voter out there, you're probably saying, sore loser. But we didn't know how active and how aggressive she was going to be. That was something else. Washington Post reporter Abby Phillip called it a “cheerleading act for the Resistance.” Fox News, by contrast, only showed Mrs. Clinton speaking at the end of their 11 am hour, talking over it. On Fox Business, anchor Neil Cavuto showed Mrs. 
Clinton at 12:13, and mused out loud about the utter lack of fire-breathing conservative protesters at Wellesley: NEIL CAVUTO: We're still following Hillary Clinton speaking at Wellesley, and of course we're waiting for that moment a group of conservatives, probably a large consortium at Wellesley if I know Wellesley, start walking off or throwing spitballs at her. That has not materialized yet but I will predict this, if it were to happen, it would get wall-to-wall media coverage, and any dustups we see elsewhere, mmm, not so much. But again safe to say, that the odds of that happening are like me popping up first on the salad bar line. But we'll watch. You could always be surprised. Fox carried pretty much all of Pence's speech in Annapolis from 10:32 to 10:54 am, and Fox Business ran the first seven minutes. MSNBC never mentioned Pence in that hour, and CNN only referred to Pence as tweeting congratulations to Montana's controversial new Congressman-elect Greg Gianforte (while advertising the upcoming Hillary speech!)
|
Friday on MSNBC, Karen Finney, spokeswoman for Hillary Clinton’s presidential campaign, dismissed the idea that transcripts of a speech her candidate gave on Wall Street were relevant to “undecided voters.” Finney cited education and health care as two issues that she thought voters viewed to be more essential to the process than Clinton’s speech transcripts. “Sen. Sanders is, you know, trying to use this to make an allegation to which he has absolutely no response when asked where is the proof,” she said. “So, you know, I think a lot of voters also find that very offensive. And more important, moreover, I have to tell you that if you are trying to figure out how to send your kid to college, if you are trying to figure out how to take care of a sick parent or wanting your child’s schools to be improved, this is not something you care about.” “I mean, I understand, I think we understand the sort of media fascination with this,” she added. “But I’m just telling you, I mean, I have been out there on the road talking to voters. This never comes up. People want to know and I think we heard a lot of specifics from Secretary Clinton last night about the things she wants to do to help move our country forward as she talks about break down those barriers and was just talking about this morning, lift everybody up. That is what people really want to talk about.” Follow Jeff Poor on Twitter @jeff_poor
|
Does evolutionary theorizing have a role in psychology? This is a more contentious issue than one might imagine, given that, as evolved creatures, the answer must surely be yes. The contested nature of evolutionary psychology lies not in our status as evolved beings, but in the extent to which evolutionary ideas add value to studies of human behavior, and the rigor with which these ideas are tested. This, in turn, is linked to the framework in which particular evolutionary ideas are situated. While the framing of the current research topic places the brain-as-computer metaphor in opposition to evolutionary psychology, the most prominent school of thought in this field (born out of cognitive psychology, and often known as the Santa Barbara school) is entirely wedded to the computational theory of mind as an explanatory framework. Its unique aspect is to argue that the mind consists of a large number of functionally specialized (i.e., domain-specific) computational mechanisms, or modules (the massive modularity hypothesis). Far from offering an alternative to, or an improvement on, the current perspective, we argue that evolutionary psychology is a mainstream computational theory, and that its arguments for domain-specificity often rest on shaky premises. We then go on to suggest that the various forms of e-cognition (i.e., embodied, embedded, enactive) represent a true alternative to standard computational approaches, with an emphasis on “cognitive integration” or the “extended mind hypothesis” in particular. We feel this offers the most promise for human psychology because it incorporates the social and historical processes that are crucial to human “mind-making” within an evolutionarily informed framework. In addition to linking to other research areas in psychology, this approach is more likely to form productive links to other disciplines within the social sciences, not least by encouraging a healthy pluralism in approach. 
Introduction As evolved beings, it is reasonable to assume that evolutionary theory has something to offer the study of human psychology, and the social sciences more generally. The question is: what exactly? This question has been debated ever since Darwin (1871) published the Descent of Man, and we appear no closer to resolution of this issue almost 150 years later. Some maintain that evolutionary theory can revolutionize the social sciences, and hence our understanding of human life, by encompassing both the natural and human sciences within a single unifying framework. Wilson’s (1975) Sociobiology was one of the first, and most emphatic, claims to this effect. Meanwhile, others have resisted the idea of unification, viewing it as little more than imperialist over-reaching by natural scientists (e.g., Rose, 2000). The question posed by this research topic puts a different, more specific, spin on this issue, asking whether an evolutionary approach within psychology provides a successful alternative to current information-processing and representational views of cognition. The broader issue of unification across the natural and social sciences continues to pervade this more narrow debate, however, because certain proponents of the evolutionary approach insist that the incorporation of the social sciences into the natural sciences is the only means to achieve a coherent understanding of human life. As Tooby and Cosmides (2005) state, evolutionary psychology “in the broad sense, … includes the project of reformulating and expanding the social sciences (and medical sciences) in light of the progressive mapping of our species’ evolved architecture” (Tooby and Cosmides, 2005, p. 6). So, what is our answer to this question? The first point to make clear is that any answer we might offer hinges necessarily on the definition of evolutionary psychology that is used. 
If one settles on a narrow definition, where evolutionary psychology is equated with the views promoted by the “Santa Barbara School”, headed by Donald Symons, John Tooby, Leda Cosmides, David Buss, and Steven Pinker (referred to here as Evolutionary Psychology or simply as EP), then the answer is a simple “no” (see also: Dunbar and Barrett, 2007). If one opts instead to define an evolutionary approach in the broadest possible terms (i.e., simply as an evolutionarily informed psychology), then the answer becomes a cautious and qualified “yes.” In what follows, we argue that the primary reason why EP fails as a viable alternative to the standard computational approach is because, in all the important details, it does not differ from this approach. We then go on to suggest that the specific evolutionary arguments in favor of EP, which are used to claim its superiority over other approaches, rest on some rather shaky premises, and cannot be used to rule out alternatives in the way that advocates of EP have supposed. In particular, we deal with arguments relating to the reverse engineering of psychological adaptations, and the logical necessity of domain-specific processes (specifically, arguments relating to the poverty of the stimulus and combinatorial explosion). We then move on to a consideration of recent incarnations of the “massive modularity” hypothesis, showing that, while these are not vulnerable to many of the criticisms made against them, it is not clear whether these can, in fact, be characterized as psychological adaptations to past environments. We suggest that, taken together, these arguments weaken the case for EP as the obvious framework for psychology. 
Finally, we go on to suggest an alternative view of psychological processes, cognitive integration [or the extended mind (EM) hypothesis], that we feel has the potential to improve on the current computational approach; one that is relevant to core areas of psychological research, will promote integration between psychology and other cognate disciplines, but also allow for a healthy pluralism both within psychology and across the social sciences more generally. The Computational Core of Evolutionary Psychology The primary reason why Evolutionary Psychology cannot offer a successful alternative to computational-representational theories of mind is because it is a computational-representational theory of the mind. Evolutionary Psychology (e.g., Cosmides, 1989; Tooby and Cosmides, 1992, 2005; Cosmides and Tooby, 1994, 1997) is the marriage of “standard” computational cognitive psychology (as exemplified by Chomsky’s computational linguistics, e.g., Chomsky, 2005) with the adaptationist program in evolutionary biology (e.g., Williams, 1966); a combination that its proponents cast as revolutionary and capable of producing greater insight, not only into human cognitive processes, but also into the very idea of “human nature” itself (Cosmides, 1989; Tooby and Cosmides, 1992; Cosmides and Tooby, 1994, 1997). The revolutionary promise of incorporating evolutionary theory into psychology can be traced to, among others, Tooby and Cosmides’ (1992) conceptual paper on the “psychological foundations of culture,” their freely available “primer” on evolutionary psychology (Cosmides and Tooby, 1997), along with Cosmides’s (1989) seminal empirical work on an evolved “cheat-detection” module. 
Another classic statement of how computational theories benefit from the addition of evolutionary theory is Pinker and Bloom’s (1990) paper on language as an “instinct,” where Chomsky’s innate universal grammar was argued to be a product of natural selection (in contrast to Chomsky’s own views on the matter). In all these cases, strong claims are made that leave no doubt that “computationalism” forms the foundation of this approach. Cosmides and Tooby (1997), for example, argue that the brain’s evolved function is “information processing” and hence that the brain “is a computer that is made of organic (carbon-based) compounds rather than silicon chips” (paragraph 14), whose circuits have been sculpted by natural selection. More recently, Tooby and Cosmides (2005, p. 16) have stated that “the brain is not just like a computer. It is a computer—that is, a physical system that was designed to process information.” Pinker (2003, pp. 24–27), meanwhile, argues that: “The computational theory of mind … is one of the great ideas of intellectual history, for it solves one of the puzzles of the ‘mind-body problem’ … It says that beliefs and desires are information, incarnated as configurations of symbols … without the computational theory of mind it is impossible to make sense of the evolution of mind.” Accordingly, hypotheses within EP are predicated on the assumption that the brain really is a computational device (not simply a metaphorical one), and that cognition is, quite literally, a form of information processing. In one sense, then, EP cannot offer an improvement on the computational theory of mind because it is premised on exactly this theory. Any improvement on the current state of play must therefore stem from the way in which evolutionary theory is incorporated into this model. 
The Evolved Computer The unique spin that EP applies to the computational theory of mind is that our cognitive architecture is organized into a large number of functionally specialized mechanisms, or “modules,” that each performs a specific task (e.g., Tooby and Cosmides, 1992; Cosmides and Tooby, 1997; Barrett and Kurzban, 2006). As these modules are the products of natural selection, they can be considered as “adaptations”, or organs of special design, much like the heart or liver. The function of each module is to solve a recurrent problem encountered by our ancestors in the environment of evolutionary adaptedness (EEA), that is, the period over which humans were subject to evolutionary processes, including those of natural selection (Tooby and Cosmides, 1990; Symons, 1992). The EEA therefore represents the sum total of the selection pressures that give rise to a particular adaptation and cannot, strictly speaking, be identified with a particular time or place (Cosmides and Tooby, 1997). In practice, however, based on the argument that, for most of our evolutionary history, humans lived as hunter–gatherers, the EEA is often operationalized to the Pleistocene habitats of East and Southern Africa (although not to any particular location or specific time within this period). Unlike the notion of computationalism, which is accepted largely without question in psychology and beyond, the concepts of both “massive modularity” and the EEA have met with a large amount of criticism over the years from social and natural scientists alike, as well as from philosophers (e.g., Lloyd, 1999; Buller and Hardcastle, 2000; Rose and Rose, 2000; Buller, 2005; Bolhuis et al., 2011). 
In general, critics argue that positing modular psychological adaptations to past environments amounts to little more than “just so” story telling, and lacks adequate standards of proof; an accusation that proponents of EP strongly resist and categorically refute (e.g., Holcomb, 1996; Ketelaar and Ellis, 2000; Confer et al., 2010; Kurzban, 2012). As these arguments and counter-arguments have been covered in detail elsewhere (e.g., Conway and Schaller, 2002; Confer et al., 2010), we will not rehearse them again here. Instead, we deal only with those elements that speak to EP’s success as a novel computational theory of mind, and its ability to improve on the model we have currently. Can We Reverse Engineer Psychological Adaptations? Clearly, the success of EP stands or falls on its ability to accurately identify, characterize, and test for psychological adaptations. Within EP, the method of “reverse-engineering” is prominent, and relies heavily on analogies to computational algorithms, functions, inputs, and outputs. In essence, the idea behind reverse-engineering is that one can infer the function of an adaptation from analysis of its form. This involves identifying a problem likely to have been encountered by our ancestors across evolutionary time, and then hypothesizing the kinds of algorithmic “design features” that any psychological adaptation would require in order to solve such a problem. Predictions derived from these hypotheses are then put to the test. As Gray et al. (2003), among others, have pointed out, such a strategy will work provided that all traits are adaptations, that the traits themselves can be easily characterized, and that plausible adaptive hypotheses are hard to come by. Unfortunately, these conditions do not always hold, and identifying adaptations is by no means straightforward. 
Proponents of EP themselves recognize this problem, acknowledging the existence of both by-products (aspects of the phenotype that are present because they are causally coupled to adaptations) and noise (injected by “stochastic components of evolution”; e.g., Cosmides and Tooby, 1997). Nevertheless, Cosmides and Tooby (1997) argue that, because adaptations are problem-solving machines, it remains possible to identify them “using the same standards of evidence that one would use to recognize a human-made machine: design evidence” (paragraph 65). That is, we are able to identify a machine as a TV rather than a stove by referring to the complex structures that indicate it is good for receiving and transforming electromagnetic waves, and not for cooking food. Thus, if one can show that a phenotypic trait has design features that are complexly specialized for solving an adaptive problem, that these could not have arisen by chance alone, and that their existence is not better explained as the by-product of mechanisms designed to solve some other problem, then one is justified in identifying any such trait as an adaptation (Cosmides and Tooby, 1997). Although this approach seems entirely reasonable when discussed in these terms, there is ongoing debate as to whether this process is as straightforward as this analysis suggests (particularly with respect to differentiating adaptations from by-products, e.g., Park, 2007). Again, much of this debate turns on the appropriate standard of evidence needed to identify an adaptation, particularly in the case of behavior (see, e.g., Bateson and Laland, 2013). Along with detailed knowledge of the selective environment, it is often argued that evidence for a genetic basis to the trait, along with knowledge of its heritability and its contribution to fitness, are necessary elements in identifying adaptations, not simply the presence of complex, non-random design (see Travis and Reznick, 2009). 
Defenders of EP counter such arguments by noting, first, that as they are dealing with adaptation, and not current adaptiveness, heritability and fitness measures are uninformative. By an EP definition, adaptations are traits that have reached fixation. Hence, they should be universal, with a heritability close to zero, and measures of current fitness and the potential for future selection cannot provide any evidence concerning the action of past selection (Symons, 1989, 1990). Second, the argument is made that, given we are willing to accept arguments from design in the case of other species, it is inconsistent and unfair to reject such reasoning in the case of humans. For example, Robert Kurzban, a prominent figure in EP and editor of two main journals in the field, has presented several cogent arguments to this effect in the blog associated with the journal, Evolutionary Psychology. In response to a paper presenting the discovery of a “gearing” mechanism in a jumping insect of the genus Issus, Kurzban (2013) noted that the authors make a strong claim regarding the evolved function of these interlocking gears (the synchronization of propulsive leg movements). He further noted that this claim was based on images of the gearing structures alone; there was no reference to the genetic underpinnings or heritability of these structures, nor was there any experimental evidence to establish how the gears work, nor how they contributed to fitness. Kurzban’s (2013) point is: if it is permissible for biologists to reason in this way—and to do so persuasively—then why not evolutionary psychologists? (see also Kurzban, 2011b, for a similar example). On the one hand, this is an entirely fair point. Other things being equal, if evolutionary psychologists and biologists are arguing for the existence of the same phenomena, namely evolutionary adaptations, then the standards of evidence acceptable to one sub-discipline must also be acceptable when used by the other.
On the other hand, the phenomena being compared are not quite equivalent. Insect gears are morphological structures, but psychological adaptations are, according to EP, algorithmic processes. Obviously the latter involve morphology at some level, because “all behavior requires underlying physical structures” (Buss, 1999, p. 11), but it is unclear exactly how the psychological mechanism of, say, cheater detection, maps onto any kind of morphological structure within the brain, not least because of the massive degeneracy of neuronal processes (i.e., where many structurally distinct processes or pathways can produce the same outcome). Prinz et al. (2004), for example, modeled a simple motor circuit of the lobster (the stomatogastric ganglion) and were able to demonstrate that there were over 400,000 ways to produce the same pyloric rhythm. In other words, the activity produced by the network of simulated neurons was virtually indistinguishable in terms of outcome (the pyloric rhythm), but was underpinned by a widely disparate set of underlying mechanisms. As Sporns (2011a,b) has suggested, this implies that degeneracy itself is the organizing principle of the brain, with the system designed to maintain its capacity to solve a specific task in a homeostatic fashion. Put simply, maintaining structural stability does not seem central to brain function, and this in turn makes brain function seem much less computer-like. This, then, has implications for the proponents of EP, who appear to argue for some kind of stable, functionally specialized circuits, even if only implicitly. In other words, the “function from form” argument as applied to EP raises the question of what exactly underlies a “psychological adaptation” if not a morphological structure that can undergo selection? 
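The force of this degeneracy point can be illustrated with a deliberately simple sketch. This is our own toy analogue, not Prinz et al.’s (2004) conductance model: the function and parameter names are invented for the example. A grid search over a two-parameter “circuit” finds many distinct parameter sets whose output is functionally indistinguishable from a target rhythm.

```python
# Toy illustration of degeneracy: many distinct parameter sets produce
# functionally indistinguishable output. This is NOT the Prinz et al.
# (2004) stomatogastric model, just a schematic two-parameter analogue.

def circuit_output(g_fast, g_slow):
    """Hypothetical 'burst frequency' of a toy two-conductance circuit."""
    return 10.0 * g_fast / (1.0 + g_slow)

target = 5.0        # the desired "rhythm" frequency
tolerance = 0.05    # outputs within 1% of the target count as equivalent

equivalent_configs = [
    (g_fast, g_slow)
    for g_fast in [i / 10 for i in range(1, 101)]   # 0.1 .. 10.0
    for g_slow in [j / 10 for j in range(0, 101)]   # 0.0 .. 10.0
    if abs(circuit_output(g_fast, g_slow) - target) < tolerance
]

print(f"{len(equivalent_configs)} distinct parameter sets "
      f"produce (near-)identical output")
```

The point of the sketch is only that identical output radically underdetermines the underlying mechanism, which is why behavioral output alone cannot identify a unique structure for selection to preserve.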
One way around this is to argue that, in line with Marr’s (1982) computational theory of vision, EP is concerned only with the computational and algorithmic level of analysis, and not the implementation at the physical level (e.g., Buss, 1999). In other words, EP deals with the computational “cognitive architecture” of the human mind and not with the structure of the wet brain. Hence, as long as a reliable and predictable output is produced from a specified set of inputs, EP researchers are justified in referring to the mechanisms that produce this output as a psychological adaptation (whatever these might be). This seems to raise another problem, however, in that the reliability and stability of the underlying psychological mechanism is only inferred from the reliability of the behavior produced under a given set of circumstances, and does not involve identification of the actual computational mechanism itself. In physical terms, as was evident from the lobster example, when we consider how an organism’s neural circuitry operates in the solving of a task, stability does not seem to be preserved at all, even though virtually indistinguishable network activity is produced as output. If this is true for brains in general and if, as Lehrman (1970) argued, “nature selects for outcomes” and does not particularly care how these are achieved, what has been the target of selection, other than the brain itself? In a sense, one could argue that each specific kind of behavior represents the “modular” component, with a vast number of different neural configurations able to produce it. If so, does this also mean there are a variety of different algorithms as well, and that there is equivalent degeneracy at the algorithmic/representational level? In turn, this raises the issue of whether every possible neural/computational configuration that is capable of producing a given behavior can reasonably be considered a target of selection. 
Viewed like this, the notion of an “evolved cognitive architecture” comprising specialized circuits devoted to solving a given task serves more as a hypothetical construct used to interpret and make sense of behavioral data, rather than a revealed biological truth. This, of course, does not invalidate the approach—hypothetical constructs are the bread-and-butter of contemporary psychological theorizing—but it does make it difficult to maintain the position that the design argument used to account for stable morphological structures, like insect gears, can be applied equally well to psychological phenomena. It is important to recognize that our argument is not that there “must be spatial units or chunks of brain tissue that neatly correspond to information-processing units” (Barrett and Kurzban, 2006, p. 641; see also Tooby and Cosmides, 1992). As Barrett and Kurzban (2006) make clear, this does not follow logically, or even contingently, from the argument that there are specialized processing modules; functional networks can be widely distributed across the brain, and not localized to any specific region (Barton, 2007). Rather, we are questioning the logic that equates morphological with psychological structure, given recent neurobiological findings (assuming, of course, that these findings are general to all brains). If neural network structure is both degenerate and highly redundant because the aim is to preserve functional performance in a dynamic environment, and not to form stable representational structures based on inputs received, then it becomes less easy to draw a direct analogy between morphological structures and cognitive “structures.” The computational metaphor does, however, lend itself to such an analogy, and is perhaps the reason why the structure–function argument seems so powerful from an EP point of view. 
That is, when the argument is couched in terms of “machinery in the human mind” or “cognitive architecture,” psychological phenomena are more readily conceptualized as stable, physical structures (of some or other kind) that are “visible” to selection. If they are seen instead as temporally and individually variable neuronal configurations that converge on reliable behavioral outputs without any stable circuits, as Prinz et al. (2004) demonstrated in the lobster, a shift of focus occurs, and the brain itself is revealed as the complex adaptation we seek. The capacity to produce frequency-dependent, condition-dependent behavior then becomes the realized expression of the complex adaptation that is the brain, rather than these capacities themselves being seen as distinct adaptations. This does not end the matter, of course, because we still need to understand how highly active degenerate brain circuits can produce flexible behavior. This is an unresolved empirical issue that cannot be tackled by theoretical speculation alone. Rather, we are simply placing a question mark over the idea that it is possible to identify psychological adaptations at the cognitive level, via behavioral output, without any consideration of how these are physically implemented. Given that, according to EP’s own argument, it is the physical level at which selection must act, and this is what permits an analogy to be drawn with morphological structures, then if brains are less computer-like and representational than we thought, the idea that psychological adaptations can be viewed as stable algorithmic mechanisms that run on the hardware of the brain may also require some re-thinking. Evolved, Learned, and Evolved Learning Capacities Another, more positive, corollary of questioning the premise that the brain is a computer with highly specialized, evolved circuits, is that there is less temptation to distinguish between evolved and learned behaviors in ways that generate a false dichotomy. 
Although Evolutionary Psychologists do not deny the importance of learning and development (indeed, some actively promote a “developmental systems” approach, as we discuss below), the fundamental assumption that the human cognitive system is adapted to a past environment inevitably results in the debate being framed in terms of evolved versus learned mechanisms. When, for example, the argument is made that humans possess an evolved mating psychology, or an evolved cheater detection mechanism, there is the implicit assumption that these are not learned in the way we ordinarily understand the term, but are more akin to being “acquired” in the way that humans are said to acquire language in a Chomskyan computational framework: we may learn the specifics of our particular language, but this represents a form of “parameter setting,” rather than the formation of a new skill that emerges over time. To be clear, Evolutionary Psychologists recognize that particular kinds of “developmental inputs” are essential for the mechanism to emerge—there is no sense in which psychological modules are argued to be “hard-wired” and impervious to outside input—but they deny that these mechanisms reflect the operation of domain-general learning principles being applied in a particular environmental context (Tooby and Cosmides, 1992; Cosmides and Tooby, 1997; Buss, 1999; Barrett and Kurzban, 2006). In contrast, some researchers take the view that development is more than just “tuning the parameters” of modular capacities via specific inputs, and that development involves dynamic change over time in a highly contingent fashion (e.g., Karmiloff-Smith, 1995, 1998; Smith and Thelen, 2003). In this constructivist view, our ability to engage in certain kinds of reasoning about particular domains of interest, such as cheater detection, emerges through the process of development itself.
Hence, these kinds of reasoning are likely to be specific to our time and place and may be very different to the kinds of reasoning performed by our ancestors in both the recent and more distant past. These criticisms are often combined with those mentioned above, namely that the evidence for evolved modular mechanisms is not particularly convincing, and is consistent with alternative explanations for the same data. That is, opponents of modular EP argue that we may learn many of the things that EP attributes to evolved psychological adaptations. In this way, learned mechanisms end up being opposed to those that have evolved. Such an opposition is, however, false because all learning mechanisms, whether general or domain-specific, have evolved, and therefore what is learned is never independent of evolutionary influences. This is something that both critics and proponents of EP alike recognize, and yet the opposition of evolved versus culturally learned behavior continually arises (e.g., Pinker, 2003). Perhaps this is because the argument is framed in terms of adaptation, when the real issue being addressed by both parties is the degree to which there are constraints on our ability to learn, that is, the degree of plasticity or flexibility shown by our learning mechanisms. Evolutionary Psychologists, in essence, argue simply that all humans converge on a particular suite of mechanisms that once enhanced the fitness of our ancestors, through a process of learning that is heavily guided by certain biological predispositions. Does Flexibility Require Specificity? This is not to say, however, that humans lack flexibility. Indeed, the argument from EP is precisely that “a brain equipped with a multiplicity of specialized inference engines” will be able to “generate sophisticated behavior that is sensitively tuned to its environment.” (Cosmides and Tooby, 1997, paragraph 42). 
What it argues against, rather, is the idea that the mind resembles a “blank slate” and that its “evolved architecture consists solely or predominantly of a small number of general purpose mechanisms that are content-independent, and which sail under names such as ‘learning,’ ‘induction,’ ‘intelligence,’ ‘imitation,’ ‘rationality,’ ‘the capacity for culture,’ or simply ‘culture’” (Cosmides and Tooby, 1997, paragraph 9). This view is usually characterized as the “standard social science model” (SSSM), under which human minds are seen as “primarily (or entirely) free social constructions” (Cosmides and Tooby, 1997, paragraph 10), such that the social sciences remain disconnected from any natural foundation within evolutionary biology. This is because, under the SSSM, humans are essentially free to learn anything and are thus not constrained by biology or evolutionary history in any way (Cosmides and Tooby, 1997). Tooby and Cosmides’s (1992) attack on the SSSM is used to clear a space for their own evolutionary theory of the mind. Their argument against the SSSM is wide-ranging, offering a detailed analysis of what they consider to be the abject failure of the social sciences to provide any coherent account of human life and behavior. As we do not have space to consider all their objections in detail (most of which we consider ill-founded), we restrict ourselves here to their dismissal of “blank slate” theories of learning, and the idea that a few domain-general processes cannot suffice to produce the full range of human cognitive capacities. The first thing to note is that Tooby and Cosmides’s (1992) argument against the SSSM bears a striking resemblance to Chomsky’s (1959) (in)famous dismissal of Skinner’s work, which similarly attempted to undercut the idea of general learning mechanisms and replace it with notions of domain-specific internal structure.
This similarity is not surprising, given that Tooby and Cosmides (1992) expressly draw on Chomsky’s logic to make their own argument. What is also interesting, however, is that, like Chomsky (1959), Tooby and Cosmides (1992), and Cosmides and Tooby (1997) simply assert the case against domain-general mechanisms, rather than provide empirical evidence for their position. As such, both Chomsky’s dismissal of radical behaviorism and Evolutionary Psychology’s dismissal of the SSSM amount to “Hegelian arguments.” This is a term coined by Chemero (2009) based on Hegel’s assertion, in the face of contemporary evidence to the contrary, that there simply could not be a planet between Mars and Jupiter (actually an asteroid) because the number of planets in the solar system was necessarily seven, given the logic of his own theoretical framework: an eighth planet was simply impossible, and no evidence was needed to support or refute this statement. In other words, Hegelian arguments are those that rule out certain hypotheses a priori, solely through the assertion of particular theoretical assumptions, rather than on the basis of empirical data. In the case of behaviorism, we have Chomsky’s famous “poverty of the stimulus” argument, which asserted, purely on the basis of “common sense” rather than empirical evidence, that environmental input was too underdetermined, too fragmentary, and too variable to allow any form of associative learning of language to occur. Hence, an innate language organ or “language acquisition device” was argued to fill the gap. Given the alternative was deemed impossible on logical grounds, the language acquisition device was thus accepted by default. 
The Hegelian nature of this argument is further revealed by the fact that empirical work on language development has shown that statistical learning plays a much larger role than anticipated in language development, and that the stimulus may be much “wealthier” than initially imagined (e.g., Gómez, 2002; Soderstrom and Morgan, 2007; Ray and Heyes, 2011). Similarly, the argument from EP is that a few domain-general learning mechanisms cannot possibly provide the same flexibility as a multitude of highly specialized mechanisms, each geared to a specific task. Thus, a content-free domain-general cognitive architecture can be ruled out a priori. Instead, the mind is, in Tooby and Cosmides’ (1992) famous analogy, a kind of Swiss Army knife, with a different tool for each job. More recently, the metaphor has been updated by Kurzban (2011a), who uses the iPhone as a metaphor for the human mind, with its multitude of “apps,” each fulfilling a specific function. Rather than demonstrating empirically that domain-general psychological mechanisms cannot do the job asked of them, this argument is instead supported by reference to functional specialization in other organ systems, like the heart and the liver, where different solutions are needed to solve two different problems: pumping blood and detoxifying poisons. Of course, the brain is also a functionally specialized organ that helps us coordinate and organize behavior in a dynamic, unpredictable world. Using the same logic, this argument is extended further, however, to include the idea that our psychological architecture, which is a product of our functionally specialized brain, should also contain a large number of specialized “mental organs,” or “modules,” because a small number of general-purpose learning mechanisms could not solve the wide variety of adaptive problems that we face; we need different cognitive tools to solve different adaptive problems. 
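The statistical learning invoked above (e.g., Gómez, 2002) can be made concrete with a toy, Saffran-style segmentation sketch. Everything here is invented for illustration (the syllable inventory, stream length, and boundary threshold): a learner tracking nothing but transitional probabilities between adjacent syllables recovers the “words” of an artificial language from an unsegmented stream.

```python
# Sketch of statistical word segmentation via transitional probabilities.
# The "words", stream length, and 0.5 threshold are invented for the demo.
from collections import Counter
import random

words = ["bidaku", "padoti", "golabu"]          # hypothetical "words"
random.seed(0)
stream = []
for _ in range(300):                             # unsegmented syllable stream
    w = random.choice(words)
    stream += [w[i:i + 2] for i in range(0, 6, 2)]

# Transitional probability P(next | current) between adjacent syllables
pairs = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])
tp = {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# Posit a boundary wherever the transition is unreliable (low TP)
segments, current = [], stream[0]
for a, b in zip(stream, stream[1:]):
    if tp[(a, b)] < 0.5:                         # assumed boundary threshold
        segments.append(current)
        current = b
    else:
        current += b
segments.append(current)
print(Counter(segments).most_common(3))
```

Within-word transitions are perfectly predictable (probability 1.0), while cross-word transitions hover around 1/3, so the dips alone suffice to segment the stream; no content about what a “word” is needs to be built in.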
Analogies are also drawn with functional localization within the brain: visual areas deal only with visual information, auditory areas deal only with auditory information, and so on. The Poverty of the Stimulus Revisited Cosmides and Tooby (1994) use their own version of Chomsky’s poverty of the stimulus argument to support this claim for domain-specificity (see also Frankenhuis and Ploeger, 2007, for further discussion), suggesting that “adaptive courses of action can be neither deduced nor learned by general criteria alone because they depend on statistical relationships between features of the environment, behavior, and fitness that emerge over many generations and are, therefore, not observable during a single lifetime alone.” Thus, general learning mechanisms are ruled out, and modular evolved mechanisms deemed necessary, because these “come equipped with domain-specific procedures, representations or formats prepared to exploit the unobserved” (p. 92). Using the example of incest avoidance to illustrate this point, Cosmides and Tooby (1994) argue that only natural selection can “detect” the statistical patterns indicating that incest is maladaptive, because “ … it does not work by inference or simulation. It takes the real problem, runs the experiment, and retains those design features that lead to the best available outcome” (p. 93). Frankenhuis and Ploeger (2007) state similarly: “to learn that incest is maladaptive, one would have to run a long-term epidemiological study on the effects of in-breeding: produce large numbers of children with various related and unrelated partners and observe which children fare well and which don’t. This is of course unrealistic” (p. 700, emphasis in the original). We can make use of Samuels’ (2002, 2004) definition of “innateness” to clarify matters further. According to Samuels (2002, 2004), to call something “innate” is simply to say that it was not acquired by any form of psychological process.
Put in these terms, Cosmides and Tooby’s (1994) and Frankenhuis and Ploeger’s (2007) argument is that, because it is not possible to use domain-general psychological mechanisms to learn about the long-term fitness consequences of incest, our knowledge must be innate in just this sense: we avoid mating with close relatives because we have a functionally specialized representational mechanism that acts as a vehicle for domain-specific knowledge about incest, which was acquired by a process of natural selection. Note that domain-specificity of this kind does not automatically imply innateness, as Barrett and Kurzban (2006) and Barrett (2006) make clear. Here, however, the argument does seem to suggest that modules must contain some specific content acquired by the process of natural selection alone, and not by any form of learning, precisely because the latter has been ruled out on a priori grounds. On the one hand, these statements are entirely correct—a single individual cannot literally observe the long-term fitness consequences of a given behavior. Moreover, there is evidence to suggest that humans do possess a form of incest avoidance mechanism, the Westermarck effect, which results in reduced sexual interest between those raised together as children (Westermarck, 1921; also see Shepher, 1971; Wolf, 1995). On the other hand, it is entirely possible for humans to learn with whom they can and cannot mate, and how this may be linked to poor reproductive outcomes—indeed, people can and do learn about such things all the time, as part of their upbringing, and also as part of their marriage and inheritance systems. Although it is true that many incest taboos do not involve biological incest as such (these are more concerned with wealth concentration within lineages), it is the case that mating and marriage with close relatives is often explicitly forbidden and codified within these systems. Moreover, the precise nature of incest taboos may shift over time and space. 
Victorian England, for example, was a veritable hotbed of incestuous marriage by today’s standards (Kuper, 2010); indeed, Darwin himself, after famously making a list of the pros and cons of marriage, took his first cousin as his wife. It is also apparent that, in some cases, shifts in how incestuous unions are defined often relate specifically to the health and well-being of children produced. Durham (2002), for example, discusses the example of incest (or rual) among the Nuer cattle herders of Sudan, describing how differing conceptions of the incest taboo exist within the population, such that people obey or resist the taboo depending on their own construal of incest. As a result, some couples become involved in incestuous unions, and may openly challenge the authority of the courts, running off together to live as a family. When these events occur, they are monitored closely by all and if thriving children are produced, the union is considered to be “fruitful” and “divinely blessed.” Hence, in an important sense, such unions are free of rual (this is partly because the concept of rual refers to the hardships that often result from incest; indeed, it is the consequences of incest that are considered morally reprehensible, and not the act itself). Via this form of “pragmatic fecundity testing,” the incest taboo shifts over time at both the individual and institutional level, with local laws revised to reflect new concepts of what constitutes an incestuous pairing (Durham, 2002). This example is presented neither to deny the existence of the Westermarck effect (see Durham, 1991 for a thorough discussion of the evidence for this), nor to dispute that there are certain statistical patterns that are impossible for an individual to learn over the course of its lifetime. Rather it is presented to demonstrate that humans can and do learn about fitness-relevant behaviors within their own lifetimes, and can make adaptive decisions on this basis. 
Personal knowledge of the outcomes of a long-term epidemiological study is not necessarily needed, because humans can call on the accumulated stores of inter-generational knowledge residing in, and available from, other members of their community. This can be knowledge that is passed on in folklore, stories, and songs, as well as in prohibitions and proscriptions on behavior set down in custom and law. As the Nuer example illustrates, we also form our own ideas about such things, regardless of what we learn from others, possibly because people can, in fact, tap into the “long-term epidemiological study” that the evolutionary process set up long ago, and which has been running ever since. It would indeed be impossible to learn the pattern required if each individual had to set up his or her own experiment at the point at which they were ready to mate, but people potentially can see the outcomes of the “long-term study” in the failed conceptions of others. Furthermore, the Nuer example also makes clear that we are capable of updating our existing knowledge in the light of new evidence. Given that any such learning abilities are themselves evolved, there is no suggestion here that incest taboos are free from any kind of biological influence, or that they are purely socially constructed. What we are suggesting, however, is that this example undermines the notion that domain-general mechanisms cannot, even in principle, do the job required. We agree that an individual who lives for around 70 years cannot learn the outcome of a process that may take several generations to manifest, but this is a completely different issue from whether an individual can learn that certain kinds of matings are known to have deleterious consequences, and what to do about them. Thus, one cannot use this argument as a priori proof that evolved content-rich domain-specific mechanisms are the only possible way that adaptive behavior can be brought about.
In other words, this is not an argument specifically about the mechanisms by which we avoid incest, but a general argument against the strategy used to establish the necessity of evolved domain-specific processes: positing that individuals cannot learn the actual fitness consequences of their actions, as defined within evolutionary biology, does not mean that humans are unable to learn to pick up on more immediate cues that reflect the relative costs and benefits that do accrue within a lifetime (cues that may well be correlated with long-term fitness) and then use these to guide their own behavior and that of their descendants. We suggest it is possible for our knowledge of such matters to be acquired, at least partly, by a psychological process during development. Hence, it is not “innate.” Moreover, even if it could be established that domain-specific innate knowledge was needed in a particular domain (like incest), this does not mean that it can be used as an argument to rule out general learning processes across all adaptive problem domains. In addition to the above examples, Heyes (2014) has recently presented a review of existing data on infants, all of which had been used to argue for rich, domain-specific interpretations of “theory of mind” abilities, and shows that these results can also be accounted for by domain-general processes. Heyes and colleagues also provide their own empirical evidence to suggest that so-called “implicit mentalizing ability” could equally well be explained by domain-general processes, such as those related to attentional orienting (Santiesteban et al., 2013). In addition, Heyes (2012) has suggested that certain cognitive capacities, which have been argued to be evolved, specialized social learning mechanisms that permit transmission of cultural behaviors, may themselves be culturally inherited learned skills that draw on domain-general mechanisms.
One point worth noting here is that, if data interpreted as the operation of domain-specific processes can be equally well accounted for by domain-general processes, then this has important implications for our earlier discussion of “reverse engineering” and inferring evidence of design, as well as for the necessity of domain-specialization. As Durham (1991) suggested, with respect to the issue of incest taboos: “the influence of culture on human phenotypes will be to produce adaptations that appear as though they could equally well have evolved by natural selection of alternative genotypes … cultural evolution can mimic the most important process in genetic microevolution” (p. 289). Therefore, even if a good case could be made that a cognitive process looks well-designed by selection, an evolved module is not the only possible explanation for the form such a process takes. The Paradox of Choice? These demonstrations of the power of domain-general learning are interesting because Tooby and Cosmides (1992) also attempt to rule this out on the basis of “combinatorial explosion,” which they consider to be a knock-down argument. They state that, without some form of structure limiting the range of options open to us, we would become paralyzed by our inability to work through all possible solutions to reach the best one for the task at hand. This again seems to be something of a Hegelian argument, for Tooby and Cosmides (1992) simply assert that “[If] you are limited to emitting only one out of 100 alternative behaviors every successive minute, [then] after the second minute you have 10,000 different behavioral sequences from which to choose, a million by the third minute, a trillion by six minutes” with the result that “The system could not possibly compute the anticipated outcome of each alternative and compare the results, and so must be precluding without complete consideration the overwhelming majority of branching pathways” (p. 102).
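The figures in this quotation follow directly from treating each successive minute as an independent choice among 100 behaviors, so that the number of possible sequences is 100 raised to the number of minutes elapsed; a few lines reproduce them.

```python
# The numbers in Tooby and Cosmides' (1992) combinatorial-explosion
# argument follow from treating each minute as an independent choice
# among 100 behaviors: the count of possible sequences is 100 ** minutes.
for minutes, label in [(2, "ten thousand"), (3, "a million"), (6, "a trillion")]:
    sequences = 100 ** minutes
    print(f"after {minutes} minutes: {sequences:,} possible sequences ({label})")
```

Note that the exponential blow-up depends entirely on the independence assumption built into this calculation, which is precisely what is questioned below.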
This formulation simply assumes that any sequence of behavior needs to be planned ahead of time before being executed, and that an exponential number of decisions have to be made, whereas it is also possible for behavioral sequences to be organized progressively, with each step contingent on the previous step, but with no requirement for the whole sequence to be planned in advance. That is, one can imagine a process of Bayesian learning, with an algorithm that is capable of updating its “beliefs.” Relatedly, Tooby and Cosmides (1992) apparently assume that each emission of behavior is an independent event (given the manner in which they calculate probabilities) when, in reality, there is likely to be a large amount of auto-correlation, with the range of possible subsequent behaviors being conditional on those that preceded them. Finally, Tooby and Cosmides’s (1992) argument assumes that there is no statistical structure in the environment that could be used to constrain the range of options available (e.g., something akin to the affordances described by Gibson, 1966, 1979), and that organisms are thus required to compute all contingencies independently of the environment. May et al. (2006), however, have shown that robotic rat pups, provided with a completely random control architecture (i.e., without any rules at all, whether domain-general or domain-specific), were nevertheless able to produce the distinctive huddling behavior of real rat pups, due to the constraining influence of bodily and environmental structures. That is, rather than having to decide among a trillion different options, according to the logic described above, bodily and environmental structures allow for complex behavior to emerge without any decision-making at all.
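The kind of sequential Bayesian updating gestured at above can be sketched with a standard Beta-Bernoulli model (the particular observation sequence below is invented for illustration): each step conditions the current “belief” on the evidence observed so far, so behavior can be chosen one contingent step at a time, with no exhaustive tree of future sequences ever computed.

```python
# Sketch of sequential Bayesian updating (Beta-Bernoulli conjugate pair).
# Each step conditions the belief on evidence so far, so choices can be
# made one contingent step at a time; the observation sequence is invented.
alpha, beta = 1.0, 1.0               # uniform Beta(1, 1) prior on P(success)
observations = [1, 1, 0, 1, 1, 1, 0, 1]

for outcome in observations:
    alpha += outcome                 # successes observed so far (+ prior)
    beta += 1 - outcome              # failures observed so far (+ prior)

belief = alpha / (alpha + beta)      # posterior mean of P(success)
print(f"posterior mean P(success) = {belief:.3f}")   # 0.700
```

The contrast with the combinatorial-explosion picture is that the learner here never enumerates branching futures at all; it simply carries forward two summary statistics and revises them as each outcome arrives.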
Thus, there is no reason, in principle, to suppose that humans could not be similarly scaffolded and guided by environmental constraints, in ways that would allow general-learning mechanisms to get a grip and, over time, produce functionally specialized mechanisms that help guide behavior. Indeed, this may also be one reason why human infant learning mechanisms take the form they do, with only a limited capacity at first, so as not to overwhelm the system. As Elman (1993) showed, in his classic paper on infant language learning, the training of a neural network succeeded only when such networks were endowed with a limited working memory, and then gradually “matured.” More recently, Pfeifer and Bongard (2007) have reported similar findings relating to the development of behavior in a “babybot.” Thus, while reasonable when taken at face value, many of the arguments offered in support of an evolved domain-specific computational architecture turn out, on closer inspection, to be rather Hegelian, and not well supported by empirical data. As such, the added value of evolutionary psychology remains an open issue: it is not clear that EP offers an improvement over other computational perspectives that do not make strong claims for an evolved, domain-specific architecture of this kind.

Modules 2.0

The contention that EP has sometimes offered Hegelian arguments should not be taken to suggest that opponents of the EP position are not guilty of the same. We do not deny that modular accounts have also been ruled out based on assertion rather than evidence, and that there have been many simplistic straw-man arguments about genetic determinism and reductionism. Interestingly enough, Jerry Fodor himself, author of “The Modularity of Mind” (Fodor, 1983), asserted that it was simply impossible for “central” cognitive processes to be modular, and Fodor (2000) also presents several Hegelian arguments against the evolutionary “massive modularity” hypothesis.
Indeed, the prevalence of such arguments in the field of cognitive science is Chemero’s (2009) main reason for raising the issue. His suggestion is that, unlike older disciplines, cognitive science gives greater credence to Hegelian arguments because it has yet to establish a theoretical framework and a supporting body of data that everyone can agree is valid. This means that EP does not present us with the knock-down arguments against the SSSM and domain-general learning that it supposes, but neither should we give Hegelian arguments against EP any credence for the same reason. As both Barrett and Kurzban (2006) and Frankenhuis and Ploeger (2007) have documented, many of the misrepresentations and errors of reasoning concerning the massive modularity hypothesis in EP can, for the most part, be traced precisely to the conflation of Fodor’s (1983) more limited conception of modularity with that of Tooby and Cosmides (1992, 2005) and Cosmides and Tooby (1994). Criticisms relating to encapsulation, cognitive impenetrability, automaticity, and neural localization are not fatal to the EP notion of modularity because EP’s claim is grounded in functional specialization, and not any specific Fodorian criterion; criticisms that argue in these terms therefore miss their mark (Barrett and Kurzban, 2006). Given that most criticisms of the massive modularity hypothesis prove groundless from an EP point of view, it is worth considering Barrett and Kurzban’s (2006) analysis in detail in order to understand exactly what the EP view of modularity entails, and whether this updated version of the modularity argument is more convincing in terms of presenting an improved alternative to standard computational models. 
First and foremost, Barrett and Kurzban (2006) make clear that functional specialization alone is the key to understanding modularity from an EP point of view, and domain-specific abilities, and hence modules, “should be construed in terms of the formal properties of information that render it processable by some computational procedure” (Barrett and Kurzban, 2006, p. 634). That is, modules are defined by their specialized input criteria and their ability to handle information in specialized ways: only information of certain types can be processed by the mechanism in question. Natural selection’s role is then “to shape a module’s input criteria so that it processes inputs from the proper domain in a reliable, systematic and specialized fashion.” (By “proper” domain they mean the adaptive problem, with its associated array of inputs, that the module has been designed by selection to solve; this stands in contrast to the “actual” domain, which includes the range of inputs to which the module is potentially able to respond, regardless of whether these were present ancestrally: see Sperber, 1994; Barrett and Kurzban, 2006, p. 635). Hence, the domain-specificity of a module is a natural consequence of its functional specialization (Barrett and Kurzban, 2006). Crudely speaking, then, modules are defined more in terms of their syntactic than their semantic properties—they are not “content domains,” but more like processing rules. Barrett and Kurzban (2006) argue that their refinement of the modularity concept has two implications. First, given that a module is defined as any process for which it is possible to formally specify input criteria, there is no sharp dividing line between domain-specific and domain-general processes, because the latter can also be defined in terms of formally specified input criteria.
The second, related implication is that certain processes, like working memory, which are usually regarded as domain-general (i.e., can process information from a wide variety of domains, such as flowers, sports, animals, furniture, social rituals), can also be considered as modular because they are thought to contain subsystems with highly specific representational formats and a sensitivity only to specific inputs (e.g., the phonological loop, the visuospatial sketchpad; Barrett and Kurzban, 2006). This does, however, seem to deviate slightly from Cosmides and Tooby’s (1997) suggestion that modules are designed to solve particular adaptive problems encountered by our ancestors: what specific adaptive problem does “working memory” solve, given that the integration of information seems common to all adaptive problems? (see also Chiappe and Gardner, 2012). Taken on its own terms, however, Barrett and Kurzban’s (2006) definition of modularity should raise no objections from anyone committed to the computational theory of mind, nor does it come across as particularly radical with respect to its evolutionary theorizing. Thus, Barrett and Kurzban (2006) dissolve many of the problems identified with massive modularity, and suggest that most criticisms are either misunderstandings or caricatures of the EP position. When considered purely as a computational theory (i.e., leaving to one side issues relating to the EEA, and Hegelian arguments relating to the need for evolved domain-specific knowledge), the more recent EP position is thereby revealed as both reasonable and theoretically sophisticated. 
Developmental Considerations: “Soft” Developmental Systems Theory and EP

It is also important to note that the more recent work in EP also incorporates a strongly developmental perspective, laying to rest criticisms that EP is overly determinist and that EP researchers are prone to simplistic claims about the innateness or “hard-wiring” of particular traits (e.g., Barrett, 2006; Frankenhuis et al., 2013). In particular, work by Barrett (2006) and Frankenhuis et al. (2013) attempts to integrate developmental systems theory (DST) into EP. This represents an encouraging move at first glance, because the aim of DST is to move us away from a dichotomous account of development, where two classes of resources—genes and “all the rest”—interact to produce the adult phenotype, toward an account in which there is no division into two fundamentally different kinds of resources. Instead, genes are seen as just one resource among many available to the developmental process, and are not the central drivers of the process (Griffiths and Gray, 1994). Indeed, genes can play their role only if all other resources essential for development are in place. This should not be taken to mean that all resources contribute equally to each and every process, and always assume the same relative importance: the aim is not to “homogenize” the process of development, and obliterate the distinctions between different kinds of resources, but to call into question the way in which we divide up and classify developmental resources, opening up new ways to study such processes. The EP take on DST, however, is self-confessedly “soft,” and continues to maintain the standard distinction between genetic and environmental resources. As defined by Frankenhuis et al. (2013), “soft DST” regards developmental systems as “dynamic entities comprising genetic, molecular, and cellular interactions at multiple levels, which are shaped by their external environments, but distinct from them” (p. 585).
Although a strongly interactionist view, the “developmental system” here remains confined to the organism alone, and it continues to treat genetic influences as fundamentally distinct from other developmental resources, with a unique role in controlling development. More pertinently, Barrett (2006) suggests that, precisely because it gets us away from any kind of “genetic blueprint” model of growth and development, it may be “fruitful to think of developmental processes themselves in computational terms: they are designed to take inputs, which include the state of the organism and its internal and external environments as a dynamically changing set of parameters, and generate outputs, which are the phenotype, the end-product of development. One can think of this end-product, the phenotype, as the developmental target” (p. 205). Thus, once again, EP does not present us with an alternative to current computational models, because, as Barrett (2006) makes clear, the incorporation of these additional theories and models into an EP account entails a reinterpretation of such theories in fully computational terms.

Ancient Adaptations or Thoroughly Modern Modules?

Another consideration we would like to raise is whether, as a result of incorporating a clearly articulated developmental component, EP researchers actually undermine some of their own claims regarding the evolved domain-specificity of our putative modular architecture. Barrett (2006), for example, uses Sperber’s (1994) ideas of actual and proper domains to good effect in his developmental theorizing, distinguishing clearly between “types” of cognitive processes (which have been the target of selection) and “tokens” of these types (which represent the particular manner in which this manifests under a given set of conditions). This enables him to provide a cogent account of an evolved modular architecture that is capable of generating both novelty and flexibility.
The interesting question, from our perspective, is whether the modules so produced can still be considered adaptations to past environments, as Cosmides and Tooby (1994, 1997) insist must be the case. For example, as Barrett (2006) notes, many children possess the concept of Tyrannosaurus rex, which we know must be evolutionarily novel because, as a matter of empirical fact, there has been no selection on humans to acquire this concept. Nevertheless, as Barrett (2006) argues, we can consider the possession of this concept as a token outcome that falls well within the proper type of a putative predator-recognition system. This argument is logical, sensible, and difficult to argue with, yet seems at odds with the central idea presented in much of Tooby and Cosmides’s (1992, 2005) work that the modular architecture of our minds is adapted to a past that no longer exists. That is, as tokens of a particular type of functional specialization, produced by a developmental process that incorporates evolutionarily novel inputs, it would seem that any such modules produced are, in fact, attuned to present conditions, and not to an ancestral past. As Barrett (2006) notes, Inuit children acquire the concept of a polar bear, whereas Shuar children acquire the concept of a jaguar, even though neither of these specific animals formed part of the ancestral EEA; while the mechanisms by which these concepts are formed, and why these concepts are formed more easily than others, may well have an evolutionary origin, the actual functional specializations produced (the actual tokens produced within this proper type) would seem to be fully modern. The notion that “our modern skulls house a stone age mind” (Cosmides and Tooby, 1997) or that, as Pinker (2003, p.
42) puts it: “our brains are not wired to cope with anonymous crowds, schooling, written language, governments, police, courts, armies, modern medicine, formal social institutions, high technology and other newcomers to the human experience” is thus undermined by the token-type distinction developed in more recent EP theorizing. One could argue, perhaps, that what Pinker means here is that our brains did not evolve to deal with such things specifically, i.e., that he is simply making Barrett’s (2006) argument that these phenomena are just tokens of the various types that our brains are wired to cope with. But, if this is the case, then it seems that EP loses much of its claim to novelty. If it is arguing only that humans have evolved psychological mechanisms that develop in ways that attune them to their environment, this does not differ radically from computational cognitive theories in human developmental and comparative psychology more generally. This sounds like a critical argument, but we do not mean it in quite the way it sounds. Our argument is that the theoretical EP literature presents a perfectly acceptable, entirely conventional computational theory, one that admits to novelty, flexibility, the importance of learning and development, and incorporates the idea that a species’ evolutionary history is important in shaping the kinds of psychological processes it possesses and the ease with which they are acquired. Our point is that this is no different from the arguments and empirical findings offered against behaviorism toward the middle of the last century, which heralded the rise of cognitive psychology (see e.g., Malone, 2009; Barrett, 2012). Central to all cognitivist psychological theories is the idea that there are internal, brain-based entities and processes that transform sensory input into motor output, and the acknowledgment that much of this internal structure must reflect a past history of selection.
EP, in this sense then, is not controversial within psychology, and is entirely consonant with current psychological theory and practice. Thus, in addition to the fact that EP is based on the same computational metaphor as standard cognitive psychology, it is also apparent that most of the evolutionary aspects of this theory, as reconceived by current authors, do not render it revolutionary within psychology, nor is there any reason to believe that the remaining social sciences should view EP as any more essential or necessary to their work than current computational models. Indeed, one could simply take the message of EP to be that, as with all species, humans are prepared to learn some things more readily than others as a result of evolving within a particular ecological niche. Seen in these terms, it is surprising that EP continues to be considered controversial within psychology, given that its more recent theoretical claims can be seen as entirely mainstream.

An Alternative Suggestion: Cognitive Integration

If our conclusion is that EP does not offer an alternative to standard computational cognitive psychology, we are left with two further questions: Is an alternative really needed? And if so, what is it? In the remainder of this paper, we tackle these questions in turn. One reason why we might need an alternative to standard computational and representational theories of mind is that, despite claims to the contrary (e.g., Pinker, 2003), they have yet to provide a complete account of how humans and other species produce adaptive, flexible behavior in a dynamic, unpredictable world.
Although we may understand something about capacities like playing chess, engaging in formal reasoning, or natural language (i.e., tasks that involve the manipulation of symbols according to rules, and are inherently computational anyway), we still lack a good understanding of the more mundane tasks that characterize much of what we could call “everyday” intelligence, such as how we manage to negotiate uneven terrain, coordinate all the actions and objects necessary to make a pot of tea, or coordinate our social actions with others when we dance, engage in conversation smoothly and easily, or simply walk down a crowded street. It is interesting to note that the computational metaphor hindered the advancement of robotics in much the same way. The MIT roboticist and inventor of the Roomba, Rodney Brooks, relates how his first formal foray into robotics was at Stanford, where they took a “classic” artificial intelligence approach, with robots that took in sensory inputs, computed solutions to a task based on these inputs, and then executed them. This made the robots operate very slowly, even to the extent that the movement of the sun across the sky, and the changes in the shadows thrown, could confuse their internal representations. Only by moving away from a classic computational “sense–represent–plan–act” approach, and eliminating the need for internal representations altogether, was progress made (Brooks, 2002; see also Pfeifer and Bongard, 2007). In other words, the idea that cognition is, ultimately, a form of “mental gymnastics” (Chemero, 2009) involving the construction, manipulation, and use of internal representations according to a set of rules does not seem to provide an adequate account of how humans and other animals achieve most of the activities they engage in every day.
Given this, the obvious alternatives to the standard computational theories of mind are the various forms of “E-cognition” (embodied, embedded, enactive, extended, and extensive) that have been gaining steady ground in recent years within cognitive science and philosophy of mind and, to a lesser extent, psychology itself, both theoretically and empirically (e.g., Clark, 1997, 2008; Gallagher, 2005; Wheeler, 2005; Menary, 2007, 2010; Pfeifer and Bongard, 2007; Chemero, 2009; Barrett, 2011; Hutto and Myin, 2013). While these approaches vary in the degree to which they reject computational and representational approaches to cognition [e.g., Clark (1997, 2008) argues for a form of “dynamic computationalism,” whereas Hutto and Myin (2013) reject any suggestion that “basic minds,” i.e., those that are non-linguistic, make use of representational content], they have in common the idea that body and environment contribute to cognitive processes in a constitutive and not merely causal way; that is, they argue that an organism’s cognitive system extends beyond the brain to encompass other bodily structures and processes, and can also exploit statistical regularities and structure in the environment. For reasons of space, we cannot provide a full account of these alternatives, and the similarities and differences between them. Instead, we will focus on one particular form of E-cognition, the extended mind (“EM”) hypothesis. Specifically, we will deal with “second-wave EM” thinking, also known as “cognitive integration,” as exemplified by the work of Clark (2008), Sutton (2010), and Menary (2007, 2010). We believe this supplies the beginnings of an answer to why an alternative to standard computational theory is required, and illustrates why EP cannot provide it.
Put simply, the EM hypothesis is that external resources and artifacts, like written language and other forms of material culture, are central to the production of the modern human cognitive phenotype, and serve to augment and ratchet up the power of our evolved brains (e.g., Clark, 1997, 2008; Menary, 2007, 2010; Sutton, 2010). External resources are argued to play roles in a cognitive process that are either functionally equivalent to those played by the biological brain, such that, for the duration of that process, the external resource can be considered part of the cognitive system (the so-called “parity principle”: Clark and Chalmers, 1998), or complementary to brain-based processes, augmenting them accordingly (the “complementarity principle”: Menary, 2007, 2010; Sutton, 2010). We can see this in everything from the way in which our ability to multiply very large numbers is enhanced by the use of pencil and paper to the fascinating literature on sensory substitution devices, where blind individuals are able to visually explore their environments via external devices that supply auditory or tactile information in ways that compensate for the loss of their visual sense (Bach-y-Rita et al., 1969, 2003; Bach-y-Rita and Kercel, 2003). The idea here, then, is not to eliminate all distinctions between different kinds of resources and consider them to be synonymous, but to reduce our prejudice that only internal processes taking place in the brain count as cognitive, and to redraw the boundaries of the cognitive system accordingly. The notion of EM, or cognitive integration, therefore dissolves the boundary between brain, body, and world, and rejects the idea that the “cognitive system” of an animal is confined to its brain alone (for a review of how cognitive integration relates to the non-human animal literature, see Barrett, 2011).
Instead, as Clark (1997) and Clark and Chalmers (1998) suggested, many of our cognitive states can be considered as hybrids, distributed across biological and non-biological realms. We are, as the title of one of Clark’s books suggests, “natural born cyborgs” (Clark, 2003).

Cultivating the Hybrid Human

The human cognitive system, in particular, is extended far beyond that of other species because of the complex interaction between the biological brain and body, and the wide variety of artifacts, media and technology that we create, manipulate, and use. It is crucial to realize that the hybrid nature of human beings is not a recent phenomenon tied to the development of modern technology. On the contrary, cognitive extension is a process that has been taking place since the first hominin crafted the first stone tools, and has continued apace ever since. What this means today is that, as Clark (2003) puts it, “our technologically enhanced minds are barely, if at all, tethered to the ancestral realm” (p. 197), nor are they now “constrained by the limits of the on-board apparatus that once fitted us to the good old savannah” (p. 242). This stands in stark contrast to the EP position, where the only “cognitive machinery” involved is the brain itself, whose structure is tied fundamentally and necessarily to the past, untouched by our culturally constructed, technological world. As Tooby and Cosmides (1992) put it: “what mostly remains, once you have removed from the human world everything internal to humans, is the air between them” (p. 47). Cognitive integration begs to differ in this regard, and invites us to look around and see that this simply cannot be true.
Consequently, our view is that cognitive integration promises to explain more about human psychology than EP ever could because it forces a stronger recognition of the historical, sociocultural nature of human psychology - the fact that we develop in a socially and culturally rich milieu that reflects the contingent nature of both historical and evolutionary events. Past generations structure the developmental context of those that succeed them, providing resources that are essential to the production of species-typical behavior. Importantly, however, they also enhance what can be achieved by providing ever more sophisticated forms of cognitive scaffolding, which augment, in turn, the scaffolding bequeathed by the generations that came before (Sterelny, 2003; Stotz, 2010). This can be seen as something akin to the process of ecological succession, where the engine of change is the organism’s own impact on the environment; a metaphor we have stolen from Griffiths and Gray’s (1994) treatment of DST. Indeed, there is a natural sympathy between DST as an approach to the study of the evolution and development of biological organisms, and the more dynamical forms of E-cognition that adopt a similar approach to the evolution, development, and functioning of cognitive systems. In particular, Stotz (2010) argues convincingly that understanding human psychology from an evolutionary perspective requires a focus on “developmental niche construction,” an idea that, as the name suggests, incorporates elements of both developmental systems and niche construction theory (see also Griffiths and Stotz, 2000). Understanding modern human psychology therefore requires an understanding of the entanglement of our technologies, cultural practices, and historical events with our evolutionary heritage, and not the reverse engineering of human cognitive architecture alone.
Clark (2002) suggests that the pay-off from this kind of expanded psychology “… could be spectacular: nothing less than a new kind of cognitive scientific collaboration involving neuroscience, physiology and social, cultural and technological studies in equal measure” (p. 154). Turning to an embodied, extended approach as an alternative to standard computational theories, including that of EP, is a step in the right direction not only because it recognizes the hybrid nature of humans, in the terms described above, but also in the sense discussed by Derksen (2005), who argues that a recognition of ourselves as part-nature and part-culture creates a distinct and interesting boundary (or rather a range of related boundaries) between humans and the natural world. As Derksen (2005) points out, the reflexive ways in which we deal with ourselves and our culture are very different from our dealings with the natural world, and a recognition of our hybrid nature allows us to explore these boundaries in their own right, and to examine how and why these may shift over time (for example, issues relating to fertility treatments, stem cell research, cloning, and organ transplantation all raise issues concerning what is “natural” versus “unnatural,” and how we should conceive of human bodies in both moral and ethical terms). To emphasize this shifting, dynamic element of the boundaries we straddle as hybrid natural-cultural beings, Derksen (2005) uses the metaphor of “cultivation.” Like a gardener tending his plants, humans cultivate their nature, and in so doing elaborate their potential. As Vygotsky (1962) suggested, this makes culture something we do, rather than something that happens to us, or that we simply possess. 
The intersection between cognitive integration and cultivation should be clear, for cognitive integration, which naturally takes into account our historical, social, cultural, and evolutionary underpinnings in equal measure, is key to our ability to cultivate new forms of human nature (see also Bakhurst, 2011). Indeed, proponents of cognitive integration suggest that “human nature” continually emerges from human activity, and that we cannot pinpoint some fixed and unchanging essence (Derksen, 2012). As Wheeler and Clark (2008) put it: “our fixed nature is a kind of meta-nature … an extended cognitive architecture whose constancy lies mainly in its continual openness to change” (p. 3572). Such a view stands in contrast to the EP perspective, where the idea of a universal human nature, comprising our evolved computational architecture, is a central premise of the approach. The problem here, as we see it, is that cultural variation across time and space is seen simply as the icing on the cake of our evolved universal psychology. Humans are argued to manifest different behaviors under different conditions because our evolved architecture works rather like a jukebox that can play different records given different inputs; what Tooby and Cosmides (1992) refer to as “evoked culture.” By this definition, such cultural differences fail to penetrate or alter our “human nature” in any fundamental way. Such a view also fails to account for how and why completely different modes of thinking have emerged over space and time as a consequence of the invention of different material artifacts, like the wheel, the plow, time-pieces, accounting systems, and written language.
Such things are not evoked simply by exposure to local ecological conditions, and their existence fundamentally changes how we think about the world (without the invention of time-pieces, for example, the cultural importance of timeliness and punctuality so valued by, among others, the Swiss and Germans, would not, and could not, be considered any part of human nature). EP therefore leaves out the most distinctive aspect of human cognitive life—the way in which material culture is both a cause and consequence of our psychological and cultural variability—whereas cognitive integration makes this the central element in understanding why humans think and act in the ways that they do (Menary, 2010; Sutton, 2010; Malafouris, 2013). Finally, as Derksen (2005, 2007) argues, a view of human nature as a matter of cultivation, as a form of ongoing human activity, renders the idea of unification between the biological and social sciences wrongheaded on its face: the very diversity of disciplines in which we engage reflects the disunity, the boundary between nature and culture, that characterizes our humanity, and not the fundamental “psychic unity” of humankind that EP assumes. Consequently, there is a very real need to collaborate and confront each other along disciplinary boundaries, but not to dissolve, ignore, or erase them (Derksen, 2005, 2007). Such sentiments are echoed by those involved in the study of cognitive integration, who similarly call for this kind of multidisciplinary pluralism in our approach to the study of human nature and the mind (Derksen, 2005, 2007; Menary, 2007; Clark, 2008; Wheeler and Clark, 2008; Menary, 2010; Sutton, 2010). Simply put, our hybrid selves can be studied in no other way.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments

Louise Barrett is supported by the Canada Research Chairs (Tier 1) Program and NSERC Discovery grants. Thomas V. Pollet is supported by NWO (Veni, 451.10.032). Gert Stulp is supported by an NWO Rubicon grant. We are grateful to Maarten Derksen for reading and commenting on an earlier draft of the manuscript, and for the comments of our two reviewers. Thanks also to Danielle Sulikowski for inviting us to contribute to this research topic.

References

Bach-y-Rita, P., Collins, C. C., Saunders, F. A., White, B., and Scadden, L. (1969). Vision substitution by tactile image projection. Nature 221, 963–964. doi: 10.1038/221963a0

Bach-y-Rita, P., Tyler, M. E., and Kaczmarek, K. A. (2003). Seeing with the brain. Int. J. Hum. Comput. Interact. 15, 285–295. doi: 10.1207/S15327590IJHC1502_6

Bach-y-Rita, P., and Kercel, S. W. (2003). Sensory substitution and the human–machine interface. Trends Cogn. Sci. 7, 541–546. doi: 10.1016/j.tics.2003.10.013

Barrett, H. C. (2006). “Modularity and design reincarnation,” in The Innate Mind: Culture and Cognition, eds P. Carruthers, S. Laurence, and S. P. Stich (Oxford: Oxford University Press), 199–217.

Barrett, L. (2011). Beyond the Brain: How Body and Environment Shape Animal and Human Minds. Princeton, NJ: Princeton University Press.

Barrett, L. (2012). “Why behaviorism isn’t satanism,” in The Oxford Handbook of Comparative Evolutionary Psychology, eds J. Vonk and T. K. Shackelford (Oxford: Oxford University Press), 17–38.

Barton, R. A. (2007). “Evolution of the social brain as a distributed neural system,” in Oxford Handbook of Evolutionary Psychology, eds L. Barrett and R. I. M. Dunbar (Oxford: Oxford University Press), 129–144.

Brooks, R. A. (2002). Robot: The Future of Flesh and Machines. London: Allen Lane.

Buller, D. J. (2005). Adapting Minds: Evolutionary Psychology and the Persistent Quest for Human Nature. Cambridge, MA: MIT Press.
doi: 10.1002/evan.1360040603 CrossRef Full Text Buller, D. J., and Hardcastle, V. (2000). Evolutionary psychology, meet developmental neurobiology: against promiscuous modularity. Brain Mind 1, 307–325. doi: 10.1023/A:1011573226794 CrossRef Full Text Buss, D. M. (1999). Evolutionary Psychology: The New Science of the Mind. Needham Heights, MA: Allyn & Bacon. Chemero, A. (2009). Radical Embodied Cognitive Science. Cambridge, MA: MIT Press. Chiappe, D., and Gardner, R. (2012). The modularity debate in evolutionary psychology. Theory Psychol. 22, 669–682. doi: 10.1177/0959354311398703 CrossRef Full Text Chomsky, N. (1959). A review of BF Skinner’s Verbal Behavior. Language (Baltim.) 35, 26–58. doi: 10.2307/411334 CrossRef Full Text Chomsky, N. (2005). Three factors in language design. Linguist. Inq. 36, 1–22. doi: 10.1162/0024389052993655 CrossRef Full Text Clark, A. (1997). Being There: Putting Brain, Body, and World Together Again. Cambridge, MA: MIT Press. Clark, A. (2002). Towards a science of the bio-technological mind. Int. J. Cogn. Technol. 1, 21–33. doi: 10.1075/ijct.1.1.03cla CrossRef Full Text Clark, A. (2003). Natural-Born Cyborgs: Minds, Technologies, and the Future of Human. Oxford: Oxford University Press. Clark, A. (2008). Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780195333213.001.0001 CrossRef Full Text Clark, A., and Chalmers, D. (1998). The extended mind. Analysis 58, 7–19. doi: 10.1093/analys/58.1.7 CrossRef Full Text Conway, L. G., and Schaller, M. (2002). On the verifiability of evolutionary psychological theories: an analysis of the psychology of scientific persuasion. Pers. Soc. Psychol. Rev. 6, 152–166. doi: 10.1207/S15327957PSPR0602_04 CrossRef Full Text Cosmides, L., and Tooby, J. (1994). “Origins of domain specificity: the evolution of functional organization,” in Mapping the Mind: Domain Specificity in Cognition and Culture, eds L. A. Hirschfeld and S. A. 
Gelman (Cambridge: Cambridge University Press), 85–116. doi: 10.1017/CBO9780511752902.005 CrossRef Full Text Cosmides, L., and Tooby, J. (1997). Evolutionary Psychology: A Primer. Available at: http://www.cep.ucsb.edu/primer.html (accessed May 08, 2014). Darwin, C. (1871). The Descent of Man and Selection in Relation to Sex. London: John Murray. doi: 10.1037/12293-000 CrossRef Full Text Derksen, M. (2005). Against integration why evolution cannot unify the social sciences. Theory Psychol. 15, 139–162. doi: 10.1177/0959354305051360 CrossRef Full Text Derksen, M. (2007). Cultivating human nature. New Ideas Psychol. 25, 189–206. doi: 10.1016/j.newideapsych.2006.09.001 CrossRef Full Text Derksen, M. (2012). Human nature as a matter of concern, care, and cultivation. Paper Presented at Symposium on The Adapted Mind, Ghent. Dunbar, R. I. M., and Barrett, L. (2007). “Evolutionary psychology in the round,” in Oxford Handbook of Evolutionary Psychology, eds R. Dunbar and L. Barrett (Oxford: Oxford University Press), 3–9. Durham, W. H. (1991). Coevolution: Genes, Culture, and Human Diversity. Stanford, CA: Stanford University Press. Durham, W. H. (2002). “Cultural variation in time and space: the case for a populational theory of culture,” in Anthropology Beyond Culture, eds R. G. Fox and B. J. King (Oxford: Berg), 193–206. Elman, J. L. (1993). Learning and development in neural networks: the importance of starting small. Cognition 48, 71–99. doi: 10.1016/0010-0277(93)90058-4 CrossRef Full Text Fodor, J. A. (1983). The Modularity of Mind: An Essay on Faculty Psychology. Cambridge, MA: MIT Press. Fodor, J. A. (2000). The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology. Cambridge, MA: MIT Press. Frankenhuis, W. E., and Ploeger, A. (2007). Evolutionary psychology versus fodor: arguments for and against the massive modularity hypothesis. Philos. Psychol. 20, 687–710. doi: 10.1080/09515080701665904 CrossRef Full Text Gallagher, S. (2005). 
How the Body Shapes the Mind. Cambridge: Cambridge University Press. doi: 10.1093/0199271941.001.0001 CrossRef Full Text Gibson, J. J. (1966). The Senses Considered as Perceptual Systems. Boston, MA: Houghton Mifflin. Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Boston, MA: Houghton Mifflin. Gómez, R. L. (2002). Variability and detection of invariant structure. Psychol. Sci. 13, 431–436. doi: 10.1111/1467-9280.00476 CrossRef Full Text Gray, R. D., Heaney, M., and Fairhall, S. (2003). “Evolutionary psychology and the challenge of adaptive explanation,” in From Mating to Mentality: Evaluating Evolutionary Psychology, eds K. Sterelny and J. Fitness (Hove: Psychology Press), 247–268. Griffiths, P. E., and Gray, R. D. (1994). Developmental systems and evolutionary explanation. J. Philos. 91, 277–304. doi: 10.2307/2940982 CrossRef Full Text Griffiths, P. E., and Stotz, K. (2000). How the mind grows: a developmental perspective on the biology of cognition. Synthese 122, 29–51. doi: 10.1023/A:1005215909498 CrossRef Full Text Holcomb, H. R. III (1996). Just so stories and inference to the best explanation in evolutionary psychology. Minds Mach. 6, 525–540. Hutto, D. D., and Myin, E. (2013). Radicalizing Enactivism: Basic Minds Without Content. Cambridge, MA: MIT Press. Karmiloff-Smith, A. (1995). Beyond Modularity: A Developmental Perspective on Cognitive Science. Cambridge, MA: MIT Press. Karmiloff-Smith, A. (1998). Development itself is the key to understanding developmental disorders. Trends Cogn. Sci. 2, 389–398. doi: 10.1016/S1364-6613(98)01230-1233 CrossRef Full Text Ketelaar, T., and Ellis, B. J. (2000). Are evolutionary explanations unfalsifiable? Evolutionary psychology and the Lakatosian philosophy of science. Psychol. Inq. 11, 1–21. doi: 10.1207/S15327965PLI1101_01 CrossRef Full Text Kuper, A. (2010). Incest and Influence: The Private Life of Bourgeois England. Cambridge, MA: Harvard University Press. Kurzban, R. (2011a). 
Is Your Brain Like an Iphone? Available at: http://www.psychologytoday.com/blog/mind-design/201101/is-your-brain-iphone (accessed May 10, 2014). Kurzban, R. (2011b). Two Sides of the Same Coyne. Available at: http://www.epjournal.net/blog/2011/01/two-sides-of-the-same-coyne/ (accessed May 10, 2014). Kurzban, R. (2012). “Just so stories are (bad) explanations,” Functions are Much Better Explanations. Available at: http://www.epjournal.net/blog/2012/09/just-so-stories-are-bad-explanations-functions-are-much-better-explanations/ (accessed May 8, 2014). Kurzban, R. (2013). How Do Biologists Support Functional Claims? Available at: http://www.epjournal.net/blog/2013/09/how-do-biologists-support-functional-claims/ (accessed May 14, 2014). Lehrman, D. S. (1970). “Semantic and conceptual issues in the nature-nurture problem,” in Development and Evolution of Behavior: Essays in Memory of T. C. Schneirla, eds L. R. Aronson, E. Tobach, D. S. Lehrman, and J. S. Rosenblatt (San Francisco, CA: Freeman), 17–52. Lloyd, E. A. (1999). Evolutionary psychology: the burdens of proof. Biol. Philos. 14, 211–233. doi: 10.1023/A:1006638501739 CrossRef Full Text Malafouris, K. (2013). How Things Shape the Mind: A Theory of Material Engagement. Cambridge, MA: MIT Press. Malone, J. C. (2009). Psychology: Pythagoras to Present. Cambridge, MA: MIT Press. doi: 10.1002/cplx.20149 CrossRef Full Text Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. New York, NY: Henry Holt. doi: 10.1057/9780230592889 CrossRef Full Text May, C. J., Schank, J. C., Joshi, S., Tran, J., Taylor, R. J., and Scott, I.-E. (2006). Rat pups and random robots generate similar self-organized and intentional behavior. Complexity 12, 53–66. doi: 10.1002/cplx.20149 CrossRef Full Text Menary, R. A. (2007). Cognitive Integration: Mind and Cognition Unbounded. London: Palgrave Macmillan. Menary, R. A. (2010). 
“Cognitive integration and the extended mind,” in The Extended Mind, ed. R. A. Menary (Cambridge, MA: MIT Press), 227–243. Park, J. H. (2007). Distinguishing byproducts from non-adaptive effects of algorithmic adaptations. Evol. Psychol. 5, 47–51. doi: 10.1017/S0140525X00081061 CrossRef Full Text Pinker, S. (2003). The Blank Slate: The Modern Denial of Human Nature. London: Penguin. Pinker, S., and Bloom, P. (1990). Natural language and natural selection. Behav. Brain Sci. 13, 707–727. doi: 10.1017/S0140525X00081061 CrossRef Full Text Rose, H. (2000). “Colonising the social sciences,” in Alas, Poor Darwin: Arguments Against Evolutionary Psychology, eds H. Rose and S. Rose (London: Jonathan Cape), 106–128. Rose, H., and Rose, S. (2000). Alas, Poor Darwin: Arguments Against Evolutionary Psychology. London: Jonathan Cape. Samuels, R. (2002). Nativism in cognitive science. Mind Lang. 17, 233–265. doi: 10.1111/1468-0017.00197 CrossRef Full Text Smith, L. B., and Thelen, E. (2003). Development as a dynamic system. Trends Cogn. Sci. 7, 343–348. doi: 10.1016/S1364-6613(03)00156-156 CrossRef Full Text Sperber, D. (1994). “The modularity of thought and the epidemiology of representations.,” in Mapping the Mind: Domain Specificity in Cognition and Culture, eds L. A. Hirschfeld and S. A. Gelman (Cambridge: Cambridge University Press), 39–67. doi: 10.1017/CBO9780511752902.003 CrossRef Full Text Sporns, O. (2011a). Networks of the Brain. Cambridge, MA: MIT Press. Sterelny, K. (2003). Thought in a Hostile World: The Evolution of Human Cognition. Oxford: Blackwell. Stotz, K. (2010). Human nature and cognitive-developmental niche construction. Phenom. Cogn. Sci. 9, 483–501. doi: 10.1007/s11097-010-9178-7 CrossRef Full Text Sutton, J. (2010). “Exograms and interdisciplinarity: history, the extended mind, and the civilizing process,” in The Extended Mind, ed. R. A. Menary (Cambridge, MA: MIT Press), 189–225. Symons, D. (1992). 
“On the use and misuse of Darwinism in the study of human behavior,” in The Adapted Mind: Evolutionary Psychology and the Generation of Culture, eds J. H. Barkow, L. Cosmides, and J. Tooby (Oxford: Oxford University Press), 137–162. Tooby, J., and Cosmides, L. (1990). The past explains the present: emotional adaptations and the structure of ancestral environments. Ethol. Sociobiol. 11, 375–424. doi: 10.1016/0162-3095(90)90017-Z CrossRef Full Text Tooby, J., and Cosmides, L. (1992). “The psychological foundations of culture,” in The Adapted Mind: Evolutionary Psychology and the Generation of Culture, eds J. H. Barkow, L. Cosmides, and J. Tooby (Oxford: Oxford University Press), 19–136. Tooby, J., and Cosmides, L. (2005). “Conceptual foundations of evolutionary psychology,” in The Handbook of Evolutionary Psychology, ed. D. M. Buss (New York, NY: Wiley), 5–67. Travis, J., and Reznick, D. N. (2009). “Adaptation,” in Evolution: The First Four Billion Years, eds M. Ruse, and J. Travis (Cambridge, MA: Harvard University Press), 105–131. Westermarck, E. (1921). The History of Human Marriage, 5th Edn. London: Macmillan. Wheeler, M. (2005). Reconstructing the Cognitive World: The Next Step. Cambridge, MA: MIT Press. Williams, G. C. (1966). Adaptation and Natural Selection. Princeton, NJ: Princeton University Press. Wilson, E. O. (1975). Sociobiology: The New Synthesis. Cambridge, MA: Harvard University Press.
By Chris Morris
BBC News, Raipur

Troops carry voting machines in Bihar, where the Maoists are also active

As the sun begins to set over the village of Chandi, a group of dancers is entertaining the crowd gathered to hear from the local candidate of the Congress party. This is grassroots politics, a long way from the centre of Indian political power. But here too, it is all about the numbers. "We think we will win 11 seats," says Dhanendra Sahu, the Congress president in Chhattisgarh. "The people believe in us." There is another kind of politics at work in this state, though. Away from the headlines, a low-intensity war is being waged across a vast swathe of territory in central and eastern India. It has become known as the Red Corridor. Elections may be in full swing, but there is also an army of Maoist rebels preaching revolution in this part of the country. They do not believe in parliamentary democracy. Anyone caught by the Maoists on election day with ink on their finger - as proof that they have voted - will, the rebels warn, have that finger cut off.

'Difficult fight'

Yes, Maoists - known locally as Naxalites - promising a dictatorship of the proletariat. It may sound like a blast from the past, but in Chhattisgarh alone they have 10,000 armed cadres in vast, sparsely populated forest regions. Hundreds of people are killed in Maoist violence in India every year - at least 25 have been killed in the last few days. There have been pitched battles between the Maoists and the security forces in Chhattisgarh, Orissa and Bihar. The authorities admit that there is now a Maoist presence in nearly one third of all districts across the country. One in 10 districts is seriously affected. And the Naxalites have big plans. "The Maoist projection is that they will be able to take over the Indian state by 2050," explains Ajai Sahni of the Institute for Conflict Management. "So that is the kind of projection they are making - they are looking forward 40 years."
The idea that they can pose a national challenge sounds far-fetched. So is it just fantasy? "I would like to believe that if the Indian state wakes up and begins to address this issue properly it is really fantasy," Ajai Sahni says. "But if the Maoists are able to establish disruptive capacity as they are currently trying to do, if they're able to do that across the country, the Indian state will be looking at a fight that is going to be very, very difficult."

Police challenge

Already there are plenty of victims. At a small children's home on the outskirts of Raipur, there are more than 20 orphans of the Maoist conflict.

Kishore came to the children's home after his father was killed

"I'm here to take education," Kishore says. "There were too many problems in my village, they were killing people and my father died in the violence. So it's good to be here for the future." At Chhattisgarh police headquarters, the focus is on the present threat. The authorities here have been heavily criticised for sponsoring vigilante groups who oppose the Maoists. But the police chief in charge of anti-Maoist operations, Pawan Deo, is unapologetic. He says he knows he has to win over the people. And force alone is not enough. "Steps need to be taken. I think the government is fully aware of that, and we are making strategies according to that," he says. "The biggest hindrance to our operations is the inaccessibility of the areas where the Maoists operate. They destroy the roads. They oppose development. "They also have front organisations," Mr Deo says, "which covertly and overtly support them. So we are tackling them on that."

'Basic needs'

But are they tackling the right people? Every week in Raipur a peaceful protest is held, calling for the release of a well-known local doctor, Binayak Sen. He has been in custody for nearly two years, accused of collusion with the Maoists. The medical journal The Lancet is among those who have campaigned for his release.
Activist Rajendra Sail says he can understand why the poor are angry

His supporters, like Rajendra Sail of the People's Union for Civil Liberties, say Dr Sen's only crime is working for human rights, and understanding why the poor get so frustrated. "We don't see any wisdom in using violent means to achieve any ends," Mr Sail insists, "and we are very strong on this. "But people are not even receiving their basic human needs. And that, coupled with the large scale exploitation of their mineral resources, of the forest and the water, leads people to resort to certain methods." In the city of Raipur you really would not know there was a war on in parts of this state. Most of the violence is confined to remote rural areas. But as the world's biggest exercise in democracy gets under way, it is worth noting that many of the issues fuelling the Maoist insurgency - poverty and inequality, lack of land and opportunity - are among the biggest challenges facing India in the 21st Century.
LORAIN COUNTY, Ohio -- It's a heartbreaking case of "he said, she said" after a wedding photo went viral and was viewed by millions of people across the globe. Photos taken in late September by Delia D. Blackburn went viral and were viewed over 70 million times and appeared in news reports in dozens of countries. The images captured a touching moment at the wedding of Brittany and Jeremy Peck. Just before the wedding was to start, Brittany's father stopped everything to go get her stepdad. Both men then walked Brittany down the aisle. "It was a great moment," said Blackburn. "I, myself, was teary over it." But now the photographer says she's upset by a series of events that have transpired since all of the media attention. Blackburn claims Brittany's father threatened to sue her for allegedly making money off the photos, a claim she denies. Blackburn also said family members and friends of Brittany have been slamming her on social media sites. One person wrote, "Throw her a bullet. Then say the next one will be coming faster." "They're trying to ruin me," Blackburn told WJW. Brittany said there is no ill will toward Blackburn, and claims she doesn't know the person who wrote the threat. The comment was eventually deleted and Brittany's mother posted an apology on Facebook. Brittany told WJW concerns about profits stemmed from a Facebook post where Blackburn mentioned the possibility of printing posters, a move that Brittany and her family did not support. She also said the contract stated she would receive her wedding photos within two weeks, but it has been seven weeks and she still does not have her photos. "We don't want anything out of this," said Brittany. "Hand us our pictures and we'll go our separate ways." Blackburn said two weeks has never been a problem in the past, but the overwhelming attention after the photos went viral has delayed her work, according to WJW.
Both sides have obtained counsel and a date has been set for the photos to be completed and delivered to the family. Blackburn said she hopes the “slanderous and untrue character attacks” will stop so that no further legal action is necessary. Brittany said she is still grateful for the pictures which immortalized one of the most special moments of her life. “I love my pictures,” said the newlywed, “I couldn’t imagine having anyone else take them; honestly, I couldn’t.”
The party is definitely over. Faced with serious salary cap issues, the Chicago Blackhawks traded away another contributor to the Stanley Cup title team, shipping Kris Versteeg to the Toronto Maple Leafs on Wednesday. "Obviously, like I said earlier this year, we're not looking to trade anybody but that's the game we're in today," Blackhawks general manager Stan Bowman said. "You look at the trades that are made, most of them are related around the salary cap, but that's just a reality everybody is facing." The Leafs also received forward Bill Sweatt, a second-round pick in 2007 who played four seasons at Colorado College, for forwards Viktor Stalberg, Chris DiDomenico and Philippe Paradis. "We're always looking to acquire good young players," Bowman told ESPNChicago.com. "We're trying to continue this great run. We're very high on these three players we acquired. Obviously, [Viktor] Stalberg is an NHL player. He's got some size. He's going to complement the playmakers on our team." Versteeg, 24, had 20 goals and 24 assists in 79 regular-season games for Chicago. He had six goals and eight assists in the playoffs. Versteeg completed his third season in the league, all with Chicago. "Kris is a young player who came into our lineup on a full-time basis a year ago and he kind of blossomed as a guy who a year ago was just trying to make the team," Bowman said. "He's a versatile guy, he played some center for us this year when we had some injuries. He's able to play left wing or right wing, got a high skill level, can play across the board in terms of penalty killing and power play." Versteeg signed a three-year, $9.25 million contract with the Hawks last summer. Cap concerns already forced Chicago to unload playoff hero Dustin Byfuglien in a deal with Atlanta this offseason. Stars Patrick Kane, Jonathan Toews and Duncan Keith have new long-term deals to go with the huge contract Marian Hossa signed before last season.
Cup winning goalie Antti Niemi and young defenseman Niklas Hjalmarsson are restricted free agents and will likely receive raises. Bowman had said after the Byfuglien deal that the team didn't have to make any more moves. "I guess that's always debatable in terms of need and want," he told ESPNChicago.com on Wednesday night. "The important thing is we're always trying to improve our team. I've been trying to warn our fans that this team we fell in love with isn't going to stay the same." Bowman said the deal could make the team a player when free agency starts Thursday. "We have some more flexibility to look at some [unrestricted free agents] which we probably didn't have as of yesterday." And the general manager is not concerned about other teams grabbing their restricted free agents. "They [offer sheets] don't concern me," he said. "These guys are going to remain with the Blackhawks. We have plenty of flexibility to make things work." The 24-year-old Stalberg, from Sweden, had nine goals and five assists in 40 games for Toronto last season, his first in the NHL. The 19-year-old Paradis was selected by Carolina in the first round of the 2009 draft, then traded to Toronto in December. He had 24 goals in 63 games last season for Shawinigan in the Quebec Major Junior Hockey League. The 21-year-old DiDomenico had seven goals in 12 regular-season games for Drummondville in the QMJHL. He also had seven goals in 14 playoff games. Information from ESPNChicago.com's Jesse Rogers and The Associated Press was used in this report.
Open Source Tools

BLAST
About: The Basic Local Alignment Search Tool (BLAST) is the most widely used sequence similarity tool. There are versions of BLAST that compare protein queries to protein databases, nucleotide queries to nucleotide databases, as well as versions that translate nucleotide queries or databases in all six frames and compare to protein databases or queries.
Year of release: 1990
Download: http://blast.ncbi.nlm.nih.gov/Blast.cgi
Author: Stephen Altschul, Warren Gish, Webb Miller, Eugene Myers and David J. Lipman
DOI: 10.1016/S0022-2836(05)80360-2
OS: Windows, Unix, Mac OS X
Licence: Public Domain
Category: Aligners (pairwise)

SOAP
About: SOAPaligner/soap2 is a member of the SOAP (Short Oligonucleotide Analysis Package). It is an updated version of the SOAP software for short oligonucleotide alignment. The new program features super-fast and accurate alignment for the huge amounts of short reads generated by the Illumina/Solexa Genome Analyzer.
Year of release: 2008
Download: http://soap.genomics.org.cn/
Author: Prof. T.W. Lam, Alan Tam, Simon Wong, Edward Wu and S.M. Yiu
DOI: 10.1093/bioinformatics/btn025
OS: Linux, Mac OS X
Licence: GNU GPL v.3
Category: Aligners (short read)

PHYLIP
About: PHYLIP (the PHYLogeny Inference Package) is a package of programs for inferring phylogenies (evolutionary trees).
Year of release: 1981
Download: http://evolution.genetics.washington.edu/phylip.html
Author: Jerry Shurman, Christopher Meacham, Mary Kuhner, Jan Yamato, Naruya Saitou and Mark Moehring
Key publication: http://nebc.nerc.ac.uk/bioinformatics/docs/phylip.html
OS: Linux, Mac OS X and Windows
Licence: Open source
Category: ToolKits and APIs

EMBOSS
About: EMBOSS is 'The European Molecular Biology Open Software Suite'.
EMBOSS is a free Open Source software analysis package specially developed for the needs of the molecular biology user community. The software automatically copes with data in a variety of formats and even allows transparent retrieval of sequence data from the web.
Year of release: 2000
Download: http://emboss.sourceforge.net/
Author: Tim Carver and Lisa Mullan
Key publication: http://www.ncbi.nlm.nih.gov/pubmed/10827456
OS: Linux, Mac OS X, Unix and Windows
Licence: GPL and LGPL
Category: ToolKits and APIs

BioJava
About: BioJava is an open-source project dedicated to providing a Java framework for processing biological data. It includes objects for manipulating biological sequences, file parsers, DAS client and server support, access to BioSQL and Ensembl databases, tools for making sequence analysis GUIs and powerful analysis and statistical routines including a dynamic programming toolkit.
Year of release: 2000
Download: http://biojava.org/wiki/Main_Page
Author: Thomas Down, Michael Heuer, David Huen, Matthew Pocock, Mark Schreiber, Richard Holland, Martin Szugat, Keith James, Sylvain Foisy, Andreas Prlic, Dickson S. Guedes, Francois Pepin and Jianjiong Gao
DOI: 10.1093/bioinformatics/bts494
OS: Mac OS X, Unix, Windows
Licence: LGPL v 2.1
Category: ToolKits and APIs

Bioconductor
About: Bioconductor is an open source, open development software project to provide tools for the analysis and comprehension of high-throughput genomic data.
Year of release: 2001
Download: http://www.bioconductor.org/help/publications/books/bioinformatics-and-computational-biology-solutions/microarrays/
Author: Vince Carey, Marc Carlson, Sean Davis and others.
DOI: 10.1186/gb-2004-5-10-r80
OS: Linux, Mac OS X and Windows
Licence: Artistic 2.0, GPL2
Category: ToolKits and APIs

Bismark
About: A tool to map bisulfite-converted sequence reads and determine cytosine methylation states.
Year of release: 2010
Download: http://www.bioinformatics.babraham.ac.uk/projects/bismark/
Author: Felix Krueger
DOI: 10.1093/bioinformatics/btr167
OS: Linux, Mac OS X and Windows
Licence: GPL v3 or later
Category: Other Aligners

SWISS-MODEL
About: SWISS-MODEL is a fully automated protein structure homology-modelling server, accessible via the ExPASy web server, or from the program DeepView (Swiss Pdb-Viewer). The purpose of this server is to make protein modelling accessible to all biochemists and molecular biologists worldwide.
Year of release: 2003
Download: http://swissmodel.expasy.org/
Author: Manuel Peitsch, et al
DOI: 10.1093/bioinformatics/bti770
OS: Web-based
Licence: Free; commercial users need to sign an agreement.
Category: Structure Modelling

HMMER
About: HMMER is used for searching sequence databases for homologs of protein sequences, and for making protein sequence alignments. It implements methods using probabilistic models called profile hidden Markov models.
Year of release: 1998
Download: http://hmmer.janelia.org/
Author: Sean R. Eddy
DOI: 10.1093/nar/gkr367
OS: Windows, Mac OS X, Linux
Licence: GPLv3
Category: Aligners (pairwise)

MAQ
About: Maq stands for Mapping and Assembly with Quality. It builds assemblies by mapping short reads to reference sequences, and was previously known as mapass2. Maq builds mapping assemblies from short reads generated by next-generation sequencing machines. It is particularly designed for the Illumina-Solexa 1G Genetic Analyzer, and has preliminary functions to handle ABI SOLiD data.
Year of release: 2007
Download: http://maq.sourceforge.net/
Author: Heng Li
DOI: 10.1101/gr.078212.108
OS: Linux
Licence: GNU GPL v3
Category: Aligners (short read)

BioSQL
About: BioSQL is a generic unifying schema for storing sequences from different sources, for instance GenBank or Swiss-Prot. BioSQL is meant to be a common data storage layer supported by all the different Bio* projects: BioPerl, BioJava, Biopython and BioRuby. Entries stored through an application written in, say, BioPerl could be retrieved by another written in BioJava.
Year of release: 2001
Download: http://www.biosql.org/wiki/Main_Page
Author: Ewan Birney
DOI: 10.1186/2041-1480-1-8
OS: Unix, Mac OS X, Windows
Licence: GNU Lesser General Public Licence v 3
Category: Database/Warehouse

eHive
About: This is a distributed processing system based on 'autonomous agents' and the behavioural structure of honey bees. It implements all functionality of both data-flow graphs and block-branch diagrams, which should allow it to codify any program, algorithm, or parallel processing job control system. It is not bound to any processing 'farm' system and can be adapted to any GRID.
Year of release: 2000
Download: http://www.ensembl.org/info/docs/eHive/index.html
Author: Abel Ureta-Vidal, Jessica Severin, Michael Schuster
DOI: 10.1186/1471-2105-11-240
OS: Linux, Mac OS X, Windows
Licence: Open Source
Category: Workflows

Ensembl API
About: The Ensembl API (application programming interface) is a framework for applications that need to access or store data in Ensembl's databases.
Year of release: 2000
Download: http://www.ensembl.org/info/docs/api/index.html
Author: Glenn Proctor, Ian Longden and Patrick Meidl
OS: Unix, Windows
Licence: Apache-style licence.
Category: ToolKits and APIs

BioRuby
About: BioRuby is an open source Ruby library for developing bioinformatics software.
Year of release: 2000
Download: http://bioruby.open-bio.org/
Author: Toshiaki Katayama
DOI: 10.1093/bioinformatics/btq475
OS: Linux, Mac OS X and Windows
Licence: Ruby licence
Category: ToolKits and APIs

LAGAN
About: The LAGAN Toolkit consists of four components: CHAOS, LAGAN, Multi-LAGAN and Shuffle-LAGAN.
Year of release: 2003
Download: http://lagan.stanford.edu/lagan_web/authors.shtml
Author: Michael Brudno
DOI: 10.1101/gr.926603
OS: Linux, Windows
Licence: GNU GPL
Category: ToolKits and APIs

Mauve
About: Mauve is a system for efficiently constructing multiple genome alignments in the presence of large-scale evolutionary events such as rearrangement and inversion. Multiple genome alignment provides a basis for research into comparative genomics and the study of evolutionary dynamics. Aligning whole genomes is a fundamentally different problem than aligning short sequences. Mauve has been developed with the idea that a multiple genome aligner should require only modest computational resources. It employs algorithmic techniques that scale well in the amount of sequence being aligned. For example, a pair of Y. pestis genomes can be aligned in under a minute, while a group of 9 divergent Enterobacterial genomes can be aligned in a few hours.
Year of release: 2010
Download: http://gel.ahabs.wisc.edu/mauve/
Author: Darling AE, Mau B, Perna NT
DOI: 10.1371/journal.pone.0011147
OS: Windows, Mac, Linux
Licence: GNU GPL
Category: Other Aligners

HHpred
About: The primary aim in developing HHpred was to provide biologists with a method for sequence database searching and structure prediction that is as easy to use as BLAST or PSI-BLAST and that is at the same time much more sensitive in finding remote homologs.
In fact, HHpred's sensitivity is competitive with the most powerful servers for structure prediction currently available.
Year of release: 2005
Download: http://toolkit.tuebingen.mpg.de/hhpred
Author: Soding, Biegert, Lupas, et al
DOI: 10.1093/nar/gki408
OS: Linux, Mac OS X
Licence: GPL v3
Category: Structure Modelling

MUMmer
About: MUMmer is a system for rapidly aligning entire genomes, whether in complete or draft form. For example, MUMmer 3.0 can find all 20-basepair or longer exact matches between a pair of 5-megabase genomes in 13.7 seconds, using 78 MB of memory, on a 2.4 GHz Linux desktop computer.
Year of release: 1999
Download: http://mummer.sourceforge.net/
Author: Stefan Kurtz, Adam Phillippy, Art Delcher and Steven Salzberg
DOI: 10.1186/gb-2004-5-2-r12
OS: Unix
Licence: Artistic License
Category: Aligners (pairwise)

Bowtie
About: Bowtie is an ultrafast, memory-efficient short read aligner geared toward quickly aligning large sets of short DNA sequences (reads) to large genomes. It aligns 35-base-pair reads to the human genome at a rate of 25 million reads per hour on a typical workstation.
Year of release: 2008
Download: http://bowtie-bio.sourceforge.net/bowtie2/index.shtml
Author: Ben Langmead and Cole Trapnell
DOI: 10.1038/nmeth.1923
OS: Windows, Linux, Mac OS X
Licence: GNU GPL v3
Category: Aligners (short read)

ClustalW
About: Clustal W is a general purpose multiple alignment program for DNA or proteins. Clustal 2 comes in two flavors: the command-line version ClustalW and the graphical version ClustalX.
Year of release: 1994
Download: http://www.clustal.org/clustal2/
Author: Higgins DG, Sharp PM
DOI: 10.1093/bioinformatics/btm404
OS: Linux, Mac OS X, Windows
Licence: GNU Lesser GPL
Category: Multiple Sequence Aligners

PECAN
About: Pecan is a consistency based multiple-alignment program developed by Benedict Paten in Ewan Birney's group at the EBI.
Year of release: 2006
Download: http://www.ebi.ac.uk/~bjp/pecan/
Author: Benedict Paten
DOI: 10.1101/gr.076554.108
OS: Linux
Category: Multiple Sequence Aligners

MIRA
About: MIRA is a whole genome shotgun and EST sequence assembler for Sanger, 454, Solexa (Illumina), IonTorrent and PacBio data. It can be seen as a Swiss army knife of sequence assembly, developed and used over the past 12 years to get assembly jobs done efficiently - and especially accurately.
Year of release: 1997
Download: http://sourceforge.net/apps/mediawiki/mira-assembler/index.php?title=Main_Page
Author: Bastien Chevreux
Key publication: http://www.bioinfo.de/isb/gcb99/talks/chevreux/
OS: Linux, Unix
Licence: GNU GPL v2
Category: Assemblers Genomic (long read)

SEQCLEAN
About: A script for automated trimming and validation of ESTs or other DNA sequences by screening for various contaminants, low quality and low-complexity sequences.
Year of release: 2004
Download: http://compbio.dfci.harvard.edu/tgi/software/
Author: The Gene Indices Group (Geo Pertea)
OS: Linux, Windows
Licence: Artistic licence
Category: Assemblers Genomic (long read)

Jmol
About: Jmol is a free, open source molecule viewer for students, educators, and researchers in chemistry and biochemistry.
Year of release: 2001
Download: http://jmol.sourceforge.net/
Author: Egon Willighagen, et al.
Key publication: http://www.scribd.com/doc/14333194/Processing-CML-Conventions-in-Java
OS: Linux, Windows, Mac OS X
Licence: LGPL
Category: Structure Visualisation

SOAPdenovo2
About: SOAPdenovo is a novel short-read assembly method that can build a de novo draft assembly for human-sized genomes. The program is specially designed to assemble Illumina GA short reads.
Year of release: 2009
Download: http://soap.genomics.org.cn/soapdenovo.html
Author: BGI
DOI: 10.1186/2047-217X-1-18
OS: Linux, Mac OS X
Licence: GPLv3
Category: Assemblers Genomic (short read)

UniGene
About: UniGene computationally identifies transcripts from the same locus; analyzes expression by tissue, age, and health status; and reports related proteins (protEST) and clone resources.
Year of release: 2003
Download: http://www.ncbi.nlm.nih.gov/unigene/
Author: Boguski MS, Schuler GD
Key publication: http://www.ncbi.nlm.nih.gov/books/NBK21083/
OS: Linux, Mac OS X, Windows
Licence: Open licence
Category: Assemblers (mRNA)

GLIMMER 3
About: Glimmer is a system for finding genes in microbial DNA, especially the genomes of bacteria, archaea, and viruses. Glimmer (Gene Locator and Interpolated Markov ModelER) uses interpolated Markov models (IMMs) to identify the coding regions and distinguish them from noncoding DNA.
Year of release: 1998
Download: http://www.cbcb.umd.edu/software/glimmer/
Author: Arthur Delcher
DOI: 10.1093/bioinformatics/btm009
OS: Linux
Licence: OSI Certified Open Source
Category: Gene Prediction (mRNA)

CD-HIT
About: CD-HIT is a very widely used program for clustering and comparing protein or nucleotide sequences.
Year of release: 2006
Download: http://weizhong-lab.ucsd.edu/cd-hit/
Author: Weizhong Li, Adam Godzik, Lukasz Jaroszewski
DOI: 10.1093/bioinformatics/btl158
OS: Linux
Licence: GPL v2
Category: Sequence Tools

tRNAscan
About: tRNAscan-SE was designed to make rapid, sensitive searches of genomic sequence feasible using the selectivity of the Cove analysis package. It identifies transfer RNA genes in genomic DNA or RNA sequences.
Year of release: 1997
Download: http://lowelab.ucsc.edu/tRNAscan-SE/
Author: Peter Schattner, Angela N. Brooks and Todd M. Lowe
Key publication: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC146525/
OS: Unix
Licence: GNU GPL
Category: Gene Prediction (ncRNA)

bioMART
About: The purpose of BioMart is to convert one or more data sources (flat files or relational) into data marts that can be accessed via its standardised web browser interface and also via its Perl, Java and web service APIs.
Year of release: 2003
Download: http://www.biomart.org/
Author: Arek Kasprzyk
DOI: 10.1093/database/bar038
OS: Linux
Licence: Open Source
Category: Database/Warehouse

TAVERNA
About: Taverna is an open source and domain-independent Workflow Management System - a suite of tools used to design and execute scientific workflows and aid in silico experimentation.
Year of release: 2003
Download: http://www.taverna.org.uk/
Author: myGrid (University of Manchester) and EBI
DOI: 10.1093/nar/gkt328
OS: Linux, Mac OS X, Windows
Licence: LGPL
Category: Workflows

ENSEMBL BROWSER
About: Ensembl is a joint project between EMBL-EBI and the Wellcome Trust Sanger Institute to develop a software system which produces and maintains automatic annotation on selected eukaryotic genomes.
Year of release: 2000
Download: http://www.ensembl.org/index.html
Author: EBI and Wellcome Trust Sanger Institute
DOI: 10.1101/gr.1863004
OS: Linux
Licence: Apache-style licence
Category: Genome Browsers

bioPERL
About: BioPerl is a toolkit of Perl modules useful in building bioinformatics solutions in Perl. The BioPerl project is an international association of developers of open source Perl tools for bioinformatics, genomics and life science research.
Year of release: 2000
Download: http://www.bioperl.org/wiki/Main_Page
Author: Chris Dagdigian, Richard Resnick, Lew Gramer, Alessandro Guffanti and others
DOI: 10.1101/gr.361602
OS: Linux, Mac OS X, Windows
Licence: Perl Artistic Licence
Category: ToolKits and APIs

PLINK
About: PLINK is a free, open-source whole genome association analysis toolset, designed to perform a range of basic, large-scale analyses in a computationally efficient manner.
Year of release: 2007
Download: http://pngu.mgh.harvard.edu/~purcell/plink/
Author: Shaun Purcell
Key publication: http://pngu.mgh.harvard.edu/purcell/plink/
OS: Linux, Mac OS X
Licence: GNU GPL v2
Category: ToolKits and APIs

SAMtools
About: SAMtools is a set of utilities for efficiently processing nucleotide alignments in the Sequence Alignment/Map format (SAM) and its binary representation (BAM), and for accurately calling SNPs and short indels from the alignments.
Year of release: 2009
Download: http://samtools.sourceforge.net/
Author: Heng Li, Bob Handsaker, Jue Ruan, John Marshall and Petr Danecek
DOI: 10.1093/bioinformatics/btp352
OS: Linux
Licence: BSD licence, MIT licence
Category: ToolKits and APIs

PSIPRED
About: The PSIPRED Protein Sequence Analysis Workbench aggregates several UCL structure prediction methods into one location. Users can submit a protein sequence, perform the predictions of their choice and receive the results of the prediction via e-mail or the web.
Year of release: 1999
Download: http://bioinf.cs.ucl.ac.uk/psipred/
Author: Jones, Buchan, Nugent, Minneci, Bryson et al.
DOI: 10.1093/nar/gkq427
OS: Web-based, Linux
Licence: Custom free licence
Category: Structure Modelling

SSAHA
About: SSAHA is a tool for rapidly finding near exact matches in DNA or protein databases. The name is an acronym standing for Sequence Search and Alignment by Hashing Algorithm. It works by converting a sequence database into a hash table, which is then rapidly queried for hits; the hits are concatenated into matches.
Year of release: 2003
Download: http://www.sanger.ac.uk/resources/software/ssaha/
Author: Prof. T.W. Lam, Alan Tam, Simon Wong, Edward Wu and S.M. Yiu
DOI: 10.1101/gr.194201
OS: Linux
Licence: GNU GPL v2
Category: Aligners (pairwise)

BWA
About: Burrows-Wheeler Aligner (BWA) is an efficient program that aligns relatively short nucleotide sequences against a long reference sequence such as the human genome. It implements two algorithms, bwa-short and BWA-SW.
Year of release: 2009
Download: http://bio-bwa.sourceforge.net/
Author: Heng Li, Chi-Kwong, Nong Ge and Yuta Mori
DOI: 10.1093/bioinformatics/btp324
Licence: GPLv3
Category: Aligners (short read)

T-COFFEE
About: T-Coffee is a multiple sequence alignment package. You can use T-Coffee to align sequences or to combine the output of your favorite alignment methods (Clustal, Mafft, Probcons, Muscle...) into one unique alignment.
Year of release: 2000
Download: http://tcoffee.org/
Author: Cedric Notredame, Higgins DG, Heringa J.
DOI: 10.1006/jmbi.2000.4042
OS: Unix, Linux, Mac OS X, Windows
Licence: GNU GPLv3
Category: Multiple Sequence Aligners

PRANK
About: PRANK is a probabilistic multiple alignment program for DNA, codon and amino-acid sequences that has been shown to produce exceptionally accurate alignments for evolutionary analyses. It is based on a novel algorithm that treats insertions correctly and avoids over-estimation of the number of deletion events.
Year of release: 2010
Download: http://www.ebi.ac.uk/goldman-srv/prank/
Author: Ari Löytynoja
DOI: 10.1186/1471-2105-11-579
Licence: GPL
Category: Multiple Sequence Aligners

CELERA
About: Celera Assembler is a de novo whole-genome shotgun (WGS) DNA sequence assembler.
Year of release: 1997
Download: http://sourceforge.net/apps/mediawiki/wgs-assembler/index.php?title=Main_Page
Author: Celera Genomics
DOI: 10.1126/science.287.5461.2196
OS: Linux, Mac OS X, Unix
Licence: GNU GPL / BSD
Category: Assemblers Genomic (long read)

FORGE
About: Forge is a classic overlap-layout-consensus genome assembler. Implemented in C++ and using the parallel MPI library, it runs on one or more machines in a network and can scale to very large numbers of reads provided there is enough collective memory on the machines used.
Year of release: 2009
Download: http://combiol.org/forge/
Author: Darren Platt and Dirk Evers
DOI: 10.1186/gb-2009-10-9-r94
OS: Unix
Licence: Apache 2.0 licence
Category: Assemblers Genomic (long read)

RasMol
About: RasMol is an important scientific tool for visualisation of molecules created by Roger Sayle in 1992. RasMol is used by hundreds of thousands of users world-wide to view macromolecules and to prepare publication-quality images.
Year of release: 1995
Download: http://rasmol.org/
Author: R. A. Sayle and E. J. Milner-White
DOI: 10.1016/S0968-0004(00)89080-5
OS: Universal
Licence: GPL
Category: Structure Visualisation

SGA
About: SGA is a de novo genome assembler based on the concept of string graphs. The major goal of SGA is to be very memory efficient, which is achieved by using a compressed representation of DNA sequence reads.
Year of release: 2009
Download: https://github.com/jts/sga
Author: Jared Simpson and Richard Durbin
DOI: 10.1101/gr.126953.111
OS: Linux
Licence: GNU GPL v3
Category: Assemblers Genomic (short read)

CUFFLINKS
About: Cufflinks assembles transcripts, estimates their abundances, and tests for differential expression and regulation in RNA-Seq samples. It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts. Cufflinks then estimates the relative abundances of these transcripts based on how many reads support each one, taking into account biases in library preparation protocols.
Year of release: 2009
Download: http://cufflinks.cbcb.umd.edu/
Author: Cole Trapnell, Adam Roberts, Ali Mortazavi, Steven Salzberg, Barbara Wold, Lior Pachter, et al.
DOI: 10.1038/nbt.1621
OS: Linux, Mac OS X
Licence: OSI-approved Boost Licence
Category: Assemblers (mRNA)

EUGENE
About: An open gene finder for eukaryotic organisms. Compared to most existing gene finders, EuGène is characterized by its ability to simply integrate arbitrary sources of information in its prediction process.
Year of release: 1999
Download: http://eugene.toulouse.inra.fr/
Author: P. Bardou, M.J. Cros, S. Foissac, J. Gouzy, A. Moisan and T. Schiex
DOI: 10.1007/3-540-45727-5_10
OS: Linux
Licence: Artistic licence
Category: Gene Prediction (mRNA)

FASTX-Toolkit
About: The FASTX-Toolkit is a collection of command line tools for preprocessing short-read FASTA/FASTQ files.
Year of release: 2009
Download: http://hannonlab.cshl.edu/fastx_toolkit/
Author: Assaf Gordon
DOI: 10.1006/geno.1997.4995
OS: Linux, Unix
Licence: GNU Affero
Category: Sequence Tools

RNAfold
About: The RNAfold web server will predict secondary structures of single stranded RNA or DNA sequences.
Year of release: 2010
Download: http://rna.tbi.univie.ac.at/cgi-bin/RNAfold.cgi
Author: M Zuker and P Stiegler
Key publication: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC326673/
OS: Linux, Mac OS X and Windows
Licence: Open Source
Category: Gene Prediction (ncRNA)

INTERMINE
About: InterMine is a powerful open source data warehouse system. Using InterMine, you can create databases of biological data accessed by sophisticated web query tools. InterMine includes an attractive, user-friendly web interface that works 'out of the box' and can be easily customised for your specific needs.
Year of release: 2007
Download: http://intermine.org/wiki/InterMineOverview?redirectedfrom=InterMine
Author: Richard Smith, Jakub Kulaviak, Julie Sullivan, Matthew Wakeling, Xavier Watkins
DOI: 10.1093/database/bar062
OS: Linux
Licence: GNU Lesser General Public Licence
Category: Database/Warehouse

GALAXY
About: Galaxy is a scientific workflow and data integration platform that aims to make computational biology accessible to research scientists who do not have computer programming experience.
Year of release: 2007
Download: http://wiki.g2.bx.psu.edu/FrontPage
Author: Enis Afgan, Guru Ananda and Dannon Baker
DOI: 10.1186/gb-2010-11-8-r86
OS: Unix
Licence: GNU
Category: Workflows

APOLLO
About: Apollo is a genome annotation viewer and editor. It was developed as a collaboration between the Berkeley Drosophila Genome Project (part of the FlyBase consortium) and The Sanger Institute in Cambridge, UK. Apollo allows researchers to explore genomic annotations at many levels of detail, and to perform expert annotation curation, all in a graphical environment.
Year of release: 2002
Download: http://apollo.berkeleybop.org/current/index.html
Author: Ed Lee, Nomi Harris, Steve Searle, Michele Clamp, Suzanna Lewis, John Richter and Mark Gibson
DOI: 10.1186/gb-2002-3-12-research0082
OS: Linux, Mac OS X, Unix and Windows
Licence: Artistic Licence
Category: Genome Browsers

IGB
About: The Integrated Genome Browser (IGB, pronounced Ig-Bee) is an interactive, zoomable, scrollable software program you can use to visualize and explore genome-scale data sets.
Year of release: 2004
Download: http://bioviz.org/igb/
Author: Gregg Helt
DOI: 10.1093/bioinformatics/btp472
OS: Linux, Mac OS X, Unix and Windows
Licence: Common Public Licence v1.0
Category: Genome Browsers

UTGB
About: The UTGB Toolkit is open-source software for developing personalized genome browsers.
Year of release: 2007
Download: http://utgenome.org/
Author: Shinichi Morishita, Taro Saito, Jun Yoshimura, Koichiro Higasa, Hiroshi Minoshima, Reginaldo Kuroshu and Atsushi Sasaki
DOI: 10.1093/bioinformatics/btp350
OS: Linux, Mac OS X and Windows
Licence: Apache Licence 2.0
Category: ToolKits and APIs

AMPLICONNOISE
About: AmpliconNoise is a collection of programs for the removal of noise from 454-sequenced PCR amplicons. It involves two steps: the removal of noise from the sequencing itself and the removal of PCR point errors. This project also includes the Perseus algorithm for chimera removal.
Year of release: 2010
Download: http://code.google.com/p/ampliconnoise/
Author: Christopher Quince, Andres Lanzen, Russell J Davenport
DOI: 10.1186/1471-2105-12-38
OS: Linux and Mac OS X
Licence: GNU Lesser GPL
Category: ToolKits and APIs

QIIME
About: QIIME (canonically pronounced 'Chime') is a pipeline for performing microbial community analysis that integrates many third party tools which have become standard in the field. QIIME can run on a laptop, a supercomputer, and systems in between such as multicore desktops.
Year of release: 2009
Download: http://www.qiime.org/
Author: Developed by the Knight Lab at the University of Colorado at Boulder
DOI: 10.1038/nmeth.f.303
OS: Linux, Mac OS X and Windows
Category: ToolKits and APIs

EXONERATE
About: Exonerate is a general tool for sequence comparison. It uses the C4 dynamic programming library. It is designed to be both general and fast. It can produce either gapped or ungapped alignments, according to a variety of different alignment models.
Year of release: 2005
Download: http://www.ebi.ac.uk/~guy/exonerate/
Author: Guy St.C. Slater
DOI: 10.1186/1471-2105-6-31
OS: UNIX, Linux
Licence: GNU GPL
Category: Aligners (pairwise)

BFAST
About: BFAST (Blat-like Fast Accurate Search Tool) facilitates the fast and accurate mapping of short reads to reference sequences. BFAST was designed to facilitate whole-genome resequencing, where mapping billions of short reads with variants is of utmost importance. BFAST supports both Illumina and ABI SOLiD data, as well as any other next-generation sequencing technology (454, Helicos), with particular emphasis on sensitivity towards errors, SNPs and especially indels.
Year of release: 2009
Download: http://bfast.sf.net/
Author: Nils Homer
DOI: 10.1371/journal.pone.0007767
OS: Unix, Linux, Mac OS X
Licence: GPL v2
Category: Aligners (short read)

MAFFT
About: MAFFT is a multiple sequence alignment program for amino acid or nucleotide sequences. It has several different options for various types of alignment problems. Progressive alignment options can be applied to a large number of homologous sequences.
Year of release: 2002
Download: http://mafft.cbrc.jp/alignment/server/
Author: Kazutaka Katoh, Charles Plessy
DOI: 10.1093/bioinformatics/btq224
OS: Linux, Mac OS X, Windows
Licence: GPL and BSD
Category: Multiple Sequence Aligners

MAVID
About: MAVID is a multiple sequence alignment program suitable for the alignment of large numbers of DNA sequences. The sequences can be small mitochondrial genomes or large genomic regions up to megabases long.
Year of release: 2004
Download: http://bio.math.berkeley.edu/mavid/download/
Author: Nicolas Bray, Lior Pachter
DOI: 10.1101/gr.1960404
OS: Unix, Linux, Mac OS X
Licence: Open Source
Category: Multiple Sequence Aligners

ARACHNE
About: ARACHNE is a program for assembling data from whole genome shotgun sequencing experiments. It was designed for long reads from Sanger sequencing technology, and has been used extensively to assemble many genomes, including many that are large and highly repetitive.
Year of release: 2002
Download: http://www.broadinstitute.org/crd/wiki/index.php/Arachne_Main_Page
Author: Jaffe DB
DOI: 10.1101/gr.208902
OS: Linux
Category: Assemblers Genomic (long read)

MERACULOUS
About: Meraculous relies on an efficient and conservative traversal of the subgraph of the k-mer (de Bruijn) graph of oligonucleotides with unique high quality extensions in the dataset, avoiding an explicit error correction step as used in other short-read assemblers.
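The "unique high quality extensions" traversal Meraculous describes can be sketched in a few lines of Python. This is only a toy illustration of the idea, not the Meraculous implementation: build a graph mapping each (k-1)-mer to the bases observed to extend it, then grow a seed only while exactly one outgoing extension exists.

```python
from collections import defaultdict

def build_graph(reads, k):
    # Map each (k-1)-mer to the set of bases observed to extend it.
    ext = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            ext[kmer[:-1]].add(kmer[-1])
    return ext

def walk_unique(ext, seed):
    # Extend the seed (a (k-1)-mer) only while it has exactly one
    # outgoing base; stop at branches or previously visited nodes.
    contig, node, seen = seed, seed, {seed}
    while len(ext[node]) == 1:
        base = next(iter(ext[node]))
        contig += base
        node = node[1:] + base
        if node in seen:
            break
        seen.add(node)
    return contig
```

The walk halts as soon as a node has two possible extensions, which is exactly the point at which a conservative assembler refuses to guess.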
Year of release: 2010
Download: ftp://ftp.jgi-psf.org/pub/JGI_data/meraculous/Meraculous-1.4.4-rel.tar
Author: Chapman JA, Ho I, Sunkara S, Luo S, Schroth GP
DOI: 10.1371/journal.pone.0023501
OS: Unix
Licence: GNU GPL
Category: Assemblers Genomic (long read)

CORTEX_CON_RP
About: Cortex is an efficient and low-memory software framework for analysis of genomes using sequence data.
Year of release: 2008
Download: http://cortexassembler.sourceforge.net/index.html
Author: Mario Caccamo and Zamin Iqbal
DOI: 10.1038/ng.1028
OS: Linux, Mac OS X
Licence: GPLv3
Category: Assemblers Genomic (short read)

PE-ASSEMBLER
About: A simple extension approach to assembling paired-end reads, capable of parallelization.
Year of release: 2010
Download: http://www.comp.nus.edu.sg/~bioinfo/peasm/PE_manual.htm
Author: Nuwantha Ariaratne P., Sung WK.
DOI: 10.1093/bioinformatics/btq626
OS: Linux
Licence: Open Source
Category: Assemblers Genomic (short read)

OASES
About: Oases is a de novo transcriptome assembler designed to produce transcripts from short read sequencing technologies, such as Illumina, SOLiD, or 454, in the absence of any genomic assembly.
Year of release: 2010
Download: http://www.ebi.ac.uk/~zerbino/oases/
Author: Marcel Schulz and Daniel Zerbino
DOI: 10.1093/bioinformatics/bts094
OS: Linux
Licence: GPL
Category: Assemblers (mRNA)

ENSEMBL PIPELINE
About: Ensembl is a joint project between EMBL-EBI and the Wellcome Trust Sanger Institute to develop a software system which produces and maintains automatic annotation on selected eukaryotic genomes.
Year of release: 2000
Download: http://www.ensembl.org/info/docs/Doxygen/pipeline-api/files.html
Author: EBI and Wellcome Trust Sanger Institute
DOI: 10.1093/nar/gkq1064
OS: Linux, Mac OS X, Windows
Licence: Apache-style licence
Category: Gene Prediction (mRNA)

FastQC
About: A quality control tool for high throughput sequence data.
Year of release: 2010
Download: http://www.bioinformatics.babraham.ac.uk/projects/fastqc/
Author: Simon Andrews
OS: Linux, Mac OS X, Windows
Licence: GPL v3 or later
Category: Sequence Tools

Strelka
About: Strelka is an analysis package designed to detect somatic SNVs and small indels from the aligned sequencing reads of matched tumor-normal samples.
Year of release: 2012
Download: https://sites.google.com/site/strelkasomaticvariantcaller/
Author: Christopher T. Saunders, Wendy Wong, Sajani Swamy, Jennifer Becq, Lisa J. Murray, R. Keira Cheetham
DOI: 10.1093/bioinformatics/bts271
OS: Linux
Licence: Free
Category: Sequence Tools

CHADO
About: Chado is a relational database schema that underlies many GMOD installations. The Chado schema has been designed with modularity and compartmentalization of function in mind.
Year of release: 2010
Download: http://gmod.org/wiki/Chado
Author: Chris Mungall and Dave Emmert
DOI: 10.1093/database/bar051
OS: Unix, Windows
Licence: Artistic Licence
Category: Database/Warehouse

KNIME
About: KNIME (Konstanz Information Miner) is a user-friendly and comprehensive open-source data integration, processing, analysis, and exploration platform.
Year of release: 2008
Download: http://www.knime.org/
Author: Iris Ada, Michael Berthold and Thomas Gabriel
DOI: 10.1093/bioinformatics/btr478
OS: Linux, Mac OS X, Windows
Licence: GNU GPL v3
Category: Workflows

GBROWSE
About: GBrowse is a combination of database and interactive web pages for manipulating and displaying annotations on genomes.
Year of release: 2002
Download: http://gmod.org/wiki/Gbrowse
Author: Lincoln Stein
DOI: 10.1101/gr.403602
OS: Linux, Mac OS X and Windows
Licence: Perl Artistic Licence
Category: Genome Browsers

NCBI MapViewer
About: The Map Viewer provides special browsing capabilities for a subset of organisms in Entrez Genomes. The viewer allows you to view and search an organism's complete genome, display chromosome maps, and zoom into progressively greater levels of detail, down to the sequence data for a region of interest.
Year of release: 2004
Download: http://www.ncbi.nlm.nih.gov/mapview/static/MapViewerHelp.html#Overview
DOI: 10.1002/0471250953.bi0105s16
OS: Mac OS X, Unix and Windows
Category: Genome Browsers

BioPython
About: Biopython is a set of freely available tools for biological computation written in Python by an international team of developers.
Year of release: 2003
Download: http://biopython.org/wiki/Biopython
Author: Tiago Antao, Sebastian Bassi, Jeffrey Chang, et al.
DOI: 10.1093/bioinformatics/btp163
OS: Mac OS X, Unix and Windows
Licence: Biopython License
Category: ToolKits and APIs

Picard
About: Picard comprises Java-based command-line utilities that manipulate SAM files, and a Java API (SAM-JDK) for creating new programs that read and write SAM files. Both the SAM text format and the SAM binary (BAM) format are supported.
Year of release: 2009
Download: http://picard.sourceforge.net/
Key publication: http://picard.sourceforge.net/
Licence: Apache License v2.0, MIT License
Category: ToolKits and APIs

VCFtools
About: VCFtools is a program package designed for working with VCF files, such as those generated by the 1000 Genomes Project. VCFtools is split into two sections: the vcftools binary program and the VCF Perl module.
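Both halves of VCFtools revolve around the same line-oriented format: a VCF data line is tab-separated with eight fixed columns (CHROM, POS, ID, REF, ALT, QUAL, FILTER, INFO), and INFO holds semicolon-separated key=value pairs plus bare flags. A minimal stdlib Python sketch of that layout (`parse_vcf_line` is a hypothetical helper for illustration, not part of VCFtools):

```python
FIXED = ["CHROM", "POS", "ID", "REF", "ALT", "QUAL", "FILTER", "INFO"]

def parse_vcf_line(line):
    # Split the eight fixed, tab-separated columns of a VCF data line.
    fields = dict(zip(FIXED, line.rstrip("\n").split("\t")))
    fields["POS"] = int(fields["POS"])
    # INFO is semicolon-separated key=value pairs; bare keys are flags.
    info = {}
    for item in fields["INFO"].split(";"):
        key, _, value = item.partition("=")
        info[key] = value if value else True
    fields["INFO"] = info
    return fields
```

Header lines (those starting with `#`) would be skipped before this is applied; real VCF handling also needs the per-sample FORMAT columns, which this sketch ignores.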
Year of release: 2009
Download: http://vcftools.sourceforge.net/
Author: Adam Auton and Petr Danecek
DOI: 10.1093/bioinformatics/btr330
OS: Linux and Mac OS X
Licence: GNU General Public Licence version 3.0
Category: ToolKits and APIs

FASTA
About: FASTA is a DNA and protein sequence alignment software package first described (as FASTP). Its legacy is the FASTA format, which is now ubiquitous in bioinformatics. FASTA is pronounced 'fast A', and stands for 'FAST-All', because it works with any alphabet, an extension of 'FAST-P' (protein) and 'FAST-N' (nucleotide) alignment.
Year of release: 1988
Download: http://en.wikipedia.org/wiki/FASTA
Author: David Lipman and William Pearson
Key publication: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC280013/?tool=pmcentrez
OS: Unix, Linux, Mac OS X, Windows
Licence: Free for academic users
Category: Aligners (pairwise)

SMALT
About: SMALT efficiently aligns DNA sequencing reads with genomic reference sequences. The software employs a perfect hash index of short words (< 21 nucleotides long), sampled at equidistant steps along the genomic reference sequences. For each read, potentially matching segments in the reference are identified from seed matches in the index and subsequently aligned with the read using a banded Smith-Waterman algorithm. Reads from a wide range of sequencing platforms, for example Illumina, Roche-454, Ion Torrent, PacBio or ABI-Sanger, can be processed, including paired-end reads.
Year of release: 2010
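The seeding scheme SMALT describes (short words sampled at equidistant steps along the reference, hashed to their positions, then queried with each word of a read to vote for candidate placements) can be sketched in stdlib Python. This toy illustrates the seeding step only; the function names are invented, and SMALT itself follows seeding with a banded Smith-Waterman alignment of the candidates:

```python
from collections import defaultdict

def index_reference(ref, k, step):
    # Hash every k-mer sampled at equidistant steps along the reference.
    idx = defaultdict(list)
    for pos in range(0, len(ref) - k + 1, step):
        idx[ref[pos:pos + k]].append(pos)
    return idx

def seed_positions(idx, read, k):
    # Each matching k-mer of the read votes for the alignment start
    # position it implies; return the best-supported candidate, which a
    # real aligner would then verify by banded Smith-Waterman.
    votes = defaultdict(int)
    for offset in range(len(read) - k + 1):
        for pos in idx.get(read[offset:offset + k], []):
            votes[pos - offset] += 1
    return max(votes, key=votes.get) if votes else None
```

Sampling with `step > 1` shrinks the index at the cost of fewer seed hits per read, the trade-off the SMALT description alludes to.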
Download: http://www.sanger.ac.uk/resources/software/smalt/
Author: Hannes Ponstingl
OS: Unix
Licence: GNU GPL
Category: Aligners (short read)

rna-star
About: To align our large (>80 billion reads) ENCODE Transcriptome RNA-seq dataset, we developed the Spliced Transcripts Alignment to a Reference (STAR) software, based on a previously undescribed RNA-seq alignment algorithm that uses a sequential maximum mappable seed search in uncompressed suffix arrays followed by a seed clustering and stitching procedure.
Year of release: 2012
Download: https://code.google.com/p/rna-star/
Author: Alexander Dobin, Carrie A. Davis, Felix Schlesinger, Jorg Drenkow, Chris Zaleski, Sonali Jha, Philippe Batut, Mark Chaisson and Thomas R. Gingeras
DOI: 10.1093/bioinformatics/bts635
OS: Universal
Licence: GPL v3
Category: Aligners (short read)

PROBCONS
About: ProbCons is an open source, probabilistic, consistency-based multiple aligner for amino acid sequences. It is an efficient protein multiple sequence alignment program which has demonstrated a statistically significant improvement in accuracy compared to several leading alignment tools.
Year of release: 2005
Download: http://probcons.stanford.edu/about.html
Author: Chuong Do, Michael Brudno
DOI: 10.1101/gr.2821705
OS: Unix
Licence: Open Source
Category: Multiple Sequence Aligners

PHUSION(2)
About: Phusion is a software package for assembling genome sequences from whole genome shotgun (WGS) reads. The Phusion assembler has been involved in assembling a number of genomes, such as mouse, zebrafish, C. briggsae and Schistosoma mansoni.
Year of release: 2003
Download: http://www.sanger.ac.uk/resources/software/phusion/
Author: Mullikin JC and Zemin Ning
DOI: 10.1101/gr.731003
OS: Unix
Licence: Creative Commons Attribution 3.0 Unported Licence
Category: Assemblers Genomic (long read)

QUAKE
About: Quake is a package to correct substitution sequencing errors in experiments with deep coverage (e.g. >15X), specifically intended for Illumina sequencing reads. Quake adopts the k-mer error correction framework, first introduced by the EULER genome assembly package.
Year of release: 2010
Download: http://www.cbcb.umd.edu/software/quake/
Author: David R. Kelley, Michael C. Schatz, Steven L. Salzberg
DOI: 10.1186/gb-2010-11-11-r116
OS: Linux
Licence: Perl Artistic Licence
Category: Assemblers Genomic (long read)

ALLPATHS-LG
About: ALLPATHS-LG is a whole-genome shotgun assembler that can generate high-quality genome assemblies using short reads (~100bp) such as those produced by the new generation of sequencers. The significant difference between ALLPATHS and traditional assemblers such as Arachne is that ALLPATHS assemblies are not necessarily linear, but instead are presented in the form of a graph.
Year of release: 2007
Download: http://www.broadinstitute.org/science/programs/genome-biology/crd
Author: Gnerre S.
DOI: 10.1101/gr.7337908
OS: Linux, Unix
Licence: Open Source Licence
Category: Assemblers Genomic (short read)

BAMBUS 2
About: Bambus 2.0 is the second generation Bambus scaffolder, for polymorphic and metagenomic data.
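The k-mer error-correction framework that Quake adopts rests on a simple coverage argument: in deep-coverage data, a substitution error creates k-mers that occur far less often than k-mers drawn from the true genome. A toy stdlib Python sketch of the detection half (the fixed `cutoff` stands in for the coverage-model threshold Quake actually fits from the data):

```python
from collections import Counter

def kmer_counts(reads, k):
    # Count every k-mer across all reads.
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def suspect_kmers(counts, cutoff):
    # k-mers seen fewer than `cutoff` times likely contain a sequencing
    # error; correction would then search for single-base edits that
    # turn each suspect k-mer into a trusted one.
    return {km for km, n in counts.items() if n < cutoff}
```

With ten error-free copies of a read and one copy carrying a single substitution, only the k-mers overlapping the substituted base fall below the cutoff, which is what makes the correction step tractable.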
Year of release: 2009 Download: http://www.cbcb.umd.edu/software/bambus/ Author: Segey Koren and Mihai Pop DOI: 10.1101/gr.1536204 OS: Linux Licence: Open source Category: Assemblers Genomic (short read) 11 Ty Trinity 2011 Ty Trinity Trinity About: Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data. Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes. Year of release: 2011 Download: http://trinityrnaseq.sourceforge.net/ Author: Grabherr MG, Haas BJ, Yassour M, Levin JZ, Thompson DA, Amit I, Adiconis X, Fan L, Raychowdhury R, Zeng Q, Chen Z, Mauceli E, Hacohen N, Gnirke A, Rhind N, di Palma F, Birren BW, Nusbaum C, Lindblad-Toh K, Friedman N, Regev A. DOI: 10.1038/nbt.1883 OS: Linux Licence: BSD Category: Assemblers (mRNA) 11 Sn SNAP 2011 Sn SNAP SNAP About: The SNAP gene finder is HMM-based like Genscan and attempts to be more adaptable to different organisms, addressing problems related to using a gene finder on a genome sequence that it was not trained against. Year of release: 2011 Download: http://korflab.ucdavis.edu/software.html Author: Ian Korf DOI: 10.1186/1471-2105-5-59 OS: Linux Licence: GNU GPL Category: Gene Prediction (mRNA) 96 Rk RepeatMasker 1996 Rk RepeatMasker RepeatMasker About: RepeatMasker is a program that screens DNA sequences for interspersed repeats and low complexity DNA sequences. 
The output of the program is a detailed annotation of the repeats that are present in the query sequence as well as a modified version of the query sequence in which all the annotated repeats have been masked (default: replaced by Ns). On average, almost 50% of a human genomic DNA sequence currently will be masked by the program. Sequence comparisons in RepeatMasker are performed by one of several popular search engines, including cross_match, ABBlast/WUBlast, RMBlast and Decypher. Year of release: 1996 Download: http://www.repeatmasker.org/ Author: Smit, AFA, Hubley, R, Green, P. DOI: 10.1002/0471250953.bi0410s05 OS: Universal Licence: OSL v2.1 Category: Sequence Tools

VariantTools (2011)
About: Variant tools is a software tool for the annotation, selection, and analysis of variants in the context of next-gen sequencing analysis. Unlike some other tools used for next-gen sequencing analysis, variant tools is project based and provides a whole set of tools to manipulate and analyze genetic variants. Year of release: 2011 Download: http://varianttools.sourceforge.net/ Author: F Anthony San Lucas, Gao Wang, Paul Scheet, Bo Peng DOI: 10.1093/bioinformatics/btr667 OS: Mac, Linux Licence: GPL v3 Category: Sequence Tools

PDB (1971)
About: The Protein Data Bank (PDB) archive is the single worldwide repository of information about the 3D structures of large biological molecules, including proteins and nucleic acids. These are the molecules of life that are found in all organisms including bacteria, yeast, plants, flies, other animals, and humans. Understanding the shape of a molecule helps to understand how it works. This knowledge can be used to help deduce a structure's role in human health and disease, and in drug development. The structures in the archive range from tiny proteins and bits of DNA to complex molecular machines like the ribosome. Year of release: 1971 Download: http://www.rcsb.org/pdb Author: Consortium.
DOI: 10.1093/nar/28.1.235 OS: Linux, Mac OS X and Windows Licence: Data contained in the PDB is free of all copyright restrictions and made fully and freely available for both non-commercial and commercial use. Category: Database/Warehouse

UniProt (2003)
About: The mission of UniProt is to provide the scientific community with a comprehensive, high-quality and freely accessible resource of protein sequence and functional information. Year of release: 2003 Download: http://www.uniprot.org/ Author: Consortium led by Rolf Apweiler, Cathy Wu, and Ioannis Xenarios. DOI: 10.1093/nar/gkr981 OS: Linux, Mac OS X and Windows Licence: Creative Commons Attribution-NoDerivs Category: Database/Warehouse

IGV (2008)
About: The Integrative Genomics Viewer (IGV) is a high-performance visualization tool for interactive exploration of large, integrated datasets. It supports a wide variety of data types including sequence alignments, microarrays, and genomic annotations. Year of release: 2008 Download: http://www.broadinstitute.org/igv/home Author: Jim Robinson, Helga Thorvaldsdottir, Jacob Silterra and Marc-Danie Nazaire. DOI: 10.1038/nbt.1754 OS: Linux, Mac OS X and Windows Licence: GNU Lesser General Public Licence Category: Genome Browsers

ARGO (2007)
About: The Argo Genome Browser is the Broad Institute's production tool for visualizing and manually annotating whole genomes. Year of release: 2007 Download: http://www.broadinstitute.org/annotation/argo/ Author: Reinhard Engels DOI: 10.1093/bioinformatics/btl193 OS: Mac OS X and Windows Licence: GNU General Public Licence Category: Genome Browsers

ARTEMIS (2002)
About: Artemis is a free genome browser and annotation tool that allows visualisation of sequence features, next generation data and the results of analyses within the context of the sequence, and also its six-frame translation.
Year of release: 2002 Download: http://www.sanger.ac.uk/resources/software/artemis/ Author: Tim Carver, Giles Velarde, Matthew Berriman, Julian Parkhill and Jacqueline McQuillan. DOI: 10.1093/bioinformatics/16.10.944 OS: Linux, Mac OS X, Unix and Windows. Licence: GNU General Public Licence v. 2.0 Category: Genome Browsers

APSampler (2005)
About: APSampler is a tool that allows multi-locus and multi-level association analysis of genotypic and phenotypic data. The goal is to find the allelic sets (patterns) that are associated with phenotype. Year of release: 2005 Download: http://code.google.com/p/apsampler/ Author: Dmitrijs Lvov and A.V. Favorov DOI: 10.1534/genetics.105.048090 OS: Linux and Windows Licence: MIT License Category: ToolKits and APIs

GenABEL (2012)
About: The mission of the GenABEL project is to provide a free framework for collaborative, robust, transparent, open-source based development of statistical genomics methodology. Year of release: 2012 Download: http://www.genabel.org/ Author: GenABEL project developers DOI: 10.1093/bioinformatics/btm108 OS: Linux Licence: GPL v2 Category: ToolKits and APIs

ACT (2005)
About: ACT is a free tool for displaying pairwise comparisons between two or more DNA sequences. It can be used to identify and analyse regions of similarity and difference between genomes and to explore conservation of synteny, in the context of the entire sequences and their annotation. Year of release: 2005 Download: http://www.sanger.ac.uk/resources/software/act/ Author: Carver TJ, Rutherford KM, Berriman M, Rajandream MA, Barrell BG and Parkhill J DOI: 10.1093/bioinformatics/bti553 OS: Universal Licence: GNU GPL Category: Aligners (pairwise)

JAligner (2010)
About: An open source Java implementation of the Smith-Waterman algorithm with Gotoh's improvement for biological local pairwise sequence alignment using the affine gap penalty model.
Year of release: 2010 Download: http://jaligner.sourceforge.net/ Author: Ahmed Moustafa OS: Mac OS X, Unix, Windows Licence: GPL Category: Aligners (pairwise)

GMAP/GSNAP (2011)
About: GMAP: A Genomic Mapping and Alignment Program for mRNA and EST Sequences, and GSNAP: Genomic Short-read Nucleotide Alignment Program. Year of release: 2011 Download: http://research-pub.gene.com/gmap/ Author: Thomas D. Wu, Colin K. Watanabe, Serban Nacu DOI: 10.1093/bioinformatics/bti310 OS: Unix, Linux Licence: Free (custom Genentech) but do not redistribute modifications Category: Aligners (short read)

FSA (2008)
About: FSA is a probabilistic multiple sequence alignment algorithm which uses a "distance-based" approach to aligning homologous protein, RNA or DNA sequences. Year of release: 2008 Download: http://fsa.sourceforge.net/ Author: Robert Bradley, Colin Dewey, Jaeyoung Do, Sudeep Juvekar, Lior Pachter, Adam Roberts, and Michael Smoot. DOI: 10.1371/journal.pcbi.1000392 OS: Linux and Mac OS X Licence: GNU GPL Category: Multiple Sequence Aligners

PRICE (2011)
About: PRICE (Paired-Read Iterative Contig Extension) is a de novo genome assembler implemented in C++. Its name describes the strategy that it implements for genome assembly: PRICE uses paired-read information to iteratively increase the size of existing contigs. Initially, those contigs can be individual reads from a subset of the paired-read dataset, non-paired reads from sequencing technologies that provide non-paired data, or contigs that were output from a prior run of PRICE or any other assembler.
Year of release: 2011 Download: http://derisilab.ucsf.edu/software/price/index.html DOI: 10.1534/g3.113.005967 OS: Linux, MacOS Licence: Open Source Category: Assemblers Genomic (long read)

VELVET (2007)
About: Velvet is a short read assembler which uses de Bruijn graphs to assemble next-generation sequencing reads into usable contigs. It handles mixed length reads as well as paired-end reads. Year of release: 2007 Download: http://www.ebi.ac.uk/~zerbino/velvet/ Author: Daniel Zerbino and Ewan Birney DOI: 10.1101/gr.074492.107 OS: Linux, Mac OS X Licence: GPL v2.0 Category: Assemblers Genomic (short read)

SPAdes (2012)
About: SPAdes stands for St. Petersburg genome assembler. It is intended for both standard (multicell) and single-cell MDA bacterial assemblies. Year of release: 2012 Download: http://bioinf.spbau.ru/en/spades Author: Anton Bankevich, Sergey Nurk, Dmitry Antipov et al. DOI: 10.1089/cmb.2012.0021 OS: Linux Licence: GPLv2 Category: Assemblers Genomic (short read)

Contrail (2009)
About: Similar to other leading short read assemblers, Contrail relies on the graph-theoretic framework of de Bruijn graphs. However, unlike these programs, which require large RAM resources, Contrail relies on Hadoop to iteratively transform an on-disk representation of the assembly graph, allowing an in-depth analysis even for large genomes. Year of release: 2009 Download: http://contrail-bio.sf.net/ Author: Michael Schatz, Jeremy Chambers, Avijit Gupta, Rushil Gupta, David Kelley, Jeremy Lewi, Deepak Nettem, Dan Sommer, and Mihai Pop OS: All Licence: Unknown Category: Assemblers Genomic (short read)

AUGUSTUS (2009)
About: AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences. It can be run on its web server, on a new web server for larger input files, or be downloaded and run locally.
Year of release: 2009 Download: http://augustus.gobics.de/ Author: Mario Stanke and Oliver Keller DOI: 10.1093/bioinformatics/btn013 OS: Linux Licence: Artistic licence 2 Category: Gene Prediction (mRNA)

MGENE (2008)
About: mGene is a computational tool for the genome-wide prediction of protein coding genes from eukaryotic DNA sequences. Year of release: 2008 Download: http://www.mgene.org/ Author: Gunnar Raetsch, Gabriele Schweikert, Jonas Behr DOI: 10.1101/gr.090597.108 OS: Linux Licence: Open Source Category: Gene Prediction (mRNA)

MetaBioME (2009)
About: MetaBioME is a web resource to find novel homologs for known Commercially Useful Enzymes (CUEs) in metagenomic datasets and completed bacterial genomes. Year of release: 2009 Download: http://metasystems.riken.jp/metabiome/ Author: Vineet Kumar Sharma, Naveen Kumar, Tulika Prakash, Todd D. Taylor DOI: 10.1093/nar/gkp1001 OS: Online Licence: Free to use Category: Sequence Tools

JalView (2004)
About: Jalview is a free program developed for the interactive editing, analysis and visualization of multiple sequence alignments. It can also work with sequence annotation, secondary structure information, phylogenetic trees and 3D molecular structures. Year of release: 2004 Download: http://www.jalview.org/ Author: Waterhouse AM, Procter JB, Martin DMA, Clamp M, Barton GJ DOI: 10.1093/bioinformatics/btp033 OS: All Licence: GPL3 Category: Sequence Tools

CATH (1997)
About: The CATH database is a hierarchical domain classification of protein structures in the Protein Data Bank. Protein structures are classified using a combination of automated and manual procedures.
Year of release: 1997 Download: http://www.cathdb.info/ Author: Sillitoe I, Cuff AL, Dessailly BH, Dawson NL, Furnham N, Lee D, Lees JG, Lewis TE, Studer RA, Rentzsch R, Yeats C, Thornton JM, Orengo CA DOI: 10.1093/nar/gks1211 OS: Universal Licence: Free to use Category: Database/Warehouse

MISO (2012)
About: MISO (Managing Information for Sequencing Operations) is a new open-source Lab Information Management System (LIMS) under development at TGAC, specifically designed for tracking next-generation sequencing experiments. Year of release: 2012 Download: http://www.tgac.ac.uk/miso/ Author: Robert Davey and Mario Caccamo OS: Linux, Mac OS X and Windows Licence: GPL v3 Category: Database/Warehouse

DALLIANCE BROWSER (2010)
About: Dalliance is a track-oriented genome viewer similar to Ensembl, UCSC, or GBrowse. However, it is a little different in operation because it uses recent extensions to web standards (HTML5) to offer a higher level of interactivity than most previous genome viewers. Year of release: 2010 Download: http://www.biodalliance.org/ Author: Thomas Down DOI: 10.1093/bioinformatics/btr020 OS: Linux, Mac OS X and Windows Licence: BSD-style licence. Category: Genome Browsers

JBROWSE (2007)
About: JBrowse is a JavaScript genome browser with an emphasis on portability. JBrowse is the official successor to GBrowse. Year of release: 2007 Download: http://jbrowse.org/ Author: Robert Buels and Mitch Skinner DOI: 10.1101/gr.094607.109 OS: Linux and Mac OS X Licence: GNU Lesser GPL v 2.1 Category: Genome Browsers

MrBayes
About: MrBayes is a program for Bayesian inference and model choice across a wide range of phylogenetic and evolutionary models. MrBayes uses Markov chain Monte Carlo (MCMC) methods to estimate the posterior distribution of model parameters.
Year of release: XX Download: http://mrbayes.sourceforge.net/ Author: John Huelsenbeck, Bret Larget, Paul van der Mark, Fredrik Ronquist, Donald Simon and Maxim Teslenko. Key publication: http://mrbayes.sourceforge.net/commref_mb3.2.pdf OS: Macintosh, Windows, and UNIX Licence: Free software Category: ToolKits and APIs

PhastCons (2004)
About: PHAST is a freely available software package for comparative and evolutionary genomics. It consists of about half a dozen major programs, plus more than a dozen utilities for manipulating sequence alignments, phylogenetic trees, and genomic annotations. Year of release: 2004 Download: http://compgen.bscb.cornell.edu/phast/ Author: Adam Siepel, Melissa Hubisz and Katie Pollard Key publication: http://compgen.bscb.cornell.edu/~acs/phyhmm_with_apdx.pdf OS: Linux, Mac OS X and Windows Licence: BSD-style licence Category: ToolKits and APIs

UGENE (2008)
About: Unipro UGENE is a multiplatform open-source software suite with the main goal of assisting molecular biologists without much expertise in bioinformatics to manage, analyze and visualize their data. UGENE integrates widely used bioinformatics tools within a common user interface. The toolkit supports multiple biological data formats and allows the retrieval of data from remote data sources. It provides visualization modules for biological objects such as annotated genome sequences, Next Generation Sequencing (NGS) assembly data, multiple sequence alignments, phylogenetic trees and 3D structures. Most of the integrated algorithms are tuned for maximum performance by the use of multithreading and special processor instructions. UGENE includes a visual environment for creating reusable workflows that can be launched on local resources or in a High Performance Computing (HPC) environment.
Year of release: 2008 Download: http://ugene.unipro.ru/ Author: Konstantin Okonechnikov; Olga Golosova; Mikhail Fursov DOI: 10.1093/bioinformatics/bts091 OS: Universal Licence: GPL v2 Category: ToolKits and APIs

MUSCLE (2004)
About: MUSCLE is a program for creating multiple alignments of amino acid or nucleotide sequences. A range of options is provided that gives you the choice of optimizing accuracy, speed, or some compromise between the two. Year of release: 2004 Download: http://www.drive5.com/muscle/ Author: Robert C. Edgar DOI: 10.1093/nar/gkh340 OS: Linux, MacOSX, Unix, Windows Licence: Freeware Category: Multiple Sequence Aligners

ProtPal (2012)
About: ProtPal is a software tool for multiple sequence alignment, ancestral reconstruction, and measurement of indel rates on a phylogenetic tree. Year of release: 2012 Download: http://www.biowiki.org/ProtPal Author: Oscar Westesson and Ian Holmes OS: Linux Licence: Free software Category: Multiple Sequence Aligners

JIGSAW (2005)
About: JIGSAW is a program designed to use the output from gene finders, splice site prediction programs and sequence alignments to predict gene models. JIGSAW is available for all species. We have tested JIGSAW on Human, Rice (Oryza sativa), Arabidopsis thaliana, C. elegans, Brugia malayi, Cryptococcus neoformans, Entamoeba histolytica, Theileria parva, Aspergillus fumigatus, Plasmodium falciparum and Plasmodium yoelii. Year of release: 2005 Download: http://www.cbcb.umd.edu/software/jigsaw/ Author: Jonathan Edward Allen DOI: 10.1186/gb-2006-7-s1-s9 OS: Linux Licence: GNU gcc3.2 Category: Gene Prediction (mRNA)

TopHat (2008)
About: TopHat is a fast splice junction mapper for RNA-Seq reads.
It aligns RNA-Seq reads to mammalian-sized genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results to identify splice junctions between exons. Year of release: 2008 Download: http://tophat.cbcb.umd.edu/ Author: Daehwan Kim, Geo Pertea, Cole Trapnell, Harold Pimentel, Ryan Kelley and Steven L Salzberg DOI: 10.1186/gb-2013-14-4-r36 OS: Linux, Mac Licence: Artistic Category: Sequence Tools

RSEM (2012)
About: RSEM is a software package for estimating gene and isoform expression levels from RNA-Seq data. The RSEM package provides a user-friendly interface, supports threads for parallel computation of the EM algorithm, single-end and paired-end read data, quality scores, variable-length reads and RSPD estimation. In addition, it provides posterior mean and 95% credibility interval estimates for expression levels. For visualization, it can generate BAM and Wiggle files in both transcript and genomic coordinates. Year of release: 2012 Download: http://deweylab.biostat.wisc.edu/rsem/ Author: Bo Li DOI: 10.1186/1471-2105-12-323 OS: Linux Licence: GPL Category: Sequence Tools

Flexbar (2012)
About: Flexbar preprocesses high-throughput sequencing data efficiently. It demultiplexes barcoded runs and removes adapter sequences. Moreover, trimming and filtering features are provided. Flexbar increases read mapping rates and improves genome and transcriptome assemblies. It supports next-generation sequencing data in fasta/q and csfasta/q format from Illumina, Roche 454, and the SOLiD platform. Year of release: 2012 Download: http://sourceforge.net/p/flexbar/wiki/Manual/ Author: Matthias Dodt, Johannes T.
Roehr, Rina Ahmed, Christoph Dieterich DOI: 10.3390/biology1030895 OS: All Licence: GPL3 Category: Sequence Tools

ENA (2012)
About: The European Nucleotide Archive (ENA) captures and presents information relating to experimental workflows that are based around nucleotide sequencing. A typical workflow includes the isolation and preparation of material for sequencing, a run of a sequencing machine in which sequencing data are produced and a subsequent bioinformatic analysis pipeline. ENA records this information in a data model that covers input information (sample, experimental setup, machine configuration), output machine data (sequence traces, reads and quality scores) and interpreted information (assembly, mapping, functional annotation). Year of release: 2012 Download: http://www.ebi.ac.uk/ena/home OS: Online Licence: Free Category: Database/Warehouse

Pfam (2011)
About: The Pfam database is a large collection of protein families, each represented by multiple sequence alignments and hidden Markov models (HMMs). Year of release: 2011 Download: http://pfam.sanger.ac.uk/ Author: L. Aravind, Adam Godik and Val Wood DOI: 10.1093/nar/gkr1065 OS: Linux Licence: Creative Commons Zero Category: Database/Warehouse

SeqMonk (2007)
Year of release: 2007 Download: http://www.bioinformatics.babraham.ac.uk/projects/seqmonk/ Author: Simon Andrews OS: Linux, Mac OS X, Windows Licence: GPL v3 or later Category: Genome Browsers

Savant (2010)
About: Savant is a next-generation genome browser designed for the latest generation of genome data. Year of release: 2010 Download: http://genomesavant.com/p/savant/index Author: Fiume M, Smith EJ, Brook A, Strbenac D, Turner B, Mezlini AM, Robinson MD, Wodak SJ, Brudno M.
DOI: 10.1093/nar/gks427 OS: All Licence: Apache 2.0 Category: Genome Browsers

Tripod
About: Tripod is a user-friendly chemical genomics browser that is currently being developed by the informatics group at the NIH Chemical Genomics Center. The main goal of Tripod is to facilitate easy access to chemical and biological data in an intuitive, user-friendly tool. To this end, the development of Tripod is inspired by the ubiquitous iTunes software, whereby the browsing and managing of media content is adapted to chemical and biological data. Year of release: XX Download: http://tripod.nih.gov/ Author: Ajit Jadhav et al. OS: Online Category: Genome Browsers
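The k-mer error correction framework that Quake adopts rests on a simple observation: in deep-coverage data, k-mers that occur only once or twice are probably sequencing errors, while true genomic k-mers recur many times. A toy Python sketch of that idea (this is not Quake's actual implementation; the function names, the fixed coverage cutoff, and the greedy single-substitution search are illustrative only, where Quake uses a coverage-model-derived cutoff and quality-aware likelihoods):

```python
from collections import Counter

def kmer_spectrum(reads, k):
    """Count every k-mer occurring in a collection of reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def flag_error_kmers(counts, cutoff=2):
    """Split k-mers into trusted (seen >= cutoff times) and
    untrusted (rare, hence likely sequencing errors)."""
    trusted = {km for km, c in counts.items() if c >= cutoff}
    untrusted = {km for km, c in counts.items() if c < cutoff}
    return trusted, untrusted

def correct_read(read, trusted, k):
    """Greedily try single-base substitutions until every k-mer
    covering the read is trusted (first fix wins)."""
    def ok(r):
        return all(r[i:i + k] in trusted for i in range(len(r) - k + 1))
    if ok(read):
        return read
    for i in range(len(read)):
        for base in "ACGT":
            if base == read[i]:
                continue
            candidate = read[:i] + base + read[i + 1:]
            if ok(candidate):
                return candidate
    return read  # uncorrectable; a real tool would trim or discard

# 10x coverage of a clean fragment plus one read with a 3' error
reads = ["ACGTACGTAC"] * 10 + ["ACGTACGTAT"]
trusted, untrusted = flag_error_kmers(kmer_spectrum(reads, 4))
```

With this input, the lone k-mer "GTAT" falls below the cutoff and the erroneous final T is corrected back to C.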
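Several of the assemblers catalogued above (Velvet, Trinity, SPAdes, Contrail) share the de Bruijn graph framework: nodes are (k-1)-mers, each observed k-mer contributes an edge, and unambiguous paths through the graph spell out contigs. A minimal illustration in Python (a toy model only; real assemblers add coverage tracking, error removal, bubble popping, and, in Contrail's case, an on-disk Hadoop representation):

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Build a de Bruijn graph: nodes are (k-1)-mers,
    each k-mer adds an edge prefix -> suffix."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

def contig_from(graph, start):
    """Walk unambiguous (single-successor) edges from `start`,
    spelling out a contig one base at a time."""
    path, node, seen = start, start, {start}
    while len(set(graph.get(node, []))) == 1:
        nxt = graph[node][0]
        if nxt in seen:
            break  # stop rather than loop on a cycle
        path += nxt[-1]
        seen.add(nxt)
        node = nxt
    return path

# Two overlapping reads are merged through their shared k-mers
graph = de_bruijn(["ATGGCG", "GGCGTG"], 4)
```

Here the walk from node "ATG" reconstructs the full 8 bp fragment that the two 6 bp reads jointly cover.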
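The JAligner entry mentions Smith-Waterman with Gotoh's improvement: local alignment where opening a gap costs more than extending one, tracked with three score states per cell. A score-only Python sketch of that recurrence (JAligner itself is Java and also performs traceback; the scoring parameters here are arbitrary choices for illustration):

```python
def sw_affine(a, b, match=2, mismatch=-1, gap_open=2, gap_extend=1):
    """Smith-Waterman local alignment score with Gotoh's affine gaps.
    M: a[i-1] aligned to b[j-1]; X: gap in b; Y: gap in a.
    Returns the best local score (no traceback, for brevity)."""
    NEG = float("-inf")
    n, m = len(a), len(b)
    M = [[0] * (m + 1) for _ in range(n + 1)]
    X = [[NEG] * (m + 1) for _ in range(n + 1)]
    Y = [[NEG] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            # local alignment: scores are clamped at zero
            M[i][j] = max(0, max(M[i-1][j-1], X[i-1][j-1], Y[i-1][j-1]) + s)
            # opening a gap is charged gap_open, continuing one gap_extend
            X[i][j] = max(M[i-1][j] - gap_open, X[i-1][j] - gap_extend)
            Y[i][j] = max(M[i][j-1] - gap_open, Y[i][j-1] - gap_extend)
            best = max(best, M[i][j])
    return best
```

For example, two identical 4-mers score four matches (8 with these parameters), while sequences with no common substring score 0.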
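MrBayes, per its entry above, uses Markov chain Monte Carlo to estimate the posterior distribution of model parameters. The core mechanism is the Metropolis accept/reject step, sketched here in Python on a deliberately simple target (a binomial proportion with a flat prior) rather than a phylogenetic model; the sampler, step size, and burn-in choice are all illustrative, not MrBayes' actual machinery:

```python
import math
import random

def metropolis(log_post, init, n_steps, step=0.1, seed=0):
    """Random-walk Metropolis: propose a nearby value, accept with
    probability min(1, posterior ratio), otherwise stay put."""
    rng = random.Random(seed)
    x, lp = init, log_post(init)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.uniform(-step, step)
        lp_prop = log_post(prop)
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Posterior for a binomial proportion: 7 successes, 3 failures, flat prior
def log_post(p):
    return 7 * math.log(p) + 3 * math.log(1 - p) if 0 < p < 1 else float("-inf")

samples = metropolis(log_post, init=0.5, n_steps=5000)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

The retained samples approximate the posterior, whose true mean here is 8/12, about 0.67.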
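Adapter removal of the kind Flexbar performs boils down to finding where the adapter, or a prefix of it running off the read's 3' end, matches the read. A simplified exact-match sketch in Python (real trimmers such as Flexbar tolerate mismatches and use quality scores; the `min_overlap` parameter and matching policy here are illustrative assumptions):

```python
def trim_adapter(read, adapter, min_overlap=3):
    """Remove a 3' adapter: first look for the full adapter inside the
    read, then for a read suffix matching an adapter prefix."""
    idx = read.find(adapter)
    if idx != -1:
        return read[:idx]
    # adapter may run off the 3' end: check partial suffix/prefix overlaps,
    # longest first, down to the minimum trustworthy overlap
    for olen in range(min(len(read), len(adapter)) - 1, min_overlap - 1, -1):
        if read.endswith(adapter[:olen]):
            return read[:-olen]
    return read  # no adapter evidence: leave the read untouched
```

A read ending mid-adapter is trimmed at the overlap, while a read with no adapter evidence passes through unchanged.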
|
ITV News has learned that Ford has drawn up plans to cut 1,160 jobs at its engine plant in Bridgend over the next four years. It's understood that Ford is responding to a fall in global demand and problems with efficiency - including restrictive work practices - specific to Bridgend. The company expects headcount to fall from its current level of 1,760 to around 600 by 2021, mostly through natural attrition. The Sigma engine that is built at Bridgend is exported to the EU as well as other global markets, including China and the United States, but is in the process of being phased out. The new Dragon engine which will replace it will be made for Ford's European operations only. Demand for the Dragon engine is expected to be lower than for the Sigma.
|
“… each of the players brought in today can start a game, and can certainly come off the bench [in the] early weeks of the season and impact the game in a positive manner.” — Sky Blue FC head coach Christy Holly Sky Blue FC fans could be forgiven for some confusion as they awoke to the news on the Friday morning of the 2016 National Women’s Soccer League (NWSL) College Draft that Nadia Nadim had been traded to the Portland Thorns FC in a deal for, primarily, a draft pick. That the team would deprive itself of the goal-scoring prowess of Nadim up top, in what could be legend Christie Rampone’s final year, speaks to a key change in Sky Blue’s approach. The rebuild is here. And judging by the haul the team came away with at the draft, it could mean the franchise’s best days are coming soon. Bolstering the Attack “There’s definitely been a few changes,” Sky Blue FC head coach Christy Holly, one of those changes himself after Jim Gabarra left for the Washington Spirit, said as several of his new players posed for photographs at the Baltimore Convention Center during the draft. “And this is something that we’ve been looking at since toward the end of last season. We knew the changes we needed. We had an extremely strong starting eleven. We struggled when players were injured or players were away on national team duty. This is something — each of the players brought in today can start a game, and can certainly come off the bench [in the] early weeks of the season and impact the game in a positive manner. It’s really important to me that we have that bounce across the team.” The group of four new Sky Blue players really begins with Raquel Rodriguez, the midfielder from Penn State who dominated the College Cup and captured the 2015 MAC Hermann Trophy. The consensus is that few players come in as ready to contribute as Rodriguez, providing a traffic cop for the Sky Blue attack.
It is just as notable, when evaluating Sky Blue’s difficulty breaking into the sports marketplace (with attendance well shy of 2,000 per game, the lowest in the NWSL), that Rodriguez is extremely well-spoken, cognizant of the need to connect with fans, and eager to start replicating in New Jersey the following she’s built in Costa Rica. “The media at home — this is the first time I’ve ever felt so announced in Costa Rica,” Rodriguez said shortly after getting picked. “They’re very aware. And I can see that change. And in social media more than actual news, people aren’t waiting on the news, they’re already aware of what happened here. But like you said, it has to increase, hopefully, to the point it’s like the men’s coverage. But I’m also aware it takes time.” “My goal is to be a role model that my generation did not have. And I hope that this grows the game. And I just want to be that role model, and take that responsibility.” Picking up the attacking slack following Nadim’s departure will be Sky Blue’s second pick, and the 13th overall pick in the draft, Leah Galton of Hofstra by way of Harrogate, England. “Leah’s a very talented player,” Holly said of Galton. “I wasn’t sure how many teams had her on their radar. But she brings the balance with her left foot. She’s more than capable of scoring goals. She’s got that burst of speed which is very beneficial for us on the left wing. She can play as an 11, she can play as a 9. She didn’t develop her game in America, her early years were in England. So she comes with a different level of savviness within the game, and it makes her unique.” Added Defensive Depth While Rodriguez and Galton figure to play key roles in the Sky Blue attack this season, the team took precautions with the back line and in goal as well.
And few will know the terrain at Yurcak Field better than the team’s third pick and the 23rd overall selection, Erica Skroski, a central defender who paired with fellow 2016 NWSL draftee Brianne Reed (who went 18th to FC Kansas City) to lead Rutgers to its first College Cup berth. The Galloway, New Jersey native is Jersey through and through, having spent many evenings watching Sky Blue through her college days, and now will get to play on the same field where she experienced her greatest collegiate triumphs, next to the ultimate prototype of a Jersey defender in Rampone. “It’s amazing,” Skroski said, shortly after she was selected. “Everything I could’ve hoped for in my next step as a soccer player. To play at Rutgers for four years after growing up in New Jersey, to play professionally where Rutgers plays its soccer games means so much to me.” “Absolutely. When we’re out at the store, people don’t just know about me, about Rutgers and this season — but they know about the professional league. Whether that’s guys, that’s girls, they know about it — the whole country knows about it, and everybody is benefitting.” Sky Blue’s final pick lasted until 29th overall, but the team made a trade to get her — dealing picks 33 and 36 to Chicago for the 29th slot and taking William & Mary goalkeeper Caroline Casey. The intellectually driven Casey wasn’t even sure she wanted to play professional soccer a year ago, but a summer with the Washington Spirit Reserves team and a senior campaign that included an All-America selection led her to continue her professional pursuit. “I’d been wrestling with the idea of what to do. I was pre-med in college, and eventually I’ll pursue that path,” Casey said after the pick, her parents nearby waiting to take her out to celebrate. “But with the help of my coaches, teammates, and other individuals along the way, I was really convinced to pursue this path. So many opportunities it provides, so I absolutely wanted to jump at the chance.
I was training with the Washington Spirit Reserve team, and I just fell in love with the environment, and just felt like I was 13 years old again — when you just love soccer so much, and there’s no pressure, any of that. So in a sense, I fell in love with the game all over again. So I think, from that point forward, I knew this is what I wanted to do.” It all should make for a fascinating summer of novelty and growth in Piscataway. And while Holly was realistic about the difficulties in getting a young team to play at an elite level so quickly, he didn’t sound ready to punt the season. Not with his optimism about Rampone, who he’d spoken to the day before the draft, and said “what I expect from Christie is to play every game, and lead the team.” And not with so much young talent added to the roster in a single day. “The players we brought in were extremely successful,” Holly said. “You have a Hermann winner, you have a first team All-American, you have a player who went to the Final Four. So we’re extremely fortunate. A lot of what they bring in is the humility, the hard-working attitude. And that’s a lot of it. There’s no question about their ability. The application of that, the transition from the college game will be that much smoother.” All images courtesy of Cynthia Hobgood.
|
The law, SB 469, essentially bars a levee district in New Orleans’ East Bank – the Southeast Louisiana Flood Protection Authority-East, or SLFPA-E – from pressing forward in its lawsuit against 97 oil and gas companies, which it blames for exposing New Orleans to catastrophic damage from hurricanes Rita and Katrina by cutting thousands of miles of pipes and canals through sensitive barrier islands and wetlands that otherwise would have protected the coastal city. The lawsuit, filed last summer, sought to force energy companies to restore the wetlands, fill in the canals, and pay for past damages. “We are looking to the industry to fix the part of the problem that they created,” SLFPA-E vice president John Barry told the Times-Picayune last year. “We’re not asking them to fix everything. We only want them to address the part of the problem that they created.” SB 469 upends that effort by stipulating only certain limited groups may bring lawsuits against companies for their activities along the coast, such as oil exploration. Its backers in the state legislature, Sens. Bret Allain and Robert Adley, asserted the measure will help avoid “enriching lawyers and certain individuals” through “frivolous lawsuits.” Zygmunt Plater, a professor at Boston College Law who specializes in environmental law, disagrees. "Clearly they're jumping at the snap of the fingers of the [Louisiana] Oil and Gas Association," he says. "LOGA is powerful." Oil and gas donations, in fact, easily make up the largest chunk of campaign donations to Jindal, Allain and Adley, finance records show. And indeed, SB 469 not only halts SLFPA-E's lawsuit, but also potentially undercuts government claims against BP, whose Deepwater Horizon oil rig exploded in the Gulf of Mexico in 2010, killing 11 people and spilling 210 million gallons of oil in the worst marine oil spill in history.
|
SEATTLE -- The Boston Red Sox have added a much-needed starter after an earlier trade fell through, acquiring oft-injured left-hander Erik Bedard from the Seattle Mariners in a three-team deal at the deadline Sunday. The Red Sox, whose deal for Oakland's Rich Harden fell apart late Saturday night, also got right-hander Josh Fields, a 2008 first-round draft pick. "He was real tough on us," Boston manager Terry Francona said of Bedard, who started his major league career with the Baltimore Orioles. "He's a guy who has shown he can pitch in the American League East. First half of the year before he tweaked that knee he was pretty solid," Francona said. "His stuff was good. We're excited." Francona said he would talk to Bedard before lining up his rotation for the coming week. "I know that (general manager) Theo (Epstein) talked to him and he sounded like he was excited. We'll kind of build him back up," Francona said. "He just came off the DL Friday. We'll build him back up and get him in there." Boston sent catcher Tim Federowicz and right-handers Juan Rodriguez and Stephen Fife to the Los Angeles Dodgers, who dealt OF Trayvon Robinson to the Red Sox. The Red Sox then sent Robinson and OF Chih-Hsien Chiang to Seattle. Bedard, who is 4-7 in 16 starts with a 3.45 ERA this season for the Mariners, wasn't so good against an AL East team on Friday when he faced Tampa.
|
LUXEMBOURG (AFP) — Andy Schleck on Tuesday received the winner’s yellow jersey for the 2010 Tour de France. The younger of the Schleck brothers finished second behind Alberto Contador in the race but was declared the winner after the Spaniard was stripped of the title in February in the wake of a positive doping test. Contador was also stripped of the 2011 Giro d’Italia and banned from cycling for two years, a ban that runs until August this year. Schleck, 26, has finished second in each of the last three Tours de France. “It’s great to receive this jersey, but for me, it changes nothing: it’s not like a victory,” said Schleck. “It’s not the same feeling as climbing on a podium. “That said, I’m happy that this ceremony took place with people I wanted to see today.” RadioShack-Nissan team manager Johan Bruyneel, at odds with the Schlecks after brother Fränk’s withdrawal from the Giro d’Italia earlier this month, was present, along with Tour de France organiser Christian Prudhomme. “I can only hope that this jersey leads to others, and I think there will be others,” Prudhomme said during the ceremony in Schleck’s home town of Mondorf-les-Bains. “Everyone contends that the 2012 Tour isn’t one that suits Andy. But I am convinced otherwise.” Spain’s Oscar Pereiro was handed the winner’s yellow jersey for the 2006 Tour de France more than a year after the end of the race after American Floyd Landis had been stripped of his title for a positive dope test. Landis later admitted using performance enhancing drugs during 2006 and at other times in his professional career.